A contour line (also isoline, isopleth, or isarithm) of a function of two variables is a curve along which the function has a constant value. It is a cross-section of the three-dimensional graph of the function f(x, y) parallel to the x, y plane. In cartography, a contour line (often just called a "contour") joins points of equal elevation (height) above a given level, such as mean sea level. A contour map is a map illustrated with contour lines, for example a topographic map, which thus shows valleys and hills, and the steepness of slopes. The contour interval of a contour map is the difference in elevation between successive contour lines.

More generally, a contour line for a function of two variables is a curve connecting points where the function has the same particular value. The gradient of the function is always perpendicular to the contour lines. When the lines are close together the magnitude of the gradient is large: the variation is steep. A level set is a generalization of a contour line for functions of any number of variables.

Contour lines are curved, straight or a mixture of both lines on a map describing the intersection of a real or hypothetical surface with one or more horizontal planes. The configuration of these contours allows map readers to infer the relative gradient of a parameter and estimate that parameter at specific places. Contour lines may be either traced on a visible three-dimensional model of the surface, as when a photogrammetrist viewing a stereo-model plots elevation contours, or interpolated from estimated surface elevations, as when a computer program threads contours through a network of observation points or area centroids. In the latter case, the method of interpolation affects the reliability of individual isolines and their portrayal of slope, pits and peaks.

- 1 Types
- 1.1 Equidistants (isodistances)
- 1.2 Isopleths
- 1.3 Meteorology
- 1.4 Physical geography and oceanography
- 1.5 Geology
- 1.6 Environmental science
- 1.7 Ecology
- 1.8 Social sciences
- 1.9 Thermodynamics, engineering, and other sciences
- 1.10 Other phenomena
- 2 History
- 3 Technical construction factors
- 4 Plan view versus profile view
- 5 Labeling contour maps
- 6 See also
- 7 References
- 8 External links

Contour lines are often given specific names beginning "iso-" (Ancient Greek: ἴσος isos "equal") according to the nature of the variable being mapped, although in many usages the phrase "contour line" is most commonly used. Specific names are most common in meteorology, where multiple maps with different variables may be viewed simultaneously. The prefix "iso-" can be replaced with "isallo-" to specify a contour line connecting points where a variable changes at the same rate during a given time period. The words isoline and isarithm (ἀριθμός arithmos "number") are general terms covering all types of contour line. The word isogram (γράμμα gramma "writing or drawing") was proposed by Francis Galton in 1889 as a convenient generic designation for lines indicating equality of some physical condition or quantity; but it commonly refers to a word without a repeated letter.

An isogon (from γωνία or gonia, meaning 'angle') is a contour line for a variable which measures direction. In meteorology and in geomagnetics, the term isogon has specific meanings which are described below. An isocline (from κλίνειν or klinein, meaning 'to lean or slope') is a line joining points with equal slope.
In population dynamics and in geomagnetics, the terms isocline and isoclinic line have specific meanings which are described below.

An equidistant is a line of equal distance from a given point, line, or polyline.

In geography, the word isopleth (from πλῆθος or plethos, meaning 'quantity') is used for contour lines that depict a variable which cannot be measured at a point, but which instead must be calculated from data collected over an area. An example is population density, which can be calculated by dividing the population of a census district by the surface area of that district. Each calculated value is presumed to be the value of the variable at the centre of the area, and isopleths can then be drawn by a process of interpolation. The idea of an isopleth map can be compared with that of a choropleth map.

In meteorology, the word isopleth is used for any type of contour line. Meteorological contour lines are based on generalization from the point data received from weather stations. Weather stations are seldom exactly positioned at a contour line (when they are, this indicates a measurement precisely equal to the value of the contour). Instead, lines are drawn to best approximate the locations of exact values, based on the scattered information points available. Meteorological contour maps may present collected data such as actual air pressure at a given time, or generalized data such as average pressure over a period of time, or forecast data such as predicted air pressure at some point in the future. Thermodynamic diagrams use multiple overlapping contour sets (including isobars and isotherms) to present a picture of the major thermodynamic factors in a weather system.

An isobar (from βάρος or baros, meaning 'weight') is a line of equal or constant pressure on a graph, plot, or map; an isopleth or contour line of pressure. More accurately, isobars are lines drawn on a map joining places of equal average atmospheric pressure reduced to sea level for a specified period of time. In meteorology, the barometric pressures shown are reduced to sea level, not the surface pressures at the map locations. The distribution of isobars is closely related to the magnitude and direction of the wind field, and can be used to predict future weather patterns. Isobars are commonly used in television weather reporting.

Isallobars are lines joining points of equal pressure change during a specific time interval. These can be divided into anallobars, lines joining points of equal pressure increase during a specific time interval, and katallobars, lines joining points of equal pressure decrease. In general, weather systems move along an axis joining high and low isallobaric centers. Isallobaric gradients are an important component of the wind as well, as they increase or decrease the geostrophic wind.

An isopycnal is a line of constant density. An isoheight or isohypse is a line of constant geopotential height on a constant pressure surface chart.

An isotherm (from θέρμη or thermē, meaning 'heat') is a line that connects points on a map that have the same temperature. Therefore, all points through which an isotherm passes have the same or equal temperatures at the time indicated. An isotherm at 0 °C is called the freezing level. An isogeotherm is a line of equal mean annual temperature. An isocheim is a line of equal mean winter temperature, and an isothere is a line of equal mean summer temperature.

Precipitation and air moisture

An isoneph is a line indicating equal cloud cover.
An isochalaz is a line of constant frequency of hail storms, and an isobront is a line drawn through geographical points at which a given phase of thunderstorm activity occurred simultaneously. Snow cover is frequently shown as a contour-line map.

Freeze and thaw

An isopectic line denotes equal dates of ice formation each winter, and an isotac denotes equal dates of thawing.

Physical geography and oceanography

Elevation and depth

Contours are one of several common methods used to denote elevation or altitude and depth on maps. From these contours, a sense of the general terrain can be determined. They are used at a variety of scales, from large-scale engineering drawings and architectural plans, through topographic maps and bathymetric charts, up to continental-scale maps. In cartography, the contour interval is the elevation difference between adjacent contour lines. The contour interval should be the same over a single map. When calculated as a ratio against the map scale, a sense of the hilliness of the terrain can be derived.

There are several rules to note when interpreting terrain contour lines:
- The rule of V's: sharp-pointed vees usually are in stream valleys, with the drainage channel passing through the point of the vee, with the vee pointing upstream. This is a consequence of erosion.
- The rule of O's: closed loops are normally uphill on the inside and downhill on the outside, and the innermost loop is the highest area. If a loop instead represents a depression, some maps note this by short lines radiating from the inside of the loop, called "hachures".
- Spacing of contours: close contours indicate a steep slope; distant contours a shallow slope. Two or more contour lines merging indicates a cliff (see the plotting sketch at the end of this section).

By counting the number of contours that cross a segment of a stream, you can approximate the stream gradient. Of course, to determine differences in elevation between two points, the contour interval, or distance in altitude between two adjacent contour lines, must be known, and this is given at the bottom of the map. Usually contour intervals are consistent throughout a map, but there are exceptions. Sometimes intermediate contours are present in flatter areas; these can be dashed or dotted lines at half the noted contour interval. When contours are used with hypsometric tints on a small-scale map that includes mountains and flatter low-lying areas, it is common to have smaller intervals at lower elevations so that detail is shown in all areas. Conversely, for an island which consists of a plateau surrounded by steep cliffs, it is possible to use smaller intervals as the height increases.

An isopotential map is a measure of electrostatic potential in space, often depicted in two dimensions with the electrostatic charges inducing that electric potential. The term equipotential line or isopotential line refers to a curve of constant electric potential. Whether crossing an equipotential line represents ascending or descending the potential is inferred from the labels on the charges. In three dimensions, equipotential surfaces may be depicted with a two-dimensional cross-section, showing equipotential lines at the intersection of the surfaces and the cross-section. The general mathematical term level set is often used to describe the full collection of points having a particular potential, especially in higher dimensional space.
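The relationships described above (curves of constant value, a fixed contour interval, a gradient perpendicular to the contours, and close spacing where the slope is steep) are easy to reproduce programmatically. The following minimal sketch uses the widely available NumPy and Matplotlib libraries to contour a hypothetical two-hill elevation surface at a 10 m interval and to overlay gradient arrows; the surface, grid and interval are illustrative assumptions, not data from any real map.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical "terrain": two Gaussian hills on a 1 km x 1 km grid (heights in metres).
x = np.linspace(0, 1000, 200)
y = np.linspace(0, 1000, 200)
X, Y = np.meshgrid(x, y)
Z = (120 * np.exp(-((X - 300)**2 + (Y - 400)**2) / (2 * 150**2))
     + 80 * np.exp(-((X - 700)**2 + (Y - 650)**2) / (2 * 100**2)))

interval = 10  # contour interval in metres
levels = np.arange(0, Z.max() + interval, interval)

fig, ax = plt.subplots(figsize=(6, 6))
cs = ax.contour(X, Y, Z, levels=levels, colors="saddlebrown", linewidths=0.7)
ax.clabel(cs, levels[::2], fmt="%d m", fontsize=7)  # label every other ("index") contour

# Gradient of the surface; np.gradient returns d/d(rows) = dZ/dy first, then dZ/dx.
dZdy, dZdx = np.gradient(Z, y, x)
step = 15  # thin the arrow field so the plot stays readable
ax.quiver(X[::step, ::step], Y[::step, ::step],
          dZdx[::step, ::step], dZdy[::step, ::step], color="steelblue")

ax.set_aspect("equal")
ax.set_title("Contours at a 10 m interval; arrows show the gradient")
plt.show()
```

In the resulting plot, the arrows are longest where the contour lines bunch together and always cross the contours at right angles, which is the visual rule of thumb described above.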
In the study of the Earth's magnetic field, the term isogon or isogonic line refers to a line of constant magnetic declination, the variation of magnetic north from geographic north. An agonic line is drawn through points of zero magnetic declination. An isoporic line refers to a line of constant annual variation of magnetic declination. An isoclinic line connects points of equal magnetic dip, and an aclinic line is the isoclinic line of magnetic dip zero. An isodynamic line (from δύναμις or dynamis, meaning 'power') connects points with the same intensity of magnetic force.

Besides ocean depth, oceanographers use contours to describe diffuse variable phenomena much as meteorologists do with atmospheric phenomena. In particular, isobathytherms are lines showing depths of water with equal temperature, isohalines show lines of equal ocean salinity, and isopycnals are surfaces of equal water density.

Various geological data are rendered as contour maps in structural geology, sedimentology, stratigraphy and economic geology. Contour maps are used to show the below ground surface of geologic strata, fault surfaces (especially low angle thrust faults) and unconformities. Isopach maps use isopachs (lines of equal thickness) to illustrate variations in thickness of geologic units.

In discussing pollution, density maps can be very useful in indicating sources and areas of greatest contamination. Contour maps are especially useful for diffuse forms or scales of pollution. Acid precipitation is indicated on maps with isoplats. Some of the most widespread applications of environmental science contour maps involve mapping of environmental noise (where lines of equal sound pressure level are denoted isobels), air pollution, soil contamination, thermal pollution and groundwater contamination. By contour planting and contour ploughing, the rate of water runoff and thus soil erosion can be substantially reduced; this is especially important in riparian zones.

An isoflor is an isopleth contour connecting areas of comparable biological diversity. Usually, the variable is the number of species of a given genus or family that occurs in a region. Isoflor maps are thus used to show distribution patterns and trends such as centres of diversity.

In economics, contour lines can be used to describe features which vary quantitatively over space. An isochrone shows lines of equivalent drive time or travel time to a given location and is used in the generation of isochrone maps. An isotim shows equivalent transport costs from the source of a raw material, and an isodapane shows equivalent cost of travel time. Contour lines are also used to display non-geographic information in economics. Indifference curves are used to show bundles of goods to which a person would assign equal utility. An isoquant is a curve of equal production quantity for alternative combinations of input usages, and an isocost curve shows alternative combinations of input usages having equal production costs.

Thermodynamics, engineering, and other sciences

Various types of graphs in thermodynamics, engineering, and other sciences use isobars (constant pressure), isotherms (constant temperature), isochors (constant specific volume), or other types of isolines, even though these graphs are usually not related to maps. Such isolines are useful for representing more than two dimensions (or quantities) on two-dimensional graphs.
Common examples in thermodynamics are some types of phase diagrams.

- isochasm: aurora equal occurrence
- isochor: volume
- isodose: absorbed dose of radiation
- isophene: biological events occurring with coincidence such as plants flowering
- isophote: illuminance

The idea of lines that join points of equal value was rediscovered several times. In 1701, Edmond Halley used such lines (isogons) on a chart of magnetic variation. The Dutch engineer Nicholas Cruquius drew the bed of the river Merwede with lines of equal depth (isobaths) at intervals of 1 fathom in 1727, and Philippe Buache used them at 10-fathom intervals on a chart of the English Channel that was prepared in 1737 and published in 1752. Such lines were used to describe a land surface (contour lines) in a map of the Duchy of Modena and Reggio by Domenico Vandelli in 1746, they were studied theoretically by Ducarla in 1771, and Charles Hutton used them when calculating the volume of a hill in 1777. In 1791, a map of France by J. L. Dupain-Triel used contour lines at 20-metre intervals, hachures, spot-heights and a vertical section. In 1801, the chief of the Corps of Engineers, Haxo, used contour lines at the larger scale of 1:500 on a plan of his projects for Rocca d'Anfo. By around 1843, when the Ordnance Survey started to regularly record contour lines in Great Britain and Ireland, they were already in general use in European countries. Isobaths were not routinely used on nautical charts until those of Russia from 1834, and those of Britain from 1838.

When maps with contour lines became common, the idea spread to other applications. Perhaps the latest to develop are air quality and noise pollution contour maps, which first appeared in the US in approximately 1970, largely as a result of national legislation requiring spatial delineation of these parameters. In 2007, Pictometry International was the first to allow users to dynamically generate elevation contour lines to be laid over oblique images.

Technical construction factors

To maximize the readability of contour maps, there are several design choices available to the map creator, principally line weight, line color, line type and method of numerical marking.

Line weight is simply the darkness or thickness of the line used. This choice is made based upon the least intrusive form of contours that enables the reader to decipher the background information in the map itself. If there is little or no content on the base map, the contour lines may be drawn with relatively heavy thickness. Also, for many forms of contours such as topographic maps, it is common to vary the line weight and/or color, so that a different line characteristic occurs for certain numerical values. For example, on a typical topographic map, the even hundred-foot elevations may be shown in a different weight from the twenty-foot intervals.

Line color is the choice of any number of pigments that suit the display. Sometimes a sheen or gloss is used as well as color to set the contour lines apart from the base map. Line colour can be varied to show other information.

Line type refers to whether the basic contour line is solid, dashed, dotted or broken in some other pattern to create the desired effect. Dotted or dashed lines are often used when the underlying base map conveys very important (or difficult to read) information. Broken line types are used when the location of the contour line is inferred.

Numerical marking is the manner of denoting the arithmetical values of contour lines.
This can be done by placing numbers along some of the contour lines, typically using interpolation for intervening lines. Alternatively a map key can be produced associating the contours with their values. If the contour lines are not numerically labeled and adjacent lines have the same style (with the same weight, color and type), then the direction of the gradient cannot be determined from the contour lines alone. However, if the contour lines cycle through three or more styles, then the direction of the gradient can be determined from the lines. The orientation of the numerical text labels is often used to indicate the direction of the slope.

Plan view versus profile view

Most commonly contour lines are drawn in plan view, or as an observer in space would view the Earth's surface: ordinary map form. However, some parameters can often be displayed in profile view showing a vertical profile of the parameter mapped. Some of the most common parameters mapped in profile are air pollutant concentrations and sound levels. In each of those cases it may be important to analyze the parameter (air pollutant concentration or sound level) at varying heights so as to determine the air quality or noise health effects on people at different elevations, for example, living on different floor levels of an urban apartment. In actuality, both plan and profile view contour maps are used in air pollution and noise pollution studies.

Labeling contour maps

Labels are a critical component of elevation maps. A properly labeled contour map helps the reader to quickly interpret the shape of the terrain. If numbers are placed close to each other, it means that the terrain is steep. Labels should be placed along a slightly curved line "pointing" to the summit or nadir, from several directions if possible, making the visual identification of the summit or nadir easy. Contour labels can be oriented so a reader is facing uphill when reading the label. Manual labeling of contour maps is a time-consuming process; however, there are a few software systems that can do the job automatically and in accordance with cartographic conventions, a task called automatic label placement.

- Courant, Richard, Herbert Robbins, and Ian Stewart. What Is Mathematics?: An Elementary Approach to Ideas and Methods. New York: Oxford University Press, 1996. p. 344.
- Tracy, John C. Plane Surveying; A Text-Book and Pocket Manual. New York: J. Wiley & Sons, 1907. p. 337.
- Davis, John C., 1986, Statistics and Data Analysis in Geology, Wiley, ISBN 0-471-08079-9.
- Oxford English Dictionary; see also: Nature, 40, 1889, p. 651.
- Arthur H. Robinson, "The genealogy of the isopleth", Cartographic Journal, 8, 49-53, 1971.
- T. Slocum, R. McMaster, F. Kessler, and H. Howard, Thematic Cartography and Geographic Visualization, 2nd edition, Pearson, 2005, ISBN 0-13-035123-7, p. 272.
- Edward J. Hopkins, Ph.D. (1996-06-10). "Surface Weather Analysis Chart". University of Wisconsin. Retrieved 2007-05-10.
- World Meteorological Organisation. "Isallobar". Eumetcal. Retrieved 12 April 2014.
- World Meteorological Organisation. "Anallobar". Eumetcal. Retrieved 12 April 2014.
- World Meteorological Organisation. "Katallobar". Eumetcal. Retrieved 12 April 2014.
- "Forecasting weather system movement with pressure tendency". Chapter 13 - Weather Forecasting. Lyndon State College Atmospheric Sciences. Retrieved 12 April 2014.
- DataStreme Atmosphere (2008-04-28). "Air Temperature Patterns". American Meteorological Society.
Archived from the original on 2008-05-11. Retrieved 2010-02-07.
- Sark (Sercq), D Survey, Ministry of Defence, Series M 824, Sheet Sark, Edition 4 GSGS, 1965, OCLC 27636277. Scale 1:10,560. Contour intervals: 50 feet up to 200, 20 feet from 200 to 300, and 10 feet above 300.
- "isoporic line". 1946. Retrieved 2015-07-20.
- "Isobel". 2005-01-05. Retrieved 2010-04-25.
- Specht, Raymond. Heathlands and Related Shrublands: Analytical Studies. Elsevier. pp. 219-220.
- Laver, Michael and Kenneth A. Shepsle (1996). Making and Breaking Governments.
- Thrower, N. J. W. Maps and Civilization: Cartography in Culture and Society, University of Chicago Press, 1972, revised 1996, page 97; and Jardine, Lisa. Ingenious Pursuits: Building the Scientific Revolution, Little, Brown, and Company, 1999, page 31.
- R. A. Skelton, "Cartography", History of Technology, Oxford, vol. 6, pp. 612-614, 1958.
- Colonel Berthaut, La Carte de France, vol. 1, p. 139, quoted by Close.
- C. Hutton, "An account of the calculations made from the survey and measures taken at Schehallien, in order to ascertain the mean density of the Earth", Philosophical Transactions of the Royal Society of London, vol. 68, pp. 756-757.
- C. Close, The Early Years of the Ordnance Survey, 1926, republished by David and Charles, 1969, ISBN 0-7153-4477-3, pp. 141-144.
- T. Owen and E. Pilbeam, Ordnance Survey: Map Makers to Britain since 1791, HMSO, 1992, ISBN 0-11-701507-5.
- Imhof, E., "Die Anordnung der Namen in der Karte," Annuaire International de Cartographie II, Orell-Füssli Verlag, Zürich, 93-129, 1962.
- Freeman, H., "Computer Name Placement," ch. 29, in Geographical Information Systems, 1, D.J. Maguire, M.F. Goodchild, and D.W. Rhind, John Wiley, New York, 1991, 449-460.
How can the Common Core Math be implemented in the Classroom? How can I teach the Common Core Math at home? How can I get homework help for the Common Core Math?

NYS Common Core Lessons and Worksheets
Kindergarten | Grade 1 | Grade 2
Grade 3 | Grade 4 | Grade 5
Grade 6 | Grade 7 | Grade 8
High School Algebra | Common Core

The following lessons are based on the New York State (NYS) Common Core Math Standards. They consist of lesson plans, worksheets (from the NYSED) and videos to help you prepare to teach Common Core Math in the classroom or at home. There is plenty of help for classwork and homework. Each grade is divided into six or seven modules. Mid-Module and End-of-Module Assessments are also included. The lessons are divided into Fluency Practice, Application Problem, Concept Development, and Student Debrief. The worksheets are divided into Problem Set, Exit Ticket, and Homework.

Kindergarten Mathematics
- Numbers to 10
- Two-Dimensional and Three-Dimensional Shapes
- Comparison of Length, Weight, Capacity
- Number Pairs, Addition and Subtraction to 10
- Counting to 100
- Analyzing, Comparing, and Composing Shapes

Grade 1 Mathematics
- Sums and Differences to 10
- Introduction to Place Value Through Addition and Subtraction Within 20
- Ordering and Comparing Length Measurements as Numbers
- Place Value, Comparison, Addition and Subtraction to 40
- Identifying, Composing, and Partitioning Shapes
- Place Value, Comparison, Addition and Subtraction to 100

Grade 2 Mathematics
- Sums and Differences to 20
- Addition and Subtraction of Length Units
- Place Value, Counting, and Comparison of Numbers to 1,000
- Addition and Subtraction Within 200 with Word Problems to 100
- Addition and Subtraction Within 1,000 with Word Problems to 100
- Foundations of Multiplication and Division
- Problem Solving with Length, Money, and Data
- Fractions as Equal Parts of Shapes, Time

Grade 3 Mathematics
- Properties of Multiplication and Division and Solving Problems with Units of 2 and 10
- Place Value and Problem Solving with Units of Measure
- Multiplication and Division with Units of 0, 1, 6, and Multiples of 10
- Multiplication and Area
- Fractions as Numbers on the Number Line
- Collecting and Displaying Data
- Geometry and Measurement Word Problems

Grade 4 Mathematics
- Place Value, Rounding, and Algorithms for Addition and Subtraction
- Unit Conversions and Problem Solving with Metric Measurement
- Multi-Digit Multiplication and Division
- Angle Measure and Plane Figures
- Fraction Equivalence, Ordering, and Operations
- Exploring Measurement with Multiplication

Grade 5 Mathematics
- Place Value and Decimal Fractions
- Multi-Digit Whole Number and Decimal Fraction Operations
- Addition and Subtraction of Fractions
- Line Plots of Fraction Measurements
- Addition and Multiplication with Volume and Area
- Problem Solving with the Coordinate Plane

Grade 6 Mathematics
- Ratios and Unit Rates
- Arithmetic Operations Including Division of Fractions
- Expressions and Equations
- Area, Surface Area, and Volume Problems
- Statistics

Grade 7 Mathematics
- Ratios and Proportional Relationships
- Expressions and Equations
- Percent and Proportional Relationships
- Statistics and Probability

Grade 8 Mathematics
- Integer Exponents and Scientific Notation
- The Concept of Congruence
- Examples of Functions from Geometry
- Introduction to Irrational Numbers

High School Algebra I
- Linear and Exponential Sequences
- Functions and Their Graphs
- Transformations of Functions
- Using Functions and Graphs to Solve Problems

Have a look at the following videos for insights on how to implement the Core in classrooms and homes across America.
We also have lesson plans, assessments and worksheets to help you in your preparation. In this first video, we will join Sarah as she explains the Common Core State Standards and offers insights on how to implement the Core in classrooms. We will learn how teachers and students can shift their math classrooms to promote mathematical reasoning. She emphasizes the need to focus on fewer concepts, coherence for mastery, and an approach with more rigor. Focus means less rote memorization and more deep procedural knowledge and conceptual understanding. Rigor means having procedural fluency and conceptual understanding. She talks about the six shifts in teaching Mathematics: Focus, Coherence, Fluency, Deep Understanding, Application, and Dual Intensity. Classrooms should be creative, engaged and even noisy. Families can be involved in applying the mathematical concepts.

In this video, Sarah explains what the Common Core is, where it came from, and how to read the Common Core State Standards with confidence and perspective. She explains that "Common doesn't mean the same and the Standards are not the curriculum". She also shows how to read the grade level standards for mathematics and for reading and writing.

Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations. We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
Today we're going to talk about linear equations in real world scenarios. So we're going to do an example of a real world scenario. And we'll talk about how the different aspects of that scenario correspond to a graph of a linear equation that could be used to represent that scenario.

All right, so let's suppose that we have Celeste, who goes on a bike ride. She rides 112 miles in 7 hours. So we know that we could determine her average speed or her average rate by using the relationship between distance, rate, and time, where her average speed or rate is going to be equal to the distance that she traveled divided by the time that it took her to travel that distance. So that's going to be equal to 112 miles over 7 hours. Dividing 112 by 7, I see that her rate or average speed is going to be 16 miles per hour.

We could also look at this scenario in terms of a graph. So on this graph, the variable on the horizontal axis is her hours. And the variable on the vertical axis is the number of miles she's traveled. So we know that at the beginning of her bike ride, at 0 hours, she's traveled 0 miles. And we know that at the end of her bike ride, she's traveled 112 miles in the seven hours.

So we can think about this graph in terms of an equation in slope intercept form, or y equals mx plus b, where again, m is the slope of our line and b is going to be the y-intercept. So let's think about what those two things mean in the context of our situation. So the y-intercept is going to be where it crosses the y-axis, or the vertical axis. But also the y-intercept is the value of y when our x variable is equal to 0, so when our hours is equal to zero. We know that at the beginning of her trip when hours is equal to 0, her distance in miles is also going to be equal to 0. So the y-intercept of this graph is equal to 0. And in thinking about my equation, y equals mx plus b, I know that my value for b is just going to be equal to 0.

Then I can figure out my value for m, which is, again, the slope. To do that, I know I can think about my slope formula, which uses the change in the vertical distance between my two points-- so that's going to be 112 minus 0-- over the change in the horizontal distance between the two points. So that's going to be 7 minus 0. Simplifying on the top and on the bottom, 112 minus 0 is 112. And 7 minus 0 is just 7. When I divide these two values, I get a slope of 16. So I can see again that the slope of this line is going to also be equal to the average speed that Celeste traveled on her bike ride. It was 16 miles per hour. And the slope of this line is 16, or 16 over 1. So I can substitute a value of 16 in for my slope. And my equation becomes y equals 16x plus 0, which would be the same as just y is equal to 16x.

All right, so let's see how we can use either or both of our graph and our equation to answer some questions about Celeste's bike journey. So let's say that we wanted to know how far Celeste had biked after 2 and 1/2 hours. So let's start by looking at our equation. If I want to know the distance, I know that's my y variable. And I want to know the distance after 2.5 hours. And I know that the time in hours is my x variable. So I'm going to have y is equal to 16 times 2.5. I'm going to substitute 2.5 in for my x. Simplifying this, 16 times 2.5 is going to give me 40. So I see that after 2.5 hours, she has traveled a total distance of 40 miles.

Another question I could answer is, how long is it going to take her to travel 75 miles?
So again, using my equation, I know that my distance, 75 miles, would be my y variable. And that's going to be equal to 16 times however long it took me to go the 75 miles. So solving this equation for x, which will be my time, I'm going to divide both sides by 16. And I see that x is going to be equal to approximately 4.7 hours.

So I can verify these two conclusions by looking at my graph. So I want to verify that after 2.5 hours, I've gone 40 miles. So looking at my graph, at 2.5 hours, if I go up to my graph, it does look like that is approximately 40 miles. So when we're looking at our graph, it may not be completely accurate. But we can see that it is definitely reasonable from my graph. 2.5 hours matches up about with 40 miles. Let's verify that 75 miles corresponds to approximately 4.7 hours. So again, on my graph, if I go to 75 miles, I should see that it corresponds to approximately 4.7 hours. And from the graph, I see that it is definitely reasonable.

So let's go over our key points from today. As usual, make sure you have them in your notes so you can refer to them later. The slope of a line is the average rate of change between two variables. The y-intercept of a line is the initial value of the dependent variable y when the independent variable x is 0. So I hope that these key points and examples helped you understand a little bit more about linear equations in real world scenarios. Keep using your notes and keep on practicing, and soon you'll be a pro. Thanks for watching.
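The arithmetic worked through in this transcript can also be checked with a few lines of code. The sketch below, written in Python purely as an illustration (the function and variable names are our own, not part of the lesson), builds the line y = 16x from the distance and time data and reproduces the two answers: 40 miles after 2.5 hours and roughly 4.7 hours to cover 75 miles.

```python
# Celeste's ride: 112 miles in 7 hours, modeled as y = m*x + b.
distance_miles = 112
time_hours = 7

b = 0                                          # y-intercept: 0 miles at 0 hours
m = (distance_miles - 0) / (time_hours - 0)    # slope = rise/run = 112/7 = 16 mph

def miles_after(hours):
    """Distance traveled after a given number of hours: y = m*x + b."""
    return m * hours + b

def hours_to_travel(miles):
    """Time needed to cover a given distance: solve miles = m*x + b for x."""
    return (miles - b) / m

print(m)                               # 16.0  (average speed in miles per hour)
print(miles_after(2.5))                # 40.0  (miles after 2.5 hours)
print(round(hours_to_travel(75), 1))   # 4.7   (hours to travel 75 miles)
```

Any other point on the line can be checked the same way; this is exactly the substitution the video performs by hand.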
Animal communication is the transfer of information from one or a group of animals (sender or senders) to one or more other animals (receiver or receivers) which affects either the current or future behavior of the receivers. The transfer of information may be deliberate (e.g. a courtship display) or it may be unintentional (e.g. a prey animal detecting the scent of a predator). When animal communication involves multiple receivers, this may be referred to as an "audience". The study of animal communication is a rapidly growing area and plays an important part in the disciplines of animal behavior, sociobiology, neurobiology and animal cognition. Even in the 21st century, many prior understandings related to diverse fields such as personal symbolic name use, animal emotions, learning and animal sexual behavior, long thought to be well understood, have been revolutionized.

When the information sent from the sender to receiver is either an act or a structure that manipulates the behavior of the receiver, it is referred to as a "signal". Signalling theory predicts that for the signal to be maintained in the population, the receiver should in most cases receive some benefit from the interaction as well as the sender. Both the production of the signal by the sender and the perception and subsequent response by the receiver need to coevolve. It is important to study both the sender and receiver of the interaction, since the maintenance and persistence of the signal is dependent on the ability to both produce and recognize the signal. In many taxa, signals involve multiple mechanisms, i.e. multimodal signaling.

- 1 Modes
- 2 Autocommunication
- 3 Functions
- 4 Interpretation of animal behavior
- 5 Intraspecific
- 6 Interspecific
- 7 Other aspects
- 8 See also
- 9 References
- 10 External links

- Gestures: The best known form of communication involves the display of distinctive body parts, or distinctive bodily movements; often these occur in combination, so a movement acts to reveal or emphasize a body part. A notable example is the parent herring gull's presentation of its bill to its chick, which signals feeding time. Like many gulls, the herring gull has a brightly coloured bill, yellow with a red spot on the lower mandible near the tip. When the parent returns to the nest with food, it stands over its chick and taps the bill on the ground in front of it; this elicits a begging response from a hungry chick (pecking at the red spot), which stimulates the parent to regurgitate food in front of it. The complete signal therefore involves a distinctive morphological feature (body part), the red-spotted bill, and a distinctive movement (tapping towards the ground) which makes the red spot highly visible to the chick. While all primates use some form of gesture, Frans de Waal came to the conclusion that apes and humans are unique in that only they are able to use intentional gestures to communicate. He tested the hypothesis of gesture evolving into language by studying the gestures of bonobos and chimps.
- Facial expression: Facial gestures play an important role in animal communication. A facial gesture is a motor expression of one or multiple facial features in response to some event, or a signal of intention for further actions. See emotion in animals for further information on possible signals of emotion. Dogs, for example, express anger through snarling and showing their teeth. In alarm their ears will perk up. When fearful, dogs will pull back their ears, expose their teeth slightly and squint their eyes.
Jeffrey Mogil studied the facial expressions of mice during increments of increasing pain; there were five recognizable facial expressions; orbital tightening, nose and cheek bulge, and changes in ear and whisker carriage. - Gaze following: Coordination among social animals is facilitated by monitoring of each other's head and eye orientation. Long recognized in human developmental studies as an important component of communication, there has recently begun to be much more attention on the abilities of animals to follow the gaze of those they interact with, whether members of their own species or humans. Studies have been conducted on apes, monkeys, dogs, birds, and tortoises, and have focused on two different tasks: "follow[ing] another’s gaze into distant space" and "follow[ing] another’s gaze geometrically around a visual barrier e.g. by repositioning themselves to follow a gaze cue when faced with a barrier blocking their view". The first ability has been found among a broad range of animals, while the second has been demonstrated only for apes, dogs (and wolves), and corvids (ravens), and attempts to demonstrate this "geometric gaze following" in marmoset and ibis gave negative results. Researchers do not yet have a clear picture of the cognitive basis of gaze following abilities, but developmental evidence indicates that "simple" gaze following and "geometric" gaze following are likely to rely on distinct cognitive foundations. - Color change: Color change can be separated into morphological color change, in which changes occur in relation to stage of development, or physiological color change, in which color change is triggered by mood, social context, or abiotic factors such as temperature. Physiological color change is a versatile mode for communication across a diverse array of taxa. Some cephalopods, such as the octopus and the cuttlefish, have specialized skin cells (chromatophores) that can change the apparent colour, opacity, and reflectiveness of their skin. In addition to being used for camouflage, rapid changes in skin colour are used while hunting and in courtship rituals. The colour changes in cuttlefish can be especially intricate as they are able to communicate two entirely different signals simultaneously from opposite sides of their body. When a male cuttlefish courts a female in the presence of other males, he displays two different sides: a male pattern facing the female, and a female pattern facing away, to deceive other males. Many animals communicate information about themselves without necessarily changing their behaviour. For example, sexual dimorphism in size or pelage communicates which sex the animal is. Other passive signals can be cyclical in nature. For example, in olive baboons, the beginning of the female's ovulation is a signal to the males that she is ready to mate. During ovulation, the skin of the female's anogenital area swells and turns a bright red/pink. - Bioluminescent communication: Communication by the production of light occurs commonly in vertebrates and invertebrates in the oceans, particularly at depths (e.g. angler fish). Two well known forms of land bioluminescence are fireflies and glow worms. Other insects, insect larvae, annelids, arachnids and even species of fungi possess bioluminescent abilities. Some bioluminescent animals produce the light themselves whereas others have a symbiotic relationship with bioluminescent bacteria. (See also: List of bioluminescent organisms) Many animals communicate through vocalization. 
Vocal communication is essential for many tasks, including mating rituals, warning calls, conveying location of food sources, and social learning. In a number of species, males perform calls during mating rituals as a form of competition against other males and to signal females, including hammer-headed bats, red deer, humpback whales, elephant seals, and songbirds. For more information on bird song, see bird vocalization. In various species, whale vocalizations have been found to have different dialects based on region. Other instances of vocal communication include the alarm calls of the Campbell monkey, the territorial calls of gibbons, and the use of frequency in greater spear-nosed bats to distinguish between groups. Another example of an animal that gives alarm calls is the vervet monkey. The vervet monkey gives a distinct alarm call for each of its four different predators. The other monkeys will react in a specific way, depending on which alarm call is issued by an individual. For example, the alarm calls for pythons and eagles are different because the response to each predator must be different. If an alarm call is given for a python, the monkeys must climb into the trees to avoid the python. In contrast, if an alarm call is given for an eagle, the monkeys must get low to the ground and seek a hiding place. It is important for these alarm calls to be distinguishable, so that the monkeys can recognize the threat and respond appropriately. Prairie dogs also have very complex communication when it comes to predator detection. According to Con Slobodchikoff and others, prairie dog calls contain specific information as to what type of predator is present, how big it is and how fast it is approaching. Not all animals use vocalization as a means of auditory communication. Many arthropods rub specialized body parts together to produce sound. This is known as stridulation. Crickets and grasshoppers are well known for this, but many others use stridulation as well, including crustaceans, spiders, scorpions, wasps, ants, beetles, butterflies, moths, millipedes, and centipedes. Another means of auditory communication is the vibration of swim bladders in bony fish. The structure of swim bladders and the attached sonic muscles varies greatly across bony fish families, resulting in a wide range of sound production. Striking body parts together can also produce auditory signals. A popular example of this is the tail tip vibration of rattlesnakes as a warning signal. Other examples include bill clacking in birds, wing clapping in manakin courtship displays, and chest beating in gorillas. Despite being the oldest method of communication, chemical communication is one of the least understood forms due to the “noisy” nature and sheer abundance of chemicals in our environment, and the difficulty of detecting and measuring all chemical species within a sample. The primary function of chemical reception is to detect resources (i.e. food) and this function was an adaptive trait (adaptation) first derived in single-celled organisms (bacteria), living in the oceans, during the early days of life on Earth. Over evolutionary time, this function became more refined, allowing some organisms to differentiate between chemical compounds emanating from resources, conspecifics (same species; i.e., mates and kin), and heterospecifics (different species; i.e., competitors and predators). 
The ability to detect chemicals present within the environment allowed organisms to associate advantageous or adaptive behaviors with the information provided by the chemicals. For instance, a small minnow species may do well to avoid habitat with a detectable concentration of chemical cue associated with a predator species such as northern pike. Minnows with the ability to perceive the presence of predators before they are close enough to be seen and then respond with adaptive behavior (such as hiding) are more likely to survive and reproduce, as will their offspring due to the nature of inheritance (heredity). A rare form of animal communication is electrocommunication. It is seen primarily in aquatic animals, though some land mammals, notably the platypus and echidnas are capable of electroreception and thus theoretically of electrocommunication. Weakly electric fishes are an example of electrocommunication tied with an active sensory modality for electrolocation. These fish communicate by generating an electric field using what is known as an electric organ. This field, and any changes of it, is detected by electroreceptors. Differences in waveforms and frequencies convey information on species, sex, and individuals. Differences and changes in waveforms can be in response to hormones, circadian rhythms, and interactions with other fish. Some predators, such as sharks and rays, are able to eavesdrop on these electrogenic fish through passive electroreception. Touch can be an important factor in social interactions, for example in fights or in a mating context. In both occasions, the use of touch will increase as an interaction escalates. In a fight, touch can be used to challenge an opponent, to coordinate movements during the escalation of the fight, and it can be used by the loser to perform submissive actions afterwards. Mammals will initiate mating by grooming, stroking or rubbing against each other. This provides the opportunity to assess chemical signals of the potential mate, or apply additional chemical signals. Touch is also used to announce the intention of the male to mount the female, such as a male kangaroo grabbing the tail of a female. During mating, touch stimuli are important for pair positioning, coordination and genital stimulation. In social integration, touch is a widely used communication system. The most widespread behaviour involving touch is allogrooming, the grooming of one animal by another. This has several functions; it removes parasites and debris from the body of the groomed animal, reaffirms the affiliative bond or hierarchical relationship between the animals involved, and gives the groomer an opportunity to examine olfactory cues on the groomed individual, perhaps adding additional ones. This behaviour has been observed in social insects, birds and mammals. Another mechanism for social integration is prolonged physical contact or huddling. This can be used to transfer olfactory or tactile information, or for heat exchange. Some organisms live in constant contact in a colony, for example colonial corals. Because of the tight linkages between individuals, the entire colony can react on the aversive or alarm movements made by only a few individuals. In several herbivorous insect nymphs and larvae, aggregations where there is prolonged contact play a major role in group coordination. This may take the form of a procession or a rosette. The behaviour to form a rosette is called cycloalexy. Touch can also be a way to inform conspecifics about the environment. 
Some ant species recruit fellow workers to new food finds by first tapping them with their antennae and forelegs, then leading them to the food source while keeping physical contact. Another example of this is the waggle dance of honey bees.

Seismic communication is the exchange of information using self-generated vibrational signals transmitted via a substrate such as the soil, water, spider webs, plant stems, or a blade of grass. This form of communication has several advantages: it can be sent regardless of light and noise levels, and its short range and short persistence carry little danger of detection by predators. Animals from a large number of taxa use seismic communication, including, but not limited to, frogs, kangaroo rats, mole rats, bees, and nematode worms. Tetrapods for the most part use a body part to drum against the ground to create seismic waves, which in turn are received through the sacculus. The sacculus is an organ in the inner ear containing a membranous sac that aids balance, but in animals that use this form of communication it can also detect seismic waves. Vibrations and other communication channels are not necessarily mutually exclusive, but can be used in multi-modal communication.

The ability to sense infrared (IR) thermal radiation evolved independently in various families of snakes. It allows these reptiles to form thermal images of the radiant heat emitted by predators or prey at wavelengths between 5 and 30 μm, to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes. It was previously thought that the organs evolved primarily as prey detectors, but it is now believed that they may also be used in thermoregulatory decision making. The facial pits enabling this sense underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons. The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. Despite detecting IR radiation, the pits' IR mechanism is dissimilar to that of photoreceptors; while photoreceptors detect light via photochemical reactions, the protein in the facial pits of snakes is a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light. This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse; the pit membrane is also vascularized so that the ion channel can be rapidly cooled back to its original “resting” or “inactive” temperature.

Common vampire bats (Desmodus rotundus) have specialized IR sensors in their nose-leaf. Vampire bats are the only mammals that feed exclusively on blood. The IR sense enables Desmodus to localize homeothermic animals such as cattle and horses within a range of about 10 to 15 cm.
This infrared perception is possibly used in detecting regions of maximal blood flow on targeted prey.

Autocommunication is a type of communication system in which the sender and receiver are the same individual. The sender emits a signal that is altered by the environment and eventually is received by the same individual. Certain alterations code for specific conditions which provide information about the environment that can be used to indicate food, predators or conspecifics. Because the sender and receiver are the same animal, selection pressure maximizes signal efficacy, i.e. “the degree to which an emitted signal is correctly identified by a receiver despite propagation distortion and noise.” Autocommunication can be divided into two main systems. The first is active electrolocation, found in the electric fish Gymnotiformes (knifefishes) and Mormyridae (elephantfish) and also in the platypus (Ornithorhynchus anatinus). The second form of autocommunication is echolocation, found in bats and Odontoceti.

There are many functions of animal communication. However, some have been studied in more detail than others. These include:
- Communication during contests: Animal communication plays a vital role in determining the winner of a contest over a resource. Many species have distinct signals that convey aggression or a willingness to attack, or that signal retreat, during competitions over food, territories, or mates.
- Mating rituals: Animals produce signals to attract the attention of a possible mate or to solidify pair bonds. These signals frequently involve the display of body parts or postures. For example, a gazelle will assume characteristic poses to initiate mating. Mating signals can also include the use of olfactory signals or calls unique to a species. Animals that form lasting pair bonds often have symmetrical displays that they make to each other. Famous examples are the mutual presentation of reeds by great crested grebes, studied by Julian Huxley, the triumph displays shown by many species of geese and penguins on their nest sites, and the spectacular courtship displays by birds of paradise.
- Ownership/territorial: Signals used to claim or defend a territory, food, or a mate.
- Food-related signals: Many animals make "food calls" to attract a mate, offspring, or other members of a social group to a food source. Perhaps the most elaborate food-related signal is the waggle dance of honeybees studied by Karl von Frisch. One well-known example of begging by offspring in a clutch or litter is that of altricial songbirds. Young ravens will signal to older ravens when they encounter new or untested food. Rhesus macaques will send food calls to inform other monkeys of a food source, to avoid punishment. Pheromones are released by many social insects to lead the other members of the society to the food source. For example, ants leave a pheromone trail on the ground that can be followed by other ants to lead them to the food source.
- Alarm calls: Alarm calls communicate the threat of a predator. This allows all members of a social group (and sometimes other species) to respond accordingly. This may include running for cover, becoming immobile, or gathering into a group to reduce the risk of attack. Alarm signals are not always vocalizations. Crushed ants will release an alarm pheromone to attract more ants and send them into an attack state.
- Meta-communication: Signals that will modify the meaning of subsequent signals. One example is the 'play face' in dogs, which signals that a subsequent aggressive signal is part of a play fight rather than a serious aggressive episode.
Interpretation of animal behavior

Animal behavior is sometimes very hard to interpret. It is not so hard to describe, but to give the right meaning to behavior is much harder. Psychological interpretations of animal behavior are often anthropomorphized, leading to wrong conclusions. On the other hand, the similarities between human behavior and certain animal behavior cannot be ignored. Anthropomorphizing is most often applied to domesticated animals, like cats and dogs, and to apes, because of their close phylogenetic relationship to humans. An experiment on chimpanzees showed that a small “dose of anthropomorphizing” often gives better scientific results than when researchers try to describe all behavior objectively, but skepticism about this approach remains.

Interpreting animal behavior requires considering the context in which the signal is produced. This context is important for the recipient, human or otherwise, to get the intended meaning of a signal. Research about the right interpretation of animal behavior has been going on for decades. In 1955, Cherry said “meaning is... like the beauty of a complexion; it lies altogether in the eye of the beholder”. Although his research was mainly on human behavior, he also applied it to animal behavior. The difficulty in studying the meaning of signals lies in the range of possible responses. The same gesture may have multiple meanings, depending on context and other associated behaviors. Because of this, generalizations such as "X means Y" are often, but not always, accurate. Combining the information from the context and the signal itself provides a more thorough reading of the communication. For example, a domestic dog's simple tail wag may be used in subtly different ways to convey many meanings, as illustrated in Charles Darwin's The Expression of the Emotions in Man and Animals, published in 1872. Combined with other body language in a specific context, many gestures (e.g. yawns, direction of vision) convey meaning. Thus statements that a particular action "means" something should always be interpreted as "often means". As with human beings, who may smile or hug or stand a particular way for multiple reasons, many animals also re-use gestures. The right interpretation of animal behavior can be critical, for example in clinical research with laboratory animals. Certain behaviors indicate pain levels, which is important for refining medical procedures, ensuring animal welfare and minimizing pain. When animals are in pain, they often behave differently; when given pain relief, their behavior returns to normal.

Much animal communication occurs between members of the same species, and this is the context in which it has been most intensively studied. Most of the forms and functions of communication described above are relevant to intraspecific communication. Many examples of communication also take place between members of different species. Animals communicate with other animals using various signs: visual, sound, echolocation, vibrations, body language, and smell.

Prey to predator

If a prey animal moves, makes a noise or vibrations, or emits a smell in such a way that a predator can detect it, this is consistent with the definition of "communication" given above.
This type of communication is known as interceptive eavesdropping, where a predator intercepts the message being conveyed to conspecifics. There are however, some actions of prey species that are clearly communications to actual or potential predators. A good example is warning colouration: species such as wasps that are capable of harming potential predators are often brightly coloured, and this modifies the behaviour of the predator, who either instinctively or as the result of experience will avoid attacking such an animal. Some forms of mimicry fall in the same category: for example hoverflies are coloured in the same way as wasps, and although they are unable to sting, the strong avoidance of wasps by predators gives the hoverfly some protection. There are also behavioural changes that act in a similar way to warning colouration. For example, canines such as wolves and coyotes may adopt an aggressive posture, such as growling with their teeth bared, to indicate they will fight if necessary, and rattlesnakes use their well-known rattle to warn potential predators of their venomous bite. Sometimes, a behavioural change and warning colouration will be combined, as in certain species of amphibians which have most of their body coloured to blend with their surroundings, except for a brightly coloured belly. When confronted with a potential threat, they show their belly, indicating that they are poisonous in some way. Another example of prey to predator communication is the pursuit-deterrent signal. Pursuit-deterrent signals occur when prey indicates to a predator that pursuit would be unprofitable because the signaler is prepared to escape. Pursuit-deterrent signals provide a benefit to both the signaler and receiver; they prevent the sender from wasting time and energy fleeing, and they prevent the receiver from investing in a costly pursuit that is unlikely to result in capture. Such signals can advertise prey’s ability to escape, and reflect phenotypic condition (quality advertisement), or can advertise that the prey has detected the predator (perception advertisement). Pursuit-deterrent signals have been reported for a wide variety of taxa, including fish (Godin and Davis, 1995), lizards (Cooper et al., 2004), ungulates (Caro, 1995), rabbits (Holley 1993), primates (Zuberbuhler et al. 1997), rodents (Shelley and Blumstein 2005, Clark, 2005), and birds (Alvarez, 1993, Murphy, 2006, 2007). A familiar example of quality advertisement pursuit-deterrent signal is stotting (sometimes called pronking), a pronounced combination of stiff-legged running while simultaneously jumping shown by some antelopes such as Thomson's gazelle in the presence of a predator. At least 11 hypotheses for stotting have been proposed. A leading theory today is that it alerts predators that the element of surprise has been lost. Predators like cheetahs rely on surprise attacks, proven by the fact that chases are rarely successful when antelope stot. Predators do not waste energy on a chase that will likely be unsuccessful (optimal foraging behaviour). Quality advertisement can be communicated by modes other than visual. The banner-tailed kangaroo rat produces several complex foot-drumming patterns in a number of different contexts, one of which is when it encounters a snake. The foot-drumming may alert nearby offspring but most likely conveys vibrations through the ground that the rat is too alert for a successful attack, thus preventing the snake's predatory pursuit. 
Predator to prey Typically, predators attempt to reduce communication to prey as this will generally reduce the effectiveness of their hunting. However, some forms of predator to prey communication occur in ways that change the behaviour of the prey and make their capture easier, i.e. deception by the predator. A well-known example is the angler fish, an ambush predator which waits for its prey to come to it. It has a fleshy bioluminescent growth protruding from its forehead which it dangles in front of its jaws. Smaller fish attempt to take the lure, placing themselves in a better position for the angler fish to catch them. Another example of deceptive communication is observed in the genus of jumping spiders (Myrmarachne). These spiders are commonly referred to as “antmimicking spiders” because of the way they wave their front legs in the air to simulate antennae. Various ways in which humans interpret the behaviour of domestic animals, or give commands to them, are consistent with the definition of interspecies communication. Depending on the context, they might be considered to be predator to prey communication, or to reflect forms of commensalism. The recent experiments on animal language are perhaps the most sophisticated attempt yet to establish human/animal communication, though their relation to natural animal communication is uncertain. Lacking in the study of human-animal communication is a focus on expressive communication from animal to human specifically. Horses are taught not to communicate (for safety). Dogs and horses are generally not encouraged to communicate expressively, but are encouraged to develop receptive language (understanding). Since the late 1990s, one scientist, Sean Senechal, has been developing, studying, and using the learned visible, expressive language in dogs and horses. By teaching these animals a gestural (human made) American Sign Language-like language, the animals have been found to use the new signs on their own to get what they need. The importance of communication is evident from the highly elaborate morphology, behaviour and physiology that some animals have evolved to facilitate this. These include some of the most striking structures in the animal kingdom, such as the peacock's tail, the antlers of a stag and the frill of the frill-necked lizard, but also include even the modest red spot on a European herring gull's bill. Highly elaborate behaviours have evolved for communication such as the dancing of cranes, the pattern changes of cuttlefish, and the gathering and arranging of materials by bowerbirds. Other evidence for the importance of communication in animals is the prioritisation of physiological features to this function, for example, birdsong appears to have brain structures entirely devoted to its production. All these adaptations require evolutionary explanation. There are two aspects to the required explanation: - identifying a route by which an animal that lacked the relevant feature or behaviour could acquire it; - identifying the selective pressure that makes it adaptive for animals to develop structures that facilitate communication, emit communications, and respond to them. Significant contributions to the first of these problems were made by Konrad Lorenz and other early ethologists. 
By comparing related species within groups, they showed that movements and body parts that in the primitive forms had no communicative function could be "captured" in a context where communication would be functional for one or both partners, and could evolve into a more elaborate, specialised form. For example, Desmond Morris showed in a study of grass finches that a beak-wiping response occurred in a range of species, serving a preening function, but that in some species this had been elaborated into a courtship signal. The second problem has been more controversial. The early ethologists assumed that communication occurred for the good of the species as a whole, but this would require a process of group selection which is believed to be mathematically impossible in the evolution of sexually reproducing animals. Altruism towards an unrelated group is not widely accepted in the scientific community, but rather can be seen as reciprocal altruism, expecting the same behaviour from others, a benefit of living in a group. Sociobiologists argued that behaviours that benefited a whole group of animals might emerge as a result of selection pressures acting solely on the individual. A gene-centered view of evolution proposes that behaviours that enabled a gene to become wider established within a population would become positively selected for, even if their effect on individuals or the species as a whole was detrimental; In the case of communication, an important discussion by John Krebs and Richard Dawkins established hypotheses for the evolution of such apparently altruistic or mutualistic communications as alarm calls and courtship signals to emerge under individual selection. This led to the realization that communication might not always be "honest" (indeed, there are some obvious examples where it is not, as in mimicry). The possibility of evolutionarily stable dishonest communication has been the subject of much controversy, with Amotz Zahavi in particular arguing that it cannot exist in the long term. Sociobiologists have also been concerned with the evolution of apparently excessive signaling structures such as the peacock's tail; it is widely thought that these can only emerge as a result of sexual selection, which can create a positive feedback process that leads to the rapid exaggeration of a characteristic that confers an advantage in a competitive mate-selection situation. One theory to explain the evolution of traits like a peacock's tail is 'runaway selection'. This requires two traits—a trait that exists, like the bright tail, and a prexisting bias in the female to select for that trait. Females prefer the more elaborate tails, and thus those males are able to mate successfully. Exploiting the psychology of the female, a positive feedback loop is enacted and the tail becomes bigger and brighter. Eventually, the evolution will level off because the survival costs to the male do not allow for the trait to be elaborated any further. Two theories exist to explain runaway selection. The first is the good genes hypothesis. This theory states that an elaborate display is an honest signal of fitness and truly is a better mate. The second is the handicap hypothesis. This explains that the peacock's tail is a handicap, requiring energy to keep and makes it more visible to predators. Thus, the signal is costly to maintain, and remains an honest indicator of the signaler's condition. 
Another assumption is that the signal is more costly for low-quality males to produce than for higher-quality males. This is simply because the higher-quality males have more energy reserves available to allocate to costly signaling. Ethologists and sociobiologists have characteristically analysed animal communication in terms of more or less automatic responses to stimuli, without raising the question of whether the animals concerned understand the meaning of the signals they emit and receive. That is a key question in animal cognition. There are some signalling systems that seem to demand a more advanced understanding. A much discussed example is the use of alarm calls by vervet monkeys. Robert Seyfarth and Dorothy Cheney showed that these animals emit different alarm calls in the presence of different predators (leopards, eagles, and snakes), and the monkeys that hear the calls respond appropriately - but that this ability develops over time, and also takes into account the experience of the individual emitting the call. Metacommunication, discussed above, also seems to require a more sophisticated cognitive process. It has been reported that bottlenose dolphins can recognize identity information from signature whistles even when the whistles are otherwise stripped of their characteristic voice features, making dolphins the only animals other than humans that have been shown to transmit identity information independent of the caller's voice or location. The paper concludes that: The fact that signature whistle shape carries identity information independent from voice features presents the possibility to use these whistles as referential signals, either addressing individuals or referring to them, similar to the use of names in humans. Given the cognitive abilities of bottlenose dolphins, their vocal learning and copying skills, and their fission–fusion social structure, this possibility is an intriguing one that demands further investigation.— V. M. Janik, et al.
Animal communication and human behaviour
Another controversial issue is the extent to which human behaviours resemble animal communication, or whether all such communication has disappeared as a result of our linguistic capacity. Some of our bodily features - eyebrows, beards and moustaches, deep adult male voices, perhaps female breasts - strongly resemble adaptations to producing signals. Ethologists such as Irenäus Eibl-Eibesfeldt have argued that facial gestures such as smiling, grimacing, and the eyebrow flash on greeting are universal human communicative signals that can be related to corresponding signals in other primates. Given how recently spoken language has emerged, it is very likely that human body language does include some more or less involuntary responses that have a similar origin to animal communication. Humans also often seek to mimic animals' communicative signals in order to interact with them. For example, cats have a mild affiliative response of slowly closing their eyes; humans often mimic this signal towards a pet cat to establish a tolerant relationship. Stroking, petting and rubbing pet animals are all actions that probably work through their natural patterns of interspecific communication. Dogs have shown an ability to understand human communication. In object choice tasks, dogs utilize human communicative gestures such as pointing and direction of gaze in order to locate hidden food and toys.
It has also been shown that dogs exhibit a left gaze bias when looking at human faces, indicating that they are capable of reading human emotions. Notably, dogs do not make use of direction of gaze or exhibit a left gaze bias with other dogs. A new approach in the 21st century in the field of animal communication uses applied behavioral analysis (ABA), specifically Functional Communication Training (FCT). FCT has previously been used in schools and clinics with humans with special needs, such as children with autism, to help them develop language. Sean Senechal, at the AnimalSign Center, has been using an approach similar to FCT with domesticated animals, such as dogs (since 2004) and horses (since 2000), with encouraging results and benefits to the animals and people. Senechal calls functional communication training for animals "AnimalSign Language". This includes teaching communication through gestures (like simplified American Sign Language), the Picture Exchange Communication System, tapping, and vocalisation. The process for animals includes simplified and modified techniques.
Animal communication and linguistics
For linguistics, the interest of animal communication systems lies in their similarities to and differences from human language:
- Human languages are characterized by having a double articulation (in the characterization of French linguist André Martinet). This means that complex linguistic expressions can be broken down into meaningful elements (such as morphemes and words), which in turn are composed of the smallest phonetic elements that affect meaning, called phonemes. Animal signals, however, do not exhibit this dual structure.
- In general, animal utterances are responses to external stimuli, and do not refer to matters removed in time and space. Matters of relevance at a distance, such as distant food sources, tend to be indicated to other individuals by body language instead, for example wolf activity before a hunt, or the information conveyed in honeybee dance language. It is therefore unclear to what extent utterances are automatic responses and to what extent deliberate intent plays a part.
- In contrast to human language, animal communication systems are usually not able to express conceptual generalizations. (Cetaceans and some primates may be notable exceptions.)
- Human languages combine elements to produce new messages (a property known as creativity). One factor in this is that much human language growth is based upon conceptual ideas and hypothetical structures, both being far greater capabilities in humans than in animals. This appears far less common in animal communication systems, although current research into animal culture is still an ongoing process with many new discoveries.
A recent and interesting area of development is the discovery that the use of syntax in language, and the ability to produce "sentences", is not limited to humans either. The first good evidence of syntax in non-humans, reported in 2006, is from the greater spot-nosed monkey (Cercopithecus nictitans) of Nigeria. This is the first evidence that some animals can take discrete units of communication and build them up into a sequence which then carries a different meaning from the individual "words":
- The greater spot-nosed monkeys have two main alarm sounds.
A sound known onomatopoeiacally as the "pyow" warns against a lurking leopard, and a coughing sound that scientists call a "hack" is used when an eagle is flying nearby. - "Observationally and experimentally we have demonstrated that this sequence [of up to three 'pyows' followed by up to four 'hacks'] serves to elicit group movement... the 'pyow-hack' sequence means something like 'let's go!' [a command telling others to move]... The implications are that primates at least may be able to ignore the usual relationship between an individual alarm call, and the meaning it might convey under certain circumstances... To our knowledge this is the first good evidence of a syntax-like natural communication system in a non-human species." - Animal consciousness - Anthrozoology (human–animal studies) - Body language - Dear enemy effect and Nasty neighbour effect - Deception in animals - Degeneracy (biology) - Emotion in animals - Forms of activity and interpersonal relations - International Society for Biosemiotic Studies - Origin of language - Origin of speech - Sir Philip Sidney game - Talking animal - Witzany, G., ed. (2014). Biocommunication of Animals. Dortrecht: Springer. ISBN 978-94-007-7413-1. - Maynard-Smith and Harper, 2003 - de Waal - Langford, D.J. et al., (2010). Coding of facial expressions of pain in the laboratory mouse. Nature Methods, May 9th. pp. 1-3. doi:10.1038/nmeth.1455 - Range F. and Virányi, Z. (2011). Development of gaze following abilities in wolves (Canis Lupus). PLoS ONE 6(2): e16888. doi:10.1371/journal.pone.0016888 - Cloney, R.A. & E. Florey 1968. Ultrastructure of cephalopod chromatophore organs. Z. Zellforsch Mikrosk. Anat. 89: 250-280. PMID 5700268 - Hanlon, R.T.; Messenger, J.B. (1996). Cephalopod Behaviour. Cambridge University Press. p. 121. ISBN 0-521-64583-2. - Williams, Sarah (2012). "Two-faced fish tricks competitors". Science Now. Retrieved March 16, 2013. - Motluk, Alison (2001). "Big Bottom". New Scientist. 19 (7). - Ehrlich, Paul R.; David S. Dobkin & Darryl Wheye. ""Bird Voices" and "Vocal Development" from Birds of Stanford essays". Retrieved 9 Sep 2008. - Slabbekoorn, Hans, Smith, Thomas B. "Bird song, ecology and speciation." Philosophical Transactions: Biology Sciences 357.1420 (2002). 493-503. - Carey, Bjorn. Whales Found to Speak in Dialects. Live Science. 3 Jan. 2006. - Zuberbühler, Klause. "Predator-specific alarm calls in Campbell's monkeys, Cercopithecus campbelli." Behavioral Ecology and Sociobiology 50.5 (2001). 414-442 - Boughman, Janette W. "Vocal learning by greater spear-nosed bats." Proceedings: Biological Sciences 265.1392 (1998). 227-233 - Krulwich, Robert. "New Language Discovered: Prairiedogese". Retrieved 20 May 2015. - Edwards, Lin (4 February 2010). "Prairie dogs may have the most complex language". Retrieved 20 May 2015. - DeMello, Margo (2007). "Yips, barks and chirps: the language of prairie dogs". Retrieved 20 May 2015. - "Prairie dogs' language decoded by scientists". CBC News. 21 June 2013. Retrieved 20 May 2015. - Bradbury, J.W., and S.L. Vehrencamp. Principles of Animal Communication. Sunderland, MA: Sinauer Associates Inc., 2011. Print. - Brown, G.E., D.P. Chivers, and R.J.F. Smith. 1995. Localized defecation by pike: A response to labelling by cyprinid alarm pheromone?. Behavioral Ecology and Sociobiology. 36.2: 105-110. - "Electrocommunication". Davidson College. Retrieved 2011-03-03. - Bradbury, J.W., and S.L. Vehrencamp. Principles of Animal Communication. Sunderland, MA: Sinauer Associates Inc., 2011. 
- Narins, Peter M. "Seismic Communication in Anuran Amphibians." BioScience40.4 (1990): 268. Print. - (Kardong & Mackessy 1991) - (Krochmal et al. 2004) - (Pough et al. 1992) - (Gracheva et al. 2010) - Kürten, L., Schmidt, U. and Schäfer, K. (1984). "Warm and cold receptors in the nose of the vampire bat, Desmodus rotundus". Naturwissenschaften. 71: 327–328. - "Web of Life:Vibrational communication in animals". Retrieved 8 December 2012. - Sean Senechal: Dogs can sign, too. A breakthrough method of teaching your dog to communicate to you, 2009, Random House/Crown/TenSpeed Press - discussed at length by Richard Dawkins under the subject of his book The Selfish Gene - V. M. Janik, L. S. Sayigh, and R. S. Wells: "Signature whistle shape conveys identity information to bottlenose dolphins", Proceedings of the National Academy of Sciences, vol. 103 no 21, May 23, 2006 - Hare, B., Call, J. & Tomasello, M.: "Communication of food location between human and dog (Canis familiaris).", Evolution of Communication, 2, 137–159, 1998. - K. Guo, K. Meints, C. Hall, S. Hall & D. Mills: "Left gaze bias in humans, rhesus monkeys and rhesus domestic dogs." "Animal Cognition", vol. 12, 2009 - "Do animals have language? - Michele Bishop". TED Ed. 10 September 2015. Retrieved 11 September 2015. - The Times May 18, 2006, p.3 - Brandon Kiem. "Rudiments of Language Discovered in Monkeys". Wiredscience. Retrieved 2013-03-15. - Animal Communicator - Documentary - The Elgin Center for Zoosemiotic Research - Zoosemiotics: animal communication on the web - The Animal Communication Project - International Bioacoustics Council research on animal language. - Animal Sounds different animal sounds to listen and download. - The British Library Sound Archive contains over 150,000 recordings of animal sounds and natural atmospheres from all over the world.
What Is a Mind Map? A mind map in the simplest terms is a technique in which a complex subject goes from a macro view (think, wide angle) to micro view (think, close-up) by creating a web of topics and sub-topics related to the main subject. Another way to think of mind mapping is a visual representation of a main idea in which the creator designs a web of connecting thoughts or ideas in a graphic or artistic form. Mind mapping examples show that any subject area can utilize mind maps as a means to brainstorm. Students enjoy creating their own mind maps, which are an excellent tool for educators in assessing a student's understanding of a subject. Mind maps can also assist students in discerning topics for essay or research papers. No need to reinvent the wheel, on the World Wide Web there are several sites that offer downloadable mind mapping samples for educators. These mind maps can be used as handouts for students or as templates for students to fill-in their own ideas. There is also free mind mapping software that aids in the creation of mind maps. The following sites offer downloadable templates: The following sites offer free mind mapping software: Tips When Mind Mapping Whether an educator downloads mind-mapping samples or creates a mind map from scratch, there are a few things to keep in mind when creating mind maps. - Remember that mind mapping is form of brainstorming. In order to promote deep, effective thinking, eliminate as many outside distractions as possible, i.e. turn off computers, cell phones and other distracting electronics; dim lights; avoid interruptions. - Begin with a main idea or question. For instance, for a history unit on the Civil War, the main idea might be "slavery" or "Lincoln" or "How did the underground railroad work?" For an English literature unit it might be "Which Shakespearean plays address the theme of parent/child relationships and how?" or "American poets". - Be creative! This does not have to look like a thesis statement. Use colors, draw pictures and allow for freethinking or thinking outside the box. Most importantly, have large paper on which to create the mind map. - Finally, remember no idea or thought is incorrect, silly, or undeserving. Put them all down. Once you have exhausted all possibilities, then you can weed out the items that do not work. A form of brainstorming, mind mapping is non-linear, creative, expressive and fun. Mind mapping is an excellent tool for the classroom. As an assessment tool, mind mapping is effective in any subject area. Mind mapping also assists in discerning topics for papers or research. Educators may obtain mind-mapping examples online, download mind-mapping software, or take the leap to create their own dynamic mind mapping templates to use in class with students. This post is part of the series: Mind Mapping - Using Mind Mapping to Help Your Students - Learning Mind Mapping With Elementary Students - Mind Mapping: Helping Students Fine Tune Studies - Mind Mapping Software Solutions for Teachers
How do you approximate the binomial coefficient?
The binomial coefficients are the integers given by the formula $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$. The binomial theorem provides a method for expanding binomials raised to powers without directly multiplying each factor: $(x+y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k$. Pascal's triangle can be used to quickly determine the binomial coefficients.
Which equation determines the Stirling approximation?
Stirling's formula can be expressed as an estimate for $\log(n!)$: $\log(n!) = n\log n - n + \tfrac{1}{2}\log n + \tfrac{1}{2}\log(2\pi) + \varepsilon_n$, where $\varepsilon_n \to 0$ as $n \to \infty$.
What is a binomial coefficient? Give an example.
The binomial coefficient $C(n, k)$ counts the number of ways to form an unordered collection of $k$ items chosen from a collection of $n$ distinct items. For example, if you wanted to make a 2-person committee from a group of four people, the number of ways to do this is $C(4, 2)$.
What does binomial coefficient mean in statistics?
The binomial coefficient is the number of ways of picking unordered outcomes from possibilities, also known as a combination or combinatorial number.
Why is Stirling's approximation used?
When one needs $\log(n!)$ (as one often does), Stirling's approximation reduces a calculation of $n$ logarithms ($\log n + \log(n-1) + \dots$) to just one ($(n + \tfrac{1}{2})\log n - n + O(1)$). Approximations can be used in lots of ways. In particular, it makes it easier to compare $n!$ to other functions, to compute limits, etc.
What is Stirling's approximation in statistical mechanics?
In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of $n$. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre.
What is Stirling's approximation in physics?
The Stirling formula, or Stirling's approximation formula, is used to give the approximate value of a factorial ($n!$). It can also be used for the Gamma function. Stirling's formula is also used in applied mathematics. It makes finding the factorial of larger numbers easy.
What is the binomial coefficient used for?
In combinatorics, the binomial coefficient is used to denote the number of possible ways to choose a subset of objects of a given numerosity from a larger set. It is so called because it can be used to write the coefficients of the expansion of a power of a binomial.
What is an approximate distribution?
Normal approximation: the process of using the normal curve to estimate the shape of the distribution of a data set. Central limit theorem: the theorem that states that if the sum of independent identically distributed random variables has a finite variance, then it will be (approximately) normally distributed.
When can the binomial approximate the hypergeometric?
As a rule of thumb, if the population size is more than 20 times the sample size ($N > 20n$), then we may use binomial probabilities in place of hypergeometric probabilities. We next illustrate this approximation in some examples.
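To make the formulas above concrete, here is a minimal Python sketch (not part of the original Q&A) that computes a binomial coefficient exactly and compares Stirling's estimate of log(n!) against the exact value; the particular values of n and k are illustrative only.

```python
import math

def binomial(n, k):
    """Exact binomial coefficient C(n, k) = n! / (k! (n-k)!)."""
    return math.comb(n, k)

def stirling_log_factorial(n):
    """Stirling's estimate of log(n!):
    n*log(n) - n + (1/2)*log(n) + (1/2)*log(2*pi)."""
    return n * math.log(n) - n + 0.5 * math.log(n) + 0.5 * math.log(2 * math.pi)

if __name__ == "__main__":
    # Example from the text: C(4, 2) = 6 two-person committees from four people.
    print(binomial(4, 2))

    # Compare Stirling's estimate with the exact log(n!) (via lgamma(n + 1)).
    for n in (10, 100, 1000):
        exact = math.lgamma(n + 1)
        approx = stirling_log_factorial(n)
        print(n, exact, approx, exact - approx)
```

The last column printed is the error term, which shrinks as n grows; that is the sense in which the approximation improves, matching the statement above that the correction term tends to zero.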
Wildlife conservation is the practice of protecting wild plant and animal species and their habitats. Among the goals of wildlife conservation are to ensure that nature will be around for future generations to enjoy and to recognize the importance of wildlife and wilderness lands to humans. Many nations have government agencies dedicated to wildlife conservation, which help to implement policies designed to protect wildlife. Numerous independent non-profit organizations also promote various wildlife conservation causes. Wildlife conservation has become an increasingly important practice due to the negative effects of human activity on wildlife. An endangered species is defined as a population of a living being that is in danger of becoming extinct for any of several reasons, such as a very low population or threats from changing environmental conditions.
Major threats to wildlife
Fewer natural wildlife habitat areas remain each year. Moreover, the habitat that remains has often been degraded to bear little resemblance to the wild areas which existed in the past. Habitat loss—due to destruction, fragmentation or degradation of habitat—is the primary threat to the survival of wildlife in the United States. When an ecosystem has been dramatically changed by human activities—such as agriculture, oil and gas exploration, commercial development or water diversion—it may no longer be able to provide the food, water, cover, and places to raise young. Every day there are fewer places left that wildlife can call home. There are three major kinds of habitat loss:
- Habitat destruction: A bulldozer pushing down trees is the iconic image of habitat destruction. Other ways that people are directly destroying habitat include filling in wetlands, dredging rivers, mowing fields, and cutting down trees.
- Habitat fragmentation: Much of the remaining terrestrial wildlife habitat in the U.S. has been cut up into fragments by roads and development. Aquatic species' habitat has been fragmented by dams and water diversions. These fragments of habitat may not be large or connected enough to support species that need a large territory in which to find mates and food. The loss and fragmentation of habitat make it difficult for migratory species to find places to rest and feed along their migration routes.
- Habitat degradation: Pollution, invasive species and disruption of ecosystem processes (such as changing the intensity of fires in an ecosystem) are some of the ways habitats can become so degraded that they no longer support native wildlife.
- Climate change: Global warming is making hot days hotter, rainfall and flooding heavier, hurricanes stronger and droughts more severe. This intensification of weather and climate extremes will be the most visible impact of global warming in our everyday lives. It is also causing dangerous changes to the landscape of our world, adding stress to wildlife species and their habitat. Since many types of plants and animals have specific habitat requirements, climate change could cause disastrous loss of wildlife species. A slight drop or rise in average rainfall will translate into large seasonal changes. Hibernating mammals, reptiles, amphibians and insects are harmed and disturbed. Plants and wildlife are sensitive to moisture change, so they will be harmed by any change in moisture level.
Natural phenomena such as floods, earthquakes, volcanoes, lightning and forest fires also threaten wildlife.
- Unregulated hunting and poaching: Unregulated hunting and poaching pose a major threat to wildlife. Along with this, mismanagement by forest departments and forest guards aggravates the problem.
- Pollution: Pollutants released into the environment are ingested by a wide variety of organisms. Pesticides and toxic chemicals are widely used, making the environment toxic to certain plants, insects, and rodents.
- Perhaps the largest threat is the growing indifference of the public to wildlife, conservation and environmental issues in general. Over-exploitation of resources, i.e., exploitation of wild populations for food, has resulted in population crashes (over-fishing and over-grazing, for example).
- Over-exploitation is the overuse of wildlife and plant species by people for food, clothing, pets, medicine, sport and many other purposes. People have always depended on wildlife and plants for food, clothing, medicine, shelter and many other needs. But today we are taking more than the natural world can supply. The danger is that if we take too many individuals of a species from their natural environment, the species may no longer be able to survive. The loss of one species can affect many other species in an ecosystem. The hunting, trapping, collecting and fishing of wildlife at unsustainable levels is not something new. The passenger pigeon was hunted to extinction early in the last century, and over-hunting nearly caused the extinction of the American bison and several species of whales. Today, the Endangered Species Act protects some U.S. species that were in danger from over-exploitation, and the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) works to prevent the global trade of wildlife. But there are many species that are not protected from being illegally traded or over-harvested.
North American Model of Wildlife Conservation
The North American Model of Wildlife Conservation is a set of principles that has guided management and conservation decisions in the United States and Canada. Although not formally articulated until 2001, the model has its origins in 19th century conservation movements, the near extinction of several species of wildlife (including the American Bison) and the rise of sportsmen among the middle class. Beginning in the 1860s, sportsmen began to organize and advocate for the preservation of wilderness areas and wildlife. The North American Model of Wildlife Conservation rests on two basic principles – fish and wildlife are for the non-commercial use of citizens, and should be managed such that they are available at optimum population levels forever. These core principles are elaborated upon in the seven major tenets of the model:
- Wildlife as Public Trust Resources.
- Elimination of Markets for Game.
- Allocation of Wildlife by Law.
- Wildlife Should Only be Killed for a Legitimate Purpose.
- Wildlife is Considered an International Resource.
- Science is the Proper Tool for Discharge of Wildlife Policy.
- Democracy of Hunting.
Wildlife conservation as a government involvement
In 1972, the Government of India enacted the Wildlife (Protection) Act. Soon after enactment, a trend emerged whereby policymakers enacted regulations on conservation. State and non-state actors began to follow a detailed "framework" to work toward successful conservation.
The World Conservation Strategy was developed in 1980 by the "International Union for Conservation of Nature and Natural Resources" (IUCN) with advice, cooperation and financial assistance of the United Nations Environment Programme (UNEP) and the World Wildlife Fund and in collaboration with the Food and Agriculture Organization of the United Nations (FAO) and the United Nations Educational, Scientific and Cultural Organization (Unesco)" The strategy aims to "provide an intellectual framework and practical guidance for conservation actions." This thorough guidebook covers everything from the intended "users" of the strategy to its very priorities. It even includes a map section containing areas that have large seafood consumption and are therefore endangered by over fishing. The main sections are as follows: - The objectives of conservation and requirements for their achievement: - Maintenance of essential ecological processes and life-support systems. - Preservation of genetic diversity that is flora and fauna. - Sustainable utilization of species and ecosystems. - Priorities for national action: - A framework for national and sub-national conservation strategies. - Policy making and the integration of conservation and development. - Environmental planning and rational use allocation. - Priorities for international action: - International action: law and assistance. - Tropical forests and dry lands. - A global programme for the protection of genetic resource areas. - Tropical forests - Deserts and areas subject to desertification. As “major development agencies” became “discouraged with the public sector” of environmental conservation in the late 1980s, these agencies began to lean their support towards the “private sector” or non-government organizations (NGOs). In a World Bank Discussion Paper it is made apparent that “the explosive emergence of nongovernmental organizations” was widely known to government policy makers. Seeing this rise in NGO support, the U.S. Congress made amendments to the Foreign Assistance Act in 1979 and 1986 “earmarking U.S. Agency for International Development (USAID) funds for biodiversity”. From 1990 moving through recent years environmental conservation in the NGO sector has become increasingly more focused on the political and economic impact of USAID given towards the “Environment and Natural Resources”. After the terror attacks on the World Trade Centers on September 11, 2001 and the start of former President Bush’s War on Terror, maintaining and improving the quality of the environment and natural resources became a “priority” to “prevent international tensions” according to the Legislation on Foreign Relations Through 2002 and section 117 of the 1961 Foreign Assistance Act. Furthermore in 2002 U.S. Congress modified the section on endangered species of the previously amended Foreign Assistance Act. Sec. 119.100 Endangered Species: (a) The Congress finds the survival of many animals and plant species is endangered by over hunting, by the presence of toxic chemicals in water, air and soil, and by the destruction of habitats. The Congress further finds that the extinction of animal and plant species is an irreparable loss with potentially serious environmental and economic consequences for developing and developed countries alike. 
Accordingly, the preservation of animal and plant species through the regulation of the hunting and trade in endangered species, through limitations on the pollution of natural ecosystems, and through the protection of wildlife habitats should be an important objective of the United States development assistance. (b) In order to preserve biological diversity, the President is authorized to furnish assistance under this part, notwithstanding section 660, to assist countries in protecting and maintaining wildlife habitats and in developing sound wildlife management and plant conservation programs. Special efforts should be made to establish and maintain wildlife sanctuaries, reserves, and parks; to enact and enforce anti-poaching measures; and to identify, study, and catalog animal and plant species, especially in tropical environments. The amendments to the section also included modifications to the section concerning "PVOs and other Nongovernmental Organizations." The section requires that PVOs and NGOs "to the fullest extent possible involve local people with all stages of design and implementation." These amendments to the Foreign Assistance Act and the recent rise in USAID funding towards foreign environmental conservation have led to several disagreements about NGOs' role in foreign development.
Active non-government organizations
Many NGOs exist to actively promote, or be involved with, wildlife conservation:
- The Nature Conservancy is a US charitable environmental organization that works to preserve the plants, animals, and natural communities that represent the diversity of life on Earth by protecting the lands and waters they need to survive.
- World Wide Fund for Nature (WWF) is an international non-governmental organization working on issues regarding the conservation, research and restoration of the environment, formerly named the World Wildlife Fund, which remains its official name in Canada and the United States. It is the world's largest independent conservation organization with over 5 million supporters worldwide, working in more than 90 countries, supporting around 1300 conservation and environmental projects around the world. It is a charity, with approximately 60% of its funding coming from voluntary donations by private individuals. 45% of the fund's income comes from the Netherlands, the United Kingdom and the United States.
- Wildlife Conservation Society
- Audubon Society
- Traffic (conservation programme)
- Safari Club International
- WildEarth Guardians
- "Cooperative Alliance for Refuge Enhancement". CARE. Retrieved 1 June 2012.
- "Wildlife Conservation". Conservation and Wildlife. Retrieved 1 June 2012.
- McCallum, M.L. 2010. Future climate change spells catastrophe for Blanchard's Cricket Frog (Acris blanchardi). Acta Herpetologica 5:119 - 130.
- McCallum, M.L., J.L. McCallum, and S.E. Trauth. 2009. Predicted climate change may spark box turtle declines. Amphibia-Reptilia 30:259 - 264.
- McCallum, M.L. and G.W. Bury. 2013. Google search patterns suggest declining interest in the environment. Biodiversity and Conservation DOI: 10.1007/s10531-013-0476-6
- Organ, J.F.; V. Geist, S.P. Mahoney, S. Williams, P.R. Krausman, G.R. Batcheller, T.A. Decker, R. Carmichael, P. Nanjappa, R. Regan, R.A. Medellin, R. Cantu, R.E. McCabe, S. Craven, G.M. Vecellio, and D.J. Decker (2012). The North American Model of Wildlife Conservation. The Wildlife Society Technical Review 12-04. (Bethesda, Maryland: The Wildlife Society). ISBN 978-0-9830402-3-1.
- Geist, V.; S.P. Mahoney, and J.F. Organ. (2001). "Why hunting has defined the North American model of wildlife conservation.". Transactions of the North American Wildlife and Natural Resources Conference 66: 175–185. - Mahoney, Shane (May–June 2004). "The North American Wildlife Conservation Model". Bugle (Rocky Mountain Elk Foundation) 21 (3). - "TWS Final Position Statement". Retrieved 2011-04-04. - "World Conservation Strategy" (PDF). Retrieved 2011-05-01. - Meyer, Carrie A. (1993). "Environmental NGOs in Ecuador: An Economic Analysis of Institutional Change". The Journal of Developing Areas 27 (2): 191–210. - "The Foreign Assistance Act of 1961, as amended" (PDF). Retrieved 2011-05-01. - "About Us - Learn More About The Nature Conservancy". Nature.org. 2011-02-23. Retrieved 2011-05-01. - "WWF in Brief". World Wildlife Fund. Retrieved 2011-05-01.
A radome (the word is a contraction of radar and dome) is a structural, weatherproof enclosure that protects a microwave (e.g. radar) antenna. The radome is constructed of material that minimally attenuates the electromagnetic signal transmitted or received by the antenna. In other words, the radome is transparent to radar or radio waves. Radomes protect the antenna surfaces from weather or conceal antenna electronic equipment from public view. They also protect nearby personnel from being accidentally struck by quickly rotating antennas. On rotary-wing and fixed-wing aircraft that use microwave satellite links for beyond-line-of-sight communication, radomes often appear as blisters on the fuselage. In addition to protection, radomes also streamline the antenna system, thus reducing drag. A radome is often used to prevent ice and freezing rain from accumulating directly on the metal surface of antennas. In the case of a spinning radar dish antenna, the radome also protects the antenna from debris and rotational irregularities due to wind, and its hard shell resists damage. For stationary antennas, excessive amounts of ice can de-tune the antenna to the point where its impedance at the input frequency rises drastically, causing the voltage standing wave ratio (VSWR) to rise as well; a larger share of the transmitted power is then reflected back to the transmitter, where it can cause overheating. A foldback circuit can act to prevent this; however, one drawback of its use is that it causes the station's output power to drop dramatically, reducing its range. A radome prevents that by covering the antenna's exposed parts with a sturdy, weatherproof material, typically fibreglass, which keeps debris and ice away from the antenna. One of the main driving forces behind the development of fibreglass as a structural material was the need for radomes during World War II. When considering structural load, the use of a radome greatly reduces wind load in both normal and iced conditions. Many tower sites require or prefer the use of radomes for wind loading benefits and for protection from falling ice or debris. Radomes near the ground may sometimes be unsightly, and heaters can be used instead. Usually running on direct current, the heaters do not interfere physically or electrically with the alternating current of the radio transmission. The Menwith Hill electronic surveillance base, which includes over 30 radomes, is widely believed to regularly intercept satellite communications. At Menwith Hill, the radome enclosures have a further use in preventing observers from deducing the direction of the antennas, and therefore which satellites are being targeted. The same point was also made with respect to the radomes of the ECHELON facilities. The US Air Force Aerospace Defense Command operated and maintained dozens of air defense radar stations in the US, including Alaska, during the Cold War. Most of the radars used at these ground stations were protected by rigid or inflatable radomes. The radomes were typically at least 50 feet in diameter and were attached to standardized radar tower buildings that housed the radar transmitter, receiver and antenna. Some of these radomes were very large. The CW-620 was a space frame rigid radome with a maximum diameter of 150 feet and a height of 84 feet.
This radome consisted of 590 panels and was designed for winds up to 150 mph. The total radome weight was 204,400 pounds, with a surface area of 39,600 square feet. The CW-620 radome was designed and constructed by Sperry-Rand Corp. for the Columbus Division of North American Aviation. This radome was originally used for the FPS-35 search radar at Baker Air Force Station, OR. When Baker AFS was closed, the radome was moved to Payette, ID, where it serves as a high-school gymnasium. Pictures and documents for Baker AFS/821st Radar Squadron are available online at radomes.org/museum. For maritime satellite communications service, radomes are widely used to protect dish antennas which continually track fixed satellites while the ship experiences pitch, roll and yaw movements. Large cruise ships and oil tankers may have radomes over 3 m in diameter covering antennas for broadband transmissions for television, voice, data, and the Internet, while recent developments allow similar services from smaller installations such as the 85 cm motorised dish used in the ASTRA2Connect Maritime Broadband system. Small private yachts may use radomes as small as 26 cm in diameter for voice and low-speed data. An active electronically scanned array (AESA) is a form of radar installation that has no moving parts as such, and in ground-based installations a radome is not necessary. An example of this is the "pyramid" which replaced the "tourist attraction" golfball-style radome installations at RAF Fylingdales.
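As a side note to the de-tuning discussion earlier in this article, the relationship between an impedance mismatch, the reflection coefficient and VSWR can be sketched in a few lines of Python. This is a minimal illustration of the standard transmission-line relations only; the impedance values below are invented for the example and are not measurements of any real antenna.

```python
# Standard transmission-line relations:
#   reflection coefficient  Gamma = (Z_load - Z_0) / (Z_load + Z_0)
#   VSWR                    = (1 + |Gamma|) / (1 - |Gamma|)
#   reflected power share   = |Gamma|^2

def vswr_from_impedance(z_load: complex, z0: float = 50.0):
    """Return (VSWR, fraction of power reflected) for a load on a z0-ohm line."""
    gamma = (z_load - z0) / (z_load + z0)
    mag = abs(gamma)
    vswr = (1 + mag) / (1 - mag) if mag < 1 else float("inf")
    return vswr, mag ** 2

if __name__ == "__main__":
    # Illustrative values: a matched antenna vs. ones "de-tuned" (e.g. by ice build-up).
    for z in (50 + 0j, 75 + 40j, 150 + 120j):
        vswr, reflected = vswr_from_impedance(z)
        print(f"Z = {z}: VSWR = {vswr:.2f}, reflected power = {reflected:.1%}")
```

As the load impedance drifts away from the line impedance, the printed VSWR and reflected-power fraction rise, which is the mechanism by which ice on an unprotected antenna can send power back toward the transmitter.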
The definition of a continuous quantity THE SUBJECT MATTER OF DIFFERENTIAL CALCULUS is the rate of change of functions of continuous quantities. To understand what that means, we must distinguish what is continuous from what is discrete. A natural number is a collection of indivisible and separate units. The people in the room, the electrons in an atom, the names of numbers. They are discrete units. You cannot take half of any one. If you do, it will not be that unit -- it will not have that same name -- any more. Half of what is called a person is not also a person. We count things that are discrete: One person, two, three, four, and so on. But consider the distance between A and B. That distance is not composed of discrete units. There is nothing to count -- it is not a number of anything. We say, instead, that it is a continuous whole. That means that as we go from A to B, the line "continues" without a break. The mathematical line is abstracted from the boundary of a plane figure: the boundary of a circle, a square, and so on. All unbroken lines, curved or straight, are continuous. Now, a collection of discrete units will have only certain parts. Of 10 people, we can take only half of them, a fifth, or a tenth. When we divide any discrete collection, we will eventually come to an indivisible one; in this case, one person. But since the length AB is continuous, we could divide it into any number of parts. Not only could we take half of it, we could take any part we please -- a tenth, a hundredth, or a billionth -- because AB is not composed of units, of things that have the same name. And most important, any part of AB, however small, will still be a length. That is the idea of a continuum, or a continuous quantity. There is no limit to the smallness of the parts into which it could be divided. We imagine a continuum to be "infinitely divisible," which is a brief way of saying that no matter into how many parts it has been divided, it could be divided still further. And each part will itself be infinitely divisible. A defining property of a continuous quantity, such as the line AC, is that if it is divided at any point B, then the right-hand boundary B of the part AB, coincides with the left-hand boundary B of the part BC. The parts AB, BC make contact -- they are connected -- at the point B. That allows AB to continue into BC without a gap. In other words, if a continuous quantity were decomposed into parts or intervals, then all parts are connected. All parts share their boundaries. The lines AB, B'C do not share a common boundary, a common endpoint -- they are not connected. And so there is not a continuous line that joins A and C. But if we join BB', then what were originally two endpoints, two boundaries, become one. AB now continues into BC without a gap. The word continuous comes from a Latin root meaning held together. What is it that holds a line together to make it whole? Again, no matter where a line might be divided, the right and left endpoints, as B above, coincide as one. In Lesson 3 we will see how that leads to the definition of a continuous function. Whatever is continuous we call a magnitude. Magnitudes are of different kinds: distance, area, time, speed. In calculus and physics, we regard magnitudes as being measureable. Since they are continuous, we could divide a magnitude into any units of measure, however small. We could divide time, for example, into seconds, or hundredths of a seconds, or trillionths of a second. 
We sum this up in the following definition:
DEFINITION 1. We say that a quantity is continuous if there is no limit to the smallness of the parts into which it could be divided, and 1) no matter where it might be divided, the parts share a common boundary, and 2) each part is a quantity of the same kind. See the Problem below.
The prime example of a continuum is length. A line could be divided into any number of parts, which themselves could be divided; the line will then be composed of those parts. A continuum cannot be composed of points, because points are indivisible; they cannot be divided into parts, as required by the definition. "Point" is a convenient word, when we need it, to refer to the boundary of an interval or where two lines meet. But points do not exist until we point to them! In calculus, anything more than that is unnecessary. In fact, if points were in any sense real entities, then the "two" points, B, B' above, could not become one. That, at any rate, has been the meaning of the words point and continuum since ancient times. In the 19th century, the abstractions of modernism found their expression in mathematics as well, and certain mathematicians created a radically different meaning for those words. They began with what they called "points," and they ascribed to them a primary logical existence. They then defined a "continuum," and specifically a "line," as a "set" of those "points." (That meaning of "point" became unexplainedly linked with the geometrical meaning.) And what were called the real "numbers" were then identified with the infinity of those "points." That is logic, which does not require that words have their customary meanings -- or any meaning for that matter. It requires only that words -- "point," "number," "infinity" -- obey the formal rules of a language. A logical theory therefore may be nothing more than a formal game, even to the point of fantasy. See the Appendix: Are the real numbers really numbers?
Problem. Which of these is continuous and which is discrete? Do the problem yourself first!
a) The leaves on a tree.
b) The stars in the sky.
c) The distance from here to the Moon. Continuous. Our idea of distance, of length, is that it could have any size, however large or however small.
d) A bag of apples.
f) A dozen eggs. Discrete. (But if they're scrambled?)
g) 60 minutes. Continuous. Our idea of time, like our idea of distance, is that there is no smallest unit. Any part of 60 minutes is still time.
h) Motion from one place to another. Continuous. The idea of any quantity of motion is that there is no limit to its smallness.
i) Pearls on a necklace
j) The area of a circle. As area, it is continuous; any part of an area is also an area. But as a form, a circle is discrete; half a circle is not also a circle.
k) The volume of a sphere. As volume, it is continuous. As a form, a sphere is discrete.
l) A gallon of water. Continuous. We think of volume as having any part. And any part is still a volume of water.
m) Molecules of water. Discrete. In other words, if we could keep dividing a quantity of water, then ultimately (in theory) we would come to one molecule. If we divided that, it would no longer be water!
n) A chapter in a book. Discrete. Surely, half a chapter is not also a chapter.
If you think that half an event is also an event, then you will say that an event -- such as a birthday party -- is continuous.
(We are not speaking of the time in which the event occurs. We are speaking of the event itself.) Otherwise, you will say that events are discrete.
p) The changing shape of a balloon as it's being inflated. Continuous. The shape is changing continuously.
q) The evolution of biological forms; that is, from fish to man (according to the theory). What do you think? Was it like a balloon being inflated? Or was each new form discrete?
r) Words. Discrete. If you think that the hundredth part of an idea is also an idea (really?), then you will say that ideas are continuous. Discrete. Half a meaning?
u) The proof of a theorem. Discrete. Half a proof?
v) The names of numbers. Surely, the names of anything are discrete.
w) The universe. Discrete. Is half a universe also a universe?
Apart from our conceptions of time, space and motion, we see that virtually everything we encounter is discrete. Even a motion picture -- where the figures on the screen appear to be in continuous motion -- is made up of individual frames, which are discrete. Calculus, however, is the study of magnitudes; of things that are continuous.
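As a closing note that is not part of the original lesson, the "shared boundary" property in Definition 1 can also be written symbolically. This is only one possible way to make it precise, and it uses interval and set notation that the lesson itself deliberately avoids.

```latex
% Dividing an interval [a, c] at any interior point b:
% the two parts share the boundary point b, and each part
% is again an interval, i.e. a quantity of the same kind (a length).
\[
  a < b < c
  \;\Longrightarrow\;
  [a, c] = [a, b] \cup [b, c]
  \quad\text{and}\quad
  [a, b] \cap [b, c] = \{\, b \,\}.
\]
```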
WHAT IS THROMBOSIS When a clot (thrombus) forms in a blood vessel (artery or vein), it is called thrombosis. Thrombosis or clotting occurs due to an interplay and cascade of activation of coagulation factors (clotting proteins) in the blood. Among these, factor X and factor II (thrombin) form the final common and significant pathways leading to the formation of factor I (fibrin) which cross-links to form the stable clot. Vitamin K plays an important role in the activation of some key coagulation factors, and clot formation. The body also has anticoagulation factors and plasmin (breaks down fibrin – fibrinolytic) to regulate the clotting process, however, when aggravating risk factors and stimulating mechanisms are dominant, clot formation occurs. WHAT CAUSES THROMBOSIS The triad of mechanisms leading to thrombosis, often referred to as Virchow’s triad includes the following – Blood vessels wall injury – Injury to the vessel wall lining (endothelium) of the blood vessels can be caused by direct trauma but more commonly it occurs over a period of time due to smoking, diabetes, high blood pressure, bad cholesterol (LDL) build-up, and formation of plaques (atherosclerosis) causing cardiovascular disease (CVD). This is seen in arteries where blood flows under high pressure making the endothelium prone to injury. When endothelial injury occurs, there is activation and aggregation of platelets that plug the injury site, and release of a substance called ’tissue factor’, with both of these then activating coagulation factors in the blood to form the clot. Stasis (stagnation) of blood flow – This is seen to occur more in leg veins due to immobility and impaired muscle action, along with the weakening of the vein valves and wall leading to backflow and pooling of blood (chronic venous insufficiency). Stasis can cause more interaction between clotting factors and also damage the vessel wall over time, leading to more clot formation. Stasis can also occur when the heart is not pumping out blood adequately causing congestion and back-pressure in the veins. This is seen in heart-beat abnormalities like atrial fibrillation and in congestive heart failure. Hypercoagulability – This refers to conditions in the body that increase the tendency and predisposition of clotting due to high amounts of tissue factor and inflammatory mediators (released during infections, cancers, etc.), or abnormalities in the coagulation or anticoagulation factors. 
Stagnation of blood flow (stasis) caused due to prolonged immobilization and inactivity causing decreased muscle action - Hospitalization (especially if >72 hours) - Post-surgery (especially hip/knee replacement and major abdominal surgery) - Post-fractures (especially lower limb) - Post-trauma (especially with backbone injury) - Illness requiring bed-rest - Long flights or car road trips - Sedentary desk jobs or lifestyle - Atrial fibrillation - Heart failure - Chronic Venous Insufficiency – varicose veins Damage to the wall of the blood vessel (endothelial injury) - High blood pressure (hypertension) - Lipid abnormalities – High LDL or triglycerides - Plaque formation (atherosclerosis) and cardiovascular disease (CVD) - Infections (like COVID) - Insertion of an intravenous line (especially central line) Conditions that increase the tendency to clotting (hypercoagulable state) - Cancer and chemotherapy - Major surgeries - Inflammatory bowel disease (IBD) - Hormone replacement therapy (HRT) and oral contraceptives - Clotting disorders (thrombophilia) - Infections like COVID, and others like measles, chickenpox, flu, dengue, hepatitis, and HIV. - Cardiovascular disease (CVD) - Kidney failure and dialysis - Chronic Obstructive Pulmonary Disease (COPD) - Family history of thrombosis - Age>60 years When a clot forms in an artery it can obstruct blood flow to the organ being supplied. This becomes dangerous when an artery supplying a vital organ like the heart or brain gets blocked. Thrombosis in arteries is usually due to pre-existing atherosclerosis and cardiovascular disease (CVD), and risk factors like high BP, diabetes, lipid abnormalities, and smoking. These factors cause endothelial injury that leads to the activation of platelets and the clotting proteins in the blood. When such comorbidities are present, factors that increase hypercoagulability like infections (COVID), cancer, surgeries, and other diseases, etc. further add to the clotting risk. Blockage of the coronary artery and its branches can lead to angina and heart attack (myocardial infarction-MI), while blockage of an artery supplying part of the brain can lead to stroke. In patients with a high risk of thrombosis or previous cardiovascular events like MI/stroke or those who have undergone angioplasty/bypass surgery, medicines to block platelet action (aspirin, clopidogrel, ticlopidine) are given. In very high-risk cases, an anticoagulant (drug inhibiting coagulation factors) is also added. In case a clot needs to be dissolved urgently as in MI or stroke, agents called thrombolytics are used. Procedures and surgeries like angioplasty and bypass grafting are done in CVD to open the blocked artery or bypass it, respectively. Read: Cardiovascular Disease See below – Medicines for prevention and treatment of thrombosis. VENOUS THROMBOSIS AND THROMBOPHLEBITIS The formation of a clot in any vein is called venous thrombosis. Conditions causing venous stasis and hypercoagulability are the main cause of venous thrombosis. Inflammation of the vein is called phlebitis which can occur due to direct trauma to the vein during surgery or placing a catheter, or more commonly as a result of a blood clot when it is called thrombophlebitis. The most common sites of thrombosis and thrombophlebitis are the leg veins followed by the arm. Thrombosis can occur in both superficial and deep veins, but it is deep vein thrombosis (DVT) that is more dangerous due to the risk of ‘embolism’. 
A clot formed in a vein is attached to the inner wall of the vein, but sometimes a part of it can break loose (embolus), reaching the heart and then the lung (pulmonary embolism –PE) which can even be fatal. DVT and PE are together called venous thromboembolism (VTE). SIGNS AND SYMPTOMS Thrombosis especially DVT is itself often quiet and diagnosed when tests are performed due to the presence of associated risk factors, or unfortunately when symptoms of pulmonary embolism occur. The presenting signs and symptoms are usually due to thrombophlebitis. In the case of superficial veins, there may be redness, itchiness, burning pain, and tenderness along the affected vein with leg/arm swelling. Deep vein thrombophlebitis may also present with redness and pain, but often there are no symptoms, or sometimes only swelling in the affected limb is present. A low-grade fever may be present, which can become high grade if an infection sets in. The overlying skin may sometimes break down forming a venous ulcer. Pulmonary embolism can present quite suddenly with shortness of breath, cough (sometimes with blood – hemoptysis), chest pain, wheezing, sweating, palpitation, and loss of consciousness. Around 5% of patients with DVT can develop pulmonary embolism which is one of the greatest emergencies in medicine. The fatality rate of acute PE when untreated is about 30% out of which 10% die suddenly. However, with high suspicion, timely diagnosis, and intervention, the death rate can be reduced by three-fourths. Prevention is the cornerstone of managing DVT. This is done by evaluating risk factors and assessing the actual risk of developing DVT. The risk factors are accorded points, and based on the number of points, the person may be assessed as having a mild, moderate, or high risk of DVT. D-dimer (marker for thrombosis) levels in the blood can also help assess risk. It is important that while interpreting D-dimer levels in patients >50 years, the age-corrected cut-off (patient’s age in years × 10 ug/l) should be used. In case of strong suspicion or presence of symptoms of thrombophlebitis like swelling, pain, and skin changes, imaging by a vascular duplex ultrasound is performed. High-risk cases like those immobilized or hospitalized after major surgeries, fractures, trauma, or medical illnesses, are usually started on preventive doses of medicines called anticoagulants (blood thinners). Sometimes compression stockings also may be given to prevent stasis of blood flow in the veins. The most important thing is to prevent the occurrence of pulmonary embolism and keep an astute alert for the same. MEDICINES FOR THROMBOSIS Anticoagulants (blood thinners or clot preventers) These include injectable medicines used in the hospital setting like heparin and low molecular weight heparin (LMWH – enoxaparin and dalteparin) and fondaparinux. Oral anticoagulants which can be used in the home setting or after discharge from the hospital include the conventional drug warfarin and the newer agents called directly-acting oral anticoagulants (DOACs – apixaban, dabigatran, rivaroxaban) also called novel oral anticoagulants (NOACs). The most important side effect of anticoagulants is the risk of bleeding manifesting as blood in stools, urine, and small hemorrhages under the skin, or sometimes serious intracranial bleed. DOACs do not require constant monitoring of prothrombin time (PT-INR) like warfarin does, as their risk of causing bleeding is low. 
In addition, warfarin can interact with many food items and other drugs, which has to be discussed with the treating doctor. Anticoagulants are used for the prevention and treatment of DVT for up to 3 months or sometimes longer depending on thrombosis versus clotting risk and history of DVT or PE. While the LMWH and DOACs act by inhibiting the action of factor X/II, warfarin acts by blocking vitamin K required for clotting. Anticoagulants prevent the formation of new clots and also prevent existing clots from getting bigger so that the body can more easily break them down. However, anticoagulants cannot break down a clot. These include drugs like aspirin, clopidogrel, and ticlopidine. These drugs are useful in cardiovascular disease (CVD) to prevent events like angina, heart attack, and stroke, as clots forming in arteries due to atherosclerosis are due to endothelial injury and are high in platelets. However, these drugs have limited value in venous thrombosis. They may be used in low-risk cases and in patients with co-existing CVD. Thrombolytics (clot busters or breakers) These drugs are required in some cases of DVT with large clots and also for treating PE. These are given intravenously, and sometimes directly into the clot itself through a catheter inserted under imaging (duplex ultrasound) guidance. Such drugs act by breaking down fibrin and so are also called fibrinolytics. These include tissue plasminogen activators (t-PA – alteplase, reteplase, tenecteplase), anistreplase, streptokinase, and urokinase. In addition to VTE, these drugs are also used to treat heart attack (myocardial infarction) or stroke caused by a clot blocking a coronary artery or cerebral artery respectively. PROCEDURES AND SURGERY By using the thin tube called the catheter inserted into the vein through a small incision, large clots can be removed (thrombectomy) or dissolved (thrombolysis). This is usually performed by a vascular surgeon in the operation theater under local anesthesia. In some cases, vena cava filters are inserted into the abdomen to prevent pulmonary embolism. For any query, additional information or to discuss any case, leave a comment or write to firstname.lastname@example.org and be assured of a response soon.
The moon has a radius of 1,080 miles (1,737 kilometers), which gives it a diameter of 2,159 miles (3,475 kilometers). For comparison, the diameter of Earth is 7,918 miles (12,742 kilometers), so the moon is roughly a quarter the width of Earth. The moon's size influences specific processes on Earth, the most notable of which is the tides. The moon's gravitational pull causes the oceans to bulge on either side of the planet. As the planet rotates, these bulges rotate with it, producing the rise and fall of seawater. Gravitational force depends on the mass and distance of the objects involved, so if the moon were the size of Mars's moons, Phobos and Deimos, its pull would be far weaker and it would not raise tides the way it does.
Compared To Other Objects in the Solar System
It can be hard to visualize just how big celestial bodies are, and the moon is no exception. Earth's moon is fairly big in comparison to other moons in the solar system, but what exactly does that mean? The moon orbiting Earth is the fifth largest moon in the solar system; the moons larger than ours are, in decreasing order, Ganymede, Titan, Callisto, and Io. Even though Earth's moon is not the biggest in the solar system, it is the largest relative to the planet it orbits. The moon is also quite a bit bigger than the dwarf planet Pluto, which has a diameter of 1,477 miles (2,377 kilometers). This makes Pluto around 70% the width of Earth's moon.
Compared to countries on Earth
It still might be hard to visualize just how large the moon is when comparing it only to other celestial objects, so comparing it to countries and continents on Earth gives more context. The United States of America is roughly 2,897 miles (4,662 kilometers) across, making it almost the same width as the moon. Width alone, however, only paints part of the picture; comparing the moon to areas on Earth requires its surface area. The moon's surface area is 14.6 million square miles (38 million square kilometers). Compared again to the United States of America, the moon has approximately four times as much surface area. This is still smaller than the continent of Asia (including Russia), which has a surface area of 17.2 million square miles (44.6 million square kilometers).
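The comparisons above are straightforward to reproduce. Here is a short Python sketch using the figures quoted in the article; the U.S. land-area value (about 3.8 million square miles) is an added assumption needed for the surface-area comparison and is not taken from the article.

import math

# Figures quoted above (miles); US_AREA_SQ_MI is an assumed value.
MOON_RADIUS_MI = 1_080
EARTH_DIAMETER_MI = 7_918
US_AREA_SQ_MI = 3.8e6

moon_diameter = 2 * MOON_RADIUS_MI                    # 2,160 mi (the article rounds to 2,159)
width_ratio = moon_diameter / EARTH_DIAMETER_MI       # ~0.27, about a quarter of Earth's width

moon_surface_area = 4 * math.pi * MOON_RADIUS_MI**2   # ~14.7 million sq mi, close to the quoted 14.6 million
area_vs_us = moon_surface_area / US_AREA_SQ_MI        # ~3.9, roughly four times the U.S.

print(f"Moon/Earth width ratio:  {width_ratio:.2f}")
print(f"Moon surface area:       {moon_surface_area / 1e6:.1f} million sq mi")
print(f"Moon area vs. U.S. area: {area_vs_us:.1f}x")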
When your students approach you asking, “When will we need this stuff anyway?” you need a good answer. Better yet, if you can provide a lesson plan that ties in what real-world activities you want your students to understand, you will be able to actually show them how they will use the information you are teaching. Shopping is one area in which skills learned in school are used on a regular basis. From math to English, these subjects speak to a shopping setting. Whether you are setting up a classroom store in your elementary school classroom, analyzing shopping trends in a high school sociology class or creating advertising in a language arts class in middle school, you will find many applications for your teaching efforts in the category of shopping. Here are some resources you can choose from as you look for lesson plans for a shopping unit. Elementary School (K-5) - Math at the Grocery Store (grades K-2) – Show your students that math is real by taking them on a virtual grocery store trip where they can weigh, count and price items. Don’t forget the coupons! - What’s on Sale? (grades 3-5) – This lesson teaches students to calculate prices with various price discounts, using calculators and changing decimals into fractions. - Let’s Play Grocery Store (ESL Lesson Plan) (grades K-2) – Designed for young elementary or new English learners, this grocery store lesson works on grocery shopping vocabulary. - Student Economist (grades 3-5) – Students create a classroom store and analyze consumer behavior and economic trends while watching classmates shop. - Shopping the Sunday Circular (grades 3-6) – Teach students to calculate price-per-unit to find the best value by using grocery store circulars that come out each week. - Opening Your Class Store and Bank (grades 3-5) – Students learn how a cashier and a banker work as they open a class bank and store. - Discount and Sales Tax – How Much Does This Shirt Cost? (grades 4-6) – The price tag on a shirt is rarely the price you pay. Teach students how to calculate sales tax and discounts to find the out-of-pocket costs for an item. - Shopping with Money (grades 2-3) – Children will enjoy shopping as they apply their knowledge of adding and subtracting money. They will also create and solve shopping-related word problems. - Going Shopping (Holiday Themed Lesson) (grades 3-5) – Students learn how to shop and make healthy eating choices while preparing a Thanksgiving meal that fits the MyPlate food group guidelines. - 30 Math Activities with Shopping Catalogues (grades 1-6) – Grab some shopping catalogs and get ready to learn with these excellent real-world math activities. - Be an Ad Detective (grades 3-6) – Teach children to be savvy when it comes to advertisements as they assume the role of detective and find surprising places where advertisers are doing their work. - Grocery Store Skills: Sorting Food Into Bags (Worksheet) (grades 1-2) – Sorting is an important concept in early grades, and this fun worksheet-based lesson lets them get more practice with this important skill. - Shop Smart – A Mock Grocery Store (grades 1-6) – Set up a mock grocery store that students can use to gain real-world shopping experiences. - Shop Till You Drop (grades 6-8) – This award-winning lesson plan from TeachersFirst teaches students how to use consumer math skills to plan meals, shop within a budget and locate items in a virtual grocery store. 
- Shop Around (grades 6-8) – Grab the Sunday paper’s ads, and let the kids choose items they wish to purchase, then shop for the best price. This teaches the value of shopping around when shopping on a budget. - Money Math (grades 7-9) – Get four lessons on money that teach everything from saving to shopping for home improvement project needs. - Shopping – Unit Pricing (grades 5-8) – Uses graphic organizers, which you can download, to help students choose the most cost effective items based on price per unit. - Savvy Shopping (grades 6-8) – This lesson shows students how to comparison shop to find the best deals on consumer goods and services. - Money Management: Grocery Shopping for a Family Profile (grades 6-8) – Not all families have the same buying needs. Use this lesson to give your students a family profile that they have to use, and then instruct them to meal plan, budget and shop for their family. - The Shopping Challenge Lesson (grades 6-9) – Students create a two-week menu and shopping list that has all of the food and beverages they would need to survive for a two-week period. The lesson uses Peapod.com as a resource. - Teach Your Students the Skill of Comparison Shopping (grades 6-8) – This adaptable lesson uses advertising materials to help students learn the importance and value of comparison shopping. - Grocery Shopping and Budgeting (grade 9) – This lesson plan helps students establish a budget based on the outlined needs and resources. - How Much Is It? A Shopping Lesson Plan (grades 7-9) – Introduce vocabulary and utilize information gaps, conversations, class surveys and role-playing to budget and shop among their classmates. - Online Shopping and Budgeting (grades 5-8) – Online shopping adds new factors into the shopping experience, including shipping costs, more discounts and more comparison options. This lesson will look at shopping on a budget with an online shopping twist. - Grocery Shopping with Coupons (grades 4-8) – Does clipping coupons really help your shopping budget? Use this interactive lesson to explore this with your students. - Back to School Shopping Trip: Excel Exercise (grades 4-8) – This lesson for the computer class uses Microsoft Excel to help students learn to compute tax and total cost on a shopping trip. - A Lesson Plan for Fantasy Christmas Shopping (grades 4-12) – Students will be given a budget, then go to work assembling Christmas gift choices for those family members as they go on a shopping fantasy trip. - Marketplace: The Argentina Barter Fair (grades 6-9) – This gives a look at bartering, currency and inflation by analyzing the Argentinean economic shutdown of April 2002. - Why Does Money Have Value? (grades 6-9) – You couldn’t go shopping without money! Yet why does a worthless piece of paper suddenly become valuable because it has $100 imprinted on it? This lesson looks at the reason currency works as a way to pay for goods. - What Can I Afford? (grades 6-8) – Using cell phone plan shopping as a guideline, this lesson will discuss pricing structures, marketing, budgeting and shopping skills. It also looks at checking accounts and how check books work. - Choosing a Target Market & Market Segmentation (grades 11-12) – Marketing is a vital component of running a retail store. This lesson will look at choosing a target market and market segmentation as it applies to retail stores. 
- Retail Store Project (JCPenney Department Store) (grades 10-11) – Students will use language arts skills to discuss and examine the challenges a large department store’s Security Department faces, then present strategies for solving those problems. - Consumer Awareness – Making Smart Buying Decisions (grades 9-12) – Smart buying decisions involve much more than just going to the store and picking up the first item you see. This in-depth unit looks at how to avoid scams, find the best bargain and more while shopping smart. - Consumed – A Black Friday Lesson Plan (grades 9-12) – This lesson plan from the New York Times looks at how Black Friday feeds the shopping mindset programmed into so many Americans. - Consumerism – Grocery Shopping (grades 9-10) – This lesson, which can work in groups or individuals, looks at the psychology of consumerism and marketing as it relates to grocery store shopping. - Consumer Education: Financial Literacy – Advertising and Shopping (grades 9-12) – With group and individual activities, this lesson plan set will help students become smarter about their responses to advertising when they are shopping. - Customer Service Lessons and Worksheets (grades 9-12) – Many students will get their first jobs in retail, and they need to understand how to deliver good customer service. This lesson will help. - Comparison Shopping (grades 9-12) – Here’s another practical lesson teaching the invaluable skill of comparison shopping. - Store Brand vs. Name Brand (grades 10-12) – This lesson focuses on comparing quality and prices between name and store brand items to determine those times when name brand makes sense, and when store brand is just as good. - Becoming a Wise Consumer – Comparison Shopping (grades 9-12) – Teaches students how to make wise buying decisions by learning to comparison shop when solving multi-step and real-world math problems.
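Many of the lessons above come down to the same few calculations: discounts, sales tax, and price per unit. The sketch below is a hypothetical illustration of that arithmetic (the prices, discount, and tax rate are invented), not material from any of the linked lesson plans.

def out_of_pocket(price, discount_pct, tax_pct):
    """Sale price after a percentage discount, then sales tax applied to the sale price."""
    sale_price = price * (1 - discount_pct / 100)
    return round(sale_price * (1 + tax_pct / 100), 2)

def price_per_unit(price, units):
    """Unit price (e.g. dollars per ounce) for comparison shopping."""
    return price / units

# A $25.00 shirt at 20% off with 6% sales tax -- hypothetical numbers.
print(out_of_pocket(25.00, 20, 6))   # 21.2

# Which cereal box is the better value? (hypothetical sizes and prices)
print(price_per_unit(3.49, 12))      # ~0.29 per ounce
print(price_per_unit(4.99, 20))      # ~0.25 per ounce -- the larger box wins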
Here is a set of resources from Biology corner Analyzing and Graphing Data Analyzing Data – make and interpret graphs, summarize data trends Graphing Data – Flow Rates – graph the flow rate of liquids in a pipe, simple plot and draw two lines Graphing Practice – given data sets, such as video games scores and shirt colors, students create line and bar graphs, activity paired with growing sponge animals while students wait on results Interpreting Graphs and English Usage – simple graph showing tadpoles, this is more of a vocabulary lesson on words used to interpret graphs, such as fluctuate, decline, stabilize… Interpreting Graphs – shows a pie chart with grades, a scatter plot, and a few line graphs with questions to answer about each. Data Collection is Fun(gi) – use notes gathered in a field journal to create a data table to organize information about fungi and graph the relationship between fruiting body size and number.
Computer number format A computer number format is the internal representation of numeric values in digital computer and calculator hardware and software. Normally, numeric values are stored as groupings of bits, named for the number of bits that compose them. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer; the bit format used by the computer's instruction set generally requires conversion for external use such as printing and display. Different types of processors may have different internal representations of numerical values. Different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory. Binary number representation Computers represent data in sets of binary digits. The representation is composed of bits, which in turn are grouped into larger sets such as bytes. |Binary String||Octal value| |Length of Bit String (b)||Number of Possible Values (N)| A bit is a binary digit that represents one of two states. The concept of a bit can be understood as a value of either 1 or 0, on or off, yes or no, true or false, or encoded by a switch or toggle of some kind. While a single bit, on its own, is able to represent only two values, a string of bits may be used to represent larger values. For example, a string of three bits can represent up to eight distinct values as illustrated in Table 1. As the number of bits composing a string increases, the number of possible 0 and 1 combinations increases exponentially. While a single bit allows only two value-combinations and two bits combined can make four separate values and so on. The amount of possible combinations doubles with each binary digit added as illustrated in Table 2. Groupings with a specific number of bits are used to represent varying things and have specific names. A byte is a bit string containing the number of bits needed to represent a character. On most modern computers, this is an eight bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte. In many computer architectures, the byte is used to address specific areas of memory. For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, many CPUs read data in some multiple of eight bits. Because the byte size of eight bits is so common, but the definition is not standardized, the term octet is sometimes used to explicitly describe an eight bit sequence. A nibble (sometimes nybble), is a number composed of four bits. Being a half-byte, the nibble was named as a play on words. A person may need several nibbles for one bite from something; similarly, a nybble is a part of a byte. Because four bits allow for sixteen values, a nibble is sometimes known as a hexadecimal digit. Octal and hex number display Octal and hex are convenient ways to represent binary numbers, as used by computers. Computer engineers often need to write out binary quantities, but in practice writing out a binary number such as 1001001101010001 is tedious and prone to errors. 
Therefore, binary quantities are written in a base-8, or "octal", or, much more commonly, a base-16, "hexadecimal" or "hex", number format. In the decimal system, there are 10 digits, 0 through 9, which combine to form numbers. In an octal system, there are only 8 digits, 0 through 7. That is, the value of an octal "10" is the same as a decimal "8", an octal "20" is a decimal "16", and so on. In a hexadecimal system, there are 16 digits, 0 through 9 followed, by convention, with A through F. That is, a hex "10" is the same as a decimal "16" and a hex "20" is the same as a decimal "32". An example and comparison of numbers in different bases is described in the chart below. When typing numbers, formatting characters are used to describe the number system, for example 000_0000B or 0b000_00000 for binary and 0F8H or 0xf8 for hexadecimal numbers. Converting between bases |Decimal Value||Binary Value||Octal Value||Hexadecimal Value| Each of these number systems are positional systems, but while decimal weights are powers of 10, the octal weights are powers of 8 and the hex weights are powers of 16. To convert from hex or octal to decimal, for each digit one multiplies the value of the digit by the value of its position and then adds the results. For example: Representing fractions in binary Fixed-point formatting can be useful to represent fractions in binary. The number of bits needed for the precision and range desired must be chosen to store the fractional and integer parts of a number. For instance, using a 32-bit format, 16 bits may be used for the integer and 16 for the fraction. The eight's bit is followed by the four's bit, then the two's bit, then the one's bit. The fractional bits continue the pattern set by the integer bits. The next bit is the half's bit, then the quarter's bit, then the ⅛'s bit, and so on. For example: |integer bits||fractional bits| |0.500||=||1⁄2||=||00000000 00000000.10000000 00000000| |1.250||=||1 1⁄4||=||00000000 00000001.01000000 00000000| |7.375||=||7 3⁄8||=||00000000 00000111.01100000 00000000| This form of encoding cannot represent some values in binary. For example, the fraction , 0.2 in decimal, the closest approximations would be as follows: |13107 / 65536||=||00000000 00000000.00110011 00110011||=||0.1999969... in decimal| |13108 / 65536||=||00000000 00000000.00110011 00110100||=||0.2000122... in decimal| Even if more digits are used, an exact representation is impossible. The number , written in decimal as 0.333333333..., continues indefinitely. If prematurely terminated, the value would not represent precisely. While both unsigned and signed integers are used in digital systems, even a 32-bit integer is not enough to handle all the range of numbers a calculator can handle, and that's not even including fractions. To approximate the greater range and precision of real numbers, we have to abandon signed integers and fixed-point numbers and go to a "floating-point" format. In the decimal system, we are familiar with floating-point numbers of the form (scientific notation): - 1.1030402 × 105 = 1.1030402 × 100000 = 110304.02 or, more compactly: which means "1.1030402 times 1 followed by 5 zeroes". We have a certain numeric value (1.1030402) known as a "significand", multiplied by a power of 10 (E5, meaning 105 or 100,000), known as an "exponent". If we have a negative exponent, that means the number is multiplied by a 1 that many places to the right of the decimal point. 
For example: - 2.3434E-6 = 2.3434 × 10−6 = 2.3434 × 0.000001 = 0.0000023434 The advantage of this scheme is that by using the exponent we can get a much wider range of numbers, even if the number of digits in the significand, or the "numeric precision", is much smaller than the range. Similar binary floating-point formats can be defined for computers. There is a number of such schemes, the most popular has been defined by Institute of Electrical and Electronics Engineers (IEEE). The IEEE 754-2008 standard specification defines a 64 bit floating-point format with: - an 11-bit binary exponent, using "excess-1023" format. Excess-1023 means the exponent appears as an unsigned binary integer from 0 to 2047; subtracting 1023 gives the actual signed value - a 52-bit significand, also an unsigned binary number, defining a fractional value with a leading implied "1" - a sign bit, giving the sign of the number. Let's see what this format looks like by showing how such a number would be stored in 8 bytes of memory: where "S" denotes the sign bit, "x" denotes an exponent bit, and "m" denotes a significand bit. Once the bits here have been extracted, they are converted with the computation: - <sign> × (1 + <fractional significand>) × 2<exponent> - 1023 This scheme provides numbers valid out to about 15 decimal digits, with the following range of numbers: The specification also defines several special values that are not defined numbers, and are known as NaNs, for "Not A Number". These are used by programs to designate invalid operations and the like. Some programs also use 32-bit floating-point numbers. The most common scheme uses a 23-bit significand with a sign bit, plus an 8-bit exponent in "excess-127" format, giving seven valid decimal digits. The bits are converted to a numeric value with the computation: - <sign> × (1 + <fractional significand>) × 2<exponent> - 127 leading to the following range of numbers: Such floating-point numbers are known as "reals" or "floats" in general, but with a number of variations: A 32-bit float value is sometimes called a "real32" or a "single", meaning "single-precision floating-point value". A 64-bit float is sometimes called a "real64" or a "double", meaning "double-precision floating-point value". The relation between numbers and bit patterns is chosen for convenience in computer manipulation; eight bytes stored in computer memory may represent a 64-bit real, two 32-bit reals, or four signed or unsigned integers, or some other kind of data that fits into eight bytes. The only difference is how the computer interprets them. If the computer stored four unsigned integers and then read them back from memory as a 64-bit real, it almost always would be a perfectly valid real number, though it would be junk data. Only a finite range of real numbers can be represented with a given number of bits. Arithmetic operations can overflow or underflow, producing a value too large or too small to be represented. The representation has a limited precision. For example, only 15 decimal digits can be represented with a 64-bit real. If a very small floating-point number is added to a large one, the result is just the large one. The small number was too small to even show up in 15 or 16 digits of resolution, and the computer effectively discards it. Analyzing the effect of limited precision is a well-studied problem. Estimates of the magnitude of round-off errors and methods to limit their effect on large calculations are part of any large computation project. 
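A minimal sketch of the 64-bit decode described above, using Python's struct module to expose the raw bit pattern. It applies the (1 + fractional significand) × 2^(exponent − 1023) formula and deliberately ignores the special cases (zero, subnormals, infinities, NaNs). The function name and test values are illustrative, not part of the original article.

import struct

def decode_double(x: float):
    """Split a Python float (IEEE 754 binary64) into its sign, unbiased exponent, and value."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))   # raw 64-bit pattern, big-endian
    sign     = bits >> 63                                  # 1 sign bit
    exponent = (bits >> 52) & 0x7FF                        # 11 exponent bits, excess-1023
    fraction = bits & ((1 << 52) - 1)                      # 52 significand bits, leading 1 implied

    # Normal numbers only; exponent fields of 0 and 2047 encode special values.
    value = (-1) ** sign * (1 + fraction / 2**52) * 2 ** (exponent - 1023)
    return sign, exponent - 1023, value

print(decode_double(6.25))    # (0, 2, 6.25)  ->  +1.5625 * 2**2
print(decode_double(-0.1))    # (1, -4, -0.1) ->  the nearest representable value to -0.1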
The precision limit is different from the range limit, as it affects the significand, not the exponent. The significand is a binary fraction that doesn't necessarily perfectly match a decimal fraction. In many cases a sum of reciprocal powers of 2 does not matches a specific decimal fraction, and the results of computations will be slightly off. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating binary fraction: 0.000110011 ... Numbers in programming languages This section needs additional citations for verification. (December 2018) (Learn how and when to remove this template message) Programming in assembly language requires the programmer to keep track of the representation of numbers. Where the processor does not support a required mathematical operation, the programmer must work out a suitable algorithm and instruction sequence to carry out the operation; on some microprocessors, even integer multiplication must be done in software. High-level programming languages such as LISP and Python offer an abstract number that may be an expanded type such as rational, bignum, or complex. Mathematical operations are carried out by library routines provided by the implementation of the language. A given mathematical symbol in the source code, by operator overloading, will invoke different object code appropriate to the representation of the numerical type; mathematical operations on any number—whether signed, unsigned, rational, floating-point, fixed-point, integral, or complex—are written exactly the same way. Notes and references - Jon Stokes (2007). Inside the machine: an illustrated introduction to microprocessors and computer architecture. No Starch Press. p. 66. ISBN 978-1-59327-104-6. - "byte definition". Retrieved 24 April 2012. - "Microprocessor and CPU (Central Processing Unit)". Network Dictionary. Retrieved 1 May 2012. - "nybble definition". Retrieved 3 May 2012. - "Nybble". TechTerms.com. Retrieved 3 May 2012. - Goebel, Greg. "Computer Numbering Format". Retrieved 10 September 2012.
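To make the precision limits discussed above concrete, here is a small demonstration in Python, whose float type is a 64-bit IEEE 754 real; the specific test values are chosen for illustration only.

# 0.1 has no exact binary representation, so repeated addition drifts:
total = sum(0.1 for _ in range(10))
print(total)               # 0.9999999999999999, not 1.0
print(total == 1.0)        # False

# Only ~15-16 significant decimal digits fit in a 64-bit real, so a very
# small addend can disappear entirely:
big, small = 1.0e16, 1.0
print(big + small == big)  # True -- the 1.0 falls below the precision limit

# Printing 0.1 to more digits makes the stored approximation visible:
print(f"{0.1:.20f}")       # 0.10000000000000000555...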
Geometric explanation of differentiation Geometric explanation of differentiation- If we find the derivative of function f(x) at x = x0 then it is equal to the slope of the tangent to the graph of given function f(x) at the given point [(x0, f(x0))]. But what is a tangent line? ►It is not merely a simple line that joins the graph of the given function at one point. ►It is actually the limit of the secant lines joining points P = [(x0, f(x0)] and Q on the graph of f(x) as Q moves very much close to P. The tangent line contacts the graph of the given function at the given point [(x0, f(x0)] the slope of the tangent line matches the direction of the graph at that point. The tangent line is the straight line that best approximates the graph at that point. As we are given the graph of the given function, we can draw the tangent to this graph easily. Still, we’ll like to make calculations involving the tangent line and so will require a calculative method to explore the tangent line. We can easily calculate the equation of the tangent line by using the slope-point form of the line. We slope of a line is m and it’s passing through a point (x0,y0) then its equation will be y − y0 = m(x − x0) So now we have the formula for the equation of the tangent line. It’s clear that to get an actual equation for the tangent line, we should know the exact coordinates of point P. If we have the value of x0 with us we calculate y0 as y = f(x0) The second thing we must know is the slope of the line m = f’(x0) Which we call the derivative of given function f(x). The derivative f’(x0) of given function f at x0 is equal to the slope of the tangent line to y = f(x) at the point P = (x0, f(x0). Differentiation Using Formulas- We can use derivatives of different types of functions to solve our problems : (ix) D (secx) = secx . tanx (x) D (cosecx) = – cosecx . cotx (xii) D (constant) = 0 where D = These formulas are the result of differetiation by the first principle Inverse Functions And Their Derivatives : Theorems On Derivatives: If u and v are a derivable function of x, then, (i) (ii) where K is any constant (iii) known as “ Product Rule ” (iv) known as “Quotient Rule ” (v) If y = f(u) & u = g(x) then “Chain Rule ” Logarithmic Differentiation: To find the derivative of : (i) a function which is the product or quotient of a number of functions OR (ii) a function of the form where f & g are both differentiable, it will be found convenient to take the logarithm of the function first & then differentiate. This is called Logarithmic Differentiation. Implicit Differentiation: (i) In order to find dy/dx, in the case of implicit functions, we differentiate each term w.r.t. x regarding y as a function of x & then collect terms in dy/dx together on one side to finally find dy/dx. (ii) In answers of dy/dx in the case of implicit functions, both x & y are present. Parametric Differentiation: If y = f(q) & x = g(q) where q is a parameter, then Derivative Of A Function w.r.t. Another Function-: Let y = f(x) & z = g(x) Derivatives Of Order Two & Three: Let a function y = f(x) be defined on an open interval (a, b). It’s derivative, if it exists on(a, b) is a certain function f'(x) [or (dy/dx) or y’ ] & is called the first derivative of y w.r.t. x. If it happens that the first derivative has a derivative on (a, b) then this derivative is called the second derivative of y w. r. t. x & is denoted by f”(x) or (d2y/dx2) or y”. Similarly, the 3rd order derivative of y w. r. t. 
x, if it exists, is defined as the derivative of the second derivative and is denoted by f'''(x) or y'''. All maths tutors suggest solving a fair number of questions based on the geometric explanation of differentiation.

Example: Find the tangent line to the given function R(z) at z = 3. We can find the derivative of the function using the basic differentiation rules discussed in the previous post. We are given that z = 3, and the equation of the tangent line is y − y0 = m(x − x0), where y0 = R(3) = √7. Putting these values in, we get the equation of the tangent line.

So that's all for the geometric explanation of differentiation; I will discuss applications of derivatives in my next post.

Example: Differentiate the given function. Ans: We can apply the quotient rule in such questions.

You can try these questions in the meantime: 28 – Differentiation (09-10-15) with answers.pdf
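Since the function used in the worked example was not preserved in this post, here is a hedged numerical sketch in Python of the same idea: estimate the slope with a difference quotient, then build the tangent line from y − y0 = m(x − x0). The example function f(x) = x² + 1 is a stand-in, not the function from the original example.

def derivative(f, x0, h=1e-6):
    """Central-difference estimate of f'(x0); a numerical stand-in for the symbolic rules above."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

def tangent_line(f, x0):
    """Return m and b so that y = m*x + b is the tangent to f at x0."""
    m = derivative(f, x0)
    y0 = f(x0)
    b = y0 - m * x0          # rearranged from y - y0 = m(x - x0)
    return m, b

# Hypothetical example function:
f = lambda x: x**2 + 1
m, b = tangent_line(f, 3)
print(m, b)                  # ~6.0, ~-8.0  ->  the tangent at x = 3 is y = 6x - 8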
Number Jumble Math Game Lesson Plan: Multiples In this lesson plan, which is adaptable for grades 2-5, students will use BrainPOP Jr. and BrainPOP resources (including an online math game) to practice multiplying whole numbers and/or decimals. Students will identify patterns within a multiplication table and create their own multiplication tables with unique patterns. - Accurately multiply whole numbers and/or decimals. - Identify patterns within a multiplication table. - Make sense of problems and persevere in solving them. - Look for and make use of structure. - Look for and express regularity in repeated reasoning. - Internet access for BrainPOP - Interactive whiteboard (or just an LCD projector) - Computers for students to use - Large sheet of paper or poster board for each pair or small group of students - Crayons or highlights - Envelopes or plastic baggies to hold game pieces - Photocopies of the Graphic Organizer (optional) Preparation:This lesson plan uses a free online math game developed by Play Power Labs. Number Jumble is a puzzle game in which players find patterns in a multiplication table. The numbers in the table are all mixed up and need to be put back into the right position. This game helps strengthen students' conceptual understanding of multiplying either whole numbers or decimals, depending on which version of the game you select, and addresses multiple Common Core State Standards (CCSS), as identified in the objectives above. To prepare for this lesson, explore the BrainPOP and/or BrainPOP Jr. movie topics related to this game, and determine which ones are most appropriate for your students and learning objectives. Explore the Number Jumble game and decide whether you would like your students to begin with whole numbers or decimals. As you familiarize yourself with the game, you'll notice that players must click a number tile they'd like to move and then click the location where they'd like to move it to. You can earn a time bonus for clearing the board quickly. Red number tiles earn double bonus points. - Use BrainPOP and/or BrainPOP Jr. movies to build background knowledge about multiplication. Younger children and students working below grade level may benefit from the Multiplication Movie Topics on BrainPOP Jr. to help them understand equal groups and arrays. You can show the Decimals, Exponents, or Factoring movies to more advanced students. - Refresh students' understanding of multiplication tables by passing out the Graphic Organizer or allowing students to access it on computers and type directly into it. Set a timer, and challenge students to complete as much of the multiplication table as they can in a given time period. - Allow students to work collaboratively to check their work against one another's papers or a completed multiplication table. Have students record their scores, and let them know they'll complete the same activity again later on in the unit so they can measure their progress. - Talk with students about the patterns they see in the multiplication table. Students may want to use different color highlighters or crayons to mark the various patterns. You can also play the Multiplication Movie at this time to help students make connections about the meaning of multiplication and the patterns within a multiplication table. - Project the Number Jumble game for the class to see and play one round together as a class. Demonstrate how to click the number tile you want to move and then click the space you want to move it to. 
Encourage students to notice patterns as they play and talk about how observing patterns serves as a strategy for game play. - Allow students to explore the game on their own or with a partner for 5-20 minutes, depending on how much class time you have allotted. - Challenge students to work collaboratively to create their own version of a Number Jumble puzzle. Provide a large sheet of paper or poster board for each pair or group. Have students create a multiplication table that is similar to one in the game, or an original table that follows a student-created pattern. - Check students' work for accuracy, then have them cut apart each square to create movable pieces. Provide an envelope or plastic baggie for students to keep their pieces in. - Have students trade their puzzles with one another and practice completing them as quickly as possible. Challenge students to identify the patterns in each table. You may want to keep the puzzles in your math center so students can explore them throughout the school year.
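For teachers who want printable grids for the student-made puzzles described above, here is a small optional Python helper. It is an illustration, not part of the BrainPOP lesson, and the table size is an assumed parameter.

def multiplication_table(size=10):
    """Print a size-by-size multiplication table, like the grids used in Number Jumble."""
    width = len(str(size * size)) + 1
    header = " " * width + "".join(f"{col:>{width}}" for col in range(1, size + 1))
    print(header)
    for row in range(1, size + 1):
        cells = "".join(f"{row * col:>{width}}" for col in range(1, size + 1))
        print(f"{row:>{width}}{cells}")

multiplication_table(10)   # students can cut a printout apart to make their own puzzle pieces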
Introduction | Example | Tutorial | Applications | Comments Introduction - VBA Absolute Value This VBA tutorial shows you how to take the absolute value of a number with the VBA Abs function. The VBA Abs function will return the absolute value of a number, variable or formula in your macro. The VBA Abs function accepts one argument - a number - and will return the magnitude of that number by ignoring its sign. Keep exercising your macro skills and you too can develop VBA Abs! See what I did there? Okay, okay. Let’s look at an example. Example - VBA Absolute Value VBA Abs Function Sub VBA_Absolute_Value() Dim d1 As Double Dim d2 As Double Dim d3 As Double d1 = 1.5 d2 = -0.8 d3 = Abs(d1 * d2) Debug.Print d3 'Yields +1.2 End Sub Becoming a VBA expert isn't hard Over 5000 members are improving their VBA skills for free with our email tutorials. Why don't you join them? Our experts share time-saving VBA tips and we'll give you access to our huge macro library - it's sure to speed up your macro development. Tutorial - VBA Absolute Value How to find the absolute value in VBA First, a refresher on what the absolute value of a number is. The absolute value of a number is a mathematical term meaning the magnitude of a number, ignoring the sign. In other words, the absolute value of a number is always the positive equivalent of the number representing how for away that number is from zero. The VBA Abs function strips the sign from your numbers or numerical expressions to return the absolute value. To do that, the Abs function accepts one argument: a number. The number can be a variable or even a formula that produces a numeric result. In other words, any numeric expression will do. Here’s the syntax for the Abs function: In the example above, the “Number” argument is the product of the variables Common Error with the Abs function If you accidentally pass the Abs function a string or other non-numeric value, you will get a “Run-time error ‘13’ Type Mismatch” error. With that said, the Abs function isn’t dumb. If you try to take the absolute value of a string that represents a number, it will properly process the string as a number and return the absolute value. Here’s an example to demonstrate what I mean: Sub VBA_Abs_String() Debug.Print Abs("-5.4") 'Yields +5.4 End Sub You’ll want to use the VBA Abs function anytime you need to absolute value of a number. It’s a good function to use when you’re passing an argument to the VBA Sqr function since you can’t take the square root of a negative number. If you liked this VBA tutorial, I hope you’ll shop my Excel Add-ins store to show your support for wellsr.com. Without revenue from great readers like you, it’s difficult to grow the site and reach more users.
What is rate of reaction pdf The rate of a chemical reaction is the speed with which reactants are converted to products. Collision theory is used to explain why chemical reactions occur at different rates. Collision theory states that in order for a reaction to proceed, the. Chapter 1 : Rate of Reaction 1. Rate of reaction Rate of reaction is the measurement of the speed which reactants are converted into products in a chemical reaction / change in a selected quantity during a reaction per unit time. Average rate of reaction is the average value of the rate of reaction over an interval of Chapter 11 Rate of Reaction niu.edu.tw =Factors That Affect Reaction Rate= The following factors can affect the rate of a reaction. Collision theory can be used to explain their affect.. Rate of Reaction 3 Model 2 – Two Species in the Reaction 6. The graph in Model 2 contains the same data as that in Model 1, but data about a second species Be sure that if students do not list “rate or speed of reaction or reaction time,” add it to the list. Go down the list and ask the class to report on each factor as it pertains to their. Chapter 1 : Rate of Reaction 1. Rate of reaction Rate of reaction is the measurement of the speed which reactants are converted into products in a chemical reaction / change in a selected quantity during a reaction per unit time. Average rate of reaction is the average value of the rate of reaction over an interval of IB Chemistry notes on rates of reaction =Factors That Affect Reaction Rate= The following factors can affect the rate of a reaction. Collision theory can be used to explain their affect.. for the rate of reaction. Hence, in a first-order reaction the rate Hence, in a first-order reaction the rate is proportional to one concentration; in a second-order reaction. The rate of reaction of a solid substance is related to its surface area. In a reaction between a solid and an aqueous/liquid/gas species, increasing the surface area of the solid-phase reactant increases the number of collisions per second and therefore increases the reaction rate. In a reaction between magnesium metal and hydrochloric acid, magnesium atoms must collide with the hydrogen ions Chem4Kids.com Reactions Rates of Reaction 80 Experiment Starter Sheet - Investigating the rate of reaction between sodium thiosulphate and hydrochloric acid Here is a suggested method to investigate the effect of varying the concentration. These rates are related to, but different from, reaction rates. A reaction rate is a property of a given reaction, not of chemical species. The relationships A reaction rate is a property of a given reaction, not of chemical species.. Reaction rates cannot be calculated from balanced equations. Suppose you wish to express the average rate of the following reac- tion during the time period beginning at time t 1 and ending at time t 2 . 9.7 Theories of Reaction Rates Chemistry LibreTexts The rate of a reaction is the speed at which a chemical reaction happens. If a reaction has a low rate, that means the molecules combine at a slower speed than a reaction with a high rate.. The change in the number of moles / litre of any reactant or product per unit time is known as rate of reaction. The rate measured over a long time interval is called average rate and the rate measured for an infinitesimally small time interval is called instantaneous rate.. Summary of Factors That Affect Chemical Reaction Rate The chart below is a summary of the main factors that influence reaction rate. 
Keep in mind, there is typically a maximum effect, after which changing a factor will have no effect or will slow a reaction.. reaction rate constant with temperature, catalyst concentration, and proportions of reactants. With these kinetic data and a knowledge of the reactor configuration, the development of a computer simulation model of the esterification reaction is invaluable for optimizing esterification reaction operation (25−28). However, all esterification reactions do not necessarily permit straightforward. Rate of Reaction Crowley's Science Corner Chapter 11 Rate of Reaction. Outline 1. Meaning of reaction rate 2. Reaction rate and concentration 3. Reactant concentration and time 4. Models for reaction rate 5. Reaction rate and temperature 6. Catalysis 7. Reaction mechanisms. Thermochemistry • We have looked at the energy involved in a chemical reaction • Chapter 7 • Some reactions evolve heat (exothermic) • Some reactions Chapter 16 Reaction Rates Reaction Rates • Increasing the concentration (or surface area) of reactants or the reaction temperature increases reaction rate by increasing the number of effective collisions. 15 Conditions that Effect Reaction Rates • Increasing the concentration or surface area of one or more reactants increases the number of effective collisions by increasing the total number of collisions (fraction - Worksheet Reaction Rates Name - Text 1 RATES OF REACTION COURSEWORK - THEORY OF REACTION RATES MIT OpenCourseWare - Worksheet Reaction Rates Name 14035 R1 Chemistry 2007 Sample assessment instrument Extended experimental investigation: Reaction rate This sample has been compiled by the QSA to …. The rate law or rate equation for a chemical reaction is an equation that links the reaction rate with the concentrations or pressures of the reactants and constant parameters (normally rate coefficients and partial reaction orders).. RATES OF REACTION FACTORS - Pathwayz a. the rate of reaction is the decrease in concentration of reactants or the increase in concentration of products with time. b. how reaction rates depend on such …. 14035 R1 Chemistry 2007 Sample assessment instrument Extended experimental investigation: Reaction rate This sample has been compiled by the QSA to …. Reaction rates cannot be calculated from balanced equations. Suppose you wish to express the average rate of the following reac- tion during the time period beginning at time t 1 and ending at time t 2 .. VI. Determination of Reaction Orders: 6. Complementarity of Methods: a. “Reaction Order Over Time”: Monitoring the disappearance of A over the course of a reaction (sections VI.B.1-4) affords the “Reaction Read more: Once Upon A Road Trip Pdf. An exercise physiologist can not only help you to understand your pain in a more comprehensive manner, they can also assist you in exposing you to painful and feared movements in a controlled approach. The effect of concentration on rate – Student sheet To study 1. RATES OF REACTION FACTORS - Pathwayz 2. The Rate of a Chemical Reaction Chemistry LibreTexts 3. Chem4Kids.com Reactions Rates of Reaction Chem4Kids.com Reactions Rates of Reaction 1/08/2016 · Example: Write rate equation for reaction between A and B where A is 1st order and B is 2nd order. r = k[A][B]2 overall order is 3 Calculate the unit of k. Rates of Reaction Pearson Publishing Ltd.
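The worked example above ("r = k[A][B]2, overall order is 3") is easier to read as code. Here is a hedged Python sketch; the rate constant and concentrations are made-up values chosen only to show how the rate scales, and the units assume an overall third-order reaction.

def rate(k, conc_A, conc_B, order_A=1, order_B=2):
    """Rate law r = k [A]^order_A [B]^order_B (first order in A, second order in B by default)."""
    return k * conc_A**order_A * conc_B**order_B

# Hypothetical values: k in dm^6 mol^-2 s^-1, concentrations in mol dm^-3.
k = 0.5
print(rate(k, 0.10, 0.20))   # 0.5 * 0.10 * 0.20**2 = 0.002 mol dm^-3 s^-1

# Doubling [B] quadruples the rate, as expected for a second-order dependence:
print(rate(k, 0.10, 0.40))   # 0.008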
A linear equation is much like any other equation, with two expressions set equal to each other. Linear equations have one or two variables. When you substitute values for the variables in a true linear equation and graph the coordinates, all correct points lie on the same line. For a simple slope-intercept linear equation, you must determine the slope and the y-intercept first. Use a line already drawn on a graph, and the points plotted on it, before creating the equation. Slope-intercept linear equations follow the form y = mx + b. Determine the value of m, the slope (rise over run), by picking any two points on the line. For this example, use points (1,4) and (2,6). Subtract the x value of the first point from the x value of the second point, do the same for the y values, and divide the y difference by the x difference to get the slope. Example: (6 − 4) / (2 − 1) = 2 / 1 = 2. The slope, m, equals 2. Substitute 2 for m in the equation, so it now reads y = 2x + b. Find a point on the line and substitute its values into the equation; for the point (1,4), this gives 4 = 2(1) + b. Solve for b, the y-intercept, which is the value at which the line crosses the y-axis. In this case, subtract the product of the slope and the x value from the y value: b = 4 − 2 = 2. The final equation is y = 2x + 2.
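A short sketch of the same procedure in code. The helper name is hypothetical; the points are the ones from the worked example above.

def slope_intercept(p1, p2):
    """Given two points, return (m, b) for the line y = m*x + b through them."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)    # rise over run
    b = y1 - m * x1              # substitute one point to solve for b
    return m, b

m, b = slope_intercept((1, 4), (2, 6))
print(f"y = {m:g}x + {b:g}")     # y = 2x + 2, matching the worked example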
The Distributive Property, Part 2 Let's use rectangles to understand the distributive property with variables. 10.1: Possible Areas - A rectangle has a width of 4 units and a length of \(m\) units. Write an expression for the area of this rectangle. What is the area of the rectangle if \(m\) is: - Could the area of this rectangle be 11 square units? Why or why not? 10.2: Partitioned Rectangles When Lengths are Unknown Here are two rectangles. The length and width of one rectangle are 8 and 5. The width of the other rectangle is 5, but its length is unknown so we labeled it \(x\). Write an expression for the sum of the areas of the two rectangles. The two rectangles can be composed into one larger rectangle as shown. What are the width and length of the new, large rectangle? - Write an expression for the total area of the large rectangle as the product of its width and its length. 10.3: Areas of Partitioned Rectangles For each rectangle, write expressions for the length and width and two expressions for the total area. Record them in the table. Check your expressions in each row with your group and discuss any disagreements. |rectangle||width||length||area as a product of width times length |area as a sum of the areas of the smaller rectangles Here is an area diagram of a rectangle. - Find the lengths \(w\), \(x\), \(y\), and \(z\), and the area \(A\). All values are whole numbers. - Can you find another set of lengths that will work? How many possibilities are there? Here is a rectangle composed of two smaller rectangles A and B. Based on the drawing, we can make several observations about the area of the rectangle: - One side length of the large rectangle is 3 and the other is \(2+x\), so its area is \(3(2+x)\). - Since the large rectangle can be decomposed into two smaller rectangles, A and B, with no overlap, the area of the large rectangle is also the sum of the areas of rectangles A and B: \(3(2) + 3(x)\) or \(6+3x\). - Since both expressions represent the area of the large rectangle, they are equivalent to each other. \(3(2+x)\) is equivalent to \(6 + 3x\). We can see that multiplying 3 by the sum \(2+x\) is equivalent to multiplying 3 by 2 and then 3 by \(x\) and adding the two products. This relationship is an example of the distributive property. \(\displaystyle 3(2+x) = 3 \boldcdot 2 + 3 \boldcdot x\) - equivalent expressions Equivalent expressions are always equal to each other. If the expressions have variables, they are equal whenever the same value is used for the variable in each expression. For example, \(3x+4x\) is equivalent to \(5x+2x\). No matter what value we use for \(x\), these expressions are always equal. When \(x\) is 3, both expressions equal 21. When \(x\) is 10, both expressions equal 70. A term is a part of an expression. It can be a single number, a variable, or a number and a variable that are multiplied together. For example, the expression \(5x + 18\) has two terms. The first term is \(5x\) and the second term is 18.
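As a quick check of the equivalence claim, here is a small Python sketch (not part of the lesson) that evaluates both forms of the rectangle-area expression at several values of x, echoing the definition of equivalent expressions above.

# Two ways of writing the same rectangle area: 3(2 + x) and 3*2 + 3*x.
product_form = lambda x: 3 * (2 + x)
sum_form     = lambda x: 3 * 2 + 3 * x

for x in [0, 3, 10, 2.5, -7]:
    assert product_form(x) == sum_form(x)
    print(x, product_form(x), sum_form(x))

# The same check for the glossary example: 3x + 4x is equivalent to 5x + 2x.
for x in [3, 10]:
    print(3 * x + 4 * x, 5 * x + 2 * x)   # 21 21, then 70 70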
Let’s analyze a simple series circuit, determining the voltage drops across individual resistors: Determine the Total Circuit Resistance From the given values of individual resistances, we can determine a total circuit resistance, knowing that resistances add in series: Use Ohm’s Law to Calculate Electron Flow From here, we can use Ohm’s Law (I=E/R) to determine the total current, which we know will be the same as each resistor current, currents being equal in all parts of a series circuit: Now, knowing that the circuit current is 2 mA, we can use Ohm’s Law (E=IR) to calculate voltage across each resistor: It should be apparent that the voltage drop across each resistor is proportional to its resistance, given that the current is the same through all resistors. Notice how the voltage across R2 is double that of the voltage across R1, just as the resistance of R2 is double that of R1. If we were to change the total voltage, we would find this proportionality of voltage drops remains constant: The voltage across R2 is still exactly twice that of R1‘s drop, despite the fact that the source voltage has changed. The proportionality of voltage drops (ratio of one to another) is strictly a function of resistance values. With a little more observation, it becomes apparent that the voltage drop across each resistor is also a fixed proportion of the supply voltage. The voltage across R1, for example, was 10 volts when the battery supply was 45 volts. When the battery voltage was increased to 180 volts (4 times as much), the voltage drop across R1 also increased by a factor of 4 (from 10 to 40 volts). The ratio between R1‘s voltage drop and total voltage, however, did not change: Likewise, none of the other voltage drop ratios changed with the increased supply voltage either: Voltage Divider Formula For this reason a series circuit is often called a voltage divider for its ability to proportion—or divide—the total voltage into fractional portions of constant ratio. With a little bit of algebra, we can derive a formula for determining series resistor voltage drop given nothing more than total voltage, individual resistance, and total resistance: The ratio of individual resistance to total resistance is the same as the ratio of individual voltage drop to total supply voltage in a voltage divider circuit. This is known as the voltage divider formula, and it is a short-cut method for determining voltage drop in a series circuit without going through the current calculation(s) of Ohm’s Law. Using this formula, we can re-analyze the example circuit’s voltage drops in fewer steps: Voltage dividers find wide application in electric meter circuits, where specific combinations of series resistors are used to “divide” a voltage into precise proportions as part of a voltage measurement device. One device frequently used as a voltage-dividing component is the potentiometer, which is a resistor with a movable element positioned by a manual knob or lever. The movable element, typically called a wiper, makes contact with a resistive strip of material (commonly called the slidewire if made of resistive metal wire) at any point selected by the manual control: The wiper contact is the left-facing arrow symbol drawn in the middle of the vertical resistor element. As it is moved up, it contacts the resistive strip closer to terminal 1 and further away from terminal 2, lowering resistance to terminal 1 and raising resistance to terminal 2. As it is moved down, the opposite effect results. 
The resistance as measured between terminals 1 and 2 is constant for any wiper position. Rotary vs. Linear Potentiometers Shown here are internal illustrations of two potentiometer types, rotary and linear: Some linear potentiometers are actuated by straight-line motion of a lever or slide button. Others, like the one depicted in the previous illustration, are actuated by a turn-screw for fine adjustment ability. The latter units are sometimes referred to as trimpots, because they work well for applications requiring a variable resistance to be “trimmed” to some precise value. It should be noted that not all linear potentiometers have the same terminal assignments as shown in this illustration. With some, the wiper terminal is in the middle, between the two end terminals. The following photograph shows a real, rotary potentiometer with exposed wiper and slidewire for easy viewing. The shaft which moves the wiper has been turned almost fully clockwise so that the wiper is nearly touching the left terminal end of the slidewire: Here is the same potentiometer with the wiper shaft moved almost to the full-counterclockwise position, so that the wiper is near the other extreme end of travel: If a constant voltage is applied between the outer terminals (across the length of the slidewire), the wiper position will tap off a fraction of the applied voltage, measurable between the wiper contact and either of the other two terminals. The fractional value depends entirely on the physical position of the wiper: The Importance of Potentiometer Application Just like the fixed voltage divider, the potentiometer’s voltage division ratio is strictly a function of resistance and not of the magnitude of applied voltage. In other words, if the potentiometer knob or lever is moved to the 50 percent (exact center) position, the voltage dropped between wiper and either outside terminal would be exactly 1/2 of the applied voltage, no matter what that voltage happens to be, or what the end-to-end resistance of the potentiometer is. In other words, a potentiometer functions as a variable voltage divider where the voltage division ratio is set by wiper position. This application of the potentiometer is a very useful means of obtaining a variable voltage from a fixed-voltage source such as a battery. If a circuit you’re building requires a certain amount of voltage that is less than the value of an available battery’s voltage, you may connect the outer terminals of a potentiometer across that battery and “dial-up” whatever voltage you need between the potentiometer wiper and one of the outer terminals for use in your circuit: When used in this manner, the name potentiometer makes perfect sense: they meter (control) the potential(voltage) applied across them by creating a variable voltage-divider ratio. This use of the three-terminal potentiometer as a variable voltage divider is very popular in circuit design. Shown here are several small potentiometers of the kind commonly used in consumer electronic equipment and by hobbyists and students in constructing circuits: The smaller units on the very left and very right are designed to plug into a solderless breadboard or be soldered into a printed circuit board. The middle units are designed to be mounted on a flat panel with wires soldered to each of the three terminals. Here are three more potentiometers, more specialized than the set just shown: The large “Helipot” unit is a laboratory potentiometer designed for quick and easy connection to a circuit. 
The unit in the lower-left corner of the photograph is the same type of potentiometer, just without a case or 10-turn counting dial. Both of these potentiometers are precision units, using multi-turn helical-track resistance strips and wiper mechanisms for making small adjustments. The unit on the lower-right is a panel-mount potentiometer, designed for rough service in industrial applications. - Series circuits proportion, or divide, the total supply voltage among individual voltage drops, the proportions being strictly dependent upon resistances: E_Rn = E_Total (R_n / R_Total) - A potentiometer is a variable-resistance component with three connection points, frequently used as an adjustable voltage divider.
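The summary formula E_Rn = E_Total (R_n / R_Total) is easy to verify numerically. The Python sketch below is illustrative only; the resistor values (5 kΩ, 10 kΩ, 7.5 kΩ) are assumed so as to match the 45-volt, 2-milliamp example worked earlier, and the drops come out the same whether computed through Ohm's Law or through the voltage divider short-cut.

```python
# Voltage divider check (illustrative; resistances assumed to match the text's
# 45 V, 2 mA example: R1 = 5 kΩ, R2 = 10 kΩ, R3 = 7.5 kΩ).
E_total = 45.0                      # supply voltage, volts
R = {"R1": 5e3, "R2": 10e3, "R3": 7.5e3}
R_total = sum(R.values())           # series resistances add: 22.5 kΩ

I = E_total / R_total               # Ohm's Law: 2 mA through every resistor
for name, r in R.items():
    drop_ohms_law = I * r                       # E = I * R
    drop_divider  = E_total * (r / R_total)     # voltage divider formula
    print(name, round(drop_ohms_law, 3), round(drop_divider, 3))  # identical
```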
Torque, moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis, fulcrum, or pivot. Just as a force is a push or a pull, a torque can be thought of as a twist to an object. Mathematically, torque is defined as the cross product of the lever-arm distance vector and the force vector, which tends to produce rotation. Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a torque (turning force) that loosens or tightens the nut or bolt. The magnitude of torque depends on three quantities: the force applied, the length of the lever arm connecting the axis to the point of force application, and the angle between the force vector and the lever arm. In symbols: τ = r × F, so that the magnitude is τ = rF sin θ, where: - τ is the torque vector and τ is the magnitude of the torque, - r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), - F is the force vector, - × denotes the cross product, - θ is the angle between the force vector and the lever arm vector. The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a mechanical advantage. - 1 Terminology - 2 History - 3 Definition and relation to angular momentum - 4 Units - 5 Special cases and other facts - 6 Machine torque - 7 Relationship between torque, power, and energy - 8 Principle of moments - 9 Torque multiplier - 10 See also - 11 References - 12 External links This article follows US physics terminology by using the word torque. In the UK and in US mechanical engineering, this is called moment of force, usually shortened to moment. In US mechanical engineering, the term torque means "the resultant moment of a Couple," and (unlike in UK physics), the terms torque and moment are not interchangeable. Torque is defined mathematically as the rate of change of angular momentum of an object. A nonzero torque therefore implies that the angular velocity, the moment of inertia, or both are changing. Moment is the general term used for the tendency of one or more applied forces to rotate an object about an axis, but not necessarily to change the angular momentum of the object (the concept which is called torque in physics). For example, a rotational force applied to a shaft causing acceleration, such as a drill bit accelerating from rest, results in a moment called a torque. By contrast, a lateral force on a beam produces a moment (called a bending moment), but since the angular momentum of the beam is not changing, this bending moment is not called a torque. Similarly, the moment produced by a force couple on an object whose angular momentum does not change is also not called a torque. This article follows the US physics terminology by calling all moments by the term torque, whether or not they cause the angular momentum of an object to change. The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia and angular acceleration, respectively. Definition and relation to angular momentum A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque.
A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the fingers of the right hand are curled from the direction of the lever arm to the direction of the force, then the thumb points in the direction of the torque. More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product: τ = r × F, where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by τ = rF sin θ, where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively, τ = rF⊥, where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque. It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of the rotation that this torque would initiate, starting from rest, and its direction is determined by the right-hand rule. The unbalanced torque on a body along its axis of rotation determines the rate of change of the body's angular momentum: τ = dL/dt, where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum: Στ = dL/dt. For rotation about a fixed axis, τ = Iα, where α is the angular acceleration of the body, measured in rad/s². This equation has the limitation that the torque equation must be written about the instantaneous axis of rotation or the center of mass for any type of motion, whether pure translation, pure rotation, or mixed motion. Here I is the moment of inertia about the point about which the torque is written (either the instantaneous axis of rotation or the center of mass only). If the body is in translational equilibrium, then the torque equation is the same about all points in the plane of motion. A torque is not necessarily limited to rotation around a fixed axis, however. It may change the magnitude and/or direction of the angular momentum vector, depending on the angle between the velocity vector and the non-radial component of the force vector, as viewed in the pivot's frame of reference. A net torque on a spinning body therefore may result in a precession without necessarily causing a change in spin rate. Proof of the equivalence of definitions The definition of angular momentum for a single particle is: L = r × p, where "×" indicates the vector cross product, p is the particle's linear momentum, and r is the displacement vector from the origin (the origin is assumed to be a fixed location anywhere in space). The time-derivative of this is: dL/dt = r × dp/dt + dr/dt × p. This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definition of force F = dp/dt (whether or not mass is constant) and the definition of velocity v = dr/dt, we obtain dL/dt = r × F + v × p. The cross product of momentum with its associated velocity is zero because velocity and momentum are parallel, so the second term vanishes. By definition, torque τ = r × F. Therefore torque on a particle is equal to the first derivative of its angular momentum with respect to time: τ = dL/dt.
If multiple forces are applied, Newton's second law instead reads F_net = ma, and it follows that dL/dt = r × F_net = τ_net. This is a general proof. Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian. The unit newton metre is properly denoted N·m or N m. This avoids ambiguity with mN, millinewtons. The SI unit for energy or work is the joule. It is dimensionally equivalent to a force of one newton acting over a distance of one metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of using different unit names (i.e., reserving newton metres for torque and using only joules for energy) helps avoid mistakes and misunderstandings. The dimensional equivalence of these units, of course, is not simply a coincidence: A torque of 1 N·m applied through a full revolution will require an energy of exactly 2π joules. Mathematically, E = τθ = (1 N·m)(2π rad) = 2π J. In Imperial units, "pound-force-feet" (lb·ft), "foot-pounds-force", "inch-pounds-force", "ounce-force-inches" (oz·in) are used, and other non-SI units of torque include "metre-kilograms-force". For all these units, the word "force" is often left out, for example abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it is implicit that the "pound" is pound-force and not pound-mass). This is an example of the confusion caused by the use of traditional units that may be avoided with SI units because of the careful distinction in SI between force (in newtons) and mass (in kilograms). Sometimes one may see torque given in units that do not dimensionally make sense, for example gram centimetre. In these units, "gram" should be understood as the force given by the weight of 1 gram at the surface of the earth, i.e., 0.00980665 N. The surface of the earth is understood to have a standard acceleration of gravity (9.80665 m/s²). Special cases and other facts Moment arm formula A very useful special case, often given as the definition of torque in fields other than physics, is as follows: torque = (moment arm) × (force). The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque arising from a perpendicular force is torque = (distance to centre) × (force). For example, if a person places a force of 10 N at the terminal end of a wrench which is 0.5 m long (or a force of 10 N exactly 0.5 m from the twist point of a wrench of any length), the torque will be 5 N·m, assuming that the person applies the force in the plane of movement of the wrench and perpendicular to it. For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations, ΣH = 0 and ΣV = 0, and the torques give a third equation, Στ = 0. That is, to solve statically determinate equilibrium problems in two dimensions, three equations are used. Net force versus torque When the net force on the system is zero, the torque measured from any point in space is the same.
For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of your point of reference. If the net force F is not zero, and τ1 is the torque measured about point 1, then the torque measured about point 2 is τ2 = τ1 + (r1 − r2) × F, where r1 and r2 are the position vectors of the two reference points. Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer, and shown as a torque curve. Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam engines can start heavy loads from zero RPM without a clutch. Relationship between torque, power, and energy If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass, W = ∫ τ dθ, integrated from θ1 to θ2, where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body. It follows from the work-energy theorem that W also represents the change in the rotational kinetic energy Er of the body, given by Er = ½Iω², where I is the moment of inertia of the body and ω is its angular speed. The power delivered by a torque acting at angular speed ω is P = τω; mathematically, the equation may be rearranged to compute torque for a given power output, τ = P/ω. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any). In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation. Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. Conversion to other units A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians per time), we multiply by a factor of 2π radians per revolution. In the following formulas, P is power, τ is torque and ω is rotational speed (revolutions per unit time), so that P = 2π·τ·ω. Dividing on the left by 60 seconds per minute gives P = 2π·τ·ω / 60, where rotational speed ω is in revolutions per minute (rpm). Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to P (hp) = 2π · τ (lbf·ft) · ω (rpm) / 33,000. The constant (33,000 foot pounds-force per minute per horsepower) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550. Use of other units (e.g. BTU per hour for power) would require a different custom conversion factor. For a rotating object, the linear distance covered at the circumference of rotation is the product of the radius with the angle covered.
That is: linear distance = radius × angular distance. And by definition, linear distance = linear speed × time = radius × angular speed × time. By the definition of torque: torque = radius × force. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power: The radius r and time t have dropped out of the equation. However, angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give: If torque is in newton metres and rotational speed in revolutions per second, the above equation gives power in newton metres per second or watts. If Imperial units are used, and if torque is in pounds-force feet and rotational speed in revolutions per minute, the above equation gives power in foot pounds-force per minute. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft·lbf/min per horsepower: Principle of moments The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the sum of torques due to several forces applied to a single point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from: A torque multiplier is a gear box with reduction ratios greater than 1. The given torque at the input gets multiplied as per the reduction ratio and transmitted to the output, thereby achieving greater torque, but with reduced rotational speed. - Serway, R. A. and Jewett, Jr. J. W. (2003). Physics for Scientists and Engineers. 6th Ed. Brooks Cole. ISBN 0-534-40842-7. - Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4. - Physics for Engineering by Hendricks, Subramony, and Van Blerk, Chinappi page 148, Web link - SI brochure - Dynamics, Theory and Applications by T.R. Kane and D.A. Levinson, 1985, pp. 90–99: Free download - "Right Hand Rule for Torque". Retrieved 2007-09-08. - Halliday, David; Resnick, Robert (1970). Fundamentals of Physics. John Wiley & Sons, Inc. pp. 184–85. - From the official SI website: "...For example, the quantity torque may be thought of as the cross product of force and distance, suggesting the unit newton metre, or it may be thought of as energy per angle, suggesting the unit joule per radian." - "SI brochure Ed. 8, Section 5.1". Bureau International des Poids et Mesures. 2006. Retrieved 2007-04-01. - See, for example: "CNC Cookbook: Dictionary: N-Code to PWM". Retrieved 2008-12-17. - Kleppner, Daniel; Kolenkow, Robert (1973). An Introduction to Mechanics. McGraw-Hill. pp. 267–68. - Power and Torque Explained A clear explanation of the relationship between Power and Torque, and how they relate to engine performance. - "Horsepower and Torque" An article showing how power, torque, and gearing affect a vehicle's performance. - "Torque vs. Horsepower: Yet Another Argument" An automotive perspective - a discussion of torque and angular momentum in an online textbook - Torque and Angular Momentum in Circular Motion on Project PHYSNET. - An interactive simulation of torque - Torque Unit Converter
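To tie the formulas above together numerically, here is a single Python sketch (an illustrative addition, not part of the article; all input values are assumed) that checks τ = r × F against rF sin θ, converts torque and rpm to power via P = τω, and applies the imperial horsepower form with the 33,000 ft·lbf/min factor from the derivation above.

```python
import math
import numpy as np

# 1) Torque as a cross product: tau = r x F, |tau| = |r||F| sin(theta).
r = np.array([0.5, 0.0, 0.0])          # lever arm: 0.5 m along x (assumed)
F = np.array([0.0, 10.0, 0.0])         # 10 N force along y, perpendicular to r
tau_vec = np.cross(r, F)               # -> [0, 0, 5] N·m, along z by the right-hand rule
theta = math.pi / 2                    # angle between r and F
assert math.isclose(np.linalg.norm(tau_vec),
                    np.linalg.norm(r) * np.linalg.norm(F) * math.sin(theta))

# 2) Power from torque and rotational speed: P = tau * omega, omega in rad/s.
tau, rpm = 200.0, 3000.0               # assumed engine figures
omega = 2 * math.pi * rpm / 60         # convert rpm to rad/s
P_watts = tau * omega                  # about 62.8 kW

# 3) Imperial form: P [hp] = tau [lbf·ft] * 2*pi * rpm / 33,000.
tau_lbft = 300.0                       # assumed torque in lbf·ft
P_hp = tau_lbft * 2 * math.pi * rpm / 33000.0
print(np.round(tau_vec, 3), round(P_watts), round(P_hp, 1))
```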
Summary: Students learn that ordinary citizens, including students like themselves, can make meaningful contributions to science through the concept of "citizen science." First, students learn some examples of ongoing citizen science projects that are common around the world, such as medical research, medication testing and donating idle computer time to perform scientific calculations. Then they explore Zooniverse, an interactive website that shows how research in areas from marine biology to astronomy leverages the power of the Internet and the assistance of non-scientists to classify large amounts of data that is unclassifiable by machines for various reasons. To conclude, student groups act as engineering teams to brainstorm project ideas for their own town that could benefit from community help, then design conceptual interactive websites that could organize and support the projects. In this activity, students explore several citizen science projects to see how engineering can be used to make advances in scientific research. Like engineers, they analyze the design of one of the citizen science projects to first understand the need driving the project. Specifically, they learn about the problem that needs to be solved, the basics of how that problem is solved, and how the project's design could potentially be improved. Then they work in small groups to design citizen science projects of their own. Basic ability to use a computer and access the Internet. After completing this activity, students should be able to: - Describe what a specific citizen science project monitors/researches, how it does that, and how it enables researchers to make more informed decisions. - Explain the practical societal benefits that a specific project might have. - List the fields of science and engineering involved in a specific project and explain how each group contributes to the project. - Explain the role of citizens in a specific citizen science project. - Describe several projects in their own community that could benefit from the help of citizen scientists. More Curriculum Like This Students are introduced to the engineering design process, focusing on the concept of brainstorming design alternatives. They learn that engineering is about designing creative ways to improve existing artifacts, technologies or processes, or developing new inventions that benefit society. Students are introduced to the historical motivation for space exploration. They learn about the International Space Station as an example of space travel innovation and are introduced to new and futuristic ideas that space engineers are currently working on to propel space research far into the fut... Students explore the concept of optical character recognition (OCR) in a problem-solving environment. They research OCR and OCR techniques and then apply those methods to the design challenge by developing algorithms capable of correctly "reading" a number on a typical high school sports scoreboard.... As students learn about the creation of biodomes, they are introduced to the steps of the engineering design process, including guidelines for brainstorming. They learn how engineers are involved in the design and construction of biodomes and use brainstorming to come up with ideas for possible biod... Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org). In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc. - Define the criteria and constraints of a design problem with sufficient precision to ensure a successful solution, taking into account relevant scientific principles and potential impacts on people and the natural environment that may limit possible solutions. (Grades 6 - 8) - Design and use instruments to gather data. (Grades 6 - 8) - Use data collected to analyze and interpret trends in order to identify the positive and negative effects of a technology. (Grades 6 - 8) - Identify trends and monitor potential consequences of technological development. (Grades 6 - 8) - Develop a model to generate data for iterative testing and modification of a proposed object, tool, or process such that an optimal design can be achieved. (Grades 6 - 8) - Apply scientific principles to design a method for monitoring and minimizing a human impact on the environment. (Grades 6 - 8) - Evaluate competing design solutions using a systematic process to determine how well they meet the criteria and constraints of the problem. (Grades 6 - 8) Each group needs: - computers with Internet access, preferably one per student - blank paper - colored pencils or markers, for website design - Exploring a Zooniverse Project Worksheet, one per student - Website Project Design Chart, one per student To share with the entire class: - Citizen Science with Zooniverse Presentation, Microsoft® PowerPoint® file - computer with Internet access and projector to display the presentation slides, websites and video Have you ever heard of "citizen science" or "Zooniverse"? (Listen to student ideas.) Today we are going to talk about how anyone can help the scientific community tackle some of its biggest projects, and then we're going to brainstorm some projects of our own. Let's get started by talking about citizen science and then we will explore the "Zooniverse."
brainstorming: A group problem-solving process in which each person in the group contributes his or her ideas in an open forum, building on the ideas of others, with the purpose of generating a large number of potential solutions to a design challenge. During this step, wild ideas are encouraged and are recorded, but not evaluated. citizen: In the context of "citizen science," a person who is not a professional scientist or researcher, at least not in the area of research for which s/he is helping. citizen science: Scientific research conducted by non-scientists. Sometimes called "public participation in scientific research" or "crowd science." engineer: A person who applies his/her understanding of science and math to creating things for the benefit of humanity and our world. technology: The application of scientific knowledge to solve practical problems. Zooniverse: A website that is "home to the internet's largest, most popular and most successful citizen science projects." Zooniverse project: A citizen science project that can be done online entirely through a web browser at the Zooniverse website. In this activity, students explore several citizen science projects to see how engineering can be used to make scientific research advances. Like engineers, they analyze the design of one citizen science project to understand the need driving the project—the problem to be solved, the basics of how that problem is solved—and then consider how the project's design could potentially be improved. Following this, students work in small groups to design citizen science projects and supporting websites of their own. As an example citizen science project, the Seafloor Explorer project at https://www.seafloorexplorer.org/#%21/science was designed by fisheries biologists, software engineers and specialists from other related fields who worked together to come up with solutions to monitor the ecological balance of the sea life in the northeast continental shelf. The fisheries biologists described to the software engineers what they wanted and the software engineers created computer software solutions. In this case, the biologists want to monitor the ecological balance on the northeast continental shelf to ensure that the population of the sea creatures stays in balance, which is important so that no species becomes endangered. It also helps the biologists study and learn more about sea creatures in general, which could provide other societal benefits. Responding to this, the software engineers designed and implemented a solution consisting of a website where citizens classify the data, a database where the data is stored and queried, and computer artificial intelligence that learns from the citizens' classifications in order to eventually classify the images itself. Another benefit is that the citizen-classified data from this project also helps the software engineers improve their designs of artificial intelligence capabilities in general. Before the Activity - Conduct a number of the Zooniverse projects to learn how they work and gain some insight on a few specific examples, which will help you knowledgeably share facts and descriptions about a few examples when introducing the class to Zooniverse, as well as be a resource for any technical problems students might encounter. - Create a Zooniverse account. Some Zooniverse projects require an account while others do not. Preferably, direct students to projects that do not require accounts or let them create their own accounts. Or, log them in with your account.
If you log them in with your account, the tutorial for any projects that you have completed will need to be restarted. - Make copies of the Exploring a Zooniverse Project Worksheet and Website Project Design Chart, one each per student. - Prepare to show students the Citizen Science with Zooniverse Presentation, which includes accessing websites and an online video. With the Students - (Begin at slide 1 of the presentation.) Present the Introduction/Motivation content to the class; see if students have any prior knowledge about citizen science and Zooniverse. - (slide 2) Citizen science is scientific research that ordinary citizens can help with. Usually it involves people working with or being led by a team of scientists, engineers and/or other researchers with varied backgrounds and expertise. Many different types of citizen science are going on around the world at this very moment; maybe you have heard of some of them. - (slide 3) One example of ongoing citizen science is medical research; that's when researchers and doctors test out new medical treatments on sick people who volunteer to be test subjects. For example, trying a new way to fix a broken leg or participating in clinical trials for new cancer treatment. - (slide 4) Another example of ongoing citizen science is medication testing; that's when people are paid to test medications for medical conditions such as high cholesterol or heartburn. Have you seen drug advertisements on TV or in magazines and noticed the long list of warnings and side-effects? Those side-effects are discovered by this testing. - (slide 5) Psychologists study peoples' minds. During their research, they see how people behave in certain situations and how different conditions and stimuli influence their brains. For example, the MRI machine shown on this slide is used to conduct research because it shows people's brain activity in response to how they are thinking and feeling in response to certain stimuli. - (slide 6) Another example is the use of people's personal computers to help with complex scientific calculations, such as SETI @ home, which can be downloaded on almost any computer to process data when your computer is not being used. You do not need to do anything except download and set up the program on your computer. SETI stands for Search for Extra-Terrestrial Intelligence. How does it work? The Arecibo Radio Telescope in Puerto Rico, shown on the slide, which is the largest dish antenna in the world, captures radio waves from outer space on the feed antenna suspended above it on cables and examines them for signs of intelligent life. How many of you have a radio in your family's car? This is the same idea, except SETI listens to radio signals from distant space instead of from nearby radio stations. Right now, all the radio signals that have been found seem to be unintelligent noise, like the static noise on the radio when no station is dialed in. However, more radio signals exist than can possibly be examined by scientists, which is why citizen scientists can volunteer their computers to help examine the radio data. - (slide 7) Another very common type of citizen science is the classification of data that needs humans to look at it because it cannot be deciphered by machines. That's where Zooniverse comes in. Let's look at the first Zooniverse project, Galaxy Zoo, and do one classification together. 
As a class, conduct the interactive Galaxy Zoo activity by following these steps: - Click the Galaxy Zoo link on the slide to go to the Zooniverse website, at https://www.zooniverse.org/project/hubble. - Once there, read the introduction: We need your help to classify the Hubble Space Telescope's hundreds of thousands of galaxy images according to their shapes—a task at which your brain is better than even the most advanced computer. - Then click the "Take part" button (goes to http://www.galaxyzoo.org/) and then click the "Begin Classifying" button. - When an image of a galaxy appears, ask students for advice on how to classify it. - Click on the "Examples" button to show students how to use the "Help" feature to learn more about how to classify the different types of galaxies. - Explain that multiple people classify each of galaxy, so if a mistake is made in classification it is not a problem and will eventually get corrected or figured out by consensus. - Get agreement or majority vote from students on how to classify the galaxy. - Get input from students at each step in order to answer all the questions about classifying the galaxy until its classification is complete. - Once a galaxy has been successfully classified, congratulate students on having made a small, but meaningful contribution to science. - (slides 8-9) Now that students have first-hand experience with how citizen science works, give them some background on Zooniverse, its history and project types. Show them a 1:28-minute video that recaps how Galaxy Zoo harnessed the brain power of 250,000 volunteers to classify the shapes of hundreds of thousands of galaxies captured by the Hubble Space Telescope, at https://www.youtube.com/watch?v=-T9wizyDV8c. Zooniverse's science and laboratory projects include a range of topic categories: space, climate, humanities, nature and biology. Look at the list of ~20 Zooniverse projects at https://www.zooniverse.org/projects#all. - (slide 10) Explain the activity steps listed on the slide: get a worksheet, go to the Zooniverse website, pick a project and explore it, answer the worksheet questions, use reasoned inference as necessary. - (slide 11) Give students some research tips by going to the Zooniverse website at https://www.zooniverse.org/ and scrolling through the main page, showing the different categories of projects, and then clicking on the Galaxy Zoo project. On the Galaxy Zoo project website, point out the links to the various website sections (story, science, discuss) and their sub-pages, and explain that from pages like this on their projects' websites, they will find information to help them answer the worksheet questions. - (slide 12) Address any student questions before getting started. As needed, help students navigate to the Zooniverse website and its list of projects. - (slide 13) Hand out the worksheets and direct students to get started, working individually or in pairs (or larger groups, depending on computer availability). Leave slide 10 on the projector screen so that students can refer to it as a reminder of what they should be doing. - (slide 14) After 25 minutes, have students share their observations, thoughts, experiences and findings, as well as any other information they recorded on their worksheets. - Ask students to reflect upon the citizen science project they explored and share with the class a way to improve the project. This is something engineers think about doing a lot—how to make something better. 
As an example, one possible design improvement for the Seafloor Explorer project might be to add zoom-in functionality for when people are classifying the images because some of the sea life is small and hard to see. This improvement would make the project easier to use, thus attracting more people to participate. - Once students have finished exploring the Zooniverse site, point out that the projects on the website are all large-scale and of national or international interest. Invite a discussion of different types of projects like Zooniverse that could be implemented more locally, in their own community. Ask: What are some projects that could be done in your community that citizens could help with? - Start brainstorming with students, writing their suggestions on the classroom board. It may be hard at first, after seeing such large-scale projects, but try to get them thinking about hands-on assistance or awareness. What about a website devoted to environmental engineering issues (ecosystem, soil, air or water pollution)? Cleaning up parks? Roads? Pothole reporting? A website for reporting environmental hazards? Downed trees or clogged street drains after storms?Continue brainstorming ways that citizens might contribute to community-wide problems via websites or social media. - Once a half-dozen or so ideas have been suggested, divide the class into groups of two to four students each. Direct groups to each design a website that addresses its local project topic (either chosen or assigned). Review the steps of the engineering design process with students and emphasize that they focus on brainstorming ideas and the initial design steps of the process (more than the implementing and testing of their designs). Hand out the design chart and give groups 20 minutes to come up with overall website designs. - Next, have students use colored pencils or markers and blank paper to sketch their website designs. Make sure the graphical work generally follows their design charts and that a main page and sub-pages are designed in such a way that the information is clear, easy to use and has some way for community members to contribute. Provide ~25 minutes for this portion of the activity. - Conclude by having teams present their projects and website designs to the class, as described in the Assessment section. If a Zooniverse project does not work properly, switch to a different web browser. What Do You Know? As part of the Introduction/Motivation section, ask students questions to find out what they know about citizen science and the Zooniverse. Activity Embedded Assessment Exploring: After handing out the Exploring a Zooniverse Project Worksheet and allowing time for students to get started, move around the classroom, answering any questions and observing what students are doing on the project websites. Collect the worksheets for review and grading. Refer to the Exploring a Zooniverse Project Worksheet Example Answers; these example answers may be shared with the students in advance of their own investigations if you think it would help them understand the activity assignment. Website Design: While student groups are brainstorming, planning and designing their websites, walk around to make sure students are on task, answer questions and give feedback to ensure quick progress. Share the Exciting Things You've Learned about Your Project: Have student groups present their websites to the class. Make sure they explain both the technical and graphical aspects of their websites. 
An example design might be a project in which citizens analyze pictures of birds taken by skyward or tree-facing cameras to better understand behavior and migration patterns of birds that live in a community park. During the presentation, expect students to clearly explain how the website would function, the responsibilities and tasks for the citizens involved, and how the information generated by citizens could be used by scientists and/or engineers to help solve a community problem or need. Bonus Question: What steps of the engineering design process did you complete? (Example answer: Understanding the need > brainstorming to come up with many different ideas > selecting one idea > creating a plan.) Additional Multimedia Support Show students an example local community participation website called ParkScan San Francisco (http://www.parkscan.org/), which provides a way for citizens to report local park conditions (overflowing trash cans, dry grassy areas, broken sprinklers, fallen tree limbs, broken playground equipment, graffiti-tagged walls) to keep them clean, safe and fun. The website requires citizen observers to give their email addresses and it provides a tutorial, map and form to submit a description with photos. Information about the steps of the engineering design process: https://www.teachengineering.org/engrdesignprocess.php. "Citizen Science." Scientific American. Scientific American, n.d. Web. 15 Jul 2013. http://www.scientificamerican.com/citizen-science/ "How SETI@home works." SETI@Home. University of California, n.d. Web. 16 Jul 2013. http://seticlassic.ssl.berkeley.edu/about_seti/about_seti_at_home_1.html "Purpose." Zooniverse: Real Science Online. Citizen Science Alliance, n.d. Web. 16 Jul 2013. https://www.zooniverse.org/about Contributors: Paul Cain, Yasche Glass, Jennifer Nider, Sujatha Prakash, Lori Rice Copyright © 2013 by Regents of the University of Colorado; original © 2012 Kansas State University Supporting Program: GK-12 INSIGHT Program, Kansas State University This activity was developed under National Science Foundation GK-12 grant no. DGE 0948019. However, these contents do not necessarily represent the policies of the National Science Foundation, and you should not assume endorsement by the federal government. Last modified: July 20, 2017
A particle accelerator is a device that uses electromagnetic fields to propel charged particles to high speeds and to contain them in well-defined beams. Large accelerators are best known for their use in particle physics as colliders (e.g. the LHC at CERN, RHIC at Brookhaven National Laboratory, and Tevatron at Fermilab). Other kinds of particle accelerators are used in a large variety of applications, including particle therapy for oncological purposes, and as synchrotron light sources for the study of condensed matter physics. There are currently more than 30,000 accelerators in operation around the world. There are two basic classes of accelerators: electrostatic and oscillating field accelerators. Electrostatic accelerators use static electric fields to accelerate particles. A small-scale example of this class is the cathode ray tube in an ordinary old television set. Other examples are the Cockcroft–Walton generator and the Van de Graaff generator. The achievable kinetic energy for particles in these devices is limited by electrical breakdown. Oscillating field accelerators, on the other hand, use radio frequency electromagnetic fields to accelerate particles, and circumvent the breakdown problem. This class, which was first developed in the 1920s, is the basis for all modern accelerator concepts and large-scale facilities. Rolf Widerøe, Gustav Ising, Leó Szilárd, Donald Kerst, and Ernest Lawrence are considered pioneers of this field, conceiving and building the first operational linear particle accelerator, the betatron, and the cyclotron. Because colliders can give evidence of the structure of the subatomic world, accelerators were commonly referred to as atom smashers in the 20th century. Despite the fact that most accelerators (but not ion facilities) actually propel subatomic particles, the term persists in popular usage when referring to particle accelerators in general. - 1 Uses - 2 Electrostatic particle accelerators - 3 Oscillating field particle accelerators - 4 Targets and detectors - 5 Higher energies - 6 See also - 7 References - 8 External links Beams of high-energy particles are useful for both fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research. It has been estimated that there are approximately 30,000 accelerators worldwide. Of these, only about 1% are research machines with energies above 1 GeV, while about 44% are for radiotherapy, 41% for ion implantation, 9% for industrial processing and research, and 4% for biomedical and other low-energy research. The bar graph shows the breakdown of the number of industrial accelerators according to their applications. The numbers are based on 2012 statistics available from various sources, including production and sales data published in presentations or market surveys, and data provided by a number of manufacturers. The largest particle accelerators with the highest particle energies are the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN (which came on-line in mid-November 2009). These accelerators are used for experimental particle physics. For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons (e.g. 
electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as essentially 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons, and antiprotons, interacting with each other or with the simplest nuclei (e.g., hydrogen or deuterium) at the highest possible energies, generally hundreds of GeV or more. Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure, interactions, and properties of the nuclei themselves, and of condensed matter at extremely high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon. Particle accelerators can also produce proton beams, which can produce proton-rich medical or research isotopes as opposed to the neutron-rich ones made in fission reactors; however, recent work has shown how to make 99Mo, usually made in reactors, by accelerating isotopes of hydrogen, although this method still requires a reactor to produce tritium. An example of this type of machine is LANSCE at Los Alamos. Besides being of fundamental interest, high energy electrons may be coaxed into emitting extremely bright and coherent beams of high energy photons via synchrotron radiation, which have numerous uses in the study of atomic structure, chemistry, condensed matter physics, biology, and technology. Examples include the ESRF in Grenoble, France, which has recently been used to extract detailed 3-dimensional images of insects trapped in amber. Thus there is a great demand for electron accelerators of moderate (GeV) energy and high intensity. Low-energy machines and particle therapy Everyday examples of particle accelerators are cathode ray tubes found in television sets and X-ray generators. These low-energy accelerators use a single pair of electrodes with a DC voltage of a few thousand volts between them. In an X-ray generator, the target itself is one of the electrodes. A low-energy particle accelerator called an ion implanter is used in the manufacture of integrated circuits. At lower energies, beams of accelerated nuclei are also used in medicine as particle therapy, for the treatment of cancer. DC accelerator types capable of accelerating particles to speeds sufficient to cause nuclear reactions are Cockcroft-Walton generators or voltage multipliers, which convert AC to high voltage DC, or Van de Graaff generators that use static electricity carried by belts. Electrostatic particle accelerators Historically, the first accelerators used simple technology of a single static high voltage to accelerate charged particles. The charged particle was accelerated through an evacuated tube with an electrode at either end, with the static potential across it. Since the particle passed only once through the potential difference, the output energy was limited to the accelerating voltage of the machine. 
While this method is still extremely popular today, with the electrostatic accelerators greatly out-numbering any other type, they are more suited to lower energy studies owing to the practical voltage limit of about 1 MV for air insulated machines, or 30 MV when the accelerator is operated in a tank of pressurized gas with high dielectric strength, such as sulfur hexafluoride. In a tandem accelerator the potential is used twice to accelerate the particles, by reversing the charge of the particles while they are inside the terminal. This is possible with the acceleration of atomic nuclei by using anions (negatively charged ions), and then passing the beam through a thin foil to strip electrons off the anions inside the high voltage terminal, converting them to cations (positively charged ions), which are accelerated again as they leave the terminal. The two main types of electrostatic accelerator are the Cockcroft-Walton accelerator, which uses a diode-capacitor voltage multiplier to produce high voltage, and the Van de Graaff accelerator, which uses a moving fabric belt to carry charge to the high voltage electrode. Although electrostatic accelerators accelerate particles along a straight line, the term linear accelerator is more often used for accelerators that employ oscillating rather than static electric fields. Oscillating field particle accelerators |This section needs additional citations for verification. (December 2013)| Due to the high voltage ceiling imposed by electrical discharge, in order to accelerate particles to higher energies, techniques involving more than one lower, but oscillating, high voltage sources are used. The electrodes can either be arranged to accelerate particles in a line or circle, depending on whether the particles are subject to a magnetic field while they are accelerated, causing their trajectories to arc. Linear particle accelerators In a linear particle accelerator (linac), particles are accelerated in a straight line with a target of interest at one end. They are often used to provide an initial low-energy kick to particles before they are injected into circular accelerators. The longest linac in the world is the Stanford Linear Accelerator, SLAC, which is 3 km (1.9 mi) long. SLAC is an electron-positron collider. Linear high-energy accelerators use a linear array of plates (or drift tubes) to which an alternating high-energy field is applied. As the particles approach a plate they are accelerated towards it by an opposite polarity charge applied to the plate. As they pass through a hole in the plate, the polarity is switched so that the plate now repels them and they are now accelerated by it towards the next plate. Normally a stream of "bunches" of particles are accelerated, so a carefully controlled AC voltage is applied to each plate to continuously repeat this process for each bunch. As the particles approach the speed of light the switching rate of the electric fields becomes so high that they operate at radio frequencies, and so microwave cavities are used in higher energy machines instead of simple plates. Linear accelerators are also widely used in medicine, for radiotherapy and radiosurgery. Medical grade linacs accelerate electrons using a klystron and a complex bending magnet arrangement which produces a beam of 6-30 MeV energy. The electrons can be used directly or they can be collided with a target to produce a beam of X-rays. 
The reliability, flexibility and accuracy of the radiation beam produced has largely supplanted the older use of cobalt-60 therapy as a treatment tool. Circular or cyclic accelerators In the circular accelerator, particles move in a circle until they reach sufficient energy. The particle track is typically bent into a circle using electromagnets. The advantage of circular accelerators over linear accelerators (linacs) is that the ring topology allows continuous acceleration, as the particle can transit indefinitely. Another advantage is that a circular accelerator is smaller than a linear accelerator of comparable power (i.e. a linac would have to be extremely long to have the equivalent power of a circular accelerator). Depending on the energy and the particle being accelerated, circular accelerators suffer a disadvantage in that the particles emit synchrotron radiation. When any charged particle is accelerated, it emits electromagnetic radiation and secondary emissions. As a particle traveling in a circle is always accelerating towards the center of the circle, it continuously radiates towards the tangent of the circle. This radiation is called synchrotron light and depends highly on the mass of the accelerating particle. For this reason, many high energy electron accelerators are linacs. Certain accelerators (synchrotrons) are however built specially for producing synchrotron light (X-rays). Since the special theory of relativity requires that matter always travels slower than the speed of light in a vacuum, in high-energy accelerators, as the energy increases the particle speed approaches the speed of light as a limit, but never attains it. Therefore, particle physicists do not generally think in terms of speed, but rather in terms of a particle's energy or momentum, usually measured in electron volts (eV). An important principle for circular accelerators, and particle beams in general, is that the curvature of the particle trajectory is proportional to the particle charge and to the magnetic field, but inversely proportional to the (typically relativistic) momentum. The earliest operational circular accelerators were cyclotrons, invented in 1929 by Ernest O. Lawrence at the University of California, Berkeley. Cyclotrons have a single pair of hollow 'D'-shaped plates to accelerate the particles and a single large dipole magnet to bend their path into a circular orbit. It is a characteristic property of charged particles in a uniform and constant magnetic field B that they orbit with a constant period, at a frequency called the cyclotron frequency, so long as their speed is small compared to the speed of light c. This means that the accelerating D's of a cyclotron can be driven at a constant frequency by a radio frequency (RF) accelerating power source, as the beam spirals outwards continuously. The particles are injected in the centre of the magnet and are extracted at the outer edge at their maximum energy. Cyclotrons reach an energy limit because of relativistic effects whereby the particles effectively become more massive, so that their cyclotron frequency drops out of synch with the accelerating RF. Therefore, simple cyclotrons can accelerate protons only to an energy of around 15 million electron volts (15 MeV, corresponding to a speed of roughly 10% of c), because the protons get out of phase with the driving electric field. 
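The constant orbital period that the classical cyclotron exploits comes from the non-relativistic cyclotron frequency f = qB/(2πm), which does not depend on the particle's speed or orbit radius. A minimal sketch, using an assumed 1.5 T field (an illustrative value, not one quoted in the text):

```python
import math

# Non-relativistic cyclotron frequency f = qB / (2*pi*m).
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_PROTON = 1.67262192e-27    # proton mass, kg

def cyclotron_frequency_hz(charge_c, mass_kg, b_tesla):
    """Revolution frequency of a charged particle in a uniform field B,
    valid while the speed is small compared with c."""
    return charge_c * b_tesla / (2 * math.pi * mass_kg)

if __name__ == "__main__":
    B = 1.5  # tesla (assumed, illustrative)
    f = cyclotron_frequency_hz(E_CHARGE, M_PROTON, B)
    print(f"Proton cyclotron frequency at {B} T: {f/1e6:.1f} MHz")
    # As the proton becomes relativistic its effective mass grows by the Lorentz
    # factor, so the true revolution frequency falls below this fixed RF value.
```

For the assumed 1.5 T field this gives roughly 23 MHz, which is why a fixed radio-frequency drive works until relativistic effects pull the beam out of step.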
If accelerated further, the beam would continue to spiral outward to a larger radius but the particles would no longer gain enough speed to complete the larger circle in step with the accelerating RF. To accommodate relativistic effects the magnetic field needs to be increased at higher radii, as is done in isochronous cyclotrons. An example of an isochronous cyclotron is the PSI Ring cyclotron in Switzerland, which provides protons at an energy of 590 MeV, corresponding to roughly 80% of the speed of light. The advantage of such a cyclotron is the maximum achievable extracted proton current, which is currently 2.2 mA. The energy and current correspond to 1.3 MW of beam power, which is the highest of any accelerator currently existing. Synchrocyclotrons and isochronous cyclotrons A classic cyclotron can be modified to increase its energy limit. The historically first approach was the synchrocyclotron, which accelerates the particles in bunches. It uses a constant magnetic field, but reduces the accelerating field's frequency so as to keep the particles in step as they spiral outward, matching their mass-dependent cyclotron resonance frequency. This approach suffers from low average beam intensity due to the bunching, and again from the need for a huge magnet of large radius and constant field over the larger orbit demanded by high energy. The second approach to the problem of accelerating relativistic particles is the isochronous cyclotron. In such a structure, the accelerating field's frequency (and the cyclotron resonance frequency) is kept constant for all energies by shaping the magnet poles so as to increase the magnetic field with radius. Thus, all particles get accelerated in isochronous time intervals. Higher energy particles travel a shorter distance in each orbit than they would in a classical cyclotron, thus remaining in phase with the accelerating field. The advantage of the isochronous cyclotron is that it can deliver continuous beams of higher average intensity, which is useful for some applications. The main disadvantages are the size and cost of the large magnet needed, and the difficulty in achieving the high magnetic field values required at the outer edge of the structure. Synchrocyclotrons have not been built since the isochronous cyclotron was developed. Another type of circular accelerator, invented in 1940 for accelerating electrons, is the betatron, a concept which originates ultimately from Norwegian-German scientist Rolf Widerøe. These machines, like synchrotrons, use a donut-shaped ring magnet (see below) with a cyclically increasing B field, but accelerate the particles by induction from the increasing magnetic field, as if they were the secondary winding in a transformer, due to the changing magnetic flux through the orbit. Achieving constant orbital radius while supplying the proper accelerating electric field requires that the magnetic flux linking the orbit be somewhat independent of the magnetic field on the orbit, bending the particles into a constant radius curve. These machines have in practice been limited by the large radiative losses suffered by the electrons moving at nearly the speed of light in a relatively small radius orbit. To reach still higher energies, with relativistic mass approaching or exceeding the rest mass of the particles (for protons, billions of electron volts or GeV), it is necessary to use a synchrotron. This is an accelerator in which the particles are accelerated in a ring of constant radius. 
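Before turning to synchrotrons in detail, the isochronous-cyclotron figures quoted above can be checked with two lines of relativistic kinematics: the kinetic energy fixes the Lorentz factor (and hence the speed), and beam power is simply the energy per particle times the beam current. A minimal sketch, taking only the standard proton rest energy as an outside input:

```python
import math

# Check the quoted PSI Ring cyclotron figures: 590 MeV protons at 2.2 mA.
PROTON_REST_MEV = 938.272   # proton rest energy in MeV

def beta_from_kinetic_energy(kinetic_mev, rest_mev=PROTON_REST_MEV):
    """Speed as a fraction of c, from T = (gamma - 1) * m * c^2."""
    gamma = 1.0 + kinetic_mev / rest_mev
    return math.sqrt(1.0 - 1.0 / gamma**2)

def beam_power_mw(kinetic_mev, current_ma):
    """Beam power in MW: energy per proton (eV) times beam current (A), divided by 1e6."""
    return kinetic_mev * 1e6 * current_ma * 1e-3 / 1e6

if __name__ == "__main__":
    print(f"beta at 590 MeV: {beta_from_kinetic_energy(590.0):.2f}")   # ~0.79, i.e. ~80% of c
    print(f"beam power:      {beam_power_mw(590.0, 2.2):.2f} MW")      # ~1.3 MW
```

The output (about 0.79 c and 1.3 MW) is consistent with the figures given in the text.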
An immediate advantage over cyclotrons is that the magnetic field need only be present over the actual region of the particle orbits, which is much narrower than that of the ring. (The largest cyclotron built in the US had a 184-inch-diameter (4.7 m) magnet pole, whereas the diameter of synchrotrons such as the LEP and LHC is nearly 10 km. The aperture of the two beams of the LHC is of the order of a millimeter.) However, since the particle momentum increases during acceleration, it is necessary to turn up the magnetic field B in proportion to maintain constant curvature of the orbit. In consequence, synchrotrons cannot accelerate particles continuously, as cyclotrons can, but must operate cyclically, supplying particles in bunches, which are delivered to a target or an external beam in beam "spills" typically every few seconds. Since high energy synchrotrons do most of their work on particles that are already traveling at nearly the speed of light c, the time to complete one orbit of the ring is nearly constant, as is the frequency of the RF cavity resonators used to drive the acceleration. In modern synchrotrons, the beam aperture is small and the magnetic field does not cover the entire area of the particle orbit as it does for a cyclotron, so several necessary functions can be separated. Instead of one huge magnet, one has a line of hundreds of bending magnets, enclosing (or enclosed by) vacuum connecting pipes. The design of synchrotrons was revolutionized in the early 1950s with the discovery of the strong focusing concept. The focusing of the beam is handled independently by specialized quadrupole magnets, while the acceleration itself is accomplished in separate RF sections, rather similar to short linear accelerators. Also, there is no necessity that cyclic machines be circular, but rather the beam pipe may have straight sections between magnets where beams may collide, be cooled, etc. This has developed into an entire separate subject, called "beam physics" or "beam optics". More complex modern synchrotrons such as the Tevatron, LEP, and LHC may deliver the particle bunches into storage rings of magnets with constant B, where they can continue to orbit for long periods for experimentation or further acceleration. The highest-energy machines such as the Tevatron and LHC are actually accelerator complexes, with a cascade of specialized elements in series, including linear accelerators for initial beam creation, one or more low energy synchrotrons to reach intermediate energy, storage rings where beams can be accumulated or "cooled" (reducing the magnet aperture required and permitting tighter focusing; see beam cooling), and a last large ring for final acceleration and experimentation. Circular electron accelerators fell somewhat out of favor for particle physics around the time that SLAC's linear particle accelerator was constructed, because their synchrotron losses were considered economically prohibitive and because their beam intensity was lower than for the unpulsed linear machines. The Cornell Electron Synchrotron, built at low cost in the late 1970s, was the first in a series of high-energy circular electron accelerators built for fundamental particle physics, the last being LEP, built at CERN, which was used from 1989 until 2000. A large number of electron synchrotrons have been built in the past two decades, as part of synchrotron light sources that emit ultraviolet light and X rays; see below. 
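The requirement stated above, that the bending field be turned up in proportion to momentum, is just the magnetic rigidity relation p = qBr with the orbit radius held fixed. The sketch below inverts it; the ~2.8 km bending radius and the 450 GeV/c and 7 TeV/c momenta are assumed, LHC-like illustrative numbers rather than values given in the text.

```python
# Required dipole field in a synchrotron of fixed bending radius:
# B [T] = p [GeV/c] / (0.2998 * q * r [m]) for a particle of charge q elementary charges.
# The radius and momenta below are assumed, LHC-like illustrative numbers.

def dipole_field_tesla(p_gev_per_c, bending_radius_m, charge=1):
    """Bending field from the rigidity relation B*r = p / (0.2998 * q)."""
    return p_gev_per_c / (0.2998 * charge * bending_radius_m)

if __name__ == "__main__":
    rho = 2800.0  # effective bending radius in metres (assumed)
    for p in (450.0, 7000.0):  # injection and top momentum in GeV/c (assumed)
        print(f"p = {p:6.0f} GeV/c  ->  B = {dipole_field_tesla(p, rho):.2f} T")
```

Ramping from roughly 0.5 T at injection to over 8 T at top momentum is why the field must track the beam energy throughout the acceleration cycle.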
For some applications, it is useful to store beams of high energy particles for some time (with modern high vacuum technology, up to many hours) without further acceleration. This is especially true for colliding beam accelerators, in which two beams moving in opposite directions are made to collide with each other, with a large gain in effective collision energy. Because relatively few collisions occur at each pass through the intersection point of the two beams, it is customary to first accelerate the beams to the desired energy, and then store them in storage rings, which are essentially synchrotron rings of magnets, with no significant RF power for acceleration. Synchrotron radiation sources Some circular accelerators have been built to deliberately generate radiation (called synchrotron light) as X-rays also called synchrotron radiation, for example the Diamond Light Source which has been built at the Rutherford Appleton Laboratory in England or the Advanced Photon Source at Argonne National Laboratory in Illinois, USA. High-energy X-rays are useful for X-ray spectroscopy of proteins or X-ray absorption fine structure (XAFS), for example. Synchrotron radiation is more powerfully emitted by lighter particles, so these accelerators are invariably electron accelerators. Synchrotron radiation allows for better imaging as researched and developed at SLAC's SPEAR. Fixed-Field Alternating Gradient accelerators (FFAG)s, in which a very strong radial field gradient, combined with strong focusing, allows the beam to be confined to a narrow ring, are an extension of the isochronous cyclotron idea that is lately under development. They use RF accelerating sections between the magnets, and so are isochronous for relativistic particles like electrons (which achieve essentially the speed of light at only a few MeV), but only over a limited energy range for protons and heavier particles at sub-relativistic energies. Like the isochronous cyclotrons, they achieve continuous beam operation, but without the need for a huge dipole bending magnet covering the entire radius of the orbits. Ernest Lawrence's first cyclotron was a mere 4 inches (100 mm) in diameter. Later, in 1939, he built a machine with a 60-inch diameter pole face, and planned one with a 184-inch diameter in 1942, which was, however, taken over for World War II-related work connected with uranium isotope separation; after the war it continued in service for research and medicine over many years. The first large proton synchrotron was the Cosmotron at Brookhaven National Laboratory, which accelerated protons to about 3 GeV (1953–1968). The Bevatron at Berkeley, completed in 1954, was specifically designed to accelerate protons to sufficient energy to create antiprotons, and verify the particle-antiparticle symmetry of nature, then only theorized. The Alternating Gradient Synchrotron (AGS) at Brookhaven (1960–) was the first large synchrotron with alternating gradient, "strong focusing" magnets, which greatly reduced the required aperture of the beam, and correspondingly the size and cost of the bending magnets. The Proton Synchrotron, built at CERN (1959–), was the first major European particle accelerator and generally similar to the AGS. The Stanford Linear Accelerator, SLAC, became operational in 1966, accelerating electrons to 30 GeV in a 3 km long waveguide, buried in a tunnel and powered by hundreds of large klystrons. 
It is still the largest linear accelerator in existence, and has been upgraded with the addition of storage rings and an electron-positron collider facility. It is also an X-ray and UV synchrotron photon source. The Fermilab Tevatron has a ring with a beam path of 4 miles (6.4 km). It has received several upgrades, and has functioned as a proton-antiproton collider until it was shut down due to budget cuts on September 30, 2011. The largest circular accelerator ever built was the LEP synchrotron at CERN with a circumference 26.6 kilometers, which was an electron/positron collider. It achieved an energy of 209 GeV before it was dismantled in 2000 so that the underground tunnel could be used for the Large Hadron Collider (LHC). The LHC is a proton collider, and currently the world's largest and highest-energy accelerator, expected to achieve 14 TeV energy per beam, and currently operating at half that. The aborted Superconducting Super Collider (SSC) in Texas would have had a circumference of 87 km. Construction was started in 1991, but abandoned in 1993. Very large circular accelerators are invariably built in underground tunnels a few metres wide to minimize the disruption and cost of building such a structure on the surface, and to provide shielding against intense secondary radiations that occur, which are extremely penetrating at high energies. Current accelerators such as the Spallation Neutron Source, incorporate superconducting cryomodules. The Relativistic Heavy Ion Collider, and Large Hadron Collider also make use of superconducting magnets and RF cavity resonators to accelerate particles. Targets and detectors The output of a particle accelerator can generally be directed towards multiple lines of experiments, one at a given time, by means of a deviating electromagnet. This makes it possible to operate multiple experiments without needing to move things around or shutting down the entire accelerator beam. Except for synchrotron radiation sources, the purpose of an accelerator is to generate high-energy particles for interaction with matter. This is usually a fixed target, such as the phosphor coating on the back of the screen in the case of a television tube; a piece of uranium in an accelerator designed as a neutron source; or a tungsten target for an X-ray generator. In a linac, the target is simply fitted to the end of the accelerator. The particle track in a cyclotron is a spiral outwards from the centre of the circular machine, so the accelerated particles emerge from a fixed point as for a linear accelerator. For synchrotrons, the situation is more complex. Particles are accelerated to the desired energy. Then, a fast acting dipole magnet is used to switch the particles out of the circular synchrotron tube and towards the target. A variation commonly used for particle physics research is a collider, also called a storage ring collider. Two circular synchrotrons are built in close proximity – usually on top of each other and using the same magnets (which are then of more complicated design to accommodate both beam tubes). Bunches of particles travel in opposite directions around the two accelerators and collide at intersections between them. This can increase the energy enormously; whereas in a fixed-target experiment the energy available to produce new particles is proportional to the square root of the beam energy, in a collider the available energy is linear. 
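The scaling just stated follows from the invariant mass of the initial state: for two identical beams colliding head-on, sqrt(s) = 2E, whereas for a beam striking a stationary proton sqrt(s) = sqrt(2Em + 2m^2) in natural units, which grows only as the square root of the beam energy. A minimal sketch with an assumed 1 TeV proton beam:

```python
import math

# Centre-of-mass energy sqrt(s) for proton-proton collisions (energies in GeV, c = 1).
PROTON_MASS_GEV = 0.938272  # proton rest energy

def sqrt_s_collider(beam_energy_gev):
    """Two identical beams colliding head-on: sqrt(s) = 2E."""
    return 2.0 * beam_energy_gev

def sqrt_s_fixed_target(beam_energy_gev, m=PROTON_MASS_GEV):
    """Beam on a stationary proton: s = 2*E*m + 2*m^2."""
    return math.sqrt(2.0 * beam_energy_gev * m + 2.0 * m * m)

if __name__ == "__main__":
    E = 1000.0  # 1 TeV beam energy (assumed illustrative value)
    print(f"Collider:     sqrt(s) = {sqrt_s_collider(E):7.1f} GeV")
    print(f"Fixed target: sqrt(s) = {sqrt_s_fixed_target(E):7.1f} GeV")
```

With the same 1 TeV beam, the collider makes about 2000 GeV available for producing new particles, while the fixed-target arrangement yields only about 43 GeV.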
At present the highest energy accelerators are all circular colliders, but both hadron accelerators and electron accelerators are running into limits. Higher energy hadron and ion cyclic accelerators will require accelerator tunnels of larger physical size due to the increased beam rigidity. For cyclic electron accelerators, a limit on practical bend radius is placed by synchrotron radiation losses and the next generation will probably be linear accelerators 10 times the current length. An example of such a next generation electron accelerator is the 40 km long International Linear Collider, due to be constructed between 2015–2020. It is believed that plasma wakefield acceleration in the form of electron-beam 'afterburners' and standalone laser pulsers might be able to provide dramatic increases in efficiency over RF accelerators within two to three decades. In plasma wakefield accelerators, the beam cavity is filled with a plasma (rather than vacuum). A short pulse of electrons or laser light either constitutes or immediately trails the particles that are being accelerated. The pulse disrupts the plasma, causing the charged particles in the plasma to integrate into and move toward the rear of the bunch of particles that are being accelerated. This process transfers energy to the particle bunch, accelerating it further, and continues as long as the pulse is coherent. Energy gradients as steep as 200 GeV/m have been achieved over millimeter-scale distances using laser pulsers and gradients approaching 1 GeV/m are being produced on the multi-centimeter-scale with electron-beam systems, in contrast to a limit of about 0.1 GeV/m for radio-frequency acceleration alone. Existing electron accelerators such as SLAC could use electron-beam afterburners to greatly increase the energy of their particle beams, at the cost of beam intensity. Electron systems in general can provide tightly collimated, reliable beams; laser systems may offer more power and compactness. Thus, plasma wakefield accelerators could be used – if technical issues can be resolved – to both increase the maximum energy of the largest accelerators and to bring high energies into university laboratories and medical centres. Higher than 0.25 GeV/m gradients have been achieved by a dielectric laser accelerator, which may present another viable approach to building compact high-energy accelerators. Black hole production and public safety concerns In the future, the possibility of black hole production at the highest energy accelerators may arise if certain predictions of superstring theory are accurate. This and other exotic possibilities have led to public safety concerns that have been widely reported in connection with the LHC, which began operation in 2008. The various possible dangerous scenarios have been assessed as presenting "no conceivable danger" in the latest risk assessment produced by the LHC Safety Assessment Group. If black holes are produced, it is theoretically predicted that such small black holes should evaporate extremely quickly via Bekenstein-Hawking radiation, but which is as yet experimentally unconfirmed. If colliders can produce black holes, cosmic rays (and particularly ultra-high-energy cosmic rays, UHECRs) must have been producing them for eons, but they have yet to harm anybody. 
It has been argued that to conserve energy and momentum, any black holes created in a collision between an UHECR and local matter would necessarily be produced moving at relativistic speed with respect to the Earth, and should escape into space, as their accretion and growth rate should be very slow, while black holes produced in colliders (with components of equal mass) would have some chance of having a velocity less than Earth escape velocity, 11.2 km per sec, and would be liable to capture and subsequent growth. Yet even on such scenarios the collisions of UHECRs with white dwarfs and neutron stars would lead to their rapid destruction, but these bodies are observed to be common astronomical objects. Thus if stable micro black holes should be produced, they must grow far too slowly to cause any noticeable macroscopic effects within the natural lifetime of the solar system. - Accelerator physics - Atom smasher (disambiguation) - Dielectric wall accelerator - Nuclear transmutation - List of accelerators in particle physics - Rolf Widerøe The idea of a particle accelerator has also been used in television shows such as The Flash. - Livingston, M. S.; Blewett, J. (1969). Particle Accelerators. New York: McGraw-Hill. ISBN 1-114-44384-0. - Witman, Sarah. "Ten things you might not know about particle accelerators". Symmetry Magazine. Fermi National Accelerator Laboratory. Retrieved 21 April 2014. - Pedro Waloschek (ed.): The Infancy of Particle Accelerators: Life and Work of Rolf Wideröe, Vieweg, 1994 - "six Million Volt Atom Smasher Creates New Elements". Popular Mechanics: 580. April 1935. - Higgins, A. G. (December 18, 2009). "Atom Smasher Preparing 2010 New Science Restart". U.S. News & World Report. - Cho, A. (June 2, 2006). "Aging Atom Smasher Runs All Out in Race for Most Coveted Particle". Science 312 (5778): 1302. doi:10.1126/science.312.5778.1302. - "Atom smasher". American Heritage Science Dictionary. Houghton Mifflin Harcourt. 2005. p. 49. ISBN 978-0-618-45504-1. - Feder, T. (2010). "Accelerator school travels university circuit" (PDF). Physics Today 63 (2): 20. Bibcode:2010PhT....63b..20F. doi:10.1063/1.3326981. - Hamm, Robert W.; Hamm, Marianne E. (2012). Industrial Accelerators and Their Applications. World Scientific. ISBN 978-981-4307-04-8. - "CERN management confirms new LHC restart schedule" (Press release). CERN Press Office. February 9, 2009. Retrieved 2009-02-10. - "CERN reports on progress towards LHC restart" (Press release). CERN Press Office. June 19, 2009. Retrieved 2009-07-21. - "Two circulating beams bring first collisions in the LHC" (Press release). CERN Press Office. November 23, 2009. Retrieved 2009-11-23. - Nagai, Y.; Hatsukawa, Y. (2009). "Production of 99Mo for Nuclear Medicine by 100Mo(n,2n)99Mo". Journal of the Physical Society of Japan 78 (3): 033201. Bibcode:2009JPSJ...78c3201N. doi:10.1143/JPSJ.78.033201. - Amos, J. (April 1, 2008). "Secret 'dino bugs' revealed". BBC News. Retrieved 2008-09-11. - Chao, A. W.; Mess, K. H.; Tigner, M.; et al., eds. (2013). Handbook of Accelerator Physics and Engineering (2nd ed.). World Scientific. ISBN 978-981-4417-17-4. - Courant, E. D.; Livingston, M. S.; Snyder, H. S. (1952). "The Strong-Focusing Synchrotron—A New High Energy Accelerator". Physical Review 88 (5): 1190–1196. Bibcode:1952PhRv...88.1190C. doi:10.1103/PhysRev.88.1190. - Blewett, J. P. (1952). "Radial Focusing in the Linear Accelerator". Physical Review 88 (5): 1197–1199. Bibcode:1952PhRv...88.1197B. doi:10.1103/PhysRev.88.1197. 
- "The Alternating Gradient Concept". Brookhaven National Laboratory. - "World of Beams Homepage". Lawrence Berkeley National Laboratory. - Clery, D. (2010). "The Next Big Beam?". Science 327 (5962): 142–144. Bibcode:2010Sci...327..142C. doi:10.1126/science.327.5962.142. - Wright, M. E. (April 2005). "Riding the Plasma Wave of the Future". Symmetry Magazine 2 (3): 12. - Briezman, B. N.; et al. "Self-Focused Particle Beam Drivers for Plasma Wakefield Accelerators" (PDF). Retrieved 2005-05-13. - Peralta, E. A.; et al. "Demonstration of electron acceleration in a laser-driven dielectric microstructure". Retrieved 2014-05-01. - "An Interview with Dr. Steve Giddings". ESI Special Topics. Thomson Reuters. July 2004. - Chamblin, A.; Nayak, G. C. (2002). "Black hole production at the CERN LHC: String balls and black holes from pp and lead-lead collisions". Physical Review D 66 (9): 091901. arXiv:hep-ph/0206060. Bibcode:2002PhRvD..66i1901C. doi:10.1103/PhysRevD.66.091901. - Ellis, J. LHC Safety Assessment Group; et al. (5 September 2008). "Review of the Safety of LHC Collisions" (PDF). Journal of Physics G 35 (11): 115004. arXiv:0806.3414. Bibcode:2008JPhG...35k5004E. doi:10.1088/0954-3899/35/11/115004. CERN record. - Jaffe, R.; Busza, W.; Sandweiss, J.; Wilczek, F. (2000). "Review of Speculative "Disaster Scenarios" at RHIC". Reviews of Modern Physics 72 (4): 1125–1140. arXiv:hep-ph/9910333. Bibcode:2000RvMP...72.1125J. doi:10.1103/RevModPhys.72.1125. |Wikimedia Commons has media related to Particle accelerators.| - What are particle accelerators used for? - Stanley Humphries (1999) Principles of Charged Particle Acceleration - Particle Accelerators around the world - Wolfgang K. H. Panofsky: The Evolution of Particle Accelerators & Colliders, (PDF), Stanford, 1997 - P.J. Bryant, A Brief History and Review of Accelerators (PDF), CERN, 1994. - Heilbron, J.L.; Robert W. Seidel (1989). Lawrence and His Laboratory: A History of the Lawrence Berkeley Laboratory. Berkeley: University of California Press. ISBN 0-520-06426-7. - David Kestenbaum, Massive Particle Accelerator Revving Up NPR's Morning Edition article on 9 April 2007 - Ragnar Hellborg (ed.), ed. (2005). Electrostatic Accelerators: Fundamentals and Applications. Springer. ISBN 978-3-540-23983-3. - Fred's World of Science - Annotated bibliography for particle accelerators from the Alsos Digital Library for Nuclear Issues - Accelerators-for-Society.org, to know more about applications of accelerators for Research and Development, energy and environment, health and medicine, industry, material characterization.
Note: A briefer, edited version of this article appeared in D.M. Wegner & J. Pennebaker (Eds.) Handbook of Mental Control. Englewood Cliffs, N.J.: Prentice-Hall, 1993. THE FIVE DISTINCTIONS AND SEVEN PRINCIPLES OF MEMORY By way of background, we summarize here some general principles that seem to govern the operation of the memory system, as abstracted from the research literature. Space does not permit complete documentation of each of the assertions that follow. For a thorough treatment of the cognitive psychology of memory, see the texts by Anderson (1990), Baddeley (1976, 1990), Crowder (1976), Ellis and Hunt (1989), Klatzky (1980), and Loftus and Loftus (1976). Forms of Memory and the Classification of Knowledge Memory is the repository of knowledge stored in the mind, but not all knowledge is alike. One important distinction is between declarative and procedural knowledge (Anderson, 1976; Winograd, 1975). Declarative knowledge is knowledge of facts, knowledge that has truth value. Procedural knowledge is knowledge of the skills, rules, and strategies that are used to manipulate and transform declarative knowledge in the course of perceiving, remembering, and thinking. Within the domain of declarative knowledge, a further distinction can be drawn between episodic and semantic knowledge (Tulving, 1972, 1983). Episodic knowledge is autobiographical memory: such memories record a raw description of an event; but they also contain information about the spatiotemporal context in which the event took place, and the self as the agent or experiencer of the event. Semantic knowledge is generic and categorical: it is stored in a format that is independent of episodic context and self-reference. In forming episodic memories, the cognitive system draws on pre-existing world-knowledge stored in semantic memory; similarly, the accumulation of similar episodic memories may lead to the development of a context-free representation of what these events had in common. Declarative knowledge, whether episodic or semantic in nature, can be represented in propositional format: that is, as assertions about subjects, verbs, and objects; these abstract propositions, in turn, are connected in larger networks where nodes stand for concepts (or for propositions about concepts), and links stand for the relations between concepts. By contrast, procedural knowledge can be represented in productions: that is, statements having an if-then, goal-condition-action format. Individual productions are then linked into whole production systems that accomplish some mental or behavioral task. Of course, the goals and conditions in a production system are also nodes in declarative memory. When these nodes are activated by acts of perception, memory, and thought, the corresponding productions are executed (as in the ACT* theory of Anderson, 1983a). It should be noted that our focus on meaning-based propositional representations is for convenience only. A number of theorists, particularly Paivio (1971, 1986), have argued that knowledge is also represented in concrete, analog formats that preserve the perceptual structure of objects and events. Anderson (1983a) has argued for at least two forms of perception-based representations: spatial images, which preserve the spatial configurations of objects and their components (e.g., up/down, left/right, front/back); and linear strings preserve the temporal relations among events (e.g., first/last, before/after/inbetween). 
The use of spatial image representations is illustrated by classic research on mental rotation (e.g., Shepard & Cooper, 1982) and image scanning (Kosslyn, 1980). The use of linear string representations is illustrated by work on scripts in social judgment (Schank & Abelson, 1977; Wyer, Shoben, Fuhrman, & Bodenhausen, 1985) and memory for public events (Huttenlocher, Hedges, & Prohaska, 1988). For a critique of dual-code theories of memory, see Anderson (1978) and Pylyshyn (1981). Expressions of Memory Memories can be expressed in a variety of ways. In free recall, the person is simply asked to remember one or more events that occurred at a particular place and time; the term "free" indicates that there are no constraints on the manner in which these items are recalled. In serial recall, the person must recall the items in the order in which they occurred. In cued recall, the person is given specific prompts or hints concerning the item(s) to be recalled -- the first letter or first syllable, a category label, or a semantic associate. In recognition, the person is asked to examine a list of items, and to distinguish between those that occurred at a specified place and time (targets, or "old" items) and those that did not (lures, distractors, or "new" items). Particularly in the case of recognition, the subjects' responses may be accompanied by ratings of their confidence that they are correct. There are many variants and combinations of recall and recognition, but all such tests have one thing in common: they require the person to bring a memory into phenomenal awareness -- to become conscious of a past event, so that it can be described to someone else. However, there are other expressions of memory that do not require conscious recollection. Consider, for example, savings in relearning, in which the subjects show facilitation in relearning a list that had been studied sometime in the past. Significant savings are obtained regardless of whether the subject recalls or recognizes the list items. The same is true for positive and negative transfer effects. For example, if subjects study a list of words such as APPEAL, MINERAL, ELASTIC, BOULDER, and FOREST, and then are asked to complete three-letter stems with the first word that comes to mind, they are much more likely to complete the stem ELA___ with ELASTIC than with ELATED -- what is known as a priming effect. However, significant priming is obtained even in subjects who are densely amnesic for the wordlist itself. Thus, there are some expressions of memory that do not seem to require conscious recollection of a past event. On the basis of results such as these, Schacter (1987) has drawn a distinction between two forms of memory, explicit and implicit (for similar distinctions see Jacoby and Dallas, 1981; Eich, 1984; Johnson & Hasher, 1987; Richardson-Klavehn & Bjork, 1988). Explicit memory involves the conscious recollection of some previous episode. Explicit memory tasks make clear reference to some event in the past, and ask the subject to deliberately remember some aspect of the incident. By contrast, implicit memory is demonstrated by any change in experience, thought, or action that is attributable to some past event. Implicit memory tasks do not necessarily refer to prior episodes in the subject's life, and do not require him or her to remember any experiences, qua experiences, at all. A large body of research indicates that explicit and implicit memory are dissociable in at least two senses. 
First, studies of a variety of amnesic states associated with brain damage, electroconvulsive shock, general anesthesia, and hypnosis (see below) reveal that explicit memory can be impaired while implicit memory is spared. Second, elaborative processing at the time of encoding affects explicit but not implicit memory, while a change in modality of presentation at the time of test affects implicit but not explicit memory. There is some controversy about whether explicit and implicit memory reflect the operations of two independent memory systems in the brain (Roediger, 1990; Roediger, Weldon, & Challis, 1989; Schacter, 1987, 1990; Tulving & Schacter, 1990). In any case, the phenomena of implicit memory, reflecting the influence of past episodes on the performance of procedural and semantic memory tasks, as well as other forms of perceptual and language processing, comprise a clear case of the unconscious influence of a past event on current functioning (Kihlstrom, 1987, 1990; Kihlstrom, Barnhardt, & Tataryn, 1991). Although explicit memory is epitomized by recall and recognition, and implicit memory by priming effects, the distinction should not be drawn too sharply. Every explicit memory test has its implicit memory counterpart. This relationship is clearest in the case of cued recall. A subject who has studied a list including the word ELASTIC may be cued with the stem ELA- and asked to complete it either with a word from the study list (an explicit memory test) or with the first word that comes to mind (an implicit memory test). For recognition, the subject may be presented with ELASTIC and asked either whether it was on the list (explicit memory), or whether it was presented prior to a masking stimulus (implicit memory). Even for free recall, subjects may be asked either to remember the items of the list (explicit memory) or to report whatever words come into their minds (implicit memory). More to the point, perhaps, ostensibly explicit memory tasks have an implicit memory component, and vice-versa. For example, Mandler (1980) has argued that successful recognition of an item may reflect a feeling of familiarity mediated by priming effects (and thus close to implicit memory) in the absence of actual retrieval of the episode in which the item was presented (essentially explicit memory). And subjects may strategically use their conscious recollections of list items to generate a mental set, facilitating performance on perceptual identification, stem completion, and other ostensibly implicit memory tests. Stages of Processing In analyzing the success or failure of any attempt at remembering (or, for that matter, at forgetting), it is convenient to divide memory processing into three stages (Crowder, 1976). Encoding has to do with the acquisition of knowledge -- in the general case, the creation of a memory trace representing some experience. Storage has to do with the retention of trace information over a period of time. Retrieval has to do with the utilization of stored information in the course of experience, thought, and action. In principle, any instance of remembering or forgetting can be attributed to processes occurring at any of these stages, alone or in combination. Thus, an event can be forgotten because it has not been encoded; because it was lost from storage during the retention interval; or because an available memory was not retrieved. The Encoding Stage. 
Traditional theories of memory, as represented by the work of Ebbinghaus (1885), Thorndike's (1913) Law of Practice, and indeed the entire passive-association tradition of S-R learning theory, emphasize the role of repetition and rehearsal in memory encoding. However, classic studies by Craik and his colleagues (e.g., Craik & Lockhart, 1972; Craik & Tulving, 1975; Craik & Watkins, 1973) support a distinction between maintenance rehearsal and elaborative rehearsal. Maintenance rehearsal, or rote repetition, maintains items in an active state; elaborative rehearsal links new items to pre-existing knowledge. These experiments, and many others, illustrate the elaboration principle (Anderson & Reder, 1979): The probability of remembering an event is a function of the degree to which that event is related to pre-existing knowledge during processing. The elaboration principle applies to the processing of individual events; but memory is also improved if we connect individual events to each other. This effect is illustrated by other classic studies, on the role of associative clustering, category clustering, or subjective organization (Bower, 1970b; Mandler, 1967, 1979). That is, list items tend to be reorganized in memory, so that items which are associatively or conceptually related tend to be recalled together regardless of their order of presentation. Subjective organization is a similar phenomenon, except that the order of recall tends to be determined by an image or narrative that is idiosyncratic to the subject, rather than by widely shared semantic relationships. All three phenomena illustrate the organization principle: The probability of remembering an event is a function of the degree to which that event was related to other events during processing. The difference between elaborative and organizational processing corresponds to the distinction between item-specific information, which increases the distinctiveness of each item, and relational information, which highlights the similarities between items (Hunt & Einstein, 1981). The Storage Stage. Assuming that a memory trace has been adequately encoded, it is now available for use. So long as attention is devoted to the item, it remains in a high state of readiness, and is extremely likely to be retrieved; when the trace is no longer an object of attention, the probability of successful retrieval progressively diminishes. This empirical fact, known since Ebbinghaus, may be summarized as the time-dependency principle: The probability of remembering an event is a negative function of the length of time between encoding and retrieval. Of course, there are instances in which knowledge is preserved at remarkably high levels over extremely long periods of time, raising the question of a "permastore" (Bahrick, 1984). In general, there are two accounts of what happens over the retention interval. One view, which may be attributed to Ebbinghaus (1885) and Thorndike (1913), emphasizes the passive decay of unrehearsed memories, just as footprints are washed away by wind and tide. Another view, which forms the basis for the interference tradition in memory, asserts that other items, especially those newly encoded during the retention interval, weaken the target memory traces, or otherwise compete with their retrieval. Interference is dramatically illustrated in the fan effect, in which increases in the number of facts associated with a concept increase the time required to retrieve any one of these facts. 
Although there is some evidence for trace decay, and for the actual destruction of memory traces (Loftus & Loftus, 1980) once a trace has been encoded in memory, the chief cause of forgetting appears to be some sort of proactive or retroactive interference. The implication of the interference view is that, once a trace has been consolidated in memory, its storage is essentially permanent. The Retrieval Stage. Assuming that a memory trace has been adequately encoded, and has been preserved over the retention interval, it must be retrieved in order to answer a query or be used in other information-processing functions. However, memory fluctuates from trial to trial. For example, Tulving (1964) presented subjects with a list of words, followed by a series of memory tests, with no further opportunity to study. The number of items recalled remained essentially constant from trial to trial (about 50% of the original list). However, the exact items recalled varied: an item remembered on one trial might be forgotten on the next, and vice-versa. This finding illustrates the distinction between availability and accessibility (Tulving & Pearlstone, 1966): Items that are available in memory may not be accessible on any particular attempt at retrieval. To some extent, accessibility is affected by encoding and storage factors: elaborate, organized memories are more reliably accessible than those that are not; and recent memories are more reliably accessible than remote ones. However, accessibility is also determined by factors present at the time of retrieval. One important determinant of accessibility is the amount of cue information supplied with the query. Consider a comparison of three measures of episodic memory: as a general rule, free recall tests produce less memory than cued recall tests (Tulving & Pearlstone, 1966), while recognition tests produce the most. This is to be expected on the basis of the amount of information supplied with the retrieval cue. In free recall, the cue ("What were the words on the list you learned?") is very impoverished: at best, it specifies only the spatiotemporal context of the to-be-remembered event; in cued recall, additional information is supplied about the nature of the target event ("What were the animal names on the list?"); in recognition, the cue is a copy of the event itself ("Was one of the words LION or BEAR?"). Such comparisons yield the cue-dependency principle (Tulving, 1974): The probability of remembering an event increases with the amount of information supplied by the retrieval cue. But effective retrieval cues must also contain the right kind of information, as well as the right amount. Thus, the word AMBER, studied in a list of words including ORANGE and RED, may be retrieved when cued by the category COLOR, but not when cued by the category FIRST NAME OF A GIRL OR WOMAN. This finding illustrates the encoding specificity principle (Tulving & Thomson, 1973): The probability of remembering an event is a function of the extent to which cues processed at the time of encoding are also processed at the time of retrieval. Encoding specificity appears to underlie the phenomena of state-dependent retention, in which psychoactive drugs such as alcohol or barbiturates are administered during encoding or retrieval: in these cases, memory is best when there is a match between the state in which the material was studied, and the state in which memory is tested (for a review, see Eich, 1980, 1989). 
Similar effects have been found for environmental setting (e.g., Smith, 1988), and emotional state (e.g., Eich & Metcalfe, 1989). Such "context-dependent memory" effects are themselves cue-dependent: they are typically found with free-recall tests, and only rarely on tests of cued recall and recognition (Eich, 1980). This suggests that contextual information is relatively weak, and can be swamped by other cues (Eich, 1980; Kihlstrom, 1989; Kihlstrom, Brenneman, Pistole, & Shor, 1985). Students taking multiple-choice exams are not aided by being seated in the same room in which they heard the lecture (and, in any event, much of the test concerns textbook material, which presumably was encoded in the library or dormitory). Nevertheless, such context effects do illustrate the importance of congruence between encoding and retrieval conditions, which is what the encoding specificity principle is all about. Memory for particular events is importantly determined by our expectations and beliefs, represented as generic knowledge structures known as schemata. The first to appreciate this point was Bartlett (1932), in his attack on the associationistic tradition represented by Ebbinghaus and Thorndike. The important role played by such organized, pre-existing knowledge illustrates the schematic processing principle (Hastie, 1980): The probability of remembering an event is a function of the degree to which that event is congruent with pre-existing expectations and beliefs. It turns out, however, that the precise relationship between event and schema is important. Some events are schema-congruent, meaning that they would be expected by the schema in place; others are schema-incongruent, or counterexpectational; still others are schema-irrelevant, meaning that they do not bear on the schema one way or the other. Although considerable research converges on the conclusion that schema-congruent events are remembered better than schema-irrelevant ones, Hastie and his colleagues (e.g., Hastie & Kumar, 1979; Hastie, 1980, 1981) have pointed out that schema-incongruent items are remembered best of all. The U-shaped function relating schema-congruence and memorability appears to find its explanation in two different processes. Schema-incongruent events, because of their surprise value, receive extra processing at the time of encoding, as the perceiver tries to take account of them. And at the time of retrieval, the subject can draw on the schema itself to generate cues that will help gain access to schema-congruent events. Schema-irrelevant events enjoy neither of these advantages, and thus are poorly remembered. Bartlett's view of memory as schema-driven lies at the foundation of his view of remembering as reconstructive rather than reproductive. Just as perceiving an object is sometimes more like painting a picture than inspecting a photograph (Neisser, 1967, 1976), so remembering an event is more like writing a book than retrieving one from the shelf. Some evidence for the reconstructive nature of remembering is provided by Bransford and Franks' (1971) studies of sentence memory, in which subjects falsely recognize sentences whose meanings are consistent with those that they actually studied; and by studies by Loftus (1978) and others on the effects of leading questions and other misinformation on eyewitness testimony. 
Although Loftus' notion that post-event misinformation overwrites, and replaces, event information in memory has been strongly challenged (e.g., McCloskey & Zaragoza, 1985), nothing contradicts the notion that memory can be misled, confused, and biased by changes in perspective and other events occurring after the fact. These errors, confusions, and biases illustrate the reconstruction principle: A memory of an event reflects a blend of information retrieved from a specific trace of that event with knowledge, expectations, and beliefs derived from other sources. These five distinctions -- between declarative and procedural knowledge, episodic and semantic memory, explicit and implicit expressions, the stages of processing, and availability and accessibility -- and seven principles -- elaboration, organization, time-dependency, cue-dependency, schematic processing, encoding specificity, and reconstruction -- provide a sort of user's manual for the human memory system. We will have many occasions to observe their operation as we examine the prospects for the self-regulation of memory. MNEMONICS AND MNEMOTECHNICS Given these principles, it would seem that the prospects for the self-control of memory functioning would be relatively good, at least on the positive end. There are things we can do to promote remembering and prevent forgetting: for example, elaborating and organizing the material at the time of encoding, or supplying sufficient and appropriate cues at the time of retrieval. Since the time of the ancient Greeks, these sorts of strategies have been codified in the form of a set of mnemonic devices, or techniques to aid in memory. The history of mnemonic devices, from ancient times through the Renaissance, has been documented authoritatively by Yates (1966). In ancient Greece and Rome, when parchment was expensive and printing unknown, some system of memorizing was required by poets and orators, who had to deliver long addresses with a high degree of accuracy. Along with invention, disposition, elocution, and pronunciation, memory was one of the five aspects of rhetoric defined by Cicero in his De oratore of 55 B.C. The classical system of memory aids is commonly attributed to the poet Simonides of Ceos, who dramatically demonstrated their use by identifying the victims of a disaster through his knowledge of where they had been sitting at a banquet table. Simonides relied on the mnemonic of places and images, by which familiar places were selected as storage spaces for the items, represented by images, comprising that which we wish to remember. The techniques of artificial memory were referred to in Aristotle's treatise, De memoria et reminiscentia (4th century B.C.), and codified in the anonymous Ad Herennium ("To Herennius") of 82 B.C. The author of Ad Herennium, commonly (but wrongly, says Yates, 1966) thought to be Cicero himself, presents a detailed set of rules for the selection of places and images for memorizing. Ad Herennium formed the basis of all subsequent treatments of the ars memorativa, including Cicero's De oratore and Quintilian's Institutio oratoria (1st century A.D.). The mnemotechnics of Ad Herennium were revived in medieval Europe by Albertus Magnus (in De bono) and by Thomas Aquinas (in Summa Theologiae) -- both seeing artificial memory as an aspect of the virtue of prudence. In 1596, the Jesuit missionary Matteo Ricci brought the system of places and images to China, as an example of the powers to be acquired with conversion to Christianity (Spence, 1984). 
The method of places and images also forms the basis of the most popular mnemonics of the modern era, the method of loci, the pegword technique, the link method, bizarre imagery, and the keyword system (Bellezza, 1981; for popular treatments, see Cermak, 1976; Herrmann, 1988; Higbee, 1977; Lorayne & Lucas, 1974). The method of loci is the "places and images" technique in pure form: subjects mentally associate each item to be remembered with a familiar spatial location on a mental map. In the pegword system, subjects first memorize a simple rhyming scheme -- ONE-BUN, TWO-SHOE, THREE-TREE, etc. -- and then associate an image of each to-be-remembered item with the concrete objects referred to. In the link method, the pegwords are dropped, and subjects are instructed to associate adjacent items together in an interactive image -- the more unusual, even bizarre, the better. Finally, in the keyword mnemonic for learning foreign-language vocabulary, a foreign word is represented by a substitute word in the native language, and the two words are associated by means of visual imagery. There are also verbal mnemonic systems, such as the use of the acronym ROY G BIV (in the United States) or the sentence RICHARD OF YORK GAINS BATTLES IN VAIN (in England) to remember the colors of the visible spectrum in order of wavelength; EVERY GOOD BOY DOES FINE for the lines of the treble clef in music, and GOOD BOYS DO FINE ALWAYS (or GOOD BOYS DESERVE FUDGE ALWAYS, depending on your childhood piano teacher) for the corresponding lines in the bass clef; the spaces are F-A-C-E for the treble clef and ALL COWS EAT GRASS for the bass. Such systems are familiar in the training of health-care providers, who must often memorize ordered lists of things like bones and muscles. SOME CRIMINALS HAVE UNDERESTIMATED ROYAL CANADIAN MOUNTED POLICE gives the bones of the upper limbs (scapula, clavicle, humerus, ulna, radius, carpals, metacarpals, and phalanges); LAZY ZULU PURSUING DARK DAMOSELS gives the stages of cell division (leptotene, zygotene, pachytene, diplotene, and diakinesis); LAZY FRENCH TART LYING NAKED IN ANTICIPATION gives the order of cranial nerves in the superior orbital fissure of the skull (lacrimal, frontal, trochlear, lateral, nasociliary, internal, and abducens). By the way, the racism and sexism of these mnemonics is in the original, and it is probably not an accident that these sentences were originally designed to serve the professional advancement of white men. There is probably an entire sociological thesis to be written on the role of racial, ethnic, gender, and sexual categories in mnemonic devices. Finally, the following rhyme offered by Baddeley (1976) gives the value of pi to 20 decimal places (each digit given by the number of letters in the corresponding word):
Pie.
I wish I could determine Pi
Eureka cried the great inventor
Christmas Pudding Christmas Pie
Is the problem's very center.
One important feature of the modern approach, however, is that it puts folk wisdom and rhetorical claims to empirical test (Bower, 1970a; Higbee, 1978, 1988; McDaniel & Pressley, 1987; Morris, 1978; Pressley & Mullally, 1984; Pressley & McDaniel, 1988; Roediger, 1980; Wood, 1967). So, for example, Bugelski, Kidd, and Segmen (1968) showed that the pegword system actually works, except under those conditions where subjects are not given enough time to form appropriate images. 
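Baddeley's rhyme above is easy to check mechanically: strip the punctuation, count the letters in each word, and compare the resulting digit string against a reference expansion of pi. A minimal sketch (note that a letter-count scheme of this kind cannot encode a 0, and a word of ten or more letters would break it):

```python
import re

# Decode a "letter-count" pi mnemonic: each word's letter count gives one digit.
RHYME = (
    "Pie. I wish I could determine Pi "
    "Eureka cried the great inventor "
    "Christmas Pudding Christmas Pie "
    "Is the problem's very center."
)
PI_DIGITS = "314159265358979323846"  # pi to 20 decimal places, decimal point omitted

def decode(mnemonic):
    """Return the digit string implied by the letter counts of the words."""
    words = mnemonic.split()
    return "".join(str(len(re.sub(r"[^A-Za-z]", "", w))) for w in words)

if __name__ == "__main__":
    digits = decode(RHYME)
    print(digits)               # 314159265358979323846
    print(digits == PI_DIGITS)  # True
```

Running the sketch confirms that the twenty-one words reproduce 3.14159265358979323846.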
Bower and Reitman (1972) showed that the same pegs could be used to memorize several different lists of words, so long as each new image was included in a compound of previously formed images. Under these conditions of progressive elaboration, the pegword system and the method of loci were equally effective. Roediger (1980) confirmed this finding, and showed further that the loci, pegword, and link methods were superior to rote rehearsal, or the formation of mental images of the objects represented by individual words. Wollen, Weber, and Lowry (1972) showed that while mnemonically effective images interact, they need not be bizarre; in fact, bizarre images may even interfere with memory (Collyer, Jonides, & Bevan, 1972). Regardless of these qualifications, ample testimony to the effectiveness of mnemonic techniques is provided by single-case studies of amateur and professional mnemonists (Brown & Deffenbacher, 1975; Gordon, Valentine, & Wilding, 1984; Wilding & Valentine, 1985). Shereshevskii (S.), the subject of the classic study by Luria (1968), possessed a remarkable talent for synesthesia, which apparently allowed him to form extremely rich, distinctive images of to-be-remembered material. He also made extensive use of the method of loci, drawing on his detailed knowledge of Moscow. He also used a variety of verbal and semantic strategies, including the grouping of nonsense syllables to produce pseudo-Russian letter strings. Other mnemonists who have been studied, including W.J. Bottell, known in England as "Datas" (1904) and the model for "Mr. Memory" in Alfred Hitchcock's The 39 Steps, relied heavily on visual imagery. Similarly, A.C. Aitken, a mathematician at the University of Edinburgh with a reputation as a lightning calculator, employed verbal recoding, as well as rhythm (Hunter, 1962, 1977): he was able to recall the value of pi to 1,000 decimal places! Subject SF, an athlete studied by Chase and Ericsson (1982), learned to memorize strings of up to 81 digits, after only one presentation trial, by converting chunks of digits into running times that were meaningful to him. On the other hand, Subject VP, a store clerk studied by Hunt and Love (1972), generally neglected the classic mnemonic techniques, relying instead on verbal recoding strategies and rote memorization. Interest in mnemonic devices continues, especially among those concerned with the treatment of individuals with learning and memory disorders: mentally retarded and learning disabled children and adults, the aged, and the brain-damaged; mnemonics are also popular with teachers of foreign language vocabulary. An extremely interesting aspect of research on mnemonics concerns cross-cultural differences in memory, particularly comparisons between literate and preliterate cultures (Cole & Gay, 1972; Cole, Gay, Glick, & Sharp, 1971; Wagner, 1978a, 1978b). Clearly, the effectiveness of these mnemonic devices illustrates the principles of encoding and retrieval discussed earlier. The method of loci and the pegword system connect list items to things that are already known -- in our terms, they promote elaboration. Similarly, the images involved in the pegword and link systems provide for elaborate, rich, and distinctive encodings of single items. The organizational principle is illustrated by the pegword system, in which the sound of the integer serves as a cue for the name of the pegword; and then the pegword provides a contextual cue for retrieval of the list item itself -- as does the familiar place in the method of loci. 
In the link method, successive items are grouped together in images -- another instance of organizational processing. Perhaps the relative inefficacy of bizarre images reflects the difficulty of retrieving unusual or unfamiliar interitem links. Finally, the success of interactive images in bringing list items to mind illustrates the principles of cue-dependency and encoding specificity in retrieval.

At the same time, the utility of mnemonic devices would seem to be limited. Perhaps reflecting their origins in the needs of poets and orators with limited access to paper and printing, they are best at preserving the order in which items were presented. When order need not be preserved, their advantages over other encoding strategies are reduced (Roediger, 1980). More important, some mnemonic devices require enormous expenditures of cognitive effort on the part of subjects, both in memorizing the mnemonic, and in memorizing the material that the mnemonic is supposed to help reproduce. Consider the remark of one of Matteo Ricci's pupils, as reported by Spence (1984): "It takes a lot of memory to remember these things". Is the "Zulu" sentence for bones or nerves? And what was that "pi" jingle (and who needs to know that value to more than four digits, anyway)?

Although the phenomenon of hypermnesia (the net increase in recall across successive test trials) is no longer in doubt, controversy continues over the mechanisms responsible for the effect. For example, Roediger and his colleagues have suggested that hypermnesia is mediated by the increased time permitted for recall (Roediger & Thorpe, 1978) and increasing practice with retrieval (Roediger & Payne, 1982). Similarly, early findings from Erdelyi's laboratory yielded hypermnesia when pictures, but not words, served as the to-be-remembered items (Erdelyi & Becker, 1974). This situation led Erdelyi (1982, 1984, 1988) to suggest that pictorial materials were privileged with respect to hypermnesia, and to speculate that imaginal processing is an important mediator of the effect. However, some experiments (e.g., Belmore, 1981; Erdelyi, Buschke, & Finkelstein, 1977; Roediger & Thorpe, 1978) have obtained hypermnesia for verbal materials, so the difference between verbal and nonverbal representations, or verbal vs. nonverbal processing, cannot be critical. A series of experiments by Mross, Klein, and Kihlstrom has shed more light on the conditions under which hypermnesia for words, and perhaps hypermnesia in general, occurs (Klein, Loftus, Kihlstrom, & Aseron, 1989; Mross, Klein, Loftus, & Kihlstrom, 1991). Mross et al. (1991), replicating the procedures of Erdelyi and Becker (1974), found significant hypermnesia for both pictures and words, although the magnitude of the effect was greater in the former case. In a second study, their stimulus materials shifted from words and pictures representing concrete objects to trait adjectives representing highly abstract personality descriptors. Following the "levels of processing" paradigm of Craik and Lockhart (1972), independent groups of subjects studied the items under one of four conditions: orthographic, phonemic, semantic, and self-referent; they then completed a series of two or three recall trials without any further study of the list. Significant hypermnesia was observed only in the self-referent condition. A third study replicated this finding, substituting an imagery task for the phonemic condition of Experiment 2. A fourth experiment compared just the phonemic and self-referent conditions, and found evidence of hypermnesia only in the latter. A final experiment by Klein et al.
(1989) showed that pleasantness ratings (an elaborative task involving the processing of single items) increase the intertrial recovery component of hypermnesia, while category sorting (an organizational task involving the processing of interitem associations) decreases the intertrial forgetting component. Hypermnesia results from a net advantage of intertrial recovery over intertrial forgetting. Thus, both elaborative and organizational processing promote hypermnesia, though the end is accomplished by different means in the two cases.

The findings of these experiments speak to a number of theoretical controversies concerning the nature of the hypermnesia effect. For example, Erdelyi (1982, 1984, 1988) has suggested that imaginal (nonverbal) processing is critical for the occurrence of hypermnesia. Mross et al. (1991) obtained hypermnesia for words in four separate experiments, and Klein et al. (1989) added a fifth, even though no imagery instructions were given to the subjects. Of course, it might be the case that the subjects spontaneously engaged in such a recoding process. However, the use of highly abstract personality trait terms as stimuli in the work of Mross et al. (1991), and the failure of explicit imagery instructions to produce hypermnesia, diminish this possibility to a considerable extent. The effects of imaginal processing may be mediated by a more general effect of elaborative processing at the time of encoding. Imaginal processing may be a highly effective way to produce elaborate encodings, but other processing tasks could be equally or more effective in this regard (Belmore, 1981; Klein et al., 1989). In the final analysis, Mross' and Klein's experiments indicate that the amount of hypermnesia observed with words, at least, is a function of the manner in which they are processed: self-referent processing yielded hypermnesia, while orthographic, phonemic, and semantic processing did not. Elaborative processing promotes intertrial recovery, while organizational processing prevents intertrial forgetting. These results join those of others who have found effects of encoding variables on hypermnesia within an intentional-learning paradigm -- although they differ in that significant hypermnesia was not obtained in the semantic condition of Experiment 2 (Belmore, 1981; Roediger, Payne, Gillespie, & Lean, 1982).

On the other hand, Roediger and his colleagues have argued that retrieval factors are critical in producing hypermnesia. Roediger (1982; Roediger et al., 1982; see also Roediger & Thorpe, 1978) noted that cumulative recall functions, of which hypermnesia could be considered a special case (in which intertrial recovery exceeds intertrial forgetting), have the property that the higher the asymptote of recall, the more slowly that asymptote is approached. Thus, according to their argument, hypermnesia is more likely to be shown in cases where initial levels of recall are high. Pictures generally show higher initial recall than words; and words subject to imaginal or elaborative encoding show higher initial recall than those that are not. However, they argue that hypermnesia is not due to encoding conditions per se; rather, any condition resulting in high initial levels of recall would have the same effect. Thus, they showed that high levels of cumulative recall -- their characterization of hypermnesia -- are more likely to be obtained on a semantic memory task involving the generation of instances from large rather than small categories (Roediger et al., 1982).
On the other hand, Mross et al. (1991, Experiment 4) arranged their stimulus materials in such a way as to reverse the normal relation between level of processing and level of recall. Paralleling the set-size manipulation of Roediger et al. (1982), four times as many items were presented for a phonemic judgment as for a semantic one. More phonemic than semantic items were recalled on the initial trial, and overall, indicating that asymptotic levels of recall were higher in the phonemic condition. Nevertheless, no hypermnesia was observed in the phonemic condition. These results are not consistent with the hypothesis that level of recall determines the extent of hypermnesia; but they are consistent with the hypothesis that encoding factors play an important role.

Of course, as Erdelyi (1982) argued and Roediger and Challis (1988) now agree, cumulative recall is not the same as hypermnesia (see also Payne, 1986, 1987). Almost any set of conditions will show an incremental recall function, reflected in the appearance of new items over trials, but not all conditions yield hypermnesia, reflected in a net increase in recall from trial to trial. In terms of the usual repeated-testing procedure, cumulative recall is sensitive only to intertrial recovery. The problem is that intertrial recovery is necessary, but not sufficient, for hypermnesia to occur: what is needed additionally is for intertrial recovery to exceed intertrial forgetting. Intertrial forgetting is the key to hypermnesia, and cumulative recall functions ignore this factor altogether.

In any event, the findings of Roediger et al.'s set-size experiment may be amenable to an alternative explanation in terms of encoding rather than retrieval processes. In the specific categories employed by Roediger, category size seems to have been confounded with the extent of interitem associations. The greatest cumulative recall was observed for sports and the least for U.S. presidents, with birds falling somewhere in between. Accessing one item from the sports category, then, would be very likely to lead to the retrieval of another item from that category, and so on. In addition, more items in the sports category could serve as subject-generated retrieval cues, suggesting sports that have not yet been retrieved. Thus, the important factor is not the number of items in the set, but rather the richness of the associative network linking the items to each other. Consistent with this point, research by Klein et al. (1989) indicates that tasks promoting well-organized and richly elaborated encodings are powerful determinants of hypermnesia for verbal material. In their study, reliable hypermnesia for word lists was found with tasks encouraging either elaborative or organizational processing at encoding; and when encoding conditions encouraged both elaborative and organizational processing, more hypermnesia was found than for either type of processing alone. It should be recalled that the recovery of previously unrecalled items is ubiquitous in multitrial experiments; thus, cumulative recall always increases across trials, and hypermnesia occurs in those instances where intertrial recovery exceeds intertrial forgetting. In the final analysis, Klein et al. found that both elaborative and organizational activity contributed to hypermnesia: elaborative activity promotes intertrial recovery, while organization prevents intertrial forgetting.
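The bookkeeping behind these repeated-testing measures can be made concrete with a small illustration. The sketch below (in Python, using made-up recall protocols rather than data from any of the studies cited) computes intertrial recovery, intertrial forgetting, and the net change that defines hypermnesia for a pair of successive free-recall trials.

# Illustrative only: the two "trials" are hypothetical recall protocols,
# not data from any experiment discussed in the text.

def intertrial_change(trial_1, trial_2):
    """Return items recovered, items forgotten, and net change across two recall trials."""
    t1, t2 = set(trial_1), set(trial_2)
    recovered = t2 - t1   # recalled on trial 2 but not trial 1 (intertrial recovery)
    forgotten = t1 - t2   # recalled on trial 1 but not trial 2 (intertrial forgetting)
    net = len(recovered) - len(forgotten)   # hypermnesia requires net > 0
    return recovered, forgotten, net

trial_1 = ["dog", "chair", "apple", "river"]
trial_2 = ["dog", "chair", "apple", "cloud", "violin"]

recovered, forgotten, net = intertrial_change(trial_1, trial_2)
# recovered -> {'cloud', 'violin'}; forgotten -> {'river'}; net -> +1 (hypermnesia)

Cumulative recall, by contrast, credits every item recalled on any trial, so it registers the recoveries but ignores the forgetting; this is why an increasing cumulative recall curve is compatible with flat or even declining net recall.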
In summary, studies of hypermnesia offer a new perspective on the enhancement of memory, by showing that items, once lost, are not necessarily gone forever. Continued efforts at retrieval will almost always yield previously forgotten material, even in the absence of changes in cue information provided to the subject (such as are accomplished by shifts from free recall to cued recall or recognition). However, there are clear limits on the magnitude of this effect. Under ordinary circumstances, the number of initially forgotten items that are subsequently recovered is equalled, or even surpassed, by the number of initially remembered items that are subsequently forgotten. Thus, in many cases, net recall remains constant at best; more likely it decreases, producing the familiar phenomenon of time-dependent forgetting. But just as there are strategies that can be employed to promote good initial recall, there are strategies that enhance intertrial recovery and diminish intertrial forgetting. Item gains are enhanced by elaborative activity, which produces a rich, distinctive memory trace that is more likely to be contacted by search and retrieval processes. Similarly, item losses are reduced by organizational activity, which focuses on the similarities among items, and thus enhances the likelihood that recollection of one item will serve as a cue for the retrieval of another one.

The popular reputation of hypnosis as a means of transcending one's normal voluntary capacity -- as reflected in the "generation of hypers" noted by Marcuse (1959) -- coupled with the fact that hypnotic suggestions can produce profound alterations in cognitive functioning, has led some investigators to suggest that it can be employed to enhance memory, over and above whatever effects can be achieved by the use of mnemonic devices and other strategies available to nonhypnotized subjects. This technique was employed by Breuer and Freud (1893-1895) in their Studies on Hysteria, and was revived in World War I and again in World War II as an adjunct to brief hypnotherapy for war neurosis (Grinker & Spiegel, 1945; Watkins, 1949; for a particularly vivid portrayal of this technique, see John Huston's 1944-1945 propaganda film, Let There Be Light). More recently, hypnotic techniques have been employed in "past lives therapy", an occult practice in which patients search for the source of their present troubles in the sins and misfortunes of their previous existences; and in forensic situations, where witnesses and victims, and even suspects and defendants, may be hypnotized in the process of gathering evidence in civil and criminal cases.

Hypnosis and Learning

Although most attention in this area focuses on the effects of hypnosis on the retrieval of memories initially encoded in the normal waking state, a number of studies have examined the question of whether learning itself can be enhanced through hypnosis. Certainly this line of research received some impetus from the reports of many 19th-century authorities that mesmerized or hypnotized subjects gave evidence of the transcendence of normal voluntary capacity: changes ranging from increases in verbal fluency and physical strength to clairvoyance. Nevertheless, an early study by Gray (1934) answered the question only weakly in the affirmative: a small group of poor spellers improved their spelling ability somewhat when the learning occurred in hypnosis.
Similarly, Sears (1955) reported that subjects who learned Morse Code in hypnosis made fewer errors than those whose learning took place under nonhypnotic conditions. More dramatic results were reported in a series of studies by Cooper and his associates, employing hypnotic time distortion and hallucinated practice. Briefly, subjects were asked to hallucinate engaging in some activity, and at its conclusion were given suggestions that a long interval had passed (e.g., 30 minutes) when the actual elapsed time had been considerably shorter (e.g., 10 seconds). The idea is that this expansion of subjective time effectively increases the amount of study, or practice, that could be performed per unit of objective time. Cooper and Erickson (1950, 1954) reported, for example, that hallucinated practice led to marked improvement in a subject's ability to play the violin. A more systematic study by Cooper and Rodgin (1952), concerned with the learning of nonsense syllables, also gave positive results. Unfortunately, there were no statistical tests of the differences between treatment conditions; moreover, the effects of hypnotic time distortion and hallucinated practice were seen only on the immediate test: the superiority of hypnosis virtually disappeared at retest, 24 hours later. Another study, by Cooper and Tuthill (1952), found no objective improvements in handwriting with hallucinated practice under time distortion, even though the subjects generally perceived themselves as having improved. A more recent experiment also yielded negative results (Barber & Calverley, 1964). On the other hand, Krauss, Katzell, and Krauss (1974) reported positive findings in a study of verbal learning: hypnotized subjects were allotted three minutes to study the list, but were told they had studied it for 10 minutes. Unfortunately, Johnson (1976) and Wagstaff and Ovenden (1979) failed to replicate these results: in fact, their subjects did worse under time distortion than in control conditions. In the most comprehensive study to date, St. Jean (1980) repeated the essential features of the Krauss et al. design, paying careful attention to details of subject selection and the wording of the suggestion. Although the highly hypnotizable subjects reported that they experienced distortions of the passage of time, as suggested, there were no effects on learning.

The combination of time distortion and hallucinated practice is ingenious, but of course it makes some assumptions that are not necessarily valid. First, can mental practice substitute for actual physical practice? In fact, there is considerable evidence for this proposition (Feltz & Landers, 1983). Because hypnotic hallucinations are closely related to mental images, there is no reason to think that hallucinated practice would not be effective as well. But time distortion is another matter: the assumption is that the hallucination of something is the same as the thing itself, and there is no reason to think that this is the case. In fact, such an assumption flies in the face of a wealth of literature on hypnotic hallucinations, which shows that they are inadequate substitutes for the actual stimulus state of affairs (Sutcliffe, 1960, 1961; Kihlstrom & Hoyt, 1988). Thus, while hypnosis, and hypnotic suggestion, can produce distortions in time perception, just as they can produce other distortions in subjective experience, these distortions do not necessarily have consequences for learning and memory (St. Jean, 1989).
A rather different approach to this problem has been taken by investigators who have offered subjects direct suggestions for improved learning, without reference to time distortion or hallucinated practice (e.g., Fowler, 1961; Parker & Barber, 1964). Unfortunately, interpretation of such studies is made difficult by a number of methodological considerations (for a review of methodological problems in hypnosis research, see Sheehan & Perry, 1977). For example, the induction of hypnosis might merely increase the motivation of subjects to engage in the experimental task (Barber, 1969; London & Fuhrer, 1961), independent of any effects of hypnosis per se. Moreover, subjects may respond to the demand characteristics of such an experiment by holding back on their performance during baseline tests and other nonhypnotic conditions, thus manifesting an illusory improvement under hypnosis (Scharf & Zamansky, 1963; Zamansky, Scharf, & Brightbill, 1964). Some of these problems have been addressed by a special paradigm introduced by London and Fuhrer (1961), in which hypnotizable subjects are compared to objectively insusceptible subjects who have been persuaded that they are responsive to hypnosis. To this may be added procedures adopted by Zamansky and Scharf (Scharf & Zamansky, 1963; Zamansky, Scharf, & Brightbill, 1964) to evaluate order effects driven by expectancies generated by the comparison of hypnotic and nonhypnotic conditions. Studies of muscular performance using the unadorned London-Fuhrer design have generally found that when subjects are given hypnotic exhortations for enhancement, equivalent performance levels are shown by hypnotizable subjects and insusceptible subjects who believe that they are hypnotizable (e.g., Evans & Orne, 1965; London & Fuhrer, 1961). Similar results have been obtained for measures of rote learning (London, Conant, & Davison, 1965; Rosenhan & London, 1963; Schulman & London, 1963). Thus, the available evidence suggests that hypnotic suggestions do not enhance the learning process. However, it should be noted that most of these studies have used an hypnotic induction based on suggestions for relaxation and sleep, which might interfere with both motor performance and learning. Relaxation is not necessary for hypnosis, however (Banyai & Hilgard, 1976), and it remains possible that different results would be obtained if suggestions for an active, alert form of hypnosis were given instead. Moreover, suggestions that capitalize on the hypnotized subject's capacity for imaginative involvement may prove to be better than mere exhortations (Slotnick, Liebert, & Hilgard, 1965). Thus, the issue of the hypnotic enhancement of learning and performance should not be considered closed.

Hypnosis and Remembering

Laboratory studies of hypermnesia have a history extending back to the beginnings of the modern period of hypnosis research (for other reviews, see Erdelyi, 1988; Smith, 1983). For example, Young (1925, 1926) taught his subjects lists of nonsense syllables in the normal waking state, and then subsequently tested recall in and out of hypnosis, each time motivating subjects for maximal recall. There was no advantage of hypnosis over the waking test. Later experiments employing nonsense syllables also failed to find any effect of hypnosis (Baker, Haynes, & Patrick, 1983; Barber & Calverley, 1966; Huse, 1930; Mitchell, 1932). By contrast, studies employing meaningful linguistic or pictorial material have sometimes shown hypermnesia effects.
Stalnaker and Riddle (1932) tested college students on their recollections for prose passages and verse that had been committed to memory at least one year previously. Testing in hypnosis, with suggestions for hypermnesia, resulted in a significant enhancement over waking recall. These findings have been confirmed by other investigators who tested memory for prose, poetry, filmed material, and real-world memories (DePiano & Salzberg, 1981; Hofling, Heyl, & Wright, 1971; Young, 1926). In the first direct comparison of nonsense with meaningful material, White, Fox, and Harris (1940) found that hypermnesia suggestions resulted in a striking improvement in memory for the poetry and travelogue material, but had no effect on memory for nonsense syllables. Similar results were also obtained by Rosenthal (1944) and Dhanens and Lundy (1975), who compared nonsense syllables with poetry and with prose, respectively.

On the basis of this kind of evidence, it might be concluded that laboratory studies tend to support the conclusions from uncontrolled case studies. However, it should be noted that the effects achieved in the laboratory, while sometimes statistically significant, are rarely dramatic. Moreover, it is fairly clear that any gains obtained during hypnosis are not attributable to hypnosis per se, but rather to normal hypermnesia effects of the sort described earlier. Thus, at least four investigations (Nogrady, McConkey, & Perry, 1985; Register & Kihlstrom, 1987, 1988; Whitehouse, Dinges, Orne, & Orne, 1991), adapting the hypermnesia paradigm introduced by Erdelyi and Becker (1974), found significant increments in memory for pictures or words in trials conducted during hypnosis; but these increments were matched, if not exceeded, by gains made by control subjects tested without hypnosis. Two studies have observed small gains in memory attributable to hypnosis (Shields & Knox, 1986; Stager & Lundy, 1985), but neither finding has been replicated (Lytle & Lundy, 1988). Moreover, Register and Kihlstrom (1987, 1988) found that levels of hypermnesia were no higher in hypnotizable subjects than in those who were insusceptible to hypnosis -- thus strengthening the inference that whatever improvements occurred were the result of nonhypnotic processes.

Most important, it seems clear that any increase in valid memory may be accompanied by an equivalent or greater increment in confabulations and false recollections. In the experiment by Stalnaker and Riddle (1932), for example, hypnosis produced a substantial increase in confabulation over the normal waking state, so that overall memory accuracy was very poor. Apparently the hypnotized subjects were more willing to attempt recall, and to accept their productions -- however erroneous they proved to be -- as reasonable facsimiles of the originals. These conclusions are supported by more recent experiments by Dywan (1988; Dywan & Bowers, 1983) and Nogrady et al. (1985), who found that hypnotic suggestions for hypermnesia produced more false recollections by hypnotizable than insusceptible subjects. Whitehouse et al. (1991) found that hypnosis increased the confidence associated with memory reports that had been characterized as mere guesses on a prehypnotic test. Dywan and Bowers (1983) have suggested that hypnosis impairs the process of reality monitoring, so that hypnotized subjects are more likely to confuse imagination with perception (Johnson & Raye, 1981).
Proponents of forensic hypnosis often discount these sorts of findings on the ground that they are obtained in sterile laboratory investigations that bear little resemblance to the real-world circumstances in which hypnosis is actually used -- an argument that closely resembles that made by some researchers allied with the "ecological memory" movement (for critiques, see Banaji & Crowder, 1989, 1991; for more positive views, see Aanstoos, 1991; Bahrick, 1991; Conway, 1991; Ceci & Bronfenbrenner, 1991; Gruneberg, Morris, & Sykes, 1991; Loftus, 1991; Morton, 1991; Neisser, 1978, 1991; for attempts at reconciliation, see Bruce, 1991; Klatzky, 1991; Roediger, 1991; Tulving, 1991). However, the evidence supporting this assertion is rather weak. Reiser (1976), a police department psychologist who has trained many investigators in hypnosis, has claimed that the vast majority of investigators who tried hypnosis found it to be helpful; but such testimonials cannot substitute for actual evidence. In fact, a remarkable doctoral dissertation by Sloane (1981), conducted under Reiser's supervision, randomly assigned witnesses and victims in actual cases being investigated by the Los Angeles Police Department to hypnotic and nonhypnotic conditions, and found no advantage for hypnosis. A study by Timm (1981), in which police officers themselves were witnesses to a mock crime (after having been relieved of their firearms through a ruse!), gave similar results. A later study by Geiselman, Fisher, MacKinnon, and Holland (1985), employing very lifelike police training films as stimuli and actual police officers as investigators, did show some advantage for hypnosis over an untreated control condition; however, the benefits of hypnosis were matched by unhypnotized subjects led through a "cognitive interview" capitalizing on various cognitive strategies (unfortunately, there was no comparison condition in which the cognitive interview was administered during hypnosis). Thus, the available evidence does not indicate that hypnosis has any privileged status as a technique for enhancing memory. To paraphrase Nogrady et al. (1985), trying hypnosis seems to be no better than merely trying again.

In fact, trying hypnosis may make things worse, because hypnosis -- almost by definition -- entails enhanced responsiveness to suggestion. Therefore, if memory is tainted by leading questions and other suggestive influences, as Loftus' work suggests, these elements may be even more likely to be incorporated into memories that have been refreshed by hypnosis. Putnam (1979) was the first to demonstrate this effect. He exposed his subjects to a variant of Loftus' (1975) paradigm, in which subjects viewed a videotape of a traffic accident followed by an interrogation that included leading questions. Those subjects who were interviewed while they were hypnotized were more likely to incorporate the misleading postevent information into their memory reports. Similar results were obtained by Zelig and Beidelman (1981) and Sanders and Simmons (1983). Register and Kihlstrom (1987), employing a variant of Loftus' procedure introduced by Gudjonsson (1984), failed to find that hypnosis increased interrogative suggestibility; but errors introduced during the hypnotic test did carry over to subsequent nonhypnotic tests.
An extensive and complex series of studies by Sheehan and his colleagues (e.g., Sheehan, 1987, 1988a, 1988b; Sheehan & Grigg, 1985; Sheehan, Grigg, & McCann, 1984; Sheehan & Tilden, 1983, 1984, 1986) found that subjects tested during hypnosis were more confident in their memory reports than were those tested in the normal waking state -- regardless of the accuracy of these reports. The situation is even worse, apparently, when the suggestions are more explicit, as in the case of hypnotically suggested paramnesias (Kihlstrom & Hoyt, 1990; Levitt & Chapman, 1979; Reyher, 1967). Laurence and Perry (1983) suggested (falsely, of course) to a group of hypnotized subjects that on a particular night they had awakened to a noise. After hypnosis was terminated, all the subjects remembered the suggested event as if it had actually occurred; almost half of the subjects maintained this belief even when told that the event had been suggested to them by the hypnotist. Similar results were obtained by a number of investigators (Labelle, Laurence, Nadon, & Perry, 1990; Lynn, Milano, & Weekes, 1991; McCann & Sheehan, 1988; McConkey & Kinoshita, 1985-1986; McConkey, Labelle, Bibb, & Bryant, 1990; Sheehan, Statham, & Jamieson, 1991; Spanos, Gwynn, Comer, Baltruweit, & deGroh, 1989; Spanos & McLean, 1985-1986). Unfortunately, the precise conditions under which the pseudomemory effect can be obtained remain obscure. Equally important, it remains unclear whether the pseudomemories reflect actual changes in stored memory traces or biases in memory reporting -- an issue that has also been raised with respect to the postevent misinformation effect observed outside hypnosis (e.g., McCloskey & Zaragoza, 1985; Loftus, Schooler, & Wagenaar, 1985; Metcalfe, 1990; Tversky & Tuchin, 1989).

Direct suggestions for hypermnesia are often accompanied by suggestions for age-regression: that the subject is reverting to an earlier period in his or her own life, reliving an event, and acting in a manner characteristic of that age (for reviews, see Nash, 1987; Perry, Laurence, D'Eon, & Tallant, 1988; Reiff & Scheerer, 1959; O'Connell, Shor, & Orne, 1970; Yates, 1961). Most research on this phenomenon has addressed the question of whether the age-regressed adult reverts to modes of psychological functioning that are characteristic of the target age, typically in childhood. Upon closer examination, however, the naive concept of hypnotic age regression proves to be a complex blend of three elements: ablation, the functional loss of the person's knowledge, abilities, and memories acquired after the suggested age; reinstatement, a return to archaic, or at least chronologically earlier, modes of cognitive and emotional functioning (i.e., procedural and semantic knowledge); and revivification, improved access to memories (i.e., episodic knowledge) from the suggested age (and before). There is no evidence that the subject age-regressed to childhood loses access to his or her adult knowledge and abilities (O'Connell et al., 1970; Orne, 1951; Perry & Walsh, 1978). Thus, adults regressed to childhood, and asked to take dictation from the hypnotist, may write, in a childlike hand but without spelling errors, the sentence "I am conducting an experiment which will assess my psychological capacities" -- a behavior that is clearly beyond the capacity of most children; alternatively, an adult who arrived in America as a monolingual child may reply in his native tongue to questions posed to him in English (Orne, 1951).
Such conduct is one of the classic examples of what Orne (1959) called trance logic -- the hypnotized subject's tendency to freely mix illusion and reality while responding to hypnotic suggestions. Although the interpretation of trance logic is controversial (e.g., Spanos, 1986; McConkey, Bryant, Bibb, & Kihlstrom, 1991), contradictions between childlike and adult behavior have been observed too often to sustain the notion that age-regression involves the forgetting of adult procedural and declarative knowledge. It is possible, as Spanos (1986) has suggested, that trance logic reflects incomplete responding on the part of hypnotized subjects. On the other hand, it is also possible that the contradictions observed in age regression reflect the impact of adult knowledge that is denied to conscious awareness, but nevertheless continues to influence the behavior and experience of the age-regressed subject -- much in the manner of an implicit memory (Schacter, 1987).

In principle, however, the prospects for reinstatement are more promising: the hallucinated environment created by age regression may provide a context that facilitates the retrieval of procedural knowledge characteristic of childhood. Nevertheless, the evidence for reinstatement is ambiguous. As (1962) studied a college student who had spoken a Finnish-Swedish dialect until age eight, but who no longer remembered the language; his knowledge of the language improved somewhat under hypnotic age-regression. More dramatic findings were obtained by Fromm (1970) in a Nisei student who denied any knowledge of Japanese; when age-regressed, she broke into fluent if childish Japanese. In contrast, Kihlstrom (1978a) reported an unsuccessful attempt to revive Mandarin in a college undergraduate who had not spoken the language since kindergarten in Taiwan. What accounts for these different outcomes is not clear: Fromm's subject was highly hypnotizable, and had been imprisoned in an American concentration camp during World War II (suggesting that her knowledge of Japanese had been covered by repression); Kihlstrom's subject was completely refractory to hypnosis. In terms of experimental studies, Nash (1987) has found no convincing evidence favoring the reinstatement of childlike modes of mental functioning, whether these are defined in terms of physiological responses (e.g., the Babinski reflex, in which the toes fan upward in response to plantar stimulation), loss of mental age on IQ tests (e.g., the Stanford-Binet), reversion to preconceptual (Werner) or preformal (Piaget) modes of thought (e.g., failing to predict the order in which three spheres will emerge from a hollow tube after it has been rotated through half or whole turns; defining right or wrong in terms of what is rewarded or punished), or perceptual processes (e.g., changes in the magnitude of the Ponzo and Poggendorff illusions; the return of eidetic imagery ostensibly prominent in children). Perhaps the most compelling evidence for reinstatement comes from studies by Nash and his colleagues (Nash, Johnson, & Tipton, 1979; Nash, Lynn, Stanley, Frauman, & Rhue, 1985), in which subjects regressed to age 3, imagining a frightening situation, behaved in an age-appropriate manner: searching for teddy bears and other "transitional objects". Interestingly, insusceptible subjects simulating hypnosis do not behave in this manner.
However, these results are vitiated to some extent by interviews of the subjects' mothers, which revealed that the transitional objects chosen by the age-regressed subjects were not typically those actually possessed by those subjects as children (Nash, Drake, Wiley, Khalsa, & Lynn, 1986). Thus, as Nash (1987) noted, age-regression may reinstate childlike modes of emotional functioning, but it does not necessarily revive specific childhood memories.

The revivification component of age regression is conceptually similar to the recovery of memory in hypermnesia; and, as with reinstatement, it is possible, at least in principle, that the hallucination of an age-appropriate environment might facilitate the retrieval of childhood memories. Everyone who has administered the Stanford Hypnotic Susceptibility Scale, Form C, which includes a suggestion for age regression, has observed subjects who appear to relive episodes from childhood that have been forgotten, or not remembered for a long time. Supporting these observations, Young (1926) was able to elicit a substantial number of early recollections, whose accuracy was independently verified, in two hypnotizable subjects. And more recently, Hofling, Heyl, and Wright (1971) compared subjects' recall of personal experiences to actual diary entries made at the time, and found superior memory during hypnosis compared to a nonhypnotic session. Unfortunately, neither of these experiments examined false recollections that may have been produced by the subjects; and the obvious difficulty in obtaining independent verification effectively prevents the many more studies of this sort that would be needed to understand better the conditions under which these improvements in memory might be obtained. In the absence of independent confirmation, it should be understood that the apparent enhancement of memory occurring as a result of hypnosis may be illusory.

But even independent confirmation does not guarantee that hypnosis itself is responsible for the appearance of revivification: the enhancement of memory may come from general world-knowledge or cues provided by the experimenter, rather than improved access to trace information. The salient cautionary tale is provided by True (1949), who reported that age-regressed subjects were able to identify at better than chance levels the day of the week on which their birthdays, and Christmas, fell in their fourth, seventh, and tenth years. Yates (1961) and Barber (1969) noted that the correct day can be calculated by the use of a fairly simple algorithm. However, it remains to be seen whether most, or even many, subjects know the formula in question; moreover, the procedure requires that subjects know the day of the week on which these holidays fall in the current year -- information that is probably not known by most subjects. More to the point, it is now known that the experimenter in question knew the answers to the questions as they were asked; when the experimenter is kept blind to the correct answer, response levels fall to chance (O'Connell et al., 1970).

Notes on Forensic Hypnosis

Despite the poverty of evidence supporting the idea that memory can be enhanced by hypnotic suggestions, hypnosis has come to be used by police officers, attorneys, and even judges in an effort to refresh or bolster the memories of witnesses, victims, and suspects in criminal investigations.
Their confidence in the utility of forensic hypnosis is bolstered by occasional cases in which the use of hypnosis was associated with the recovery of useful clues (e.g., Dorcus, 1960; Raginsky, 1969). One such case was the kidnapping, in Chowchilla, California, of a schoolbus full of children: when hypnotized, the driver recalled a portion of a license plate that was eventually traced to a vehicle used by the perpetrators (Kroger & Douce, 1979). Such successes, when combined with reports of the hypnotic recovery of traumatic memories during psychotherapy (e.g., Breuer & Freud, 1893-1895), have led to the development of a virtual industry of forensic hypnosis. Of course, Freud later concluded that the reports of his patients were fantasies, not veridical memories. And although the Chowchilla kidnapping is often counted as a success, it is often forgotten that the driver also recalled a license tag that had no connection to the crime; it was other evidence that led to the successful solution of the case. Then, too, Dorcus (1960) had reported as many successes as failures in his own experience: reviewing his cases, the operative factor seems to have been the extent to which the memories were encoded in the first place. Moreover, a number of instances have been recorded where the memories produced by hypnotized witnesses and victims have proved highly implausible or even false (for a sampling, see Orne, 1979).

The inherent unreliability of hypnotically elicited memories -- the difficulty of distinguishing between illusion and reality, the susceptibility of hypnotically refreshed memory to distortion by inadvertent suggestion, and the tendency of subjects to enhance the credibility of memories produced through hypnosis -- creates problems in the courtroom. These problems are compounded by the possibility that investigators, and jurors, will give more credence than they deserve to memories refreshed by hypnosis (Labelle, Lamarche, & Laurence, 1990; McConkey, 1986; McConkey & Jupp, 1985, 1985-1986; Wilson, Greene, & Loftus, 1986). Thus, under the worst-case scenario, a hypnotized witness may produce an entirely false memory under hypnosis, testify to it convincingly, and be believed. Even if the witness' memory does not change under hypnotic interrogation, the fact that a particular item of information, true or false, is remembered both in and out of hypnosis may lead the witness, and jurors, to give more credibility to the testimony than would be warranted. For these reasons, and in response to a number of cases that were prosecuted on the basis of evidence that later proved to be incorrect, both the medical establishment (American Medical Association, 1985) and the courts (Diamond, 1980; Kuplicki, 1988; Laurence & Perry, 1988; Orne, 1979; Orne, Dinges, & Orne, 1990; Orne, Soskis, Dinges, & Orne, 1984; Orne, Whitehouse, Dinges, & Orne, 1988; Udolf, 1983, 1990) have begun to establish guidelines for the introduction and evaluation of hypnotically elicited memories. By this time, the issue of hypnosis has been considered by courts in more than half of the United States (and by courts in Canada, Australia, and other countries as well). In a recent review, Scheflin and Shapiro (1989) cite more than 400 appellate cases from more than 40 states in which hypnosis has been involved in one way or another. A full review of the legal status of forensic hypnosis is beyond the scope of this paper.
In general, however, courts in the United States have taken one of three positions: (1) total exclusion of testimony based on hypnotically refreshed memory (e.g., State vs. Mack, 1980; People vs. Shirley, 1982); (2) total admission, with the weight of the evidence to be determined by the jury (e.g., Harding vs. State, 1968); and (3) admission of hypnotically refreshed memory, provided that certain procedural safeguards (such as those proposed by Orne, 1979; Orne et al., 1984; see also Ault, 1979) have been followed during the hypnotic session (e.g., State vs. Hurd, 1981; State vs. Armstrong, 1983). Perhaps the dominant position in the state courts is a per se exclusion of all hypnotically elicited evidence, and some courts have gone so far as to exclude from testimony even the pre-hypnotic memories of a witness who has been subsequently hypnotized (Kuplicki, 1988), on the grounds that hypnosis may distort prehypnotic as well as hypnotic memories -- for example, by inflating the subject's confidence in what he or she had already remembered.

The conflicting laws operative in different jurisdictions virtually guarantee that the issue of forensic hypnosis will eventually come before the Supreme Court. In fact, while a number of cases involving hypnotized witnesses and victims have been denied certiorari, a case involving a hypnotized defendant was recently decided: Rock v. Arkansas (1987; for reviews of this case, see Kuplicki, 1988; Orne et al., 1990; Perry & Laurence, 1990; Udolf, 1990). By a hairline (5-4) majority, the Court (whose majority decision was authored by Justice Blackmun) decided that a defendant's hypnotically refreshed memories are admissible in court, without any restrictions or constraints. However, a reading of the opinion makes it clear that the Court's decision rested more on a concern for the defendant's Fifth Amendment right to testify in his or her own behalf than it did on any acceptance of hypnotic technique. Under the United States Constitution, defendants are given every opportunity to defend themselves, and this includes resort to hypnosis. In fact, the Court's opinion (especially the minority view, authored by Chief Justice Rehnquist) clearly recognizes the problems posed by the use of hypnosis in the legal system.

There are a number of different legal issues here (for early treatments, see Diamond, 1980; Warner, 1979; Worthington, 1979; for a recent overview, see Kuplicki, 1988). First of all, there is the question of whether hypnosis, as a scientific technique for the enhancement of memory, meets the standards for the admission of scientific evidence. Under the "Frye Rule" (Frye vs. United States, 1923), which currently governs the admissibility of scientific evidence, "the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field to which it belongs". While hypnosis is clearly established as a potentially efficacious treatment modality in medicine and psychotherapy (American Medical Association, 1958) and a legitimate topic for scientific research (as evidenced by the establishment of Division 30, Psychological Hypnosis, of the American Psychological Association), there is no consensus about the reliability of hypnotically enhanced memory. In fact, if there is such a consensus, it is represented by the recent position paper of a committee of the American Medical Association (American Medical Association, 1985): hypnotically refreshed memory is inherently unreliable.
There are also constitutional issues at stake, particularly surrounding the Sixth Amendment, which gives defendants the right to confront witnesses against them. After all, hypnosis has the potential to permanently distort a witness' memory -- thus leading, in effect, to the destruction of potentially exculpatory evidence. Hypnosis can increase the likelihood of both unintended confabulations and the influence of leading questions and other misinformation. The confusion between illusion and reality that is part and parcel of the hypnotic experience may be fascinating in the laboratory and perhaps useful in the clinic; but it is potentially fatal in the courtroom. The myths surrounding the wonders of hypnosis may lead witnesses to inappropriately inflate their confidence in what they remember; or they may lead jurors to inappropriately accept their memories as accurate. In any event, the result is a threat to the validity of the evidence presented to factfinders. Because defendants have rights that the state does not, the decision in Rock v. Arkansas does not imply that testimony by hypnotized witnesses and victims will be allowed without restraint. The result is likely to be a bifurcated rule (Kuplicki, 1988) in which hypnosis is permitted to defendants with few restrictions, but severely constrained when used with witnesses and victims.

For the present, however, those who use hypnosis forensically should be aware of the dangers posed by its use, and should conform their procedures to the sorts of guidelines adopted in many jurisdictions. The purpose of these procedural safeguards is twofold: (1) to minimize the possibility that the witness' independent memory will be contaminated by hypnosis; and (2) to maximize the likelihood that such contamination will be detected if it has occurred. One set of guidelines, based on those proposed by Orne (1979) and adopted in the United States by the Federal Bureau of Investigation (Ault, 1979), follows. It should be understood that it is the responsibility of the party employing hypnosis to affirmatively establish that these guidelines have been followed.

(1) There should be a prima facie case that hypnosis is appropriate. Memories that have not been properly encoded are not likely to be retrieved, even by heroic means. Thus, hypnosis will be of no use in cases where the witness did not have a good view of the critical events, or was intoxicated or sustained head injury at the time of the crime.

(2) For the same reason, there should be an objective assessment of the subject's hypnotizability, employing one or another of the standardized scales developed for this purpose. Hypnosis will be of no use with subjects who are not at least somewhat hypnotizable.

(3) The hypnotist should be an experienced professional, knowledgeable about basic principles of psychological functioning and the scientific method. Forensic hypnosis raises cognitive issues, such as the nature of memory, and clinical issues, such as the subject's emotional reactions to any new information yielded by the procedure, and the hypnotist must be capable of evaluating and dealing with the situation on both counts.

(4) The hypnotist should be a consultant acting independently of any investigative agency, either prosecution/plaintiff or defense/respondent, so as to emphasize the goal of the procedure: collecting information rather than supporting a particular viewpoint.
(5) The hypnotist should be informed of only the barest details of the case at hand, so as to minimize the possibility that his or her preconceptions will influence the course of the hypnotic interview. In any event, a written record of all information transmitted to the hypnotist should be preserved.

(6) A thorough interview should be conducted by the hypnotist, in advance of the hypnotic session, in order to establish a baseline against which any subsequent changes in memory can be evaluated.

(7) Throughout the pre-hypnotic and hypnotic interview, the hypnotist and the subject should be isolated from other people, especially those who have independent knowledge of the facts of the case, suspects, etc., so as to preclude the possibility of inadvertent cuing and contamination of the subject's memory.

(8) A complete recording of all interactions between hypnotist and subject should be kept, to permit evaluation of the degree to which untoward influence may have occurred.

These standards are obviously difficult (though not impossible) to meet. For this reason, and because of the continuing constitutional controversy attached to forensic hypnosis, investigators are advised to confine their use of hypnosis to the gathering of investigative leads. Under these circumstances, hypnotically refreshed memories are not introduced into evidence, and the case is based solely on independently verifiable evidence.

PROSPECTS FOR THE STRATEGIC CONTROL OF MEMORY

The conclusion that emerges from this review is that the strategic self-regulation of memory is possible. The possibility of successful self-regulation flows naturally from the point of view that memory is a skilled activity as well as a mental storehouse, and from the reasonable assumption that people can acquire and perfect cognitive as well as motor skills. Certainly, the sorts of principles that control memory function can serve as guides for successful self-regulation. We can remember things better by paying active attention to them at the time they occur, deliberately engaging in elaborative and organizational activity that will establish links between one item of information and another; and we can facilitate forgetting by neglecting to do so. Forgetting will increase with the passage of time, if we allow it to happen; but continued rumination about the to-be-forgotten material may prevent this natural process from occurring. Once-forgotten items can be recovered, too, if somehow we are able to find the right cues to gain access to them; and some spontaneous recovery is to be expected as well, especially if the information was well-encoded in the first place. Remembering an event can be facilitated by returning to the environment, or mood state, present at the time the event occurred. Remembering is improved by taking generic world-knowledge into account, so that the person need not rely exclusively on trace information. And, perhaps, memories can be recoded, and thus altered, in the light of information acquired after the event in question.
In the absence of conscious recollection, sheer guessing -- influenced by implicit memory, which is much less constrained by the conditions of encoding and retrieval -- may lead the person to better-than-chance levels of memory performance. At the same time, there are clear constraints on what can be achieved through strategic remembering and forgetting. Aside from hope and luck, little can be done to improve the situation where encodings were poor and the retrieval environment is impoverished. Elaborative and organizational activity both require active cognitive effort, and thus are affected by limitations on attentional resources. Retrieval cues help memory, but they must be the right sorts of cues, compatible with the way in which the information was processed at the time of its original encoding. World-knowledge, and postevent information, may distort a person's memory for what actually occurred. And attempts at deliberate forgetting, or the retrospective alteration of memories, may change accessibility but not availability. Thus the forgotten knowledge, or the original memory, may nonetheless continue to influence the person's experience, thought, and action in the form of implicit memory.

Hypnosis, for all its apparent wonders, does not eliminate these constraints. It can be a powerful technique for altering conscious experience, but it does so by following, rather than transcending, the laws that govern ordinary mental life. Thus, hypnosis presents some interesting possibilities for the self-control of memory, but it confronts the same sorts of limitations as well. Hypnotic suggestions for amnesia, for example, may be very effective in reducing the person's conscious awareness of some event, but -- like nonhypnotic directed forgetting -- they can be breached, to some extent, by deliberate efforts at recall, and by cued recall and recognition procedures. More important, the forgotten memories may still be expressed implicitly, outside of conscious awareness. So far as we can tell, hypnosis does not, in and of itself, facilitate learning; and it does not appear to add anything to the hypermnesia that occurs in the normal waking state. Although, in principle, a hypnotically hallucinated environment might supply new cues to facilitate remembering, it must be remembered that the cues in question are hallucinatory, not veridical, and thus may produce misleading results -- the more so because hypnotized subjects are highly responsive to suggestions. Hypnosis, in its classic manifestations, has a profoundly delusional quality: thus, the subjective conviction that accompanies hypnotic remembering should not be confused with accuracy. For this reason, the clinical and forensic use of hypnosis to refresh recollection is fraught with dangers, and is to be undertaken, if at all, with considerable circumspection. But the mere existence of limitations, and the sad fact that hypnosis cannot make us better than we are, should not deter us from acquiring, and deploying, our skills of remembering and forgetting. There is much that we can do in both respects.

The point of view represented here is based on research supported by Grant MH-35856 from the National Institute of Mental Health. We thank Jill Booker, Jeffrey Bowers, Jennifer Dorfman, Elizabeth Glisky, Martha Glisky, Lori Marchese, Susan McGovern, Sheila Mulvaney, Robin Pennington, Michael Polster, Barbara Routhieux, Victor Shames, and Michael Valdessari for their comments.

References
Aanstoos, C.M. (1991). Experimental psychology and the challenge of real life. American Psychologist, 46, 77-78.
American Medical Association (1958). Medical use of hypnosis. Journal of the American Medical Association, 168, 186-189.
American Medical Association (1985). Scientific status of refreshing recollection by the use of hypnosis. Journal of the American Medical Association, 253, 1918-1923.
Ammons, H., & Irion, A.L. (1954). A note on the Ballard reminiscence phenomenon. Journal of Experimental Psychology, 48, 184-186.
Anderson, J.R. (1976). Language, memory, and thought. Hillsdale, N.J.: Erlbaum.
Anderson, J.R. (1978). Arguments concerning representations for mental imagery. Psychological Review, 85, 249-277.
Anderson, J.R. (1983). The architecture of cognition. Cambridge: Harvard University Press.
Anderson, J.R. (1990). Cognitive psychology and its implications (3rd ed.). San Francisco: Freeman.
Anderson, J.R., & Reder, L.M. (1979). An elaborative processing explanation of depth of processing. In L.S. Cermak & F.I.M. Craik (Eds.), Levels of processing in human memory (pp. 385-403). Hillsdale, N.J.: Erlbaum.
As, A. (1962). The recovery of forgotten language knowledge through hypnotic age regression: A case report. American Journal of Clinical Hypnosis, 5, 24-29.
Ault, R.L. (1979). FBI guidelines for use of hypnosis. International Journal of Clinical & Experimental Hypnosis, 27, 449-451.
Baddeley, A.D. (1976). The psychology of memory. New York: Basic Books.
Baddeley, A.D. (1990). Human memory: Theory and practice. Boston: Allyn & Bacon.
Bahrick, H.P. (1984). Semantic memory content in permastore: 50 years of memory for Spanish learned in school. Journal of Experimental Psychology: General, 113, 1-29.
Bahrick, H.P. (1991). A speedy recovery from bankruptcy for ecological memory research. American Psychologist, 46, 76-77.
Baker, R.A., Haynes, B., & Patrick, B.S. (1983). Hypnosis, memory, and incidental memory. American Journal of Clinical Hypnosis, 25, 253-262.
Ballard, P.B. (1913). Oblivescence and reminiscence. British Journal of Psychology (Monograph Supplements), 1 (2).
Banaji, M.R., & Crowder, R.G. (1989). The bankruptcy of everyday memory. American Psychologist, 44, 1185-1193.
Banaji, M.R., & Crowder, R.G. (1991). Some everyday thoughts on ecologically valid methods. American Psychologist, 46, 78-79.
Banyai, E.I., & Hilgard, E.R. (1976). A comparison of active-alert hypnotic induction with traditional relaxation induction. Journal of Abnormal Psychology, 85, 218-224.
Barber, T.X. (1969). Hypnosis: A scientific approach. New York: Van Nostrand Reinhold.
Barber, T.X., & Calverley, D.S. (1964). Toward a theory of "hypnotic" behavior: An experimental study of "hypnotic time distortion". Archives of General Psychiatry, 10, 209-216.
Barber, T.X., & Calverley, D.S. (1966). Toward a theory of "hypnotic" behavior: Experimental analyses of suggested amnesia. Journal of Abnormal Psychology, 71, 95-107.
Bartlett, F.C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.
Bellezza, F.S. (1981). Mnemonic devices: Classification, characteristics and criteria. Review of Educational Research, 51, 247-275.
Belmore, S.M. (1981). Imagery and semantic elaboration in hypermnesia for words. Journal of Experimental Psychology: Human Learning and Memory, 7, 191-203.
Bertrand, L.D., Spanos, N.P., & Parkinson, B. (1983). Test of the dissipation hypothesis of posthypnotic amnesia. Psychological Reports, 52, 667-671.
Bertrand, L.D., Spanos, N.P., & Radtke, H.L. (1990). Contextual effects on priming during hypnotic amnesia. Journal of Research in Personality, 24, 271-290.
Journal of Research in Personality, 24, 271-290. Bjork, R.A. (1970). Positive forgetting: The noninterference of items intentionally forgotten. Journal of Verbal Learning & Verbal Behavior, 9, 255-268. Bjork, R.A. (1972). Theoretical implications of directed forgetting. In A.W. Melton & E. Martin Eds.), Coding proceses in human memory (pp. 217-235). Washington, D.C.: Winston. Bjork, R.A. (1978). The updating of human memory. In G.H. Bower (Eds.), The Psychology of Learning and Motivation (Vol. 12, pp. 235-259). New York: Academic. Bjork, R.A. (1989). Retrieval inhibition as an adaptive mechanism in human memory. In H.L. Roediger & F.I.M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 195-210). Hillsdale, N.J.: Erlbaum. Bjork, R.A., & Bjork, E.L. (1991, August). Dissociations in the impact of to-be-forgotten infomration on memory. Paper presented at the annual meeting of the American Psychological Association, San Francisco. Bjork, R.A., & Bjork, E.L. (in press). A new theory of disuse and an old theory of stimulus fluctuation. In A.F. Healy, S.M. Kosslyn, & R.M. Shiffrin (Eds.), From learning processes to cognitive processes: Essays in honor of William K. Estes (Vol. 2, pp. xxx-xxx). Hillsdale, N.J.: Erlbaum. Block, R.A. (1971). Effects of instructions to forget in short-term memory. Journal of Experimental Psychology, 89, 1-9. Booker, J. (1991). Interference effects in implicit and explicit memory. Unpublished doctoral dissertation, Universiy of Arizona. Bower, G.H. (1970a). Analysis of a mnemonic device. American Scientist, 58, 496-510. Bower, G.H. (1970b). Organizational factors in memory. Cognitive Psychology, 1, 18-46. Bower, G.H., & Reitman, J.S. (1972). Mnemonic elaboration in multilist learning. Journal of Verbal Learning & Verbal Behavior, 11, 478-485. Bransford & Franks, J.J. (1971). The abstraction of linguistic ideas. Cognitive Psychology, 2, 331-350. Breuer & Freud. (1895/1955). Studies on hysteria. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud, Vol. 2. London: Hogarth Press. Brown, E., & Deffenbacher, K. (1975). Forgotten mnemonists. Journal of the History of the Behavioral Sciences, 11, 342-349. Brown, W. (1923). To what extent is memory measured by a single recall? Journal of Experimental Psychology, 6, 377-385. Bruce, D. (1991). Mechanistic and functional explanations of memory. American Psychologist, 46, 46-48. Bugelski, B.R., Kidd, E., & Segmen, J. (1968). Image as a mediator in one-trial paired-associate learning. Journal of Experimental Psychology, 76, 69-73. Buxton, C. E. (1943). The status of research in reminiscence. Psychological Bulletin, 40, 313-340. Ceci, S.J., & Bronfenbrenner, W. (1991). On the demise of everyday memory: "The rumors of my death are much exaggerated" (Mark Twain). American Psychologist, 46, 27-31. Cermak, L.S. (1976). Improving your memory. New York: McGraw-Hill. Chase, W.G., & Ericsson, K.A. (1982). Skill and working memory. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 16, pp. 1-58). New York: Academic Press. Coe, W.C. (1978). The credibility of posthypnotic amnesia: A contextualist's view. International Journal of Clinical & Experimental Hypnosis, 26, 218-245. Coe, W.C., Basden, B., Basden, D., & Graham, C. (1976). Posthypnotic amnesia: Suggestions of an active process in dissociative phenomena. Journal of Abnormal Psychology, 85, 455-458. Coe, W.C., Basden, B.H., Basden, D., Fikes, T., Gargano, G.J., & Webb, M. (1989). 
Directed forgetting and posthypnotic amnesia: Information-processing and social contexts. Journal of Personality & Social Psychology, 56, 189-198. Coe, W.C., Baugher, R.J., Krimm, W.R., & Smith, J.A. (1976). A further examination of selective recall following hypnosis. International Journal of Clinical & Experimental Hypnosis, 24, 13-21. Coe, W.C., & Sluis, A.S.E. (1989). Increasing contextual pressures to breach posthypnotic amnesia. Journal of Personality & Social Psychology, 57, 885-894. Coe, W.C., Taul, J.H., Basden, D., & Basden, B. (1973). Investigation of the dissociation hypothesis and disorganized retrieval in posthypnotic amnesia with reptoactive inhibition in free-recall learning. Proceedings of the 81st annual convention of the American Psychological Association, 8, 1081-1082. Coe, W.C., & Yashinski, E. (1985). Volitional experiences associated with breaching amnesia. Journal of Personality & Social Psychology, 48, 716-722. Cole, M., & Gay, J. (1972). Culture and memory. American Anthropologist, 74, 1066-1084. Cole, M., Gay, J., Glick, J., & Sharp, D. (1971). The cultural context of learning and thinking. New York: Basic Books. Collyer, S.C., Jonides, J., & Bevan, W. (1972). Images as memory aids: Is bizarreness helpful? American Journal of Psychology, 85, 31-38. Conway, M.A. (1991). In defense of everyday memory. American Psychologist, 46, 19-26. Cooper, L.F., & Erickson, M.H. (1950). Time distortion in hypnosis II. Bulletin of the Georgetown University Medical Center, 4, 50-68. Cooper, L.F., & Rodgin, D.W. (1952). Time distortion in hypnosis and non-motor learning. Science, 115, 500-502. Cooper, L.F., & Tuthill, C.E. (1952). Time distortion in hypnosis and motor learning. Journal of Psychology, 34, 67-76. Cooper, L.F., & Erickson, M.H. (1954). Time distortion in hypnosis. Baltimore: Williams & Wilkins. Cooper, L.M. (1979). Hypnotic amnesia. In E. Fromm & R.E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 305-349). New York: Aldine. Cooper, L.M., & London, P. (1973). Reactivation of memory by hypnosis and suggestion. International Journal of Clinical & Experimental Hypnosis, 21, 312-323. Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning & Verbal Behavior, 11, 671-684. Craik, F.I.M., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104, 268-294. Craik, F.I.M., & Watkins, M.J. (1973). The role of rehearsal in short-term memory. Journal of Verbal Learning & Verbal Behavior, 12, 598-607. Crowder. (1976). Principles of learning and memory. Hillsdale, N.J.: Erlbaum. Datas. (1904). A simple system of memory training. London: Gale & Polden. Davidson, T.M., & Bowers, K.S. (1991). Selective posthypnotic amnesia: Is it a successful attempt to forget or an unsuccessful attempt to remember? Journal of Abnormal Psychology, 100, 133-143. DePiano, F.A., & Salzberg, H.C. (1981). Hypnosis as an aid to recall of meaningful information presented under three types of arousal. International Journal of Clinical & Experimental Hypnosis, 29, 383-400. Dhanens, T.P., & Lundy, R.M. (1975). Hypnotic and waking suggestions and recall. International Journal of Clinical & Experimental Hypnosis, 23, 68-79. Diamond, B. (1980). Inherent problems in the use of pritrial hypnosis on a prospective witness. California Law Review, 68, 313-349. Dillon, R.F., & Spanos, N.P. (1983). 
Proactive interference and the functional ablation hypothesis: More disconfirmatory data. International Journal of Clinical & Experimental Hypnosis, 13, 47-56. Dorcus, R.M. (1960). Recall under hypnosis of amnestic events. International Journal of Clinical & Experimental Hypnosis, 8, 57-60. Dywan, J. (1987). The imagery factor in hypnotic hypermnesia. International Journal of Clinical & Experimental Hypnosis, 36, 312-326. Dywan, J., & Bowers, K. S. (1983). The use of hypnosis to enhance recall. Science, 222, 184-185. Ebbinghaus, H. (1885). Memory: A contribution to experimental psychology (H.A. Ruger & C.E. Bussenues, trans.). New York: Teachers College, Columbia University. Translation published, 1913. Eich, E. (1984). Memory for unattended events: Remembering with and without awareness. Memory & Cognition, 12, 105-111. Eich, E. (1989). Theoretical issues in state dependent memory. In H.L. Roediger & F.I.M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 331-354). Hillsdale, N.J.: Erlbaum. Eich, E., & Metcalfe, J. (1989). Mood dependent memory for internal versus external events. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 443-455. Eich, J.E. (1980). The cue-dependent nature of state-dependent retrieval. Memory & Cognition, 8, 157-173. Ellis, H.C., & Hunt, R.R. (1989). Fundamentals of human memory and cognition. Dubuque, Iowa: Brown. Epstein, W. (1969). Poststimulus output specification and differential retrieval from short-term memory. Journal of Experimental Psychology, 82, 168-174. Epstein, W. (1970). Facilitation of retrieval resulting from post-input exclusion of part of the input. Journal of Experimental Psychology, 86, 190-195. Epstein, W. (1972). Mechanisms of directed forgetting. In G.H. Bower (Ed.), The Psychology of learning and motivation (Vol. 6, pp. 147-191). New York: Academic. Erdelyi, M. H. (1982). A note on the level of recall, level of processing, and imagery hypothesis of hypermnesia. Journal of Verbal Learning and Verbal Behavior, 21, 656-661. Erdelyi, M.H. (1984). The recovery of unconscious (inaccessible) memories: Laboratory studies of hypermnesia. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 18, pp. 95-127). New York: Academic. Erdelyi, M.H. (1988). Hypermnesia: The effect of hypnosis, fantasy, and concentration. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 64-94). New York: Guilford. Erdelyi, M.H. (1990). Repression, reconstruction, and defense: History and integration of the psychoanalytic and experimental frameworks. In J. Singer (Ed.), Repression and dissociation: Implications for personality theory, psychopathology, and health. Chicago: University of Chicago Press. Erdelyi, M. H., & Becker, J. (1974). Hypermnesia for pictures: Incremental memory for pictures but not words in multiple recall trials. Cognitive Psychology, 6, 159-171. Erdelyi, M. H., Buschke, H. & Finkelstein, S. (1977). Hypermnesia for Socratic stimuli: The growth of recall for an internally generated memory list abstracted from a series of riddles. Memory and Cognition, 5, 283-286. Erdelyi, M. H., Finkelstein, S., Herrell, N., Miller, B., & Thomas, J. 1976). Coding modality vs. input modality in hypermnesia: Is a rose a rose a rose? Cognition, 4, 311-319. Erdelyi, M.H., & Goldberg, B. (1979). Let's not sweep repression under the rug: Toward a cognitive psychology of repression. In J.F. Kihlstrom & F.J. Evans (Eds.), Functional disorders of memory (pp. 355-402). Hillsdale, N.J.: Erlbaum. 
Erdelyi, M. H., & Kleinbard, J. (1978). Has Ebbinghaus decayed with time?: The growth of recall (hypermnesia) over days. Journal of Experimental Psychology: Human Learning and Memory, 4, 275-289. Erdelyi, M. H., & Stein, J. B. (1981). Recognition hypermnesia: The growth of recognition memory (d') over time with repeated testing. Cognition, 9, 23-33. Evans, F.J. (1979). Contextual forgetting: Posthypnotic source amnesia. Journal of Abnormal Psychology, 88, 556-563. Evans, F.J. (1988). Posthypnotic amnesia: Dissociation of content and context. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 157-192). New York: Guilford. Evans, F.J., & Kihlstrom, J.F. (1973). Posthypnotic amnesia as disrupted retrieval. Journal of Abnormal Psychology, 82, 317-323. Evans, F.J., & Orne, M.T. (1965). Motivation, performance, and hypnosis. International Journal of Clinical & Experimental Hypnosis, 19, 277-296. Evans, F.J., & Thorn, W.A.F. (1966). Two types of posthypnotic amnesia: Recall amnesia and source amnesia. International Journal of Clinical & Experimental Hypnosis, 14, 333-343. Feltz, D., & Landers, D. (1983). The effects of mental practice on motor skill learning and performance: A meta-analysis. Journal of Sport Psychology, 5, 25-57. Fowler, W.L. (1961). Hypnosis and learning. International Journal of Clinical & Experimental Hypnosis, 9, 223-232. Frankel, F.H. (1988). The clinical use of hypnosis in aiding recall. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 247-264). New York: Guilford. Freud, S. (1915/1957). Repression. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 14, pp. 141-158). London: Hogarth Press. Fromm, E. (1970). Age regression with unexpected reappearance of a repressed childhood language. International Journal of Clinical & Experimental Hypnosis, 18, 79-88. Frye v. United States, 203 F. 1013, 1923. Geiselman, R.E., & Bagheri, B. (1985). Repetition effects in directed forgetting: Evidence for retrieval inhibition. Memory & Cognition, 13, 57-62. Geiselman, R.E., Bjork, R.A., & Fishman, D. (1983a). Disrupted retrieval in directed forgetting: A link with posthypnotic amnesia. Journal of Experimental Psychology: General, 112, 58-72. Geiselman, R.E., Fisher, R.P., MacKinnnon, D.P., & Holland, H.L. (1985). Eyewitness memory enhancement in the police interview: Cognitive retrieval mnemonics versus hypnosis. Journal of Applied Psychology, 70, 401-412. Geiselman, R.E., MacKinnon, D.P., Fishman, D.L., Jaenicke, C., Larner, B.R., Schoenberg, S., & Swartz, S. (1983b). Mechanisms of hypnotic and nonhypnotic forgetting. Journal of Experimental Psychology: Learning, Memory, & Cognition, 9, 626-635. Golding, J.M., Fowler, S.B., Long, D.L., & Latta, H. (1990). Instructions to disregard potentially useful information: The effects of prgmatics on evaluative judgments and recall. Journal of Memory & Language, 29, 212-227. Gordon, P., Valentine, E., & Wilding, J.M. (1984). One man's memory: A study of a mnemonist. British Journal of Psychology, 75, 1-14. Graham, K.R., & Patton, A. (1968). Retroactive inhibition, hypnosis, and hypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 16, 68-74. Gray, W.H. (1934). The effect of hypnosis on learning to spell. Journal of Educational Psychology, 25, 471-473. Gregg, V.H. (1979). Posthypnotic amnesia and general memory theory. Bulletin of the British Society of Experimental & Clinical Hypnosis, 2, 11-14. Gregg, V.H. (1980). 
Posthypnotic amnesia for recently learned material: A comment on the paper by J.F. Kihlstrom (1980). Bulletin of the British Society of Experimental & Clinical Hypnosis, 2, 11-14. Grinker, R.R., & Spiegel, J.P. (1945). Men under stress. New York: McGraw-Hill. Gruneberg, M.M., Morris, P.E., & Sykes, R.N. (1978). Practical aspects of memory. London: Academic. Gruneberg, M.M., Morris, P.E., & Sykes, R.N. (1988). Practical aspects of memory: Current research issues. Vol. 1: Memory in everyday life. Vol. 2: Clinical and educational implications. Chichester: Wiley. Gruneberg, M.M., Morris, P.E., & Sykes, R.N. (1991). The obituary on everday memory and its practical application is premature. American Psychologist, 46, 74-75. Gudjonsson, G.H. (1984). A new scale of interrogative suggestibility. Personality & Individual Differences, 5, 303-314. Haber, R.N. (1979). Twenty years of haunting eidetic imagery: Where's the ghost? Behavioral & Brain Sciences, 2, 583-629. Harding vs. State, 5 Md. App. 230, 246 A. 2d 302, 1968. Hasher, L., Riebman, B., & Wren, F. (1976). Imagery and the retention of free recall learning. Journal of Experimental Psychology: Human Learning and Memory, 2, 172-181. Hastie, R. (1980). Memory for behavioral infomration that confirms or contradicts a personality impression. In R. Hastie, T.M. Ostrom, E.B. Ebbesen, R.S. Wyer, D.L. Hamilton, & D.E. Carlston (Eds.), Person memory: The cognitive basis of social perception (pp. 155-177). Hillsdale, N.J.: Erlbaum. Hastie, R. (1981). Schematic principles in human memory. In E.T. Higgins, C.P. Herman, & M.P. Zanna (Eds.), Social cognition: The Ontario Symposium (Vol. 1, pp. 39-88). Hillsdale, N.J.: Erlbaum. Hastie, R., & Kumar, P.A. (1979). Person memory: Personality traits as organizing principles in memory for behaviors. Journal of Personality & Social Psychology, 37, 25-38. Herrmann, D.J. (1988). Memory improvement techniques. New York: Ballantine. Higbee, K.L. (1977). Your memory: How it works and how to improve it. Englewood Cliffs, N.J.: Prentice-Hall. Higbee, K.L. (1978). Some pseudo-limitations of mnemonics. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 147-154). London: Academic. Higbee, K.L. (1988). Practical aspects of mnemonics. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory: Current research and issues (Vol. 2, pp. 403-408). Chichester: Wiley. Hilgard, E.R., & Cooper, L.M. (1965). Spontaneous and suggested posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 13, 261-273. Hilgard, E.R., & Hommel, L.S. (1961). Selective amnesia for events within hypnosis in relation to repression. Journal of Personality, 29, 205-216. Hirst, W., Johnson, M.K., Kim, J.K., Phelps, E.A., Risse, G., & Volpe, B.T. (1986). Recognition and recall in amnesics. Journal of Experimental Psychology: Learning, Memory, & Cognition, 12, 445-451. Hofling, C.K., Heyl, B., & Wright, D. (1971). The ratio of total recoverable memories to conscious memories in normal subjects. Comprehensive Psychiatry, 12, 371-379. Howard, M.L., & Coe, W.C. (1980). The effect of context and subjects' perceived control in breaching posthypnotic amnesia. Journal of Personality, 48, 342-359. Huber, P.W. (1991). Galileo's revenge: Junk science in the coutroom. New York: Basic Books. Huesmann, L.R., Gruder, C.L., & Dorst, G. (1987). A process model of posthypnotic amnesia. Cognitive Psychology, 19, 33-62. Hull, C.L. (1933) Hypnosis and suggestibility: An experimental approach. 
New York: Appleton-Century-Crofts. Hunt, E., & Love, T. (1972). How good can memory be? In a.W. Melton & E. Martin (Eds.), Coding processes in human memory (pp. 237-260). Washington, D.C.: Winston. Hunt, R.R. & Einstein, G.O. (1981). Relational and item-specific information in memory. Journal of Verbal Learning and Verbal Behavior, 20, 497-514. Hunter, I.M.L. (1962). An exceptional talent for calculative thinking. British Journal of Psychology, 34, 243-258. Hunter, I.M.L. (1977). An exceptional memory. British Journal of Psychology, 68, 155-164. Huse, B. (1930). Does the hypnotic trance favor the recall of faint memories? Journal of Experimental Psychology, 13, 519-529. Huttenlocher, J., Hedges, L., & Prohaska, V. (1988). Hierarchical organization in ordered domains: Estimating the dates of events. Psychological Review, 95, 471-484. Jacoby, L.L., & Dallas, M. (1981). On the relationship between autobiographical memory and perceptual learning. Journal of Experimental Psychology: General, 110, 306-340. Johnson, M.K., & Hasher, L. (1987). Human learning and memory. Annual Review of Psychology, 39, 631-668. Johnson, M.K., & Raye. (1981). Reality monitoring. Psychological Review, 88, 67-85. Johnson, R.F.Q. (1976). Hypnotic time-distortion and the enhancement of learning: New data pertinent to the Krauss-Katzell-Krauss experiment. American Journal of Clinical Hypnosis, 19, 89-102. Kihlstrom, J.F. (1978a). Attempt to revive a forgotten childhood language by means of hypnosis (Hypnosis Research Memorandum #148). Stanford, Ca.: Laboratory of Hypnosis Research, Department of Psychology, Stanford University. Kihlstrom, J.F. (1978b). Context and cognition in posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 26, 246-267. Kihlstrom, J.F. (1980). Posthypnotic amnesia for recently learned material: Interactions with "episodic" and "semantic" memory. Cognitive Psychology, 12, 227-251. Kihlstrom, J.F. (1983). Instructed forgetting: Hypnotic and nonhypnotic. Journal of Experimental Psychology: General, 112, 73-79. Kihlstrom, J.F. (1985). Posthypnotic amnesia and the dissociation of memory. In G.H. Bower (Ed.), The psychology of learning and motivation (Vol. 19, pp. 131-178). New York: Academic Press. Kihlstrom, J.F. (1987). The cognitive unconscious. Science, 237, 1445-1452. Kihlstrom, J.F. (1989). On what does mood-dependent memory depend? Journal of Social Behavior and Personality, 4, 23-32. Kihlstrom, J.F. (1990). The psychological unconscious. In L. Pervin (Ed.), Handbook of personality: Theory and research (pp. 445-464). New York: Guilford. Kihlstrom, J.F. (1991). Hypnosis: A sesquicentennial essay. International Journal of Clinical & Experimental Hypnosis, in press. Kihlstrom, J.F., Barnhardt, T.M., & Tataryn, D.J. (1991). The psychological unconscious: Found, lost, and regained. American Psychologist, in press. Kihlstrom, J.F., Brenneman, H.A., Pistole, D.D., & Shor, R.E. (1985). Hypnosis as a retrieval cue in posthypnotic amnesia. Journal of Abnormal Psychology, 94, 264-271. Kihlstrom, J.F., Easton, R.D., & Shor, R.E. (1983). Spontaneous recovery of memory during posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 26, 246-267. Kihlstrom, J.F., & Evans, F.J. (1976). Recovery of memory after posthypnotic amnesia. Journal of Abnormal Psychology, 85, 564-569. Kihlstrom, J.F., & Evans, F.J. (1977). Residual effect of suggestions for posthypnotic amnesia: A reexamination. Journal of Abnormal Psychology, 86, 327-333. Kihlstrom, J.F., & Evans, F.J. 
(1978). Generic recall during posthypnotic amnesia. Bulletin of the Psychonomic Society, 12, 57-60. Kihlstrom, J.F., & Evans, F.J. (1979). Memory retrieval processes in posthypnotic amnesia. In J.F. Kihlstrom & F.J. Evans (Eds.), Functional disorders of memory (pp. 179-218). Hillsdale, N.J.: Erlbaum. Kihlstrom, J.F., Evans, F.J., Orne, E.C., & Orne, M.T. (1980). Attempting to breach posthypnotic amnesia. Journal of Abnormal Psychology, 89, 603-616. Kihlstrom, J.F., & Hoyt, I.P. (1988). Hypnosis and the psychology of delusions. In T.F. Oltmanns & B.A. Maher (Eds.), Delusional beliefs (pp. 66-109). New York: Wiley-Interscience. Kihlstrom, J.F., & Hoyt, I.P. (1990). Repression, dissociation, and hypnosis. In J.L. Singer (Ed.), Repression and dissociation: Implications for personality theory, psychopathology, and health (pp 181-208). Chicago: University of Chicago Press. Kihlstrom, J.F., & McConkey, K.M. (1990). William James and hypnosis: A centennial reflection. Psychological Science, 1, 174-178. Kihlstrom, J.F., & Shor, R.E. (1978). Recall and recognition during posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 26, 330-349. Kihlstrom, J.F., & Wilson, L. (1984). Temporal organization of recall during posthypnotic amnesia. Journal of Abnormal Psychology, 93, 200-208. Kihlstrom, J.F., & Wilson, L. (1988). Rejoinder to Spanos, Bertrand, and Perlini. Journal of Abnormal Psychology, 97, 381-383. Klatzky, R.L. (1980). Human memory: Structures and processes. 2nd Ed. San Francisco: Freeman. Klatzky, R.L. (1991). Let's be friends. American Psychologist, 46, 43-45. Klein, S.B., Loftus, J., Kihlstrom, J.F., & Aseron, R. (1989). The effects of item-specific and relational information on hypermnesic recall. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 1192-1197. Kosslyn, S.M. (1980). Image and mind. Cambridge: Harvard University Press. Krauss, H.K., Katzell, R., & Krauss, B.J. (1974). Effect of hypnotic time-distortion upon free-recall learning. Journal of Abnormal Psychology, 83, 141-144. Kroger, W.S., & Douce, R.G. (1979). Hypnosis in criminal investigation. International Journal of Clinical & Experimental Hypnosis, 27, 358-374. Kuplicki, F.P. (1988). Fifth, Sixth, and Fourteenth Amendments: A constitutional paradigm for determining the admissibility of hypnotically refreshed memory. Journal of Criminal Law and Criminology, 78, 853-876. Labelle, L., Lamarche, M.C., & Laurence, J.-R. (1990). Potential jurors' opinions on the effects of hypnosis on eyewitness identification. International Journal of Clinical & Experimental Hypnosis, 38, 315-319. Labelle, L., Laurence, J.-R., Nadon, R., & Perry, C. (1990). Hypnotizability, preference for an imagic cognitive style, and memory creation in hypnosis. Journal of Abnormal Psychology, 99, 222-228. Laurence, J.-R., & Perry, C. (1983). Hypnotically created memory among highly hypnotizable subjects. Science, 222, 523-524. Laurence, J.-R., Nadon, R., Nogrady, H., & Perry, C. (1986). Duality, dissociation, and memory creation in highly hypnotizable subjects. International Journal of Clinical & Experimental Hypnosis, 34, 295-310. Laurence, J.-R., & Perry, C. (1988). Hypnosis, will, and memory: A psycho-legal history. New York: Guilford. Levitt, E.E., & Chapman, R. (1979). Hypnosis as a research metho. In E. Fromm & R.E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 185-216). New York: Aldine. Lockhart, R.S., Craik, F.I.M., & Jacoby, L.L. (1976). 
Depth of processing, recognition, and recall. In J. Brown (Ed.), Recall and recognition (pp. 75-102). New York: Wiley. Loftus, E.F. (1975). Leading questions and the eyewitness report. Cognitive Psychology, 7, 560-572. Loftus, E.F. (1978). Reconstructive memory processes in eyewitness memory. In B.D. Sales (Ed.), Perspectives in law and psychology (pp. xxx-xxx). New York: Plenum. Loftus, E.F. (1991). The glitter of everyday memory... and the gold. American Psychologist, 46, 16-18. Loftus, E.F., & Loftus, G.R. (1980). On the permanence of stored information in the human brain. American Psychologist, 35, 409-420. Loftus, E.F., Schooler, J., & Wagenaar, W.A. (1985). The fate of memory: Comment on McCloskey and Zaragoza. Journal of Experimental Psychology: General, 114, 375-380. Loftus, G.R., & Loftus, E.F. (1976). Human memory: The processing of information. Hillsdale, N.J.: Erlbaum. London, P., Conant, M., & Davison, G.C. (1966). More hypnosis in the unhypnotizable: Effects of hypnosis and exhortation on rote learning. Journal of Personality, 34, 71-79. London, P., & Fuhrer, M. (1961). Hypnosis, motvation, and performance. Journal of Personality, 29, 321-333. Lorayne, H., & Lucas, J. (1974). The memory book. New York: Stein & Day. Luria, S. (1968). The mind of a mnemonist: A little book about a big memory. New York: Basic Books. Lynn, S.J., Milano, M., & Weekes, J.R. (1991). Hypnosis and pseudomemories: The effects of prehypnotic expectancies. Journal of Personality & Social Psychology, 60, 318-326. MacLeod, C.M. (1989). Directed forgettng affets both direct and indirect tests of memory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 13-21. Mandler, G. (1967). Organization and memory. In K.W. Spence & J.T. Spence (Eds.), The psychology of learning and motivation, Vol. 1 (pp. 327-372). New York: Academic Press. Mandler, G. (1979). Organization, memory, and mental structures. In C.R. Puff (Ed.), Memory organization and structure (pp. 303-319). New York: Academic Press. Mandler, G. (1980). Recognizing: The judgment of previous occurrence. Psychological Review, 87, 252-271. Mandler, J. (1979). Categorical and schematic organization in memory. In C.R. Puff (Ed.), Memory organization and structure (pp. 259-299). New York: Academic. Marcuse, F.L. (1959). Hypnosis: Fact and fiction. Harmondsworth: Penguin. McCann, T., & Sheehan, P.W. (1988). Hypnotically induced pseudomemories: Sampling their conditions among hypnotizable subjects. Journal of Personality & Social Psychology, 54, 339-346. McCloskey, M., & Zaragoza, M. (1985). Misleading postevent information and memory for events: Arguments and evidence against memory impairment hypotheses. Journal of Experimental Psychology: General, 114, 1-16. McConkey, K.M. (1986). Opinions about hypnosis and self-hypnosis before and after hypnotic testing. International Journal of Clinical & Experimental Hypnosis, 34, 311-319. McConkey, K.M., Bryant, R.A., Bibb, B.C., & Kihlstrom, J.F. (1991). Trance logic in hypnosis and imagination. Journal of Abnormal Psychology, 100, 464-472. McConkey, K.M., & Jupp, J.J. (1985). Opinions about the forensic use of hypnosis. Australian Psychologist, 20, 283-291. McConkey, K.M., & Jupp, J.J. (1985-1986). A survey of opinions about hypnosis. British Journal of Experimental & Clinical Hypnosis, 3, 87-93. McConkey, K.M., & Kinoshita, S. (1985-1986). Creating memories and reports: Comment on Spanos & McLean. British Journal of Experimental & Clinical Hypnosis, 3, 162-166. 
McConkey, K.M., Labelle, L., Bibb, B.C., & Bryant, R.A. (1990). Hypnosis and suggested pseudomemory: The relevance of text context. Australian Journal of Psychology, 42, 197-205. McConkey, K.M., & Sheehan, P.W. (1981). The impact of videotape playback of hypnotic events on posthypnotic amnesia. Journal of Abnormal Psychology, 90, 46-54. McConkey, K.M., & Sheehan, P.W., & Cross, D.G. (1980). Posthypnotic amnesia: Seeing is not remembering. British Journal of Social & Clinical Psychology, 19, 99-107. McDaniel, M.A., & Pressley, M. (1987). Imagery and related mnemonic processes: Theories, individual differences, and applications. New York: Springer Verlag. McGeoch, G.O. (1935). The conditions of reminiscence. American Journal of Psychology, 47, 65-89. Metcalfe, J. (1990). Composite holographic associative recall model (CHARM) and blended memories in eyewitness testimony. Journal of Experimental Psychology: General, 119, 145-160. Mitchell, M.B. (1932). Retroactive inhibition and hypnosis. Journal of General Psychology, 7, 343-358. Morris, P.E. (1978). Sense and nonsense in traditional mnemonics. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 155-163). London: Academic. Morton, J. (1991). The bankruptcy of everyday memory. American Psychologist, 46, 32-33. Mross, E.F., Klein, S.B., Loftus, J., & Kihlstrom, J.F. (1990). Levels of processing and levels of recall in hypermnesia. Unpublished manuscript, University of Colorado, Boulder, Co. Nash, M.R. (1987). What, if anything, is regressed about hypnotic age regression? A review of the empirical literature. Psychological Bulletin, 102, 42-52. Nash, M.R., Drake, S.D., Wiley, S., Khalsa, S., & Lynn, s.J. (1986). The accuracy of recall by hypnotically age regressed subjects. Journal of Abnormal Psychology, 95, 298-300. Nash, M.R., Johnson, L.S., & Tipton, R. (1979). Hypnotic age regression and the occurrence of transitional object relationships. Journal of Abnormal Psychology, 88, 547-555. Nash, M.R., Lynn, S.J., Stanley, S., Frauman, D.C., & Rhue, J. (1985). Hypnotic age regression and the importance of assessing interpersonally relevant affect. International Journal of Clinical & Experimental Hypnosis, 33, 224-235. Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts. Neisser, U. (1976). Cognition and reality. San Francisco: Freeman. Neisser, U. (1978). Memory: What are the important questions? In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 3-24). London: Academic. Neisser, U. (1991). A case of misplaced nostalgia. American Psychologist, 46, 34-36. Nelson, T.O. (1978). Detecting small amounts of information in memory: Savings for nonrecognized items. Journal of Experimental Psychology: Human Learning & Memory, 4, 453-468. Nogrady, H., McConkey, K.M., & Perry, C. (1985). Enhancing visual memory: Trying hypnosis, trying imagination, trying again. Journal of Abnormal Psychology, 94, 195-204. O'Connell, D.N. (1966). Selective recall of hypnotic susceptibility items: Evidence for repression or enhancement? International Journal of Clinical & Experimental Hypnosis, 14, 150-161. O'Connell, D.N., Shor, R.E., & Orne, M.T. (1970). Hypnotic age regression: An empirical and methodological analysis. Journal of Abnormal Psychology Monograph, 76(3, Pt. 2), 1-32. Orne, M.T. (1951). The mechanisms of hypnotic age regression: An experimental study. Journal of Abnormal & Social Psychology, 46, 213-225. Orne, M.T. (1959). The nature of hypnosis: Artifact and essence. 
Journal of Abnormal & Social Psychology, 58, 277-299. Orne, M.T. (1962). On the social psychology of the psychological experiment: With special reference to demand characteristics and their implications. American Psychologist, 17, 776-783. Orne, M.T. (1979). The use and misuse of hypnosis in court. International Journal of Clinical & Experimental Hypnosis, 27, 311-341. Orne, M.T., Dinges, D.F., & Orne, E.C. (1990). Rock v. Arkansas: Hypnosis, the defendant's privilege. International Journal of Clinical & Experimental Hypnosis, 38, 250-265. Orne, M.T., Soskis, D.A., Dinges, D.F., & Orne, E.C. (1984). Hypnotically induced testimony. In G.L. Wells & E.F. Loftus (Eds.), Eyewitness testimony: Psychological perspectives (pp. 171-213). Cambridge: Cambridge University Press. Orne, M.T., Whitehouse, W.G., Dinges, D.F., & Orne, E.C. (1988). Reconstructing memory through hypnosis: Forensic and clinical implications. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 21-63). New York: Guilford. Paivio, A. (1971). Imagery and cognitive processes. New York: Holt, Rinehart, & Winston. Paivio, A. (1986). Mental representations: A dual-coding approach. New York: Oxford University Press. Parker, P.D., & Barber, T.X. (1964). Hypnosis, task-motivating instructions, and learning performance. Journal of Abnormal & Social Psychology, 69, 499-504. Patten, E.F. (1932). Does posthypnotic amnesia apply to practice effects? Journal of General Psychology, 7, 196-201. Payne, D.G. (1986). Hypermnesia for pictures and words: Testing the recall level hypothesis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12, 16-29. Payne, D.G. (1987). Hypermnesia and reminiscence in recall: A historical and empirical review. Psychological Bulletin, 101, 5-27. People vs. Shirley, 31 Cal. 3d 18; 641 P.2d 775; 181 Cal. Rptr. 243, 1982. Perry, C., & Laurence, J.-R. (1990). Hypnosis with a criminal defendant and a crime witness: Two recent related cases. International Journal of Clinical & Experimental Hypnosis, 38, 266-282. Perry, C.W., Laurence, J.-R., D'Eon, J., & Tallant, B. (1988). Hypnotic age regression techniques in the elicitation of memories: Applied uses and abuses. In H.M. Pettainati (Ed.), Hypnosis and memory (pp. 128-154). New York: Guilford. Perry, C., & Walsh, B. (1978). Inconsistencies and anomalies of response as a defining characteristic of hypnosis. Journal of Abnormal Psychology, 87, 574-577. Pettinati, H.M., & Evans, F.J. (1978). Posthypnotic amnesia: Evaluation of selective recall of successful experiences. International Journal of Clinical & Experimental Hypnosis, 26, 317-329. Pettinati, H.M., Evans, F.J., Orne, E.C., & Orne, M.T. (1981). Restricted use of success cues in retrieval during posthypnotic amnesia. Journal of Abnormal Psychology, 90, 345-353. Pressley, M., & McDaniel, M.A. (1988). Doing mnemonics research well: Some general guidelines and a study. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory: Current research and issues (Vol. 2, pp. 409-414). Chichester: Wiley. Pressley, M., & Mullally, J. (1984). Alternative research paradigms in the analysis of mnemonics. Contemporary Educational Psychology, 9, 48-60. Putnam, W.H. (1979). Hypnosis and distortions in eyewitness memory. International Journal of Clinical & Experimental Hypnosis, 27, 437-448. Pylyshyn, Z. (1981). The imagery debate: Analogue media versus tacit knowledge. Psychological Review, 88, 16-45. Rabinowitz, J.C., Mandler, G., & Patterson, K.E. (1977). 
Determinants of recognition and recall: Accessibility and generation. Journal of Experimental Psychology: General, 106, 302-329. Radtke, H.L., & Spanos, N.P. (1981). Temporal sequencing during posthypnotic amnesia: A methodological critique. Journal of Abnormal Psychology, 90, 476-485. Radtke, H.L., Thompson, V.A., & Egger, L.A. (1987). Use of retrieval cues in breaching hypnotic amnesia. Journal of Abnormal Psychology, 96, 335-340. Raginsky, B. (1969). Hypnotic recall of aircrash cause. International Journal of Clinical & Experimental Hypnosis, 17, 1-19. Reed, H. (1970). Studies of the interference process in short-term memory. Journal of Experimental Psychology, 84, 452-457. Register, P.A., & Kihlstrom, J.F. (1986). Hypnotic effects on hypermnesia. International Journal of Clinical and Experimental Hypnosis, 35, 155-170. Register, P.A., & Kihlstrom, J.F. (1987). Hypnosis and interrogative suggestibility. Personality and Individual Differences, 9, 549-558. Reiff, R., & Scheerer, M. (1959). Memory and hypnotic age regression: Develpmental aspects of cognitive function explored through hypnosis. New York: International Universities Press. Reiser, M. (1976). Hypnosis as an aid in criminal investigation. Police Chief, 46, 39-40. Reyher, J. (1967). Hypnosis in research on psychopathology. In J.E. Gordon (Ed.), Handbook of clinical and experimental hypnosis (pp. 110-147). New York: Macmillan. Richardson-Klavehn, A., & Bjork, R.A. (1988). Measures of memory. Annual Review of Psychology, 39, 475-543. Rock v. Arkansas, 288 Ark. 566; 708 S.W. 2d 78, 1986; 55 L.W. 4925, 1987; 107 S. Ct. 2704, 1987. Roediger, H.L. (1980). The effectiveness of four mnemonics in ordering recall. Journal of Experimental Psychology: Human Learning & Memory, 6, 558-567. Roediger, H.L. (1982). Hypermnesia: The importance of recall time and asymptotic level of recall. Journal of Verbal Learning and Verbal Behavior, 21, 662-665. Roediger, H.L. (1990). Implicit memory: A commentary. Bulletin of the Psychonomic Society, 28, 373-380. Roediger, H.L. (1991). They read an article? A commentary on the everyday memory controversy. American Psychologist, 46, 37-40. Roediger, H.L., & Challis, B.H. (1988). Hypermnesia: Improvements in recall with repeated testing. In C. Izawa (Ed.), Current issues in cognitive processes: The Tulane Floweree Symposium on Cognition (p. 175-199). Hillsdale, N.J.: Erlbaum. Roediger, H. L., & Payne, D. G. (1982). Hypermnesia: The role of repeated testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 66-72. Roediger, H. L., Payne, D. G., Gillespie, G. L., & Lean, D. S. (1982). Hypermnesia as determined by level of recall. Journal of Verbal Learning and Verbal Behavior, 21, 635-655. Roediger, H. L., & Thorpe, L. A. (1978). The role of recall time in producing hypermnesia. Memory and Cognition, 6, 286-305. Roediger, H.L., Weldon, M.S., & Challis, B.H. (1989). Explaining dissociations between implicit and explicit measures of retention: A processing account. In H.L. Roediger & F.I.M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 3-41). Hillsdale, N.J.: Erlbaum. Rosenhan, D., & London, P. (1963). Hypnosis in the unhypnotizable: A study in rote learning. Journal of Experimental Psychology, 65, 30-34. Rosenthal, B.G. (1944). Hypnotic recall of material learned under anxiety and non-anxiety producing conditions. Journal of Experimental Psychology, 34, 369-389. Sanders, G.S., & Simmons, W.L. (1983). 
Use of hypnosis to enhance eyewitness accuracy: Does it work? Journal of Applied Psychology, 68, 70-77. Schacter, D.L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, & Cognition 13, 501-518. Schacter, D.L. (1990). Perceptual representation systems and implicit memory: Toward a resolution of the multipme memory systems debate. In A. Diamond (Ed.), Developmental and neural bases of higher cognitive function. Annals of the New York Academy of Sciences 608, 543-571. Schacter, D.L., Harbluk, J.L., & McLachlan, D.R. (1984). Retrieval without recollection: An experimental analysis of source amnesia. Journal of Verbal Learning & Verbal Behavior, 23, 593-611. Schacter, D.L., & Kihlstrom, J.F. (1989). Functional amnesia. In F. Boller & J. Graffman (Eds.), Handbook of neuropsychology (Vol. 3, pp. 209-231). Amsterdam: Elsevier. Scharf, B., & Zamansky, H.S. (1963). Reduction of word-recognition threshold under hypnosis. Perceptual & Motor Skills, 17, 499-510. Scheflin, A.W., & Shapiro, J.L. (1989). Trance on trial. New York: Guilford. Schul, Y., & Burnstein, E. (1985). When discounting fails: Conditions under which individuals use discredited information in making a judgment. Journal of Personality & Social Psychology, 49, 894-903. Schuyler, B.A., & Coe, W.C. (1981). A physiological investigation of volitional and nonvolitional experience during posthypnotic amnesia. Journal of Personality & Social Psychology, 40, 1160-1169. Schuyler, B.A., & Coe, W.C. (1989). More on volitional experiences and breaching posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 37, 320-331. Sears, A.B. (1955). A comparison of hypnotic and waking learning of the International Morse Code. Journal of Clinical & Experimental Hypnosis, 3, 215-221. Shank, R., & Abelson, R. (1977). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Hillsdale, N.J.: Erlbaum. Shapiro, S. R., & Erdelyi, M. H. (1974). Hypermnesia for pictures but not for words. Journal of Experimental Psychology, 1974, 103, 1218-1219. Sheehan, P.W. (1988a). Confidence, memory, and hypnosis. In H.M. Pettinati (Ed.), Hypnosis and memory (pp. 95-127). New York: Guilford. Sheehan, P.W. (1988b). Memory distortion in hypnosis. International Journal of Clinical & Experimental Hypnosis, 36, 296-311. Sheehan, P.W., & Grigg, L. (1985). Hypnosis and the acceptance of an implausible cognitive set. British Journal of Experimental & Clinical Hypnosis, 3, 5-12. Sheehan, P.W., Grigg, L., & McCann, T. (1984). Memory distortion following exposure to false information in hypnosis. Journal of Abnormal Psychology, 93, 259-265. Sheehan, P.W., & Perry, C.W. (1977). Methodologies of hypnosis: A critical appraisal of contemporary paradigms of hypnosis. Hillsdale, N.J.: Erlbaum. Sheehan, P.W., Statham, D., & Jamieson, G.A. (1991). Pseudomemory effects over time in the hypnotic setting. Journal of Abnormal Psychology, 100, 39-44. Sheehan, P.W., & Tilden, J. (1983). Effects of suggestibility and hypnosis on accurate and distorted retrieval from memory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 9, 283-293. Sheehan, P.W., & Tilden, J. (1984). Real and simulated occurrences of memory distortion following hypnotic induction. Journal of Abnormal Psychology, 93, 47-57. Sheehan, P.W., & Tilden, J. (1986). The consistency of occurrences of memory distortion following hypnotic induction. International Journal of Clinical & Experimental Hypnosis, 34, 122-137. 
Shepard, R.N., & Cooper, L.A. (1982). Mental images and their transformations. Cambridge: MIT Press. Shields, I.W., & Knox, V.J. (1986). Level of processing as a determinant of hypnotic hypermnesia. Journal of Abnormal Psychology, 95, 358-364. Shimamura, A.P. & Squire, L.R. (1987). A neuropsychological study of fact memory and source amnesia. Journal of Experimental Psychology: Learning, Memory, & Cognition, 13, 464-473. Silva, C.E., & Kirsch, I. (1987). Breaching amnesia by manipulating expectancy. Journal of Abnormal Psychology, 96, 325-329. Singer, J.L. (1990). Repression and dissociation: Implications for personality theory, psychopathology, and health. Chicago: University of Chicago Press. Sloane, M.C. (1981). A comparison of hypnosis vs. waking state and visual vs. non-visual recall instructions for witness/victim memory retrieval in actual major crimes. Doctoral dissertation, Florida State University. Ann Arbor, Mi.: University Microfilms International, #8125873. Slotnick, R.S., Liebert, R.M., & Hilgard, E.R. (1965). The enhancement of muscular performance in hypnosis through exhortation and involving instructions. Journal of Personality, 33, 37-45. Slotnick, R., & London, P. (1965). Influence of instructions on hypnotic and nonhypnotic performance. Journal of Abnormal Psychology, 70, 38-46. Smith, M.C. (1983). Hypnotic memory enhancement of witnesses: Does it work? Psychological Bulletin, 94, 387-407. Smith, S. (1988). Environmental context-dependent memory. In G.M. Davies & D.M. Thomson (Eds.), Memory in context: Context in memory (pp. 13-34. London: Wiley. Spanos, N.P. (1986). Hypnotic behavior: A social-psychological interpretation of amnesia, analgesia, and "trance logic". Behavioral & Brain Sciences, 9, 449-502. Spanos, N.P., Bertrand, L., & Perlini, A.H. (1988). Reduced clustering during hypnotic amnesia for a long word list: Comment. Journal of Abnormal Psychology, 97, 378-380. Spanos, N.P., & Bodorik, H.L. (1977). Suggested amnesia and disorganized recall in hypnotic and task-motivated subjects. Journal of Abnormal Psychology, 86, 295-305. Spanos, N.P., deGroot, H.P., & Gwynn, M.I. (1987). Trance logic as incomplete responding. Journal of Personality & Social Psychology, 53, 911-921. Spanos, N.P., Gwynn, M.I., Comer, S.L., Baltruweit, W.J., & de Groh, M. (1989). Are hypnotically induced pseudomemories resistant to cross-examination? Law & Human Behavior, 13, 271-289. Spanos, N.P., Della Malva, C.L., Gwynn, M.I., & Bertrand, L.D. (1988). Social psychological factors in the genesis of posthypnotic source amnesia. Journal of Abnormal Psychology, 1988, 97, 322-329. Spanos, N.P., James, B., & Degroot, H.P. (1990). Detection of simulated hypnotic amnesia. Journal of Abnormal Psychology, 99, 179-182. Spanos, N.P., & McLean, J. (1985-1986a). Hypnotically created pseudomemories: Memory distortions or reporting biases? British Journal of Experimental & Clinical Hypnosis, 3, 155-159. Spanos, N.P., & McLean, J. (1985-1986b). Hypnotically created false reports do not demonstrate pseudomemories. British Journal of Experimental & Clinical Hypnosis, 3, 160-161. Spanos, N.P., Radtke, H.L., & Bertrand, L.D. (1984). Hypnotic amnesia as a strategic enactment: Breaching amnesia in highly susceptible subjects. Journal of Personality & Social Psychology, 47, 1155-1169. Spanos, N.P., Radtke, H.L., & Dubreuil, D.L. (1982). Episodic and semantic memory in posthypnotic amnesia: A reevaluation. Journal of Personality & Social Psychology, 43, 565-573. 
Spanos, N.P., Tkachyk, M.E., Bertrand, L.D., & Weekes, J.R. (1984). The dissipation hypothesis of posthypnotic amnesia: More disconfirming evidence. Psychological Reports, 55, 191-196. Spence, J.D. (1984). The memory palace of Matteo Ricci. New York: Viking Penguin. Stalnaker, J.M., & Riddle, E.E. (1932). The effect of hypnosis on long-delayed recall. Journal of General Psychology, 6, 429-440. State v. Armstrong, 110 Wis. 2d 555; 329 N.W. 2d 386; 461 U.S. 946, 1983. State v. Hurd, 86 N.J. 525; 432 A. 2d 86, 1981. State v. Mack, 292 N.W. 2d 764; 27 Cr. L. 1043, 1980. (Minn.) St. Jean, R. (1980). Hypnotic time distortion and learning: Another look. Journal of Abnormal Psychology, 89, 20-24. St. Jean, R. (1989). Hypnosis and time perception. In N.P. Spanos & J.F. Chaves (Eds.), Hypnosis: The cognitive-behavioral perspective (pp. 175-186). Buffalo, N.Y.: Prometheus Press. St. Jean, R., & Coe, W.C. (1981). Recall and recognition memory during posthypnotic amnesia: A failure to confirm the disrupted-search hypothesis and the memory disorganization hypothesis. Journal of Abnormal Psychology, 90, 231-241. Sutcliffe, J.P. (1960). "Credulous" and "sceptical" views of hypnotic phenomena: A review of certain evidence and methodology. International Journal of Clinical & Experimental Hypnosis, 8, 73-101. Sutcliffe, J.P. (1961). "Credulous" and "skeptical" views of hypnotic phenomena: Experiments in esthesia, hallucination, and delusion. Journal of Abnormal & Social Psychology, 62, 189-200. Thorndike, E.L. (1913). Educational psychology: The psychology of learning. Vol. 2. New York: Teachers College Press. Timm, H.W. (1981). The effect of forensic hypnosis techniques on eyewitness recall and recognition. Journal of Police Science & Administration, 9, 188-194. Tkachyk, M.E., Spanos, N.P., & Bertrand, L.D. (1985). Variables affeting subjective organization during posthypnotic amnesia. Journal of Research in Personality, 19, 95-108. True, R.M. (1949). Experimental control in hypnotic age regression. Science, 110, 583. Tulving, E. (1964). Intratrial and intertrial retention: Notes toward a theory of free recall verbal learning. Psychological Review, 71, 219-237. Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6, 175-184. Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 381-403). New York: Academic. Tulving, E. (1974). Cue-dependent forgetting. American Scientist, 62, 74-82. Tulving, E. (1976). Ecphoric processes in recall and recognition. In J. Brown (Ed.), Recall and recognition (pp. 37-73). New York: Wiley. Tulving, E. (1983). Elements of episodic memory. Oxford: Clarendon Press. Tulving, E. (1991). Memory research is not a zero-sum game. American Psychologist, 46, 41-42. Tulving, E., & Pearlstone, Z. (1966). Availability versus accessibility of information in memory for words. Journal of Verbal Learning and Verbal Behavior, 5, 381-391. Tulving, E., & Schacter, D.L. (1990). Priming and human memory systems. Science, 247, 301-306. Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352-373. Tversky, B., & Tuchin, M. (1989). A reconciliation of the evidence on eyewitness testimony: Comments on McCloskey and Zaragoza. Journal of Experimental Psychology: General, 118, 86-91. Udolf, R. (1983). Forensic hypnosis: Psychological and legal aspects. Lexington, Ma.: Heath. Udolf. 
(1990). Rock v. Arkansas: A critique. International Journal of Clinical & Experimental Hypnosis, 38, 239-249. Wagner, D.A. (1978a). Culture and mnemonics. In M.M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 180-188). London: Academic. Wagner, D.A. (1978b). Memories of Morocco: The influence of age, schooling, and environment on memory. Cognitive Psychology, 10, 1-28. Wagstaff, G.F., & Ovenden, M. (1979). Hypnotic time distortion and free-recall learning: An attempted replication. Psychological Research, 40, 291-298. Waldfogel, S. (1948). The frequency and affective character of childhood memories. Psychological Monographs, 62, 39-48. Warner, K.E. (1979). The use of hypnosis in the defense of criminal cases. International Journal of Clinical & Experimental Hypnosis, 27, 417-436. Watkins, J.G. (1949). Hypnotherapy of war neuroses: A clinical psychologist's casebook. New York: Ronald Press. Wells, W.R. (1940). The extent and duration of experimentally induced amnesia. Journal of Psychology, 2, 137-131. White, R.W., Fox, G.F., & Harris, W.W. (1940). Hypnotic hypermnesia for recently learned material. Journal of Abnormal & Social Psychology, 35, 88-103. Whitehouse, W.G., Dinges, D.F., Orne, E.C., & Orne, M.T. (1991). Hypnotic hypermnesia: Enhanced memory accessibility or report bias? Journal of Experimental Psychology: Learning, Memory, & Cognition xx, xxx-xxx. Wilson, L., Greene, E., & Loftus, E.F. (1986). Beliefs about forensic hypnosis. International Journal of Clinical & Experimental Hypnosis, 34, 110-121. Wilding, J.M., & Valentine, E. (1985). One man's memory for prose, faces, and names. British Journal of Psychology, 76, 215-219. Williamsen, J.A., Johnson, H.J., & Eriksen, C.W. (1965). Some characteristics of posthypnotic amnesia. Journal of Abnormal Psychology, 70, 123-131. Wilson, L., & Kihlstrom, J.F. (1986). Subjective and categorical organization of recall in posthypnotic amnesia. Journal of Abnormal Psychology, 95, 264-273. Winograd, T. (1975). Computer memories: A metaphor for memory organization. In C.N. Cofer (Ed.), The structure of human memory (pp. 133-161). San Francisco: Freeman. Wollen, K.A., Weber, A., & Lowry, D. (1972). Bizarreness versus interaction of mental images as determinants of learning. Cognitive Psychology, 3, 518-523. Wood, G. (1967). Mnemonic systems in recall. Journal of Educational Psychology Monograph, 58(6, Pt. 2). Worthington, T.S. (1979). The use in court of hypnotically enhanced testimony. International Journal of Clinical & Experimental Hypnosis, 27, 402-416. Wyer, R.S., & Budesheim, T.L. (1987). Person memory and judgments: The impact of information that one is told to disregard. Journal of Personality & Social Psychology, 53, 14-29. Yates, A. (1961). Hypnotic age regression. Psychological Bulletin, 88, 429-440. Yates, F.A. (1966). The art of memory. Chicago: University of Chicago Press. Young, P.C. (1925). An experimental study of mental and physical functions in the normal and hypnotic states. American Journal of Psychology, 36, 214-232. Young, P.C. (1926). An experimental study of mental and physical functions in the normal and hypnotic states: Additional results. American Journal of Psychology, 37, 345-356. Yuille, J.C. (1983). Imagery, memory, and cognition: Essays in honor of Allan Paivio. Hillsdale, N.J.: Erlbaum. Zamansky, H. (1985-1986). Hypnotically created pseudomemories: Memory distortions or reporting biases? British Journal of Experimental & Clinical Hypnosis, 3, 160-161. 
Zamansky, H.S., Scharf, B., & Brightbill, R. (1964). The effect of expectancy for hypnosis on prehypnotic performance. Journal of Personality, 32, 236-248. Zelig, M., & Beidelman, W.B. (1981). The investigative use of hypnosis: A word of caution. International Journal of Clinical & Experimental Hypnosis, 29, 401-412.
This student hand-out was created specifically to accompany the PhET simulation "Nuclear Fission". Appropriate for grades 8-12, it provides guided directions on using the simulation to ensure that students stay focused on learning goals. The simulation features a neutron gun that "fires" an accelerated neutron into a Uranium-235 nucleus. By using this printed guide, students will be prompted to think about what happens in a nuclear reaction, what makes a nucleus "fissionable", and how nuclear power containment vessels prevent a runaway chain reaction. The fission simulation, which must be open and displayed to complete this activity, is available from PhET at: Nuclear Fission. This lesson is part of PhET (Physics Education Technology Project), a large collection of free interactive simulations for science education.

AAAS Benchmark Alignments

9-12: 3C/H5. Human inventiveness has brought new risks as well as improvements to human existence.

4. The Physical Setting
4D. The Structure of Matter
6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope.
9-12: 4D/H1. Atoms are made of a positively charged nucleus surrounded by negatively charged electrons. The nucleus is a tiny fraction of the volume of an atom but makes up almost all of its mass. The nucleus is composed of protons and neutrons, which have roughly the same mass but differ in that protons are positively charged while neutrons have no electric charge.
9-12: 4D/H2. The number of protons in the nucleus determines what an atom's electron configuration can be and so defines the element. An atom's electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. Atoms form bonds to other atoms by transferring or sharing electrons.
9-12: 4D/H3. Although neutrons have little effect on how an atom interacts with other atoms, the number of neutrons does affect the mass and stability of the nucleus. Isotopes of the same element have the same number of protons (and therefore of electrons) but differ in the number of neutrons.
9-12: 4D/H4. The nucleus of radioactive isotopes is unstable and spontaneously decays, emitting particles and/or wavelike radiation. It cannot be predicted exactly when, if ever, an unstable nucleus will decay, but a large group of identical nuclei decay at a predictable rate. This predictability of decay rate allows radioactivity to be used for estimating the age of materials that contain radioactive substances.

11. Common Themes
6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast, too complex, or too dangerous to study.
6-8: 11B/M4. Simulations are often useful in modeling events and processes.
6-8: 11D/M3. Natural phenomena often involve sizes, durations, and speeds that are extremely small or extremely large. These phenomena may be difficult to appreciate because they involve magnitudes far outside human experience.

Common Core State Standards for Mathematics Alignments

High School — Functions (9-12)
Linear, Quadratic, and Exponential Models (9-12)
F-LE.1.c Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another.
F-LE.3 Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
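The F-LE.3 alignment reflects what the simulation shows qualitatively: in an uncontrolled chain reaction each fission can trigger more than one new fission, so the number of fissions per generation grows geometrically and soon outstrips any linearly growing quantity. The following is a minimal illustrative sketch, not part of the PhET materials; the multiplication factor k = 2 and the linear comparison rate of 10 per generation are assumed values chosen only to make the comparison concrete.

```python
# Illustrative sketch only (not from the PhET activity): an idealized chain
# reaction in which each fission triggers k new fissions in the next
# generation, compared with a quantity that grows by a fixed amount per
# generation. Both k and the linear rate are assumed values.

def fissions_per_generation(k, generations):
    """Fissions occurring in each generation of an idealized chain reaction."""
    return [k ** g for g in range(generations + 1)]

def linear_quantity(rate, generations):
    """A quantity growing by a fixed amount per generation, for comparison."""
    return [rate * g for g in range(generations + 1)]

if __name__ == "__main__":
    k, rate, gens = 2, 10, 10   # assumed illustrative parameters
    chain = fissions_per_generation(k, gens)
    linear = linear_quantity(rate, gens)
    for g in range(gens + 1):
        print(f"generation {g:2d}: chain reaction = {chain[g]:5d}, linear = {linear[g]:5d}")
```

Printing the two columns side by side makes the F-LE.3 point: the exponential column starts smaller but overtakes the linear one within a few generations, which is why containment (absorbing excess neutrons) matters in a reactor.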
Blaisdell, M. (2009, July 8). PhET Teacher Activities: Nuclear Fission Simulation - Student Guide [MS Word document]. Physics Education Technology Project. Retrieved January 29, 2015, from http://phet.colorado.edu/en/contributions/view/3249
Autism Spectrum Disorder (ASD) is defined by the American Psychiatric Association as “a complex developmental condition that involves persistent challenges in social interaction, speech and nonverbal communication, and restricted/repetitive behaviors.” There is a wide range of effects and severity of symptoms experienced by people who are diagnosed with ASD. The U.S. Centers for Disease Control and Prevention estimates that autism spectrum disorders are present in 1 in 59 children. ASD is about four times as prevalent in boys as in girls, with 1 in 37 boys diagnosed as having ASD, compared to 1 in 151 girls. The most popular treatment for children with autism spectrum disorder is applied behavior analysis (ABA), which the Association for Science in Autism Treatment describes as the use of interventions to improve “socially important behavior.” Behavior analytic interventions are based on learning theory and methods that have been studied scientifically and shown to be effective in improving the lives of people with autism spectrum disorders. The antecedent-behavior-consequence (ABC) method of assessing functional behavior can be combined with an intervention such as task analysis as the basis for effective interventions in children with autism spectrum disorder. These types of assessments and interventions work to “increase appropriate skills and decreas[e] maladaptive behaviors,” as Psych Central reports. The goal of a task analysis is to break down and simplify complex tasks in order to provide step-by-step guidance on how to complete specific behaviors. This guide describes several specific task analysis techniques and presents examples of their application in diverse settings.
What Is Task Analysis?
The National Professional Development Center on Autism Spectrum Disorders defines task analysis as a teaching process that breaks down complex activities into a series of simple steps that students are able to learn more easily. Researchers have shown that task analysis meets the criteria for evidence-based practice by improving adoption of “appropriate behaviors and communication skills” by children in preschool, elementary school, and middle school. Task analysis techniques fall into two broad categories, as the Autism Classroom blog explains:
- The desired skill can be broken into discrete steps that are performed in sequence, such as the appropriate way to wash one’s hands. The steps are linked via “chaining,” in which the completion of each step serves as a cue to begin the next step.
- Alternatively, a task can be divided into short chunks of time, so a 20-minute activity may be broken into five four-minute segments. This approach is frequently associated with “shaping,” which teaches new behaviors by reinforcing “successive approximations” of the behavior rather than repeating previous approximations, as the Association for Science in Autism Treatment explains.
However, a simple definition of what task analysis is doesn’t explain why the approach has become so important in educating children with ASD. Three characteristics are vital to the success of task analysis as a teaching method:
- Consistency: If three different people demonstrate to a student how to perform a specific activity, such as brushing teeth, the student will likely be shown three different methods, because each “teacher” performs the activity in a unique way. This can leave the student confused. Task analysis ensures that a single approach is presented and reinforced in all learning situations.
- Individualization: Each student has unique strengths and weaknesses, so task analysis methods can be customized to meet the student’s specific circumstances. For example, when teaching a student to remain in a group for 20 minutes via shaping, the task increments can be varied to the abilities of the student, with some responding best to two-minute chunks and others to five-minute blocks. - Systematic instruction: One challenge students with ASD face is dealing with the many variables that complicate learning. Task analysis relies on “discrete trial programs” that divide activities into small steps that culminate in the end goal. For example, students who have learned four of the eight steps entailed in tying their shoes have successfully mastered those four steps, although they have not yet achieved the end goal. The task analysis technique of chaining has two primary components, as ThoughtCo. explains: - Forward chaining relies on the student learning from the start of the task sequence through each step of the task in sequence, so step two begins only after step one is completed. Each step is first modeled by the instructor and then imitated by the student, although some students will require hand-over-hand prompting followed by “fading” of the prompt as the student exhibits increasing mastery of the step. - Backward chaining begins by teaching the student the last step of the task, first by having the student observe the teacher and then by having the student assist the teacher. After the last step has been grasped (though not yet perfected), the instructor turns to the second-to-last step of the process and continues backward to the initial steps. An example is learning to do laundry: the student is first taught how to remove the clothes from the dryer and fold them, then how to transfer the clothes from the washer to the dryer, and all preceding steps in the process one-by-one in reverse order. Other effective task analysis techniques include these two approaches: - Discrete trial instruction: The teacher gives the student a short, clear instruction and provides a prompt to help the student complete the instruction, whether by modeling the target response or guiding the student’s own response. As the student progresses, the prompt is removed gradually. When the student responds accurately, the teacher offers immediate positive feedback; when the student’s response is incorrect, the teacher demonstrates or guides the student to perform the correct response. - Modeling: The student is shown the target behavior and is then instructed to imitate that behavior. Modeling has proven effective in teaching social, play, and self-help skills. What Is the Purpose of Task Analysis? The goal of applied behavior analysis is to help people with ASD learn the fundamental skills that will allow them to lead independent lives. Task analysis is one of several methods used by applied behavior analysts to understand and modify a person’s behavior. The Autism Classroom describes task analysis as both “unexciting” and “critical to systematic instruction.” The advantages of task analysis over other ABA approaches are explained by Autism Speaks: - Task analysis is easy to adapt to the needs of each individual learner. - The techniques can be applied in multiple settings, including classrooms, homes, and the community. - The skills taught via task analysis are practical in the student’s everyday life. - Task analysis can be used in one-on-one instruction and in group settings. 
When preparing an ABA program for a student, applied behavior analysts begin by assessing the student’s skills, as well as the goals and preferences of the student and the student’s family. Age appropriate skills evaluated in the initial assessment serve as the foundation for the student’s specific treatment goals. These skills include the following: - Communication and language skills - Social interaction - Self-help (hygiene, healthy living, etc.) - Play and relaxation activities - Motor skills - Academic skills The primary use of task analysis in ABA settings is to teach activities for daily living (ADLs), as Total Spectrum explains. ADLs are actions that most people complete on a daily basis, such as setting a table for dinner or purchasing an item and asking for change. For people with autism spectrum disorder, however, these skills are especially important as these types of activities serve as the foundation for their independence. Individuals with autism spectrum disorder gain a better understanding of basic living skills by focusing on the mastery of individual steps in a complex process. Task analysis can be applied to any process that can be broken into multiple steps. Once the steps have been identified and the directions created, instructors devise a learning plan that is customized to the needs and goals of the student. The instruction often relies heavily on visual support tools, such as cards, small replicas of objects, or the objects themselves. In addition to helping the student with autism spectrum disorder, task analysis can improve the quality of life for all family members. Strong skills in communication, interpersonal relations, and social interactions help enable people with ASD to lead successful, independent lives. Autism Speaks outlines the purpose of task analysis and the many ways task analysis and other ABA approaches benefit individuals with ASD, their families, and their communities: - Task analysis replaces problem behaviors with new skills, so students learn “what to do” rather than simply “what to stop doing.” - Reinforcement increases on-task positive behaviors and minimizes negative behaviors. - Tasks that teach self-monitoring and self-control engender skills that are easily transferred to social and job-related capabilities. - Responding positively to a student’s behavior prevents unintentionally rewarding problem behavior. - Students are better able to focus on and comply with specific tasks, which motivates them to perform. - By improving cognitive skills, the tasks make it easier for students to learn other academic subjects. - Learning appropriate behaviors in specific situations helps students generalize skills and apply them outside the classroom. Demonstrating the Task Analysis for Brushing Teeth Teeth brushing is a daily routine for dental hygiene that most adults perform with little conscious thought, but it is an example of an activity that can be challenging for children with autism spectrum disorder. Behavioral Health Works describes the task analysis for brushing teeth. The teaching begins by reinforcing the reason for the activity: to have clean, healthy teeth. The next steps may seem intuitive to adults, but the process can be formidable for children who have never brushed their teeth themselves and may fear the sensory components of teeth brushing or making a mistake. By dividing the task into a sequence of discrete actions, children are more confident that they can perform each subtask correctly. 
Task analysis has been shown to teach these types of skills much more quickly than alternative instruction methods. Few adults would guess that the relatively simple act of brushing one’s teeth is comprised of at least 18 separate operations: - Pick up the toothbrush. - Turn on the water tap. - Wash and rinse the toothbrush. - Turn off the water. - Pick up the toothpaste tube. - Remove the cap from the tube. - Place a dab of toothpaste on the bristles of the toothbrush. - Put the cap back on the tube of toothpaste. - Use the bristle end of the brush to scrub all of the teeth gently. (This step may need to be broken into several subtasks, such as, “Start brushing the teeth in the top left corner of your mouth, then brush the top center, then the top right, then the bottom right,” etc.) - After brushing all the teeth, spit the toothpaste into the sink. - Turn on the water. - Rinse off the toothbrush. - Place the toothbrush back into its holder. - Pick up a rinsing cup. - Fill it partially with water. - Turn off the water. - Rinse the mouth with water from the cup. - Spit the water into the sink. By breaking down the task into smaller activities, students are less likely to feel overwhelmed by the overall objective. However, students with ASD will likely need to master one or two of the steps at a time and then link the separate activities using either forward chaining or backward chaining, as ThoughtCo. describes: - For students who are able to learn multiple steps at one time, forward chaining can be used to link the steps in the proper sequence via modeling and verbal prompts. Once the student demonstrates mastery of the first few linked steps without guidance, the next linked steps of the task can be taught. - For students who lack strong language skills, backward chaining allows the teacher to perform the initial steps hand over hand while naming each step. This gives the student an opportunity to practice each step while simultaneously learning the corresponding vocabulary. Prompting is removed as the last steps of the process are taught, but reinforcement continues until the student has mastered the entire task. The task analysis for brushing teeth can be facilitated by creating a visual schedule that indicates when the student has completed each step. The student can review the visual schedule before beginning the task, or the schedule can be placed on the counter so the student can refer to it as each step is performed. Demonstrating the Task Analysis for Washing Hands One of the simplest and most effective ways to prevent illness — in oneself and in others — is by washing one’s hands. The CDC recommends that people wash their hands frequently each day: - Before and after preparing food - Before eating - Before and after treating a cut or wound - After using the bathroom - After blowing the nose, coughing, or sneezing - After touching an animal, animal feed, or animal waste - After handling pet food or pet treats - After touching garbage The CDC divides hand washing into five separate operations: - Wet the hands with clean running water, turn off the tap, and apply soap. - Rub the hands together with the soap to create a lather that covers the front and back of the hands and goes between the fingers and under the fingernails. - Scrub the hands for a minimum of 20 seconds. - Thoroughly rinse the hands under clean running water and then turn off the tap. - Dry the hands using a clean towel or air dryer. 
However, the task analysis for washing hands breaks down the process into several more discrete steps, as the New Behavioral Network describes: - Stand in front of the sink. - Turn on the water tap. - Run the water over the hands thoroughly. - Apply soap to the hands. - Turn off the water. - Scrub the hands for 20 seconds. - Turn the water back on. - Rinse the soap off the hands thoroughly. - Turn off the water. - Dry the hands. As with the task analysis for teeth brushing, breaking down the complexities of such basic hygiene tasks into smaller pieces helps individuals with autism spectrum disorder to build a chain of learning that completes the overall task when the separate steps are linked together. The forward and backward chaining taught as part of these exercises can be transferred to other social and employment situations. A Look at Other Task Analysis Examples The range of applications for task analysis in ABA therapy is limited only by the imagination of teachers and the needs of students. - Accessible ABA highlights the many ways chaining can be combined with task analysis to teach students with autism spectrum disorder using the methods that are most effective for the way these students learn. A task analysis example demonstrating the versatility of this approach is learning how to put on a pair of pants, which may include steps for sliding each foot into each pant leg one at a time, pulling the pants up, and buttoning and zipping them. - Think Psych offers the task analysis example of teaching students with autism spectrum disorder how to eat yogurt, steps for which include opening the refrigerator, taking the yogurt container out, removing the lid of the container, retrieving a spoon from the utensil drawer, using the spoon to eat the yogurt, throwing the empty yogurt container in the trash, and placing the dirty spoon in the dishwasher. - The Autism Community in Action explains how to use task analysis to teach a student with autism spectrum disorder how to fold a towel, which starts by laying the towel flat on a table, taking the top corners of the towel in each hand, bringing the top edge down to the bottom edge, bringing the left edge of the towel to the right edge, smoothing the towel flat, and placing the folded towel in a basket or closet. - ThoughtCo. provides an example of task analysis with backward chaining to help a student learn how to do laundry. The instruction begins when the load of laundry is completed: The student begins by removing the laundry from the dryer and folding it, and after this step is mastered, the student is shown how to set the dryer and push the start button. The instruction works backward step-by-step through the washing and drying process, culminating with lessons on how to sort the dirty laundry and load it into the washer. Preparing for a Satisfying Career in ABA Therapy Task analysis and other ABA techniques are part of a comprehensive evidence-based practice that teaches students with autism spectrum disorder the life skills they will need to live independently. Visual presentation approaches and breaking down complex tasks into a series of simple steps are keys to helping children with ASD process information quickly and simply. Graduate programs such as Regis College’s online Master of Science in Applied Behavior Analysis prepare students who are starting their careers or looking to advance in their field. Among the career options available to MS-ABA graduates are ABA training coordinator, clinical supervisor, and clinical director. 
Graduates often work at outpatient care centers or government agencies, or in private practice. Learn More About ABA Therapy Strategies Discover more about how Regis College’s online Master of Science in Applied Behavior Analysis degree program helps address the growing need for health professionals trained in task analysis and other ABA methods that help students with autism learn the skills they will need to lead independent lives.
What is an Artificial Neural Network?
How humans work.
So let's first look at how we, as humans, make decisions. Usually, we gather as much information as possible before we make a decision. When deciding what movie to watch, we usually decide based on
- past experience (watched movies)
- reviews
- friends' opinions
The key is to gather as much data as we can. Then, based on all of that data, we decide which pieces of information matter more. Say I trust the reviews more than my friends' opinions; I will give the reviews some sort of "priority". If the reviews tell me that it's a 2/10 and my friends tell me that it's a 100/10, then because I trust the reviews MORE than I trust my friends, I decide not to watch the movie. This concept of gathering data, prioritizing which pieces are more important, and deciding based on that is exactly how an ANN works. But with numbers and mathematics – because "robots".
How an ANN works.
An ANN works by taking the input data and filtering it through each hidden layer, using a mathematical formula at each step to predict the output. Essentially the same as our decision-making process. Say you wanna build a machine that can recognize handwritten numbers. Of course it is easy for us, but for a machine, recognizing handwritten words, or in this case numbers, is hard. Looking at a photo of handwritten digits, we can easily tell which number is which.
- 2 is 2
- 3 is 3
- 0 is 0
Effortlessly, because we recognize them, no matter how weird the handwriting is. Even if we have never seen that particular handwriting before, because we recognize patterns. But to get a machine to do the same thing, it needs to learn it first. This is where an ANN comes in. And we are going to use the supervised learning method as an example.
How machines learn
First, we need data. Lots of data. In this case, we are going to take each individual pixel value of the image. If you have no idea what I'm talking about: each pixel normally stores a value (for example in hexadecimal) to represent its colour. But in this case, we're going to keep it simple. If a pixel is white, the value will be 1. If it's black, the value will be 0. And anything from off-white to dark gray will get a value within that range. Off-white might be 0.9; dark gray might be 0.1. You get the idea. And let's say this image is 10 by 10 pixels. So we have a total of 100 inputs, each carrying its own value. Then we feed them to the input layer; in this case, that's 100 different values, and thus 100 circles, called nodes. Let's say that the first hidden layer is responsible for recognizing whether the digit contains a line, a curve, or a loop. Thus, there'll be 3 nodes in the first hidden layer. The input values are combined with a mathematical formula here. Usually a node adds up all of its inputs and squashes the sum with a sigmoid function to bring it back into the range of 0 to 1. So for example, if the summation of all 100 colour values we got before is 10, then 10 is inserted into the sigmoid function and the output will be between 0 and 1. Say it's 0.2 (with a real sigmoid an input of 10 would actually give a value very close to 1, but the exact number doesn't matter for this example). That is for a single node in the hidden layer; we have to do the same for all the other nodes too. You may be thinking, "They will all be the same, because the inputs are all the same." And yes, you are right. But there's more. As I said before, the first hidden layer is responsible for predicting whether there is a line, a curve, or a loop, right? If the formula and the inputs are all the same, how on earth can you get different output values? Well, introducing the "bias value": a number that is added to each node's summation.
And each node has a different bias value. For example, the first node of the hidden layer has a bias value of -10, the second 40, and the third 100. Now the outputs will be different. The summation of all inputs was 10 before; after adding the bias values we get:
- Node 1: 10 + (-10) = 0
- Node 2: 10 + 40 = 50
- Node 3: 10 + 100 = 110
And ohh, before I forget: do you see all of those lines connecting the nodes of the input layer to the nodes of the hidden layer? Yes, all of them. EVERY. SINGLE. ONE. Each carries a different value (a weight) that is multiplied with the input data. Take three inputs:
- Input 1: 0.2
- Input 2: 0.5
- Input 3: 0.7
Each line alters the value of its input and affects the summation of all the inputs. Here's the value on each line:
- Line 1: 4
- Line 2: 2
- Line 3: 3
The input values are now different. Each input after being altered:
- Input 1: 0.2 x 4 = 0.8
- Input 2: 0.5 x 2 = 1.0
- Input 3: 0.7 x 3 = 2.1
As you can see, the values are heavily altered by the bias value of each node and by the weight on each line. Maybe it is hard to follow for now because I didn't organize my flow perfectly, so here is the summary:
- The value of each pixel of our image of a handwritten number is retrieved.
- Our image is 10 by 10 pixels, which means we have a total of 100 inputs.
- Each value is within the range of 0 to 1.
- Each node has its own bias value.
- Each line has its own weight.
Now we have a completely different value for each node in the hidden layer after going through the mathematical formula: altered by the weight on each line, and altered by the bias value of each node. Congratulations, we have just passed the first hidden layer. There can be many more, because the number of hidden layers depends on the goal we are trying to achieve. But say for our handwritten number recognition there are only 2 hidden layers. We need to keep doing the math until we reach the output layer before we can even get the result. But thank god this is all done by a machine, so it is pretty fast. But wait, what if the prediction is not correct? Simple answer: we need to change the bias values and the weights on the lines. How do we know which ones to change, you ask? Well, that's the complicated answer, and that is actually the part where the machine learns. Do you wonder how the machine even knows that it gave the wrong answer? Or how it knows which value to change? Or how a change affects the output? If yes, then good. If no, then try to think about it. Again, I am sorry; that part needs another post. It is way too complicated, and I think even this post needs time to be understood. But good luck. If you are confused by my explanation, you can watch the video below. It helps to make you understand better.
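To tie the walkthrough above together, here is a minimal Python sketch (not from the original post) of what a single hidden node does: multiply each input by the weight on its line, add them up together with the node's bias, and squash the result with a sigmoid. The inputs, weights, and biases below are the made-up numbers from the example, not values from any real trained network.

import math

def sigmoid(x):
    # squashes any number into the range 0..1
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights, bias):
    # weighted sum of the inputs, plus the node's bias, passed through the sigmoid
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# The toy numbers from the example above:
inputs  = [0.2, 0.5, 0.7]   # pixel values (0 = black, 1 = white)
weights = [4, 2, 3]         # the value carried by each connecting line
biases  = [-10, 40, 100]    # one bias per hidden node

for n, bias in enumerate(biases, start=1):
    print(f"hidden node {n}: {node_output(inputs, weights, bias):.6f}")

In a real network every line into every node has its own weight, and the weights and biases are exactly the values that training adjusts.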
Microsoft Office Suite allows you to control data in word processing, spreadsheet and PowerPoint programs. Microsoft's Excel program allows you to build spreadsheets with mathematical formulas that adjust data according to your instructions on an ongoing basis. You must learn to choose cells and create formulas to perform the program's basic functions. Learn how to subtract in Excel.
Part 1 of 4: Learn Excel Cell Organization
1. Open Microsoft Excel on your computer.
2. Load an already existing spreadsheet or choose to make a new document.
3. Notice that the columns each have a letter associated with them. They start at the left side with "A" and continue indefinitely toward the right.
4. Notice that the rows are associated with a number. This counts the order of the row in the spreadsheet.
5. Choose any cell on the spreadsheet. The letter at the top and the row number at the side should be highlighted in a darker color. This indicates the name of the cell according to column and row.
- For example, a cell may be referred to as H15 or A4. This is the format that you will use to refer to cells when you create formulas.
Part 2 of 4: Enter Information in Excel
1. Type values into your spreadsheet. If you are entering a negative number in the cell, place a minus sign before the number.
- Create a table that uses columns and rows to organize your data. If you want to do this, leave a line at the top of each column to enter headings for the columns below. For example, you may place the word "Date" in cell A1, if you wish to organize the sheet by date.
- Other popular column headings include name, address, description, cost, amount payable and amount receivable. You may want to place a "Total" column near the end if you want to add or subtract different costs.
2. Save the sheet under a name. Use the right-click button on your mouse to change "Sheet 1" to a new name. Add sheets as needed with the plus sign.
3. Format the cells in each column according to whether they hold words or numbers.
- Highlight a group of cells by clicking on one cell and dragging with your mouse until you've covered all the desired cells. You can also click on the letter at the top of a column or the number at the start of a row to select an entire column or row.
- Right-click on the selected cells and click "Format Cells".
- Click on "Number" or "Currency" to indicate what kind of values you will want to subtract. Specify the number of decimal places and currency symbol as needed, and click "OK."
4. Save your Excel document frequently.
Part 3 of 4: Create a Subtraction Formula
1. Click on the cell where you would like to place the answer to your subtraction formula.
- For example, you may want to click on a cell in the "Totals" column, or a cell in the column titled "Less Payment Received."
2. Find the formula bar at the top. It has a blank bar next to the letters "fx."
3. Move your mouse cursor to the blank bar while the cell is still selected.
4. Enter an equals sign, "=", in the bar.
- You can also press the "fx" function button to start the formula. This will load an equals sign into your formula bar.
- A drop-down list may pop up that asks you what type of formula you would like to start. Subtraction falls under the "Sum" category, because adding a negative number is the same as subtracting a positive number.
5. Type in the letter and number location of the first value you would like to use in your subtraction equation.
- Enter a minus sign after that cell reference. This is the dash on many keyboards.
6. Type in the letter and number location of the second value you would like to subtract from the first value.
7. Press "Enter," and the second number will be subtracted from the first number. Your selected cell will move along to the next cell in the column when you press "Enter."
- For example, your subtraction formula may be "=A2-A5".
8. Repeat this with individual cells, by clicking on each cell individually.
- You can also copy formulas for an entire column if you are subtracting items in the same way on every row. Fill out the first 2 subtraction formulas individually. Then, highlight those cells and drag the fill handle down the column.
- Look for the small box with a plus sign that appears near the bottom corner of those cells. Dragging it down copies the formula, and Excel adjusts the cell references to the correct row automatically.
Part 4 of 4: Subtract Cells in a Range
1. Click on the cell at the end of that group of values. This part will help you subtract a group of values in a column or row.
2. Press the "fx" button next to the function bar.
3. Place your cursor in the blank function bar after the equals sign.
4. Type "SUM" or choose that option from the list of Excel functions.
5. Enter an open parenthesis symbol. Add the first cell in the range you want to subtract. For example, "B2".
6. Type in a colon symbol. This indicates that you are using a range.
7. Enter the last cell in the range you want to subtract. For example, "B17". Add a closed parenthesis symbol.
8. Press "Enter." Excel will add the values together. Because any negative values reduce the total, this effectively subtracts them.
- For example, your entire formula may be "=SUM(B2:B17)".
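As a quick worked illustration (the cell values here are hypothetical, not from any particular sheet): suppose B2 holds 250, B3 holds -75 and B4 holds -50. Entering =SUM(B2:B4) returns 125, the same result as typing =B2-75-50, because summing the negative entries subtracts them from the total. The single-cell approach from Part 3 works the same way: if A2 holds 40 and A5 holds 15, then =A2-A5 returns 25.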
Antennas, Designed by Darwin
Who in their right mind would design this bizarre-looking antenna? Actually, nobody did. It evolved. Taking a cue from nature, NASA engineers used a kind of "artificial evolution" to find this design. The result may look odd, but it works very well. "The evolutionary process improves the design of antennas, just as evolution in nature leads to fitter plants and animals," says Jason Lohn, leader of the Evolvable Systems Group at NASA's Ames Research Center. The improvement comes from Darwin's idea of natural selection: only the fittest members of a generation survive to produce offspring. Over many generations, traits that hinder survival are weeded out, while beneficial traits become more common. "In the end," he says, "you have the design equivalent of a shark, honed over countless generations to be well adapted to its environment and tasks." Evolutionary computation, as it's called, applies this principle to hardware design. It's particularly useful for tackling problems that are difficult to solve by hand, like the design of new antennas. Designing a new antenna for NASA's Space Technology 5 (ST-5) mission was the challenge facing Lohn's group. ST-5 will explore how TV-sized "nano-satellites" can perform the tasks of much larger, conventional satellites at a cheaper cost. Antennas on these satellites must be smaller than usual, yet capable of doing everything that a bigger antenna can do. The evolution of this bizarre-looking antenna happened inside a computer. Many random designs were tested in a computer simulation. The computer judged their performance against certain goals for the design: efficiency, a narrow or wide broadcast angle, frequency range, and so on. As in nature, only the best performers were kept, and these served as parents of a new generation. To make the new generation, the traits of the best designs were randomly mixed by the computer to produce fresh, new designs, just as a father and mother's genes are mixed to make unique children. This new generation was again tested in the computer simulation, and the best designs became the parents of yet another generation. This process was repeated thousands, even millions, of times, until it settled onto an optimal, shark-like design that wouldn't improve any further. With today's fast computers, millions of generations can be simulated in only a day or so. The result: an excellent antenna with an odd shape that no human would have designed. For more about artificial evolution, see http://ic.arc.nasa.gov/story.php?sid=86&sec . For more about Space Technology 5, see nmp.nasa.gov/st5. For an animation that helps explain to kids how ST5's antenna sends pictures through space, go to
This article was provided by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
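The article describes the generate, evaluate, select, and recombine loop in words; the sketch below is a generic Python illustration of that kind of evolutionary loop, not NASA's actual antenna-design software. The fitness function here is a stand-in, and a "design" is just a list of numbers.

import random

def fitness(design):
    # Stand-in for the antenna simulation: reward designs whose numbers are
    # close to an arbitrary target. NASA's real evaluator scored efficiency,
    # broadcast angle, frequency range, and so on.
    target = [0.1, 0.7, 0.3, 0.9]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def crossover(a, b):
    # mix the "traits" (genes) of two parent designs
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(design, rate=0.1):
    # small random changes keep new traits entering the population
    return [d + random.gauss(0, 0.05) if random.random() < rate else d for d in design]

def evolve(pop_size=50, genes=4, generations=200):
    population = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # keep only the fittest half, as in natural selection
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # breed a new generation from randomly paired survivors
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())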
Since the sequencing of the human genome eight years ago, enormous progress has been made in analyzing and understanding it. Nevertheless, the function of most human genes is still barely understood. An important first step in determining the function of a gene or protein is to compare its sequence with the sequences of hundreds of other organisms that are experimentally easier to investigate. From the functions of related genes or proteins identified in these database searches, the researchers can often infer the unknown functions of human or animal genes. Now, computational biologists Johannes Soeding and Andreas Biegert of the Gene Center of LMU Munich, Germany, have successfully developed a method that makes database searches significantly more sensitive, while being just as quick. Instead of comparing sequences letter by letter, their idea is to take the sequence neighbors surrounding each letter into account during the comparison. This idea should be generally applicable in other areas of sequence searching and sequence analysis. The rule for both genes and proteins is: their function is primarily based on the sequence of their DNA or amino acid components. Genes with similar sequences frequently have a similar function. The same goes for proteins, although for them, the three-dimensional structure into which they fold, and which cannot be predicted offhand from their sequence, is equally important. Still, similar protein sequences suggest relatedness or, in other words, the descent from a common ancestral protein, and with it a similar function. Accordingly, the sequences and functions of genes and proteins all get stored away into databases, which scientists around the world use for comparing their new data against. But even the best and most frequently used algorithms such as BLAST (Basic Local Alignment Search Tool) have to make use of certain simplifications in order to make efficient searching in the gigantic databases possible at all. After all, the researchers expect BLAST to compare a given sequence - the letter code describing the sequence of DNA components or amino acids - with all sequences in the database in just a few minutes. Search engines like BLAST evaluate the similarity between a pair of sequences by aligning them underneath each other in such a way that similar amino acids come to lie in the same columns. The sequence similarity is then calculated by adding the similarities of all aligned amino acids. Here, the similarity between amino acids is measured by how often they mutate into each other without adverse effects, a measure that largely coincides with how similar their sizes and other biophysical properties are. BLAST has been the most important method for sequence searching since its development in 1990. It is called up around 500,000 times a day from all around the world. Yet this tried and true program is far from perfect. When evaluating the similarity of two amino acids, it ignores their neighboring amino acids, their sequence context. Johannes Soeding and Andreas Biegert of the Gene Center Munich and the cluster of excellence "Center for Integrated Protein Science Munich (CIPSM)" of LMU Munich have now developed a method that significantly improves similarity searches: Their "context-specific" BLAST, or CS-BLAST, can sniff out twice as many distant "relatives" of proteins as BLAST. 
When determining the similarity of an amino acid to the reference sequence, CS-BLAST includes the sequence context of every amino acid, namely its six left and six right sequence neighbors, in the analysis. "The idea is that the context says much more about how likely two amino acids are to mutate into each other", explains Soeding, who heads the group for "Protein Bioinformatics and Computational Biology" at the Gene Center Munich. "Take as an example folded and unfolded regions in proteins. In an unfolded region, the amino acid valine, for example, can usually mutate into any of the other 19 amino acids without any adverse effect. In a folded region on the other hand, it will mutate with high probability into other hydrophobic, or water-repelling, amino acids." The program is based on a very general idea that can be applied to every kind of sequence search and alignment method. The researchers have demonstrated this with the example of PSI-BLAST, an algorithm in which the related sequences already found are aligned one under the other into a so-called multiple alignment. This makes it possible, for example, to identify positions at which only certain amino acids can occur, which improves PSI-BLAST's ability to distinguish related from unrelated proteins. "We managed to increase PSI-BLAST's sensitivity significantly by making use of the sequence context. That way, two consecutive searches using our context-specific version of PSI-BLAST deliver better results than five searches using the conventional engine," says Soeding. The new method is just as fast despite its better sensitivity, explains the researcher, because the sequence search takes place in two steps: "Both in conventional BLAST and in our method, a search matrix is first calculated," Soeding continues. "This step is more complicated when you do it our way, but at one second, it is still very fast. Only the second step, the database search using the search matrix, takes a lot of time, and this step is the same for both approaches." In the future, the scientists intend to apply the newly developed algorithm to genomic alignments as well, where not only individual genes, but rather entire segments of DNA are compared. "As with proteins, there are certain key regions in DNA that fulfill crucial regulatory functions," explains Soeding. "You can identify these regulatory regions, which are important for a deeper understanding of many diseases, by comparing the human genome with those of other mammals." Using a context-specific method, the LMU researchers intend to substantially improve the quality of such genomic alignments and, with it, the identification of regulatory regions. "We believe context-specific methods could become standard throughout the entire field of biological sequence analysis," concludes Soeding.
Source: Ludwig-Maximilians-Universität München
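As a rough, purely illustrative Python sketch of the difference the article describes (this is not the BLAST or CS-BLAST algorithm, and the match/mismatch scores are invented rather than taken from a real substitution matrix such as BLOSUM62): a classic aligner scores two aligned sequences one column at a time, whereas a context-aware scorer lets the neighbours of each position influence that column's contribution.

def column_score(a, b):
    # invented match/mismatch scores, standing in for a substitution matrix
    return 2 if a == b else -1

def plain_score(seq1, seq2):
    # classic style: sum the similarity of each aligned pair of letters
    return sum(column_score(a, b) for a, b in zip(seq1, seq2))

def context_score(seq1, seq2, window=2):
    # crude context-aware variant: each column's score is averaged with its
    # neighbouring columns, so the local sequence context influences the result
    cols = [column_score(a, b) for a, b in zip(seq1, seq2)]
    total = 0.0
    for i in range(len(cols)):
        lo, hi = max(0, i - window), min(len(cols), i + window + 1)
        total += sum(cols[lo:hi]) / (hi - lo)
    return total

s1 = "MKVLITAAG"
s2 = "MKVLVSAAG"
print(plain_score(s1, s2), round(context_score(s1, s2), 2))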
Introduction to the else if Statement in Python
The ‘if’ statement is the most frequently used conditional statement in Python programming. It is used to decide whether a block of code should be executed or not. The flow works like this: if the condition is true, the block below it is executed; if it is false, control moves on to the next check. The role of the else-if statement is to re-evaluate a different condition when the if condition check fails.
Below is the syntax for the else if statement in Python:

elif condition:
    # else-if code block

Explanation: The else-if statement in Python is written as elif, where ‘el’ abbreviates else and ‘if’ represents a normal if statement. The condition to be evaluated is written after the elif keyword, and the colon ( ‘ : ’ ) marks the end of the condition. The code block that should run when the elif condition is satisfied is placed, indented, after the elif line.
Explanation: The flow starts with the if statement check. If that check fails, control flows to the else-if check; when the else-if check evaluates as true, the flow enters the body of the else-if block.
How does the else if Statement work in Python?
The procedure by which the else if statement works is as follows:
- Check the initial if statement.
- If the first if statement evaluates as true, execute the body of the if statement directly.
- If the first if statement is false, control flows to the else section of the program.
- When the else section itself holds an if statement, it becomes an else-if (elif) statement.
- The elif statement’s code block is executed when the condition stated in the elif line evaluates as true.
- If this condition evaluates as false, control flows to the end of the if block and execution continues with the code after the whole if block.
Examples to Implement the else if Statement in Python
Examples to implement the else if statement in Python are given below.

# Python program to illustrate if-elif-else
# Variable declaration section
check_variable = 20
# Program logic section
if check_variable == 2:
    print("check_variable is 2")
elif check_variable == 4:
    print("check_variable is 4")
elif check_variable == 6:
    print("check_variable is 6")
elif check_variable == 8:
    print("check_variable is 8")
elif check_variable == 10:
    print("check_variable is 10")
elif check_variable == 12:
    print("check_variable is 12")
elif check_variable == 14:
    print("check_variable is 14")
elif check_variable == 16:
    print("check_variable is 16")
elif check_variable == 18:
    print("check_variable is 18")
elif check_variable == 20:
    print("check_variable is 20")
else:
    print("check_variable is not present")

Explanation: This program is written to show the flow of control through a chain of if and elif checks in Python. As you can see, the program consists of multiple else-if blocks. The program logic is as follows:
- The check variable is assigned the value 20.
- The check variable's value is compared for an exact match against every even number between 2 and 20.
- When an exact match is detected, the matching value is printed to the console; otherwise the final else branch reports that the value is not present.
Another example uses a stack of country names (reconstructed here so it runs; the original listing omitted the stack-filling loop and the bodies of the if/elif branches):

from country_list import countries_for_language
from collections import deque

# extract all the values of the countries
Nation_dictionary = dict(countries_for_language('en'))
Nation_values = list(Nation_dictionary.values())

# Add all the country names to a stack
Nation_stack_variable = deque()
for i in Nation_values:
    Nation_stack_variable.append(i)

print(' Type of stack variable used : ', type(Nation_stack_variable), '\n')
print(' Values of the stack variable : ', Nation_stack_variable, '\n')

Nation_temp = Nation_stack_variable.pop()
if Nation_temp[0] == 'A':
    print(' The popped country name starts with A : ', Nation_temp)
elif Nation_temp[0] == 'Z':
    print(' The popped country name starts with Z : ', Nation_temp)
print(' \n Stack variable values after Pop : ', Nation_stack_variable, '\n')

Explanation: Here the program uses a collection data type to organize the stack. It brings in the deque class, which belongs to the collections library. The ‘country_list’ library import is responsible for pulling in the list of all countries. All values from the country_list library are copied out of a dictionary variable, since the library extract comes as key and value pairs in which the two-letter country code is the key under which the exact country name is stored. The country names are pushed onto the stack, one name is popped off, and the if/elif checks report whether the popped name begins with 'A' or with 'Z'.

A third example calculates a factorial:

Input_value = int(input(" Factorial Number : "))
factorial = 1
if Input_value < 0:
    print(" Negative number cannot be placed for factorial determination ")
elif Input_value == 0:
    print(" The factorial of 0 is 1 ")
else:
    for i in range(1, Input_value + 1):
        factorial = factorial * i
    print("Value for the factorial ", Input_value, "is", factorial)

Explanation: The factorial of a given number is calculated by means of a loop. The ‘Input_value’ variable is used for passing in the integer for which the factorial is to be calculated, and the variable ‘factorial’ is set to an initial value of 1. First the keyed-in value is checked to see whether it is negative; if it is, this is reported in the console with a print statement, because the factorial of a negative integer is not defined. The next check handles zero, whose factorial is 1 by definition. For any positive value, the loop multiplies the numbers from 1 up to the input value and prints the result. For efficient execution, every programming language relies on its looping and condition statements, and condition statements like else-if help you gain better control over the program's logic. This is a guide to the else if statement in Python: we discussed its syntax, how it works, and examples with code and output.
Microorganisms that live in the depths of an oil reservoir can withstand such extreme conditions they can be used in harsh chemical processes. Norwegian researchers have been hard at work cataloguing these species with the use of DNA sequencing technologies. Petroleum reservoirs that lie 2-3 kilometres beneath the seabed hold more than just oil and gas. These deep reservoirs are also home to micro-organisms that have lived in isolation for millions of years, ever since the process of converting organic matter to fossil fuels first began. The organisms live in some of the most extreme conditions found on Earth, with temperatures as high as the boiling point of water, pressures that are more than 250 times atmospheric pressure, and an environment full of heavy metals and other toxic chemical compounds, without access to light and air. In other words, these microbes (bacteria and Archaea) are tough. A place where time stands still The relatives of these organisms once lived on Earth's surface. And because of the limitations imposed by the environment since they were buried, they have not evolved significantly. They have adapted, but even this has happened at an infinitely slow rate. They also propagate very slowly because they have so little access to important nutrients. Their doubling time, or the time a microbial cell needs to divide into two, can be many years. "Time has basically stood still for these microorganisms, down in the dark," says Alexander Wentzel, a senior scientist at SINTEF. "But there is a great deal of life down there in the oil, we have found many dozens of different microorganisms," says Anna Lewin, a researcher at the Norwegian University of Science and Technology's (NTNU) Department of Biotechnology. "Some are known from before, but others are new. One characteristic they share is that they are thermophilic, meaning that they love the heat. There are similar organisms in areas where there is volcanic activity, such as in geysers and black smokers." Wentzel and Lewin and their partners at Statoil have made some startling discoveries since they began their research back in 2008. The microorganisms they have brought up from the black and gooey depths provide interesting answers to questions raised by basic research, and can also be used to improve chemical processes. "The basic research aspect is interesting when you are looking at such an old and extreme environment. This information provides us a better understanding of what early life was like on Earth and the evolution that has taken place over millions of years," says Lewin. The researchers have thus far studied samples from different oil reservoirs on the Norwegian continental shelf. "There is a big difference between the microbes that live down in oil reservoirs and the ones found on Earth's surface. But we see surprisingly little difference between the various oil reservoirs," says Lewin. High temperature enzymes The microorganisms found in oil reservoirs have a metabolism that is based on thermostable enzymes, which are enzymes that work well at high temperatures. These kinds of enzymes can be used to streamline various chemical processes because they can withstand high temperatures. "Chemical processes are generally much faster at high temperatures. That means that these enzymes can help to streamline chemical processes, which can make them economically important. The market potential in industrial biotechnology alone is huge. 
For example, the enzymes could be used in processes involving biomass decomposition, such as wood to produce biofuels and other biorefinery products. The enzymes can help the process go much faster and make it more profitable," says Wentzel. Another possible application would be in improving CO2 capture, because the enzymes can withstand the extreme temperature variations that are common during the CO2 capture process. "But these are just two examples of the kinds of things we are looking at," says Wentzel. "The samples from the oil reservoir are a goldmine of enzymes for new and improved biotechnological processes in a whole range of different market segments. We hope that our research and technology can help Norwegian industry develop modern processes to increase our international competitiveness." First in the world This is the first time that someone has been able to obtain live samples from an oil reservoir and subject them to such a comprehensive study of microbial communities. "Extracting these samples is a methodological challenge. The micro-organisms live under such high pressure that if the pressure drops quickly, they simply disintegrate," Wentzel says. "It's also extremely important to avoid contaminating the samples when you are collecting them. We avoid this by making sure the equipment and staff on the oil platform are optimally prepared." The key was taking samples in special pressurized bottles that were connected directly to the pipes that typically transport oil and gas from the reservoir to the oil platform. During sampling, oil production from the platform had to be stopped. "Stopping oil production is enormously costly, so that each sample we took is worth its weight in gold. Our close cooperation with Statoil is what has made this possible," says Wentzel. The researchers are also the first in the world to have used metagenomics with these types of samples. This has given them the opportunity to study a whole community of micro-organisms taken from their natural environment with unprecedented reliability and detail. The researchers are now working to get samples from several other oil reservoirs so they can confirm their hypotheses and continue their work on enzymes with industrial potential. They have published a number of scientific articles and book chapters about their research, most recently in Environmental Microbiology. The above post is reprinted from materials provided by The Norwegian University of Science and Technology (NTNU).
Kidney stone disease
Kidney stone disease, also known as urolithiasis, is when a solid piece of material (kidney stone) develops in the urinary tract. Kidney stones typically form in the kidney and leave the body in the urine stream. A small stone may pass without causing symptoms. If a stone grows to more than 5 millimeters (0.2 in) it can cause blockage of the ureter resulting in severe pain in the lower back or abdomen. A stone may also result in blood in the urine, vomiting, or painful urination. About half of people will have another stone within ten years.
Kidney stone disease (summary)
- Other names: Urolithiasis, kidney stone, renal calculus, nephrolith, kidney stone disease
- Image caption: A kidney stone, 8 millimeters (0.3 in) in diameter
- Symptoms: Severe pain in the lower back or abdomen, blood in the urine, vomiting, nausea
- Causes: Genetic and environmental factors
- Diagnostic method: Based on symptoms, urine testing, medical imaging
- Differential diagnosis: Abdominal aortic aneurysm, diverticulitis, appendicitis, pyelonephritis
- Prevention: Drinking fluids such that more than two liters of urine are produced per day
- Treatment: Pain medication, extracorporeal shock wave lithotripsy, ureteroscopy, percutaneous nephrolithotomy
- Frequency: 22.1 million (2015)
Most stones form due to a combination of genetics and environmental factors. Risk factors include high urine calcium levels; obesity; certain foods; some medications; calcium supplements; hyperparathyroidism; gout and not drinking enough fluids. Stones form in the kidney when minerals in urine are at high concentration. The diagnosis is usually based on symptoms, urine testing, and medical imaging. Blood tests may also be useful. Stones are typically classified by their location: nephrolithiasis (in the kidney), ureterolithiasis (in the ureter), cystolithiasis (in the bladder), or by what they are made of (calcium oxalate, uric acid, struvite, cystine). In those who have had stones, prevention is by drinking fluids such that more than two liters of urine are produced per day. If this is not effective enough, thiazide diuretic, citrate, or allopurinol may be taken. It is recommended that soft drinks containing phosphoric acid (typically colas) be avoided. When a stone causes no symptoms, no treatment is needed. Otherwise pain control is usually the first measure, using medications such as nonsteroidal anti-inflammatory drugs or opioids. Larger stones may be helped to pass with the medication tamsulosin or may require procedures such as extracorporeal shock wave lithotripsy, ureteroscopy, or percutaneous nephrolithotomy. Between 1% and 15% of people globally are affected by kidney stones at some point in their lives. In 2015, 22.1 million cases occurred, resulting in about 16,100 deaths. They have become more common in the Western world since the 1970s. Generally, more men are affected than women. Kidney stones have affected humans throughout history with descriptions of surgery to remove them dating from as early as 600 BC.
Signs and symptoms
The hallmark of a stone that obstructs the ureter or renal pelvis is excruciating, intermittent pain that radiates from the flank to the groin or to the inner thigh. This pain, known as renal colic, is often described as one of the strongest pain sensations known. Renal colic caused by kidney stones is commonly accompanied by urinary urgency, restlessness, hematuria, sweating, nausea, and vomiting.
It typically comes in waves lasting 20 to 60 minutes caused by peristaltic contractions of the ureter as it attempts to expel the stone. The embryological link between the urinary tract, the genital system, and the gastrointestinal tract is the basis of the radiation of pain to the gonads, as well as the nausea and vomiting that are also common in urolithiasis. Postrenal azotemia and hydronephrosis can be observed following the obstruction of urine flow through one or both ureters. Pain in the lower-left quadrant can sometimes be confused with diverticulitis because the sigmoid colon overlaps the ureter, and the exact location of the pain may be difficult to isolate due to the proximity of these two structures. High dietary intake of animal protein, sodium, sugars including honey, refined sugars, fructose and high fructose corn syrup, oxalate, grapefruit juice, and apple juice may increase the risk of kidney stone formation. Kidney stones can result from an underlying metabolic condition, such as distal renal tubular acidosis, Dent's disease, hyperparathyroidism, primary hyperoxaluria, or medullary sponge kidney. 3–20% of people who form kidney stones have medullary sponge kidney. A person with recurrent kidney stones may be screened for such disorders. This is typically done with a 24-hour urine collection. The urine is analyzed for features that promote stone formation. Calcium is one component of the most common type of human kidney stones, calcium oxalate. Some studies[which?] suggest that people who take calcium or vitamin D as a dietary supplement have a higher risk of developing kidney stones. In the United States, kidney stone formation was used as an indicator of excess calcium intake by the Reference Daily Intake committee for calcium in adults. In the early 1990s, a study conducted for the Women's Health Initiative in the US found that postmenopausal women who consumed 1000 mg of supplemental calcium and 400 international units of vitamin D per day for seven years had a 17% higher risk of developing kidney stones than subjects taking a placebo. The Nurses' Health Study also showed an association between supplemental calcium intake and kidney stone formation. Unlike supplemental calcium, high intakes of dietary calcium do not appear to cause kidney stones and may actually protect against their development. This is perhaps related to the role of calcium in binding ingested oxalate in the gastrointestinal tract. As the amount of calcium intake decreases, the amount of oxalate available for absorption into the bloodstream increases; this oxalate is then excreted in greater amounts into the urine by the kidneys. In the urine, oxalate is a very strong promoter of calcium oxalate precipitation—about 15 times stronger than calcium. A 2004 study found that diets low in calcium are associated with a higher overall risk for kidney stone formation. For most individuals, other risk factors for kidney stones, such as high intakes of dietary oxalates and low fluid intake, play a greater role than calcium intake. Calcium is not the only electrolyte that influences the formation of kidney stones. For example, by increasing urinary calcium excretion, high dietary sodium may increase the risk of stone formation. Drinking fluoridated tap water may increase the risk of kidney stone formation by a similar mechanism, though further epidemiologic studies are warranted to determine whether fluoride in drinking water is associated with an increased incidence of kidney stones. 
High dietary intake of potassium appears to reduce the risk of stone formation, because potassium promotes the urinary excretion of citrate, an inhibitor of calcium crystal formation.

Diets in Western nations typically contain a large proportion of animal protein. Eating animal protein creates an acid load that increases the urinary excretion of calcium and uric acid and reduces citrate. Urinary excretion of excess sulfurous amino acids (e.g., cysteine and methionine), uric acid, and other acidic metabolites from animal protein acidifies the urine, which promotes the formation of kidney stones. Low urinary-citrate excretion is also commonly found in those with a high dietary intake of animal protein, whereas vegetarians tend to have higher levels of citrate excretion. Low urinary citrate, too, promotes stone formation.

The evidence linking vitamin C supplements with an increased rate of kidney stones is inconclusive. Excess dietary intake of vitamin C might increase the risk of calcium-oxalate stone formation; in practice, this is rarely encountered. The link between vitamin D intake and kidney stones is also tenuous. Excessive vitamin D supplementation may increase the risk of stone formation by increasing the intestinal absorption of calcium; correction of a deficiency does not.

There are no conclusive data demonstrating a cause-and-effect relationship between alcoholic beverage consumption and kidney stones. However, some people have theorized that certain behaviors associated with frequent and binge drinking can lead to dehydration, which can, in turn, lead to the development of kidney stones.

The American Urological Association has projected that global warming will lead to an increased incidence of kidney stones in the United States by expanding the "kidney stone belt" of the southern United States.

Hypocitraturia, or low urinary-citrate excretion (defined as less than 320 mg/day), can contribute to kidney stone formation in up to two-thirds of cases. The protective role of citrate is linked to several mechanisms: citrate reduces urinary supersaturation of calcium salts by forming soluble complexes with calcium ions and by inhibiting crystal growth and aggregation. Therapy with potassium citrate or magnesium potassium citrate is commonly prescribed in clinical practice to increase urinary citrate and to reduce stone formation rates.

Supersaturation of urine

When the urine becomes supersaturated (when the urine solvent contains more solutes than it can hold in solution) with one or more calculogenic (crystal-forming) substances, a seed crystal may form through the process of nucleation. Heterogeneous nucleation (where there is a solid surface present on which a crystal can grow) proceeds more rapidly than homogeneous nucleation (where a crystal must grow in a liquid medium with no such surface), because it requires less energy. Adhering to cells on the surface of a renal papilla, a seed crystal can grow and aggregate into an organized mass. Depending on the chemical composition of the crystal, the stone-forming process may proceed more rapidly when the urine pH is unusually high or low. Supersaturation of the urine with respect to a calculogenic compound is pH-dependent. For example, at a pH of 7.0, the solubility of uric acid in urine is 158 mg/100 ml; reducing the pH to 5.0 decreases the solubility of uric acid to less than 8 mg/100 ml.
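To make this pH dependence concrete, the following minimal sketch interpolates between the two solubility figures just quoted and reports a simple supersaturation ratio. The function names and the linear-in-pH interpolation are illustrative assumptions made for this example, not a validated clinical formula.

```python
# Illustrative sketch only: estimate whether urine is supersaturated with
# uric acid at a given pH, using the two solubility figures quoted above
# (about 158 mg/100 ml at pH 7.0 and about 8 mg/100 ml at pH 5.0).
# The linear interpolation between those two points is an assumption made
# purely for illustration, not a validated solubility model.

def uric_acid_solubility_mg_per_100ml(ph: float) -> float:
    """Rough interpolation of uric acid solubility (mg/100 ml) between pH 5.0 and 7.0."""
    low_ph, low_solubility = 5.0, 8.0
    high_ph, high_solubility = 7.0, 158.0
    ph = max(low_ph, min(high_ph, ph))  # clamp to the range the text quotes
    fraction = (ph - low_ph) / (high_ph - low_ph)
    return low_solubility + fraction * (high_solubility - low_solubility)

def supersaturation_ratio(uric_acid_mg_per_100ml: float, ph: float) -> float:
    """Ratio above 1.0 means the urine is supersaturated with respect to uric acid."""
    return uric_acid_mg_per_100ml / uric_acid_solubility_mg_per_100ml(ph)

if __name__ == "__main__":
    # The same concentration that stays dissolved at pH 7.0 is strongly
    # supersaturated at pH 5.0, which is why acidic urine favors uric acid stones.
    for ph in (7.0, 6.0, 5.0):
        print(f"pH {ph:.1f}: supersaturation ratio = {supersaturation_ratio(60.0, ph):.1f}")
```

Run as written, a concentration of 60 mg/100 ml sits comfortably below saturation at pH 7.0 but several times above it at pH 5.0, matching the point that acidic urine favors uric acid crystallization.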
The formation of uric-acid stones requires a combination of hyperuricosuria (high urine uric-acid levels) and low urine pH; hyperuricosuria alone is not associated with uric-acid stone formation if the urine pH is alkaline. Supersaturation of the urine is a necessary, but not a sufficient, condition for the development of any urinary calculus. Supersaturation is likely the underlying cause of uric acid and cystine stones, but calcium-based stones (especially calcium oxalate stones) may have a more complex cause.

Inhibitors of stone formation

Normal urine contains chelating agents, such as citrate, that inhibit the nucleation, growth, and aggregation of calcium-containing crystals. Other endogenous inhibitors include calgranulin (an S-100 calcium-binding protein), Tamm–Horsfall protein, glycosaminoglycans, uropontin (a form of osteopontin), nephrocalcin (an acidic glycoprotein), prothrombin F1 peptide, and bikunin (a uronic acid-rich protein). The biochemical mechanisms of action of these substances have not yet been thoroughly elucidated. However, when these substances fall below their normal proportions, stones can form from an aggregation of crystals. Sufficient dietary intake of magnesium and citrate inhibits the formation of calcium oxalate and calcium phosphate stones; in addition, magnesium and citrate operate synergistically to inhibit kidney stones. Magnesium's efficacy in subduing stone formation and growth is dose-dependent.

Diagnosis

Diagnosis of kidney stones is made on the basis of information obtained from the history, physical examination, urinalysis, and radiographic studies. Clinical diagnosis is usually made on the basis of the location and severity of the pain, which is typically colicky in nature (it comes and goes in spasmodic waves). Pain in the back occurs when calculi produce an obstruction in the kidney. Physical examination may reveal fever and tenderness at the costovertebral angle on the affected side.

In people with a history of stones who are under 50 years of age and who present with typical stone symptoms and no concerning signs, helical CT imaging is not required. A CT scan is also not typically recommended in children. Otherwise, a noncontrast helical CT scan with 5 millimeters (0.2 in) sections is the diagnostic modality of choice in the radiographic evaluation of suspected nephrolithiasis. All stones are detectable on CT scans except very rare stones composed of certain drug residues in the urine, such as from indinavir.

Calcium-containing stones are relatively radiodense, and they can often be detected by a traditional radiograph of the abdomen that includes the kidneys, ureters, and bladder (KUB film). Some 60% of all renal stones are radiopaque. In general, calcium phosphate stones have the greatest density, followed by calcium oxalate and magnesium ammonium phosphate stones. Cystine calculi are only faintly radiodense, while uric acid stones are usually entirely radiolucent.

Where a CT scan is unavailable, an intravenous pyelogram may be performed to help confirm the diagnosis of urolithiasis. This involves intravenous injection of a contrast agent followed by a KUB film. Uroliths present in the kidneys, ureters, or bladder may be better defined by the use of this contrast agent. Stones can also be detected by a retrograde pyelogram, where a similar contrast agent is injected directly into the distal ostium of the ureter (where the ureter terminates as it enters the bladder).
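As a small illustration of the radiodensity ordering just described, the sketch below encodes the typical plain-film (KUB) appearance of each stone composition as a lookup table. The dictionary and helper function are names assumed for the example and simply restate the text; this is not a diagnostic tool.

```python
# Illustrative lookup of typical plain-film (KUB) appearance by stone
# composition, restating the ordering described in the text above.
# Names are illustrative assumptions; not a diagnostic tool.
KUB_APPEARANCE = {
    "calcium phosphate": "radiopaque (densest)",
    "calcium oxalate": "radiopaque",
    "struvite (magnesium ammonium phosphate)": "radiopaque (less dense)",
    "cystine": "only faintly radiodense",
    "uric acid": "radiolucent (usually not visible on a plain film)",
}

def likely_visible_on_kub(stone_type: str) -> bool:
    """True if this composition is usually dense enough to show on a KUB film."""
    return "radiolucent" not in KUB_APPEARANCE[stone_type]

if __name__ == "__main__":
    for stone, appearance in KUB_APPEARANCE.items():
        print(f"{stone}: {appearance} (visible on KUB: {likely_visible_on_kub(stone)})")
```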
Renal ultrasonography can sometimes be useful, because it gives details about the presence of hydronephrosis, suggesting that the stone is blocking the outflow of urine. Radiolucent stones, which do not appear on KUB, may show up on ultrasound imaging studies. Other advantages of renal ultrasonography include its low cost and absence of radiation exposure. Ultrasound imaging is useful for detecting stones in situations where X-rays or CT scans are discouraged, such as in children or pregnant women. Despite these advantages, as of 2009 renal ultrasonography was not considered a substitute for noncontrast helical CT in the initial diagnostic evaluation of urolithiasis, mainly because, compared with CT, it more often fails to detect small stones (especially ureteral stones) and other serious disorders that could be causing the symptoms. A 2014 study found that using ultrasonography rather than CT as the initial diagnostic test results in less radiation exposure, and did not identify any significant complications from this approach.

Renal ultrasonograph of a stone located at the pyeloureteric junction with accompanying hydronephrosis.

Measurement of a 5.6 mm kidney stone in a soft-tissue versus a skeletal CT window.

Laboratory investigations typically carried out include:
- microscopic examination of the urine, which may show red blood cells, bacteria, leukocytes, urinary casts, and crystals;
- urine culture to identify any infecting organisms present in the urinary tract, and sensitivity testing to determine the susceptibility of these organisms to specific antibiotics;
- complete blood count, looking for neutrophilia (increased neutrophil granulocyte count) suggestive of bacterial infection, as seen in the setting of struvite stones;
- renal function tests, to look for abnormally high blood calcium levels (hypercalcemia);
- 24-hour urine collection to measure total daily urinary volume, magnesium, sodium, uric acid, calcium, citrate, oxalate, and phosphate;
- collection of stones (by urinating through a StoneScreen kidney stone collection cup or a simple tea strainer), which is useful because chemical analysis of collected stones can establish their composition and in turn help to guide future preventive and therapeutic management.

The main types of kidney stones are summarized below:

|Kidney stone type|Population|Circumstances|Color|Radiodensity|Details|
|Calcium oxalate|80%|When urine is acidic (decreased pH)|Black/dark brown|Radio-opaque|Some of the oxalate in urine is produced by the body. Calcium and oxalate in the diet play a part but are not the only factors that affect the formation of calcium oxalate stones. Dietary oxalate is found in many vegetables, fruits, and nuts. Calcium from bone may also play a role in kidney stone formation.|
|Calcium phosphate|5–10%|When urine is alkaline (high pH)|Dirty white|Radio-opaque|Tends to grow in alkaline urine, especially when Proteus bacteria are present.|
|Uric acid|5–10%|When urine is persistently acidic|Yellow/reddish brown|Radiolucent|Diets rich in animal proteins and purines: substances found naturally in all food, but especially in organ meats, fish, and shellfish.|
|Struvite|10–15%|Infections in the kidney|Dirty white|Radio-opaque|Prevention of struvite stones depends on staying infection-free. Diet has not been shown to affect struvite stone formation.|
|Cystine|1–2%|Rare genetic disorder|Pink/yellow|Radio-opaque|Cystine, an amino acid (one of the building blocks of protein), leaks through the kidneys and into the urine to form crystals.|
|Xanthine|Extremely rare| |Brick red|Radiolucent| |

Calcium-containing stones

By far, the most common type of kidney stone worldwide contains calcium. For example, calcium-containing stones represent about 80% of all cases in the United States; these typically contain calcium oxalate, either alone or in combination with calcium phosphate in the form of apatite or brushite. Factors that promote the precipitation of oxalate crystals in the urine, such as primary hyperoxaluria, are associated with the development of calcium oxalate stones. The formation of calcium phosphate stones is associated with conditions such as hyperparathyroidism and renal tubular acidosis. Oxaluria is increased in patients with certain gastrointestinal disorders, including inflammatory bowel disease such as Crohn's disease, and in patients who have undergone resection of the small bowel or small-bowel bypass procedures. Oxaluria is also increased in patients who consume increased amounts of oxalate (found in vegetables and nuts). Primary hyperoxaluria is a rare autosomal recessive condition that usually presents in childhood. Calcium oxalate crystals in urine appear as 'envelopes' microscopically; they may also form 'dumbbells'.

Struvite stones

About 10–15% of urinary calculi are composed of struvite (ammonium magnesium phosphate, NH4MgPO4·6H2O). Struvite stones (also known as "infection stones", urease stones, or triple-phosphate stones) form most often in the presence of infection by urea-splitting bacteria. Using the enzyme urease, these organisms metabolize urea into ammonia and carbon dioxide. This alkalinizes the urine, resulting in favorable conditions for the formation of struvite stones. Proteus mirabilis, Proteus vulgaris, and Morganella morganii are the most common organisms isolated; less common organisms include Ureaplasma urealyticum and some species of Providencia, Klebsiella, Serratia, and Enterobacter. These infection stones are commonly observed in people who have factors that predispose them to urinary tract infections, such as those with spinal cord injury and other forms of neurogenic bladder, ileal conduit urinary diversion, vesicoureteral reflux, and obstructive uropathies. They are also commonly seen in people with underlying metabolic disorders, such as idiopathic hypercalciuria, hyperparathyroidism, and gout. Infection stones can grow rapidly, forming large calyceal staghorn (antler-shaped) calculi that require invasive surgery such as percutaneous nephrolithotomy for definitive treatment. Struvite stones (triple-phosphate/magnesium ammonium phosphate) have a 'coffin lid' morphology by microscopy.

Uric acid stones

About 5–10% of all stones are formed from uric acid. People with certain metabolic abnormalities, including obesity, may produce uric acid stones. Uric acid stones may also form in association with conditions that cause hyperuricosuria (an excessive amount of uric acid in the urine), with or without hyperuricemia (an excessive amount of uric acid in the serum), and in association with disorders of acid/base metabolism where the urine is excessively acidic (low pH), resulting in precipitation of uric acid crystals.
A diagnosis of uric acid urolithiasis is supported by the presence of a radiolucent stone in the face of persistent urine acidity, in conjunction with the finding of uric acid crystals in fresh urine samples. As noted above (in the section on calcium oxalate stones), people with inflammatory bowel disease (Crohn's disease, ulcerative colitis) tend to have hyperoxaluria and form oxalate stones. They also have a tendency to form urate stones, which are especially common after colon resection. Uric acid stones appear as pleomorphic crystals, usually diamond-shaped; they may also look like squares or rods, which are polarizable.

Other types

People with certain rare inborn errors of metabolism have a propensity to accumulate crystal-forming substances in their urine. For example, those with cystinuria, cystinosis, and Fanconi syndrome may form stones composed of cystine. Cystine stone formation can be treated with urine alkalinization and dietary protein restriction. People with xanthinuria often produce stones composed of xanthine. People with adenine phosphoribosyltransferase deficiency may produce 2,8-dihydroxyadenine stones, alkaptonurics produce homogentisic acid stones, and iminoglycinurics produce stones of glycine, proline, and hydroxyproline. Urolithiasis has also been noted to occur in the setting of therapeutic drug use, with crystals of drug forming within the renal tract in some people currently being treated with agents such as indinavir, sulfadiazine, and triamterene.

Location

Urolithiasis refers to stones originating anywhere in the urinary system, including the kidneys and bladder. Nephrolithiasis refers to the presence of such stones in the kidneys. Calyceal calculi are aggregations in either the minor or major calyx, parts of the kidney that pass urine into the ureter (the tube connecting the kidneys to the urinary bladder). The condition is called ureterolithiasis when a calculus is located in the ureter. Stones may also form or pass into the bladder, a condition referred to as bladder stones. Stones less than 5 mm (0.2 in) in diameter pass spontaneously in up to 98% of cases, while those measuring 5 to 10 mm (0.2 to 0.4 in) in diameter pass spontaneously in less than 53% of cases. Stones that are large enough to fill the renal calyces are called staghorn stones; in the vast majority of cases these are composed of struvite, which forms only in the presence of urease-producing bacteria. Other forms that can possibly grow to become staghorn stones are those composed of cystine, calcium oxalate monohydrate, and uric acid.

Prevention

Preventive measures depend on the type of stone. In those with calcium stones, drinking plenty of fluids, thiazide diuretics, and citrate are effective, as is allopurinol in those with high uric acid levels in the blood or urine. Specific therapy should be tailored to the type of stones involved. Diet can have an effect on the development of kidney stones. Preventive strategies include some combination of dietary modifications and medications, with the goal of reducing the excretory load of calculogenic compounds on the kidneys. Dietary recommendations to minimize the formation of kidney stones include the following (see the sketch after this list):
- Increasing total fluid intake so that more than two liters of urine are produced per day.
- Limiting cola soft drinks to less than one liter per week.
- Limiting animal protein intake to no more than two meals daily (an association between animal protein and recurrence of kidney stones has been shown in men).
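The three recommendations above can be expressed as simple threshold checks. The sketch below is illustrative only: the thresholds come directly from the list, while the record type, field names, and function are assumptions made for the example, not clinical software.

```python
# Illustrative sketch: encode the three dietary recommendations listed above
# as simple threshold checks. Thresholds come from the text; the DietDay
# record and function names are assumptions made for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class DietDay:
    urine_output_liters: float     # urine produced over the day
    cola_liters_this_week: float   # running weekly total of cola soft drinks
    animal_protein_meals: int      # meals containing animal protein that day

def prevention_flags(day: DietDay) -> List[str]:
    """Return which of the three dietary recommendations are not being met."""
    flags = []
    if day.urine_output_liters <= 2.0:
        flags.append("urine output is not above two liters per day")
    if day.cola_liters_this_week >= 1.0:
        flags.append("cola intake is not below one liter per week")
    if day.animal_protein_meals > 2:
        flags.append("more than two meals with animal protein today")
    return flags

if __name__ == "__main__":
    print(prevention_flags(DietDay(urine_output_liters=1.5,
                                   cola_liters_this_week=1.2,
                                   animal_protein_meals=3)))
```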
Maintenance of dilute urine by means of vigorous fluid therapy is beneficial in all forms of kidney stones, so increasing urine volume is a key principle for the prevention of kidney stones. Fluid intake should be sufficient to maintain a urine output of at least 2 litres (68 US fl oz) per day. A high fluid intake has been associated with a 40% reduction in recurrence risk, although the quality of the evidence for this is low.

Calcium binds with available oxalate in the gastrointestinal tract, thereby preventing its absorption into the bloodstream, and reducing oxalate absorption decreases kidney stone risk in susceptible people. Because of this, some doctors recommend chewing calcium tablets during meals containing oxalate foods. Calcium citrate supplements can be taken with meals if dietary calcium cannot be increased by other means. The preferred calcium supplement for people at risk of stone formation is calcium citrate, because it also helps to increase urinary citrate excretion.

Aside from vigorous oral hydration and eating more dietary calcium, other prevention strategies include avoidance of large doses of supplemental vitamin C and restriction of oxalate-rich foods such as leafy vegetables, rhubarb, soy products, and chocolate. However, no randomized, controlled trial of oxalate restriction has been performed to test the hypothesis that oxalate restriction reduces stone formation. Some evidence indicates that magnesium intake decreases the risk of symptomatic kidney stones.

The mainstay of medical management of uric acid stones is alkalinization (increasing the pH) of the urine. Uric acid stones are among the few types amenable to dissolution therapy, referred to as chemolysis. Chemolysis is usually achieved through oral medications, although in some cases intravenous agents, or even instillation of certain irrigating agents directly onto the stone, can be used via antegrade nephrostomy or retrograde ureteral catheters. Acetazolamide is a medication that alkalinizes the urine. In addition to acetazolamide, or as an alternative, certain dietary supplements are available that produce a similar alkalinization of the urine. These include sodium bicarbonate, potassium citrate, magnesium citrate, and Bicitra (a combination of citric acid monohydrate and sodium citrate dihydrate). Aside from alkalinizing the urine, these supplements have the added advantage of increasing the urinary citrate level, which helps to reduce the aggregation of calcium oxalate stones. Increasing the urine pH to around 6.5 provides optimal conditions for dissolution of uric acid stones, whereas increasing it above 7.0 raises the risk of calcium phosphate stone formation. Testing the urine periodically with nitrazine paper can help to ensure the urine pH remains in this optimal range. Using this approach, a stone dissolution rate of around 10 mm (0.4 in) of stone radius per month can be expected.

One of the recognized medical therapies for the prevention of stones is thiazide and thiazide-like diuretics, such as chlorthalidone or indapamide. These drugs inhibit the formation of calcium-containing stones by reducing urinary calcium excretion. Sodium restriction is necessary for the clinical effect of thiazides, as sodium excess promotes calcium excretion. Thiazides work best for renal leak hypercalciuria (high urine calcium levels), a condition in which high urinary calcium levels are caused by a primary kidney defect.
Thiazides are useful for treating absorptive hypercalciuria, a condition in which high urinary calcium is a result of excess absorption from the gastrointestinal tract.

For people with hyperuricosuria and calcium stones, allopurinol is one of the few treatments that have been shown to reduce kidney stone recurrences. Allopurinol interferes with the production of uric acid in the liver. The drug is also used in people with gout or hyperuricemia (high serum uric acid levels). Dosage is adjusted to maintain a reduced urinary excretion of uric acid; a serum uric acid level at or below 6 mg/100 ml is often a therapeutic goal. Hyperuricemia is not necessary for the formation of uric acid stones; hyperuricosuria can occur in the presence of normal or even low serum uric acid. Some practitioners advocate adding allopurinol only in people in whom hyperuricosuria and hyperuricemia persist despite the use of a urine-alkalinizing agent such as sodium bicarbonate or potassium citrate.

Treatment

Stone size influences the rate of spontaneous stone passage. For example, up to 98% of small stones (less than 5 mm (0.2 in) in diameter) may pass spontaneously through urination within four weeks of the onset of symptoms, but for larger stones (5 to 10 mm (0.2 to 0.4 in) in diameter), the rate of spontaneous passage decreases to less than 53%. Initial stone location also influences the likelihood of spontaneous stone passage: rates increase from 48% for stones located in the proximal ureter to 79% for stones located at the vesicoureteric junction, regardless of stone size. Assuming no high-grade obstruction or associated infection is found in the urinary tract, and symptoms are relatively mild, various nonsurgical measures can be used to encourage the passage of a stone. Repeat stone formers benefit from more intense management, including proper fluid intake and use of certain medications, as well as careful monitoring.

Management of pain often requires intravenous administration of NSAIDs or opioids. NSAIDs appear somewhat better than opioids or paracetamol in those with normal kidney function. Medications by mouth are often effective for less severe discomfort. The use of antispasmodics does not have further benefit.

Medical expulsive therapy

The use of medications to speed the spontaneous passage of stones in the ureter is referred to as medical expulsive therapy. Several agents, including alpha-adrenergic blockers (such as tamsulosin) and calcium channel blockers (such as nifedipine), may be effective. Alpha-blockers likely result in more people passing their stones, and they may pass their stones in a shorter time. Alpha-blockers appear to be more effective for larger stones (over 5 mm in size) than for smaller stones. A combination of tamsulosin and a corticosteroid may be better than tamsulosin alone. These treatments also appear to be useful in addition to lithotripsy.

Lithotripsy

Extracorporeal shock wave lithotripsy (ESWL) is a noninvasive technique for the removal of kidney stones. Most ESWL is carried out when the stone is present near the renal pelvis. ESWL involves the use of a lithotriptor machine to deliver externally applied, focused, high-intensity pulses of ultrasonic energy to cause fragmentation of a stone over a period of around 30–60 minutes. Following its introduction in the United States in February 1984, ESWL was rapidly and widely accepted as a treatment alternative for renal and ureteral stones.
It is currently used in the treatment of uncomplicated stones located in the kidney and upper ureter, provided the aggregate stone burden (stone size and number) is less than 20 mm (0.8 in) and the anatomy of the involved kidney is normal. For a stone greater than 10 mm (0.4 in), ESWL may not break the stone in one treatment; instead, two or three treatments may be needed. Some 80 to 85% of simple renal calculi can be effectively treated with ESWL. A number of factors can influence its efficacy, including the chemical composition of the stone, the presence of anomalous renal anatomy and the specific location of the stone within the kidney, the presence of hydronephrosis, body mass index, and the distance of the stone from the surface of the skin.

Common adverse effects of ESWL include acute trauma, such as bruising at the site of shock administration, and damage to blood vessels of the kidney. In fact, the vast majority of people who are treated with a typical dose of shock waves using currently accepted treatment settings are likely to experience some degree of acute kidney injury. ESWL-induced acute kidney injury is dose-dependent (it increases with the total number of shock waves administered and with the power setting of the lithotriptor) and can be severe, including internal bleeding and subcapsular hematomas. On rare occasions, such cases may require blood transfusion and even lead to acute renal failure. Hematoma rates may be related to the type of lithotriptor used; hematoma rates of less than 1% and up to 13% have been reported for different lithotriptor machines. Recent studies show reduced acute tissue injury when the treatment protocol includes a brief pause following the initiation of treatment, and both improved stone breakage and a reduction in injury when ESWL is carried out at a slow shock wave rate.

In addition to the potential for acute kidney injury, animal studies suggest these acute injuries may progress to scar formation, resulting in loss of functional renal volume. Recent prospective studies also indicate that elderly people are at increased risk of developing new-onset hypertension following ESWL. In addition, a retrospective case-control study published by researchers from the Mayo Clinic in 2006 found an increased risk of developing diabetes mellitus and hypertension in people who had undergone ESWL, compared with age- and gender-matched people who had undergone nonsurgical treatment. Whether or not acute trauma progresses to long-term effects probably depends on multiple factors, including the shock wave dose (i.e., the number of shock waves delivered, rate of delivery, power setting, acoustic characteristics of the particular lithotriptor, and frequency of retreatment) as well as certain intrinsic predisposing pathophysiologic risk factors.

To address these concerns, the American Urological Association established the Shock Wave Lithotripsy Task Force to provide an expert opinion on the safety and risk-benefit ratio of ESWL. The task force published a white paper outlining their conclusions in 2009. They concluded the risk-benefit ratio remains favorable for many people. The advantages of ESWL include its noninvasive nature, the fact that it is technically easy to treat most upper urinary tract calculi, and that, at least acutely, it is a well-tolerated, low-morbidity treatment for the vast majority of people.
However, they recommended slowing the shock wave firing rate from 120 pulses per minute to 60 pulses per minute to reduce the risk of renal injury and increase the degree of stone fragmentation.

Surgery

Most stones under 5 mm (0.2 in) pass spontaneously. Prompt surgery may nonetheless be required in persons with only one working kidney, bilateral obstructing stones, a urinary tract infection (and thus a presumably infected kidney), or intractable pain. Beginning in the mid-1980s, less invasive treatments such as extracorporeal shock wave lithotripsy, ureteroscopy, and percutaneous nephrolithotomy began to replace open surgery as the modalities of choice for the surgical management of urolithiasis. More recently, flexible ureteroscopy has been adapted to facilitate retrograde nephrostomy creation for percutaneous nephrolithotomy. This approach is still under investigation, though early results are favorable. Percutaneous nephrolithotomy or, rarely, anatrophic nephrolithotomy is the treatment of choice for large or complicated stones (such as calyceal staghorn calculi) or stones that cannot be extracted using less invasive procedures.

Ureteroscopy has become increasingly popular as flexible and rigid fiberoptic ureteroscopes have become smaller. One ureteroscopic technique involves the placement of a ureteral stent (a small tube extending from the bladder, up the ureter, and into the kidney) to provide immediate relief of an obstructed kidney. Stent placement can be useful for saving a kidney at risk for postrenal acute renal failure due to the increased hydrostatic pressure, swelling, and infection (pyelonephritis and pyonephrosis) caused by an obstructing stone. Ureteral stents vary in length from 24 to 30 cm (9.4 to 11.8 in), and most have a shape commonly referred to as a "double-J" or "double pigtail" because of the curl at both ends. They are designed to allow urine to flow past an obstruction in the ureter. They may be retained in the ureter for days to weeks as infections resolve and as stones are dissolved or fragmented by ESWL or by some other treatment. The stents dilate the ureters, which can facilitate instrumentation, and they also provide a clear landmark to aid in the visualization of the ureters and any associated stones on radiographic examinations. The presence of indwelling ureteral stents may cause minimal to moderate discomfort, urinary frequency or urgency, incontinence, and infection, which in general resolve on removal. Most ureteral stents can be removed cystoscopically during an office visit under topical anesthesia after resolution of urolithiasis.

More definitive ureteroscopic techniques for stone extraction (rather than simply bypassing the obstruction) include basket extraction and ultrasound ureterolithotripsy. Laser lithotripsy is another technique, which involves the use of a holmium:yttrium aluminium garnet (Ho:YAG) laser to fragment stones in the bladder, ureters, and kidneys. Ureteroscopic techniques are generally more effective than ESWL for treating stones located in the lower ureter, with success rates of 93–100% using Ho:YAG laser lithotripsy. Although ESWL has traditionally been preferred by many practitioners for treating stones located in the upper ureter, more recent experience suggests ureteroscopic techniques offer distinct advantages in the treatment of upper ureteral stones.
Specifically, the overall success rate is higher, fewer repeat interventions and postoperative visits are needed, and treatment costs are lower after ureteroscopic treatment when compared with ESWL. These advantages are especially apparent with stones greater than 10 mm (0.4 in) in diameter. However, because ureteroscopy of the upper ureter is much more challenging than ESWL, many urologists still prefer to use ESWL as a first-line treatment for stones of less than 10 mm, and ureteroscopy for those greater than 10 mm in diameter. Ureteroscopy is the preferred treatment in pregnant and morbidly obese people, as well as in those with bleeding disorders.

Epidemiology

|Country|Earliest prevalence (years)|Latest prevalence (years)|
|United States|2.6% (1964–1972)|5.2% (1988–1994)|
|Italy|1.2% (1983)|1.7% (1993–1994)|
|Scotland|3.8% (1977)|3.5% (1987)|
|Spain|0.1% (1977)|10.0% (1991)|

|Country|New cases per 100,000 (year)|Trend|
|United States|116 (2000)|decreasing|

Kidney stones affect all geographical, cultural, and racial groups. The lifetime risk is about 10 to 15% in the developed world, but can be as high as 20 to 25% in the Middle East. The increased risk of dehydration in hot climates, coupled with a diet 50% lower in calcium and 250% higher in oxalates compared to Western diets, accounts for the higher net risk in the Middle East. In the Middle East, uric acid stones are more common than calcium-containing stones. The number of deaths due to kidney stones is estimated at 19,000 per year, a figure that was fairly consistent between 1990 and 2010.

In North America and Europe, the annual incidence of new kidney stone cases is roughly 0.5% of the population. In the United States, the frequency of urolithiasis in the population increased from 3.2% to 5.2% between the mid-1970s and the mid-1990s, and about 9% of the population has had a kidney stone. The total cost for treating urolithiasis was US$2 billion in 2003.

About 65–80% of those with kidney stones are men; most stones in women are due to either metabolic defects (such as cystinuria) or infection. Men most commonly experience their first episode between 30 and 40 years of age, whereas for women, the age at first presentation is somewhat later. The age of onset shows a bimodal distribution in women, with episodes peaking at 35 and 55 years. Recurrence rates are estimated at 50% over a 10-year period and 75% over a 20-year period, with some people experiencing ten or more episodes over the course of a lifetime. A 2010 review concluded that rates of disease are increasing.

History

The existence of kidney stones was first recorded thousands of years ago, and lithotomy for the removal of stones is one of the earliest known surgical procedures. In 1901, a stone discovered in the pelvis of an ancient Egyptian mummy was dated to 4,800 BC. Medical texts from ancient Mesopotamia, India, China, Persia, Greece, and Rome all mentioned calculous disease. Part of the Hippocratic Oath suggests there were practicing surgeons in ancient Greece to whom physicians were to defer for lithotomies. The Roman medical treatise De Medicina by Aulus Cornelius Celsus contained a description of lithotomy, and this work served as the basis for this procedure until the 18th century. Famous people who were kidney stone formers include Napoleon I, Epicurus, Napoleon III, Peter the Great, Louis XIV, George IV, Oliver Cromwell, Lyndon B. Johnson, Benjamin Franklin, Michel de Montaigne, Francis Bacon, Isaac Newton, Samuel Pepys, William Harvey, Herman Boerhaave, and Antonio Scarpa.
New techniques in lithotomy began to emerge starting in 1520, but the operation remained risky. After Henry Jacob Bigelow popularized the technique of litholapaxy in 1878, the mortality rate dropped from about 24% to 2.4%. However, other treatment techniques continued to produce a high level of mortality, especially among inexperienced urologists. In 1980, Dornier MedTech introduced extracorporeal shock wave lithotripsy for breaking up stones via acoustical pulses, and this technique has since come into widespread use.

Etymology

The term renal calculus is from the Latin rēnēs, meaning "kidneys", and calculus, meaning "pebble". Lithiasis (stone formation) in the kidneys is called nephrolithiasis, from nephro-, meaning kidney, plus -lith, meaning stone, and -iasis, meaning disorder.

Research

Crystallization of calcium oxalate appears to be inhibited by certain substances in the urine that retard the formation, growth, aggregation, and adherence of crystals to renal cells. By purifying urine using salt precipitation, isoelectric focusing, and size-exclusion chromatography, some researchers have found that calgranulin, a protein formed in the kidney, is a potent inhibitor of the in vivo formation of calcium oxalate crystals. Considering its extremely high levels of inhibition of growth and aggregation of calcium oxalate crystals, calgranulin might be an important intrinsic factor in the prevention of nephrolithiasis.

Children

Although kidney stones do not often occur in children, the incidence is increasing. These stones are in the kidney in two-thirds of reported cases, and in the ureter in the remaining cases. Older children are at greater risk independent of sex. As with adults, most pediatric kidney stones are predominantly composed of calcium oxalate; struvite and calcium phosphate stones are less common. Calcium oxalate stones in children are associated with high amounts of calcium, oxalate, and magnesium in acidic urine.

Other animals

Among ruminants, uroliths more commonly cause problems in males than in females, because stones are more likely to obstruct passage at the sigmoid flexure of the male urinary tract. Early-castrated males are at greater risk because of their smaller urethral diameter. Alkaline (higher) pH favors formation of carbonate and phosphate calculi, so for domestic ruminants the dietary cation:anion balance is sometimes adjusted to ensure a slightly acidic urine pH, for prevention of calculus formation. Differing generalizations regarding the effects of pH on the formation of silicate uroliths may be found; in this connection, it may be noted that under some circumstances calcium carbonate accompanies silica in siliceous uroliths.

Pelleted feeds may be conducive to the formation of phosphate uroliths because of increased urinary phosphorus excretion. This is attributable to lower saliva production where pelleted rations containing finely ground constituents are fed: with less blood phosphate partitioned into saliva, more tends to be excreted in urine (most saliva phosphate is excreted in the feces). Oxalate uroliths can occur in ruminants, although such problems from oxalate ingestion may be relatively uncommon. Ruminant urolithiasis associated with oxalate ingestion has been reported; however, no renal tubular damage or visible deposition of calcium oxalate crystals in the kidneys was found in yearling wether sheep fed diets containing soluble oxalate at 6.5 percent of dietary dry matter for about 100 days. Conditions limiting water intake can result in stone formation.
Various surgical interventions, e.g. amputation of the urethral process at its base near the glans penis in male ruminants, perineal urethrostomy, or tube cystostomy, may be considered for relief of obstructive urolithiasis.
About This Chapter 3rd Grade Math: Multiplication & Division - Chapter Summary If your 3rd grade students need some assistance when it comes to multiplication and division, this chapter can help. Each lesson is clearly labeled and focuses on one key topic. So for example, if your student is having problems performing multiplication with mental math, using skip counting and finger tricks, or dividing numbers with a remainder, you can look at the chapter menu and easily point your students in the right direction. All of the lessons include a corresponding quiz to assess students' understanding and help reinforce the lesson topics. - Steps for performing multiplication - Input-output tables for multiplication - Multiplying by 10 - Whole number multiplication with and without regrouping - Multiplying 3 or more numbers - Using the expanded notation method - Connections between multiplication and division - Methods for division problems - Dividing numbers by 1 - Steps for performing division and long division 1. How to Perform Multiplication: Steps & Examples Multiplication, one of the basic operations of math, is important not just in elementary math but also in higher math. Watch this video lesson to learn how you can multiply with ease. 2. Learning Multiplication Facts to 10 Using Skip Counting Multiplication facts might not be your favorite thing to learn in math class, but they are super important to know! In this lesson, we will use skip counting as a method for learning your multiplication facts zero through 10. 3. Learning Multiplication Facts to 10 Using Finger Tricks When it comes to multiplication, finger tricks are very useful tools, and they're fun! This trick is for multiplication tables of 6, 7, 8, and 9. When you need to find an answer quickly, finger tricks can save the day. 4. Working with Multiplication Input-Output Tables In this lesson, we will find patterns in input-output tables, and use those patterns to find missing data in the tables. We will use the four operations: multiplication, division, addition and subtraction for these tables. 5. How to Multiply by 10 In this lesson, you will practice multiplying whole numbers and decimals by 10. You will learn the rules that make it very easy to solve all multiplication by 10 problems. 6. Multiplying Whole Numbers With & Without Regrouping: Lesson for Kids Solve 2 x 8 without using paper and pencil. Now, try to solve 28 x 8; it's a little harder, right? As you get older, you can't always solve a multiplication problem in your head. Let's take a look at how to multiply with and without regrouping. 7. How to Multiply Three or More Numbers We use multiplication all the time in life, but have you ever wondered how to multiply more than two numbers together quickly? In this lesson, you will learn how to multiply three or more numbers together. 8. Expanded Notation Method for Multiplication Expanded notation allows you to create simpler multiplication problems and add them together by breaking down one of the factors into smaller numbers. In this video, you'll learn how to use expanded notation and organize the information through the box method. 9. Using Mental Math for Multiplication Do you want to impress your friends and family with your amazing brain? You can when you learn a really easy way to multiply single digits by multiples of 10 and 100 in this lesson. 10. The Relationship Between Multiplication & Division Did you know that you use multiplication and division when you are sharing food, such as cookies? 
In this lesson, you will learn about how multiplication and division go together. 11. Division Lesson for Kids: Definition & Method Traditionally, people believed that they had to know how to multiply before they could divide, but there are other strategies. Drawing pictures or using objects, skip counting, and repeated subtraction are all approaches we'll look at in this lesson about division. 12. How to Divide by 1 Watch this video lesson to learn how you can divide any number by 1. Learn the rule that will allow you to solve any division by 1 problem in an instant. 13. How to Perform Division: Steps & Examples Watch this video lesson to learn how you can divide. Division is one of the basic operations of math. It is one of the fundamental operations that all higher mathematics depends upon. 14. Long Division Steps: Lesson for Kids This lesson shows the process for completing long division equations, which involves division, multiplication, and subtraction. We'll look at equations with and without remainders. 15. Dividing With a Remainder In this lesson you will learn what to do when you are dividing and you have a number left over. We will be using basic division steps and learning all about remainders. (A brief worked sketch of the expanded notation and remainder ideas follows this lesson list.)
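For readers who want to see two of these ideas written out as a computation, here is a small, hypothetical Python sketch (it is not part of the Study.com lessons): it works through the expanded notation method for multiplication from lesson 8 and division with a remainder from lesson 15. The example numbers (28 × 8 and 17 ÷ 5) are chosen only for illustration.

```python
# A minimal sketch (not part of the lessons themselves) of two ideas from this
# chapter: the expanded notation method for multiplication and dividing with a
# remainder. The numbers are made up for illustration.

def expanded_notation_multiply(a, b):
    """Multiply a by b by breaking a into its place values (e.g. 28 = 20 + 8)."""
    partial_products = []
    place = 1
    while a > 0:
        digit = a % 10
        if digit:
            partial_products.append(digit * place * b)  # one simpler product per place value
        a //= 10
        place *= 10
    return sum(partial_products), partial_products

product, parts = expanded_notation_multiply(28, 8)
print(parts, "->", product)                 # [64, 160] -> 224

# Dividing with a remainder: 17 cookies shared among 5 friends.
quotient, remainder = divmod(17, 5)
print(quotient, "each, with", remainder, "left over")   # 3 each, with 2 left over
```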
Equations & Word Problems are practically always included in the GED Math Test. These questions may look easy, but it takes a lot of practice to answer them consistently. So let's begin learning. The transcript of the video is provided for your convenience. Here are the needed steps: First, set up an equation. Every solution to a word problem needs a carefully crafted equation that accurately describes the constraints in the problem statement. Second, solve the equation. Having set up the equation in the previous step, we must always solve it. Then, answer the question. This is an easily overlooked step. To give you an example, the problem may be asking for Jane's age, but the solution of your equation gives the age of Liz, Jane's sister. So be sure you're answering the question that was originally asked in the problem. Your solution needs to be written in a sentence with appropriate units. For example: Amelie has withdrawn $125 from her savings bank account. Because of this withdrawal, her account's current balance is now $1,200. What was the account's original balance before the withdrawal? B − 125 = 1200 is the original equation. B − 125 + 125 = 1200 + 125 (we've added 125 to both sides of our equation). B = 1325 (on the left, adding 125 "undoes" the subtraction of 125, which brings us back to B; on the right side, 1200 + 125 = 1325). The answer to the question is: The original balance in the account was $1,325. One more example: A triangle's perimeter is 114 feet. Two of the triangle's sides measure 30 feet and 40 feet, respectively. What is the measure of the triangle's third side? 114 = x + 30 + 40 is our equation. 114 = x + 70. 114 − 70 = x + 70 − 70 (we subtracted 70 from both sides). 44 = x. On the right side of the equation, subtracting 70 "undoes" the adding of 70 and brings us back to x. On the left side, 114 − 70 equals 44. The answer to the question: The unknown triangle side is 44 feet.
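The same two worked examples can also be checked symbolically. The following is a small, optional Python sketch (assuming the sympy library is available; it is not part of the lesson itself) that sets up and solves each equation exactly as described above.

```python
from sympy import symbols, Eq, solve

# Example 1: B - 125 = 1200  (Amelie's original balance)
B = symbols("B")
print(solve(Eq(B - 125, 1200), B))      # [1325] -> the original balance was $1,325

# Example 2: 114 = x + 30 + 40  (the triangle's third side)
x = symbols("x")
print(solve(Eq(114, x + 30 + 40), x))   # [44] -> the third side is 44 feet
```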
Lesson Menu: Five-Minute Check (over Chapter 1), NGSSS, Then/Now, New Vocabulary, Example 1: Patterns and Conjecture, Example 2: Algebraic and Geometric Conjectures, Example 3: Real-World Example: Make Conjectures from Data, Example 4: Find Counterexamples. 5-Minute Check 1: Identify the solid. A. triangular pyramid B. triangular prism C. rectangular pyramid D. cone. 5-Minute Check 2: Find the distance between A(–3, 7) and B(1, 4); answer choice D is 5. 5-Minute Check 3: Find m∠C if ∠C and ∠D are supplementary, m∠C = 3y – 5, and m∠D = 8y + 20; answer choice D is 45. 5-Minute Check 4: Find SR if R is the midpoint of SU shown in the figure (figure not reproduced); answer choice D is 0. 5-Minute Check 5: Find n if the ray shown in the figure bisects ∠VWY (figure not reproduced); answer choice D is 12. 5-Minute Check 6: The midpoint of AB is (3, –2). The coordinates of A are (7, –1). What are the coordinates of B? A. (–1, –3) B. (4, –1) C. (1, 3) D. (–4, 1). NGSSS: LA.910.1.6.5 The student will relate new vocabulary to familiar words. MA.912.G.8.3 Determine whether a solution is reasonable in the context of the original situation. Patterns and Conjecture, Example 1A: Write a conjecture that describes the pattern 2, 4, 12, 48, 240. Then use your conjecture to find the next item in the sequence. Step 1: Look for a pattern: 2, 4, 12, 48, 240. Step 2: Make a conjecture. The numbers are multiplied by 2, 3, 4, and 5; the next number will be multiplied by 6, so it will be 6 ● 240 or 1440. Example 1B: Write a conjecture that describes the pattern shown, a sequence of figures with 3, 9, and 18 segments. Then use your conjecture to find the next item in the sequence. Step 1: Look for a pattern: 3, 9, 18. Step 2: Make a conjecture. Notice that the first increase, 6, is 3 × 2 and the second, 9, is 3 × 3. The next figure will increase by 3 × 4 or 12 segments, so the next figure will have 18 + 12 or 30 segments. Check: Draw the next figure to check your conjecture. Practice 1A: Write a conjecture that describes the pattern in the sequence. Then use your conjecture to find the next item in the sequence. Practice 1B: Write a conjecture that describes the pattern in the sequence. Then use your conjecture to find the next item in the sequence. A. The next figure will have 10 circles. B. The next figure will have 10 + 5 or 15 circles. C. The next figure will have 15 + 5 or 20 circles. D. The next figure will have 15 + 6 or 21 circles. Example 2A: Make a conjecture about the sum of an odd number and an even number. List some examples that support your conjecture. Step 1: List some examples: 1 + 2 = 3, 1 + 4 = 5, 4 + 5 = 9, 5 + 6 = 11. Step 2: Look for a pattern. Notice that the sums 3, 5, 9, and 11 are all odd numbers. Step 3: Make a conjecture. Answer: The sum of an odd number and an even number is odd. Example 2B: For points L, M, and N, LM = 20, MN = 6, and LN = 14. Make a conjecture and draw a figure to illustrate your conjecture. Step 1: Draw a figure. Step 2: Examine the figure. Since LN + MN = LM, the points can be collinear with point N between points L and M. Step 3: Make a conjecture. Answer: L, M, and N are collinear. Practice 2A: Make a conjecture about the product of two odd numbers. A. The product is odd. B. The product is even. C. The product is sometimes even, sometimes odd. D. The product is a prime number. Practice 2B: Given: ACE is a right triangle with AC = CE. Which figure would illustrate the following conjecture? ΔACE is isosceles, ∠C is a right angle, and AE is the hypotenuse. Example 3A: SALES. The table shows the total sales for the first three months a store is open.
The owner wants to predict the sales for the fourth month. Make a statistical graph that best displays the data. Since you want to look for a pattern over time, use a scatter plot to display the data. Label the horizontal axis with the months and the vertical axis with the amount of sales, then plot each set of data. Example 3B: SALES. The table shows the total sales for the first three months a store is open. The owner wants to predict the sales for the fourth month. Make a conjecture about the sales in the fourth month and justify your claim or prediction. Look for patterns in the data: the sales triple each month. Answer: The sales triple each month, so in the fourth month there will be $4,500 × 3 or $13,500 in sales. Practice 3A: SCHOOL. The table shows the enrollment of incoming freshmen at a high school over the last four years. The school wants to predict the number of freshmen for next year. Make a statistical graph that best displays the data. Practice 3B: SCHOOL. The table shows the enrollment of incoming freshmen at a high school over the last four years. The school wants to predict the number of freshmen for next year. Make a conjecture about the enrollment for next year. A. Enrollment will increase by about 25 students; 358 students. B. Enrollment will increase by about 50 students; 383 students. C. Enrollment will decrease by about 20 students; 313 students. D. Enrollment will stay about the same; 335 students. Example 4: UNEMPLOYMENT. Based on the table showing unemployment rates for various counties in Texas, find a counterexample for the following statement: The unemployment rate is highest in the cities with the most people. Examine the data in the table. Find two cities such that the population of the first is greater than the population of the second while the unemployment rate of the first is less than the unemployment rate of the second. El Paso has a greater population than Maverick, while El Paso has a lower unemployment rate than Maverick. Answer: Maverick, with a population of 50,436, has a higher rate of unemployment than El Paso, which has a population of 713,126. Practice 4: DRIVING. This table shows selected states, the 2000 population of each state, and the number of people per 1000 residents who are licensed drivers in each state. Based on the table, which two states could be used as a counterexample for the following statement? The greater the population of a state, the lower the number of drivers per 1000 residents. A. Texas & California B. Vermont & Texas C. Wisconsin & West Virginia D. Alabama & West Virginia. (The pattern from Example 1A is worked through numerically in the short sketch that follows.)
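The numeric pattern in Example 1A can also be generated programmatically. Below is a small, illustrative Python sketch (not part of the original slides) that builds the sequence 2, 4, 12, 48, 240 by multiplying each term by the next counting number and prints the conjectured next item.

```python
# Illustrative sketch of the conjecture in Example 1A: each term is the previous
# term multiplied by the next counting number (2, then 3, then 4, ...).

def extend_pattern(first_term, length):
    """Return `length` terms of the sequence where each new term is the previous one times 2, 3, 4, ..."""
    terms = [first_term]
    multiplier = 2
    while len(terms) < length:
        terms.append(terms[-1] * multiplier)
        multiplier += 1
    return terms

print(extend_pattern(2, 5))  # [2, 4, 12, 48, 240]       -- the given sequence
print(extend_pattern(2, 6))  # [2, 4, 12, 48, 240, 1440] -- conjectured next item is 1440
```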
When Did Galaxies Get Their Spirals? Look at any given point in the sky and you will see galaxies. Billions and billions and billions of galaxies. Look closer and you'll find they can be categorized into three main types of galaxy, based on their apparent shape: elliptical, spiral, and irregular. But what makes a spiral galaxy, well, spiral? And how long does it take them to get in a spin? In a fascinating study to be published in the Astrophysical Journal, married astronomer team Debra Elmegreen (of Vassar College in Poughkeepsie, New York) and Bruce Elmegreen (at IBM's T.J. Watson Research Center in Yorktown Heights, New York) looked to the famous Hubble Ultra-Deep Field (UDF) observation of a tiny, 'empty' patch of sky in the constellation Fornax. The observation gathered data from September 2003 to January 2004, capturing light that was generated right at the dawn of the Universe. The ground-shaking revelation to come from the UDF is that even a tiny region of the sky that appears to be empty is actually stuffed full of faint, distant galaxies; in this particular observation, around 10,000 galaxies can be seen. After some intense scrutiny, the researchers were able to pick out 269 spiral galaxies in the UDF, but whittled that number down to 41 - the others were discarded due to the lack of redshift data (a measurement that would reveal each galaxy's distance and therefore its age) or the inability to clearly see a spiral pattern. Of those 41 galaxies, the Elmegreens were able to sub-divide them into five morphological classifications - from the clumpy-armed spirals that had a "woolly" appearance and the symmetrical two-armed spirals (designated "Grand Design" galaxies) to more mature, multi-armed spiral structures not too dissimilar to our galaxy. The different classifications painted a picture of spiral galaxy evolution and have now given astronomers a very privileged look into when the spirals of galaxies formed in the early Universe. "The onset of spiral structure in galaxies appears to occur between redshifts 1.4 and 1.8 when disks have developed a cool stellar component, rotation dominates over turbulent motions in the gas, and massive clumps become less frequent," write the astronomers. The redshift of a galaxy directly relates to that galaxy's age. As the Universe expands, ancient light traveling through the universe gets stretched; this 'light-stretching' is known as redshift. The higher the redshift, the further the light has traveled, and so the older it is. Therefore, from the redshift measurements of this small collection of galaxies in the UDF, the researchers have found that a definite spiral galaxy structure begins to form for galaxies at redshift 1.8, which equates to approximately 3.7 billion years after the Big Bang. However, these are only the embryos of spiral galaxies, the "woolly"-type galaxies with very basic structures smeared with nebulous clouds of star formation. It's not until approximately 8 billion years after the Big Bang (redshift 0.6) that more complex, multi-arm spiral structures form. "The observations of different spiral types are consistent with the interpretation that clumpy disks form first and then transition to spirals as the accretion rate and gas velocity dispersion decrease, and the growing population of old fast-moving stars begins to dominate the disk mass," they write.
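The redshift-to-age conversion quoted above can be reproduced with a standard cosmology calculator. Here is a brief, illustrative Python sketch, assuming the astropy package and its built-in Planck13 parameter set (which is not necessarily the cosmology the Elmegreens adopted), that converts the redshifts mentioned in the article into cosmic ages.

```python
from astropy.cosmology import Planck13

# Convert the redshifts discussed in the article into cosmic ages (Gyr after the Big Bang).
for z in (1.8, 1.4, 0.6):
    age_gyr = Planck13.age(z).to_value("Gyr")   # age of the Universe at redshift z
    print(f"z = {z}: about {age_gyr:.1f} Gyr after the Big Bang")

# With the Planck13 parameters this gives roughly 3.6 Gyr at z = 1.8 and roughly
# 7.9 Gyr at z = 0.6, consistent with the ~3.7 and ~8 billion years quoted above.
print("Age of the Universe today:", Planck13.age(0))
```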
In a nutshell, early galaxies are a turbulent mess of gas, dust and voracious star formation. These tumultuous times are not conducive to the galaxy settling into a more refined spiral structure. But given enough time, older stars begin to dominate the galactic landscape as the once-giant star formation regions shrink. These factors limit the instabilities throughout the galaxy, heralding a long, quiescent spiral galaxy structure not too dissimilar to the Milky Way's shape some 13.75 billion years after the Big Bang. As pointed out by The Physics arXiv Blog, although this research goes a long way to describe the evolution of galaxies in the earlier phases of the Universe, it would be interesting to see how dark matter factors in. Dark matter is known to pervade the known Universe and has been linked with galaxy growth. Also, how do the supermassive black holes, known to lurk in the centers of the majority of galaxies, factor into the evolution of spiral galaxies? This study highlights the incredible power of the Hubble Space Telescope and proves that the data it provides continues to transform how we view the Cosmos. Pre-print of publication: The Onset of Spiral Structure in the Universe, Elmegreen and Elmegreen, 2013. arXiv:1312.2215 [astro-ph.CO] The Hubble Ultra Deep Field is an image of a small region of space in the constellation Fornax, composited from Hubble Space Telescope data accumulated over a period from Sept. 3, 2003 through Jan. 16, 2004. The patch of sky in which the galaxies reside was chosen because it had a low density of bright stars in the near-field. To celebrate its 23rd year in space, the Hubble Space Telescope snapped this view of the famous Horsehead nebula in infrared light. Usually obscured by the thick clouds of dust and gas, baby stars can be seen cocooned inside this stellar nursery. For the last 23 years, Hubble has been looking deep into the Cosmos, returning over a million observations of nebulae such as this, but also planets, exoplanets, galaxies and clusters of galaxies. The mission is a testament to the human spirit to want to explore and discover. Here are some of our favorite recent observations to come from the veteran mission. Light from an ancient galaxy 10 billion light-years away has been bent and magnified by the galaxy cluster RCS2 032727-132623. Without the help of this lensing effect, the distant galaxy would be extremely faint. This is 30 Doradus, deep inside the Tarantula Nebula, located over 170,000 light-years away in the Large Magellanic Cloud, a small satellite galaxy of the Milky Way. 30 Doradus is an intense star-forming region where millions of baby stars are birthed inside the thick clouds of dust and gas. NGC 3314 is actually two galaxies overlapping. They're not colliding – as they are separated by tens of millions of light-years – but from our perspective, the pair appears to be in a weird cosmic dance. Arp 116 consists of a very odd galactic couple. M60 is the huge elliptical galaxy to the left and NGC 4647 is the small spiral galaxy to the right. M60 is famous for containing a gargantuan supermassive black hole in its core weighing in at 4.5 billion solar masses. With help from the Karl G. Jansky Very Large Array (VLA) radio telescope in New Mexico, Hubble has observed the awesome power of the supermassive black hole in the core of elliptical galaxy Hercules A. Long jets of gas are being blasted deep into space as the active black hole churns away inside the galaxy's nucleus.
The striking Sharpless 2-106 star-forming region is approximately 2,000 light-years from Earth and has a rather beautiful appearance. The dust and gas of the stellar nursery has created a nebula that looks like a ‘snow angel.’ NGC 922 is a spiral galaxy with a difference. Over 300 million years ago, a smaller galaxy (called 2MASXI J0224301-244443) careened through the center of its disk causing a galactic-scale smash-up, blasting out the other side. This massive disruption generated waves of gravitational energy, triggering pockets of new star formation – highlighted by the pink nebulae encircling the galaxy. Four hundred years ago a star exploded as a type 1a supernova in the Large Magellanic Cloud (LMC) some 170,000 light-years from Earth. This is what was left behind. The beautiful ring-like structure of supernova remnant (SNR) 0509-67.5 is highlighted by Hubble and NASA’s Chandra X-ray space observatory observations. The X-ray data (blue/green hues) are caused by the shockwave of the supernova heating ambient gases. The intricate wisps of thin gas (billions of times less dense than smoke in our atmosphere) from Herbig-Haro 110 are captured in this stunning Hubble observation. Herbig-Haro objects are young stars in the throes of adolescence, blasting jets of gas from their poles. Contained within an area a fraction of the diameter of the moon, astronomers counted thousands of galaxies in the deepest observation ever made by Hubble. Combining 10 years of Hubble observations, the Hubble eXtreme Deep Field (XDF) has picked out galaxies that were forming when the Universe was a fraction of the age it is now.
Radiation pressure is the pressure exerted upon any surface due to the exchange of momentum between the object and the electromagnetic field. This includes the momentum of light or electromagnetic radiation of any wavelength which is absorbed, reflected, or otherwise emitted (e.g. black body radiation) by matter on any scale (from macroscopic objects to dust particles to gas molecules). The forces generated by radiation pressure are generally too small to be noticed under everyday circumstances; however, they are important in some physical processes. This particularly includes objects in outer space where it is usually the main force acting on objects besides gravity, and where the net effect of a tiny force may have a large cumulative effect over long periods of time. For example, had the effects of the sun's radiation pressure on the spacecraft of the Viking program been ignored, the spacecraft would have missed Mars orbit by about 15,000 km (9,300 mi). Radiation pressure from starlight is crucial in a number of astrophysical processes as well. The significance of radiation pressure increases rapidly at extremely high temperatures, and can sometimes dwarf the usual gas pressure, for instance in stellar interiors and thermonuclear weapons. Radiation pressure can equally well be accounted for by considering the momentum of a classical electromagnetic field or in terms of the momenta of photons, particles of light. The interaction of electromagnetic waves or photons with matter may involve an exchange of momentum. Due to the law of conservation of momentum, any change in the total momentum of the waves or photons must involve an equal and opposite change in the momentum of the matter it interacted with (Newton's third law of motion), as is illustrated in the accompanying figure for the case of light being perfectly reflected by a surface. This transfer of momentum is the general explanation for what we term radiation pressure. - 1 Discovery - 2 Theory - 3 Solar radiation pressure - 4 Cosmic effects of radiation pressure - 5 Laser applications of radiation pressure - 6 See also - 7 References - 8 Further reading The assertion that light, as electromagnetic radiation, has the property of momentum and thus exerts a pressure upon any surface it is exposed to was published by James Clerk Maxwell in 1862, and proven experimentally by Russian physicist Pyotr Lebedev in 1900 and by Ernest Fox Nichols and Gordon Ferrie Hull in 1901. The pressure is very feeble, but can be detected by allowing the radiation to fall upon a delicately poised vane of reflective metal in a Nichols radiometer (this should not be confused with the Crookes radiometer, whose characteristic motion is not caused by radiation pressure but by impacting gas molecules). Radiation pressure can be viewed as a consequence of the conservation of momentum given the momentum attributed to electromagnetic radiation. That momentum can be equally well calculated on the basis of electromagnetic theory or from the combined momenta of a stream of photons, giving identical results as is shown below. Radiation pressure from momentum of an electromagnetic wave According to Maxwell's theory of electromagnetism, an electromagnetic wave carries momentum, which will be transferred to an opaque surface it strikes. The energy flux (irradiance) of a plane wave is calculated using the Poynting vector , whose magnitude we denote by S. 
S divided by the speed of light is the density of the linear momentum per unit area (pressure) of the electromagnetic field, and that is the radiation pressure experienced by an absorbing surface at normal incidence: P = S/c. If the surface is planar at an angle α to the incident wave, the intensity across the surface will be geometrically reduced by the cosine of that angle, and the component of the radiation force against the surface will also be reduced by the cosine of α, resulting in a pressure P = (S/c) cos²α. The momentum from the incident wave is in the same direction as that wave. But only the component of that momentum normal to the surface contributes to the pressure on the surface, as given above. The component of that force tangent to the surface is not called pressure. Radiation pressure from reflection The above treatment for an incident wave accounts for the radiation pressure experienced by a black (totally absorbing) body. If the wave is specularly reflected, then the recoil due to the reflected wave will further contribute to the radiation pressure. In the case of a perfect reflector, this pressure will be identical to the pressure caused by the incident wave, thus doubling the net radiation pressure on the surface: P = 2(S/c) cos²α. For a partially reflective surface, the second term must be multiplied by the reflectivity (also known as the reflection coefficient of intensity), so that the increase is less than double. For a diffusely reflective surface, the details of the reflection and geometry must be taken into account, again resulting in an increased net radiation pressure of less than double. Radiation pressure by emission Just as a wave reflected from a body contributes to the net radiation pressure experienced, a body that emits radiation of its own (rather than reflecting it) experiences a radiation pressure again given by the irradiance of that emission in the direction normal to the surface, I_e: P = I_e/c. The emission can be from black body radiation or any other radiative mechanism. Since all materials emit black body radiation (unless they are totally reflective or at absolute zero), this source of radiation pressure is ubiquitous but usually very tiny. However, because black body radiation increases rapidly with temperature (according to the fourth power of temperature, as given by the Stefan–Boltzmann law), radiation pressure due to the temperature of a very hot object (or due to incoming black body radiation from similarly hot surroundings) can become very significant. This becomes important in stellar interiors, which are at millions of degrees. Radiation pressure in terms of photons Electromagnetic radiation can be viewed in terms of particles rather than waves; these particles are known as photons. Photons have no rest mass; however, they are never at rest (they move at the speed of light) and nonetheless carry a momentum given by p = E_p/c = h/λ, where E_p is the photon energy, h is Planck's constant and λ is the wavelength. The radiation pressure again can be seen as the transfer of each photon's momentum to the opaque surface, plus the momentum due to a (possible) recoil photon for a (partially) reflecting surface. Since an incident wave of irradiance I_f over an area A has a power of I_f A, this implies a flux of I_f/E_p photons per second per unit area striking the surface. Combining this with the above expression for the momentum of a single photon results in the same relationships between irradiance and radiation pressure described above using classical electromagnetics. And again, reflected or otherwise emitted photons will contribute to the net radiation pressure identically.
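To make the photon-counting argument concrete, here is a brief, illustrative Python sketch. The 550 nm wavelength and the 1361 W/m² irradiance are assumptions chosen for the example (not values taken from the text); the point is only that the photon flux multiplied by the momentum per photon reproduces the classical result P = I_f/c for an absorbing surface at normal incidence.

```python
# Illustration of the photon picture above: for a monochromatic beam, the photon
# flux times the momentum per photon reproduces P = I_f / c exactly.
# Assumed values for the example: green light at 550 nm, irradiance ~1361 W/m^2.

H = 6.626e-34           # Planck's constant, J s
C = 299_792_458.0       # speed of light, m/s
WAVELENGTH = 550e-9     # m (assumed)
IRRADIANCE = 1361.0     # W/m^2 (assumed, roughly the solar constant)

photon_energy = H * C / WAVELENGTH        # E_p = h c / lambda
photon_momentum = photon_energy / C       # p = E_p / c = h / lambda
photon_flux = IRRADIANCE / photon_energy  # photons per second per m^2

pressure_from_photons = photon_flux * photon_momentum  # absorbed, normal incidence
pressure_from_wave = IRRADIANCE / C                    # classical result, P = I_f / c

print(f"photon flux: {photon_flux:.3e} photons / (s m^2)")
print(f"pressure (photon picture): {pressure_from_photons:.3e} Pa")
print(f"pressure (wave picture):   {pressure_from_wave:.3e} Pa")  # the two agree (~4.5e-6 Pa)
```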
Compression in a uniform radiation field Surfaces of a body in thermal equilibrium with its surroundings at a temperature T will be surrounded by a uniform radiation field described by the Planck black body radiation law, and will experience a compressive pressure due to that impinging radiation, its reflection, and the body's own black body emission. From that it can be shown that the resulting pressure is equal to one third of the total radiant energy per unit volume in the surrounding space. Quantitatively, this can be expressed as P = u/3, where u is the radiant energy density. Solar radiation pressure Solar radiation pressure is due to the sun's radiation at closer distances, thus especially within the solar system. While it acts on all objects, its net effect is generally greater on smaller bodies, since they have a larger ratio of surface area to mass. All spacecraft experience such a pressure, except when they are within the earth's shadow. All stars have a spectral energy distribution that depends on their surface temperature. The distribution is approximately that of black-body radiation. This distribution must be taken into account when calculating the radiation pressure or identifying reflector materials, for instance when optimizing a solar sail. Pressures of absorption and reflection Solar radiation pressure at the earth's distance from the sun may be calculated by dividing the solar constant W (above) by the speed of light c. For an absorbing sheet facing the sun, this is simply P = W/c. This result is in the S.I. unit pascals, equivalent to N/m2 (newtons per square meter). For a sheet at an angle α to the sun, the effective area A of a sheet is reduced by a geometrical factor, resulting in a force in the direction of the sunlight of F = (W/c) A cos α. To find the component of this force normal to the surface, another cosine factor must be applied, resulting in a pressure P on the surface of P = (W/c) cos²α. Note, however, that in order to account for the net effect of solar radiation on a spacecraft, for instance, one would need to consider the total force (in the direction away from the sun) given by the preceding equation, rather than just the component normal to the surface that we identify as "pressure". The solar constant is defined for the sun's radiation at the distance to the earth, also known as one astronomical unit (AU). Consequently, at a distance of R astronomical units (R thus being dimensionless), applying the inverse square law, we would find for the absorbing sheet at normal incidence P = (W/c)/R². Finally, considering not an absorbing but a perfectly reflecting surface, the pressure is doubled due to the reflected wave, resulting in P = 2(W/c)/R². Note that unlike the case of an absorbing material, the resulting force on a reflecting body is given exactly by this pressure acting normal to the surface, with the tangential forces from the incident and reflecting waves canceling each other. In practice, materials are neither totally reflecting nor totally absorbing, so the resulting force will be a weighted average of the forces calculated using these formulae. Solar radiation pressure on a perfect reflector at normal incidence (α = 0):
|Distance from sun||Radiation pressure in μPa (μN/m2)|
|0.20 AU||227|
|0.39 AU (Mercury)||60.6|
|0.72 AU (Venus)||17.4|
|1.00 AU (Earth)||9.08|
|1.52 AU (Mars)||3.91|
|3.00 AU (typical asteroid)||1.01|
|5.20 AU (Jupiter)||0.34|
Radiation pressure perturbations Solar radiation pressure is a source of orbital perturbations. It significantly affects the orbits and trajectories of small bodies, including all spacecraft. Solar radiation pressure affects bodies throughout much of the Solar System.
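As a quick numerical check of the table above, the following short Python sketch (illustrative only; it assumes the commonly quoted solar constant of about 1361 W/m², which is not restated in this excerpt) evaluates the perfect-reflector pressure 2W/(cR²) at a few of the tabulated distances.

```python
# Quick check of the table values: radiation pressure on a perfect reflector
# at normal incidence, P = 2 W / (c R^2), with R in astronomical units.
# Assumes a solar constant of ~1361 W/m^2.

SOLAR_CONSTANT = 1361.0         # W/m^2 at 1 AU (assumed)
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def reflector_pressure_micropascal(distance_au):
    """Radiation pressure in micropascals on a perfectly reflecting, sun-facing sheet."""
    return 2 * SOLAR_CONSTANT / (SPEED_OF_LIGHT * distance_au ** 2) * 1e6

for label, r in [("Mercury", 0.39), ("Venus", 0.72), ("Earth", 1.00),
                 ("Mars", 1.52), ("Jupiter", 5.20)]:
    print(f"{label:8s} {r:.2f} AU: {reflector_pressure_micropascal(r):6.2f} uPa")
# The output is close to the tabulated values, e.g. ~9.08 uPa at 1 AU and ~3.9 uPa at Mars
# (small differences come from rounding the planetary distances).
```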
Small bodies are more affected than large because of their lower mass relative to their surface area. Spacecraft are affected along with natural bodies (comets, asteroids, dust grains, gas molecules). The radiation pressure results in forces and torques on the bodies that can change their translational and rotational motions. Translational changes affect the orbits of the bodies. Rotational rates may increase or decrease. Loosely aggregated bodies may break apart under high rotation rates. Dust grains can either leave the Solar System or spiral into the Sun. A whole body is typically composed of numerous surfaces that have different orientations on the body. The facets may be flat or curved. They will have different areas. They may have optical properties differing from other aspects. At any particular time, some facets will be exposed to the Sun and some will be in shadow. Each surface exposed to the Sun will be reflecting, absorbing, and emitting radiation. Facets in shadow will be emitting radiation. The summation of pressures across all of the facets will define the net force and torque on the body. These can be calculated using the equations in the preceding sections. The Yarkovsky effect affects the translation of a small body. It results from a face leaving solar exposure being at a higher temperature than a face approaching solar exposure. The radiation emitted from the warmer face will be more intense than that of the opposite face, resulting in a net force on the body that will affect its motion. The YORP effect is a collection of effects expanding upon the earlier concept of the Yarkovsky effect, but of a similar nature. It affects the spin properties of bodies. The Poynting–Robertson effect applies to grain-size particles. From the perspective of a grain of dust circling the Sun, the Sun's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. (The angle of aberration is tiny since the radiation is moving at the speed of light while the dust grain is moving many orders of magnitude slower than that.) The result is a gradual spiral of dust grains into the Sun. Over long periods of time, this effect cleans out much of the dust in the Solar System. While rather small in comparison to other forces, the radiation pressure force is inexorable. Over long periods of time, the net effect of the force is substantial. Such feeble pressures can produce marked effects upon minute particles like gas ions and electrons, and are essential in the theory of electron emission from the Sun, of cometary material, and so on. Because the ratio of surface area to volume (and thus mass) increases with decreasing particle size, dusty (micrometre-size) particles are susceptible to radiation pressure even in the outer solar system. For example, the evolution of the outer rings of Saturn is significantly influenced by radiation pressure. As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, “radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. 
The backward acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief.” Solar sailing, an experimental method of spacecraft propulsion, uses radiation pressure from the Sun as a motive force. The idea of interplanetary travel by light was mentioned by Jules Verne in From the Earth to the Moon. A sail reflects about 90% of the incident radiation. The 10% that is absorbed is radiated away from both surfaces, with the proportion emitted from the unlit surface depending on the thermal conductivity of the sail. A sail has curvature, surface irregularities, and other minor factors that affect its performance. Cosmic effects of radiation pressure Radiation pressure has had a major effect on the development of the cosmos, from the birth of the universe to ongoing formation of stars and shaping of clouds of dust and gasses on a wide range of scales. The early universe Galaxy formation and evolution The process of galaxy formation and evolution began early in the history of the cosmos. Observations of the early universe strongly suggest that objects grew from bottom-up (i.e., smaller objects merging to form larger ones). As stars are thereby formed and become sources of electromagnetic radiation, radiation pressure from the stars becomes a factor in the dynamics of remaining circumstellar material. Clouds of dust and gases The gravitational compression of clouds of dust and gases is strongly influenced by radiation pressure, especially when the condensations lead to star births. The larger young stars forming within the compressed clouds emit intense levels of radiation that shift the clouds, causing either dispersion or condensations in nearby regions, which influences birth rates in those nearby regions. Clusters of stars Stars predominantly form in regions of large clouds of dust and gases, giving rise to star clusters. Radiation pressure from the member stars eventually disperses the clouds, which can have a profound effect on the evolution of the cluster. Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal. Star formation is the process by which dense regions within molecular clouds in interstellar space collapse to form stars. As a branch of astronomy, star formation includes the study of the interstellar medium and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function. Stellar planetary systems Planetary systems are generally believed to form as part of the same process that results in star formation. A protoplanetary disk forms by gravitational collapse of a molecular cloud, called a solar nebula, and then evolves into a planetary system by collisions and gravitational capture. 
Radiation pressure can clear a region in the immediate vicinity of the star. As the formation process continues, radiation pressure continues to play a role in affecting the distribution of matter. In particular, dust and grains can spiral into the star or escape the stellar system under the action of radiation pressure. In stellar interiors the temperatures are very high. Stellar models predict a temperature of 15 MK in the center of the Sun, and at the cores of supergiant stars the temperature may exceed 1 GK. As the radiation pressure scales as the fourth power of the temperature, it becomes important at these high temperatures. In the Sun, radiation pressure is still quite small when compared to the gas pressure. In the heaviest non-degenerate stars, radiation pressure is the dominant pressure component. Solar radiation pressure strongly affects comet tails. Solar heating causes gases to be released from the comet nucleus, which also carry away dust grains. Radiation pressure and solar wind then drive the dust and gases away from the Sun's direction. The gases form a generally straight tail, while slower moving dust particles create a broader, curving tail. Laser applications of radiation pressure Laser cooling is applied to cooling materials very close to absolute zero. Atoms traveling towards a laser light source perceive a doppler effect tuned to the absorption frequency of the target element. The radiation pressure on the atom slows movement in a particular direction until the Doppler effect moves out of the frequency range of the element, causing an overall cooling effect. Large lasers operating in space have been suggested as a means of propelling sail craft in beam-powered propulsion. The reflection of a laser pulse from the surface of an elastic solid gives rise to various types of elastic waves that propagate inside the solid. The weakest waves are generally those that are generated by the radiation pressure acting during the reflection of the light. Recently, such light-pressure-induced elastic waves were observed inside an ultrahigh-reflectivity dielectric mirror. These waves are the most basic fingerprint of a light-solid matter interaction on the macroscopic scale. - Stellar Atmospheres, D. Mihalas (1978), Second edition, W H Freeman & Co - Eddington, A. S., & Eddington, A. S. (1988). The internal constitution of the stars. Cambridge University Press. - Chandrasekhar, S. (2013). Radiative transfer. Courier Corporation. - Eugene Hecht, "Optics", 4th edition - Johannes Kepler (1619). De Cometis Libelli Tres. - P. Lebedev, 1901, "Untersuchungen über die Druckkräfte des Lichtes", Annalen der Physik, 1901 - Nichols, E.F & Hull, G.F. (1903) The Pressure due to Radiation, The Astrophysical Journal,Vol.17 No.5, p.315-351 - Wright, Jerome L. (1992), Space Sailing, Gordon and Breach Science Publishers - Shankar R., Principles of Quantum Mechanics, 2nd edition. - Carroll, Bradley W. & Dale A. Ostlie, An Introduction to Modern Astrophysics, 2nd edition. - Jackson, John David, (1999) Classical Electrodynamics. - Kardar, Mehran. "Statistical Physics of Particles". - Planck's law - Kopp, G.; Lean, J. L. (2011). "A new, lower value of total solar irradiance: Evidence and climate significance". Geophysical Research Letters 38. - Georgevic, R. M. (1973) "The Solar Radiation Pressure Forces and Torques Model", The Journal of the Astronautical Sciences, Vol. 27, No. 1, Jan–Feb. First known publication describing how solar radiation pressure creates forces and torques that affect spacecraft. 
- Einstein, A. (1909). On the development of our views concerning the nature and constitution of radiation. Translated in: The Collected Papers of Albert Einstein, vol. 2 (Princeton University Press, Princeton, 1989). Princeton, NJ: Princeton University Press. p. 391. - Dale A. Ostlie and Bradley W. Carroll, An Introduction to Modern Astrophysics (2nd edition), page 341, Pearson, San Francisco, 2007 - T. Požar and J. Možina (2013), Measurement of elastic waves induced by the reflection of light. Physical Review Letters 111 (18), 185501
A DNA microarray (also commonly known as DNA chip or biochip) is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. Each DNA spot contains picomoles (10−12 moles) of a specific DNA sequence, known as probes (or reporters or oligos). These can be a short section of a gene or other DNA element that are used to hybridize a cDNA or cRNA (also called anti-sense RNA) sample (called target) under high-stringency conditions. Probe-target hybridization is usually detected and quantified by detection of fluorophore-, silver-, or chemiluminescence-labeled targets to determine relative abundance of nucleic acid sequences in the target. The original nucleic acid arrays were macro arrays approximately 9 cm × 12 cm and the first computerized image based analysis was published in 1981. It was invented by Patrick O. Brown. The core principle behind microarrays is hybridization between two DNA strands, the property of complementary nucleic acid sequences to specifically pair with each other by forming hydrogen bonds between complementary nucleotide base pairs. A high number of complementary base pairs in a nucleotide sequence means tighter non-covalent bonding between the two strands. After washing off non-specific bonding sequences, only strongly paired strands will remain hybridized. Fluorescently labeled target sequences that bind to a probe sequence generate a signal that depends on the hybridization conditions (such as temperature), and washing after hybridization. Total strength of the signal, from a spot (feature), depends upon the amount of target sample binding to the probes present on that spot. Microarrays use relative quantitation in which the intensity of a feature is compared to the intensity of the same feature under a different condition, and the identity of the feature is known by its position. Uses and types Many types of arrays exist and the broadest distinction is whether they are spatially arranged on a surface or on coded beads: - The traditional solid-phase array is a collection of orderly microscopic "spots", called features, each with thousands of identical and specific probes attached to a solid surface, such as glass, plastic or silicon biochip (commonly known as a genome chip, DNA chip or gene array). Thousands of these features can be placed in known locations on a single DNA microarray. - The alternative bead array is a collection of microscopic polystyrene beads, each with a specific probe and a ratio of two or more dyes, which do not interfere with the fluorescent dyes used on the target sequence. DNA microarrays can be used to detect DNA (as in comparative genomic hybridization), or detect RNA (most commonly as cDNA after reverse transcription) that may or may not be translated into proteins. The process of measuring gene expression via cDNA is called expression analysis or expression profiling. |Application or technology||Synopsis| |Gene expression profiling||In an mRNA or gene expression profiling experiment the expression levels of thousands of genes are simultaneously monitored to study the effects of certain treatments, diseases, and developmental stages on gene expression. 
For example, microarray-based gene expression profiling can be used to identify genes whose expression is changed in response to pathogens or other organisms by comparing gene expression in infected to that in uninfected cells or tissues.| |Comparative genomic hybridization||Assessing genome content in different cells or closely related organisms, as originally described by Patrick Brown, Jonathan Pollack, Ash Alizadeh and colleagues at Stanford.| |GeneID||Small microarrays to check IDs of organisms in food and feed (like GMO ), mycoplasms in cell culture, or pathogens for disease detection, mostly combining PCR and microarray technology.| |Chromatin immunoprecipitation on Chip||DNA sequences bound to a particular protein can be isolated by immunoprecipitating that protein (ChIP), these fragments can be then hybridized to a microarray (such as a tiling array) allowing the determination of protein binding site occupancy throughout the genome. Example protein to immunoprecipitate are histone modifications (H3K27me3, H3K4me2, H3K9me3, etc.), Polycomb-group protein (PRC2:Suz12, PRC1:YY1) and trithorax-group protein (Ash1) to study the epigenetic landscape or RNA Polymerase II to study the transcription landscape.| |DamID||Analogously to ChIP, genomic regions bound by a protein of interest can be isolated and used to probe a microarray to determine binding site occupancy. Unlike ChIP, DamID does not require antibodies but makes use of adenine methylation near the protein's binding sites to selectively amplify those regions, introduced by expressing minute amounts of protein of interest fused to bacterial DNA adenine methyltransferase.| |SNP detection||Identifying single nucleotide polymorphism among alleles within or between populations. Several applications of microarrays make use of SNP detection, including genotyping, forensic analysis, measuring predisposition to disease, identifying drug-candidates, evaluating germline mutations in individuals or somatic mutations in cancers, assessing loss of heterozygosity, or genetic linkage analysis.| |Alternative splicing detection||An exon junction array design uses probes specific to the expected or potential splice sites of predicted exons for a gene. It is of intermediate density, or coverage, to a typical gene expression array (with 1–3 probes per gene) and a genomic tiling array (with hundreds or thousands of probes per gene). It is used to assay the expression of alternative splice forms of a gene. Exon arrays have a different design, employing probes designed to detect each individual exon for known or predicted genes, and can be used for detecting different splicing isoforms.| |Fusion genes microarray||A Fusion gene microarray can detect fusion transcripts, e.g. from cancer specimens. The principle behind this is building on the alternative splicing microarrays. The oligo design strategy enables combined measurements of chimeric transcript junctions with exon-wise measurements of individual fusion partners.| |Tiling array||Genome tiling arrays consist of overlapping probes designed to densely represent a genomic region of interest, sometimes as large as an entire human chromosome. 
The purpose is to empirically detect expression of transcripts or alternatively spliced forms which may not have been previously known or predicted.| |Double-stranded B-DNA microarrays||Right-handed double-stranded B-DNA microarrays can be used to characterize novel drugs and biologicals that can be employed to bind specific regions of immobilized, intact, double-stranded DNA. This approach can be used to inhibit gene expression. They also allow for characterization of their structure under different environmental conditions.| |Double-stranded Z-DNA microarrays||Left-handed double-stranded Z-DNA microarrays can be used to identify short sequences of the alternative Z-DNA structure located within longer stretches of right-handed B-DNA genes (e.g., transcriptional enhancement, recombination, RNA editing). The microarrays also allow for characterization of their structure under different environmental conditions.| |Multi-stranded DNA microarrays (triplex-DNA microarrays and quadruplex-DNA microarrays)||Multi-stranded DNA and RNA microarrays can be used to identify novel drugs that bind to these multi-stranded nucleic acid sequences. This approach can be used to discover new drugs and biologicals that have the ability to inhibit gene expression. These microarrays also allow for characterization of their structure under different environmental conditions.| Microarrays can be manufactured in different ways, depending on the number of probes under examination, costs, customization requirements, and the type of scientific question being asked. Arrays from commercial vendors may have as few as 10 probes or as many as 5 million or more micrometre-scale probes. Spotted vs. in situ synthesised arrays Microarrays can be fabricated using a variety of technologies, including printing with fine-pointed pins onto glass slides, photolithography using pre-made masks, photolithography using dynamic micromirror devices, ink-jet printing, or electrochemistry on microelectrode arrays. In spotted microarrays, the probes are oligonucleotides, cDNA or small fragments of PCR products that correspond to mRNAs. The probes are synthesized prior to deposition on the array surface and are then "spotted" onto glass. A common approach utilizes an array of fine pins or needles controlled by a robotic arm that is dipped into wells containing DNA probes and then depositing each probe at designated locations on the array surface. The resulting "grid" of probes represents the nucleic acid profiles of the prepared probes and is ready to receive complementary cDNA or cRNA "targets" derived from experimental or clinical samples. This technique is used by research scientists around the world to produce "in-house" printed microarrays from their own labs. These arrays may be easily customized for each experiment, because researchers can choose the probes and printing locations on the arrays, synthesize the probes in their own lab (or collaborating facility), and spot the arrays. They can then generate their own labeled samples for hybridization, hybridize the samples to the array, and finally scan the arrays with their own equipment. This provides a relatively low-cost microarray that may be customized for each study, and avoids the costs of purchasing often more expensive commercial arrays that may represent vast numbers of genes that are not of interest to the investigator. 
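The "grid" layout described above can be pictured with a tiny sketch; the probe names, grid width, and coordinates below are hypothetical, chosen only to show how a feature's identity follows from its printed position.

```python
# Toy sketch (hypothetical layout, not a real printer format): assign
# probes to known (row, column) positions on a slide, so that each
# feature's identity is known from its location on the array.
probes = ["TP53", "GAPDH", "MYC", "ACTB", "EGFR", "BRCA1"]  # example probe names

n_cols = 3
layout = {}
for i, probe in enumerate(probes):
    row, col = divmod(i, n_cols)      # fill the grid row by row
    layout[(row, col)] = probe

for (row, col), probe in sorted(layout.items()):
    print(f"spot ({row}, {col}): {probe}")
```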
Publications exist which indicate in-house spotted microarrays may not provide the same level of sensitivity compared to commercial oligonucleotide arrays, possibly owing to the small batch sizes and reduced printing efficiencies when compared to industrial manufactures of oligo arrays. In oligonucleotide microarrays, the probes are short sequences designed to match parts of the sequence of known or predicted open reading frames. Although oligonucleotide probes are often used in "spotted" microarrays, the term "oligonucleotide array" most often refers to a specific technique of manufacturing. Oligonucleotide arrays are produced by printing short oligonucleotide sequences designed to represent a single gene or family of gene splice-variants by synthesizing this sequence directly onto the array surface instead of depositing intact sequences. Sequences may be longer (60-mer probes such as the Agilent design) or shorter (25-mer probes produced by Affymetrix) depending on the desired purpose; longer probes are more specific to individual target genes, shorter probes may be spotted in higher density across the array and are cheaper to manufacture. One technique used to produce oligonucleotide arrays include photolithographic synthesis (Affymetrix) on a silica substrate where light and light-sensitive masking agents are used to "build" a sequence one nucleotide at a time across the entire array. Each applicable probe is selectively "unmasked" prior to bathing the array in a solution of a single nucleotide, then a masking reaction takes place and the next set of probes are unmasked in preparation for a different nucleotide exposure. After many repetitions, the sequences of every probe become fully constructed. More recently, Maskless Array Synthesis from NimbleGen Systems has combined flexibility with large numbers of probes. Two-channel vs. one-channel detection Two-color microarrays or two-channel microarrays are typically hybridized with cDNA prepared from two samples to be compared (e.g. diseased tissue versus healthy tissue) and that are labeled with two different fluorophores. Fluorescent dyes commonly used for cDNA labeling include Cy3, which has a fluorescence emission wavelength of 570 nm (corresponding to the green part of the light spectrum), and Cy5 with a fluorescence emission wavelength of 670 nm (corresponding to the red part of the light spectrum). The two Cy-labeled cDNA samples are mixed and hybridized to a single microarray that is then scanned in a microarray scanner to visualize fluorescence of the two fluorophores after excitation with a laser beam of a defined wavelength. Relative intensities of each fluorophore may then be used in ratio-based analysis to identify up-regulated and down-regulated genes. Oligonucleotide microarrays often carry control probes designed to hybridize with RNA spike-ins. The degree of hybridization between the spike-ins and the control probes is used to normalize the hybridization measurements for the target probes. Although absolute levels of gene expression may be determined in the two-color array in rare instances, the relative differences in expression among different spots within a sample and between samples is the preferred method of data analysis for the two-color system. Examples of providers for such microarrays includes Agilent with their Dual-Mode platform, Eppendorf with their DualChip platform for colorimetric Silverquant labeling, and TeleChem International with Arrayit. 
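As a rough illustration of the ratio-based analysis described above, the following sketch computes log2(Cy5/Cy3) ratios for a handful of spots; the intensity values and the two-fold cutoff are made up for the example, not taken from any particular experiment.

```python
# Illustrative sketch (hypothetical data): ratio-based analysis of a
# two-color array. cy3 and cy5 are background-corrected spot intensities
# for the control and test samples, respectively.
import numpy as np

cy3 = np.array([1200.0, 850.0, 400.0, 2300.0])  # control channel (e.g. healthy tissue)
cy5 = np.array([2400.0, 210.0, 410.0, 2250.0])  # test channel (e.g. diseased tissue)

log_ratio = np.log2(cy5 / cy3)                  # > 0 means higher in the test sample

threshold = 1.0                                  # |log2 ratio| >= 1, i.e. a two-fold change
for i, m in enumerate(log_ratio):
    status = "up" if m >= threshold else "down" if m <= -threshold else "unchanged"
    print(f"spot {i}: log2 ratio = {m:+.2f} ({status})")
```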
In single-channel microarrays or one-color microarrays, the arrays provide intensity data for each probe or probe set indicating a relative level of hybridization with the labeled target. However, they do not truly indicate abundance levels of a gene but rather relative abundance when compared to other samples or conditions when processed in the same experiment. Each RNA molecule encounters protocol and batch-specific bias during amplification, labeling, and hybridization phases of the experiment making comparisons between genes for the same microarray uninformative. The comparison of two conditions for the same gene requires two separate single-dye hybridizations. Several popular single-channel systems are the Affymetrix "Gene Chip", Illumina "Bead Chip", Agilent single-channel arrays, the Applied Microarrays "CodeLink" arrays, and the Eppendorf "DualChip & Silverquant". One strength of the single-dye system lies in the fact that an aberrant sample cannot affect the raw data derived from other samples, because each array chip is exposed to only one sample (as opposed to a two-color system in which a single low-quality sample may drastically impinge on overall data precision even if the other sample was of high quality). Another benefit is that data are more easily compared to arrays from different experiments as long as batch effects have been accounted for. One channel microarray may be the only choice in some situations. Suppose samples need to be compared: then the number of experiments required using the two channel arrays quickly becomes unfeasible, unless a sample is used as a reference. |number of samples||one-channel microarray||two channel microarray|| two channel microarray (with reference) A typical protocol This is an example of a DNA microarray experiment which includes details for a particular case to better explain DNA microarray experiments, while listing modifications for RNA or other alternative experiments. - The two samples to be compared (pairwise comparison) are grown/acquired. In this example treated sample (case) and untreated sample (control). - The nucleic acid of interest is purified: this can be RNA for expression profiling, DNA for comparative hybridization, or DNA/RNA bound to a particular protein which is immunoprecipitated (ChIP-on-chip) for epigenetic or regulation studies. In this example total RNA is isolated (both nuclear and cytoplasmic) by Guanidinium thiocyanate-phenol-chloroform extraction (e.g. Trizol) which isolates most RNA (whereas column methods have a cut off of 200 nucleotides) and if done correctly has a better purity. - The purified RNA is analysed for quality (by capillary electrophoresis) and quantity (for example, by using a NanoDrop or NanoPhotometer spectrometer). If the material is of acceptable quality and sufficient quantity is present (e.g., >1μg, although the required amount varies by microarray platform), the experiment can proceed. - The labeled product is generated via reverse transcription and followed by an optional PCR amplification. The RNA is reverse transcribed with either polyT primers (which amplify only mRNA) or random primers (which amplify all RNA, most of which is rRNA). miRNA microarrays ligate an oligonucleotide to the purified small RNA (isolated with a fractionator), which is then reverse transcribed and amplified. - The label is added either during the reverse transcription step, or following amplification if it is performed. The sense labeling is dependent on the microarray; e.g. 
if the label is added with the RT mix, the cDNA is antisense and the microarray probe is sense, except in the case of negative controls. - The label is typically fluorescent; only one machine uses radiolabels. - The labeling can be direct (not used) or indirect (requires a coupling stage). For two-channel arrays, the coupling stage occurs before hybridization, using aminoallyl uridine triphosphate (aminoallyl-UTP, or aaUTP) and NHS amino-reactive dyes (such as cyanine dyes); for single-channel arrays, the coupling stage occurs after hybridization, using biotin and labeled streptavidin. The modified nucleotides (usually in a ratio of 1 aaUTP: 4 TTP (thymidine triphosphate)) are added enzymatically in a low ratio to normal nucleotides, typically resulting in 1 every 60 bases. The aaDNA is then purified with a column (using a phosphate buffer solution, as Tris contains amine groups). The aminoallyl group is an amine group on a long linker attached to the nucleobase, which reacts with a reactive dye. - A form of replicate known as a dye flip can be performed to control for dye artifacts in two-channel experiments; for a dye flip, a second slide is used, with the labels swapped (the sample that was labeled with Cy3 in the first slide is labeled with Cy5, and vice versa). In this example, aminoallyl-UTP is present in the reverse-transcribed mixture. - The labeled samples are then mixed with a proprietary hybridization solution which can consist of SDS, SSC, dextran sulfate, a blocking agent (such as Cot-1 DNA, salmon sperm DNA, calf thymus DNA, PolyA, or PolyT), Denhardt's solution, or formamine. - The mixture is denatured and added to the pinholes of the microarray. The holes are sealed and the microarray hybridized, either in a hyb oven, where the microarray is mixed by rotation, or in a mixer, where the microarray is mixed by alternating pressure at the pinholes. - After an overnight hybridization, all nonspecific binding is washed off (SDS and SSC). - The microarray is dried and scanned by a machine that uses a laser to excite the dye and measures the emission levels with a detector. - The image is gridded with a template and the intensities of each feature (composed of several pixels) is quantified. - The raw data is normalized; the simplest normalization method is to subtract background intensity and scale so that the total intensities of the features of the two channels are equal, or to use the intensity of a reference gene to calculate the t-value for all of the intensities. More sophisticated methods include z-ratio, loess and lowess regression and RMA (robust multichip analysis) for Affymetrix chips (single-channel, silicon chip, in situ synthesized short oligonucleotides). Microarrays and bioinformatics The advent of inexpensive microarray experiments created several specific bioinformatics challenges: the multiple levels of replication in experimental design (Experimental design); the number of platforms and independent groups and data format (Standardization); the statistical treatment of the data (Data analysis); mapping each probe to the mRNA transcript that it measures (Annotation); the sheer volume of data and the ability to share it (Data warehousing). Due to the biological complexity of gene expression, the considerations of experimental design that are discussed in the expression profiling article are of critical importance if statistically and biologically valid conclusions are to be drawn from the data. 
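The simplest normalization mentioned in the last step can be sketched in a few lines; the intensities and background values below are invented for illustration, and real pipelines would typically use loess/lowess or RMA instead.

```python
# Minimal sketch (assumed example values): subtract local background,
# then scale one channel so the two channels have equal total intensity,
# and take log2 ratios of the normalized values.
import numpy as np

fg_ch1 = np.array([1500.0, 900.0, 520.0, 2600.0])  # foreground intensities, channel 1
fg_ch2 = np.array([2100.0, 300.0, 650.0, 2500.0])  # foreground intensities, channel 2
bg_ch1 = np.array([100.0, 80.0, 120.0, 90.0])       # local background, channel 1
bg_ch2 = np.array([110.0, 95.0, 100.0, 85.0])       # local background, channel 2

ch1 = np.clip(fg_ch1 - bg_ch1, 1.0, None)            # background subtraction (floor at 1 so logs are defined)
ch2 = np.clip(fg_ch2 - bg_ch2, 1.0, None)

ch2_scaled = ch2 * ch1.sum() / ch2.sum()             # equalize total intensity between the channels

log_ratios = np.log2(ch2_scaled / ch1)
print(np.round(log_ratios, 2))
```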
There are three main elements to consider when designing a microarray experiment. First, replication of the biological samples is essential for drawing conclusions from the experiment. Second, technical replicates (two RNA samples obtained from each experimental unit) help to ensure precision and allow for testing differences within treatment groups. The biological replicates include independent RNA extractions and technical replicates may be two aliquots of the same extraction. Third, spots of each cDNA clone or oligonucleotide are present as replicates (at least duplicates) on the microarray slide, to provide a measure of technical precision in each hybridization. It is critical that information about the sample preparation and handling is discussed, in order to help identify the independent units in the experiment and to avoid inflated estimates of statistical significance. Microarray data is difficult to exchange due to the lack of standardization in platform fabrication, assay protocols, and analysis methods. This presents an interoperability problem in bioinformatics. Various grass-roots open-source projects are trying to ease the exchange and analysis of data produced with non-proprietary chips: For example, the "Minimum Information About a Microarray Experiment" (MIAME) checklist helps define the level of detail that should exist and is being adopted by many journals as a requirement for the submission of papers incorporating microarray results. But MIAME does not describe the format for the information, so while many formats can support the MIAME requirements, as of 2007[update] no format permits verification of complete semantic compliance. The "MicroArray Quality Control (MAQC) Project" is being conducted by the US Food and Drug Administration (FDA) to develop standards and quality control metrics which will eventually allow the use of MicroArray data in drug discovery, clinical practice and regulatory decision-making. The MGED Society has developed standards for the representation of gene expression experiment results and relevant annotations. Microarray data sets are commonly very large, and analytical precision is influenced by a number of variables. Statistical challenges include taking into account effects of background noise and appropriate normalization of the data. Normalization methods may be suited to specific platforms and, in the case of commercial platforms, the analysis may be proprietary. Algorithms that affect statistical analysis include: - Image analysis: gridding, spot recognition of the scanned image (segmentation algorithm), removal or marking of poor-quality and low-intensity features (called flagging). - Data processing: background subtraction (based on global or local background), determination of spot intensities and intensity ratios, visualisation of data (e.g. see MA plot), and log-transformation of ratios, global or local normalization of intensity ratios, and segmentation into different copy number regions using step detection algorithms. - Class discovery analysis: This analytic approach, sometimes called unsupervised classification or knowledge discovery, tries to identify whether microarrays (objects, patients, mice, etc.) or genes cluster together in groups. Identifying naturally existing groups of objects (microarrays or genes) which cluster together can enable the discovery of new groups that otherwise were not previously known to exist. 
During knowledge discovery analysis, various unsupervised classification techniques can be employed with DNA microarray data to identify novel clusters (classes) of arrays. This type of approach is not hypothesis-driven, but rather is based on iterative pattern recognition or statistical learning methods to find an "optimal" number of clusters in the data. Examples of unsupervised analysis methods include self-organizing maps, neural gas, k-means cluster analyses, hierarchical cluster analysis, Genomic Signal Processing based clustering and model-based cluster analysis. For some of these methods the user also has to define a distance measure between pairs of objects. Although the Pearson correlation coefficient is usually employed, several other measures have been proposed and evaluated in the literature. The input data used in class discovery analyses are commonly based on lists of genes having high informativeness (low noise) based on low values of the coefficient of variation or high values of Shannon entropy, etc. The determination of the most likely or optimal number of clusters obtained from an unsupervised analysis is called cluster validity. Some commonly used metrics for cluster validity are the silhouette index, Davies-Bouldin index, Dunn's index, or Hubert's statistic. - Class prediction analysis: This approach, called supervised classification, establishes the basis for developing a predictive model into which future unknown test objects can be input in order to predict the most likely class membership of the test objects. Supervised analysis for class prediction involves use of techniques such as linear regression, k-nearest neighbor, learning vector quantization, decision tree analysis, random forests, naive Bayes, logistic regression, kernel regression, artificial neural networks, support vector machines, mixture of experts, and supervised neural gas. In addition, various metaheuristic methods are employed, such as genetic algorithms, covariance matrix self-adaptation, particle swarm optimization, and ant colony optimization. Input data for class prediction are usually based on filtered lists of genes which are predictive of class, determined using classical hypothesis tests (next section), Gini diversity index, or information gain (entropy). - Hypothesis-driven statistical analysis: Statistically significant changes in gene expression are commonly identified using the t-test, ANOVA, Bayesian methods, or Mann–Whitney test methods tailored to microarray data sets, which take into account multiple comparisons or cluster analysis. These methods assess statistical power based on the variation present in the data and the number of experimental replicates, and can help minimize Type I and Type II errors in the analyses. - Dimensional reduction: Analysts often reduce the number of dimensions (genes) prior to data analysis. This may involve linear approaches such as principal components analysis (PCA), or non-linear manifold learning (distance metric learning) using kernel PCA, diffusion maps, Laplacian eigenmaps, local linear embedding, locality preserving projections, and Sammon's mapping. - Network-based methods: Statistical methods that take the underlying structure of gene networks into account, representing either associative or causative interactions or dependencies among gene products. Weighted gene co-expression network analysis is widely used for identifying co-expression modules and intramodular hub genes. Modules may correspond to cell types or pathways. 
Highly connected intramodular hubs best represent their respective modules. Microarray data may require further processing aimed at reducing the dimensionality of the data to aid comprehension and more focused analysis. Other methods permit analysis of data consisting of a low number of biological or technical replicates; for example, the Local Pooled Error (LPE) test pools standard deviations of genes with similar expression levels in an effort to compensate for insufficient replication. The relation between a probe and the mRNA that it is expected to detect is not trivial. Some mRNAs may cross-hybridize probes in the array that are supposed to detect another mRNA. In addition, mRNAs may experience amplification bias that is sequence or molecule-specific. Thirdly, probes that are designed to detect the mRNA of a particular gene may be relying on genomic EST information that is incorrectly associated with that gene. Microarray data was found to be more useful when compared to other similar datasets. The sheer volume of data, specialized formats (such as MIAME), and curation efforts associated with the datasets require specialized databases to store the data. A number of open-source data warehousing solutions, such as InterMine and BioMart, have been created for the specific purpose of integrating diverse biological datasets, and also support analysis. Advances in massively parallel sequencing has led to the development of RNA-Seq technology, that enables a whole transcriptome shotgun approach to characterize and quantify gene expression. Unlike microarrays, which need a reference genome and transcriptome to be available before the microarray itself can be designed, RNA-Seq can also be used for new model organisms whose genome has not been sequenced yet. - An array or slide is a collection of features spatially arranged in a two dimensional grid, arranged in columns and rows. - Block or subarray: a group of spots, typically made in one print round; several subarrays/ blocks form an array. - Case/control: an experimental design paradigm especially suited to the two-colour array system, in which a condition chosen as control (such as healthy tissue or state) is compared to an altered condition (such as a diseased tissue or state). - Channel: the fluorescence output recorded in the scanner for an individual fluorophore and can even be ultraviolet. - Dye flip or dye swap or fluor reversal: reciprocal labelling of DNA targets with the two dyes to account for dye bias in experiments. - Scanner: an instrument used to detect and quantify the intensity of fluorescence of spots on a microarray slide, by selectively exciting fluorophores with a laser and measuring the fluorescence with a filter (optics) photomultiplier system. - Spot or feature: a small area on an array slide that contains picomoles of specific DNA samples. - For other relevant terms see: - Transcriptomics technologies - Microarray analysis techniques - Microarray databases - Cyanine dyes, such as Cy3 and Cy5, are commonly used fluorophores with microarrays - Gene chip analysis - Significance analysis of microarrays - Methylation specific oligonucleotide microarray - Microfluidics or lab-on-chip - Phenotype microarray - Systems biology - Whole genome sequencing - Taub, Floyd (1983). "Laboratory methods: Sequential comparative hybridizations analyzed by computerized image processing can identify and quantitate regulated RNAs". DNA. 2 (4): 309–327. doi:10.1089/dna.1983.2.309. PMID 6198132. 
- Adomas A; Heller G; Olson A; Osborne J; Karlsson M; Nahalkova J; Van Zyl L; Sederoff R; Stenlid J; Finlay R; Asiegbu FO (2008). "Comparative analysis of transcript abundance in Pinus sylvestris after challenge with a saprotrophic, pathogenic or mutualistic fungus". Tree Physiol. 28 (6): 885–897. doi:10.1093/treephys/28.6.885. PMID 18381269. - Pollack JR; Perou CM; Alizadeh AA; Eisen MB; Pergamenschikov A; Williams CF; Jeffrey SS; Botstein D; Brown PO (1999). "Genome-wide analysis of DNA copy-number changes using cDNA microarrays". Nat Genet. 23 (1): 41–46. doi:10.1038/12640. PMID 10471496. S2CID 997032. - Moran G; Stokes C; Thewes S; Hube B; Coleman DC; Sullivan D (2004). "Comparative genomics using Candida albicans DNA microarrays reveals absence and divergence of virulence-associated genes in Candida dubliniensis". Microbiology. 150 (Pt 10): 3363–3382. doi:10.1099/mic.0.27221-0. PMID 15470115. - Hacia JG; Fan JB; Ryder O; Jin L; Edgemon K; Ghandour G; Mayer RA; Sun B; Hsie L; Robbins CM; Brody LC; Wang D; Lander ES; Lipshutz R; Fodor SP; Collins FS (1999). "Determination of ancestral alleles for human single-nucleotide polymorphisms using high-density oligonucleotide arrays". Nat Genet. 22 (2): 164–167. doi:10.1038/9674. PMID 10369258. S2CID 41718227. - Gagna, Claude E.; Lambert, W. Clark (1 May 2009). "Novel multistranded, alternative, plasmid and helical transitional DNA and RNA microarrays: implications for therapeutics". Pharmacogenomics. 10 (5): 895–914. doi:10.2217/pgs.09.27. ISSN 1744-8042. PMID 19450135. - Gagna, Claude E.; Clark Lambert, W. (1 March 2007). "Cell biology, chemogenomics and chemoproteomics - application to drug discovery". Expert Opinion on Drug Discovery. 2 (3): 381–401. doi:10.1517/174604126.96.36.1991. ISSN 1746-0441. PMID 23484648. S2CID 41959328. - Mukherjee, Anirban; Vasquez, Karen M. (1 August 2011). "Triplex technology in studies of DNA damage, DNA repair, and mutagenesis". Biochimie. 93 (8): 1197–1208. doi:10.1016/j.biochi.2011.04.001. ISSN 1638-6183. PMC 3545518. PMID 21501652. - Rhodes, Daniela; Lipps, Hans J. (15 October 2015). "G-quadruplexes and their regulatory roles in biology". Nucleic Acids Research. 43 (18): 8627–8637. doi:10.1093/nar/gkv862. ISSN 1362-4962. PMC 4605312. PMID 26350216. - J Biochem Biophys Methods. 2000 Mar 16;42(3):105-10. DNA-printing: utilization of a standard inkjet printer for the transfer of nucleic acids to solid supports. Goldmann T, Gonzalez JS. - Lausted C; et al. (2004). "POSaM: a fast, flexible, open-source, inkjet oligonucleotide synthesizer and microarrayer". Genome Biology. 5 (8): R58. doi:10.1186/gb-2004-5-8-r58. PMC 507883. PMID 15287980. - Bammler T, Beyer RP; Consortium, Members of the Toxicogenomics Research; Kerr, X; Jing, LX; Lapidus, S; Lasarev, DA; Paules, RS; Li, JL; Phillips, SO (2005). "Standardizing global gene expression analysis between laboratories and across platforms". Nat Methods. 2 (5): 351–356. doi:10.1038/nmeth754. PMID 15846362. S2CID 195368323. - Pease AC; Solas D; Sullivan EJ; Cronin MT; Holmes CP; Fodor SP (1994). "Light-generated oligonucleotide arrays for rapid DNA sequence analysis". PNAS. 91 (11): 5022–5026. doi:10.1073/pnas.91.11.5022. PMC 43922. PMID 8197176. - Nuwaysir EF; Huang W; Albert TJ; Singh J; Nuwaysir K; Pitas A; Richmond T; Gorski T; Berg JP; Ballin J; McCormick M; Norton J; Pollock T; Sumwalt T; Butcher L; Porter D; Molla M; Hall C; Blattner F; Sussman MR; Wallace RL; Cerrina F; Green RD (2002). 
"Gene Expression Analysis Using Oligonucleotide Arrays Produced by Maskless Photolithography". Genome Res. 12 (11): 1749–1755. doi:10.1101/gr.362402. PMC 187555. PMID 12421762. - Shalon D; Smith SJ; Brown PO (1996). "A DNA microarray system for analyzing complex DNA samples using two-color fluorescent probe hybridization". Genome Res. 6 (7): 639–645. doi:10.1101/gr.6.7.639. PMID 8796352. - Tang T; François N; Glatigny A; Agier N; Mucchielli MH; Aggerbeck L; Delacroix H (2007). "Expression ratio evaluation in two-colour microarray experiments is significantly improved by correcting image misalignment". Bioinformatics. 23 (20): 2686–2691. doi:10.1093/bioinformatics/btm399. PMID 17698492. - Shafee, Thomas; Lowe, Rohan (2017). "Eukaryotic and prokaryotic gene structure". WikiJournal of Medicine. 4 (1). doi:10.15347/wjm/2017.002. ISSN 2002-4436. - Churchill, GA (2002). "Fundamentals of experimental design for cDNA microarrays" (PDF). Nature Genetics. supplement. 32: 490–5. doi:10.1038/ng1031. PMID 12454643. S2CID 15412245. Archived from the original (– Scholar search) on 8 May 2005. Retrieved 12 December 2013. - NCTR Center for Toxicoinformatics - MAQC Project - "Prosigna | Prosigna algorithm". prosigna.com. Retrieved 22 June 2017. - Little, M.A.; Jones, N.S. (2011). "Generalized Methods and Solvers for Piecewise Constant Signals: Part I" (PDF). Proceedings of the Royal Society A. 467 (2135): 3088–3114. doi:10.1098/rspa.2010.0671. PMC 3191861. PMID 22003312. - Peterson, Leif E. (2013). Classification Analysis of DNA Microarrays. John Wiley and Sons. ISBN 978-0-470-17081-6. - De Souto M et al. (2008) Clustering cancer gene expression data: a comparative study, BMC Bioinformatics, 9(497). - Istepanian R, Sungoor A, Nebel J-C (2011) Comparative Analysis of Genomic Signal Processing for Microarray data Clustering, IEEE Transactions on NanoBioscience, 10(4): 225-238. - Jaskowiak, Pablo A; Campello, Ricardo JGB; Costa, Ivan G (2014). "On the selection of appropriate distances for gene expression data clustering". BMC Bioinformatics. 15 (Suppl 2): S2. doi:10.1186/1471-2105-15-S2-S2. PMC 4072854. PMID 24564555. - Bolshakova N, Azuaje F (2003) Cluster validation techniques for genome expression data, Signal Processing, Vol. 83, pp. 825–833. - Ben Gal, I.; Shani, A.; Gohr, A.; Grau, J.; Arviv, S.; Shmilovici, A.; Posch, S.; Grosse, I. (2005). "Identification of transcription factor binding sites with variable-order Bayesian networks". Bioinformatics. 21 (11): 2657–2666. doi:10.1093/bioinformatics/bti410. ISSN 1367-4803. PMID 15797905. - Yuk Fai Leung and Duccio Cavalieri, Fundamentals of cDNA microarray data analysis. Trends in Genetics Vol.19 No.11 November 2003. - Priness I.; Maimon O.; Ben-Gal I. (2007). "Evaluation of gene-expression clustering via mutual information distance measure". BMC Bioinformatics. 8 (1): 111. doi:10.1186/1471-2105-8-111. PMC 1858704. PMID 17397530. - Wei C; Li J; Bumgarner RE (2004). "Sample size for detecting differentially expressed genes in microarray experiments". BMC Genomics. 5: 87. doi:10.1186/1471-2164-5-87. PMC 533874. PMID 15533245. - Emmert-Streib, F. & Dehmer, M. (2008). Analysis of Microarray Data A Network-Based Approach. Wiley-VCH. ISBN 978-3-527-31822-3. - Wouters L; Gõhlmann HW; Bijnens L; Kass SU; Molenberghs G; Lewi PJ (2003). "Graphical exploration of gene expression data: a comparative study of three multivariate methods". Biometrics. 59 (4): 1131–1139. CiteSeerX 10.1.1.730.3670. doi:10.1111/j.0006-341X.2003.00130.x. PMID 14969494. 
- Jain N; Thatte J; Braciale T; Ley K; O'Connell M; Lee JK (2003). "Local-pooled-error test for identifying differentially expressed genes with a small number of replicated microarrays". Bioinformatics. 19 (15): 1945–1951. doi:10.1093/bioinformatics/btg264. PMID 14555628. - Barbosa-Morais, N. L.; Dunning, M. J.; Samarajiwa, S. A.; Darot, J. F. J.; Ritchie, M. E.; Lynch, A. G.; Tavare, S. (18 November 2009). "A re-annotation pipeline for Illumina BeadArrays: improving the interpretation of gene expression data". Nucleic Acids Research. 38 (3): e17. doi:10.1093/nar/gkp942. PMC 2817484. PMID 19923232. - Mortazavi, Ali; Brian A Williams; Kenneth McCue; Lorian Schaeffer; Barbara Wold (July 2008). "Mapping and quantifying mammalian transcriptomes by RNA-Seq". Nat Methods. 5 (7): 621–628. doi:10.1038/nmeth.1226. ISSN 1548-7091. PMID 18516045. S2CID 205418589. - Wang, Zhong; Mark Gerstein; Michael Snyder (January 2009). "RNA-Seq: a revolutionary tool for transcriptomics". Nat Rev Genet. 10 (1): 57–63. doi:10.1038/nrg2484. ISSN 1471-0056. PMC 2949280. PMID 19015660. |Library resources about | |Wikimedia Commons has media related to DNA microarrays.| - Gene Expression at Curlie - Micro Scale Products and Services for Biochemistry and Molecular Biology at Curlie - Products and Services for Gene Expression at Curlie - Online Services for Gene Expression Analysis at Curlie - Microarray Animation 1Lec.com - PLoS Biology Primer: Microarray Analysis - Rundown of microarray technology - ArrayMining.net – a free web-server for online microarray analysis - Microarray - How does it work? - PNAS Commentary: Discovery of Principles of Nature from Mathematical Modeling of DNA Microarray Data - DNA microarray virtual experiment
6.1 Introduction The General Quadratic Equation in x and y has the form Ax² + Bxy + Cy² + Dx + Ey + F = 0, where A, B, C, D, E, F are constants. The graphs of these equations are called Conic Sections, or simply Conics. There are three basic distinct figures that result from the intersection of a plane with a double-napped cone: 1.) Parabola 2.) Ellipse 3.) Hyperbola

6.2 Parabolas A parabola is the set of points in a plane that are equidistant from a given point, called the focal point, and a given line that does not contain the focal point, called the directrix. The axis and vertex of a parabola: The axis of the parabola is the line that goes through the focal point and is perpendicular to the directrix. The point of intersection of the axis and the parabola is the vertex. A standard form for the equation of the parabola is derived by superimposing an xy-coordinate system so that the axis of the parabola corresponds with the y-axis, and the vertex of the parabola is located midway between the focal point and the directrix (at the origin). (Figure: parabola with its axis, directrix, focal point, and vertex labeled; not drawn to scale.) If the focal point is located at (0, c), then the directrix has equation y = -c. Note: By the definition of the parabola, the distance between any point (x, y) on the parabola and the focal point is equal to the distance between that point (x, y) and the directrix.

Standard Position Parabolas: A parabola with focal point at (0, c), vertex at (0, 0), and directrix y = -c is said to be in standard position with axis along the y-axis and has equation x² = 4cy. Similarly, a parabola with focal point at (c, 0), vertex at (0, 0), and directrix x = -c is in standard position with axis along the x-axis and has equation y² = 4cx.

Example 1: Find an equation of the parabola with focal point (0, -2) and directrix y = 2. Visualize or draw the given information. Since we see that the focal point has c = -2 and directrix y = 2, the value of the directrix is greater than that of the focal point. When this occurs we know that the parabola opens downward. The focal point lies along the y-axis. Since the vertex is the midpoint of the line segment from the focal point to the directrix, the vertex is the origin. All of this implies that the parabola is in standard position with axis along the y-axis. This tells us which equation to use: x² = 4cy with c = -2, so the equation is x² = -8y.

Ex 2: Find the focal point and directrix of the parabola with the given equation. Solution: Complete the square on the y variable so that we can compare this equation with the equation of a parabola in standard position. This graph is a translation of the equation x = 2y². We know that this equation will open right or left with axis along or parallel to the x-axis. If we compare the parent function for this example, x = 2y² (that is, y² = (1/2)x), to the standard form equation y² = 4cx, we see that 4c = 1/2. Now solve for c: c = 1/8. Since c is positive we know that the graph opens to the right. Focal Point: (1/8, 0) and Directrix: x = -1/8 for the parent parabola; applying the translation found above then gives the focal point and directrix of the given parabola.

Ex 3: Determine the equation of the parabola with focal point at (1, 5) and directrix y = -1. Solution: From the given information we know that the parabola opens upward. The distance between the focal point and directrix is 6 units. This tells us that the vertex is at (1, 2), since the vertex is midway between the two. This also tells us that c = 3. 
Therefore, for the corresponding parabola in standard position, the focal point has coordinates (0, 3). We must use the equation x² = 4cy. Substitute the value of c into the formula and simplify: x² = 12y. Since the vertex is at (1, 2), this tells us we have a vertical shift of 2 units and a horizontal shift of 1 unit to the right, so the equation of the parabola is (x - 1)² = 12(y - 2).

Parabolas have a distinctive reflective property. Any wave (sound, light, etc.) gets reflected from the focus. Note: If a parabola is rotated about its axis, a surface is created called a Paraboloid. Therefore, all of the cross-sections of this paraboloid, using the same axis, share the same focal point.

Ex 4: A satellite dish receiver has its amplifier in line with the edge of the dish. The diameter of the dish at the edge is 1 meter. How deep is the dish? (Figure: cross-section of the dish, 1 meter across.) This figure represents a parabola in standard position with equation x² = 4cy. Because the diameter is 1 meter and half of the satellite lies on each side of the y-axis, we know that the point on the parabola is (1/2, c). Since ½ is on the curve, we can substitute ½ into the formula for x and substitute c in for y: (1/2)² = 4c·c, so 4c² = 1/4 and c = 1/4. Therefore, the distance from the focal point to the vertex (the depth of the dish) is 0.25 m, or 25 centimeters.
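The arithmetic in Ex 4 can be checked with a short script; it simply solves 4c² = (d/2)² for c using the 1-meter diameter from the example.

```python
# Check of Ex 4: the rim point (x, y) = (d/2, c) lies on x^2 = 4*c*y,
# so (d/2)^2 = 4*c^2 and c = d/4.
from math import sqrt

diameter = 1.0                       # dish diameter in meters (from the example)
c = sqrt((diameter / 2) ** 2 / 4)    # solve 4*c**2 = (d/2)**2 for c
print(f"focal distance / dish depth: {c} m")   # prints 0.25 m, i.e. 25 cm
```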
Historically, money was created by either minting coins or printing currency. Today, most money is held electronically as account balances, so money can be created or destroyed simply by changing the information in those accounts. Before 1900, sovereign governments were in control of minting coins or printing currency, often with disastrous results. Today, the supply of money is managed by central banks, not to suit the whims of politicians, but to achieve certain well-established objectives, such as low inflation, maximum growth, or high employment. Money is usually created, or destroyed, electronically as information in accounts held by central banks. The creation or destruction of money is recorded on the central bank's balance sheet. Consequently, to understand the supply of money, you have to know how it is recorded on that balance sheet. A central bank's balance sheet, like other balance sheets, is divided into assets and liabilities. The central bank's balance sheet can be divided further into assets and liabilities as the bankers' bank and assets and liabilities as the government's bank, as shown in the following table:
|Role||Assets||Liabilities|
|Bankers' Bank||Loans||Bank Accounts|
|Government's Bank||Securities, Foreign Exchange Reserves||Currency, Government's Account|
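As a rough illustration of how such a balance sheet stays balanced when money is created, the following sketch uses hypothetical figures and a single made-up operation (an open-market purchase); it is only meant to show that an asset entry and a liability entry change together.

```python
# Hypothetical sketch: the central bank "creates" money by buying a
# security from a commercial bank and crediting that bank's account.
# Assets and liabilities rise by the same amount, so the sheet balances.
assets = {"Loans": 50.0, "Securities": 200.0, "Foreign Exchange Reserves": 80.0}
liabilities = {"Bank Accounts": 150.0, "Currency": 140.0, "Government's Account": 40.0}

def buy_security(amount):
    """Record an open-market purchase: new money appears in a bank's account."""
    assets["Securities"] += amount           # central bank now holds the security
    liabilities["Bank Accounts"] += amount   # seller's reserve account is credited

buy_security(25.0)
print(sum(assets.values()), sum(liabilities.values()))  # totals stay equal: 355.0 355.0
```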
Economics: Cost & Revenue - Concept
One of the most important parts of economics is knowing the revenues and costs and how they relate to increased production. These can both be modeled by functions. These cost and revenue functions can then be manipulated like any other function. The profit is the difference between total revenue and total cost.

There're a lot of applications in Economics for Calculus. Before we get into those applications we have to talk a little bit about some basic terms: cost, revenue and profit. First let's talk about cost. Suppose your business manufactures sneakers, and let x be the number of pairs that your company makes. A cost function tells you how much it costs to produce x pairs of sneakers, so here's the x axis, number of pairs of sneakers, and here's the cost axis, and your function might look something like this. If you produced 0 sneakers you would still have some cost, because there are fixed costs associated with running a business, like rent for your factory or salaries for your workers and so on. So this would be the fixed cost, and then the additional cost is called the variable cost; this is the cost of producing a certain number of sneakers, so it's variable cost plus fixed cost. And this linear model assumes that the cost stays constant, for example the cost of materials does not go up or down the more you buy. A more realistic model takes into account the effect of initially buying in bulk, which decreases the cost of materials a little bit, so the cost curve will curve down a little bit; but then if you buy too much you might create a shortage and the cost might skyrocket. So a cost function can be more complicated, it doesn't need to be just linear, and that's cost.

Let's take a look at revenue. If you get p dollars for each pair of sneakers you sell and you sell x pairs, revenue is going to be the price of the sneakers times the number of pairs you sell. This will be the amount of money that you're bringing into your company, p times x, that's the revenue. The revenue function could look like this, r equals p times x; if the price is constant you'll just have this linear function for your revenue. But it is also possible that the price isn't constant, that if you have too many pairs of sneakers out in the market the price is going to go down. The price point then goes down, so you might have this kind of concave down piece at the end. So you have linear and nonlinear revenue functions.

And profit is revenue minus cost, right, the money that you take into your company minus the money that you spend, that's profit. And if you look at your cost function and your revenue function, the point at which they cross is called the break even point. It's at this point where you're producing just the right number of shoes where cost and revenue are exactly the same. Your profit will be 0, but it's at this point where you can start making a profit. If you produce more than x sub 0 shoes you'll make a profit: your revenue will be above your cost; if you produce less your revenue will be below. So this is a really important point in Economics, the break even point. So we have cost, revenue and profit; these are the important ideas that we're going to be talking about in the next few lessons.
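Here is a small worked example of the break-even idea with made-up numbers (the fixed cost, unit cost, and price below are hypothetical); it finds the quantity x0 where revenue equals cost.

```python
# Illustrative sketch with invented numbers: a linear cost function
# C(x) = fixed + variable*x and linear revenue R(x) = p*x.
# The break-even quantity x0 solves R(x0) = C(x0).
fixed_cost = 10000.0      # rent, salaries, etc. (hypothetical)
variable_cost = 25.0      # cost to make one pair of sneakers (hypothetical)
price = 45.0              # selling price per pair (hypothetical)

def cost(x):
    return fixed_cost + variable_cost * x

def revenue(x):
    return price * x

def profit(x):
    return revenue(x) - cost(x)

x0 = fixed_cost / (price - variable_cost)   # break-even: p*x = fixed + v*x
print(f"break-even at {x0:.0f} pairs")      # 500 pairs
print(profit(400), profit(600))             # loss below x0 (-2000.0), profit above (2000.0)
```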
When NASA’s Transiting Exoplanet Survey Satellite launched into space in April 2018, it did so with a specific goal: to search the universe for new planets. But in recently published research, a team of astronomers at The Ohio State University showed that the survey, nicknamed TESS, could also be used to monitor a particular type of supernova, giving scientists more clues about what causes white dwarf stars to explode–and about the elements those explosions leave behind. “We have known for years that these stars explode, but we have terrible ideas of why they explode,” said Patrick Vallely, lead author of the study and an Ohio State astronomy graduate student. “The big thing here is that we are able to show that this supernova isn’t consistent with having a white dwarf (take mass) directly from a standard star companion and explode into it–the kind of standard idea that had led to people trying to find hydrogen signatures in the first place. That is, because the TESS light curve doesn’t show any evidence of the explosion slamming into the surface of a companion, and because the hydrogen signatures in the SALT spectra don’t evolve like the other elements, we can rule out that standard model.” Their research, detailed in the Monthly Notices of the Royal Astronomical Society, represents the first published findings about a supernova observed using TESS, and add new insights to long-held theories about the elements left behind after a white dwarf star explodes into a supernova. Those elements have long troubled astronomers. A white dwarf explodes into a specific type of supernova, a 1a, after gathering mass from a nearby companion star and growing too big to remain stable, astronomers believe. But if that is true, then the explosion should, astronomers have theorized, leave behind trace elements of hydrogen, a crucial building block of stars and the entire universe. (White dwarf stars, by their nature, have already burned through their own hydrogen and so would not be a source of hydrogen in a supernova.) But until this TESS-based observation of a supernova, astronomers had never seen those hydrogen traces in the explosion’s aftermath: This supernova is the first of its type in which astronomers have measured hydrogen. That hydrogen, first reported by a team from the Observatories of the Carnegie Institution for Science, could change the nature of what astronomers know about white dwarf supernovae. “The most interesting thing about this particular supernova is the hydrogen we saw in its spectra (the elements the explosion leaves behind),” Vallely said. “We’ve been looking for hydrogen and helium in the spectra of this type of supernova for years–those elements help us understand what caused the supernova in the first place.” The hydrogen could mean that the white dwarf consumed a nearby star. In that scenario, the second star would be a normal star in the middle of its lifespan–not a second white dwarf. But when astronomers measured the light curve from this supernova, the curve indicated that the second star was in fact a second white dwarf. So where did the hydrogen come from? Professor of Astronomy Kris Stanek, Vallely’s adviser at Ohio State and a co-author on this paper, said it is possible that the hydrogen came from a companion star–a standard, regular star–but he thinks it is more likely that the hydrogen came from a third star that happened to be near the exploding white dwarf and was consumed in the supernova by chance. 
“We would think that because we see this hydrogen, it means that the white dwarf consumed a second star and exploded, but based on the light curve we saw from this supernova, that might not be true,” Stanek said. “Based on the light curve, the most likely thing that happened, we think, is that the hydrogen might be coming from a third star in the system,” Stanek added. “So the prevailing scenario, at least at Ohio State right now, is that the way to make a Type Ia (pronounced 1-A) supernova is by having two white dwarf stars interacting–colliding even. But also having a third star that provides the hydrogen.” For the Ohio State research, Vallely, Stanek and a team of astronomers from around the world combined data from TESS, a 10-centimeter-diameter telescope, with data from the All-Sky Automated Survey for Supernovae (ASAS-SN for short.) ASAS-SN is led by Ohio State and is made up of small telescopes around the world watching the sky for supernovae in far-away galaxies. TESS, by comparison, is designed to search the skies for planets in our nearby galaxy–and to provide data much more quickly than previous satellite telescopes. That means that the Ohio State team was able to use data from TESS to see what was happening around the supernova in the first moments after it exploded–an unprecedented opportunity. The team combined data from TESS and ASAS-SN with data from the South African Large Telescope to evaluate the elements left behind in the supernova’s wake. They found both hydrogen and helium there, two indicators that the exploding star had somehow consumed a nearby companion star. “What is really cool about these results is, when we combine the data, we can learn new things,” Stanek said. “And this supernova is the first exciting case of that synergy.” The supernova this team observed was a Type Ia, a type of supernova that can occur when two stars orbit one another–what astronomers call a binary system. In some cases of a Type I supernova, one of those stars is a white dwarf. A white dwarf has burned off all its nuclear fuel, leaving behind only a very hot core. (White dwarf temperatures exceed 100,000 degrees Kelvin–nearly 200,000 degrees Fahrenheit.) Unless the star grows bigger by stealing bits of energy and matter from a nearby star, the white dwarf spends the next billion years cooling down before turning into a lump of black carbon. But if the white dwarf and another star are in a binary system, the white dwarf slowly takes mass from the other star until, eventually, the white dwarf explodes into a supernova. Type I supernovae are important for space science–they help astronomers measure distance in space, and help them calculate how quickly the universe is expanding (a discovery so important that it won the Nobel Prize in Physics in 2011.) “These are the most famous type of supernova–they led to dark energy being discovered in the 1990s,” Vallely said. “They are responsible for the existence of so many elements in the universe. But we don’t really understand the physics behind them that well. And that’s what I really like about combining TESS and ASAS-SN here, that we can build up this data and use it to figure out a little more about these supernovae.” Scientists broadly agree that the companion star leads to a white dwarf supernova, but the mechanism of that explosion, and the makeup of the companion star, are less clear. This finding, Stanek said, provides some evidence that the companion star in this type of supernova is likely another white dwarf. 
“We are seeing something new in this data, and it helps our understanding of the Ia supernova phenomenon,” he said. “And we can explain this all in terms of the scenarios we already have–we just need to allow for the third star in this case to be the source of the hydrogen.”
The chipmunks are a Holarctic group of ground squirrels currently allocated to the genus Tamias within the tribe Marmotini (Rodentia: Sciuridae). Cranial, postcranial, and genital morphology, cytogenetics, and genetics each separate them into three distinctive and monophyletic lineages now treated as subgenera. These groups are found in eastern North America, western North America, and Asia, respectively. However, available genetic data (mainly from mitochondrial cytochrome b) demonstrate that the chipmunk lineages diverged early in the evolution of the Marmotini, well before various widely accepted genera of marmotine ground squirrels. Comparisons of genetic distances also indicate that the chipmunk lineages are as or more distinctive from one another as are most ground squirrel genera. Chipmunk fossils were present in the late Oligocene of North America and shortly afterwards in Asia, prior to the main radiation of Holarctic ground squirrels. Because they are coordinate in morphological, genetic, and chronologic terms with ground squirrel genera, the three chipmunk lineages should be recognized as three distinct genera, namely, TamiasIlliger, 1811, EutamiasTrouessart, 1880, and NeotamiasA. H. Howell, 1929. Each is unambiguously diagnosable on the basis of cranial, post-cranial, and external morphology. The chipmunks represent a species-rich radiation of ground squirrels that range over much of northern Eurasia and North America. At present, 25 species are recognized in a single genus, Tamias (Thorington and Hoffmann 2005, IUCN 2014). As a group, chipmunks represent almost 9% of the 285 squirrel species recognized worldwide (Thorington et al. 2012). Their diurnal activity, conspicuousness, abundance, and distribution in areas accessible to scientists in Europe, Asia, and North America all suggest that chipmunks should be thoroughly studied and systematically well known. However, this is true neither concerning species limits nor their phylogenetic relationships. Chipmunks are typically both conservative and variable in terms of cranial and external morphology (Merriam 1886, Allen 1890, Pocock 1923, Patterson 1983), delaying appreciation of their true species richness (Howell 1929, Hall and Kelson 1959). The number of recognized chipmunk species grew with the realization that contiguously allopatric taxa were often strikingly divergent in genital morphology (White 1953a, Callahan 1977, Sutton 1982, Patterson 1984). However, whereas preliminary studies at contact zones between chipmunk species indicated congruence between genital morphology, vocalizations, and pelage (Sutton and Nadler 1974, Patterson and Heaney 1987, Sutton 1987, Gannon and Lawlor 1989, Gannon and Stanley 1991), more sophisticated studies have documented complex cases of hybridization and introgression (Good and Sullivan 2001, Demboski and Sullivan 2003, Good et al. 2003, 2008, Hird and Sullivan 2009, Hird et al. 2010). Regional studies based on both nuclear and mitochondrial sequences have shown varying degrees of past introgression among an array of chipmunk species in western North America (Reid et al. 2012, Sullivan et al. 2014). Assessing the lineages of chipmunks has been equally complicated. Currently, the 25 recognized species are allocated to three subgenera within the genus Tamias: Tamias Illiger, 1811 for the lone species in eastern North America; Eutamias Trouessart, 1880 for the one recognized Eurasian species (but see Obolenskaya et al. 2009); and NeotamiasA. H. 
Howell, 1929 for 23 species from western North America. However, for much of the last century, two genera of chipmunks were recognized (Howell 1929, 1938, Hall and Kelson 1959, Hall 1981): Tamias for chipmunks lacking P3 (among other characters) and Eutamias (including Neotamias as a subgenus) for forms retaining this tooth (but see Ellerman 1940 and Bryant 1945, who treated them as one). Allocating all chipmunks to a single genus became generally accepted following Nadler et al. (1977), Corbet (1978), and Ellis and Maxson (1979) and was codified by global checklists (Corbet and Hill 1980 and later editions, Honacki et al. 1982 and later editions). Although several analyses of mitochondrial and nuclear gene sequences have clarified the interspecific relationships of chipmunks (Piaggio and Spicer 2000, 2001, Reid et al. 2012) and made nomenclatural recommendations to separate them as distinct genera (Piaggio and Spicer 2001), little attention has focused on their higher-level relationships. Collectively, chipmunks belong either in their own tribe Tamiini (Black 1963, McKenna and Bell 1997) or as the sister group to all other Holarctic ground squirrels and marmots within the tribe Marmotini (Thorington et al. 2012) of the squirrel subfamily Xerinae. The remarkable diversity of social systems among ground squirrels and marmots has invited phylogenetic analyses to document their historical relationships (e.g., Harrison et al. 2003), inadvertently uncovering paraphyly in some widely recognized groups (Herron et al. 2004). Recently, the ground squirrel genus Spermophilus was revised to eliminate its paraphyly with respect to both prairie dogs (Cynomys) and marmots (Marmota). Eight former subgenera of Spermophilus were thereby elevated to generic rank on the basis of diagnostic morphology, distinctive craniometrics, and reciprocal monophyly in molecular phylogenetic studies (Helgen et al. 2009). This revised taxonomy of ground squirrels and marmots has since been widely adopted (Thorington et al. 2012, Bradley et al. 2005, Ge et al. 2014, Roskov et al. 2014). Recently, Ge et al. (2014) produced a timetree of Sciuridae suggesting that the earliest divergences among chipmunks are as old as or older than splits among the Holarctic ground squirrels. However, their analysis was calibrated at multiple nodes within the Xerinae, and the constraints on these nodes could have contributed to their result. Specifically, Ge and colleagues constrained the earliest divergence among Holarctic ground squirrels to be 16 Mya but imposed no constraints on the age of chipmunks. Systemic bias in molecular age estimates has been demonstrated in numerous studies (Ho and Jermiin 2004, Jansa et al. 2006, Norris et al. 2015). Deep branches may be underestimated relative to more recent branches (Ho and Larson 2006), especially in situations where fast-evolving genes have become saturated (Hugall et al. 2007, Dornburg et al. 2014). If such systemic bias is present, it may affect both chipmunks and Holarctic ground squirrels equally, but the bias would be corrected only for the Holarctic ground squirrels, thanks to the presence of a calibrating fossil. A similar analysis without age constraints is needed to confirm whether the apparent age of chipmunk lineages results from their underlying genetics. Assigning ranks to supraspecific groups is ultimately a subjective exercise, although some have argued that higher taxa are real natural entities (Humphreys and Barraclough 2014). 
Group names are referents for sets of taxa, and extended arguments have been made for adopting a rankless system (e.g., de Queiroz 2006). However, most biologists regard the taxonomic ranks as a nested system for information storage and retrieval (Mayr 1969, Hawksworth 2010). The value of any hierarchical system depends on coordinate rankings for coordinate entities, and this value is particularly great for sister taxa, where biological comparisons are most meaningful (Benton 2007). Relative to other squirrel genera, how different are the three chipmunk lineages currently regarded as subgenera? In particular, how do evolutionary differences among chipmunks compare to those among the ground squirrels that comprise their sister group? The taxonomic rank accorded to the chipmunk lineages should be comparable to the ranks separating other equivalent groups of Marmotini. To address these questions, we compiled available evidence from morphology, paleontology, and genetics. We analyzed genetic sequence data generated in previously published studies across the Marmotini using both distance metrics and a Bayesian approach to estimate relative divergence times. We also compared these relative estimates to calibrated estimates of divergence times and to the fossil record itself. Materials and methods Genetic data were obtained from already-generated sequences deposited in GenBank (http://www.ncbi.nlm.nih.gov/genbank/). Cytochrome b (Cytb) is the only gene where sequence data are currently available for representatives of the three subgenera of chipmunks. Although sequences have been generated for exon 1 of the interphotoreceptor binding protein (IRBP) in sciurids (DeBry and Sagel 2001, Mercer and Roth 2003, Roth and Mercer 2008), they are available for only two of the chipmunk lineages (Tamias and Eutamias), are otherwise very limited in relevant taxon sampling, and present the additional problem that different studies sequenced different regions of the gene. Due to these limitations, we restricted our analysis of IRBP to observing a handful of representative pairwise comparisons of related genera using exon 1 but analyzed the Cytb data in greater depth. The complete Cytb gene (1140 bp) was analyzed for 65 species of Marmotini and seven outgroup sciurids, whereas IRBP data (1093 bp) were analyzed for eight taxa. GenBank accession numbers are listed in Appendix 1. Sequences were aligned using Clustal W (http://www.clustal.org/clustal2/) (Larkin et al. 2007) and modified by eye. Pairwise genetic distances (both uncorrected p and maximum likelihood) were calculated with PAUP* [version 4.0b8 (http://paup.software.informer.com/download/), Swofford 2003], and statistics were calculated in R (R Development Core Team 2011). The appropriate model of evolution for Cytb (HKY+Γ) and IRBP (TrN+Γ) were determined using the AIC option in jModeltest (http://www.jModeltest.org) (Darriba et al. 2012). Nodal support and relative divergences times were estimated using BEAST [version 1.8.1 (http://beast.bio.ed.ac.uk), Drummond et al. 2012]. Relative ages were estimated instead of absolute ages in order to ensure that the genetic data could be evaluated independently of fossil information. Because the best available fossils to calibrate this analysis are at or near the nodes of interest (see Ge et al. 2014), any molecular dating analysis that employs these fossils a priori may unduly reflect age estimates based on paleontology instead of genetics. 
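A posteriori calibration of this kind amounts to simple rescaling; the sketch below multiplies relative node heights (as reported in the Results) by the 26.3 Mya fossil calibration discussed there. It is only an arithmetic check, not the BEAST procedure itself.

```python
# Sketch of a posteriori calibration: relative node heights (root scaled
# to 1.0) are converted to absolute ages by multiplying by the age
# assigned to the calibrated root node. Heights are the median values
# reported in the Results; 26.3 Mya is the Geringian fossil calibration.
calibration_mya = 26.3

relative_heights = {
    "Neotamias vs. Tamias+Eutamias": 0.678,
    "Tamias vs. Eutamias": 0.515,
    "crown ground squirrels + marmots": 0.566,
}

for node, h in relative_heights.items():
    print(f"{node}: {h * calibration_mya:.1f} Mya")
# prints 17.8, 13.5, and 14.9 Mya, matching the calibrated estimates in the Results
```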
Because our goal in this study was not to determine the best estimates of divergence times but instead to evaluate how levels of divergence compare across the Marmotini, we estimated relative ages based exclusively on genetic data. Trees derived solely from genetic data can then be calibrated a posteriori in order to obtain absolute age estimates without distorting their branching order (Conroy and van Tuinen 2003, Norris et al. 2015). The BEAST analysis was performed with a single calibration at the chipmunk vs. ground squirrel+marmot split using a normal distribution where mean=1.0 and standard deviation=0.0001 in order to yield results that round to 1.0 within three decimal places. The uncorrelated lognormal relaxed molecular clock model was used, the mean substitution rate was not fixed, and the Yule speciation model applied. The program was run for 30,000,000 generations, sampled every 3000 generations with a burn-in of 1000. Runs were visualized using Tracer 1.6 (http://www.mybiosoftware.com/tracer-1-5-analyse-results-bayesian-mcmc-programs-beast-mrbayes.html) (Rambaut et al. 2014), and all effective sample sizes (ESS) were verified to be greater than 300. The BEAST analysis recovered the chipmunks and Holarctic ground squirrels and marmots as a strongly supported monophyletic group (Figure 1). The Asian rock squirrels, Sciurotamias, were not recovered as part of this clade (Supporting Information 1). The focal clade contains two strongly supported subgroups, corresponding to the chipmunks on one hand and the Holarctic ground squirrels and marmots on the other (Figure 1). Results suggest that a Neotamias vs. Tamias+Eutamias split represents the next evolutionary divergence within Marmotini (median height=0.678; 95% highest posterior density [HPD]=0.554–0.803). In fact, the 95% HPD for chipmunks is earlier and non-overlapping with the nodes of the ground squirrels and marmots, excepting only the origin of that clade (median=0.566; 95% HPD=0.464–0.667) and its earliest divergence event, the Ammospermophilus vs. Callospermophilus split (median=0.449; 95% HPD=0.346–0.565). The results also suggest that the younger Tamias vs. Eutamias node (median=0.515; 95% HPD=0.388–0.650) is also older than all but the oldest node within the ground squirrels and marmots. The 95% HPD for this chipmunk divergence overlaps several intergeneric divergences within the ground squirrel lineage, but it does not overlap with that of the Poliocitellus+Ictidomys+ Cynomys+Xerospermophilus clade (median=0.249; 95% HPD=0.201–0.304) or its subclades. There are no examples in the Marmotini where 95% HPD values for within-genus evolution overlap with the splits among the three chipmunk lineages. The earliest within-genus divergence recovered was within Spermophilus (median=0.256; 95% HPD=0.203–0.317). The divergence of chipmunks vs. ground squirrels+ marmots is documented in the fossil record by Nototamias and Miospermophilus, both present in the Geringian (26.3–30.8 Mya; Late Oligocene) of Nebraska (Pratt and Morgan 1989, Korth 1992, Bailey 2004). If 26.3 Mya is applied as an a posteriori calibration, the earliest divergence between chipmunk lineages would be dated to 17.8 Mya (95% HPD=14.6–21.1 Mya), and the Tamias vs. Eutamias split would be dated to 13.5 Mya (95% HPD=10.2–17.1 Mya). In contrast, the earliest divergence in the ground squirrel+marmot clade would be dated to 14.9 Mya (95% HPD=12.2–17.5 Mya), the Poliocitellus vs. 
Ictidomys split (the most recent intergeneric node recovered) would be dated to 5.2 Mya (95% HPD=3.8–6.8 Mya), and the divergence of crown Spermophilus would be dated to 6.7 Mya (95% HPD=5.3–8.3 Mya). Comparisons based on Cytb genetic distances exhibit a similar pattern. Pairwise genetic distances (both HKY+Γ and uncorrected p) between chipmunk lineages were completely non-overlapping with pairwise distances found among congeneric species (Figure 2). Within-genus HKY+Γ distances ranged up to 0.1565 (0.1264 for uncorrected p), whereas pairwise distances between chipmunk lineages ranged from 0.2149 to 0.2749 (uncorrected p=0.1518 to 0.1785 – Table 1). These values fall at the upper end of pairwise distances between genera of ground squirrels and marmots (HKY+Γ distance=0.0992–0.2845; uncorrected p=0.0900–0.1842). The results for the IRBP data (Table 1) yielded a distance for Tamias vs. Eutamias (TrN+Γ=0.0226; uncorrected p=0.0211) somewhat smaller than the mean distance found between genera of ground squirrels and marmots (TrN+Γ=0.0271; uncorrected p=0.0250) but well within the range observed for between-genus comparisons (TrN+Γ=0.0114–0.0398; uncorrected p=0.0110–0.0357).

Table 1.
| Comparison | IRBP uncorrected p | IRBP TrN+Γ | Cytb uncorrected p | Cytb HKY+Γ |
| Tamias – Eutamias | 0.0211 | 0.0226 | 0.1686 | 0.2459 |
| Ammospermophilus – Callospermophilus | 0.0238 | 0.0254 | 0.1649 | 0.2342 |
| Ictidomys – Marmota | 0.0238 | 0.0258 | 0.1387 | 0.1750 |
| Cynomys – Xerospermophilus | 0.0257 | 0.0283 | 0.1018 | 0.1165 |
| Tamias – Dremomys | 0.0732 | 0.09436 | 0.2002 | 0.3682 |
| Marmota – Dremomys | 0.0677 | 0.08523 | 0.2105 | 0.4009 |
| Mean distance between Holarctic ground squirrel genera | 0.0250 | | | |
| Mean distance between chipmunk lineages | – | – | 0.1638 | |
| Mean distance within genera | – | – | 0.0788 | |
Values are either uncorrected p distances or maximum likelihood distances. Ranges are shown in parentheses.

Discussion and conclusions

The BEAST analysis strongly supported the monophyly of the chipmunks and the Holarctic ground squirrels and marmots to the exclusion of all other squirrels tested (Supporting Information 1). Interestingly, our analysis excluded the Asian rock squirrels Sciurotamias, which are typically treated as members of the Marmotini (Thorington et al. 2012). Sciurotamias was recovered well outside this clade, in a poorly supported grouping with tree squirrels and flying squirrels (Supplemental Figure 1), but we cannot reject the possibilities that Sciurotamias is sister to the Marmotini sensu stricto or belongs in a different part of the tree. Ge et al. (2014), for example, suggested an affinity between Asian Sciurotamias and African Protoxerini. Additional taxonomic and genetic sampling will be needed to securely place Sciurotamias. In the following discussion, we use Marmotini sensu stricto to refer to the well-supported clade of chipmunks plus Holarctic ground squirrels and marmots and consider the placement of Sciurotamias incertae sedis (Table 2).

Table 2.
Tribe Marmotini Pocock, 1923 (16 genera, 97 species)
  Chipmunks (3 genera, 25 species)
    Eutamias Trouessart, 1880 (1)a
    Neotamias A. H. Howell, 1929 (23)
    Tamias Illiger, 1811 (1)
  Holarctic ground squirrels and marmots (12 genera, 70 species)
    Ammospermophilus Merriam, 1892 (5)
    Callospermophilus Merriam, 1897 (3)
    Cynomys Rafinesque, 1817 (5)
    Ictidomys J. A. Allen, 1877 (3)
    Marmota Blumenbach, 1779 (15)
    Notocitellus A. H. Howell, 1938 (2)
    Otospermophilus Brandt, 1844 (3)
    Poliocitellus A. H. Howell, 1938 (1)
    Spermophilus F. Cuvier, 1825 (15)
    Urocitellus Obolenskij, 1927 (12)
    Xerospermophilus Merriam, 1892 (4)
  ? Asian rock squirrels (1 genus, 2 species)b
    Sciurotamias Miller, 1901 (2)
a See Obolenskaya et al. (2009) for evidence that Eutamias sibiricus contains three distinctive populations that may take rank as species.
b May not belong to the Marmotini (see Discussion and Supplemental Information 1).

The Cytb analyses offer unambiguous evidence that the three chipmunk lineages are at least as distinct as any recognized genus of Holarctic ground squirrels and marmots. They appear to have diverged earlier than many of the intergeneric splits within their sister lineage (Figure 1). Our relative timetree is consistent with the analysis of Ge et al. (2014), which was based on a priori calibrations, so neither result appears to be an artifact of the methodology employed. Neither the estimated divergence dates nor Cytb genetic distances of chipmunk lineages overlap the within-genus diversification of the ground squirrels and marmots. Amounts of sequence divergence between the three chipmunk lineages are typical of the divergences between other genera of Marmotini (Figure 2). They actually exceed in magnitude those that distinguish many familiar squirrel groups, including Cynomys, Marmota, and the various lineages that were long united under Spermophilus. The IRBP data were less conclusive (Table 1), in part for the methodological reasons cited earlier. The genetic distance between Tamias and Eutamias fell within the range of between-genus comparisons among ground squirrels and marmots but, unlike Cytb, was not higher than most of these comparisons. IRBP sequences were unavailable for the Neotamias lineage, which was more divergent in terms of Cytb. The absence of a good nuclear dataset is clearly a limitation to assessing genetic divergence among chipmunk lineages; this highlights the need for additional data collection. Saturation of fast-evolving mitochondrial genes is known to limit their effectiveness in recovering older divergence events (Simon et al. 1994). Nevertheless, the limitations of fast-evolving genes are expressed as lack of support for clades, not as the spurious appearance of well-supported nodes. Cytb has frequently been used in mammals to assess phylogenetic relationships above the family level (e.g., Agnarsson et al. 2011) and has been shown to be surprisingly effective at these deeper nodes (Tobe et al. 2010). As noted earlier, although saturation may introduce systemic bias in age estimates (Hugall et al. 2007, Dornburg et al. 2014), such bias should influence both the chipmunk and Holarctic ground squirrel clades equally as long as no calibrating fossils are incorporated a priori. At present, the best available dataset (Cytb) provides a compelling and unambiguous argument that the three chipmunk lineages exhibit genus-level genetic differentiation. Determining the implications of these genetic differences for chipmunk nomenclature requires an assessment of evolutionary and historical relationships. In morphological terms, the chipmunk lineages are at least as distinctive from one another as are the various lineages of ground squirrels. In the last truly comprehensive revision of living forms, Howell (1938) treated Tamias and Eutamias as distinct genera, while retaining Callospermophilus, Ictidomys, Notocitellus, Otospermophilus, Poliocitellus, and Xerospermophilus all as subgenera of Spermophilus. White (1953b) even placed Tamias and Eutamias+Neotamias in separate tribes (Marmotini and Callosciurini, respectively).
Although studies based on immunology, karyology, and electrophoresis offered inconclusive and sometimes conflicting evidence of chipmunk sister groups (Hight et al. 1974, Nadler et al. 1977, Levenson et al. 1985), DNA sequence analyses have reliably recovered the three groups of chipmunks as distinct and monophyletic (Piaggio and Spicer 2000, 2001, Herron et al. 2004, Reid et al. 2012). Different phylogenetic methods and datasets applied to chipmunk lineages tend to recover different sister pairs (Ellis and Maxson 1979, Levenson et al. 1985, Herron et al. 2004), indicating a quasi-trichotomy. The ectoparasite complements of fleas and sucking lice infesting the North American lineages Tamias and Neotamias are exclusive and non-overlapping, suggesting that neither has been derived from the other and both have been long separated (Jameson 1999). Until recently, the only timetrees available for squirrels were either coarse and for the entire family (e.g., Mercer and Roth 2003) or else focused on other clades (Harrison et al. 2003, Roth and Mercer 2008). Using three internal calibration points on the subfamily Xerinae, Ge et al. (2014) reconstructed the earliest evolution of Marmotini as being in Asia but most of its diversification, including the divergence of chipmunks and ground squirrels (which they dated to 22.7–29 Mya), as taking place in North America. Fossil relatives of both chipmunks (Nototamias) and ground squirrels (Miospermophilus) appear in Nebraska deposits that date back to the Geringian (Late Oligocene, 26.3–30.8 Mya; Pratt and Morgan 1989, Korth 1992, Bailey 2004), but these taxa are well established from numerous localities by the Oligocene-Miocene boundary (∼24 Mya). Chipmunk fossils are also known from this same time period in Asia (23.8–26 Mya; Eutamias indet., Qiu 1988; Meng et al. 2008) and from the Middle Miocene in Europe (11–18 Mya; Tamias eviensis, Doukas 2003, Koufos 2006). In contrast, Ge et al. (2014) dated the initial radiation of crown-group ground squirrels to 14.76–17.93 Mya. Harrison et al. (2003) had earlier dated this same event to 10–14 Mya. In the timetree of Ge et al. (2014), no crown group of a currently recognized ground squirrel or marmot genus has a 95% CI that even overlaps with the most recent of the chipmunk divergences; all divergences among living ground squirrel lineages took place subsequently, mainly during the late Miocene. These dates are broadly consistent with the dates of more than 4600 fossil sciurids gleaned from NOW (Fortelius 2015) and the Paleobiology Database (paleobiodb.org) (compiled as Appendix 2 of Ge et al. 2014). Most extant genera of caviomorph and phiomorph rodents can also be traced to divergences that took place during the Miocene (Patterson and Upham 2014, Upham and Patterson In press). The Middle Miocene origin for chipmunk lineages that is suggested by both fossils and molecular dating falls comfortably within the range typically associated with genus-level divergence (Holt and Jønsson 2014). Since Howell’s revisions of chipmunks (1929) and ground squirrels (1938), no author has challenged the reciprocal monophyly of the three chipmunk lineages. Nor is it difficult to diagnose Tamias, Neotamias, and Eutamias in terms of cranial, post-cranial, and external morphology, genital bones, cytogenetics, or DNA sequences.
Failure to recognize these groups as distinct genera has stemmed from their unquestioned membership in a single lineage and the mosaic of plesiomorphic and apomorphic characters that have been used to define them. Unquantified impressions that the chipmunks are more similar to one another than are the ground squirrels can now be quantitatively refuted by genetic comparisons. Within-group genetic variation (particularly within species-rich Neotamias) certainly does not dwarf between-group distances (Figures 1 and 2). The three chipmunk groups stand readily as valid genera, and continued use of antiquated nomenclature clouds the systematic relationships of nearly 9% of the world’s squirrel species. A revised classification of the Marmotini is presented in Table 2. Because the literature containing diagnostic characters of chipmunks (Howell 1929, 1938, White 1953b) is now dated and widely scattered, revised diagnoses of the genera are here collected and appended. The section "Included species" contains the proper scientific name and authorship of each of the 25 currently recognized species. The distinction and integrity of these lineages are such that these three genera have no additional synonyms other than Sciurus, used in the 18th century; in the course of history, all chipmunk species have been recognized under Tamias and all but Tamias striatus have been recognized under Eutamias. Only the western North American forms have been called Neotamias (e.g., Piaggio and Spicer 2001).

Tamias Illiger, 1811

Diagnosis: Median dark dorsal stripe narrow and flanked by two paler stripes more than twice its width; all four of the dark stripes short, none extending onto the rump or shoulder; ears broad, rounded at the tips; tail <40% total length; antorbital foramen rounded (suborbicular); posterior border of the zygomatic notch reaches level of P4–M1; postorbital process long and broad basally; well-developed lambdoidal crest; palate long, extending well beyond the plane of the last molars; auditory bullae relatively small; upper incisors with weak or no longitudinal striations; a single upper premolar (P4), whose anterior root projects buccal to the masseteric knob; upper molar series slightly convergent posteriorly; head of the malleus elongate, the planes formed by the lamina and manubrium ca. 60°; hypohyal and ceratohyal elements of hyoid apparatus separate in adults; conjoining tendon between anterior and posterior sets of digastric muscles rounded in cross-section; baculum nearly straight, upturned at tip, with slight median ridge on ventral surface.

Karyotype: 2N=38; Giemsa-stained chromosomes show at least nine structural rearrangements from the pattern shown by Neotamias (Nadler et al. 1977).

Type species: Sciurus striatus Linnaeus, 1758.

Included species: Tamias striatus (Linnaeus, 1758) – Eastern chipmunk.
Eutamias Trouessart, 1880

Diagnosis: Dorsal stripes all subequally spaced; lateral pair of dark stripes shorter than median trio, which reach onto shoulder and rump; antorbital foramen suborbicular; ears broad, rounded at the tips; tail 40–50% total length; posterior border of the zygomatic notch extends only to level of P4; postorbital process broad basally; well-developed lambdoidal crest; palate short, extending to or just behind the plane of the last molars; auditory bullae relatively large; upper incisors with numerous distinct longitudinal striations; normally two upper premolars, the anterior root of P4 projects lingual to the masseteric knob; upper molar series slightly convergent posteriorly; head of the malleus not elongated, the planes formed by the lamina and manubrium ca. 90°; hypohyal and ceratohyal elements of hyoid apparatus completely fused in adults; conjoining tendon between anterior and posterior sets of digastric muscles ribbonlike; baculum gradually tapering from base to tip, evenly curved at tip, with a microscopic ridge on the dorsal surface.

Karyotype: 2N=38, with two to three autosomal and one Y chromosome rearrangement separating it from either of the North American lineages (Nadler et al. 1977).

Type species: Sciurus striatus asiaticus Gmelin, 1788.

Included species: Eutamias sibiricus (Laxmann, 1769) – Siberian chipmunk. Recent studies of call notes (Pisanu et al. 2013) indicate regional heterogeneity of Eurasian chipmunks, and morphology and Cytb sequences indicate three distinctive geographic populations in Eastern Asia (Obolenskaya et al. 2009). Sequence divergence between these forms (>10% in Cytb sequences) is sufficiently great that they may represent distinct species (Lee et al. 2008), but studies focused on contact zones are needed to establish the rank of these taxa.

Neotamias A. H. Howell, 1929

Diagnosis: Dorsal stripes all subequal in width; lateral pair of dark stripes shorter than medial trio; ears narrower, more pointed at the tips; tail 40–50% total length; antorbital foramen slitlike (narrowly oval); posterior border of the zygomatic notch extends only to level of P4; postorbital process gracile at base; lambdoidal crest barely discernible; palate short, extending to or just behind the plane of the last molars; auditory bullae relatively large; upper incisors with numerous distinct longitudinal striations; two upper premolars; the anterior root of P4 projects lingual to the masseteric knob; upper molar series approximately parallel; head of the malleus not elongated, the planes formed by the lamina and manubrium ca. 90°; hypohyal and ceratohyal elements of hyoid apparatus completely fused in adults; conjoining tendon between anterior and posterior sets of digastric muscles ribbonlike; baculum highly variable but abruptly angled at tip with a prominent dorsal ridge ("keel").

Karyotype: 2N=38, "A" and "B" karyotypes, differing from each other by a single pericentric inversion of the smallest autosome (Nadler et al. 1977).

Type species: Tamias asiaticus merriami J. A. Allen, 1889.

Included species: Neotamias alpinus (Merriam, 1893) – Alpine chipmunk; Neotamias amoenus (J. A. Allen, 1890) – Yellow-pine chipmunk; Neotamias bulleri (J. A. Allen, 1889) – Buller’s chipmunk; Neotamias canipes (Bailey, 1902) – Gray-footed chipmunk; Neotamias cinereicollis (J. A. Allen, 1890) – Gray-collared chipmunk; Neotamias dorsalis (Baird, 1855) – Cliff chipmunk; Neotamias durangae (J. A. Allen, 1903) – Durango chipmunk; Neotamias merriami (J. A.
Allen, 1889) – Merriam’s chipmunk; Neotamias minimus (Bachman, 1839) – Least chipmunk; Neotamias obscurus (J. A. Allen, 1890) – California chipmunk; Neotamias ochrogenys (Merriam, 1897) – Yellow-cheeked chipmunk; Neotamias palmeri (Merriam, 1897) – Palmer’s chipmunk; Neotamias panamintinus (Merriam, 1893) – Panamint chipmunk; Neotamias quadrimaculatus (Gray, 1867) – Long-eared chipmunk; Neotamias quadrivittatus (Say, 1822) – Colorado chipmunk; Neotamias ruficaudus (A. H. Howell, 1920) – Red-tailed chipmunk; Neotamias rufus (Hoffmeister and Ellis, 1979) – Hopi chipmunk; Neotamias senex (J. A. Allen, 1890) – Shadow chipmunk; Neotamias siskiyou (A. H. Howell, 1922) – Siskiyou chipmunk; Neotamias sonomae (Grinnell, 1915) – Sonoma chipmunk; Neotamias speciosus (Merriam, 1890) – Lodgepole chipmunk; Neotamias townsendii (Bachman, 1839) – Townsend’s chipmunk; Neotamias umbrinus (J. A. Allen, 1890) – Uinta chipmunk. This analysis is largely a compilation of various research findings on squirrels (including chipmunks) over much of the last century. We are accordingly grateful to all those making observations on which this report is based. The many contributions of A. H. Howell, J. A. White, D. A. Sutton, and R. S. Hoffmann to understanding of chipmunks are especially worthy of mention. Thanks to J. M. Bates and S. Lidgard for feedback and to N. S. Upham, J. Demboski, and two anonymous reviewers for constructive reviews of the manuscript that improved its cogency. |Cytb:Ammospermophilus harrisii AF157926*; Callosciurus prevostii AB499914; Callospermophilus lateralis AF157950*; Callospermophilus madrensis AF157946; Callospermophilus saturatus AF157917; Cynomys gunnisoni AF157930*; Cynomys ludovicianus JQ885590; Cynomys mexicanus DQ106852; Cynomys parvidens AF157929; Dremomys pernyi HQ698363*; Eutamias sibiricus KF990333*; Glaucomys volans AF157921; Ictidomys mexicanus AF157852*; Marmota broweri JN024621*; Marmota caligata FJ438940; Marmota olympus JF313271; Neotamias alpinus KJ452914; Neotamias amoenus AY121090; Neotamias canipes KJ139459; Neotamias cinereicollis KJ139547; Neotamias dorsalis KJ139583; Neotamias durangae JN042437; Neotamias merriami JN042549; Neotamias minimus KJ453081; Neotamias obscurus JN042551; Neotamias panamintinus KJ453106; Neotamias quadrimaculatus JN042497; Neotamias quadrivittatus KJ139530; Neotamias ruficaudus JN042448; Neotamias rufus KJ139468; Neotamias senex JN042532; Neotamias siskiyou JN042509; Neotamias sonomae JN042530; Neotamias speciosus JN042483; Neotamias townsendii JN042504; Neotamias umbrinus KJ139640; Notocitellus adocetus AF157843; Notocitellus annulatus AF157849; Otospermophilus atricapillus JF925312; Otospermophilus beecheyi AF157918; Otospermophilus variegatus AF157878; Poliocitellus franklinii AF157894; Ratufa bicolor NC023780; Sciurotamias davidianus KC005710; Sciurus carolinensis FJ200744; Spermophilopsis leptodactylus AF157865; Spermophilus alashanicus AF157868; Spermophilus citellus KC971254; Spermophilus dauricus AF157871; Spermophilus fulvus AF157913; Spermophilus major AF157903; Spermophilus pallidicauda AF157866; Spermophilus pygmaeus AF157910; Spermophilus relictus AF157867; Spermophilus suslicus AF157897; Spermophilus xanthoprymnus AF157909; Tamias striatus JN042555*; Urocitellus armatus AF157901; Urocitellus beldingi AF157881; Urocitellus brunneus AF157952; Urocitellus columbianus AF157882; Urocitellus elegans AF157891; Urocitellus parryii AF157931; Urocitellus richardsoni AF157914; Urocitellus townsendi AF157938; Urocitellus undulatus AF157912; 
Urocitellus washingtoni AF157936; Xerospermophilus mohavensis AF157928*; Xerospermophilus perotensis AF157840; Xerospermophilus spilosoma DQ106854; Xerospermophilus tereticaudus AF157941; Xerus rutilus AY452690.| |IRBP:Ammospermophilus harrisii AY227583; Callospermophilus lateralis AY227586; Cynomys leucurus AY227584; Dremomys pernayi HQ698525; Eutamias sibiricus AB253981; Ictidomys tridecemlineatus AF297278; Marmota monax AJ427237; Tamias striatus JN414824; Xerospermophilus mohavensis JX065593.| Cytb sequences used as representative examples in Table 1 are marked with an asterisk. Agnarsson, I., C.M. Zambrana-Torrelio, N.P. Flores-Saldana and L.J. May-Collado. 2011. A time-calibrated species-level phylogeny of bats (Chiroptera, Mammalia). PLoS Currents 3: currents. RRN1212.10.1371/currents.RRN1212Search in Google Scholar Allen, J.A. 1889. Notes on a collection of mammals from southern Mexico, with descriptions of new species of the genera Sciurus, Tamias, and Sigmodon. Bull. Am. Mus. Nat. Hist. 2: 165–181.Search in Google Scholar Allen, J.A. 1890. A review of some of the North American ground squirrels of the genus Tamias. Bull. Am. Mus. Nat. Hist. 3: 45–116.Search in Google Scholar Allen, J.A. 1903. List of mammals collected by Mr. J. H. Batty in New Mexico and Durango, with descriptions of new species and subspecies. Bull. Am. Mus. Nat. Hist. 19: 587–612.Search in Google Scholar Bachman, J. 1839. Description of several new species of American quadrupeds. J. Acad. Nat. Sci. Philadelphia 8: 57–74.Search in Google Scholar Bailey, B.E. 2004. Biostratigraphy and biochronology of early Arikareean through late Hemingfordian small mammal faunas from the Nebraska Panhandle and adjacent areas. Paludicola 4: 81–113.Search in Google Scholar Bailey, V. 1902. Seven new mammals from western Texas. Proc. Biol. Soc. Wash. XV: 117–120.Search in Google Scholar Baird, S.F. 1855. Characteristics of some new species of Mammalia, collected by the U.S. and Mexican Boundary Survey, Major W.H. Emory, U.S.A. Commissioner. Proc. Acad. Nat. Sci., Philadelphia 7: 331–333.Search in Google Scholar Benton, M.J. 2007. The Phylocode: beating a dead horse? Acta Palaeont. Pol. 52: 651–655.Search in Google Scholar Black, C.C. 1963. A review of the North American Tertiary Sciuridae. Bull. Mus. Comp. Zool. 130: 113–248.Search in Google Scholar Bradley, L.C., B.R. Amman, J.G. Brant, L.R. Mcaliley, F. Mendez-Harclerode, J.R. Suchecki, C. Jones, H.H. Genoways, R.J. Baker, and R.D. Bradley. 2005. Mammalogy at Texas Tech University: a historical perspective. Occas. Pap. Mus. Texas Tech Univ. 243: 1–30.10.5962/bhl.title.156982Search in Google Scholar Conroy, C.J. and M. van Tuinen. 2003. Extracting time from phylogenies: positive interplay between fossil and genetic data. J. Mamm. 84: 444–455.10.1644/1545-1542(2003)084<0444:ETFPPI>2.0.CO;2Search in Google Scholar Corbet, G.B. 1978. The mammals of the Palaearctic region: a taxonomic review. British Museum (Natural History), London.Search in Google Scholar Corbet, G.B. and J.E. Hill. 1980. A world list of mammalian species. British Museum (Natural History) and Comstock Publishing Associates, a division of Cornell University Press, London and Ithaca, NY.Search in Google Scholar Demboski, J.R. and J. Sullivan. 2003. Extensive mtDNA variation within the yellow-pine chipmunk, Tamias amoenus (Rodentia: Sciuridae), and phylogeographic inferences for northwest North America. Mol. Phylogen. Evol. 26: 389–408.10.1016/S1055-7903(02)00363-9Search in Google Scholar Dornburg, A., J.P. 
Townsend, M. Friedman and T. J. Near. 2014. Phylogenetic informativeness reconciles ray-finned fish molecular divergence times. BMC Evol. Biol. 14: 169.10.1186/s12862-014-0169-0Search in Google Scholar Doukas, C.S. 2003. The MN4 faunas of Aliveri and Karydia (Greece). Coloquios de Paleontología, Volumen Extraordinario 1: 127–133.Search in Google Scholar Drummond, A.J., M.A. Suchard, D. Xie and A. Rambaut. 2012. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Mol. Biol. Evol. 29: 1969–1973.10.1093/molbev/mss075Search in Google Scholar PubMed PubMed Central Ellerman, J.R. 1940. The families and genera of living rodents, Vol. 1. Rodents other than Muridae. British Museum (Natural History), London.Search in Google Scholar Ge, D.Y., X. Liu, X.F. Lv, Z.Q. Zhang, L. Xia and Q.S. Yang. 2014. Historical biogeography and body form evolution of ground squirrels (Sciuridae: Xerinae). Evol. Biol. 41: 99–114.10.1007/s11692-013-9250-7Search in Google Scholar Good, J.M. and J. Sullivan. 2001. Phylogeography of the red-tailed chipmunk (Tamias ruficaudus), a northern Rocky Mountain endemic. Mol. Ecol. 10: 2683–2695.10.1046/j.0962-1083.2001.01397.xSearch in Google Scholar PubMed Good, J.M., J.R. Demboski, D.W. Nagorsen and J. Sullivan. 2003. Phylogeography and introgressive hybridization: chipmunks (genus Tamias) in the northern Rocky Mountains. Evolution 57: 1900–1916.10.1111/j.0014-3820.2003.tb00597.xSearch in Google Scholar PubMed Good, J.M., S. Hird, N. Reid, J.R. Demboski, S.J. Steppan, T.R. Martin-Nims and J. Sullivan. 2008. Ancient hybridization and mitochondrial capture between two species of chipmunks. Mol. Ecol. 17: 1313–1327.10.1111/j.1365-294X.2007.03640.xSearch in Google Scholar PubMed Grinnell, J. 1915. Eutamias sonomae, a new chipmunk from the inner northern coast belt of California. Univ. Calif. Publ. Zool. 12: 321–325.Search in Google Scholar Hall, E.R. 1981. The mammals of North America, 2nd ed. Vol. 1. John Wiley and Sons, New York.Search in Google Scholar Hall, E.R. and K.R. Kelson. 1959. The mammals of North America. Vol. 1. The Ronald Press Company, New York.Search in Google Scholar Harrison, R.G., S.M. Bogdanowicz, R.S. Hoffmann, E. Yensen and P.W. Sherman. 2003. Phylogeny and evolutionary history of the ground squirrels (Rodentia: Marmotinae). J. Mammal. Evol. 10: 249–276.10.1023/B:JOMM.0000015105.96065.f0Search in Google Scholar Hawksworth, D.L. 2010. Terms used in bionomenclature: the naming of organisms and plant communities: including terms used in botanical, cultivated plant, phylogenetic, phytosociological, prokaryote (bacteriological), virus, and zoological nomenclature. Global Biodiversity Information Facility (GBIF), Copenhagen.Search in Google Scholar Herron, M.D., T.A. Castoe and C.L. Parkinson. 2004. Sciurid phylogeny and the paraphyly of Holarctic ground squirrels (Spermophilus). Mol. Phylogen. Evol. 31: 1015–1030.10.1016/j.ympev.2003.09.015Search in Google Scholar PubMed Hird, S. and J. Sullivan. 2009. Assessment of gene flow across a hybrid zone in red-tailed chipmunks (Tamias ruficaudus). Mol. Ecol. 18: 3097–3109.10.1111/j.1365-294X.2009.04196.xSearch in Google Scholar PubMed Hird, S., N. Reid, J. Demboski and J. Sullivan. 2010. Introgression at differentially aged hybrid zones in red-tailed chipmunks. Genetica 138: 869–883.10.1007/s10709-010-9470-zSearch in Google Scholar PubMed Hoffmeister, D.F. and L.S. Ellis. 1979. Geographic variation in Eutamias quadrivittatus with comments on the taxonomy of other Arizonan chipmunks. Southwest. Natur. 
24: 655–665.10.2307/3670524Search in Google Scholar Honacki, J.H., K.E. Kinman and J.W. Koeppl. 1982. Mammal species of the world: a taxonomic and geographic reference. Allen Press and Assoc. Syst. Coll., Lawrence, Kansas.Search in Google Scholar Howell, A.H. 1920. Description of a new chipmunk from Glacier National Park, Montana. Proc. Biol. Soc. Wash. 33: 91–92.Search in Google Scholar Hugall, A.F., R. Foster and M.S. Lee. 2007. Calibration choice, rate smoothing, and the pattern of tetrapod diversification according to the long nuclear gene RAG-1. Syst. Biol. 56: 543–563.10.1080/10635150701477825Search in Google Scholar PubMed Humphreys, A.M. and T.G. Barraclough. 2014. The evolutionary reality of higher taxa in mammals. Proc. R. Soc. Lond. B 281: 20132750.10.1098/rspb.2013.2750Search in Google Scholar PubMed PubMed Central IUCN. 2014. IUCN red list of threatened species. Version 2014.1. Accessed 10 Jul 2014.Search in Google Scholar Jansa, S.A., F.K. Barker and L.R. Heaney. 2006. The pattern and timing of diversification of Philippine endemic rodents: evidence from mitochondrial and nuclear gene sequences. Syst. Biol. 55: 73–88.10.1080/10635150500431254Search in Google Scholar PubMed Larkin, M.A., G. Blackshields, N.P. Brown, R. Chenna, P.A. McGettigan, H. McWilliam, F. Valentin, I.M. Wallace, A. Wilm, R. Lopez, J.D. Thompson, T.J. Gibson and D.G. Higgins. 2007. ClustalW and ClustalX version 2.0. Bioinform. 23: 2947–2948.10.1093/bioinformatics/btm404Search in Google Scholar PubMed Laxmann, M.E. 1769. Sibiriche Brief, Gottingen and Gotha.Search in Google Scholar Koufos, G.D. 2006. The Neogene mammal localities of Greece: faunas, chronology and biostratigraphy. Hellenic J. Geosci. 41: 183–214.Search in Google Scholar Lee, M.-Y., A.A. Lissovsky, S.K. Park, E.V. Obolenskaya, N.E. Dokuchaev, Y.P. Zhang, L. Yu, Y.J. Kim, I. Voloshina, A. Myslenkov, T.Y. Choi, M.S. Min and H. Lee. 2008. Mitochondrial cytochrome b sequence variations and population structure of Siberian chipmunk (Tamias sibiricus) in northeastern Asia and population substructure in South Korea. Molec. Cells 26: 566–575.Search in Google Scholar Linnaeus, C. 1758. Systema naturae per regna tria naturae, secundum classes, ordines, genera, species, cum charateribus, differentiis, synonymis, locis…, 10th ed. Imprint Holmiae, Impensis L. Salvii.10.5962/bhl.title.542Search in Google Scholar Mayr, E. 1969. Principles of systematic biology. McGraw-Hill, New York.Search in Google Scholar McKenna, M.C. and S.K. Bell. 1997. Classification of mammals: above the species level. Columbia Univ. Press, New York.Search in Google Scholar Meng, J., J. Ye, W.-Y. Wu, X.-J. Ni, and S.-D. Bi. 2008. The Neogene Dingshanyanchi Formation in Northern Jungguar basin of Xinjiang and its stratigraphic implications. Vertebrata PalAsiatica 46: 90–110.Search in Google Scholar Merriam, C.H. 1886. Description of a new species of chipmunk from California (Tamias macrorhabdotes sp. nov.). Proc. Biol. Soc. Wash. 3: 25–28.Search in Google Scholar Merriam, C.H. 1893. Descriptions of eight new ground squirrels of the genera Spermophilus and Tamias from California, Texas, and Mexico. Proc. Biol. Soc. Wash. 8: 129–138.Search in Google Scholar Merriam, C.H. 1897. Notes on the chipmunks of the genus Eutamias occurring west of the east base of the Cascade-Sierra system, with descriptions of new forms. Proc. Biol. Soc. Wash. 11: 189–212.Search in Google Scholar Nadler, C.F., R.S. Hoffmann, J.H. Honacki and D. Pozin. 1977. 
Chromosomal evolution in chipmunks, with special emphasis on A and B karyotypes of the subgenus Neotamias. Amer. Midl. Natur. 98: 343–353.10.2307/2424985Search in Google Scholar Norris, R.W., C.L. Strope, D.M. McCandlish and A. Stoltzfus. 2015. Bayesian priors for tree calibration: evaluating two new approaches based on fossil intervals. bioRxiv: dx.doi.org/10.1101/014340.10.1101/014340Search in Google Scholar Obolenskaya, E.V., M.-Y. Lee, N.E. Dokuchaev, T. Oshida, M.-S. Lee, H. Lee and A.A. Lissovsky. 2009. Diversity of Palaearctic chipmunks (Tamias, Sciuridae). Mammalia 73: 281–298.10.1515/MAMM.2009.047Search in Google Scholar Patterson, B.D. 1983. On the phyletic weight of mensural cranial characters in chipmunks and their allies (Rodentia: Sciuridae). Fieldiana: Zool., n.s. 20: 1–24.Search in Google Scholar Patterson, B.D. and N.S. Upham. 2014. A newly recognized family from the Horn of Africa, the Heterocephalidae (Rodentia: Ctenohystrica). Zool. J. Linn. Soc. 172: 942–963.10.1111/zoj.12201Search in Google Scholar Piaggio, A.J. and G.S. Spicer. 2000. Molecular phylogeny of the chipmunk genus Tamias based on the mitochondrial cytochrome oxidase subunit II gene. J. Mammal. Evol. 7: 147–166.10.1023/A:1009484302799Search in Google Scholar Piaggio, A.J. and G.S. Spicer. 2001. Molecular phylogeny of the chipmunks inferred from mitochondrial cytochrome b and cytochrome oxidase II gene sequences. Mol. Phylogen. Evol. 20: 335–350.10.1006/mpev.2001.0975Search in Google Scholar PubMed Pisanu, B., E. Obolenskaya, E. Baudry, A. Lissovsky and J.-L. Chapuis. 2013. Narrow phylogeographic origin of five introduced populations of the Siberian chipmunk Tamias (Eutamias) sibiricus (Laxmann, 1769)(Rodentia: Sciuridae) established in France. Biol. Invas. 15: 1201–1207.10.1007/s10530-012-0375-xSearch in Google Scholar Pratt, A.E. and G.S. Morgan. 1989. New Sciuridae (Mammalia: Rodentia) from the early Miocene Thomas Farm local fauna, Florida. J. Vert. Paleont. 9: 89–100.10.1080/02724634.1989.10011741Search in Google Scholar Qiu, Z. 1988. Neogene micromammals of China. In: (P. Whyte, ed.) Palaeoenvironment of East Asia from the mid-Tertiary. Second International Conference on the Paleoenvironment of East Asia 77. Centre of Asian Studies, University of Hong Kong, Hong Kong. pp. 834–848.Search in Google Scholar R Development Core Team. 2011. R: A language and environment for statistical computing (Available online at http://www.R-project.org/). The R Foundation for Statistical Computing., Vienna, Austria.Search in Google Scholar Reid, N., J.R. Demboski and J. Sullivan. 2012. Phylogeny estimation of the radiation of western North American chipmunks (Tamias) in the face of introgression using reproductive protein genes. Syst. Biol. 61: 44–62.10.1093/sysbio/syr094Search in Google Scholar PubMed PubMed Central Roskov, Y. et al. 2014. Species 2000 & ITIS Catalogue of Life, 2014 Annual Checklist. Accessed 7 Jan 2015.Search in Google Scholar Roth, V.L. and J.M. Mercer. 2008. Differing rates of macroevolutionary diversification in arboreal squirrels. Curr. Sci. 95: 857–861.Search in Google Scholar Say, T. 1822. [Mammals]. In: (E. James, ed.) Account of an expedition from Pittsburgh to the Rocky Mountains, performed in the years 1819 and 1820, under the command of Maj. Stephen H. Long. From the notes of Major Long, Mr. T. Say, and other gentlemen of the exploring party, Philadelphia.Search in Google Scholar Simon, C., F. Frati, A. Beckenbach, B. Crespi, H. Liu and P. Flook. 1994. 
Evolution, weighting, and phylogenetic utility of mitochondrial gene sequences and a compilation of conserved polymerase chain reaction primers. Ann. Ent. Soc. Am. 87: 651–701.10.1093/aesa/87.6.651Search in Google Scholar Sullivan, J., J.R. Demboski, K.C. Bell, S. Hird, B. Sarver, N. Reid and J.M. Good. 2014. Divergence-with-gene-flow within the recent chipmunk radiation (Tamias). Hered. 113: 185–194.10.1038/hdy.2014.27Search in Google Scholar PubMed PubMed Central Swofford, D. 2003. PAUP* ver. 4.0b10: Phylogenetic analysis using parsimony. Sinauer Associates, Sunderland, MA.Search in Google Scholar Thorington, R.W., Jr and R.S. Hoffmann. 2005. Family Sciuridae. In: (D. E. Wilson and D. A. M. Reeder, eds.) Mammal species of the world: a taxonomic and geographic reference, 3rd ed. Johns Hopkins Univ. Press, Washington DC. pp. 754–818.Search in Google Scholar Thorington, R.W., Jr., J.L. Koprowski, M.A. Steele and J.F. Whatton. 2012. Squirrels of the world. Johns Hopkins Univ. Press, Baltimore.Search in Google Scholar Tobe, S.S., A.C. Kitchener and A.M.T. Linacre. 2010. Reconstructing mammalian phylogenies: a detailed comparison of the cytochrome b and cytochrome oxidase subunit I mitochondrial genes. PLoS One 5: e14156.10.1371/journal.pone.0014156Search in Google Scholar PubMed PubMed Central Upham, N.S. and B.D. Patterson. In press. Evolution of caviomorph rodents: a complete phylogeny and timetree for living genera. In: (A. I. Vassallo and D. Antenucci, eds.) Biology of caviomorph rodents: diversity and evolution. SAREM, Buenos Aires.Search in Google Scholar White, J.A. 1953a. The baculum in the chipmunks of western North America. Univ. Kans. Publ. Mus. Nat. Hist. 5: 611–631.Search in Google Scholar White, J.A. 1953b. Genera and subgenera of chipmunks. Univ. Kans. Publ. Mus. Nat. Hist. 5: 545–561.Search in Google Scholar The online version of this article (DOI: 10.1515/mammalia-2015-0004) offers supplementary material, available to authorized users. ©2016 by De Gruyter
Math Centers for Upper Elementary: Math centers can be a valuable tool for engaging upper elementary students in math learning. These hands-on, interactive activities allow students to explore math concepts in a fun and engaging way, while also providing opportunities for problem-solving, critical thinking, and collaboration. Some examples of math centers that might be suitable for upper elementary students include math games, manipulatives, task cards, and worksheets. How to Set Up Math Centers in the Classroom: Setting up math centers in your classroom is an important step towards creating an effective learning environment. To set up math centers, you will need to gather a variety of math materials and resources such as games, puzzles, manipulatives, task cards, and worksheets. You should also consider the layout of your classroom and how you will organize the math centers. Will you have a designated area for each center, or will you rotate the centers throughout the room? Differentiated Math Centers for Different Learning Styles: One of the benefits of math centers is that they allow you to differentiate instruction for your students. By creating different math centers for different ability levels or learning styles, you can ensure that all of your students are able to engage with the material in a way that is meaningful and relevant to them. For example, you might create a math center with hands-on manipulatives for kinesthetic learners, or a math center with task cards for students who enjoy problem-solving. Engaging Math Center Activities for Students: Engaging math center activities are essential for keeping your students interested and motivated. Some ideas for engaging math center activities might include math games, puzzles, and task cards. You could also use manipulatives such as base ten blocks, fraction bars, or geometry shapes to help students explore math concepts in a hands-on way. It’s important to vary the activities you offer to keep your students engaged and motivated. Math Center Ideas for Problem-Solving and Critical Thinking: Math centers are a great way to encourage problem-solving and critical thinking skills in your students. Some ideas for math center activities that support these skills might include task cards, math games that require strategy, or puzzles that require students to think critically in order to solve them. You could also create math centers that focus on real-world problems or open-ended tasks, which can help to encourage higher-level thinking and creativity. Assessing Student Progress in Math Centers: Assessing student progress in math centers is an important part of the learning process. By monitoring and assessing student progress as they work at math centers, you can get a better understanding of which areas they are struggling with and where they need additional support. There are many different ways you can assess student progress in math centers, such as through observation, self-assessment, or through the use of assessments such as quizzes or tests. Math Centers for Hands-On Exploration of Math Concepts: Math centers provide a great opportunity for students to explore math concepts in a hands-on way. Manipulatives such as base ten blocks, fraction bars, and geometry shapes can be used to help students visualize and understand math concepts in a tangible way. You can also use math games and puzzles to help students practice and reinforce their math skills. 
Using Math Centers to Support Collaborative Learning: Math centers can be a great way to support collaborative learning in your classroom. By working in small groups or pairs, students can share ideas and work together to solve problems. You can encourage collaboration in math centers by creating activities that require students to work together Setting up math centers: To set up math centers in your upper elementary classroom, you will need to gather a variety of math materials and resources. Some examples might include math games, puzzles, manipulatives, task cards, and worksheets. You should also consider the layout of your classroom and how you will organize the math centers. Will you have a designated area for each center, or will you rotate the centers throughout the room? Implementing math centers: Once you have your math centers set up, you can start implementing them in your classroom. Here are a few tips for successful implementation: Introduce the math centers to your students: Before you start using math centers, make sure to introduce them to your students. Explain the purpose: Make sure to explain the purpose of math centers to your students. Math centers can be used to reinforce math skills, practice problem-solving, or explore math concepts in a hands-on way. Model how to work at math centers: Before your students start working at math centers, it’s a good idea to model how to work at them. You can demonstrate how to complete an activity at a math center, or how to move between centers. Provide clear instructions: Make sure to provide clear instructions for each math center activity. This can help students to understand what they are supposed to do and how to complete the activity. Encourage independence: Math centers should be designed to allow students to work independently or in small groups. Encourage your students to take ownership of their learning and explore the activities at their own pace. You can provide support and guidance as needed, but try to allow students to work independently as much as possible. Rotate the centers: One way to keep math centers fresh and engaging is to rotate them regularly. This can help to prevent boredom and keep students interested. You can rotate the centers daily, weekly, or even more frequently, depending on your schedule and the needs of your students. Establish clear expectations: Before you start using math centers, make sure to set clear expectations for your students. This may include rules for working at math centers, how to move between centers, and how to clean up after working at a center. Math centers can be a great way to differentiate instruction for your students. By creating different math centers for different ability levels or learning styles, you can ensure that all of your students are able to engage with the material in a way that is meaningful and relevant to them. Encourage collaboration in math centers by creating activities that require students to work together or share ideas. This can help to foster teamwork, communication, and problem-solving skills. Monitor and assess student progress: It’s important to monitor and assess student progress as they work at math centers. This can help you to identify areas where students are struggling and where they need additional support. You can assess student progress through observation, self-assessment, or through the use of assessments such as quizzes or tests. 
Examples of math centers: Here are a few ideas for math centers that you can use in your upper elementary classroom: Math games: Math games are a fun and engaging way to practice math skills. Some examples might include rolling dice and solving addition and subtraction problems, or using playing cards to practice skip counting and multiplication. Manipulatives, such as base ten blocks, fraction bars, and geometry shapes, can be used to explore math concepts in a hands-on way. Task cards are a great way to provide students with challenging and engaging math problems to solve. You can create your own task cards or use ones that are available online or in resources such as textbooks. Worksheets can be used to practice math skills and reinforce what students have learned. Make sure to vary the difficulty level and content of the worksheets to meet the needs of your students. Math centers are a great way to engage upper elementary students in math learning. By setting up and implementing math centers in your classroom, you can provide students with opportunities for hands-on exploration, problem-solving, and critical thinking. Here are a couple of books that teach about math centers: - “Math Workshop: Five Steps to Implementing Guided Math, Learning Stations, Reflection, and More” by Jennifer Lempp: This book provides practical guidance on how to implement a teaching approach called math workshop, which includes the use of math centers. The book offers a five-step process for implementing math workshop in the classroom, including planning, whole group instruction, guided math groups, learning stations, and reflection. The book also includes a variety of strategies and activities that you can use to engage your students in math learning, as well as tips for differentiating instruction and assessing student progress. Overall, this book is a valuable resource for teachers looking to implement math workshop in their classrooms. - “Math Centers for First Grade” by Kim Sutton: This book provides a variety of math center ideas and activities that are specifically geared towards first grade students. It includes activities such as math games, manipulatives, and task cards, as well as assessment tools and lesson plans. - “Math Centers: Engaging and Effective Practices for Grades 3-5″ by Maria D. Miller: This book provides a wealth of ideas for math centers that can be used in grades 3-5. It includes activities such as math games, puzzles, and task cards, as well as strategies for differentiating instruction and assessing student progress. If you want to engage your upper elementary students in hands-on, interactive math learning, then you need to check out our printable math games! These games are perfect for use in math centers and will provide your students with opportunities for problem-solving, critical thinking, and collaboration. Plus, with our convenient printable format, you can easily add these games to your math centers without having to spend time preparing materials. Click here to purchase our printable math games and take the first step towards creating an exciting and effective math center in your classroom!
In the world of research and data-driven decision making, hypothesis testing is a critical tool. Hypothesis testing in R can be accomplished in several ways, depending on the nature of the data and the specific test you want to run. In this article, we will discuss hypothesis testing, its importance, different types of hypothesis tests, and how to conduct these tests in R.

Introduction to Hypothesis Testing

Hypothesis testing is a statistical method that is used in making statistical decisions using experimental data. It is basically an assumption that we make about the population parameter. This assumption may or may not be true. Hypothesis testing is a critical tool in inferential statistics, allowing researchers to infer conclusions about a population based on a sample of data. Hypothesis testing generally starts with a null hypothesis (H0) that represents a theory that has been put forward, either because it is believed to be true or because it is used as a basis for argument. A researcher might claim, for example, that two groups are the same. The alternative hypothesis (H1 or Ha) is a statement that directly contradicts the null hypothesis by stating that the actual value of a population parameter is less than, greater than, or not equal to the value stated in the null hypothesis.

Installing and Loading Necessary Libraries

Before we proceed with the types of hypothesis tests and their implementation in R, let's install and load the necessary packages. You can install the packages by using the command install.packages(), and load them using the library() function.

install.packages(c("ggplot2", "tidyverse", "dplyr", "car"))
library(ggplot2)
library(tidyverse)
library(dplyr)
library(car)

Types of Hypothesis Tests

There are various types of hypothesis tests in R, each designed to analyze different types of data and different kinds of research questions. The type of hypothesis test you choose to run depends on your data and your research question. Here are some of the most common types of hypothesis tests:
- T-test: The t-test is used to compare the means of two groups. In R, this is performed using the t.test() function.
- ANOVA (Analysis of Variance): ANOVA is used when one wants to compare the means of more than two groups. In R, you can perform an ANOVA using the aov() function.
- Chi-square test: The Chi-square test is used to determine whether there is a significant association between two categorical variables. In R, this can be performed using the chisq.test() function.
- Correlation test: The correlation test is used to check the relationship between two continuous variables. In R, this can be done using the cor.test() function.

Performing Hypothesis Testing in R

Now let's explore how to perform these hypothesis tests in R. Let's start with the t-test.

# Create a binary factor variable
mtcars$cyl_binary <- ifelse(mtcars$cyl == 4, "4 cylinders", "More than 4 cylinders")
# Perform the t-test
t.test(mpg ~ cyl_binary, data = mtcars)

In the above R code, the ifelse() function is used to create a new binary variable cyl_binary. This new variable is "4 cylinders" if cyl equals 4 and "More than 4 cylinders" otherwise. Then, we perform the t-test using this new binary variable. For ANOVA, consider an example where we have a dataset 'PlantGrowth' and we want to see if the type of treatment (ctrl, trt1, trt2) affects plant growth. Here, the null hypothesis is that there's no difference in mean plant growth between the treatment groups.
data("PlantGrowth") aov_result <- aov(weight ~ group, data = PlantGrowth) summary(aov_result) aov() function is used to perform the ANOVA test and the summary() function is used to get the result of the test. The Chi-square test is used for categorical data. For example, we can use the built-in dataset mtcars and check if there is an association between the number of cylinders (cyl) and the type of transmission (am). data("mtcars") chisq_result <- chisq.test(mtcars$cyl, mtcars$am) print(chisq_result) chisq.test() performs the Chi-square test, and the print() function is used to get the result. For the correlation test, we will use the built-in dataset mtcars and check if there is a correlation between mpg (Miles/(US) gallon) and disp (Displacement (cu.in.)). data("mtcars") cor_result <- cor.test(mtcars$mpg, mtcars$disp) print(cor_result) cor.test() is used to perform the correlation test, and print() is used to print the result. Interpreting the Results The output of each test contains a p-value. The p-value is used in hypothesis testing to help you support or reject the null hypothesis. It represents the probability that the results of your test occurred at random. If p-value ≤ 0.05, we reject the null hypothesis, and if p-value > 0.05, we fail to reject the null hypothesis. This guide has provided a comprehensive introduction to performing hypothesis testing in R. Hypothesis testing is a vital tool in statistics to determine whether a result is statistically significant, whether this result occurred by chance, or whether there is a pattern to the data observed. As seen above, R provides various functions to perform these tests efficiently. As with all statistical analyses, it’s important to understand your data and the assumptions underlying each test to choose the appropriate test and interpret the results correctly.
A meteorite that landed on a frozen lake in 2018 contains thousands of organic compounds that formed billions of years ago and could hold clues about the origins of life on Earth. The meteor entered Earth’s atmosphere on Jan. 16, 2018, after a very long journey through the freezing vacuum of space, lighting up skies over Ontario, Canada, and the midwestern United States. Weather radar tracked the flaming space rock’s descent and breakup, helping meteorite hunters to quickly locate fallen fragments on Strawberry Lake in Hamburg, Michigan. An international team of researchers then examined a walnut-size piece of the meteorite “while it was still fresh,” scientists reported in a new study. Their analysis revealed more than 2,000 organic molecules dating to when our Solar System was young; similar compounds may have seeded the emergence of microbial life on our planet, the study authors reported. Swift recovery of the meteorite from the lake’s frozen surface prevented liquid water from seeping into cracks and contaminating the sample with terrestrial spores and microbes. This maintained the space rock’s pristine state, enabling experts to more easily evaluate its composition. In fact, there was so little terrestrial weathering that the fragment brought to Chicago’s Field Museum looked like it had been collected in space, said study co-author Jennika Greer, a doctoral candidate in the Department of the Geophysical Sciences at the University of Chicago, and a resident graduate student at The Field Museum. When space rocks enter the atmosphere at speeds of several miles per second, the air around them becomes ionized. Extreme heat melts away up to 90 percent of the meteor, and the rock that survives atmospheric passage becomes encased in a 1-millimeter-thick fusion crust of melted glass, said lead study author Philipp Heck, a curator of meteorites at the Field Museum and an associate professor at the University of Chicago. That surviving fragment inside the glassy crust is a pristine record of the rock’s geochemistry in space. And despite a fiery fall to Earth, after the vaporized external layers are carried away, rocky meteorites such as this one are very, very cold when they land, Heck told Live Science. “I’ve heard eyewitness accounts of meteorites falling into puddles after it rained, and the puddle froze because the meteorite was so cold,” he said. The Michigan meteorite’s ratio of uranium (isotopes 238 and 235) to the element’s decayed state as lead (isotopes 207 and 206) told the scientists that the parent asteroid formed about 4.5 billion years ago. Around that time, the rock underwent a process called thermal metamorphism, as it was subjected to temperatures of up to 1,300 degrees Fahrenheit (700 degrees Celsius). After that, the asteroid’s composition stayed mostly unchanged for the last 3 billion years. Then about 12 million years ago, an impact broke off the chunk of rock that recently fell in Michigan, according to an analysis of the meteorite’s exposure to cosmic rays in space, Heck told Live Science. Because the meteorite was altered so little after its initial heating billions of years ago, it was classified as H4: “H” indicates that it’s a rocky meteorite that’s high in iron, while type 4 meteorites have undergone thermal metamorphism sufficient to change their original composition. Only about 4 percent of the meteorites that fall to Earth today land in the H4 category. 
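As general background on how an isotope ratio yields an age (this is the standard radiometric relationship, not a calculation reported by the study's authors): the decay law gives

t = (1/λ) · ln(1 + D/P)

where D is the amount of the daughter isotope (for example, lead-206), P is the amount of the surviving parent isotope (uranium-238), and λ is the parent's decay constant, equal to ln 2 divided by its half-life (about 4.47 billion years for uranium-238). Measuring the lead-to-uranium ratios in the meteorite therefore fixes how long the decay has been running.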
“When we’re looking at these meteorites, we’re looking at something that’s close to the material when it formed early in the Solar System’s history,” Greer said. The meteorite held 2,600 organic, or carbon-containing compounds, the researchers reported in the study. Because the meteorite was mostly unchanged since 4.5 billion years ago, these compounds likely are similar to the ones that other meteorites brought to a young Earth, some of which “might have been incorporated into life,” Heck said. The transformation from extraterrestrial organic compounds into the first microbial life on Earth is “a big step” that is still shrouded in mystery, but evidence suggests that organics are common in meteorites – even in thermally metamorphosed meteorites such as the one that landed in Michigan, he added. Meteor bombardment was also more frequent for a young Earth than it is today, “so we are pretty certain that the input from meteorites into the organic inventory on Earth was important,” for seeding life, Heck said. The findings were published online Oct. 27 in the journal Meteoritics & Planetary Science.
It was Jefferson whose intellect, scientific prowess and foresight led him to realize in 1803 that our continent needed to be explored beyond its border at the Mississippi River. With the acquisition of the Louisiana Purchase, this need became more compelling. Jefferson believed that there was a Northwest Passage, a waterway of adjoining rivers that could connect the Missouri River to the Pacific. Congress approved $2500 for this expedition. Jefferson selected his secretary, Captain Meriwether Lewis, to organize the "Corps of Discovery" to establish this route and bring back specimens from nature, maps and navigation data. Lewis then chose his close friend Lieutenant William Clark to be his co-leader of the "Corps of Discovery." Lewis received three weeks of studies in celestial observations under Andrew Ellicott, an eminent astronomer-surveyor. He also received tutoring in botany, fossils and more lessons from Robert Patterson in the determination of latitude and longitude. Lewis and Clark carried the necessary instruments for measuring the altitude of celestial bodies (sextant and quadrant), a chronometer for determining longitude and compasses to determine course. They also carried the best available maps of the region, an Astronomical Ephemeris and Nautical Almanac, a Practical Introduction to Spherics and Nautical Astronomy and tables for finding latitude and longitude. It is to be noted that Lewis and Clark used an artificial horizon in their celestial observations, as a clear horizon was not always available and the terrain elevation above sea level was not generally known. At sea the sextant measured the angle to a celestial body, which was then corrected for height above sea level (dip angle). Clark maintained a daily record of courses and distances traveled and frequently mapped the regions encountered by taking bearings and estimating distances to references. The team traveled on a 55-foot masted keelboat and canoes when on the waterways. When they reached their Pacific Coast destination in Oregon territory, which they named Fort Clatsop (for the neighboring Indian tribe), Clark estimated that they had traveled 4,162 miles from the mouth of the Missouri to the Pacific. This estimate has been cited to be in error by ~1% (40 miles) of the actual distance traveled. It is claimed that Lewis' celestial observations at Fort Clatsop, when reduced to latitude and longitude, would locate the site within 4 miles of its actual position. How were these navigational accomplishments achieved? a. Clark used dead reckoning, and Lewis used meridian transits of the Sun for latitude and lunar distances to establish Greenwich time and longitude. b. Lewis used eclipse tables to establish longitude and Polaris to establish latitude; Clark used inductive reckoning. c. Lewis used Viking tables for longitude and meridian transits to establish latitude; Clark used dead reckoning. d. All of the above. The answer is a. Clark, trained in surveying and map making, maintained a daily log of courses and distances traveled and transferred the information to his map. Courses were determined from his compass. He could determine the magnetic variation by comparing compass magnetic north to true north. True north could be obtained by taking a bearing of Polaris (which traced a circle approximately 1° in radius around the celestial pole). Knowing the magnetic variation, he could plot his dead reckoning position relative to true north. He could determine the speed of the boat by timing a log chip dropped in the river along the side of the boat.
If the log chip traversed the boat's length in 7 1/3 seconds, for example, he would know that he was traveling about 5 miles per hour. However, it is doubtful that Clark could achieve a dead reckoning error of 1% without compensatory errors. He relied upon a compass with an inherent error of at least a degree, and his estimates of speed and distance were in error by about 5% to 10%, if not more. Both Lewis and Clark obtained the data for determining latitude and longitude by making equal altitude measurements (before and after noon) of the Sun using the sextant or quadrant and chronometer, and were capable of reducing the data. The actual reduction of the data (which was recorded on tabular forms) to establish longitude by lunar distances en route was accomplished at West Point by mathematicians after the expedition was completed. Lewis and Clark were instructed to measure the altitude of the Sun at least two hours before noon, set the instrument down, and wait until the Sun returned to the same altitude in the afternoon, as verified by observation. The two times were then averaged to establish the time the Sun was on their meridian. This was the local apparent noon. Subtracting the time of noon at Greenwich (obtained from the Nautical Almanac) from the time recorded for the local noon (in Greenwich time) would yield the difference in time of the two locations. Multiplying the time difference by 15°/hr would yield longitude. If the altitude of the Sun were taken and plotted periodically between the initial and final observations, one could determine latitude, which would be calculated at the midpoint between the initial and final observations, when the Sun reached its highest altitude and was on the observer's meridian. This whole procedure was known as determining local apparent noon (Figures 1 and 2). One of the watches could be reset to local time on the basis of this procedure (allowing the chronometer to maintain Greenwich time). The chronometer was regulated prior to the expedition, which meant that its error rate was known and could be accounted for in the computations. Greenwich time and longitude were also to be obtained by measurements of lunar distances performed by Lewis and Clark. Lunar distances was a technique for determining Greenwich time and longitude by measuring the angular distance of the Moon from the Sun or one of the selected stars and measuring their altitudes using the sextant. A tedious calculation using a spherical triangle was employed to clear the distance of refraction and parallax effects for each measured altitude and other errors. This information was compared with tabulated data to obtain Greenwich time and longitude. This technique was conceived in the 15th century and underwent perfection over the centuries. It was a very difficult procedure for most navigators. 1. Establishing local apparent noon: Lewis and Clark conducted this observation of the Sun and established their longitude from it. Conversion factors: 15° = 1 hour; 1° = 4 minutes; 15' = 1 minute; 1' = 4 seconds (° = degree of arc, ' = minute of arc, " = second of arc; 1' = 60"). Add the Greenwich time of the initial observation to that of the final observation (one of the observations was recorded at 17hr 31min Greenwich time): the sum of the observations for the meridian transit is 39hr 02min, and 39hr 02min / 2 = 19hr 31min GT (the Sun is on your meridian). Noon at Greenwich (0° longitude) from the tables was at 1204 1/2. Subtracting the Greenwich noon time from the Greenwich time of the local noon yields 7hr 26 1/2 min. Using the conversion factors above, convert the components of the time difference to degrees and arc minutes and add them: 15°/hr x (7 hrs.) = 105°; 0.25°/min. x (26 1/2 min.)
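As an illustrative aside (not part of the original article), the log-chip arithmetic can be checked directly in a few lines of R. The 55-foot boat length is taken from the passage above; the conversion factors are the usual ones (3600 seconds per hour, 5280 feet per mile).
# Minimal sketch: boat speed from timing a log chip along the length of the keelboat.
boat_length_ft <- 55                    # length of the keelboat, from the article
transit_time_s <- 7 + 1/3               # log chip takes 7 1/3 seconds to pass the boat
speed_ft_per_s <- boat_length_ft / transit_time_s
speed_mph <- speed_ft_per_s * 3600 / 5280   # feet per second to miles per hour
round(speed_mph, 1)                     # about 5.1, i.e. "about 5 miles per hour"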
= 6° 38'. Longitude = 105° + 6° 38' = 111° 38' W. 2. A time diagram for determining longitude at the Beaverhead fork by meridian transit of the Sun is shown in Figure 3. 3. Latitude by meridian transit of the Sun. The observation was made just south of Dillon, Montana, at the fork of the Beaverhead River. At meridian transit of the Sun, its elevation was 60° 02.4' (after corrections for refraction and semi-diameter). Latitude = (90° - h) + d. Given that the declination "d" of the Sun is 15° 15.4' N and h = 60° 02.4', Lewis finds his latitude as 45° 13' N and longitude as 111° 38' W. In reality Lewis and Clark fully calculated longitude by lunar distances only once, early in their journey up the Missouri River. It was left for the mathematicians at West Point to reduce the extensive data recorded on the expedition. So vexing was the task of reducing the data for longitude by lunar distances that F.R. Hassler, a West Point mathematics instructor, never succeeded in completing the calculations, casting doubt as to whether Fort Clatsop was located by lunar distances. If it was located with the four-mile accuracy claimed, that may have been achieved by meridian transit calculations. A lunar positions table (extracted from the British Almanac issued in 1766 for the year 1767) is shown in Figure 4. A diagram of the lunar distances spherical triangle showing the angles to be measured and the angle to be calculated is shown in Figure 5. Careful preparation for the expedition, the use of the finest available maps of the region, the creation of maps and charts en route and the recorded data made it possible for Lewis and Clark to accomplish their goal and preserve the Northwest region beyond the Louisiana Purchase for later claim by the United States. On lunar distances: George Vancouver (earlier a midshipman under Captain Cook), an experienced navigator, utilized lunar distances in establishing the longitude of Nootka, a port on the west coast of Vancouver Island, in 1792. He and his sailing master Lt. Whidbey made 13 sets of observations with an average difference from the mean value of 8.7 minutes of longitude (5.7 nmi at his latitude) and a standard deviation of the sets of 10.18 minutes of longitude. Each set was an average of from two to eight sets of lunar distances. The average number of observations in one set was 7.5. Since the scatter of a number of observations is reduced by the square root of the number of observations averaged, the standard deviation should be multiplied by the square root of 7.5, or 2.7. Therefore for a single lunar distance observation, the expected random error should have been about 30 minutes of longitude (19.5 nmi at latitude 49.5°). At sea one could expect not to obtain better than one degree of accuracy from a single lunar distance observation.
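For readers who wish to verify the arithmetic in the worked example above, here is a brief sketch (added for illustration only; the numbers are those quoted in the example, and the variable names are ours, not the expedition's notation).
# Minimal sketch: longitude from local apparent noon, latitude from meridian altitude.
local_noon_GT_hr  <- 19 + 31/60         # 19hr 31min Greenwich time (Sun on the observer's meridian)
greenwich_noon_hr <- 12 + 4.5/60        # noon at Greenwich, 12hr 04 1/2 min, from the tables
longitude_deg <- (local_noon_GT_hr - greenwich_noon_hr) * 15   # 15 degrees of longitude per hour
round(longitude_deg, 2)                 # about 111.63 degrees, i.e. roughly 111° 38' W
h_deg <- 60 + 2.4/60                    # observed meridian altitude of the Sun, 60° 02.4'
d_deg <- 15 + 15.4/60                   # declination of the Sun, 15° 15.4' N
latitude_deg <- (90 - h_deg) + d_deg    # latitude = (90° - h) + d
round(latitude_deg, 2)                  # about 45.22 degrees, i.e. roughly 45° 13' N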
Progressivism in the United States is a broadly based reform movement that reached its height early in the 20th century and is generally considered to be middle class and reformist in nature. It arose as a response to the vast changes brought by modernization, such as the growth of large corporations, pollution and fears of corruption in American politics. In the 21st century, progressives continue to embrace concepts such as environmentalism and social justice. Social progressivism, the view that governmental practices ought to be adjusted as society evolves, forms the ideological basis for many American progressives. Historian Alonzo Hamby defined progressivism as the "political movement that addresses ideas, impulses, and issues stemming from modernization of American society. Emerging at the end of the nineteenth century, it established much of the tone of American politics throughout the first half of the century." Historians debate the exact contours, but generally date the "Progressive Era" from the 1890s to either World War I or the onset of the Great Depression, in response to the perceived excesses of the Gilded Age. Many of the core principles of the Progressive Movement focused on the need for efficiency in all areas of society. Purification to eliminate waste and corruption was a powerful element, as was the Progressives' support of workers' compensation, improved child labor laws, minimum wage legislation, a limit on the maximum hours that workers could work, a graduated income tax, and women's right to vote. According to historian William Leuchtenburg: The Progressives believed in the Hamiltonian concept of positive government, of a national government directing the destinies of the nation at home and abroad. They had little but contempt for the strict construction of the Constitution by conservative judges, who would restrict the power of the national government to act against social evils and to extend the blessings of democracy to less favored lands. The real enemy was particularism, state rights, limited government. Progressives repeatedly warned that illegal voting was corrupting the political system. They especially identified big-city bosses, working with saloon keepers and precinct workers, as the culprits in stuffing the ballot box. The solution to purifying the vote included prohibition (designed to close down the saloons), voter registration requirements (designed to end multiple voting), and literacy tests (designed to minimize the number of ignorant voters). All the Southern states (and Oklahoma) used devices to disenfranchise black voters during the Progressive Era. Typically the progressive elements in those states pushed for disenfranchisement, often fighting against the conservatism of the Black Belt whites. A major reason given was that whites routinely purchased black votes to control elections, and it was easier to disenfranchise blacks than to go after powerful white men. In the North, Progressives such as William U'Ren and Robert La Follette argued that the average citizen should have more control over his government. The Oregon System of "Initiative, Referendum, and Recall" was exported to many states, including Idaho, Washington, and Wisconsin. Many progressives, such as George M. Forbes, president of Rochester's Board of Education, hoped to make government in the U.S.
more responsive to the direct voice of the American people when he said: [W]e are now intensely occupied in forging the tools of democracy, the direct primary, the initiative, the referendum, the recall, the short ballot, commission government. But in our enthusiasm we do not seem to be aware that these tools will be worthless unless they are used by those who are aflame with the sense of brotherhood...The idea [of the social centers movement is] to establish in each community an institution having a direct and vital relation to the welfare of the neighborhood, ward, or district, and also to the city as a whole. Philip J. Ethington seconds this high view of direct democracy, saying: initiatives, referendums, and recalls, along with direct primaries and the direct election of US Senators, were the core achievements of 'direct democracy' by the Progressive generation during the first two decades of the twentieth century. Progressives fought for women's suffrage to purify the elections using supposedly purer female voters. Progressives in the South supported the elimination of supposedly corrupt black voters from the election booth. Historian Michael Perman says that in both Texas and Georgia, "disfranchisement was the weapon as well as the rallying cry in the fight for reform"; and in Virginia, "the drive for disfranchisement had been initiated by men who saw themselves as reformers, even progressives." While the ultimate significance of the progressive movement for today's politics is still up for debate, Alonzo L. Hamby asks: What were the central themes that emerged from the cacophony [of progressivism]? Democracy or elitism? Social justice or social control? Small entrepreneurship or concentrated capitalism? And what was the impact of American foreign policy? Were the progressives isolationists or interventionists? Imperialists or advocates of national self-determination? And whatever they were, what was their motivation? Moralistic utopianism? Muddled relativistic pragmatism? Hegemonic capitalism? Not surprisingly, many battered scholars began to shout 'no mas!' In 1970, Peter Filene declared that the term 'progressivism' had become meaningless. The Progressives typically concentrated on city and state government, looking for waste and better ways to provide services as the cities grew rapidly. These changes led to a more structured system: power that had been centralized within the legislature was now more locally focused. The changes were made to the system to make legal processes, market transactions, bureaucratic administration, and democracy easier to manage, thus putting them under the classification of "Municipal Administration". There was also a change in authority for this system; authority that had previously been poorly organized was now given to professionals, experts, and bureaucrats for these services. These changes led to a more solid type of municipal administration compared to the old system, which was underdeveloped and poorly constructed. The Progressives mobilized concerned middle class voters, as well as newspapers and magazines, to identify problems and concentrate reform sentiment on them. Many Protestants focused on the saloon as the power base for corruption, as well as violence and family disruption, so they tried to get rid of the entire saloon system through prohibition. Others (like Jane Addams in Chicago) promoted Settlement Houses. Early municipal reformers included Hazen S.
Pingree (mayor of Detroit in the 1890s) and Tom L. Johnson in Cleveland, Ohio. In 1901, Johnson won election as mayor of Cleveland on a platform of just taxation, home rule for Ohio cities, and a 3-cent streetcar fare. Columbia University President Seth Low was elected mayor of New York City in 1901 on a reform ticket. Many progressives such as Louis Brandeis hoped to make American governments better able to serve the people's needs by making governmental operations and services more efficient and rational. Rather than making legal arguments against ten-hour workdays for women, he used "scientific principles" and data produced by social scientists documenting the high costs of long working hours for both individuals and society. The progressives' quest for efficiency was sometimes at odds with the progressives' quest for democracy. Taking power out of the hands of elected officials and placing that power in the hands of professional administrators reduced the voice of the politicians and in turn reduced the voice of the people. Centralized decision-making by trained experts and reduced power for local wards made government less corrupt but more distant and isolated from the people it served. Progressives who emphasized the need for efficiency typically argued that trained independent experts could make better decisions than the local politicians. Thus Walter Lippmann in his influential Drift and Mastery (1914), stressing the "scientific spirit" and "discipline of democracy," called for a strong central government guided by experts rather than public opinion. One example of progressive reform was the rise of the city manager system, in which paid, professional engineers ran the day-to-day affairs of city governments under guidelines established by elected city councils. Many cities created municipal "reference bureaus" which did expert surveys of government departments looking for waste and inefficiency. After in-depth surveys, local and even state governments were reorganized to reduce the number of officials and to eliminate overlapping areas of authority between departments. City governments were reorganized to reduce the power of local ward bosses and to increase the powers of the city council. Governments at every level began developing budgets to help them plan their expenditures (rather than spending money haphazardly as needs arose and revenue became available). Governor Frank Lowden of Illinois showed a "passion for efficiency" as he streamlined state government. Corruption represented a source of waste and inefficiency in the government. William U'Ren in Oregon, and Robert M. La Follette Sr. in Wisconsin, and others worked to clean up state and local governments by passing laws to weaken the power of machine politicians and political bosses. In Wisconsin, La Follette pushed through an open primary system that stripped party bosses of the power to pick party candidates. The Oregon System, which included a "Corrupt Practices Act", a public referendum, and a state-funded voter's pamphlet among other reforms was exported to other states in the northwest and Midwest. Its high point was in 1912, after which they detoured into a disastrous third party status. Early progressive thinkers such as John Dewey and Lester Ward placed a universal and comprehensive system of education at the top of the progressive agenda, reasoning that if a democracy were to be successful, its leaders, the general public, needed a good education. 
Progressives worked hard to expand and improve public and private education at all levels. Modernization of society, they believed, necessitated the compulsory education of all children, even if the parents objected. Progressives turned to educational researchers to evaluate the reform agenda by measuring numerous aspects of education, later leading to standardized testing. Many educational reforms and innovations generated during this period continued to influence debates and initiatives in American education for the remainder of the 20th century. One of the most apparent legacies of the Progressive Era left to American education was the perennial drive to reform schools and curricula, often as the product of energetic grass-roots movements in the city. Since progressivism was and continues to be 'in the eyes of the beholder,' progressive education encompasses very diverse and sometimes conflicting directions in educational policy. Such enduring legacies of the Progressive Era continue to interest historians. Progressive Era reformers stressed 'object teaching,' meeting the needs of particular constituencies within the school district, equal educational opportunity for boys and girls, and avoiding corporal punishment. Gamson (2003) examines the implementation of progressive reforms in three city school districts—Seattle, Washington, Oakland, California, and Denver, Colorado—during 1900–28. Historians of educational reform during the Progressive Era tend to highlight the fact that many progressive policies and reforms were very different and, at times, even contradictory. At the school district level, contradictory reform policies were often especially apparent, though there is little evidence of confusion among progressive school leaders in Seattle, Oakland, and Denver. District leaders in these cities, including Frank B. Cooper in Seattle and Fred M. Hunter in Oakland, often employed a seemingly contradictory set of reforms: local progressive educators consciously sought to operate independently of national progressive movements; they preferred reforms that were easy to implement; and they were encouraged to mix and blend diverse reforms that had been shown to work in other cities. The reformers emphasized professionalization and bureaucratization. The old system whereby ward politicians selected school employees was dropped in the case of teachers and replaced by a merit system requiring a college-level education in a normal school (teacher's college). The rapid growth in size and complexity of the large urban school systems facilitated stable employment for women teachers and provided senior teachers greater opportunities to mentor younger teachers. By 1900 in Providence, Rhode Island, most women remained as teachers for at least 17.5 years, indicating teaching had become a significant and desirable career path for women. Many progressives hoped that by regulating large corporations they could liberate human energies from the restrictions imposed by industrial capitalism. Yet the progressive movement was split over how corporations should be regulated. Pro-labor progressives such as Samuel Gompers argued that industrial monopolies were unnatural economic institutions which suppressed the competition which was necessary for progress and improvement. United States antitrust law is the body of laws that prohibits anti-competitive behavior (monopoly) and unfair business practices.
Presidents Theodore Roosevelt and William Howard Taft supported trust-busting. During their presidencies, the otherwise-conservative Taft brought down 90 trusts in four years while Roosevelt took down 44 in 7 1/2 years in office. Progressives such as Benjamin Parke De Witt argued that in a modern economy, large corporations and even monopolies were both inevitable and desirable. With their massive resources and economies of scale, large corporations offered the U.S. advantages which smaller companies could not offer. Yet, these large corporations might abuse their great power. The federal government should allow these companies to exist but regulate them for the public interest. President Theodore Roosevelt generally supported this idea and was later to incorporate it as part of his "New Nationalism". Progressives set up training programs to ensure that welfare and charity work would be undertaken by trained professionals rather than warm-hearted amateurs. Jane Addams of Chicago's Hull House typified the leadership of residential, community centers operated by social workers and volunteers and located in inner city slums. The purpose of the settlement houses was to raise the standard of living of urbanites by providing adult education and cultural enrichment programs. Child labor laws were designed to prevent the overuse of children in the newly emerging industries. The goal of these laws was to give working class children the opportunity to go to school and to mature more institutionally, thereby liberating the potential of humanity and encouraging the advancement of humanity. Factory owners generally did not want this progression because of lost workers. They used Charles Dickens as a symbol that the working conditions spark imagination. This initiative failed, with child labor laws being enacted anyway. Labor unions grew steadily until 1916, then expanded fast during the war. In 1919 a wave of major strikes alienated the middle class; the strikes were lost, which alienated the workers. In the 1920s the unions were in the doldrums; in 1924 they supported La Follette's Progressive party, but he only carried his base in Wisconsin. The American Federation of Labor under Samuel Gompers after 1907 began supporting the Democrats, who promised more favorable judges. The Republicans appointed pro-business judges. Theodore Roosevelt and his third party also supported such goals as the eight-hour work day, improved safety and health conditions in factories, workers' compensation laws, and minimum wage laws for women. Most progressives, especially in rural areas, adopted the cause of prohibition. They saw the saloon as political corruption incarnate, and bewailed the damage done to women and children. They believed the consumption of alcohol limited mankind's potential for advancement. Progressives achieved success first with state laws then with the enactment of the Eighteenth Amendment to the U.S. Constitution in 1919. The golden day did not dawn; enforcement was lax, especially in the cities where the law had very limited popular support and where notorious criminal gangs, such as the Chicago gang of Al Capone made a crime spree based on illegal sales of liquor in speakeasies. The "experiment" (as President Hoover called it) also cost the treasury large sums of taxes and the 18th amendment was repealed by the Twenty-first Amendment to the U.S. Constitution in 1933. 
Some Progressives sponsored eugenics as a solution to excessively large or underperforming families, hoping that birth control would enable parents to focus their resources on fewer, better children. Progressive leaders like Herbert Croly and Walter Lippmann indicated their classically liberal concern over the danger posed to the individual by the practice of eugenics. During the term of the progressive President Theodore Roosevelt (1901–1909), and influenced by the ideas of 'philosopher-scientists' such as George Perkins Marsh, John Wesley Powell, John Muir, Lester Frank Ward and W. J. McGee, the largest government-funded conservation-related projects in U.S. history were undertaken: On March 14, 1903, President Roosevelt created the first National Bird Preserve (the beginning of the Wildlife Refuge system) on Pelican Island, Florida. In all, by 1909, the Roosevelt administration had created an unprecedented 42 million acres (170,000 km²) of United States National Forests, 53 National Wildlife Refuges and 18 areas of "special interest", such as the Grand Canyon. In addition, Roosevelt approved the Newlands Reclamation Act of 1902, which gave subsidies for irrigation in 13 (eventually 20) western states. Another conservation-oriented bill was the Antiquities Act of 1906 that protected large areas of land by allowing the President to declare areas meriting protection to be National Monuments. The Inland Waterways Commission was appointed by Roosevelt on March 14, 1907 to study the river systems of the United States, including the development of water power, flood control, and land reclamation. In the early 20th century, politicians of the Democratic and Republican parties, Lincoln–Roosevelt League Republicans (in California) and Theodore Roosevelt's 1912 Progressive ("Bull Moose") Party all pursued environmental, political, and economic reforms. Chief among these aims was the pursuit of trust busting (the breaking up of very large monopolies), along with support for labor unions, public health programs, decreased corruption in politics, and environmental conservation. The Progressive Movement enlisted support from both major parties (and from minor parties as well). One leader, Democrat William Jennings Bryan, had won both the Democratic Party and the Populist Party nominations in 1896. At the time, the great majority of other major leaders had been opposed to Populism. When Roosevelt left the Republican Party in 1912, he took with him many of the intellectual leaders of progressivism, but very few political leaders. The Republican Party then became notably more committed to business-oriented and efficiency-oriented progressivism, typified by Taft and Herbert Hoover. Equally significant to progressive-era reform were the crusading journalists, known as muckrakers. These journalists publicized, to middle class readers, economic privilege, political corruption, and social injustice. Their articles appeared in McClure's Magazine and other reform periodicals. Some muckrakers focused on corporate abuses. Ida Tarbell, for instance, exposed the activities of the Standard Oil Company. In The Shame of the Cities (1904), Lincoln Steffens dissected corruption in city government. In Following the Color Line (1908), Ray Stannard Baker criticized race relations. Other muckrakers assailed the Senate, railroad companies, insurance companies, and fraud in patent medicine. Novelists, too, criticized corporate injustices.
Theodore Dreiser drew harsh portraits of a type of ruthless businessman in The Financier (1912) and The Titan (1914). In The Jungle (1906), Socialist Upton Sinclair repelled readers with descriptions of Chicago's meatpacking plants, and his work led to support for remedial food safety legislation. Leading intellectuals also shaped the progressive mentality. In Dynamic Sociology (1883) Lester Frank Ward laid out the philosophical foundations of the Progressive movement and attacked the laissez-faire policies advocated by Herbert Spencer and William Graham Sumner. In The Theory of the Leisure Class (1899), Thorstein Veblen attacked the "conspicuous consumption" of the wealthy. Educator John Dewey emphasized a child-centered philosophy of pedagogy, known as progressive education, which affected schoolrooms for three generations. Income inequality in the United States has been on the rise since 1970, as the wealthy continue to hold more and more wealth and income. For example, 95% of income gains from 2009 to 2013 went to the top 1% of wage earners in the United States. Progressives have recognized that lower union rates, weak policy, globalization, and other drivers have caused the gap in income. The rise of income inequality has led Progressives to draft legislation including, but not limited to, reforming Wall Street, reforming the tax code, reforming campaign finance, closing loopholes, and keeping domestic work. Progressives began to demand stronger Wall Street regulation after they perceived deregulation and relaxed enforcement as leading to the financial crisis of 2008. Passing the Dodd-Frank financial regulatory act in 2010 provided increased oversight on financial institutions and the creation of new regulatory agencies, but many Progressives argue its broad framework allows for financial institutions to continue to take advantage of consumers and the government. Bernie Sanders, among others, has advocated to reimplement Glass-Steagall for its stricter regulation and to break up the banks because of financial institutions' market share being concentrated in fewer corporations than progressives would like. In 2009, the Congressional Progressive Caucus outlined five key healthcare principles they intended to pass into law. The CPC mandated a nationwide public option, affordable health insurance, insurance market regulations, an employer insurance provision mandate, and comprehensive services for children. In March 2010, Congress passed the Patient Protection and Affordable Care Act, which was intended to increase the affordability and efficiency of the United States healthcare system. Although considered a success by progressives, many argued that it didn't go far enough in achieving healthcare reform, as exemplified with the Democrats' failure in achieving a national public option. In recent decades, Single-payer healthcare has become an important goal in healthcare reform for progressives. In the 2016 Democratic Primary, progressive Democratic Socialist presidential candidate Bernie Sanders raised the issue of a single-payer healthcare system, citing his belief that millions of Americans are still paying too much for health insurance, and arguing that millions more don't receive the care they need. In 2016, an effort was made to implement a single-payer healthcare system in the state of Colorado, known as ColoradoCare (Amendment 69). Senator Bernie Sanders held rallies in Colorado in support of the Amendment leading up to the vote. 
Despite high-profile support, Amendment 69 failed to pass, with just 21.23% of voting Colorado residents voting in favor, and 78.77% against. Adjusted for inflation, the minimum wage peaked in 1968 at $8.54 (in 2014 dollars). Progressives believe that stagnating wages perpetuate income inequality and that raising the minimum wage is a necessary step to combat inequality. If the minimum wage had grown at the rate of productivity growth in the United States, it would be $21.72 an hour, nearly three times as much as the current $7.25 an hour. Popular progressives, such as Senator Bernie Sanders and Rep. Keith Ellison, have endorsed a federally mandated wage increase to $15 an hour. The movement has already seen success with its implementation in California with the passing of a bill to raise the minimum wage by $1 every year until reaching $15 an hour in 2021. New York workers are lobbying for similar legislation as many continue to rally for a minimum wage increase as part of the Fight for $15 movement. Black Lives Matter is an international activist group that fights against police brutality and systemic racism. Black Lives Matter has organized protests against the deaths of African-Americans caused by police actions and crimes. These include protests against the deaths of Michael Brown, Tamir Rice, Eric Garner, Freddie Gray, and Korryn Gaines. With the rise in popularity of self-proclaimed progressives such as Bernie Sanders and Elizabeth Warren, the term began to carry greater cultural currency, particularly in the 2016 Democratic primaries. While answering a question from CNN moderator Anderson Cooper regarding her willingness to shift positions during an October 2015 debate, Hillary Clinton referred to herself as a "progressive who likes to get things done," drawing the ire of a number of Sanders supporters and other critics from her left. Questions about the precise meaning of the term have persisted within the Democratic Party and without since the election of Donald Trump in the 2016 US presidential election, with some candidates using it to indicate their affiliation with the left flank of the party. As such, "progressive" and "progressivism" are essentially contested concepts, with different groups and individuals defining the terms in different (and sometimes contradictory) ways towards different (and sometimes contradictory) ends. Following the first progressive movement of the early 20th century, two later short-lived parties have also identified as "progressive". In 1924, Wisconsin Senator Robert La Follette ran for president on the "Progressive party" ticket. With his crusade, La Follette won the support of labor unions, Germans and Socialists. He carried only Wisconsin, and the party vanished outside that state; within Wisconsin it remained a force until the 1940s. A third party was initiated in 1948 by former Vice President Henry A. Wallace as a vehicle for his campaign for president. He saw the two parties as reactionary and war-mongering, and attracted support from left-wing voters who opposed the Cold War policies that had become a national consensus. Most liberals, New Dealers, and especially the CIO unions, denounced the party because it was increasingly controlled by Communists. It faded away after winning 2% of the vote in 1948. Progressivism emerged as a response to the excesses of the Gilded Age [...]
[Progressives] fought for worker's [sic] compensation, child labor laws, minimum wage and maximum hours legislation; they enacted anti-trust laws, improved living conditions in urban slums, instituted the graduated income tax, won woman the right to vote, and the groundwork for Roosevelt's New Deal. La Follette has ever sought to give the people greater power over their affairs. He has favored and now favors the direct election of senators...
New findings published this week in Physical Review Letters suggest that carbon, oxygen, and hydrogen cosmic rays travel through the galaxy toward Earth in a similar way, but, surprisingly, that iron arrives at Earth differently. Learning more about how cosmic rays move through the galaxy helps address a fundamental, lingering question in astrophysics: How is matter generated and distributed across the universe? "So what does this finding mean?" asks John Krizmanic, a senior scientist with UMBC's Center for Space Science and Technology (CSST). "These are indicators of something interesting happening. And what that something interesting is we're going to have to see." Cosmic rays are atomic nuclei--atoms stripped of their electrons--that are constantly whizzing through space at nearly the speed of light. They enter Earth's atmosphere at extremely high energies. Information about these cosmic rays can give scientists clues about where they came from in the galaxy and what kind of event generated them. An instrument on the International Space Station (ISS) called the Calorimetric Electron Telescope (CALET) has been collecting data about cosmic rays since 2015. The data include details such as how many and what kinds of atoms are arriving, and how much energy they're arriving with. The American, Italian, and Japanese teams that manage CALET, including UMBC's Krizmanic and postdoc Nick Cannady, collaborated on the new research. Iron on the move Cosmic rays arrive at Earth from elsewhere in the galaxy at a huge range of energies--anywhere from 1 billion volts to 100 billion billion volts. The CALET instrument is one of extremely few in space that is able to deliver fine detail about the cosmic rays it detects. A graph called a cosmic ray spectrum shows how many cosmic rays are arriving at the detector at each energy level. The spectra for carbon, oxygen, and hydrogen cosmic rays are very similar, but the key finding from the new paper is that the spectrum for iron is significantly different. There are several possibilities to explain the differences between iron and the three lighter elements. The cosmic rays could accelerate and travel through the galaxy differently, although scientists generally believe they understand the latter, Krizmanic says. "Something that needs to be emphasized is that the way the elements get from the sources to us is different, but it may be that the sources are different as well," adds Michael Cherry, physics professor emeritus at Louisiana State University (LSU) and a co-author on the new paper. Scientists generally believe that cosmic rays originate from exploding stars (supernovae), but neutron stars or very massive stars could be other potential sources. An instrument like CALET is important for answering questions about how cosmic rays accelerate and travel, and where they come from. Instruments on the ground or balloons flown high in Earth's atmosphere were the main source of cosmic ray data in the past. But by the time cosmic rays reach those instruments, they have already interacted with Earth's atmosphere and broken down into secondary particles. With Earth-based instruments, it is nearly impossible to identify precisely how many primary cosmic rays and which elements are arriving, plus their energies. But CALET, being on the ISS above the atmosphere, can measure the particles directly and distinguish individual elements precisely. Iron is a particularly useful element to analyze, explains Cannady, a postdoc with CSST and a former Ph.D. 
student with Cherry at LSU. On their way to Earth, cosmic rays can break down into secondary particles, and it can be hard to distinguish between original particles ejected from a source (like a supernova) and secondary particles. That complicates deductions about where the particles originally came from. "As things interact on their way to us, then you'll get essentially conversions from one element to another," Cannady says. "Iron is unique, in that being one of the heaviest things that can be synthesized in regular stellar evolution, we're pretty certain that it is pretty much all primary cosmic rays. It's the only pure primary cosmic ray, where with others you'll have some secondary components feeding into that as well." "Made of stardust" Measuring cosmic rays gives scientists a unique view into high-energy processes happening far, far away. The cosmic rays arriving at CALET represent "the stuff we're made of. We are made of stardust," Cherry says. "And energetic sources, things like supernovas, eject that material from their interiors, out into the galaxy, where it's distributed, forms new planets, solar systems, and... us." "The study of cosmic rays is the study of how the universe generates and distributes matter, and how that affects the evolution of the galaxy," Krizmanic adds. "So really it's studying the astrophysics of this engine we call the Milky Way that's throwing all these elements around." A global effort The Japanese space agency launched CALET and today leads the mission in collaboration with the U.S. and Italian teams. In the U.S., the CALET team includes researchers from LSU; NASA Goddard Space Flight Center; UMBC; University of Maryland, College Park; University of Denver; and Washington University.The new paper is the fifth from this highly successful international collaboration published in PRL, one of the most prestigious physics journals. CALET was optimized to detect cosmic ray electrons, because their spectrum can contain information about their sources. That's especially true for sources that are relatively close to Earth in galactic terms: within less than one-thirtieth the distance across the Milky Way. But CALET also detects the atomic nuclei of cosmic rays very precisely. Now those nuclei are offering important insights about the sources of cosmic rays and how they got to Earth. "We didn't expect that the nuclei - the carbon, oxygen, protons, iron - would really start showing some of these detailed differences that are clearly pointing at things we don't know," Cherry says. The latest finding creates more questions than it answers, emphasizing that there is still more to learn about how matter is generated and moves around the galaxy. "That's a fundamental question: How do you make matter?" Krizmanic says. But, he adds, "That's the whole point of why we went in this business, to try to understand more about how the universe works." Physical Review Letters
Parallax was the first method used by astronomers to find the distance to nearby stars. It relies on measuring the change in angle of the star being observed against more distant background stars as a result of the motion of the earth around the sun. The idea of parallax can be neatly demonstrated using our own eyes. Look at a nearby object and shut one eye. Make a mental note of the object’s position, and then keeping your head still, change which eye is open and which is shut. You should see the object appear to shift. This shift in angle is caused by the differing positions of our two eyes. We rely on this effect all the time to accurately gauge distances. Try throwing and catching a ball with only one eye open to see how much we use parallax. We can use the same principle to measure the distance to stars, using telescopes instead of eyes. However, the nearest stars to Earth, other than the sun, are over four light-years away. This means that we cannot move our two “eyes” apart far enough on Earth to measure an observable change in angle. Fortunately, Earth is not stationary. By taking two measurements at six-month intervals, when Earth is at opposite ends of its orbit around the sun, we can record a measurable change in angle to a nearby star compared to distant background stars (whose own movement is negligible). This angle measurement can be used in combination with the (already known) distance from Earth to the sun to calculate the distance to the star. We see that the distance d (from the sun to the star) is d = 1 AU / tan(p), where 1 AU (astronomical unit) is the distance from Earth to the sun and p is the measured parallax angle (see diagram). Using the small angle approximation, we find that the distance is d ≈ 1 AU / p, with p expressed in radians. Note that regardless of the direction of the star from Earth, there are always two points in Earth’s orbit where the line between them is perpendicular to the line from the sun to the star. You can visualize this more easily in three dimensions using something circular to represent Earth’s orbit. Even to nearby stars, the parallax angles measured are small—the nearest star system, Alpha Centauri, has a parallax angle of just 0.0002 degrees. Astronomers use the smaller unit, the arcsecond, to measure parallax angles, where 3600 arcseconds = 1 degree. From this comes a standard unit of interstellar distances—the parsec. The parsec is defined as the distance to a star that has a parallax of one arcsecond. Measuring the distance to stars by parallax requires highly accurate angle measurements. Even with modern equipment, we can only use parallax to measure the distance to stars up to 10,000 light-years away, which is just 10% of the diameter of our galaxy (the Milky Way). To measure the distances to more distant objects, we are able to use different techniques, such as standard candles and redshift measurements. However, these latter techniques are only possible because of parallax measurements to nearby stars.
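As a supplementary sketch (not part of the original text), the parallax relation can be checked numerically. The 0.0002-degree figure for Alpha Centauri is the one quoted above; the conversions used are the standard ones (3600 arcseconds per degree, and roughly 3.26 light-years per parsec).
# Minimal sketch: distance to a star from its measured parallax angle.
parallax_deg    <- 0.0002               # quoted parallax of Alpha Centauri, in degrees
parallax_arcsec <- parallax_deg * 3600  # 3600 arcseconds per degree
distance_pc     <- 1 / parallax_arcsec  # parsec definition: d [pc] = 1 / p [arcsec]
distance_ly     <- distance_pc * 3.26   # one parsec is roughly 3.26 light-years
c(parallax_arcsec, round(distance_pc, 2), round(distance_ly, 1))  # about 0.72 arcsec, 1.39 pc, 4.5 ly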
The old CBSE Syllabus of Class 9 Maths can be used to compare with the revised syllabus so as to understand how various chapters and topics have been divided to prepare the separate syllabi for Term 1 and Term 2. We have provided here the CBSE Syllabus of Class 9 Maths, which can also be downloaded in PDF format. Students must analyse the complete syllabus to prepare the right strategy for the new academic session in order to achieve good marks in all their period tests and the annual examination. UNIT I: NUMBER SYSTEMS 1. REAL NUMBERS (16 Periods) 1. Review of representation of natural numbers, integers, and rational numbers on the number line. Representation of terminating / non-terminating recurring decimals on the number line through successive magnification. Rational numbers as recurring/ terminating decimals. Operations on real numbers. 2. Examples of non-recurring/non-terminating decimals. Existence of non-rational numbers (irrational numbers) such as √2 and √3, and their representation on the number line. Explaining that every real number is represented by a unique point on the number line and conversely, viz. every point on the number line represents a unique real number. 3. Definition of nth root of a real number. 4. Rationalization (with precise meaning) of real numbers of the type 1/(a + b√x) and 1/(√x + √y) (and their combinations), where x and y are natural numbers and a and b are integers. 5. Recall of laws of exponents with integral powers. Rational exponents with positive real bases (to be done by particular cases, allowing the learner to arrive at the general laws). UNIT II: ALGEBRA 1. POLYNOMIALS (23 Periods) Definition of a polynomial in one variable, with examples and counter examples. Coefficients of a polynomial, terms of a polynomial and zero polynomial. Degree of a polynomial. Constant, linear, quadratic and cubic polynomials. Monomials, binomials, trinomials. Factors and multiples. Zeros of a polynomial. Motivate and state the Remainder Theorem with examples. Statement and proof of the Factor Theorem. Factorization of ax² + bx + c, a ≠ 0 where a, b and c are real numbers, and of cubic polynomials using the Factor Theorem. Recall of algebraic expressions and identities. Verification of identities such as (x + y + z)² = x² + y² + z² + 2xy + 2yz + 2zx, (x ± y)³ = x³ ± y³ ± 3xy(x ± y) and x³ + y³ + z³ - 3xyz = (x + y + z)(x² + y² + z² - xy - yz - zx), and their use in factorization of polynomials. 2. LINEAR EQUATIONS IN TWO VARIABLES (14 Periods) Recall of linear equations in one variable. Introduction to the equation in two variables. Focus on linear equations of the type ax + by + c = 0. Explain that a linear equation in two variables has infinitely many solutions and justify their being written as ordered pairs of real numbers, plotting them and showing that they lie on a line. Graph of linear equations in two variables. Examples, problems from real life, including problems on Ratio and Proportion and with algebraic and graphical solutions being done simultaneously. UNIT III: COORDINATE GEOMETRY COORDINATE GEOMETRY (6 Periods) The Cartesian plane, coordinates of a point, names and terms associated with the coordinate plane, notations, plotting points in the plane. UNIT IV: GEOMETRY 1. INTRODUCTION TO EUCLID'S GEOMETRY (Not for assessment) (6 Periods) History - Geometry in India and Euclid's geometry.
Euclid's method of formalizing observed phenomena into rigorous Mathematics with definitions, common/obvious notions, axioms/postulates and theorems. The five postulates of Euclid. Equivalent versions of the fifth postulate. Showing the relationship between axiom and theorem, for example: (Axiom) 1. Given two distinct points, there exists one and only one line through them. (Theorem) 2. (Prove) Two distinct lines cannot have more than one point in common. 2. LINES AND ANGLES (13 Periods) 1. (Motivate) If a ray stands on a line, then the sum of the two adjacent angles so formed is 180° and the converse. 2. (Prove) If two lines intersect, vertically opposite angles are equal. 3. (Motivate) Results on corresponding angles, alternate angles, interior angles when a transversal intersects two parallel lines. 4. (Motivate) Lines which are parallel to a given line are parallel to each other. 5. (Prove) The sum of the angles of a triangle is 180°. 6. (Motivate) If a side of a triangle is produced, the exterior angle so formed is equal to the sum of the two interior opposite angles. 3. TRIANGLES (20 Periods) 1. (Motivate) Two triangles are congruent if any two sides and the included angle of one triangle are equal to any two sides and the included angle of the other triangle (SAS Congruence). 2. (Prove) Two triangles are congruent if any two angles and the included side of one triangle are equal to any two angles and the included side of the other triangle (ASA Congruence). 3. (Motivate) Two triangles are congruent if the three sides of one triangle are equal to three sides of the other triangle (SSS Congruence). 4. (Motivate) Two right triangles are congruent if the hypotenuse and a side of one triangle are equal (respectively) to the hypotenuse and a side of the other triangle (RHS Congruence). 5. (Prove) The angles opposite to equal sides of a triangle are equal. 6. (Motivate) The sides opposite to equal angles of a triangle are equal. 7. (Motivate) Triangle inequalities and relation between 'angle and facing side' inequalities in triangles. 4. QUADRILATERALS (10 Periods) 1. (Prove) The diagonal divides a parallelogram into two congruent triangles. 2. (Motivate) In a parallelogram opposite sides are equal, and conversely. 3. (Motivate) In a parallelogram opposite angles are equal, and conversely. 4. (Motivate) A quadrilateral is a parallelogram if a pair of its opposite sides is parallel and equal. 5. (Motivate) In a parallelogram, the diagonals bisect each other and conversely. 6. (Motivate) In a triangle, the line segment joining the mid points of any two sides is parallel to the third side and is half of it, and (motivate) its converse. 5. AREA (7 Periods) Review concept of area, recall area of a rectangle. 1. (Prove) Parallelograms on the same base and between the same parallels have equal area. 2. (Motivate) Triangles on the same base (or equal bases) and between the same parallels are equal in area. 6. CIRCLES (15 Periods) Through examples, arrive at definition of circle and related concepts - radius, circumference, diameter, chord, arc, secant, sector, segment, subtended angle. 1. (Prove) Equal chords of a circle subtend equal angles at the center and (motivate) its converse. 2. (Motivate) The perpendicular from the center of a circle to a chord bisects the chord and conversely, the line drawn through the center of a circle to bisect a chord is perpendicular to the chord. 3. (Motivate) There is one and only one circle passing through three given non-collinear points. 4.
(Motivate) Equal chords of a circle (or of congruent circles) are equidistant from the center (or their respective centers) and conversely. 5. (Prove) The angle subtended by an arc at the center is double the angle subtended by it at any point on the remaining part of the circle. 6. (Motivate) Angles in the same segment of a circle are equal. 7. (Motivate) If a line segment joining two points subtends equal angles at two other points lying on the same side of the line containing the segment, the four points lie on a circle. 8. (Motivate) The sum of either pair of opposite angles of a cyclic quadrilateral is 180° and its converse. 7. CONSTRUCTIONS (10 Periods) 1. Construction of bisectors of line segments and angles of measure 60°, 90°, 45°, etc., equilateral triangles. 2. Construction of a triangle given its base, sum/difference of the other two sides and one base angle. 3. Construction of a triangle of given perimeter and base angles. UNIT V: MENSURATION 1. AREAS (4 Periods) Area of a triangle using Heron's formula (without proof) and its application in finding the area of a quadrilateral. 2. SURFACE AREAS AND VOLUMES (12 Periods) Surface areas and volumes of cubes, cuboids, spheres (including hemispheres) and right circular cylinders/cones. UNIT VI: STATISTICS & PROBABILITY 1. STATISTICS (13 Periods) Introduction to Statistics: Collection of data, presentation of data — tabular form, ungrouped / grouped, bar graphs, histograms (with varying base lengths), frequency polygons. Mean, median and mode of ungrouped data. 2. PROBABILITY (9 Periods) History, repeated experiments and observed frequency approach to probability. Focus is on empirical probability. (A large amount of time to be devoted to group and to individual activities to motivate the concept; the experiments to be drawn from real-life situations, and from examples used in the chapter on statistics). MATHEMATICS QUESTION PAPER DESIGN, CLASS IX (2021-22). Time: 3 Hrs. Max. Marks: 80. Internal assessment: Pen Paper Test and Multiple Assessment (5+5); Lab Practical (lab activities to be done from the prescribed books). Prescribed books: 1. Mathematics - Textbook for Class IX - NCERT Publication. 2. Guidelines for Mathematics Laboratory in Schools, Class IX - CBSE Publication. 3. Laboratory Manual - Mathematics, Secondary Stage - NCERT Publication. 4. Mathematics Exemplar Problems for Class IX, NCERT Publication.
Table of Contents
- 1 Enhancing Critical Thinking Skills with Problem-Based Learning
- 1.1 What is Problem-Based Learning?
- 1.2 Enhancing Critical Thinking Skills with PBL
- 1.3 FAQs:
- 1.3.1 Q: How does PBL differ from traditional learning methods?
- 1.3.2 Q: What is the role of the teacher in PBL?
- 1.3.3 Q: In what subjects is PBL most effective?
- 1.3.4 Q: What are the benefits of PBL?
- 1.3.5 Q: How do you create a successful PBL problem?
- 1.3.6 Q: How should students be assessed in a PBL setting?
- 1.3.7 Q: How can PBL be used with online learning?
- 1.4 Conclusion
Enhancing Critical Thinking Skills with Problem-Based Learning
As students continue to compete in the globalized world, it is essential for them to develop critical thinking skills. Critical thinking enables one to analyze problems systematically, carefully evaluate evidence, and come up with a solution that satisfies all the requirements. One way to enhance critical thinking skills is through problem-based learning. In this article, we will take a closer look at how problem-based learning improves the critical thinking skills of students.
What is Problem-Based Learning?
Problem-based learning (PBL) is an innovative teaching method that revolves around the use of real-life scenarios as the basis for learning. Instead of the traditional method of lecturing, PBL involves students working in groups to solve a complex problem, providing a practical approach to learning. The primary goal of PBL is to teach students how to develop sound decision-making skills, work collaboratively, and apply knowledge to real-world situations.
Enhancing Critical Thinking Skills with PBL
PBL provides an excellent platform to enhance critical thinking skills. Here are some ways that PBL can improve students' critical thinking skills:
Promotes analysis of complex problems
When solving a PBL problem, students are required to analyze all aspects of the problem systematically. PBL problems are specifically designed to require that students have a thorough understanding of all aspects of the problem. This enables students to break down complex issues into small parts, making it easier for them to analyze and solve them.
Encourages collaboration
PBL is a group-based approach to learning in which students work in small groups to solve problems. This team approach encourages students to collaborate, share ideas, and work together towards a common goal. Collaboration enhances creativity and innovation and helps students come up with novel solutions to the problems they encounter.
Develops reasoning skills
PBL requires that students analyze situations, review evidence, and evaluate the problem context to determine the root cause. The students must use logic and reason to identify possible solutions, test their hypotheses, and draw conclusions based on the facts presented. These reasoning skills are essential for enhancing critical thinking abilities.
Provides practical learning opportunities
PBL is an ideal method for providing students with hands-on and practical learning opportunities. These experiences allow students to apply their knowledge to solve real-world problems. Practical application of knowledge is a crucial factor in enhancing critical thinking skills, as it forces one to use critical thinking in a practical context.
Improves communication skills
PBL involves team collaboration, which gives students ample opportunities to communicate their ideas, justify their reasoning, and defend their opinions to the rest of the group.
This process provides students with the right forum to improve their communication skills. Effective communication is required in critical thinking, as it enables one to articulate ideas, compare and contrast different viewpoints, and sustain a critical debate.
FAQs:
Q: How does PBL differ from traditional learning methods?
A: PBL is more student-centered, with students taking the lead in their learning. Traditional learning methods are instructor-centric, where the teacher is the center of the learning experience.
Q: What is the role of the teacher in PBL?
A: The teacher plays an essential role in PBL. They provide guidance to the students, help facilitate the learning process, and evaluate the outcomes.
Q: In what subjects is PBL most effective?
A: While PBL can be used to teach any subject, it is most effective in subjects that require students to solve complex problems, such as science, engineering, and math.
Q: What are the benefits of PBL?
A: Some of the benefits of PBL include improved problem-solving skills, critical thinking abilities, enhanced communication, and exposure to practical learning opportunities.
Q: How do you create a successful PBL problem?
A: A successful PBL problem should be complex, open-ended, and realistic. It should simulate real-world issues and be relevant to the students' lives.
Q: How should students be assessed in a PBL setting?
A: Students should be assessed based on their solution to the problem, their ability to collaborate, their communication skills, and the quality of their reasoning.
Q: How can PBL be used with online learning?
A: PBL can be adapted to online learning by using video conferencing, online discussion forums, and other digital technologies to facilitate collaboration, communication, and problem-solving.
Conclusion
In conclusion, PBL is an effective teaching strategy that promotes critical thinking skills by providing practical learning opportunities, encouraging teamwork, and challenging students to use their reasoning skills. The implementation of PBL in classroom settings enhances student engagement, fosters creativity and innovation, and develops the skills required for lifelong learning. Teachers can help students develop their critical thinking skills by incorporating PBL into their curriculum.
Arrays and multiplication by tens and hundreds worksheets help students develop their understanding of multiplication and write their own multiplication equations. The student writes the number shown by the array and the answer in a column. Using an appropriate range, they then create their own arrays, using known number facts, and write a matching multiplication equation. This worksheet is appropriate for first through fifth grade students.
Arrays are useful models for learning multiplication. You can use them to support teaching and assessment in a variety of ways, including highly structured lessons, games, and open investigations. In these math worksheets, children can practice building their own number sentences and multiplying by tens using arrays. Adding an X to each row or column will help them build their three-times table.
Arrays are useful for introducing children to the relationship between multiplication and division. Using the array grid method, children can learn how to divide and multiply by 10 and 100. The concrete image of an array helps them understand the concepts better and visualize the answer they are trying to find. These worksheets are an excellent way to start teaching multiplication and division.
Another way to teach students about multiplication and division is through the use of arrays. Students can build their own three-times tables by building number sentences out of the arrays. Creating an array is a concrete way to teach children how to think about the relationship between multiplication and division. A physical array is an ideal tool for this purpose: it helps children visualize the relationship between the two operations.
Arrays are an excellent way to teach children multiplication and division. The physical array gives the child a concrete image, helps children develop their numeracy skills, and visualizing the answer helps them understand how to build the three-times tables. These worksheets are also useful in teaching children how to build their own arrays, and they can use them to learn about multiplication and division.
Arrays and multiplication are two different concepts, but using arrays as a model of number sentences helps children build the three-times tables. Arrays are also useful in a variety of situations: they can be used as an object in a game, in a classroom, or in a child's own home. The worksheets provide a solid foundation for learning to multiply by ten and by a hundred. In addition to teaching multiplication facts, arrays can help children understand number sentences and number properties. Using number sentences as a model for multiplication is a great way to get children to learn multiplication facts. While it is not necessary to use numbers, they can be an important learning tool. Arrays are helpful in building the three-times tables, and they help students understand the inverse relationship between the two operations.
An array is a collection of numbers or objects arranged into rows and columns.
Each row or column is represented by a number disk, and arrows represent the product of a multiplication. When a child completes a worksheet, he or she learns the multiplication facts visually. Aside from building their understanding of multiplication, the worksheets also promote the development of language.
Arrays and multiplication by tens and hundreds worksheets and other activities are designed to help children develop an understanding of the inverse relationship between division and multiplication. Using arrays and multiplication by tens and hundreds is a great way to encourage students to recognize these relationships. This worksheet helps children practice reading and writing while enhancing their ability to visualize their answers.
Using a blank array helps children develop compensating strategies. For example, a child may work out 34 × 9 by first finding 34 × 10 on the array and then compensating, taking away one group of 34, or may break the calculation into easier parts using tens. In the abstract grid method, a child can draw the grid as an overlay on the blank array. This helps children break the process of division into easier parts.
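To make the array model and the compensating strategy concrete, here is a small illustrative Python sketch; the numbers and the helper name array_product are arbitrary choices made for this example and are not taken from the worksheets themselves.

    def array_product(rows, cols):
        """Model rows x cols as a rectangular array of dots and count the dots."""
        grid = [["*"] * cols for _ in range(rows)]
        for row in grid:
            print(" ".join(row))
        return sum(len(row) for row in grid)

    print(array_product(3, 4))     # a 3-by-4 array contains 12 dots, so 3 x 4 = 12

    # Multiplying by 10 or 100 shifts the digits one or two places to the left:
    print(34 * 10, 34 * 100)       # 340 3400

    # A compensating strategy: 34 x 9 is one group of 34 less than 34 x 10.
    print(34 * 10 - 34)            # 306, the same as 34 * 9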
Quicksort is an in-place sorting algorithm. Developed by British computer scientist Tony Hoare in 1959 and published in 1961, it is still a commonly used algorithm for sorting. When implemented well, it can be somewhat faster than merge sort and about two or three times faster than heapsort.
Best-case performance: O(n log n) (simple partition) or O(n) (three-way partition and equal keys)
Worst-case space complexity: O(n) auxiliary (naive) or O(log n) auxiliary (Hoare 1962)
Quicksort is a divide-and-conquer algorithm. It works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. For this reason, it is sometimes called partition-exchange sort. The sub-arrays are then sorted recursively. This can be done in-place, requiring small additional amounts of memory to perform the sorting. Quicksort is a comparison sort, meaning that it can sort items of any type for which a "less-than" relation (formally, a total order) is defined. Efficient implementations of Quicksort are not a stable sort, meaning that the relative order of equal sort items is not preserved.
The quicksort algorithm was developed in 1959 by Tony Hoare while he was a visiting student at Moscow State University. At that time, Hoare was working on a machine translation project for the National Physical Laboratory. As a part of the translation process, he needed to sort the words in Russian sentences before looking them up in a Russian-English dictionary, which was in alphabetical order on magnetic tape. After recognizing that his first idea, insertion sort, would be slow, he came up with a new idea. He wrote the partition part in Mercury Autocode but had trouble dealing with the list of unsorted segments. On return to England, he was asked to write code for Shellsort. Hoare mentioned to his boss that he knew of a faster algorithm and his boss bet sixpence that he did not. His boss ultimately accepted that he had lost the bet. Later, Hoare learned about ALGOL and its ability to do recursion, which enabled him to publish the code in Communications of the Association for Computing Machinery, the premier computer science journal of the time.
Quicksort gained widespread adoption, appearing, for example, in Unix as the default library sort subroutine. Hence, it lent its name to the C standard library subroutine qsort and is used in the reference implementation of Java. Robert Sedgewick's PhD thesis in 1975 is considered a milestone in the study of Quicksort, in which he resolved many open problems related to the analysis of various pivot selection schemes, including Samplesort and adaptive partitioning by Van Emden, as well as the derivation of the expected number of comparisons and swaps. Jon Bentley and Doug McIlroy incorporated various improvements for use in programming libraries, including a technique to deal with equal elements and a pivot scheme known as pseudomedian of nine, where a sample of nine elements is divided into groups of three and then the median of the three medians from the three groups is chosen. Bentley described another simpler and compact partitioning scheme in his book Programming Pearls that he attributed to Nico Lomuto. Later Bentley wrote that he used Hoare's version for years but never really understood it, while Lomuto's version was simple enough to prove correct. Bentley described Quicksort as the "most beautiful code I had ever written" in the same essay.
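For orientation before the detailed partition schemes below, the following is a minimal Python sketch of the divide-and-conquer idea just described. It is deliberately not in-place (it allocates new lists), so it illustrates only the recursive structure, not the memory behavior of the in-place implementations discussed in the rest of this article.

    def quicksort(xs):
        """Return a sorted copy of xs (not in place; for illustration only)."""
        if len(xs) < 2:
            return xs
        pivot, rest = xs[0], xs[1:]
        smaller = [x for x in rest if x < pivot]           # elements less than the pivot
        larger_or_equal = [x for x in rest if x >= pivot]  # all remaining elements
        return quicksort(smaller) + [pivot] + quicksort(larger_or_equal)

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))             # [1, 1, 2, 3, 4, 5, 6, 9]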
Lomuto's partition scheme was also popularized by the textbook Introduction to Algorithms, although it is inferior to Hoare's scheme because it does three times more swaps on average and degrades to O(n²) runtime when all elements are equal. In 2009, Vladimir Yaroslavskiy proposed a new Quicksort implementation using two pivots instead of one. On the Java core library mailing lists, he initiated a discussion claiming his new algorithm to be superior to the runtime library's sorting method, which was at that time based on the widely used and carefully tuned variant of classic Quicksort by Bentley and McIlroy. Yaroslavskiy's Quicksort was chosen as the new default sorting algorithm in Oracle's Java 7 runtime library after extensive empirical performance tests.
Quicksort is a type of divide-and-conquer algorithm for sorting an array, based on a partitioning routine; the details of this partitioning can vary somewhat, so that quicksort is really a family of closely related algorithms. Applied to a range of at least two elements, partitioning produces a division into two consecutive non-empty sub-ranges, in such a way that no element of the first sub-range is greater than any element of the second sub-range. After applying this partition, quicksort then recursively sorts the sub-ranges, possibly after excluding from them an element at the point of division that is at this point known to be already in its final location. Due to its recursive nature, quicksort (like the partition routine) has to be formulated so as to be callable for a range within a larger array, even if the ultimate goal is to sort a complete array. The steps for in-place quicksort are:
- If the range has fewer than two elements, return immediately as there is nothing to do. Possibly for other very short lengths a special-purpose sorting method is applied and the remainder of these steps skipped.
- Otherwise pick a value, called a pivot, that occurs in the range (the precise manner of choosing depends on the partition routine, and can involve randomness).
- Partition the range: reorder its elements, while determining a point of division, so that all elements with values less than the pivot come before the division, while all elements with values greater than the pivot come after it; elements that are equal to the pivot can go either way. Since at least one instance of the pivot is present, most partition routines ensure that the value that ends up at the point of division is equal to the pivot, and is now in its final position (but termination of quicksort does not depend on this, as long as sub-ranges strictly smaller than the original are produced).
- Recursively apply the quicksort to the sub-range up to the point of division and to the sub-range after it, possibly excluding from both ranges the element equal to the pivot at the point of division. (If the partition produces a possibly larger sub-range near the boundary where all elements are known to be equal to the pivot, these can be excluded as well.)
The choice of partition routine (including the pivot selection) and other details not entirely specified above can affect the algorithm's performance, possibly to a great extent for specific input arrays. In discussing the efficiency of quicksort, it is therefore necessary to specify these choices first. Here we mention two specific partition methods.
Lomuto partition scheme
This scheme is attributed to Nico Lomuto and popularized by Bentley in his book Programming Pearls and Cormen et al.
in their book Introduction to Algorithms. In most formulations this scheme chooses as the pivot the last element in the array. The algorithm maintains index i as it scans the array using another index j such that the elements at lo through i-1 (inclusive) are less than the pivot, and the elements at i through j (inclusive) are equal to or greater than the pivot. As this scheme is more compact and easy to understand, it is frequently used in introductory material, although it is less efficient than Hoare's original scheme, e.g., when all elements are equal. This scheme degrades to O(n²) when the array is already in order. Various variants have been proposed to boost performance, including various ways to select the pivot, deal with equal elements, use other sorting algorithms such as insertion sort for small arrays, and so on. In pseudocode, a quicksort that sorts elements at lo through hi (inclusive) of an array A can be expressed as:

    // Sorts a (portion of an) array, divides it into partitions, then sorts those
    algorithm quicksort(A, lo, hi) is
        // Ensure indices are in correct order
        if lo >= hi || lo < 0 then
            return
        // Partition array and get the pivot index
        p := partition(A, lo, hi)
        // Sort the two partitions
        quicksort(A, lo, p - 1)   // Left side of pivot
        quicksort(A, p + 1, hi)   // Right side of pivot

    // Divides array into two partitions
    algorithm partition(A, lo, hi) is
        pivot := A[hi]   // Choose the last element as the pivot
        // Temporary pivot index
        i := lo - 1
        for j := lo to hi - 1 do
            // If the current element is less than or equal to the pivot
            if A[j] <= pivot then
                // Move the temporary pivot index forward
                i := i + 1
                // Swap the current element with the element at the temporary pivot index
                swap A[i] with A[j]
        // Move the pivot element to the correct pivot position
        // (between the smaller and larger elements)
        i := i + 1
        swap A[i] with A[hi]
        return i   // the pivot index

Sorting the entire array is accomplished by quicksort(A, 0, length(A) - 1).
Hoare partition scheme
The original partition scheme described by Tony Hoare uses two pointers (indices into the range) that start at both ends of the array being partitioned, then move toward each other, until they detect an inversion: a pair of elements, one greater than the bound (Hoare's term for the pivot value) at the first pointer, and one less than the bound at the second pointer; if at this point the first pointer is still before the second, these elements are in the wrong order relative to each other, and they are then exchanged. After this the pointers are moved inwards, and the search for an inversion is repeated; when eventually the pointers cross (the first points after the second), no exchange is performed; a valid partition is found, with the point of division between the crossed pointers (any entries that might be strictly between the crossed pointers are equal to the pivot and can be excluded from both sub-ranges formed). With this formulation it is possible that one sub-range turns out to be the whole original range, which would prevent the algorithm from advancing. Hoare therefore stipulates that at the end, the sub-range containing the pivot element (which still is at its original position) can be decreased in size by excluding that pivot, after (if necessary) exchanging it with the sub-range element closest to the separation; thus, termination of quicksort is ensured. With respect to this original description, implementations often make minor but important variations.
Notably, the scheme as presented below includes elements equal to the pivot among the candidates for an inversion (so "greater than or equal" and "less than or equal" tests are used instead of "greater than" and "less than" respectively; since the formulation uses do...while rather than repeat...until, this is reflected in the use of strict comparison operators). While there is no reason to exchange elements equal to the bound, this change allows tests on the pointers themselves to be omitted, which are otherwise needed to ensure they do not run out of range. Indeed, since at least one instance of the pivot value is present in the range, the first advancement of either pointer cannot pass across this instance if an inclusive test is used; once an exchange is performed, these exchanged elements are now both strictly ahead of the pointer that found them, preventing that pointer from running off. (The latter is true independently of the test used, so it would be possible to use the inclusive test only when looking for the first inversion. However, using an inclusive test throughout also ensures that a division near the middle is found when all elements in the range are equal, which gives an important efficiency gain for sorting arrays with many equal elements.) The risk of producing a non-advancing separation is avoided in a different manner than described by Hoare. Such a separation can only result when no inversions are found, with both pointers advancing to the pivot element at the first iteration (they are then considered to have crossed, and no exchange takes place). The division returned is after the final position of the second pointer, so the case to avoid is where the pivot is the final element of the range and all others are smaller than it. Therefore, the pivot choice must avoid the final element (in Hoare's description it could be any element in the range); this is done here by rounding down the middle position, using the floor function. This illustrates that the argument for correctness of an implementation of the Hoare partition scheme can be subtle, and it is easy to get it wrong.

    // Sorts a (portion of an) array, divides it into partitions, then sorts those
    algorithm quicksort(A, lo, hi) is
        if lo >= 0 && hi >= 0 && lo < hi then
            p := partition(A, lo, hi)
            quicksort(A, lo, p)       // Note: the pivot is now included
            quicksort(A, p + 1, hi)

    // Divides array into two partitions
    algorithm partition(A, lo, hi) is
        // Pivot value
        pivot := A[ floor((hi + lo) / 2) ]   // The value in the middle of the array
        // Left index
        i := lo - 1
        // Right index
        j := hi + 1
        loop forever
            // Move the left index to the right at least once and while the element
            // at the left index is less than the pivot
            do i := i + 1 while A[i] < pivot
            // Move the right index to the left at least once and while the element
            // at the right index is greater than the pivot
            do j := j - 1 while A[j] > pivot
            // If the indices crossed, return
            if i ≥ j then return j
            // Swap the elements at the left and right indices
            swap A[i] with A[j]

The entire array is sorted by quicksort(A, 0, length(A) - 1). Hoare's scheme is more efficient than Lomuto's partition scheme because it does three times fewer swaps on average. Also, as mentioned, the implementation given creates a balanced partition even when all values are equal, which Lomuto's scheme does not.
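The following is a minimal, runnable Python sketch of the two partition schemes just shown, translated directly from the pseudocode above. It is an illustration rather than a tuned library implementation; the function names (lomuto_partition, quicksort_lomuto, hoare_partition, quicksort_hoare) are chosen here for clarity and are not taken from any particular library.

    def lomuto_partition(a, lo, hi):
        """Partition a[lo..hi] around the last element; return the pivot's final index."""
        pivot = a[hi]
        i = lo - 1
        for j in range(lo, hi):
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def quicksort_lomuto(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi or lo < 0:
            return
        p = lomuto_partition(a, lo, hi)
        quicksort_lomuto(a, lo, p - 1)
        quicksort_lomuto(a, p + 1, hi)

    def hoare_partition(a, lo, hi):
        """Partition a[lo..hi] around the middle element; return the split point."""
        pivot = a[(lo + hi) // 2]        # floor of the midpoint, never the last element
        i, j = lo - 1, hi + 1
        while True:
            i += 1
            while a[i] < pivot:
                i += 1
            j -= 1
            while a[j] > pivot:
                j -= 1
            if i >= j:
                return j
            a[i], a[j] = a[j], a[i]

    def quicksort_hoare(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if 0 <= lo < hi:
            p = hoare_partition(a, lo, hi)
            quicksort_hoare(a, lo, p)    # note: the returned index is included here
            quicksort_hoare(a, p + 1, hi)

    for sort in (quicksort_lomuto, quicksort_hoare):
        data = [5, 2, 3, 1, 0, 3, 3]
        sort(data)
        assert data == [0, 1, 2, 3, 3, 3, 5]

Note that, as discussed in the text, the Hoare version recurses on (lo..p) and (p+1..hi), while the Lomuto version recurses on (lo..p-1) and (p+1..hi), because only the Lomuto scheme guarantees that the returned index holds the pivot in its final position.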
Like Lomuto's partition scheme, Hoare's partitioning would also cause Quicksort to degrade to O(n²) for already sorted input, if the pivot was chosen as the first or the last element. With the middle element as the pivot, however, sorted data results in (almost) no swaps and equally sized partitions, leading to the best-case behavior of Quicksort, i.e. O(n log n). Like others, Hoare's partitioning doesn't produce a stable sort. In this scheme, the pivot's final location is not necessarily at the index that is returned, as the pivot and elements equal to the pivot can end up anywhere within the partition after a partition step, and may not be sorted until the base case of a partition with a single element is reached via recursion. The next two segments that the main algorithm recurs on are (lo..p) (elements ≤ pivot) and (p+1..hi) (elements ≥ pivot), as opposed to (lo..p-1) and (p+1..hi) as in Lomuto's scheme.
Subsequent recursions (expansion on the previous paragraph)
Let's expand a little bit on the next two segments that the main algorithm recurs on. Because we are using strict comparators (>, <) in the "do...while" loops to prevent ourselves from running out of range, there's a chance that the pivot itself gets swapped with other elements in the partition function. Therefore, the index returned in the partition function isn't necessarily where the actual pivot is. Consider the example of [5, 2, 3, 1, 0]: following the scheme, after the first partition the array becomes [0, 2, 1, 3, 5], and the "index" returned is 2, which is the number 1, while the real pivot, the one we chose to start the partition with, was the number 3. With this example, we see how it is necessary to include the returned index of the partition function in our subsequent recursions. As a result, we are presented with the choices of either recursing on (lo..p) and (p+1..hi), or on (lo..p - 1) and (p..hi). Which of the two options we choose depends on which index (i or j) we return in the partition function when the indices cross, and how we choose our pivot in the partition function (floor vs. ceiling).
Let's first examine the choice of recursing on (lo..p) and (p+1..hi), with the example of sorting an array where multiple identical elements exist, [0, 0]. If index i (the "latter" index) is returned after the indices cross in the partition function, the index 1 would be returned after the first partition. The subsequent recursion on (lo..p) would be on (0, 1), which corresponds to the exact same array [0, 0]. A non-advancing separation that causes infinite recursion is produced. It is therefore obvious that when recursing on (lo..p) and (p+1..hi), because the left half of the recursion includes the returned index, it is the partition function's job to exclude the "tail" in non-advancing scenarios. Which is to say, index j (the "former" index when the indices cross) should be returned instead of i. Going with similar logic, when considering the example of an already sorted array [0, 1], the choice of pivot needs to be "floor" to ensure that the pointers stop on the "former" instead of the "latter" (with "ceiling" as the pivot, the index 1 would be returned and included in (lo..p), causing infinite recursion). For the exact same reason, the choice of the last element as pivot must be avoided.
The choice of recursing on (lo..p - 1) and (p..hi) follows the exact same logic as above. Because the right half of the recursion includes the returned index, it is the partition function's job to exclude the "head" in non-advancing scenarios.
The index i (the "latter" index after the indices cross) in the partition function needs to be returned, and "ceiling" needs to be chosen as the pivot. The two nuances are clear, again, when considering the examples of sorting an array where multiple identical elements exist ([0, 0]), and an already sorted array [0, 1] respectively. It is noteworthy that with version of recursion, for the same reason, choice of the first element as pivot must be avoided. Choice of pivotEdit In the very early versions of quicksort, the leftmost element of the partition would often be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays, which is a rather common use-case. The problem was easily solved by choosing either a random index for the pivot, choosing the middle index of the partition or (especially for longer partitions) choosing the median of the first, middle and last element of the partition for the pivot (as recommended by Sedgewick). This "median-of-three" rule counters the case of sorted (or reverse-sorted) input, and gives a better estimate of the optimal pivot (the true median) than selecting any single element, when no information about the ordering of the input is known. Median-of-three code snippet for Lomuto partition: mid := ⌊(lo + hi) / 2⌋ if A[mid] < A[lo] swap A[lo] with A[mid] if A[hi] < A[lo] swap A[lo] with A[hi] if A[mid] < A[hi] swap A[mid] with A[hi] pivot := A[hi] It puts a median into A[hi] first, then that new value of A[hi] is used for a pivot, as in a basic algorithm presented above. Specifically, the expected number of comparisons needed to sort n elements (see § Analysis of randomized quicksort) with random pivot selection is 1.386 n log n. Median-of-three pivoting brings this down to Cn, 2 ≈ 1.188 n log n, at the expense of a three-percent increase in the expected number of swaps. An even stronger pivoting rule, for larger arrays, is to pick the ninther, a recursive median-of-three (Mo3), defined as - ninther(a) = median(Mo3(first 1/3 of a), Mo3(middle 1/3 of a), Mo3(final 1/3 of a)) Selecting a pivot element is also complicated by the existence of integer overflow. If the boundary indices of the subarray being sorted are sufficiently large, the naïve expression for the middle index, (lo + hi)/2, will cause overflow and provide an invalid pivot index. This can be overcome by using, for example, lo + (hi−lo)/2 to index the middle element, at the cost of more complex arithmetic. Similar issues arise in some other methods of selecting the pivot element. With a partitioning algorithm such as the Lomuto partition scheme described above (even one that chooses good pivot values), quicksort exhibits poor performance for inputs that contain many repeated elements. The problem is clearly apparent when all the input elements are equal: at each recursion, the left partition is empty (no input values are less than the pivot), and the right partition has only decreased by one element (the pivot is removed). Consequently, the Lomuto partition scheme takes quadratic time to sort an array of equal values. However, with a partitioning algorithm such as the Hoare partition scheme, repeated elements generally results in better partitioning, and although needless swaps of elements equal to the pivot may occur, the running time generally decreases as the number of repeated elements increases (with memory cache reducing the swap overhead). 
In the case where all elements are equal, the Hoare partition scheme needlessly swaps elements, but the partitioning itself is best case, as noted in the Hoare partition section above. To solve the Lomuto partition scheme problem (sometimes called the Dutch national flag problem), an alternative linear-time partition routine can be used that separates the values into three groups: values less than the pivot, values equal to the pivot, and values greater than the pivot. (Bentley and McIlroy call this a "fat partition", and it was already implemented in the qsort of Version 7 Unix.) The values equal to the pivot are already sorted, so only the less-than and greater-than partitions need to be recursively sorted. In pseudocode, the quicksort algorithm becomes:

    algorithm quicksort(A, lo, hi) is
        if lo < hi then
            p := pivot(A, lo, hi)
            left, right := partition(A, p, lo, hi)   // note: multiple return values
            quicksort(A, lo, left - 1)
            quicksort(A, right + 1, hi)

The partition algorithm returns indices to the first ('leftmost') and to the last ('rightmost') item of the middle partition. Every item of that partition is equal to p and is therefore sorted. Consequently, the items of that partition need not be included in the recursive calls to quicksort. The best case for the algorithm now occurs when all elements are equal (or are chosen from a small set of k ≪ n elements). In the case of all equal elements, the modified quicksort will perform only two recursive calls on empty subarrays and thus finish in linear time (assuming the partition subroutine takes no longer than linear time).
Optimizations
- To make sure at most O(log n) space is used, recur first into the smaller side of the partition, then use a tail call to recur into the other, or update the parameters to no longer include the now sorted smaller side, and iterate to sort the larger side.
- When the number of elements is below some threshold (perhaps ten elements), switch to a non-recursive sorting algorithm such as insertion sort that performs fewer swaps, comparisons or other operations on such small arrays. The ideal 'threshold' will vary based on the details of the specific implementation.
- An older variant of the previous optimization: when the number of elements is less than the threshold k, simply stop; then after the whole array has been processed, perform insertion sort on it. Stopping the recursion early leaves the array k-sorted, meaning that each element is at most k positions away from its final sorted position. In this case, insertion sort takes O(kn) time to finish the sort, which is linear if k is a constant. Compared to the "many small sorts" optimization, this version may execute fewer instructions, but it makes suboptimal use of the cache memories in modern computers.
Parallelization
Quicksort's divide-and-conquer formulation makes it amenable to parallelization using task parallelism. The partitioning step is accomplished through the use of a parallel prefix sum algorithm to compute an index for each array element in its section of the partitioned array. Given an array of size n, the partitioning step performs O(n) work in O(log n) time and requires O(n) additional scratch space. After the array has been partitioned, the two partitions can be sorted recursively in parallel. Assuming an ideal choice of pivots, parallel quicksort sorts an array of size n in O(n log n) work in O(log² n) time using O(n) additional space. Quicksort has some disadvantages when compared to alternative sorting algorithms, like merge sort, which complicate its efficient parallelization.
The depth of quicksort's divide-and-conquer tree directly impacts the algorithm's scalability, and this depth is highly dependent on the algorithm's choice of pivot. Additionally, it is difficult to parallelize the partitioning step efficiently in-place. The use of scratch space simplifies the partitioning step, but increases the algorithm's memory footprint and constant overheads. Other more sophisticated parallel sorting algorithms can achieve even better time bounds. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW (concurrent read and concurrent write) PRAM (parallel random-access machine) with n processors by performing partitioning implicitly.
Formal analysis
The most unbalanced partition occurs when one of the sublists returned by the partitioning routine is of size n − 1. This may occur if the pivot happens to be the smallest or largest element in the list, or in some implementations (e.g., the Lomuto partition scheme as described above) when all the elements are equal. If this happens repeatedly in every partition, then each recursive call processes a list of size one less than the previous list. Consequently, we can make n − 1 nested calls before we reach a list of size 1. This means that the call tree is a linear chain of n − 1 nested calls. The ith call does O(n − i) work to do the partition, and the total work is O(n) + O(n − 1) + ... + O(1) = O(n²), so in that case quicksort takes O(n²) time.
In the most balanced case, each time we perform a partition we divide the list into two nearly equal pieces. This means each recursive call processes a list of half the size. Consequently, we can make only log₂ n nested calls before we reach a list of size 1. This means that the depth of the call tree is log₂ n. But no two calls at the same level of the call tree process the same part of the original list; thus, each level of calls needs only O(n) time all together (each call has some constant overhead, but since there are only O(n) calls at each level, this is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time.
To sort an array of n distinct elements, quicksort takes O(n log n) time in expectation, averaged over all n! permutations of n elements with equal probability. We list here three common proofs of this claim, providing different insights into quicksort's workings.
If each pivot has rank somewhere in the middle 50 percent, that is, between the 25th percentile and the 75th percentile, then it splits the elements with at least 25% and at most 75% on each side. If we could consistently choose such pivots, we would only have to split the list at most log4/3 n times before reaching lists of size 1, yielding an O(n log n) algorithm.
When the input is a random permutation, the pivot has a random rank, and so it is not guaranteed to be in the middle 50 percent. However, when we start from a random permutation, in each recursive call the pivot has a random rank in its list, and so it is in the middle 50 percent about half the time. That is good enough. Imagine that a coin is flipped: heads means that the rank of the pivot is in the middle 50 percent, tails means that it isn't. Now imagine that the coin is flipped over and over until it gets k heads. Although this could take a long time, on average only 2k flips are required, and it is highly improbable that the coin won't get k heads after 100k flips (this can be made rigorous using Chernoff bounds).
By the same argument, Quicksort's recursion will terminate on average at a call depth of only O(log n). But if its average call depth is O(log n), and each level of the call tree processes at most n elements, the total amount of work done on average is the product, O(n log n). The algorithm does not have to verify that the pivot is in the middle half; if we hit it any constant fraction of the times, that is enough for the desired complexity.
An alternative approach is to set up a recurrence relation for T(n), the time needed to sort a list of size n. In the most unbalanced case, a single quicksort call involves O(n) work plus two recursive calls on lists of size 0 and n − 1, so the recurrence relation is T(n) = O(n) + T(0) + T(n − 1) = O(n) + T(n − 1). In the most balanced case, a single quicksort call involves O(n) work plus two recursive calls on lists of size n/2, so the recurrence relation is T(n) = O(n) + 2T(n/2). The master theorem for divide-and-conquer recurrences tells us that T(n) = O(n log n).
The outline of a formal proof of the O(n log n) expected time complexity follows. Assume that there are no duplicates, as duplicates could be handled with linear-time pre- and post-processing, or considered cases easier than the one analyzed. When the input is a random permutation, the rank of the pivot is uniform random from 0 to n − 1. Then the resulting parts of the partition have sizes i and n − i − 1, and i is uniform random from 0 to n − 1. So, averaging over all possible splits and noting that the number of comparisons for the partition is n − 1, the average number of comparisons over all permutations of the input sequence can be estimated accurately by solving the recurrence relation C(n) = n − 1 + (1/n) Σ_{i=0}^{n−1} [C(i) + C(n − i − 1)]. Solving the recurrence gives C(n) = 2n ln n ≈ 1.39 n log₂ n. This means that, on average, quicksort performs only about 39% worse than in its best case. In this sense, it is closer to the best case than the worst case. A comparison sort cannot use fewer than log₂(n!) comparisons on average to sort n items (as explained in the article Comparison sort), and in the case of large n, Stirling's approximation yields log₂(n!) ≈ n(log₂ n − log₂ e), so quicksort is not much worse than an ideal comparison sort. This fast average runtime is another reason for quicksort's practical dominance over other sorting algorithms.
Using a binary search tree
The following binary search tree (BST) corresponds to each execution of quicksort: the initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot of the right half is the root of the right subtree, and so on. The number of comparisons of the execution of quicksort equals the number of comparisons during the construction of the BST by a sequence of insertions. So, the average number of comparisons for randomized quicksort equals the average cost of constructing a BST when the values inserted form a random permutation.
Consider a BST created by insertion of a sequence of values x_1, x_2, …, x_n forming a random permutation. Let C denote the cost of creation of the BST. We have C = Σ_i Σ_{j<i} c_{i,j}, where c_{i,j} is a binary random variable expressing whether, during the insertion of x_i, there was a comparison to x_j. By linearity of expectation, the expected value of C is E[C] = Σ_i Σ_{j<i} Pr(x_i is compared to x_j).
Fix i and j < i. The values x_1, …, x_j, once sorted, define j + 1 intervals. The core structural observation is that x_i is compared to x_j in the algorithm if and only if x_i falls inside one of the two intervals adjacent to x_j. Observe that since the input is a random permutation, x_i is equally likely to fall in any of the j + 1 intervals defined by x_1, …, x_j, so the probability that x_i is adjacent to x_j is exactly 2/(j + 1).
We end with a short calculation: E[C] = Σ_i Σ_{j<i} 2/(j + 1) = Σ_i O(log i) = O(n log n).
Space complexity
The space used by quicksort depends on the version used. The in-place version of quicksort has a space complexity of O(log n), even in the worst case, when it is carefully implemented using the following strategies.
- In-place partitioning is used. This unstable partition requires O(1) space.
- After partitioning, the partition with the fewest elements is (recursively) sorted first, requiring at most O(log n) space. Then the other partition is sorted using tail recursion or iteration, which doesn't add to the call stack. This idea, as discussed above, was described by R. Sedgewick, and keeps the stack depth bounded by O(log n).
Quicksort with in-place and unstable partitioning uses only constant additional space before making any recursive call. Quicksort must store a constant amount of information for each nested recursive call. Since the best case makes at most O(log n) nested recursive calls, it uses O(log n) space. However, without Sedgewick's trick to limit the recursive calls, in the worst case quicksort could make O(n) nested recursive calls and need O(n) auxiliary space.
From a bit-complexity viewpoint, variables such as lo and hi do not use constant space; it takes O(log n) bits to index into a list of n items. Because a constant number of such variables are kept in every stack frame, quicksort using Sedgewick's trick requires O((log n)²) bits of space. This space requirement isn't too terrible, though, since if the list contained n distinct elements, it would need at least O(n log n) bits of space.
Another, less common, not-in-place version of quicksort uses O(n) space for working storage and can implement a stable sort. The working storage allows the input array to be easily partitioned in a stable manner and then copied back to the input array for successive recursive calls. Sedgewick's optimization is still appropriate.
Relation to other algorithms
Quicksort is a space-optimized version of the binary tree sort. Instead of inserting items sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is implied by the recursive calls. The algorithms make exactly the same comparisons, but in a different order. An often desirable property of a sorting algorithm is stability – that is, the order of elements that compare equal is not changed, allowing controlling order of multikey tables (e.g. directory or folder listings) in a natural way. This property is hard to maintain for in situ (or in-place) quicksort (that uses only constant additional space for pointers and buffers, and O(log n) additional space for the management of explicit or implicit recursion). For variant quicksorts involving extra memory due to representations using pointers (e.g. lists or trees) or files (effectively lists), it is trivial to maintain stability. The more complex, or disk-bound, data structures tend to increase time cost, in general making increasing use of virtual memory or disk.
The most direct competitor of quicksort is heapsort. Heapsort's running time is O(n log n), but heapsort's average running time is usually considered slower than in-place quicksort. This result is debatable; some publications indicate the opposite. Introsort is a variant of quicksort that switches to heapsort when a bad case is detected to avoid quicksort's worst-case running time. Quicksort also competes with merge sort, another O(n log n) sorting algorithm. Mergesort is a stable sort, unlike standard in-place quicksort and heapsort, and has excellent worst-case performance.
The main disadvantage of mergesort is that, when operating on arrays, efficient implementations require O(n) auxiliary space, whereas the variant of quicksort with in-place partitioning and tail recursion uses only O(log n) space. Mergesort works very well on linked lists, requiring only a small, constant amount of auxiliary storage. Although quicksort can be implemented as a stable sort using linked lists, it will often suffer from poor pivot choices without random access. Mergesort is also the algorithm of choice for external sorting of very large data sets stored on slow-to-access media such as disk storage or network-attached storage. Bucket sort with two buckets is very similar to quicksort; the pivot in this case is effectively the value in the middle of the value range, which does well on average for uniformly distributed inputs.
A selection algorithm chooses the kth smallest of a list of numbers; this is an easier problem in general than sorting. One simple but effective selection algorithm works nearly in the same manner as quicksort, and is accordingly known as quickselect. The difference is that instead of making recursive calls on both sublists, it only makes a single tail-recursive call on the sublist that contains the desired element. This change lowers the average complexity to linear or O(n) time, which is optimal for selection, but the selection algorithm is still O(n²) in the worst case. A variant of quickselect, the median of medians algorithm, chooses pivots more carefully, ensuring that the pivots are near the middle of the data (between the 30th and 70th percentiles), and thus has guaranteed linear time – O(n). This same pivot strategy can be used to construct a variant of quicksort (median of medians quicksort) with O(n log n) time. However, the overhead of choosing the pivot is significant, so this is generally not used in practice. More abstractly, given an O(n) selection algorithm, one can use it to find the ideal pivot (the median) at every step of quicksort and thus produce a sorting algorithm with O(n log n) running time. Practical implementations of this variant are considerably slower on average, but they are of theoretical interest because they show an optimal selection algorithm can yield an optimal sorting algorithm.
Instead of partitioning into two subarrays using a single pivot, multi-pivot quicksort (also multiquicksort) partitions its input into some s number of subarrays using s − 1 pivots. While the dual-pivot case (s = 3) was considered by Sedgewick and others already in the mid-1970s, the resulting algorithms were not faster in practice than the "classical" quicksort. A 1999 assessment of a multiquicksort with a variable number of pivots, tuned to make efficient use of processor caches, found it to increase the instruction count by some 20%, but simulation results suggested that it would be more efficient on very large inputs. A version of dual-pivot quicksort developed by Yaroslavskiy in 2009 turned out to be fast enough to warrant implementation in Java 7, as the standard algorithm to sort arrays of primitives (sorting arrays of objects is done using Timsort). The performance benefit of this algorithm was subsequently found to be mostly related to cache performance, and experimental results indicate that the three-pivot variant may perform even better on modern machines.
For disk files, an external sort based on partitioning similar to quicksort is possible. It is slower than external merge sort, but doesn't require extra disk space.
Four buffers are used, two for input and two for output. Let N = the number of records in the file, B = the number of records per buffer, and M = N/B = the number of buffer segments in the file. Data is read (and written) from both ends of the file inwards. Let X represent the segments that start at the beginning of the file and Y represent the segments that start at the end of the file. Data is read into the X and Y read buffers. A pivot record is chosen, and the records in the X and Y buffers other than the pivot record are copied to the X write buffer in ascending order and to the Y write buffer in descending order, based on comparison with the pivot record. Once either the X or Y buffer is filled, it is written to the file and the next X or Y buffer is read from the file. The process continues until all segments are read and one write buffer remains. If that buffer is an X write buffer, the pivot record is appended to it and the X buffer written. If that buffer is a Y write buffer, the pivot record is prepended to the Y buffer and the Y buffer written. This constitutes one partition step of the file, and the file is now composed of two subfiles. The start and end positions of each subfile are pushed onto a stand-alone stack or onto the main stack via recursion. To limit stack space to O(log₂(n)), the smaller subfile is processed first. For a stand-alone stack, push the larger subfile parameters onto the stack and iterate on the smaller subfile. For recursion, recurse on the smaller subfile first, then iterate to handle the larger subfile. Once a subfile is less than or equal to 4B records, the subfile is sorted in place via quicksort and written. That subfile is now sorted and in place in the file. The process is continued until all subfiles are sorted and in place. The average number of passes on the file is approximately 1 + ln(N + 1)/(4B), but the worst-case pattern is N passes (equivalent to O(n²) for a worst-case internal sort).
Three-way radix quicksort
This algorithm is a combination of radix sort and quicksort. Pick an element from the array (the pivot) and consider the first character (key) of the string (multikey). Partition the remaining elements into three sets: those whose corresponding character is less than, equal to, and greater than the pivot's character. Recursively sort the "less than" and "greater than" partitions on the same character. Recursively sort the "equal to" partition by the next character (key). Given that we sort using bytes or words of length W bits, the best case is O(KN) and the worst case O(2^K N), or at least O(N²) as for standard quicksort, given that for unique keys N < 2^K, and K is a hidden constant in all standard comparison sort algorithms including quicksort. This is a kind of three-way quicksort in which the middle partition represents a (trivially) sorted subarray of elements that are exactly equal to the pivot.
Quick radix sort
Also developed by Powers as an O(K) parallel PRAM algorithm. This is again a combination of radix sort and quicksort, but the quicksort left/right partition decision is made on successive bits of the key, and is thus O(KN) for N K-bit keys. All comparison sort algorithms implicitly assume the transdichotomous model with K in Θ(log N), as if K is smaller we can sort in O(N) time using a hash table or integer sorting. If K ≫ log N but elements are unique within O(log N) bits, the remaining bits will not be looked at by either quicksort or quick radix sort.
Failing that, all comparison sorting algorithms will also have the same overhead of looking through O(K) relatively useless bits, but quick radix sort will avoid the worst-case O(N²) behaviours of standard quicksort and radix quicksort, and will be faster even in the best case of those comparison algorithms under these conditions of uniqueprefix(K) ≫ log N. See Powers for further discussion of the hidden overheads in comparison, radix and parallel sorting.
In any comparison-based sorting algorithm, minimizing the number of comparisons requires maximizing the amount of information gained from each comparison, meaning that the comparison results are unpredictable. This causes frequent branch mispredictions, limiting performance. BlockQuicksort rearranges the computations of quicksort to convert unpredictable branches to data dependencies. When partitioning, the input is divided into moderate-sized blocks (which fit easily into the data cache), and two arrays are filled with the positions of elements to swap. (To avoid conditional branches, the position is unconditionally stored at the end of the array, and the index of the end is incremented if a swap is needed.) A second pass exchanges the elements at the positions indicated in the arrays. Both loops have only one conditional branch, a test for termination, which is usually taken.
Partial and incremental quicksort
Several variants of quicksort exist that separate the k smallest or largest elements from the rest of the input. Richard Cole and David C. Kandathil, in 2004, discovered a one-parameter family of sorting algorithms, called partition sorts, which on average (with all input orderings equally likely) perform a number of comparisons close to the information-theoretic lower bound, with a comparable number of operations; they are in-place, requiring only a small amount of additional space. Practical efficiency and smaller variance in performance were demonstrated against optimised quicksorts (of Sedgewick and Bentley-McIlroy).
See also
- Introsort – Hybrid sorting algorithm
References
- "Sir Antony Hoare". Computer History Museum. Archived from the original on 3 April 2015. Retrieved 22 April 2015.
- Hoare, C. A. R. (1961). "Algorithm 64: Quicksort". Comm. ACM. 4 (7): 321. doi:10.1145/366622.366644.
- Skiena, Steven S. (2008). The Algorithm Design Manual. Springer. p. 129. ISBN 978-1-84800-069-8.
- C.L. Foster, Algorithms, Abstraction and Implementation, 1992, ISBN 0122626605, p. 98
- Shustek, L. (2009). "Interview: An interview with C.A.R. Hoare". Comm. ACM. 52 (3): 38–41. doi:10.1145/1467247.1467261. S2CID 1868477.
- "My Quickshort interview with Sir Tony Hoare, the inventor of Quicksort". Marcelo M De Barros. 15 March 2015.
- Bentley, Jon L.; McIlroy, M. Douglas (1993). "Engineering a sort function". Software: Practice and Experience. 23 (11): 1249–1265. CiteSeerX 10.1.1.14.8162. doi:10.1002/spe.4380231105. S2CID 8822797.
- Van Emden, M. H. (1 November 1970). "Algorithms 402: Increasing the Efficiency of Quicksort". Commun. ACM. 13 (11): 693–694. doi:10.1145/362790.362803. ISSN 0001-0782. S2CID 4774719.
- Bentley, Jon (2007). "The most beautiful code I never wrote". In Oram, Andy; Wilson, Greg (eds.). Beautiful Code: Leading Programmers Explain How They Think. O'Reilly Media. p. 30. ISBN 978-0-596-51004-6.
- "Quicksort Partitioning: Hoare vs. Lomuto". cs.stackexchange.com. Retrieved 3 August 2015.
- Yaroslavskiy, Vladimir (2009). "Dual-Pivot Quicksort" (PDF). Archived from the original (PDF) on 2 October 2015.
- "Replacement of Quicksort in java.util.Arrays with new Dual-Pivot Quick". permalink.gmane.org. Archived from the original on 6 November 2018. Retrieved 3 August 2015. - "Java 7 Arrays API documentation". Oracle. Retrieved 23 July 2018. - Wild, S.; Nebel, M.; Reitzig, R.; Laube, U. (7 January 2013). Engineering Java 7's Dual Pivot Quicksort Using MaLiJAn. Proceedings. Society for Industrial and Applied Mathematics. pp. 55–69. doi:10.1137/1.9781611972931.5. ISBN 978-1-61197-253-5. - Jon Bentley (1999). Programming Pearls. Addison-Wesley Professional. - Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) . "Quicksort". Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. pp. 170–190. ISBN 0-262-03384-4. - Wild, Sebastian (2012). Java 7's Dual Pivot Quicksort (Thesis). Technische Universität Kaiserslautern. - Hoare, C. A. R. (1 January 1962). "Quicksort". The Computer Journal. 5 (1): 10–16. doi:10.1093/comjnl/5.1.10. ISSN 0010-4620. - in many languages this is the standard behavior of integer division - Chandramouli, Badrish; Goldstein, Jonathan (18 June 2014). "Patience is a virtue: revisiting merge and sort on modern processors". Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data. Sigmod '14. Snowbird Utah USA: ACM: 731–742. doi:10.1145/2588555.2593662. ISBN 978-1-4503-2376-5. S2CID 7830071. - Sedgewick, Robert (1 September 1998). Algorithms in C: Fundamentals, Data Structures, Sorting, Searching, Parts 1–4 (3 ed.). Pearson Education. ISBN 978-81-317-1291-7. - qsort.c in GNU libc: , - http://www.ugrad.cs.ubc.ca/~cs260/chnotes/ch6/Ch6CovCompiled.html[permanent dead link] - Sedgewick, R. (1978). "Implementing Quicksort programs". Comm. ACM. 21 (10): 847–857. doi:10.1145/359619.359631. S2CID 10020756. - LaMarca, Anthony; Ladner, Richard E. (1999). "The Influence of Caches on the Performance of Sorting". Journal of Algorithms. 31 (1): 66–104. CiteSeerX 10.1.1.27.1788. doi:10.1006/jagm.1998.0985. S2CID 206567217. Although saving small subarrays until the end makes sense from an instruction count perspective, it is exactly the wrong thing to do from a cache performance perspective. - Umut A. Acar, Guy E Blelloch, Margaret Reid-Miller, and Kanat Tangwongsan, Quicksort and Sorting Lower Bounds, Parallel and Sequential Data Structures and Algorithms. 2013. - Breshears, Clay (2012). "Quicksort Partition via Prefix Scan". Dr. Dobb's. - Miller, Russ; Boxer, Laurence (2000). Algorithms sequential & parallel: a unified approach. Prentice Hall. ISBN 978-0-13-086373-7. - Powers, David M. W. (1991). Parallelized Quicksort and Radixsort with Optimal Speedup. Proc. Int'l Conf. on Parallel Computing Technologies. CiteSeerX 10.1.1.57.9071. - The other one may either have 1 element or be empty (have 0 elements), depending on whether the pivot is included in one of subpartitions, as in the Hoare's partitioning routine, or is excluded from both of them, like in the Lomuto's routine. - Edelkamp, Stefan; Weiß, Armin (7–8 January 2019). Worst-Case Efficient Sorting with QuickMergesort. ALENEX 2019: 21st Workshop on Algorithm Engineering and Experiments. San Diego. arXiv:1811.99833. doi:10.1137/1.9781611975499.1. ISBN 978-1-61197-549-9. on small instances Heapsort is already considerably slower than Quicksort (in our experiments more than 30% for n = 210) and on larger instances it suffers from its poor cache behavior (in our experiments more than eight times slower than Quicksort for sorting 228 elements). - Hsieh, Paul (2004). 
"Sorting revisited". azillionmonkeys.com. - MacKay, David (December 2005). "Heapsort, Quicksort, and Entropy". Archived from the original on 1 April 2009. - Wild, Sebastian; Nebel, Markus E. (2012). Average case analysis of Java 7's dual pivot quicksort. European Symposium on Algorithms. arXiv:1310.7409. Bibcode:2013arXiv1310.7409W. - "Arrays". Java Platform SE 7. Oracle. Retrieved 4 September 2014. - Wild, Sebastian (3 November 2015). "Why Is Dual-Pivot Quicksort Fast?". arXiv:1511.01138 [cs.DS]. - Kushagra, Shrinu; López-Ortiz, Alejandro; Qiao, Aurick; Munro, J. Ian (2014). Multi-Pivot Quicksort: Theory and Experiments. Proc. Workshop on Algorithm Engineering and Experiments (ALENEX). doi:10.1137/1.9781611973198.6. - Kushagra, Shrinu; López-Ortiz, Alejandro; Munro, J. Ian; Qiao, Aurick (7 February 2014). Multi-Pivot Quicksort: Theory and Experiments (PDF) (Seminar presentation). Waterloo, Ontario. - Motzkin, D.; Hansen, C.L. (1982), "An efficient external sorting with minimal space requirement", International Journal of Computer and Information Sciences, 11 (6): 381–396, doi:10.1007/BF00996816, S2CID 6829805 - David M. W. Powers, Parallel Unification: Practical Complexity, Australasian Computer Architecture Workshop, Flinders University, January 1995 - Kaligosi, Kanela; Sanders, Peter (11–13 September 2006). How Branch Mispredictions Affect Quicksort (PDF). ESA 2006: 14th Annual European Symposium on Algorithms. Zurich. doi:10.1007/11841036_69. - Edelkamp, Stefan; Weiß, Armin (22 April 2016). "BlockQuicksort: How Branch Mispredictions don't affect Quicksort". arXiv:1604.06697 [cs.DS]. - Richard Cole, David C. Kandathil: "The average case analysis of Partition sorts", European Symposium on Algorithms, 14–17 September 2004, Bergen, Norway. Published: Lecture Notes in Computer Science 3221, Springer Verlag, pp. 240–251. - Sedgewick, R. (1978). "Implementing Quicksort programs". Comm. ACM. 21 (10): 847–857. doi:10.1145/359619.359631. S2CID 10020756. - Dean, B. C. (2006). "A simple expected running time analysis for randomized 'divide and conquer' algorithms". Discrete Applied Mathematics. 154: 1–5. doi:10.1016/j.dam.2005.07.005. - Hoare, C. A. R. (1961). "Algorithm 63: Partition". Comm. ACM. 4 (7): 321. doi:10.1145/366622.366642. S2CID 52800011. - Hoare, C. A. R. (1961). "Algorithm 65: Find". Comm. ACM. 4 (7): 321–322. doi:10.1145/366622.366647. - Hoare, C. A. R. (1962). "Quicksort". Comput. J. 5 (1): 10–16. doi:10.1093/comjnl/5.1.10. (Reprinted in Hoare and Jones: Essays in computing science, 1989.) - Musser, David R. (1997). "Introspective Sorting and Selection Algorithms". Software: Practice and Experience. 27 (8): 983–993. doi:10.1002/(SICI)1097-024X(199708)27:8<983::AID-SPE117>3.0.CO;2-#. - Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Pages 113–122 of section 5.2.2: Sorting by Exchanging. - Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 7: Quicksort, pp. 145–164. - Faron Moller. Analysis of Quicksort. CS 332: Designing Algorithms. Department of Computer Science, Swansea University. - Martínez, C.; Roura, S. (2001). "Optimal Sampling Strategies in Quicksort and Quickselect". SIAM J. Comput. 31 (3): 683–705. CiteSeerX 10.1.1.17.4954. doi:10.1137/S0097539700382108. - Bentley, J. L.; McIlroy, M. D. (1993). "Engineering a sort function". 
Software: Practice and Experience. 23 (11): 1249–1265. CiteSeerX 10.1.1.14.8162. doi:10.1002/spe.4380231105. S2CID 8822797. - "Animated Sorting Algorithms: Quick Sort". Archived from the original on 2 March 2015. Retrieved 25 November 2008. – graphical demonstration - "Animated Sorting Algorithms: Quick Sort (3-way partition)". Archived from the original on 6 March 2015. Retrieved 25 November 2008. - Open Data Structures – Section 11.1.2 – Quicksort, Pat Morin - Interactive illustration of Quicksort, with code walkthrough
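To make the block-based, branchless bookkeeping of BlockQuicksort described above concrete, here is a minimal sketch in Python. It is an illustration of the idea only, not the published algorithm: the block size, the pivot handling and the single-sided scan are simplifying assumptions, and in an interpreted language the branch-avoidance is conceptual rather than a real performance win.

def branchless_collect(a, lo, hi, pivot, block=64):
    # Collect the indices of elements in a[lo:hi] that lie on the wrong side of
    # the pivot (here: greater than it), without a data-dependent branch in the
    # inner loop: the candidate offset is always written into the buffer, and the
    # write position advances by the 0/1 result of the comparison.
    offsets = [0] * block
    wrong_side = []
    i = lo
    while i < hi:
        n = min(block, hi - i)
        count = 0
        for j in range(n):
            offsets[count] = j            # store unconditionally
            count += a[i + j] > pivot     # advance only if a swap would be needed
        wrong_side.extend(i + offsets[k] for k in range(count))
        i += n
    return wrong_side

data = [5, 9, 1, 7, 3, 8, 2, 6]
print(branchless_collect(data, 0, len(data), pivot=5))   # [1, 3, 5, 7]

In the real algorithm two such buffers are kept, one filled from each end of the array, and a second, equally predictable loop then swaps the elements at the paired positions.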
Voyager 1 and its journey beyond space

In 1977 NASA launched two spacecraft, known respectively as Voyager 1 and Voyager 2. The mission of these spacecraft was to boldly go where no spacecraft had ever been before – to the heliosphere. The heliosphere is a magnetic bubble of charged particles, carried outward by the solar wind, that surrounds our solar system; its outer region lies roughly 8–14 billion miles from the Sun. Thirty-five years after the two Voyagers were launched, data published by NASA suggest that Voyager 1 has in fact managed to travel all the way to the heliosheath – the outermost layer of the heliosphere. A graph published by NASA shows a marked rise in galactic cosmic rays from outside our solar system in recent months, and a second graph shows a very dramatic drop in the number of charged particles originating from the Sun. These two graphs suggest that Voyager 1 is travelling further and further away from our Sun and our solar system. Scientists are still working on determining whether or not there has been a change in the direction of the magnetic field affecting Voyager 1. If this is confirmed, then it will be almost definitive that Voyager 1 has in fact reached the boundaries of the solar system. Why is this of interest to us? The heliopause is a region that has never before been reached by any spacecraft or anything else from Earth. If all goes to plan, the Voyagers have enough electrical power and fuel to operate until 2020, so they could potentially explore the Milky Way. By the end of Voyager 1's journey it will have travelled 12.4 billion miles away from the Sun. The data taken from Voyager 1 will allow us to gain a deeper insight into what is going on at the edge of our solar system and beyond.
An arithmetic or linear sequence, aₙ, is an ordered list of numbers in which the difference between consecutive terms, d, is constant. An arithmetic sequence can always be described by the general rule aₙ = a₁ + (n − 1)d. The term-to-term rule of an arithmetic sequence describes how to get from one term to the next in the sequence. To find the term-to-term rule, subtract the first term from the second term to find the common difference d. From the common difference and the first term, the position-to-term rule, or general rule, can be worked out. The general rule of an arithmetic sequence can be used to work out the nth term of the sequence, the term at position n, and is given by the formula aₙ = a₁ + (n − 1)d. A series is the sum of the terms of a sequence up to a certain number of terms and is denoted by Sₙ. The series for an arithmetic sequence is given by the formula Sₙ = n/2 × (2a₁ + (n − 1)d), which is equivalent to Sₙ = n/2 × (a₁ + aₙ).
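As a quick illustration of the rules above, here is a small sketch; the sequence 5, 8, 11, 14, … (first term a₁ = 5, common difference d = 3) is an invented example.

def nth_term(a1, d, n):
    # Position-to-term (general) rule: a_n = a1 + (n - 1) * d
    return a1 + (n - 1) * d

def series_sum(a1, d, n):
    # Sum of the first n terms: S_n = n/2 * (2*a1 + (n - 1) * d)
    return n * (2 * a1 + (n - 1) * d) / 2

print(nth_term(5, 3, 4))     # 14, the 4th term of 5, 8, 11, 14, ...
print(series_sum(5, 3, 4))   # 38.0, which is 5 + 8 + 11 + 14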
Perpendicular Lines Definition

Perpendicular lines is a term used in geometry and mathematics. These lines are defined as any two lines that intersect each other at an angle of 90 degrees; in other words, they are drawn at right angles to each other.

Representation of Perpendicular Lines

In mathematics, two perpendicular lines are represented by inserting a particular symbol between the names of those lines. For example, you can represent two perpendicular lines AB and CD as AB ⟂ CD. The symbol between their names shows that these lines are perpendicular to each other.

Real-life examples of Perpendicular Lines

To understand this concept, you can have a look at your surroundings and imagine multiple perpendicular lines in different objects and places. Here are a few examples from real life:
- The height and width of a TV make a set of perpendicular lines.
- The hands of a clock at 3 o'clock and at 9 o'clock make an angle of 90 degrees.

Opposite of Perpendicular Lines

There are also sets of lines with the opposite property to perpendicular lines. In geometry, those lines are called parallel lines, because the angle between them is 0 degrees. In simple words, parallel lines run side by side without ever touching each other.

Fun Facts about Perpendicular Lines
- Two perpendicular lines intersect each other at a single point only.
- Multiple perpendicular lines can be drawn on a single line.
- Two lines with the same perpendicular line will always be parallel to each other.
- Only two lines can be perpendicular to each other at a time.

How to construct Perpendicular lines?

It is pretty simple to draw perpendicular lines. You can do this using simple geometrical tools. Using a protractor:
- Draw a horizontal line on the page of your copy.
- Place the protractor on that line so that its baseline lies along your drawn line.
- Mark a point above your line at the 90-degree mark of the protractor.
- Take a ruler and draw a straight line joining that point to your line.
- The line you have just drawn is perpendicular to the first line, and vice versa.

You can also draw perpendicular lines with the help of a compass. It is a slightly more difficult process than the one above. Here are the steps to follow, one by one:
- Draw a line on the page of your copy.
- Mark a point, let's say A, where you want to draw a perpendicular.
- Take a compass and open it to your desired radius.
- Draw an arc on both sides of point A, with the same radius, on the line.
- Label those points, let's say, X and Y.
- Open the compass a bit more and draw two arcs above the line by placing its point at X and then at Y.
- Mark the point of intersection of those arcs as B.
- Now, draw a line joining A and B using a ruler.
- The line AB is perpendicular to the first line that you drew at the beginning.

What is the difference between parallel and perpendicular lines? There are multiple differences between these lines.
But you can say that two lines making an angle of 90 degrees are perpendicular, while two lines making an angle of 0 degrees are parallel.

Can perpendicular lines intersect? Yes, perpendicular lines must intersect each other.

Are all intersecting lines perpendicular? No, not all intersecting lines are perpendicular. Only lines that make an angle of 90 degrees are termed perpendicular lines.

Can two lines be perpendicular and parallel at the same time? No, two lines can be either perpendicular or parallel, but not both at the same time.

How many lines can be perpendicular to a single line? Multiple perpendicular lines can be drawn on a single line. Keep in mind that the angle between each drawn line and the original line should be exactly 90 degrees.
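Analytically, two (non-vertical) lines are perpendicular exactly when the product of their slopes is −1, and parallel when their slopes are equal. Here is a small sketch of that check, with made-up example slopes:

def classify(m1, m2, tol=1e-9):
    # Classify two non-vertical lines by their slopes m1 and m2.
    if abs(m1 - m2) < tol:
        return "parallel"                 # equal slopes: angle of 0 degrees
    if abs(m1 * m2 + 1) < tol:
        return "perpendicular"            # slopes multiply to -1: angle of 90 degrees
    return "intersecting, not perpendicular"

print(classify(2.0, -0.5))   # perpendicular, since 2 * (-0.5) = -1
print(classify(3.0, 3.0))    # parallel
print(classify(1.0, 2.0))    # intersecting, not perpendicular

This mirrors the answers above: intersecting lines are only perpendicular in the special case where the angle between them is exactly 90 degrees.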
In 1981 Peter and Rosemary Grant, a husband-and-wife team of evolutionary biologists, noticed something odd about Daphne Major. Each year in the previous decade they had traveled from Princeton University to this Galapagos island to study its three endemic species of tanager, part of a group known colloquially as "Darwin's finches". On this occasion their eyes were drawn to an unusual male who sported dark feathers and sang a unique song. Genetic analysis later identified it as a large cactus finch, probably blown over from Española, another, more distant part of the archipelago. Intrigued, the Grants followed the castaway as he explored his new home. They saw him mate with a local female medium ground finch. This produced five healthy, fit offspring. Those offspring were also surprisingly sexually selective. With the exception of a single male, they and their descendants have only mated with each other, and have continued to do so ever since. Despite this strong inbreeding, the hybrids have been successful. They have carved a niche in which they use their size and deep beak to take advantage of the large woody fruits of the Jamaican fever plant, which grows locally. They have effectively become another species of Darwin's finch, of which 13 were previously recognized. Although they do not yet have a Latinized scientific name, they are known to all as the "Big Bird" lineage. This story would once have been considered deeply implausible. The orthodox evolutionary narrative does not suggest that hybridization is how new animal species emerge. But, as genetic testing has proliferated, biologists have been faced with an unexpected fact. Hybrids are not an evolutionary bug. They are a feature. This knowledge is changing the way people think about evolution. The ordered family trees imagined by Charles Darwin in one of his first notebooks are turning into cobwebs, and the primacy of mutation in generating the variation that natural selection then scrutinizes is being questioned. The influx of genes that accompanies hybridization also creates that variation – and the more people look, the more important it seems to get. Hybridization also offers shortcuts on the long march to speciation that do not depend on natural selection at all. As the example of the Big Bird lineage shows, instead of taking millennia to emerge, a new species can appear almost overnight. Indeed, all of this had already been recognized for simple organisms such as bacteria, whose genes are promiscuously exchanged between more and less closely related individuals. But bacteria were unknown when Darwin conceived of natural selection, and the subject of speciation has been dominated by examples from animals and plants ever since. Recognizing that what is true for bacteria is also true for these multicellular organisms has profound implications, not least for how humans understand their own origins. It seems fitting, then, that the birds whose diversity helped inspire Darwin still have stories of evolution to tell. The conventional view of evolution is that mutations occur randomly. The maladaptive ones are then eliminated by competitive pressure while the adaptive ones proliferate. The result, over long periods of time, and in populations that are sometimes divided by external circumstances, is change that eventually crystallizes into new and separate species. This process leaves the door open for hybrids.
The genomes of closely related species can remain sufficiently similar to produce viable offspring. But these genes often do less well than those of parents of the same species. As a result, viable hybrids are also often sterile (think mules) and are also at increased risk for developmental and other diseases. In fact, infertility in male hybrids is so common that it has a name: Haldane’s rule. This sort of thing was enough to persuade most of Darwin’s twentieth-century disciples that the need to avoid hybridization was actually a driving force that caused natural selection to erect reproductive barriers between incipient species, encouraging so is speciation. However, there is another way to look at hybridization. Mixing the traits of two parent species could actually leave their hybrid offspring in better condition. This is called hybrid vigor or heterosis. The interaction of the genes of two species can even produce traits shown by neither parent. This is known as transgressive segregation, and the resulting hybrid can be surprisingly well suited to an entirely new niche, as is the case with the Big Birds. Both the evil and beneficial effects of hybridization are real. The question is who wins most often in practice? In plants, it is often the beneficial. This is a consequence of the unusually malleable genetics of plants. The nuclear genomes of complex organisms (animals, plants, fungi, and unicellular organisms such as amoebas) are divided into bundles of DNA called chromosomes. Such organisms are generally haploid or diploid, which means that each cell nucleus contains one or two copies of each chromosome. Humans are diploid. They have 23 chromosome pairs, for a total of 46 single chromosomes. But there are exceptions. Plants, for example, are often polyploid, which means that each nucleus contains copies in multiples greater than two. For example, Californian coastal redwoods have six copies. Since sequoia cell nuclei have 11 distinct types of chromosomes, they house a total of 66 chromosomes in all. Sometimes, polyploidy is the result of spontaneous doubling of an organism’s genome. Often, however, it is a consequence of hybridization, with the chromosomes of both parents ending up in a single nucleus. However it turns out, polyploidy provides backup copies of genes that natural selection can work on while other versions continue with their original function. And if it is also the result of hybridization, it brings the further possibilities of heterosis and transgressive segregation. Furthermore, by changing the chromosome count of an organism, polyploidy has another pertinent effect. Create an instant breeding barrier with both parent species. This gives a new incipient species the opportunity to establish itself without being reabsorbed into one of the parental populations. The results can be spectacular. Recent evidence suggests, for example, that hybridization between two plant species in the distant past, followed by a simple doubling of the number of chromosomes in their offspring, may be responsible for much of the extraordinary diversity in flowering plants that is seen. today. Plants appear to be easy beneficiaries of hybridization. For many animals, however, and for mammals in particular, extra chromosomes do not serve to make things better, but to disturb them. Why, it’s not entirely clear. Cell division in animals seems more easily confused by superfluous chromosomes than in plants, so this could be a factor. 
Plants also have simpler cells, which are better able to accommodate extra chromosomes. Whatever the details, animal hybrids seem to suffer from the effects of genetic incompatibility much more acutely than plants, and are therefore less able to benefit from heterosis. Evolutionary biologists have therefore long assumed that hybridization plays a negligible role in animal evolution, and there was little evidence to suggest otherwise. Advance in DNA sequencing changed that by allowing people to look under the hood of evolutionary history. This uncovered a steady stream of animals breathed alive entirely by hybrid speciation. They include some familiar names. The European bison, for example, is the result of hybridization, over 120,000 years ago, between two now extinct species: the Ice Age steppe bison and the aurochs. The latter were the wild antecedents of modern domestic cattle and survived in the Jaktorow Forest, Poland, until 1627. Something similar is true of the Clymene Atlantic dolphin. Genetic analysis revealed that this cetacean, which roams the salt flats between West Africa, Brazil and the Gulf of Mexico, owes its existence to a hybridization that took place between two other globetrotting dolphins, the striped dolphin and the spinner dolphin. . At least one hybrid animal also traces its ancestry back to three species. Genetic analysis proves this Artibeus schwartzi, a Caribbean fruit bat, is the result of hybridization over the past 30,000 years of the Jamaican fruit bat (Artibeus jamaicensis), the South American flat-faced fruit-eating bat (Artibeus planirostris) and a third animal, not yet identified, that researchers speculate may be extinct. Another kettle of fish It also appears that, as in the case of flowering plants, hybridization may fuel the explosive radiation of new animals. The best known example is the case of cichlids from the African Great Lakes, in particular Lake Victoria, Lake Tanganyika and Lake Malawi. Great Lake cichlids are a group of thousands of closely related fish, famous for their panoply of shapes, sizes and colors (see photo). Each is adapted to a different depth and ecological niche. The evolutionary history of cichlids has long puzzled biologists. Lake Victoria, in particular, comes and goes with the climate. Its current instantiation is less than 15,000 years old. In evolutionary terms this is a blink of an eye, but lake cichlids diversified into more than 500 species during that time. The reason is hybridization. Using genetic analysis to place Lake Victoria cichlids within the larger cichlid family tree, the researchers found that they descended from an encounter between two distinct parental lineages, one swimming in the Congo and the other in the Nile. The value of being such a genetic mosaic is evident from the story of one of the best-studied cichlid genes, which codes for a long-wave sensitive protein called opsin found in the retina of the eye. This protein determines the sensitivity of the eye to red light. This is important because red light levels drop rapidly in deeper waters. Consequently, fish living at different depths need eyes that are tuned differently from each other. The Congo cichlid lineage had eyes optimized for clear and shallow water. The lineage view of the Nile was more in tune with the deep and dark. The hybrids were able to cut and modify these genetic variants to produce a range of light sensitivity. This allowed them to colonize the full depth of the water column in Lake Victoria as it developed. 
The new lake, for its part, offered cichlids a myriad of empty ecological niches to fill. The result was a sudden and explosive process called “combinatorial speciation”. Elsewhere in the natural world, combinatorial speciation appears to have contributed to the surprising diversity of Sporophila, a genus of 41 neotropical songbirds, and the munias, mannikin, and silverbills of the genus Lonchura, a group of 31 estrildid finches that range across Africa and Southeast Asia. Nor is it only in vertebrates that this phenomenon raises its head. Heliconius, a genus of 39 flaming New World butterflies, also owes its captivating diversity to combinatorial speciation. It’s raining cats, dogs and bears These findings confuse Darwin’s concept of speciation as a slow and gradual process. Biologists now know that under the right circumstances, and with the help of hybridization, new species can emerge and consolidate over a handful of generations. This is an important amendment to evolutionary theory. However, it is true that hybrid speciation in its full form remains rare for animals. It requires an unlikely congruence of factors to maintain a new hybrid population reproductively isolated from both parental species. The survival of the Galapagos Big Bird lineage, for example, involved physical isolation from one and strong sexual selection against the other. Most commonly, an incipient hybrid population is reabsorbed by one or both parental species before it can establish itself properly. The result is a percolation of genes from one species to another, rather than a complete hybrid. This is called introgressive hybridization or, simply, introgression. DNA analysis of a long list of closely related animals shows that this version of hybridization is much more common than the complete form. It might even be ubiquitous. The North American gray wolf, for example, owes its gene for melanism – the deep black fur shown by some wolves – to the introgression of domestic dogs brought from Asia 14,000 years ago by early American human settlers. In forest-dwelling wolves this gene has undergone strong positive selection, suggesting that it is adaptive. The most obvious explanation is that melanism provides better camouflage in the deep stygian woods of North America. Alternatively, female wolves may simply prefer their tall, dark, handsome males. Panthera– the genus to which most big cats belong – is even more impressive in the scope of its introgressive weave. It has five members: lions, tigers, leopards, snow leopards and jaguars. It has long been known that these cross successfully in captivity, producing crosses called ligri (lion x tiger), jaglioni (jaguar x lion) and so on. But recent analyzes show that this also happened in nature. The researchers identified at least six past introgressive episodes in the genre, with each member involved in at least one of them. The most promiscuous of the five appears to be the lion. Gene variants have spread among lions and tigers, lions and snow leopards, lions and jaguars. There is also evidence that at least some of this gene flow has been adaptive. Three lion genes incorporated into the jaguar genomes are known to have been heavily selected. Two of these are involved in vision, in particular they help guide the development of the optic nerve. Genetic analysis also reveals a long history of hybridization between polar bears and grizzlies, the largest of their brown cousins. 
It is still unclear whether this had an adaptive value, but it may soon have a chance to prove itself. As climate change warms the polar bear’s Arctic home, the species may have to adapt quickly. A splash of grizzlies, a group accustomed to more temperate climates, could help achieve this. The most studied case of introgression in animals is, however, closer to home than in wolves, big cats and bears. He’s looking at you from the mirror. The most up-to-date evidence suggests this Homo sapiens was born more than 315,000 years ago from gene flow between a series of interconnected population groups scattered across Africa. Whether these populations were different enough to be considered distinct species is still debated. In the African Pleistocene grasslands, however, these ancestral groups were not alone. Their world was interspersed with a menagerie of other hominins. And interspecies mating appears to have been widespread. My family and other hominins Several members of this human menagerie appear to have descended from Homo heidelbergensis, a species that spread to eastern and southern Africa about 700,000 years ago before crossing the Middle East to Europe and Asia. This species – a possible ancestor of the parent groups of Homo sapiens– also gave rise to at least two others, the Neanderthals (Homo neanderthalensis) and the Denisovians (Homo denisova). The former survived in Europe until 28,000 years ago, while the latter, an Asian group, lasted until about 50,000 years ago. Other hominid species around at the time emerged directly from Homo erectus, a more primitive creature who was also the ancestor of Homo heidelbergensis and who, a million years earlier, had traced a transcontinental expansive path similar to that of heidelbergensis. Local descendants of erectus have been largely displaced from heidelbergensis when it arrived. But some resistance has survived in the corners of the Old World heidelbergensis never achieved. These included the islands of Flores in Indonesia and Luzon in the Philippines. Here was that diminutive Homo floresiensis is Homo luzonensis—The “hobbits” of the island – lasted, like the Denisovians, up to 50,000 years ago. There were probably isolated descendants of older cousins as well. At least one is known, Homo naledi, which preceded the emergence of Homo erectus and they still roamed southern Africa about 230,000 years ago. Eventually this great hominid circus came to an abrupt end. The record in Africa is opaque. But in Europe, Asia and Oceania it is clear that the arrival of modern humans has coincided with a great disappearance of local hominids. Be it disease, competition for scarce resources or perhaps even genocide, a few thousand years of contact with Homo sapiens it was enough to extinguish every other hominid species. Even a few millennia, however, have proved sufficient Homo sapiens to know his cousins intimately. The record of these romantic entanglements remains in the DNA of almost everyone alive today. In 2010 a team led by Svante Pääbo from the campus of the Max Planck Institute in Leipzig published the first draft sequence of the Neanderthal genome. This led to the extending discovery of Neanderthals DNA they make up 1-4% of the modern human genome in all populations outside sub-Saharan Africa. This is consistent with a series of hybridization reports in Europe, the Middle East and Central Asia dating back to around 65,000 years ago. The Neanderthal legacy helped Homo sapiens adapt to the needs of the environments of these unknown places. 
There seems to have been a strong selection, for example, in favor of Neanderthal genes linked to skin and hair growth. These include bnc 2, a gene linked to skin pigment and freckles that is still present in two thirds of Europeans. There appears to have also been a selection for Neanderthal-derived genes that deal with pathogens. Some govern the immune system’s ability to detect bacterial infections. Others encode proteins that interact with viruses. The Denisovians and their contribution a Homo sapiens, were another of Dr. Pääbo’s discoveries. In 2009, one of his teams was kidnapped DNA from a fossilized finger bone excavated from Denisova cave in the Altai Mountains of Siberia. This bone turned out to belong to a previously unknown species that got its name from the cave in which it was found. Physical specimens of this species remain rare. Examination of living people, however, reveals that it is Denisovan DNA they make up 3-6% of the genome of contemporary Papuans, Aboriginal Australians and Melanesians. Many Chinese and Japanese also bring Denisovan DNA, albeit at lower rates. As with the Neanderthals, this inheritance brought benefits. The Denisovan version of a gene called epas1 modulates the production of red blood cells, which carry oxygen. This helps modern Tibetans survive at high altitudes. Denisovan tbx 15 is wars 2 likewise it helps the Inuit survive the intense cold of the Arctic by regulating the amount of metabolic heat they produce. We count multitudes That the Denisovans could hide in the modern human DNA yet leaving so few fossil traces left geneticists wondering what other ghosts they might have found. The genomes of sub-Saharan Africans, in particular, reveal evidence of at least one further intertwining. In 2012 a genomic analysis of members of the Baka, Hadza and Sandawe, three groups of people of ancient lineage, suggested an archaic introgression. In 2016 a deeper analysis focused on the Baka identified this aspect over the past 30,000 years. In February, a study of members of two other groups, the Yoruba and the Mende, confirmed that between 2% and 19% of their genomes can be traced back to an unidentified archaic species. It is unclear whether this is the same as what contributed to the Baka, Hadza and Sandawe, but it appears to have deviated from the line that leads directly to Homo sapiens not long before the Neanderthals and Denisovans, an African Neanderthal, if you will. The same genetic tools also revealed deeper ghosts. The Denisovans show signs of hybridization with a “superarchaic” lineage, perhaps Homo erectus Yes. This constitutes 1% of the genome of the species. About 15% of this superarchaic heritage has, in turn, been passed on to modern humans. There is also evidence of a minuscule genetic contribution to African populations by an equally super-archaic relative. To be human, therefore, is to be a multispecies bastard. However, as the example of the big cats in particular shows, Homo sapiens it is not, in this, an exception. Hybridization, once seen as a spear bearer in the great theater of evolution, is fast becoming a star of the show. Meanwhile, Darwin’s idea of a simple and universal family tree is relegated to wings. In its place, some experts now prefer the idea of a tangled bush of interconnected branches. But even this is an imperfect comparison. A more appropriate analogy is a frayed rope. The species are intertwined with single threads. 
Where evolution proceeds in an orthodox Darwinian way, braids dissolve, strands split and new species result. But the rope doesn’t fray neatly. Introgression strands cross from braid to braid and occasionally two tangle to form a new braid altogether. This is a more complex conception of evolutionary history, but also richer. Few things in life are simple – why should life itself be? ■ This article appeared in the Science and Technology section of the print edition under the title “Match and Mix”
Advanced Multiplication Using Numbers Between 100 and 1000
In this advanced multiplication worksheet, students solve 12 problems in which two 3 digit numbers are multiplied. There are no instructions or examples on the page. This is an interactive online worksheet that can be printed.

Numbers in a Multiplication Table
Identifying patterns is a crucial skill for all mathematicians, young and old. Explore the multiplication table with your class, using patterns and symmetry to teach about square numbers, prime numbers, and the commutative and identity...

Multiply Fractions by Whole Numbers: Using Repeated Addition
Prior experience with addition of fractions guides young mathematicians as they learn how to multiply fractions by whole numbers. The first video in this two-part series outlines this process with clear explanations and examples....

Math Worksheet: Advanced Multiplication Using Numbers Between 10 and 100
In this online interactive 2 digit multiplication worksheet, students practice their math skills as they solve 15 problems that require them to multiply numbers between 10 and 100. The answers may be typed in on the worksheet and...
ENIAC was started in 1943 and completed at the end of 1945 - and so missed the war by a few months. It was huge. It used just short of 18000 vacuum tubes arranged in racks 100 feet long in total. It consumed enough electricity, about 150kW, for a small town and disposing of the heat produced by the vacuum tubes was no small problem. But it worked. It could add two numbers in 0.2 milliseconds and this speed increase made calculations possible would have simply taken too long by other methods available at the time. In today's terms ENIAC's arithmetic performance is equivalent to an IBM PC running at .005MHz - which is a lot better than the relay computers could manage. It even managed to work for reasonable periods of time between vacuum tube failures, up to 20 hours. ENIAC was used for 10 years and did a huge amount of computation in that time. It not only computed trajectories but ran the simulations needed for the H bomb. It also caught the public imagination. In a public demonstration it added numbers and plotted shell trajectories in real time. It took only twenty seconds to work out the trajectory of a shell that took 30 seconds to reach its target. ENIAC's flashing lights and switch banks also set the look of machines in film and fiction that has hardly changed through to the present day! Betty Jean Jennings (left) and Fran Bilas (right), two of ENIAC's six female programmers, operate its main control panel. ENIAC differed in a number of important ways from a modern computer. The first is, that despite the work of Shannon and others, ENIAC used the decimal system to do arithmetic. It worked with ten digit signed arithmetic. The arithmetic units also mimicked the way that mechanical calculators worked using gear wheels. A ring counter can be considered to be the electronic equivalent of a rotating gear wheel in the sense that it has ten states only one of which is active. An input pulse makes the state change to the next and when the last state is reached a pulse is output and the ring moves back to the first state. The output is used as a carry to the next position of the number and addition is performed by feeding in the correct number of pulses. So adding two numbers together really did simulate the rotation of gear wheels counting the number of times the calculating machine's handle is turned. This is a long way from the way a modern computer uses Boolean logic to perform addition. It also didn't have the internal structure of a modern computer and was more like a collection of logic circuits that could be connected in different ways. It had to be programmed using plug board and wires that looked like a small telephone exchange. It could take days to "program" the machine to solve a problem that would take only a few minutes to run. It wasn't a stored program machine but it did have loops and conditionals but the programs were created using wire connections and setting switches. This was the way that existing and very sucessful analog computers were organized. They had units that performed mathematical functions such as addition, multiplication, integration and so on and these were writed up together to provide a simulation of a mathematical equation. ENIAC wasn't so different just digital. So was ENIAC the first electronic digital computer? You can now see that the question is a difficult one as it all depends what you mean by digital computer. 
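As an aside, the decade ring counter described above is easy to mimic in software. The sketch below is purely illustrative (ENIAC did this with vacuum tubes, not code, and the class and function names here are invented): each decade has ten states, an incoming pulse advances it by one, and wrapping from 9 back to 0 emits a carry pulse to the next decade.

class DecadeRingCounter:
    # Ten states, only one of which is "active" at a time, like a gear wheel.
    def __init__(self):
        self.state = 0
    def pulse(self):
        # Advance one position; return True when wrapping from 9 to 0 (the carry).
        self.state = (self.state + 1) % 10
        return self.state == 0

def add_pulses(counters, digit, pulses):
    # Feed input pulses into one decade, propagating carries to the higher decades.
    for _ in range(pulses):
        i = digit
        while i < len(counters) and counters[i].pulse():
            i += 1

acc = [DecadeRingCounter() for _ in range(3)]   # index 0 = units, 1 = tens, 2 = hundreds
add_pulses(acc, 0, 7)
add_pulses(acc, 0, 8)
print([c.state for c in reversed(acc)])          # [0, 1, 5], i.e. the value 15

Addition really is just counting pulses, which is why the text compares it to turning the handle of a mechanical calculator.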
The serious challengers for the title are the Colossus - a secret code breaking computer built in the UK, Konrad Zuse's relay/electronic machines built in Germany and the Atanasoff-Berry Computer. Each of these had differences from modern machines - the Colossus was programmed using hardware and not general purpose. The ABC used binary was programmed using hardware and only solved linear equations. Zuse's Z3 machine looked more like a modern computer than the other two but lacked conditional branching. If you restrict the definition of computer to stored program computer then the Manchester "Baby" is probably the best candidated for the first. The biggest problem in constructing a stored program computer was how to achieve the large amount of memory needed to store the program and the data. Using vacuum tubes was prohibitively expensive. Eckert suggested using mercury delay lines of the type used to store radar pulses and this indeed proved possible. However, EDVAC, his next computer, wasn't the first stored program computer to be put into operation. Manchester University built and operated a prototype in 1948 and EDSAC at the University of Cambridge was completed in 1949 but EDVAC wasn't completed until 1952. After ENIAC came EDVAC, a true stored program computer. Instead of having to be programmed using plug boards and wires, EDVAC stored its instructions along with its data in the manner of all computers since. The idea for a stored program computer is attributed to Von Neumann as a consequence of his accidental meeting with Goldstine on a railway platform. However no idea is completely new and it is possible that Presper Eckert had a hand in it. The strange twist to this story is that Mauchly and Eckert left the EDVAC project before it was completed. The reason was that the University of Pennsylvania insisted that all inventions patents etc. created by people working for it were the property of the University. Mauchly and Eckert refused to sign the release giving the University the rights to the technology that they had created. As a result they left, the Moore School lost the lead in computing and the first computer company - The Electronic Control Company - came into existence. This may not seem like of a gamble, but leaving secure university jobs to start an industry which some experts had said would have a market of only six machines in the whole of the USA was very risky. They failed to find backers and Eckert's father put up the $25000 needed - and so once again set the tone for future startups! They tried to find a buyer for their planned machine and for the second time in the history of computing the US Census Bureau fitted the bill. The first time it was Hollerith's punch cards and associated equipment but for the imminent 1950 census punched cards looked distinctly impractical. Keen to use electronics they placed the first commercial order for a computer - Mauchly and Eckert's UNIVersal Automated Computer, i.e. UNIVAC - in 1946 for $350,270. UNIVAC at the US Census Bureau Work began in 1946 but neither Mauchly or Eckert were reasonable managers and they quickly fell behind schedule and ran into debt. Even so they managed to find a total of four buyers for their machine and pioneered new revolutionary techniques such as digital tape drives for data storage. But lack of funds remained their main problem. In 1950 the re-named Eckert-Mauchly Computer Company was taken over by Remington Rand - yes, the people who made razors and assorted office equipment! 
Remington paid $200,000 to Eckert and Mauchly and guaranteed their employment for eight years. In 1951 the first Univac was delivered, late and over budget. Univac was the first commercial computer in the US (LEO 1 was produced commercially in the UK in the same year) and, proving the pessimists wrong, various companies immediately ordered a total of 50.

J. Presper Eckert and CBS anchor Walter Cronkite discuss the UNIVAC 1952 US Presidential Election prediction

UNIVAC became an overnight TV star when it correctly predicted the outcome of the 1952 US presidential election, again proving the pundits wrong. The headlines the next day read "Machine Makes Monkey Out of Man", and the modern era of computing was well and truly started. Remington Rand, and more or less everyone else, eventually lost the lead to a late starter in the computer field - IBM - but that's another story.
Fprintf in MATLAB is a crucial function for formatting text and data output. Whether you're saving results to a file or displaying them in the command window, understanding how to use this function can make your life easier. Let's get straight to the point and explore how to make the most of fprintf in your MATLAB projects. The basic syntax of fprintf in MATLAB is quite straightforward. It's primarily used to format text and numerical data for output. The function takes a format specifier, followed by the variables you want to format. % Basic Syntax fprintf('Format Specifier', variables) 'Format Specifier'is a string that tells MATLAB how to format the variables. The variablesare the data you want to output. Format specifiers are crucial in determining how your data will appear. They are placeholders in the format string that dictate the type of data (integer, float, string, etc.). % Using Format Specifiers fprintf('The value of pi is: %f\n', pi); %fis the format specifier for floating-point numbers, and \nadds a new line. The value of You can direct the output of fprintf either to the command window or to a file. To write to a file, you'll need a file identifier, usually obtained using % Writing to a File fileID = fopen('example.txt', 'w'); fprintf(fileID, 'Hello, World!\n'); fclose(fileID); fileIDis the file identifier. The fprintffunction writes "Hello, World!" to a file named Don't forget to close the file using Some commonly used options include %d for integers, %s for strings, and %e for scientific notation. % Frequently Used Options fprintf('Integer: %d, String: %s, Scientific: %e\n', 42, 'hello', 2.718); Each format specifier is replaced by the corresponding variable in the list. When it comes to text formatting, fprintf offers a variety of options. You can align text, control the number of decimal places, and even pad numbers with zeros or spaces. % Aligning Text Right fprintf('%10s\n', 'right'); %10saligns the string "right"to the right within a 10-character wide field. You can pad numbers with zeros or truncate them to a specific number of decimal places. This is particularly useful for creating neat tables or data logs. % Padding With Zeros fprintf('Padded Number: %05d\n', 7); %05dpads the integer 7 with zeros, resulting in the output "Padded Number: 00007". Controlling the decimal precision is another handy feature. You can specify the number of digits after the decimal point for floating-point numbers. % Controlling Decimal Places fprintf('Pi with 2 decimal places: %.2f\n', pi); %.2frounds the value of pi to two decimal places. The output will be "Pi with 2 decimal places: 3.14". Often, you'll need to combine text and numbers in a single output line. The format specifiers can be embedded within a text string for this purpose. % Combining Text and Numbers fprintf('The square of %d is %d\n', 5, 25); %dformat specifiers are replaced by the integers 5 and 25, respectively. You can also specify both field width and precision for better control over the output format. % Field Width and Precision fprintf('Pi: %10.4f\n', pi); %10.4fspecifies a field width of 10 and rounds pi to four decimal places. The output will be "Pi: 3.1416", with spaces filling the unused field width. When you need to write data to files, fprintf is your go-to function in MATLAB. The process involves opening a file, writing to it, and then closing it when you're done. 
% Open File fileID = fopen('data.txt', 'w'); % Write to File fprintf(fileID, 'Hello, MATLAB!\n'); % Close File fclose(fileID); fopenopens a file named fprintfthen writes "Hello, MATLAB!" to the file. Finally, fclosecloses the file. Different file modes can be used with fopen to control how data is written to the file. The most common modes are 'w' for write, 'a' for append, and 'r+' for read and write. % Append to File fileID = fopen('data.txt', 'a'); fprintf(fileID, 'Appending this line.\n'); fclose(fileID); To write multiple lines, you can either use multiple fprintf statements or use a loop for repetitive structures. % Writing Multiple Lines fileID = fopen('numbers.txt', 'w'); for i = 1:5 fprintf(fileID, 'Number %d\n', i); end fclose(fileID); The loop iterates five times, each time writing a new line. It's good practice to include error handling when working with files. This ensures that the file is successfully opened before attempting to write to it. % Error Handling fileID = fopen('data.txt', 'w'); if fileID == -1 error('Failed to open file.'); end fprintf(fileID, 'This will only write if the file opened successfully.\n'); fclose(fileID); fopenfails to open the file. This prevents any subsequent code from executing, avoiding potential issues. The most straightforward use of fprintf is to display output in the MATLAB command window. This is particularly useful for quick debugging or for providing status updates during a script's execution. % Display Text in Command Window fprintf('This is a test message.\n'); fprintf doesn't add a new line at the end of the output. If you want to suppress new lines or control line breaks, you can do so explicitly. % Suppress New Line fprintf('This message will not end with a new line.'); \nat the end of the format string means that the message will not end with a new line. You can also display variables directly in the command window by including them as additional arguments to % Display Variables x = 5; fprintf('The value of x is %d.\n', x); xis displayed in the command window. %dformat specifier is replaced by the value of Special characters like tabs ( \t) and carriage returns ( \r) can be used to control the layout of the output. % Using Special Characters fprintf('Column1\tColumn2\n'); fprintf('1\t\t2\n'); For real-time updates, you can overwrite the current line in the command window using the carriage return character % Real-Time Updates for i = 1:5 fprintf('Processing %d out of 5\r', i); pause(1); end fprintf('\n'); i, giving a real-time update. After the loop, a new line is added to finalize the output. In a temperature monitoring system, it's crucial to log temperature data for analysis and record-keeping. This case study demonstrates how fprintfin MATLAB can be used to log temperature data into a text file efficiently. The system collects temperature data from multiple sensors every minute. The requirement is to log this data into a text file with a timestamp. fprintfto write the temperature data into a text file. First, we open the file in append mode. Then, we use fprintfto write the data and finally close the file. % Open file in append mode fileID = fopen('temperature_log.txt', 'a'); % Simulated temperature data and timestamp temperature = 23.4; % in Celsius timestamp = datestr(now, 'yyyy-mm-dd HH:MM:SS'); % Write to file fprintf(fileID, '%s, %.2f\n', timestamp, temperature); % Close file fclose(fileID); The text file temperature_log.txtgets updated with lines like: 2023-09-13 14:30:00, 23.4 2023-09-13 14:31:00, 23.5 ... 
One of the common pitfalls when using fprintf is forgetting to close a file after writing to it. This can lead to data loss or file corruption.
% Incorrect Usage
fileID = fopen('data.txt', 'w');
fprintf(fileID, 'Some data');
% Missing fclose(fileID);
Always call fclose after you're done writing to a file.

Another issue arises when the format specifiers don't match the type of data you're trying to print. This can result in incorrect or unexpected output.
% Mismatched Format Specifier
fprintf('The value is %d\n', 3.14);
Using %d for a floating-point number will not print the value the way you expect. Make sure the format specifier matches the data type.

Be cautious when opening a file in 'w' mode, as it will overwrite existing files without any warning.
% Overwriting File
fileID = fopen('existing_file.txt', 'w');
fprintf(fileID, 'New data');
fclose(fileID);
Use 'a' mode instead if you want to append data to an existing file.

Using an undefined or incorrect file identifier can result in errors or unexpected behavior.
% Undefined File Identifier
fprintf(badFileID, 'This will cause an error.\n');

Omitting special characters like \n for a new line or \t for a tab can mess up your output formatting.
% Forgetting New Line
fprintf('This is line one.');
fprintf('This is line two.');
In this example, the two lines will appear concatenated because of the missing \n.

Can I Use Fprintf To Write To Multiple Files At Once? fprintf writes to one file at a time, specified by the file identifier. If you need to write to multiple files, you'll have to call fprintf multiple times with different file identifiers.

Why Is My File Empty After Using Fprintf? If your file is empty after using fprintf, it's likely that you forgot to close the file using fclose. Always ensure you close the file to save the data properly.

Can I Use Fprintf To Write Binary Data? fprintf is designed for text-based output. If you need to write binary data, consider using functions like fwrite instead.

What's The Difference Between Fprintf And Sprintf? fprintf writes formatted data to a file or the command window, while sprintf stores the formatted data in a string variable. The format specifiers and syntax are similar, but the output destinations are different.

Why Are My Numbers Getting Truncated? If your numbers are getting truncated, check your format specifiers. Using %d for a floating-point number, for example, may not preserve the decimal part. Use %f for floating-point numbers to maintain the decimal values.
EL Support Lesson Subtraction Story Problems Students will be able to subtract within 20 to solve for an unknown part. Students will be able to describe steps to solve subtraction story problems using manipulatives, pictures and partner support. - Tell students the story problem, "Kristi had eight pennies. Kristi's friend gave her some more pennies. Now Kristi has 12 pennies! How many pennies did her friend give her?" - Seat the students in a circle on the rug, or show them pennies on a document camera. Ask students how much a penny is worth. (One cent!) - Think aloud as you model the problem. Say, "I know that Kristi had eight pennies." Chorally count one penny at a time to show eight. "I want to know how many pennies Kristi's friend gave her. I know she ended with twelve." - Count on as you create a second pile of pennies, "Nine, ten, eleven, twelve. How many pennies are in this part? How many more did I need to make twelve total?" (Four.) - Reflect, "That's right! Kristi's friend gave her four pennies!" Explicit Instruction/Teacher modeling(10 minutes) - Ask students if there would be a way to show how many pennies Kristi's friend gave her using an equation, or number sentence. - Think aloud, "Kristi started with some pennies. There were eight pennies in that part. We did not know how many pennies were in the other part. That part was unknown." - Write 8 + ? on the board, and say, "I will use the question mark because we did not know how many were in that part." Tell students to shrug their shoulders as if asking a question and repeat, "Question mark." - Say, "We know she ended with 12 total pennies." Finish the equation 8 + ? = 12. - Remind students that you solved the problem by using pennies and counting on. Tell students to turn and talk to a partner to share other ideas for ways to solve the problem. Display the sentence frame, "I could solve the problem by ____." - Choose volunteers to share ideas. Guide students in understanding that since 12 is the total number of pennies, 12 - 8 = ? could be used to solve the problem with subtraction. - Tell students, "Since I know the total number of pennies and one part, I can solve for the unknown part using subtraction. Then, I can use addition to check my answer. This is another way to show that Kristi's friend gave her 4 pennies.** Guided Practice(5 minutes) - Tell students that they will work with a partner to solve story problems. Students can choose to use objects, draw pictures or write equations to solve the problems. - Instruct students to read each problem three times. Create a chart to describe the steps for solving the word problems: - Read the problem and think about what is happening (sketch a stick figure with a thought bubble) - Read the problem again, and underline important words in the problem (write common math terms such as more, less, and all together and underline them) - Read the problem again, and solve the problem. Show your thinking! (write 8 + ? = 12, 12- 8 = ?, draw a picture of the problem, write the solution: 4 pennies) - Distribute the Subtraction Story Problems worksheet, and model solving the first problem following the steps as you read the problem three times with a different focus for each read. Group work time(10 minutes) - Students will work with a partner to follow the steps and solve the word problems. Partners should take turns reading each problem a total of three times. - Encourage students to explain their thinking as they solve each problem. 
- Require that partnerships agree on a solution before moving on to the next problem. Additional EL adaptations - Display a poster with numerals and number names 0-20 for reference. - If students do not know number names in English, allow them to say the number names in their home language (L1). - Solve the story problems in a teacher-led small group. Translate the problems to L1 if possible. - Encourage students to act out the problems to improve comprehension. - Allow students to write their own "change unknown" subtraction word problems. Students can exchange problems with a partner and solve. - Circulate as students work on the story problem with their partner. Check for comprehension of the situations presented in the problems. Students may automatically think that addition of two parts is required in word problems that include "more." Encourage students to act out the problems, or model with drawings and manipulatives. - Prompt students to verbalize the steps to solve a story problem by reading the problem three times. - If students do not answer the problems correctly, ask them to explain their thinking. Encourage the self-correction of errors rather than rushing to provide the correct answer. Review and closing(5 minutes) - Review the solutions for problems from the worksheet as time allows. - Ask students to explain the context of the problem in their own words, and to justify their answers by explaining their thinking.
Surface Area = Areas of top and bottom + Area of the side
Surface Area = 2(Area of top) + (perimeter of top) × height
Surface Area = 2πr² + 2πrh

In words, the easiest way is to think of a can. The surface area is the sum of the areas of all the parts needed to cover the can. The area of the whole surface is obtained by adding together the areas of the pieces, using additivity of surface area. The main formula can be specialized to different classes of surfaces, giving, in particular, formulas for areas of graphs z = f(x, y) and surfaces of revolution. In math (especially geometry) and science, you will often need to calculate the surface area, volume, or perimeter of a variety of shapes. Whether it's a sphere or a circle, a rectangle or a cube, a pyramid or a triangle, each shape has specific formulas that you must follow to get the correct measurements.

The surface area of the cylinder is found by adding the area of the circles that form the lid and the base of the cylinder to the area of the rectangular "label" of the cylinder's body, which has a height of h and a base of 2πr when unwrapped. The equation for the surface area is therefore 2πr² + 2πrh.

Therefore, the total surface area of the cone is 83.17 cm².

Example 2: The total surface area of a cone is 375 square inches. If its slant height is four times the radius, then what is the base diameter of the cone? Use π = 3.
Solution: The total surface area of a cone = πrl + πr² = 375 in². Slant height: l = 4 × radius = 4r. Substituting l = 4r gives πr(4r) + πr² = 5πr² = 375, so with π = 3 we have 15r² = 375, r² = 25 and r = 5. The base diameter is therefore 2r = 10 inches.

Surface area formulas and volume formulas appear time and again in calculations and homework problems. Pressure is a force per area and density is mass per volume. These are just two simple types of calculations that involve these formulas. This is a short list of common geometric shapes and their surface area formulas and volume formulas.

Body Surface Area: the total surface area of a human body is referred to as body surface area (BSA). Direct measurement of BSA is difficult, and as such many formulas have been published that estimate BSA.

Surface Area of Circle Formula: the surface area of a circle is the total space defined within the boundaries of the circle. Since the circle is a two-dimensional figure, in most cases the area and the surface area are the same. It is easy to calculate the surface area of a circle when either the radius, diameter, or circumference is known.

The surface area of a sphere is given by the formula 4πr², where r is the radius of the sphere. This formula was discovered over two thousand years ago by the Greek philosopher Archimedes.
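To tie the formulas above together, here is a small sketch that evaluates them, including the cone of Example 2; the cylinder and sphere inputs are made-up numbers.

import math

def cylinder_surface_area(r, h):
    # Two circular ends plus the unwrapped rectangular side: 2*pi*r^2 + 2*pi*r*h
    return 2 * math.pi * r**2 + 2 * math.pi * r * h

def cone_surface_area(r, l, pi=math.pi):
    # Lateral surface plus base: pi*r*l + pi*r^2, where l is the slant height
    return pi * r * l + pi * r**2

def sphere_surface_area(r):
    # Archimedes' result: 4*pi*r^2
    return 4 * math.pi * r**2

print(round(cylinder_surface_area(3, 5), 2))   # a made-up can with r = 3, h = 5
print(round(sphere_surface_area(2), 2))        # a made-up sphere with r = 2

# Example 2 from the text: area 375 square inches, slant height l = 4r, pi taken as 3
r = 5
print(cone_surface_area(r, 4 * r, pi=3))       # 375.0, so the base diameter is 2r = 10 inches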
Content Reviewers:Rishi Desai, MD, MPH Analysis of variance, or simply, ANOVA, is a type of parametric statistical test used to determine if there’s a significant difference between the means or averages of three or more groups. And significance is normally defined by a p-value of less than 0.05 or 5%. Now when doing any parametric test, there are three key assumptions that we have to make about the population. First, the sample population must have been recruited randomly. Choosing names randomly ensures that the people included in the study will have similar characteristics to the target population. This is important because that ensures that the results of the test can be applied to the target population - meaning it has good external validity! The second assumption is that each individual in the sample was recruited independently from other individuals in the sample. In other words, no individuals influenced whether or not any other individual was included in the study. For example, if two friends decided to get their blood pressures measured on the same day, and they were both included in the study, these two individuals would not be independent of each other and the second assumption would not be met. Like random sampling, independent recruitment of individuals is important because it ensures that the sample population approximates the target population. The third assumption is that the sample size is large enough to approximate the target population, which usually means having more than 20 people. If it’s impossible to get a large sample size, then the sample population must follow a normal bell-shaped distribution for the characteristic being studied because that’s what we would expect to see in the target population. Okay, now let’s say there’s a certain blood pressure medication, called Medication A, and you want to figure out if it helps lower systolic blood pressure after taking it for three months and after taking it for six months. So, you find 10 people and give each of them Medication A. Then, you measure each of their systolic blood pressures at time 1 - which is the time you initially gave them the medication - and then measure it again at time 2, let’s say 3 months later, and time 3, let’s say 6 months after they started taking the medication. You find out that the mean systolic blood pressure measurement at time 1 is 138; at time 2 it’s 132, and at time 3 it’s 130. Now, the next step is to figure out if 138, 132, and 130 are significantly different from one another, and you do that by performing an ANOVA test. Specifically, we would use a repeated measures ANOVA test, because we’re looking at the same group of people at multiple time periods. In a repeated ANOVA test, time is the independent variable, and in this example, systolic blood pressure is the dependent variable. It might be tempting to think that medication type is the independent variable in this study, but this isn’t the case, since everyone in the study is taking the same medication type. For example, let’s say there are three medications called Medication A, B, and C. A one-way ANOVA test would compare the systolic blood pressure measurements for people who have been taking one of the three types of medications for 6 months. In this example, medication type is the independent variable instead of time, because you measure the blood pressure of every person in the study at the same time. Typically, a repeated ANOVA test starts with two hypotheses. 
The first hypothesis is the null hypothesis, and it says that the means of each group are equal. In other words, the null hypothesis is that the mean systolic blood pressure is the same for people at time 1, 2, and 3. The second hypothesis is the alternate hypothesis, and it says that at least one group’s mean is significantly different from the others. So, the alternate hypothesis in our example is that the mean systolic blood pressure is not the same for people at time 1, 2, and 3. One important thing to know is that ANOVA doesn’t tell you which group’s mean is different than the others or whether the mean is higher or lower; it simply tells you that the groups’ means are not equal. Now, there are six steps to test these hypotheses. The first step is to calculate the mean of each individual group and the overall mean or grand mean - which is the mean blood pressure measurements for all the groups. Since the means for each group are 138, 132, and 130, we can calculate the overall mean by adding up each group - so 138 plus 132 plus 130, which is 400. Then, we divide that by the number of groups, which is 3. So, the overall mean is 400 divided by 3, or approximately 133. The second step is to find the between-group variation, which is also called the sum of squares-between, or the SSB. The sum of squares-between is a measure of how similar each group’s mean is to the overall mean. To find the sum of squares-between, we start by subtracting each group’s mean from the overall mean and squaring it, which is called the squared difference. Then, you multiply the squared difference by the number of people in that group. For a repeated ANOVA test, the number of people in each group stays consistent unless people drop out of the study in the middle of it. In this example, there are 10 people in each group. So, for Time 1, we subtract the mean blood pressure of the Time 1 group, which is 138, from the overall mean, which is 133, and that equals negative 5. The squared difference is negative 5 squared, or 25, and 25 times 10 is 250. For the Time 2 group, the mean is 132, so 133 minus 132 is 1, and 1 squared is still 1, and 1 times 10 equals 10. For the Time 3 group, the mean is 130, so 133 minus 130 is 3. 3 squared is 9, and 9 times 10 is 90. Now that we have the values for each group, we add them together to get the sum of squares-between. So, 10 plus 250 plus 90 is 350. A larger sum of squares-between tells us that the group means and the overall mean are spread out or different from one another, and a smaller sum of squares-between tells us that the group means are fairly similar to the overall mean. The third step in the ANOVA calculation is to find the within-group variation, which is also called the sum of squares-within, or SSW. The sum of squares-within is a measure of how similar each individual blood pressure measurement is from its own group mean. To find the sum of squares-within, you start by finding the squared differences for each person in one group, and to do this, you take each individual blood pressure measurement and subtract that group’s mean, then square it. For example, let’s just take the first 3 systolic blood pressure measurements in the Time 1 group, which are 129, 142, and 143. Since the group mean is 138, you subtract 138 from each individual measurement, so 129 minus 138 is negative 9, 142 minus 138 is 4, and 143 minus 138 is 5. 
Then you square each number and add them all together to get the squared difference - so when you add up negative 9-squared, or 81, and 4-squared, or 16, and 5-squared, or 25, you get 122. The squared difference is larger for groups with more people, so let’s say the squared difference of the Time 1 group is 330, and the squared differences for the Time 2 and Time 3 groups are 310 and 265. As a general rule, if all of the groups have equal sample sizes - like if each group has 10 people - then groups with higher squared differences, like the Time 1 group, have more variation than groups that have lower squared differences, like the Time 3 group. Now, to get the sum of squares-within, we add up all the squared differences for each group. So 330 plus 310 plus 265 equals 905. To do step 4, we have to know a little more about the sum of squares-within. The sum of squares-within is made of subject-level variation, which is also called the sum of squares of subjects or SSs, and random error, which is also called the sum of squared error or SSE. The values for the sum of squares of subjects and sum of squared-error add up to the value of the sum of squares-within. The sum of squares of subjects is basically the variation caused by differences in people’s individual characteristics, like sex, age, or genetic differences. For example, let’s say we’re measuring blood pressure in two groups of people. People who are older tend to have higher blood pressure, so if there are lots of older people in one group and lots of younger people in the other group, then the individual blood pressure measurements in the first group will be higher than the measurements in the second group, and the sum of squares of subjects will be high.
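The quantities described so far (grand mean, sum of squares-between, sum of squares-within) can be computed in a few lines. The Python sketch below uses made-up blood pressure values whose group means match the 138/132/130 in the example; it is only an illustration of the arithmetic, not the study's actual data.

# Repeated-measures ANOVA building blocks (illustrative data only).
# Each list holds the 10 systolic blood pressures measured at one time point.
time1 = [129, 142, 143, 135, 140, 138, 136, 141, 139, 137]   # mean 138
time2 = [130, 135, 131, 128, 134, 133, 132, 135, 130, 132]   # mean 132
time3 = [128, 131, 129, 127, 133, 130, 131, 132, 129, 130]   # mean 130
groups = [time1, time2, time3]

group_means = [sum(g) / len(g) for g in groups]
grand_mean = sum(group_means) / len(group_means)          # step 1: overall (grand) mean

# Step 2: sum of squares-between (SSB): squared difference between each group
# mean and the grand mean, weighted by the number of people in the group.
ssb = sum(len(g) * (grand_mean - m) ** 2 for g, m in zip(groups, group_means))

# Step 3: sum of squares-within (SSW): squared difference between every single
# measurement and its own group's mean, summed over all groups.
ssw = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)

print(grand_mean, ssb, ssw)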
Have you ever been bullied or know someone who has been bullied? I know I have. October is National Bullying Prevention Month. Each October since 2006, there has been a national effort to raise awareness about bullying and provide education and resources to try to prevent it. According to data from 2017, about 20% of youth ages 12-18 experienced bullying at school and nearly 30% reported cyberbullying during their lifetime. That is a lot of our country’s youth! What Exactly is Bullying? Bullying is unwanted aggressive behavior. Bullying must have a real or perceived power imbalance between the bully and the victim, where the bully uses their power to control or hurt their victim. The bullying behavior needs to be repeated over time, or at least have the potential to repeat over time. There are three categories of bullying: - Verbal bullying includes teasing, taunting, threats, or name-calling - Social or relational bullying includes ignoring someone on purpose, ostracizing, spreading rumors, or embarrassing someone - Physical bullying includes damaging belongings or harming another’s body such as spitting, hitting, pushing, rude gestures, or tripping The constant and easy access of cell phones, social media, and the internet has increased the real dangers of cyberbullying. (Photo source: UF/IFAS) Technology has changed the ways of bullying. Bullying is no longer only ‘picking on’ someone, making fun of them, calling them names, or ignoring them at school. The constant and easy access of cell phones, social media, and the internet has truly expanded bullying to an unthinkable, unending scale. There are many ways to bully someone online, including: - Verbal attacks, mean messages, or rumors on social media accounts, online games such as Fortnite, or through email or text - Releasing embarrassing or inappropriate pictures, GIFs, or videos online or through text (e.g. sexting) - Creating fake profiles or hacking into someone’s account online in order to hurt that person Perhaps one of the most dangerous things about cyberbullying is once something is posted online and is circulated, it’s very hard to permanently remove. This oftentimes makes escape from the bullying unusually difficult or even seemingly impossible. It’s so important to keep up with ways technology is advancing in order to protect ourselves from things like cyberbullying. Effects of Bullying The negative psychological effects of bullying are very real – for the bully, the victim, and those who may witness it. For the bully, they have a greater risk of using substances, engaging in risky or violent behavior, being abusive in future relationships, committing crimes, and developing other external behavior problems. Effects of bullying include low self-esteem, fear, loneliness, heartache, and potential physical illness. These effects put a widespread toll on the mental, physical, and social health of the victims and also those who witness bullying. The increased risk of using addictive and illegal substances, anxiety, depression, eating disorders or even becoming suicidal are to be taken seriously and should be treated appropriately. Seek out mental health professionals or physicians and consult with them on the best combination of treatment. These effects can last days, months, years, or even lifetimes depending on the person and the circumstance. The Story of Amanda Todd The story of Amanda Todd is an unfortunate real example of cyberbullying and how unforgiving and never-ending it can be. 
Amanda ultimately committed suicide to get away from it; she was only 15 years old. Her YouTube video, published in 2012 a month before she committed suicide, has 13.5 million views to date. To better understand the reality of bullying, please consider watching it or sharing it. However, viewer discretion is advised. Bullying, harassment, discrimination, or any other type of negative, cruel, or harmful behavior is never okay or acceptable in any way. If you have been a witness of bullying or a bully, stand up to stop it! If you have been bullied or know someone who has, please seek help from caring professionals, family, or friends. Go-to resources are found below. Stop Bullying Now Hotline: 1-800-273-8255 - Available 24/7, managed by the U.S. Department of Health and Human Services National Suicide Prevention Lifeline: 1-800-273-8255 - Available 24/7, there is an online chat option available here The Trevor Project: 1-866-488-7386 - Available 24/7, suicide prevention help specifically for the LGBTQ+ community - Texting and chat options are available here National Eating Disorders Association: 1-800-931-2237 - Mon-Thu 9am-9pm, Fri 9am-5pm The Cybersmile Foundation STOMP Out Bullying National Center for Educational Statistics, Indicators of School Crime and Safety Indicator 10: Bullying at School and Electronic Bullying, April 2019. Cyberbullying Research Center The Amanda Todd Legacy Hanging with friends at 4-H Camp. Melanie Taylor as a 4-H Teen Counselor (right). Photo source: Melanie Taylor Spring is upon us and 4-H Summer Camp preparations are in full swing. As a 4-H Agent preparing for our week of county 4-H camp, my days are busy with phone calls and emails from parents, teen counselor trainings, adult volunteer screenings, paperwork, paperwork, and more paperwork. Although this is a very busy time for me as a 4-H Agent, it also allows me to reflect on why I chose this career path and why there is a sense of nostalgia as I prepare for 4-H camp. I attended 4-H camp in Virginia, where I grew up, every year from age 9-18. I was a camper who grew into a counselor-in-training and then a counselor. Those weeks of 4-H camp were filled with hot days and warm nights, but it was worth it all for the memories I will have for a lifetime. I can still smell the cafeteria food and hear the sounds in the gymnasium as kids played basketball and pounded at their leathercraft projects. I can feel the chills I would get as the entire camp sang around the campfire circle and patiently waited for the canoe to land on the lake’s edge; the camp staff would carry a flame as they entered the campfire circle and ceremoniously light the fire. Most importantly, I am still connected with my 4-H camp friends through social media and/or as close friends and we continue to share our old, blurry camp pictures from the 1990’s each year on Facebook. Morning flag raising ceremony at Camp Timpoochee. Photo source: UF/IFAS Northwest District So, as I work hard to prepare camp for my county campers and teen counselors, I want to create similar memories for them. In ten, twenty, and thirty years from now, I want them to think back on the fun moments they experienced in the Florida 4-H camping program. I also want them to form friendships and make camp connections for a lifetime, whether it is learning to kayak, fish, make arts and crafts, cook over a campfire, sing camp songs, etc. With all of this said, I hope you as parents will consider giving your child(ren) these special moments. 
The days will be long, but fun, and their nights will be filled with campfires and hanging out with friends. When they arrive home on Friday, they will be exhausted, but so excited to share all of the camp songs with you (prepare yourself for lots of loud, enthusiastic singing). They will have new friends they want you to meet and they will tell you camp stories they will always cherish. In Northwest Florida, there are two 4-H Camps, Camp Cherry Lake in Madison and Camp Timpoochee in Niceville. Each county in these camping districts has one county week of camp each summer. Contact your local UF/IFAS Extension Office now to find out the details and register your child for a week of fun and memories. Northwest Florida 4-H Camp Dates 2019. Photo source: UF/IFAS Extension The Consumer Financial Protection Bureau (CFPB) has defined financial capacity as the combination of attitude, knowledge, skills, and self-efficacy needed to make and exercise money management decisions that best fit the circumstances of one's life, within an enabling environment that includes, but is not limited to, access to appropriate financial services. Many of the attitudes, knowledge and skills needed to build financial capacity can be learned. People learn behavior through a variety of contexts. Children, in particular, learn through practices modeled by a parent or caregiver. In fact, research shows that parents and caregivers have the most influence on their children's financial capability. If you are like most parents, you probably recognize this—and you are interested in setting your kids on a good path toward financial well-being. However, many parents also say they do not always have the time, tools, or personal confidence to start talking about money, thinking their children will learn about it in school, later on, when they are old enough to understand. This is most unfortunate. According to the Council for Economic Education 2018 Survey of the States, only 17 states require high school students to take a course in personal finance. So, if a parent isn't teaching their children basic money/financial skills, who is? Economic and financial literacy is a foundational element of achieving financial health and financial well-being. It is never too early (or too late) to start building this. Talking to children about money, even in EARLY CHILDHOOD, helps children build the skills they need later in life. Early childhood education experts like to call this scaffolding. You are setting the framework…the support…the platform, encouraging financial capability milestones from early childhood into young adulthood. Children can learn the behaviors, knowledge, skills, and personal characteristics that support financial health and well-being. Books can help start these critical early conversations. The CFPB has made it EASY! Parents can be their child's first financial capability teacher! The University of Wisconsin-Extension Family Living Programs and the University of Wisconsin-Madison Center for Financial Security have selected books for the CFPB Money as You Grow Book Club. This program uses easy-to-read-and-understand children's books to discuss money concepts. These books include many favorites: - A Bargain for Frances, by Russell Hoban - A Chair for My Mother, by Vera Williams - Alexander, Who Used to Be Rich Last Sunday, by Judith Viorst - Count on Pablo, by Barbara deRubertis - Cuenta con Pablo, by Barbara deRubertis - Curious George Saves His Pennies, by Margaret and H.A.
Rey - Just Shopping With Mom, by Mercer Mayer - Lemonade in Winter, by Emily Jenkins - My Rows and Piles of Coins, by Tololwa M. Mollel - Ox-Cart Man, by Donald Hall - Sheep in a Shop, by Nancy Shaw - The Berenstain Bears & Mama's New Job, by Stan & Jan Berenstain - The Berenstain Bears' Trouble With Money, by Stan and Jan Berenstain - The Purse, by Kathy Caple - The Rag Coat, by Lauren Mills - Those Shoes, by Maribeth Boelts - Tia Isa Wants a Car, by Meg Medina - Tia Isa Quiere un Carro, by Meg Medina Fortunately, many of the building blocks for good financial decision making – like self-regulation, patience, planning, and problem-solving – do not require a lot of financial know-how. Reading books with children is a creative way to learn about the many sides of money management. Pick up a few of the titles at your local library and influence your children's financial capability. Building good habits leads to a life of good financial health and well-being. Examples of key ideas from reading these books, and how children show them:
- Can look at a few choices and select what will bring the best results.
- Can follow a multi-step plan.
- Can prioritize choices when they want two or more things at once.
- Can describe problems and come up with a few ideas to make things better.
- Can identify the different jobs people in the family and in the community do to earn money and keep it safe.
- Make spending choices with their own money – real or play.
- Keep money in a safe place and keep track of the amount saved for future spending.
- Sharing and borrowing: can explain the difference between lending and giving something away.
- Can talk about times when they were able to wait and how they were able to do it.
- Can identify who they can turn to for help reaching a goal, or what tools or tricks might help them stick with a plan.
- Staying true to yourself: can name one special thing they like about themselves and their loved ones.
- Can talk about a time when their plans did not turn out how they wanted and what they did instead.
As the holiday season quickly approaches, many people are filled with extra holiday cheer and enthusiasm. Some are jolly, but still overwhelmed with all of the activities, decorating, and shopping that needs to be completed. Then, there are those who find the holiday season a reminder of things such as the death of a loved one, family feuds, divorce, and the list goes on. If you are feeling this way, here are a few tips to make getting through the season a little bit easier. 1. Feel your emotions – Many people want to suppress their sadness or anxiety, but this only makes it worse. We are all allowed to grieve, cry and feel mad at times. If you feel this way, let yourself feel your feelings. You will feel better once you have accepted and worked through the emotions. You also do not have to force yourself to feel happy just because it is the holiday season. 2. Reach out to others – Instead of secluding yourself, spend time with others, whether it is at church, a community group or with family and friends. Spending time with others and socializing is good for the spirit. In addition, there are tons of volunteer opportunities during the holidays. Try something new and volunteer your time to a worthy cause. You will feel great about helping others and contributing to the cause.
Research supports this: UnitedHealth Group commissioned a national survey of 3,351 adults and found that the majority of participants reported feeling mentally and physically healthier after a volunteer experience. The research showed: - 96% reported that volunteering enriched their sense of purpose in life - 94% of people who volunteered in the last twelve months said that volunteering improved their mood - 80% of them feel like they have control over their health - 78% of them said that volunteering lowered their stress levels - 76% of people who volunteered in the last twelve months said that volunteering has made them feel healthier - About a quarter of them reported that their volunteer work has helped them manage a chronic illness by keeping them active and taking their minds off of their own problems - Volunteering also improved their mood and self-esteem 3. Be realistic – Realize that times and traditions change as families grow and age. Do not focus on everything having to be the same every year. Be willing to accept changes, such as adult children not being able to attend the family gathering; utilize technology and talk through video conferencing, or share pictures by email and/or Facebook. Find a way to make it work. 4. Set aside differences for everyone's sake – Aim to accept family and friends the way they are, even if they do not meet your expectations. Leave grievances at the door for the day and enjoy your family and friends. Share those grievances and talk at a more appropriate and private time. Also, remember they could be feeling the stress of the holiday too. So, be patient if someone is grouchy or sad as you celebrate. You may both be feeling the same way. 5. Learn to say no – Be realistic about the number of activities you and your family can participate in. Do not feel guilty because you cannot attend every party and event you are invited to. Graciously decline an invite and share that your schedule is booked, but thank them for thinking of you. A host does not expect that everyone will attend their parties. 6. Take a breather as needed – If you start to feel overwhelmed with anxiety, anger or sadness, take a few minutes to be alone. Take 15 minutes to spend in the quiet to reduce the stress and clear your mind. For example: listen to soothing music, do a few mindful breathing exercises to slow yourself down, or read a book to temporarily escape the stress. 7. Seek professional help as needed – There are times when the emotions are just too overwhelming to sort through on our own. If you continue to feel sad, anxious, angry, etc., there is absolutely no shame in seeking the help of a doctor or mental health professional. It will only help you to work through your feelings with an unbiased person. Helping yourself feel better will improve your quality of life and that of those around you. Do not let the idea of the holidays turn you into a modern-day Ebenezer Scrooge. Learn to take care of yourself first. Learn your limitations and accept them. Do not let others' expectations overwhelm you. Just remember, when you start feeling extreme levels of emotion and/or stress, take a few deep breaths and remind yourself to relax and feel the moment. Be mindful of your surroundings and remind yourself of your many blessings, even when going through difficult times. Make it your personal goal to feel your feelings and enjoy what you can about the holiday season, whether it is the twinkling lights, time with friends and family, the food or any of the many special holiday traditions.
Aim to find JOY during this holiday season. Stress, depression and the holidays: Tips for coping. www.mayoclinic.org, Signs and Symptoms of Depression http://edis.ifas.ufl.edu/pdffiles/FY/FY10000.pdf Depression and Older Adults http://edis.ifas.ufl.edu/pdffiles/FY/FY95200.pdf Have you ever read the book Something from Nothing, by Phoebe Gilman? It is a wonderful story, with a sewing theme, of sewing/creating something beautiful over and over again. My fervent hope is that the 4-H sewing camp participants feel the same way about all of their creations generated during sewing camp! Recently, the Tallahassee Chapter of the American Sewing Guild (ASG, part of a national, non-profit organization dedicated to the art and love of sewing) generously volunteered their time, talent, and supplies to enrich the experience of every 4-H sewing camp participant. The ASG philosophy, coupled with the 4-H history of helping youth “learn by doing” is a good fit. Both organizations focus on teaching new topics and life skills development through experiences thus enhancing self-confidence through skill building. In today’s world, sewing is seemingly no longer a necessity. Sewing can even be expensive! But, can we put a price on self-confidence or creativity, sustainability or even a life skill? 4-H Sewing Campers Photo source: Heidi Copeland Think of all the things learned while sewing. Sewing helps teach: - Finger dexterity and the development of fine motor skills. - The value of patience. - Systematic following of directions – both verbal and written. - Vocabulary as well as techniques. - Pride in accomplishment for a job well done! Moreover, sewing truly integrates science, technology, engineering, art and math (STEAM). And it is FUN! Campers: - Learned first-hand about fibers (science). - Experienced technology using various sewing machines and equipment – some even computer driven. - Became adept at trouble shooting their own machine repair (engineering). - Artistically bedazzled their creations. - Utilized practical applications of many mathematical concepts to measure and sew as well as critical thinking and problem solving. The 4-H Club pledge says, “I pledge … My Head to clearer thinking, My Heart to greater loyalty, My Hands to larger service and My Health to better living for my club, my community, my country, and my world”. ALL of the campers contributed to a community service project sewing a pillowcase destined for the Early Learning Coalition of the Big Bend Read a Child to Sleep campaign. This fostered the idea that empathy, sharing, nurturing relationships and giving is important too. Sewing certainly did not stop when camp ended. A budding entrepreneur posted on Facebook she is taking orders for her creations while another camper is helping a local theatre group fashion costumes to obtain her community service hours fulfilling a high school graduation requirement. There is no better feeling than the pride of accomplishment. Sewing campers learned by doing and while they were at it learned a skills they will carry throughout life. To find out more about the American Sewing Guild: https://www.asg.org/ To find out more about Leon County 4-H programs: http://leon.ifas.ufl.edu/4h If you are interested in learning more about 4-H, go to florida4h.org. For nearly 40 years, National Grandparents Day has been celebrated as an opportunity to express gratitude for all that grandparents do for families and communities. According to the U.S. 
Census Bureau Profile, America Facts for Features, in 1970, Marian McQuade initiated a campaign to establish a day to honor grandparents. In 1978, President Jimmy Carter signed a federal proclamation, declaring the first Sunday after Labor Day to be National Grandparents Day. Across the U.S., not only are grandparents appreciated for sharing their time, wisdom, and values, but they are currently stepping up to raise over 7.2 million children under the age of 18 whose biological parents are unable to do so, thus keeping the children out of the foster care system. In Florida, 11% of children live in homes where householders are grandparents or other relatives. Locally, in Leon County, there are more than 2,000 grandparent-headed families, where: - 13.1% of the grandparents are 60 years and older - 39.8% of these families live below the poverty level - Nearly 50% of these families have had the children for 5 or more years The reasons as to why so many grandparents are raising grandchildren are many and varied. Nationally, substance abuse causes more than one third of this type of placement. Nevertheless, because of a grandparent’s selfless devotion and generosity to the needs of others, grandparents are, in fact, owed a great deal of thanks for their altruism. As one grandmother exclaimed, “For my 50th birthday, I got a 2 year-old. My story isn’t unique.” In fact, grandparent roles in children’s lives are so significant that the Grandparents as Parents (GaP) Program of the Tallahassee Senior Foundation, funded by the Leon County Commission, grants, and donations, has a program and support group just for them! According to Karen Boebinger, GaP Program Coordinator, “The GaP program provides moral support and resource assistance to these grandfamilies who are trying to navigate through their new lifestyle.” AARP® has streamlined the gathering of relevant information pertinent to this nationwide dilemma. The AARP® resource, Grand Families Fact Sheet, includes state-specific data and programs available, as well as information about public benefits, educational assistance, legal relationship options, and state laws. This fact sheet also contains many other resource tools such as the National Council on Aging’s questionnaire that helps grandparent caregivers and/or the children they are raising determine if they qualify for certain programs that pay for food, an increase in income, and/or home and healthcare costs. Once the questionnaire is completed, the website generates a list of eligible programs and contact information. (www.aarp.org/quicklink) Take a moment today and every day to give thanks and appreciation for the thousands of grandparents in our community and around the country for the service they do for children. One thing is for certain: grandparents are more valuable to their grandchildren and communities than ever. Grandparents are indispensable and important people. Want more information about supporting GaP or do you need support yourself? Contact Karen Boebinger, GaP Program Coordinator, at 850-891-4027 or firstname.lastname@example.org.
Data Structure and Algorithms - Quick Sort
Quick sort is a highly efficient sorting algorithm based on partitioning an array of data into smaller arrays. A large array is partitioned into two arrays, one of which holds values smaller than a specified value (called the pivot) on which the partition is made, while the other array holds values greater than the pivot value. Quick sort partitions an array and then calls itself recursively twice to sort the two resulting subarrays. This algorithm is quite efficient for large-sized data sets: its average-case complexity is O(n log n), although its worst case is O(n²), where n is the number of items.
Partition in Quick Sort
The pivot value divides the list into two parts. Recursively, we find the pivot for each sub-list until all lists contain only one element.
Quick Sort Pivot Algorithm
Based on our understanding of partitioning in quick sort, we will now try to write an algorithm for it, which is as follows.
Step 1 − Choose the highest index value as pivot
Step 2 − Take two variables to point left and right of the list excluding pivot
Step 3 − left points to the low index
Step 4 − right points to the high index
Step 5 − while value at left is less than pivot move right
Step 6 − while value at right is greater than pivot move left
Step 7 − if both step 5 and step 6 do not match, swap the values at left and right
Step 8 − if left ≥ right, the point where they met is the new pivot position
Quick Sort Pivot Pseudocode
The pseudocode for the above algorithm can be derived as −
function partitionFunc(left, right, pivot)
   leftPointer = left - 1
   rightPointer = right
   while True do
      while A[++leftPointer] < pivot do
         // do nothing
      end while
      while rightPointer > 0 && A[--rightPointer] > pivot do
         // do nothing
      end while
      if leftPointer >= rightPointer
         break
      else
         swap leftPointer, rightPointer
      end if
   end while
   swap leftPointer, right
   return leftPointer
end function
Quick Sort Algorithm
Using the pivot algorithm recursively, we end up with smaller possible partitions. Each partition is then processed for quick sort. We define the recursive algorithm for quicksort as follows −
Step 1 − Make the right-most index value the pivot
Step 2 − Partition the array using the pivot value
Step 3 − Quicksort the left partition recursively
Step 4 − Quicksort the right partition recursively
Quick Sort Pseudocode
To get more into it, let us see the pseudocode for the quick sort algorithm −
procedure quickSort(left, right)
   if right - left <= 0
      return
   else
      pivot = A[right]
      partition = partitionFunc(left, right, pivot)
      quickSort(left, partition - 1)
      quickSort(partition + 1, right)
   end if
end procedure
A runnable implementation of the same idea is sketched below.
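For readers who want something executable, here is a compact Python version. It keeps the overall scheme above (the right-most element is the pivot, the array is partitioned in place, then each side is sorted recursively) but uses the simpler Lomuto partition rather than the exact two-pointer sweep of the pseudocode; the names and sample data are illustrative only.

def partition(A, left, right):
    # Lomuto partition: use A[right] as the pivot and return its final index.
    pivot = A[right]
    i = left - 1                        # boundary of the "smaller than pivot" region
    for j in range(left, right):
        if A[j] < pivot:
            i += 1
            A[i], A[j] = A[j], A[i]     # grow the small-value region
    A[i + 1], A[right] = A[right], A[i + 1]   # place the pivot between the two parts
    return i + 1

def quick_sort(A, left, right):
    # Sort A[left..right] in place: partition around a pivot, then recurse on both sides.
    if right - left <= 0:
        return
    p = partition(A, left, right)
    quick_sort(A, left, p - 1)
    quick_sort(A, p + 1, right)

data = [10, 7, 8, 9, 1, 5]
quick_sort(data, 0, len(data) - 1)
print(data)   # [1, 5, 7, 8, 9, 10]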
On December 11.4, 2010, Steve Larson of the Catalina Sky Survey noticed an odd brightness from Scheila, an asteroid in the outer region of the main belt of asteroids that orbit in an area between Mars and Jupiter. Three streams of dust appeared to trail from the asteroid. Data from NASA's Swift Satellite and the Hubble Space Telescope suggested that a smaller asteroid's impact was the likely trigger for the appearance of comet-like tails from Scheila. However, questions remained about the date when the dust emission occurred and how the triple dust tails formed. The current research team sought answers to these queries. Soon after reports of Scheila's unusual brightness, the current research team used the Subaru Prime Focus Camera (Suprime-Cam) on the Subaru Telescope (8.2 m), the Ishigakijima Astronomical Observatory Murikabushi Telescope (1.05 m), and the University of Hawaii 2.2 m Telescope to make optical observations of these mysterious dust trails over a three-month period. The top of Figure 1 shows images of the development of the dust trails taken by the Murikabushi Telescope on the 12th and 19th of December 2010. Although asteroids generally look like points when observed from Earth, Scheila looked like a comet. As the three streaks of dust streamed from the asteroid, their surface brightness decreased. Eventually the dust clouds became undetectable, and then a faint linear structure appeared. The bottom of Figure 1 shows the image obtained by the Subaru Telescope on March 2, 2011. Based on these images of the linear structure, the scientists determined a dust emission date of December 3.5 ± 1, 2010. Steve Larson of the Catalina Sky Survey noticed that Scheila had a slightly diffuse appearance on December 3.4, 2010. Therefore, it is likely that the collision of the asteroids occurred within the short time between December 2 12:00 UT and December 3 10:00 UT. To explain the formation of Scheila's triple dust tails, the research team conducted a computer simulation of Scheila's dust emission on December 3. Their simulation was based on information gained through impact experiments in a laboratory at ISAS, a hypervelocity impact facility and division of the Japan Aerospace Exploration Agency (JAXA). Figure 2 shows the ejecta produced by an oblique impact, which was not a head-on collision. Two prominent features characterize oblique impacts and the shock waves generated by them. One feature, a downrange plume, occurs in a direction downrange from the impact site and results from the fragmentation or sometimes evaporation of the object that impacted another. A second feature occurs during the physical destruction of the impacted object; a shock wave spreads from the impact site, scoops out materials (conical impact ejecta), and forms an impact crater. The axis of the cone of ejecta is roughly perpendicular to the surface at the impact site. The team reasoned that these two processes caused the ejection of Scheila's dust particles and that sunlight pushed them away from the asteroid. After performing a tremendous number of computer simulations under different conditions, they could only duplicate their observed images when an object struck Scheila's surface from behind (Figures 3 and 4). Taking all of the evidence into account (their observations and simulations), the research team concluded that there is only one way to explain the mysterious brightness and triple trails of dust from Scheila. A smaller asteroid obliquely impacted Scheila from behind.
The following papers will appear in the Astrophysical Journal:
Ishiguro et al. 2011, Astrophysical Journal Letters, 740, L11, "Observational Evidences for Impact on the Main-Belt Asteroid (596) Scheila"
Ishiguro et al. 2011, Astrophysical Journal Letters, 741, L24, "Interpretation of (596) Scheila's Triple Dust Tails"
This research was supported by a Basic Research Grant from Seoul National University, by a fundamental research grant (type I) from the National Research Foundation of Korea, and by a Grant-in-Aid for Scientific Research on Priority Areas from MEXT, Japan. NAOJ supported the use of the UH 2.2 m Telescope.
Figure 1 (bottom): Suprime-Cam on the Subaru Telescope captured this image of the linear structure on March 2, 2011.
Spacecraft propulsion is any method used to accelerate spacecraft and artificial satellites. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research. However, most spacecraft today are propelled by forcing a gas from the back/rear of the vehicle at very high speed through a supersonic de Laval nozzle. This sort of engine is called a rocket engine. All current spacecraft use chemical rockets (bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping and some use momentum wheels for attitude control. Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use them for north-south stationkeeping and orbit raising. Interplanetary vehicles mostly use chemical rockets as well, although a few have used ion thrusters and Hall effect thrusters (two different types of electric propulsion) to great success. - 1 Requirements - 2 Effectiveness - 3 Methods - 4 Planetary and atmospheric propulsion - 5 Hypothetical methods - 6 Table of methods - 7 Testing - 8 See also - 9 Notes - 10 References - 11 External links Artificial satellites must be launched into orbit and once there they must be placed in their nominal orbit. Once in the desired orbit, they often need some form of attitude control so that they are correctly pointed with respect to Earth, the Sun, and possibly some astronomical object of interest. They are also subject to drag from the thin atmosphere, so that to stay in orbit for a long period of time some form of propulsion is occasionally necessary to make small corrections (orbital stationkeeping). Many satellites need to be moved from one orbit to another from time to time, and this also requires propulsion. A satellite's useful life is over once it has exhausted its ability to adjust its orbit. Spacecraft designed to travel further also need propulsion methods. They need to be launched out of the Earth's atmosphere just as satellites do. Once there, they need to leave orbit and move around. For interplanetary travel, a spacecraft must use its engines to leave Earth orbit. Once it has done so, it must somehow make its way to its destination. Current interplanetary spacecraft do this with a series of short-term trajectory adjustments. In between these adjustments, the spacecraft simply falls freely along its trajectory. The most fuel-efficient means to move from one circular orbit to another is with a Hohmann transfer orbit: the spacecraft begins in a roughly circular orbit around the Sun. A short period of thrust in the direction of motion accelerates or decelerates the spacecraft into an elliptical orbit around the Sun which is tangential to its previous orbit and also to the orbit of its destination. The spacecraft falls freely along this elliptical orbit until it reaches its destination, where another short period of thrust accelerates or decelerates it to match the orbit of its destination. Special methods such as aerobraking or aerocapture are sometimes used for this final orbital adjustment. 
Some spacecraft propulsion methods such as solar sails provide very low but inexhaustible thrust; an interplanetary vehicle using one of these methods would follow a rather different trajectory, either constantly thrusting against its direction of motion in order to decrease its distance from the Sun or constantly thrusting along its direction of motion to increase its distance from the Sun. The concept has been successfully tested by the Japanese IKAROS solar sail spacecraft. Spacecraft for interstellar travel also need propulsion methods. No such spacecraft has yet been built, but many designs have been discussed. Because interstellar distances are very great, a tremendous velocity is needed to get a spacecraft to its destination in a reasonable amount of time. Acquiring such a velocity on launch and getting rid of it on arrival will be a formidable challenge for spacecraft designers. When in space, the purpose of a propulsion system is to change the velocity, or v, of a spacecraft. Because this is more difficult for more massive spacecraft, designers generally discuss momentum, mv. The amount of change in momentum is called impulse. So the goal of a propulsion method in space is to create an impulse. When launching a spacecraft from Earth, a propulsion method must overcome a higher gravitational pull to provide a positive net acceleration. In orbit, any additional impulse, even very tiny, will result in a change in the orbit path. The rate of change of velocity is called acceleration, and the rate of change of momentum is called force. To reach a given velocity, one can apply a small acceleration over a long period of time, or one can apply a large acceleration over a short time. Similarly, one can achieve a given impulse with a large force over a short time or a small force over a long time. This means that for maneuvering in space, a propulsion method that produces tiny accelerations but runs for a long time can produce the same impulse as a propulsion method that produces large accelerations for a short time. When launching from a planet, tiny accelerations cannot overcome the planet's gravitational pull and so cannot be used. Earth's surface is situated fairly deep in a gravity well. The escape velocity required to get out of it is 11.2 kilometers/second. As human beings evolved in a gravitational field of 1g (9.8 m/s²), an ideal propulsion system would be one that provides a continuous acceleration of 1g (though human bodies can tolerate much larger accelerations over short periods). The occupants of a rocket or spaceship having such a propulsion system would be free from all the ill effects of free fall, such as nausea, muscular weakness, reduced sense of taste, or leaching of calcium from their bones. The law of conservation of momentum means that in order for a propulsion method to change the momentum of a space craft it must change the momentum of something else as well. A few designs take advantage of things like magnetic fields or light pressure in order to change the spacecraft's momentum, but in free space the rocket must bring along some mass to accelerate away in order to push itself forward. Such mass is called reaction mass. In order for a rocket to work, it needs two things: reaction mass and energy. The impulse provided by launching a particle of reaction mass having mass m at velocity v is mv. But this particle has kinetic energy mv²/2, which must come from somewhere. 
In a conventional solid, liquid, or hybrid rocket, the fuel is burned, providing the energy, and the reaction products are allowed to flow out the back, providing the reaction mass. In an ion thruster, electricity is used to accelerate ions out the back. Here some other source must provide the electrical energy (perhaps a solar panel or a nuclear reactor), whereas the ions provide the reaction mass. When discussing the efficiency of a propulsion system, designers often focus on effectively using the reaction mass. Reaction mass must be carried along with the rocket and is irretrievably consumed when used. One way of measuring the amount of impulse that can be obtained from a fixed amount of reaction mass is the specific impulse, the impulse per unit weight-on-Earth (typically designated by Isp). The unit for this value is seconds. Because the weight on Earth of the reaction mass is often unimportant when discussing vehicles in space, specific impulse can also be discussed in terms of impulse per unit mass. This alternate form of specific impulse uses the same units as velocity (e.g. m/s), and in fact it is equal to the effective exhaust velocity of the engine (typically designated ve). Confusingly, both values are sometimes called specific impulse. The two values differ by a factor of gn, the standard acceleration due to gravity, 9.80665 m/s² (ve = gn Isp). A rocket with a high exhaust velocity can achieve the same impulse with less reaction mass. However, the energy required for that impulse is proportional to the exhaust velocity, so that more mass-efficient engines require much more energy, and are typically less energy efficient. This is a problem if the engine is to provide a large amount of thrust. To generate a large amount of impulse per second, it must use a large amount of energy per second. So high-mass-efficient engines require enormous amounts of energy per second to produce high thrusts. As a result, most high-mass-efficient engine designs also provide lower thrust due to the unavailability of high amounts of energy. Propulsion methods can be classified based on their means of accelerating the reaction mass. There are also some special methods for launches, planetary arrivals, and landings. A reaction engine is an engine which provides propulsion by expelling reaction mass, in accordance with Newton's third law of motion. This law of motion is most commonly paraphrased as: "For every action force there is an equal, but opposite, reaction force". Examples include both duct engines and rocket engines, and more uncommon variations such as Hall effect thrusters, ion drives and mass drivers. Duct engines are obviously not used for space propulsion due to the lack of air; however some proposed spacecraft have these kinds of engines to assist takeoff and landing. Delta-v and propellant Exhausting the entire usable propellant of a spacecraft through the engines in a straight line in free space would produce a net velocity change to the vehicle; this number is termed 'delta-v' (Δv). If the exhaust velocity is constant then the total Δv of a vehicle can be calculated using the rocket equation, where M is the mass of propellant, P is the mass of the payload (including the rocket structure), and ve is the velocity of the rocket exhaust. This is known as the Tsiolkovsky rocket equation: Δv = ve ln((M + P) / P). For historical reasons, as discussed above, ve is sometimes written as gn Isp, so the same equation appears as Δv = gn Isp ln((M + P) / P). For a high delta-v mission, the majority of the spacecraft's mass needs to be reaction mass.
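As a quick, self-contained illustration of the rocket equation just stated, the short Python sketch below computes the delta-v for a given propellant load and, inverted, the propellant needed for a required delta-v. The numbers are made up for the example (a chemical-rocket-like exhaust velocity and the roughly 9.5 km/s quoted later for a launch to low Earth orbit); none of it is mission data.

import math

def delta_v(M, P, ve):
    # Tsiolkovsky rocket equation: delta-v from propellant mass M,
    # payload-plus-structure mass P, and effective exhaust velocity ve (SI units).
    return ve * math.log((M + P) / P)

def propellant_mass(dv, P, ve):
    # Inverse form: propellant mass M needed to reach a required delta-v.
    return P * (math.exp(dv / ve) - 1)

P = 1_000.0        # kg of payload plus structure (illustrative)
ve = 4_500.0       # m/s, typical chemical-rocket effective exhaust velocity
dv = 9_500.0       # m/s, roughly the delta-v for a launch to low Earth orbit

M = propellant_mass(dv, P, ve)
print(round(M))                    # ~7,260 kg: over seven times the dry mass
print(round(delta_v(M, P, ve)))    # ~9,500 m/s, consistency check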
Because a rocket must carry all of its reaction mass, most of the initially-expended reaction mass goes towards accelerating reaction mass rather than payload. If the rocket has a payload of mass P, the spacecraft needs to change its velocity by Δv, and the rocket engine has exhaust velocity ve, then the mass M of reaction mass which is needed can be calculated by rearranging the rocket equation: M = P (e^(Δv/ve) − 1). For Δv much smaller than ve, this equation is roughly linear, and little reaction mass is needed. If Δv is comparable to ve, then there needs to be about twice as much fuel as combined payload and structure (which includes engines, fuel tanks, and so on). Beyond this, the growth is exponential; speeds much higher than the exhaust velocity require very high ratios of fuel mass to payload and structural mass. For a mission, for example, when launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must be overcome by using fuel. It is typical to combine these and other effects into an effective mission delta-v. For example, a launch mission to low Earth orbit requires about 9.3–10 km/s delta-v. These mission delta-vs are typically numerically integrated on a computer. Some effects such as the Oberth effect can only be significantly utilised by high thrust engines such as rockets, i.e. engines that can produce a high g-force (thrust per unit mass, equal to delta-v per unit time). Power use and propulsive efficiency For all reaction engines (such as rockets and ion drives) some energy must go into accelerating the reaction mass. Every engine will waste some energy, but even assuming 100% efficiency, to accelerate an exhaust of mass M to the exhaust velocity ve the engine will need energy amounting to ½ M ve². This energy is not necessarily lost – some of it usually ends up as kinetic energy of the vehicle, and the rest is wasted in residual motion of the exhaust. Comparing the rocket equation (which shows how much energy ends up in the final vehicle) and the above equation (which shows the total energy required) shows that even with 100% engine efficiency, certainly not all energy supplied ends up in the vehicle - some of it, indeed usually most of it, ends up as kinetic energy of the exhaust. The exact amount depends on the design of the vehicle, and the mission. However, there are some useful fixed points: - if the Δv is fixed, for a mission delta-v, there is a particular ve that minimises the overall energy used by the rocket. This comes to an exhaust velocity of about ⅔ of the mission delta-v (see the energy computed from the rocket equation). Drives with a specific impulse that is both high and fixed, such as ion thrusters, have exhaust velocities that can be enormously higher than this ideal for many missions. - if the exhaust velocity can be made to vary so that at each instant it is equal and opposite to the vehicle velocity then the absolute minimum energy usage is achieved. When this is achieved, the exhaust stops in space and has no kinetic energy; and the propulsive efficiency is 100% - all the energy ends up in the vehicle (in principle such a drive would be 100% efficient, in practice there would be thermal losses from within the drive system and residual heat in the exhaust). However, in most cases this uses an impractical quantity of propellant, but is a useful theoretical consideration. Anyway, the vehicle has to move before the method can be applied. Some drives (such as VASIMR or the electrodeless plasma thruster) actually can significantly vary their exhaust velocity.
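The claim that the overall energy is minimised at an exhaust velocity of roughly two-thirds of the mission delta-v is easy to check numerically. The Python sketch below (illustrative values, not from the text) evaluates the exhaust kinetic energy ½ M ve², with M taken from the rocket equation, over a range of exhaust velocities:

import math

def propellant_energy(dv, P, ve):
    # Kinetic energy given to the exhaust: 0.5 * M * ve^2,
    # with M = P * (exp(dv/ve) - 1) from the rocket equation.
    M = P * (math.exp(dv / ve) - 1)
    return 0.5 * M * ve**2

P = 1_000.0          # kg payload (arbitrary; it only scales the result)
dv = 3_000.0         # m/s mission delta-v (arbitrary example)

# Scan exhaust velocities from 0.1*dv to 3*dv and keep the cheapest one.
candidates = [dv * k / 1000 for k in range(100, 3001)]
best_ve = min(candidates, key=lambda ve: propellant_energy(dv, P, ve))
print(round(best_ve / dv, 3))   # ~0.63, i.e. roughly two-thirds of the mission delta-v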
This can help reduce propellant usage or improve acceleration at different stages of the flight. However the best energetic performance and acceleration is still obtained when the exhaust velocity is close to the vehicle speed. Proposed ion and plasma drives usually have exhaust velocities enormously higher than that ideal (in the case of VASIMR the lowest quoted speed is around 15000 m/s compared to a mission delta-v from high Earth orbit to Mars of about 4000 m/s). It might be thought that adding power generation capacity is helpful, and although initially this can improve performance, this inevitably increases the weight of the power source, and eventually the mass of the power source and the associated engines and propellant dominates the weight of the vehicle, and then adding more power gives no significant improvement. For, although solar power and nuclear power are virtually unlimited sources of energy, the maximum power they can supply is substantially proportional to the mass of the powerplant (i.e. specific power takes a largely constant value which is dependent on the particular powerplant technology). For any given specific power, with a large ve, which is desirable to save propellant mass, it turns out that the maximum acceleration is inversely proportional to ve. Hence the time to reach a required delta-v is proportional to ve. Thus the latter should not be too large. In the ideal case P is useful payload and M is reaction mass (this corresponds to empty tanks having no mass, etc.). The energy required can simply be computed as ½ M ve². This corresponds to the kinetic energy the expelled reaction mass would have at a speed equal to the exhaust speed. If the reaction mass had to be accelerated from zero speed to the exhaust speed, all energy produced would go into the reaction mass and nothing would be left for kinetic energy gain by the rocket and payload. However, if the rocket already moves and accelerates (the reaction mass is expelled in the direction opposite to the direction in which the rocket moves) less kinetic energy is added to the reaction mass. To see this, if, for example, ve = 10 km/s and the speed of the rocket is 3 km/s, then the speed of a small amount of expended reaction mass changes from 3 km/s forwards to 7 km/s rearwards. Thus, although the energy required is 50 MJ per kg reaction mass, only 20 MJ is used for the increase in speed of the reaction mass. The remaining 30 MJ is the increase of the kinetic energy of the rocket and payload. Thus the specific energy gain of the rocket in any small time interval is the energy gain of the rocket including the remaining fuel, divided by its mass, where the energy gain is equal to the energy produced by the fuel minus the energy gain of the reaction mass. The larger the speed of the rocket, the smaller the energy gain of the reaction mass; if the rocket speed is more than half of the exhaust speed the reaction mass even loses energy on being expelled, to the benefit of the energy gain of the rocket; the larger the speed of the rocket, the larger the energy loss of the reaction mass. In formula form this gain is Δε = v Δv, where ε is the specific energy of the rocket (potential plus kinetic energy), v is the rocket's speed, and Δv is a separate variable, not just the change in v. In the case of using the rocket for deceleration, i.e. expelling reaction mass in the direction of the velocity, Δv should be taken negative. The formula is for the ideal case again, with no energy lost on heat, etc. The latter causes a reduction of thrust, so it is a disadvantage even when the objective is to lose energy (deceleration).
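The 10 km/s exhaust / 3 km/s rocket example above can be verified per kilogram of reaction mass with a few lines of Python; the figures come straight from the text, the code is just the arithmetic:

ve = 10_000.0          # exhaust speed relative to the rocket, m/s
v_rocket = 3_000.0     # current rocket speed, m/s

energy_produced = 0.5 * ve**2                 # 50 MJ per kg of reaction mass
# In the outside frame the expelled mass goes from +3 km/s (forwards) to -7 km/s (rearwards).
ke_before = 0.5 * v_rocket**2                 # 4.5 MJ/kg
ke_after = 0.5 * (ve - v_rocket)**2           # 24.5 MJ/kg
gain_reaction_mass = ke_after - ke_before     # 20 MJ/kg goes to the reaction mass
gain_rocket = energy_produced - gain_reaction_mass   # 30 MJ/kg left for rocket and payload

print(gain_reaction_mass / 1e6, gain_rocket / 1e6)   # 20.0 30.0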
If the energy is produced by the mass itself, as in a chemical rocket, the fuel value has to be $\tfrac{1}{2}v_e^2$, where for the fuel value also the mass of the oxidizer has to be taken into account. A typical value is $v_e$ = 4.5 km/s, corresponding to a fuel value of 10.1 MJ/kg. The actual fuel value is higher, but much of the energy is lost as waste heat in the exhaust that the nozzle was unable to extract.

The required energy $E$ is

$$E = \tfrac{1}{2}m_1\left(e^{\Delta v / v_e} - 1\right)v_e^2$$

- for $\Delta v \ll v_e$ we have $E \approx \tfrac{1}{2}m_1\,v_e\,\Delta v$
- for a given $\Delta v$, the minimum energy is needed if $v_e = 0.6275\,\Delta v$, requiring an energy of $E = 0.772\,m_1\,(\Delta v)^2$
- In the case of acceleration in a fixed direction, and starting from zero speed, and in the absence of other forces, this is 54.4% more than just the final kinetic energy of the payload. In this optimal case the initial mass is 4.92 times the final mass.

These results apply for a fixed exhaust speed. Due to the Oberth effect and starting from a nonzero speed, the required potential energy needed from the propellant may be less than the increase in energy in the vehicle and payload. This can be the case when the reaction mass has a lower speed after being expelled than before – rockets are able to liberate some or all of the initial kinetic energy of the propellant.

Also, for a given objective such as moving from one orbit to another, the required $\Delta v$ may depend greatly on the rate at which the engine can produce $\Delta v$, and maneuvers may even be impossible if that rate is too low. For example, a launch to Low Earth Orbit (LEO) normally requires a $\Delta v$ of ca. 9.5 km/s (mostly for the speed to be acquired), but if the engine could produce $\Delta v$ at a rate of only slightly more than $g$, it would be a slow launch requiring altogether a very large $\Delta v$ (think of hovering without making any progress in speed or altitude: it would cost a $\Delta v$ of 9.8 m/s each second). If the possible rate is only $g$ or less, the maneuver can not be carried out at all with this engine.

The power is given by

$$P = \tfrac{1}{2}F\,v_e = \tfrac{1}{2}m\,a\,v_e$$

where $F$ is the thrust and $a$ the acceleration due to it. Thus the theoretically possible thrust per unit power is 2 divided by the specific impulse in m/s. The thrust efficiency is the actual thrust as a percentage of this.

If e.g. solar power is used this restricts $P$; in the case of a large $v_e$ the possible acceleration is inversely proportional to it, hence the time to reach a required delta-v is proportional to $v_e$; with 100% efficiency:

- for $\Delta v \ll v_e$ we have $t \approx \dfrac{m\,v_e\,\Delta v}{2P}$
- power 1000 W, mass 100 kg, $\Delta v$ = 5 km/s, $v_e$ = 16 km/s, takes 1.5 months.
- power 1000 W, mass 100 kg, $\Delta v$ = 5 km/s, $v_e$ = 50 km/s, takes 5 months.

Thus $v_e$ should not be too large.

Power to thrust ratio
The power to thrust ratio is simply:

$$\frac{P}{F} = \tfrac{1}{2}v_e$$

Thus for any vehicle power $P$, the thrust that may be provided is:

$$F = \frac{2P}{v_e}$$

Suppose we want to send a 10,000 kg space probe to Mars. The required $\Delta v$ from LEO is approximately 3000 m/s, using a Hohmann transfer orbit. For the sake of argument, let us say that the following thrusters may be used:

|Engine||Effective exhaust velocity (km/s)||Specific impulse (s)||Mass of propellant (kg)||Energy required (GJ)||Energy per kg of propellant||Minimum power-to-thrust ratio||Power generator mass/thrust|
|Solid rocket||1||100||190,000||95||500 kJ||0.5 kW/N||N/A|
|Bipropellant rocket||5||500||8,200||103||12.6 MJ||2.5 kW/N||N/A|
|Ion thruster||50||5,000||620||775||1.25 GJ||25 kW/N||25 kg/N|

- Assuming 100% energetic efficiency; 50% is more typical in practice.
- Assumes a specific power of 1 kW/kg

Observe that the more fuel-efficient engines can use far less fuel; their mass is almost negligible (relative to the mass of the payload and the engine itself) for some of the engines. However, note also that these require a large total amount of energy. For Earth launch, engines require a thrust to weight ratio of more than one.
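A minimal sketch of the arithmetic behind this table, assuming the 10,000 kg figure is the final (dry) mass delivered onto the transfer orbit and 100% energetic efficiency (that assumption reproduces the tabulated values); the engine names and exhaust velocities are simply the table's illustrative entries:

```python
import math

DELTA_V = 3_000.0    # m/s, LEO to Mars transfer (Hohmann, from the text)
M_FINAL = 10_000.0   # kg, probe mass -- assumed here to be the final (dry) mass

# (name, effective exhaust velocity in m/s), taken from the table above
engines = [("Solid rocket", 1_000.0),
           ("Bipropellant rocket", 5_000.0),
           ("Ion thruster", 50_000.0)]

for name, v_e in engines:
    # Tsiolkovsky rocket equation: m0 / m1 = exp(delta_v / v_e)
    propellant = M_FINAL * (math.exp(DELTA_V / v_e) - 1.0)   # kg of propellant
    energy = 0.5 * propellant * v_e**2                       # J, ideal case
    power_per_thrust = 0.5 * v_e                             # W per N of thrust
    print(f"{name:20s} propellant {propellant:9.0f} kg, "
          f"energy {energy / 1e9:6.1f} GJ, "
          f"power/thrust {power_per_thrust / 1e3:5.1f} kW/N")
```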
To achieve a thrust-to-weight ratio above one with the ion or more theoretical electrical drives, the engine would have to be supplied with one to several gigawatts of power — equivalent to a major metropolitan generating station. From the table it can be seen that this is clearly impractical with current power sources. Alternative approaches include some forms of laser propulsion, where the reaction mass does not provide the energy required to accelerate it, with the energy instead being provided from an external laser or other beam-powered propulsion system. Small models of some of these concepts have flown, although the engineering problems are complex and the ground-based power systems are not a solved problem. Instead, a much smaller, less powerful generator may be included which will take much longer to generate the total energy needed. This lower power is only sufficient to accelerate a tiny amount of fuel per second, and would be insufficient for launching from Earth. However, over long periods in orbit where there is no friction, the required velocity will eventually be achieved. For example, it took SMART-1 more than a year to reach the Moon, whereas with a chemical rocket it takes a few days. Because the ion drive needs much less fuel, the total launched mass is usually lower, which typically results in a lower overall cost, but the journey takes longer. Mission planning therefore frequently involves adjusting and choosing the propulsion system so as to minimise the total cost of the project, and can involve trading off launch costs and mission duration against payload fraction. Most rocket engines are internal combustion heat engines (although non-combusting forms exist). Rocket engines generally produce a high-temperature reaction mass, as a hot gas. This is achieved by combusting a solid, liquid or gaseous fuel with an oxidiser within a combustion chamber. The extremely hot gas is then allowed to escape through a high-expansion-ratio nozzle. This bell-shaped nozzle is what gives a rocket engine its characteristic shape. The effect of the nozzle is to dramatically accelerate the mass, converting most of the thermal energy into kinetic energy. Exhaust speeds as high as 10 times the speed of sound at sea level are common. Rocket engines provide essentially the highest specific powers and high specific thrusts of any engine used for spacecraft propulsion. Ion propulsion rockets can heat a plasma or charged gas inside a magnetic bottle and release it via a magnetic nozzle, so that no solid matter need come in contact with the plasma. Of course, the machinery to do this is complex, but research into nuclear fusion has developed methods, some of which have been proposed to be used in propulsion systems, and some have been tested in a lab. See rocket engine for a listing of various kinds of rocket engines using different heating methods, including chemical, electrical, solar, and nuclear. Rather than relying on high temperature and fluid dynamics to accelerate the reaction mass to high speeds, there are a variety of methods that use electrostatic or electromagnetic forces to accelerate the reaction mass directly. Usually the reaction mass is a stream of ions. Such an engine typically uses electric power, first to ionize atoms, and then to create a voltage gradient to accelerate the ions to high exhaust velocities. For these drives, at the highest exhaust speeds, energetic efficiency and thrust are both inversely proportional to exhaust velocity.
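As a rough sketch of where the gigawatt figure comes from, combining the ideal power-to-thrust relation above with the ion-thruster exhaust velocity from the table (the vehicle mass is the same illustrative 10,000 kg probe, so the numbers are indicative only):

```python
G0 = 9.81          # m/s^2, standard gravity
MASS = 10_000.0    # kg, vehicle mass -- illustrative, same probe as the table
V_E = 50_000.0     # m/s, ion-thruster effective exhaust velocity (from the table)

# For an Earth launch the thrust must at least equal the weight (T/W > 1).
thrust_needed = MASS * G0                  # N
# Ideal (100% efficient) jet power: P = 0.5 * F * v_e
power_needed = 0.5 * thrust_needed * V_E   # W

print(f"thrust needed : {thrust_needed / 1e3:.0f} kN")
print(f"power needed  : {power_needed / 1e9:.2f} GW")   # roughly 2.5 GW, i.e. gigawatt class
```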
The very high exhaust velocity of these drives means they require huge amounts of energy, and thus with practical power sources they provide low thrust, but they use hardly any fuel. For some missions, particularly reasonably close to the Sun, solar energy may be sufficient, and has very often been used, but for others further out or at higher power, nuclear energy is necessary; engines drawing their power from a nuclear source are called nuclear electric rockets. With any current source of electrical power, chemical, nuclear or solar, the maximum amount of power that can be generated limits the amount of thrust that can be produced to a small value. Power generation adds significant mass to the spacecraft, and ultimately the weight of the power source limits the performance of the vehicle. Current nuclear power generators are approximately half the weight of solar panels per watt of energy supplied, at terrestrial distances from the Sun. Chemical power generators are not used due to the far lower total available energy. Beamed power to the spacecraft shows some potential. Some electromagnetic methods:
- Ion thrusters (accelerate ions first and later neutralize the ion beam with an electron stream emitted from a cathode called a neutralizer)
- Electrothermal thrusters (electromagnetic fields are used to generate a plasma to increase the heat of the bulk propellant; the thermal energy imparted to the propellant gas is then converted into kinetic energy by a nozzle of either physical material construction or by magnetic means)
- Electromagnetic thrusters (ions are accelerated either by the Lorentz force or by the effect of electromagnetic fields where the electric field is not in the direction of the acceleration)
- Mass drivers (for propulsion)
In electrothermal and electromagnetic thrusters, both ions and electrons are accelerated simultaneously, so no neutralizer is required.
Without internal reaction mass
The law of conservation of momentum is usually taken to imply that any engine which uses no reaction mass cannot accelerate the center of mass of a spaceship (changing orientation, on the other hand, is possible). But space is not empty, especially space inside the Solar System; there are gravitational fields, magnetic fields, electromagnetic waves, solar wind and solar radiation. Electromagnetic waves in particular are known to contain momentum, despite being massless; specifically the momentum flux density P of an EM wave is quantitatively 1/c times the Poynting vector S, i.e. P = S/c, where c is the velocity of light. Field propulsion methods which do not rely on reaction mass thus must try to take advantage of this fact by coupling to a momentum-bearing field such as an EM wave that exists in the vicinity of the craft. However, because many of these phenomena are diffuse in nature, corresponding propulsion structures need to be proportionately large. There are several different space drives that need little or no reaction mass to function. A tether propulsion system employs a long cable with a high tensile strength to change a spacecraft's orbit, such as by interaction with a planet's magnetic field or through momentum exchange with another object. Solar sails rely on radiation pressure from electromagnetic energy, but they require a large collection surface to function effectively. The magnetic sail deflects charged particles from the solar wind with a magnetic field, thereby imparting momentum to the spacecraft.
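As a rough feel for how little momentum a diffuse field carries, here is a minimal sketch of the P = S/c relation applied to a solar sail; the irradiance is the approximate solar constant near Earth, and the sail area is an arbitrary example rather than any real mission's value:

```python
C = 299_792_458.0   # m/s, speed of light
S = 1361.0          # W/m^2, approximate solar irradiance at 1 AU
AREA = 1000.0       # m^2, example sail area (roughly a 32 m x 32 m square)

momentum_flux = S / C                           # N/m^2 carried by the incident sunlight
force_absorbing = momentum_flux * AREA          # perfectly absorbing sail
force_reflecting = 2.0 * momentum_flux * AREA   # perfectly reflecting sail (momentum reversed)

print(f"radiation pressure             : {momentum_flux * 1e6:.2f} micro-pascal")
print(f"force on a 1000 m^2 reflector  : {force_reflecting * 1e3:.2f} mN")
```

Millinewtons of thrust from a thousand square metres of sail is why such structures need to be proportionately large, as noted above.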
A variant of the magnetic sail is the mini-magnetospheric plasma propulsion system, which uses a small cloud of plasma held in a magnetic field to deflect the Sun's charged particles. An E-sail would use very thin and lightweight wires holding an electric charge to deflect these particles, and may have more controllable directionality. As a proof of concept, NanoSail-D became the first solar sail spacecraft to orbit Earth. There are plans to add such sails to future Earth orbit satellites, enabling them to de-orbit and burn up once they are no longer needed. Cubesail will be the first mission to demonstrate solar sailing in low Earth orbit, and the first mission to demonstrate full three-axis attitude control of a solar sail. A satellite or other space vehicle is subject to the law of conservation of angular momentum, which constrains a body from a net change in angular velocity. Thus, for a vehicle to change its relative orientation without expending reaction mass, another part of the vehicle may rotate in the opposite direction. Non-conservative external forces, primarily gravitational and atmospheric, can contribute up to several degrees per day to angular momentum, so secondary systems are designed to "bleed off" undesired rotational energies built up over time. Accordingly, many spacecraft utilize reaction wheels or control moment gyroscopes to control orientation in space. A gravitational slingshot can carry a space probe onward to other destinations without the expense of reaction mass. By harnessing the gravitational energy of other celestial objects, the spacecraft can pick up kinetic energy. However, even more energy can be obtained from the gravity assist if rockets are used.
On the other hand, very lightweight or very high speed engines have been proposed that take advantage of the air during ascent: - SABRE - a lightweight hydrogen fuelled turbojet with precooler - ATREX - a lightweight hydrogen fuelled turbojet with precooler - Liquid air cycle engine - a hydrogen fuelled jet engine that liquifies the air before burning it in a rocket engine - Scramjet - jet engines that use supersonic combustion Normal rocket launch vehicles fly almost vertically before rolling over at an altitude of some tens of kilometers before burning sideways for orbit; this initial vertical climb wastes propellant but is optimal as it greatly reduces airdrag. Airbreathing engines burn propellant much more efficiently and this would permit a far flatter launch trajectory, the vehicles would typically fly approximately tangentially to Earth's surface until leaving the atmosphere then perform a rocket burn to bridge the final delta-v to orbital velocity. Planetary arrival and landing When a vehicle is to enter orbit around its destination planet, or when it is to land, it must adjust its velocity. This can be done using all the methods listed above (provided they can generate a high enough thrust), but there are a few methods that can take advantage of planetary atmospheres and/or surfaces. - Aerobraking allows a spacecraft to reduce the high point of an elliptical orbit by repeated brushes with the atmosphere at the low point of the orbit. This can save a considerable amount of fuel because it takes much less delta-V to enter an elliptical orbit compared to a low circular orbit. Because the braking is done over the course of many orbits, heating is comparatively minor, and a heat shield is not required. This has been done on several Mars missions such as Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter, and at least one Venus mission, Magellan. - Aerocapture is a much more aggressive manoeuver, converting an incoming hyperbolic orbit to an elliptical orbit in one pass. This requires a heat shield and much trickier navigation, because it must be completed in one pass through the atmosphere, and unlike aerobraking no preview of the atmosphere is possible. If the intent is to remain in orbit, then at least one more propulsive maneuver is required after aerocapture—otherwise the low point of the resulting orbit will remain in the atmosphere, resulting in eventual re-entry. Aerocapture has not yet been tried on a planetary mission, but the re-entry skip by Zond 6 and Zond 7 upon lunar return were aerocapture maneuvers, because they turned a hyperbolic orbit into an elliptical orbit. On these missions, because there was no attempt to raise the perigee after the aerocapture, the resulting orbit still intersected the atmosphere, and re-entry occurred at the next perigee. - A ballute is an inflatable drag device. - Parachutes can land a probe on a planet or moon with an atmosphere, usually after the atmosphere has scrubbed off most of the velocity, using a heat shield. - Airbags can soften the final landing. - Lithobraking, or stopping by impacting the surface, is usually done by accident. However, it may be done deliberately with the probe expected to survive (see, for example, Deep Impact (spacecraft)), in which case very sturdy probes are required. A variety of hypothetical propulsion techniques have been considered that would require entirely new principles of physics to be realized or that may not exist. 
To date, such methods are highly speculative and include:
- Diametric drive
- Pitch drive & bias drive
- Disjunction drive
- Alcubierre drive (a form of warp drive)
- Differential sail
- Wormholes – theoretically possible, but unachievable in practice with current technology
- Woodward effect
- Reactionless drives – break the law of conservation of momentum; theoretically impossible
- Photon rocket
- Bussard ramjet
- A "hyperspace" drive based upon Heim theory
- Micronewton electromagnetic thruster
- Linear momentum loss has been claimed for an electromagnetically powered thruster

Table of methods
Below is a summary of some of the more popular, proven technologies, followed by increasingly speculative methods. Four numbers are shown. The first is the effective exhaust velocity: the equivalent speed at which the propellant leaves the vehicle. This is not necessarily the most important characteristic of the propulsion method; thrust, power consumption and other factors can be. However:
- if the delta-v is much more than the exhaust velocity, then exorbitant amounts of fuel are necessary (see the section on calculations, above)
- if the exhaust velocity is much more than the delta-v, then proportionally more energy is needed; if the power is limited, as with solar energy, this means that the journey takes a proportionally longer time

The second and third are the typical amounts of thrust and the typical burn times of the method. Outside a gravitational potential, small amounts of thrust applied over a long period will give the same effect as large amounts of thrust over a short period. (This result does not apply when the object is significantly influenced by gravity.) The fourth is the maximum delta-v this technique can give (without staging). For rocket-like propulsion systems this is a function of mass fraction and exhaust velocity. Mass fraction for rocket-like systems is usually limited by propulsion system weight and tankage weight. For a system to achieve this limit, typically the payload may need to be a negligible percentage of the vehicle, and so the practical limit on some systems can be much lower.
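To make the first of those two points concrete before the table, here is a minimal sketch using the ideal (Tsiolkovsky) rocket equation; the exhaust velocity is the typical chemical value quoted earlier, and the delta-v values are arbitrary illustrations:

```python
import math

def propellant_fraction(delta_v: float, v_e: float) -> float:
    """Fraction of the initial mass that must be propellant (ideal rocket equation)."""
    return 1.0 - math.exp(-delta_v / v_e)

v_e = 4_500.0  # m/s, typical chemical-rocket exhaust velocity (from the text)
for delta_v in (1_000.0, 4_500.0, 9_500.0, 20_000.0):
    frac = propellant_fraction(delta_v, v_e)
    print(f"delta-v {delta_v / 1000:5.1f} km/s -> {frac * 100:5.1f}% of lift-off mass is propellant")
```

Once the required delta-v reaches several times the exhaust velocity, nearly the whole lift-off mass must be propellant, which is why high delta-v missions favour high exhaust velocities despite the energy penalty described in the second bullet.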
|Method||Effective exhaust velocity (km/s)||Thrust (N)||Firing duration||Maximum delta-v (km/s)||Technology readiness level|
|Solid-fuel rocket||<2.5||<10^7||Minutes||7||9: Flight proven|
|Hybrid rocket|| || ||Minutes||>3||9: Flight proven|
|Monopropellant rocket||1 – 3[citation needed]||0.1 – 100[citation needed]||Milliseconds – minutes||3||9: Flight proven|
|Liquid-fuel rocket||<4.4||<10^7||Minutes||9||9: Flight proven|
|Electrostatic ion thruster||15 – 210[full citation needed]|| ||Months – years||>100||9: Flight proven|
|Hall-effect thruster (HET)||8 – 50[citation needed]|| ||Months – years||>100||9: Flight proven|
|Resistojet rocket||2 – 6||10^−2 – 10||Minutes||?||8: Flight qualified|
|Arcjet rocket||4 – 16||10^−2 – 10||Minutes||?||8: Flight qualified[citation needed]|
|Field emission electric propulsion (FEEP)||100 – 130||10^−6 – 10^−3||Months – years||?||8: Flight qualified|
|Pulsed plasma thruster (PPT)||20||0.1||2,000 – 10,000 hours||?||7: Prototype demoed in space|
|Dual-mode propulsion rocket||1 – 4.7||0.1 – 10^7||Milliseconds – minutes||3 – 9||7: Prototype demoed in space|
|Tripropellant rocket||2.5 – 5.3[citation needed]||0.1 – 10^7[citation needed]||Minutes||9||6: Prototype demoed on ground|
|Magnetoplasmadynamic thruster (MPD)||20 – 100||100||Weeks||?||6: Model, 1 kW demoed in space|
|Nuclear–thermal rocket||9||10^7||Minutes||>20||6: Prototype demoed on ground|
|Propulsive mass drivers||0 – 30||10^4 – 10^8||Months||?||6: Model, 32 MJ demoed on ground|
|Tether propulsion||N/A||1 – 10^12||Minutes||7||6: Model, 31.7 km demoed in space|
|Air-augmented rocket||5 – 6||0.1 – 10^7||Seconds – minutes||>7?||6: Prototype demoed on ground|
|Liquid-air-cycle engine||4.5||10^3 – 10^7||Seconds – minutes||?||6: Prototype demoed on ground|
|Pulsed-inductive thruster (PIT)||10 – 80||20||Months||?||5: Component validated in vacuum|
|Variable-specific-impulse magnetoplasma rocket (VASIMR)||10 – 300[citation needed]||40 – 1,200[citation needed]||Days – months||>100||5: Component, 200 kW validated in vacuum|
|Magnetic-field oscillating amplified thruster||10 – 130||0.1 – 1||Days – months||>100||5: Component validated in vacuum|
|Solar–thermal rocket||7 – 12||1 – 100||Weeks||>20||4: Component validated in lab|
|Radioisotope rocket||7 – 8[citation needed]||1.3 – 1.5||Months||?||4: Component validated in lab|
|Nuclear–electric rocket||As electric propulsion method used|| || || ||4: Component, 400 kW validated in lab|
|Orion Project (near-term nuclear pulse propulsion)||20 – 100||10^9 – 10^12||Days||30 – 60||3: Validated, 900 kg proof-of-concept|
|Space elevator||N/A||N/A||Indefinite||>12||3: Validated proof-of-concept|
|Reaction Engines SABRE||30/4.5||0.1 – 10^7||Minutes||9.4||3: Validated proof-of-concept|
|Magnetic sails||145 – 750, solar wind||2/t||Indefinite||?||3: Validated proof-of-concept|
|Mini-magnetospheric plasma propulsion||200||1/kW||Months||?||3: Validated proof-of-concept|
|Beam-powered/laser||As propulsion method powered by beam|| || || ||3: Validated, 71 m proof-of-concept|
|Launch loop/orbital ring||N/A||10^4||Minutes||11 – 30||2: Technology concept formulated|
|Nuclear pulse propulsion (Project Daedalus' drive)||20 – 1,000||10^9 – 10^12||Years||15,000||2: Technology concept formulated|
|Gas-core reactor rocket||10 – 20||10^3 – 10^6||?||?||2: Technology concept formulated|
|Nuclear salt-water rocket||100||10^3 – 10^7||Half-hour||?||2: Technology concept formulated|
|Fission sail||?||?||?||?||2: Technology concept formulated|
|Fission-fragment rocket||15,000||?||?||?||2: Technology concept formulated|
|Nuclear–photonic rocket||299,792||10^−5 – 1||Years – decades||?||2: Technology concept formulated|
|Fusion rocket||100 – 1,000[citation needed]||?||?||?||2: Technology concept formulated|
|Antimatter-catalyzed nuclear pulse propulsion||200 – 4,000||?||Days – weeks||?||2: Technology concept formulated|
|Antimatter rocket||10,000 – 100,000[citation needed]||?||?||?||2: Technology concept formulated|
|Bussard ramjet||2.2 – 20,000||?||Indefinite||30,000||2: Technology concept formulated|

Spacecraft propulsion systems are often first statically tested on Earth's surface, within the atmosphere, but many systems require a vacuum chamber to test fully. Rockets are usually tested at a rocket engine test facility well away from habitation and other buildings for safety reasons. Ion drives are far less dangerous and require much less stringent safety precautions; usually only a fairly large vacuum chamber is needed. Famous static test locations can be found at Rocket Ground Test Facilities. Some systems cannot be adequately tested on the ground and test launches may be employed at a Rocket Launch Site.
- Photonic laser thruster
- In-space propulsion technologies
- Interplanetary spaceflight
- Interstellar travel
- Index of aerospace engineering articles
- Lists of rockets
- Magnetic sail
- Orbital maneuver
- Orbital mechanics
- Plasma propulsion engine
- Pulse detonation engine
- Rocket engine nozzle
- Solar sail
- Specific impulse
- Stochastic electrodynamics
- Tsiolkovsky rocket equation
- ^ With things moving around in orbits and nothing staying still, the question may be quite reasonably asked, stationary relative to what? The answer is that, for the energy to be zero (and in the absence of gravity, which complicates the issue somewhat), the exhaust must stop relative to the initial motion of the rocket before the engines were switched on. It is possible to do calculations from other reference frames, but consideration for the kinetic energy of the exhaust and propellant needs to be given. In Newtonian mechanics the initial position of the rocket is the centre of mass frame for the rocket/propellant/exhaust, and has the minimum energy of any frame.
- Hess, M.; Martin, K. K.; Rachul, L. J. (February 7, 2002). "Thrusters Precisely Guide EO-1 Satellite in Space First". NASA. Archived from the original on 2007-12-06. Retrieved 2007-07-30.
- Phillips, Tony (May 30, 2000). "Solar S'Mores". NASA. Retrieved 2007-07-30.
- Olsen, Carrie (September 21, 1995). "Hohmann Transfer & Plane Changes". NASA. Archived from the original on 2007-07-15. Retrieved 2007-07-30.
- Staff (April 24, 2007). "Interplanetary Cruise". 2001 Mars Odyssey. NASA. Archived from the original on August 2, 2007. Retrieved 2007-07-30.
- Doody, Dave (February 7, 2002). "Chapter 4. Interplanetary Trajectories". Basics of Space Flight (NASA JPL). Retrieved 2007-07-30.
- Hoffman, S. (August 20–22, 1984). "A comparison of aerobraking and aerocapture vehicles for interplanetary missions". AIAA and AAS, Astrodynamics Conference. Seattle, Washington: American Institute of Aeronautics and Astronautics. pp. 25 p. Archived from the original on September 27, 2007. Retrieved 2007-07-31.
- Anonymous (2007). "Basic Facts on Cosmos 1 and Solar Sailing". The Planetary Society. Archived from the original on July 3, 2007. Retrieved 2007-07-26.
- Rahls, Chuck (December 7, 2005). "Interstellar Spaceflight: Is It Possible?". Physorg.com. Retrieved 2007-07-31.
- Zobel, Edward A. (2006). "Summary of Introductory Momentum Equations". Zona Land. Archived from the original on September 27, 2007. Retrieved 2007-08-02.
- Benson, Tom. "Guided Tours: Beginner's Guide to Rockets". NASA. Retrieved 2007-08-02.
- equation 19-1 Rocket propulsion elements 7th edition- Sutton - Choueiri, Edgar Y. (2004). "A Critical History of Electric Propulsion: The First 50 Years (1906–1956)". Journal of Propulsion and Power 20 (2): 193–203. doi:10.2514/1.9245. - Drachlis, Dave (October 24, 2002). "NASA calls on industry, academia for in-space propulsion innovations". NASA. Archived from the original on December 6, 2007. Retrieved 2007-07-26. - NASA's Nanosail-D Becomes the First Solar Sail Spacecraft to Orbit the Earth | Inhabitat - Green Design Will Save the World - "Space Vehicle Control". University of Surrey. Retrieved 8 August 2015. - King-Hele, Desmond (1987). Satellite orbits in an atmosphere: Theory and application. Springer. ISBN 978-0-216-92252-5. - Tsiotras, P.; Shen, H.; Hall, C. D. (2001). "Satellite attitude control and power tracking with energy/momentum wheels". Journal of Guidance, Control, and Dynamics 43 (1): 23–34. Bibcode:2001JGCD...24...23T. doi:10.2514/2.4705. ISSN 0731-5090. - Dykla, J. J.; Cacioppo, R.; Gangopadhyaya, A. (2004). "Gravitational slingshot". American Journal of Physics 72 (5): 619–000. Bibcode:2004AmJPh..72..619D. doi:10.1119/1.1621032. - Anonymous (2006). "The Sabre Engine". Reaction Engines Ltd. Retrieved 2007-07-26. - Harada, K.; Tanatsugu, N.; Sato, T. (1997). "Development Study on ATREX Engine". Acta Astronautica 41 (12): 851–862. doi:10.1016/S0094-5765(97)00176-8. - Dimitri S.H. Charrier (2012). "Micronewton electromagnetic thruster". Applied Physics Letters 101. p. 034104. - ESA Portal – ESA and ANU make space propulsion breakthrough - Hall effect thrusters have been used on Soviet/Russian satellites for decades. - A Xenon Resistojet Propulsion System for Microsatellites[dead link] (Surrey Space Centre, University of Surrey, Guildford, Surrey) - Alta - Space Propulsion, Systems and Services - Field Emission Electric Propulsion - Google Translate - Young Engineers' Satellite 2 - NASA GTX Archived November 22, 2008, at the Wayback Machine. - The PIT MkV pulsed inductive thruster - Pratt & Whitney Rocketdyne Wins $2.2 Million Contract Option for Solar Thermal Propulsion Rocket Engine (Press release, June 25, 2008, Pratt & Whitney Rocketdyne)[dead link] - "Operation Plumbbob". July 2003. Retrieved 2006-07-31. - Brownlee, Robert R. (June 2002). "Learning to Contain Underground Nuclear Explosions". Retrieved 2006-07-31. - PSFC/JA-05-26:Physics and Technology of the Feasibility of Plasma Sails, Journal of Geophysical Research, September 2005 Archived March 18, 2009, at the Wayback Machine. - NASA Beginner's Guide to Propulsion - NASA Breakthrough Propulsion Physics project - Rocket Propulsion - Journal of Advanced Theoretical Propulsion - Different Rockets - Earth-to-Orbit Transportation Bibliography - Spaceflight Propulsion - a detailed survey by Greg Goebel, in the public domain - Rocket motors on howstuffworks.com - Johns Hopkins University, Chemical Propulsion Information Analysis Center - Tool for Liquid Rocket Engine Thermodynamic Analysis - NASA Jet Propulsion Laboratory - Smithsonian National Air and Space Museum's How Things Fly website
X-ray diffraction (XRD) is a non-destructive method that lets you calculate the interplanar spacing of the atoms within a solid using Bragg's Law. There are numerous applications for XRD, including measuring residual stress on the surface of a material by pointing X-rays towards a polycrystalline solid and measuring the return X-ray scatter that constructively interferes as it bounces off the object. In this article, we'll cover some of the different XRD measurement techniques, common applications of XRD in the real world, and explore in detail how XRD can be used to measure residual stresses of polycrystalline solids.
What Is the Principle of X-ray Diffraction?
X-ray diffraction is used to identify a crystal's atomic and molecular structure. The X-ray pattern contains information about the spacing of the lattice planes and crystal orientation, which is used to determine the crystal's structure. The fundamental principle of X-ray diffraction is Bragg's Law, which we'll cover later in this article. X-ray diffraction can be performed using various techniques depending on the application and the particular use case where it's being implemented. These techniques include the sin²ψ method, the two-angle method, the Marion-Cohen method, and the cosα method.
The sin²ψ method was initially developed in the 1920s and used for X-ray residual stress analysis for many decades; this method is also still used by manufacturers today. To perform a measurement using sin²ψ, a zero- or one-dimensional sensor is used to acquire a portion of the diffraction Debye-Scherrer ring. Once a portion of the ring has been acquired, the incidence angle is changed to acquire the remaining crystal distortion data. Debye-Scherrer rings are the resulting concentric diffraction rings produced by Bragg reflections when a highly parallel X-ray beam illuminates polycrystalline grains within an object.
Just as the name suggests, the two-angle method is used to determine stress within a polycrystalline material by measuring the lattice spacing at any two ψ (psi) angles, as long as the lattice spacing is a linear function of sin²ψ. Lattice spacing data is then determined by using two extreme values of ψ, typically 0 and 45 degrees.
The Marion-Cohen method can be used to measure residual stresses within an object by characterizing the dependence of lattice spacing on stresses within highly textured materials. For stresses produced by shot peening, machining, or grinding, residual stresses that are measured with the Marion-Cohen method produce nearly identical results when compared to the two-angle and sin²ψ methods.
The cosα (cosine alpha) XRD method is an improvement on the sin²ψ (sine squared psi) method we discussed earlier. Instead of using a zero- or one-dimensional sensor, this technique utilizes a two-dimensional sensor to acquire the entire Debye-Scherrer ring simultaneously. Due to this single-exposure measurement, changing the angle of incidence is no longer necessary, and a goniometer is no longer required in the XRD measurement device.
Is XRD the Same as XRF?
No, XRD (X-ray diffraction) is not the same as XRF (X-ray fluorescence), even though both methods are sometimes used together to analyze a particular material or sample. While XRD is used to measure and identify the mineralogy of an object, XRF gives information about the sample's chemical composition, such as the types of elements present.
XRF does not provide the exact composition and molecular structure of the sample, such as that provided by XRD.
Common XRD Applications
There are many different applications for XRD, with one of the most common being the measurement of residual stresses within polycrystalline solids. Some of the most common XRD applications include:
- Determining Structural Properties of Materials
- Grain Size
- Lattice Parameters
- Phase Composition
- Preferred Orientation
- Identifying Orientation & Crystalline Phases
- Determining Atomic Arrangement
- Measuring Multi-Layers & Thickness of Thin Films
- Determining Unit Cell Dimensions
- Measuring Sample Purity
- Determining Modal Amounts of Minerals
How Does XRD Measure Residual Stress?
XRD doesn't measure residual stress directly; rather, it gives the information and data required to calculate residual stresses and strains within an object. By using the various XRD techniques mentioned above, you can observe peak profiles in select sections of the Debye-Scherrer rings.
How Do You Calculate Stress from XRD Data?
Stresses are calculated from XRD data using a fundamental physics principle called Bragg's Law. This law states that every set of atomic planes within a crystalline structure diffracts X-rays of a fixed wavelength at one unique angle. This means you can use the incident ray, wavelength, and a measurement angle theta to solve Bragg's equation and determine the d-spacing between the crystal lattice planes of atoms in a polycrystalline material. Once you know the d-spacing, you can compare a nominal unstressed sample to a stressed sample in order to determine the precise stresses present (a short numerical sketch of this calculation appears at the end of this article).
Advantages of XRD
One of the biggest advantages of XRD is the fact that it is a non-destructive measurement method, meaning that your test sample is fully preserved before and after the technique. Additionally, it's easy to calculate residual stresses using XRD data, and simple to interpret the data that the X-ray diffraction method provides. By spotting the X-ray beam onto an area of a few millimeters on a sample, it's possible to obtain a wide range of structural properties incredibly quickly.
Have Questions About X-Ray Diffraction & Residual Stress Measurement?
Pulstec is a leader in the residual stress measurement device space and has developed a cutting-edge XRD residual stress analyzer that is used by manufacturers and R&D departments all over the globe. If you have questions about X-ray diffraction, or would like to request a free demo of our XRD analyzer, contact us today.
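As a rough numerical sketch of the calculation described above — first solving Bragg's Law for the d-spacing, then fitting d against sin²ψ to estimate a stress — here is a small example. The wavelength, angles, elastic constants, and measured 2θ values are all made-up illustrative numbers, and the single-line sin²ψ formula used is a common textbook form for a biaxial surface stress, not any particular instrument vendor's implementation:

```python
import math

# --- Step 1: Bragg's Law, n * wavelength = 2 * d * sin(theta) -----------------
WAVELENGTH = 1.5406e-10   # m, Cu K-alpha style value, illustrative only
def d_spacing(two_theta_deg: float, order: int = 1) -> float:
    """Interplanar spacing from a measured diffraction angle (2-theta, degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * WAVELENGTH / (2.0 * math.sin(theta))

# --- Step 2: sin^2(psi) method (textbook biaxial form) -------------------------
# d(psi) ~ d0 * (1 + (1 + nu)/E * sigma * sin^2(psi) + ...), so the slope of
# d versus sin^2(psi) gives sigma ~ slope * E / ((1 + nu) * d0).
E_MOD = 210e9   # Pa, Young's modulus (steel-like, assumed)
NU = 0.29       # Poisson's ratio (assumed)

# Hypothetical measurements: (psi tilt in degrees, measured 2-theta in degrees)
measurements = [(0.0, 156.40), (18.0, 156.37), (27.0, 156.33), (45.0, 156.25)]

xs = [math.sin(math.radians(psi)) ** 2 for psi, _ in measurements]
ds = [d_spacing(tt) for _, tt in measurements]

# Simple least-squares line d = a + slope * sin^2(psi)
n = len(xs)
mean_x, mean_d = sum(xs) / n, sum(ds) / n
slope_num = sum((x - mean_x) * (d - mean_d) for x, d in zip(xs, ds))
slope_den = sum((x - mean_x) ** 2 for x in xs)
slope = slope_num / slope_den
d0 = ds[0]   # spacing at psi = 0, used as the reference here

sigma = slope * E_MOD / ((1.0 + NU) * d0)   # Pa, estimated in-plane stress
print(f"d-spacings (angstrom): {[round(d * 1e10, 4) for d in ds]}")
print(f"estimated stress: {sigma / 1e6:.0f} MPa (tensile if positive)")
```

With these made-up inputs the d-spacing grows with sin²ψ, and the fit returns a tensile stress on the order of 100 MPa; in practice the reference spacing and elastic constants would come from an unstressed calibration sample, as the text describes.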
Earthquakes can deform the ground, make buildings and other structures collapse, and create tsunamis (large sea waves). Lives may be lost in the resulting destruction. In the last 500 years, several million people have been killed by earthquakes around the world, including over 240,000 in the 1976 T'ang-Shan, China, earthquake. Worldwide, earthquakes have also caused severe property and structural damage. Adequate precautions, such as education, emergency planning, and constructing stronger, more flexible, safely designed structures, can limit the loss of life and decrease the damage caused by earthquakes.
Focus and Epicenter- The point within the Earth along the rupturing geological fault where an earthquake originates is called the focus, or hypocenter. The point on the Earth's surface directly above the focus is called the epicenter.
Faults- Stress in the Earth's crust creates faults, resulting in earthquakes. The properties of an earthquake depend strongly on the type of fault slip, or movement along the fault, that causes the earthquake. Geologists categorize faults according to the direction of the fault slip. The surface between the two sides of a fault lies in a plane, and the direction of the plane is usually not vertical; rather it dips at an angle into the Earth.
Waves- The sudden movement of rocks along a fault causes vibrations that transmit energy through the Earth in the form of waves. Waves that travel in the rocks below the surface of the Earth are called body waves, and there are two types of body waves: primary, or P, waves, and secondary, or S, waves. The S waves, also known as shearing waves, move the ground back and forth.
Effects of Earthquakes
Ground Shaking and Landslides- Earthquake waves make the ground move, shaking buildings and causing poorly designed or weak structures to partially or totally collapse. The ground shaking weakens soils and foundation materials under structures and causes dramatic changes in fine-grained soils. During an earthquake, water-saturated sandy soil becomes like liquid mud, an effect called liquefaction. Liquefaction causes damage as the foundation soil beneath structures and buildings weakens.
Fire- Another post-earthquake threat is fire. The amount of damage caused by post-earthquake fire depends on the types of building materials used, whether water lines are intact, and whether natural gas mains have been broken. Ruptured gas mains may lead to numerous fires, and fire fighting cannot be effective if the water mains are not intact to transport water to the fires.
Tsunami Waves and Flooding- Along the coasts, sea waves called tsunamis that accompany some large earthquakes centered under the ocean can cause more death and damage than ground shaking. Tsunamis are usually made up of several oceanic waves that travel out from the slipped fault and arrive one after the other on shore. They can strike without warning, often in places very distant from the epicenter of the earthquake. Tsunami waves are sometimes inaccurately referred to as tidal waves, but tidal forces do not cause them. Rather, tsunamis occur when a major fault under the ocean floor suddenly slips. The displaced rock pushes water above it like a giant paddle, producing powerful water waves at the ocean surface.
The ocean waves spread out from the vicinity of the earthquake source and move across the ocean until they reach the coastline, where their height increases as they reach the continental shelf, the part of the Earth's crust that slopes, or rises, from the ocean floor up to the land.
Disease- Catastrophic earthquakes can create a risk of widespread disease outbreaks, especially in underdeveloped countries. Damage to water supply lines, sewage lines, and hospital facilities as well as lack of housing may lead to conditions that contribute to the spread of contagious diseases, such as influenza (the flu) and other viral infections.
Blizzard
Blizzard, severe storm characterized by extreme cold, strong winds, and a heavy snowfall. These storms are most common in the western United States but sometimes occur in other parts of the country. According to the U.S. National Weather Service, winds of 35 mph (56.3 km/h) or more and visibility of 0.25 mi (0.40 km) or less are conditions that, if they endure for three hours, define a blizzard. The great blizzard of March 11-14, 1888, which covered the eastern U.S., was perhaps the most paralyzing of any storm on record.
Cyclone
Cyclone, in strict meteorological terminology, an area of low atmospheric pressure surrounded by a wind system blowing, in the northern hemisphere, in a counterclockwise direction. A corresponding high-pressure area with clockwise winds is known as an anticyclone. In the southern hemisphere these wind directions are reversed. Cyclones are commonly called lows and anticyclones highs. The term cyclone has often been more loosely applied to a storm and disturbance attending such pressure systems, particularly the violent tropical hurricane and the typhoon, which center on areas of unusually low pressure.
Hurricane
Hurricane, name given to violent storms that originate over the tropical or subtropical waters of the Atlantic Ocean, Caribbean Sea, Gulf of Mexico, or North Pacific Ocean east of the International Date Line. Such storms over the North Pacific west of the International Date Line are called typhoons; those elsewhere are known as tropical cyclones, which is the general name for all such storms including hurricanes and typhoons. These storms can cause great damage to property and loss of human life due to high winds, flooding, and large waves crashing against shorelines.
How Hurricanes Form- Tropical cyclones form and grow over warm ocean water, drawing their energy from latent heat. Latent heat is the energy released when water vapor in rising hot, humid air condenses into clouds and rain. As warmed air rises, more air flows into the area where the air is rising, creating wind. The Earth's rotation causes the wind to follow a curved path over the ocean (the Coriolis effect), which helps give tropical cyclones their circular appearance. Hurricanes and tropical cyclones form, maintain their strength, and grow only when they are over ocean water that is approximately 27°C (80°F). Such warmth causes large amounts of water to evaporate, making the air very humid. This warm water requirement accounts for the existence of tropical cyclone seasons, which occur generally during a hemisphere's summer and autumn. Because water is slow to warm up and cool down, oceans do not become warm enough for tropical cyclones to occur in the spring. Oceans can become warm enough in the summer for hurricanes to develop, and the oceans also retain summer heat through the fall.
Hurricanes weaken and die out when cut off from warm, humid air as they move over cooler water or land, but can remain dangerous as they weaken. Hurricanes and other tropical cyclones begin as disorganized clusters of showers and thunderstorms. When one of these clusters becomes organized with its winds making a complete circle around a center, it is called a tropical depression. When a depression's sustained winds reach 63 km/h (39 mph) or more, it becomes a tropical storm and is given a name. By definition, a tropical storm becomes a hurricane when winds reach 119 km/h (74 mph) or more.
Characteristics of Hurricanes- A hurricane consists of bands of thunderstorms that spiral toward the low-pressure center, or "eye" of the storm. Winds also spiral in toward the center, speeding up as they approach the eye. Large thunderstorms create an "eye wall" around the center where winds are the strongest. Winds in the eye itself are nearly calm, and the sky is often clear. Air pressures in the eye at the surface range from around 982 hectopascals (29 inches of mercury) in a weak hurricane to lower than 914 hectopascals (27 inches of mercury) in the strongest storms. (Hectopascals are the metric unit of air pressure and are the same as millibars, a term used by many weather forecasters in the United States. Hectopascals is the preferred term in scientific journals and is being used more often in public forecasts in nations that use the metric system.) In a large, strong storm, hurricane-force winds may be felt over an area with a diameter of more than 100 km (60 mi). The diameter of the area affected by gale winds and torrential rain can extend another 200 km (120 mi) or more outward from the eye of the storm. The diameter of the eye may be less than 16 km (10 mi) in a strong hurricane to more than 48 km (30 mi) in a weak storm. The smaller the diameter of the eye, the stronger the hurricane winds will be. A hurricane's strength is rated from Category 1, which has winds of at least 119 km/h (74 mph), to Category 5, which has winds of more than 249 km/h (155 mph). These categories, known as the Saffir-Simpson hurricane scale, were developed in the 1970s.
Tornado
Tornado, violently rotating column of air extending from within a thundercloud down to ground level. The strongest tornadoes may sweep houses from their foundations, destroy brick buildings, toss cars and school buses through the air, and even lift railroad cars from their tracks. Tornadoes vary in diameter from tens of meters to nearly 2 km (1 mi), with an average diameter of about 50 m (160 ft). Most tornadoes in the northern hemisphere create winds that blow counterclockwise around a center of extremely low atmospheric pressure. In the southern hemisphere the winds generally blow clockwise. Peak wind speeds can range from near 120 km/h (75 mph) to almost 500 km/h (300 mph). The forward motion of a tornado can range from a near standstill to almost 110 km/h (70 mph). A tornado becomes visible when a condensation funnel made of water vapor (a funnel cloud) forms in extreme low pressures, or when the tornado lofts dust, dirt, and debris upward from the ground. A mature tornado may be columnar or tilted, narrow or broad—sometimes so broad that it appears as if the parent thundercloud itself had descended to ground level. Some tornadoes resemble a swaying elephant's trunk. Others, especially very violent ones, may break into several intense suction vortices—intense swirling masses of air—each of which rotates near the parent tornado.
A suction vortex may be only a few meters in diameter, and thus can destroy one house while leaving a neighboring house relatively unscathed. Formation-Many tornadoes, including the strongest ones, develop from a special type of thunderstorm known as a supercell. A supercell is a long-lived, rotating thunderstorm 10 to 16 km (6 to 10 mi) in diameter that may last several hours, travel hundreds of miles, and produce several tornadoes. Supercell tornadoes are often produced in sequence, so that what appears to be a very long damage path from one tornado may actually be the result of a new tornado that forms in the area where the previous tornado died. Sometimes, tornado outbreaks occur, and swarms of supercell storms may occur. Each supercell may spawn a tornado or a sequence of tornadoes. The complete process of tornado formation in supercells is still debated among meteorologists. Scientists generally agree that the first stage in tornado formation is an interaction between the storm updraft and the winds. An updraft is a current of warm, moist air that rises upward through the thunderstorm. The updraft interacts with the winds, which must change with height in favorable ways for the interaction to occur. This interaction causes the updraft to rotate at the middle levels of the atmosphere. The rotating updraft, known as a mesocyclone, stabilizes the thunderstorm and gives it its long-lived supercell characteristics. The next stage is the development of a strong downdraft (a current of cooler air that moves in a downward direction) on the backside of the storm, known as a rear-flank downdraft. It is not clear whether the rear-flank downdraft is induced by rainfall or by pressure forces set up in the storm, although it becomes progressively colder as the rain evaporates into it. This cold air moves downward because it is denser than warm air. The speed of the downdraft increases and the air plunges to the ground, where it fans out at speeds that can exceed 160 km/h (100 mph). The favored location for the development of a tornado is at the area between this rear-flank downdraft and the main storm updraft. However, the details of why a tornado should form there are still not clear. The same condensation process that creates tornadoes makes visible the generally weaker sea-going tornadoes, called waterspouts. Waterspouts occur most frequently in tropical waters. OccurrenceThe United States has the highest average annual number of tornadoes in the world, about 800 per year. Outside the United States, Australia ranks second in tornado frequency. Tornadoes also occur in many other countries, including China, India, Russia, England, and Germany. Bangladesh has been struck several times by devastating killer tornadoes. In the United States, tornadoes occur in all 50 states. However, the region with the most tornadoes is “Tornado Alley,” a swath of the Midwest extending from the Texas Gulf Coastal Plain northward through eastern South Dakota. Another area of high concentration is “Dixie Alley,” which extends across the Gulf Coastal Plain from south Texas eastward to Florida. Tornadoes are most frequent in the Midwest, where conditions are most favorable for the development of the severe thunderstorms that produce tornadoes. The Gulf of Mexico ensures a supply of moist, warm air that enables the storms to survive. 
Weather conditions that trigger severe thunderstorms are frequently in place here: convergence (flowing together) of air along boundaries between dry and moist air masses, convergence of air along the boundaries between warm and cold air masses, and low pressure systems in the upper atmosphere traveling eastward across the plains. In winter, tornado activity is usually confined to the Gulf Coastal Plain. In spring, the most active tornado season, tornadoes typically occur in central Tornado Alley and eastward into the Ohio Valley. In summer, most tornadoes occur in a northern band stretching from the Dakotas eastward into Pennsylvania and southern New York State. The worst tornado disasters in the United States have claimed hundreds of lives. The Tri-State Outbreak of March 18, 1925, had the highest death toll: 740 people died in 7 tornadoes that struck Illinois, Missouri, and Indiana. The Super Outbreak of April 3-4, 1974, spawned 148 tornadoes (the most in any known outbreak) and killed 315 people from Alabama north to Ohio.
Floods
When it rains or snows, some of the water is retained by the soil, some is absorbed by vegetation, some evaporates, and the remainder, which reaches stream channels, is called runoff. Floods occur when soil and vegetation cannot absorb all the water; water then runs off the land in quantities that cannot be carried in stream channels or retained in natural ponds and constructed reservoirs. About 30 percent of all precipitation is runoff, and this amount may be increased by melting snow masses. Periodic floods occur naturally on many rivers, forming an area known as the flood plain. These river floods often result from heavy rain, sometimes combined with melting snow, which causes the rivers to overflow their banks; a flood that rises and falls rapidly with little or no advance warning is called a flash flood. Flash floods usually result from intense rainfall over a relatively small area. Coastal areas are occasionally flooded by unusually high tides induced by severe winds over ocean surfaces, or by tsunamis caused by undersea earthquakes.
Effects of Floods- Floods not only damage property and endanger the lives of humans and animals, but have other effects as well. Rapid runoff causes soil erosion as well as sediment deposition problems downstream. Spawning grounds for fish and other wildlife habitat are often destroyed. High-velocity currents increase flood damage; prolonged high floods delay traffic and interfere with drainage and economic use of lands. Bridge abutments, bank lines, sewer outfalls, and other structures within floodways are damaged, and navigation and hydroelectric power are often impaired. Financial losses due to floods are commonly millions of dollars each year.
Drought
Drought, condition of abnormally dry weather within a geographic region where some rain might usually be expected. A drought is thus quite different from a dry climate, which designates a region that is normally, or at least seasonally, dry. The term drought is applied to a period in which an unusual scarcity of rain causes a serious hydrological imbalance: water-supply reservoirs empty, wells dry up, and crop damage ensues. The severity of the drought is gauged by the degree of moisture deficiency, its duration, and the size of the area affected. If the drought is brief, it is known as a dry spell, or partial drought. A partial drought is usually defined as more than 14 days without appreciable precipitation, whereas a drought may last for years.
Droughts tend to be more severe in some areas than in others. Catastrophic droughts generally occur at latitudes of about 15°-20°, in areas bordering the permanently arid regions of the world. Permanent aridity is a characteristic of those areas where warm, tropical air masses, in descending to earth, become hotter and drier. When a poleward shift in the prevailing westerlies occurs , the high-pressure, anticyclonic conditions of the permanently arid regions impinge on areas that are normally subject to seasonally wet low-pressure weather and a drought ensues. A southward shift in the westerlies caused the most severe drought of the 20th century, the one that afflicted the African region called the Sahel for a dozen years, beginning in 1968. In North America, archaeological studies of Native Americans and statistics derived from long-term agricultural records show that six or seven centuries ago whole areas of the Southwest were abandoned by the indigenous agriculturists because of repeated droughts and were never reoccupied. The statistics indicate that roughly every 22 years—with a precision of three to four years—a major drought occurs in the United States, most seriously affecting the Prairie and midwestern states. The disastrous drought of the 1930s, during which large areas of the Great Plains became known as the Dust Bowl, is one example. The effect of the drought was aggravated by overcropping, overpopulation, and lack of timely relief measures. In Africa, the Sahel drought was also aggravated by nonclimatic determinants such as overcropping, as well as by problems between nations and peoples unfriendly with one another. Although drought cannot be reliably predicted, certain precautions can be taken in drought-risk areas. These include construction of reservoirs to hold emergency water supplies, education to avoid overcropping and overgrazing, and programs to limit settlement in drought-prone areas. Volcano Volcano, mountain or hill formed by the accumulation of materials erupted through one or more openings (called volcanic vents) in the earth's surface. The term volcano can also refer to the vents themselves. Most volcanoes have steep sides, but some can be gently sloping mountains or even flat tablelands, plateaus, or plains. The volcanoes above sea level are the best known, but the vast majority of the world's volcanoes lie beneath the sea, formed along the global oceanic ridge systems that crisscross the deep ocean floor . According to the Smithsonian Institution, 1,511 above-sea volcanoes have been active during the past 10,000 years, 539 of them erupting one or more times during written history. On average, 50 to 60 above-sea volcanoes worldwide are active in any given year; about half of these are continuations of eruptions from previous years, and the rest are new. Volcano Formation-All volcanoes are formed by the accumulation of magma (molten rock that forms below the earth's surface). Magma can erupt through one or more volcanic vents, which can be a single opening, a cluster of openings, or a long crack, called a fissure vent. It forms deep within the earth, generally within the upper part of the mantle (one of the layers of the earth’s crust), or less commonly, within the base of the earth's crust. High temperatures and pressures are needed to form magma. The solid mantle or crustal rock must be melted under conditions typically reached at depths of 80 to 100 km (50 to 60 mi) below the earth’s surface. 
Once tiny droplets of magma are formed, they begin to rise because the magma is less dense than the solid rock surrounding it. The processes that cause the magma to rise are poorly understood, but it generally moves upward toward lower pressure regions, squeezing into spaces between minerals within the solid rock. As the individual magma droplets rise, they join to form ever-larger blobs and move toward the surface. The larger the rising blob of magma, the easier it moves. Rising magma does not reach the surface in a steady manner but tends to accumulate in one or more underground storage regions, called magma reservoirs, before it erupts onto the surface. With each eruption, whether explosive or nonexplosive, the material erupted adds another layer to the growing volcano. After many eruptions, the volcanic materials pile up around the vent or vents. These piles form a topographic feature, such as a hill, mountain, plateau, or crater, that we recognize as a volcano. Most of the earth's volcanoes are formed beneath the oceans, and their locations have been documented in recent decades by mapping of the ocean floor.
Volcanic Materials-
1-Lava- Lava is magma that breaks the surface and erupts from a volcano. If the magma is very fluid, it flows rapidly down the volcano's slopes. Lava that is more sticky and less fluid moves slower. Lava flows that have a continuous, smooth, ropy, or billowy surface are called pahoehoe (pronounced pah HOH ee hoh ee) flows, while aa (pronounced ah ah) flows have a jagged surface composed of loose, irregularly shaped lava chunks. Once cooled, pahoehoe forms smooth rocks, while aa forms jagged rocks. The words pahoehoe and aa are Hawaiian terms that describe the texture of the lava. Lava may also be described in terms of its composition and the type of rock it forms. Basalt, andesite, dacite, and rhyolite are all different kinds of rock that form from lava. Each type of rock, and the lava from which it forms, contains a different amount of the compound silicon dioxide. Basaltic lava has the least amount of silicon dioxide, andesitic and dacitic lava have medium levels of silicon dioxide, while rhyolitic lava has the most.
2-Tephra- Tephra, or pyroclastic material, is made of rock fragments formed by explosive shattering of sticky magma (see Pyroclastic Flow). The term pyroclastic is of Greek origin and means 'fire-broken' (pyro, "fire"; klastos, "broken"). Tephra refers to any airborne pyroclastic material regardless of size or shape. The best-known tephra materials include pumice, cinders, and volcanic ash. These fragments are exploded when gases build up inside a volcano and produce an explosion. The pieces of magma are shot into the air during the explosion. Ash refers to fragments smaller than 2 mm (0.08 in) in diameter. The finest ash is called volcanic dust and is made up of particles that are less than 0.06 mm (0.002 in) in diameter. Volcanic blocks, or bombs, are the largest fragments of tephra, more than 64 mm (2.5 in) in diameter (baseball size or larger). Some bombs can be the size of a small car.
3-Gases- Gases, primarily in the form of steam, are released from volcanoes during eruptions. All eruptions, explosive or nonexplosive, are accompanied by the release of volcanic gas. The sudden escape of high-pressure volcanic gas from magma is the driving force for eruptions. Gases come from the magma itself or from the hot magma coming into contact with water in the ground.
Volcanic plumes can appear dark during an eruption because the gases are mixed with dark-colored materials such as tephra. Most volcanic gases consist predominantly of water vapor (steam), with carbon dioxide (CO2) and sulfur dioxide (SO2) being the next two most common compounds, along with smaller amounts of chlorine and fluorine gases.

Types of Volcano - 1-Cinder Cones and Composite Volcanoes - Cinder cones and composite volcanoes have the familiar conelike shape that people most often associate with volcanoes. Some of these form beautifully symmetrical volcanic hills or mountains such as Paricutin Volcano in Mexico and Mount Fuji in Japan. Although both cinder cones and composite volcanoes are mostly the results of explosive eruptions, cinder cones consist exclusively of fragmental lava. This fragmental lava is erupted explosively and is made up of cinders.

2-Shield Volcanoes - Shield volcanoes (also called volcanic shields) get their name from their distinctive, gently sloping mound-like shapes that resemble the fighting shields that ancient warriors carried into battle. Their shapes reflect the fact that they are constructed mainly of countless fluid basaltic lava flows that erupted nonexplosively. Such flows can easily spread great distances from the feeding volcanic vents, similar to the spreading out of hot syrup poured onto a plate. Volcanic shields may be either small or large, and the largest shield volcanoes are many times larger than the largest composite volcanoes.

3-Caldera - A caldera is a round or oval-shaped low-lying area that forms when the ground collapses because of explosive eruptions. An explosive eruption can blow the top off of the mountain or eject all of the magma that is inside the volcano. Either of these actions may cause the volcano to collapse. Calderas can be bigger in diameter than the largest shield volcanoes. Such volcanic features, if geologically young, are often outlined by an irregular, steep-walled boundary (a caldera rim), which reflects the original ringlike zone, or fault, along which the ground collapse occurred. Some calderas have hills and mountains rising within them, called resurgent domes, that reflect volcanic activity after the initial collapse.

4-Volcanic Plateaus - Some of the largest volcanic features on earth do not actually look like volcanoes. Instead, they form extensive, nearly flat-topped accumulations of erupted materials. These materials form volcanic plateaus or plains covering many thousands of square kilometers. The volcanic materials can be either very fluid basaltic lava flows or far-traveled pyroclastic flows. The basaltic lava flows are called flood or plateau basalts and are erupted from many fissure vents.

Volcano Hazards - Eruptions pose direct and indirect hazards to people and property, both on the ground and in the air. Direct hazards are pyroclastic flows, lava flows, falling ash, and debris flows. Pyroclastic flows are mixtures of hot ash, rock fragments, and gas. They are especially deadly because of their high temperatures of 850°C (1600°F) or higher and fast speeds of 250 km/h (160 mph) or greater. Lava flows, which move much more slowly than pyroclastic flows, are rarely life threatening but can produce massive property damage and economic loss. Heavy accumulations of volcanic ash, especially if they become wet from rainfall, can collapse roofs and damage crops. Debris flows called lahars are composed of wet, concretelike mixtures of volcanic debris and water from melted snow or ice or heavy rainfall.
Lahars can travel quickly through valleys, destroying everything in their paths. Pyroclastic and volcanic debris flows have caused the most eruption-related deaths in the 20th century.
Number Sequence Problems are word problems that involve a number sequence. Sometimes you may be asked to obtain the value of a particular term of the sequence, or you may be asked to determine the pattern of a sequence. The question will describe how the sequence of numbers is generated. After a certain number of terms, the sequence will repeat through the same numbers again. Try to follow the description and write down the sequence of numbers until you can determine how many terms appear before the numbers repeat. That information can then be used to determine what a particular term would be. Suppose we have a sequence of numbers x, y, z, x, y, z, ... that repeats after the third term. If we want to find the fifth term, we take the remainder of 5 ÷ 3, which is 2. The fifth term is then the same as the second term, which is y.

Example: The first term in a sequence of numbers is 2. Each even-numbered term is 3 more than the previous term and each odd-numbered term, excluding the first, is –1 times the previous term. What is the 45th term of the sequence?
Step 1: Write down the terms until you notice a repetition: 2, 5, -5, -2, 2, 5, -5, -2, ... The sequence repeats after the fourth term.
Step 2: To find the 45th term, get the remainder of 45 ÷ 4, which is 1.
Step 3: The 45th term is the same as the 1st term, which is 2.
Answer: The 45th term is 2.

Example: 6, 13, 27, 55, ... In the sequence above, each term after the first is determined by multiplying the preceding term by m and then adding n. What is the value of n? The fastest way to solve this is to notice the pattern: 6 × 2 + 1 = 13 and 13 × 2 + 1 = 27, so the value of n is 1. If you were not able to see the pattern, you can come up with two equations and then solve for n:
6m + n = 13 (equation 1)
13m + n = 27 (equation 2)
Use the substitution method. Isolate n in equation 1: n = 13 – 6m. Substitute into equation 2: 13m + 13 – 6m = 27, so 7m = 14 and m = 2. Substitute m = 2 into equation 1: 6(2) + n = 13, so n = 1.
Answer: n = 1

Solving Number Sequences: How to solve number sequences by looking for patterns, then using addition, subtraction, multiplication, or division to complete the sequence.
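The repetition argument is ordinary modular arithmetic, so it is easy to check by machine. Below is a minimal Python sketch (the function names are illustrative, not from the source) that generates the cycle for the first worked example and then reads any term off the remainder:

def build_terms(count):
    # Rule from the worked example: first term 2; each even-numbered term is
    # 3 more than the previous term; each odd-numbered term after the first
    # is -1 times the previous term.
    terms = [2]
    for i in range(2, count + 1):
        prev = terms[-1]
        terms.append(prev + 3 if i % 2 == 0 else -prev)
    return terms

def nth_from_cycle(n, cycle):
    # Once the repeating cycle is known, the nth term follows from the
    # remainder of n divided by the cycle length (a remainder of 0 means
    # the last element of the cycle).
    r = n % len(cycle)
    return cycle[-1] if r == 0 else cycle[r - 1]

cycle = build_terms(4)            # [2, 5, -5, -2]
print(nth_from_cycle(45, cycle))  # prints 2, matching the worked answer

The same helper answers the x, y, z example as well: nth_from_cycle(5, ['x', 'y', 'z']) returns 'y'.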
Chapter 1. Introduction to SQL In this introductory chapter, we explore the origin and utility of the SQL language, demonstrate some of the more useful features of the language, and define a simple database design from which most examples in the book are derived.

What Is SQL? SQL is a special-purpose language used to define, access, and manipulate data. SQL is nonprocedural, meaning that it describes the necessary components (i.e., tables) and desired results without dictating exactly how those results should be computed. Every SQL implementation sits atop a database engine, whose job it is to interpret SQL statements and determine how the various data structures in the database should be accessed to accurately and efficiently produce the desired outcome. The SQL language includes two distinct sets of commands: Data Definition Language (DDL) is the subset of SQL used to define and modify various data structures, while Data Manipulation Language (DML) is the subset of SQL used to access and manipulate data contained within the data structures previously defined via DDL. DDL includes numerous commands for handling such tasks as creating tables, indexes, views, and constraints, while DML comprises just five statements: SELECT, INSERT, UPDATE, DELETE, and MERGE. Some people feel that DDL is the sole property of database administrators, while database developers are responsible for writing DML statements, but the two are not so easily separated. It is difficult to efficiently access and manipulate data without an understanding of what data structures are available and how they are related; likewise, it is difficult to design appropriate data structures without knowledge of how the data will be accessed. That being said, this book deals almost exclusively with DML, except where DDL is presented to set the stage for one or more DML examples. The reasons for focusing on just the DML portion of SQL include the following: DDL is well represented in various books on database design and administration as well as in SQL reference guides; most database performance issues are the result of inefficient DML statements; and even with a paltry five statements, DML is a rich enough topic to warrant not just one book, but a whole series of books. Anyone who writes SQL in an Oracle environment should be armed with the following three books: a reference guide to the SQL language, such as Oracle in a Nutshell (O'Reilly); a performance-tuning guide, such as Optimizing Oracle Performance (O'Reilly); and the book you are holding, which shows how to best utilize and combine the various features of Oracle's SQL implementation.

So why should you care about SQL? In this age of Internet computing and n-tier architectures, does anyone even care about data access anymore? Actually, efficient storage and retrieval of information is more important than ever. Many companies now offer services via the Internet. During peak hours, these services may need to handle thousands of concurrent requests, and unacceptable response times equate to lost revenue. For such systems, every SQL statement must be carefully crafted to ensure acceptable performance as data volumes increase. We can also store a lot more data today than we could just a few years ago. A single disk array can hold tens of terabytes of data, and the ability to store hundreds of terabytes is just around the corner. Software used to load or analyze data in these environments must harness the full power of SQL to process ever-increasing data volumes within constant (or shrinking) time windows.
Hopefully, you now have an appreciation for what SQL is and why it is important. The next section will explore the origins of the SQL language and the support for the SQL standard in Oracle’s products. A Brief History of SQL In the early 1970s, an IBM research fellow named Dr. E. F. Codd endeavored to apply the rigors of mathematics to the then-untamed world of data storage and retrieval. Codd’s work led to the definition of the relational data model and a language called DSL/Alpha for manipulating data in a relational database. IBM liked what they saw, so they commissioned a project called System/R to build a prototype based on Codd’s work. Among other things, the System/R team developed a simplified version of DSL called SQUARE, which was later renamed SEQUEL, and finally renamed SQL. The work done on System/R eventually led to the release of various IBM products based on the relational model. Other companies, such as Oracle, rallied around the relational flag as well. By the mid 1980s, SQL had gathered sufficient momentum in the marketplace to warrant oversight by the American National Standards Institute (ANSI). ANSI released its first SQL standard in 1986, followed by updates in 1989, 1992, 1999, and 2003. There will undoubtedly be further refinements in the future. Thirty years after the System/R team began prototyping a relational database, SQL is still going strong. While there have been numerous attempts to dethrone relational databases in the marketplace, well-designed relational databases coupled with well-written SQL statements continue to succeed in handling large, complex data sets where other methods fail. Oracle’s SQL Implementation Given that Oracle was an early adopter of the relational model and SQL, one might think that they would have put a great deal of effort into conforming with the various ANSI standards. For many years, however, the folks at Oracle seemed content that their implementation of SQL was functionally equivalent to the ANSI standards without being overly concerned with true compliance. Beginning with the release of Oracle8i, however, Oracle has stepped up its efforts to conform to ANSI standards and has tackled such features as the CASE statement and the left/right/full outer join syntax. Ironically, the business community seems to be moving in the opposite direction. A few years ago, people were much more concerned with portability and would limit their developers to ANSI-compliant SQL so that they could implement their systems on various database engines. Today, companies tend to pick a database engine to use across the enterprise and allow their developers to use the full range of available options without concern for ANSI-compliance. One reason for this change in attitude is the advent of n-tier architectures, where all database access can be contained within a single tier instead of being scattered throughout an application. Another possible reason might be the emergence of clear leaders in the DBMS market over the last decade, such that managers perceive less risk in which database engine they choose. Theoretical Versus Practical Terminology If you were to peruse the various writings on the relational model, you would come across terminology that you will not find used in this book (such as relations and tuples). Instead, we use practical terms such as tables and rows, and we refer to the various parts of a SQL statement by name rather than by function (i.e., “SELECT clause” instead of projection). With all due respect to Dr. 
Codd, you will never hear the word tuple used in a business setting, and, since this book is targeted toward people who use Oracle products to solve business problems, you won't find it here either. A Simple Database Because this is a practical book, it contains numerous examples. Rather than fabricating different sets of tables and columns for every chapter or section in the book, we have decided to draw from a single, simple schema for most examples. The subject area that we chose to model is a parts distributor, such as an auto-parts wholesaler or medical device distributor, in which the business fills customer orders for one or more parts that are supplied by external suppliers. Figure 1-1 shows the entity-relationship model for this business. If you are unfamiliar with models, here is a brief description of how they work. Each box in the model represents an entity, which correlates to a database table. The lines between the entities represent the relationships, which correlate to foreign keys. For example, the cust_order table holds a foreign key to the employee table, which signifies the salesperson responsible for a particular order. Physically, this means that the cust_order table contains a column holding employee ID numbers, and that, for any given order, the employee ID number indicates the employee who sold that order. If you find this confusing, simply use the diagram as an illustration of the tables and columns found within our database. As you work your way through the SQL examples in this book, return occasionally to the diagram, and you should find that the relationships start making sense.

In this section, we will introduce the five statements that comprise the DML portion of SQL. The information presented in this section should be enough to allow you to start writing DML statements. As is discussed at the end of the section, however, DML can look deceptively simple, so keep in mind while reading the section that there are many more facets to DML than are discussed here. The SELECT Statement The SELECT statement is used to retrieve data from a database. The set of data retrieved via a SELECT statement is referred to as a result set. Like a table, a result set is comprised of rows and columns, making it possible to populate a table using the result set of a SELECT statement. The SELECT statement can be summarized as follows: SELECT <one or more things> FROM <one or more places> WHERE <zero, one, or more conditions apply> While the SELECT and FROM clauses are required, the WHERE clause is optional (although you will seldom see it omitted). We will therefore begin with a simple example that retrieves three columns from every row of the customer table:
SELECT cust_nbr, name, region_id FROM customer;
CUST_NBR NAME REGION_ID ---------- ------------------------------ ---------- 1 Cooper Industries 5 2 Emblazon Corp. 5 3 Ditech Corp. 5 4 Flowtech Inc. 5 5 Gentech Industries 5 6 Spartan Industries 6 7 Wallace Labs 6 8 Zantech Inc. 6 9 Cardinal Technologies 6 10 Flowrite Corp. 6 11 Glaven Technologies 7 12 Johnson Labs 7 13 Kimball Corp. 7 14 Madden Industries 7 15 Turntech Inc. 7 16 Paulson Labs 8 17 Evans Supply Corp. 8 18 Spalding Medical Inc. 8 19 Kendall-Taylor Corp. 8 20 Malden Labs 8 21 Crimson Medical Inc. 9 22 Nichols Industries 9 23 Owens-Baxter Corp. 9 24 Jackson Medical Inc. 9 25 Worcester Technologies 9 26 Alpha Technologies 10 27 Phillips Labs 10 28 Jaztech Corp. 10 29 Madden-Taylor Inc.
10 30 Wallace Industries 10 Since we neglected to impose any conditions via a WHERE clause, the query returns every row from the customer table. If you want to restrict the set of data returned by the query, you can include a WHERE clause with a single condition:
SELECT cust_nbr, name, region_id FROM customer WHERE region_id = 8;
CUST_NBR NAME REGION_ID ---------- ------------------------------ ---------- 16 Paulson Labs 8 17 Evans Supply Corp. 8 18 Spalding Medical Inc. 8 19 Kendall-Taylor Corp. 8 20 Malden Labs 8
The result set now includes only those customers residing in the region with a region_id of 8. But what if you want to specify a region by name instead of region_id? You could query the region table for a particular name and then query the customer table using the region_id. Instead of issuing two different queries, however, you can produce the same outcome using a single query by introducing a join, as in:
SELECT customer.cust_nbr, customer.name, region.name FROM customer INNER JOIN region ON region.region_id = customer.region_id WHERE region.name = 'New England';
CUST_NBR NAME NAME ---------- ------------------------------ ----------- 1 Cooper Industries New England 2 Emblazon Corp. New England 3 Ditech Corp. New England 4 Flowtech Inc. New England 5 Gentech Industries New England
The FROM clause now contains two tables instead of one and includes a join condition that specifies that the customer and region tables are to be joined using the region_id column found in both tables. Joins and join conditions will be explored in detail in a later chapter. Since both the customer and region tables contain a column called name, you must specify which name column you are interested in. This is done in the previous example by using dot-notation to append the table name in front of each column name. If you would rather not type full table names, you can assign an alias to each table in the FROM clause and use those aliases instead of the table names in the SELECT and WHERE clauses, as in:
SELECT c.cust_nbr, c.name, r.name FROM customer c INNER JOIN region r ON r.region_id = c.region_id WHERE r.name = 'New England';

SELECT clause elements In the examples thus far, the result sets generated by our queries have contained columns from one or more tables. While most elements in your SELECT clauses will typically be simple column references, a SELECT clause may also include:
- Literal values, such as numbers (27) or strings ('abc')
- Expressions, such as shape.diameter * 3.1415927
- Function calls, such as TO_DATE('01-JAN-2004','DD-MON-YYYY')
- Pseudocolumns, such as ROWID, ROWNUM, or LEVEL
While the first three items in this list are fairly straightforward, the last item merits further discussion. Oracle makes available several phantom columns, known as pseudocolumns, that do not exist in any tables. Rather, they are values visible during query execution that can be helpful in certain situations. For example, the pseudocolumn ROWID represents the physical location of a row. Accessing a row via its ROWID is the fastest possible access mechanism, which can be useful if you plan to delete or update a row retrieved via a query. However, you should never store ROWID values in the database, nor should you reference them outside of the transaction in which they are retrieved, since a row's ROWID can change in certain situations, and ROWIDs can be reused after a row has been deleted.
The next example demonstrates each of the different element types from the previous list:
SELECT ROWNUM, cust_nbr, 1 multiplier, 'cust # ' || cust_nbr cust_nbr_str, 'hello' greeting, TO_CHAR(last_order_dt, 'DD-MON-YYYY') last_order FROM customer;
ROWNUM CUST_NBR MULTIPLIER CUST_NBR_STR GREETING LAST_ORDER ------ -------- ---------- ------------ -------- ----------- 1 1 1 cust # 1 hello 15-JUN-2000 2 2 1 cust # 2 hello 27-JUN-2000 3 3 1 cust # 3 hello 07-JUL-2000 4 4 1 cust # 4 hello 15-JUL-2000 5 5 1 cust # 5 hello 01-JUN-2000 6 6 1 cust # 6 hello 10-JUN-2000 7 7 1 cust # 7 hello 17-JUN-2000 8 8 1 cust # 8 hello 22-JUN-2000 9 9 1 cust # 9 hello 25-JUN-2000 10 10 1 cust # 10 hello 01-JUN-2000 11 11 1 cust # 11 hello 05-JUN-2000 12 12 1 cust # 12 hello 07-JUN-2000 13 13 1 cust # 13 hello 07-JUN-2000 14 14 1 cust # 14 hello 05-JUN-2000 15 15 1 cust # 15 hello 01-JUN-2000 16 16 1 cust # 16 hello 31-MAY-2000 17 17 1 cust # 17 hello 28-MAY-2000 18 18 1 cust # 18 hello 23-MAY-2000 19 19 1 cust # 19 hello 16-MAY-2000 20 20 1 cust # 20 hello 01-JUN-2000 21 21 1 cust # 21 hello 26-MAY-2000 22 22 1 cust # 22 hello 18-MAY-2000 23 23 1 cust # 23 hello 08-MAY-2000 24 24 1 cust # 24 hello 26-APR-2000 25 25 1 cust # 25 hello 01-JUN-2000 26 26 1 cust # 26 hello 21-MAY-2000 27 27 1 cust # 27 hello 08-MAY-2000 28 28 1 cust # 28 hello 23-APR-2000 29 29 1 cust # 29 hello 06-APR-2000 30 30 1 cust # 30 hello 01-JUN-2000
Note that the third through sixth columns have been given column aliases, which are names that you assign to a column. If you are going to refer to the columns in your query by name instead of by position, you will want to assign each column a name that makes sense to you. Interestingly, a SELECT clause is not required to reference columns from any of the tables in the FROM clause. For example, the next query's result set is composed entirely of literals:
SELECT 1 num, 'abc' str FROM customer;
NUM STR ---------- --- 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc 1 abc

Ordering your results In general, there is no guarantee that the result set generated by your query will be in any particular order. If you want your results to be sorted by one or more columns, you can add an ORDER BY clause after the WHERE clause. The following example sorts the results from the New England query by customer name:
SELECT c.cust_nbr, c.name, r.name FROM customer c INNER JOIN region r ON r.region_id = c.region_id WHERE r.name = 'New England' ORDER BY c.name;
CUST_NBR NAME NAME -------- ------------------------------ ----------- 1 Cooper Industries New England 3 Ditech Corp. New England 2 Emblazon Corp. New England 4 Flowtech Inc. New England 5 Gentech Industries New England
You may also designate the sort column(s) by their position in the SELECT clause. To sort the previous query by customer number, which is the first column in the SELECT clause, you could issue the following statement:
SELECT c.cust_nbr, c.name, r.name FROM customer c INNER JOIN region r ON r.region_id = c.region_id WHERE r.name = 'New England' ORDER BY 1;
CUST_NBR NAME NAME ---------- ------------------------------ ----------- 1 Cooper Industries New England 2 Emblazon Corp. New England 3 Ditech Corp. New England 4 Flowtech Inc. New England 5 Gentech Industries New England
Specifying sort keys by position will certainly save you some typing, but it can often lead to errors if you later change the order of the columns in your SELECT clause.
In some cases, your result set may contain duplicate data. For example, if you are compiling a list of parts that were included in last month's orders, the same part number would appear multiple times if more than one order included that part. If you want duplicates removed from your result set, you can include the DISTINCT keyword in your SELECT clause, as in:
SELECT DISTINCT li.part_nbr FROM cust_order co INNER JOIN line_item li ON co.order_nbr = li.order_nbr WHERE co.order_dt >= TO_DATE('01-JUL-2001','DD-MON-YYYY') AND co.order_dt < TO_DATE('01-AUG-2001','DD-MON-YYYY');
This query returns the distinct set of parts ordered during July 2001. Without the DISTINCT keyword, the result set would contain one row for every line-item of every order, and the same part would appear multiple times if it was included in multiple orders. When deciding whether to include DISTINCT in your SELECT clause, keep in mind that finding and removing duplicates necessitates a sort operation, which can greatly increase the execution time of your query.

The INSERT Statement The INSERT statement is the mechanism for loading data into your database. This section will introduce the traditional single-table INSERT statement, as well as the new multitable INSERT ALL statement introduced in Oracle9i. With the traditional INSERT statement, data can be inserted into only one table at a time, although the data being loaded into the table can be pulled from one or more additional tables. When inserting data into a table, you do not need to provide values for every column in the table; however, you need to be aware of the columns that require non-NULL values and the ones that do not. Here's the definition of the employee table, as reported by describe employee:
Name Null? Type ----------------------------------------- -------- ------------ EMP_ID NOT NULL NUMBER(5) FNAME VARCHAR2(20) LNAME VARCHAR2(20) DEPT_ID NOT NULL NUMBER(5) MANAGER_EMP_ID NUMBER(5) SALARY NUMBER(5) HIRE_DATE DATE JOB_ID NUMBER(3)
The NOT NULL designation for the emp_id and dept_id columns indicates that values are required for these two columns. Therefore, you must be sure to provide values for at least these two columns in your INSERT statements, as demonstrated by the following: INSERT INTO employee (emp_id, dept_id) VALUES (101, 20); Any inserts into employee may optionally include any or all of the remaining six columns, which are described as nullable since they may be left undefined. Thus, you could decide to add the employee's last name to the previous statement: INSERT INTO employee (emp_id, lname, dept_id) VALUES (101, 'Smith', 20); The VALUES clause must contain the same number of elements as the column list, and the data types must match the column definitions. In this example, emp_id and dept_id hold numeric values while lname holds character data, so the INSERT statement will execute without error. Oracle always tries to convert data from one type to another automatically, however, so the following statement will also run without error: INSERT INTO employee (emp_id, lname, dept_id) VALUES ('101', 'Smith', '20'); Sometimes, the data to be inserted needs to be retrieved from one or more tables.
Since the SELECT statement generates a result set consisting of rows and columns of data, you can feed the result set from a SELECT statement directly into an INSERT statement, as in: INSERT INTO employee (emp_id, fname, lname, dept_id, hire_date) SELECT 101, 'Dave', 'Smith', d.dept_id, SYSDATE FROM department d WHERE d.name = 'ACCOUNTING'; In this example, the purpose of the SELECT statement is to retrieve the department ID for the Accounting department. The other four columns in the SELECT clause are either literals (101, 'Dave', and 'Smith') or function calls (SYSDATE). While inserting data into a single table is the norm, there are situations where data from a single source must be inserted either into multiple tables or into the same table multiple times. Such tasks would normally be handled programmatically using PL/SQL, but Oracle9i introduced the concept of a multitable insert to allow complex data insertion via a single INSERT statement. For example, let's say that one of Mary Turner's customers wants to set up a recurring order on the last day of each month for the next six months. The following statement adds six rows to the cust_order table using a SELECT statement that returns exactly one row: INSERT ALL INTO cust_order (order_nbr, cust_nbr, sales_emp_id, order_dt, expected_ship_dt, status) VALUES (ord_nbr, cust_nbr, emp_id, ord_dt, ord_dt + 7, status) INTO cust_order (order_nbr, cust_nbr, sales_emp_id, order_dt, expected_ship_dt, status) VALUES (ord_nbr + 1, cust_nbr, emp_id, add_months(ord_dt, 1), add_months(ord_dt, 1) + 7, status) INTO cust_order (order_nbr, cust_nbr, sales_emp_id, order_dt, expected_ship_dt, status) VALUES (ord_nbr + 2, cust_nbr, emp_id, add_months(ord_dt, 2), add_months(ord_dt, 2) + 7, status) INTO cust_order (order_nbr, cust_nbr, sales_emp_id, order_dt, expected_ship_dt, status) VALUES (ord_nbr + 3, cust_nbr, emp_id, add_months(ord_dt, 3), add_months(ord_dt, 3) + 7, status) INTO cust_order (order_nbr, cust_nbr, sales_emp_id, order_dt, expected_ship_dt, status) VALUES (ord_nbr + 4, cust_nbr, emp_id, add_months(ord_dt, 4), add_months(ord_dt, 4) + 7, status) INTO cust_order (order_nbr, cust_nbr, sales_emp_id, order_dt, expected_ship_dt, status) VALUES (ord_nbr + 5, cust_nbr, emp_id, add_months(ord_dt, 5), add_months(ord_dt, 5) + 7, status) SELECT 99990 ord_nbr, c.cust_nbr cust_nbr, e.emp_id emp_id, last_day(SYSDATE) ord_dt, 'PENDING' status FROM customer c CROSS JOIN employee e WHERE e.fname = 'MARY' and e.lname = 'TURNER' and c.name = 'Gentech Industries'; The SELECT statement returns the data necessary for this month's order, and the INSERT statement adjusts the order number and dates for the next five months' orders. You are not obligated to insert all rows into the same table, nor must your SELECT statement return only one row, making the multitable insert statement quite flexible and powerful. The next example shows how data about a new salesperson can be entered into both the employee and salesperson tables: INSERT ALL INTO employee (emp_id, fname, lname, dept_id, hire_date) VALUES (eid, fnm, lnm, did, TRUNC(SYSDATE)) INTO salesperson (salesperson_id, name, primary_region_id) VALUES (eid, fnm || ' ' || lnm, rid) SELECT 1001 eid, 'JAMES' fnm, 'GOULD' lnm, d.dept_id did, r.region_id rid FROM department d, region r WHERE d.name = 'SALES' and r.name = 'Southeast US'; So far, you have seen how multiple rows can be inserted into the same table and how the same rows can be inserted into multiple tables.
The next, and final, example of multitable inserts demonstrates how a conditional clause can be used to direct each row of data generated by the SELECT statement into zero, one, or many tables: INSERT FIRST WHEN order_dt < TO_DATE('2001-01-01', 'YYYY-MM-DD') THEN INTO cust_order_2000 (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt) VALUES (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt) WHEN order_dt < TO_DATE('2002-01-01', 'YYYY-MM-DD') THEN INTO cust_order_2001 (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt) VALUES (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt) WHEN order_dt < TO_DATE('2003-01-01', 'YYYY-MM-DD') THEN INTO cust_order_2002 (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt) VALUES (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt) SELECT co.order_nbr, co.cust_nbr, co.sales_emp_id, co.sale_price, co.order_dt FROM cust_order co WHERE co.cancelled_dt IS NULL AND co.ship_dt IS NOT NULL; This statement copies all customer orders prior to January 1, 2003, to one of three tables depending on the value of the order_dt column. The keyword FIRST specifies that once one of the conditions evaluates to TRUE, the statement should skip the remaining conditions and move on to the next row. If you specify ALL instead of FIRST, all conditions will be evaluated, and each row might be inserted into multiple tables if more than one condition evaluates to TRUE.

The DELETE Statement The DELETE statement facilitates the removal of data from the database. Like the SELECT statement, the DELETE statement contains a WHERE clause that specifies the conditions used to identify rows to be deleted. If you neglect to add a WHERE clause to your DELETE statement, all rows will be deleted from the target table. The following statement will delete all employees with the last name of Hooper from the employee table: DELETE FROM employee WHERE lname = 'HOOPER'; In some cases, the values needed for one or more of the conditions in your WHERE clause exist in another table. For example, your company may decide to outsource its accounting functions, thereby necessitating the removal of all accounting personnel from the employee table: DELETE FROM employee WHERE dept_id = (SELECT dept_id FROM department WHERE name = 'ACCOUNTING'); The use of the SELECT statement in this example is known as a subquery and will be studied in detail in Chapter 5. In certain cases, you may want to restrict the number of rows that are to be deleted from a table. For example, you may want to remove all data from a table, but you want to limit your transactions to no more than 100,000 rows. If the cust_order table contained 527,365 records, you would need to find a way to restrict your DELETE statement to 100,000 rows and then run the statement six times until all the data has been purged. The following example demonstrates how the ROWNUM pseudocolumn may be used in a DELETE statement to achieve the desired effect: DELETE FROM cust_order WHERE ROWNUM <= 100000; COMMIT;

The UPDATE Statement Modifications to existing data are handled by the UPDATE statement. Like the DELETE statement, the UPDATE statement includes a WHERE clause to specify which rows should be targeted. The following example shows how you might give a 10% raise to everyone making less than $40,000: UPDATE employee SET salary = salary * 1.1 WHERE salary < 40000; If you want to modify more than one column in the table, you have two choices: provide a set of column/value pairs separated by commas, or provide a set of columns and a subquery.
The following two UPDATE statements modify the inactive_dt and inactive_ind columns in the customer table for any customer who hasn't placed an order in the past year: UPDATE customer SET inactive_dt = SYSDATE, inactive_ind = 'Y' WHERE last_order_dt < SYSDATE - 365; UPDATE customer SET (inactive_dt, inactive_ind) = (SELECT SYSDATE, 'Y' FROM dual) WHERE last_order_dt < SYSDATE - 365; The subquery in the second example is a bit forced, since it uses a query against the dual table to build a result set containing two literals, but it should give you an idea of how you would use a subquery in an UPDATE statement. In later chapters, you will see far more interesting uses for subqueries.

The MERGE Statement There are certain situations, particularly within data warehouse applications, where you may want to either insert a new row into a table or update an existing row depending on whether or not the data already exists in the table. For example, you may receive a nightly feed of parts data that contains both parts that are known to the system along with parts just introduced by your suppliers. If a part number exists in the table, you will need to update the unit_cost and status columns; otherwise, you will need to insert a new row. While you could write code that reads each record from the feed, determines whether or not the part number exists in the part table, and issues either an INSERT or UPDATE statement, you could instead issue a single MERGE statement. Assuming that your data feed has been loaded into the staging table, your MERGE statement would look something like the following: MERGE INTO part p_dest USING part_stg p_src ON (p_dest.part_nbr = p_src.part_nbr) WHEN MATCHED THEN UPDATE SET p_dest.unit_cost = p_src.unit_cost, p_dest.status = p_src.status WHEN NOT MATCHED THEN INSERT (p_dest.part_nbr, p_dest.name, p_dest.supplier_id, p_dest.status, p_dest.inventory_qty, p_dest.unit_cost, p_dest.resupply_date) VALUES (p_src.part_nbr, p_src.name, p_src.supplier_id, p_src.status, 0, p_src.unit_cost, null); This statement looks fairly complex, so here is a description of what it is doing:
- Lines 1-3: For each row in the part_stg table, see if the part_nbr column exists in the part table.
- Lines 4-5: If it does, then update the matching row in the part table using data from the part_stg table.
- Lines 6-10: Otherwise, insert a new row into the part table using the data from the part_stg table.

So Why Are There 17 More Chapters? After reading this chapter, you might think that SQL looks pretty simple (at least the DML portion). At a high level, it is fairly simple, and you now know enough about the language to go write some code. However, you will learn over time that there are numerous ways to arrive at the same end point, and some are more efficient and elegant than others. The true test of SQL mastery is when you no longer have the desire to return to what you were working on the previous year, rip out all the SQL, and recode it. For one of us, it took about nine years to reach that point. Hopefully, this book will help you reach that point in far less time. While you are reading the rest of the book, you might notice that the majority of examples use SELECT statements, with the remainder somewhat evenly distributed across INSERT, UPDATE, and DELETE statements.
This disparity is not indicative of the relative importance of SELECT statements over the other three DML statements; rather, SELECT statements are favored because we can show a query’s result set, which should help you to better understand the query, and because many of the points being made using SELECT statements can be applied to UPDATE and DELETE statements as well. Depending on the purpose of the model, entities may or may not correlate to database tables. For example, a logical model depicts business entities and their relationships, whereas a physical model illustrates tables and their primary/foreign keys. The model in Figure 1-1 is a physical model. MERGE was introduced in Oracle9i.
Have you ever seen a video of a landslide? Landslides are powerful geologic events that happen suddenly and cause devastation in areas with unstable hills, slopes and cliff sides. Each year in the U.S. landslides can cause great damage to buildings and property, in addition to changing the surrounding habitats. In this science activity you will model landslides using a clipboard and pennies, and investigate how friction and the angle of a hill's slope affect potential landslides. A landslide is any geologic process in which gravity causes rock, soil, artificial fill or a combination of the three to move down a slope. Several things can trigger landslides, including the slow weathering of rocks as well as soil erosion, earthquakes and volcanic activity. One major force all landslides have in common is that they are propelled by gravity. We normally think of gravity pulling an object vertically down, such as when you drop a ball straight down. But on a slope gravity gets slightly more complicated. Any force (such as gravity) has magnitude and direction. On a slope gravitational effects can be separated into a component that's parallel to the slope (which pulls the object down the slope) and a component perpendicular to it (which presses the object against the slope's surface). As the angle of the slope increases (making it steeper), gravity's parallel component increases and the perpendicular component decreases, making it easier for the pull down the slope to overcome the resistance to downward movement. This resistance is called friction and depends on the perpendicular component of gravity, along with the slope's and object's surfaces. When the parallel component becomes greater than the maximum friction force the surfaces can supply, the object slides down the slope. In other words, the critical maximum slope from horizontal (called the angle of repose, the greatest angle at which the object remains at rest) has been surpassed. (A short calculation sketch of this force balance appears after the activity below.)

Take a piece of tape a little longer than the length of four pennies lined up next to one another (about three and a half inches long) and set two pennies on the tape so the pennies are touching, side by side. Set one penny on each of the two pennies on the tape so that you have two stacks of pennies with two pennies in each stack. Then wrap the tape lengthwise completely around the pennies so that they are held in place, still stacked and side by side. The tape should slightly overlap on the top side. Repeat this with the four other pennies so that you have made two taped stacks of pennies like this. Cut out a strip of paper towel that is slightly longer than the length of one of the stacks of pennies, and the same width as the pennies. (In other words, the paper towel strip should be about two to two and a half inches long and almost one inch wide.) Take one of the taped penny stacks and make sure the rough, exposed tape edges are on the top (and the smooth side is on the bottom). Then, using two small pieces of tape, tape the paper towel strip lengthwise on to the stack of pennies so that both edges of the strip curve around to the top side and are taped there. Do not put any tape on the bottom side, which should be completely covered only by the paper towel strip. Set the clipboard on a flat surface. Clip a paper towel sheet onto the clipboard. (If you cut a strip out of the sheet for the penny wrapping, put that space at the bottom of the clipboard.) Place both penny stacks you made on the clipboard so that they're both touching the clip at the top.
They should be touching the clip lengthwise but not touching one another. Make sure both stacks are placed so that their rough tape edges are facing up, and the paper towel strip or smooth taped side of the stacks is down, touching the clipboard. How does the bottom of each stack feel compared with one another? Is one much smoother than the other? Holding on to the clipboard clip, slowly and steadily lift that end of the clipboard. (Make sure the opposite side stays down, touching the flat surface.) Which stack of pennies slides down the clipboard first as you increase its angle? Stop tilting the clipboard as soon as one of the stacks of pennies starts to slide down. Repeat this process at least nine more times for a total of 10 trials. Each time be sure to start with the clipboard lying flat on a flat surface and with both stacks of pennies sitting next to one another by the clip. Also make sure to slowly lift the clipboard each time. For each trial, which stack of pennies slides down the clipboard first? Are your results fairly consistent? If one stack of pennies usually slid down the clipboard first, why do you think this happened? Why do you think the angle of repose (the angle after which an object slides down a slope) may have been different for the two different stacks of pennies? What do you think your results might have to do with friction? Extra: You could repeat this activity, but use a protractor to quantify your results. What is the exact angle at which the different penny stacks start to slide down the slope? How do their angles of repose compare exactly? Extra: Try this activity again, but this time try making different size penny stacks (with more or fewer pennies) and compare their angles of repose. How does the size of the penny stack affect how it slides down the slope? Extra: Grab some other small objects and try repeating this activity. For example, you could make different objects from LEGOs. Do you get similar results? What factors do you think are most important in determining whether an object will first slide at a lower or steeper incline?

Observations and results
Did the tape-only penny stack usually start sliding down the clipboard first when you slowly raised the clipboard, increasing the angle of the slope? The majority of the time the stack of pennies that was only coated in tape (and not a strip of paper towel) should have started sliding down the clipboard before the other stack of pennies did as the clipboard was raised up by its clip. For example, out of 10 trials the tape-only penny stack may have started sliding before the paper towel-wrapped stack in all of the trials. The resistance to downward movement on the slope is called friction, and it depends on the component of gravity that is perpendicular to the slope as well as the surfaces of the object and the slope itself. Because there was a greater amount of friction between the two paper towel–coated surfaces rubbing against one another than there was between the paper towel–coated surface and the tape-coated surface, the penny stack with a paper towel strip on it had a greater amount of friction, or resistance to movement, when going down the slope. This greater amount of friction should have given the paper towel–coated stack a greater angle of repose compared with the tape-only stack.

More to explore
Friction Basics, from Rader's Physics4Kids.com
Landslide Types and Processes, from the U.S. Geological Survey
Fun Science Activities for You and Your Family, from Science Buddies
Landslides: What Causes a Hill to Become Creep-y?, from Science Buddies
This activity brought to you in partnership with Science Buddies
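As promised in the explanation above, here is a minimal Python sketch of the force balance (the coefficient-of-friction values are made-up placeholders, not measurements from this activity). It splits the weight of an object on a slope into its slope-parallel and slope-perpendicular components and reports the predicted angle of repose, arctan(mu):

import math

def slope_components(mass_kg, angle_deg, g=9.81):
    # Split the object's weight into the component pulling it down the slope
    # and the component pressing it against the slope.
    angle = math.radians(angle_deg)
    parallel = mass_kg * g * math.sin(angle)       # pulls down the slope
    perpendicular = mass_kg * g * math.cos(angle)  # presses into the slope
    return parallel, perpendicular

def angle_of_repose(mu_static):
    # Sliding starts when the parallel pull exceeds the maximum static
    # friction (mu * perpendicular force), i.e. when tan(angle) > mu.
    return math.degrees(math.atan(mu_static))

# Placeholder friction coefficients for the two penny stacks (illustrative only).
for label, mu in [("tape on paper towel", 0.3), ("paper towel on paper towel", 0.6)]:
    print(f"{label}: predicted angle of repose = {angle_of_repose(mu):.1f} degrees")

A larger coefficient of friction, as with the paper-towel-on-paper-towel stack, gives a larger predicted angle of repose, which matches the observation that it slides later.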
- The faculty of observing in any given case the available means of persuasion. A thoughtful, reflective activity leading to effective communication, including rational exchange of opposing viewpoints.
- Aristotle: father of rhetoric.
- Rhetoric is in any form of writing.
- The person or persons who are intended to read/hear the piece of writing. Those who understand and can use the available means to appeal to an audience of one or many find themselves in a position of strength. They have the tools to resolve conflicts without confrontation, to persuade readers or listeners to support their position, or to move others to take action.
- Lou Gehrig's audience was his fans, fellow athletes, fans in the stadium, and fans who would hear his speech from afar.
- The occasion, or the time and place a piece of writing was written or spoken.
- Rhetoric is always situational.
- Lou Gehrig's speech was given between doubleheader baseball games. He knew he should keep it lighthearted.
- Goal that the speaker or writer wants to achieve.
- Lou Gehrig's purpose in his speech was to stay positive by looking on the bright side (his past luck and present optimism) and downplaying the bleak outlook.
- When the audience is more likely to take one side over the other.
- Someone writing about freedom of speech in a community that has experienced hate graffiti must take that context into account and adjust the purpose of the piece so as not to offend the audience.
- The main idea of a piece of writing.
- Should be a crystal clear and focused statement.
- Lou Gehrig's thesis statement was that he's the "luckiest man on the face of the earth."
- Also called: Claim, Assertion.
- What a piece of writing is mostly about.
- Lou Gehrig's subject was baseball in general, the New York Yankees in particular, not his sickness.
- The person telling the story or giving the speech.
- Lou Gehrig presents himself as a common man, modest and glad for the life he's lived.
- Described as the interaction among subject, speaker, and audience, as well as how this interaction determines the structure and language of the argument - that is, a text or image that establishes a position.
- Also called the Aristotelian triangle.
The character the speaker creates when he or she writes or speaks, depending on the context, purpose, subject, and audience.
- Speakers and writers appeal to ethos, or character, to demonstrate that they are credible and trustworthy.
- In a speech discouraging children from using alcohol, the speaker may stress that they are psychologists specializing in alcoholism or adolescent behavior.
- Appeals to ethos often emphasize shared values between the speaker and the audience.
- The speaker's attitude toward the audience or subject.
- Sometimes the speaker establishes ethos through the discourse itself, whether written or spoken, by making a good impression. That impression may result from a tone of reason and goodwill or from the type and thoroughness of information presented.
Writers and speakers appeal to logos, or reason, by offering clear, rational ideas. Appealing to logos (Greek, "embodied thought") means having a clear thesis, with specific details, examples, facts, statistical data, or expert testimony as support.
- A belief or statement taken for granted without proof.
- In his speech, Lou Gehrig assumes that, like him, his audience also believes that "bad breaks" are an inevitable part of life.
- Another way to appeal to logos is to acknowledge a counterargument, to anticipate an objection or opposing view.
- While you might worry that raising an opposing view will weaken your argument, you'll be vulnerable if you ignore ideas that run counter to your own.
- Presenting a counterargument shows you have completely thought out your subject.
- An element of counterargument.
- You agree that the opposing argument may be true, or have strong points.
- An element of counterargument.
- You deny the validity of all or part of the argument.
- A Greek term that refers to suffering but has come to be associated with broader appeals to emotion.
- Lou Gehrig's speech appeals largely to pathos. Although writing that relies exclusively on emotional appeals is rarely effective in the long term, choosing language such as figurative language or personal anecdotes that engage the emotions of the audience can add an important dimension.
- What is implied by a word as opposed to its literal meaning.
- The feelings they evoke. *positive, negative, fearful, gloomy*
- Lou Gehrig chooses a sequence of words with strong positive connotations: greatest, wonderful, honored, grand, blessing.
- A negative term for writing designed to sway opinion rather than present information.
- This often occurs in writings relying mainly on pathos.
- An argument against an idea, usually regarding philosophy, politics, or religion.
- Writings that appeal only to pathos are usually more polemical than persuasive.
- As in works that consist of words, spoken or written, rhetoric is also present in visual texts.
- Visual rhetoric is ever present in political cartoons.
An ironic, sarcastic, or witty composition that claims to argue for something, but actually argues against it.
Another element of rhetoric is the organization of a piece, what classical rhetoricians called arrangement.
The Classical Model
- Classical rhetoricians outlined a five-part structure for an oratory, or speech, that writers still use today, although perhaps not always consciously.
- Five parts: 1. the introduction (exordium), 2. the narration (narratio), 3. the confirmation (confirmatio), 4. the refutation (refutatio), 5. the conclusion (peroratio).
- Narration refers to telling a story or recounting a series of events.
- Narration is not simply crafting an appealing story; it is crafting a story that supports your thesis.
- George Orwell's Shooting an Elephant is an example of this form.
Patterns of Development
One way to consider arrangement is according to purpose. Comparison and contrast, narration of an event, or defining a term can suggest a method of organization, or arrangement. These patterns of development include a range of logical ways to organize an entire text or, more likely, individual paragraphs or sections.
- Description is closely allied with narration because both include many specific details. However, unlike narration, description emphasizes the senses by painting a picture of how something looks, sounds, smells, tastes, or feels. Description is often used to establish a mood or atmosphere.
- Makes writing clearer, more vivid, and more persuasive.
- Process analysis explains how something works, how to do something, or how something is done.
- We use process analysis when explaining how to bake bread or use the snap feature on Windows 7.
- The key to successful process analysis is clarity: explain the subject clearly, and give the sequence of major steps and/or phases.
- Providing a series of examples (facts, specific cases, or instances) turns a general idea into a concrete one; this makes your argument both clearer and more persuasive to a reader.
- An example of this form is Jonathan Edwards's sermon Sinners in the Hands of an Angry God.
- Aristotle taught that examples are a type of logical proof called induction. A series of specific examples leads to a general conclusion.
- If you see a man with a lawn mower in the back of his truck, he probably has a lawn; on that lawn is a house, and in that house is a family. This is probably a generally happy man.
Comparison and contrast
- A common pattern of development is comparison and contrast: discussing two things, highlighting their similarities and differences.
- Writers use comparison and contrast to analyze information carefully.
Classification and division
- It is important for readers as well as writers to be able to sort material or ideas into major categories.
- By answering the question "What goes together and why?" writers and readers can make connections between seemingly unrelated things.
- To ensure that writers and their audiences are speaking the same language, definition may lay the foundation to establish common ground or identify areas of conflict.
- Defining a term is often the first step in a debate or disagreement.
Cause and Effect
- Analyzing the causes that lead to a certain effect, or conversely, the effects that result from a cause, is a powerful foundation for argument.
- Since causal analysis depends upon crystal-clear logic, it is important to carefully trace a chain of cause and effect and to recognize possible contributing causes.
The Fischer–Tropsch process is a collection of chemical reactions that converts a mixture of carbon monoxide and hydrogen into liquid hydrocarbons. These reactions occur in the presence of metal catalysts, typically at temperatures of 150–300 °C (302–572 °F) and pressures of one to several tens of atmospheres. The process was first developed by Franz Fischer and Hans Tropsch at the Kaiser-Wilhelm-Institut für Kohlenforschung in Mülheim an der Ruhr, Germany, in 1925. As a premier example of C1 chemistry, the Fischer–Tropsch process is an important reaction in both coal liquefaction and gas-to-liquids technology for producing liquid hydrocarbons. In the usual implementation, carbon monoxide and hydrogen, the feedstocks for F-T, are produced from coal, natural gas, or biomass in a process known as gasification. The Fischer–Tropsch process then converts these gases into a synthetic lubrication oil and synthetic fuel. The Fischer–Tropsch process has received intermittent attention as a source of low-sulfur diesel fuel and as a way to address the supply or cost of petroleum-derived hydrocarbons.
- 1 Types
- 1 Reaction mechanism
- 2 Feedstocks: gasification
- 2.1 Feedstocks: GTL
- 2.2 Process conditions
- 2.3 Design of the Fischer–Tropsch process reactor
- 2.4 Product distribution
- 2.5 Catalysts
- 2.6 HTFT and LTFT
- 3 History
- 4 Commercialization
- 5 Research developments
- 6 Process efficiency
- 7 Fischer-Tropsch in Nature
- 8 See also
- 9 References
- 10 Further reading
- 11 External links
The Fischer–Tropsch process involves a series of chemical reactions that produce a variety of hydrocarbons, ideally having the formula (CnH2n+2). The more useful reactions produce alkanes as follows:
- (2n + 1) H2 + n CO → CnH2n+2 + n H2O
where n is typically 10–20. The formation of methane (n = 1) is unwanted. Most of the alkanes produced tend to be straight-chain, suitable as diesel fuel. In addition to alkane formation, competing reactions give small amounts of alkenes, as well as alcohols and other oxygenated hydrocarbons.
Fischer–Tropsch intermediates and elemental reactions
Converting a mixture of H2 and CO into aliphatic products is a multi-step reaction with several intermediate compounds. The growth of the hydrocarbon chain may be visualized as involving a repeated sequence in which hydrogen atoms are added to carbon and oxygen, the C–O bond is split and a new C–C bond is formed. For one –CH2– group produced by CO + 2 H2 → (CH2) + H2O, several reactions are necessary:
- Associative adsorption of CO
- Splitting of the C–O bond
- Dissociative adsorption of 2 H2
- Transfer of 2 H to the oxygen to yield H2O
- Desorption of H2O
- Transfer of 2 H to the carbon to yield CH2
The conversion of CO to alkanes involves hydrogenation of CO, the hydrogenolysis (cleavage with H2) of C–O bonds, and the formation of C–C bonds. Such reactions are assumed to proceed via initial formation of surface-bound metal carbonyls. The CO ligand is speculated to undergo dissociation, possibly into oxide and carbide ligands. Other potential intermediates are various C1 fragments including formyl (CHO), hydroxycarbene (HCOH), hydroxymethyl (CH2OH), methyl (CH3), methylene (CH2), methylidyne (CH), and hydroxymethylidyne (COH). Furthermore, and critical to the production of liquid fuels, are reactions that form C–C bonds, such as migratory insertion. Many related stoichiometric reactions have been simulated on discrete metal clusters, but homogeneous Fischer–Tropsch catalysts are poorly developed and of no commercial importance.
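Because the alkane-forming reaction above consumes (2n + 1) H2 for every n CO, the stoichiometric H2:CO demand is (2n + 1)/n, which approaches 2 for diesel-range chains (n around 10–20) and explains the roughly 2:1 syngas ratio discussed under feedstocks below. A minimal Python sketch (purely illustrative; not part of the source) that evaluates it:

def h2_co_ratio(n):
    # Stoichiometric H2:CO ratio for making the alkane CnH2n+2 via
    # (2n + 1) H2 + n CO -> CnH2n+2 + n H2O.
    return (2 * n + 1) / n

for n in (1, 5, 10, 20):
    print(f"n = {n:2d}: H2:CO = {h2_co_ratio(n):.2f}")
# n = 1 (methane) requires 3.00, while n = 10-20 requires about 2.05-2.10,
# close to the ideal ratio of ~2 quoted for synthesis gas.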
Addition of isotopically labelled alcohol to the feed stream results in incorporation of alcohols into the product. This observation establishes the facility of C–O bond scission. Using 14C-labelled ethylene and propene over cobalt catalysts results in incorporation of these olefins into the growing chain. The chain growth reaction thus appears to involve both 'olefin insertion' as well as 'CO insertion'.

Fischer–Tropsch plants associated with coal or related solid feedstocks (sources of carbon) must first convert the solid fuel into gaseous reactants, i.e., CO, H2, and alkanes. This conversion is called gasification and the product is called synthesis gas ("syngas"). Synthesis gas obtained from coal gasification tends to have a H2:CO ratio of ~0.7 compared to the ideal ratio of ~2. This ratio is adjusted via the water-gas shift reaction. Coal-based Fischer–Tropsch plants produce varying amounts of CO2, depending upon the energy source of the gasification process. However, most coal-based plants rely on the feed coal to supply all the energy requirements of the Fischer–Tropsch process.

Carbon monoxide for FT catalysis is derived from hydrocarbons. In gas-to-liquids (GTL) technology, the hydrocarbons are low molecular weight materials that often would be discarded or flared. Stranded gas provides relatively cheap gas. GTL is viable provided gas remains relatively cheaper than oil. Several reactions are required to obtain the gaseous reactants required for Fischer–Tropsch catalysis. First, reactant gases entering a Fischer–Tropsch reactor must be desulfurized. Otherwise, sulfur-containing impurities deactivate ("poison") the catalysts required for Fischer–Tropsch reactions. Two further reactions, the water-gas shift and the steam reforming of methane, adjust and supply the hydrogen content of the syngas:
- H2O + CO → H2 + CO2
- H2O + CH4 → CO + 3 H2

Generally, the Fischer–Tropsch process is operated in the temperature range of 150–300 °C (302–572 °F). Higher temperatures lead to faster reactions and higher conversion rates but also tend to favor methane production. For this reason, the temperature is usually maintained at the low to middle part of the range. Increasing the pressure leads to higher conversion rates and also favors formation of long-chained alkanes, both of which are desirable. Typical pressures range from one to several tens of atmospheres. Even higher pressures would be favorable, but the benefits may not justify the additional costs of high-pressure equipment, and higher pressures can lead to catalyst deactivation via coke formation. A variety of synthesis-gas compositions can be used. For cobalt-based catalysts the optimal H2:CO ratio is around 1.8–2.1. Iron-based catalysts can tolerate lower ratios, due to the intrinsic water-gas shift activity of the iron catalyst. This reactivity can be important for synthesis gas derived from coal or biomass, which tends to have relatively low H2:CO ratios (< 1).

Design of the Fischer–Tropsch process reactor - Efficient removal of heat from the reactor is the basic need of Fischer–Tropsch reactors, since these reactions are characterized by high exothermicity. Four types of reactors are discussed:

Multi-tubular fixed-bed reactor - This type of reactor contains a number of tubes with small diameter. These tubes contain catalyst and are surrounded by boiling water, which removes the heat of reaction. A fixed-bed reactor is suitable for operation at low temperatures and has an upper temperature limit of 257 °C (530 K). Excess temperature leads to carbon deposition and hence blockage of the reactor.
Since large amounts of the products formed are in the liquid state, this type of reactor can also be referred to as a trickle-flow reactor system.

Entrained flow reactor - An important requirement of the reactor for the Fischer–Tropsch process is to remove the heat of the reaction. This type of reactor contains two banks of heat exchangers which remove part of the heat; the remainder is removed by the products and recycled in the system. The formation of heavy waxes should be avoided, since they condense on the catalyst and form agglomerations. This leads to defluidization. Hence, risers are operated over 297 °C (570 K).

Slurry reactors - Heat removal is done by internal cooling coils. The synthesis gas is bubbled through the waxy products and the finely divided catalyst, which is suspended in the liquid medium. This also provides agitation of the contents of the reactor. The small catalyst particle size reduces diffusional heat- and mass-transfer limitations. A lower temperature in the reactor leads to a more viscous product, and a higher temperature (> 297 °C, 570 K) gives an undesirable product spectrum. Also, separation of the product from the catalyst is a problem.

Fluid-bed and circulating catalyst (riser) reactors - These are used for high-temperature Fischer–Tropsch synthesis (nearly 340 °C) to produce low-molecular-weight unsaturated hydrocarbons on alkalized fused iron catalysts. The fluid-bed technology (as adapted from the catalytic cracking of heavy petroleum distillates) was introduced by Hydrocarbon Research in 1946–50 and named the 'Hydrocol' process. A large-scale Fischer–Tropsch Hydrocol plant (350,000 tons per annum) operated during 1951–57 in Brownsville, Texas. Due to technical problems, and poor economics as petroleum became more readily available, this development was discontinued. Fluid-bed Fischer–Tropsch synthesis has recently been very successfully reinvestigated by Sasol. One reactor with a capacity of 500,000 tons per annum is now in operation, and even larger ones are being built (nearly 850,000 tons per annum). The process is now used mainly for C2 and C7 alkene production. This new development can be regarded as important progress in Fischer–Tropsch technology. A high-temperature process with a circulating iron catalyst ('circulating fluid bed', 'riser reactor', 'entrained catalyst process') was introduced by the Kellogg Company and a respective plant built at Sasol in 1956. It was improved by Sasol for successful operation. At Secunda, South Africa, Sasol operated 16 advanced reactors of this type with a capacity of approximately 330,000 tons per annum each. Now the circulating catalyst process is being replaced by the superior Sasol advanced fluid-bed technology.

Early experiments with cobalt catalyst particles suspended in oil were performed by Fischer. The bubble-column reactor with a powdered iron slurry catalyst and a CO-rich syngas was developed to pilot-plant scale by Kölbel at the Rheinpreußen Company in 1953. Since 1990, low-temperature Fischer–Tropsch slurry processes using iron and cobalt catalysts have been under investigation, particularly by Exxon and Sasol, for the production of hydrocarbon wax or of material to be hydrocracked and isomerised to produce diesel fuel. Today slurry-phase (bubble-column) low-temperature Fischer–Tropsch synthesis is regarded by many authors as the most efficient process for Fischer–Tropsch clean diesel production.
This Fischer–Tropsch technology is also under development by the Statoil Company (Norway) for use on a vessel to convert associated gas at offshore oil fields into a hydrocarbon liquid.

Product distribution
In general, the distribution of hydrocarbon chain lengths produced by the Fischer–Tropsch process follows an Anderson–Schulz–Flory distribution, which can be expressed as:
- Wn/n = (1 − α)^2 α^(n−1)
where Wn is the weight fraction of hydrocarbons containing n carbon atoms, and α is the chain growth probability, that is, the probability that a molecule will continue reacting to form a longer chain. In general, α is largely determined by the catalyst and the specific process conditions. Examination of the above equation reveals that methane will always be the largest single product so long as α is less than 0.5; however, by increasing α close to one, the total amount of methane formed can be minimized compared to the sum of all of the various long-chained products. Increasing α increases the formation of long-chained hydrocarbons. The very long-chained hydrocarbons are waxes, which are solid at room temperature. Therefore, for production of liquid transportation fuels it may be necessary to crack some of the Fischer–Tropsch products. In order to avoid this, some researchers have proposed using zeolites or other catalyst substrates with fixed-size pores that can restrict the formation of hydrocarbons longer than some characteristic size (usually n < 10). This way they can drive the reaction so as to minimize methane formation without producing many long-chained hydrocarbons. Such efforts have had only limited success.
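To make the role of α concrete, the short Python sketch below (an illustration of the distribution above; the variable and function names are our own) evaluates Wn = n(1 − α)^2 α^(n−1) for a few values of α. It confirms that methane carries the largest single weight fraction whenever α < 0.5, and that pushing α toward 1 suppresses methane in favour of diesel-range and wax products.

```python
# Minimal sketch of the Anderson–Schulz–Flory (ASF) distribution:
#   W_n / n = (1 - alpha)**2 * alpha**(n - 1)
# where W_n is the weight fraction of hydrocarbons with n carbons
# and alpha is the chain-growth probability.

def asf_weight_fraction(n: int, alpha: float) -> float:
    return n * (1 - alpha) ** 2 * alpha ** (n - 1)

def summarize(alpha: float, n_max: int = 200) -> None:
    w = {n: asf_weight_fraction(n, alpha) for n in range(1, n_max + 1)}
    peak = max(w, key=w.get)                       # carbon number with the largest weight fraction
    methane = w[1]
    diesel = sum(w[n] for n in range(10, 21))      # rough C10-C20 "diesel" cut
    wax = sum(w[n] for n in range(21, n_max + 1))  # heavier material
    print(f"alpha = {alpha:.2f}: peak at C{peak}, "
          f"CH4 = {methane:.1%}, C10-C20 = {diesel:.1%}, C21+ = {wax:.1%}")

if __name__ == "__main__":
    for alpha in (0.40, 0.70, 0.90, 0.95):
        summarize(alpha)
```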
Catalysts
A variety of catalysts can be used for the Fischer–Tropsch process, the most common being the transition metals cobalt, iron, and ruthenium. Nickel can also be used, but tends to favor methane formation ("methanation"). Cobalt-based catalysts are highly active, although iron may be more suitable for certain applications. Cobalt catalysts are more active for Fischer–Tropsch synthesis when the feedstock is natural gas. Natural gas has a high hydrogen-to-carbon ratio, so the water-gas shift is not needed for cobalt catalysts. Iron catalysts are preferred for lower-quality feedstocks such as coal or biomass. Synthesis gas derived from these hydrogen-poor feedstocks has a low hydrogen content and requires the water–gas shift reaction. Unlike the other metals used for this process (Co, Ni, Ru), which remain in the metallic state during synthesis, iron catalysts tend to form a number of phases, including various oxides and carbides, during the reaction. Control of these phase transformations can be important in maintaining catalytic activity and preventing breakdown of the catalyst particles.

In addition to the active metal, the catalysts typically contain a number of "promoters," including potassium and copper. Group 1 alkali metals, including potassium, are a poison for cobalt catalysts but are promoters for iron catalysts. Catalysts are supported on high-surface-area binders/supports such as silica, alumina, or zeolites. Promoters also have an important influence on activity. Alkali metal oxides and copper are common promoters, but the formulation depends on the primary metal, iron versus cobalt. Alkali oxides on cobalt catalysts generally cause activity to drop severely even at very low alkali loadings: C5+ and CO2 selectivity increase while methane and C2–C4 selectivity decrease, and the alkene-to-alkane ratio increases. Fischer–Tropsch catalysts are sensitive to poisoning by sulfur-containing compounds; cobalt-based catalysts are more sensitive than their iron counterparts.

Fischer–Tropsch iron catalysts need alkali promotion to attain high activity and stability (e.g., 0.5 wt% K2O). Addition of Cu for reduction promotion, addition of SiO2 for structural promotion, and perhaps some manganese can be applied for selectivity control (e.g., high olefinicity). The working catalyst is only obtained when—after reduction with hydrogen—several iron carbide phases and elemental carbon are formed in the initial period of synthesis, whereas iron oxides are still present in addition to some metallic iron. With iron catalysts two directions of selectivity have been pursued. One direction has aimed at a low-molecular-weight olefinic hydrocarbon mixture to be produced in an entrained-phase or fluid-bed process (Sasol–Synthol process). Due to the relatively high reaction temperature (approximately 340 °C), the average molecular weight of the product is so low that no liquid product phase occurs under reaction conditions. The catalyst particles moving around in the reactor are small (particle diameter about 100 µm), and carbon deposition on the catalyst does not disturb reactor operation. Thus a low catalyst porosity with small pore diameters, as obtained from fused magnetite (plus promoters) after reduction with hydrogen, is appropriate. For maximising the overall gasoline yield, C3 and C4 alkenes have been oligomerized at Sasol. However, recovering the olefins for use as chemicals in, e.g., polymerization processes is advantageous today. The second direction of iron catalyst development has aimed at the highest catalyst activity, to be used at low reaction temperature where most of the hydrocarbon product is in the liquid phase under reaction conditions. Typically, such catalysts are obtained through precipitation from nitrate solutions. A high content of a carrier provides mechanical strength and wide pores for easy mass transfer of the reactants in the liquid product filling the pores. The main product fraction is then a paraffin wax, which is refined to marketable wax materials at Sasol; however, it can also be very selectively hydrocracked to a high-quality diesel fuel. Thus, iron catalysts are very flexible.

Ruthenium is the most active of the F-T catalysts. It works at the lowest reaction temperatures, and it produces the highest-molecular-weight hydrocarbons. It acts as a Fischer–Tropsch catalyst as the pure metal, without any promoters, thus providing the simplest catalytic system of Fischer–Tropsch synthesis, in which mechanistic conclusions should be the easiest to draw—e.g., much easier than with iron as the catalyst. As with nickel, the selectivity changes to mainly methane at elevated temperature. Its high price and limited world resources exclude industrial application. Systematic Fischer–Tropsch studies with ruthenium catalysts should contribute substantially to the further exploration of the fundamentals of Fischer–Tropsch synthesis. There is an interesting question to consider: what features do the metals nickel, iron, cobalt, and ruthenium have in common to let them—and only them—be Fischer–Tropsch catalysts, converting the CO/H2 mixture to aliphatic (long-chain) hydrocarbons in a 'one-step reaction'? The term 'one-step reaction' means that reaction intermediates are not desorbed from the catalyst surface. In particular, it is striking that the heavily carbided, alkalized iron catalyst gives a similar reaction to the purely metallic ruthenium catalyst.

HTFT and LTFT
High-temperature Fischer–Tropsch (or HTFT) is operated at temperatures of 330–350 °C and uses an iron-based catalyst. This process was used extensively by Sasol in their coal-to-liquid plants (CTL).
Low-Temperature Fischer–Tropsch (LTFT) is operated at lower temperatures and uses an iron- or cobalt-based catalyst. This process is best known for being used in the first integrated GTL plant, built and operated by Shell in Bintulu, Malaysia.

History
Since the invention of the original process by Fischer and Tropsch, working at the Kaiser-Wilhelm-Institut für Kohlenforschung in the 1920s, many refinements and adjustments have been made. Fischer and Tropsch filed a number of patents, e.g., U.S. Patent 1,746,464, applied for in 1926 and published in 1930. The process was commercialized by Brabag in Germany in 1936. Being petroleum-poor but coal-rich, Germany used the Fischer–Tropsch process during World War II to produce ersatz (replacement) fuels. Fischer–Tropsch production accounted for an estimated 9% of German war production of fuels and 25% of the automobile fuel. The United States Bureau of Mines, in a program initiated by the Synthetic Liquid Fuels Act, employed seven Operation Paperclip synthetic fuel scientists in a Fischer–Tropsch plant in Louisiana, Missouri, in 1946. In Britain, Alfred August Aicher obtained several patents for improvements to the process in the 1930s and 1940s. Aicher's company was named Synthetic Oils Ltd (not related to a company of the same name in Canada).

Commercialization
Ras Laffan, Qatar
The LTFT facility Pearl GTL at Ras Laffan, Qatar, is the largest FT plant. It uses cobalt catalysts at 230 °C, converting natural gas to petroleum liquids at a rate of 140,000 barrels per day (22,000 m3/d), with additional production of 120,000 barrels (19,000 m3) of oil equivalent in natural gas liquids and ethane. A second plant at Ras Laffan, Oryx GTL, was commissioned in 2007 and has a capacity of 34,000 barrels per day (5,400 m3/d). It utilizes the Sasol slurry phase distillate process, which uses a cobalt catalyst. Oryx GTL is a joint venture between Qatar Petroleum and Sasol.

Another large-scale implementation of Fischer–Tropsch technology is the series of plants operated by Sasol in South Africa, a country with large coal reserves but little oil. The first commercial plant opened in 1952. Sasol uses coal, and now natural gas, as feedstocks and produces a variety of synthetic petroleum products, including most of the country's diesel fuel. PetroSA, another South African company, operates a 36,000 barrels-a-day refinery that completed semi-commercial demonstration in 2011, paving the way for commercial preparation. The technology can be used to convert natural gas, biomass, or coal into synthetic fuels.

Shell middle distillate synthesis
One of the largest implementations of Fischer–Tropsch technology is in Bintulu, Malaysia. This Shell facility converts natural gas into low-sulfur diesel fuels and food-grade wax. The scale is 12,000 barrels per day (1,900 m3/d).

Construction is underway for Velocys' commercial reference plant incorporating its microchannel Fischer–Tropsch technology: ENVIA Energy's Oklahoma City GTL project, being built adjacent to Waste Management's East Oak landfill site. The project is being financed by a joint venture between Waste Management, NRG Energy, Ventech, and Velocys. The feedstock for this plant will be a combination of landfill gas and pipeline natural gas.

In October 2006, Finnish paper and pulp manufacturer UPM announced its plans to produce biodiesel by the Fischer–Tropsch process alongside the manufacturing processes at its European paper and pulp plants, using waste biomass resulting from paper and pulp manufacturing as source material.
A demonstration-scale Fischer–Tropsch plant was built and operated by Rentech, Inc., in partnership with ClearFuels, a company specializing in biomass gasification. Located in Commerce City, Colorado, the facility produces about 10 barrels per day (1.6 m3/d) of fuels from natural gas. Commercial-scale facilities are planned for Rialto, California; Natchez, Mississippi; Port St. Joe, Florida; and White River, Ontario. Rentech closed down their pilot plant in 2013, and abandoned work on their FT process as well as the proposed commercial facilities. INFRA GTL Technology In 2010, INFRA built a compact Pilot Plant for conversion of natural gas into synthetic oil. The plant modeled the full cycle of the GTL chemical process including the intake of pipeline gas, sulfur removal, steam methane reforming, syngas conditioning, and Fischer-Tropsch synthesis. In 2013 the first pilot plant was acquired by VNIIGAZ Gazprom LLC. In 2014 INFRA commissioned and operated on a continuous basis a new, larger scale full cycle Pilot Plant. It represents the second generation of INFRA’s testing facility and is differentiated by a high degree of automation and extensive data gathering system. In 2015, INFRA built its own catalyst factory in Troitsk (Moscow, Russia). The catalyst factory has a capacity of over 15 tons per year, and produces the unique proprietary Fischer-Tropsch catalysts developed by the company’s R&D division. In 2016, INFRA designed and built a modular, transportable GTL (gas-to-liquid) M100 plant for processing natural and associated gas into synthetic crude oil in Wharton (Texas, USA). The M100 plant is operating as a technology demonstration unit, R&D platform for catalyst refinement, and economic model to scale the Infra GTL process into larger and more efficient plants. In the United States and India, some coal-producing states have invested in Fischer–Tropsch plants. In Pennsylvania, Waste Management and Processors, Inc. was funded by the state to implement Fischer–Tropsch technology licensed from Shell and Sasol to convert so-called waste coal (leftovers from the mining process) into low-sulfur diesel fuel. Choren Industries has built a plant in Germany that converts biomass to syngas and fuels using the Shell Fischer–Tropsch process structure. The company went bankrupt in 2011 due to impracticalities in the process. U.S. Air Force certification Syntroleum, a publicly traded United States company, has produced over 400,000 U.S. gallons (1,500,000 L) of diesel and jet fuel from the Fischer–Tropsch process using natural gas and coal at its demonstration plant near Tulsa, Oklahoma. Syntroleum is working to commercialize its licensed Fischer–Tropsch technology via coal-to-liquid plants in the United States, China, and Germany, as well as gas-to-liquid plants internationally. Using natural gas as a feedstock, the ultra-clean, low sulfur fuel has been tested extensively by the United States Department of Energy (DOE) and the United States Department of Transportation (DOT). Most recently, Syntroleum has been working with the United States Air Force to develop a synthetic jet fuel blend that will help the Air Force to reduce its dependence on imported petroleum. The Air Force, which is the United States military's largest user of fuel, began exploring alternative fuel sources in 1999. On December 15, 2006, a B-52 took off from Edwards Air Force Base, California for the first time powered solely by a 50–50 blend of JP-8 and Syntroleum's FT fuel. 
The seven-hour flight test was considered a success. The goal of the flight test program was to qualify the fuel blend for fleet use on the service's B-52s, followed by flight testing and qualification on other aircraft. The test program concluded in 2007. This program is part of the Department of Defense Assured Fuel Initiative, an effort to develop secure domestic sources for the military's energy needs. The Pentagon hopes to reduce its use of crude oil from foreign producers and obtain about half of its aviation fuel from alternative sources by 2016. With the B-52 now approved to use the FT blend, the Air Force planned to certify the C-17 Globemaster III, the B-1B, and eventually every airframe in its inventory to use the fuel by 2011.

Carbon dioxide reuse
Carbon dioxide is not a typical feedstock for F-T catalysis. Hydrogen and carbon dioxide react over a cobalt-based catalyst, producing methane. With iron-based catalysts, unsaturated short-chain hydrocarbons are also produced. When ceria is introduced to the catalyst's support, it functions as a reverse water-gas shift catalyst, further increasing the yield of the reaction. The short-chain hydrocarbons were upgraded to liquid fuels over solid acid catalysts, such as zeolites.

Process efficiency
Using conventional FT technology, carbon efficiency ranges from 25 to 50 percent, and thermal efficiency is about 50% for CTL facilities (idealised at 60%) and about 60% for GTL facilities (idealised to 80%).

Fischer-Tropsch in Nature
A Fischer–Tropsch-type process has also been suggested to have produced a few of the building blocks of DNA and RNA within asteroids. Similarly, naturally occurring FT processes have been described as important for the formation of abiogenic petroleum.
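To make the process-efficiency figures above concrete, here is a minimal Python sketch (our own illustration; the feed quantity is a hypothetical figure and the function name is not from the source) that splits feed carbon into carbon retained in liquid products and carbon rejected, for the 25–50% carbon-efficiency range quoted for conventional FT.

```python
# Minimal sketch using the efficiency ranges quoted above (illustrative only).
# Carbon efficiency: fraction of feed carbon that ends up in liquid products.

def carbon_split(feed_carbon_t: float, carbon_efficiency: float) -> tuple[float, float]:
    """Return (carbon in products, carbon rejected, mostly as CO2) in tonnes."""
    in_products = feed_carbon_t * carbon_efficiency
    return in_products, feed_carbon_t - in_products

if __name__ == "__main__":
    feed = 100.0  # tonnes of carbon entering the gasifier (hypothetical figure)
    for ce in (0.25, 0.50):  # conventional FT range quoted in the text
        product_c, rejected_c = carbon_split(feed, ce)
        print(f"carbon efficiency {ce:.0%}: {product_c:.0f} t C to liquids, "
              f"{rejected_c:.0f} t C rejected")
```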
"Advances in the Development of Novel Cobalt Fischer−Tropsch Catalysts for Synthesis of Long-Chain Hydrocarbons and Clean Fuels". Chemical Reviews. 107 (5): 1692–1744. doi:10.1021/cr050972v. ISSN 0009-2665. - Balonek, Christine M.; Lillebø, Andreas H.; Rane, Shreyas; Rytter, Erling; Schmidt, Lanny D.; Holmen, Anders (2010-08-01). "Effect of Alkali Metal Impurities on Co–Re Catalysts for Fischer–Tropsch Synthesis from Biomass-Derived Syngas". Catalysis Letters. 138 (1–2): 8–13. doi:10.1007/s10562-010-0366-4. ISSN 1011-372X. - "Gas to Liquids (GTL) Technology". Retrieved 15 May 2015. - US 1746464, issued 1930-02-11 - Leckel, Dieter (2009-05-21). "Diesel Production from Fischer−Tropsch: The Past, the Present, and New Concepts". Energy & Fuels. 23 (5): 2342–2358. doi:10.1021/ef900064c. ISSN 0887-0624. - "German Synthetic Fuels Scientists". Archived from the original on 24 September 2015. Retrieved 15 May 2015. - For example, British Patent No. 573,982, applied 1941, published 1945"Improvements in or relating to Methods of Producing Hydrocarbon Oils from Gaseous Mixtures of Hydrogen and Carbon Monoxide" (PDF). January 14, 1941. Archived from the original (PDF) on December 17, 2008. Retrieved 2008-11-09. - Carl Mesters (2016). "A Selection of Recent Advances in C1 Chemistry". Annual Review of Chemical and Biomolecular Engineering. 7: 223–38. doi:10.1146/annurev-chembioeng-080615-034616. - "Construction of World's First Synthesis Plant" Popular Mechanics, February 1952, p. 264, bottom of page. - "technologies & processes" Sasol Archived 2008-11-16 at the Wayback Machine. - "After Sasol ditches plan for $14B gas-to-liquids plant, Louisiana off hook for $200M in incentives". Archived from the original on 2017-12-02. Retrieved 2017-12-01. - "PetroSA technology ready for next stage | Archive | BDlive". Businessday.co.za. 2011-05-10. Retrieved 2013-06-05. - ""Setting the stage for the future of smaller-scale GTL", Gas Processing". August 2015. - "UPM-Kymmene says to establish beachhead in biodiesel market". NewsRoom Finland. Archived from the original on 2007-03-17. - http://www.rentechinc.com/ (official site) - "GEO ExPro magazine" (PDF). Vol. 14, No. 4 – 2017 Pgs 14-17. - "Governor Rendell leads with innovative solution to help address PA energy needs". State of Pennsylvania. Archived from the original on 2008-12-11. - "Schweitzer wants to convert Otter Creek coal into liquid fuel". Billings Gazette. August 2, 2005. Archived from the original on 2009-01-01. - http://www.choren.com[permanent dead link] Choren official web site - Fairley, Peter. Growing Biofuels – New production methods could transform the niche technology. MIT Technology Review November 23, 2005 - Inderwildi, Oliver R.; Jenkins, Stephen J.; King, David A. (2008). "Mechanistic Studies of Hydrocarbon Combustion and Synthesis on Noble Metals". Angewandte Chemie International Edition. 47 (28): 5253–5. doi:10.1002/anie.200800685. PMID 18528839. - Zamorano, Marti (2006-12-22). "B-52 synthetic fuel testing: Center commander pilots first Air Force B-52 flight using solely synthetic fuel blend in all eight engines". Aerotech News and Review. - "C-17 flight uses synthetic fuel blend". 2007-10-25. Retrieved 2008-02-07. - Dorner, Robert; Dennis R. Hardy; Frederick W. Williams; Heather D. Willauer (2010). "Heterogeneous catalytic CO2 conversion to value-added hydrocarbons". Energy Environ. Sci. 3: 884–890. doi:10.1039/C001514H. - Dorner, Robert. "Catalytic Support for use in Carbon Dioxide Hydrogenation Reactions". 
- Unruh, Dominik; Pabst, Kyra; Schaub, Georg (2010-04-15). "Fischer−Tropsch Synfuels from Biomass: Maximizing Carbon Efficiency and Hydrocarbon Yield". Energy & Fuels. 24 (4): 2634–2641. doi:10.1021/ef9009185. ISSN 0887-0624. - de Klerk 2011 - Pearce, Ben K. D.; Pudritz, Ralph E. (2015). "Seeding the Pregenetic Earth: Meteoritic Abundances of Nucleobases and Potential Reaction Pathways". The Astrophysical Journal. 807 (1): 85. arXiv:1505.01465. Bibcode:2015ApJ...807...85P. doi:10.1088/0004-637X/807/1/85. - de Klerk, Arno (2011). Fischer–Tropsch refining (1st ed.). Weinheim, Germany: Wiley-VCH. ISBN 9783527326051. - de Klerk, Arno; Furimsky, Edward (15 Dec 2010). Catalysis in the refining of Fischer–Tropsch syncrude. Cambridge: Royal Society of Chemistry. doi:10.1039/9781849732017. - Fischer–Tropsch archives - Fischer–Tropsch fuels from coal and biomass - Abiogenic gas debate (AAPG Explorer Nov. 2002) - Gas origin theories to be studied (AAPG Explorer Nov. 2002) - Unconventional ideas about unconventional gas (Society of Petroleum Engineers) - Process of synthesis of liquid hydrocarbons – Great Britain patent GB309002 – Hermann Plauson - Clean diesel from coal by Kevin Bullis - Implementing the “Hydrogen Economy” with Synfuels (pdf) - Carbon-to-liquids research - Effect of alkali metals on cobalt catalysts
Math 253/201 Worksheet - Addition and Multiplication Tables for Signed Numbers
In this addition and multiplication learning exercise, students compute 242 addition and multiplication problems. Students use a table to multiply and add integers from -5 to +5. (4th - 6th Math)

Related resources:
- Saxon Math Intermediate 5 - Student Edition: Expand your resource library with this collection of Saxon math materials. Covering a wide range of topics from basic arithmetic and place value, to converting between fractions, decimals, and percents, these example problems and skills... (4th - 6th Math)
- Integers: Addition and Subtraction: Young mathematicians construct their own understanding of integers with an inquiry-based math lesson. Using colored chips to represent positive and negative numbers, children model a series of addition and subtraction problems as they... (5th - 8th Math)
- Adding Mixed Numbers (Unlike Denominators): Mix things up in your elementary math class with a series of problem-solving exercises. Presented with a series of mixed number word problems, young mathematicians are asked to solve them by using either visual fraction models or... (3rd - 6th Math)
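As a quick illustration of what the worksheet covers (a sketch of our own, not part of the original resource), the Python snippet below prints the two 11 × 11 tables, addition and multiplication of the integers −5 through +5, which together hold the 2 × 121 = 242 entries the exercise mentions.

```python
# Minimal sketch: print addition and multiplication tables for integers -5..+5,
# the kind of table the worksheet above is built around.

def print_table(op, symbol: str, lo: int = -5, hi: int = 5) -> None:
    values = range(lo, hi + 1)
    print(f"{symbol:>4} " + "".join(f"{v:>4}" for v in values))
    for a in values:
        print(f"{a:>4} " + "".join(f"{op(a, b):>4}" for b in values))
    print()

if __name__ == "__main__":
    print_table(lambda a, b: a + b, "+")  # 121 addition facts
    print_table(lambda a, b: a * b, "x")  # 121 multiplication facts
```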
15-1. Introduction. Shelter is anything that protects a survivor from the environmental hazards. The information in this chapter describes how the environment influences shelter site selection and factors which survivors must consider before constructing an adequate shelter. The techniques and procedures for constructing shelters for various types of protection are also presented. 15-2. Shelter Considerations. The location and type of shelter built by survivors vary with each survival situation. There are many things to consider when picking a site. Survivors should consider the time and energy required to establish an adequate camp, weather conditions, life forms (human, plant, and animal), terrain, and time of day. Every effort should be made to use as little energy as possible and yet attain maximum protection from the environment. a. Time. Late afternoon is not the best time to look for a site which will meet that day's shelter requirements. If survivors wait until the last minute, they may be forced to use poor materials in unfavorable conditions. They must constantly be thinking of ways to satisfy their needs for protection from b. Weather. Weather conditions are a key consideration when selecting a shelter site. Failure to consider the weather could have disastrous results. Some major weather factors which can influence the survivor's choice of shelter type and site selection are temperature, wind, and precipitation. (1) Temperature. Temperatures can vary considerably within a given area. Situating a campsite in low areas such as a valley in cold regions can expose survivors to low night temperatures and windchill factors. Colder temperatures are found along valley floors which are sometimes referred to as "cold air sumps." It may be advantageous to situate campsites to take advantage of the Sun. Survivors could place their shelters in open areas during the colder months for added warmth, and in shaded areas for protection from the Sun during periods of hotter weather. In some areas a compromise may have to be made. For example, in many deserts the daytime temperatures can be very high while low temperatures at night can turn water to ice. Protection from both heat and cold are needed in these areas. Shelter type and location should be chosen to provide protection from the existing temperature conditions. (2) Wind. Wind can be either an advantage or a disadvantage depending upon the temperature of the area and the velocity of the wind. During the summer or on warm days, survivors can take advantage of the cool breezes and protection the wind provides from insects by locating their camps on knolls or spits of land. Conversely, wind can become an annoyance or even a hazard as blowing sand, dust, or snow can cause skin and eye irritation and damage to clothing and equipment. On cold days or during winter months, survivors should seek shelter sites which are protected from the effects of windchill and drifting (3) Precipitation. The many forms of precipitation (rain, sleet, hail, or snow) can also present problems for survivors. Shelter sites should be out of major drainages and other low areas to provide protection from flash floods or mud slides resulting from heavy rains. Snow can also be a great danger if shelters are placed in potential avalanche areas. c. Life Forms. All life forms (plant, human, and animal) must be considered when selecting the campsite and the type of shelter that will be used. 
The "human" factor may mean the enemy or other groups from whom survivors wish to remain undetected. Information regarding this aspect of shelters and shelter site selection is in part nine of this regulation (Evasion). For a shelter to be adequate, certain factors must be considered, especially if extended survival is expected. (1) Insect life can cause personal discomfort, disease, and injury. By locating shelters on knolls, ridges, or any other area that has a breeze or steady wind, survivors can reduce the number of flying insects in their area. Staying away from standing water sources will help to avoid mosquitoes, bees, wasps, and hornets. Ants can be a major problem; some species will vigorously defend their territories with painful stings or bites or particularly distressing pungent odors. (2) Large and small animals can also be a problem, especially if the camp is situated near their trails or waterholes. (3) Dead trees that are standing, and trees with dead branches should be avoided. Wind may cause them to fall, causing injuries or death. Poisonous plants, such as poison oak or poison ivy, must also be avoided when locating a d. Terrain. Terrain hazards may not be as apparent as weather and animal life hazards, but they can be many times more dangerous. Avalanche, rock, dry streambeds, or mud-slide areas should be avoided. These areas can be recognized by either a clear path or a path of secondary vegetation, such as 1- to 15-foot tall vegetation or other new growth which extends from the top to the bottom of a hill or mountain. Survivors should not choose shelter sites at the bottom of steep slopes which may be prone to slides. Likewise, there is a danger in camping at the bottom of steep scree or talus slopes. Additionally, rock overhang must be checked for safety before using it as a shelter. a. Four prerequisites must be satisfied when selecting a shelter location. (1) The first is being near water, food, fuel, and a signal or recovery (2) The second is that the area be safe, providing natural protection from (3) The third is that sufficient materials be available to construct the shelter. In some cases, the "shelter" may already be present. Survivors seriously limit themselves if they assume shelters must be a fabricated framework having predetermined dimensions and a cover of parachute material or a signal paulin. More appropriately, survivors should consider using sheltered places already in existence in the immediate area. This does not rule out shelters with a fabricated framework and parachute or other manufactured material covering; it simply enlarges the scope of what can be used as a survival shelter. (4) Finally, the area chosen must be both large enough and level enough for the survivor to lie down. Personal comfort is an important fundamental for survivors to consider. An adequate shelter provides physical and mental well-being for sound rest. Adequate rest is extremely vital if survivors are to make sound decisions. Their need for rest becomes more critical as time passes and rescue or return is delayed. Before actually constructing a shelter, survivors must determine the specific purpose of the shelter. The following factors influence the type of shelter to be fabricated. (a) Rain or other precipitation. (e) Available materials nearby (manufactured or natural). (f) Length of expected stay. (g) Enemy presence in the area-evasion "shelters" are covered in part nine of the regulation (Evasion). (h) Number and physical condition of survivors. b. 
If possible, survivors should try to find a shelter which needs little work to be adequate. Using what is already there, so that complete construction of a shelter is not necessary, saves time and energy. For example, rock overhangs, caves, large crevices, fallen logs, root buttresses, or snow banks can all be modified to provide adequate shelter. Modifications may include adding snow blocks to finish off an existing tree well shelter, increasing the insulation of the shelter by using vegetation or parachute material, etc., or building a reflector fire in front of a rock overhang or cave. Survivors must consider the amount of energy required to build the shelter. It is not really wise to spend a great deal of time and energy in constructing a shelter if nature has provided a natural shelter nearby which will satisfy the survivor's needs. See Figure 15-1 for examples of naturally occurring shelters. Figure 15-1. Natural c. The size limitations of a shelter are important only if there is either a lack of material on hand or if it is cold. Otherwise, the shelter should be large enough to be comfortable yet not so large as to cause an excessive amount of work. Any shelter, naturally occurring or otherwise, in which a fire is to be built must have a ventilation system which will provide fresh air and allow smoke and carbon monoxide to escape. Even if a fire does not produce visible smoke (such as heat tabs), the shelter must still be vented. See Figure 15-27 for placement of ventilation holes in a snow cave. If a fire is to be placed outside the shelter, the opening of the shelter should be placed 90 degrees to the prevailing wind. This will reduce the chances of sparks and smoke being blown into the shelter if the wind should reverse direction in the morning and evening. This frequently occurs in mountainous areas. The best fire to shelter distance is approximately 3 feet. One place where it would not be wise to build a fire is near the aircraft wreckage, especially if it is being used as a shelter. The possibility of igniting spilled lubricants or fuels is great. Survivors may decide instead to use materials from the aircraft to add to a shelter located a safe distance from the crash site. 15-4. Immediate Action Shelters. The first type of shelter that survivors may consider using, or the first type they may be forced to use, is an immediate action shelter. An immediate action shelter is one which can be erected quickly with minimum effort; for example, raft, aircraft parts, parachutes, paulin, and plastic bag. Natural formations can also shield survivors from the elements immediately, to include overhanging ledges, fallen logs, caves, and tree wells (Figure 15-2). It isn't necessary to be concerned with exact shelter dimensions. Survivors should remember that if shelter is needed, use an existing shelter if at all possible. They should improvise on natural shelters or construct new shelters only if necessary. Regardless of type, the shelter must provide whatever protection is needed and, with a little ingenuity, it should be possible for survivors to protect themselves and do so quickly. In many instances, the immediate action shelters may have to serve as permanent shelters for aircrew members. For example, many aircrew members fly without parachutes, large cutting implements (axes), and entrenching tools; therefore, multiperson liferafts may be the only immediate or long-term shelter available. 
In this situation, multiperson liferafts must be deployed in the quickest manner possible to ensure maximum advantages are attained from the following shelter a. Set up in areas which afford maximum protection from precipitation and wind and use the basic shelter principle in paragraphs 15-2 and 15-3. Immediate Action Shelters. b. Anchor the raft for retention during high winds. c. Use additional boughs, grasses, etc., for ground insulation. 15-5. Improvised Shelters. Shelters of this type should be easy to construct and (or) dismantle in a short period of time. However, these shelters usually require more time to construct then an immediate action shelter. For this reason, survivors should only consider this type of shelter when they aren't immediately concerned with getting out of the elements. Shelters of this type include the following: a. The "A frame" design is adaptable to all environments as it can be easily modified; for example, tropical para-hammock, temperate area "A frame," arctic thermal "A frame," and fighter trench. b. Simple shade shelter; these are useful in dry areas. c. Various paratepees. d. Snow shelters; includes tree-pit shelters. e. All other variations of the above shelter types; sod shelters, etc. 15-6. Shelters for Warm Temperature Areas: a. If survivors are to use parachute material, they should remember that "pitch and tightness" apply to shelters designed to shed rain or snow. Parachute material is porous and will not shed moisture unless it is stretched tightly at an angle of sufficient pitch which will encourage run-off instead of penetration. An angle of 40 to 60 degrees is recommended for the "pitch" of the shelter. The material stretched over the framework should be wrinkle-free and tight. Survivors should not touch the material when water is running over it as this will break the surface tension at that point and allow water to drip into the shelter. Two layers of parachute material, 4 to 6 inches apart, will create a more effective water repellent covering. Even during hard rain, the outer layer only lets a mist penetrate if it is pulled tight. The inner layer will then channel off any moisture which may penetrate. This layering of parachute material also creates a dead-air space that covers the shelter. This is especially beneficial in cold areas when the shelter is enclosed. Adequate insulation can also be provided by boughs, aircraft parts, snow, etc. These will be discussed in more depth in the area of cold climate shelters. A double layering of parachute material helps to trap body heat, radiating heat from the Earth's surface, and other heating sources. b. The first step is deciding the type of shelter required. No matter which shelter is selected, the building or improvising process should be planned and orderly, following proven procedures and techniques. The second step is to select, collect, and prepare all materials needed before the actual construction; this includes framework, covering, bedding, or insulation, and implements used to secure the shelter ("dead-men," lines, stakes, (1) For shelters that use a wooden framework, the poles or wood selected should have all the rough edges and stubs removed. Not only will this reduce the chances of the parachute fabric being ripped, but it will eliminate the chances of injury to survivors. (2) On the outer side of a tree selected as natural shelter, some or all of the branches may be left in place as they will make a good support structure for the rest of the shelter parts. 
(3) In addition to the parachute, there are many other materials which can be used as framework coverings. Some of the following are both framework and covering all in one: (a) Bark peeled off dead trees. (b) Boughs cut off trees. (c) Bamboo, palm, grasses, and other vegetation cut or woven into desired (4) If parachute material is to be used alone or in combination with natural materials, it must be changed slightly. Survivors should remove all of the lines from the parachute and then cut it to size. This will eliminate bunching and wrinkling and reduce leakage. c. The third step in the process of shelter construction is site preparation. This includes brushing away rocks and twigs from the sleeping area and cutting back overhanging vegetation. d. The fourth step is to actually construct the shelter, beginning with the framework. The framework is very important. It must be strong enough to support the weight of the covering and precipitation buildup of snow. It must also be sturdy enough to resist strong wind gusts. (1) Construct the framework in one of two ways. For natural shelters, branches may be securely placed against trees or other natural objects. For parachute shelters, poles may be lashed to trees or to other poles. The support poles or branches can then be layed and (or) attached depending on (2) The pitch of the shelter is determined by the framework. A 60-degree pitch is optimum for shedding precipitation and providing shelter room. (3) The size of the shelter is controlled by the framework. The shelter should be large enough for survivors to sit up, with adequate room to lie down and to store all personal equipment. (4) After the basic framework has been completed, survivors can apply and secure the framework covering. The care and techniques used to apply the covering will determine the effectiveness of the shelter in shedding (5) When using parachute material on shelters, survivors should remove all suspension line from the material. (Excess line can be used for lashing, sewing, etc.) Next, stretch the center seam tight; then work from the back of the shelter to the front, alternating sides and securing the material to stakes or framework by using buttons and lines. When stretching the material tight, survivors should pull the material 90 degrees to the wrinkles. If material is not stretched tight, any moisture will pool in the wrinkles and leak into the shelter. (6) If natural materials are to be used for the covering, the shingle method should be used. Starting at the bottom and working toward the top of the shelter, the bottom of each piece should overlap the top of the preceding piece. This will allow water to drain off. The material should be placed on the shelter in sufficient quantity so that survivors in the shelter cannot see 15-7. Maintenance and Improvements. Once a shelter is constructed, it must be maintained. Additional modifications may make the shelter more effective and comfortable. Indian lacing (lacing the front of the shelter to the bipod) will tighten the shelter. A door may help block the wind and keep insects out. Other modifications may include a fire reflector, porch or work area, or another whole addition such as an opposing lean-to. 15-8. Construction of a. A-Frame. The following is one way to build an A-frame shelter in a warm temperate environment using parachute material for the covering. There are as many variations of this shelter as there are builders. 
The procedures here will, if followed carefully, result in the completion of a safe shelter that will meet survivors' needs. For an example of this and other A-frame shelters, see Figure Figure 15-3. A-Frame (1) Materials Needed: (a) One 12 to 18 foot long sturdy ridge pole with all projections cleaned (b) Two bipod poles, approximately 7 feet long. (c) Parachute material, normally 5 or 6 gores. (d) Suspension lines. (e) "Buttons," small objects placed behind gathers of material to provide a secure way of affixing suspension line to the parachute (f) Approximately 14 stakes, approximately 10 inches long. (2) Assembling the Framework: (a) Lash (See chapter 17 - Equipment.) the two bipod poles together at (b) Place the ridge pole, with the large end on the ground, into the bipod formed by the poles and secure with a square lash. (c) The bipod structure should be 90 degrees to the ridge pole and the bipod poles should be spread out to an approximate equilateral triangle of a 60-degree pitch. A piece of line can be used to measure this. (3) Application of Fabric: (a) Tie off about 2 feet of the apex in a knot and tuck this under the butt end of the ridge pole. Use half hitches and clove hitches to secure the material to the base of the pole. (b) Place the center radial seam of the parachute piece (or the center of the fabric) on the ridge pole. After pulling the material taut, use half hitches and clove hitches to secure the fabric to the front of the ridge (c) Scribe or draw a line on the ground from the butt of the ridge pole to each of bipod poles. Stake the fabric down, starting at the rear of the shelter and alternately staking from side to side to the shelter front. Use a sufficient number of stakes to ensure the parachute material is (d) Stakes should be slanted or inclined away from the direction of pull. When tying off with a clove hitch, the line should pass in front of the stake first and then pass under itself to allow the button and line to be pulled 90 degrees to the wrinkle. (e) Indian lacing is the sewing or lacing of the lower lateral band with inner core or line which is secured to the bipod poles. This will remove the remaining wrinkles and further tighten the material. (f) A rain fly, bed, and other refinements can now be added. (1) Materials Needed: (a) A sturdy, smooth ridge pole (longer than the builder's body) long enough to span the distance between two sturdy trees. (b) Support poles, 10 feet long. (c) Stakes, suspension lines, and buttons. (d) Parachute material (minimum of four gores). (2) Assembling the Framework: (a) Lash the ridge pole (between two suitable trees) about chest or (b) Lay the roof support poles on the ridge pole so the roof support poles and the ground are at approximately a 60-degree angle. Lash the roof support poles to the ridge pole. (3) Application of Fabric: (a) Place the middle seam of the fabric on the middle support pole with lower lateral band along the ridge pole. (b) Tie-off the middle and both sides of the lower lateral band approximately 8 to 10 inches from the ridge pole. (c) Stake the middle of the rear of the shelter first, then alternate from side to side. (d) The stakes that go up the sides to the front should point to the front of the shelter. (e) Pull the lower lateral band closer to the ridge pole by Indian (f) Add bed and other refinements (reflector fire, bed logs, rain fly, etc.). See Figure 15-4 for c. Paratepee, O-Pole. The paratepee is an excellent shelter for protection from wind, rain, cold, and insects. 
Cooking, eating, sleeping, resting, signaling, and washing can all be done without going outdoors. The paratepee, whether 9-pole, 1-pole, or no-pole, is the only improvised shelter that provides adequate ventilation to build an inside fire. With a small fire inside, the shelter also serves as a signal at night. (1) Materials Needed: (a) Suspension line. (b) Parachute material, normally 14 gores are suitable. - 1. Spread out the 14-gore section of parachute and cut off all lines at the lower lateral band, leaving about 18 inches of line attached. All other suspension lines should be stripped from the parachute. -2. Sew two smoke flaps, made from two large panels of parachute material, at the apex of the 14-gore section on the outside seams. Attach suspension line with a bowline in the end to each smoke flap. The ends of the smoke flap poles will be inserted in these (see Figure (d) Although any number of poles may be used, 11 poles, smoothed off, each about 20 feet long, will normally provide adequate support. (2) Assembling the Framework. (Assume 11 poles are used. Adjust instructions if different numbers are used.) (a) Lay three poles on the ground with the butts even. Stretch the canopy along the poles. The lower lateral band should be 4 to 6 inches from the bottoms of the poles before the stretching takes place. Mark one of the poles at the apex point. (b) Lash the three poles together, 5 to 10 inches above the marked area. (A shear lash is effective for this purpose.) These poles will form the tripod (Figure 15-5). (c) Scribe a circle approximately 12 feet in diameter in the shelter area and set the tripod so the butts of the poles are evenly spaced on the circle. Five of the remaining eight poles should be placed so the butts are evenly spaced around the 12-foot circle and the tops are laid in the apex of the tripod to form the smallest apex possible (Figure (3) Application of Fabric: (a) Stretch the parachute material along the tie pole. Using the suspension line attached to the middle radial seam, tie the lower lateral band to the tie pole 6 inches from the butt end. Stretch the parachute material along the middle radial seam and tie it to the tie pole using the suspension line at the apex. Lay the tie pole onto the shelter frame with the butt along the 12-foot circle and the top in the apex formed by the other poles. The tie pole should be placed directly opposite the proposed (b) Move the canopy material (both sides of it) from the tie pole around the framework and tie the lower lateral band together and stake it at the door. The front can now be sewn or pegged closed, leaving 3 to 4 feet for a door. (A sewing "ladder" can be made by lashing steps up the front of the tepee (Figure 15-5). Figure 15-5. 9-Pole (c) Enter the shelter and move the butts of the poles outward to form a more perfect circle and until the fabric is relatively tight and smooth. (d) Tighten the fabric and remove remaining wrinkles. Start staking directly opposite the door, and alternate from side to side, pulling the material down and to the front of the shelter. Use clove hitches or similar knots to secure material to the stakes. (e) Insert the final two poles into the loops on the smoke flaps. The paratepee is now finished (Figure 15-5). (f) One improvement which could be made to the paratepee is the installation of a liner. This will allow a draft for a fire without making the occupants cold, since there may be a slight gap between the lower lateral band and the ground. 
A liner can be affixed to the inside of the paratepee by taking the remaining 14-gore piece of material and firmly staking the lower lateral band directly to the ground all the way around, leaving room for the door. The area where the liner and door meet may be sewn up. The rest of the material is brought up the inside walls and affixed to the poles with buttons (Figure 15-5). d. Paratepee, 1 -Pole: (1) Materials Needed: (a) Normally use a 14-gore section of canopy, strip the shroud lines leaving 16- to 18-inch lengths at the lower lateral band. (c) Inner core and needle. (2) Construction of the 1 -Pole Paratepee: (a) Select a shelter site and scribe a circle about 14 feet in diameter on the ground. (b) The parachute material is staked to the ground using the lines attached at the lower lateral band. After deciding where the shelter door will be located, stake the first line (from the lower band) down securely. Proceed around the scribed line and stake down all the lines from the lateral band, making sure the parachute material is stretched taut before the line is staked down. (c) Once all the lines are staked down, loosely attach the center pole, and, through trial and error, determine the point at which the parachute material will be pulled tight once the center pole is placed upright - securely attach the material at this point. (d) Using a suspension line (or innercore), sew the end gores together leaving 3 or 4 feet for a door (Figure Figure 15-6. 1-Pole e. Paratepee, No-Pole. For this shelter, the 14 gores of material are prepared the same way. A line is attached to the apex and thrown over a tree limb, etc., and tied off. The lower lateral band is then staked down starting opposite the door around a 12- to 14-foot circle. (See Figure 15-7 for paratepee example.) Figure 15-7. No-Pole f. Sod Shelter. A framework covered with sod provides a shelter which is warm in cold weather and one that is easily made waterproof and insect-proof in the summer. The framework for a sod shelter must be strong, and it can be made of driftwood, poles, willow, etc. (Some natives use whale bones.) Sod, with a heavy growth of grass or weeds, should be used since the roots tend to hold the soil together. Cutting about 2 inches of soil along with the grass is sufficient. The size of the blocks are determined by the strength of the individual. A sod house is strong and fireproof. 15-9. Shelter for Tropical Areas. Basic considerations for shelter in tropical areas are as follows: a. In tropical areas, especially moist tropical areas, the major environmental factors influencing both site selection and shelter types are: (1) Moisture and dampness. (3) Wet ground. (5) Mud-slide areas. (6) Dead standing trees and limbs. b. Survivors should establish a campsite on a knoll or high spot in an open area well back from any swamps or marshy areas. The ground in these areas is drier, and there may be a breeze which will result in fewer insects. c. Underbrush and dead vegetation should be cleared from the shelter site. Crawling insects will not be able to approach survivors as easily due to lack of d. A thick bamboo clump or matted canopy of vines for cover reflects the smoke from the campfire and discourages insects. This cover will also keep the extremely heavy early morning dew off the bedding. e. The easiest improvised shelter is made by draping a parachute, tarpaulin, or poncho over a rope or vine stretched between two trees. 
One end of the canopy should be kept higher than the other; insects are discouraged by few openings in shelters and smudge fires. A hammock made from parachute material will keep the survivor off the ground and discourage ants, spiders, leeches, scorpions, and f. In the wet jungle, survivors need shelter from dampness. If they stay with the aircraft, it should be used for shelter. They should try to make it mosquito-proof by covering openings with netting or parachute cloth. g. A good rain shelter can be made by constructing an A-type framework and shingling it with a good thickness of palm or other broad leaf plants, pieces of bark, and mats of grass (Figure Banana Leaf A-Frame. h. Nights are cold in some mountainous tropical areas. Survivors should try to stay out of the wind and build a fire. Reflecting the heat off a rock pile or other barrier is a good idea. Some natural materials which can be used in the shelters are green wood (dead wood may be too rotten), bamboo, and palm leaves. Vines can be used in place of suspension line for thatching roofs or floors, etc. Banana plant sections can be separated from the banana plant and fashioned to provide a mattress effect. Specific Shelters for Tropical Environments: a. Raised Platform Shelter (Figure 15-9). This shelter has many variations. One example is four trees or vertical poles in a rectangular pattern which is a little longer and a little wider than the survivor, keeping in mind the survivor will also need protection for equipment. Two long, sturdy poles are then square lashed between the trees or vertical poles, one on each side of the intended shelter. Cross pieces can then be secured across the two horizontal poles at 6- to 12-inch intervals. This forms the platform on which a natural mattress may be constructed. Parachute material can be used as an insect net and a roof can be built over the structure using A-frame building techniques. The roof should be waterproofed with thatching laid bottom to top in a thick shingle fashion. See Figure 15-9 for examples of this and other platform shelters. These shelters can also be built using three trees in a triangular pattern. At the foot of the shelter, two poles are joined to one tree. Raised Platform Shelter. b. Variation of Platform Shelter. A variation of the platform-type shelter is the paraplatform. A quick and comfortable bed is made by simply wrapping material around the two "frame" poles. Another method is to roll poles in the material in the same manner as for an improvised stretcher (Figure 15-10. Raised Paraplatform Shelter. c. Hammocks. Various parahammocks can also be made. They are more involved than a simple parachute wrapped framework and not quite as comfortable (Figure
Organic Chemistry - Some Basic Principles and Techniques
Complete Chemistry for Engineering and Medical Entrance Exam Preparation. (State Board | CBSE Board | ICSE Board | IIT JEE Main | Advanced | BITSAT | SAT | NEET etc.)
Updated on Sep 2023
Language - English
In this unit, we have learnt some basic concepts in the structure and reactivity of organic compounds, which are formed due to covalent bonding. The nature of the covalent bonding in organic compounds can be described in terms of the orbital hybridisation concept, according to which carbon can have sp3, sp2 and sp hybridised orbitals. The sp3, sp2 and sp hybridised carbons are found in compounds like methane, ethene and ethyne respectively. The tetrahedral shape of methane, planar shape of ethene and linear shape of ethyne can be understood on the basis of this concept. An sp3 hybrid orbital can overlap with the 1s orbital of hydrogen to give a carbon–hydrogen (C–H) single bond (sigma, σ bond). Overlap of an sp2 orbital of one carbon with an sp2 orbital of another results in the formation of a carbon–carbon σ bond. The unhybridised p orbitals on two adjacent carbons can undergo lateral (side-by-side) overlap to give a pi (π) bond. Organic compounds can be represented by various structural formulas. The three-dimensional representation of organic compounds on paper can be drawn by the wedge-and-dash formula. Organic compounds can be classified on the basis of their structure or the functional groups they contain. A functional group is an atom or group of atoms bonded together in a unique fashion that determines the physical and chemical properties of the compound. The naming of organic compounds is carried out by following a set of rules laid down by the International Union of Pure and Applied Chemistry (IUPAC). In IUPAC nomenclature, the names are correlated with the structure in such a way that the reader can deduce the structure from the name. Organic reaction mechanism concepts are based on the structure of the substrate molecule, fission of a covalent bond, the attacking reagents, the electron displacement effects and the conditions of the reaction. These organic reactions involve breaking and making of covalent bonds. A covalent bond may be cleaved in a heterolytic or homolytic fashion. A heterolytic cleavage yields carbocations or carbanions, while a homolytic cleavage gives free radicals as reactive intermediates. Reactions proceeding through heterolytic cleavage involve complementary pairs of reactive species. These are an electron-pair donor, known as a nucleophile, and an electron-pair acceptor, known as an electrophile. The inductive, resonance, electromeric and hyperconjugation effects may help in the polarisation of a bond, making certain carbon or other atom positions places of low or high electron density. Organic reactions can be broadly classified into the following types: substitution, addition, elimination and rearrangement reactions. Purification, qualitative and quantitative analysis of organic compounds are carried out for determining their structures. The methods of purification, namely sublimation, distillation and differential extraction, are based on the difference in one or more physical properties. Chromatography is a useful technique for the separation, identification and purification of compounds. It is classified into two categories: adsorption and partition chromatography. Adsorption chromatography is based on differential adsorption of the various components of a mixture on an adsorbent.
Partition chromatography involves continuous partitioning of the components of a mixture between stationary and mobile phases. After getting the compound in a pure form, its qualitative analysis is carried out for detection of the elements present in it. Nitrogen, sulphur, halogens and phosphorus are detected by Lassaigne's test. Carbon and hydrogen are estimated by determining the amounts of carbon dioxide and water produced. Nitrogen is estimated by the Dumas or Kjeldahl method and halogens by the Carius method. Sulphur and phosphorus are estimated by oxidising them to sulphuric and phosphoric acids respectively. The percentage of oxygen is usually determined by the difference between the total percentage (100) and the sum of the percentages of all other elements present. (A short numerical sketch of this arithmetic appears at the end of this course description.)
What will you learn in this course:
- Understand reasons for tetravalence of carbon and shapes of organic molecules.
- Write structures of organic molecules in various ways.
- Classify the organic compounds.
- Name the compounds according to the IUPAC system of nomenclature and also derive their structures from the given names.
- Understand the concept of organic reaction mechanism.
- Explain the influence of electronic displacements on structure and reactivity of organic compounds.
- Recognise the types of organic reactions.
- Learn the techniques of purification of organic compounds.
- Write the chemical reactions involved in the qualitative analysis of organic compounds.
- Understand the principles involved in quantitative analysis of organic compounds.
What are the prerequisites for this course?
- Basic understanding of chemistry and math
Check out the detailed breakdown of what's inside the course
Some Basic Principles & Techniques
- Representation of organic compound 13:47
- Classification of Organic Compound on the basis of structure 12:57
- Classification of Organic Compounds on the Basis of F.G. - Part-1 11:53
- Classification of Organic Compounds on the Basis of F.G. - Part-2 12:12
- Classification of Organic Compounds on the Basis of F.G. - Part-3 08:01
- Classification of Organic Compounds on the Basis of F.G. - Part-4 11:21
- Classification of Organic Compounds on the Basis of F.G. - Part-5 09:58
- Homologous Series 12:26
- Nomenclature part 1 14:23
- Nomenclature part 2 10:01
- Nomenclature part 3 07:37
- Nomenclature part 4 10:29
- Nomenclature part 5 07:12
- Nomenclature of alicyclic compounds 11:52
- Nomenclature of complex substituents 10:17
- Iupac name 09:57
- Nomenclature 1 14:28
- Nomenclature 2 10:34
- Nomenclature 3 10:19
- Nomenclature 4 14:23
- Nomenclature 5 09:09
- Nomenclature of organic compounds having functional groups part-1 17:35
- Nomenclature Of Organic Compounds Having Functional Groups Part-2 08:59
- Nomenclature of ether 07:32
- Nomenclature of amines 12:57
- Nomenclature Of Aromatic Compounds Part-1 08:21
- Nomenclature Of Aromatic Compounds Part-2 08:17
- Nomenclature of aromatic compounds part-3 12:40
- Nomenclature of aromatic compounds part-4 07:49
- Nomenclature of aromatic compounds part-5 07:59
- Nomenclature Of Aromatic Compounds Part-6 05:22
- Isomerism 12:09
- Isomerism 2 09:15
- Types of organic reaction 14:06
- Bond-fission-part-1 13:00
- Bond-fission-part-2 07:42
- Types of reagent 12:07
- Inductive effect 15:30
- Electromeric-effect 11:11
- Resonance 13:10
- Rules of resonance 13:05
- Resonance effect 10:04
- Hyperconjugation part - 1 15:10
- Hyperconjugation part-2 14:47
- Distinction Between nucleophile and base 04:32
- Hydrocarbon Group part - 1 12:37
- Hydrocarbon Group part - 2 10:48
- Hydrocarbon Group part - 3 09:09
- Reaction intermediate part-1 16:42
- Reaction intermediate part-2 08:15
- Reaction Intermediate Part-3 15:12
- Reaction Intermediate Part-4 12:57
- Reaction intermediate part-5 06:54
- Types of carbon atoms 06:24
- Characteristics of Organic Compounds 12:00
- Relative stability of canonical forms 11:45
- Empirical and Molecular Formula 07:21
- Determination of Melting Point 09:29
- Crystalline and Amorphous 03:06
- Methods of purification 04:03
- Methods of purification of organic compounds solvent extraction 08:04
- Methods of purification of organic compounds sublimation 07:46
- Methods of purification of organic compounds simple distillation 17:12
- Methods of purification of organic compounds steam distillation 07:25
- Methods of purification of organic compounds fractional distillation 08:49
- Methods of purification of organic compounds distillation under reduced pressure 04:36
- Methods of purification of organic compounds fractional distillation use of fractionating column 13:47
- Methods of purification of organic compounds crystallization 11:01
- Methods of purification of organic compounds fractional crystallization 04:45
- Methods of purification of organic compounds chromatography introduction 08:14
- Methods of purification of organic compounds types of chromatography 05:35
- Methods of Purification of organic compounds thin layer chromatography 12:48
- Methods of purification of organic compounds partition chromatography 08:05
- Methods of purification of organic compounds column chromatography 11:31
- Methods of purification of organic compounds introduction 05:18
- Quantitative analysis 04:25
- Quantitative analysis part-1 10:19
- Quantitative analysis part-2 06:04
- Qualitative analysis of organic compounds part-1 08:24
- Qualitative analysis of organic compounds part-2 15:29
- Qualitative analysis of organic compounds part-3 05:08
- Qualitative analysis of organic compounds part-4 06:40
- Qualitative analysis of organic compounds part-5 06:44
- Qualitative analysis of organic compounds part-6 04:27
- Molecular formula from empirical formula 14:28
- Quantitative analysis of sulphur 06:52
- Quantitative analysis of oxygen 06:49
- Quantitative analysis of phosphorus part-1 06:27
- Quantitative analysis of phosphorus part-2 05:45
- Quantitative analysis of nitrogen 06:25
- Qualitative analysis of organic compounds detection of carbon and hydrogen 08:24
- Qualitative analysis of organic compounds detection of sulphur 05:08
- Qualitative analysis of organic compounds detection of phosphorous 06:44
- Qualitative analysis of organic compounds detection of nitrogen 15:29
- Qualitative analysis of organic compounds detection of both nitrogen and sulphur 04:27
- Qualitative analysis of organic compounds detection of halogen 06:40
Studi.Live is an Indian educational venture based in Mumbai. The founders of this portal have been in the education and technology domains for more than 20 years. Studi.Live currently offers online and live lectures for IITJEE and NEET preparation. The portal will eventually add courses for school students from 5th to 12th, Olympiads, Professional Courses, Commerce and Software Programming for all levels. The Studi.Live portal, as the name suggests, offers online live lectures in interactive mode. The site uses the highly safe and secure WEBEX platform for online lectures. Unlike most other edtech portals, Studi.Live is a collaborative and activity-based platform which strives to conform to the New Education Policy of the Government of India. The Studi.Live portal has been created with student needs in mind and with a focus on keeping students engaged in online studies. Online studies on any portal are known to create fatigue and drop-outs quickly. Hence, our portal has been designed to keep students engaged in self-directed activities on the portal – students can do offline activities and update them on the portal. The Studi.Live portal offers HD videos on lightboard with lots of images and animation to make the concepts engaging and simple to understand. Our objective has been to provide a one-stop platform where students can view videos, read books, attempt tests, participate in discussion forums, create Wiki, and attend live lectures. Studi.Live provides round-the-clock 24×7 support via email and online chat to resolve technical issues which students may face; for subject-matter doubts, we hold online sessions every week. Studi.Live has a huge repository of close to 50,000 minutes of recorded videos on lightboard and digital board for anytime, omnichannel viewing. There are more than 150 books in flip-format, more than 3 lakh questions, and hundreds of blogs to read from. What's more, the Studi.Live portal has a Level-up game where for every click and every keyboard stroke, students earn points (be it reading books, attempting tests, viewing videos, writing on forums, creating Wiki, or playing games) and compete to be on top of the ladder. By collecting Level-up game points, students can go up to 10 levels and transform themselves from a learning NINJA to an expert SAMURAI.
We have an open challenge for our students: if you collect more than 25,000 points on the Studi.Live portal, you have a great chance of clearing the IITJEE or NEET. Who else gives this promise? We know your potential and our strength. Use your certification to make a career change or to advance in your current career. Salaries are among the highest in the world. Our students work with the best.
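As a brief numerical illustration of the quantitative-analysis arithmetic summarised in the unit description above (finding oxygen by difference and converting mass percentages into an empirical formula), here is a minimal Python sketch. It is not part of the course material; the element percentages in the example and the simple whole-number rounding are illustrative assumptions.

```python
# Illustrative sketch only (not course material): oxygen by difference and
# empirical formula from elemental mass percentages.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def empirical_formula(percent_by_mass):
    """Turn elemental mass percentages into the simplest whole-number ratio."""
    # Oxygen is taken by difference from 100%, as described in the unit summary.
    percent_by_mass = dict(percent_by_mass, O=100.0 - sum(percent_by_mass.values()))
    # Moles of each element in a 100 g sample.
    moles = {el: pct / ATOMIC_MASS[el] for el, pct in percent_by_mass.items()}
    smallest = min(moles.values())
    # Naive rounding; real analyses may need to scale up ratios like 1.33 or 1.5.
    return {el: round(n / smallest) for el, n in moles.items()}

# Hypothetical analysis: 40.0% carbon and 6.7% hydrogen by mass.
# Oxygen is assigned the remaining 53.3%, giving the ratio C1 H2 O1 (i.e. CH2O).
print(empirical_formula({"C": 40.0, "H": 6.7}))
```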
The mathematical purpose of this lesson is to help students connect linear equations in two variables with both the context the equation represents and its graph. Students compare multiple graphs and identify their slopes and intercepts. They also identify equivalent linear equations, some written in standard form and others in slope-intercept form. Students also reason about the meaning of the slope and \(y\)-intercept of the graph of a linear equation in the situation it represents. Students think abstractly and quantitatively (MP2) by thinking about the slope and intercepts in context.
- Determine parts of equations (slope, y-intercept, x-intercept) and match to a graph.
- Identify and generate equations that will have the same graph.
- Let's match graphs and equations.
Print Formatted Materials
For access, consult one of our IM Certified Partners.
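As a small illustration of the ideas in this lesson, the sketch below (not part of the lesson materials) reads the slope, \(y\)-intercept, and \(x\)-intercept off an equation given in standard form \(ax + by = c\), and checks whether two standard-form equations describe the same graph by testing whether their coefficients are proportional. The specific equations used are only examples.

```python
from fractions import Fraction

def line_features(a, b, c):
    """For ax + by = c with b != 0, return (slope, y-intercept, x-intercept)."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    slope = -a / b                             # rewrite as y = (-a/b) x + c/b
    y_intercept = c / b                        # value of y when x = 0
    x_intercept = c / a if a != 0 else None    # value of x when y = 0 (none for horizontal lines)
    return slope, y_intercept, x_intercept

def same_graph(eq1, eq2):
    """Two standard-form equations have the same graph exactly when their
    coefficient triples (a, b, c) are nonzero multiples of each other."""
    (a1, b1, c1), (a2, b2, c2) = eq1, eq2
    return a1 * b2 == a2 * b1 and a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1

print(line_features(3, 4, 12))             # slope -3/4, y-intercept 3, x-intercept 4
print(same_graph((3, 4, 12), (6, 8, 24)))  # True: 6x + 8y = 24 is the same line
```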
By the end of this section, you will be able to: - Identify the historical factors that shaped the development of the Greek city-state - Describe the evolution of the political, economic, and social systems of Athens and Sparta - Discuss the alliances and hostilities among the Greek city-states during the Classical period - Identify the major accomplishments of Ancient Greek philosophy, literature, and art In the centuries following the collapse of the Bronze Age Mycenaean kingdoms around 1100 BCE, a dynamic new culture evolved in Iron Age Greece and the Aegean region. During this period, the Greek city-states developed innovative consensual governments. Free adult males participated in their own governance and voted to create laws and impose taxes. This system of government contrasted with the earlier monarchies of the ancient Near East, in which rulers claimed to govern their subjects through the will of the gods. The degree of political participation in the Greek city-states varied from monarchy and oligarchy, or government by a small group of wealthy elites, to democracy, literally “rule by the people,” a broader-based participation that eventually included both rich and poor adult males. These systems influenced Ancient Roman and European political thought through the centuries. The Greek Classical period (500–323 BCE) witnessed constant warfare among rival city-states, yet it was marked by the creation of enduring works of literature and art that inspired centuries of European artists and writers. Greek philosophers also subjected the human condition and the natural world to rational analysis, rejecting traditional beliefs and sacred myths. The Greek Dark Ages (1100–800 BCE) persisted after the collapse of the Mycenaean civilization but began to recede around 800 BCE. From this point and for the next few centuries, Greece experienced a revival in which a unique and vibrant culture emerged and evolved into what we recognize today as Classical Greek civilization. This era, from 800 to 500 BCE, is called Archaic Greece after arche, Greek for “beginning.” The Greek renaissance was marked by rapid population growth and the organization of valleys and islands into independent city-states, each known as a polis (Greek for city-state). Towns arose around a hill fortress or acropolis to which inhabitants could flee in times of danger. Each polis had its own government and religious cults, and each built monumental temples for the gods, such as the temple of Hera, wife of Zeus and protector of marriage and the home, at the city-state of Argos. Though politically disunited, the Greeks, who began to refer to themselves as Hellenes after the mythical king Hellen, did share a common language and religion. The most famous of their sacred sites were Delphi, near Mount Parnassus in central Greece and seat of the oracle of Apollo, the god of prophecy, and Olympia in southern Greece, sacred to Zeus, who ruled the pantheon of gods at Mount Olympus (Figure 6.9). Beginning in 776 BCE, according to Aristotle, Greeks traveled to Olympia every four years to compete in athletic contests in Zeus’s honor, the origin of the Olympic Games. The Olympic Games Postponed a year because of the COVID-19 pandemic, the 2021 Games of the XXXII Olympiad in Japan included more than three hundred events in thirty-three sports, including new entries like skateboarding, rock climbing, and surfing. 
Modern games have been held since 1896, when the new International Olympic Committee started the tradition, but as the name suggests, the inspiration came from Ancient Greece. Athletic events in Ancient Greece were important displays of strength and endurance. There were contests at the sanctuaries at Delphi and Nemea (near Argos), but none was as renowned as the Olympic Games, held at the sanctuary in Olympia that was dedicated to Zeus. Contestants came from all over the Greek world, including Sicily and southern Italy. Unlike the skateboarding and surfing of modern games, the ancient games focused on skills necessary for war: running, jumping, throwing, and wrestling. Over time, sports that included horses, like chariot racing, were also incorporated. Such events were referenced in Homer’s Iliad, when the hero Achilles held athletic contests to honor his fallen comrade Patroclus and awarded prizes or athla (from which the word “athlete” is derived). The centerpiece of the ancient games was the two-hundred-yard sprint, or stadion, from which comes the modern word “stadium” (Figure 6.10). Unlike the modern games, where attendees pay great sums to watch athletes compete, admission to the ancient games was free—for men. Women were forbidden from watching and, if they dared to attend, could pay with their lives. Competitors were likely locals with proven abilities, though over time professional athletes came to dominate the sport. They could earn a good living from prizes and other rewards gained through their talent and celebrity, and their statues adorned the sanctuary at Olympia. The poet Pindar in the early fifth century BCE was renowned for composing songs to honor them when they returned home as victors. The Olympic Games continued to be celebrated until 393 CE, when they were halted during the reign of the Christian Roman emperor Theodosius. - Why might the organizers of the modern Olympic Games have named their contest after the ancient Greek version? - How are the ancient games similar to the modern Olympic Games? How are they different? The start of the Archaic period also witnessed the reemergence of specialization in Greek society. Greek artists became more sophisticated and skilled in their work. They often copied artistic styles from Egypt and Phoenicia, where Greek merchants were engaging in long-distance trade. At the site of Al-Mina, along the Mediterranean coast in Syria where historians believe the Phoenician alphabet was first transmitted to the Greeks, Greek and Phoenician merchants exchanged goods. Far to the west, on the island of Ischia off the west coast of Italy, Greeks were competing with Phoenician merchants for trade with local peoples, whose iron ore was in strong demand. Thanks to their contact and trade with the Phoenicians, Greeks adapted the Phoenician alphabet to their own language, making an important innovation by adding vowels (a, e, i, o, u). The eighth century BCE thus witnessed the return of literacy and the end of the Aegean world’s relative isolation after the interlude of the Greek Dark Ages. The eighth century BCE was also the period in which the epic poems the Iliad and the Odyssey were composed, traditionally attributed to the blind poet Homer. While historians debate whether Homer was a historical or a legendary figure, they agree the epics originated in the songs of oral poets in the Greek Dark Ages. In the eighth century BCE, using the Greek alphabet, scribes wrote these stories down for the first time. 
As the population expanded during the Archaic period, a shortage of farmland brought dramatic changes. Many Greeks in search of land to farm left their homes and founded colonies along the shores of the Black Sea and the northern Aegean, in North Africa at Cyrene in Libya, and in southern Gaul (modern France) at Massalia (Marseille). The largest number were on the island of Sicily and in southern Italy, the region the Greeks referred to as Magna Graecia or “Greater Greece.” When Greeks established a colony, it became an independent polis with its own laws. The free adult males of the community divided the colony’s land into equal lots. Thus, a new idea developed in the colonies that citizenship in a community was associated with equality and participation in the governing of the state. In the society of Archaic Greece, the elite landowners, or aristoi, traditionally controlled the government and the priesthoods in the city-states. But thanks to the new ideas from the colonies, the common people, or kakoi, began demanding land and a voice in the governing of the polis. They were able to gain leverage in these negotiations because city-states needed troops in their wars for control of farmland. The nobility relied on the wealthier commoners, who could afford to equip themselves with iron weapons and armor. In some city-states, the aristoi and the kakoi were not able to resolve their differences peaceably. In such cases, a man who had strong popular support in the city would seize power and rule over the city. The Greeks referred to such populist leaders as tyrants. In the sixth century BCE, the difficulties caused by the land shortage were relieved by the invention of coinage. A century before, adopting a practice of the kings of Lydia in western Asia Minor (Turkey), Athens stamped silver pieces with the image of an owl, a symbol of wisdom often associated with the goddess Athena (See Figure 6.11). Instead of weighing precious metals to use as currency or arguing over the value of bartered goods to trade, merchants could use coins as a simple medium of exchange. The agora, or place of assembly in each city-state, thus became a marketplace to buy and sell goods. In the sixth century BCE, this rise of a market economy stimulated economic growth as farmers, artisans, and merchants discovered stronger incentives to produce and procure more goods for profit. For example, farmers learned how to produce more food with the land they already possessed rather than always seeking more land. The economic growth of this period is reflected in the many new temples the Greek city-states constructed then. Sparta and Athens In the Archaic period, Athens and Sparta emerged as two of the most important of the many Greek city-states. Not only did their governments and cultures dominate the Greek world in the subsequent Classical period; they also fired the imaginations of Western cultures for centuries to come. Athens was the birthplace of democracy, whereas Sparta was an oligarchy headed by two kings. The Rise and Organization of Sparta Sparta in the eighth century BCE was a collection of five villages in Laconia, a mountain valley in the Peloponnese in southern Greece. Due to the shortage of farmland, the citizens (adult males) of these villages, the Spartiates, all served in the military and waged war on neighboring towns, forcing them to pay tribute. 
The Spartiates also appropriated farmland for themselves and enslaved the inhabitants of these lands, most famously the Messenians, who became known as the helots. Just as Greek colonists at this time divided land among themselves into equal lots, the Spartiates likewise divided the conquered land equally and assigned to each landowner a certain number of helot families to work it. Helots, unlike enslaved people in other parts of Greece, could not be bought or sold but remained on the land as forced laborers from generation to generation. In the seventh century BCE, Sparta conquered the land of Messene to its west and divided its farmland equally among the Spartiates. By the late sixth century BCE, the wealth from the rich agricultural land that Sparta then controlled had made it the most powerful state in the Peloponnese. Sparta also organized the city-states of this region and parts beyond into a system of alliances that historians refer to as the Peloponnesian League. Its members still had self-government and paid no tribute to Sparta, but all were expected to have the same friends and enemies as Sparta, which maintained its dominance in the league. Sparta also used its army to overthrow tyrants in the Peloponnesian city-states and restore political power to the aristoi. The Spartans were proud of their unique system of government, or constitution, which was a set of laws and traditional political practices rather than a single document. It was said to have been created by a great lawgiver named Lycurgus around 800 BCE, but modern historians view its development as an evolutionary process during the Archaic period rather than the work of a single person. Sparta had two hereditary kings drawn from rival royal families. Their powers were very limited, though both sat as permanent members of the Council of Elders and were priests in the state religion. On occasion, the Spartan kings also led armies into battle. The Assembly of Spartiates passed all laws and approved all treaties with the advice of the Council of Elders. This Assembly also elected five judges every year who administered the affairs of state, as well as the members of the Council of Elders. The unique element of Spartan culture was the agoge, its educational system. At the age of seven, boys were separated from their families and raised by the state. To teach them to live by their wits and courage, they were fed very little so they had to learn how to steal food to survive. At the age of twelve they began an even more severe regimen. They were not allowed clothes except a cloak in the wintertime, and they bathed just once a year (Figure 6.12). They also underwent ritual beatings intended to make them physically strong and hardened warriors. At the age of eighteen, young men began two years of intense military training. At the age of twenty, a young Spartan man’s education was complete. Women of the Spartiate class, before marrying in their mid-teens, also practiced a strict physical regimen, since they were expected to be as strong as their male relatives and husbands and even participate in defending the homeland. Spartan women enjoyed a reputation for independence, since they managed the farms while men were constantly training for or at war and often ran their family estates alone due to the early deaths of their soldier husbands. The state organized unmarried women into teams known as chorai (from which the term chorus is derived) that danced and sang at religious festivals. 
When a Spartiate man reached the age of thirty, he could marry, vote in the Assembly, and serve as a judge. Each Spartiate remained in the army reserve until the age of sixty, when he could finally retire from military service and became eligible for election to the Council of Elders. Spartan citizens were proud to devote their time to the service of the state in the military and government; they did not have to work the land or learn a trade since this work was done for them by commoners and helot subjects. The Rise and Organization of Athens Athens, like Sparta, developed its own system of government in the Archaic period. Uniquely large among Greek city-states, Athens had long enclosed all the land of Attica, which included several mountain valleys. It was able to eventually develop into a militarily powerful democratic state in which all adult male citizens could participate in government, though “citizenship” was a restricted concept, and because only males could participate, it was by nature a limited democracy. The roots of Athenian democracy are long and deep, however, and its democratic institutions evolved over centuries before reaching their fullest expression in the fifth century BCE. It was likely the growing prosperity of Athenians in the eighth century that had set Athens on this path. As more families became prosperous, they demanded greater say in the functioning of the city-state. By the seventh century BCE, Athens had an assembly allowing citizens (free adult males) to gather and discuss the affairs of the state. However, as the rising prosperity of Athenians stalled and economic hardship loomed by the end of the century, the durability of the fledgling democracy seemed in doubt. Attempts to solve the economic problems by adjusting the legal code, most notably by the legislator Draco (from whose name we get the modern term “draconian”), had little effect, though codifying the law in written form brought more clarity to the legal system. With the once-thriving middle class slipping into bankruptcy and sometimes slavery, civil war seemed inevitable. Disaster was avoided only with the appointment of Solon in 594 BCE to restore order. Solon came from a wealthy elite family, but he made it known that he would draft laws to benefit all Athenians, rich and poor. A poet, he used his songs to convey his ideas for these new laws (Figure 6.13). One of Solon’s first measures was to declare that all debts Athenians owed one another were forgiven. Solon also made it law that no Athenian could be sold into slavery for failure to repay a loan. These decrees did much to provide relief to farmers struggling with debt who could now return to work the land. Under Solon’s new laws, each of Athens’s four traditional tribes chose one hundred of its members by lot, including commoners, to sit in the new Council of Four Hundred and run the government. There were still magistrates, but now Solon created the jury courts. All Athenians could appeal the ruling of a magistrate in court and have their cases heard by a jury of fellow citizens. Solon also set up a hierarchal system in which citizens were eligible for positions in government based on wealth instead of hereditary privilege. Wealth was measured by the amount of grain and olive oil a citizen’s land could produce. Only the wealthiest could serve as a magistrate, sit on the Council, and attend the Assembly and jury courts. Citizens with less wealth could participate in all these activities but could not serve as magistrates. 
The poorest could only attend the Assembly and the jury courts. Solon’s reforms were not enough to end civil unrest, however. By 545 BCE, a relative of his named Pisistratus had seized power by force with his own private army and ruled as a tyrant with broad popular support. Pisistratus was reportedly a benevolent despot and very popular. He kept Solon’s reforms largely in place, and Athenians became accustomed to serving in Solon’s Council and in jury courts. They were actively engaged in self-government, thus setting the stage for the establishment of democracy. Pisistratus also encouraged the celebration of religious festivals and cults that united the people of Attica through a common religion. To further help the farmers Solon brought back, Pisistratus redistributed land so they could once again make a living. After Pisistratus’s death, his sons tried to carry on as tyrants, but they lacked their father’s popularity. Around 509 BCE, an Athenian aristocrat named Cleisthenes persuaded the Spartans to intervene in Athens and overthrow these tyrants. The Spartans, however, set up a government of elites in Athens that did not include Cleisthenes. Consequently, he appealed to the common people living in the villages, or demes, to reject this pro-Spartan regime and establish a “democracy.” His appeal was successful, and Cleisthenes implemented reforms to Solon’s system of government. He replaced the Council of Four Hundred with one of five hundred and reorganized the Athenians into ten new tribes, including in each one villages from different parts of Attica. Every year, each tribe chose fifty members by lot to sit in the new Council. This reform served to unite the Athenians, since each tribe consisted of people from different parts of Attica who now had to work together politically. Each tribe’s delegation of fifty also served as presidents for part of the year and ran the day-to-day operation of the government. By the end of the Archaic period, Athens had developed a functioning direct democracy, which differs from modern republics in which citizens vote for representatives who sit in the legislature. All citizens could sit in the Athenian Assembly, which then was required to meet at least ten times a year. All laws had to be approved by the Assembly. Only the Assembly could declare war and approve treaties. Athens had a citizen body of thirty to forty thousand adult males in the Classical period, but only six thousand needed to convene for meetings of the Assembly. Citizens could also be chosen by lot to sit in the Council. Since they were permitted to serve for just two one-year terms over a lifetime, many Athenians had the opportunity to participate in the executive branch of government. All citizens also served on juries, which not only determined the guilt or innocence of the accused but also interpreted the way the law was applied. Women, enslaved people, and foreign residents could not participate. However, women of the citizen class were prominent in the public religious life of the city, serving as priestesses and in ceremonial roles in religious festivals. The Greek Classical period (500–323 BCE) was an era of great cultural achievement in which enduring art, literature, and schools of philosophy were created. It began with the Greek city-states uniting temporarily to face an invasion by the mighty Persian Empire, but it ended with them locked in recurring conflicts and ultimately losing their independence, first to Persia and later to Macedon. 
The Persian Wars The Persian Wars (492–449 BCE) were a struggle between the Greek city-states and the expanding Persian Empire. In the mid-sixth century BCE, during the reign of Cyrus the Great, Persian armies subdued the Greek city-states of Ionia, located across the Aegean from Greece in western Asia Minor (Turkey) (Figure 6.14). To govern the cities, the Persians installed tyrants recruited from the local Greek population. The resident Greeks were unhappy with the tyrants’ rule, and in 499 BCE they rose in the Ionian Rebellion, joined by Athens and the Greek cities on the island of Cyprus. But by 494 BCE Persian forces had crushed the rebellions in both Ionia and Cyprus. For intervening in Persian affairs, the Persian king Darius decided that Athens must be punished. In 490 BCE, Darius assembled a large fleet and army to cross the Aegean from Asia Minor, planning to subdue Athens and install one of Pisistratus’s sons as tyrant there. These Persian forces landed at Marathon on the west coast of Attica. They vastly outnumbered the Athenians but were drafted subjects with little motivation to fight and die. The Athenian soldiers, in contrast, were highly motivated to defend their democracy. The Persians could not withstand the Athenians’ spirited charge in the Battle of Marathon and were forced back onto their ships. Leaving the battle, the Persians then sailed around Attica to Athens. The soldiers at Marathon raced by land across the peninsula to guard the city. Seeing the city defended, the Persians returned to Asia Minor in defeat. In 480 BCE, Xerxes, the son and successor of Darius, launched his own invasion of Greece intended to avenge this defeat and subdue all the Greek city-states. He assembled an even larger fleet as well as an army that would invade by land from the north. At this time of crisis, most of the Greek city-states decided to unite as allies and formed what is commonly called the Hellenic League. Sparta commanded the armies and Athens the fleet. A small band of the larger land forces, mostly Spartans, decided to make a stand at Thermopylae, a narrow pass between the mountains and the sea in northeastern Greece. Their goal was not to defeat the invading Persian army, which vastly outnumbered them, but to delay them so the rest of the forces could organize a defense. For days the small Spartan force, led by their king Leonidas, successfully drove back a vastly superior Persian army, until a Greek traitor informed the Persians of another mountain pass that enabled them to circle around and surround the Spartans. The Spartan force fought to the death, inspiring the Greeks to continue the fight and hold the Hellenic League together. After the Battle of Thermopylae, the Persian forces advanced against Athens. The Athenians abandoned their city and withdrew to the nearby island of Salamis, where they put their faith in their fleet to protect them. At the naval Battle of Salamis, the allied Greek fleet led by Athens destroyed the Persian ships. Xerxes then decided to withdraw much of his force from Greece, since he no longer had a fleet to keep it supplied. In 479 BCE, the reduced Persian force had retreated from Athens to the plains of Boeotia, just north of Attica. The Greek allied forces under the command of Sparta advanced into Boeotia and met the Persian army at the Battle of Plataea. The Persian forces, mostly unwilling draftees, were no match for the Spartan troops, and the battle ended in the death or capture of most of the Persian army. 
The Athenian Empire and the Peloponnesian War After the Persian Wars, the Athenians took the lead in continuing the fight against Persia and liberating all Greek city-states. In 477 BCE, they organized an alliance of Greek city-states known today as the Delian League, headquartered on the Aegean island of Delos. Members could provide ships and troops for the league or simply pay Athens to equip the fleet, which most chose to do. Over the next several decades, allied forces of the Delian League liberated the Greek city-states of Ionia from Persian rule and supported rebellions against Persia in Cyprus and Egypt. Around 449 BCE, Athens and Persia reached a peace settlement in which the Persians recognized the independence of Ionia and the Athenians agreed to stop aiding rebels in the Persian Empire. Over the course of this war, the money from the Delian League enriched many lower-class Athenians, who found employment as rowers in the fleet. Athens even began paying jurors in jury courts and people who attended meetings of the Assembly. Over time it became clear to the other Greeks that the Delian League was no longer an alliance but an empire in which the subject city-states paid a steady flow of tribute. In 465 BCE, the city-state of Thasos withdrew from the league but was compelled by Athenian forces to rejoin. Around 437 BCE, the Athenians began using tribute to rebuild the temples on the Acropolis that the Persians had destroyed. Including the Parthenon, dedicated to Athena Parthenos, these were some of the most beautiful temples ever built and the pride of Athens, but to the subject city-states they came to symbolize Athenians’ despotism and arrogance (See Figure 6.15). The wealth and power of Athens greatly concerned the Spartans, who saw themselves as the greatest and noblest of the Greeks. The rivalry between the two city-states eventually led them into open conflict. In 433 BCE, the Athenians assisted the city-state of Corcyra in its war against Corinth. Corinth was a member of the Peloponnesian League and requested that Sparta, the leader of this league, take action against Athenian aggression. Thus, in 431 BCE, the Peloponnesian War began with the invasion of Attica by Sparta and its allies (See Figure 6.16). The political leader Pericles persuaded his fellow Athenians to withdraw from the countryside of Attica and move within the walls of Athens, reasoning that the navy would provide them food and supplies and the wall would keep them safe until Sparta tired of war and sought peace. Pericles’s assessment proved correct. In 421 BCE, after ten years of war, the Spartans and Athenians agreed to the Peace of Nicias, which kept the Athenian empire intact. The cost of the war for Athens was high, however. Due to the crowding of people within its walls, a plague had erupted in the city in 426 BCE and killed many, including Pericles. Several years later, arguing that the empire could thrive only by expanding, an ambitious young Athenian politician named Alcibiades (a kinsman of Pericles) inspired a massive invasion of Sicily targeting Syracuse, the island’s largest city-state. Just as the campaign began in 415 BCE, Alcibiades’s political enemies in Athens accused him of impiety and treason, and he fled to Sparta to avoid a trial. Without his leadership, the expedition against Syracuse floundered, and in 413 BCE the entire Athenian force was destroyed. In exile, Alcibiades convinced the Spartans to invade Attica again, now that Athens had been weakened by the disaster in Syracuse. 
In the years that followed, the Spartans realized they needed a large fleet to defeat Athens, and they secured funds for it from Persia on the condition that Sparta restore the Greek cities in Ionia to Persian rule. In 405 BCE, the new Spartan fleet destroyed the Athenian navy at the Battle of Aegospotami in the Hellespont. The Athenians, under siege, could not secure food or supplies without ships, and in 404 BCE the city surrendered to Sparta. The Peloponnesian War ended with the fall of the city and the collapse of the Athenian empire. The conclusion of the Peloponnesian War initially left Sparta dominant in Greece. Immediately following the war, Sparta established oligarchies of local aristocrats in the city-states that had been democracies under the Delian League. And it set up the Era of the Thirty Tyrants, a brief rule of oligarchs in Athens. With regard to Persia, Sparta reneged on its promise to restore the Greek city-states in Ionia to Persian control. Persia responded by funding Greek resistance to Sparta, which eventually compelled Sparta to accept Persia’s terms in exchange for Persian support. This meant turning over the Ionian city-states as it had previously promised. Now with Persian backing, the Spartans continued to interfere in the affairs of other Greek city-states. This angered city-states like Thebes and Athens. In 371 BCE, the Thebans defeated the Spartans at the Battle of Leuctra in Boeotia. The next year they invaded the Peloponnese and liberated Messene from Spartan rule, depriving the Spartans of most of their helot labor there. Without the helots, the Spartans could not support their military system as before, and their Peloponnesian League collapsed. Alarmed by the sudden growth of Thebes’s power, Athens and Sparta again joined forces and, in 362 BCE, fought the Thebans at the Battle of Mantinea. The battle was inconclusive, but Thebes’s dominance soon faded. By 350 BCE, the Greek city-states were exhausted economically and politically after decades of constant warfare. The Classical “Golden Age” Many historians view the Greek Classical period and the cultural achievements in Athens in particular as a “Golden Age” of art, literature, and philosophy. Some scholars argue that this period saw the birth of science and philosophy because for the first time people critically examined the natural world and subjected religious beliefs to reason. (Other modern historians argue that this position discounts the accomplishments in medicine and mathematics of ancient Egypt and Mesopotamia.) For example, around 480 BCE, Empedocles speculated that the universe was not created by gods but instead was the result of the four material “elements”—air, water, fire, earth—being subjected to the forces of attraction and repulsion. Another philosopher and scientist of the era, Democritus, maintained that the universe consisted of tiny particles he called “atoms” that came together randomly in a vortex to form the universe. Philosophers questioned not only the traditional views of the gods but also traditional values. Some of this questioning came from the sophists (“wise ones”) of Athens, those with a reputation for learning, wisdom, and skillful deployment of rhetoric. Sophists emerged as an important presence in the democratic world of Athens beginning in the mid-fourth century BCE. They claimed to be able to teach anyone rhetoric, or the art of persuasion, for a fee, as a means to achieve success as a lawyer or a politician. 
While many ambitious men sought the services of sophists, others worried that speakers thus trained could lead the people to act against their own self-interest. Many thought Socrates was one of the sophists. A stonecutter by trade, Socrates publicly questioned sophists and politicians about good and evil, right and wrong. He wanted to base values on reason instead of on unchallenged traditional beliefs. His questioning often embarrassed powerful people in Athens and made enemies, while his disciples included the politician Alcibiades and even some who had opposed Athenian democracy. In 399 BCE, an Athenian jury court found Socrates guilty of impiety and corrupting the youth, and he was sentenced to death (Figure 6.17). Socrates left behind no writings of his own, but some of his disciples wrote about him. One of these was Plato, who wrote dialogues from 399 BCE to his death in 347 BCE that featured Socrates in conversation with others. Through these dialogues, Plato constructed a philosophical system that included the study of nature (physics), of the human mind (psychology and epistemology, the theory of knowledge), and ethics. He maintained that the material world we perceive is an illusion, a mere shadow of the real world of ideas and forms that underlie the universe. According to Plato, the true philosopher uses reason to comprehend these ideas and forms. Plato established a school at the Academy, which was a gymnasium or public park near Athens where people went to relax and exercise. One of his most famous pupils was Aristotle, who came to disagree with his teacher and believed that ideas and forms could not exist independently of the material universe. In 334 BCE, Aristotle founded his own school at a different gymnasium in Athens, the Lyceum, where his students focused on the reasoned study of the natural world. Modern historians view Plato and Aristotle as the founders of Western (European) philosophy because of the powerful influence of their ideas through the centuries. Athens in the Golden Age was also the birthplace of theater. Playwrights of the fifth century BCE such as Sophocles and Euripides composed tragedies that featured music and dance, like operas and musicals today (Figure 6.18). The plots were based on traditional myths about gods and heroes, but through their characters the playwrights pondered philosophical questions of the day that have remained influential over time. In Sophocles’s Antigone, for example, Antigone, the daughter of Oedipus, must decide whether to obey the laws or follow her religious beliefs. The study of history also evolved during the Golden Age. Herodotus and Thucydides are considered the first true historians because they examined the past to rationally explain the causes and effects of human actions. Herodotus wrote a sweeping history of wide geographic scope, called Histories (“inquiries”), to explore the deep origins of the tension between the Persian and Greek worlds. In History of the Peloponnesian War, Thucydides employed objectivity to explain the politics, events, and brutality of the conflict in a way that is similar in some respects to the approach of modern historians. Finally, this period saw masterpieces of sculpture, vase painting, and architecture. Classical Age Greek artists broke free of the heavily stylized and two-dimensional art of Egypt and the Levant, which had inspired Greek geometric forms, and produced their own uniquely realistic styles that aimed to capture in art the ideal human form. 
Centuries later, and especially during the European Renaissance, artists modeled their own works on these classical models. Ancient Greek Sculpture and Painting In the Archaic period, the Greeks had more contact with the cultures of Phoenicia and Egypt, and artists modeled their work on examples from these regions. For instance, ancient Egyptian artists followed strict conventions in their heavily stylized works, such as arms held close to the sides of the body and a parallel stance for the feet. Greek artists adopted these conventions in their statues of naked youths, or kouroi, which were often dedicated in religious sanctuaries (Figure 6.19). During the Classical period, Greek sculptors still produced statues of naked youths for religious sanctuaries, but in more lifelike poses that resembled the way the human body appears naturally (Figure 6.20). Greek painting is most often preserved on vases. In the Archaic period, artists frequently decorated vases with motifs such as patterning, borrowed from Phoenician and Egyptian art (Figure 6.21). By the Classical era, especially in Athens, vase painters were relying less on patterning and instead depicting realistic scenes from myths and daily life (Figure 6.22). In the Classical period, Greek artists thus came into their own and no longer borrowed heavily from the art of Egypt and Phoenicia. - What do the many artistic influences on Greece suggest about its connections with other parts of the ancient world? - Why might Greek art have relied heavily on mythical symbols and depictions? What does this indicate about Greek culture?
Only a handful of supermassive black holes have been confirmed by scientists, but the universe could be filled with billions of these gravitational giants. Theoretically, if you compress a sufficient amount of matter into a small enough space, it will create such a powerful gravitational field that nothing — not even light — can escape from it. That's the basic idea behind black holes, and it's so bizarre that for many years people thought they couldn't possibly exist in reality, according to the University of Texas at Austin. Yet today we know the universe is filled with them — perhaps as many as one for every ten visible stars, according to Live Science. A few of those black holes are truly enormous, with masses millions of times greater than the sun's. Here we take a closer look at the strange world of supermassive black holes.
How big are supermassive black holes?
It's impossible to observe a black hole directly, because — as their name suggests — they don't emit any light or other radiation. But they can be detected via their gravitational effect on visible stars in their neighbourhood, which orbit around the black hole much faster than they would around a normal object of similar size. By measuring the speed of stars close to the black hole, astronomers can estimate its mass (a rough numerical illustration of this appears at the end of this article). That's how they know, for example, that the black hole at the center of our own galaxy has a mass around four million times that of the sun, according to NASA. As big as that sounds, it's really quite tiny compared to the largest supermassive black holes that have been measured — some of which approach 100 billion solar masses.
Supermassive black hole examples
Leo I dwarf galaxy: Although this tiny galaxy is only about 20 million solar masses in total, its central black hole is proportionately huge, at around 3 million solar masses.
This huge elliptical galaxy – a cosmic neighbour at just 13 million light-years – is a powerful radio emitter thanks to the 55 million solar mass black hole at its center.
NGC 7727: The product of two merging galaxies, NGC 7727 still retains two separate supermassive black holes – of 154 million and 6.3 million solar masses – just 1,600 light-years apart near its center.
This cluster of galaxies is estimated to have a black hole of up to 100 billion solar masses near its center. Frustratingly, its exact location continues to elude detection.
Black holes at the center of galaxies
Rather than devouring anything that ventures too close to them, the black holes at the centers of most galaxies only give away their existence through subtle effects on nearby stars. In an active galaxy, however, the supermassive black hole behaves a lot differently. When surrounded by a swirling "accretion disk" of rapidly rotating gas and dust, matter is constantly spiralling down into the black hole. In the process, it releases enormous amounts of energy, sometimes even outshining the rest of the galaxy. In 2019 the Event Horizon Telescope succeeded in photographing one such galaxy — Messier 87 — producing a direct image of the accretion disc. The ominous shadow of the galaxy's 6.5 billion solar mass black hole is clearly visible, quite literally as a "black hole" at the center of the disk.
How supermassive black holes are formed
Movies often portray black holes as giant cosmic vacuum cleaners, relentlessly sucking in other material until there's nothing left.
If that were how real black holes worked, there'd be no mystery as to where the supermassive kind came from: once an "ordinary" black hole had formed from stellar collapse, it would simply grow and grow until it reached enormous size. But real black holes don't suck matter in like this; they merely attract it with the same law of gravity as a normal object of the same mass. Their exceptional nature comes from the fact that they're super-condensed and the force of gravity increases as distance decreases. So it's possible for an orbiting object to stray into a region where gravity becomes incredibly strong. At larger distances, however, a black hole's gravity is perfectly normal. But if a black hole is incapable of sucking in distant matter, how does it ever grow to supermassive size? At present no one knows the answer to this, although there are several promising theories. Although they may not be the rapacious predators portrayed in sci-fi, we know that some black holes do absorb new material; that is what is going on in the accretion disks of active galaxies, for example. Occasionally pairs of black holes crash into each other and merge to produce a single, larger black hole; we know this from the evidence of gravitational waves, which have been observed on a regular basis since they were first detected in 2015. But accretion and mergers, while undoubtedly part of the solution, aren't enough in themselves to explain the observational evidence for supermassive black holes. That's because we know the first active galaxies — which must have been powered by central black holes — were formed very early in the life of the universe. For example, a supermassive black hole of a billion solar masses is believed to have existed in one galaxy more than 12 billion years ago — around 90% of the way back to the Big Bang. It's possible that the stellar life cycle, which is so crucial to the standard model of black hole formation, had nothing to do with the creation of the oldest supermassive black holes. Instead, they may have formed almost immediately from the gravitational collapse of an enormous cloud of gas — one that already contained as much matter as millions of stars. According to this theory, a "direct collapse black hole" of this kind would have taken around 150 million years to form — the blink of an eye in cosmic terms. Another hypothesis invokes the idea of primordial black holes, which are theorised to have been created in the Big Bang itself. These are sometimes proposed as a possible explanation for dark matter, and are generally assumed to have been quite small in size. However, they might have served as the basic seeds from which present-day supermassive black holes grew.
For more information about black holes check out "Death by Black Hole - and Other Cosmic Quandaries" by Neil deGrasse Tyson and "Gravity's Fatal Attraction: Black Holes in the Universe" by Mitchell Begelman and Martin Rees.
Patchen Barss, "The mysterious origins of Universe's biggest black holes", BBC, August 2021.
University of Texas at Austin, "History of Black Holes", accessed January 2022.
Stephen Battersby, "Monster munch: How did black holes get vast so fast?", New Scientist, March 2013.
Alison Klesman, "What are primordial black holes?", Astronomy, July 2019.
ESA, "What happens when two supermassive black holes merge?", May 2019.
NASA, "Exploring Active Galactic Nuclei", February 2016.
Shobha Kaicker, "How do astronomers calculate the mass of a black hole?", Astronomy, April 2020.
NASA, "Black Holes", March 2022. NASA, "Active Galaxies", September 2021. NASA, "Stars", September 2021.
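As a rough illustration of the mass-estimation method described in the article above — a star's orbital speed and distance from the black hole give the enclosed mass via M ≈ v²r/G, assuming an approximately circular orbit — the minimal Python sketch below uses made-up round numbers rather than any measurement quoted in the article. The point is only that orbital speeds of a few thousand kilometres per second at radii of about a thousand AU imply a central mass of millions of suns.

```python
# Minimal sketch: estimating a black hole's mass from the circular orbit of a
# nearby star, M ~ v^2 * r / G. The speed and radius below are illustrative
# round numbers (hypothetical), not values taken from the article.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

v = 2_000e3            # orbital speed of the star, m/s (assumed)
r = 1_000 * AU         # orbital radius, m (assumed)

mass_kg = v**2 * r / G
print(f"Estimated central mass: {mass_kg / M_SUN:.2e} solar masses")
# -> roughly 4.5e+06 solar masses, the same order as the figure quoted for
#    the black hole at the center of the Milky Way
```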
X-ray astronomy is an observational branch of astronomy which deals with the study of X-ray observation and detection from astronomical objects. X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray astronomy is the space science related to a type of space telescope that can see farther than standard light-absorption telescopes, such as the Mauna Kea Observatories, via x-ray radiation. X-ray emission is expected from astronomical objects that contain extremely hot gases at temperatures from about a million kelvin (K) to hundreds of millions of kelvin (MK). Moreover, the maintenance of the E-layer of ionized gas high in the Earth's Thermosphere also suggested a strong extraterrestrial source of X-rays. Although theory predicted that the Sun and the stars would be prominent X-ray sources, there was no way to verify this because Earth's atmosphere blocks most extraterrestrial X-rays. It was not until ways of sending instrument packages to high altitude were developed that these X-ray sources could be studied. The existence of solar X-rays was confirmed early in the rocket age by V-2s converted to sounding rocket purpose, and the detection of extraterrestrial X-rays has been the primary or secondary mission of multiple satellites since 1958. The first cosmic (beyond the solar system) X-ray source was discovered by a sounding rocket in 1962. Called Scorpius X-1 (Sco X-1) (the first X-ray source found in the constellation Scorpius), the X-ray emission of Scorpius X-1 is 10,000 times greater than its visual emission, whereas that of the Sun is about a million times less. In addition, the energy output in X-rays is 100,000 times greater than the total emission of the Sun in all wavelengths. Many thousands of X-ray sources have since been discovered. In addition, the space between galaxies in galaxy clusters is filled with a very hot, but very dilute gas at a temperature between 10 and 100 megakelvins (MK). The total amount of hot gas is five to ten times the total mass in the visible galaxies. - 1 Sounding rocket flights - 2 Balloons - 3 Rockoons - 4 X-ray astronomy satellite - 5 X-ray telescopes and mirrors - 6 X-ray astronomy detectors - 7 Astrophysical sources of X-rays - 8 Celestial X-ray sources - 9 Proposed (future) X-ray observatory satellites - 10 Explorational X-ray astronomy - 11 Theoretical X-ray astronomy - 12 Analytical X-ray astronomy - 13 Stellar X-ray astronomy - 14 Amateur X-ray astronomy - 15 History of X-ray astronomy - 16 Major questions in X-ray astronomy - 17 Exotic X-ray sources - 18 X-ray dark stars - 19 X-ray dark planet/comet - 20 See also - 21 References - 22 External links Sounding rocket flights The first sounding rocket flights for X-ray research were accomplished at the White Sands Missile Range in New Mexico with a V-2 rocket on January 28, 1949. A detector was placed in the nose cone section and the rocket was launched in a suborbital flight to an altitude just above the atmosphere. X-rays from the Sun were detected by the U.S. Naval Research Laboratory Blossom experiment on board. An Aerobee 150 rocket was launched on June 12, 1962 and it detected the first X-rays from other celestial sources (Scorpius X-1). It is now known that such X-ray sources as Sco X-1 are compact stars, such as neutron stars or black holes. Material falling into a black hole may emit X-rays, but the black hole itself does not. The energy source for the X-ray emission is gravity. 
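To make the closing point above concrete — that gravity is the energy source for the X-ray emission — the short sketch below estimates the gravitational energy released per kilogram of gas falling onto a neutron star and compares it with the rest-mass energy. The stellar mass and radius are typical textbook values assumed for illustration, not figures from this article.

```python
# Minimal sketch of why accretion onto compact objects powers bright X-ray
# sources: gravitational energy released per kilogram of gas reaching a
# neutron star surface, compared with the rest-mass energy c^2.
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

m_ns = 1.4 * M_SUN   # neutron star mass (assumed typical value)
r_ns = 10e3          # neutron star radius in metres (assumed typical value)

energy_per_kg = G * m_ns / r_ns
fraction_of_rest_mass = energy_per_kg / C**2

print(f"Energy released per kg of infalling gas: {energy_per_kg:.2e} J")
print(f"That is about {fraction_of_rest_mass:.0%} of the gas's rest-mass energy,")
print("far more efficient than nuclear fusion (~0.7%), which is why accreting")
print("compact objects such as Sco X-1 can far outshine the Sun in X-rays.")
```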
Infalling gas and dust are heated by the strong gravitational fields of these and other celestial objects. Based on discoveries in this new field of X-ray astronomy, starting with Scorpius X-1, Riccardo Giacconi received the Nobel Prize in Physics in 2002. The largest drawback to rocket flights is their very short duration (just a few minutes above the atmosphere before the rocket falls back to Earth) and their limited field of view. A rocket launched from the United States will not be able to see sources in the southern sky; a rocket launched from Australia will not be able to see sources in the northern sky. X-ray Quantum Calorimeter (XQC) project In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field. Of interest is the hot ionized medium (HIM), consisting of coronal cloud ejections from star surfaces at 10⁶–10⁷ K, which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble. To measure the spectrum of the diffuse X-ray emission from the interstellar medium over the energy range 0.07 to 1 keV, NASA launched a Black Brant 9 from White Sands Missile Range, New Mexico, on May 1, 2008. The Principal Investigator for the mission is Dr. Dan McCammon of the University of Wisconsin–Madison. Balloon flights can carry instruments to altitudes of up to 40 km above sea level, where they are above as much as 99.997% of the Earth's atmosphere. Unlike a rocket, where data are collected during a brief few minutes, balloons are able to stay aloft for much longer. However, even at such altitudes, much of the X-ray spectrum is still absorbed. X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons. On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15–60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas, United States. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source. High-energy focusing telescope The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. Its maiden flight took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is c. 1.5'. Rather than using a grazing-angle X-ray telescope, HEFT makes use of novel tungsten-silicon multilayer coatings to extend the reflectivity of nested grazing-incidence mirrors beyond 10 keV.
HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. HEFT was launched for a 25-hour balloon flight in May 2005. The instrument performed within specification and observed Tau X-1 (the Crab Nebula). High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) A balloon-borne experiment called the High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) observed X-ray and gamma-ray emissions from the Sun and other astronomical objects. It was launched from McMurdo Station, Antarctica, in December 1991 and 1992. Steady winds carried the balloon on a circumpolar flight lasting about two weeks each time. The rockoon (a portmanteau of rocket and balloon) was a solid-fuel rocket that, rather than being immediately lit while on the ground, was first carried into the upper atmosphere by a gas-filled balloon. Then, once separated from the balloon at its maximum height, the rocket was automatically ignited. This achieved a higher altitude, since the rocket did not have to move through the lower, thicker air layers, which would have required much more chemical fuel. The original concept of "rockoons" was developed by Cmdr. Lee Lewis, Cmdr. G. Halvorson, S. F. Singer, and James A. Van Allen during the Aerobee rocket firing cruise of the USS Norton Sound on March 1, 1949. From July 17 to July 27, 1956, the Naval Research Laboratory (NRL) launched eight Deacon rockoons from shipboard for solar ultraviolet and X-ray observations at ~30° N ~121.6° W, southwest of San Clemente Island, apogee: 120 km. X-ray astronomy satellite X-ray astronomy satellites study X-ray emissions from celestial objects. Satellites that can detect and transmit data about X-ray emissions are deployed as part of the branch of space science known as X-ray astronomy. Satellites are needed because X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray telescopes and mirrors X-ray telescopes (XRTs) have varying directionality or imaging ability based on glancing angle reflection rather than refraction or large deviation reflection. This limits them to much narrower fields of view than visible or UV telescopes. The mirrors can be made of ceramic or metal foil. The first X-ray telescope in astronomy was used to observe the Sun. The first X-ray picture (taken with a grazing incidence telescope) of the Sun was taken in 1963, by a rocket-borne telescope. On April 19, 1960, the very first X-ray image of the Sun was taken using a pinhole camera on an Aerobee-Hi rocket. The utilization of X-ray mirrors for extrasolar X-ray astronomy simultaneously requires: - the ability to determine the arrival location of an X-ray photon in two dimensions and - a reasonable detection efficiency. X-ray astronomy detectors X-ray astronomy detectors have been designed and configured primarily for energy and occasionally for wavelength detection using a variety of techniques usually limited to the technology of the time. X-ray detectors collect individual X-rays (photons of X-ray electromagnetic radiation) and count the number of photons collected (intensity), the energy (0.12 to 120 keV) of the photons collected, wavelength (c. 0.008–8 nm), or how fast the photons are detected (counts per hour), to tell us about the object that is emitting them.
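The detector description above quotes both an energy band (0.12 to 120 keV) and a wavelength band (c. 0.008–8 nm); the two are related by E = hc/λ, i.e. λ[nm] ≈ 1.24 / E[keV]. A minimal sketch of that conversion, using standard physical constants, is given below; the specific sample energies are chosen only for illustration.

```python
# Minimal sketch of the photon energy/wavelength relation for X-rays:
# E = h*c / lambda, i.e. lambda[nm] ~= 1.2398 / E[keV].
H = 6.626e-34        # Planck constant, J s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electron volt

def kev_to_nm(energy_kev: float) -> float:
    """Convert an X-ray photon energy in keV to its wavelength in nm."""
    energy_joules = energy_kev * 1e3 * EV
    return H * C / energy_joules * 1e9

for e in (0.12, 1.0, 10.0, 120.0):
    print(f"{e:7.2f} keV -> {kev_to_nm(e):8.4f} nm")
# 0.12 keV and 120 keV come out near 10 nm and 0.01 nm respectively,
# matching the approximate 0.008-8 nm band quoted in the text.
```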
Astrophysical sources of X-rays Several types of astrophysical objects emit, fluoresce, or reflect X-rays, from galaxy clusters, through black holes in active galactic nuclei (AGN) to galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), neutron star or black hole (X-ray binaries). Some solar system bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, black-body radiation, synchrotron radiation, or what is called inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions. Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Herculis) probably due to Roche lobe overflow. X-1 is the prototype for the massive X-ray binaries although it falls on the borderline, ~2 M☉, between high- and low-mass X-ray binaries. Celestial X-ray sources The celestial sphere has been divided into 88 constellations. The International Astronomical Union (IAU) constellations are areas of the sky. Each of these contains remarkable X-ray sources. Some of them have been identified from astrophysical modeling to be galaxies or black holes at the centers of galaxies. Some are pulsars. As with sources already successfully modeled by X-ray astrophysics, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth. Constellations are an astronomical device for handling observation and precision independent of current physical theory or interpretation. Astronomy has been around for a long time. Physical theory changes with time. With respect to celestial X-ray sources, X-ray astrophysics tends to focus on the physical reason for X-ray brightness, whereas X-ray astronomy tends to focus on their classification, order of discovery, variability, resolvability, and their relationship with nearby sources in other constellations. Within the constellations Orion and Eridanus and stretching across them is a soft X-ray "hot spot" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα emitting filaments. Soft X-rays are emitted by hot gas (T ~ 2–3 MK) in the interior of the superbubble. This bright object forms the background for the "shadow" of a filament of gas and dust. The filament is shown by the overlaid contours, which represent 100 micrometre emission from dust at a temperature of about 30 K as measured by IRAS. Here the filament absorbs soft X-rays between 100 and 300 eV, indicating that the hot gas is located behind the filament. This filament may be part of a shell of neutral gas that surrounds the hot bubble. Its interior is energized by ultraviolet (UV) light and stellar winds from hot stars in the Orion OB1 association. These stars energize a superbubble about 1200 lys across which is observed in the visual (Hα) and X-ray portions of the spectrum. Proposed (future) X-ray observatory satellites There are several projects that are proposed for X-ray observatory satellites. See main article link above. 
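The passage above mentions hot gas at roughly 2–3 MK emitting soft X-rays between 100 and 300 eV. As a rough rule of thumb, the characteristic photon energy of a thermal plasma is of order kT; the sketch below applies only that order-of-magnitude relation and ignores the detailed shape of the emission spectrum.

```python
# Minimal sketch: converting a plasma temperature to its characteristic
# thermal energy kT, as a rough guide to where its emission falls in the
# X-ray band. The temperatures used are the ones quoted in the text.
K_B_EV = 8.617e-5   # Boltzmann constant in eV per kelvin

def thermal_energy_ev(temperature_k: float) -> float:
    """Characteristic thermal energy kT in electron volts."""
    return K_B_EV * temperature_k

for t in (2e6, 3e6, 1e8):
    print(f"T = {t:.1e} K -> kT ~ {thermal_energy_ev(t):7.0f} eV")
# 2-3 MK gas gives kT of roughly 170-260 eV, consistent with the soft
# (100-300 eV) X-rays described for the Eridanus superbubble; gas at
# hundreds of MK radiates in the keV range instead.
```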
Explorational X-ray astronomy Usually observational astronomy is considered to occur on Earth's surface (or beneath it in neutrino astronomy). The idea of limiting observation to Earth includes orbiting the Earth. As soon as the observer leaves the cozy confines of Earth, the observer becomes a deep space explorer. Except for Explorer 1 and Explorer 3 and the earlier satellites in the series, usually if a probe is going to be a deep space explorer it leaves the Earth or an orbit around the Earth. For a satellite or space probe to qualify as a deep space X-ray astronomer/explorer or "astronobot"/explorer, all it needs to do is carry an XRT or X-ray detector aboard and leave Earth orbit. Ulysses was launched October 6, 1990, and reached Jupiter for its "gravitational slingshot" in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. The solar X-ray and cosmic gamma-ray burst experiment (GRB) had three main objectives: to study and monitor solar flares, to detect and localize cosmic gamma-ray bursts, and to detect Jovian aurorae in situ. Ulysses was the first satellite carrying a gamma-ray burst detector to travel outside the orbit of Mars. The hard X-ray detectors operated in the range 15–150 keV. The detectors consisted of 23-mm thick × 51-mm diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector changed its operating mode depending on (1) measured count rate, (2) ground command, or (3) change in spacecraft telemetry mode. The trigger level was generally set for 8-sigma above background, and the sensitivity was 10⁻⁶ erg/cm² (1 nJ/m²). When a burst trigger was recorded, the instrument switched to recording high-resolution data, writing it to a 32-kbit memory for a slow telemetry readout. Burst data consisted of either 16 s of 8-ms resolution count rates or 64 s of 32-ms count rates from the sum of the 2 detectors. There were also 16-channel energy spectra from the sum of the 2 detectors (taken in either 1, 2, 4, 16, or 32 second integrations). During 'wait' mode, the data were taken either in 0.25 or 0.5 s integrations and 4 energy channels (with the shortest integration time being 8 s). Again, the outputs of the 2 detectors were summed. The Ulysses soft X-ray detectors consisted of 2.5-mm thick × 0.5 cm² area Si surface barrier detectors. A 100 mg/cm² beryllium foil front window rejected low-energy X-rays and defined a conical FOV of 75° (half-angle). These detectors were passively cooled and operated in the temperature range −35 to −55 °C. This detector had 6 energy channels, covering the range 5–20 keV. Theoretical X-ray astronomy Theoretical X-ray astronomy is a branch of theoretical astronomy that deals with the theoretical astrophysics and theoretical astrochemistry of X-ray generation, emission, and detection as applied to astronomical objects. Like theoretical astrophysics, theoretical X-ray astronomy uses a wide variety of tools, which include analytical models to approximate the behavior of a possible X-ray source and computational numerical simulations to approximate the observational data. Once potential observational consequences are available, they can be compared with experimental observations. Observers can look for data that refutes a model or helps in choosing between several alternate or conflicting models. Theorists also try to generate or modify models to take into account new data.
In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model. Most of the topics in astrophysics, astrochemistry, astrometry, and other fields that are branches of astronomy studied by theoreticians involve X-rays and X-ray sources. Many of the beginnings for a theory can be found in an Earth-based laboratory where an X-ray source is built and studied. Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field. This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. If some of the stellar magnetic fields are really induced by dynamos, then field strength might be associated with rotation rate. From the observed X-ray spectrum, combined with spectral emission results for other wavelength ranges, an astronomical model addressing the likely source of X-ray emission can be constructed. For example, with Scorpius X-1 the X-ray spectrum steeply drops off as X-ray energy increases up to 20 keV, which is consistent with a thermal-plasma mechanism. In addition, there is no radio emission, and the visible continuum is roughly what would be expected from a hot plasma fitting the observed X-ray flux. The plasma could be a coronal cloud of a central object or a transient plasma, where the energy source is unknown, but could be related to the idea of a close binary. In the Crab Nebula X-ray spectrum there are three features that differ greatly from Scorpius X-1: its spectrum is much harder, its source diameter is measured in light-years (ly), not astronomical units (AU), and its radio and optical synchrotron emission are strong. Its overall X-ray luminosity rivals the optical emission and could be that of a nonthermal plasma. However, the Crab Nebula appears as an X-ray source that is a central freely expanding ball of dilute plasma, where the energy content is 100 times the total energy content of the large visible and radio portion, obtained from the unknown source. The "Dividing Line" as giant stars evolve to become red giants also coincides with the Wind and Coronal Dividing Lines. To explain the drop in X-ray emission across these dividing lines, a number of models have been proposed: - low transition region densities, leading to low emission in coronae, - high-density wind extinction of coronal emission, - only cool coronal loops become stable, - changes in the magnetic field structure to an open topology, leading to a decrease of magnetically confined plasma, or - changes in the magnetic dynamo character, leading to the disappearance of stellar fields leaving only small-scale, turbulence-generated fields among red giants. Analytical X-ray astronomy Analytical X-ray astronomy is applied to an astronomy puzzle in an attempt to provide an acceptable solution. Consider the following puzzle. High-mass X-ray binaries (HMXBs) are composed of OB supergiant companion stars and compact objects, usually neutron stars (NS) or black holes (BH). Supergiant X-ray binaries (SGXBs) are HMXBs in which the compact objects orbit massive companions with orbital periods of a few days (3–15 d), and in circular (or slightly eccentric) orbits. SGXBs typically show the hard X-ray spectra of accreting pulsars, and most show strong absorption as obscured HMXBs. X-ray luminosity (Lx) increases up to 10³⁶ erg·s⁻¹ (10²⁹ W).
Aim: use the discovery of long orbits (>15 d) to help discriminate between emission models and perhaps place constraints on the models. Method: analyze archival data on various SGXBs, such as that obtained by INTEGRAL, for candidates exhibiting long orbits. Build short- and long-term light curves. Perform a timing analysis in order to study the temporal behavior of each candidate on different time scales (a toy version of such a period search is sketched below). Compare various astronomical models: - direct spherical accretion - Roche-lobe overflow via an accretion disk onto the compact object. Draw some conclusions: for example, the SGXB SAX J1818.6-1703 was discovered by BeppoSAX in 1998 and identified as an SGXB of spectral type between O9I−B1I, which also displayed short and bright flares and an unusually low quiescent level, leading to its classification as an SFXT. The analysis indicated an unusually long orbital period: 30.0 ± 0.2 d, and an elapsed accretion phase of ~6 d, implying an elliptical orbit and a possible supergiant spectral type between B0.5–1I with eccentricities e ~ 0.3–0.4. The large variations in the X-ray flux can be explained through accretion of macro-clumps formed within the stellar wind. Choose which model seems to work best: for SAX J1818.6-1703 the analysis best fits the model that predicts SFXTs behave as SGXBs with different orbital parameters; hence, different temporal behavior. Stellar X-ray astronomy Stellar X-ray astronomy is said to have started on April 5, 1974, with the detection of X-rays from Capella. A rocket flight on that date briefly calibrated its attitude control system when a star sensor pointed the payload axis at Capella (α Aur). During this period, X-rays in the range 0.2–1.6 keV were detected by an X-ray reflector system co-aligned with the star sensor. The X-ray luminosity of Lx = 10³¹ erg·s⁻¹ (10²⁴ W) is four orders of magnitude above the Sun's X-ray luminosity. Coronal stars, or stars within a coronal cloud, are ubiquitous among the stars in the cool half of the Hertzsprung-Russell diagram. Experiments with instruments aboard Skylab and Copernicus have been used to search for soft X-ray emission in the energy range ~0.14–0.284 keV from stellar coronae. The experiments aboard ANS succeeded in finding X-ray signals from Capella and Sirius (α CMa). X-ray emission from an enhanced solar-like corona was proposed for the first time. The high temperature of Capella's corona as obtained from the first coronal X-ray spectrum of Capella using HEAO 1 required magnetic confinement unless it was a free-flowing coronal wind. In 1977, Proxima Centauri was discovered to be emitting high-energy radiation in the XUV. In 1978, α Cen was identified as a low-activity coronal source. With the operation of the Einstein Observatory, X-ray emission was recognized as a characteristic feature common to a wide range of stars covering essentially the whole Hertzsprung-Russell diagram. The Einstein initial survey led to significant insights: - X-ray sources abound among all types of stars, across the Hertzsprung-Russell diagram and across most stages of evolution, - the X-ray luminosities and their distribution along the main sequence were not in agreement with the long-favored acoustic heating theories, but were now interpreted as the effect of magnetic coronal heating, and - stars that are otherwise similar reveal large differences in their X-ray output if their rotation period is different. To fit the medium-resolution spectrum of UX Ari, subsolar abundances were required.
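As a toy illustration of the timing-analysis step in the analytical X-ray astronomy example above, the sketch below folds a light curve at trial periods and looks for the period that produces the most structured folded profile. The light curve is synthetic (a 30-day modulation, loosely echoing the SAX J1818.6-1703 result); a real analysis of INTEGRAL data would involve exposure corrections and far more careful statistics.

```python
# Toy epoch-folding period search on a synthetic X-ray light curve.
import numpy as np

rng = np.random.default_rng(0)
true_period = 30.0                            # days (assumed for the toy signal)
t = np.sort(rng.uniform(0.0, 600.0, 2000))    # synthetic observation times, days
rate = 5.0 + 3.0 * (np.sin(2 * np.pi * t / true_period) > 0.8)
counts = rng.poisson(rate)                    # noisy count rates

def folded_chi2(times, values, period, nbins=16):
    """Chi-square of the folded, binned profile against a constant rate;
    larger values mean the trial period produces more structure."""
    phase = (times / period) % 1.0
    bins = (phase * nbins).astype(int)
    mean = values.mean()
    score = 0.0
    for b in range(nbins):
        sel = values[bins == b]
        if sel.size:
            score += sel.size * (sel.mean() - mean) ** 2 / max(mean, 1e-9)
    return score

trial_periods = np.linspace(5.0, 45.0, 801)
scores = [folded_chi2(t, counts, p) for p in trial_periods]
best = trial_periods[int(np.argmax(scores))]
print(f"Best-fitting period: {best:.1f} d (true value {true_period} d)")
```

Integer multiples of a true period can also score highly in such a search, which is one reason real analyses inspect the folded profiles themselves rather than trusting a single peak.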
Stellar X-ray astronomy is contributing toward a deeper understanding of - magnetic fields in magnetohydrodynamic dynamos, - the release of energy in tenuous astrophysical plasmas through various plasma-physical processes, and - the interactions of high-energy radiation with the stellar environment. Current wisdom has it that the most massive coronal main-sequence stars are late-A or early F stars, a conjecture that is supported both by observation and by theory. Young, low-mass stars Newly formed stars are known as pre-main-sequence stars during the stage of stellar evolution before they reach the main sequence. Stars in this stage (ages <10 million years) produce X-rays in their stellar coronae. However, their X-ray emission is 10³ to 10⁵ times stronger than for main-sequence stars of similar masses. X-ray emission for pre–main-sequence stars was discovered by the Einstein Observatory. This X-ray emission is primarily produced by magnetic reconnection flares in the stellar coronae, with many small flares contributing to the "quiescent" X-ray emission from these stars. Pre–main-sequence stars have large convection zones, which in turn drive strong dynamos, producing strong surface magnetic fields. This leads to the high X-ray emission from these stars, which lie in the saturated X-ray regime, unlike main-sequence stars that show rotational modulation of X-ray emission. Other sources of X-ray emission include accretion hotspots and collimated outflows. X-ray emission as an indicator of stellar youth is important for studies of star-forming regions. Most star-forming regions in the Milky Way Galaxy are projected on Galactic-plane fields with numerous unrelated field stars. It is often impossible to distinguish members of a young stellar cluster from field-star contaminants using optical and infrared images alone. X-ray emission can easily penetrate moderate absorption from molecular clouds, and can be used to identify candidate cluster members. Given the lack of a significant outer convection zone, theory predicts the absence of a magnetic dynamo in earlier A stars. In early stars of spectral type O and B, shocks developing in unstable winds are the likely source of X-rays. Coolest M dwarfs Beyond spectral type M5, the classical αω dynamo can no longer operate as the internal structure of dwarf stars changes significantly: they become fully convective. As a distributed (or α²) dynamo may become relevant, both the magnetic flux on the surface and the topology of the magnetic fields in the corona should systematically change across this transition, perhaps resulting in some discontinuities in the X-ray characteristics around spectral class dM5. However, observations do not seem to support this picture: the long-time lowest-mass X-ray detection, VB 8 (M7e V), has shown steady emission at X-ray luminosities (LX) ≈ 10²⁶ erg·s⁻¹ (10¹⁹ W) and flares up to an order of magnitude higher. Comparison with other late M dwarfs shows a rather continuous trend. Strong X-ray emission from Herbig Ae/Be stars Herbig Ae/Be stars are pre-main-sequence stars. As to their X-ray emission properties, some are - reminiscent of hot stars, - others point to coronal activity as in cool stars, in particular the presence of flares and very high temperatures.
The nature of these strong emissions has remained controversial, with models including - unstable stellar winds, - colliding winds, - magnetic coronae, - disk coronae, - wind-fed magnetospheres, - accretion shocks, - the operation of a shear dynamo, - the presence of unknown late-type companions. The FK Com stars are giants of spectral type K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous (LX ≥ 10³² erg·s⁻¹ or 10²⁵ W) and the hottest known, with dominant temperatures up to 40 MK. However, the current popular hypothesis involves a merger of a close binary system in which the orbital angular momentum of the companion is transferred to the primary. Pollux is the brightest star in the constellation Gemini, despite its Beta designation, and the 17th brightest in the sky. Pollux is a giant orange K star that makes an interesting color contrast with its white "twin", Castor. Evidence has been found for a hot, outer, magnetically supported corona around Pollux, and the star is known to be an X-ray emitter. New X-ray observations of Eta Carinae by the Chandra X-ray Observatory show three distinct structures: an outer, horseshoe-shaped ring about 2 light-years in diameter, a hot inner core about 3 light-months in diameter, and a hot central source less than 1 light-month in diameter, which may contain the superstar that drives the whole show. The outer ring provides evidence of another large explosion that occurred over 1,000 years ago. These three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. "The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays," says Prof. Kris Davidson of the University of Minnesota. Davidson is principal investigator for the Eta Carinae observations by the Hubble Space Telescope. "In the most popular theory, X-rays are made by colliding gas streams from two stars so close together that they'd look like a point source to us. But what happens to gas streams that escape to farther distances? The extended hot stuff in the middle of the new image gives demanding new conditions for any theory to meet." Amateur X-ray astronomy Collectively, amateur astronomers observe a variety of celestial objects and phenomena, sometimes with equipment that they build themselves. The United States Air Force Academy (USAFA) is the home of the US's only undergraduate satellite program, and has developed, and continues to develop, the FalconLaunch sounding rockets. In addition to any direct amateur efforts to put X-ray astronomy payloads into space, there are opportunities that allow student-developed experimental payloads to be put on board commercial sounding rockets as a free-of-charge ride. There are major limitations to amateurs observing and reporting experiments in X-ray astronomy: the cost of building an amateur rocket or balloon to place a detector high enough, and the cost of appropriate parts to build a suitable X-ray detector. History of X-ray astronomy In 1927, E.O. Hulburt of the US Naval Research Laboratory and associates Gregory Breit and Merle A. Tuve of the Carnegie Institution of Washington explored the possibility of equipping Robert H. Goddard's rockets to explore the upper atmosphere.
"Two years later, he proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere, including detection of ultraviolet radiation and X-rays at high altitudes". In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species. The Sun has been known to be surrounded by a hot tenuous corona. In the mid-1940s radio observations revealed a radio corona around the Sun. The beginning of the search for X-ray sources from above the Earth's atmosphere was on August 5, 1948, at 12:07 GMT. A US Army (formerly German) V-2 rocket was launched from White Sands Proving Ground as part of Project Hermes. The first solar X-rays were recorded by T. Burnight. Through the 1960s, 1970s, 1980s, and 1990s, the sensitivity of detectors increased greatly during the first 60 years of X-ray astronomy. In addition, the ability to focus X-rays has developed enormously—allowing the production of high-quality images of many fascinating celestial objects. Major questions in X-ray astronomy As X-ray astronomy uses a major spectral probe to peer into sources, it is a valuable tool in efforts to understand many puzzles. Stellar magnetic fields Magnetic fields are ubiquitous among stars, yet we do not understand precisely why, nor have we fully understood the bewildering variety of plasma physical mechanisms that act in stellar environments. Some stars, for example, seem to have fossil stellar magnetic fields left over from their period of formation, while others seem to generate the field anew frequently. Extrasolar X-ray source astrometry With the initial detection of an extrasolar X-ray source, the first question usually asked is "What is the source?" An extensive search is often made in other wavelengths such as visible or radio for possible coincident objects. Many of the verified X-ray locations still do not have readily discernible sources. X-ray astrometry becomes a serious concern that results in ever greater demands for finer angular resolution and spectral radiance. There are inherent difficulties in making X-ray/optical, X-ray/radio, and X-ray/X-ray identifications based solely on positional coincidences, especially with handicaps in making identifications, such as the large uncertainties in positional determinations made from balloons and rockets, poor source separation in the crowded region toward the galactic center, source variability, and the multiplicity of source nomenclature. X-ray source counterparts to stars can be identified by calculating the angular separation between source centroids and the position of the star. The maximum allowable separation is a compromise between a larger value to identify as many real matches as possible and a smaller value to minimize the probability of spurious matches. "An adopted matching criterion of 40″ finds nearly all possible X-ray source matches while keeping the probability of any spurious matches in the sample to 3%." Solar X-ray astronomy Coronal heating problem In the area of solar X-ray astronomy, there is the coronal heating problem. The photosphere of the Sun has an effective temperature of 5,570 K, yet its corona has an average temperature of 1–2 × 10⁶ K. However, the hottest regions are 8–20 × 10⁶ K. The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere.
It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events—nanoflares. Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms. Coronal mass ejection A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entraining coronal closed magnetic field regions. Evolution of these closed magnetic structures in response to various photospheric motions over different time scales (convection, differential rotation, meridional circulation) somehow leads to the CME. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs. The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production. "Relating the sigmoids at X-ray (and other) wavelengths to magnetic structures and current systems in the solar atmosphere is the key to understanding their relationship to CMEs." The first detection of a Coronal mass ejection (CME) as such was made on December 1, 1971 by R. Tousey of the US Naval Research Laboratory using OSO 7. Earlier observations of coronal transients or even phenomena observed visually during solar eclipses are now understood as essentially the same thing. The largest geomagnetic perturbation, resulting presumably from a "prehistoric" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens. The same instrument recorded a crotchet, an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays (by Roentgen) and the recognition of the ionosphere (by Kennelly and Heaviside). Exotic X-ray sources A microquasar is a smaller cousin of a quasar that is a radio emitting X-ray binary, with an often resolvable pair of radio jets. LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source, CG135+01. Observations are revealing a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). 
Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays. X-ray dark stars During the solar cycle, the Sun is at times almost X-ray dark, almost an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. Hardly any X-rays are emitted by red giants. There is a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F. Altair is spectral type A7V and Vega is A0V. Altair's total X-ray luminosity is at least an order of magnitude larger than the X-ray luminosity for Vega. The outer convection zone of early F stars is expected to be very shallow and absent in A-type dwarfs, yet the acoustic flux from the interior reaches a maximum for late A and early F stars, provoking investigations of magnetic activity in A-type stars along three principal lines. Chemically peculiar stars of spectral type Bp or Ap are appreciable magnetic radio sources; most Bp/Ap stars remain undetected in X-rays, and of those reported early on as producing X-rays, only a few can be identified as probably single stars. X-ray dark planet/comet X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area." As X-ray detectors have become more sensitive, they have observed that some planets and other normally X-ray non-luminescent celestial objects under certain conditions emit, fluoresce, or reflect X-rays. NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to within 63 Gm of Earth. For the first time, astronomers could see simultaneous UV and X-ray images of a comet. "The solar wind—a fast-moving stream of particles from the sun—interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet. - Significant Achievements in Solar Physics 1958-1964. Washington D.C.: NASA. 1966. pp. 49–58. - "Chronology – Quarter 1 1949". Archived from the original on April 8, 2010. - Giacconi R (2003). "Nobel Lecture: The dawn of x-ray astronomy". Rev Mod Phys. 75 (3): 995. Bibcode:2003RvMP...75..995G. doi:10.1103/RevModPhys.75.995. - "Scorpius X-1". Retrieved January 4, 2019. - "Riccardo Giacconi". Retrieved January 4, 2019. - Spitzer L (1978). Physical Processes in the Interstellar Medium. Wiley. ISBN 978-0-471-29335-4. - Wright B. "36.223 UH MCCAMMON/UNIVERSITY OF WISCONSIN".
Archived from the original on May 11, 2008. - Drake SA. "A Brief History of High-Energy Astronomy: 1960–1964". - Harrison FA; Boggs, Steven E.; Bolotnikov, Aleksey E.; Christensen, Finn E.; Cook Iii, Walter R.; Craig, William W.; Hailey, Charles J.; Jimenez-Garate, Mario A.; et al. (2000). Truemper, Joachim E; Aschenbach, Bernd, eds. "Development of the High-Energy Focusing Telescope (HEFT) balloon experiment". Proc SPIE. X-Ray Optics, Instruments, and Missions III. 4012: 693. doi:10.1117/12.391608. - Feffer, Paul (1996). "Solar energetic ion and electron limits from High Resolution Gamma-ray and Hard X-ray Spectrometer (HIREGS) Observations". Solar Physics. 171 (2): 419–445. Bibcode:1997SoPh..171..419F. doi:10.1023/A:1004911511905. - Feffer, Paul (1997). X-ray and Gamma-ray Observations of Solar Flares. Ann Arbor, MI: UMI Company. - "Chronology – Quarter 3 1956". - "SWIFT X-ray mirrors". - "Chandra X-ray focusing mirrors". - "X-ray optics". - Blake, R. L.; Chubb, T. A.; Friedman, H.; Unzicker, A. E. (January 1963). "Interpretation of X-Ray Photograph of the Sun". Astrophysical Journal. 137: 3. Bibcode:1963ApJ...137....3B. doi:10.1086/147479. - Morrison P (1967). "Extrasolar X-ray Sources". Annual Review of Astronomy and Astrophysics. 5 (1): 325. Bibcode:1967ARA&A...5..325M. doi:10.1146/annurev.aa.05.090167.001545. - Podsiadlowski P; Rappaport S; Pfahl E (2001). "Evolutionary Binary Sequences for Low- and Intermediate-Mass X-ray Binaries". The Astrophysical Journal. 565 (2): 1107. arXiv:astro-ph/0107261. Bibcode:2002ApJ...565.1107P. doi:10.1086/324686. - Priedhorsky WC; Holt SS (1987). "Long-term cycles in cosmic X-ray sources". Space Science Reviews. 45 (3–4): 291. Bibcode:1987SSRv...45..291P. doi:10.1007/BF00171997. - Kawakatsu Y (Dec 2007). "Concept study on Deep Space Orbit Transfer Vehicle". Acta Astronautica. 61 (11–12): 1019–28. Bibcode:2007AcAau..61.1019K. doi:10.1016/j.actaastro.2006.12.019. - Smith W. "Explorer Series of Spacecraft". - Trimble V (1999). "White dwarfs in the 1990s". Bull Astron Soc India. 27: 549. Bibcode:1999BASI...27..549T. - Kashyap V; Rosner R; Harnden FR Jr.; Maggio A; Micela G; Sciortino S (1994). "X-ray emission on hybrid stars: ROSAT observations of alpha Trianguli Australis and IOTA Aurigae". Astrophys J. 431: 402. Bibcode:1994ApJ...431..402K. doi:10.1086/174494. - Zurita Heras JA; Chaty S (2009). "Discovery of an eccentric 30 day period in the supergiant X-ray binary SAX J1818.6–1703 with INTEGRAL". Astronomy and Astrophysics. 493 (1): L1. arXiv:0811.2941. Bibcode:2009A&A...493L...1Z. doi:10.1051/0004-6361:200811179. - Catura RC; Acton LW; Johnson HM (1975). "Evidence for X-ray emission from Capella". Astrophys J. 196: L47. Bibcode:1975ApJ...196L..47C. doi:10.1086/181741. - Güdel M (2004). "X-ray astronomy of stellar coronae" (PDF). The Astronomy and Astrophysics Review. 12 (2–3): 71–237. arXiv:astro-ph/0406661. Bibcode:2004A&ARv..12...71G. doi:10.1007/s00159-004-0023-2. Archived from the original (PDF) on August 11, 2011. - Mewe R; Heise J; Gronenschild EHBM; Brinkman AC; Schrijver J; den Boggende AJF (1975). "Detection of X-ray emission from stellar coronae with ANS". Astrophys J. 202: L67. Bibcode:1975ApJ...202L..67M. doi:10.1086/181983. - Telleschi AS. "Coronal Evolution of Solar-Like Stars in Star-Forming Regions and the Solar Neighborhood" (PDF). - Preibisch, T.; et al. (2005). "The Origin of T Tauri X-Ray Emission: New Insights from the Chandra Orion Ultradeep Project". Astrophysical Journal Supplement. 160 (2): 401–422. arXiv:astro-ph/0506526. 
Bibcode:2005ApJS..160..401P. doi:10.1086/432891. - Feigelson, E. D.; Decampli, W. M. (1981). "Observations of X-ray emission from T Tauri stars". Astrophysical Journal Letters. 243: L89–L93. Bibcode:1981ApJ...243L..89F. doi:10.1086/183449. - Montmerle, T. (1983). "Einstein observations of the Rho Ophiuchi dark cloud - an X-ray Christmas tree". Astrophysical Journal, Part 1. 269: 182–201. Bibcode:1983ApJ...269..182M. doi:10.1086/161029. - Feigelson, E. D.; Montmerle, T. (1999). "High-Energy Processes in Young Stellar Objects". Annual Review of Astronomy and Astrophysics. 37: 363–408. Bibcode:1999ARA&A..37..363F. doi:10.1146/annurev.astro.37.1.363. - Kastner, J. H.; et al. (2001). "Discovery of Extended X-Ray Emission from the Planetary Nebula NGC 7027 by the Chandra X-Ray Observatory". Astrophysical Journal. 550 (2): L189–L192. arXiv:astro-ph/0102468. Bibcode:2001ApJ...550L.189K. doi:10.1086/319651. - Pravdo, S. H.; et al. (2001). "Discovery of X-rays from the protostellar outflow object HH2". Nature. 413 (6857): 708–711. Bibcode:2001Natur.413..708P. doi:10.1038/35099508. PMID 11607024. - Feigelson, E. D.; et al. (2013). "Overview of the Massive Young Star-Forming Complex Study in Infrared and X-Ray (MYStIX) Project". Astrophysical Journal Supplement. 209 (2): 26. arXiv:1309.4483. Bibcode:2013ApJS..209...26F. doi:10.1088/0067-0049/209/2/26. - Hatzes AP; Cochran WD; Endl M; Guenther EW; Saar SH; Walker GAH; Yang S; Hartmann M; et al. (2006). "Confirmation of the planet hypothesis for the long-period radial velocity variations of β Geminorum". Astronomy and Astrophysics. 457 (1): 335. arXiv:astro-ph/0606517. Bibcode:2006A&A...457..335H. doi:10.1051/0004-6361:20065445. - "Chandra Takes X-ray Image of Repeat Offender". October 8, 1999. - Department of Astronautics (2008). "World's first astronautics department celebrates 50 years". Archived from the original on December 12, 2012. - Blaylock E. "AFRL Signs EPA to Educate and Inspire Future Aerospace Professionals". - "Spacelab 2 NRL Looks at the Sun". - Grottian W (1939). "Zur Frage der Deutung der Linien im Spektrum der Sonnenkorona". Naturwissenschaften. 27 (13): 214. Bibcode:1939NW.....27..214G. doi:10.1007/BF01488890. - Keller CU (1995). "X-rays from the Sun". Cell Mol Life Sci. 51 (7): 710. doi:10.1007/BF01941268. - Thomas RM; Davison PJN (1974). "A comment on X-ray source identifications". Astron Soc Australia, Proc. 2: 290. Bibcode:1974PASAu...2..290T. - Gaidos EJ (Nov 1998). "Nearby Young Solar Analogs. I. Catalog and Stellar Characteristics". Publ. Astron. Soc. Pac. 110 (753): 1259–76. Bibcode:1998PASP..110.1259G. doi:10.1086/316251. - Massey P; Silva DR; Levesque EM; Plez B; Olsen KAG; Clayton GC; Meynet G; Maeder A (2009). "Red Supergiants in the Andromeda Galaxy (M31)". Astrophys J. 703 (1): 420. arXiv:0907.3767. Bibcode:2009ApJ...703..420M. doi:10.1088/0004-637X/703/1/420. - Erdèlyi R; Ballai, I (2007). "Heating of the solar and stellar coronae: a review". Astron Nachr. 328 (8): 726. Bibcode:2007AN....328..726E. doi:10.1002/asna.200710803. - Russell CT (2001). "Solar wind and interplanetary magnetic filed: A tutorial". In Song, Paul; Singer, Howard J.; Siscoe, George L. Space Weather (Geophysical Monograph) (PDF). American Geophysical Union. pp. 73–88. ISBN 978-0-87590-984-4. - Alfvén H (1947). "Magneto-hydrodynamic waves, and the heating of the solar corona". Monthly Notices of the Royal Astronomical Society. 107 (2): 211. Bibcode:1947MNRAS.107..211A. doi:10.1093/mnras/107.2.211. - Parker EN (1988). 
"Nanoflares and the solar X-ray corona". Astrophys J. 330: 474. Bibcode:1988ApJ...330..474P. doi:10.1086/166485. - Sturrock PA; Uchida Y (1981). "Coronal heating by stochastic magnetic pumping". Astrophys J. 246: 331. Bibcode:1981ApJ...246..331S. doi:10.1086/158926. hdl:2060/19800019786. - Gopalswamy N; Mikic Z; Maia D; Alexander D; Cremades H; Kaufmann P; Tripathi D; Wang YM (2006). "The pre-CME Sun". Space Science Reviews. 123 (1–3): 303. Bibcode:2006SSRv..123..303G. doi:10.1007/s11214-006-9020-2. - "R.A.Howard, A Historical Perspective on Coronal Mass Ejections" (PDF). - Reddy F. "NASA's Swift Spies Comet Lulin". - The content of this article was adapted and expanded from http://imagine.gsfc.nasa.gov/ (Public Domain) |Wikimedia Commons has media related to Astronomy.| - How Many Known X-Ray (and Other) Sources Are There? - Is My Favorite Object an X-ray, Gamma-Ray, or EUV Source? - X-ray all-sky survey on WIKISKY - Audio – Cain/Gay (2009) Astronomy Cast – X-Ray Astronomy
We are living in an exciting time, where next-generation instruments and improved methods are leading to discoveries in astronomy, astrophysics, planetary science, and cosmology. As we look farther and in greater detail into the cosmos, some of the most enduring mysteries are finally being answered. Of particular interest are cosmic rays, the tiny particles consisting of protons, atomic nuclei, or stray electrons that have been accelerated to near the speed of light. These particles represent a major hazard for astronauts venturing beyond Earth’s protective magnetic field. At the same time, cosmic rays regularly interact with our atmosphere (producing “showers” of secondary particles) and may have even played a role in the evolution of life on Earth. Because cosmic rays carry an electric charge, their paths are scrambled as they travel through the Milky Way’s magnetic field, and astronomers have been hard-pressed to find where they originate. But thanks to a new study that examined 12 years of data from NASA’s Fermi Gamma-ray Space Telescope, scientists have confirmed that the most powerful originate from the shock waves of supernova remnants. The research was led by Ke Fang, an assistant professor with the Wisconsin IceCube Particle Astrophysics Center at the University of Wisconsin–Madison. She was joined by researchers from the Naval Research Laboratory, the Kavli Institute for Particle Astrophysics and Cosmology, the SLAC National Accelerator Laboratory, the Catholic University of America, and the Center for Research and Exploration in Space Science and Technology (CRESST) at NASA’s Goddard Space Flight Center. The paper that describes their findings recently appeared in Physical Review Letters. Mitigation against cosmic rays is one of the main considerations regarding future missions to the Moon and Mars. Like solar radiation, these high-energy particles pose a risk to astronaut health, both through their direct effect on skin tissue and organs and through the “showers” of secondary particles they produce. This occurs when cosmic rays come into contact with our atmosphere, which produces lower-energy particles like neutrons or electrons, most of which are deflected off into space. In space, however, cosmic rays produce showers after impacting dense material – such as radiation shielding. Aboard the ISS, the impact of these rays creates showers of secondary particles that pass through the hull and fill the interior with lower-energy radiation. While ISS astronauts can limit their exposure to this radiation by rotating back to Earth, long-duration missions will not have that luxury. For crewed missions to Mars, astronauts will spend up to a year and a half in transit, plus several months on the Martian surface. For this reason, knowing where cosmic rays come from and the kind of energies they can achieve is essential to developing improved protection and mitigation methods. For years, astronomers have been searching for where the highest-energy cosmic rays come from – those that exceed 1,000 trillion electron volts (1 PeV). These rays are ten times the energy generated by the Large Hadron Collider, the most powerful particle accelerator in the world, and are almost powerful enough to escape our galaxy. “Theorists think the highest-energy cosmic ray protons in the Milky Way reach a million billion electron volts (or PeV) energies,” explained Fang in a recent NASA press release.
“The precise nature of their sources, which we call PeVatrons, has been difficult to pin down.” Results from the Fermi Space Telescope, showing G106.3+2.7 (and J2229+6114) in different energy ranges. Credit: NASA/Fermi/Fang et al. 2022 While it is difficult to track cosmic rays back to their origin, scientists have observed how they collide with interstellar gas near supernovae, which produces gamma rays (the highest-energy light there is). From this, scientists suggested in a previous study (also based on Fermi data) that a significant fraction of primary cosmic rays originate from supernova explosions. For their study, Prof. Fang and her colleagues analyzed twelve years of Fermi data on SNR G106.3+2.7, a comet-shaped supernova remnant located about 2,600 light-years from Earth in the constellation Cepheus. Using its primary instrument – the Large Area Telescope (LAT) – Fermi detected billion-electron-volt (GeV) gamma rays from within G106.3+2.7’s extended tail. Similar observations were conducted using the Very Energetic Radiation Imaging Telescope Array System (VERITAS) instrument at the Fred Lawrence Whipple Observatory in southern Arizona, the High-Altitude Water Cherenkov Gamma-Ray Observatory in Mexico, and the Tibet AS-Gamma Experiment in China. These observatories detected even higher-energy gamma rays reaching up to 100 trillion electron volts (100 TeV). While cosmic ray particles would initially be trapped by the supernova remnant’s powerful magnetic fields, their path causes them to repeatedly cross the supernova’s shock wave. The particles gain speed and energy with each pass and eventually become too fast for the supernova remnant to hold onto them. At this point, they fly off into interstellar space, where they become incredibly difficult to trace back to their source. Co-author Henrike Fleischhack, a researcher from the Catholic University of America in Washington and NASA’s Goddard Space Flight Center, said: “This object has been a source of considerable interest for a while now, but to crown it as a PeVatron, we have to prove it’s accelerating protons. The catch is that electrons accelerated to a few hundred TeV can produce the same emission. Now, with the help of 12 years of Fermi data, we think we’ve made the case that G106.3+2.7 is indeed a PeVatron.” Illustration of NASA’s Fermi Gamma-ray Space Telescope at work. Credit: NASA GSFC The supernova remnant is also notable for the pulsar J2229+6114 at its northern end, which astronomers think was born from the same supernova. This pulsar emits gamma rays as it spins, creating a lighthouse-like strobing effect; these pulsed emissions are typically less than 10 GeV in energy. These emissions are only visible during the first half of the pulsar’s rotation and did not present any significant interference for Fermi. Nevertheless, the research team was able to isolate G106.3+2.7’s higher-energy emissions by analyzing gamma rays arriving from the latter part of the cycle. Their detailed analysis overwhelmingly shows that PeV protons were driving the powerful gamma-ray emission they observed. This research has demonstrated that supernova remnants are the source of the most powerful cosmic rays in the Universe, though some questions remain. While astronomers have identified other potential sources of PeVatrons – including Active Galactic Nuclei (AGNs) – supernova remnants remain at the top of the list. Yet out of about 300 known remnants, only a few have been found to emit gamma rays at these energies.
“So far, G106.3+2.7 is unique, but it may turn out to be the brightest member of a new population of supernova remnants that emit gamma rays reaching TeV energies,” Fang added. “More of them may be revealed through future observations by Fermi and very-high-energy gamma-ray observatories.” Further Reading: NASA, Physical Review Letters
Nonverbal communication (NVC) is the transmission of messages or signals through a nonverbal platform such as eye contact, facial expressions, gestures, posture, and body language. It includes the use of social cues, kinesics, distance (proxemics) and physical environments/appearance, of voice (paralanguage) and of touch (haptics). It can also include the use of time (chronemics) and eye contact and the actions of looking while talking and listening, frequency of glances, patterns of fixation, pupil dilation, and blink rate (oculesics). The study of nonverbal communication started in 1872 with the publication of The Expression of the Emotions in Man and Animals by Charles Darwin. Darwin began to study nonverbal communication as he noticed the interactions between animals such as lions, tigers, dogs etc. and realized they also communicated by gestures and expressions. For the first time, nonverbal communication was studied and its relevance questioned. Today, scholars argue that nonverbal communication can convey more meaning than verbal communication. Some scholars state that most people trust forms of nonverbal communication over verbal communication. Ray Birdwhistell concludes that nonverbal communication accounts for 60–70 percent of human communication, although according to other researchers the communication type is not quantifiable or does not reflect modern human communication, especially when people rely so much on written means. Just as speech contains nonverbal elements known as paralanguage, including voice quality, rate, pitch, loudness, and speaking style, as well as prosodic features such as rhythm, intonation, and stress, so written texts have nonverbal elements such as handwriting style, spatial arrangement of words, or the physical layout of a page. However, much of the study of nonverbal communication has focused on interaction between individuals, where it can be classified into three principal areas: environmental conditions where communication takes place, physical characteristics of the communicators, and behaviors of communicators during interaction. Nonverbal communication involves the conscious and unconscious processes of encoding and decoding. Encoding is defined as our ability to express emotions in a way that can be accurately interpreted by the receiver(s). Decoding is called "nonverbal sensitivity", defined as the ability to take this encoded emotion and interpret its meaning accurately to match what the sender intended. Encoding is the act of generating information such as facial expressions, gestures, and postures. Encoding information utilizes signals which we may think to be universal. Decoding is the interpretation of information from received sensations given by the encoder. Decoding information utilizes knowledge one may have of certain received sensations. For example, if an encoder holds up two fingers, the decoder may know from previous experience that this means two. There are some "decoding rules", which state that in some cases a person may be able to properly assess some nonverbal cues and understand their meaning, whereas others might not be able to do so as effectively. Both of these skills can vary from person to person, with some people being better than others at one or both. These individuals would be more socially conscious and have better interpersonal relationships. 
An example of this would be gender: women are found to be better decoders than men, since they are more observant of nonverbal cues as well as more likely to use them. Culture plays an important role in nonverbal communication, and it is one aspect that helps to influence how learning activities are organized. In many Indigenous American communities, for example, there is often an emphasis on nonverbal communication, which acts as a valued means by which children learn. In this sense, learning is not dependent on verbal communication; rather, it is nonverbal communication which serves as a primary means of not only organizing interpersonal interactions, but also conveying cultural values, and children learn how to participate in this system from a young age. According to some authors, nonverbal communication represents two-thirds of all communications. Nonverbal communication can portray a message both vocally and with the correct body signals or gestures. Body signals comprise physical features, conscious and unconscious gestures and signals, and the mediation of personal space. The wrong message can also be established if the body language conveyed does not match a verbal message. Nonverbal communication strengthens a first impression in common situations like attracting a partner or in a business interview: impressions are on average formed within the first four seconds of contact. First encounters or interactions with another person strongly affect a person's perception. When the other person or group is absorbing the message, they are focused on the entire environment around them, meaning the other person uses all five senses in the interaction: 83% sight, 11% hearing, 3% smell, 2% touch and 1% taste. Many indigenous cultures use nonverbal communication in the integration of children at a young age into their cultural practices. Children in these communities learn through observing and pitching in, through which nonverbal communication is a key aspect of observation. History of research Scientific research on nonverbal communication and behavior started in 1872 with the publication of Charles Darwin's book The Expression of the Emotions in Man and Animals. In the book, Darwin argued that all mammals, both humans and animals, showed emotion through facial expressions. He posed questions such as: "Why do our facial expressions of emotions take the particular forms they do?" and "Why do we wrinkle our nose when we are disgusted and bare our teeth when we are enraged?" Darwin attributed these facial expressions to serviceable associated habits, which are behaviors that earlier in our evolutionary history had specific and direct functions. For example, for a species that attacked by biting, baring the teeth was a necessary act before an assault and wrinkling the nose reduced the inhalation of foul odors. In response to the question asking why facial expressions persist even when they no longer serve their original purposes, Darwin's predecessors have developed a highly valued explanation. According to Darwin, humans continue to make facial expressions because they have acquired communicative value throughout evolutionary history. In other words, humans utilize facial expressions as external evidence of their internal state. 
Although The Expression of the Emotions in Man and Animals was not one of Darwin's most successful books in terms of its quality and overall impact in the field, his initial ideas sparked an abundance of research on the types, effects, and expressions of nonverbal communication and behavior. Despite the introduction of nonverbal communication in the 1800s, the emergence of behaviorism in the 1920s paused further research on nonverbal communication. Behaviorism is defined as the theory of learning that describes people's behavior as acquired through conditioning. Behaviorists such as B. F. Skinner trained pigeons to perform various behaviors to demonstrate how animals learn behaviors through rewards. While most psychology researchers were exploring behaviorism, the study of nonverbal communication as recorded on film began in 1955–56 at the Center for Advanced Study in Behavioral Sciences through a project which came to be called the Natural History of an Interview. The initial participants included two psychiatrists, Frieda Fromm-Reichmann and Henry Brosin, two linguists, Norman A. McQuown and Charles Hockett, and two anthropologists, Clyde Kluckhohn and David M. Schneider (these last two withdrew by the end of 1955 and did not participate in the major group project). In their place, two other anthropologists, Ray Birdwhistell, already then known as the founder of kinesics, the study of body motion communication, and Gregory Bateson, known more generally as a human communication theorist, both joined the team in 1956. Albert Scheflen and Adam Kendon were among those who joined one of the small research teams continuing research once the year at CASBS ended. The project analyzed a film made by Bateson, using an analytic method called at the time natural history, and later, mostly by Scheflen, context analysis. The result remained unpublished, as it was enormous and unwieldy, but it was available on microfilm by 1971. The method involves transcribing filmed or videotaped behavior in excruciating detail, and was later used in studying the sequence and structure of human greetings, social behaviors at parties, and the function of posture during interpersonal interaction. Research on nonverbal communication took off during the mid-1960s, driven by a number of psychologists and researchers. Michael Argyle and Janet Dean Fodor, for example, studied the relationship between eye contact and conversational distance. Ralph V. Exline examined patterns of looking while speaking and looking while listening. Eckhard Hess produced several studies pertaining to pupil dilation that were published in Scientific American. Robert Sommer studied the relationship between personal space and the environment. Robert Rosenthal discovered that expectations made by teachers and researchers can influence their outcomes, and that subtle, nonverbal cues may play an important role in this process. Albert Mehrabian studied the nonverbal cues of liking and immediacy. By the 1970s, a number of scholarly volumes in psychology summarized the growing body of research, such as Shirley Weitz's Nonverbal Communication and Marianne LaFrance and Clara Mayo's Moving Bodies. Popular books included Body Language (Fast, 1970), which focused on how to use nonverbal communication to attract other people, and How to Read a Person Like a Book (Nierenberg & Calero, 1971) which examined nonverbal behavior in negotiation situations. The journal Environmental Psychology and Nonverbal Behavior was founded in 1976. 
In 1970, Argyle hypothesized that although spoken language is used for communicating the meaning of events external to the person communicating, the nonverbal codes are used to create and strengthen interpersonal relationships. When someone wishes to avoid conflicting or embarrassing events during communication, it is considered proper and correct by the hypothesis to communicate attitudes towards others non-verbally instead of verbally. Along with this philosophy, Michael Argyle also found and concluded in 1988 that there are five main functions of nonverbal body behavior and gestures in human communications: self-presentation of one's whole personality, rituals and cultural greetings, expressing interpersonal attitudes, expressing emotions, and accompanying speech in managing the cues set in the interactions between the speaker and the listener. It takes just one-tenth of a second for someone to judge and make their first impression. According to a study from Princeton University, this short amount of time is enough for a person to determine several attributes about an individual. These attributes included "attractiveness, likeability, trustworthiness, competence, and aggressiveness." A first impression is a lasting non-verbal communicator. The way a person portrays themselves on the first encounter is a non-verbal statement to the observer. Presentation can include clothing and other visible attributes such as facial expressions or facial traits in general. Negative impressions can also be based on presentation and on personal prejudice. First impressions, although sometimes misleading, can in many situations be an accurate depiction of others. Posture is a nonverbal cue associated with positioning, and these two are used as sources of information about an individual's characteristics, attitudes, and feelings about themselves and other people. There are many different types of body positioning to portray certain postures, including slouching, towering, legs spread, jaw thrust, shoulders forward, and arm crossing. The posture or bodily stance exhibited by individuals communicates a variety of messages whether good or bad. A study, for instance, identified around 200 postures that are related to maladjustment and withholding of information. Posture can be used to determine a participant's degree of attention or involvement, the difference in status between communicators, and the level of fondness a person has for the other communicator, depending on body "openness".: 9 It can also be effectively used as a way for an individual to convey a desire to increase, limit, or avoid interaction with another person. Studies investigating the impact of posture on interpersonal relationships suggest that mirror-image congruent postures, where one person's left side is parallel to the other person's right side, lead to favorable perception of communicators and positive speech; a person who displays a forward lean or decreases a backward lean also signifies positive sentiment during communication. Posture can be situation-relative, that is, people will change their posture depending on the situation they are in. This can be demonstrated in the case of relaxed posture when an individual is within a nonthreatening situation and the way one's body tightens or becomes rigid when under stress. Clothing is one of the most common forms of non-verbal communication. The study of clothing and other objects as a means of non-verbal communication is known as artifactics or objectics. 
The types of clothing that an individual wears convey nonverbal cues about his or her personality, background and financial status, and how others will respond to them. An individual's clothing style can demonstrate their culture, mood, level of confidence, interests, age, authority, and values/beliefs. For instance, Jewish men may wear a yarmulke to outwardly communicate their religious belief. Similarly, clothing can communicate what nationality a person or group is; for example, in traditional festivities Scottish men often wear kilts to signal their culture. Aside from communicating a person's beliefs and nationality, clothing can be used as a nonverbal cue to attract others. Men and women may shower themselves with accessories and high-end fashion in order to attract partners they are interested in. In this case, clothing is used as a form of self-expression in which people can flaunt their power, wealth, sex appeal, or creativity. A study of the clothing worn by women attending discothèques, carried out in Vienna, Austria, showed that in certain groups of women (especially women who were without their partners), motivation for sex and levels of sexual hormones were correlated with aspects of their clothing, especially the amount of skin displayed and the presence of sheer clothing. The way one chooses to dress tells a lot about one's personality. In fact, there was a study done at the University of North Carolina, which compared the way undergraduate women chose to dress and their personality types. The study showed that women who dressed "primarily for comfort and practicality were more self-controlled, dependable, and socially well adjusted". Women who didn't like to stand out in a crowd had typically more conservative and traditional views and beliefs. Clothing, although non-verbal, tells people what the personality of the individual is like. The way a person dresses is typically rooted in deeper internal motivations such as emotions, experiences and culture. Clothing expresses who the person is, or even who they want to be that day. It shows other people who they want to be associated with, and where they fit in. Clothing can start relationships, because it clues other people in on what the wearer is like. Gestures may be made with the hands, arms or body, and also include movements of the head, face and eyes, such as winking, nodding, or rolling one's eyes. Although the study of gesture is still in its infancy, some broad categories of gestures have been identified by researchers. The most familiar are the so-called emblems or quotable gestures. These are conventional, culture-specific gestures that can be used as replacement for words, such as the hand wave used in western cultures for "hello" and "goodbye". A single emblematic gesture can have a very different significance in different cultural contexts, ranging from complimentary to highly offensive. For a list of emblematic gestures, see List of gestures. There are some universal gestures like the shoulder shrug. Gestures can also be categorized as either speech independent or speech related. Speech-independent gestures are dependent upon culturally accepted interpretation and have a direct verbal translation.: 9 A wave or a peace sign are examples of speech-independent gestures. Speech-related gestures are used in parallel with verbal speech; this form of nonverbal communication is used to emphasize the message that is being communicated. 
Speech-related gestures are intended to provide supplemental information to a verbal message, such as pointing to an object of discussion. Facial expressions, more than anything, serve as a practical means of communication. With all the various muscles that precisely control mouth, lips, eyes, nose, forehead, and jaw, human faces are estimated to be capable of more than ten thousand different expressions. This versatility makes non-verbals of the face extremely efficient and honest, unless deliberately manipulated. In addition, many of these emotions, including happiness, sadness, anger, fear, surprise, disgust, shame, anguish and interest are universally recognized. Displays of emotions can generally be categorized into two groups: negative and positive. Negative emotions usually manifest as increased tension in various muscle groups: tightening of jaw muscles, furrowing of forehead, squinting eyes, or lip occlusion (when the lips seemingly disappear). In contrast, positive emotions are revealed by the loosening of the furrowed lines on the forehead, relaxation of the muscles around the mouth, and widening of the eye area. When individuals are truly relaxed and at ease, the head will also tilt to the side, exposing our most vulnerable area, the neck. This is a high-comfort display, often seen during courtship, that is nearly impossible to mimic when tense or suspicious. Gestures can be subdivided into three groups: adapters, symbolic gestures, and conversational gestures. Some hand movements are not considered to be gestures. They consist of manipulations either of the person or some object (e.g. clothing, pencils, eyeglasses)—the kinds of scratching, fidgeting, rubbing, tapping, and touching that people often do with their hands. These behaviors can show that a person is experiencing anxiety or a feeling of discomfort, typical when the individual is not the one in control of the conversation or situation and therefore expresses this uneasiness subconsciously. Such behaviors are referred to as adapters. They may not be perceived as meaningfully related to the speech they accompany, but may serve as the basis for dispositional inferences of the speaker's emotion (nervous, uncomfortable, bored). These types of movements are believed to express the unconscious thoughts and feelings of a person, or those thoughts and emotions one is trying to consciously hide. Other hand movements are gestures. They are movements with specific, conventionalized meanings called symbolic gestures. They are the exact opposite of adapters, since their meanings are intended to be communicated and they have a specific meaning for the person who gives the gesture and the person who receives it. Familiar symbolic gestures include the "raised fist," "bye-bye," and "thumbs up." In contrast to adapters, symbolic gestures are used intentionally and serve a clear communicative function. Sign languages are highly developed systems of symbolic gesture. Every culture has its own set of gestures, some of which are unique only to a specific culture. Very similar gestures can have very different meanings across cultures. Symbolic gestures are usually used in the absence of speech but can also accompany speech. The middle ground between adapters and symbolic gestures is occupied by conversational gestures. These gestures do not refer to actions or words but do accompany speech. Conversational gestures are hand movements that accompany speech and are related to the speech they accompany. 
Though they do accompany speech, conversational gestures are not seen in the absence of speech and are only made by the person who is speaking. There are a few types of conversational gestures, specifically motor and lexical movements. Motor movements are those which are rhythmic and repetitive, do not have to be accompanied by anything spoken due to their simple meaning, and the speaker's hand usually sticks to one position. When paired with verbal communication, they can be used to stress certain syllables. An example of this would be pointing in the direction of an individual and saying, "That way." In this case, the "That" in the sentence would be stressed by the movements. Lexical movements are more complex, not rhythmic or repetitive, but rather lengthy and varied. An example of this would be something like giving elaborate directions to somewhere and pairing that with various hand movements to signal the various turns to take. According to Edward T. Hall, the amount of space we maintain between ourselves and the persons with whom we are communicating shows the importance of the science of proxemics. In this process, it is seen how we feel towards the others at that particular time. Within American culture Hall defines four primary distance zones: (i) intimate (touching to eighteen inches) distance, (ii) personal (eighteen inches to four feet) distance, (iii) social (four to twelve feet) distance, and (iv) public (more than twelve feet) distance. Intimate distance is considered appropriate for familiar relationships and indicates closeness and trust. Personal distance is still close but keeps another "at arm's length" and is considered the most comfortable distance for most of our interpersonal contact, while social distance is used for the kind of communication that occurs in business relationships and, sometimes, in the classroom. Public distance occurs in situations where two-way communication is not desirable or possible. Eye contact is the instance when two people look at each other's eyes at the same time; it is the primary nonverbal way of indicating engagement, interest, attention and involvement. Some studies have demonstrated that people use their eyes to indicate interest. This includes frequently recognized actions of winking and movements of the eyebrows. Disinterest is highly noticeable when little or no eye contact is made in a social setting. When an individual is interested, however, the pupils will dilate. According to Ekman, "Eye contact (also called mutual gaze) is another major channel of nonverbal communication. The duration of eye contact is its most meaningful aspect." Generally speaking, the longer there is established eye contact between two people, the greater the intimacy levels. Gaze comprises the actions of looking while talking and listening. The length of a gaze, the frequency of glances, patterns of fixation, pupil dilation, and blink rate are all important cues in nonverbal communication. "Liking generally increases as mutual gazing increases." Along with the detection of disinterest, deceit can also be observed in a person. Hogan states "when someone is being deceptive their eyes tend to blink a lot more. Eyes act as leading indicator of truth or deception." Both nonverbal and verbal cues are useful when detecting deception. It is typical for people who are detecting lies to rely consistently on verbal cues but this can hinder how well they detect deception. 
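As a small illustration of the proxemic distance zones just described, the following sketch maps a measured conversational distance onto Hall's four zones for American culture. It is written in Python purely for illustration; the function name hall_zone and the demonstration loop are hypothetical, not drawn from any source, and only the numeric boundaries (eighteen inches, four feet, twelve feet) come from the text above.

def hall_zone(distance_inches: float) -> str:
    """Classify an interpersonal distance (in inches) into one of Edward T. Hall's
    four proxemic zones for American culture, using the ranges quoted above."""
    if distance_inches < 0:
        raise ValueError("distance cannot be negative")
    if distance_inches <= 18:          # touching to eighteen inches
        return "intimate"
    if distance_inches <= 4 * 12:      # eighteen inches to four feet
        return "personal"
    if distance_inches <= 12 * 12:     # four to twelve feet
        return "social"
    return "public"                    # more than twelve feet

# Example: 6 in -> intimate, 30 in -> personal, 90 in -> social, 200 in -> public
for d in (6, 30, 90, 200):
    print(d, "inches ->", hall_zone(d))

The cut-off values are the only part taken from the text; Hall's zones are cultural tendencies rather than hard thresholds, so a classifier like this should be read as a rough heuristic rather than a definitive model.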
Those who are lying and those who are telling the truth possess different forms of nonverbal and verbal cues and this is important to keep in mind. In addition, it is important to note that understanding the cultural background of a person will influence how easily deception is detectable because nonverbal cues may differ depending on the culture. In addition to eye contact these nonverbal cues can consist of physiological aspects including pulse rate as well as levels of perspiration. In addition eye aversion can be predictive of deception. Eye aversion is the avoidance of eye contact. Eye contact and facial expressions provide important social and emotional information. Overall, as Pease states, "Give the amount of eye contact that makes everyone feel comfortable. Unless looking at others is a cultural no-no, lookers gain more credibility than non-lookers" In concealing deception, nonverbal communication makes it easier to lie without being revealed. This is the conclusion of a study where people watched made-up interviews of persons accused of having stolen a wallet. The interviewees lied in about 50% of the cases. People had access to either written transcript of the interviews, or audio tape recordings, or video recordings. The more clues that were available to those watching, the larger was the trend that interviewees who actually lied were judged to be truthful. That is, people that are clever at lying can use tone of voice and facial expressions to give the impression that they are truthful. Contrary to popular belief, a liar does not always avoid eye contact. In an attempt to be more convincing, liars deliberately made more eye contact with interviewers than those that were telling the truth. However, there are many cited examples of cues to deceit, delivered via nonverbal (paraverbal and visual) communication channels, through which deceivers supposedly unwittingly provide clues to their concealed knowledge or actual opinions. Most studies examining the nonverbal cues to deceit rely upon human coding of video footage (c.f. Vrij, 2008), although a recent study also demonstrated bodily movement differences between truth-tellers and liars using an automated body motion capture system. While not traditionally thought of as "talk," nonverbal communication has been found to contain highly precise and symbolic meanings, similar to verbal speech. However the meanings in nonverbal communication are conveyed through the use of gesture, posture changes, and timing. Nuances across different aspects of nonverbal communication can be found in cultures all around the world. These differences can often lead to miscommunication between people of different cultures, who usually do not mean to offend. Differences can be based in preferences for mode of communication, like the Chinese, who prefer silence over verbal communication.: 69 Differences can even be based on how cultures perceive the passage of time. 
Chronemics, how people handle time, can be categorized in two ways: polychronic, in which people do many activities at once, common in Italy and Spain, or monochronic, in which people do one thing at a time, which is common in America.: 422 Because nonverbal communication can vary across many axes—gestures, gaze, clothing, posture, direction, or even environmental cues like lighting—there is a lot of room for cultural differences.: 8 In Japan, a country which prides itself on the best customer service, workers tend to use wide arm gestures to give clear directions to strangers—accompanied by the ever-present bow to indicate respect. One of the main factors that differentiates nonverbal communication across cultures is high versus low context. Context relates to certain events and the meaning that is ultimately derived from them. “High-context” cultures rely mostly on nonverbal cues and gestures, using elements such as the closeness of the relationships they have with others, strict social hierarchies and classes, and deep cultural tradition and widely known beliefs and rules. In contrast, “low-context” cultures depend largely on words and verbal communication, where communications are direct and social hierarchies are far less rigid and more relaxed. Gestures vary widely across cultures in how they are used and what they mean. A common example is pointing. In the United States, pointing with a finger or hand is used to indicate something or to beckon, as in "come here please" when calling a dog. But pointing with one finger is also considered to be rude by some cultures. Those from Asian cultures typically use their entire hand to point to something. Other examples include sticking your tongue out. In Western countries, it can be seen as mockery, but in Polynesia it serves as a greeting and a sign of reverence.: 417 Clapping is a North American way of applauding, but in Spain it is used to summon a waiter at a restaurant. Differences in nodding and shaking the head to indicate agreement and disagreement also exist. Northern Europeans nod their heads up and down to say "yes", and shake their heads from side to side to say "no". But the Greeks have for at least three thousand years used the upward nod for disagreement and the downward nod for agreement.: 417 There are many ways of waving goodbye: Americans face the palm outward and move the hand side to side, Italians face the palm inward and move the fingers facing the other person, French and Germans hold the hand horizontal and move the fingers toward the person leaving.: 417 Also, it is important to note that gestures are used in more informal settings and more often by children.: 417 People in the United States commonly use the "OK" hand gesture to give permission and allow an action. In Japan, however, the same sign means "money". It refers to "zero" or "nothing" in several cultures besides these two (Argentina, Belgium, France, and Portugal). To Eastern European cultures that same "OK" sign is considered a vulgar swearing gesture. Speech-independent gestures are nonverbal cues that communicate a word or an expression, most commonly a dictionary definition. Although there are differences in nonverbal gestures across cultures, speech-independent gestures must have an agreed understanding among people affiliated with that culture or subculture on what that gesture's interpretation is. As most humans use gestures to better clarify their speech, speech-independent gestures don't rely on speech for their meaning. 
Usually they take the form of a single gesture. Many speech-independent gestures are made with the hand; the "ring" gesture, for example, usually comes across as asking someone if they are okay. There are several that can be performed through the face. For example, a nose wrinkle could universally mean disapproval or disgust. Nodding your head up and down or side to side indicates understanding, or a lack of it, when the speaker is talking. Even though a speech-independent gesture doesn't need actual speech to be understood, it still needs context. Using your middle finger is a gesture that could be used within different contexts. It could be comical or derogatory. The only way to know is to analyze the other behaviors surrounding it, depending on who the speaker is and who the speaker is addressing. Displays of emotion Emotions are a key factor in nonverbal communication. Just as gestures and other hand movements vary across cultures, so does the way people display their emotions. For example, "In many cultures, such as the Arab and Iranian cultures, people express grief openly. They mourn out loud, while in Asian cultures, the general belief is that it is unacceptable to show emotion openly." For people in Westernized countries, laughter is a sign of amusement, but in some parts of Africa it is a sign of wonder or embarrassment.: 417 Emotional expression varies with culture. Native Americans tend to be more reserved and less expressive with emotions.: 44 Frequent touches are common for Chinese people; however, such actions as touching, patting, hugging or kissing in America are less frequent and not often publicly displayed.: 68 According to Rebecca Bernstein (from Point Park University), "Winking is a facial expression particularly varied in meaning." In Latin culture, a wink is a display or invitation of romantic pursuit. The Yoruba (Nigeria) have taught their children to follow certain nonverbal commands, such as winking, which tells them it's time to leave the room. To the Chinese it comes off as an offensive gesture. According to Matsumoto and Juang, the nonverbal motions of different people indicate important channels of communication. Nonverbal actions should match and harmonize with the message being portrayed, otherwise confusion will occur. For instance, an individual would normally not be seen smiling and gesturing broadly when saying a sad message. The authors state that nonverbal communication is very important to be aware of, especially when comparing gestures, gaze, and tone of voice amongst different cultures. While Latin American cultures embrace big speech gestures, Middle Eastern cultures are relatively more modest in public and are not expressive. Within cultures, different rules are made about staring or gazing. Women may especially avoid eye contact with men because it can be taken as a sign of sexual interest. In some cultures, gaze can be seen as a sign of respect. In Western culture, eye contact is interpreted as attentiveness and honesty. In Hispanic, Asian, Middle Eastern, and Native American cultures, eye contact is thought to be disrespectful or rude, and lack of eye contact does not mean that a person is not paying attention. Voice is a category that varies between cultures. Depending on whether the culture is expressive or non-expressive, many variants of the voice can depict different reactions. The acceptable physical distance is another major difference in the nonverbal communication between cultures. 
In Latin America and the Middle East the acceptable distance is much shorter than what most Europeans and Americans feel comfortable with. This is why an American or a European might wonder why the other person is invading his or her personal space by standing so close, while the other person might wonder why the American/European is standing so far from him or her. In addition, for Latin Americans, the French, Italians, and Arabs the distance between people is much closer than the distance for Americans; in general for these close-distance groups, 1 foot of distance is for lovers, 1.5–4 feet of distance is for family and friends, and 4–12 feet is for strangers.: 421 By contrast, most Native Americans value distance to protect themselves.: 43 Children's learning in indigenous American communities Nonverbal communication is commonly used to facilitate learning in indigenous American communities. Nonverbal communication is pivotal for collaborative participation in shared activities, as children from indigenous American communities will learn how to interact using nonverbal communication by intently observing adults. Nonverbal communication allows for continuous keen observation and signals to the learner when participation is needed. A study of children from both US Mexican (with presumed indigenous backgrounds) and European American heritages who watched a video of children working together without speaking found that the Mexican-heritage children were far more likely to describe the children's actions as collaborative, saying that the children in the video were "talking with their hands and with their eyes." A key characteristic of this type of nonverbal learning is that children have the opportunity to observe and interact with all parts of an activity. Many Indigenous American children are in close contact with adults and other children who are performing the activities that they will eventually master. Objects and materials become familiar to the child as the activities are a normal part of everyday life. Learning is done in an extremely contextualized environment rather than one specifically tailored to be instructional. For example, the direct involvement that Mazahua children take in the marketplace is used as a type of interactional organization for learning without explicit verbal instruction. Children learn how to run a market stall, take part in caregiving, and also learn other basic responsibilities through non-structured activities, cooperating voluntarily within a motivational context to participate. Not explicitly instructing or guiding the children teaches them how to integrate into small coordinated groups to solve a problem through consensus and shared space. These Mazahua separate-but-together practices have shown that participation in everyday interaction and later learning activities establishes enculturation that is rooted in nonverbal social experience. As the children participate in everyday interactions, they are simultaneously learning the cultural meanings behind these interactions. Children's experience with nonverbally organized social interaction helps constitute the process of enculturation. 
In some Indigenous communities of the Americas, children reported that one of their main reasons for working in their home was to build unity within the family, the same way they desire to build solidarity within their own communities. Most indigenous children learn the importance of putting in this work in the form of nonverbal communication. Evidence of this can be observed in a case study where children are guided through the task of folding a paper figure by observing the posture and gaze of those who guide them through it. This is projected onto homes and communities, as children wait for certain cues from others to initiate cooperation and collaboration. One aspect of nonverbal communication that aids in conveying these precise and symbolic meanings is "context-embeddedness": the idea that many children in Indigenous American communities are closely involved in community endeavors, both spatially and relationally, which helps to promote nonverbal communication, given that words are not always necessary. When children are closely related to the context of the endeavor as active participants, coordination is based on a shared reference, which helps to allow, maintain, and promote nonverbal communication. The idea of "context-embeddedness" allows nonverbal communication to be a means of learning within Native American Alaskan Athabaskan and Cherokee communities. By observing various family and community social interactions, social engagement is dominated through nonverbal communication. For example, when children elicit thoughts or words verbally to their elders, they are expected to structure their speech carefully. This demonstrates cultural humility and respect, as excessive acts of speech and shifts in conversational genre reveal weakness and disrespect. This careful self-censorship exemplifies traditional social interaction of Athabaskan and Cherokee Native Americans who are mostly dependent on nonverbal communication. Nonverbal cues are used by most children in the Warm Springs Indian Reservation community within the parameters of their academic learning environments. This includes referencing Native American religion through stylized hand gestures in colloquial communication, verbal and nonverbal emotional self-containment, and less movement of the lower face to structure attention on the eyes during face-to-face engagement. Therefore, children's approach to social situations within a reservation classroom, for example, may act as a barrier to a predominantly verbal learning environment. Most Warm Springs children benefit from a learning model that suits a nonverbal communicative structure of collaboration, traditional gesture, observational learning and shared references. It is important to note that while nonverbal communication is more prevalent in Indigenous American communities, verbal communication is also used. Typically, verbal communication does not substitute for one's involvement in an activity, but instead acts as additional guidance or support towards the completion of an activity. Disadvantages of nonverbal communication across cultures People who have studied mainly nonverbal communication may not be as skilled as verbal speakers, so much of what they are portraying is through gestures and facial expressions, which can lead to major cultural barriers if they already have conflict with diverse cultures. "This can lead to intercultural conflict (according to Marianna Pogosyan Ph.D.), misunderstandings and ambiguities in communication, despite language fluency." 
Nonverbal communication can make the difference between bringing cultures together in understanding one another and appearing authentic, or it can push people farther away due to misunderstandings in how different groups see certain nonverbal cues or gestures. From birth, children in various cultures are taught the gestures and cues their culture defines as universal, which is not the case for other cultures, though some movements are indeed universal. Evidence suggests humans all smile when happy about something and frown when something is upsetting or bad. "In the study of nonverbal communications, the limbic brain is where the action is...because it is the part of the brain that reacts to the world around us reflexively and instantaneously, in real time, and without thought." There is evidence that the nonverbal cues made from person to person do not entirely have something to do with environment. Along with gestures, phenotypic traits can also convey certain messages in nonverbal communication, for instance, eye color, hair color and height. Research into height has generally found that taller people are perceived as being more impressive. Melamed and Bozionelos (1992) studied a sample of managers in the United Kingdom and found that height was a key factor in who was promoted. Height can have benefits and drawbacks too. "While tall people often command more respect than short people, height can also be detrimental to some aspects of one-to-one communication, for instance, where you need to 'talk on the same level' or have an 'eye-to-eye' discussion with another person and do not want to be perceived as too big for your boots." Chronemics is the way time is used. Our use of time can communicate and send messages, nonverbally. The way we use time and give or don't give our time to others can communicate different messages. Chronemics can send messages to others about what we value and also send messages about power. "When you go to see someone who is in a position of power over you, such as your supervisor, it is not uncommon to be kept waiting. However, you would probably consider it bad form to make a more powerful person wait for you. Indeed, the rule seems to be that the time of powerful people is more valuable than the time of less powerful people." Movement and body position Kinesics is defined as movements, more specifically the study of our movements involving our hands, body, and face. This form of nonverbal communication is powerful in the messages it sends to those witnessing them. The term was first coined by Ray Birdwhistell, who considered the term body language inaccurate and instead opted to explain it as nonverbal behaviors stemming from body movement. Research around this behavior provides some examples, such as someone casually smiling and leaning forward, as well as maintaining eye contact to radiate a non-dominating and intimate demeanor. In contrast, someone leaning back, with a stoic facial expression and little to no eye contact, could emit an unfriendly and dominating demeanor. Additional research expresses that eye contact is an important part of nonverbal communication involved in kinesics, as longer and appropriate levels of eye contact give an individual credibility. The opposite is said for those who do not maintain eye contact, as they are likely to be deemed distrustful. More eye contact was also found to be related to higher levels of likability and believability from the people they interacted with. 
A real-life example of this involves service workers: in a study, it was found that workers who welcomed customers with smiles were seen as warmer individuals than those who did not smile. Customers reported that those without smiles and open body movements, such as waving or handshaking, were lacking warmth and deemed less friendly. Haptics: touching in communication Haptics is the study of touching as nonverbal communication, and haptic communication refers to how people and other animals communicate via touching. Touches among humans that can be defined as communication include handshakes, holding hands, kissing (cheek, lips, hand), back slapping, high fives, a pat on the shoulder, and brushing an arm. Touching of oneself may include licking, picking, holding, and scratching.: 9 These behaviors are referred to as "adapters" or "tells" and may send messages that reveal the intentions or feelings of a communicator and a listener. The meaning conveyed from touch is highly dependent upon the culture, the context of the situation, the relationship between communicators, and the manner of touch.: 10 Touch is an extremely important sense for humans; as well as providing information about surfaces and textures it is a component of nonverbal communication in interpersonal relationships, and vital in conveying physical intimacy. It can be both sexual (such as kissing) and platonic (such as hugging or tickling). Touch is the earliest sense to develop in the fetus. Human babies have been observed to have enormous difficulty surviving if they do not possess a sense of touch, even if they retain sight and hearing. Babies who can perceive through touch, even without sight and hearing, tend to fare much better. In chimpanzees, the sense of touch is highly developed. As newborns, they see and hear poorly but cling strongly to their mothers. Harry Harlow conducted a controversial study involving rhesus monkeys and observed that monkeys reared with a "terry cloth mother," a wire feeding apparatus wrapped in soft terry cloth that provided a level of tactile stimulation and comfort, were considerably more emotionally stable as adults than those with a mere wire mother (Harlow, 1958). Touching is treated differently from one country to another and socially acceptable levels of touching vary from one culture to another (Remland, 2009). In Thai culture, for example, touching someone's head may be thought rude. Remland and Jones (1995) studied groups of people communicating and found that touching was rare among the English (8%), the French (5%) and the Dutch (4%) compared to Italians (14%) and Greeks (12.5%). Striking, pushing, pulling, pinching, kicking, strangling and hand-to-hand fighting are forms of touch in the context of physical abuse. Proxemics is defined as the use of space as a form of communication, and includes how far or near you position yourself from others; it can be influenced by culture, race/ethnicity, gender, and age. Edward T. Hall coined the term while working with diplomats, when he realized that culture influences how people use space in communication, and published his findings on proxemics in 1959 as The Silent Language. For example, in high contact cultures people are generally more comfortable in a closer proximity, whereas individuals in low contact cultures feel more comfortable with a greater amount of personal space. 
Hall concluded that proxemics could cause misunderstandings between cultures, as cultures' use of proxemics varies and what is customary in one culture may range from being confusing to being offensive to members of a different culture. Intimate space is any distance less than 18 inches, and is most commonly used by individuals when they are engaging with someone with whom they feel very comfortable, such as a spouse, partner, friend, child, or parent. Personal space is a distance of 18 inches to 4 feet and is usually used when individuals are interacting with friends. Social distance is the most common type of proximity as it is used when communicating with colleagues, classmates, acquaintances, or strangers. Public distance creates the greatest gap between the individual and the audience and is categorized as distances greater than 12 feet; it is often used for speeches, lectures, or formal occasions. In relation to verbal communication When communicating face-to-face with someone, it's sometimes hard to differentiate which parts of conversing are communicated verbally and which non-verbally. Other studies done on the same subject have concluded that in more relaxed and natural settings of communication, verbal and non-verbal signals and cues can contribute in surprisingly similar ways. Argyle, using video tapes shown to the subjects, analysed the communication of submissive/dominant attitude (high and low context: high context resorting to stricter social classes and taking a shorter, quicker response route to portray dominance; low context being the opposite, taking time to explain everything and putting a lot of importance on communication and building trust and respect with others in a submissive and relaxed manner), and found that non-verbal cues had 4.3 times the effect of verbal cues. The most important effect was that body posture communicated superior status (specific to the culture and context the person grew up in) in a very efficient way. On the other hand, a study by Hsee et al. had subjects judge a person on the dimension happy/sad and found that words spoken with minimal variation in intonation had an impact about 4 times larger than facial expressions seen in a film without sound. Therefore, when considering certain non-verbal mannerisms such as facial expressions and physical cues, they can conflict in meaning when compared to spoken language and emotions. Different set-ups and scenarios would yield different responses and meanings when using both types of communication. In other ways they can complement each other, provided they're used together wisely during a conversation. When seeking to communicate effectively, it's important that the nonverbal conversation supports the verbal conversation, and vice versa. If the nonverbal cues converge with what we are saying verbally, then our message is further reinforced. Mindfulness is one technique that can help improve our awareness of NVC. If we become more mindful and present to how our body is moving, then we can better control our external nonverbal communication, which results in more effective communication. When communicating, nonverbal messages can interact with verbal messages in six ways: repeating, conflicting, complementing, substituting, regulating and accenting/moderating. Verbal and nonverbal messages within the same interaction can sometimes send opposing or conflicting signals. 
A person verbally expressing a statement of truth while simultaneously fidgeting or avoiding eye contact may convey a mixed message to the receiver in the interaction. Conflicting messages may occur for a variety of reasons often stemming from feelings of uncertainty, ambivalence, or frustration. When mixed messages occur, nonverbal communication becomes the primary tool people use to attain additional information to clarify the situation; great attention is placed on bodily movements and positioning when people perceive mixed messages during interactions. Definitions of nonverbal communication create a limited picture in our minds, but there are ways to create a clearer one. There are different dimensions of verbal and nonverbal communication that have been discovered. They are (1) structure versus non-structure, (2) linguistic versus non-linguistic, (3) continuous versus discontinuous, (4) learned versus innate, and (5) left versus right hemispheric processing.: 7 Accurate interpretation of messages is made easier when nonverbal and verbal communication complement each other. Nonverbal cues can be used to elaborate on verbal messages to reinforce the information sent when trying to achieve communicative goals; messages have been shown to be remembered better when nonverbal signals affirm the verbal exchange.: 14 Nonverbal behavior is sometimes used as the sole channel for communication of a message. People learn to identify facial expressions, body movements, and body positioning as corresponding with specific feelings and intentions. Nonverbal signals can be used without verbal communication to convey messages; when nonverbal behavior does not effectively communicate a message, verbal methods are used to enhance understanding.: 16 Structure versus non-structure Verbal communication is a highly structured form of communication with set rules of grammar. The rules of verbal communication help to understand and make sense of what other people are saying. For example, foreigners learning a new language can have a hard time making themselves understood. On the other hand, nonverbal communication has no formal structure when it comes to communicating. Nonverbal communication occurs without even thinking about it. The same behavior can mean different things, such as crying out of sadness or out of joy. Therefore, these cues need to be interpreted carefully to get their correct meaning.: 7–8 Linguistic versus non-linguistic There are only a few assigned symbols in the system of nonverbal communication. Nodding the head is one symbol that indicates agreement in some cultures, but in others, it means disagreement. On the other hand, verbal communication has a system of symbols that have specific meanings to them.: 8 Continuous and discontinuous Verbal communication is based on discontinuous units whereas nonverbal communication is continuous. Communicating nonverbally cannot be stopped unless one leaves the room, but even then, the intrapersonal processes still take place (individuals communicating with themselves). Without the presence of someone else, the body still manages to undergo nonverbal communication. For example, there are no other words being spoken after a heated debate, but there are still angry faces and cold stares being distributed. This is an example of how nonverbal communication is continuous.: 8 Learned versus innate Learned non-verbal cues require a community or culture for their reinforcement. For example, table manners are not innate capabilities upon birth. 
Dress code is a non-verbal cue that must be established by society. Hand symbols, whose interpretation can vary from culture to culture, are not innate nonverbal cues. Learned cues must be gradually reinforced by admonition or positive feedback. Innate non-verbal cues are "built-in" features of human behavior. Generally, these innate cues are universally prevalent regardless of culture. For example, smiling, crying, and laughing do not require teaching. Similarly, some body positions, such as the fetal position, are universally associated with weakness. Due to their universality, the ability to comprehend these cues is not limited to individual cultures.: 9 Left versus right-hemispheric processing This type of processing involves the neurophysiological approach to nonverbal communication. It explains that the right hemisphere processes nonverbal stimuli such as those involving spatial, pictorial, and gestalt tasks, while the left hemisphere processes verbal stimuli involving analytical and reasoning tasks. It is important to know the implications of processing differences between verbal and nonverbal communication messages. It is possible that individuals may not use the correct hemisphere at appropriate times when it comes to interpreting a message or meaning.: 9 From 1977 to 2004, the influence of disease and drugs on receptivity of nonverbal communication was studied by teams at three separate medical schools using a similar paradigm. Researchers at the University of Pittsburgh, Yale University and Ohio State University had subjects observe gamblers at a slot machine awaiting payoffs. The amount of this payoff was read by nonverbal transmission prior to reinforcement. This technique was developed by, and the studies directed by, psychologist Robert E. Miller and psychiatrist A. James Giannini. These groups reported diminished receptive ability in heroin addicts and phencyclidine abusers, contrasted with increased receptivity in cocaine addicts. Men with major depression manifested significantly decreased ability to read nonverbal cues when compared with euthymic men. In some subjects tested for ability to read nonverbal cues, intuitive paradigms were apparently employed while in others a cause and effect approach was used. Subjects in the former group answered quickly and before reinforcement occurred. They could not give a rationale for their particular responses. Subjects in the latter category delayed their response and could offer reasons for their choice. The level of accuracy between the two groups did not vary, nor did handedness. Obese women and women with premenstrual syndrome were found to also possess diminished abilities to read these cues. In contradistinction, men with bipolar disorder possessed increased abilities. A woman with total paralysis of the nerves of facial expression was found unable to transmit or receive any nonverbal facial cues whatsoever. Because of the changes in levels of accuracy of nonverbal receptivity, the members of the research team hypothesized a biochemical site in the brain which was operative for reception of nonverbal cues. Because certain drugs enhanced ability while others diminished it, the neurotransmitters dopamine and endorphin were considered to be likely etiological candidates. Based on the available data, however, the primary cause and primary effect could not be sorted out on the basis of the paradigm employed. An increased emphasis on gestures exists when intonations or facial expression are used. 
"Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may "mark" their utterance (e.g. with special intonations or facial expressions)." This specific emphasis known as 'marking' can be spotted as a learned form of non-verbal communication in toddlers. A groundbreaking study from Carpenter et al in the Journal of Child Language has concluded that the act of marking a gesture is recognized by three-year-olds, but not by two-year-olds. In the study, two and three-year-old toddlers were tested on their recognition of markedness within gestures. The experiment was conducted in a room with an examiner and the test subjects, which for the first study were three-year-olds. The examiner sat across from each child individually, and allowed them to play with various objects including a purse with a sponge in it and a box with a sponge in it. After allowing the child to play with the objects for three minutes, the examiner told the child it was time to clean up and motioned by pointing to the objects. They measured the responses of the children by first pointing and not marking the gesture, to see the child's reaction to the request and if they reached for the objects to clean them up. After observing the child's response, the examiner then asked and pointed again, marking the gesture with facial expression, as to lead the child to believe the objects were supposed to be cleaned up. The results showed that three-year-old children were able to recognize the markedness, by responding to the gesture and cleaning the objects up as opposed to when the gesture was presented without being marked. In the second study in which the same experiment was performed on two-year-olds, the results were different. For the most part, the children did not recognize the difference between the marked and unmarked gesture by not responding more prevalently to the marked gesture, unlike the results of the three-year-olds. This shows that this sort of nonverbal communication is learned at a young age, and is better recognized in three-year-old children than two-year-old children, making it easier for us to interpret that the ability to recognize markedness is learned in the early stages of development, somewhere between three and four years of age. Boone and Cunningham conducted a study to determine at which age children begin to recognize emotional meaning (happiness, sadness, anger and fear) in expressive body movements. The study included 29 adults and 79 children divided into age groups of four-, five- and eight-year-olds. The children were shown two clips simultaneously and were asked to point to the one that was expressing the target emotion. The results of the study revealed that of the four emotions being tested the 4-year-olds were only able to correctly identify sadness at a rate that was better than chance. The 5-year-olds performed better and were able to identify happiness, sadness and fear at better than chance levels. The 8-year-olds and adults could correctly identify all four emotions and there was very little difference between the scores of the two groups. Between the ages of 4 and 8, nonverbal communication and decoding skills improve dramatically. Comprehension of nonverbal facial cuesEdit A byproduct of the work of the Pittsburgh/Yale/Ohio State team was an investigation of the role of nonverbal facial cues in heterosexual nondate rape. 
Males who were serial rapists of adult women were studied for nonverbal receptive abilities. Their scores were the highest of any subgroup. Rape victims were next tested. It was reported that women who had been raped on at least two occasions by different perpetrators had a highly significant impairment in their abilities to read these cues in either male or female senders. These results were troubling, indicating a predator-prey model. The authors did note that whatever the nature of these preliminary findings the responsibility of the rapist was in no manner or level diminished. The final target of study for this group was the medical students they taught. Medical students at Ohio State University, Ohio University and Northeast Ohio Medical College were invited to serve as subjects. Students indicating a preference for the specialties of family practice, psychiatry, pediatrics and obstetrics-gynecology achieved significantly higher levels of accuracy than those students who planned to train as surgeons, radiologists, or pathologists. Internal medicine and plastic surgery candidates scored at levels near the mean. - Animal communication - Asemic writing - Augmentative and alternative communication - Behavioral communication - Chinese number gestures - Doctrine of mental reservation - Intercultural competence - Albert Mehrabian - Metacommunicative competence - Desmond Morris - Joe Navarro - Neuro-linguistic programming - People skills - Regulatory focus theory - Silent service code - Statement analysis - Twilight language - Unconscious communication - Not to be confused with Dogwhistle - Giri, Vijai N. (2009). "Nonverbal Communication Theories". Encyclopedia of Communication Theory. doi:10.4135/9781412959384.n262. ISBN 9781412959377. - Darwin, Charles (1972). The Expression of the Emotions in Man and Animals. AMS Pres. - MCCORNACK, STEVEN (2019). CHOICES & CONNECTIONS: an Introduction to Communication (2 ed.). Boston: BEDFORD BKS ST MARTIN'S. p. 138. ISBN 978-1-319-04352-0. - Fontenot, Karen Anding (2018). "Nonverbal communication and social cognition". Salem Press Encyclopedia of Health. 4: 4. - Edward Craighead, W.; Nemeroff, Charles B. (2004). "Nonverbal Communication". The Concise Corsini Encyclopedia of Psychology and Behavioral Science. ISBN 9780471604150. - Paradise, Ruth (1994). "Interactional Style and Nonverbal Meaning: Mazahua Children Learning How to Be Separate-But-Together". Anthropology & Education Quarterly. 25 (2): 156–172. doi:10.1525/aeq.1994.25.2.05x0907w. - Hogan, K.; Stubbs, R. (2003). Can't Get Through: 8 Barriers to Communication. Grenta, LA: Pelican Publishing Company. ISBN 978-1589800755. Retrieved 14 May 2016. - Burgoon, Judee K; Guerrero, Laura k; Floyd, Kory (2016). "Introduction to Nonverbal Communication". Nonverbal communication. New York: Routledge. pp. 1–26. ISBN 978-0205525003. - Demarais, A.; White, V. (2004). First Impressions (PDF). New York: Bantam Books. ISBN 978-0553803204. - Pease B.; Pease A. (2004). The Definitive Book of Body Language (PDF). New York: Bantam Books. - Krauss, R.M.; Chen, Y. & Chawla, P. (2000). "Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us?" (PDF). Advances in Experimental Social Psychology. 1 (2): 389–450. doi:10.1016/S0065-2601(08)60241-5. - Hecht, M.A. & Ambady, N. (1999). "Nonverbal communication and psychology: Past and future" (PDF). The New Jersey Journal of Communication. 7 (2): 1–12. CiteSeerX 10.1.1.324.3485. doi:10.1080/15456879909367364. - Sanderson, C.A. (2010). 
Social Psychology. Wiley. - Birdwhistell, Ray L. (1952). Introduction to Kinesics. Washington, DC: Department of State, Foreign Service Institute. - McQuown, Norman (1971). The Natural History of an Interview. Chicago, IL: University of Chicago Joseph Regenstein Library, Department of Photoduplication. - Scheflen, Albert E. (1973). Communicational structure: Analysis of a psychotherapy transaction. Bloomington, IN: Indiana University Press. - Kendon, Adam; Harris, Richard M.; Key, Mary R. (1975). Organization of behavior in face-to-face interaction. The Hague, Netherlands: Mouton. - Kendon, Adam (1977). Studies in the behavior of social interaction. Lisse, The Netherlands: Peter De Ridder Press. - Birdwhistell, Ray L. (1970). Kinesics and context: Essays on body motion communication. Philadelphia, PA: University of Pennsylvania Press. - "Environmental psychology and nonverbal behavior [electronic resource]". Princeton University Library Catalog. Retrieved 16 August 2018. - Argyle Michael, Salter Veronica, Nicholson Hilary, Williams Marylin, Burgess Philip (1970). "The communication of inferior and superior attitudes by verbal and non-verbal signals". British Journal of Social & Clinical Psychology. 9 (3): 222–231. doi:10.1111/j.2044-8260.1970.tb00668.x.CS1 maint: multiple names: authors list (link) - Rosenthal, Robert & Bella M. DePaulo (1979). "Sex differences in accommodation in nonverbal communication". In R. Rosenthal. Skill in nonverbal communication: Individual difference. Oelgeschlager, Gunn & Hain. pp. 68–103. - Mehrabian, Albert (1972). Nonverbal Communication. New Brunswick: Transaction Publishers. p. 16. ISBN 978-0202309668. - (Knapp & Hall 2007) harv error: no target: CITEREFKnapp_&_Hall2007 (help) - Eaves, Michael; Leathers, Dale G. (2017). Successful Nonverbal Communication: Principles and Applications. Routledge. ISBN 978-1134881253. - Bull, P.E. (1987). Posture and gesture. Oxford: Pergamon Press. ISBN 978-0-08-031332-0. - Fast, J. (1970). Body Language – The Essential Secrets of Non-verbal Communication. New York: MJF Book. - Zastrow, Charles (2009). Social Work with Groups: A Comprehensive Workbook (7th ed.). Belmont, CA: Brooks/Cole Cengage Learning. p. 141. ISBN 978-0495506423. - Yammiyavar, Pradeep; Clemmensen, Torkil; Kumar, Jyoti (2008). "Influence of Cultural Background on Non-verbal Communication in a Usability Testing Situation". International Journal of Design. 2 (2): 31–40. Archived from the original on 5 July 2012. Retrieved 1 October 2012. - "Nonverbal Communication: "You'd better smile when you say that, Pilgrim!"". Oklahoma Panhandle University, Communications Department. p. 6. Retrieved 1 October 2012. - Learnvest (2012). "What your clothes say about you". - Grammer, Karl; Renninger, LeeAnn; Fischer, Bettina (February 2004). "Disco Clothing, Female Sexual Motivation, and Relationship Status: Is She Dressed to Impress?". The Journal of Sex Research. 41 (1): 66–74. doi:10.1080/00224490409552214. PMID 15216425. S2CID 16965002. - "Researchers say clothing choices reveal personality". Sarasota Journal. 12 March 1981. p. 38. Retrieved 31 March 2014. - "What Your Clothes Say About You". Forbes. 4 March 2012. Retrieved 31 March 2014. - (Ottenheimer 2007, p. 130) harv error: no target: CITEREFOttenheimer2007 (help) - Ekman, P. (2003). Emotions revealed: Recognizing faces and feelings to improve communication and emotional life. New York: Times Books. ISBN 978-0805072754. - Hall, Edward T. (1959). The Silent Language. New York: Anchor Books. - Davidhizar, R (1992). 
"Interpersonal communication: a review of eye contact". Infection Control & Hospital Epidemiology. 13 (4): 222–225. doi:10.2307/30147101. JSTOR 30147101. PMID 1593103. - Weiten, W.; Dunn, D. & Hammer, E. (2009). Psychology Applied to Modern Life. Belmont, CA: Wadsworth. - (Argyle 1988, pp. 153–155) harv error: no target: CITEREFArgyle1988 (help) - Burgoon, J. K.; J. P. Blair & R. E. Strom (2008). "Cognitive biases and nonverbal cue availability in detecting deception. Human communication research". Human Communication Research. 34 (4): 572–599. doi:10.1111/j.1468-2958.2008.00333.x. - Mann, Samantha; Aldert Vrij; Sharon Leal; Par Granhag; Lara Warmelink; Dave Forester (5 May 2012). "Windows to the Soul? Deliberate Eye Contact as a Cue to Deceit". Journal of Nonverbal Behavior. 36 (3): 205–215. doi:10.1007/s10919-012-0132-y. S2CID 144639436. - Drewnicky, Alex. "Body Language – Common Myths and How to use it Effectively". Retrieved 11 February 2014. - Ekman, P. & Friesen, W.V. (1969). "Nonverbal leakage and clues to deception" (PDF). Psychiatry. 32 (1): 88–106. doi:10.1080/00332747.1969.11023575. PMID 5779090. - Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities. Chichester: John Wiley & Sons. - Eapen, N.M.; Baron, S.; Street, C.N.H. & Richardson, D.C. (2010). S. Ohlsson & R. Catrambone (eds.). The bodily movements of liars. Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. - Rogoff, Barbara; Paradise, Ruth; Arauz, Rebeca Mejia; Correa-Chavez, Maricela; Angelillo, Cathy (2003). "Firsthand Learning Through Intent Participation" (PDF). Annual Review of Psychology. 54 (1): 175–203. doi:10.1146/annurev.psych.54.101601.145118. PMID 12499516. Archived from the original (PDF) on 16 June 2016. Retrieved 14 May 2016. - Wang, D. & Li, H. (2007). "Nonverbal language in cross-cultural communication". US-China Foreign Language. 5 (10). - Kirch, M. S. (1979). "Non-Verbal Communication Across Cultures". Modern Language Journal. 63 (8): 416–423. doi:10.1111/j.1540-4781.1979.tb02482.x. - Morain, Genelle G. (June 1978). Kinesics and Cross-Cultural Understanding. Language in Education: Theory and Practice, No. 7 (PDF) (Report). Arlington, VA: Eric Clearinghouse on Language and Linguistics. Retrieved 26 January 2020. - "7 Cultural Differences in Nonverbal Communication". Point Park University Online. 28 March 2017. Retrieved 31 October 2018. - "Providers Guide to Quality and Culture". Management Sciences for Health. 2012. Archived from the original on 13 March 2016. - Knapp, Mark L. (2014). Nonverbal communication in human interaction. Wadsworth Cengage Learning. ISBN 978-1-133-31159-1. OCLC 1059153353. - Levine & Adelman (1993). Beyond Language. Prentice Hall. - Wong, S.; Bond, M. & Rodriguez Mosquera, P. M. (2008). "The Influence of Cultural Value Orientations on Self-Reported Emotional Expression across Cultures". Journal of Cross-Cultural Psychology. 39 (2): 226. doi:10.1177/0022022107313866. S2CID 146718155. - Herring, R. D. (1985). A Cross-Cultural Review of Nonverbal Communication with an Emphasis on the Native American (Report). - Matsumoto, D. & Juang, L. (2008). Culture and psychology (5th ed.). Belmont, Ca: Wadsworth. pp. 244–247. - Stoy, Ada (2010). "Project Communication Tips: Nonverbal Communication in Different Cultures". - Paradise, Ruth (June 1994). "Interactional Style and Nonverbal Meaning: Mazahua Children Learning How to Be Separate-But-Together". Anthropology & Education Quarterly. 25 (2): 156–172. 
doi:10.1525/aeq.1994.25.2.05x0907w. - Correa-Chávez, M. & Roberts, A. (2012). "A cultural analysis is necessary in understanding intersubjectivity". Culture & Psychology. 18 (1): 99–108. doi:10.1177/1354067X11427471. S2CID 144221981. - Paradise, R. (1994). "Interactional Style and Nonverbal Meaning: Mazahua Children Learning How to Be Separate-But-Together". Anthropology & Education Quarterly. 25 (2): 156–172. doi:10.1525/aeq.1994.25.2.05x0907w. - Coppens, Andrew D.; et al. (2014). "Children's initiative in family household work in Mexico". Human Development. 57 (2–3): 116–130. doi:10.1159/000356768. S2CID 144758889. - Paradise, R.; et al. (2014). "One, two, three, eyes on me! Adults attempting control versus guiding in support of initiative". Human Development. 57 (2–3): 131–149. doi:10.1159/000356769. S2CID 142954175. - de Leon, Lourdes (2000). "The Emergent Participant: Interactive Patterns in the Socialization of Tzotzil (Mayan) Infants". Journal of Linguistic Anthropology. 8 (2): 131–161. doi:10.1525/jlin.19188.8.131.52. - Schieffelin, B. B.; Ochs, E. (1986). "Language Socialization". Annual Review of Anthropology. 15: 163–191. doi:10.1146/annurev.an.15.100186.001115. - Philips, Susan (1992). The Invisible Culture: Communication in Classroom and Community on the Warm Springs Indian Reservation. Waveland Press. ISBN 9780881336948. - "Non-Verbal Communication Across Cultures". Psychology Today. Retrieved 31 October 2018. - "Advantages and disadvantages of non-verbal communication". The Business Communication. 3 October 2013. Retrieved 12 November 2018. - Floyd, Kory. Interpersonal Communication. New York : McGraw-Hill, 2011. Print. - Sundaram, D. S., & Webster, C. (2000). The role of nonverbal communication in service encounters. Journal of Services Marketing. - Gallace, Alberto; Spence, Charles (2010). "The science of interpersonal touch: An overview". Neuroscience and Biobehavioral Reviews. 34 (2): 246–259. doi:10.1016/j.neubiorev.2008.10.004. ISSN 0149-7634. PMID 18992276. S2CID 1092688.CS1 maint: multiple names: authors list (link) - Remland, M.S. & Jones, T.S. (1995). "Interpersonal distance, body orientation, and touch: The effect of culture, gender and age". Journal of Social Psychology. 135 (3): 281–297. doi:10.1080/00224545.1995.9713958. PMID 7650932. - Leeds-Hurwitz, Wendy (1990). "Notes in the history of intercultural communication: The Foreign Service Institute and the mandate for intercultural training". Quarterly Journal of Speech. 76 (3): 262–281. doi:10.1080/00335639009383919. - Hall, Edward T. (1963). "A system for the notation of proxemic behavior". American Anthropologist. 65 (5): 1003–26. doi:10.1525/aa.1963.65.5.02a00020. - Sluzki, Carlos E (2015). "Proxemics in Couple Interactions: Rekindling an Old Optic". Family Process. 55 (1): 7–15. doi:10.1111/famp.12196. PMID 26558850. - Mehrabian Albert, Wiener Morton (1967). "Decoding of inconsistent communications". Journal of Personality and Social Psychology. 6 (1): 109–114. doi:10.1037/h0024532. PMID 6032751. - Mehrabian Albert, Ferris Susan R (1967). "Inference of attitudes from nonverbal communication in two channels". Journal of Consulting Psychology. 31 (3): 248–252. doi:10.1037/h0024648. PMID 6046577. - Argyle, Michael; Veronica Salter; Hilary Nicholson; Marylin Williams & Philip Burgess (1970). "The communication of inferior and superior attitudes by verbal and non-verbal signals". British Journal of Social & Clinical Psychology. 9 (3): 222–231. doi:10.1111/j.2044-8260.1970.tb00668.x. - "So You're an American?". 
www.state.gov. Archived from the original on 10 December 2018. Retrieved 9 December 2018. - Christopher K. Hsee; Elaine Hatfield & Claude Chemtob (1992). "Assessments of the emotional states of others: Conscious judgments versus emotional contagion". Journal of Social and Clinical Psychology. 14 (2): 119–128. doi:10.1521/jscp.19184.108.40.206. - Beheshti, Naz. "The Power Of Mindful Nonverbal Communication". Forbes. Retrieved 4 December 2019. - "Nonverbal Communication". www.issup.net. Retrieved 4 December 2019. - Malandro, Loretta (1989). Nonverbal communication. New York: Newbery Award Records. ISBN 978-0-394-36526-8. - RE Miller; AJ Giannini; JM Levine (1977). "Nonverbal communication in men with a cooperative conditioning task". Journal of Social Psychology. 103 (1): 101–108. doi:10.1080/00224545.1977.9713300. - AJ Giannini; BT Jones (1985). "Decreased reception of nonverbal cues in heroin addicts". Journal of Psychology. 119 (5): 455–459. doi:10.1080/00223980.1985.10542915. - AJ Giannini. RK Bowman; JD Giannini (1999). "Perception of nonverbal facial cues in chronic phencyclidine abusers". Perceptual and Motor Skills. 89 (1): 72–76. doi:10.2466/pms.19220.127.116.11. PMID 10544402. S2CID 12966596. - AJ Giannini; DJ Folts; SM Melemis RH Loiselle (1995). "Depressed men's lowered ability to interpret nonverbal cues". Perceptual and Motor Skills. 81 (2): 555–559. doi:10.2466/pms.1918.104.22.1685. PMID 8570356. - AJ Giannini; J Daood; MC Giannini; R Boniface; PG Rhodes (1977). "Intellect vs Intuition–A dichotomy in the reception of nonverbal communication". Journal of General Psychology. 99: 19–24. doi:10.1080/00221309.1978.9920890. - AJ Giannini; ME Barringer; MC Giannini; RH Loiselle (1984). "Lack of relationship between handedness and intuitive and intellectual (rationalistic) modes of information processing". Journal of General Psychology. 111 (1): 31–37. doi:10.1080/00221309.1984.9921094. - AJ Giannini; L DiRusso; DJ Folts; G Cerimele (1990). "Nonverbal communication in moderately obese females. A pilot study". Annals of Clinical Psychiatry. 2 (2): 111–113. doi:10.3109/10401239009149557. - AJ Giannini, LM Sorger, DM Martin, L Bates (1988). Journal of Psychology 122: 591–594 - AJ Giannini; DJ Folts; L Fiedler (1990). "Enhanced encoding of nonverbal cues in male bipolars". Journal of Psychology. 124 (5): 557–561. doi:10.1080/00223980.1990.10543248. PMID 2250231. - AJ Giannini; D Tamulonis; MC Giannini; RH Loiselle; G Spirtos (1984). "Defective response to social cues in Mobius syndrome". Journal of Nervous and Mental Disorders. 172 (3): 174–175. doi:10.1097/00005053-198403000-00008. PMID 6699632. - AJ Giannini (1995). "Suggestions for future studies of nonverbal facial cues". Perceptual and Motor Skills. 81 (3): 881–882. doi:10.2466/pms.1922.214.171.1241. PMID 8668446. S2CID 42550313. - Carpenter, Malinda; Kristin Liebal; Michael Tomasello (September 2011). "Young children's understanding of markedness in non-verbal communication". Journal of Child Language. 38 (4): 888–903. doi:10.1017/S0305000910000383. PMID 21382221. S2CID 10428965. - Boone, R. T. & Cunningham, J. G. (1998). "Children's decoding of emotion in expressive body movement: The development of cue attunement". Developmental Psychology. 34 (5): 1007–1016. doi:10.1037/0012-16126.96.36.1997. PMID 9779746. - AJ Giannini; KW Fellows (1986). "Enhanced interpretation of nonverbal cues in male rapists". Archives of Sexual Behavior. 15 (2): 153–158. doi:10.1007/BF01542222. PMID 3718203. S2CID 21793355. 
- AJ Giannini; WA Price; JL Kniepple (1986). "Decreased interpretation of nonverbal cues in rape victims". International Journal of Psychiatry in Medicine. 16 (4): 389–394. doi:10.2190/V9VP-EEGE-XDKM-JKJ4. PMID 3557809. S2CID 34164554. - AJ Giannini; JD Giannini; RK Bowman (2000). "Measurement of nonverbal receptive abilities in medical students". Perceptual and Motor Skills. 90 (3 Pt 2): 1145–1150. doi:10.2466/pms.2000.90.3c.1145. PMID 10939061. S2CID 21879527. - Andersen, Peter (2007). Nonverbal Communication: Forms and Functions (2nd ed.). Waveland Press. - Andersen, Peter (2004). The Complete Idiot's Guide to Body Language. Alpha Publishing. ISBN 978-1592572489. - Argyle, Michael (1988). Bodily Communication (2nd ed.). Madison: International Universities Press. ISBN 978-0-416-38140-5. - Brehove, Aaron (2011). Knack Body Language: Techniques on Interpreting Nonverbal Cues in the World and Workplace. Guilford, CT: Globe Pequot Press. ISBN 9781599219493. Archived from the original on 24 September 2015. Retrieved 8 June 2011. - Bridges, J. (1998). How to be a Gentleman (PDF). Nashville, TN: Rutledge Hill Press. Archived from the original (PDF) on 16 June 2015. Retrieved 14 May 2016. - Bull, P. E. (1987). Posture and Gesture. Oxford: Pergamon Press. ISBN 978-0-08-031332-0. - Burgoon, J. K.; Guerrero, L. K.; & Floyd, K. (2011). Nonverbal communication. Boston: Allyn & Bacon. ISBN 9780205525003. - Campbell, S. (2005). Saying What's Real. Tiburon, CA: Publishers Group West. ISBN 978-1932073126. - Driver, J. (2010). You Say More Than You Think. New York, NY: Crown Publishers. - Ekman, P. (2003). Emotions Revealed. New York, NY: Owl Books. ISBN 978-0805072754. - Floyd, K.; Guerrero, L. K. (2006). Nonverbal communication in close relationships. Mahwah, New Jersey: Lawrence Erlbaum Associates. ISBN 9780805843972. - Gilbert, M. (2002). Communication Miracles at Work. Berkeley, CA: Publishers Group West. ISBN 9781573248020. - Givens, D.B. (2000). "Body speak: what are you saying?". Successful Meetings (October) 51. - Givens, D. (2005). Love Signals. New York, NY: St. Martins Press. ISBN 9780312315054. - Guerrero, L. K.; DeVito, J. A.; Hecht, M. L., eds. (1999). The nonverbal communication reader (2nd ed.). Lone Grove, Illinois: Waveland Press. Archived from the original on 5 July 2007. Retrieved 19 September 2007. - Gudykunst, W.B. & Ting-Toomey, S. (1988). Culture and Interpersonal Communication. California: Sage Publications Inc. - Hanna, Judith L. (1987). To Dance Is Human: A Theory of Nonverbal Communication. Chicago: University of Chicago Press. - Hargie, O. & Dickson, D. (2004). Skilled Interpersonal Communication: Research, Theory and Practice. Hove: Routledge. ISBN 9780415227193. - Knapp, Mark L. & Hall, Judith A. (2007). Nonverbal Communication in Human Interaction (5th ed.). Wadsworth: Thomas Learning. ISBN 978-0-15-506372-3. - Melamed, J. & Bozionelos, N. (1992). "Managerial promotion and height". Psychological Reports. 71 (6): 587–593. doi:10.2466/PR0.71.6.587-593. - Pease B.; Pease A. (2004). The Definitive Book of Body Language. New York, NY: Bantam Books. - Remland, Martin S. (2009). Nonverbal communication in everyday life. Boston: Allyn & Bacon. - Ottenheimer, H.J. (2007). The anthropology of language: an introduction to linguistic anthropology. Kansas State: Thomson Wadsworth. - Segerstrale, Ullica; Molnar, Peter, eds. (1997). Nonverbal Communication: Where Nature Meets Culture. Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 978-0-8058-2179-6. - Simpson-Giles, C. (2001). 
How to Be a Lady. Nashville, TN: Rutledge Hill Press. ISBN 9781558539396. - Zysk, Wolfgang (2004). Körpersprache – Eine neue Sicht (Doctoral Dissertation 2004) (in German). University Duisburg-Essen (Germany). |Wikimedia Commons has media related to Non-verbal communication.| - "Credibility, Respect, and Power: Sending the Right Nonverbal Signals" by Debra Stein - Online Nonverbal Library with more than 500 free available articles on this topic. - The Nonverbal Dictionary of Gestures, Signs & Body Language Cues by David B. Givens - "Psychology Today Nonverbal Communication Blog posts" by Joe Navarro - "NVC Portal - A useful portal providing information on Nonverbal Communication" - "Breaking Trail Online: Using Body Language When Traveling" by Hank Martin - “Significance of posture and position in the communication of attitude and status relationships” by Mehrabian Albert
Nucleic acids include ribonucleic acid (RNA) and deoxyribonucleic acid (DNA). In this section, we will examine the structures of DNA and RNA and how these structures are related to the functions these molecules perform. Nucleic acids are organic macromolecules present in living cells; they were first identified inside the nucleus (Friedrich Miescher isolated "nuclein" from soiled bandages in 1869), but they are also found in mitochondria and chloroplasts as well as in bacteria. They are naturally occurring compounds that can be broken down to yield phosphoric acid, sugars, and a mixture of organic bases (purines and pyrimidines), and they make up roughly 15% of a cell's dry weight. Nucleic acids are the main information-carrying molecules of the cell: they store genetic information, carry the genetic blueprint of a cell and the instructions for its functioning, transmit hereditary information from one generation to the next, and, by directing the process of protein synthesis, determine the inherited characteristics of every living thing. Thus, nucleic acids are macromolecules of the utmost biological importance.
The two main types of nucleic acids are DNA and RNA. DNA is the genetic material found in all living organisms, ranging from single-celled bacteria to multicellular organisms, and it is metabolically and chemically more stable than RNA. RNA is a related molecule with many functions in cells, notably acting as the intermediate between DNA and proteins; in some cases its structure, like protein structure, is important for catalytic function. RNA contains a different sugar (ribose instead of deoxyribose) and one of its four nitrogenous bases is different, but otherwise DNA and RNA are built in the same way. The purpose of DNA is to act as a code or recipe for making proteins, and proteins determine how an organism's body is built and how it functions. In April 1953, James Watson and Francis Crick published "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" in the journal Nature, proposing a novel structure for DNA. Earlier, Garrod's 1902 study of a rare genetic disorder began connecting genes and biochemistry, and the work of Avery and his group at Rockefeller University in New York City pointed to DNA as the genetic material.
Nucleic acids are polymers of nucleotides. Each nucleotide is composed of a nitrogenous base, a five-carbon sugar, and a phosphate group; the base is attached to the sugar by an N-glycosidic bond and the sugar to the phosphate by a phosphoester bond. Unlike proteins, which are built from 20 different kinds of amino acids, nucleic acids use only 4 different kinds of nucleotides. A nucleic acid polymer, or polynucleotide, forms when the phosphate of one nucleotide bonds to the sugar of the next nucleotide; this assembly can be thought of as a dehydration reaction between the 3'-OH of one nucleotide and the phosphate group of a second nucleotide, forming a phosphodiester bond. The result is a repeating sugar-phosphate backbone with protruding nitrogenous bases. Because a nucleic acid is a polymer of many nucleotide molecules, DNA and RNA molecules are called polynucleotides.
Nucleic acid structure can mean something as simple as the sequence of nucleotides in a piece of DNA or something as complex as the way that DNA molecule folds, and it is often divided into levels. Like proteins, nucleic acids have a primary structure, defined as the sequence of their nucleotides. In DNA, secondary structure refers to the helix formed by the interaction of two strands. The structure of the double helix is somewhat like a ladder, with the base pairs forming the ladder's rungs and the sugar and phosphate molecules forming the vertical sidepieces. In the most commonly found form of DNA, two single strands lie side by side in an antiparallel arrangement, with one running 5' to 3' and the other running 3' to 5'; without antiparallel base pairing this conformation could not exist. The double helix has major grooves and minor grooves, and the major grooves are critical for binding the proteins that regulate DNA function. (Within a nucleotide, the base can adopt the syn conformation, with the purine ring over the pentose ring, or the anti conformation, with the ring away from the pentose.) Tertiary structure refers to the overall three-dimensional shape: in DNA it arises from supercoiling, in which double helices are twisted into tighter, more compact shapes, while in RNA the single-stranded regions that link duplex regions, like the random coils of proteins, give the molecule its tertiary structure.
The structure of nucleic acids as polymers with unique base sequences provides a high-fidelity means of transmitting genetic information, by reading and replicating the base sequence of a strand of DNA. Studies of nucleic acids have paved the way for the development of biochemistry, molecular biology, biotechnology and modern medicine, and a study of their structure and function is needed to understand how the information controlling the characteristics of an organism is stored in the form of genes and how these genes are transmitted to future generations of offspring. Typical study questions therefore ask what a monomer is and how it relates to a polymer, how nucleotide monomers influence the function of DNA and RNA, and what directionality means when referring to nucleic acids and proteins. Synthetic analogues can function in the same way as natural nucleic acids; scientists are even using such molecules to build the basis of an artificial life form, which could maintain the artificial nucleic acid and extract information from it to build new proteins and survive. For engineered probes such as molecular beacons, however, performance depends on structure-function relationships: the addition or deletion of just a single nucleotide in the stem can dramatically change a beacon's behavior, so these relationships must be understood to optimize beacons for different applications.
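The antiparallel, complementary pairing described above can be made concrete with a short sketch. This is an illustration added for this text, not part of the original material; the base-pairing rules (A with T, G with C) are standard, while the function and variable names are our own.

```python
# Minimal sketch: complementary, antiparallel base pairing in DNA.
# Given one strand written 5'->3', the partner strand is its reverse complement.

PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the 5'->3' sequence of the strand that pairs with `strand`."""
    # Complement each base, then reverse to account for the antiparallel orientation.
    return "".join(PAIRS[base] for base in reversed(strand.upper()))

if __name__ == "__main__":
    top = "ATGCGT"                      # 5'->3'
    print(reverse_complement(top))      # ACGCAT, the partner strand read 5'->3'
```

Reading both strands in their own 5'-to-3' direction is what makes the two outputs differ; pairing the bases position by position without reversing would describe parallel strands, which is not how the double helix is arranged.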
Coyotes today are pint-sized compared to their Ice Age counterparts, finds a new fossil study. Between 11,500 and 10,000 years ago — a mere blink of an eye in geologic terms — coyotes shrank to their present size. The sudden shrinkage was most likely a response to dwindling food supply and changing interactions with competitors, rather than warming climate, researchers say.
[Image caption: This skeleton is a composite from the University of California Museum of Paleontology. Credit: Photo by F. Robin O'Keefe]
[Image caption: A modern coyote skull and a Pleistocene coyote skull. Credit: Original artwork by Doyle V. Trankina]
In a paper appearing this week in Proceedings of the National Academy of Sciences, researchers studied museum collections of coyote skeletons dating from 38,000 years ago to the present day. It turns out that between 11,500 and 10,000 years ago, at the end of a period called the Pleistocene, coyotes in North America suddenly got smaller. "Pleistocene coyotes probably weighed between 15-25 kilograms, and overlapped in size with wolves. But today the upper limit of a coyote is only around 10-18 kilograms," said co-author Julie Meachen of the National Evolutionary Synthesis Center in Durham, North Carolina. "Within just over a thousand years, they evolved into the smaller coyotes that we have today," she added. What caused coyotes to shrink? Several factors could explain the shift. One possibility is warming climate, the researchers say. Between 15,000 and 10,000 years ago, global average annual temperatures quickly rose by an average of six degrees. "Things got a lot warmer, real fast," Meachen said. Large animals are predicted to fare worse than small animals when temperatures warm up. To find out if climate played a role in coyotes' sudden shrinkage, Meachen and co-author Joshua Samuels of John Day Fossil Beds National Monument in Oregon measured the relationship between body size and temperature for dozens of Ice Age coyotes, and for coyotes living today, using thigh bone circumference to estimate body size for each individual. But when they plotted body size against coldest average annual temperature for each animal's location, they found no relationship, suggesting that climate change was unlikely to be the main factor. If the climate hypothesis were true, then we should see similar changes in other Ice Age carnivores too, Meachen added. The researchers also studied body size over time in the coyote's larger relative, the wolf, but they found that wolf body sizes didn't budge. "We're skeptical that climate change at the end of the Pleistocene was the direct cause of the size shift in coyotes," Meachen said. Another possibility is that humans played a role. In this view, coyotes may have shrunk over time because early human hunters — believed to have arrived in North America around 13,000 years ago — selectively wiped out the bigger coyotes, or the animals coyotes depended on for food, leaving only the small to survive. Stone tool butchery marks on Ice Age animal bones would provide a clue that human hunters had something to do with it, but the fossil record has turned up too few examples to test the idea. "Human hunting as the culprit is really hard to dispute or confirm because there's so little data," Meachen said. A third, far more likely explanation is dwindling food supply and changing interactions with competitors, the researchers say.
Just 1000 years before the sudden shrinkage in coyotes, dozens of other species were wiped out in a wave of extinctions that killed off many large mammals in North America. Until then, coyotes lived alongside a great diversity of large prey, including horses, sloths, camels, llamas and bison. "There were not only a greater diversity of prey species, but the species were also more abundant. It was a great food source," Meachen said. While coyotes survived the extinctions, there were fewer large prey left for them to eat. Smaller individuals that required less food to survive, or could switch to smaller prey, would have had an advantage. Before the die-off, coyotes also faced stiff competition for food from other large carnivores, including a bigger version of wolves living today called the dire wolf. After bigger carnivores such as dire wolves went extinct, coyotes would have no longer needed their large size to compete with these animals for food. The findings are important because they show that extinction doesn't just affect the animals that disappear, the researchers say — it has long-term effects on the species that remain as well. "In a time of increasing loss of biodiversity, understanding the degree to which species interactions drive evolutionary change is important," says Saran Twombly, program director in the National Science Foundation (NSF)'s Division of Environmental Biology, which supported the research. "Species interactions are delicate balancing acts. When species go extinct, we see the signature of the effects on the species that remain," Meachen said. CITATION: Meachen, J. and J. Samuels (2012). "Evolution in coyotes (Canis latrans) in response to the megafaunal extinctions." Proceedings of the National Academy of Sciences. The National Evolutionary Synthesis Center (NESCent) is a nonprofit science center dedicated to cross-disciplinary research in evolution. Funded by the National Science Foundation, NESCent is jointly operated by Duke University, The University of North Carolina at Chapel Hill, and North Carolina State University. For more information about research and training opportunities at NESCent, visit www.nescent.org. Robin Ann Smith | EurekAlert!
A quadrilateral is a part of a plane enclosed by four sides (quad means four and lateral means side). All quadrilaterals have exactly four sides and four angles, and they can be sorted into specific groups based on the lengths of their sides or the measures of their angles. What they all have in common is that in every quadrilateral the sum of the measures of the interior angles is equal to 360°. Their vertices are marked with capital letters and their sides with small letters. The angles at the vertices A, B, C and D are usually marked, in order, with α, β, γ, δ (alpha, beta, gamma, delta). Remember that in triangles the sum of the measures of all exterior angles is equal to 360° (an exterior angle is the supplementary angle of the corresponding interior angle). This is also true for quadrilaterals: the sum of the measures of the exterior angles of a quadrilateral is always equal to 360°. Diagonals are the segments that connect opposite vertices. Quadrilaterals can be divided according to the perpendicularity of their diagonals and the presence of parallel sides.
The first group of quadrilaterals is the scalene quadrilateral. A scalene quadrilateral is a quadrilateral that doesn't have any special properties; its sides and angles have different lengths and measures.
Quadrilaterals which have one pair of parallel sides are called trapezoids. The sides that are parallel are called the bases of a trapezoid, and the ones that are not parallel are called legs. Trapezoids whose legs are of equal length are called isosceles trapezoids. The diagonals of an isosceles trapezoid are congruent. The height or altitude of a trapezoid is the length of a segment that is perpendicular to a base and runs from it to the opposite base. The altitude of a trapezoid is equal no matter from which vertex we draw it. If we are drawing an altitude toward the shorter base, we simply extend that base as needed.
If α is the angle at vertex A, β at vertex B, γ at vertex C and δ at vertex D in a trapezoid ABCD with bases AB and CD, then the following is valid: α + δ = 180° and β + γ = 180°. In other words, the angles on the same side of a leg of a trapezoid are supplementary. Proof: extend the base AB over the vertex A and denote a point E on that line. Since the line AD is a transversal of the parallel lines AB and CD, ∠EAD = ∠ADC = δ. The angles ∠EAD and α are supplementary angles, which means that α + δ = 180°. Analogously, we obtain β + γ = 180°.
A parallelogram is a quadrilateral whose opposite sides are congruent and parallel. The altitude or height of a parallelogram, usually labeled h, is the line segment that connects a vertex with the opposite side and is perpendicular to that side. Let ABCD be a parallelogram. The opposite angles of such a quadrilateral are congruent, and the adjacent angles are supplementary. By definition, a parallelogram is also a trapezoid: if ABCD is viewed as a trapezoid with legs BC and AD, then α + δ = 180° and β + γ = 180°; if it is viewed as a trapezoid with legs AB and CD, then α + β = 180° and γ + δ = 180°. It follows that α = γ and β = δ.
The following statements are equivalent to each other:
1) A quadrilateral is a parallelogram
2) There exist two opposite sides of the quadrilateral which are congruent and parallel
3) Each pair of opposite sides of the quadrilateral is congruent
4) The diagonals of the quadrilateral bisect each other
5) Both pairs of opposite angles of the quadrilateral are congruent
Each of the above statements can serve as an alternative definition of a parallelogram. The remaining implications we still need to prove.
Proof that (1) implies (3). Let ABCD be a parallelogram. Then AB ∥ CD and BC ∥ AD. Since the diagonal AC is a transversal of the parallel lines AB and CD, ∠BAC = ∠DCA. The line AC is also a transversal of the parallel lines BC and AD, so ∠BCA = ∠DAC. AC is also the common side of triangles ABC and CDA. By the A-S-A theorem of congruence of triangles, triangles ABC and CDA are congruent. It follows that AB = CD and BC = AD.
Proof that (2) implies (1). In a quadrilateral ABCD let AB = CD and AB ∥ CD. Since the diagonal AC is a transversal of the parallel lines AB and CD, ∠BAC = ∠DCA. The side AC is the common side of triangles ABC and CDA.
By the S-A-S theorem of congruence of triangles, triangles ABC and CDA are congruent. It follows that BC = AD and ∠BCA = ∠DAC, so BC ∥ AD as well, and ABCD is therefore a parallelogram.
Proof that (3) implies (4). In a quadrilateral ABCD let AB = CD and BC = AD, and let the point S be the intersection of the diagonals AC and BD. First, consider triangles ABC and CDA. By the S-S-S theorem of congruence of triangles, triangles ABC and CDA are congruent. It follows that ∠BAC = ∠DCA. The angles ∠ASB and ∠CSD are vertical angles. If we now consider triangles ABS and CDS, it follows that ∠ABS = ∠CDS. Since AB = CD, the triangles ABS and CDS are congruent by the A-S-A theorem of congruence of triangles. It follows that AS = CS and BS = DS, which means that the point S is the midpoint of both AC and BD.
Proof that (4) implies (3). In a quadrilateral ABCD let the point S be the midpoint of both diagonals AC and BD: AS = SC and BS = SD. Consider triangles ABS and CDS. By the S-A-S theorem they are congruent (AS = CS, ∠ASB = ∠CSD as vertical angles, BS = DS). It follows that AB = CD and ∠BAS = ∠DCS. The triangles BCS and DAS are also congruent by the S-A-S theorem (BS = DS, ∠BSC = ∠DSA as vertical angles, CS = AS). It follows that BC = AD and ∠SBC = ∠SDA.
Proof that (5) implies (1). In a quadrilateral ABCD let α = γ and β = δ. Since the interior angles add up to 360°, that means that α + β = 180° and α + δ = 180°. Assume that the lines AD and BC are not parallel, and let the point of intersection of these two lines be the point P, lying on the same side of the line AB as the points C and D. Then the angles α and β are interior angles of the triangle ABP, but the sum of the measures of the angles α and β is equal to 180°, which is a contradiction (the three interior angles of a triangle sum to 180°, so two of them cannot already sum to 180°). If the point P is on the opposite side of the line AB from the points C and D, then the interior angles of the triangle ABP at A and B are 180° − α and 180° − β, whose sum is again 180°, which is also a contradiction. It follows that AD ∥ BC. Similarly we prove that AB ∥ CD.
A rhombus is a parallelogram which has at least one pair of adjacent sides of equal length (and therefore all four sides of equal length). Its opposite angles are of equal measure, α = γ and β = δ, and its adjacent angles are supplementary. The diagonals of a rhombus are perpendicular to each other and bisect each other (in general they are not congruent). A kite is a quadrilateral characterized by two pairs of adjacent sides of equal length. The diagonals of a kite are perpendicular, and at least one diagonal is a line of symmetry. A kite is also a tangential quadrilateral. A rectangle is a parallelogram in which at least one interior angle is a right angle (and therefore all four are). The diagonals of a rectangle are congruent. A square is a rectangle whose sides are all equal. The diagonals of a square are congruent and perpendicular.
Perimeters and areas of quadrilaterals
The perimeter of any geometric shape is the length of its outline. The area of any geometric shape is the surface it occupies. The unit of measure for area is the square meter (m²): one square meter is equal to the surface enclosed by a square with sides of 1 m. There are also derived units of measure for the areas of smaller or larger shapes. The area of a square is equal to the square of the length of its side: A = a². The area of a rectangle is equal to the product of the lengths of two adjacent sides: A = a · b. The area of a rhombus is equal to the product of the length of one side and the corresponding altitude: A = a · h. This is true because, as the usual picture shows, if we translate the altitude to one vertex and extend the side over the neighbouring vertex, we cut off a triangle that is congruent to the triangle sticking out on the other side; if we 'translate' the cut-off triangle onto that triangle, we get a rectangle with one side a and the other h. The same argument that works for a rhombus works for a parallelogram: the area of a parallelogram is the product of one of its sides and the altitude on that side. The area of a trapezoid is equal to one half of the product of the sum of its bases and its altitude: A = (a + c) · h / 2. This formula is the result of dividing the trapezoid into two triangles and a rectangle by dropping altitudes from the endpoints of the shorter base c onto the longer base a. Now we can write the area as the sum of the smaller areas: the area of the rectangle plus the areas of the two triangles. We know that the rectangle has area c · h. Now we need to find the combined area of the two triangles. If we translate one triangle next to the other, we get a single triangle. The altitude of that triangle is equal to the altitude of the trapezoid, h, and the side on which this altitude stands is equal to a − c.
This leads to the conclusion that the two triangles together have area (a − c) · h / 2. This means that: A = c · h + (a − c) · h / 2 = (a + c) · h / 2. A short computational sketch of these area formulas follows the worksheet list below.
Practice worksheets:
- Naming quadrilaterals
- Name the biggest number of quadrilaterals
- Angles in quadrilaterals
- Parallelograms - Find an angle
- Parallelograms - Find a length
- Trapezoids - Find a length of the median
- Trapezoids - Find a length of the half-segment
- Trapezoids - Find a length of a base
- Trapezoids - Angles
- Area of triangles and quadrilaterals
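As noted above, here is a small computational sketch of the area formulas stated in this lesson (square, rectangle, parallelogram or rhombus, trapezoid). The numeric inputs are made-up examples; the trapezoid comment also shows the rectangle-plus-two-triangles decomposition used in the derivation.

```python
# Minimal sketch of the area formulas stated above.

def area_square(a):
    return a * a                      # A = a^2

def area_rectangle(a, b):
    return a * b                      # A = a * b

def area_parallelogram(a, h):
    return a * h                      # A = side * altitude (also valid for a rhombus)

def area_trapezoid(a, c, h):
    return (a + c) * h / 2.0          # A = (a + c) * h / 2, where a and c are the bases

if __name__ == "__main__":
    print(area_square(3))             # 9
    print(area_rectangle(3, 5))       # 15
    print(area_parallelogram(6, 4))   # 24
    # 30.0 = 4*5 (rectangle part) + (8 - 4)*5/2 (the two triangles combined)
    print(area_trapezoid(8, 4, 5))
```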
Take copper-colored pennies and turn them silver, then make them gold: this is an easy chemistry project that uses common laboratory chemicals. Chemical reactions of copper lab, Tara Faggioli, Pd. 5, 10/19/09. Introduction: in this lab, solid copper metal is going to be reacted through a series of reactions using the cation Cu2+. Chemistry - NCHS lab: reactions of copper and percent yield, page 1 of 8. The objective in this experiment is to recover all of the copper you begin with in analytically pure form; this is the test of your laboratory skills. Chemical reactions of copper, experiment 9. 3. Combustion – an element reacts with oxygen to form an oxide (this is also a synthesis); a compound consisting of carbon, hydrogen and/or oxygen (C8H18 is called octane) reacts with oxygen to form carbon dioxide and water. A 0.0194-g sample of copper metal is recycled through the series of reactions in this experiment. If 0.0169 g of copper is recovered after the series of reactions in this experiment, what is the percent recovery of the copper metal? Lab #4: chemical reactions. Hints: (1) see the single replacement discussion and examples in your pre-lab, and (2) the copper is being oxidized to Cu2+. Parts C-2 and C-3: in this reaction, the calcium replaces one of the hydrogens. General Chemistry I (FC, 09-10), lab #4: stoichiometry: the reaction of iron with copper(II) sulfate, revised 8/19/2009. Introduction: in this experiment we will use stoichiometric principles to deduce the appropriate equation. Purpose: the purpose of this experiment was to track the element copper (Cu) through a series of different chemical reactions, in an effort to understand the fundamental phenomena of chemical and physical change, and the law of conservation of mass. Cullen/ChemEdX 2014, types of chemical reactions lab. Purpose: observe some chemical reactions and identify the reactants and products of those reactions; classify the reactions as synthesis, decomposition, single replacement or double replacement. The purpose of this lab was to measure the conservation of copper's mass after going through a series of reactions. Copper sulfide, synthesis of copper sulfide, copper and sulfur, mass of the copper, crucible and copper, sulfur in product, synthesize and calculate, moles of copper, final product, nonvolatile copper sulfide: these are the important points of these chemistry notes. Lab #6: chemical transformations of copper. Introduction: copper was one of the first metals to be isolated, due to the ease of separating it from its ores; it is believed that the process (metallurgy) was known as early as 4500 BC. It is a ductile metal. The copper oxide can then react with the hydrogen gas to form copper metal and water; when the funnel is removed from the hydrogen stream, the copper will still be warm enough to be oxidized by the air again. A Chemistry World subscription brings you all the research, news and views from the global chemical science community. Regularly updated and packed full of articles, podcasts and videos, there is no better way to keep in touch with the chemical sciences. Chemistry unit 7 lab: copper-silver nitrate reaction. Introduction: in this experiment, a solution of silver nitrate will react with copper wire and silver metal will be produced. Careful measurements will enable you to determine the mole relationships between the reactants and products. Procedure: 1.
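The percent-recovery question quoted above comes down to a single ratio. Below is a minimal sketch of that arithmetic, assuming the two masses are meant to read 0.0194 g of starting copper and 0.0169 g of recovered copper (that decimal reading is an editorial assumption, flagged again in the code).

```python
# Minimal sketch: percent recovery of copper in a cycle-of-reactions lab.
# Assumption: the masses in the text read 0.0194 g (initial) and 0.0169 g (recovered).

def percent_recovery(recovered_g, initial_g):
    return 100.0 * recovered_g / initial_g

if __name__ == "__main__":
    print(round(percent_recovery(0.0169, 0.0194), 1))  # 87.1  (% of the copper recovered)
```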
Step 2: shake the test tube and copper wire to dislodge the silver. Step 3: set up a funnel with your filter paper in it. Step 4: with a waste beaker beneath the funnel, lift the copper wire out of the test tube and hold it over the filter system. The purpose of this lab was to investigate and discover the many different uses and properties of copper; during the experiment, copper and copper compounds are used to perform a series of chemical reactions. Roman Chaar, period 7, Aug 28, 2010, rough draft of lab report. Purpose: to see if you can make copper a liquid and turn it back to its original state. 1) Weigh about 0.500 g of light copper turnings. 2) Place them in a 250 mL beaker and add 5 mL of concentrated HNO3 under the fume hood. After the reaction has ended, add 100 mL of distilled H2O, record observations and write a reaction. The copper and HNO3 made a brown gas and a green liquid when the reaction ended. General chemistry lab #1, conservation of mass: a cycle of copper reactions. Purpose: the goal of this experiment is to introduce you to several classes of chemical reactions: oxidation/reduction, precipitation, decomposition and acid/base neutralization. Transformation of copper: a sequence of chemical reactions – objectives, reactions, procedure. Wash the copper metal three times with distilled water and transfer it to an evaporating dish as described in the procedure (part E), and then wash it three times with 5-mL portions of isopropanol. Chemistry student lab activities: determination of copper in brass. Analyze percent copper by preparing a series of dilutions of a known copper solution and comparing their colors; a microscale lab. Copper is an essential mineral that the body incorporates into enzymes. These enzymes play a role in the regulation of iron metabolism, the formation of connective tissue, energy production at the cellular level, the production of melanin (the pigment that produces skin color), and the function of the nervous system. This test measures the amount of copper in the blood, urine, or liver (hepatic copper). I hope you enjoy this video; let me know what you think of the lab in the comments. Disclaimer: this experiment must be performed outside or in a fume hood. In this experiment we successfully found the mass percent of copper in brass. It is surprising that it took so many steps, using very exotic methods, in order to get a seemingly simple answer. Our answer of 90% is a high percentage, but not entirely unreasonable given the reddish-gold color of the brass we were given.
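The brass analysis described above ends in a mass-percent figure; the calculation itself is a single ratio. The sketch below uses hypothetical masses (the text reports only the final result of about 90%, not the sample masses), chosen so the output lands near that value.

```python
# Minimal sketch: mass percent of copper in a brass sample.
# The sample and copper masses below are hypothetical illustrations, chosen so
# the result comes out near the ~90% figure reported above.

def mass_percent_copper(copper_g, sample_g):
    return 100.0 * copper_g / sample_g

if __name__ == "__main__":
    print(round(mass_percent_copper(0.90, 1.00), 1))  # 90.0  (% copper by mass)
```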
Worksheet, November 1st, 2020. These math sheets can be printed as extra teaching material for teachers, extra math practice for kids or as homework material parents can use. This is a math PDF printable activity sheet with several exercises. 3rd grade math division worksheets (PDF): addition, subtraction, segments, mean, mode. You can also customize them using the generator below. 1) Students develop an understanding of multiplication and division of whole numbers through problems involving equal-sized groups. 3rd grade math worksheets, printable PDFs on third grade topics. Division skills are key to becoming a math pro, and these third grade division worksheets will help your students build math confidence while having a blast! This 3rd grade math game is an excellent way to help children better understand numerical expressions! 3rd grade math also introduces fraction worksheets and basic geometry, both topics that build on mastery of the arithmetic operations. Inches, feet and yards; types of angles. Below are six versions of our grade 3 math worksheet on dividing numbers (up to 30) by 2 or 3. 3rd grade math worksheets and games. Developing an understanding of the structure of rectangular arrays and of area. 3rd grade math worksheets on rounding. This worksheet is a supplementary third grade resource to help teachers, parents and children at home and in school. Third graders will find it easy to navigate through this page, downloading loads of printable PDF math activity worksheets to practice or supplement their course work. Developing an understanding of multiplication and division and strategies for multiplication and division within 100. Whether your students are learning these concepts for the first time or reinforcing past lessons, they will love exploring division through board games, word problems and fractions activities. Math worksheets for your 3rd grade students: square numbers, addition word problems. This 3rd grade math game focuses on all division standards in 3rd grade, and provides students with practice in the form of multiple choice or short answer questions. Primary 3 math topics covered: free grade 3 math worksheets. The worksheets can be made in HTML or PDF format — both are easy to print. Introduction to division, division with pictures, division of fruits, division of single digits, division of multiples of ten, division with remainders. Multiplication and division are introduced along with fun math pages that are kid tested. Math worksheets for 3rd grade. Your students can then take turns picking them up, two at a time. Basic division math worksheet, division problems, 3rd grade PDF. Dividing by 4 or 5; dividing by 6 or 7. Provides practice at all the major topics for grade 4 with emphasis on multiplication and division of larger numbers. 3rd grade math worksheets for children arranged by topic; each topic is a link to loads of worksheets under the same category. Worksheets cover the following division topics. QR codes (optional) make this game even more interactive, as students get immediate feedback. Division (medium): find the missing factors; logic puzzle fun #1 division. It has an answer key attached on the second page. "I can" math games are the perfect way to make math fun! All you need to do is print out the cards and place them face down in an ordered fashion.
The main topics of study in photomathonline grade 3 are: addition and subtraction up to 3- and 4-digit numbers, basic division and quick facts, adding, subtracting and recognizing fractions, algebra concepts, fractions, word problems, math logic, metric systems and measurements, algebraic thinking, etc. Download free printable third grade math worksheets that teach addition, subtraction, multiplication, division and much more. Rounding worksheets, rounding decimals worksheets, and rounding worksheets from hundred millions to millionths. CCSS.MATH.CONTENT.3.NBT.A.2: fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction. Division worksheets (questions & answers) for 1st, 2nd, 3rd, 4th, 5th & 6th grade teachers, parents and students are available for free in printable & downloadable (PDF & image) format. This workbook has been compiled and tested by a team of math experts to increase your child's confidence, enjoyment, and success at school. This worksheet includes a picture and a quote from Dr. Seuss. Dividing by 2 or 3: our 3rd grade division worksheets include (i) simple division worksheets to help kids with their division facts and mental division skills and (ii) an introduction to long division, including simple division with remainder questions. Practice dividing by tens and hundreds is also emphasized. These worksheets are PDF files. Play free 3rd grade math games. Teeming with adequate practice materials, the printable 3rd grade math worksheets with answer keys should be your pick if you are developing an understanding of multiplication and division within 100, using place value to round numbers, working with fractions, or solving problems involving measurement and estimation of intervals of time, liquid volumes, and masses of objects. This is a suitable resource page for third graders, teachers and parents. Math worksheets on division: suitable PDF printable division worksheets for children in the following grades: 2nd grade, 3rd grade, 4th grade, 5th grade, 6th grade and 7th grade.
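The pages above describe simple third-grade division practice: dividing numbers up to 30 by small divisors, including division with remainders. Purely as an illustration, and not tied to any of the worksheet products mentioned, here is a minimal sketch of how such problems could be generated.

```python
# Minimal sketch: generate simple 3rd-grade style division problems
# (dividends up to 30, divisors 2 or 3), including the remainder form.
import random

def make_problems(count=5, max_dividend=30, divisors=(2, 3), seed=0):
    rng = random.Random(seed)
    problems = []
    for _ in range(count):
        divisor = rng.choice(divisors)
        dividend = rng.randint(divisor, max_dividend)
        quotient, remainder = divmod(dividend, divisor)
        problems.append((dividend, divisor, quotient, remainder))
    return problems

if __name__ == "__main__":
    for dividend, divisor, quotient, remainder in make_problems():
        print(f"{dividend} ÷ {divisor} = {quotient} remainder {remainder}")
```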
Abraham Lincoln and slavery. Abraham Lincoln's position on slavery is one of the most discussed issues in American history. Lincoln often expressed moral opposition to slavery in public and private. Initially, he expected to bring about the eventual extinction of slavery by stopping its further expansion into any U.S. territory, and by proposing compensated emancipation (an offer Congress applied to Washington, D.C.) in his early presidency. Lincoln stood by the Republican Party's platform of 1860, which stated that slavery should not be allowed to expand into any more territories. He believed that the extension of slavery in new western lands would block "free labor on free soil", and he also wanted a peaceful, enduring end to slavery. As early as the 1850s, Lincoln was politically attacked as an abolitionist, but he did not consider himself one. Howard Jones says that "in the prewar period, as well as into the first months of the American Civil War itself ... Lincoln believed it prudent to administer a slow death to slavery through gradual emancipation and voluntary colonization rather than to follow the abolitionists in demanding an immediate end to slavery without compensation to owners." In 1863, Lincoln ordered the freedom of all slaves in the areas "in rebellion" and insisted on its enforcement, freeing millions of slaves, but he did not call for the immediate end of slavery everywhere in the U.S. until the proposed 13th Amendment became part of his party platform for the 1864 election. In 1842, Abraham Lincoln married Mary Todd, who was a daughter of a slave-owning family from Kentucky. Lincoln returned to the political stage as a result of the 1854 Kansas–Nebraska Act and soon became a leading opponent of the "Slaveocracy"—the political power of the southern slave owners. The Kansas–Nebraska Act, written to form the territories of Kansas and Nebraska, included language, designed by Stephen A. Douglas, which allowed the settlers to decide whether they would or would not accept slavery in their region. Lincoln saw this as a repeal of the 1820 Missouri Compromise, which had outlawed slavery above the 36°30′ parallel. During the Civil War, Lincoln used the war powers of the presidency to issue the Emancipation Proclamation in January 1863. (He had warned in September 1862 that he would do so if the Confederate states did not return.) It declared "all persons held as slaves within any State or designated part of a State, the people whereof shall then be in rebellion against the United States, shall be then, thenceforward, and forever free" but exempted border states and those areas of slave states already under Union control. It immediately changed the legal status of all slaves in the affected areas, and as soon as the Union Army arrived in an area it actually did liberate the slaves there. On the first day it affected tens of thousands of slaves. Month by month it freed thousands more until June 1865, when it had freed the great majority of slaves in the former Confederacy. Lincoln pursued various plans to voluntarily colonize free blacks outside the United States, but none of these had a major effect. Historians disagree over whether his plans to colonize blacks were sincere or merely political posturing. Regardless, by the end of his life, Lincoln had come to support black suffrage, a position that would lead him to be assassinated by John Wilkes Booth.
- 1 Early years - 2 1840s–1850s - 3 1860 Republican presidential nomination - 4 As President-elect in 1860 and 1861 - 5 Presidency (1861–65) - 6 Views on African Americans - 7 See also - 8 References - 9 Further reading - 10 External links President of the United States Assassination and legacy Lincoln was born on February 12, 1809, in Hardin County, Kentucky (now LaRue County). His family attended a Separate Baptists church, which had strict moral standards and opposed alcohol, dancing, and slavery. The family moved north across the Ohio River to free (i.e., non-slave) territory and made a new start in then Perry County; now Spencer County. Lincoln later noted that this move was "partly on account of slavery" but mainly due to land title difficulties. As a young man, he settled in the free state of Illinois. Legal and political Lincoln, the leader most associated with the end of slavery in the United States, came to national prominence in the 1850s, following the advent of the Republican Party, which opposed the expansion of slavery. Earlier, as a member of the Whig Party in the Illinois General Assembly, Lincoln issued a written protest of the assembly's passage of a resolution stating that slavery could not be abolished in Washington, D.C. In 1841, he won a court case (Bailey v. Cromwell), representing a black woman and her children who claimed she had already been freed and could not be sold as a slave. In 1845, he successfully defended Marvin Pond (People v. Pond) for harboring the fugitive slave John Hauley. In 1847, he lost a case (Matson v. Rutherford) representing a slave owner (Robert Matson) claiming return of fugitive slaves. While a congressman from Illinois in 1846 to 1848, Lincoln supported the Wilmot Proviso, which, if it had been adopted, would have banned slavery in any U.S. territory won from Mexico. Lincoln, in collaboration with abolitionist Congressman Joshua R. Giddings, wrote a bill to abolish slavery in the District of Columbia with compensation for the owners, enforcement to capture fugitive slaves, and a popular vote on the matter. Lincoln had left politics until he was drawn back into it by the Kansas–Nebraska Act of 1854, which allowed territories to decide for themselves whether they would allow slavery. Lincoln was morally opposed to slavery and politically opposed to any expansion of it. At issue was extension into the western territories. On October 16, 1854, in his "Peoria Speech", Lincoln declared his opposition to slavery, which he repeated in his route to presidency. Speaking in his Kentucky accent, with a very powerful voice, he said the Kansas Act had a "declared indifference, but as I must think, a covert real zeal for the spread of slavery. I cannot but hate it. I hate it because of the monstrous injustice of slavery itself. I hate it because it deprives our republican example of its just influence in the world..." Since the 1840s Lincoln had been an advocate of the American Colonization Society program of colonizing blacks in Liberia. In an October 16, 1854,:a speech at Peoria, Illinois (transcribed after the fact by Lincoln himself),:b Lincoln points out the immense difficulties of such a task are an obstacle to finding an easy way to quickly end slavery.:c If all earthly power were given to me […] my first impulse would be to free all the slaves, and send them to Liberia,—to their own native land. 
But a moment's reflection would convince me that whatever of high hope (as I think there is) there may be in this, in the long run, its sudden execution is impossible. Letter to Joshua Speed In 1855, Lincoln wrote to Joshua Speed, a personal friend and slave owner in Kentucky: You know I dislike slavery; and you fully admit the abstract wrong of it... I also acknowledge your rights and my obligations, under the constitution, in regard to your slaves. I confess I hate to see the poor creatures hunted down, and caught, and carried back to their stripes, and unrewarded toils; but I bite my lip and keep quiet. In 1841 you and I had together a tedious low-water trip, on a Steam Boat from Louisville to St. Louis. You may remember, as I well do, that from Louisville to the mouth of the Ohio, there were, on board, ten or a dozen slaves, shackled together with irons. That sight was a continued torment to me; and I see something like it every time I touch the Ohio, or any other slave-border. It is hardly fair for you to assume, that I have no interest in a thing which has, and continually exercises, the power of making me miserable. You ought rather to appreciate how much the great body of the Northern people do crucify their feelings, in order to maintain their loyalty to the Constitution and the Union. … How can any one who abhors the oppression of negroes, be in favor of degrading classes of white people? Our progress in degeneracy appears to me to be pretty rapid. As a nation, we began by declaring that "all men are created equal." We now practically read it "all men are created equal, except negroes." When the Know-Nothings get control, it will read "all men are created equal, except negroes, and foreigners, and catholics." When it comes to this I should prefer emigrating to some country where they make no pretence of loving liberty—to Russia, for instance, where despotism can be taken pure, and without the base alloy of hypocrisy. Lincoln–Douglas debates 1858 Many of Lincoln's public anti-slavery sentiments were presented in the seven Lincoln–Douglas debates of 1858 against his opponent, Stephen Douglas, during Lincoln's unsuccessful campaign for a seat in the U.S. Senate (which was decided by the Illinois legislature). Douglas advocated "popular sovereignty" and self-government, which would give the citizens of a territory the right to decide if slavery would be legal there. Douglas criticized Lincoln as being inconsistent, saying he altered his message and position on slavery and on the political rights of freed blacks in order to appeal to the audience before him, as northern Illinois was more hostile to slavery than southern Illinois. Lincoln stated that Negroes had the rights to "life, liberty, and the pursuit of happiness" in the first of the Lincoln–Douglas debates. Publicly, Lincoln said he was not advocating Negro suffrage in his speech in Columbus, Ohio on September 16, 1859.:d This might have been a strategy speech used to gain voters, as Douglas had accused Lincoln of favoring negroes too much as well. A fragment from Lincoln dated October 1, 1858, refuting theological arguments by Frederick A. Ross in favor of slavery, reads in part, "As a good thing, slavery is strikingly perculiar [sic], in this, that it is the only good thing which no man ever seeks the good of, for himself. Nonsense! Wolves devouring lambs, not because it is good for their own greedy maws, but because it is good for the lambs!!!" 
1860 Republican presidential nomination The Republican Party was committed to restricting the growth of slavery, and its victory in the election of 1860 was the trigger for secession acts by Southern states. The debate before 1860 was mainly focused on the Western territories, especially Kansas and the popular sovereignty controversy. Lincoln was nominated as the Republican candidate for president in the election of 1860. Lincoln was opposed to the expansion of slavery into new areas, but held that the federal government was prevented by the Constitution from banning slavery in states where it already existed. His plan was to halt the spread of slavery, and to offer monetary compensation to slave-owners in states that agreed to end slavery (see Compensated emancipation). He was considered a moderate within his party, as there were some who wanted the immediate abolition of slavery. As President-elect in 1860 and 1861 In a letter to Senator Lyman Trumbull on December 10, 1860, Lincoln wrote, "Let there be no compromise on the question of extending slavery." In a letter to John A. Gilmer of North Carolina of December 15, 1860, which was soon published in newspapers, Lincoln wrote that the 'only substantial difference' between North and South was that 'You think slavery is right and ought to be extended; we think it is wrong and ought to be restricted.' Lincoln repeated this statement in a letter to Alexander H. Stephens of Georgia on Dec. 22, 1860 On February 22, 1861, at a speech in Independence Hall, in Philadelphia, Pennsylvania, Lincoln reconfirmed that his convictions sprang from the sentiment expressed in the Declaration of Independence, which was also the basis of the continued existence of the United States since that time, namely, the "principle or idea" "in that Declaration giving liberty, not alone to the people of this country, but hope to the world for all future time. (Great applause.) It was that which gave promise that in due time the weights should be lifted from the shoulders of all men, and that all should have an equal chance. (Cheers.)" The proposed Corwin amendment was passed by Congress before Lincoln became President and was ratified by two states, but was abandoned once the Civil War began. It would have explicitly prohibited congressional interference with slavery in states where it already existed. In his First Inaugural Address, March 4, 1861, Lincoln explained that, "holding such a provision to now be implied constitutional law, I have no objection to its being made express and irrevocable." The Corwin amendment was a late attempt at reconciliation, but it also was a measure of reassurance to the slave-holding border states that the federal government was not intent on taking away their powers. Nonetheless, Lincoln was bitterly attacked throughout the secession crisis and Civil War regarding his anti-slavery views. Many of Lincoln's opponents, especially in the South, regarded him as an "abominable" abolitionist, even before the war. Lincoln's plan was to get rid of slavery in the District of Columbia (which Congress did), and in the states, including loyal border states of Delaware, Maryland, Kentucky and Missouri through compensated emancipation. The American Civil War began in April, 1861. At the beginning of the war, Lincoln prohibited his generals from freeing slaves even in captured territories. On August 30, 1861, Major General John C. Frémont, the commander of the Union Army in St. 
Louis, proclaimed that all slaves owned by Confederates in Missouri were free. Lincoln opposed allowing military leaders to take executive actions that were not authorized by the government, and realized that such actions could induce slaveowners in border states to oppose the Union or even start supporting the enemy. Lincoln demanded Frémont modify his order and free only slaves owned by Missourians working for the South. When Frémont refused, he was replaced by the conservative General Henry Wager Halleck. Radical Republicans such as William P. Fessenden of Maine and Charles Sumner supported Frémont. Fessenden described Lincoln's action as "a weak and unjustifiable concession to the Union men of the border states" and Sumner wrote in a letter to Lincoln how sad it was "to have the power of a god and not use it godlike." The situation was repeated in May 1862, when General David Hunter began enlisting black soldiers in the occupied district under his control. Soon afterwards Hunter issued a statement that all slaves owned by Confederates in Georgia, Florida, and South Carolina were free. Despite the pleas of Treasury Secretary Salmon P. Chase, Lincoln ordered Hunter to disband the black 1st South Carolina Regiment and to retract his proclamation. At all times Lincoln insisted that he controlled the issue—only he had the war powers. Lincoln's view was that in order for freedmen to effectively and legally rely on the promise and declaration of freedom it had to be grounded in the president's constitutional authority. On August 22, 1862, just a few weeks before signing the preliminary Emancipation Proclamation and after he had already discussed a draft of it with his cabinet in July, he wrote a letter in response to an editorial by Horace Greeley of the New York Tribune which had urged complete abolition. Lincoln differentiates between "my view of official duty"—that is, what he can do in his official capacity as President—and his personal views. Officially he must save the Union above all else; personally he wanted to free all the slaves: I would save the Union. I would save it the shortest way under the Constitution. The sooner the national authority can be restored; the nearer the Union will be "the Union as it was." If there be those who would not save the Union, unless they could at the same time save slavery, I do not agree with them. If there be those who would not save the Union unless they could at the same time destroy slavery, I do not agree with them. My paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that. What I do about slavery, and the colored race, I do because I believe it helps to save the Union; and what I forbear, I forbear because I do not believe it would help to save the Union. I shall do less whenever I shall believe what I am doing hurts the cause, and I shall do more whenever I shall believe doing more will help the cause. I shall try to correct errors when shown to be errors; and I shall adopt new views so fast as they shall appear to be true views. I have here stated my purpose according to my view of official duty; and I intend no modification of my oft-expressed personal wish that all men everywhere could be free. 
Just one month after writing this letter, Lincoln issued his first Emancipation Proclamation, which announced that at the beginning of 1863, he would use his war powers to free all slaves in states still in rebellion (as they came under Union control). Lincoln scholar Harold Holzer wrote in this context about Lincoln's letter: "Unknown to Greeley, Lincoln composed this after he had already drafted a preliminary Emancipation Proclamation, which he had determined to issue after the next Union military victory. Therefore, this letter, was in truth, an attempt to position the impending announcement in terms of saving the Union, not freeing slaves as a humanitarian gesture. It was one of Lincoln's most skillful public relations efforts, even if it has cast longstanding doubt on his sincerity as a liberator." Historian Richard Striner argues that "for years" Lincoln's letter has been misread as "Lincoln only wanted to save the Union." However, within the context of Lincoln's entire career and pronouncements on slavery this interpretation is wrong, according to Striner. Rather, Lincoln was softening the strong Northern white supremacist opposition to his imminent emancipation by tying it to the cause of the Union. This opposition would fight for the Union but not to end slavery, so Lincoln gave them the means and motivation to do both, at the same time. In his 2014 book, Lincoln's Gamble, journalist and historian Todd Brewster asserted that Lincoln's desire to reassert the saving of the Union as his sole war goal was in fact crucial to his claim of legal authority for emancipation. Since slavery was protected by the Constitution, the only way that he could free the slaves was as a tactic of war—not as the mission itself. But that carried the risk that when the war ended, so would the justification for freeing the slaves. Late in 1862, Lincoln asked his Attorney General, Edward Bates, for an opinion as to whether slaves freed through a war-related proclamation of emancipation could be re-enslaved once the war was over. Bates had to work through the language of the Dred Scott decision to arrive at an answer, but he finally concluded that they could indeed remain free. Still, a complete end to slavery would require a constitutional amendment. Concerned that the Proclamation would not last past the war, Lincoln returned to less formal proposals he had made earlier but this time in his State of the Union Address on December 1, 1862, he proposed three constitutional amendments: that would allow slavery to continue until 1900; free slaves permanently that were freed during the war, while paying compensation to loyal owners; and colonize freedmen outside the South. Republican leaders warned the proposals would not pass Congress but Lincoln argued strongly for them. There was more than a year and a half of trial to suppress the rebellion before the proclamation issued, the last one hundred days of which passed under an explicit notice that it was coming, unless averted by those in revolt, returning to their allegiance. The war has certainly progressed as favorably for us, since the issue of proclamation as before. 
I know, as fully as one can know the opinions of others, that some of the commanders of our armies in the field who have given us our most important successes believe the emancipation policy and the use of the colored troops constitute the heaviest blow yet dealt to the Rebellion, and that at least one of these important successes could not have been achieved when it was but for the aid of black soldiers. Among the commanders holding these views are some who have never had any affinity with what is called abolitionism or with the Republican party policies but who held them purely as military opinions. I submit these opinions as being entitled to some weight against the objections often urged that emancipation and arming the blacks are unwise as military measures and were not adopted as such in good faith. You say you will not fight to free negroes. Some of them seem willing to fight for you; but, no matter. Fight you, then exclusively to save the Union. I issued the proclamation on purpose to aid you in saving the Union. Whenever you shall have conquered all resistance to the Union, if I shall urge you to continue fighting, it will be an apt time, then, for you to declare you will not fight to free negroes. I thought that in your struggle for the Union, to whatever extent the negroes should cease helping the enemy, to that extent it weakened the enemy in his resistance to you. Do you think differently? I thought that whatever negroes can be got to do as soldiers, leaves just so much less for white soldiers to do, in saving the Union. Does it appear otherwise to you? But negroes, like other people, act upon motives. Why should they do any thing for us, if we will do nothing for them? If they stake their lives for us, they must be prompted by the strongest motive—even the promise of freedom. And the promise being made, must be kept. … Peace does not appear so distant as it did. I hope it will come soon, and come to stay; and so come as to be worth the keeping in all future time. It will then have been proved that, among free men, there can be no successful appeal from the ballot to the bullet; and that they who take such appeal are sure to lose their case, and pay the cost. And then, there will be some black men who can remember that, with silent tongue, and clenched teeth, and steady eye, and well-poised bayonet, they have helped mankind on to this great consummation; while, I fear, there will be some white ones, unable to forget that, with malignant heart, and deceitful speech, they strove to hinder it. Lincoln addresses the changes to his positions and actions regarding emancipation in an 1864 letter to Albert G. Hodges. In that letter, Lincoln states his ethical opposition to slavery, writing, "I am naturally anti-slavery. If slavery is not wrong, nothing is wrong. I can not remember when I did not so think, and feel. ... And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling." Lincoln further explained that he had eventually determined that military emancipation and the enlistment of black soldiers were necessary for the preservation of the Union, which was his responsibility as President. In December 1863, Lincoln used his war powers and issued a "Proclamation for Amnesty and Reconstruction", which offered Southern states a chance to peacefully rejoin the Union if they abolished slavery and collected loyalty oaths from 10% of their voting population. 
When Lincoln accepted the nomination for the Union party for President in June, 1864, he called for the first time for the passage of the Thirteenth Amendment to the United States Constitution, to immediately abolish slavery and involuntary servitude, except as punishment for a crime. He wrote in his letter of acceptance that "it would make a fitting and necessary conclusion" to the war and would permanently join the causes of "Liberty and Union." He won re-election on this platform in November, and in December, 1864, Lincoln worked to have the House approve the amendment. When the House passed the 13th amendment on January 31, 1865, Lincoln signed the amendment, although this was not a legal requirement, and said in a speech the next day, "He thought all would bear him witness that he had never shrunk from doing all that he could to eradicate slavery by issuing an emancipation proclamation." He pointed out that the emancipation proclamation did not complete the task of eradicating slavery; "But this amendment is a King's cure for all the evils [of slavery]." Compensated emancipation: buy out the slave owners He made numerous proposals for "compensated emancipation" in the loyal border states whereby the federal government would purchase all of the slaves and free them. Each state government refused to act. President Lincoln advocated that slave owners be compensated for emancipated slaves. On March 6, 1862 President Lincoln in a message to the U.S. Congress stated that emancipating slaves would create economic "inconveniences" and justified compensation to the slave owners. The resolution was adopted by Congress; however, the Southern States refused to comply. On July 12, 1862 President Lincoln in a conference with Congressmen from Kentucky, Maryland, Delaware, and Missouri encouraged their respective states to adopt emancipation legislation that gave compensation to the slave owners. On July 14, 1862 President Lincoln sent a bill to Congress that allowed the Treasury to issue bonds at 6% interest to states for slave emancipation compensation to the slave owners. The bill was never voted on by Congress. Colonization of freed slaves was long seen by many as an answer to the problem of slavery. One of President Abraham Lincoln's policies during his administration was the voluntary colonization of African American Freedmen; he firmly opposed compulsory colonization, and in one instance ordered the Secretary of War to bring some colonized blacks back to the United States. The Pre-Emancipation Proclamation offered support for the colonization of free blacks outside of the United States. Historians have debated and have remained divided over whether Lincoln's racial views (or merely his acceptance of the political reality) included that African Americans could not live in the same society as white Americans due to racism. Benjamin Butler stated that Lincoln in 1865 firmly denied that "racial harmony" would be possible in the United States. One view (known to scholars as the "lullaby" theory) is that Lincoln adopted colonization for Freedmen in order to make his Emancipation Proclamation politically acceptable. This view has been challenged with new evidence of the Lincoln administration's attempts to colonize freedmen in British Honduras after the Emancipation Proclamation took effect on January 1, 1863. Bureau of Emigration President Lincoln supported colonization during the Civil War as a practical response to newly freed slaves. 
At his urging, Congress included text in the Confiscation Act of 1862 indicating support for Presidential authority to recolonize consenting African Americans. With this authorization, Lincoln created an agency to direct his colonization projects. At the suggestion of Lincoln, in 1862, Congress appointed $600,000 to fund and created the Bureau of Emigration in the U.S. Department of the Interior. To head that office Lincoln appointed the energetic Reverend James Mitchell, a leader of the American Colonization Party. Lincoln had known Mitchell since 1853, when Mitchell visited Illinois. Mitchell's Washington D.C.'s office was in charge of implementing Lincoln's voluntary colonization policy of African Americans. In his annual December message to Congress that year (his second "State of the Union" Message), he reiterated his strong support for government expenditure on colonization for those who wanted to go, but he also noted that objections to free blacks remaining in the United States were baseless, "if not sometimes malicious." In 1862, Lincoln mentioned colonization favorably in his preliminary Emancipation Proclamation. Much concerning the controversial Bureau of Emigration is unknown today, as Mitchell's papers that kept record of the office were lost after his death in 1903. Chiriqui Improvement Company President Lincoln first proposed a Panama colony for Blacks in October 1861. Several hundred acres of Chiriquí Province in Panama (then a part of Gran Colombia) had in 1855 been granted to the Chiriqui Improvement Company for coal mining. The Company supplied the U.S. Navy with half-price coal during the war, but required more workers. Congress gravitated towards this plan in mid-1862, and Lincoln appointed Kansas Senator Samuel Pomeroy to oversee it. Pomeroy promised 40 acres and a job to willing Blacks, and chose 500 of 13,700 who applied. Lincoln signed a contract with businessman Ambrose W. Thompson, the owner of the land, and made plans to send tens of thousands of African Americans. Pomeroy secured $25,000 from Congress to pay for transportation and equipment. The plan was suspended in early October 1862 before a single ship sailed though, apparently due to diplomatic protests from neighboring Central American governments and the uncertainty raised by the Colombian Civil War (1860–1862). The plan also violated the 1850 Clayton–Bulwer Treaty prohibiting US and UK colonization of Central America. Lincoln hoped to overcome these complications by having Congress make provision for a treaty for African American emigration, much as he outlined in his Second Annual Message of December 1, 1862, but the Chiriquí plan appears to have died over the New Year of 1863 as revelations of the corrupt interest of his acquaintance Richard W. Thompson and Secretary of the Interior John Palmer Usher likely proved too much to bear in political terms. Ile à Vache In December 1862, Lincoln signed a contract with businessman Bernard Kock to establish a colony on the Ile à Vache, an island of Haiti. 453 freed slaves departed for the island from Fort Monroe, Virginia. A government investigation had deemed Kock untrustworthy, and Secretary of State William Seward stopped the plan from going forward after learning of Kock's involvement. Poor planning, an outbreak of smallpox, and financial mismanagement by Kock left the colonists under-supplied and starving, according to early reports. 292 colonists remained on Ile a Vache in 1865; 73 had moved to Aux Cayes on Haiti. 
The United States Navy arrived to rescue survivors after less than one year on the island. British West Indies In addition to Panama and Haiti, Mitchell's office also oversaw attempts at colonization in British Honduras and elsewhere in the British West Indies. Lincoln believed that by dealing with the comparatively stable British Government, he could avoid some of the problems that plagued his earlier attempts at colonization with private interests. He signed an agreement on June 13, 1863, with John Hodge of British Honduras that authorized colonial agents to recruit ex-slaves and transport them to Belize from approved ports in Philadelphia, New York City, and Boston. Later that year the Department of the Interior sent John Willis Menard, a free African-American clerk who supported colonization, to investigate the site for the government. British authorities pulled out of the agreement in December, fearing it would disrupt their position of neutrality in the Civil War. The question of when Lincoln abandoned colonization, if ever, has aroused considerable debate among historians. The government funded no more colonies after the rescue of the Ile a Vache survivors in early 1864, and Congress repealed most of the colonization funding that July. Whether Lincoln's opinion had changed is unknown. He left no surviving statements in his own hand on the subject during the last two years of his presidency, although he apparently wrote Attorney General Edward Bates in November 1864 to inquire whether earlier legislation allowed him to continue pursuing colonization and to retain Mitchell's services irrespective of the loss of funding. An entry in the diary of presidential secretary John Hay dated July 2, 1864, says that Lincoln had "sloughed off" colonization, though without much elaboration. In a later report, General Benjamin F. Butler claimed that Lincoln approached him in 1865 a few days before his assassination, to talk about reviving colonization in Panama. Historians have long debated the validity of Butler's account, as it was written many years after the fact and Butler was prone to exaggeration of his own exploits as a general. Recently discovered documents prove that Butler and Lincoln did indeed meet on April 11, 1865, though whether and to what extent they talked about colonization is not recorded except in Butler's account. On that same day, Lincoln gave a speech supporting a form of limited suffrage for blacks. Much of the present debate revolves around whether to accept Butler's story. If rejected, then it appears that Lincoln "sloughed off" colonization at some point in mid-1864. If it is accepted, then Lincoln remained a colonizationist at the time of his death. This question is compounded by the unclear meaning of Hay's diary, and another article by Secretary of the Navy Gideon Welles, which suggests that Lincoln intended to revive colonization in his second term. In either case, the implications for understanding Lincoln's views on race and slavery are strong. Citizenship and limited suffrage In his second term as president, on April 11, 1865, Lincoln gave a speech in which he promoted voting rights for blacks. John Wilkes Booth, a Southerner and outspoken Confederate sympathizer attended the speech and became determined to kill Lincoln for supporting citizenship for blacks. On April 14, 1865, three days later, Lincoln was assassinated by Booth and died the next day. In analyzing Lincoln's position historian Eugene H. 
Berwanger notes: During his presidency, Lincoln took a reasoned course which helped the federal government both destroy slavery and advance the cause of black suffrage. For a man who had denied both reforms four years earlier, Lincoln's change in attitude was rapid and decisive. He was both open-minded and perceptive to the needs of his nation in a postwar era. Once committed to a principle, Lincoln moved toward it with steady, determined progress. Views on African Americans Known as the Great Emancipator, Lincoln was a complicated figure who wrestled with his own views on race. Through changing times successive generations have interpreted Lincoln's views on African Americans differently. "To apply 20th century beliefs and standards to an America of 1858 and declare Abraham Lincoln a "racist" is a faulty formula that unfairly distorts Lincoln's true role in advancing civil and human rights. By the standards of his time, Lincoln's views on race and equality were progressive and truly changed minds, policy and most importantly, hearts for years to come." Lincoln's primary audience was white voters. Lincoln's views on slavery, race equality, and African American colonization are often intermixed. During the 1858 debates with Stephen Douglas, Lincoln expressed his then view that he believed whites were superior to blacks. Lincoln stated he was against miscegenation and allowing blacks to serve as jurors. While President, as the American Civil War progressed, Lincoln advocated or implemented anti-racist policies including the Emancipation Proclamation and limited suffrage for African Americans. Former slave and leading abolitionist Frederick Douglass unequivocally regarded Lincoln as sharing "the prejudices of his white fellow-country-men against the Negro," but also observed of Lincoln that "in his company, I was never reminded of my humble origin, or of my unpopular color." Douglass attested to Lincoln's genuine respect for him and other blacks and to the wisdom of his course of action in obtaining both the preservation of the Union (his sworn duty as President) and the freeing of the slaves. In an 1876 speech he defended Lincoln's actions thus: His great mission was to accomplish two things: first, to save his country from dismemberment and ruin; and, second, to free his country from the great crime of slavery. To do one or the other, or both, he must have the earnest sympathy and the powerful cooperation of his loyal fellow-countrymen. Without this primary and essential condition to success his efforts must have been vain and utterly fruitless. Had he put the abolition of slavery before the salvation of the Union, he would have inevitably driven from him a powerful class of the American people and rendered resistance to rebellion impossible. Viewed from the genuine abolition ground, Mr. Lincoln seemed tardy, cold, dull, and indifferent; but measuring him by the sentiment of his country, a sentiment he was bound as a statesman to consult, he was swift, zealous, radical, and determined… Taking him for all in all, measuring the tremendous magnitude of the work before him, considering the necessary means to ends, and surveying the end from the beginning, infinite wisdom has seldom sent any man into the world better fitted for his mission than Abraham Lincoln. 
In his past, Lincoln lived in a middle-class, racially mixed neighborhood of Springfield, Illinois; one of his long-time neighbors, Jameson Jenkins (who may have been born a slave), had come from North Carolina and was publicly implicated in the 1850s as a Springfield conductor on the underground railroad, sheltering escaped slaves. In 1861, Lincoln called on Jenkins to give him a ride to the train depot, where Lincoln delivered his farewell address before leaving Springfield for the last time. - George Washington and slavery - Thomas Jefferson and slavery - John Quincy Adams and abolitionism - Timeline of the African-American Civil Rights Movement - Striner, Richard (2006). Father Abraham: Lincoln's Relentless Struggle to End Slavery. Oxford University Press. pp. 2–4. ISBN 978-0-19-518306-1. - Howard Jones (2002). Abraham Lincoln and a New Birth of Freedom: The Union and Slavery in the Diplomacy of the Civil War. U of Nebraska Press. pp. 21–22. - Foner, Eric (1970), Free Soil, Free Labor, Free Men: The Ideology of the Republican Party before the Civil War - "Mary Todd Lincoln". Archived from the original on April 22, 2009. - "Mr. Lincoln's White House: an examination of Washington DC during Abraham Lincoln's Presidency". Mrlincolnswhitehouse.org. Archived from the original on January 24, 2009. Retrieved 2008-08-31. - Hamner, Christopher (December 2010). "Booth's Reason for Assassination". Teaching History: Ask a Historian. Roy Rosenzweig Center for History and New Media at George Mason University. Archived from the original on December 2, 2010. Retrieved December 2, 2010. Lincoln also indicated a wish to extend the franchise to some African-Americans – at the very least, those who had fought in the Union ranks during the war—and expressed a desire that the southern states would extend the vote to literate blacks, as well. Booth stood in the audience for the speech, and this notion seems to have amplified his rage at Lincoln. 'That means nigger citizenship,' he told Lewis Powell, one of his band of conspirators. 'Now, by God, I’ll put him through. That is the last speech he will ever make.' - Donald (1996), pp. 20–22. - Donald (1996), pp. 22–24. - Sandburg (1926), p. 20. - "Lincoln on Slavery". Retrieved 2009-11-15. - Lincoln, Abraham (1907). "Injustice the Foundation of Slavery". In Marion Mills Miller. Life and Works of Abraham Lincoln. 3. New York: Current Literature. pp. 26–27. - Adams, Carl (Fall–Winter 2008). "Lincoln's First Freed Slave A Review of Bailey v. Cromwell, 1841". Journal of the Illinois State Historical Society. 101 (3–4). Archived from the original on January 28, 2012. Retrieved 2012-06-16. - "Lincoln Law Practice – People v Pond". - Holzer, p. 63. - Harris, William C. (2007). Lincoln's Rise to the Presidency. University Press of Kansas. p. 54. ISBN 978-0-7006-1520-9.; Foner, Eric (2010). The Fiery Trial: Abraham Lincoln and American Slavery. W.W. Norton. p. 57. ISBN 978-0-393-06618-0.. - Thomas (2008), pp. 148–152. - White, p. 199. - Basler (1953), p. 255.[full citation needed] - "Mr. Lincoln and Freedom". Abraham Lincoln Institute. - a. "Speech at Peoria, October 16, 1854". Retrieved 2008-09-15. - b. "Preface by Lewis Lehrman". Retrieved 2008-08-31. - c. "1854". Retrieved 2008-08-31. - d. "The progress of Abraham Lincoln's opposition to slavery". Retrieved 2008-08-31. - "Abraham Lincoln at Peoria: The Turning Point: Getting Right with the Declaration of Independence". Lincolnatpeoria.com. Archived from the original on 2008-09-14. Retrieved 2008-08-31. 
- "Lincoln on Slavery". udayton.edu. Retrieved 2008-08-31. - Lincoln, Abraham. "Mr. Lincoln's Reply". First Joint Debate at Ottawa. bartleby.com. Retrieved 2008-09-15. - Escott, Paul (2009)"What Shall We Do with the Negro?" University of Virginia Press, p. 25. - "Abraham Lincoln's 1855 Letter to Joshua Speed". Showcase.netins.net. Retrieved 2013-10-12. - "32b. The Lincoln-Douglas Debates". US History. Archived from the original on April 7, 2014. Retrieved 2014-04-28. - "U S Constitution – The Lincoln–Douglas Debates, First Joint Debate". Usconstitution.com. Retrieved 2008-08-31.[permanent dead link] - "Vespasian Warner's recount of events leading up to the Lincoln–Douglas Debate". Moore–Warner Farm Management. Archived from the original on 2009-01-26. Retrieved 2009-01-21. - Cuomo, Mario M.; Holzer, Harold (1990). Lincoln on Democracy. Harper Collins. p. 131. ISBN 0-06-039126-X. - "Fragment: On Slavery - Teaching American History". teachingamericanhistory.org. - Cuomo, Mario M.; Holzer, Harold (1990). Lincoln on Democracy. Harper Collins. p. 180. ISBN 0-06-039126-X. - Lincoln, Abraham (10 December 1860). "[Letter ] To Lyman Trumbull". The Collected Works of Abraham Lincoln. Retrieved 9 March 2015. - Lincoln, Abraham (15 December 1860). "[Letter ] To John A. Gilmer". The Collected Works of Abraham Lincoln. Retrieved 9 March 2015. - Lincoln, Abraham (15 December 1860). "[Letter ] To Alexander H. Stephens". The Collected Works of Abraham Lincoln. Retrieved 9 March 2015. - Foner, Eric (2010). The Fiery Trial: Abraham Lincoln and American Slavery. W.W. Norton. p. 153. ISBN 978-0-393-06618-0. - Foner, Eric (2010). The Fiery Trial: Abraham Lincoln and American Slavery. W.W. Norton. p. 155. ISBN 978-0-393-06618-0. - Lincoln, Abraham (22 February 1861). "Speech in Independence Hall, Philadelphia, Pennsylvania". The Collected Works of Abraham Lincoln. Retrieved 9 March 2015. - Cuomo, Mario M.; Holzer, Harold (1990). Lincoln on Democracy. Harper Collins. p. 198. ISBN 0-06-039126-X. - Lincoln, Abraham (4 March 1861). "First Inaugural Address". Abraham Lincoln Online. Retrieved 9 March 2015. - Cuomo, Mario M.; Holzer, Harold (1990). Lincoln on Democracy. Harper Collins. p. 208. ISBN 0-06-039126-X. - Foner, Eric (2010). The Fiery Trial: Abraham Lincoln and American Slavery. W.W. Norton. pp. 156, 158. ISBN 978-0-393-06618-0. - "Abraham Lincoln and the Corwin Amendment". www.lib.niu.edu. - Jenkins, Sally, and John Stauffer. The State of Jones. New York: Anchor Books edition/Random House, 2009 (2010). ISBN 978-0-7679-2946-2, p. 72 - Daniel W. Crofts, Lincoln and the Politics of Slavery: The Other Thirteenth Amendment and the Struggle to Save the Union (2016). - Cox, LaWanda (1981). Lincoln and Black Freedom: A Study in Presidential Leadership. University of South Carolina Press. pp. 12–14. ISBN 978-0-87249-400-8. - Lincoln, Abraham. "Letter to Horace Greeley, August 22, 1862". In Miller, Marion Mills. Life and Works of Abraham Lincoln. Current Literature. Retrieved 2011-01-24. - Harold Holzer, Dear Mr. Lincoln: Letters to the President, Southern Illinois University Press, 2006, p. 162 - Striner, Richard (2006). Father Abraham: Lincoln's Relentless Struggle to End Slavery. Oxford University Press. p. 176. ISBN 978-0-19-518306-1. - Brewster, Todd (2014). Lincoln's Gamble: The Tumultuous Six Months that Gave America the Emancipation Proclamation and Changed the Course of the Civil War. Scribner. p. 59. ISBN 978-1451693867. - Brewster, Todd (2014). 
Lincoln's Gamble: The Tumultuous Six Months that Gave America the Emancipation Proclamation and Changed the Course of the Civil War. Scribner. p. 236. ISBN 978-1451693867. - See Donald Lincoln p 396-97 - See for text - Lincoln, Abraham (26 August 1863). "[Letter ] To James C. Conkling". The Collected Works of Abraham Lincoln. Retrieved 10 March 2015. - Cuomo and Holzer, Lincoln on Democracy, 1990, p. 292 - "Abraham Lincoln's Letter to James Conkling". Showcase. Archived from the original on 2008-07-19. Retrieved 2008-08-31. - "1864 letter to Albert G. Hodges". - Cuomo and Holzer, Lincoln on Democracy, 1990, "If Slavery is not wrong, nothing is Wrong", pp. 316–318 - Foner, Eric (2010). The Fiery Trial: Abraham Lincoln and American Slavery. W.W. Norton. p. 312. ISBN 978-0-393-06618-0. - Vorenberg, Final Freedom (2001), p. 47. - Foner, Eric (2010). The Fiery Trial: Abraham Lincoln and American Slavery. W.W. Norton. pp. 299, 312–313. ISBN 978-0-393-06618-0. - Cuomo and Holzer, Lincoln on Democracy, 1990, pp. 338–340 - James Tackach (2002). Lincoln's Moral Vision. Univ. Press of Mississippi. p. 79. - William E. Gienapp, "Abraham Lincoln and the Border States." Journal of the Abraham Lincoln Association 13 (1992): 13-46. in JSTOR - Carol E. Hoffecker, "Abraham Lincoln and Delaware." Delaware History (2008) 32#3 pp 155-170. - Lincoln, Abraham (December 1, 1862). Abraham Lincoln's Second Annual Message of 1862 (Speech). Presidential speech. Archived from the original on March 24, 2012. - Lowell H. Harrison, "Lincoln and Compensated Emancipation in Kentucky." in Douglas Cantrell et al eds., Kentucky through the Centuries: A Collection of Documents and Essays (2005). - Aaron Astor, Rebels on the Border: Civil War, Emancipation, and the Reconstruction of Kentucky and Missouri (LSU Press, 2012). - James G. Randall and David Donald (1960). The Civil War and Reconstruction [Second Edition]. p. 673. - Welles, Gideon (1861–1864). "Diary of Gideon Wells". I: 152. JSTOR 2713705. The President objected unequivocally to compulsion. The emigration must be voluntary... - Woodson, Carter Godwin; Logan, Rayford Whittingham (1919). The Journal of Negro History. 4. Association for the Study of Negro Life and History, Inc. p. 19. Retrieved March 14, 2016. [B]ring back to this country such of the colonists there as desire to return. - Magness and Page (2011), Colonization After Emancipation: Lincoln and the Movement for Black Resettlement, chapter 11. - Oubre, Forty Acres and a Mule (1978), p. 4. - Magness & Page, Emancipation After Colonization (2011), p. 4. - Magness, Phillip W. (September 2011). "James Mitchell and the Mystery of the Emigration Office Papers". Journal of the Abraham Lincoln Association. 32 (2): 50–62. Retrieved 2014-08-08. - "Abraham Lincoln: Second Annual Message". www.presidency.ucsb.edu. - Oubre, Forty Acres and a Mule (1978), pp. 3–4. "As early as October, 1861, Lincoln proposed colonizing Negroes on the Chiriqui Improvement Company grant in the district of Panama. In 1855 the company had gained control of several hundred thousand acres of rich coal land on the Isthmus of Panama. During the war the company contracted to provide the Navy Department with coal at one half the cost in the United States. In order to meet the demands of the Department of the Navy the company needed laborers for its coal mines. - Page, Sebastian N. (2011). "Lincoln and Chiriquí Colonization Revisited". American Nineteenth Century History. 12 (3): 289–325. doi:10.1080/14664658.2011.626160. 
- Oubre, Forty Acres and a Mule (1978), p. 5. - Lockett, James D. (1991). "Abraham Lincoln and Colonization". Journal of Black Studies. 21 (4): 428–444. doi:10.1177/002193479102100404. - ""Lincoln and Black Colonization," Britannica.com, Retrieved 2011-04-23". - Phillip W. Magness and Sebastian N. Page, Colonization after Emancipation: Lincoln and the Movement for Black Resettlement (University of Missouri Press: 2011), Chapter 3 - Phillip W. Magness and Sebastian N. Page, Colonization after Emancipation: Lincoln and the Movement for Black Resettlement (University of Missouri Press: 2011), Chapter 5 - For a summary of this debate see Sebastian N. Page, "Lincoln on Race," American Nineteenth Century History, Vol. 11, No. 1, March 2010 - Phillip W. Magness and Sebastian N. Page, Colonization after Emancipation: Lincoln and the Movement for Black Resettlement (University of Missouri Press: 2011), p. 98 - Bates to Lincoln, Opinion on James Mitchell, November 30, 1864, Abraham Lincoln Papers at the Library of Congress, Retrieved 2011-11-17 - Michael Burlingame and John R. Ettlinger, eds., Inside Lincoln's White House: The Complete Civil War Diary of John Hay (Carbondale: Southern Illinois University Press, 1999) - Benjamin F. Butler, Autobiography and Personal Reminiscences of Major General Benjamin F. Butler (Boston: A. M. Thayer, 1892), p. 903 - Mark E. Neely, "Abraham Lincoln and Black Colonization: Benjamin Butler's Spurious Testimony," Civil War History 25 (1979), pp. 77–83 - Phillip W. Magness, "Benjamin Butler's Colonization Testimony Reevaluated". Journal of the Abraham Lincoln Association, Vol. 29, No. 1, Summer 2008 - Henry Louis Gates, Jr. Lincoln on Race and Slavery Princeton University Press, 2009, foreword - "Last Public Address". Speeches and Writings. Abraham Lincoln Online. April 11, 1865. Retrieved 2008-09-15. External link in - Swanson, p. 6 - "Lincoln's Constitutional Dilemma: Emancipation and Black Suffrage". Journal of the Abraham Lincoln Association. Archived from the original on 2008-08-21. Retrieved 2008-08-31. - Gates (February 12, 2009),Was Lincoln a Racist? Archived December 3, 2011, at the Wayback Machine. - "Oration in Memory of Abraham Lincoln by Frederick Douglass". Teachingamericanhistory.org. April 14, 1876. Retrieved 2011-10-29. - Douglass, pp. 259–260. - "Lincoln Home – The Underground Railroad in Lincoln's Neighborhood" (PDF). National Park Service – US Dept. of the Interior. February 2008. Retrieved 2012-08-25. - Belz, Herman. Abraham Lincoln, Constitutionalism, and Equal Rights in the Civil War Era (1998) - Burton, Vernon. The Age of Lincoln (2009) - DiLorenzo, Thomas J. The Real Lincoln: A New Look at Abraham Lincoln, His Agenda, and an Unnecessary War (2003). ISBN 978-0-7615-3641-3. OCLC 716369332 an intense attack on Lincoln - Donald, David H. Lincoln (1995) a standard scholarly biography - Escott, Paul D."What Shall We Do with the Negro?" Lincoln, White Racism, and Civil War America. University of Virginia Press, (2009). ISBN 978-0-8139-2786-2 - Foner, Eric. The Fiery Trial: Abraham Lincoln and American Slavery (2011); Pulitzer Prize; the standard scholarly account - Finkelman, Paul. "Lincoln and Emancipation: Constitutional Theory, Practical Politics, and the Basic Practice of Law," Journal of Supreme Court History (2010) 35#3 pp. 243–266 - Fredrickson, George M. Big Enough to Be Inconsistent: Abraham Lincoln Confronts Slavery and Race (2009) - Guelzo, Allen C.: - Abraham Lincoln: Redeemer President. 1999. 
- Defending Emancipation: Abraham Lincoln and the Conkling Letter, 1863. Civil War History, Vol. 48. 2002. - "How Abe Lincoln Lost the Black Vote: Lincoln and Emancipation in the African American Mind". Journal of the Abraham Lincoln Association. September 15, 2008. Archived from the original on July 5, 2008. Retrieved 2008-09-15. - Harris, William C. With Charity for All: Lincoln and the Restoration of the Union (1997). - Holzer, Harold (2004). Lincoln at Cooper Union: The Speech That Made Abraham Lincoln President. Simon & Schuster. ISBN 978-0-7432-9964-0. - Jones, Howard; Abraham Lincoln and a New Birth of Freedom: The Union and Slavery in the Diplomacy of the Civil War (1999) - Klingaman, William K. Final Freedom: The Civil War, Abraham Lincoln and the Road to Emancipation, 1861–1865 (2001) - McPherson, James M. Abraham Lincoln and the Second American Revolution (1992) - Manning, Chandra, "The Shifting Terrain of Attitudes toward Abraham Lincoln and Emancipation," Journal of the Abraham Lincoln Association, 34 (Winter 2013), 18–39, historiography - Rawley, James A. Abraham Lincoln and a Nation Worth Fighting For. Harlan-Davidson, (1996) - Swanson, James. Manhunt: The 12-Day Chase for Lincoln's Killer. Harper Collins, 2006. ISBN 978-0-06-051849-3 - Vorenberg, Michael. Final Freedom: The Civil War, the Abolition of Slavery, and the Thirteenth Amendment (2001)
High Resolution Coronal Imager (Hi-C) Why the Sun's corona, the wispy outermost layer of its atmosphere, is so much hotter than the surface below is one of the big mysteries in solar science. NASA's High-Resolution Coronal Imager (Hi-C) is an ultraviolet telescope designed to help solve that mystery and to understand the processes by which the Sun sends violent storms of particles into the Solar System. Since most of the ultraviolet light from the Sun's atmosphere is absorbed by air molecules before it reaches the ground, Hi-C is carried on a sounding rocket into Earth's upper atmosphere for each brief flight. Since 2012, Hi-C has flown three times, providing the highest resolution images of the corona to date and revealing new details in the Sun's atmosphere. Hi-C is jointly operated by NASA's Marshall Space Flight Center (MSFC) and the Center for Astrophysics | Harvard & Smithsonian, with contributions from other researchers around the world. The Telescope and the Science The Sun's corona can reach temperatures more than a thousand times greater than those on the surface, which means some process is heating material as it travels outward. How that process works is one of the most enduring mysteries in solar physics, but it is also relevant to life on Earth: the corona is the source of solar storms that can wreak havoc on satellites and power grids. The corona is only visible to the human eye during a total solar eclipse, but it shines brightly in ultraviolet (UV) light. Since Earth's atmosphere blocks most UV radiation, corona observations require either space telescopes, which are expensive, or high-altitude rockets, which can be built more cheaply but are limited in the amount of time they can observe. For Hi-C, NASA follows the rocket strategy. This ultraviolet observatory was designed to provide the highest-resolution images of the corona ever taken, using a Black Brant IX sounding rocket to carry Hi-C into the upper atmosphere. These flights last only a little over ten minutes, but the instruments are powerful enough to make the most of that time. Hi-C has flown three times: in 2012, 2016, and 2018, with an improved camera used on the third flight. During the roughly five minutes of observing time on the third flight, Hi-C used its 24-centimeter (9.5-inch) mirror to capture roughly one image of the corona every five seconds. This flight was conducted in tandem with an observation by NASA's Interface Region Imaging Spectrograph (IRIS) spacecraft to provide a complete set of data on the Sun's state at that time. With these high resolution images, scientists discovered new magnetic activity within the Sun's corona, which may be responsible for the high temperatures in the plasma. Hi-C is operated collaboratively between NASA's Marshall Space Flight Center and the CfA, with contributions from Lockheed Martin, the University of Central Lancashire in the UK, and NASA's Wallops Flight Facility.
Teaching Strategies for Effective Brain-Based Learning What is brain-based learning, and how does it work? Brain-based learning refers to teaching methods, lesson designs, and school programs that are based on the latest scientific research about how the brain learns, including how students learn differently as they age, grow, and mature socially, emotionally, and cognitively. You already know that each student learns in a unique way, so it's critical to use a variety of brain-based learning strategies in your teaching practice to appeal to a diverse range of students and their needs. It is necessary for young brains to apply information in order to retain it. Here are some teaching strategies to help your students develop executive function. Simple ways to incorporate brain-based learning into the classroom The International Journal of Innovative Research & Studies lays out a fantastic set of brain-based learning strategies that you can use in the classroom to boost your students' performance and chances of success. Here are six ideas to help you get started with brain-based learning for students: 1. Get off to a good start by setting a positive tone For real learning to take place, students must feel physically and emotionally safe in the classroom. You can help your students learn more effectively by creating a positive classroom environment in which they feel supported and encouraged. Setting a positive tone in the classroom with classroom greetings can increase student engagement, and many educators have found that doing so at the start of the day creates a sense of community. 2. Make time to turn and talk to a classmate Students are more likely to remember information when they talk about what they've learned. "Turn and talk to your classmate" can help students process what they've just read, discuss ideas before sharing them with the class, and clarify any issues they may have encountered while doing homework. This strategy can be used as a warm-up, during class discussions, or at the end of class to round out the day. The Teacher Toolkit has a lot of resources to help you get started with this practice. If you're teaching remotely, use the raise-hand feature found in most video conferencing platforms to keep things more organized. 3. Make use of visual elements Many people are visual learners, meaning they absorb and remember information best when they can see it. In a physical classroom you probably already have posters and visuals on the walls, but are they helping your students? In a virtual setting, using visual elements to add context to lessons, such as breaking up your slides with a GIF that draws students' attention back during a lecture or finding a quick video of the science concepts you're discussing, is a simple way to keep students' attention. Other fun ways to incorporate visual elements into your teaching include changing your Zoom background to match the theme of your lesson or wearing a silly hat or decorative necktie. 4. Break learning down into manageable chunks Chunking, or breaking down difficult or large texts into smaller pieces, has been shown to aid students in identifying keywords and phrases, paraphrasing, and understanding the text in their own words. Students can better understand and comprehend material by breaking down a large piece of text into smaller, more manageable chunks.
Chunking can also be applied to instructions: work through long instructions with your students step by step to ensure that they understand everything that is being asked of them. 5. Demonstrate higher-order reasoning abilities Consider how and when you'll model these higher-order thinking skills in your lessons, and give students opportunities to use their developing executive function networks throughout the learning process. When this executive function is strengthened, a student's ability to monitor the accuracy of his or her work and analyze the validity of information heard or read is enhanced. Techniques that promote the development of judgment networks include estimation with feedback and adjustment, editing and revising one's own written work, and evaluating websites using criteria to separate fact from opinion. This executive function assists students in distinguishing low-importance details from the main ideas of a text or study topic. When students plan an essay, choose information to include in notes, or evaluate word problems in math for relevant data, they use the executive function of prioritizing. Prioritizing also improves one's ability to put disparate facts together into a larger concept while recognizing degrees of relevance and relatedness. Possibilities for Activating and Transferring Prior Knowledge Plan activities that allow students to connect what they've learned in the past to what they're learning now and to the larger concept. When you give students opportunities to apply what they've learned in class to a variety of situations, you're encouraging them to build larger conceptual networks in their brains, making the new information a valuable tool and part of their long-term memory. When it comes to planning learning contexts that are personally appealing, you have to go beyond textbooks. This can be challenging when the curriculum takes longer to cover than the time available. When you plan and teach with mental manipulation for executive function in mind, your students will become more aware of their own shifting attitudes and accomplishments. 6. Introduce activities that will aid in the development of executive function Executive functions such as learning, studying, organizing, prioritizing, reviewing, and actively participating in class must be explicitly taught and practiced by students. Comparing and contrasting, providing new examples of a concept, spiraled curriculum, group collaboration, and open-ended discussions are all activities that can help develop executive function networks. Additionally, when students summarise and symbolize new learning in new formats, such as through the arts or writing across the curriculum, executive function is developed. Through authentic, student-centered activities, projects, and discussions, students will be able to do the following: - Make forecasts - Solve a wide range of problems. - Make inquiries. - Determine what information they require. - Consider how they can acquire any skills or knowledge they lack in order to achieve their desired outcomes. Students' attitudes toward the value of learning are strengthened by this type of student-driven information and skill acquisition. Students put forth the effort, collaborate successfully, ask questions, revise hypotheses, redo work, and seek the foundational knowledge you require them to learn when they are motivated to solve problems that are personally meaningful to them.
They do this because they are curious about what you have to say. The importance of foundational knowledge cannot be overstated. Learning is organized into related patterns, linked in long-term conceptual memory neural networks, and accessible for retrieval and transfer to solve future problems and investigate new concepts. It's easier than you think to incorporate brain-based learning into an in-person, online, or blended classroom (and you might already be doing so!). Finding new and innovative ways to engage your students can help them see the world as a place where they can learn. Thanks for reading!
Test your students' ability to spot mistakes and develop their understanding of the nth term of a sequence with this activity. Pupils are given two different answers to a question and have to determine which is correct. They then have to explain what misconception has led to the incorrect answer. In this resource there are four different tick or trash worksheets: Worksheet A: This worksheet involves 10 questions on finding the nth term of linear sequences. The first nine questions use increasing sequences and Q10 is a decreasing linear sequence. Worksheet B: This worksheet involves 10 questions on finding the nth term of linear sequences. The first nine questions are a mix of both increasing and decreasing linear sequences and Q10 is a basic quadratic sequence. Worksheet C: This worksheet involves 10 questions on finding the nth term of quadratic sequences, a mix of a=1 and a≠1… Worksheet D: This worksheet has 9 questions which progress through all of the above styles of question, both linear and quadratic. Full instructions and solutions are provided in this resource – what more could you need? Check out the Tick or Trash bundle. I've also included a UK A4 size and a US letter size, please print from the most appropriate. Thank you for looking.
The four planets farthest from the sun—Jupiter, Saturn, Uranus, and Neptune—are called the outer planets of our solar system. Figure 25.19 shows the relative sizes of the outer planets and the Sun. Because they are much larger than Earth and the other inner planets, and because they are made primarily of gases and liquids rather than solid matter, the outer planets are also called gas giants. The gas giants are made up primarily of hydrogen and helium, the same elements that make up most of the Sun. Astronomers believe that hydrogen and helium gases were found in large amounts throughout the solar system when it first formed. However, the inner planets didn't have enough mass to hold on to these very light gases. As a result, the hydrogen and helium initially on these inner planets floated away into space. Only the Sun and the massive outer planets had enough gravity to keep hydrogen and helium from drifting away. All of the outer planets have numerous moons. All of the outer planets also have planetary rings, which are rings of dust and other small particles encircling a planet in a thin plane. Only the rings of Saturn can be easily seen from Earth. - Describe key features of the outer planets and their moons. - Compare the outer planets to each other and to Earth. Jupiter, shown in Figure 25.20, is the largest planet in our solar system, and the largest object in the solar system besides the Sun. Jupiter is named for the king of the gods in Roman mythology. Jupiter is truly a giant! It is much less dense than Earth—it has 318 times the mass of Earth, but over 1,300 times Earth's volume. Because Jupiter is so large, it reflects a lot of sunlight. When it is visible, it is the brightest object in the night sky besides the Moon and Venus. This brightness is all the more impressive because Jupiter is quite far away, orbiting at an average distance of about 5.2 AU from the Sun. It takes Jupiter about 12 Earth years to orbit once around the Sun. A Ball of Gas and Liquid If a spaceship were to try to land on the surface of Jupiter, the astronauts would find that there is no solid surface at all! Jupiter is made mostly of hydrogen, with some helium, and small amounts of other elements. The outer layers of the planet are gas. Deeper within the planet, pressure compresses the gases into a liquid. Some evidence suggests that Jupiter may have a small rocky core at its center. A Stormy Atmosphere The upper layer of Jupiter's atmosphere contains clouds of ammonia (NH3) in bands of different colors. These bands rotate around the planet, but also swirl around in turbulent storms. The Great Red Spot, shown in Figure 25.21, is an enormous, oval-shaped storm found south of Jupiter's equator. It is more than three times as wide as the entire Earth! Clouds in the storm rotate in a counterclockwise direction, making one complete turn every six days or so. The Great Red Spot has been on Jupiter for at least 300 years. It is possible, but not certain, that this storm is a permanent feature on Jupiter. Jupiter's Moons and Rings Jupiter has a very large number of moons. As of 2008, we have discovered over 60 natural satellites of Jupiter. Of these, four are big enough and bright enough to be seen from Earth, using no more than a pair of binoculars. These four moons—named Io, Europa, Ganymede, and Callisto—were first discovered by Galileo in 1610, so they are sometimes referred to as the Galilean moons. Figure 25.22 shows the four Galilean moons and their sizes relative to the Great Red Spot.
The Galilean moons are larger than the dwarf planets Pluto, Ceres, and Eris. In fact, Ganymede, which is the biggest moon in the solar system, is even larger than the planet Mercury! Scientists are particularly interested in Europa, the smallest of the Galilean moons, because it may be a likely place to find extraterrestrial life. The surface of Europa is a smooth layer of ice. Evidence suggests that there is an ocean of liquid water under the ice. Europa also has a continual source of energy—it is heated as it is stretched and squashed by tidal forces from Jupiter. Because it has liquid water and a continual heat source, astrobiologists surmise that life might have formed on Europa much as it did on Earth. Numerous missions have been planned to explore Europa, including plans to drill through the ice and send a probe into the ocean. However, no such mission has yet been attempted. In 1979, two spacecraft (Voyager 1 and Voyager 2) visited Jupiter and its moons. Photos from the Voyager missions showed that Jupiter has a ring system. This ring system is very faint, so it is very difficult to observe from Earth. Saturn, shown in Figure 25.23, is famous for its beautiful rings. Saturn's mass is about 95 times the mass of Earth, and its volume is 755 times Earth's volume, making it the second largest planet in the solar system. Despite its large size, Saturn is the least dense planet in our solar system. It is less dense than water, which means that if there could be a bathtub big enough, Saturn would float. In Roman mythology, Saturn was the father of Jupiter, so it is an appropriate name for the next planet beyond Jupiter. Saturn orbits the Sun once about every 30 Earth years. Saturn's composition is similar to Jupiter's. It is made mostly of hydrogen and helium, which are gases in the outer layers and liquids at deeper layers. It may also have a small solid core. The upper atmosphere has clouds in bands of different colors. These rotate rapidly around the planet, but there seems to be less turbulence and fewer storms on Saturn than on Jupiter. A Weird Hexagon There is a strange feature at Saturn's north pole—the clouds form a hexagonal pattern, as shown in the infrared image in Figure 25.24. This hexagon was viewed by Voyager 1 in the 1980s, and again by the Cassini Orbiter in 2006, so it seems to be a long-lasting feature. Though astronomers have hypothesized and speculated about what causes these hexagonal clouds, no one has yet come up with a convincing explanation. The rings of Saturn were first observed by Galileo in 1610. However, he could not see them clearly enough to realize what they were; he thought they might be two large moons, one on either side of Saturn. In 1659, the Dutch astronomer Christiaan Huygens was the first to realize that they were in fact rings. The rings circle Saturn's equator. They appear tilted because Saturn itself is tilted about 27 degrees to the side. The rings do not touch the planet. The Voyager 1 spacecraft visited Saturn in 1980, followed by Voyager 2 in 1981. The Voyager probes sent back detailed pictures of Saturn, its rings, and some of its moons. From the Voyager data, we learned that Saturn's rings are made of particles of water ice, with a little bit of dust as well. There are several gaps in the rings. Some of the gaps have been cleared out by moons that are within the rings. Scientists believe the moons' gravity caused ring dust and gas to fall toward the moons, leaving gaps in the rings.
Other gaps in the rings are caused by the competing gravitational forces of Saturn and of moons outside the rings. As of 2008, over 60 moons have been identified around Saturn. Most of them are very small. Some are even found within the rings. In a sense, all the particles in the rings are like little moons, too, because they orbit around Saturn. Only seven of Saturn's moons are large enough for gravity to have made them spherical, and all but one are smaller than Earth's moon. Saturn's largest moon, Titan, is about one and a half times the size of Earth's Moon and is also larger than the planet Mercury. Figure 25.25 compares the size of Titan to Earth. Scientists are very interested in Titan because it has an atmosphere that is similar to what Earth's atmosphere might have been like before life developed on Earth. Titan may have a layer of liquid water under a layer of ice on the surface. Scientists now believe there are also lakes on the surface of Titan, but these lakes contain liquid methane (CH4) and ethane (C2H6) instead of water! Methane and ethane are compounds found in natural gas, a mixture of gases found naturally on Earth and often used as fuel. Uranus, shown in Figure 25.26, is named for the Greek god of the sky. In Greek mythology, Uranus was the father of Cronos, the Greek equivalent of the Roman god Saturn. By the way, astronomers pronounce the name "YOOR-uh-nuhs". Uranus was not known to ancient observers. It was first discovered by the astronomer William Herschel in 1781. Uranus can be seen from Earth with the unaided eye, but it was overlooked for centuries because it is very faint. Uranus is faint because it is very far away, not because it is small. It is about 2.8 billion kilometers (1.8 billion miles) from the Sun. Light from the Sun takes about 2 hours and 40 minutes to reach Uranus. Uranus orbits the Sun once about every 84 Earth years. An Icy Blue-Green Ball Like Jupiter and Saturn, Uranus is composed mainly of hydrogen and helium. It has a thick layer of gas on the outside, then liquid further on the inside. However, Uranus has a higher percentage of icy materials, such as water, ammonia (NH3), and methane (CH4), than Jupiter and Saturn do. When sunlight reflects off Uranus, clouds of methane filter out red light, giving the planet a blue-green color. There are bands of clouds in the atmosphere of Uranus, but they are hard to see in normal light, so the planet looks like a plain blue ball. Uranus is the least massive of the outer planets, with a mass about 14 times the mass of Earth. Even though it has much more mass than Earth, it is much less dense than Earth. At the "surface" of Uranus, the gravity is actually weaker than on Earth's surface. If you were at the top of the clouds on Uranus, you would weigh about 10% less than what you weigh on Earth. The Sideways Planet Most of the planets in the solar system rotate on their axes in the same direction that they move around the Sun. Uranus, though, is tilted on its side, so its axis is almost parallel to its orbit. In other words, it rotates like a top that has been tipped over so that it spins parallel to the floor. Scientists think that Uranus was probably knocked over by a collision with another planet-sized object billions of years ago. Rings and Moons of Uranus Uranus has a faint system of rings, as shown in Figure 25.27. The rings circle the planet's equator, but because Uranus is tilted on its side, the rings are almost perpendicular to the planet's orbit. Uranus has 27 moons that we know of.
All but a few of them are named for characters from the plays of William Shakespeare. The five biggest moons of Uranus—Miranda, Ariel, Umbriel, Titania, and Oberon—are shown in Figure 25.28. Neptune, shown in Figure 25.29, is the eighth planet from the Sun. It is the only major planet that can't be seen from Earth without a telescope. Scientists predicted the existence of Neptune before it was actually discovered. They noticed that Uranus did not always appear exactly where it should appear. They knew there must be another planet beyond Uranus whose gravity was affecting Uranus' orbit. This planet was discovered in 1846, in the position that had been predicted, and it was named Neptune, for the Roman god of the sea, because of its bluish color. Neptune has slightly more mass than Uranus, but it is slightly smaller in size. In many respects, it is similar to Uranus, and the two are often considered "sister planets". Neptune, which is nearly 4.5 billion kilometers (2.8 billion miles) from the Sun, is much farther from the Sun than even distant Uranus. It moves very slowly in its orbit, taking 165 Earth years to complete one orbit around the Sun. Extremes of Cold and Wind Neptune is blue in color, with a few darker and lighter spots. The blue color is caused by atmospheric gases, including methane (CH4). When Voyager 2 made its closest encounter with Neptune in 1989, there was a large dark-blue spot south of the equator. This spot was called the Great Dark Spot. However, when the Hubble Space Telescope took pictures of Neptune in 1994, the Great Dark Spot had disappeared. Instead, another dark spot had appeared north of the equator. Astronomers believe both of these spots represent gaps in the methane clouds on Neptune. The changing appearance of Neptune is due to its turbulent atmosphere. The winds on Neptune are stronger than on any other planet in the solar system, reaching speeds of 1,100 km/h (700 mi/h), close to the speed of sound. This extreme weather surprised astronomers, since the planet receives little energy from the Sun to power weather systems. Neptune is also one of the coldest places in the solar system. Temperatures at the top of the clouds are about –218°C (–360°F). Neptune's Rings and Moons Like the other outer planets, Neptune has rings of ice and dust. These rings are much thinner and fainter than those of Saturn. Some evidence suggests that the rings of Neptune may be unstable, and may change or disappear in a relatively short time. Neptune has 13 known moons. Triton, shown in Figure 25.30, is the only one of them that has enough mass to be spherical in shape. Triton orbits in the direction opposite to the orbit of Neptune. Scientists think Triton did not form around Neptune, but instead was captured by Neptune's gravity as it passed by. Pluto was once considered one of the outer planets, but when the definition of a planet was changed in 2006, Pluto was reclassified as a dwarf planet. It is one of the largest and brightest objects in that group. Look for Pluto in the next section in the discussion of dwarf planets. Pluto is no longer considered a planet. The four outer planets—Jupiter, Saturn, Uranus, and Neptune—are all gas giants made primarily of hydrogen and helium. They have thick gaseous outer layers and liquid interiors. - All of the outer planets have numerous moons, as well as planetary rings made of dust and other particles. - Jupiter is by far the largest planet in the solar system.
It has bands of different colored clouds, and a long-lasting storm called the Great Red Spot. - Jupiter has over 60 moons. The four biggest were discovered by Galileo, and are called the Galilean moons. - One of the Galilean moons, Europa, may have an ocean of liquid water under a layer of ice. The conditions in this ocean might be right for life to have developed. - Saturn is smaller than Jupiter, but similar in composition and structure. - Saturn has a large system of beautiful rings. Saturn's largest moon, Titan, has an atmosphere similar to Earth's atmosphere before life formed. - Uranus and Neptune were discovered in modern times. They are similar to each other in size and composition. They are both smaller than Jupiter and Saturn, and also have more icy materials. - Uranus is tilted on its side, probably due to a collision with a large object in the past. - Neptune is very cold and has very strong winds. It had a large dark spot that disappeared, then another dark spot appeared on another part of the planet. These dark spots are storms in Neptune's atmosphere. - Pluto is no longer considered one of the outer planets. It is now considered a dwarf planet. - Name the outer planets a) in order from the Sun outward, b) from largest to smallest by mass, and c) from largest to smallest by size. - Why are the outer planets called gas giants? - How do the Great Red Spot and Great Dark Spot differ? - Name the Galilean moons, and explain why they are called that. - Why might Europa be a likely place to find extraterrestrial life? - What causes gaps in Saturn's rings? - Why are scientists interested in the atmosphere of Saturn's moon Titan? - What liquid is found on the surface of Titan? - Why is Uranus blue-green in color? - What is the name of Neptune's largest moon? - Galilean moons - The four largest moons of Jupiter discovered by Galileo. - gas giants - The four large outer planets composed of the gases hydrogen and helium. - Great Red Spot - An enormous, oval-shaped storm on Jupiter. - outer planets - The four large planets beyond the asteroid belt in our solar system. - planetary rings - Rings of dust and rock encircling a planet in a thin plane. Points to Consider - The inner planets are small and rocky, while the outer planets are large and gaseous. Why might the planets have formed into two groups like they are? - We have discussed the Sun, the planets, and the moons of the planets. What other objects can you think of that can be found in our solar system?
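As an aside, the orbital periods quoted throughout this chapter are consistent with Kepler's third law, which relates a planet's period to its average distance from the Sun. A quick check in Python (the semi-major axes in AU are standard published values, not figures taken from this chapter):

```python
# Rough check of the orbital periods quoted in this chapter using Kepler's
# third law: for a body orbiting the Sun, period (years) = a ** 1.5, where a
# is the semi-major axis in astronomical units (AU).
distances_au = {"Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2, "Neptune": 30.1}

for planet, a in distances_au.items():
    print(f"{planet}: about {a ** 1.5:.0f} Earth years")
# Jupiter: about 12, Saturn: about 30, Uranus: about 84, Neptune: about 165,
# matching the periods given in the text.
```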
NASA scientists are closer to solving the mystery of how Mars' moon Phobos formed. In late November and early December 2015, NASA's Mars Atmosphere and Volatile Evolution (MAVEN) mission made a series of close approaches to the Martian moon Phobos, collecting data from within 300 miles (500 kilometers) of the moon. Among the data returned were spectral images of Phobos in the ultraviolet. The images will allow MAVEN scientists to better assess the composition of this enigmatic object, whose origin is unknown. Comparing MAVEN's images and spectra of the surface of Phobos to similar data from asteroids and meteorites will help planetary scientists understand the moon's origin - whether it is a captured asteroid or was formed in orbit around Mars. The MAVEN data, when fully analyzed, will also help scientists look for organic molecules on the surface. Evidence for such molecules has been reported by previous measurements from the ultraviolet spectrograph on the Mars Express spacecraft. The observations were made by the Imaging Ultraviolet Spectrograph instrument aboard MAVEN. MAVEN's principal investigator is based at the University of Colorado's Laboratory for Atmospheric and Space Physics, and NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the MAVEN project. Partner institutions include Lockheed Martin, the University of California at Berkeley, and NASA's Jet Propulsion Laboratory.
Alphabetical order is a system whereby character strings are placed in order based on the position of the characters in the conventional ordering of an alphabet. It is one of the methods of collation. In mathematics, a lexicographical order is the generalization of the alphabetical order to other data types, such as sequences of numbers or other ordered mathematical objects. When applied to strings or sequences that may contain digits, numbers or more elaborate types of elements, in addition to alphabetical characters, the alphabetical order is generally called a lexicographical order. To determine which of two strings of characters comes first when arranging in alphabetical order, their first letters are compared. If they differ, then the string whose first letter comes earlier in the alphabet comes before the other string. If the first letters are the same, then the second letters are compared, and so on. If a position is reached where one string has no more letters to compare while the other does, then the first (shorter) string is deemed to come first in alphabetical order. Capital letters (upper case) are generally considered to be identical to their corresponding lower case letters for the purposes of alphabetical ordering, although conventions may be adopted to handle situations where two strings differ only in capitalization. Various conventions also exist for the handling of strings containing spaces, modified letters (such as those with diacritics), and non-letter characters such as marks of punctuation. The result of placing a set of words or strings in alphabetical order is that all of the strings beginning with the same letter are grouped together; within that grouping all words beginning with the same two-letter sequence are grouped together; and so on. The system thus tends to maximize the number of common initial letters between adjacent words. Alphabetical order was first used in the 1st millennium BCE by Northwest Semitic scribes using the abjad system. However, a range of other methods of classifying and ordering material, including geographical, chronological, hierarchical and by category, were preferred over alphabetical order for centuries. The Bible is dated to the 6th–7th centuries BCE. In the Book of Jeremiah, the prophet utilizes the Atbash substitution cipher, based on alphabetical order. Similarly, biblical authors used acrostics based on the (ordered) Hebrew alphabet. The first effective use of alphabetical order as a cataloging device among scholars may have been in ancient Alexandria, in the Great Library of Alexandria, which was founded around 300 BCE. The poet and scholar Callimachus, who worked there, is thought to have created the world's first library catalog, known as the Pinakes, with scrolls shelved in alphabetical order of the first letter of authors' names. In the 1st century BC, Roman writer Varro compiled alphabetic lists of authors and titles. In the 2nd century CE, Sextus Pompeius Festus wrote an encyclopedic epitome of the works of Verrius Flaccus, De verborum significatu, with entries in alphabetic order. In the 3rd century CE, Harpocration wrote a Homeric lexicon alphabetized by all letters. In the 10th century, the author of the Suda used alphabetic order with phonetic variations. Alphabetical order as an aid to consultation started to enter the mainstream of Western European intellectual life in the second half of the 12th century, when alphabetical tools were developed to help preachers analyse biblical vocabulary. 
This led to the compilation of alphabetical concordances of the Bible by the Dominican friars in Paris in the 13th century, under Hugh of Saint Cher. Older reference works such as St. Jerome's Interpretations of Hebrew Names were alphabetized for ease of consultation. The use of alphabetical order was initially resisted by scholars, who expected their students to master their area of study according to its own rational structures; its success was driven by such tools as Robert Kilwardby's index to the works of St. Augustine, which helped readers access the full original text instead of depending on the compilations of excerpts which had become prominent in 12th century scholasticism. The adoption of alphabetical order was part of the transition from the primacy of memory to that of written works. The idea of ordering information by the order of the alphabet also met resistance from the compilers of encyclopaedias in the 12th and 13th centuries, who were all devout churchmen. They preferred to organise their material theologically – in the order of God's creation, starting with Deus (meaning God). In 1604 Robert Cawdrey had to explain in Table Alphabeticall, the first monolingual English dictionary, "Nowe if the word, which thou art desirous to finde, begin with (a) then looke in the beginning of this Table, but if with (v) looke towards the end". Although as late as 1803 Samuel Taylor Coleridge condemned encyclopedias with "an arrangement determined by the accident of initial letters", many lists are today based on this principle. Arrangement in alphabetical order can be seen as a force for democratising access to information, as it does not require extensive prior knowledge to find what was needed. The standard order of the modern ISO basic Latin alphabet is: A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z. An example of straightforward alphabetical ordering follows: As, Aster, Astrolabe, Astronomy, Astrophysics, At, Ataman, Attack, Baa. The above words are ordered alphabetically. As comes before Aster because they begin with the same two letters and As has no more letters after that whereas Aster does. The next three words come after Aster because their fourth letter (the first one that differs) is r, which comes after e (the fourth letter of Aster) in the alphabet. Those words themselves are ordered based on their sixth letters (l, n and p respectively). Then comes At, which differs from the preceding words in the second letter (t comes after s). Ataman comes after At for the same reason that Aster came after As. Attack follows Ataman based on comparison of their third letters, and Baa comes after all of the others because it has a different first letter. When some of the strings being ordered consist of more than one word, i.e., they contain spaces or other separators such as hyphens, then two basic approaches may be taken. In the first approach, all strings are ordered initially according to their first word, as in the sequence: Oak Hill, Oak Ridge, Oakley Park, Oakley River, where all strings beginning with the separate word Oak precede all those beginning Oakley, because Oak precedes Oakley in alphabetical order. In the second approach, strings are alphabetized as if they had no spaces, giving the sequence: Oak Hill, Oakley Park, Oakley River, Oak Ridge, where Oak Ridge now comes after the Oakley strings, as it would if it were written "Oakridge". The second approach is the one usually taken in dictionaries, and it is thus often called dictionary order by publishers. The first approach has often been used in book indexes, although each publisher traditionally set its own standards for which approach to use therein; there was no ISO standard for book indexes (ISO 999) before 1975.
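Both approaches are easy to express as sort keys in code. The short Python sketch below reproduces the two orderings described above; the Oak and Oakley entries are used purely as illustrations:

```python
# The two multi-word orderings described above, expressed as sort keys.
entries = ["Oak Ridge", "Oakley Park", "Oak Hill", "Oakley River"]

# First approach (word by word): compare the entries as lists of words,
# so the separate word "Oak" sorts ahead of "Oakley".
print(sorted(entries, key=lambda s: s.lower().split()))
# ['Oak Hill', 'Oak Ridge', 'Oakley Park', 'Oakley River']

# Second approach (letter by letter, "dictionary order"): ignore the spaces
# entirely, so "Oak Ridge" is compared as if it were written "Oakridge".
print(sorted(entries, key=lambda s: s.lower().replace(" ", "")))
# ['Oak Hill', 'Oakley Park', 'Oakley River', 'Oak Ridge']
```

Real index-preparation software layers many more rules on top of this, but the choice of key function is the essential difference between the two conventions.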
In French, modified letters (such as those with diacritics) are treated the same as the base letter for alphabetical ordering purposes. For example, rôle comes between rock and rose, as if it were written role. However, languages that use such letters systematically generally have their own ordering rules. See Language-specific conventions below. In most cultures where family names are written after given names, it is still desired to sort lists of names (as in telephone directories) by family name first. In this case, names need to be reordered to be sorted correctly. For example, Juan Hernandes and Brian O'Leary should be sorted as "Hernandes, Juan" and "O'Leary, Brian" even if they are not written this way. Capturing this rule in a computer collation algorithm is complex, and simple attempts will fail. For example, unless the algorithm has at its disposal an extensive list of family names, there is no way to decide if "Gillian Lucille van der Waal" is "van der Waal, Gillian Lucille", "Waal, Gillian Lucille van der", or even "Lucille van der Waal, Gillian". Ordering by surname is frequently encountered in academic contexts. Within a single multi-author paper, ordering the authors alphabetically by surname, rather than by other methods such as reverse seniority or subjective degree of contribution to the paper, is seen as a way of "acknowledg[ing] similar contributions" or "avoid[ing] disharmony in collaborating groups". The practice in certain fields of ordering citations in bibliographies by the surnames of their authors has been found to create bias in favour of authors with surnames which appear earlier in the alphabet, while this effect does not appear in fields in which bibliographies are ordered chronologically. If a phrase begins with a very common word (such as "the", "a" or "an", called articles in grammar), that word is sometimes ignored or moved to the end of the phrase, but this is not always the case. For example, the book "The Shining" might be treated as "Shining", or "Shining, The" and therefore before the book title "Summer of Sam". However, it may also be treated as simply "The Shining" and after "Summer of Sam". Similarly, "A Wrinkle in Time" might be treated as "Wrinkle in Time", "Wrinkle in Time, A", or "A Wrinkle in Time". All three alphabetization methods are fairly easy to create by algorithm, but many programs rely on simple lexicographic ordering instead. See main article: Mac and Mc together. The prefixes M and Mc in Irish and Scottish surnames are abbreviations for Mac and are sometimes alphabetized as if the spelling is Mac in full. Thus McKinley might be listed before Mackintosh (as it would be if it had been spelled out as "MacKinley"). Since the advent of computer-sorted lists, this type of alphabetization is less frequently encountered, though it is still used in British telephone directories. The prefix St or St. is an abbreviation of "Saint", and is traditionally alphabetized as if the spelling is Saint in full. Thus in a gazetteer St John's might be listed before Salem (as it would be if it had been spelled out as "Saint John's"). Since the advent of computer-sorted lists, this type of alphabetization is less frequently encountered, though it is still sometimes used. Ligatures (two or more letters merged into one symbol) which are not considered distinct letters, such as Æ and Œ in English, are typically collated as if the letters were separate—"æther" and "aether" would be ordered the same relative to all other words.
This is true even when the ligature is not purely stylistic, such as in loanwords and brand names. Special rules may need to be adopted to sort strings which vary only by whether two letters are joined by a ligature. See main article: Lexicographical order. When some of the strings contain numerals (or other non-letter characters), various approaches are possible. Sometimes such characters are treated as if they came before or after all the letters of the alphabet. Another method is for numbers to be sorted alphabetically as they would be spelled: for example 1776 would be sorted as if spelled out "seventeen seventy-six", and 24 heures du Mans as if spelled "vingt-quatre..." (French for "twenty-four"). When numerals or other symbols are used as special graphical forms of letters, as 1337 for leet or the movie Seven (which was stylised as Se7en), they may be sorted as if they were those letters. Natural sort order orders strings alphabetically, except that multi-digit numbers are treated as a single character and ordered by the value of the number encoded by the digits. In the case of monarchs and popes, although their numbers are in Roman numerals and resemble letters, they are normally arranged in numerical order: so, for example, even though V comes after I, the Danish king Christian IX comes after his predecessor Christian VIII. Languages which use an extended Latin alphabet generally have their own conventions for treatment of the extra letters. Also in some languages certain digraphs are treated as single letters for collation purposes. For example, the 29-letter alphabet of Spanish treats ñ as a basic letter following n, and formerly treated the digraphs ch and ll as basic letters following c and l, respectively. Ch and ll are still considered letters, but are now alphabetized as two-letter combinations. (The new alphabetization rule was issued by the Royal Spanish Academy in 1994.) On the other hand, the digraph rr follows rqu as expected, and did so even before the 1994 alphabetization rule. In a few cases, such as Kiowa, the alphabet has been completely reordered. Alphabetization rules applied in various languages are listed below. A particular feature of Hungarian collation is that contracted forms of double di- and trigraphs (such as Hungarian: ggy from gy + gy or Hungarian: ddzs from dzs + dzs) should be collated as if they were written in full (independently of the fact of the contraction and the elements of the di- or trigraphs). For example, kaszinó should precede kassza (even though the fourth character z would normally come after s in the alphabet), because the fourth "character" (grapheme) of the word kassza is considered a second sz (decomposing ssz into sz + sz), which does follow i (in kaszinó). In Kiowa, for example, the reordered alphabet runs: A, AU, E, I, O, U, B, F, P, V, D, J, T, TH, G, C, K, Q, CH, X, S, Z, L, Y, W, H, M, N. Collation algorithms (in combination with sorting algorithms) are used in computer programming to place strings in alphabetical order. A standard example is the Unicode Collation Algorithm, which can be used to put strings containing any Unicode symbols into (an extension of) alphabetical order. It can be made to conform to most of the language-specific conventions described above by tailoring its default collation table. Several such tailorings are collected in Common Locale Data Repository.
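In practice, most general-purpose languages expose the platform's tailored collation rather than requiring these rules to be reimplemented by hand. A minimal Python sketch, assuming a French locale such as fr_FR.UTF-8 is installed on the system:

```python
import locale

words = ["rose", "rôle", "rock"]

# Plain string comparison uses raw code points, so the accented "ô" sorts
# after every unaccented letter and "rôle" lands last.
print(sorted(words))                         # ['rock', 'rose', 'rôle']

# With a French locale installed (e.g. "fr_FR.UTF-8" on Linux), strxfrm
# applies the platform's tailored collation, which ignores the accent at
# the primary level and should place "rôle" between "rock" and "rose",
# as described earlier in this article.
try:
    locale.setlocale(locale.LC_COLLATE, "fr_FR.UTF-8")
    print(sorted(words, key=locale.strxfrm))  # ['rock', 'rôle', 'rose']
except locale.Error:
    print("French locale not available on this system")
```

The exact ordering depends on the platform's collation tables, so results can differ slightly between systems; fully portable behaviour requires a library that implements the Unicode Collation Algorithm directly.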
The principle behind alphabetical ordering can still be applied in languages that do not strictly speaking use an alphabet – for example, they may be written using a syllabary or abugida – provided the symbols used have an established ordering. For logographic writing systems, such as Chinese hanzi or Japanese kanji, the method of radical-and-stroke sorting is frequently used as a way of defining an ordering on the symbols. Japanese sometimes uses pronunciation order, most commonly with the Gojūon order but sometimes with the older Iroha ordering. In mathematics, lexicographical order is a means of ordering sequences in a manner analogous to that used to produce alphabetical order. Some computer applications use a version of alphabetical order that can be achieved using a very simple algorithm, based purely on the ASCII or Unicode codes for characters. This may have non-standard effects such as placing all capital letters before lower-case ones. See ASCIIbetical order. A rhyming dictionary is based on sorting words in alphabetical order starting from the last to the first letter of the word.
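Two of the orderings mentioned above, ASCIIbetical order and rhyming-dictionary order, are simple enough to show directly; the word lists below are purely illustrative:

```python
words = ["banana", "Cherry", "apple"]

# "ASCIIbetical" order: comparing raw character codes puts every capital
# letter (codes 65-90) ahead of every lower-case letter (codes 97-122).
print(sorted(words))                        # ['Cherry', 'apple', 'banana']

# Conventional alphabetical order is recovered by case-folding the key.
print(sorted(words, key=str.lower))         # ['apple', 'banana', 'Cherry']

# A rhyming dictionary sorts on the reversed spelling, grouping words with
# similar endings (here the two "-ana" words end up next to each other).
rhymes = ["banana", "cabana", "savanna", "sonata"]
print(sorted(rhymes, key=lambda w: w[::-1]))
# ['cabana', 'banana', 'savanna', 'sonata']
```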
Geography: Solar System The solar system consists of the Sun, the eight planets, more than 130 satellites of the planets, and a large number of small bodies. The planets are divided into inner or terrestrial planets, which have higher densities (Mercury, Venus, Earth and Mars), and outer planets, which have lower densities (Jupiter, Saturn, Uranus and Neptune). 1. Mercury - Mercury is the closest planet to the Sun. It orbits in a highly elliptical orbit ranging from 46 million km (29 million miles) from the Sun out to 70 million km (43.5 million miles). - It takes about 88 Earth days to orbit the Sun but rotates on its axis once every 59 Earth days. Because of the slow rotation, a single day on Mercury (midday to midday) takes 176 Earth days (a quick check of this figure follows these notes). - It has no atmosphere and no satellite. - Its days are scorching hot and its nights are frigid. 2. Venus - Venus is the second closest planet to the Sun and orbits in an almost circular orbit at 108 million km. As it orbits, Venus comes closer to Earth than any other planet in the solar system and can come to within about 40 million km. - Venus takes about 225 Earth days to orbit the Sun and rotates at the incredibly slow rate of once every 243 days – and in a clockwise (retrograde) direction. - Venus is considered 'Earth's twin' because its size and shape are very similar to those of the Earth. - It is also called the 'morning star' or 'evening star'. - It is probably the hottest planet because its atmosphere contains 90-95% carbon dioxide. - It has no satellite. 3. The Earth - The third closest planet to the Sun is Earth, which is the largest and densest of the inner planets. Earth orbits in a reasonably circular orbit at 150 million km and is the first of the planets (moving outward) to have a moon. - Earth takes 365.25 days to orbit the Sun and rotates once every 23 hours, 56 minutes and 4 seconds. Because it also revolves around the Sun as it rotates, the length of a day on Earth (sunrise to sunrise) is slightly longer, at 24 hours. 4. Mars - Mars is the fourth closest planet to the Sun and orbits in a fairly eccentric orbit at around 230 (±20) million km. - Mars takes about 686 Earth days to orbit the Sun. It has a tilt (25.1 degrees) and rotational period (24 hours 37 minutes) which are both similar to the Earth's, with a day (sunrise to sunrise) lasting 24 hours, 39 mins. Because of the tilt it also has seasons in the same way as the Earth does. - Beneath its atmosphere, Mars is barren, covered with pink soil and boulders. Because of this it is known as the 'red planet'. - Phobos and Deimos are the two moons of Mars. 5. Jupiter - Jupiter is the fifth closest planet to the Sun and is the first of what are called the outer planets (being outside the asteroid belt). It is by far the largest planet in the solar system, having two and a half times as much mass as all the other planets put together and one thousandth the mass of the Sun. - Jupiter orbits the Sun once every 12 years. - It is presumed to have a rocky core surrounded by a sea of liquid metallic hydrogen which forms a ball 110,000 km in diameter. - Europa, Ganymede and Callisto are among the important moons of Jupiter. 6. Saturn - Saturn is the sixth closest planet to the Sun. It is the second largest planet in the solar system, having a radius 9 times that of Earth (57,000 km) and a mass 95 times that of Earth. - Saturn orbits the Sun once every 29 years (at about 1,400 million km) and is composed mainly of gas (96% hydrogen and 3% helium); it is presumed to have a rocky core surrounded by a sea of liquid metallic hydrogen which forms a ball some 56,000 km in diameter.
- The upper layers are thought to consist of liquid water, ammonium hydrosulfide, hydrogen and helium. - It has 21 known satellites. Among them Titan, Phoebe, Tethys and Mimas are important. 7. Uranus - It is the only planet that lies on its side. Hence, one pole or the other faces the Sun as it orbits. - It is one of the coldest planets, having an average temperature of −223°C. - Its atmosphere is made mainly of hydrogen. The landscape is barren and there are frozen methane clouds. - There are 9 dark compact rings around the planet and a corkscrew-shaped magnetic field. 8. Neptune - It is the most distant planet from the Sun. - There are five rings of Neptune. The outer ring seems to be studded with icy moonlets while the inner ring appears narrow and nearly solid.
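The figure of roughly 176 Earth days for a full day on Mercury, quoted in the notes above, can be checked with a short calculation. The sketch below uses slightly more precise rotation and orbital periods than the rounded values in the notes, and the formula assumes the planet rotates in the same direction that it orbits:

```python
# Quick check of the "176 Earth day" solar day on Mercury mentioned above.
# For a planet that rotates in the same direction it orbits, the solar day
# (noon to noon) follows from the sidereal rotation and the orbital period:
#   1 / solar_day = 1 / rotation_period - 1 / orbital_period
rotation = 58.65   # sidereal rotation in Earth days (the "59 days" above)
orbit = 87.97      # orbital period in Earth days (the "88 days" above)

solar_day = 1 / (1 / rotation - 1 / orbit)
print(f"Mercury solar day: about {solar_day:.0f} Earth days")   # about 176
```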
In recent years, many computer-aided methods have been developed to find the optimal design for a problem. These intelligent techniques have allowed engineers to create designs that go beyond what we could come up with manually. One of these methods is topology optimisation. Topology optimisation (TO) is a computer-based design method used for creating efficient designs today. Fields such as aerospace, civil engineering, biochemical and mechanical engineering use this method proactively to create innovative design solutions that outperform manual designs.

What Is Topology Optimisation?

Topology optimisation is a mathematical method used at the concept level of design development. The aim of the method is to spread the available material more effectively over the model. It takes into account the boundaries set by the designer, the applied loads, and space limitations to create a design. In simple terms, topology optimisation takes a 3D model, defines a design space, and then removes or redistributes material within it to make the design more efficient. While carrying out the material distribution, the objective function does not take aesthetics or ease of manufacturing into account. At the very least, the method needs us to provide the magnitude of the loading and the constraints within which the part should operate. Using this information, the optimisation algorithm creates a possible load path using the minimum amount of material.

Once a design is finalised, we use additive (and sometimes subtractive) manufacturing methods to produce the part. As the name suggests, in additive manufacturing (from here on referred to as AM), material is added bit by bit (e.g. 3D printing) until the final model is complete. AM is capable of creating complex shapes and structures that may be extremely difficult to create using other methods. This is why we prefer it for the complex products that emerge after optimisation. Sometimes, however, the design suggested by topology optimisation is too complex even for AM. In such situations, we make small changes to the design to improve its manufacturability.

How Does It Work?

Topology optimisation is carried out on an already existing model. We can choose to optimise an entire component or elements of it. This area of focus is known as the design space. Topology optimisation uses finite element analysis (FEA) to create a simple mesh of the design space. The mesh is analysed for stress distribution and strain energy, which tells the system how much load each section is carrying. While some sections will have an optimal material distribution, others could use trimming. Sections with low strain energy and low stress levels are marked using the finite element method. Once all the inefficient sections within the design space are identified, the objective function gradually removes material from them. During this trimming process, the system also checks how much the overall structure is affected by the removal. If the removal compromises structural integrity, the process stops and the material in that region is retained.

Before running the TO algorithm, we set the amount of material we intend to remove as a percentage of the total material. For example, we may set the target material reduction at 50%. The system removes the excess material in stages. At every stage, it checks the structure for stress levels, reiterating the element distribution until it reaches the target percentage.
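The staged removal process described above can be summarised in a few lines of code. The sketch below is a deliberately simplified illustration, not a real optimiser: the function standing in for the FEA step simply scores elements with a made-up strain-energy field, and the grid size, removal percentages and function names are all assumptions chosen for the example.

```python
# Minimal sketch of the staged material-removal loop described above.
# element_strain_energy() is a stand-in for the FEA step: it produces a
# made-up strain-energy field instead of solving the real equations.

import numpy as np

def element_strain_energy(density):
    """Stand-in for FEA: pretend elements near a load at the top-right corner work hardest."""
    ny, nx = density.shape
    y, x = np.mgrid[0:ny, 0:nx]
    energy = np.exp(-0.1 * (np.abs(x - (nx - 1)) + y))  # decays away from the load
    return energy * density                              # removed elements carry no load

def optimise(nx=60, ny=20, target_fraction=0.5, removal_per_stage=0.05):
    density = np.ones((ny, nx))          # 1 = material present, 0 = removed
    total = density.size
    while density.sum() / total > target_fraction:
        energy = element_strain_energy(density)
        candidates = np.argsort(energy, axis=None)   # lowest strain energy first
        n_remove = int(removal_per_stage * total)
        removed = 0
        for idx in candidates:
            if removed >= n_remove:
                break
            iy, ix = divmod(idx, nx)
            if density[iy, ix] == 1:     # skip elements that are already gone
                density[iy, ix] = 0      # trim a low strain-energy element
                removed += 1
        # A real implementation would re-run the FEA here and restore any
        # elements whose removal compromises structural integrity.
    return density

print(f"Remaining material: {optimise().mean():.0%}")   # stops at the 50% target
```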
Topology optimisation addresses several challenges at once. Let's see what advantages TO has to offer.

Create cost and weight effective solutions

The most attractive benefit of topology optimisation is its ability to remove unnecessary weight. A smaller, lighter part means that less raw material is needed. Extra weight also hurts energy efficiency, and heavier parts cost more to ship. All these advantages translate directly into cost savings, which matters in a competitive market. A great example is how General Electric used TO to reduce the weight of an engine bracket by 84%. This modification to a small part saved the airlines nearly $31 million by improving overall energy efficiency.

A faster design process

As design constraints and performance expectations are factored in at the early stages of conception, reaching the final design takes less time than it would without TO. A faster process also means a shorter time to market, which is especially important for new products in a competitive market.

TO also prevents undue material wastage. The algorithm is capable of creating sustainable building systems while still being rooted in sound structural logic. And, as mentioned earlier, topologically optimised products save fuel through weight reduction. As the demand for sustainable alternatives increases, more and more industries in the manufacturing sector are employing TO for its environmentally friendly nature.

There are some topology optimisation problems that we must know about in order to use it effectively. Let us see what they are.

The designs that TO comes up with can be difficult to manufacture. Even though AM is quite flexible in terms of what it can produce, it is still necessary to check for manufacturability before finalising the design. If we try to solve the topology optimisation problem thinking only about function, we may fall short when it comes to build quality and efficiency. It is worth noting that a few software vendors offer a feature called manufacturing constraints for TO. With these constraints, it is possible to create parts that can also be manufactured using conventional methods.

The cost of AM has come down lately, but it is still a notch above traditional production methods, so we need to consider the cost-to-benefit ratio on a case-by-case basis. For mass production of plastic parts, creating injection moulds is a possibility, so we can look beyond 3D printing. For making a few components every now and then, investing in AM equipment could prove too expensive; in such cases, it is often more beneficial to outsource production to a 3D printing service company.

Applications of Topology Optimisation

Many industries are now looking towards advanced design methods like topology optimisation and generative design. Although producing the parts may be costlier, there are important advantages on offer. The aerospace, medical and automotive industries are among those looking for assistance from these mathematical modelling methods.

Air travel is costly. Since the very beginning, attempts have been made to reduce the mass of an aircraft as far as possible without compromising its strength. Topology optimisation helps analyse aircraft components in detail to chop off unnecessary component mass.
This means an aircraft can carry more cargo (or use less fuel) on the same journey. The same benefits apply to satellites and rockets, where the method helps reduce support structures and create lighter parts that retain their original strength.

In the medical field, topology optimisation is used to create highly efficient implants and prosthetics. Using the algorithm, we can create parts that imitate the bone density and stiffness of the patient. It further takes into account the patient's anatomy, the designed part's activity level and the load applied. The optimisation improves the part's endurance limit, and where feasible the algorithm will replace solid structure with a lattice. This reduction in weight is a welcome benefit for implants and prosthetics.

Some automobile makers are now using TO for designing structural (chassis) as well as machinery components. The technology has helped reduce the mass of the body skeleton while maintaining (and in some cases even improving) the overall strength of the initial product. Now, in addition to composites and adhesives, steel is finding more applications due to the possibility of creating complex lattice structures using AM.

Topology optimisation can create complex structures that have the best stiffness-to-weight ratio while using minimum material. They may be manufactured using additive as well as subtractive manufacturing processes. AM gives the designer a large amount of freedom, but where flat products are concerned, advanced subtractive manufacturing methods can create parts with complex geometry just as effectively. Each method imposes different manufacturing constraints on the topology and geometry of the part and on how the production process goes about creating it. Some methods capable of manufacturing these innovative solutions are described below.

3D printing has been instrumental in bringing topology optimisation to the limelight. Without additive processes, it would be nearly impossible to create the complex structures designed by TO and by related techniques, especially generative design. 3D printing offers a fast and efficient way to create topologically optimised products with little to no wastage. It has many advantages and very few limitations; among the limitations is that only a handful of metals can be used with it, as the technology was originally developed for plastics.

As the use of TO became widespread, efforts were made to add features to computer programs that allow traditional production methods to create these components. Because TO tends to create hollow structures and support members of non-uniform thickness, it can be difficult to use CNC machining for intricate components. But for models whose entire surface is covered by the visibility map (Vmap) of the process, the part is manufacturable with CNC. Visibility is a concept used in production to describe the capacity of a particular process to create a certain part: a part is said to be visible if no points on its surface are hidden from the process directions (a toy version of this check is sketched below). Needless to say, a 5-axis CNC machine will be able to manufacture products of greater difficulty than a 3-axis CNC machine.

Laser machining can also work as a production process for TO products. It is capable of cutting intricate shapes with enviable accuracy, and laser cutting can be used on several different materials (metals, wood, acrylic, MDF), making it all the more useful when subtractive manufacturing is possible for a TO part.
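The visibility idea can be made concrete with a small sketch. The code below is only an illustrative toy, not any CAM vendor's algorithm: it treats a facet as reachable when its outward normal opposes one of the assumed tool approach directions, and it ignores the occlusion checks a real visibility map would need. All vectors, names and the example geometry are assumptions.

```python
# Toy visibility check: a facet counts as reachable from an approach direction
# when its outward normal opposes that direction (dot product test only; a real
# visibility map would also test for occlusion by the rest of the surface).

import numpy as np

def visible_fraction(normals, directions):
    """Fraction of facets reachable from at least one tool approach direction.

    normals:    (n, 3) outward unit normals of the surface facets
    directions: (m, 3) unit vectors pointing from the tool towards the part
    """
    reachable = (normals @ -directions.T) > 0      # (n, m) boolean matrix
    return reachable.any(axis=1).mean()

# Illustrative comparison of a 3-axis setup (one approach direction) with a
# 5-axis setup (several directions); the facets and vectors are assumptions.
facet_normals = np.array([[0, 0, 1], [0, 0, -1], [1, 0, 0], [-1, 0, 0]], float)
three_axis = np.array([[0, 0, -1]], float)                       # tool descends along -Z
five_axis = np.array([[0, 0, -1], [0, 0, 1], [1, 0, 0], [-1, 0, 0]], float)

print(visible_fraction(facet_normals, three_axis))  # 0.25 -> only the top facet
print(visible_fraction(facet_normals, five_axis))   # 1.0  -> fully "visible"
```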
Topology Optimisation Software

There are over 30 software products on the market for TO, each with its own trade-offs. Some programs are more popular than others for their holistic approach to the technique. Let's take a look at some of them.

Ansys creates design solutions for multiphysics engineering simulation. The Ansys Mechanical software comes preloaded with structural topology optimisation features. This program can analyse and optimise simple as well as complex design spaces and make corrections where needed. It comes with features such as:
- Support for multiple static load cases and modal analysis.
- Control options for setting minimum material thickness.
- Ability to work with planar and cyclic symmetry.
- Easy validation of results.

Another program on the market is easy to master and provides important features such as:
- Ability to generate mixed support structures with solid as well as lattice geometry. These files can be viewed in 3D and sent directly to a 3D printer for production.
- Ability to interact with the structure and assign new loads to it, besides running predetermined loads that can be imported or exported for analysis.
- Ability to reduce overhangs to encourage more self-supporting structures.

Solidworks added TO features in its 2018 update. It is a widely used computer program for CAD applications, and the introduction of TO has been quite smooth and efficient. Solidworks also uses the subtractive approach, where it chisels away material to reduce mass and improve stress distribution. The distinct features of the Solidworks TO module are as follows:
- Ability to bring optimised designs into a CAD environment using multiple methods.
- The availability of various partner products.

Advances in AM have enabled us to create extremely complex shapes with relative ease. To take full advantage of these leaps in production capability, we need technologies like topology optimisation. TO is great at optimising design solutions, but it can sometimes feel a bit out of control, especially if you are still learning the ropes. However, there are many factors that can be controlled to steer the model towards a more favourable outcome. Some of these controls include restricting member size in the design space, demanding symmetry about chosen planes, or requiring the final model to be extrudable (a sketch of how such controls might be grouped follows below). You can also adjust the material removal percentage to control how aggressively the algorithm optimises the part.
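As a closing illustration, the controls just mentioned are the kind of settings typically gathered together before a run. The snippet below is only a sketch of how they might be grouped; the field names are assumptions for illustration and do not correspond to the options of any particular TO package.

```python
# Hypothetical grouping of common topology optimisation controls; the names
# are illustrative assumptions, not the settings of any real software package.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TopologyOptimisationControls:
    target_material_fraction: float = 0.5          # keep 50% of the original material
    min_member_size_mm: Optional[float] = 3.0      # restrict member size in the design space
    symmetry_planes: Tuple[str, ...] = ("XZ",)     # demand symmetry about chosen planes
    extrude_direction: Optional[str] = None        # require an extrudable result, if set
    removal_per_stage: float = 0.05                # how aggressively material is removed

# Example: a run that keeps 40% of the material and enforces two symmetry planes.
controls = TopologyOptimisationControls(target_material_fraction=0.4,
                                        symmetry_planes=("XZ", "YZ"))
print(controls)
```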