New measurements by researchers at NASA reveal that an ocean once covered approximately twenty percent of the Martian surface, making Mars considerably wetter than many previous estimates and raising the odds for the ancient habitability of the Red Planet. A primitive ocean on Mars held more water than Earth’s Arctic Ocean, according to NASA scientists who, using ground-based observatories, measured water signatures in the Red Planet’s atmosphere. Scientists have been searching for answers to why this vast water supply left the surface. Details of the observations and computations appear in the journal Science.

“Our study provides a solid estimate of how much water Mars once had, by determining how much water was lost to space,” said Geronimo Villanueva, a scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and lead author of the new paper. “With this work, we can better understand the history of water on Mars.”

Perhaps about 4.3 billion years ago, Mars would have had enough water to cover its entire surface in a liquid layer about 450 feet (137 meters) deep. More likely, the water would have formed an ocean occupying almost half of Mars’ northern hemisphere, in some regions reaching depths greater than a mile (1.6 kilometers). The new estimate is based on detailed observations made at the European Southern Observatory’s Very Large Telescope in Chile, and the W.M. Keck Observatory and NASA Infrared Telescope Facility in Hawaii. With these powerful instruments, the researchers distinguished the chemical signatures of two slightly different forms of water in Mars’ atmosphere. One is the familiar H2O. The other is HDO, a naturally occurring variation in which one hydrogen is replaced by a heavier form, called deuterium. By measuring the ratio of HDO to H2O in water on Mars today and comparing it with the ratio in water trapped in a Mars meteorite dating from about 4.5 billion years ago, scientists can measure the subsequent atmospheric changes and determine how much water has escaped into space.

The team mapped H2O and HDO levels several times over nearly six years, which is equal to approximately three Martian years. The resulting data produced global snapshots of each compound, as well as their ratio. These first-of-their-kind maps reveal regional variations called microclimates and seasonal changes, even though modern Mars is essentially a desert. The research team was especially interested in regions near Mars’ north and south poles, because the polar ice caps hold the planet’s largest known water reservoir. The water stored there is thought to capture the evolution of Mars’ water from the wet Noachian period, which ended about 3.7 billion years ago, to the present. From the measurements of atmospheric water in the near-polar region, the researchers determined the enrichment, or relative amounts of the two types of water, in the planet’s permanent ice caps. The enrichment of the ice caps told them how much water Mars must have lost – a volume 6.5 times larger than the volume in the polar caps now. That means the volume of Mars’ early ocean must have been at least 20 million cubic kilometers (5 million cubic miles). Based on the surface of Mars today, a likely location for this water would be in the Northern Plains, considered a good candidate because of the low-lying ground. An ancient ocean there would have covered 19 percent of the planet’s surface. By comparison, the Atlantic Ocean occupies 17 percent of Earth’s surface.
“With Mars losing that much water, the planet was very likely wet for a longer period of time than was previously thought, suggesting it might have been habitable for longer,” said Michael Mumma, a senior scientist at Goddard and the second author on the paper. NASA is studying Mars with a host of spacecraft and rovers under the agency’s Mars Exploration Program, including the Opportunity and Curiosity rovers, Odyssey and Mars Reconnaissance Orbiter spacecraft, and the MAVEN orbiter, which arrived at the Red Planet in September 2014 to study the planet’s upper atmosphere. In 2016, a Mars lander mission called InSight will launch to take a first look into the deep interior of Mars. The agency also is participating in ESA’s (European Space Agency) 2016 and 2018 ExoMars missions, including providing telecommunication radios to ESA’s 2016 orbiter and a critical element of the astrobiology instrument on the 2018 ExoMars rover. NASA’s next rover, heading to Mars in 2020, will carry instruments to conduct unprecedented science and exploration technology investigations on the Red Planet. NASA’s Mars Exploration Program seeks to characterize and understand Mars as a dynamic system, including its present and past environment, climate cycles, geology, and biological potential. In parallel, NASA is developing the human spaceflight capabilities needed for future round-trip missions to Mars in the 2030s. Reference: “Strong water isotopic anomalies in the martian atmosphere: Probing current and ancient reservoirs” by G. L. Villanueva , M. J. Mumma, R. E. Novak, H. U. Käufl, P. Hartogh, T. Encrenaz, A. Tokunaga, A. Khayat and M. D. Smith, 5 March 2015, Science.
8.1 Stream Erosion and Deposition

As we discussed in Lab 5, flowing water is a very important mechanism for erosion, transportation and deposition of sediments. Water flow in a stream is primarily related to the stream’s gradient, but it is also controlled by the geometry of the channel. As shown in Figure 8.1.1, water flow velocity is decreased by friction along the stream bed, so it is slowest at the bottom and edges and fastest near the surface and in the middle. In fact, the velocity just below the surface is typically a little higher than right at the surface because of friction between the water and the air. On a curved section of a stream, flow is fastest on the outside and slowest on the inside. Other factors that affect stream-water velocity are the size of sediments on the stream bed—because large particles tend to slow the flow more than small ones—and the discharge, or volume of water passing a point in a unit of time (e.g., cubic metres (m3) per second). During a flood, the water level always rises, so there is more cross-sectional area for the water to flow in; however, as long as a river remains confined to its channel, the velocity of the water flow also increases.

Figure 8.1.2 shows the nature of sediment transportation in a stream. Large particles—the bed load—rest on the bottom and may only be moved during rapid flows under flood conditions. They can be moved by saltation (bouncing) and by traction (being pushed along by the force of the flow). Smaller particles may rest on the bottom some of the time, where they can be moved by saltation and traction, but they can also be held in suspension in the flowing water, especially at higher velocities. As you know from intuition and from experience, streams that flow fast tend to be turbulent (flow paths are chaotic and the water surface appears rough) and the water may be muddy, while those that flow more slowly tend to have laminar flow (straight-line flow and a smooth water surface) and clear water. Turbulent flow is more effective than laminar flow at keeping sediments in suspension. Stream water also has a dissolved load, which represents (on average) about 15% of the mass of material transported, and includes ions such as calcium (Ca2+) and chloride (Cl−) in solution. The solubility of these ions is not affected by flow velocity.

The faster the water is flowing, the larger the particles that can be kept in suspension and transported within the flowing water. However, as Swedish geographer Filip Hjulström discovered in the 1940s, the relationship between grain size and the likelihood of a grain being eroded, transported, or deposited is not as simple as one might imagine (Figure 8.1.3). Consider, for example, a 1 millimetre grain of sand. If it is resting on the bottom, it will remain there until the velocity is high enough to erode it, around 20 centimetres per second (cm/s). But once it is in suspension, that same 1 mm particle will remain in suspension as long as the velocity doesn’t drop below 10 cm/s. For a 10 mm gravel grain, the velocity is 105 cm/s to be eroded from the bed but only 80 cm/s to remain in suspension. On the other hand, a 0.01 mm silt particle only needs a velocity of 0.1 centimetres per second (cm/s) to remain in suspension, but requires 60 cm/s to be eroded. In other words, a tiny silt grain requires a greater velocity to be eroded than a grain of sand that is 100 times larger! For clay-sized particles, the discrepancy is even greater. In a stream, the most easily eroded particles are small sand grains between 0.2 mm and 0.5 mm.
Anything smaller or larger requires a higher water velocity to be eroded and entrained in the flow. The main reason for this is that small particles, and especially the tiny grains of clay, have a strong tendency to stick together, and so are difficult to erode from the stream bed. It is important to be aware that a stream can both erode and deposit sediments at the same time. At 100 cm/s, for example, silt, sand, and medium gravel will be eroded from the stream bed and transported in suspension, coarse gravel will be held in suspension, pebbles will be both transported and deposited, and cobbles and boulders will remain stationary on the stream bed.

Practice Exercise 8.1: Understanding the Hjulström-Sundborg diagram

Refer to the Hjulström-Sundborg diagram (Figure 8.1.3) to answer these questions.
- A fine sand grain (0.1 millimetres) is resting on the bottom of a stream bed.
  - What stream velocity will it take to get that sand grain into suspension?
  - Once the particle is in suspension, the velocity starts to drop. At what velocity will it finally come back to rest on the stream bed?
- A stream is flowing at 10 centimetres per second (which means it takes 10 seconds to go 1 metre, and that’s pretty slow).
  - What size of particles can be eroded at 10 centimetres per second?
  - What is the largest particle that, once already in suspension, will remain in suspension at 10 centimetres per second?
See Appendix 2 for Practice Exercise 8.1 answers.

A stream typically reaches its greatest velocity when it is close to flooding over its banks. This is known as the bank-full stage, as shown in Figure 8.1.4. As soon as the flooding stream overtops its banks and occupies the wide area of its flood plain, the water has a much larger area to flow through and the velocity drops significantly. At this point, sediment that was being carried by the high-velocity water is deposited near the edge of the channel, forming a natural bank, or levée.

Figure 8.1.1 image description: When a stream curves, the flow of water is fastest on the outside of the curve and slowest on the inside of the curve. When the stream is straight and a uniform depth, the stream flows fastest in the middle near the top and slowest along the edges. When the depth is not uniform, the stream flows fastest in the deeper section.

Figure 8.1.3 image description:
- Erosion velocity curve: A 0.001 millimetre particle would erode at a flow velocity of 500 centimetres per second or greater. As the particle size gets larger, the minimum flow velocity needed to erode the particle decreases, with the lowest flow velocity being 30 centimetres per second to erode a 0.5 millimetre particle. To erode particles larger than 0.5 millimetres, the minimum flow velocity rises again.
- Settling velocity curve: A 0.01 millimetre particle would be deposited with a flow velocity of 0.1 centimetre per second or less. As the flow velocity increases, only larger and larger particles will be deposited.
- Particles between these two curves (either moving too slowly or being too small to be eroded or deposited) will be transported in the stream.

Figures 8.1.1, 8.1.2, 8.1.3, 8.1.4: © Steven Earle. CC BY.

Gradient: the slope of a stream bed over a specific distance, typically expressed in m per km.
Channel: the physical boundaries of a stream (or a river), consisting of a bed and stream (or river) banks.
Discharge: the volume of water flow in a stream expressed in terms of volume per unit time (e.g., m3/s).
Bed load: the fraction of a stream’s sediment load that typically rests on the bottom and is moved by saltation and traction.
Levée: along a stream, the ridge that naturally forms along the edge of the channel during flood events.
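To make the Hjulström–Sundborg relationships described above more concrete, here is a minimal Python sketch (not part of the textbook) that classifies what happens to a grain of a given size at a given flow velocity. The anchor values are only the few points quoted in the text, not a digitized version of Figure 8.1.3, so results between them are rough interpolations and the function names are illustrative assumptions.

```python
import numpy as np

# (grain diameter in mm, erosion velocity in cm/s, settling velocity in cm/s)
# These are the anchor values quoted in the text, NOT the full curves.
ANCHORS = [
    (0.01, 60.0, 0.1),    # silt: hard to erode (cohesion), easy to keep suspended
    (1.0, 20.0, 10.0),    # sand
    (10.0, 105.0, 80.0),  # gravel
]

def thresholds(diameter_mm):
    """Interpolate erosion and settling velocities (cm/s) for a grain size,
    using log-log interpolation between the anchor points above."""
    d = np.log10([a[0] for a in ANCHORS])
    erode = np.log10([a[1] for a in ANCHORS])
    settle = np.log10([a[2] for a in ANCHORS])
    x = np.log10(diameter_mm)
    return 10 ** np.interp(x, d, erode), 10 ** np.interp(x, d, settle)

def grain_behaviour(diameter_mm, velocity_cm_s, in_suspension=False):
    """Classify a grain as eroded, kept in transport, or deposited."""
    v_erode, v_settle = thresholds(diameter_mm)
    if velocity_cm_s >= v_erode:
        return "eroded and transported"
    if in_suspension and velocity_cm_s >= v_settle:
        return "stays in transport (suspension)"
    return "deposited (or remains on the bed)"

if __name__ == "__main__":
    # A 1 mm sand grain at 15 cm/s: too slow to erode it from the bed,
    # but fast enough to keep it moving if it is already in suspension.
    print(grain_behaviour(1.0, 15.0, in_suspension=False))
    print(grain_behaviour(1.0, 15.0, in_suspension=True))
```

This mirrors the key point of the diagram: erosion and deposition thresholds differ, so the same grain can behave differently depending on whether it starts on the bed or already in suspension.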
The vast majority of the minerals that make up the rocks of Earth's crust are silicate minerals. These include minerals such as quartz, feldspar, mica, amphibole, pyroxene, olivine, and a variety of clay minerals. The building block of all of these minerals is the silica tetrahedron, a combination of four oxygen atoms and one silicon atom. As we've seen, it's called a tetrahedron because planes drawn through the oxygen atoms form a shape with 4 surfaces (Figure 2.2.4). Since the silicon ion has a charge of +4 and each of the four oxygen ions has a charge of −2, the silica tetrahedron has a net charge of −4. In silicate minerals, these tetrahedra are arranged and linked together in a variety of ways, from single units to complex frameworks (Table 2.6). The simplest silicate structure, that of the mineral olivine, is composed of isolated tetrahedra bonded to iron and/or magnesium ions. In olivine, the −4 charge of each silica tetrahedron is balanced by two divalent (i.e., +2) iron or magnesium cations. Olivine can be either Mg2SiO4 or Fe2SiO4, or some combination of the two (Mg,Fe)2SiO4. The divalent cations of magnesium and iron are quite close in radius (0.73 versus 0.62 angstroms). Because of this size similarity, and because they are both divalent cations (both can have a charge of +2), iron and magnesium can readily substitute for each other in olivine and in many other minerals.

|Tetrahedron Configuration Name|Example Minerals|
|Isolated (nesosilicates)|Olivine, garnet, zircon, kyanite|
|Pairs (sorosilicates)|Epidote, zoisite|
|Single chains (inosilicates)|Pyroxenes, wollastonite|
|Double chains (inosilicates)|Amphiboles|
|Sheets (phyllosilicates)|Micas, clay minerals, serpentine, chlorite|
|3-dimensional framework (tectosilicates)|Feldspars, quartz, zeolite|

Exercise 2.3

Cut around the outside of the shape (solid lines and dotted lines), and then fold along the solid lines to form a tetrahedron. If you have glue or tape, secure the tabs to the tetrahedron to hold it together. If you don’t have glue or tape, make a slice along the thin grey line and insert the pointed tab into the slit. If you are doing this in a classroom, try joining your tetrahedron with others into pairs, rings, single and double chains, sheets, and even three-dimensional frameworks. See Appendix 3 for Exercise 2.3 answers.

In olivine, unlike most other silicate minerals, the silica tetrahedra are not bonded to each other. Instead they are bonded to the iron and/or magnesium ions, in the configuration shown on Figure 2.4.1. As already noted, the +2 ions of iron and magnesium are similar in size (although not quite the same). This allows them to substitute for each other in some silicate minerals. In fact, the ions that are common in silicate minerals have a wide range of sizes, as depicted in Figure 2.4.2. All of the ions shown are cations, except for oxygen. Note that iron can exist as both a +2 ion (if it loses two electrons during ionization) or a +3 ion (if it loses three). Fe2+ is known as ferrous iron. Fe3+ is known as ferric iron. Ionic radii are critical to the composition of silicate minerals, so we’ll be referring to this diagram again. The structure of the single-chain silicate pyroxene is shown on Figures 2.4.3 and 2.4.4.
In pyroxene, silica tetrahedra are linked together in a single chain, where one oxygen ion from each tetrahedron is shared with the adjacent tetrahedron, hence there are fewer oxygens in the structure. The result is that the oxygen-to-silicon ratio is lower than in olivine (3:1 instead of 4:1), and the net charge per silicon atom is less (−2 instead of −4). Therefore, fewer cations are necessary to balance that charge. Pyroxene compositions are of the type MgSiO3, FeSiO3, and CaSiO3, or some combination of these. Pyroxene can also be written as (Mg,Fe,Ca)SiO3, where the elements in the brackets can be present in any proportion. In other words, pyroxene has one cation for each silica tetrahedron (e.g., MgSiO3) while olivine has two (e.g., Mg2SiO4). Because each silicon ion is +4 and each oxygen ion is −2, the three oxygens (−6) and the one silicon (+4) give a net charge of −2 for the single chain of silica tetrahedra. In pyroxene, the one divalent cation (+2) per tetrahedron balances that −2 charge. In olivine, it takes two divalent cations to balance the −4 charge of an isolated tetrahedron. The structure of pyroxene is more “permissive” than that of olivine—meaning that cations with a wider range of ionic radii can fit into it. That’s why pyroxenes can have iron (radius 0.63 Å) or magnesium (radius 0.72 Å) or calcium (radius 1.00 Å) cations (see Figure 2.4.2 above).

Exercise 2.4 Oxygen deprivation

The diagram below represents a single chain in a silicate mineral. Count the number of tetrahedra versus the number of oxygen ions (yellow spheres). Each tetrahedron has one silicon ion so this should give you the ratio of Si to O in single-chain silicates (e.g., pyroxene). The diagram below represents a double chain in a silicate mineral. Again, count the number of tetrahedra versus the number of oxygen ions. This should give you the ratio of Si to O in double-chain silicates (e.g., amphibole). See Appendix 3 for Exercise 2.4 answers.

In amphibole structures, the silica tetrahedra are linked in a double chain that has an oxygen-to-silicon ratio lower than that of pyroxene, and hence still fewer cations are necessary to balance the charge. Amphibole is even more permissive than pyroxene and its compositions can be very complex. Hornblende, for example, can include sodium, potassium, calcium, magnesium, iron, aluminum, silicon, oxygen, fluorine, and the hydroxyl ion (OH−). In sheet silicate structures, the silica tetrahedra are arranged in continuous sheets, where each tetrahedron shares three oxygen anions with adjacent tetrahedra. There is even more sharing of oxygens between adjacent tetrahedra and hence fewer cations are needed to balance the charge of the silica-tetrahedra structure in sheet silicate minerals. Bonding between sheets is relatively weak, and this accounts for the well-developed one-directional cleavage in micas (Figure 2.4.5). Biotite mica can have iron and/or magnesium in it, and that makes it a ferromagnesian silicate mineral (like olivine, pyroxene, and amphibole). Chlorite is another similar mineral that commonly includes magnesium. In muscovite mica, the only cations present are aluminum and potassium; hence it is a non-ferromagnesian silicate mineral. Apart from muscovite, biotite, and chlorite, there are many other sheet silicates (a.k.a. phyllosilicates), many of which exist as clay-sized fragments (i.e., less than 0.004 millimetres). These include the clay minerals kaolinite, illite, and smectite, and although they are difficult to study because of their very small size, they are extremely important components of rocks and especially of soils.
All of the sheet silicate minerals also have water molecules within their structure. Silica tetrahedra are bonded in three-dimensional frameworks in both the feldspars and quartz. These are non-ferromagnesian minerals—they don't contain any iron or magnesium. In addition to silica tetrahedra, feldspars include the cations aluminum, potassium, sodium, and calcium in various combinations. Quartz contains only silica tetrahedra. The three main feldspar minerals are potassium feldspar (a.k.a. K-feldspar or K-spar) and two types of plagioclase feldspar: albite (sodium only) and anorthite (calcium only). As is the case for iron and magnesium in olivine, there is a continuous range of compositions (solid solution series) between albite and anorthite in plagioclase. Because the calcium and sodium ions are almost identical in size (1.00 Å versus 0.99 Å), any intermediate compositions between CaAl2Si2O8 and NaAlSi3O8 can exist (Figure 2.4.6). This is a little bit surprising because, although they are very similar in size, calcium and sodium ions don’t have the same charge (Ca2+ versus Na+). This problem is accounted for by the corresponding substitution of Al3+ for Si4+. Therefore, albite is NaAlSi3O8 (1 Al and 3 Si) while anorthite is CaAl2Si2O8 (2 Al and 2 Si), and plagioclase feldspars of intermediate composition have intermediate proportions of Al and Si. This is called a “coupled substitution.” The intermediate-composition plagioclase feldspars are oligoclase (10% to 30% Ca), andesine (30% to 50% Ca), labradorite (50% to 70% Ca), and bytownite (70% to 90% Ca). K-feldspar (KAlSi3O8) has a slightly different structure than that of plagioclase, owing to the larger size of the potassium ion (1.37 Å), and because of this large size, potassium and sodium do not readily substitute for each other, except at high temperatures. These high-temperature feldspars are likely to be found only in volcanic rocks, because intrusive igneous rocks cool slowly enough to low temperatures for the feldspars to change into one of the lower-temperature forms.

In quartz (SiO2), the silica tetrahedra are bonded in a “perfect” three-dimensional framework. Each tetrahedron is bonded to four other tetrahedra (with an oxygen shared at every corner of each tetrahedron), and as a result, the ratio of silicon to oxygen is 1:2. Since the one silicon cation has a +4 charge and the two oxygen anions each have a −2 charge, the charge is balanced. There is no need for aluminum or any of the other cations such as sodium or potassium. The hardness and lack of cleavage in quartz result from the strong covalent/ionic bonds characteristic of the silica tetrahedron.

Exercise 2.5 Ferromagnesian silicates?

Silicate minerals are classified as being either ferromagnesian or non-ferromagnesian depending on whether or not they have iron (Fe) and/or magnesium (Mg) in their formula. A number of minerals and their formulas are listed below. For each one, indicate whether or not it is a ferromagnesian silicate. See Appendix 3 for Exercise 2.5 answers. *Some of the formulas, especially the more complicated ones, have been simplified.
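As a small illustration of the logic behind Exercise 2.5, here is a hedged Python sketch that checks whether a mineral formula contains Fe and/or Mg. The mineral list and the (simplified) idealized formulas are assumptions for illustration; the exercise's own list may differ, and real formulas can be more complex.

```python
import re

# Simplified, idealized formulas (illustrative only; some are simplified, as in the text).
MINERALS = {
    "olivine": "(Mg,Fe)2SiO4",
    "pyroxene": "(Mg,Fe,Ca)SiO3",
    "biotite": "K(Mg,Fe)3(AlSi3O10)(OH)2",
    "muscovite": "KAl2(AlSi3O10)(OH)2",
    "albite (plagioclase)": "NaAlSi3O8",
    "anorthite (plagioclase)": "CaAl2Si2O8",
    "K-feldspar": "KAlSi3O8",
    "quartz": "SiO2",
}

def is_ferromagnesian(formula: str) -> bool:
    """A silicate is ferromagnesian if its formula contains Fe and/or Mg.
    Element symbols are matched as a capital letter optionally followed by a
    lowercase letter, so Fe is not confused with F, nor Mg with M."""
    elements = set(re.findall(r"[A-Z][a-z]?", formula))
    return bool(elements & {"Fe", "Mg"})

for name, formula in MINERALS.items():
    tag = "ferromagnesian" if is_ferromagnesian(formula) else "non-ferromagnesian"
    print(f"{name:25s} {formula:28s} {tag}")
```

Running this labels olivine, pyroxene, and biotite as ferromagnesian and the feldspars, muscovite, and quartz as non-ferromagnesian, which matches the classification described in the text.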
Coordinate plane – BrainPOP: In this educational animated movie about math, learn about data, the x-axis, the y-axis, coordinates, quadrants, points and lines.
Coordinate plane graph paper worksheets – Math-Aids: This graphing worksheet will produce a single or four-quadrant coordinate grid for the students to use.
The coordinate plane – Math Open Reference: The coordinate plane is a two-dimensional surface on which we can plot points, lines and curves. It has two scales, called the x-axis and y-axis, at right angles to each other.
Cartesian coordinate system – Wikipedia: A Cartesian coordinate system specifies each point uniquely in a plane by a pair of numerical coordinates (signed distances). https://en.wikipedia.org/wiki/Cartesian_coordinate_system
Stock Shelves – coordinate plane game: Stock Shelves is a fun way for students to reinforce skills on the coordinate plane. The game requires students to understand negative numbers. http://mrnussbaum.com/stockshelves/
Sixth grade interactive math skills – coordinate plane: Interactive math skills resources for sixth grade math concepts: coordinate plane, ordered pairs, x-axis, y-axis. http://www.internet4classrooms.com/skill_builders/coordinate_plane_math_sixth_6th_grade.htm
In this subtraction equations activity, students solve thirty subtraction equations for mastery and double-check their answers.

Two-Variable Relationships in Real-World Problems — When tending to CCSS.Math.Content.6.EE.9, this complete resource fits the bill. Learners work in groups or pairs to assign two variables to several real-world problems. They write equations, complete tables of solutions, and then… (5th–7th Math, CCSS: Designed)

Deciphering Word Problems in Order to Write Equations — Help young mathematicians crack the code of word problems with this three-lesson series on problem solving. Walking students step-by-step through the process of identifying key information, creating algebraic equations, and finally… (5th–8th Math, CCSS: Adaptable)

Study Jams! Addition & Subtraction Equations — Zoe is decorating a cake for her grandpa's birthday and needs to know how many more candles to add. Follow along as she explains how to set up and solve algebraic equations in this step-by-step presentation. Equations with negative… (5th–7th Math, CCSS: Adaptable)

Study Jams! Relate Addition & Subtraction — Understanding the inverse relationship between addition and subtraction is essential for developing fluency in young mathematicians. Zoe and RJ explain how three numbers can form fact families that make two addition and two subtraction… (1st–4th Math, CCSS: Adaptable)
Weather radar, also called weather surveillance radar (WSR) and Doppler weather radar, is a type of radar used to locate precipitation, calculate its motion, and estimate its type (rain, snow, hail, etc.). Modern weather radars are mostly pulse-Doppler radars, capable of detecting the motion of rain droplets in addition to the intensity of the precipitation. Both types of data can be analyzed to determine the structure of storms and their potential to cause severe weather.

During World War II, radar operators discovered that weather was causing echoes on their screens, masking potential enemy targets. Techniques were developed to filter them out, but scientists began to study the phenomenon. Soon after the war, surplus radars were used to detect precipitation. Since then, weather radar has evolved on its own and is now used by national weather services, research departments in universities, and in television newscasts. Raw images are routinely used, and specialized software can take radar data to make short-term forecasts of future positions and intensities of rain, snow, hail, and other weather phenomena. Radar output is even incorporated into numerical weather prediction models to improve analyses and forecasts.

History

During World War II, military radar operators noticed noise in returned echoes due to rain, snow, and sleet. After the war, military scientists returned to civilian life or continued in the Armed Forces and pursued their work in developing a use for those echoes. In the United States, David Atlas, at first working for the Air Force and later for MIT, developed the first operational weather radars. In Canada, J.S. Marshall and R.H. Douglas formed the "Stormy Weather Group" in Montreal. Marshall and his doctoral student Walter Palmer are well known for their work on the drop size distribution in mid-latitude rain that led to understanding of the Z-R relation, which correlates a given radar reflectivity with the rate at which rainwater is falling. In the United Kingdom, research continued to study the radar echo patterns and weather elements such as stratiform rain and convective clouds, and experiments were done to evaluate the potential of different wavelengths from 1 to 10 centimeters. By 1950 the UK company EKCO was demonstrating its airborne 'cloud and collision warning search radar equipment'.

Between 1950 and 1980, reflectivity radars, which measure the position and intensity of precipitation, were incorporated by weather services around the world. The early meteorologists had to watch a cathode ray tube. During the 1970s, radars began to be standardized and organized into networks. The first devices to capture radar images were developed. The number of scanned angles was increased to get a three-dimensional view of the precipitation, so that horizontal cross-sections (CAPPI) and vertical cross-sections could be performed. Studies of the organization of thunderstorms were then possible for the Alberta Hail Project in Canada and the National Severe Storms Laboratory (NSSL) in the US in particular. The NSSL, created in 1964, began experimentation on dual polarization signals and on Doppler effect uses. In May 1973, a tornado devastated Union City, Oklahoma, just west of Oklahoma City.
For the first time, a Dopplerized 10 cm wavelength radar from NSSL documented the entire life cycle of the tornado. The researchers discovered a mesoscale rotation in the cloud aloft before the tornado touched the ground – the tornadic vortex signature. NSSL's research helped convince the National Weather Service that Doppler radar was a crucial forecasting tool. The Super Outbreak of tornadoes on 3–4 April 1974 and their devastating destruction might have helped to get funding for further developments.

Between 1980 and 2000, weather radar networks became the norm in North America, Europe, Japan and other developed countries. Conventional radars were replaced by Doppler radars, which in addition to position and intensity could track the relative velocity of the particles in the air. In the United States, the construction of a network consisting of 10 cm radars, called NEXRAD or WSR-88D (Weather Surveillance Radar 1988 Doppler), was started in 1988 following NSSL's research. In Canada, Environment Canada constructed the King City station, with a 5 cm research Doppler radar, by 1985; McGill University dopplerized its radar (J. S. Marshall Radar Observatory) in 1993. This led to a complete Canadian Doppler network between 1998 and 2004. France and other European countries had switched to Doppler networks by the early 2000s. Meanwhile, rapid advances in computer technology led to algorithms to detect signs of severe weather, and many applications for media outlets and researchers.

After 2000, research on dual polarization technology moved into operational use, increasing the amount of information available on precipitation type (e.g. rain vs. snow). "Dual polarization" means that microwave radiation which is polarized both horizontally and vertically (with respect to the ground) is emitted. Wide-scale deployment was done by the end of the decade or the beginning of the next in some countries such as the United States, France, and Canada. In April 2013, all National Weather Service NEXRADs were completely dual-polarized. Since 2003, the U.S. National Oceanic and Atmospheric Administration has been experimenting with phased-array radar as a replacement for the conventional parabolic antenna to provide more time resolution in atmospheric sounding. This could be significant with severe thunderstorms, as their evolution can be better evaluated with more timely data. Also in 2003, the National Science Foundation established the Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), a multidisciplinary, multi-university collaboration of engineers, computer scientists, meteorologists, and sociologists to conduct fundamental research, develop enabling technology, and deploy prototype engineering systems designed to augment existing radar systems by sampling the generally undersampled lower troposphere with inexpensive, fast scanning, dual polarization, mechanically scanned and phased array radars.

How a weather radar works

Sending radar pulses

Weather radars send directional pulses of microwave radiation, on the order of a microsecond long, using a cavity magnetron or klystron tube connected by a waveguide to a parabolic antenna. The wavelengths of 1 – 10 cm are approximately ten times the diameter of the droplets or ice particles of interest, because Rayleigh scattering occurs at these frequencies. This means that part of the energy of each pulse will bounce off these small particles, back in the direction of the radar station.
Shorter wavelengths are useful for smaller particles, but the signal is more quickly attenuated. Thus 10 cm (S-band) radar is preferred but is more expensive than a 5 cm C-band system. 3 cm X-band radar is used only for short-range units, and 1 cm Ka-band weather radar is used only for research on small-particle phenomena such as drizzle and fog. Radar pulses spread out as they move away from the radar station. Thus the volume of air that a radar pulse is traversing is larger for areas farther away from the station, and smaller for nearby areas, decreasing resolution at far distances. At the end of a 150 – 200 km sounding range, the volume of air scanned by a single pulse might be on the order of a cubic kilometer. This is called the pulse volume. The volume of air that a given pulse takes up at any point in time may be approximated by the formula

v = π (r θ / 2)² h

where v is the volume enclosed by the pulse, h is the pulse width (in e.g. meters, calculated from the duration in seconds of the pulse times the speed of light), r is the distance from the radar that the pulse has already traveled (in e.g. meters), and θ is the beam width (in radians). This formula assumes the beam is symmetrically circular, "r" is much greater than "h" so "r" taken at the beginning or at the end of the pulse is almost the same, and the shape of the volume is a cone frustum of depth "h".

Listening for return signals

Between each pulse, the radar station serves as a receiver as it listens for return signals from particles in the air. The duration of the "listen" cycle is on the order of a millisecond, which is a thousand times longer than the pulse duration. The length of this phase is determined by the need for the microwave radiation (which travels at the speed of light) to propagate from the detector to the weather target and back again, a distance which could be several hundred kilometers. The horizontal distance from station to target is calculated simply from the amount of time that elapses from the initiation of the pulse to the detection of the return signal. The time is converted into distance by multiplying by the speed of light in air and dividing by two to account for the round trip:

distance = c Δt / 2

where c is the speed of light in air and Δt is the round-trip time. If pulses are emitted too frequently, the returns from one pulse will be confused with the returns from previous pulses, resulting in incorrect distance calculations.

Assuming the Earth is round, the radar beam in a vacuum would rise away from the surface according to the reverse curvature of the Earth. However, the atmosphere has a refractive index that diminishes with height, due to its diminishing density. This bends the radar beam slightly toward the ground, and with a standard atmosphere this is equivalent to considering that the curvature of the beam is 4/3 the actual curvature of the Earth. Depending on the elevation angle of the antenna and other considerations, the following formula may be used to calculate the target's height above ground:

H = sqrt( r² + (k_e a_e)² + 2 r k_e a_e sin θ_e ) − k_e a_e + h_a

where:
- r = distance radar–target,
- k_e = 4/3,
- a_e = Earth radius,
- θ_e = elevation angle above the radar horizon,
- h_a = height of the feedhorn above ground.

A weather radar network uses a series of typical angles that will be set according to the needs. After each scanning rotation, the antenna elevation is changed for the next sounding. This scenario will be repeated on many angles to scan all the volume of air around the radar within the maximum range. Usually, this scanning strategy is completed within 5 to 10 minutes to have data within 15 km above ground and 250 km distance of the radar. For instance in Canada, the 5 cm weather radars use angles ranging from 0.3 to 25 degrees. The image to the right shows the volume scanned when multiple angles are used. Due to the Earth's curvature and the change of index of refraction with height, the radar cannot "see" below the height above ground of the minimal angle (shown in green) or closer to the radar than the maximal one (shown as a red cone in the center).
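The range, beam-height, and pulse-volume relations above can be illustrated with a short Python sketch. This is a minimal example under the assumptions stated in the text (standard 4/3-Earth-radius atmosphere, circular beam); the numerical choices such as feedhorn height, beamwidth, and pulse length are illustrative, not those of any particular radar.

```python
import math

C = 2.998e8          # speed of light in air, m/s (approximate)
KE = 4.0 / 3.0       # effective Earth-radius factor for a standard atmosphere
AE = 6.371e6         # Earth radius, m

def echo_range(delay_s: float) -> float:
    """Slant range to a target from the pulse round-trip delay."""
    return C * delay_s / 2.0

def beam_height(r_m: float, elev_deg: float, feedhorn_m: float = 10.0) -> float:
    """Height of the beam centre above ground at slant range r,
    using the 4/3-Earth-radius model quoted in the text."""
    theta = math.radians(elev_deg)
    kea = KE * AE
    return math.sqrt(r_m**2 + kea**2 + 2.0 * r_m * kea * math.sin(theta)) - kea + feedhorn_m

def pulse_volume(r_m: float, pulse_len_m: float, beamwidth_deg: float) -> float:
    """Approximate volume sampled by one pulse at range r (cone-frustum formula)."""
    theta = math.radians(beamwidth_deg)
    return math.pi * (r_m * theta / 2.0) ** 2 * pulse_len_m

if __name__ == "__main__":
    # An echo returning 1 ms after the pulse left the antenna is ~150 km away.
    r = echo_range(1.0e-3)
    print(f"range: {r / 1000:.0f} km")
    # At 150 km and 0.5 degree elevation, the beam centre is roughly 2.6 km up,
    # which is why distant low-level precipitation is easily overshot.
    print(f"beam height: {beam_height(r, 0.5) / 1000:.2f} km")
    # A 1 degree beam with a 300 m pulse at 150 km samples on the order of a km^3.
    print(f"pulse volume: {pulse_volume(r, 300.0, 1.0) / 1e9:.2f} km^3")
```

The last line reproduces the order-of-magnitude claim in the text that a single pulse at 150–200 km samples roughly a cubic kilometer of air.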
Calibrating intensity of return

The power returned by a single target is given by the radar equation:

P_r = P_t G² λ² σ / ((4π)³ R⁴)

where P_r is the received power, P_t is the transmitted power, G is the gain of the transmitting/receiving antenna, λ is the radar wavelength, σ is the radar cross section of the target and R is the distance from transmitter to target. In this case, we have to add the cross sections of all the targets in the pulse volume:

σ̄ = V η, with V = (c τ / 2) (π R² θ² / 4)

where c is the light speed, τ is the temporal duration of a pulse, θ is the beam width in radians, and η is the sum of the targets' cross sections per unit volume. Combining the two equations:

P_r = P_t G² λ² c τ θ² η / (512 π² R²)

which leads to P_r ∝ η / R². Notice that the return now varies inversely to R² instead of R⁴. In order to compare the data coming from different distances from the radar, one has to normalize them with this ratio.

Reflectivity (in decibel or dBZ)

Return echoes from targets ("reflectivity") are analyzed for their intensities to establish the precipitation rate in the scanned volume. The wavelengths used (1–10 cm) ensure that this return is proportional to the rate because they are within the validity of Rayleigh scattering, which states that the targets must be much smaller than the wavelength of the scanning wave (by a factor of 10). Reflectivity perceived by the radar (Ze) varies by the sixth power of the rain droplets' diameter (D), the square of the dielectric constant (K) of the targets and the drop size distribution (e.g. N[D] of Marshall-Palmer) of the drops. This gives a truncated Gamma function, of the form:

Ze = ∫ |K|² N₀ e^(−ΛD) D⁶ dD   (integrated from D = 0 to Dmax)

Precipitation rate (R), on the other hand, is equal to the number of particles, their volume and their fall speed (v[D]) as:

R = ∫ N₀ e^(−ΛD) (π D³ / 6) v(D) dD

So Ze and R have similar functions that can be resolved, giving a relation between the two of the form:

Z = a R^b

- As the antenna scans the atmosphere, on every angle of azimuth it obtains a certain strength of return from each type of target encountered. Reflectivity is then averaged for that target to have a better data set.
- Since variation in diameter and dielectric constant of the targets can lead to large variability in power return to the radar, reflectivity is expressed in dBZ (10 times the logarithm of the ratio of the echo to a standard 1 mm diameter drop filling the same scanned volume).

How to read reflectivity on a radar display

Radar returns are usually described by colour or level. The colours in a radar image normally range from blue or green for weak returns, to red or magenta for very strong returns. The numbers in a verbal report increase with the severity of the returns. For example, the U.S. National Doppler Radar sites use the following scale for different levels of reflectivity:
- magenta: 65 dBZ (extremely heavy precipitation, possible hail)
- red: 52 dBZ
- yellow: 36 dBZ
- green: 20 dBZ (light precipitation)

Strong returns (red or magenta) may indicate not only heavy rain but also thunderstorms, hail, strong winds, or tornadoes, but they need to be interpreted carefully, for reasons described below. When describing weather radar returns, pilots, dispatchers, and air traffic controllers will typically refer to three return levels:
- level 1 corresponds to a green radar return, indicating usually light precipitation and little to no turbulence, leading to a possibility of reduced visibility.
- level 2 corresponds to a yellow radar return, indicating moderate precipitation, leading to the possibility of very low visibility, moderate turbulence and an uncomfortable ride for aircraft passengers.
- level 3 corresponds to a red radar return, indicating heavy precipitation, leading to the possibility of thunderstorms and severe turbulence and structural damage to the aircraft.

Aircraft will try to avoid level 2 returns when possible, and will always avoid level 3 unless they are specially designed research aircraft.

Some displays provided by commercial weather sites, like The Weather Channel, show precipitation types during the winter months: rain, snow, and mixed precipitation (sleet and freezing rain). This is not an analysis of the radar data itself but a post-treatment done with other data sources, the primary being surface reports (METAR). Over the area covered by radar echoes, a program assigns a precipitation type according to the surface temperature and dew point reported at the underlying weather stations. Precipitation types reported by human-operated stations and certain automatic ones (AWOS) will have higher weight. Then the program does interpolations to produce an image with defined zones. These will include interpolation errors due to the calculation. Mesoscale variations of the precipitation zones will also be lost. More sophisticated programs use the numerical weather prediction output from models, such as NAM and WRF, for the precipitation types and apply it as a first guess to the radar echoes, then use the surface data for final output. Until dual-polarization (section Polarization below) data are widely available, any precipitation types on radar images are only indirect information and must be taken with care.

Precipitation is found in and below clouds. Light precipitation such as drops and flakes is subject to the air currents, and scanning radar can pick up the horizontal component of this motion, thus giving the possibility to estimate the wind speed and direction where precipitation is present. A target's motion relative to the radar station causes a change in the reflected frequency of the radar pulse, due to the Doppler effect. With velocities of less than 70 metres per second for weather echoes and a radar wavelength of 10 cm, this amounts to a change of only 0.1 ppm. This difference is too small to be noted by electronic instruments. However, as the targets move slightly between each pulse, the returned wave has a noticeable phase difference or phase shift from pulse to pulse. Doppler weather radars use this phase difference (pulse pair difference) to calculate the precipitation's motion. The intensity of the successively returning pulse from the same scanned volume where targets have slightly moved is:

I = I₀ sin( 4π (x₀ + v Δt) / λ )

so the pulse-to-pulse phase change is ΔΘ = 4π v Δt / λ, and

v = target speed = λ ΔΘ / (4π Δt)

This speed is called the radial Doppler velocity because it gives only the radial variation of distance versus time between the radar and the target. The real speed and direction of motion has to be extracted by the process described below. The phase between pulse pairs can vary from −π to +π, so the maximum unambiguous Doppler velocity is

Vmax = λ / (4 Δt)

This is called the Nyquist velocity. This is inversely dependent on the time between successive pulses: the smaller the interval, the larger is the unambiguous velocity range.
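A small Python sketch of the trade-off discussed here and quantified in the next paragraph: the Nyquist velocity λ/(4Δt) (equivalently λ·PRF/4) grows with the pulse repetition frequency, while the maximum unambiguous range c/(2·PRF) shrinks. The PRF values below are illustrative, not those of any specific radar.

```python
C = 2.998e8  # speed of light, m/s

def nyquist_velocity(wavelength_m: float, prf_hz: float) -> float:
    """Maximum unambiguous radial velocity for a single PRF."""
    return wavelength_m * prf_hz / 4.0

def max_unambiguous_range(prf_hz: float) -> float:
    """Maximum range before echoes from one pulse mix with the next pulse."""
    return C / (2.0 * prf_hz)

for prf in (1000.0, 1500.0):              # pulse repetition frequencies, Hz
    for wavelength in (0.05, 0.10):        # 5 cm (C band) and 10 cm (S band)
        v = nyquist_velocity(wavelength, prf)
        r = max_unambiguous_range(prf)
        print(f"PRF {prf:6.0f} Hz, wavelength {wavelength*100:3.0f} cm: "
              f"Vmax {v:5.2f} m/s, Rmax {r/1000:4.0f} km")
```

For a 5 cm radar this reproduces the figures quoted in the text: roughly 150 km of range gives a 12.5 m/s velocity limit and roughly 100 km gives 18.75 m/s, while a 10 cm radar doubles those velocity limits for the same PRF.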
However, we know that the maximum unambiguous range from reflectivity is directly proportional to the time between pulses:

x = c Δt / 2

The choice becomes increasing the range from reflectivity at the expense of velocity range, or increasing the latter at the expense of range from reflectivity. In general, the useful range compromise is 100–150 km for reflectivity. This means for a wavelength of 5 cm (as shown in the diagram), an unambiguous velocity range of 12.5 to 18.75 metres per second is produced (for 150 km and 100 km, respectively). For a 10 cm radar such as the NEXRAD, the unambiguous velocity range would be doubled. Some techniques using two alternating pulse repetition frequencies (PRF) allow a greater Doppler range. The velocities noted with the first pulse rate could be equal to or different from those with the second. For instance, the maximum velocity with one pulse rate might be 10 metres per second, and with the other 15 m/s. The data coming from both will be the same up to 10 m/s, and will differ thereafter. It is then possible to find a mathematical relation between the two returns and calculate the real velocity beyond the limitation of the two PRFs.

In a uniform rainstorm moving eastward, a radar beam pointing west will "see" the raindrops moving toward itself, while a beam pointing east will "see" the drops moving away. When the beam scans to the north or to the south, no relative motion is noted. In the synoptic scale interpretation, the user can extract the wind at different levels over the radar coverage region. As the beam is scanning 360 degrees around the radar, data will come from all those angles and be the radial projection of the actual wind on the individual angle. The intensity pattern formed by this scan can be represented by a cosine curve (maximum in the direction of the precipitation motion and zero in the perpendicular direction). One can then calculate the direction and the strength of the motion of particles as long as there is enough coverage on the radar screen.

However, the rain drops are falling. As the radar only sees the radial component and has a certain elevation from ground, the radial velocities are contaminated by some fraction of the falling speed. This component is negligible at small elevation angles, but must be taken into account for higher scanning angles. In the velocity data, there could be smaller zones in the radar coverage where the wind varies from the one mentioned above. For example, a thunderstorm is a mesoscale phenomenon which often includes rotations and turbulence. These may only cover a few square kilometers but are visible by variations in the radial speed. Users can recognize velocity patterns in the wind associated with rotations, such as mesocyclone, convergence (outflow boundary) and divergence (downburst).

Polarization

Droplets of falling liquid water tend to have a larger horizontal axis due to the drag coefficient of air while falling. This causes the water molecule dipole to be oriented in that direction, so radar beams are generally polarized horizontally in order to receive the maximal signal reflection. If two pulses are sent simultaneously with orthogonal polarization (vertical and horizontal, ZV and ZH respectively), two independent sets of data will be received. These signals can be compared in several useful ways:
- Differential Reflectivity (Zdr) – The differential reflectivity is the ratio of the reflected vertical and horizontal power returns as ZV/ZH. Among other things, it is a good indicator of drop shape, and drop shape is a good estimate of average drop size.
- Correlation Coefficient (ρhv) – A statistical correlation between the reflected horizontal and vertical power returns. High values, near one, indicate homogeneous precipitation types, while lower values indicate regions of mixed precipitation types, such as rain and snow, or hail.
- Linear Depolarization Ratio (LDR) – This is a ratio of a vertical power return from a horizontal pulse or a horizontal power return from a vertical pulse. It can also indicate regions where there is a mixture of precipitation types.
- Specific Differential Phase (θdp) – The specific differential phase is a comparison of the returned phase difference between the horizontal and vertical pulses. This change in phase is caused by the difference in the number of wave cycles (or wavelengths) along the propagation path for horizontally and vertically polarized waves. It should not be confused with the Doppler frequency shift, which is caused by the motion of the cloud and precipitation particles. Unlike the differential reflectivity, correlation coefficient and linear depolarization ratio, which are all dependent on reflected power, the specific differential phase is a "propagation effect." It is a very good estimator of rain rate and is not affected by attenuation.

With this new knowledge added to the reflectivity, velocity, and spectrum width produced by Doppler weather radars, researchers have been working on developing algorithms to differentiate precipitation types and non-meteorological targets, and to produce better rainfall accumulation estimates. In the U.S., NCAR and NSSL have been world leaders in this field. NOAA established a test deployment for dual-polarimetric radar at NSSL and equipped all its 10 cm NEXRAD radars with dual-polarization, which was completed in April 2013. McGill University's J. S. Marshall Radar Observatory in Montreal, Canada converted its instrument (1999), and the data are used operationally by Environment Canada in Montreal. Another Environment Canada radar, in King City (north of Toronto), was dual-polarized in 2005; it uses a 5 cm wavelength, which experiences greater attenuation. Environment Canada is working on converting all of its radars to dual-polarization. Météo-France is planning on incorporating dual-polarizing Doppler radar in its network coverage.

Main types of radar outputs

All data from radar scans are displayed according to the need of the users. Different outputs have been developed through time to reach this. Here is a list of common and specialized outputs available.

Plan position indicator

Since data are obtained one angle at a time, the first way of displaying them has been the Plan Position Indicator (PPI), which is only the layout of radar returns on a two-dimensional image. One has to remember that the data coming from different distances to the radar are at different heights above ground. This is very important, as a high rain rate seen near the radar is relatively close to what reaches the ground, but what is seen from 160 km away is about 1.5 km above ground and could be far different from the amount reaching the surface. It is thus difficult to compare weather echoes at different distances from the radar. PPIs are afflicted with ground echoes near the radar as an additional problem. These can be misinterpreted as real echoes. So other products and further treatments of data have been developed to supplement such shortcomings. USAGE: Reflectivity, Doppler and polarimetric data can use PPI.
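As an illustration of what building a PPI involves, here is a hedged Python sketch that resamples one elevation sweep, stored as values per (azimuth, range gate), onto a Cartesian map grid centred on the radar. The array names, grid spacing, and the simple lookup scheme are assumptions for illustration; a real display would also account for beam height, map projection, and interpolation quality.

```python
import numpy as np

def ppi_to_cartesian(sweep, ranges_m, azimuths_deg, grid_res_m=1000.0, max_range_m=150e3):
    """Resample one sweep, given as sweep[azimuth, range_gate], onto a regular
    x-y grid centred on the radar, using a nearest-gate / nearest-ray lookup."""
    n = int(2 * max_range_m / grid_res_m)
    x = (np.arange(n) - n / 2) * grid_res_m
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)                                  # range of each grid cell
    az = np.degrees(np.arctan2(xx, yy)) % 360.0           # azimuth, 0 deg = north, clockwise
    ri = np.clip(np.searchsorted(ranges_m, r), 0, len(ranges_m) - 1)
    ai = np.clip(np.searchsorted(azimuths_deg, az), 0, len(azimuths_deg) - 1)
    grid = sweep[ai, ri]
    grid[r > max_range_m] = np.nan                         # outside radar coverage
    return grid

# Toy sweep: 360 one-degree rays and 150 gates of 1 km, filled with placeholder values.
azimuths = np.arange(360.0)
ranges = np.arange(150) * 1000.0
sweep = np.random.uniform(0.0, 50.0, size=(azimuths.size, ranges.size))  # dBZ-like placeholder
grid = ppi_to_cartesian(sweep, ranges, azimuths)
print(grid.shape)
```

The key point carried over from the text is that a PPI is simply the lowest-level geometric layout of the polar data; it does not correct for the increasing height of the beam with distance.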
In the case of Doppler data, two points of view are possible: relative to the surface or to the storm. When looking at the general motion of the rain to extract wind at different altitudes, it is better to use data relative to the radar. But when looking for rotation or wind shear under a thunderstorm, it is better to use storm-relative images that subtract the general motion of precipitation, leaving the user to view the air motion as if they were sitting on the cloud.

Constant-altitude plan position indicator

To avoid some of the problems of PPIs, the constant-altitude plan position indicator (CAPPI) has been developed by Canadian researchers. It is basically a horizontal cross-section through radar data. This way, one can compare precipitation on an equal footing at different distances from the radar and avoid ground echoes. Although data are taken at a certain height above ground, a relation can be inferred between ground stations' reports and the radar data. CAPPIs call for a large number of angles from near the horizontal to near the vertical of the radar to have a cut that is as close as possible, at all distances, to the height needed. Even then, after a certain distance, there isn't any angle available and the CAPPI becomes the PPI of the lowest angle. The zigzag line on the angles diagram above shows the data used to produce 1.5 km and 4 km height CAPPIs. Notice that the section after 120 km is using the same data. Since the CAPPI uses the closest angle to the desired height at each point from the radar, the data can originate from slightly different altitudes, as seen on the image, at different points of the radar coverage. It is therefore crucial to have a large enough number of sounding angles to minimize this height change. Furthermore, the type of data must change relatively gradually with height to produce an image that is not noisy. Reflectivity data being relatively smooth with height, CAPPIs are mostly used for displaying them. Velocity data, on the other hand, can change rapidly in direction with height, and CAPPIs of them are not common. It seems that only McGill University is regularly producing Doppler CAPPIs, with the 24 angles available on their radar. However, some researchers have published papers using velocity CAPPIs to study tropical cyclones and the development of NEXRAD products. Finally, polarimetric data are recent and often noisy. There does not seem to be regular use of CAPPIs for them, although the SIGMET company offers software capable of producing those types of images.

Another solution to the PPI problems is to produce images of the maximum reflectivity in a layer above ground. This solution is usually taken when the number of angles available is small or variable. The American National Weather Service is using such composites, as their scanning scheme can vary from 4 to 14 angles, according to their need, which would make very coarse CAPPIs. The composite assures that no strong echo is missed in the layer, and a treatment using Doppler velocities eliminates the ground echoes. Comparing base and composite products, one can locate virga and updraft zones. Real-time example: with the NWS Burlington radar, one can compare the BASE and COMPOSITE products.

Another important use of radar data is the ability to assess the amount of precipitation that has fallen over large basins, to be used in hydrological calculations; such data is useful in flood control, sewer management and dam construction.
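The accumulation procedure elaborated in the following paragraph — average the rain rate between successive images and multiply by the time step — can be sketched in a few lines of Python. The dBZ-to-rain-rate conversion here uses the classic Marshall–Palmer coefficients (a = 200, b = 1.6) for the Z = aR^b relation mentioned earlier; operational radars use locally tuned values, and the sample series below is purely illustrative.

```python
def dbz_to_rain_rate(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Convert reflectivity in dBZ to rain rate (mm/h) by inverting Z = a * R**b."""
    z = 10.0 ** (dbz / 10.0)      # dBZ -> Z in mm^6/m^3
    return (z / a) ** (1.0 / b)

def accumulate(dbz_series, minutes_between_images: float = 10.0) -> float:
    """Total accumulation (mm) at one point over a series of radar images:
    average the rain rate between consecutive images, multiply by the time step."""
    total = 0.0
    for first, second in zip(dbz_series, dbz_series[1:]):
        mean_rate = (dbz_to_rain_rate(first) + dbz_to_rain_rate(second)) / 2.0
        total += mean_rate * minutes_between_images / 60.0
    return total

# One hour of 10-minute images over a point: reflectivity rises then falls.
series = [20.0, 30.0, 40.0, 45.0, 35.0, 25.0, 15.0]
print(f"rain rate at 40 dBZ: {dbz_to_rain_rate(40.0):.1f} mm/h")   # ~11.5 mm/h
print(f"accumulation over the hour: {accumulate(series):.1f} mm")
```

Longer-period totals are obtained the same way the text describes: by summing the accumulations from all image pairs over the period of interest.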
The computed data from weather radar may be used in conjunction with data from ground stations. To produce radar accumulations, we have to estimate the rain rate over a point by the average value over that point between one PPI, or CAPPI, and the next, and then multiply by the time between those images. If an accumulation is wanted over a longer period of time, one has to add up all the accumulations from images during that time.

Aviation is a heavy user of radar data. One map particularly important in this field is the echotops map, used for flight planning and avoidance of dangerous weather. Most national weather radars scan enough angles to have a 3D set of data over the area of coverage. It is relatively easy to estimate the maximum altitude at which precipitation is found within the volume. However, those are not the tops of clouds, as clouds always extend above the precipitation.

Vertical cross sections

To know the vertical structure of clouds, in particular thunderstorms or the level of the melting layer, a vertical cross-section product of the radar data is available. This is done by displaying only the data along a line, from coordinates A to B, taken from the different angles scanned.

Range Height Indicator

When a weather radar is scanning in only one direction vertically, it obtains high-resolution data along a vertical cut of the atmosphere. The output of this sounding is called a Range Height Indicator (RHI), which is excellent for viewing the detailed vertical structure of a storm. This is different from the vertical cross section mentioned above by the fact that the radar is making a vertical cut along specific directions and does not scan over the entire 360 degrees around the site. This kind of sounding and product is only available on research radars.

Over the past few decades, radar networks have been extended to allow the production of composite views covering large areas. For instance, many countries, including the United States, Canada and much of Europe, produce images that include all of their radars. This is not a trivial task. In fact, such a network can consist of different types of radar with different characteristics such as beam width, wavelength and calibration. These differences have to be taken into account when matching data across the network, particularly to decide what data to use when two radars cover the same point. If one uses the stronger echo but it comes from the more distant radar, one uses returns that are from higher altitude coming from rain or snow that might evaporate before reaching the ground (virga). If one uses data from the closer radar, it might be attenuated by passing through a thunderstorm. Composite images of precipitation using a network of radars are made with all those limitations in mind. Here are some national radar networks:
- Environment Canada
- National Weather Service in the United States
- Czech Republic
- South African Republic
- Deutscher Wetterdienst in Germany
- Bureau of Meteorology, Australia
- SMHI, Scandinavia and the Baltic Sea
- POLRAD – Poland

To help meteorologists spot dangerous weather, mathematical algorithms have been introduced in the weather radar treatment programs. These are particularly important in analyzing the Doppler velocity data, as they are more complex. The polarization data will need even more algorithms. Main algorithms for reflectivity:
- Vertically Integrated Liquid (VIL) is an estimate of the total mass of precipitation in the clouds.
- VIL Density is VIL divided by the height of the cloud top. It is a clue to the possibility of large hail in thunderstorms.
- Potential wind gust, which can estimate the winds under a cloud (a downdraft) using the VIL and the height of the echotops (radar estimated top of the cloud) for a given storm cell.
- Hail algorithms that estimate the presence of hail and its probable size.

Main algorithms for Doppler velocities:
- Mesocyclone detection: it is triggered by a velocity change over a small circular area. The algorithm searches for a "doublet" of inbound/outbound velocities with the zero line of velocities, between the two, along a radial line from the radar. Usually the mesocyclone detection must be found on two or more stacked progressive tilts of the beam to be significant of rotation in a thunderstorm cloud.
- TVS or Tornado Vortex Signature algorithm is essentially a mesocyclone with a large velocity threshold found through many scanning angles. This algorithm is used in NEXRAD to indicate the possibility of a tornado formation.
- Wind shear in low levels. This algorithm detects the variation of wind velocities from point to point in the data, looking for a doublet of inbound/outbound velocities with the zero line perpendicular to the radar beam. The wind shear is associated with downdrafts (downbursts and microbursts), gust fronts and turbulence under thunderstorms.
- VAD Wind Profile (VWP) is a display that estimates the direction and speed of the horizontal wind at various upper levels of the atmosphere, using the technique explained in the Doppler section.

The animation of radar products can show the evolution of reflectivity and velocity patterns. The user can extract information on the dynamics of the meteorological phenomena, including the ability to extrapolate the motion and observe development or dissipation. This can also reveal non-meteorological artifacts (false echoes) that will be discussed later.

Radar Integrated Display with Geospatial Elements

A new popular presentation of weather radar data in the United States is via Radar Integrated Display with Geospatial Elements (RIDGE), in which the radar data is projected on a map with geospatial elements such as topography maps, highways, state/county boundaries and weather warnings. The projection is often flexible, giving the user a choice of various geographic elements. It is frequently used in conjunction with animations of radar data over a time period.

Limitations and artifacts

Radar data interpretation depends on many hypotheses about the atmosphere and the weather targets, including:
- International Standard Atmosphere.
- Targets small enough to obey Rayleigh scattering, resulting in the return being proportional to the precipitation rate.
- The volume scanned by the beam is full of meteorological targets (rain, snow, etc.), all of the same variety and in a uniform concentration.
- No attenuation.
- No amplification.
- Returns from side lobes of the beam are negligible.
- The beam is close to a Gaussian function curve with power decreasing to half at half the width.
- The outgoing and returning waves are similarly polarized.
- There is no return from multiple reflections.

These assumptions are not always met; one must be able to differentiate between reliable and dubious echoes.

Anomalous propagation (non-standard atmosphere)

The first assumption is that the radar beam is moving through air that cools down at a certain rate with height. The position of the echoes depends heavily on this hypothesis. However, the real atmosphere can vary greatly from the norm.
Temperature inversions often form near the ground, for instance by air cooling at night while remaining warm aloft. As the index of refraction of air then decreases faster than normal, the radar beam bends toward the ground instead of continuing upward. Eventually, it will hit the ground and be reflected back toward the radar. The processing program will then wrongly place the return echoes at the height and distance they would have had in normal conditions. This type of false return is relatively easy to spot on a time loop if it is due to night cooling or a marine inversion, as one sees very strong echoes developing over an area, spreading in size laterally but not moving and varying greatly in intensity. However, temperature inversions also exist ahead of warm fronts, and the abnormal propagation echoes are then mixed with real rain. The extreme of this problem occurs when the inversion is very strong and shallow: the radar beam reflects toward the ground many times, following a waveguide-like path. This will create multiple bands of strong echoes on the radar images. This situation can be found with inversions of temperature aloft or a rapid decrease of moisture with height. In the former case, it could be difficult to notice. On the other hand, if the air is unstable and cools faster than the standard atmosphere with height, the beam ends up higher than expected, and the precipitation is then actually occurring higher than the height the radar assigns to it. Such an error is difficult to detect without additional temperature lapse-rate data for the area. If we want to reliably estimate the precipitation rate, the targets have to be 10 times smaller than the radar wavelength, according to Rayleigh scattering. This is because the water molecule has to be excited by the radar wave to give a return. This is relatively true for rain or snow, as 5 or 10 cm wavelength radars are usually employed. However, for very large hydrometeors, since the wavelength is on the order of the size of the stone, the return levels off according to Mie theory. A return of more than 55 dBZ is likely to come from hail but won't vary proportionally with the size. On the other hand, very small targets such as cloud droplets are too small to be excited and do not give a recordable return on common weather radars. Resolution and partially filled scanned volume As demonstrated at the start of the article, radar beams have a physical dimension and data are sampled at discrete angles, not continuously, along each angle of elevation. This results in an averaging of the values of the returns for reflectivity, velocities and polarization data over the resolution volume scanned. In the figure to the left, at the top is a view of a thunderstorm taken by a wind profiler as it was passing overhead. This is like a vertical cross section through the cloud with 150-metre vertical and 30-metre horizontal resolution. The reflectivity has large variations over a short distance. Compare this with a simulated view of what a regular weather radar would see at 60 km, in the bottom of the figure. Everything has been smoothed out. Not only does the coarser resolution of the radar blur the image, but the sampled volume also incorporates areas that are echo-free, thus extending the thunderstorm beyond its real boundaries. This shows how the output of weather radar is only an approximation of reality. The image to the right compares real data from two almost co-located radars. The TDWR has about half the beamwidth of the NEXRAD and shows roughly twice the detail.
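Both the height assignment just described and this resolution smoothing come down to simple beam geometry. The sketch below shows, under the usual standard-refraction (4/3 effective Earth radius) assumption, how the height assigned to the beam and the width of the sampled volume grow with range; the function names are illustrative, not from any particular radar package.

```python
import math

EARTH_RADIUS_M = 6_371_000.0
K_E = 4.0 / 3.0   # effective Earth-radius factor assumed for a "standard" atmosphere

def assumed_beam_height_m(range_m, elevation_deg, radar_height_m=0.0):
    """Height of the beam centre that scan processing assigns under standard
    refraction; when the real atmosphere departs from this model, echoes are
    placed at the wrong altitude."""
    ae = K_E * EARTH_RADIUS_M
    theta = math.radians(elevation_deg)
    h = math.sqrt(range_m**2 + ae**2 + 2.0 * range_m * ae * math.sin(theta)) - ae
    return h + radar_height_m

def beam_width_m(range_m, beamwidth_deg):
    """Cross-beam size of the sampled volume: it widens linearly with range,
    which is why distant storms look smoothed out and only partially fill the beam."""
    return 2.0 * range_m * math.tan(math.radians(beamwidth_deg) / 2.0)

for r_km in (30, 60, 120, 240):
    r = r_km * 1000.0
    print(f"{r_km:>3} km: 0.5 deg tilt centred near "
          f"{assumed_beam_height_m(r, 0.5)/1000:.1f} km altitude, "
          f"1 deg beam about {beam_width_m(r, 1.0)/1000:.1f} km wide")
```

At 240 km the same 1-degree beam is more than 4 km across, so a storm core smaller than that can only partially fill the sampled volume.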
Resolution can be improved by newer equipment but some things cannot. As mentioned previously, the volume scanned increases with distance so the possibility that the beam is only partially filled also increases. This leads to underestimation of the precipitation rate at larger distances and fools the user into thinking that rain is lighter as it moves away. The radar beam has a distribution of energy similar to the diffraction pattern of a light passing through a slit. This is because the wave is transmitted to the parabolic antenna through a slit in the wave-guide at the focal point. Most of the energy is at the center of the beam and decreases along a curve close to a Gaussian function on each side. However, there are secondary peaks of emission that will sample the targets at off-angles from the center. Designers attempt to minimize the power transmitted by such lobes, but they cannot be completely eliminated. When a secondary lobe hits a reflective target such as a mountain or a strong thunderstorm, some of the energy is reflected to the radar. This energy is relatively weak but arrives at the same time that the central peak is illuminating a different azimuth. The echo is thus misplaced by the processing program. This has the effect of actually broadening the real weather echo making a smearing of weaker values on each side of it. This causes the user to overestimate the extent of the real echoes. There is more than rain and snow in the sky. Other objects can be misinterpreted as rain or snow by weather radars. Insects and arthropods are swept along by the prevailing winds, while birds follow their own course. As such, fine line patterns within weather radar imagery, associated with converging winds, are dominated by insect returns. Bird migration, which tends to occur overnight within the lowest 2000 metres of the Earth's atmosphere, contaminates wind profiles gathered by weather radar, particularly the WSR-88D, by increasing the environmental wind returns by 30–60 km/hr. Other objects within radar imagery include: - Thin metal strips (chaff) dropped by military aircraft to fool enemies. - Solid obstacles such as mountains, buildings, and aircraft. - Ground and sea clutter. - Reflections from nearby buildings ("urban spikes"). Such extraneous objects have characteristics that allow a trained eye to distinguish them. It is also possible to eliminate some of them with post-treatment of data using reflectivity, Doppler, and polarization data. The rotating blades of windmills on modern wind farms can return the radar beam to the radar if they are in its path. Since the blades are moving, the echoes will have a velocity and can be mistaken for real precipitation. The closer the wind farm, the stronger the return, and the combined signal from many towers is stronger. In some conditions, the radar can even see toward and away velocities that generate false positives for the tornado vortex signature algorithm on weather radar; such an event occurred in 2009 in Dodge City, Kansas. As with other structures that stand in the beam, attenuation of radar returns from beyond windmills may also lead to underestimation. Microwaves used in weather radars can be absorbed by rain, depending on the wavelength used. For 10 cm radars, this attenuation is negligible. That is the reason why countries with high water content storms are using 10 cm wavelength, for example the US NEXRAD. The cost of a larger antenna, klystron and other related equipment is offset by this benefit. 
For a 5 cm radar, absorption becomes important in heavy rain and this attenuation leads to underestimation of echoes in and beyond a strong thunderstorm. Canada and other northern countries use this less costly kind of radar as the precipitation in such areas is usually less intense. However, users must consider this characteristic when interpreting data. The images above show how a strong line of echoes seems to vanish as it moves over the radar. To compensate for this behaviour, radar sites are often chosen to somewhat overlap in coverage to give different points of view of the same storms. Shorter wavelengths are even more attenuated and are only useful on short range radar. Many television stations in the United States have 3 cm radars to cover their audience area. Knowing their limitations and using them with the local NEXRAD can supplement the data available to a meteorologist. A radar beam's reflectivity depends on the diameter of the target and its capacity to reflect. Snowflakes are large but weakly reflective while rain drops are small but highly reflective. When snow falls through a layer above freezing temperature, it melts into rain. Using the reflectivity equation, one can demonstrate that the returns from the snow before melting and the rain after, are not too different as the change in dielectric constant compensates for the change in size. However, during the melting process, the radar wave "sees" something akin to very large droplets as snow flakes become coated with water. This gives enhanced returns that can be mistaken for stronger precipitations. On a PPI, this will show up as an intense ring of precipitation at the altitude where the beam crosses the melting level while on a series of CAPPIs, only the ones near that level will have stronger echoes. A good way to confirm a bright band is to make a vertical cross section through the data, as illustrated in the picture above. An opposite problem is that drizzle (precipitation with small water droplet diameter) tends not to show up on radar because radar returns are proportional to the sixth power of droplet diameter. It is assumed that the beam hits the weather targets and returns directly to the radar. In fact, there is energy reflected in all directions. Most of it is weak, and multiple reflections diminish it even further so what can eventually return to the radar from such an event is negligible. However, some situations allow a multiple-reflected radar beam to be received by the radar antenna. For instance, when the beam hits hail, the energy spread toward the wet ground will be reflected back to the hail and then to the radar. The resulting echo is weak but noticeable. Due to the extra path length it has to go through, it arrives later at the antenna and is placed further than its source. This gives a kind of triangle of false weaker reflections placed radially behind the hail. Solutions for now and the future These two images show what can be presently achieved to clean up radar data. The output on the left is made with the raw returns and it is difficult to spot the real weather. Since rain and snow clouds are usually moving, one can use the Doppler velocities to eliminate a good part of the clutter (ground echoes, reflections from buildings seen as urban spikes, anomalous propagation). The image on the right has been filtered using this property. However, not all non-meteorological targets remain still (birds, insects, dust). Others, like the bright band, depend on the structure of the precipitation. 
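A crude illustration of the Doppler-based filtering just described: gates whose radial velocity is near zero are more likely ground targets than precipitation. This is only a sketch with made-up field names; operational filters work on the full Doppler spectrum and, increasingly, on polarization variables.

```python
def filter_stationary_echoes(reflectivity_dbz, radial_velocity_ms, min_speed_ms=1.0):
    """Crude clutter filter: discard gates whose Doppler velocity is ~zero.

    Precipitation is usually moving, so gates with near-zero radial velocity
    are more likely ground echoes, buildings or anomalous propagation.
    Returns the reflectivity list with suspect gates set to None.
    Note the obvious cost: rain moving perpendicular to the beam is also
    discarded, which is why real systems use more elaborate filters.
    """
    cleaned = []
    for dbz, vel in zip(reflectivity_dbz, radial_velocity_ms):
        cleaned.append(dbz if abs(vel) >= min_speed_ms else None)
    return cleaned

dbz = [35, 48, 52, 30, 41]
vel = [12.0, 0.2, -0.1, 8.0, -15.0]   # m/s toward/away from the radar
print(filter_stationary_echoes(dbz, vel))  # [35, None, None, 30, 41]
```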
Polarization offers a direct typing of the echoes which could be used to filter more false data or produce separate images for specialized purposes. This recent development is expected to improve the quality of radar products. Another question is the resolution. As mentioned previously, radar data are an average of the scanned volume by the beam. Resolution can be improved by larger antenna or denser networks. A program by the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA) aims to supplement the regular NEXRAD (a network in the United States) using many low cost X-band (3 cm) weather radar mounted on cellular telephone towers. These radars will subdivide the large area of the NEXRAD into smaller domains to look at altitudes below its lowest angle. These will give details not currently available. Using 3 cm radars, the antenna of each radar is small (about 1 meter diameter) but the resolution is similar at short distance to that of NEXRAD. The attenuation is significant due to the wavelength used but each point in the coverage area is seen by many radars, each viewing from a different direction and compensating for data lost from others. Timeliness is also a point needing improvement. With 5 to 10 minutes time between complete scans of weather radar, much data is lost as a thunderstorm develops. A Phased-array radar is being tested at the National Severe Storms Lab in Norman, Oklahoma, to speed the data gathering. Avionics weather radar Aircraft application of radar systems include weather radar, collision avoidance, target tracking, ground proximity, and other systems. For commercial weather radar, ARINC 708 is the primary weather radar system using an airborne pulse-Doppler radar. Unlike ground weather radar, which is set at a fixed angle, airborne weather radar is being utilized from the nose or wing of an aircraft. Not only will the aircraft be moving up, down, left, and right, but it will be rolling as well. To compensate for this, the antenna is linked and calibrated to the vertical gyro located on the aircraft. By doing this, the pilot is able to set a pitch or angle to the antenna that will enable the stabilizer to keep the antenna pointed in the right direction under moderate maneuvers. The small servo motors will not be able to keep up with abrupt maneuvers, but it will try. In doing this the pilot is able to adjust the radar so that it will point towards the weather system of interest. If the airplane is at a low altitude, the pilot would want to set the radar above the horizon line so that ground clutter is minimized on the display. If the airplane is at a very high altitude, the pilot will set the radar at a low or negative angle, to point the radar towards the clouds wherever they may be relative to the aircraft. If the airplane changes attitude, the stabilizer will adjust itself accordingly so that the pilot doesn't have to fly with one hand and adjust the radar with the other. There are two major systems when talking about the receiver/transmitter: the first is high-powered systems, and the second is low-powered systems; both of which operate in the X-band frequency range (8,000 – 12,500 MHz). High-powered systems operate at 10,000 – 60,000 watts. These systems consist of magnetrons that are fairly expensive (approximately $1,700) and allow for considerable noise due to irregularities with the system. Thus, these systems are highly dangerous for arcing and are not safe to be used around ground personnel. 
However, the alternative would be the low-powered systems. These systems operate at 100–200 watts, and require a combination of high-gain receivers, signal microprocessors, and transistors to operate as effectively as the high-powered systems. The complex microprocessors help to eliminate noise, providing a more accurate and detailed depiction of the sky. Also, since there are fewer irregularities throughout the system, the low-powered radars can be used to detect turbulence via the Doppler effect. Since low-powered systems operate at considerably less wattage, they are safe from arcing and can be used at virtually all times. Digital radar systems now have capabilities far beyond those of their predecessors. Digital systems now offer thunderstorm tracking surveillance. This provides users with the ability to acquire detailed information about each storm cloud being tracked. Thunderstorms are first identified by matching precipitation raw data received from the radar pulse to some sort of template preprogrammed into the system. In order for a thunderstorm to be identified, it has to meet strict definitions of intensity and shape that set it apart from any non-convective cloud. Usually, it must show signs of organization in the horizontal and continuity in the vertical: a core or a more intense center to be identified and tracked by digital radar tracking systems. Once the thunderstorm cell is identified, speed, distance covered, direction, and Estimated Time of Arrival (ETA) are all tracked and recorded to be utilized later.
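A toy version of that last step, with hypothetical inputs (a list of cell-centroid positions on a local kilometre grid rather than real tracker output), showing how speed, heading and ETA fall out of successive scan positions:

```python
import math

def cell_motion_and_eta(track_xy_km, scan_interval_min, target_xy_km):
    """Estimate a tracked cell's speed, heading and ETA at a target point.

    track_xy_km       : [(x, y), ...] cell centroid on successive scans (km)
    scan_interval_min : minutes between scans
    target_xy_km      : (x, y) of the point of interest (km)
    The ETA assumes the cell keeps moving in a straight line toward the target.
    """
    (x0, y0), (x1, y1) = track_xy_km[0], track_xy_km[-1]
    elapsed_h = (len(track_xy_km) - 1) * scan_interval_min / 60.0
    vx, vy = (x1 - x0) / elapsed_h, (y1 - y0) / elapsed_h        # km/h
    speed_kmh = math.hypot(vx, vy)
    heading_deg = math.degrees(math.atan2(vx, vy)) % 360.0       # direction of motion, from north
    distance_km = math.hypot(target_xy_km[0] - x1, target_xy_km[1] - y1)
    eta_min = 60.0 * distance_km / speed_kmh if speed_kmh > 0 else float("inf")
    return speed_kmh, heading_deg, eta_min

# Three scans, 10 minutes apart, cell heading toward a town at (30, 20) km
track = [(10.0, 5.0), (14.0, 8.0), (18.0, 11.0)]
speed, heading, eta = cell_motion_and_eta(track, 10.0, (30.0, 20.0))
print(f"{speed:.0f} km/h toward {heading:.0f} deg, ETA {eta:.0f} min")  # 30 km/h, 53 deg, 30 min
```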
Statistical inference is a powerful process that helps us draw meaningful conclusions about a population using data from a sample. In essence, it allows us to bridge the gap between the information we have and the broader population we want to understand. To grasp the concept of statistical inference, it’s essential to differentiate between two major categories of statistics. Bucket 1: Known Data In the first bucket, we have all the data for the entire population. Here, we employ mathematical and pictorial tools like calculating averages, medians, quartiles, and creating histograms to understand the data. Bucket 2: Unknown Data The second bucket, which is our focus here, involves scenarios where we don’t have data for the entire population but can gather a sample from it. Our objective is to extract meaningful insights from this sample and use them to make inferences about the entire population. The Key Goal In statistical inference, the crucial goal is not just understanding the sample itself, but rather using it to infer characteristics of the entire population. We still analyze the sample in ways similar to the first bucket, looking at measures like averages, medians, and data spread, but our ultimate aim is predictive power for the entire population. Approximating the Population Statistical inference aims to provide an approximate description of the larger population based on observable data from a smaller sample. In other words, the sample serves as a window through which we gain insights into the broader population. How Close and How Confident? The central questions in statistical inference revolve around “how close” and “how confident” we are in our data. Let’s delve into these concepts: - Closeness: When analyzing data, we seek to understand the middle point, shape, and spread of the sample. However, the critical aspect is determining how closely these characteristics of the sample align with those of the entire population. We want to gauge the similarity between the two. - Confidence: Determining confidence is a complex task. It involves assessing how confident we can be in extending the findings from our sample to the entire population. This confidence level plays a pivotal role in making informed decisions and predictions. Practical Applications of Statistical Inference - Election Polling: Election predictions exemplify statistical inference in action. It’s impractical to survey every voter, so pollsters select a sample and use the results to estimate the voting behavior of the entire electorate. - Medical Trials: In medical trials, researchers can’t test a new drug on the entire population. Instead, they work with a sample group to infer the drug’s effectiveness for the broader population. Randomness in selecting the sample helps mitigate bias. Randomness and Representative Samples Random selection is fundamental in obtaining a representative sample. It ensures that each individual in the population has an equal chance of being included in the sample. While achieving perfect randomness may be challenging in practice, it’s essential to reduce bias and improve the reliability of inferences. Estimates and Confidence Intervals Statistical inference provides estimates of population parameters like the mean or proportion. These estimates are accompanied by confidence intervals, which represent a range of values likely to contain the true population parameter. The confidence level indicates how confident we are in the accuracy of this interval. 
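As a minimal illustration of an estimate plus a confidence interval, here is a sketch using only the Python standard library; the data are simulated, not from any real survey.

```python
import math
import random

random.seed(0)
sample = [random.gauss(mu=12.0, sigma=3.0) for _ in range(100)]  # stand-in survey data

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))   # sample standard deviation
se = sd / math.sqrt(n)                                           # standard error of the mean

z = 1.96                       # roughly 95% confidence for a large sample
low, high = mean - z * se, mean + z * se
print(f"point estimate: {mean:.2f}")
print(f"95% confidence interval: ({low:.2f}, {high:.2f})")
```

Roughly 95% of intervals built this way would cover the true population mean if the sampling were repeated, which is the sense in which we are "confident".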
Probability Theory: The Backbone of Inference Probability theory serves as the backbone of statistical inference. It equips us with the mathematical tools to quantify uncertainty and make educated predictions. It allows us to assess how likely or unlikely observed data would be under a particular statistical model. Hypothesis testing is a core element of statistical inference. It involves formulating two opposing hypotheses—the null hypothesis and the alternative hypothesis. We collect data, compute a test statistic, and decide whether to reject the null hypothesis in favor of the alternative. This process enables us to make informed decisions based on sample data. A Real-Life Example: Testing Coin Fairness Imagine testing the fairness of a coin, where you want to determine whether it’s equally probable for the coin to land on heads or tails. By flipping the coin multiple times and analyzing the results, you can apply statistical inference to gauge the fairness of the coin. Statistical inference is a vital set of tools that enables us to make informed decisions and predictions about populations based on sample data. It’s a fundamental aspect of various fields, including science, economics, medicine, business, and politics. Understanding how to apply statistical inference concepts empowers us to draw meaningful conclusions from limited data, bridging the gap between the known and the unknown.
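To make the coin example above concrete, here is a small sketch of an exact two-sided test of fairness (a plain binomial test; the 60-heads figure is just an illustration):

```python
from math import comb

def two_sided_fair_coin_p(heads, flips):
    """Exact two-sided p-value for H0: the coin is fair (P(heads) = 0.5).

    Under a fair coin the distribution of heads is symmetric, so the
    two-sided p-value is the probability of a result at least as far
    from flips/2 as the one observed.
    """
    deviation = abs(heads - flips / 2)
    p = sum(comb(flips, k) for k in range(flips + 1)
            if abs(k - flips / 2) >= deviation) * 0.5 ** flips
    return min(p, 1.0)

# 60 heads in 100 flips: surprising for a fair coin?
print(f"p-value = {two_sided_fair_coin_p(60, 100):.3f}")   # about 0.057
```

A p-value near 0.05 here means a result at least this lopsided would happen only about 6% of the time with a truly fair coin, so the evidence against fairness is suggestive but not strong.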
Coordinate Reference Systems¶ Understanding of Coordinate Reference Systems. Coordinate Reference System (CRS), Map Projection, On the Fly Projection, Latitude, Longitude, Northing, Easting Map projections try to portray the surface of the earth, or a portion of the earth, on a flat piece of paper or computer screen. In layman’s term, map projections try to transform the earth from its spherical shape (3D) to a planar shape (2D). A coordinate reference system (CRS) then defines how the two-dimensional, projected map in your GIS relates to real places on the earth. The decision of which map projection and CRS to use depends on the regional extent of the area you want to work in, on the analysis you want to do, and often on the availability of data. Map Projection in detail¶ A traditional method of representing the earth’s shape is the use of globes. There is, however, a problem with this approach. Although globes preserve the majority of the earth’s shape and illustrate the spatial configuration of continent-sized features, they are very difficult to carry in one’s pocket. They are also only convenient to use at extremely small scales (e.g. 1:100 million). Most of the thematic map data commonly used in GIS applications are of considerably larger scale. Typical GIS datasets have scales of 1:250 000 or greater, depending on the level of detail. A globe of this size would be difficult and expensive to produce and even more difficult to carry around. As a result, cartographers have developed a set of techniques called map projections designed to show, with reasonable accuracy, the spherical earth in two-dimensions. When viewed at close range the earth appears to be relatively flat. However when viewed from space, we can see that the earth is relatively spherical. Maps, as we will see in the upcoming map production topic, are representations of reality. They are designed to not only represent features, but also their shape and spatial arrangement. Each map projection has advantages and disadvantages. The best projection for a map depends on the scale of the map, and on the purposes for which it will be used. For example, a projection may have unacceptable distortions if used to map the entire African continent, but may be an excellent choice for a large-scale (detailed) map of your country. The properties of a map projection may also influence some of the design features of the map. Some projections are good for small areas, some are good for mapping areas with a large East-West extent, and some are better for mapping areas with a large North-South extent. The three families of map projections¶ The process of creating map projections is best illustrated by positioning a light source inside a transparent globe on which opaque earth features are placed. Then project the feature outlines onto a two-dimensional flat piece of paper. Different ways of projecting can be produced by surrounding the globe in a cylindrical fashion, as a cone, or even as a flat surface. Each of these methods produces what is called a map projection family. Therefore, there is a family of planar projections, a family of cylindrical projections, and another called conical projections (see figure_projection_families) Today, of course, the process of projecting the spherical earth onto a flat piece of paper is done using the mathematical principles of geometry and trigonometry. This recreates the physical projection of light through the globe. 
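As one concrete member of the cylindrical family, the spherical Mercator projection can be written in a few lines of trigonometry. This is a simplified sphere-based sketch, not the ellipsoidal form used by mapping software, and the place names are only examples.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def mercator(lon_deg, lat_deg, radius=EARTH_RADIUS_M):
    """Spherical Mercator: x grows with longitude, y stretches toward the
    poles (and is undefined at the poles themselves), which is why Mercator
    preserves angles but badly distorts areas at high latitudes."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = radius * lam
    y = radius * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

for place, lon, lat in [("Quito", -78.5, -0.2), ("Cape Town", 18.4, -33.9),
                        ("Reykjavik", -21.9, 64.1)]:
    x, y = mercator(lon, lat)
    print(f"{place}: x = {x/1000:.0f} km, y = {y/1000:.0f} km")
```

The y value grows without bound toward the poles, which is exactly the kind of distortion discussed in the next section.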
Accuracy of map projections¶ Map projections are never absolutely accurate representations of the spherical earth. As a result of the map projection process, every map shows distortions of angular conformity, distance and area. A map projection may combine several of these characteristics, or may be a compromise that distorts all the properties of area, distance and angular conformity, within some acceptable limit. Examples of compromise projections are the Winkel Tripel projection and the Robinson projection (see figure_robinson_projection), which are often used for producing and visualizing world maps. It is usually impossible to preserve all characteristics at the same time in a map projection. This means that when you want to carry out accurate analytical operations, you need to use a map projection that provides the best characteristics for your analyses. For example, if you need to measure distances on your map, you should try to use a map projection for your data that provides high accuracy for distances. Map projections with angular conformity¶ When working with a globe, the main directions of the compass rose (North, East, South and West) will always occur at 90 degrees to one another. In other words, East will always occur at a 90 degree angle to North. Maintaining correct angular properties can be preserved on a map projection as well. A map projection that retains this property of angular conformity is called a conformal or orthomorphic projection. These projections are used when the preservation of angular relationships is important. They are commonly used for navigational or meteorological tasks. It is important to remember that maintaining true angles on a map is difficult for large areas and should be attempted only for small portions of the earth. The conformal type of projection results in distortions of areas, meaning that if area measurements are made on the map, they will be incorrect. The larger the area the less accurate the area measurements will be. Examples are the Mercator projection (as shown in figure_mercator_projection) and the Lambert Conformal Conic projection. The U.S. Geological Survey uses a conformal projection for many of its topographic maps. Map projections with equal distance¶ If your goal in projecting a map is to accurately measure distances, you should select a projection that is designed to preserve distances well. Such projections, called equidistant projections, require that the scale of the map is kept constant. A map is equidistant when it correctly represents distances from the centre of the projection to any other place on the map. Equidistant projections maintain accurate distances from the centre of the projection or along given lines. These projections are used for radio and seismic mapping, and for navigation. The Plate Carree Equidistant Cylindrical (see figure_plate_caree_projection) and the Equirectangular projection are two good examples of equidistant projections. The Azimuthal Equidistant projection is the projection used for the emblem of the United Nations (see figure_azimuthal_equidistant_projection). Projections with equal areas¶ When a map portrays areas over the entire map, so that all mapped areas have the same proportional relationship to the areas on the Earth that they represent, the map is an equal area map. In practice, general reference and educational maps most often require the use of equal area projections. 
As the name implies, these maps are best used when calculations of area are the dominant calculations you will perform. If, for example, you are trying to analyse a particular area in your town to find out whether it is large enough for a new shopping mall, equal area projections are the best choice. On the one hand, the larger the area you are analysing, the more precise your area measures will be if you use an equal area projection rather than another type. On the other hand, an equal area projection results in distortions of angular conformity when dealing with large areas. Small areas will be far less prone to having their angles distorted when you use an equal area projection. Albers equal area, Lambert's equal area and Mollweide Equal Area Cylindrical projections (shown in figure_mollweide_equal_area_projection) are types of equal area projections that are often encountered in GIS work. Keep in mind that map projection is a very complex topic. There are hundreds of different projections available worldwide, each trying to portray a certain portion of the earth's surface as faithfully as possible on a flat piece of paper. In reality, the choice of which projection to use will often be made for you. Most countries have commonly used projections, and when data is exchanged people will follow the national trend. Coordinate Reference System (CRS) in detail¶ With the help of coordinate reference systems (CRS) every place on the earth can be specified by a set of three numbers, called coordinates. In general CRS can be divided into projected coordinate reference systems (also called Cartesian or rectangular coordinate reference systems) and geographic coordinate reference systems. Geographic Coordinate Systems¶ The use of Geographic Coordinate Reference Systems is very common. They use degrees of latitude and longitude and sometimes also a height value to describe a location on the earth's surface. The most popular is called WGS 84. Lines of latitude run parallel to the equator and divide the earth into 180 equally spaced sections from North to South (or South to North). The reference line for latitude is the equator and each hemisphere is divided into ninety sections, each representing one degree of latitude. In the northern hemisphere, degrees of latitude are measured from zero at the equator to ninety at the north pole. In the southern hemisphere, degrees of latitude are measured from zero at the equator to ninety degrees at the south pole. To simplify the digitisation of maps, degrees of latitude in the southern hemisphere are often assigned negative values (0 to -90°). Wherever you are on the earth's surface, the distance between two adjacent lines of latitude (one degree apart) is the same (60 nautical miles). See figure_geographic_crs for a pictorial view. Lines of longitude, on the other hand, do not stand up so well to the standard of uniformity. Lines of longitude run perpendicular to the equator and converge at the poles. The reference line for longitude (the prime meridian) runs from the North pole to the South pole through Greenwich, England. Subsequent lines of longitude are measured from zero to 180 degrees East or West of the prime meridian. Note that values West of the prime meridian are assigned negative values for use in digital mapping applications. See figure_geographic_crs for a pictorial view. At the equator, and only at the equator, the distance represented by one degree of longitude is equal to the distance represented by one degree of latitude; the short sketch below makes this concrete.
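A minimal sketch (spherical earth, using the 60-nautical-mile-per-degree figure quoted above; the function name is illustrative) of how the ground distance covered by one degree of longitude shrinks with latitude:

```python
import math

METRES_PER_DEGREE = 60 * 1852.0   # 60 nautical miles, as used in the text

def metres_per_degree_longitude(latitude_deg):
    """Ground distance covered by one degree of longitude at a given latitude.

    Meridians converge toward the poles, so the spacing shrinks roughly with
    the cosine of the latitude (treating the earth as a sphere)."""
    return METRES_PER_DEGREE * math.cos(math.radians(latitude_deg))

for lat in (0, 30, 60, 89, 90):
    print(f"lat {lat:>2}: one degree of longitude is about "
          f"{metres_per_degree_longitude(lat)/1000:.1f} km")
```

At 60 degrees of latitude a degree of longitude is only half as long as it is at the equator.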
As you move towards the poles, the distance between lines of longitude becomes progressively less, until, at the exact location of the pole, all 360° of longitude are represented by a single point that you could put your finger on (you probably would want to wear gloves, though). Using the geographic coordinate system, we have a grid of lines dividing the earth into squares that cover approximately 12363.365 square kilometres at the equator — a good start, but not very useful for determining the location of anything within that square. To be truly useful, a map grid must be divided into small enough sections so that they can be used to describe (with an acceptable level of accuracy) the location of a point on the map. To accomplish this, degrees are divided into minutes (') and seconds ("). There are sixty minutes in a degree, and sixty seconds in a minute (3600 seconds in a degree). So, at the equator, one second of latitude or longitude = 30.87624 meters. Projected coordinate reference systems¶ A two-dimensional coordinate reference system is commonly defined by two axes. At right angles to each other, they form a so-called XY-plane (see figure_projected_crs on the left side). The horizontal axis is normally labelled X, and the vertical axis is normally labelled Y. In a three-dimensional coordinate reference system, another axis, normally labelled Z, is added. It is also at right angles to the X and Y axes. The Z axis provides the third dimension of space (see figure_projected_crs on the right side). Every point that is expressed in spherical coordinates can be expressed as an X Y Z coordinate. A projected coordinate reference system in the southern hemisphere (south of the equator) normally has its origin on the equator at a specific Longitude. Since the places being mapped lie south of that origin, their Y-values are initially negative, which is why a false northing (described below for UTM) is usually added to keep coordinates positive. In the northern hemisphere (north of the equator) the origin is also the equator at a specific Longitude, and the Y-values increase northwards while the X-values increase to the East. In the following section, we describe a projected coordinate reference system, called Universal Transverse Mercator (UTM), often used for South Africa. Universal Transverse Mercator (UTM) CRS in detail¶ The Universal Transverse Mercator (UTM) coordinate reference system has its origin on the equator at a specific Longitude; the Y-values (northings) increase northwards and the X-values (eastings) increase to the East. The UTM CRS is a global map projection. This means it is generally used all over the world. But as already described in the section 'accuracy of map projections' above, the larger the area (for example South Africa), the more distortion of angular conformity, distance and area occurs. To avoid too much distortion, the world is divided into 60 equal zones that are all 6 degrees wide in longitude from East to West. The UTM zones are numbered 1 to 60, starting at the antimeridian (zone 1 at 180 degrees West longitude) and progressing East back to the antimeridian (zone 60 at 180 degrees East longitude), as shown in figure_utm_zones. As you can see in figure_utm_zones and figure_utm_for_sa, South Africa is covered by four UTM zones to minimize distortion. The zones are called UTM 33S, UTM 34S, UTM 35S and UTM 36S. The S after the zone means that the UTM zones are located south of the equator. Say, for example, that we want to define a two-dimensional coordinate within the Area of Interest (AOI) marked with a red cross in figure_utm_for_sa; the small helper sketched below shows how degrees, minutes, seconds and the zone number fit together before we work through the example.
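A minimal sketch (helper names are illustrative) of the two conversions used informally above: breaking degrees into minutes and seconds, and finding which 6-degree UTM zone a longitude falls in.

```python
def dms_to_decimal(degrees, minutes, seconds, negative=False):
    """Degrees/minutes/seconds to decimal degrees:
    60 minutes per degree, 3600 seconds per degree."""
    dd = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -dd if negative else dd

def utm_zone(longitude_deg):
    """UTM zone number: 60 zones of 6 degrees, zone 1 starting at 180 deg West."""
    return int((longitude_deg + 180.0) // 6) + 1

print(dms_to_decimal(27, 30, 0))    # 27.5 degrees
print(utm_zone(27.5))               # 35 -> UTM zone 35 covers this longitude
# Using 60 nautical miles per degree, one second at the equator is roughly:
print(round(dms_to_decimal(0, 0, 1) * 60 * 1852, 2), "m")   # ~30.87 m
```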
You can see, that the area is located within the UTM zone 35S. This means, to minimize distortion and to get accurate analysis results, we should use UTM zone 35S as the coordinate reference system. The position of a coordinate in UTM south of the equator must be indicated with the zone number (35) and with its northing (Y) value and easting (X) value in meters. The northing value is the distance of the position from the equator in meters. The easting value is the distance from the central meridian (longitude) of the used UTM zone. For UTM zone 35S it is 27 degrees East as shown in figure_utm_for_sa. Furthermore, because we are south of the equator and negative values are not allowed in the UTM coordinate reference system, we have to add a so called false northing value of 10,000,000 m to the northing (Y) value and a false easting value of 500,000 m to the easting (X) value. This sounds difficult, so, we will do an example that shows you how to find the correct UTM 35S coordinate for the Area of Interest. The northing (Y) value¶ The place we are looking for is 3,550,000 meters south of the equator, so the northing (Y) value gets a negative sign and is -3,550,000 m. According to the UTM definitions we have to add a false northing value of 10,000,000 m. This means the northing (Y) value of our coordinate is 6,450,000 m (-3,550,000 m + 10,000,000 m). The easting (X) value¶ First we have to find the central meridian (longitude) for the UTM zone 35S. As we can see in figure_utm_for_sa it is 27 degrees East. The place we are looking for is 85,000 meters West from the central meridian. Just like the northing value, the easting (X) value gets a negative sign, giving a result of -85,000 m. According to the UTM definitions we have to add a false easting value of 500,000 m. This means the easting (X) value of our coordinate is 415,000 m (-85,000 m + 500,000 m). Finally, we have to add the zone number to the easting value to get the correct value. As a result, the coordinate for our Point of Interest, projected in UTM zone 35S would be written as: 35 415,000 m E / 6,450,000 m N. In some GIS, when the correct UTM zone 35S is defined and the units are set to meters within the system, the coordinate could also simply appear as 415,000 6,450,000. As you can probably imagine, there might be a situation where the data you want to use in a GIS are projected in different coordinate reference systems. For example, you might get a vector layer showing the boundaries of South Africa projected in UTM 35S and another vector layer with point information about rainfall provided in the geographic coordinate system WGS 84. In GIS these two vector layers are placed in totally different areas of the map window, because they have different projections. To solve this problem, many GIS include a functionality called on-the-fly projection. It means, that you can define a certain projection when you start the GIS and all layers that you then load, no matter what coordinate reference system they have, will be automatically displayed in the projection you defined. This functionality allows you to overlay layers within the map window of your GIS, even though they may be in different reference systems. In QGIS, this functionality is applied by default. Common problems / things to be aware of¶ The topic map projection is very complex and even professionals who have studied geography, geodetics or any other GIS related science, often have problems with the correct definition of map projections and coordinate reference systems. 
Usually when you work with GIS, you already have projected data to start with. In most cases these data will be projected in a certain CRS, so you don't have to create a new CRS or even re-project the data from one CRS to another. That said, it is always useful to have an idea about what map projections and CRS mean. What have we learned?¶ Let's wrap up what we covered in this worksheet: Map projections portray the surface of the earth on a two-dimensional, flat piece of paper or computer screen. There are global map projections, but most map projections are created and optimized to project smaller areas of the earth's surface. Map projections are never absolutely accurate representations of the spherical earth. They show distortions of angular conformity, distance and area. It is impossible to preserve all these characteristics at the same time in a map projection. A Coordinate reference system (CRS) defines, with the help of coordinates, how the two-dimensional, projected map is related to real locations on the earth. There are two different types of coordinate reference systems: Geographic Coordinate Systems and Projected Coordinate Systems. On-the-fly projection is a functionality in GIS that allows us to overlay layers, even if they are projected in different coordinate reference systems. Now you try!¶ Here are some ideas for you to try with your learners: In the Project Properties dialog, check No projection (or unknown/non-Earth projection). Load two layers of the same area but with different projections. Let your pupils find the coordinates of several places on the two layers. You can show them that it is not possible to overlay the two layers. Then define the coordinate reference system as Geographic/WGS 84 inside the Project Properties dialog. Load the two layers of the same area again and let your pupils see how setting a CRS for the project (hence, enabling "on-the-fly" projection) works. You can open the Project Properties dialog in QGIS and show your pupils the many different Coordinate Reference Systems so they get an idea of the complexity of this topic. You can select different CRSs to display the same layer in different projections. Something to think about¶ If you don't have a computer available, you can show your pupils the principles of the three map projection families. Get a globe and paper and demonstrate how cylindrical, conical and planar projections work in general. With the help of a transparency sheet you can draw a two-dimensional coordinate reference system showing X and Y axes. Then, let your pupils define coordinates (X and Y values) for different places. Chang, Kang-Tsung (2006). Introduction to Geographic Information Systems. 3rd Edition. McGraw Hill. ISBN: 0070658986. DeMers, Michael N. (2005). Fundamentals of Geographic Information Systems. 3rd Edition. Wiley. ISBN: 9814126195. Galati, Stephen R. (2006). Geographic Information Systems Demystified. Artech House Inc. ISBN: 158053533X. The QGIS User Guide also has more detailed information on working with map projections in QGIS. In the section that follows we will take a closer look at Map Production.
Adding It Up: Helping Children Learn Mathematics. 7 DEVELOPING PROFICIENCY WITH OTHER NUMBERS. In this chapter, we look beyond the whole numbers at other numbers that are included in school mathematics in grades pre-K to 8, particularly in the upper grades. We first look at the rational numbers, which constitute what is undoubtedly the most challenging number system of elementary and middle school mathematics. Then we consider proportional reasoning, which builds on the ratio use of rational numbers. Finally, we examine the integers, a stepping stone to algebra. Rational Numbers Learning about rational numbers is more complicated and difficult than learning about whole numbers. Rational numbers are more complex than whole numbers, in part because they are represented in several ways (e.g., common fractions and decimal fractions) and used in many ways (e.g., as parts of regions and sets, as ratios, as quotients). There are numerous properties for students to learn, including the significant fact that the two numbers that compose a common fraction (numerator and denominator) are related through multiplication and division, not addition.1 This feature often causes misunderstanding when students first encounter rational numbers. Further, students are likely to have less out-of-school experience with rational numbers than with whole numbers. The result is a number system that presents great challenges to students and teachers. Moreover, how students become proficient with rational numbers is not as well understood as with whole numbers. Significant work has been done, however, on the teaching and learning of rational numbers, and several points can be made about developing proficiency with them. First, students do have informal notions of sharing, partitioning sets, and measuring on which instruction can build. Second, in conventional instructional programs, the proficiency with rational numbers that many students develop is uneven across the five strands, and the strands are often disconnected from each other. Third, developing proficiency with rational numbers depends on well-designed classroom instruction that allows extended periods of time for students to construct and sustain close connections among the strands. We discuss each of these points below. Then we examine how students learn to represent and operate with rational numbers. Using Informal Knowledge Students' informal notions of partitioning, sharing, and measuring provide a starting point for developing the concept of rational number.2 Young children appreciate the idea of "fair shares," and they can use that understanding to partition quantities into equal parts. Their experience in sharing equal amounts can provide an entrance into the study of rational numbers. In some ways, sharing can play the role for rational numbers that counting does for whole numbers.
In view of the preschooler's attention to counting and number that we noted in chapter 5, it is not surprising that initially many children are concerned more that each person gets an equal number of things than with the size of each thing.3 As they move through the early grades of school, they become more sensitive to the size of the parts as well.4 Soon after entering school, many students can partition quantities into equal shares corresponding to halves, fourths, and eighths. These fractions can be generated by successively partitioning by half, which is an especially fruitful procedure since one half can play a useful role in learning about other fractions.5 Accompanying their actions of partitioning in half, many students develop the language of "one half" to describe the actions. Not long after, many can partition quantities into thirds or fifths in order to share quantities fairly among three or five people. An informal understanding of rational number, which is built mostly on the notion of sharing, is a good starting point for instruction. The notion of sharing quantities and comparing sizes of shares can provide an entry point that takes students into the world of rational numbers.6 Equal shares, for example, opens the concept of equivalent fractions (e.g., If there are 6 children sharing 4 pizzas, how many pizzas would be needed for 12 children to receive the same amount?). It is likely, however, that an informal understanding of rational numbers is less robust and widespread than the corresponding informal understanding of whole numbers. For whole numbers, many young children enter school with sufficient proficiency to invent their own procedures for adding, subtracting, multiplying, and dividing. For rational numbers, in contrast, teachers need to play a more active and direct role in providing relevant experiences to enhance students' informal understanding and in helping them elaborate their informal understanding into a more formal network of concepts and procedures. The evidence suggests that carefully designed instructional programs can serve both of these functions quite well, laying the foundation for further progress.7 Discontinuities in Proficiency Proficiency with rational numbers, as with all mathematical topics, is signaled most clearly by the close intertwining of the five strands. Large-scale surveys of U.S. students' knowledge of rational number indicate that many students are developing some proficiency within individual strands.8 Often, however, these strands are not connected. Furthermore, the knowledge students acquire within strands is also disconnected. A considerable body of research describes this separation of knowledge.9 As we said at the beginning of the chapter, rational numbers can be expressed in various forms (e.g., common fractions, decimal fractions, percents), and each form has many common uses in daily life (e.g., a part of a region, a part of a set, a quotient, a rate, a ratio).10 One way of describing this complexity is to observe that, from the student's point of view, a rational number is not a single entity but has multiple personalities.
The scheme that has guided research on rational number over the past two decades11 identifies the following interpretations for any rational number, say 3/4: (a) a part-whole relation (3 out of 4 equal-sized shares); (b) a quotient (3 divided by 4); (c) a measure (3/4 of the way from the beginning of the unit to the end); (d) a ratio (3 red cars for every 4 green cars); and (e) an operation that enlarges or reduces the size of something (3/4 of 12). The task for students is to recognize these distinctions and, at the same time, to construct relations among them that generate a coherent concept of rational number.12 Clearly, this process is lengthy and multifaceted.

Instructional practices that tend toward premature abstraction and extensive symbolic manipulation lead students to have severe difficulty in representing rational numbers with standard written symbols and using the symbols appropriately.13 This outcome is not surprising, because a single rational number can be represented with many different written symbols (e.g., 3/5, 6/10, 0.6, 0.60, 60%). Instructional programs have often treated this complexity as simply a “syntactic” translation problem: One written symbol had to be translated into another according to a sequence of rules. Different rules have often been taught for each translation situation. For example, “To change a common fraction to a decimal fraction, divide the numerator by the denominator.” But the symbolic representation of rational numbers poses a “semantic” problem—a problem of meaning—as well. Each symbol representation means something. Current instruction often gives insufficient attention to developing the meanings of different rational number representations and the connections among them. The evidence for this neglect is that a majority of U.S. students have learned rules for translating between forms but understand very little about what quantities the symbols represent and consequently make frequent and nonsensical errors.14 This is a clear example of the lack of proficiency that results from pushing ahead within one strand but failing to connect what is being learned with other strands. Rules for manipulating symbols are being memorized, but students are not connecting those rules to their conceptual understanding, nor are they reasoning about the rules. Another example of disconnection among the strands of proficiency is students’ tendency to compute with written symbols in a mechanical way without considering what the symbols mean. Two simple examples illustrate the point. First, recall (from chapter 4) the result from the National Assessment of Educational Progress (NAEP)15 showing that more than half of U.S. eighth graders chose 19 or 21 as the best estimate of 12/13 + 7/8. These choices do not make sense if students understand what the symbols mean and are reasoning about the quantities represented by the symbols. Another survey of students’ performance showed that the most common error for the addition problem 4+.3=? is .7, which is given by 68% of sixth graders and 51% of fifth and seventh graders.16 Again, the errors show that many students have learned rules for manipulating symbols without understanding what those symbols mean or why the rules work. Many students are unable to reason appropriately about symbols for rational numbers and do not have the strategic competence that would allow them to catch their mistakes.
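As a quick check that the five interpretations above all name the same number, and that the NAEP estimation item is close to 2, here is a minimal sketch using Python's fractions module (our example, not the report's; the variable names are illustrative):

    from fractions import Fraction

    # The same number 3/4 under the interpretations listed above.
    part_whole = 3 * Fraction(1, 4)         # (a) 3 of 4 equal-sized shares
    quotient   = Fraction(3) / Fraction(4)  # (b) 3 divided by 4
    measure    = Fraction(3, 4)             # (c) 3/4 of the way along one unit
    ratio      = Fraction(3, 4)             # (d) 3 red cars for every 4 green cars
    assert part_whole == quotient == measure == ratio == Fraction(3, 4)

    # (e) the operator interpretation: 3/4 "of" 12 shrinks 12 to 9.
    print(Fraction(3, 4) * 12)              # 9

    # The NAEP estimation item: 12/13 + 7/8 is close to 1 + 1 = 2,
    # so 19 and 21 are not sensible estimates.
    print(float(Fraction(12, 13) + Fraction(7, 8)))   # about 1.80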
Supporting Connections

Of all the ways in which rational numbers can be interpreted and used, the most basic is the simplest—rational numbers are numbers. That fact is so fundamental that it is easily overlooked. A rational number like 3/4 is a single entity just as the number 5 is a single entity. Each rational number holds a unique place (or is a unique length) on the number line (see chapter 3). As a result, the entire set of rational numbers can be ordered by size, just as the whole numbers can. This ordering is possible even though between any two rational numbers there are infinitely many rational numbers, in drastic contrast to the whole numbers. It may be surprising that, for most students, to think of a rational number as a number—as an individual entity or a single point on a number line—is a novel idea.17 Students are more familiar with rational numbers in contexts like parts of a pizza or ratios of hits to at-bats in baseball. These everyday interpretations, although helpful for building knowledge of some aspects of rational number, are an inadequate foundation for building proficiency. The difficulty is not just due to children’s limited experience. Even the interpretations ordinarily given by adults to various forms of rational numbers, such as percent, do not lead easily to the conclusion that rational numbers are numbers.18 Further, the way common fractions are written (e.g., 3/4) does not help students see a rational number as a distinct number. After all, 3/4 looks just like one whole number over another, and many students initially think of it as two different numbers, a 3 and a 4. Research has verified what many teachers have observed, that students continue to use properties they learned from operating with whole numbers even though many whole number properties do not apply to rational numbers. With common fractions,19 for example, students may reason that 3/8 is larger than 3/7 because 8 is larger than 7. Or they may believe that two different fractions are equal because in both the difference between numerator and denominator is 1. With decimal fractions,20 students may say .25 is larger than .7 because 25 is larger than 7. Such inappropriate extensions of whole number relationships, many based on addition, can be a continuing source of trouble when students are learning to work with fractions and their multiplicative relationships.21 The task for instruction is to use, rather than to ignore, the informal knowledge of rational numbers that students bring with them and to provide them with appropriate experiences and sufficient time to develop meaning for these new numbers and meaningful ways of operating with them. Systematic errors can best be regarded as useful diagnostic tools for instruction since they more often represent incomplete rather than incorrect knowledge.22 From the current research base, we can make several observations about the kinds of learning opportunities that instruction must provide students if they are to develop proficiency with rational numbers. These observations address both representing rational numbers and computing with them.

Representing Rational Numbers

As with whole numbers, the written notations and spoken words used for decimal and common fractions contribute to—or at least do not help correct—the many kinds of errors students make with them. Both decimals and common fractions use whole numbers in their notations.
Nothing in the notation or the words used conveys their meaning as fractured parts. The English words used for fractions are the same words used to tell order in a line: fifth in line and three fifths (for 3/5). In contrast, in Chinese, 3/5 is read “out of 5 parts (take) 3.” Providing students with many experiences in partitioning quantities into equal parts using concrete models, pictures, and meaningful contexts can help them create meaning for fraction notations. Introducing the standard notation for common fractions and decimals must be done with care, ensuring that students are able to connect the meanings already developed for the numbers with the symbols that represent them. Research does not prescribe one best set of learning activities or one best instructional method for rational numbers. But some sequences of activities do seem to be more effective than others for helping students develop a conceptual understanding of symbolic representations and connect it with the other strands of proficiency.23 The sequences that have been shown to promote mathematical proficiency differ from each other in a number of ways, but they share some similarities. All of them spend time at the outset helping students develop meaning for the different forms of representation. Typically, students work with multiple physical models for rational numbers as well as with other supports such as pictures, realistic contexts, and verbal descriptions. Time is spent helping students connect these supports with the written symbols for rational numbers. In one such instructional sequence, fourth graders received 20 lessons introducing them to rational numbers.24 Almost all the lessons focused on helping the students connect the various representations of rational number with concepts of rational number that they were developing. Unique to this program was the sequence in which the forms were introduced: percents, then decimal fractions, and then common fractions. Because many children in the fourth grade have considerable informal knowledge of percents, percents were used as the starting point. Students were asked to judge, for example, the relative fullness of a beaker (e.g., 75%), and the relative height of a tube of liquid (e.g., 30%). After a variety of similar activities, the percent representations were used to introduce the decimal fractions and, later, the common fractions. Compared with students in a conventional program, who spent less time developing meaning for the representations and more time practicing computation, students in the experimental program demonstrated higher levels of adaptive reasoning, conceptual understanding, and strategic competence, with no loss of computational skill. This finding illustrates one of our major themes: Progress can be made along all strands if they remain connected. Another common feature of learning activities that help students understand and use the standard written symbols is the careful attention the activities devote to the concept of unit.25 Many conventional curricula introduce rational numbers as common fractions that stand for part of a whole, but little attention is given to the whole from which the rational number extracts its meaning. For example, many students first see a fraction as, say, 3/4 of a pizza. In this interpretation the amount of pizza is determined by the fractional part and by the size of the pizza.
Hence, three fourths of a medium pizza is not the same amount of pizza as three fourths of a large pizza, although it may be the same number of pieces. Lack of attention to the nature of the unit or whole may explain many of the misconceptions that students exhibit. A sequence of learning activities that focus directly on the whole unit in representing rational numbers comes from an experimental curriculum in Russia.26 In this sequence, rational numbers are introduced in the early grades as ratios of quantities to the unit of measure. For example, a piece of string is measured by a small piece of tape and found to be equivalent to five copies of the tape. Children express the result as “string/tape=5.” Rational numbers appear quite naturally when the quantity is not measured by the unit an exact number of times. The leftover part is then represented, first informally and then as a fraction of the unit. With this approach, the size of the unit always is in the foreground. The evidence suggests that students who engage in these experiences develop coherent meanings for common fractions, meanings that allow them to reason sensibly about fractions.27

Computing with Rational Numbers

As with representing rational numbers, many students need instructional support to operate appropriately with rational numbers. Adding, subtracting, multiplying, and dividing rational numbers require that they be seen as numbers because in elementary school these operations are defined only for numbers. That is, the principles on which computation is based make sense only if common fractions and decimal fractions are understood as representing numbers. Students may think of a fraction as part of a pizza or as a batting average, but such interpretations are not enough for them to understand what is happening when computations are carried out. The trouble is that many students have not developed a meaning for the symbols before they are asked to compute with rational numbers. Proficiency in computing with rational numbers requires operating with at least two different representations: common fractions and finite decimal fractions. There are important conceptual similarities between the rules for computing with both of these forms (e.g., combine those terms measured with the same unit when adding and subtracting). However, students must learn how those conceptual similarities play out in each of the written symbol systems. Procedural fluency for arithmetic with rational numbers thus requires that students understand the meaning of the written symbols for both common fractions and finite decimal fractions.

What can be learned from students’ errors?

Research reveals the kinds of errors that students are likely to make as they begin computing with common fractions and finite decimals. Whether the errors are the consequence of impoverished learning of whole numbers or insufficiently developed meaning for rational numbers, effective instruction with rational numbers needs to take these common errors into account. Some of the errors occur when students apply to fractions poorly understood rules for calculating with whole numbers. For example, they learn to “line up the numbers on the right” when they are adding and subtracting whole numbers. Later, they may try to apply this rule to decimal fractions, probably because they did not understand why the rule worked in the first place and because decimal fractions look a lot like whole numbers.
This confusion leads many students to get .61 when adding 1.5 and .46, for example.28 It is worth pursuing the above example a bit further. Notice that the rule “line up the numbers on the right” and the new rule for decimal fractions “line up the decimal points” are, on the surface, very different rules. They prescribe movements of digits in different-sounding ways. At a deeper level, however, they are exactly the same. Both versions of the rule result in aligning digits measured with the same unit—digits with the same place value (tens, ones, tenths, etc.). This deeper level of interpretation is, of course, the one that is more useful. When students know a rule only at a superficial level, they are working with symbols, rules, and procedures in a routine way, disconnected from strands such as adaptive reasoning and conceptual understanding. But when students see the deeper level of meaning for a procedure, they have connected the strands together. In fact, seeing the second level is a consequence of connecting the strands. This example illustrates once more why connecting the strands is the key to developing proficiency. A second example of a common error and one that also can be traced to previous experience with whole numbers is that “multiplying makes larger” and “dividing makes smaller.”29 These generalizations are not true for the full set of rational numbers. Multiplying by a rational number less than 1 means taking only a part of the quantity being multiplied, so the result is less than the original quantity (e.g., 3/4 of 12 is 9, which is less than 12). Likewise, dividing by a rational number less than 1 produces a quantity larger than either quantity in the original problem (e.g., 6 divided by 2/3 is 9). As with the addition and subtraction of rational numbers, there are important conceptual similarities between whole numbers and rational numbers when students learn to multiply and divide. These similarities are often revealed by probing the deeper meaning of the operations. In the division example above, notice that to find the answer to 6÷2=? and to 6÷(2/3)=?, the same question can be asked: How many [2s or 2/3s] are in 6? The similarities are not apparent in the algorithms for manipulating the symbols. Therefore, if students are to connect what they are learning about rational numbers with what they already understand about whole numbers, they will need to do so through other kinds of activities. One helpful approach is to embed the calculation in a realistic problem. Students can then use the context to connect their previous work with whole numbers to the new situations with rational numbers. An example is the following problem: I have six cups of sugar. A recipe calls for 2/3 of a cup of sugar. How many batches of the recipe can I make? Since the size of the parts is less than one whole, the number of batches will necessarily be larger than the six (there are nine 2/3s in 6). Useful activities might include drawing pictures of the division calculation, describing solution methods, and explaining why the answer makes sense. Simply teaching the rule “invert and multiply” leads to the same sort of mechanical manipulation of symbols that results from just telling students to “line up the decimal points.”

What can be learned from conventional and experimental instruction?
Conventional instruction on rational number computation tends to be rule based.30 Classroom activities emphasize helping students become quick and accurate in executing written procedures by following rules. The activities often begin by stating a rule or algorithm (e.g., “to multiply two fractions, multiply the numerators and multiply the denominators”), showing how it works on several examples (sometimes just one), and asking students to practice it on many similar problems. Researchers express concern that this kind of learning can be “highly dependent on memory and subject to deterioration.”31 This “deterioration” results when symbol manipulation is emphasized to the relative exclusion of conceptual understanding and adaptive reasoning. Students learn that it is not important to understand why the procedure works but only to follow the prescribed steps to reach the correct answer. This approach breaks the incipient connections between the strands of proficiency, and, as the breaks increase, proficiency is thwarted. A number of studies have documented the results of conventional instruction.32 One study, for example, found that only 45% of a random sample of 20 sixth graders interviewed could add fractions correctly.33 Equally disturbing was that fewer than 10% of them could explain how one adds fractions even though all had heard the rules for addition, had practiced the rules on many problems, and sometimes could execute the rules correctly. These results, according to the researchers, were representative of hundreds of interviews conducted with sixth, seventh, and ninth graders. The results point to the need for instructional materials that support teachers and students so that they can explain why a procedure works rather than treating it as a sequence of steps to be memorized. Many researchers who have studied what students know about operations with fractions or decimals recommend that instruction emphasize conceptual understanding from the beginning.34 More specifically, say these researchers, instruction should build on students’ intuitive understanding of fractions and use objects or contexts that help students make sense of the operations. The rationale for that approach is that students need to understand the key ideas in order to have something to connect with procedural rules. For example, students need to understand why the sum of two fractions can be expressed as a single number only when the parts are of the same size. That understanding can lead them to see the need for constructing common denominators. One of the most challenging tasks confronting those who design learning environments for students (e.g., curriculum developers, teachers) is to help students learn efficient written algorithms for computing with fractions and decimals. The most efficient algorithms often do not parallel students’ informal knowledge or the meaning they create by drawing diagrams, manipulating objects, and so on.
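A small sketch of the two ideas just raised, using Python's fractions module (ours, not the report's): expressing addends in same-sized parts before adding, and giving meaning to division by a fraction via the recipe problem discussed earlier.

    from fractions import Fraction

    # Adding fractions makes sense once both addends are written in
    # parts of the same size, i.e., over a common denominator.
    a, b = Fraction(1, 2), Fraction(1, 3)
    common = a.denominator * b.denominator               # 6 is a common denominator
    parts_a = a.numerator * (common // a.denominator)    # 3 sixths
    parts_b = b.numerator * (common // b.denominator)    # 2 sixths
    print(f"{parts_a}/{common} + {parts_b}/{common} = {parts_a + parts_b}/{common}")  # 3/6 + 2/6 = 5/6
    print(a + b)                                         # Fraction(5, 6)

    # The recipe problem: how many 2/3-cup batches come from 6 cups of sugar?
    # "How many 2/3s are in 6?" gives more than 6, because each part is
    # smaller than one whole cup.
    print(Fraction(6) / Fraction(2, 3))                  # 9
    # "Invert and multiply" gives the same result, but the question above
    # is what makes the answer sensible.
    print(Fraction(6) * Fraction(3, 2))                  # 9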
Several instructional programs have been devised that use problem situations and build on algorithms invented by students.35 Students in these programs were able to develop meaningful and reasonably efficient algorithms for operating with fractions, even when the formal algorithms were not presented.36 It is not yet clear, however, what sequence of activities can support students’ meaningful learning of the less transparent but more efficient formal algorithms, such as “invert and multiply” for dividing fractions. Although there is only limited research on instructional programs for developing proficiency with computations involving rational numbers, it seems clear that instruction focused solely on symbolic manipulation without understanding is ineffective for most students. It is necessary to correct that imbalance by paying more attention to conceptual understanding as well as the other strands of proficiency and by helping students connect them.

Proportional Reasoning

Proportions are statements that two ratios are equal. These statements play an important role in mathematics and are formally introduced in middle school. Understanding the underlying relationships in a proportional situation and working with these relationships has come to be called proportional reasoning.37 Considerable research has been conducted on the features of proportional reasoning and how students develop it.38 Proportional reasoning is based, first, on an understanding of ratio. A ratio expresses a mathematical relationship that involves multiplication, as in $2 for 3 balloons or 2/3 of a dollar for one balloon. A proportion, then, is a relationship between relationships. For example, a proportion expresses the fact that $2 for 3 balloons is in the same relationship as $6 for 9 balloons. Ratios are often changed to unit ratios by dividing. For example, the unit ratio of 2/3 dollar per balloon is obtained by “dividing” $2 by 3 balloons. … of proportional reasoning can create difficulties for students. The aspects of proportional reasoning that must be developed can be supported through exploring proportional (and nonproportional) situations in a variety of problem contexts using concrete materials or situations in which students collect data, build tables, and determine the relationships between the number pairs (ratios) in the tables.50 When 187 seventh-grade students with different curricular experiences were presented with a sequence of realistic rate problems, the students in the reform curricula considerably outperformed a comparison group of students (53% versus 28%) in providing correct answers with correct support work.51 These students were part of the field trials for a new middle school curriculum in which they were encouraged to develop their own procedures through collaborative problem-solving activities. The comparison students had more traditional, teacher-directed instructional experiences. Proportional reasoning is complex and clearly needs to be developed over several years.52 One simple implication from the research suggests that presenting the cross-multiplication algorithm before students understand proportions and can reason about them leads to the same kind of separation between the strands of proficiency that we described earlier for other topics. But more research is needed to identify the sequences of activities that are most helpful for moving from well-understood but less efficient procedures to those that are more efficient.
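The balloon example above can be worked with unit rates rather than a memorized cross-multiplication rule. A minimal Python sketch (ours, not the report's):

    from fractions import Fraction

    # Unit rate: $2 for 3 balloons is 2/3 of a dollar per balloon.
    unit_rate = Fraction(2, 1) / 3           # Fraction(2, 3)

    # A proportion asserts that two ratios are equal:
    # $2 for 3 balloons is in the same relationship as $6 for 9 balloons.
    print(Fraction(2, 3) == Fraction(6, 9))  # True

    # A missing-value problem solved by reasoning with the unit rate:
    # what do 12 balloons cost at the same rate?
    print(unit_rate * 12)                    # 8 (dollars)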
Ratios and proportions, like fractions, decimals, and percents, are aspects of what have been called multiplicative structures.53 These are closely related ideas that no doubt develop together, although they are often treated as separate topics in the typical school curriculum. Reasoning about these ideas likely interacts, but it is not well understood how this interaction develops. Much more work needs to be done on helping students integrate their knowledge of these components of multiplicative structures.

Integers

The set of integers comprises the positive and negative whole numbers and zero or, expressed another way, the whole numbers and their inverses, often called their opposites (see Chapter 3). The set of integers, like the set of whole numbers, is a subset of the rational numbers. Compared with the research on whole numbers and even on noninteger rational numbers, there has been relatively little research on how students acquire an understanding of negative numbers and develop proficiency in operating with them. A half-century ago students did not encounter negative numbers until they took high school algebra. Since then, integers have been introduced in the middle grades and even in the elementary grades. Some educators have argued that integers are easier for students than fractions and decimals and therefore should be introduced first. This approach has been tried, but there is very little research on the long-term effects of this alternative sequencing of topics.

Concept of Negative Numbers

Even young children have intuitive or informal knowledge of nonpositive quantities prior to formal instruction.54 These notions often involve action-based concepts like those associated with temperature, game moves, or other spatial and quantitative situations. For example, in some games there are moves that result in points being lost, which can lead to scores below zero or “in the hole.” Various metaphors have been suggested as approaches for introducing negative numbers, including elevators, thermometers, debts and assets, losses and gains, hot air balloons, postman stories, pebbles in a bag, and directed arrows on a number line.55 Many of the physical metaphors for introducing integers have been criticized because they do not easily support students’ understanding of the operations on integers (other than addition).56 But some studies have demonstrated the value of using these metaphors, especially for introducing negative numbers.57 Students do appear to be capable of understanding negative numbers far earlier than was once thought. Although more research is needed on the metaphors and models that best support students’ conceptual understanding of negative numbers, there already is enough information to suggest that a variety of metaphors and models can be used effectively.

Operations with Integers

Research on learning to add, subtract, multiply, and divide integers is limited. In the past, students often learned the “rules of signs” (e.g., the product of a positive and negative number is negative) without much understanding.
In part, perhaps, because instruction has not found ways to make the learning meaningful, some secondary and college students still have difficulty working with negative numbers.58 Alternative approaches, using the models mentioned earlier, have been tried with various degrees of success.59 A complete set of appropriate learning activities with integers has not been identified, but there are some promising elements that should be explored further. Students generally perform better on problems posed in the context of a story (debts and assets, scores and forfeits) or through movements on a number line than on the same problems presented solely as formal equations.60 This result suggests, as for other number domains, that stories and other conceptual structures such as a number line can be used effectively as the context in which students begin their work and develop meaning for the operations. Furthermore, there are some approaches that seem to minimize commonly reported errors.61 In general, approaches that use an appropriate model of integers and operations on integers, and that spend time developing these and linking them to the symbols, offer the most promise.

Beyond Whole Numbers

Although the research provides a less complete picture of students’ developing proficiency with rational numbers and integers than with whole numbers, several important points can be made. First, developing proficiency is a gradual and prolonged process. Many students acquire useful informal knowledge of fractions, decimals, ratios, percents, and integers through activities and experiences outside of school, but that knowledge needs to be made more explicit and extended through carefully designed instruction. Given current learning patterns, effective instruction must prepare for interferences arising from students’ superficial knowledge of whole numbers. The unevenness many students show in developing proficiency that we noted with whole numbers seems especially pronounced with rational numbers, where progress is made on different fronts at different rates. The challenge is to engage students throughout the middle grades in learning activities that support the integration of the strands of proficiency. A second observation is that doing just that—integrating the strands of proficiency—is an even greater challenge for rational numbers than for whole numbers. Currently, many students learn different aspects of rational numbers as separate and isolated pieces of knowledge. For example, they fail to see the relationships between decimals, fractions, and percents, on the one hand, and whole numbers, on the other, or between integers and whole numbers. Also, connections among the strands of proficiency are often not made. Numerous studies show that with common fractions and decimals, especially, conceptual understanding and computational procedures are not appropriately linked. Further, students can use their informal knowledge of proportionality or rational numbers strategically to solve problems but are unable to represent and solve the same problem formally. These discontinuities are of great concern because the research we have reviewed indicates that real progress along each strand and within any single topic is exceedingly difficult without building connections between them.
A third issue concerns the level of procedural fluency that should be required for arithmetic with decimals and common fractions. Decimal fractions are crucial in science, in metric measurement, and in more advanced mathematics, so it is important for students to be computationally fluent—to understand how and why computational procedures work, including being able to judge the order-of-magnitude accuracy of calculator-produced answers. Some educators have argued that common fractions are no longer essential in school mathematics because digital electronics have transformed almost all numerical transactions into decimal fractions. Technological developments certainly have increased the importance of decimals, but common fractions are still important in daily life and in their own right as mathematical objects, and they play a central role in the development of more advanced mathematical ideas. For example, computing with common fractions sets the stage for computing with rational expressions in algebra. It is important, therefore, for students to develop sound meanings for common fractions and to be fluent with ordering fractions, finding equivalent fractions, and using unit rates. Students should also develop procedural fluency for computations with “manageable” fractions. However, the rapid execution of paper-and-pencil computation algorithms for less frequently used fractions is unnecessary today. Finally, we cannot emphasize too strongly the simple fact that students need to be fully proficient with rational numbers and integers. This proficiency forms the basis for much of advanced mathematical thinking, as well as the understanding and interpretation of daily events. The level at which many U.S. students function with rational numbers and integers is unacceptable. The disconnections that many students exhibit among their conceptual understanding, procedural fluency, strategic competence, and adaptive reasoning pose serious barriers to their progress in learning and using mathematics. Evidence from experimental programs in the United States and from the performance of students in other countries suggests that U.S. middle school students are capable of learning more about rational numbers and integers, with deeper levels of understanding.

Notes

1. See Harel and Confrey, 1994. Rational numbers, ratios, and proportions, which on the surface are about division, are called multiplicative concepts because any division problem can be rephrased as multiplication. See Chapter 3. 2. Behr, Lesh, Post, and Silver, 1983; Confrey, 1994, 1995; Empson, 1999; Kieren, 1992; Mack, 1990, 1995; Pothier and Sawada, 1983; Streefland, 1991, 1993. 3. Hiebert and Tonnessen, 1978; Pothier and Sawada, 1983. 4. Empson, 1999; Pothier and Sawada, 1983. 5. Confrey, 1994; Pothier and Sawada, 1989. 6. Confrey, 1994; Streefland, 1991, 1993. 7. Cramer, Behr, Post, and Lesh, 1997; Empson, 1999; Mack, 1995; Morris, in press; Moss and Case, 1999; Streefland, 1991, 1993. 8. Kouba, Zawojewski, and Strutchens, 1997; Wearne and Kouba, 2000. 9. Behr, Lesh, Post, and Silver, 1983; Behr, Wachsmuth, Post, and Lesh, 1984; Bezuk and Bieck, 1993; Hiebert and Wearne, 1985; Mack, 1990, 1995; Post, Wachsmuth, Lesh, and Behr, 1985; Streefland, 1991, 1993. 10. Kieren, 1976. 11. Kieren, 1976, 1980, 1988. 12.
Students not only should “construct relations among them” but should also eventually have some grasp of what is entailed in these relations—for example, that Interpretation D is a contextual instance of Interpretation E: you multiply the number of green cars by 3/4 to get the number of red cars. Likewise, thinking of 3/4 as three times 1/4 (Interpretation A) and thinking of it as 3 divided by 4 are linked by an equation that is basically the associative law for multiplication. 13. Behr, Wachsmuth, Post, and Lesh, 1984; Hiebert and Wearne, 1986. 14. Hiebert and Wearne, 1986; Resnick, Nesher, Leonard, Magone, Omanson, and Peled, 1989. 15. Carpenter, Corbitt, Kepner, Lindquist, and Reys, 1981. 16. Hiebert and Wearne, 1986. 17. Behr, Lesh, Post, and Silver, 1983. 18. Davis, 1988. 19. Behr, Wachsmuth, Post, and Lesh, 1984. 20. Resnick, Nesher, Leonard, Magone, Omanson, and Peled, 1989. 21. Behr, Wachsmuth, Post, and Lesh, 1984. 22. Resnick, Nesher, Leonard, Magone, Omanson, and Peled, 1989. 23. Cramer, Post, Henry, and Jeffers-Ruff, in press; Hiebert and Wearne, 1988; Hunting, 1983; Mack, 1990, 1995; Morris, in press; Moss and Case, 1999; Hiebert, Wearne, and Taber, 1991. 24. Moss and Case, 1999. 25. Behr, Harel, Post, and Lesh, 1992. 26. Davydov and Tsvetkovich, 1991; Morris, in press; Schmittau, 1993. 27. Morris, in press. 28. Hiebert and Wearne, 1986. 29. Bell, Fischbein, and Greer, 1984; Fischbein, Deri, Nello, and Marino, 1985. 30. Hiebert and Wearne, 1985. 31. Kieren, 1988, p. 178. 32. Mack, 1990; Peck and Jencks, 1981; Wearne and Kouba, 2000. 33. Peck and Jencks, 1981. 34. Behr, Lesh, Post, and Silver, 1983; Bezuk and Bieck, 1993; Bezuk and Cramer, 1989; Hiebert and Wearne, 1986; Kieren, 1988; Mack, 1990; Peck and Jencks, 1981; Streefland, 1991, 1993. 35. Cramer, Behr, Post, and Lesh, 1997; Huinker, 1998; Lappan, Fey, Fitzgerald, Friel, and Phillips, 1996; Streefland, 1991. 36. Huinker, 1998; Lappan and Bouck, 1998. 37. Lesh, Post, and Behr, 1988. 38. Tourniaire and Pulos, 1985. 39. Behr, Harel, Post, and Lesh, 1992; Cramer, Behr, and Bezuk, 1989. 40. Post, Behr, and Lesh, 1988. 41. Lesh, Post, and Behr, 1988. 42. Wearne and Kouba, 2000. 43. Ahl, Moore, and Dixon, 1992; Dixon and Moore, 1996. 44. Lamon, 1993, 1995. 45. Lamon, 1993. 46. Lamon, 1995. 47. The term composite unit refers to thinking of 3 balloons (and hence $2) as a single entity. The related term compound unit is used in science to refer to units such as “miles/hour,” or in this case “dollars per balloon.” 48. Lamon, 1993, 1994. 49. Heller, Ahlgren, Post, Behr, and Lesh, 1989; Langrall and Swafford, 2000. 50. Cramer, Post, and Currier, 1993; Kaput and West, 1994. 51. Ben-Chaim, Fey, Fitzgerald, Benedetto, and Miller, 1998; Heller, Ahlgren, Post, Behr, and Lesh, 1989. 52. Behr, Harel, Post, and Lesh, 1992; Karplus, Pulos, and Stage, 1983. 53. Vergnaud, 1983. 54. Hativa and Cohen, 1995. 55. English, 1997. See also Crowley and Dunn, 1985. 56. Fischbein, 1987, ch. 8. 57. Duncan and Saunders, 1980; Moreno and Mayer, 1999; Thompson, 1988. 58. Bruno, Espinel, and Martinon, 1997; Kuchemann, 1980. 59. Arcavi and Bruckheimer, 1981; Carson and Day, 1995; Davis, 1990; Liebeck, 1990; Human and Murray, 1987. 60. Moreno and Mayer, 1999; Mukhopadhyay, Resnick, and Schauble, 1990. 61. Duncan and Saunders, 1980; Thompson, 1988; Thompson and Dreyfus, 1988.

References

Ahl, V.A., Moore, C.F., & Dixon, J.A. (1992).
Development of intuitive and numerical proportional reasoning. Cognitive Development, 7, 81–108. Arcavi, A., & Bruckheimer, M. (1981). How shall we teach the multiplication of negative numbers? Mathematics in School, 10, 31–33. Behr, M., Harel, G., Post, T., & Lesh, R. (1992). Rational number, ratio, and proportion. In D.Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 296–333). New York: Macmillan. Behr, M.J., Lesh, R., Post, T.R., & Silver, E.A. (1983). Rational number concepts. In R.Lesh & M.Landau (Eds.), Acquisition of mathematics concepts and processes (pp. 91–126). New York: Academic Press. Behr, M.J., Wachsmuth, I., Post, T.R., & Lesh, R. (1984). Order and equivalence of rational numbers: A clinical teaching experiment. Journal for Research in Mathematics Education, 15, 323–341. Bell, A.W., Fischbein, E., & Greer, B. (1984). Choice of operation in verbal arithmetic problems: The effects of number size, problem structure and content. Educational Studies in Mathematics, 15, 129–147. Ben-Chaim, D., Fey, J.T., Fitzgerald, W.M., Benedetto, C., & Miller, J. (1998). Proportional reasoning among 7th grade students with different curricular experiences. Educational Studies in Mathematics, 36, 247–273. Bezuk, N.D., & Bieck, M. (1993). Current research on rational numbers and common fractions: Summary and implications for teachers. In D.T.Owens (Ed.), Research ideas for the classroom: Middle grades mathematics (pp. 118–136). New York: Macmillan. Bezuk, N., & Cramer, K. (1989). Teaching about fractions: What, when, and how? In P.Trafton (Ed.), New directions for elementary school mathematics (1989 Yearbook of the National Council of Teachers of Mathematics, pp. 156–167). Reston, VA: NCTM. Bruno, A., Espinel, M.C., & Martinon, A. (1997). Prospective teachers solve additive problems with negative numbers. Focus on Learning Problems in Mathematics, 19, 36–55. Carpenter, T.P., Corbitt, M.K., Kepner, H.S., Jr., Lindquist, M.M., & Reys, R.E. (1981). Results from the second mathematics assessment of the National Assessment of Educational Progress. Reston, VA: National Council of Teachers of Mathematics. Carson, C.L., & Day, J. (1995). Annual report on promising practices: How the algebra project eliminates the “game of signs” with negative numbers. San Francisco: Far West Lab for Educational Research and Development. (ERIC Document Reproduction Service No. ED 394 828). Confrey, J. (1994). Splitting, similarity, and the rate of change: New approaches to multiplication and exponential functions. In G.Harel & J.Confrey (Eds.), The development of multiplicative reasoning in the learning of mathematics (pp. 293–332). Albany: State University of New York Press. Confrey, J. (1995). Student voice in examining “splitting” as an approach to ratio, proportion, and fractions. In L.Meira & D.Carraher (Eds.), Proceedings of the nineteenth international conference for the Psychology of Mathematics Education (Vol. 1, pp. 3–29). Recife, Brazil: Federal University of Pernambuco. (ERIC Document Reproduction Service No. ED 411 134). Cramer, K., Behr, M., & Bezuk, N. (1989). Proportional relationships and unit rates. Mathematics Teacher, 82, 537–544. Cramer, K., Behr, M., Post, T., & Lesh, R. (1997). Rational Numbers Project: Fraction lessons for the middle grades, level 1 and level 2. Dubuque, IA: Kendall Hunt. Cramer, K., Post, T., & Currier, S. (1993). Learning and teaching ratio and proportion: Research implications.
In D.T.Owens (Ed.), Research ideas for the classroom: Middle grades mathematics (pp. 159–178). New York: Macmillan. Cramer, K., Post, T., Henry, A., & Jeffers-Ruff, L. (in press). Initial fraction learning of fourth and fifth graders using a commercial textbook or the Rational Number Project Curriculum. Journal for Research in Mathematics Education. Crowley, M.L., & Dunn, K.A. (1985). On multiplying negative numbers. Mathematics Teacher, 78, 252–256. Davydov, V.V., & Tsvetkovich, A.H. (1991). On the objective origin of the concept of fractions. Focus on Learning Problems in Mathematics, 13, 13–64. Davis, R.B. (1988). Is a “percent” a number? Journal of Mathematical Behavior, 7(1), 299–302. Davis, R.B. (1990). Discovery learning and constructivism. In R.B.Davis, C.A.Maher, & N.Noddings (Eds.), Constructivist views on the teaching and learning of mathematics (Journal for Research in Mathematics Education Monograph No. 4, pp. 93–106). Reston, VA: National Council of Teachers of Mathematics. Dixon, J.A., & Moore, C.F. (1996). The developmental role of intuitive principles in choosing mathematical strategies. Developmental Psychology, 32, 241–253. Duncan, R.K., & Saunders, W.J. (1980). Introduction to integers. Instructor, 90(3), 152–154. Empson, S.B. (1999). Equal sharing and shared meaning: The development of fraction concepts in a first-grade classroom. Cognition and Instruction, 17, 283–342. English, L.D. (Ed.). (1997). Mathematical reasoning: Analogies, metaphors, and images. Mahwah, NJ: Erlbaum. Fischbein, E. (1987). Intuition in science and mathematics. Dordrecht, The Netherlands: Reidel. Fischbein, E., Deri, M., Nello, M.S., & Marino, M.S. (1985). The role of implicit models in solving problems in multiplication and division. Journal for Research in Mathematics Education, 16, 3–17. Harel, G., & Confrey, J. (1994). The development of multiplicative reasoning in the learning of mathematics. Albany: State University of New York Press. Hativa, N., & Cohen, D. (1995). Self learning of negative number concepts by lower division elementary students through solving computer-provided numerical problems. Educational Studies in Mathematics, 28, 401–431. Heller, P., Ahlgren, A., Post, T., Behr, M., & Lesh, R. (1989). Proportional reasoning: The effect of two concept variables, rate type and problem setting. Journal for Research in Science Teaching, 26, 205–220. Hiebert, J., & Tonnessen, L.H. (1978). Development of the fraction concept in two physical contexts: An exploratory investigation. Journal for Research in Mathematics Education, 9, 374–378. Hiebert, J., & Wearne, D. (1985). A model of students’ decimal computation procedures. Cognition and Instruction, 2, 175–205. Hiebert, J., & Wearne, D. (1986). Procedures over concepts: The acquisition of decimal number knowledge. In J.Hiebert (Ed.), Conceptual and procedural knowledge: The case of mathematics (pp. 199–223). Hillsdale, NJ: Erlbaum. Hiebert, J., & Wearne, D. (1988). Instruction and cognitive change in mathematics. Educational Psychologist, 23, 105–117. Hiebert, J., Wearne, D., & Taber, S. (1991). Fourth graders’ gradual construction of decimal fractions during instruction using different physical representations. Elementary School Journal, 91, 321–341. Huinker, D. (1998). Letting fraction algorithms emerge through problem solving. In L.
J.Morrow & M.J.Kenney (Eds.), The teaching and learning of algorithms in school mathematics (1998 Yearbook of the National Council of Teachers of Mathematics, pp. 170–182). Reston, VA: NCTM. Human, P., & Murray, H. (1987). Non-concrete approaches to integer arithmetic. In J.C.Bergeron, N.Herscovics, & C.Kieran (Eds.), Proceedings of the Eleventh International Conference for the Psychology of Mathematics Education (Vol. 2, pp. 437–443). Montreal: University of Montreal. (ERIC Document Reproduction Service No. ED 383 532). Hunting, R.P. (1983). Alan: A case study of knowledge of units and performance with fractions. Journal for Research in Mathematics Education, 14, 182–197. Kaput, J.J., & West, M.M. (1994). Missing-value proportional reasoning problems: Factors affecting informal reasoning patterns. In G.Harel & J.Confrey (Eds.), The development of multiplicative reasoning in the learning of mathematics (pp. 235–287). Albany: State University of New York Press. Karplus, R., Pulos, S., & Stage, E. (1983). Proportional reasoning and early adolescents. In R.Lesh & M.Landau (Eds.), Acquisition of mathematics concepts and processes (pp. 45–91). New York: Academic Press. Kieren, T.E. (1976). On the mathematical, cognitive and institutional foundations of rational numbers. In R.Lesh & D.Bradbard (Eds.), Number and measurement: Papers from a research workshop (pp. 104–144). Columbus, OH: ERIC/SMEAC. (ERIC Document Reproduction Service No. ED 120 027). Kieren, T.E. (1980). The rational number construct—Its elements and mechanisms. In T.E.Kieren (Ed.), Recent research on number learning (pp. 125–149). Columbus, OH: ERIC/SMEAC. (ERIC Document Reproduction Service No. ED 212 463). Kieren, T.E. (1988). Personal knowledge of rational numbers: Its intuitive and formal development. In J.Hiebert & M.Behr (Eds.), Number concepts and operations in the middle grades (pp. 162–181). Reston, VA: National Council of Teachers of Mathematics. Kieren, T.E. (1992). Rational and fractional numbers as mathematical and personal knowledge: Implications for curriculum and instruction. In G.Leinhardt & R.T.Putnam (Eds.), Analysis of arithmetic for mathematics teaching (pp. 323–371). Hillsdale, NJ: Erlbaum. Kouba, V.L., Zawojewski, J.S., & Strutchens, M.E. (1997). What do students know about numbers and operations? In P.A.Kenney & E.A.Silver (Eds.), Results from the sixth mathematics assessment of the National Assessment of Educational Progress (pp. 33–60). Reston, VA: National Council of Teachers of Mathematics. Kuchemann, D. (1980). Children’s understanding of integers. Mathematics in School, 9, 31–32. Lamon, S.J. (1993). Ratio and proportion: Connecting content and children’s thinking. Journal for Research in Mathematics Education, 24, 41–61. Lamon, S.J. (1994). Ratio and proportion: Cognitive foundations in unitizing and norming. In G.Harel & J.Confrey (Eds.), The development of multiplicative reasoning in the learning of mathematics (pp. 89–120). Albany: State University of New York Press. Lamon, S.J. (1995). Ratio and proportion: Elementary didactical phenomenology. In J.T.Sowder & B.P.Schappelle (Eds.), Providing a foundation for teaching mathematics in the middle grades (pp. 167–198). Albany: State University of New York Press. Langrall, C.W., & Swafford, J.O. (2000). Three balloons for two dollars: Developing proportional reasoning. Mathematics Teaching in the Middle School, 6, 254–261. Lappan, G., & Bouck, M.K. (1998).
Developing algorithms for adding and subtracting fractions. In L.J.Morrow & M.J.Kenney (Eds.), The teaching and learning of algorithms in school mathematics (1998 Yearbook of the National Council of Teachers of Mathematics, pp. 183–197). Reston, VA: NCTM. Lappan, G., Fey, J., Fitzgerald, W., Friel, S., & Phillips, E. (1996). Bits and pieces 2: Using rational numbers. Palo Alto, CA: Dale Seymour. Lesh, R., Post, T.R., & Behr, M. (1988). Proportional reasoning. In J.Hiebert & M.Behr (Eds.), Number concepts and operations in the middle grades (pp. 93–118). Reston, VA: National Council of Teachers of Mathematics. Liebeck, P. (1990). Scores and forfeits: An intuitive model for integer arithmetic. Educational Studies in Mathematics, 21, 221–239. Mack, N.K. (1990). Learning fractions with understanding: Building on informal knowledge. Journal for Research in Mathematics Education, 21, 16–32. Mack, N.K. (1995). Confounding whole-number and fraction concepts when building on informal knowledge. Journal for Research in Mathematics Education, 26, 422–441. Moreno, R., & Mayer, R.E. (1999). Multimedia-supported metaphors for meaning making in mathematics. Cognition and Instruction, 17, 215–248. Morris, A.L. (in press). A teaching experiment: Introducing fourth graders to fractions from the viewpoint of measuring quantities using Davydov’s mathematics curriculum. Focus on Learning Problems in Mathematics. Moss, J., & Case, R. (1999). Developing children’s understanding of the rational numbers: A new model and an experimental curriculum. Journal for Research in Mathematics Education, 30, 122–147. Mukhopadhyay, S., Resnick, L.B., & Schauble, L. (1990). Social sense-making in mathematics: Children’s ideas of negative numbers. Pittsburgh: University of Pittsburgh, Learning Research and Development Center. (ERIC Document Reproduction Service No. ED 342 632). Peck, D.M., & Jencks, S.M. (1981). Conceptual issues in the teaching and learning of fractions. Journal for Research in Mathematics Education, 12, 339–348. Post, T., Behr, M., & Lesh, R. (1988). Proportionality and the development of pre-algebra understanding. In A.F.Coxford & A.P.Shulte (Eds.), The ideas of algebra, K-12 (1988 Yearbook of the National Council of Teachers of Mathematics, pp. 78–90). Reston, VA: NCTM. Post, T.P., Wachsmuth, I., Lesh, R., & Behr, M.J. (1985). Order and equivalence of rational numbers: A cognitive analysis. Journal for Research in Mathematics Education, 16, 18–36. Pothier, Y., & Sawada, D. (1983). Partitioning: The emergence of rational number ideas in young children. Journal for Research in Mathematics Education, 14, 307–317. Pothier, Y., & Sawada, D. (1989). Children’s interpretation of equality in early fraction activities. Focus on Learning Problems in Mathematics, 11(3), 27–38. Resnick, L.B., Nesher, P., Leonard, F., Magone, M., Omanson, S., & Peled, I. (1989). Conceptual bases of arithmetic errors: The case of decimal fractions. Journal for Research in Mathematics Education, 20, 8–27. Schmittau, J. (1993). Connecting mathematical knowledge: A dialectical perspective. Journal of Mathematical Behavior, 12, 179–201. Streefland, L. (1991). Fractions in realistic mathematics education: A paradigm of developmental research. Dordrecht, The Netherlands: Kluwer. Streefland, L. (1993). Fractions: A realistic approach. In T.P.Carpenter, E.Fennema, & T.A.Romberg (Eds.), Rational numbers: An integration of research (pp. 289–325). Hillsdale, NJ: Erlbaum.
Thompson, F.M. (1988). Algebraic instruction for the younger child. In A.F.Coxford & A.P.Shulte (Eds.), The ideas of algebra, K-12 (1988 Yearbook of the National Council of Teachers of Mathematics, pp. 69–77). Reston, VA: NCTM. Thompson, P.W., & Dreyfus, T. (1988). Integers as transformations. Journal for Research in Mathematics Education, 19, 115–133. Tourniaire, F., & Pulos, S. (1985). Proportional reasoning: A review of the literature. Educational Studies in Mathematics, 16, 181–204. Vergnaud, G. (1983). Multiplicative structures. In R.Lesh & M.Landau (Eds.), Acquisition of mathematics concepts and processes (pp. 127–174). New York: Academic Press. Wearne, D., & Kouba, V.L. (2000). Rational numbers. In E.A.Silver & P.A.Kenney (Eds.), Results from the seventh mathematics assessment of the National Assessment of Educational Progress (pp. 163–191). Reston, VA: National Council of Teachers of Mathematics.
SAT Test Prep

WHAT THE SAT MATH IS REALLY TESTING

Lesson 3: Finding Patterns

Finding patterns means looking for simple rules that relate the parts of a problem. One key to simplifying many SAT math problems is exploiting repetition. If something repeats, you usually can cancel or substitute to simplify. If …, then what is the value of x? This question is much simpler than it looks at first because of the repetition in the equation. If you subtract the repetitive terms from both sides of the equation, it reduces to …. Subtracting 4x^2 from both sides then gives …, so ….

Patterns in Geometric Figures

Sometimes you need to play around with the parts of a problem until you find the patterns or relationships. For instance, it often helps to treat geometric figures like jigsaw puzzle pieces. The figure above shows a circle with radius 3 in which an equilateral triangle has been inscribed. Three diameters have been drawn, each of which intersects a vertex of the triangle. What is the sum of the areas of the shaded regions? This figure looks very complicated at first. But look closer and notice the symmetry in the figure. Notice that the three diameters divide the circle into six congruent parts. Since a circle has 360°, each of the central angles in the circle is 360°/6 = 60°. Then notice that the two shaded triangles fit perfectly with the other two shaded regions to form a sector such as this: Moving the regions is okay because it doesn’t change their areas. Notice that this sector is 1/3 of the entire circle. Now finding the shaded area is easy. The total area of the circle is π(3)^2 = 9π. So the area of 1/3 of the circle is 3π.

Patterns in Sequences

Some SAT questions will ask you to analyze a sequence. When given a sequence question, write out the terms of the sequence until you notice the pattern. Then use whole-number division with remainders to find what the question asks for. 1, 0, –1, 1, 0, –1, 1, 0, –1, … If the sequence above continues according to the pattern shown, what will be the 200th term of the sequence? Well, at least you know it’s either 1, 0, or –1, right? Of course, you want better than a one-in-three guess, so you need to analyze the sequence more deeply. The sequence repeats every 3 terms. In 200 terms, then, the pattern repeats itself 66 times with a remainder of 2. This means that the 200th term is the same as the second term, which is 0. What is the units digit of 27^40? The units digit is the “ones” digit or the last digit. You can’t find it with your calculator because when 27^40 is expressed as a decimal, it has 58 digits, and your calculator can only show the first 12 or so. To find the units digit, you need to think of 27^40 as a term in the sequence 27^1, 27^2, 27^3, 27^4, …. If you look at these terms in decimal form, you will notice that the units digits follow a pattern: 7, 9, 3, 1, 7, 9, 3, 1, …. The sequence has a repeating pattern of four terms. Every fourth term is 1, so the 40th term is also 1. Therefore, the units digit of 27^40 is 1.

Concept Review 3: Finding Patterns

Solve the following problems by taking advantage of repetition. 1. If 5 less than 28% of x^2 is 10, then what is 15 less than 28% of x^2? 2. If m is the sum of all multiples of 3 between 1 and 100, and n is the sum of all multiples of 3 between 5 and 95, what is m − n? 3. How much greater is the combined surface area of two cylinders each with a height of 4 cm and a radius of 2 cm than the surface area of a single cylinder with a height of 8 cm and a radius of 2 cm? Solve each of the following problems by analyzing a sequence. 4.
What is the units digit of 4^134? 5. The first two terms of a sequence are 1 and 2. If every term after the second term is the sum of the previous two, then how many of the first 100 terms are odd?

SAT Practice 3: Finding Patterns

1. If …, then … 2. Every term of a sequence, except the first, is 6 less than the square of the previous term. If the first term is 3, what is the fifth term of this sequence? 3. If the sequence above continues according to the pattern shown, what is the sum of the first 200 terms of the sequence? 4. The figure above shows a square with three line segments drawn through the center. What is the total area of the shaded regions? 5. What is the units digit of 3^40?

Answer Key 3: Finding Patterns

Concept Review 3

1. Don’t worry about the percent or about finding x. Translate: 5 less than 28% of x^2 is 10 means 0.28x^2 − 5 = 10, so 0.28x^2 = 15. So 15 less than 28% of x^2 is 0. 2. When you subtract n from m, all the terms cancel except 3, 96, and 99, so m − n = 3 + 96 + 99 = 198. 3. Don’t calculate the total surface area. Instead, just notice that the two small cylinders, stacked together, are the same size as the large cylinder. But remember that you are comparing surface areas, not volumes. The surface areas are almost the same, except that the smaller cylinders have two extra bases. Each base has an area of π(2)^2 = 4π cm^2, so the surface area of the smaller cylinders is 8π cm^2 greater than that of the larger cylinder. 4. Your calculator is no help on this one because 4^134 is so huge. Instead, think of 4^134 as a term in the sequence 4^1, 4^2, 4^3, 4^4, …. If you write out the first few terms (4, 16, 64, 256, …), you will see a clear pattern to the units digits: every odd-numbered term ends in a 4 and every even-numbered term ends in a 6. So 4^134 must end in a 6. 5. The first few terms are 1, 2, 3, 5, 8, 13, 21, …. Since we are concerned only about the “evenness” and “oddness” of the numbers, think of the sequence as odd, even, odd, odd, even, odd, odd, even, …. Notice that the sequence repeats every three terms: (odd, even, odd), (odd, even, odd), (odd, even, odd), …. In the first 100 terms, this pattern repeats 33 times. Since each pattern contains 2 odd numbers, the first 33 repetitions contain 66 odd numbers and account for the first 99 terms. The last term must also be odd because each pattern starts with an odd number. Therefore, the total number of odds is 66 + 1 = 67.

SAT Practice 3

2. A If every term is 6 less than the square of the previous term, then the second term must be 3^2 − 6 = 3. The third term, then, is also 3^2 − 6 = 3, and so on. Every term, then, must be 3, including the fifth. 3. C The sequence repeats every three terms: (–4, 0, 4), (–4, 0, 4), (–4, 0, 4), …. Each one of the groups has a sum of 0. Since 200 ÷ 3 = 66 with a remainder of 2, the first 200 terms contain 66 repetitions of this pattern, plus two extra terms. The 66 repetitions will have a sum of 0, but the last two terms must be –4 and 0, giving a total sum of –4. 4. 50 Move the shaded regions around, as shown above, to see that they are really half of the square. Since the area of the square is 100, the area of the shaded region must be half of that, or 50. 5. A The number 3^40 is so big that your calculator is useless for telling you what the last digit is. Instead, think of 3^40 as being an element in the sequence 3^1, 3^2, 3^3, 3^4, …. If you write out the first six terms or so, you will see that there is a clear pattern to the units digits: 3, 9, 27, 81, 243, 729, …. So the pattern in the units digits is 3, 9, 7, 1, 3, 9, 7, 1, …. The sequence repeats every four terms.
Since 40 is a multiple of 4, the 40th term is the same as the 4th and the 8th and the 12th terms, so the 40th term is 1.
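As a cross-check on the answer key above, here is a short Python sketch; it is not from the book, and the helper name units_digit is ours. It reproduces the units-digit answers and the odd-term count.

def units_digit(base, exponent):
    # Modular exponentiation gives the last digit without computing the huge number.
    return pow(base, exponent, 10)

print(units_digit(4, 134))  # -> 6, matching Concept Review question 4
print(units_digit(3, 40))   # -> 1, matching SAT Practice question 5

# Count the odd values among the first 100 terms of 1, 2, 3, 5, 8, 13, ...
a, b = 1, 2
odd_count = 0
for _ in range(100):
    odd_count += a % 2
    a, b = b, a + b
print(odd_count)  # -> 67, matching Concept Review question 5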
In this lesson, our instructor Jibin Park gives an introduction to gross domestic product. He explains national accounts, the circular flow of the economy, the expanded circular flow diagram, and real vs. nominal gross domestic product. The macroeconomic expenditure equation is as follows: Y = C + I + G + NX, where Y represents real GDP, C = consumer expenditures, I = business investment, G = government purchases, and NX = exports – imports. The Circular Flow Diagram provides a simplified view of how an economy operates. There exist three different ways of calculating GDP. Nominal GDP uses current prices; real GDP uses constant prices. Real GDP is a more accurate means of measuring the health of the economy than nominal GDP. GDP per capita is the most common measure used to determine a nation's standard of living. Gross Domestic Product Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. This book presents a 5-step plan to help you study more effectively, use your preparation time wisely, and get your best score. This book includes two full-length practice exams modeled on the real test, all the terms and concepts you need to know to get your best score, and your choice of three customized study schedules. This book includes in-depth preparation for both AP economics exams. It features two full-length practice tests, one in Microeconomics and one in Macroeconomics, with all test questions answered and explained. It also features a detailed review of all test topics, which include: supply and demand, theory of consumer choice, economics in the public sector, costs, perfect and imperfect competition, monopolies, labor resources, game theory, national income and gross domestic product, inflation and unemployment, fiscal policy, money and banking, monetary policy, economic growth, international trade and exchange, interest rate determination, and the market for loanable funds.
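To make the expenditure equation concrete, here is a minimal Python sketch. The numbers are invented for illustration and do not come from the lecture; the 1.05 deflator is likewise an assumed price level relative to the base year.

C, I, G = 700.0, 200.0, 150.0      # consumption, investment, government purchases
exports, imports = 80.0, 60.0
NX = exports - imports             # net exports
Y = C + I + G + NX                 # GDP by the expenditure approach, at current prices
deflator = 1.05                    # assumed price level relative to the base year
real_gdp = Y / deflator            # the same output valued at constant (base-year) prices
print(Y, round(real_gdp, 1))       # 1070.0 1019.0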
Bonding. Year 11 DP Chemistry. What is a bond? A chemical bond is a force that holds atoms together, making a new substance. Ionic bonds result from electrostatic attraction between oppositely charged ions. Ionic Bond: As a rule of thumb, we say that the difference between the electronegativity values needs to be high (i.e. greater than 1.7) to be ionic. They form between cations from the left of the Periodic Table and anions from the right. Covalent Bond: If the difference between the electronegativity values of two highly electronegative atoms is low, a covalent bond is formed. They tend to form between non-metals, but sometimes metals are involved (e.g. Al2Cl6). Metallic Bond: If the difference between the electronegativity values of two highly electropositive atoms is low, a metallic bond is formed. These form between metals of the same or different type of atom. Electronegativity is the relative tendency of an atom to attract bonding electrons to itself, measured on the Pauling Scale. Group 5 – gains 3 e- to gain a full valence shell. Group 6 – gains 2 e- to gain a full outer shell. Group 7 – ?? For example, Fe forms two ions. Can you deduce which two ions and why? Fe(II) – losing two 4s electrons. Fe(III) – losing two 4s e- and one 3d e- to give a half-filled d subshell. Oppositely charged ions are formed by electron transfer due to a large electronegativity difference (> 1.7). Na has a low electronegativity relative to Cl, so ions are formed by a transfer of an electron to achieve a full valence shell for both atoms. These oppositely charged ions then form a bond. This shows a model of a NaCl lattice with alternating positive and negative ions. Li+ and F- form LiF; Mg2+ and Cl- form MgCl2. Some ions contain more than one element and the charge on the ion is spread (delocalised) over the entire ion. They have specific names and act as a single unit. Important ones to know: HCO3- (bicarbonate or hydrogen carbonate). Ionic compounds form in the same way with polyatomic ions. Here we see Na2SO4. Notice that the sulfate did not change its formula. Electrons are shared between two atoms. These atoms are most commonly non-metals. The more shared pairs, the stronger the bond and the shorter the bond. Note the trend in C-C bonds. Note: bond energy is the amount of energy required to break a bond. In this type of covalent bond (a dative bond), the difference is that one of the atoms in the pair donates both of the electrons in the bond. Examples: CO, NH4+, H3O+. The dots represent the valence electrons for each element. Chemical compounds tend to form so that each atom, by gaining, losing, or sharing electrons, has an octet of electrons in its highest occupied energy level. Here, the octet rule is satisfied for Cl, but is irrelevant for H, which can only hold 2 e-. Two pairs of shared electrons. Draw the Lewis structure for acetylene (ethyne) – C2H2. Notice the double bond in O2. No other configuration will satisfy the octet rule. Why is H4O not formed? Notice that the bond lengths would be different in benzene unless there is a resonance structure like the one represented above. All octets (ignoring H) have been satisfied. N2 has a triple bond. Using dot diagrams, show why a single or double bond is incorrect. Illustrate how CO and H3O+ contain dative bonds.
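The electronegativity rule of thumb given at the start of this section can be sketched in a few lines of Python. The Pauling values below are standard reference figures, but the helper bond_type and the 0.4 cutoff for calling a covalent bond polar are our own illustrative assumptions, not part of these notes.

# Rough classifier following the rule of thumb above: a large electronegativity
# difference (> 1.7) suggests ionic bonding, a small difference between two
# non-metals suggests covalent, and a small difference between metals suggests metallic.
PAULING = {"Na": 0.93, "Mg": 1.31, "H": 2.20, "C": 2.55, "Cl": 3.16, "O": 3.44}
METALS = {"Na", "Mg"}

def bond_type(a, b):
    diff = abs(PAULING[a] - PAULING[b])
    if diff > 1.7:
        return "ionic"
    if a in METALS and b in METALS:
        return "metallic"
    # The 0.4 cutoff for "polar" is a common textbook convention, not from these notes.
    return "polar covalent" if diff > 0.4 else "non-polar covalent"

print(bond_type("Na", "Cl"))  # ionic (difference about 2.2)
print(bond_type("H", "O"))    # polar covalent (difference about 1.2)
print(bond_type("C", "H"))    # non-polar covalent (difference about 0.35)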
Draw a Lewis diagram for HCN. Draw the two resonance structures of ozone, O3. Use a drawing to show how many resonance structures are possible for the nitrate ion (NO3-). Compare the bond lengths and strengths of the two C–O bonds in the carboxyl group below. There are 3 ways the octet rule breaks down: odd-electron molecules such as NO (nitric oxide); incomplete octets, which mostly occur with H, B and Be, e.g. BF3 (boron trifluoride); and expanded octets (see below). For BF3: give octets to the outer atoms; the extra e- (24 – 24 = 0) would go to B. Therefore there are no extra electrons to add, and B has only 6 e- (< octet). What about double bonds??? See next… This would give 3 resonance structures. What would they look like? Add a double bond to BF3 for a possible octet… The above structure would lead to a δ+ on F and a δ- on B. Is this likely, considering the electronegativities? Because B has only 6 valence electrons, BF3 reacts strongly with compounds that have unshared pairs of electrons. Starting in period 3, expanded valence shells are possible. This is the most likely exception to the octet rule. The octet rule is based upon a valence shell containing an s and a p orbital set. This gives 2 + 6 = 8 e- (an octet). In the third shell (n = 3), d orbitals become available. P is an example: a 3s electron can be excited to the 3d, which allows for 5 valence-shell bonding electrons (promote one e-). Sulfur (also in period 3) can expand its octet to have more than 8 electrons as well. Sulfur can form SF2, SF4 and SF6. Go to this website to see an animation on expanded octets in sulfur: Other notable expanded octets… PF6- (12 e-). Some bonds are not purely ionic, but they still have a significant difference in electronegativity that leads to one atom pulling the electrons more strongly than the other. This electronegativity difference leads to a partially positive end δ+ of a molecule and a partially negative end δ- (note: δ is the Greek letter delta and means partial). F is more electronegative, so the electrons spend more time around the F nucleus. Small or no difference in electronegativity values leads to non-polar substances. Cl is a highly electronegative element, but there is no difference when one Cl atom bonds to another Cl atom. C and H have very little difference in electronegativity, so methane is non-polar. Some compounds contain polar bonds, but the polarity is cancelled out due to the structure. In CO2, O is more electronegative than C, meaning each bond is polar towards the O atom, but due to the molecule's linear shape these polarities cancel each other, resulting in a non-polar molecule. To determine the shape of covalent molecules, we use the Valence Shell Electron Pair Repulsion Theory (VSEPR), which states: "The geometric arrangement of atoms around a central atom is determined by the repulsion between electron pairs in the valence shell of the central atom." Stay away! I am repulsed by you. In other words, VSEPR theory says that molecular geometry is determined by the shape that keeps e- pairs as far apart as possible. We know C forms four bonds and O forms 2 bonds. What arrangement will allow the valence e- around the central atom (C) to be as far apart as possible? The Lewis structure looks like this: Linear molecules have two areas of high electron density around the central atom.
Other examples of linear molecules: ethyne (acetylene), C2H2, and molecular chlorine, Cl2. Three e- pairs around the central atom lead to a trigonal planar shape, as in BF3. Trigonal planar – angles are 120°. If one of those pairs is a non-bonding or lone pair of electrons, the shape is described as bent or v-shaped, as in SO2. Bent – angles are less than 120° due to lone pairs taking up more space than bonding pairs. These angles are 117°. However, in 3-D space it is possible to allow the electrons to be further apart using a tetrahedral shape with bond angles of 109.5°. If we look at the Lewis structure for CCl4, we might assume a flat structure with 90° bond angles. With five electron pairs (0 lone pairs), the angles are 90° and 120°; further variations have 1, 2 or 3 lone pairs. With six pairs the shape is octahedral (0 lone pairs), with further variations for 1 lone pair and 2 lone pairs. Justify the shape of XeF4. Why are the lone pairs at 180°? What other two shapes are possible with 6 pairs? Each carbon atom in graphite is bonded to 3 other carbon atoms, forming flat sheets of carbon rings. These layers are loosely bonded to each other, making graphite soft. In diamond, each carbon is bonded to 4 other carbons in a giant repeating lattice. This lattice is non-polar and very strong, making diamond the hardest mineral on Earth. Its m.p. is over 3500°C! The fullerene contains 60 carbons arranged like a soccer ball with alternating 5- and 6-membered rings. Carbon bonded to three other carbon atoms leaves one valence electron per carbon atom. These electrons are delocalised, allowing graphite to conduct electricity. The individual layers contain strong covalent bonds, but are only loosely bonded to other layers. This allows them to easily slide over one another, making graphite useful in pencils and as a solid lubricant. The third allotrope of carbon was discovered in 1985 and includes some weird and wonderful shapes. The first was known as a "Bucky Ball", which has a structure like a soccer ball and contains 60 carbons, C60. There are other fullerenes that contain more or fewer than 60 carbons. These have been detected in space, leading scientists to suggest that they could be the origin of life in the universe. Nanotubes are another area of current research by materials scientists. These tubes have high tensile strength and are able to conduct electricity. They have potential medical applications by attaching to resistant bacteria or cancer cells. They are also being researched for the proposed "space elevator", to be used as cables. Silicon dioxide (SiO2), which is the chemical component of sand, has a structure similar to diamond with a repeating giant lattice of tetrahedral shapes. This makes it very hard. However, it has polar bonds, allowing it to be dissolved slowly by some alkaline solutions, and it also has a much lower m.p. than diamond (1770°C). Silicon atoms contain 4 valence electrons and form regular repeating covalent bonds. Silicon is chemically similar to C. The picture below shows its crystalline form. This shows a computer-generated model of the complicated giant lattice of silicon dioxide. Metallic bonding can be described as a repeating lattice of positive metal ions in a sea of delocalised electrons. Delocalised electrons are not attached to any particular metallic nucleus and are free to move about the lattice. This is due to the electrons being able to move freely through the lattice.
This means the electrons can act as charge carriers for conducting electricity and as energy carriers for conducting heat (the electrical and thermal conductivity of metals). Malleability and ductility: the delocalised electrons in the 'sea' of electrons in the metallic bond enable the metal atoms to roll over each other when a stress is applied.
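Returning to the VSEPR rules earlier in this section, here is a minimal lookup sketch in Python. It is illustrative only: the table lists standard VSEPR results for the cases these notes touch on, and the function name shape is ours.

# Shape is set by the number of electron domains around the central atom and
# how many of those domains are lone pairs.
SHAPES = {
    (2, 0): "linear (180°)",
    (3, 0): "trigonal planar (120°)",
    (3, 1): "bent / v-shaped (a little under 120°)",
    (4, 0): "tetrahedral (109.5°)",
    (4, 1): "trigonal pyramidal",
    (4, 2): "bent",
    (6, 0): "octahedral (90°)",
    (6, 2): "square planar",
}

def shape(electron_domains, lone_pairs):
    return SHAPES.get((electron_domains, lone_pairs), "not covered here")

print(shape(4, 0))  # CH4 or CCl4 -> tetrahedral (109.5°)
print(shape(3, 1))  # SO2 -> bent / v-shaped
print(shape(6, 2))  # XeF4 -> square planar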
Sponge. Sponges, the members of the phylum Porifera (meaning "pore bearer"), are a basal Metazoa (animal) clade and a sister group of the Diploblasts. They are multicellular organisms that have bodies full of pores and channels allowing water to circulate through them, consisting of jelly-like mesohyl sandwiched between two thin layers of cells. The branch of zoology that studies sponges is known as spongiology. Temporal range: Ediacaran–recent. (Pictured: a stove-pipe sponge.) Sponges have unspecialized cells that can transform into other types and that often migrate between the main cell layers and the mesohyl in the process. Sponges do not have nervous, digestive or circulatory systems. Instead, most rely on maintaining a constant water flow through their bodies to obtain food and oxygen and to remove wastes. Sponges were the first to branch off the evolutionary tree from the common ancestor of all animals, making them the sister group of all other animals. Sponges are similar to other animals in that they are multicellular, heterotrophic, lack cell walls and produce sperm cells. Unlike other animals, they lack true tissues and organs, and have no body symmetry. The shapes of their bodies are adapted for maximal efficiency of water flow through the central cavity, where the water deposits nutrients and then leaves through a hole called the osculum. Many sponges have internal skeletons of spongin and/or spicules of calcium carbonate or silicon dioxide. All sponges are sessile aquatic animals. Although there are freshwater species, the great majority are marine (salt water) species, ranging from tidal zones to depths exceeding 8,800 m (5.5 mi). While most of the approximately 5,000–10,000 known species feed on bacteria and other food particles in the water, some host photosynthesizing microorganisms as endosymbionts and these alliances often produce more food and oxygen than they consume. A few species of sponge that live in food-poor environments have become carnivores that prey mainly on small crustaceans. Most species use sexual reproduction, releasing sperm cells into the water to fertilize ova that in some species are released and in others are retained by the "mother". The fertilized eggs form larvae which swim off in search of places to settle. Sponges are known for regenerating from fragments that are broken off, although this only works if the fragments include the right types of cells. A few species reproduce by budding. When conditions deteriorate, for example as temperatures drop, many freshwater species and a few marine ones produce gemmules, "survival pods" of unspecialized cells that remain dormant until conditions improve and then either form completely new sponges or recolonize the skeletons of their parents. The mesohyl functions as an endoskeleton in most sponges, and is the only skeleton in soft sponges that encrust hard surfaces such as rocks. More commonly, the mesohyl is stiffened by mineral spicules, by spongin fibers or both. Demosponges use spongin, and in many species, silica spicules and in some species, calcium carbonate exoskeletons. Demosponges constitute about 90% of all known sponge species, including all freshwater ones, and have the widest range of habitats. Calcareous sponges, which have calcium carbonate spicules and, in some species, calcium carbonate exoskeletons, are restricted to relatively shallow marine waters where production of calcium carbonate is easiest.
The fragile glass sponges, with "scaffolding" of silica spicules, are restricted to polar regions and the ocean depths where predators are rare. Fossils of all of these types have been found in rocks dated from . In addition Archaeocyathids, whose fossils are common in rocks from , are now regarded as a type of sponge. The single-celled choanoflagellates resemble the choanocyte cells of sponges which are used to drive their water flow systems and capture most of their food. This along with phylogenetic studies of ribosomal molecules have been used as morphological evidence to suggest sponges are the sister group to the rest of animals. Some studies have shown that sponges do not form a monophyletic group, in other words do not include all and only the descendants of a common ancestor. Recent phylogenetic analyses suggest that comb jellies rather than sponges are the sister group to the rest of animals. The few species of demosponge that have entirely soft fibrous skeletons with no hard elements have been used by humans over thousands of years for several purposes, including as padding and as cleaning tools. By the 1950s, though, these had been overfished so heavily that the industry almost collapsed, and most sponge-like materials are now synthetic. Sponges and their microscopic endosymbionts are now being researched as possible sources of medicines for treating a wide range of diseases. Dolphins have been observed using sponges as tools while foraging. Sponges constitute the phylum Porifera, and have been defined as sessile metazoans (multicelled immobile animals) that have water intake and outlet openings connected by chambers lined with choanocytes, cells with whip-like flagella. However, a few carnivorous sponges have lost these water flow systems and the choanocytes. All known living sponges can remold their bodies, as most types of their cells can move within their bodies and a few can change from one type to another. Even if a few sponges are able to produce mucus – which acts as a microbial barrier in all other animals –, no sponge with the ability to secrete a functional mucus layer has been recorded. Without such a mucus layer their living tissue is covered by a layer of microbial symbionts, which can contribute up to 40-50% of the sponge wet mass. This inability to prevent microbes from penetrating their porous tissue could be a major reason why they have never evolved a more complex anatomy. Like cnidarians (jellyfish, etc.) and ctenophores (comb jellies), and unlike all other known metazoans, sponges' bodies consist of a non-living jelly-like mass (mesoglea) sandwiched between two main layers of cells. Cnidarians and ctenophores have simple nervous systems, and their cell layers are bound by internal connections and by being mounted on a basement membrane (thin fibrous mat, also known as "basal lamina"). Sponges have no nervous systems, their middle jelly-like layers have large and varied populations of cells, and some types of cells in their outer layers may move into the middle layer and change their functions. 
Comparison of sponges with cnidarians and ctenophores:
- Nervous system: none in sponges; a simple one in cnidarians and ctenophores.
- Cells in each layer bound together: no in sponges, except that Homoscleromorpha have basement membranes; yes in cnidarians and ctenophores (inter-cell connections; basement membranes).
- Number of cells in the middle "jelly" layer: many in sponges; few in cnidarians and ctenophores.
- Cells in outer layers can move inwards and change functions: yes in sponges; no in cnidarians and ctenophores.
A sponge's body is hollow and is held in shape by the mesohyl, a jelly-like substance made mainly of collagen and reinforced by a dense network of fibers also made of collagen. The inner surface is covered with choanocytes, cells with cylindrical or conical collars surrounding one flagellum per choanocyte. The wave-like motion of the whip-like flagella drives water through the sponge's body. All sponges have ostia, channels leading to the interior through the mesohyl, and in most sponges these are controlled by tube-like porocytes that form closable inlet valves. Pinacocytes, plate-like cells, form a single-layered external skin over all other parts of the mesohyl that are not covered by choanocytes, and the pinacocytes also digest food particles that are too large to enter the ostia, while those at the base of the animal are responsible for anchoring it.
- Lophocytes are amoeba-like cells that move slowly through the mesohyl and secrete collagen fibres.
- Collencytes are another type of collagen-producing cell.
- Rhabdiferous cells secrete polysaccharides that also form part of the mesohyl.
- Oocytes and spermatocytes are reproductive cells.
- Sclerocytes secrete the mineralized spicules ("little spines") that form the skeletons of many sponges and in some species provide some defense against predators.
- In addition to or instead of sclerocytes, demosponges have spongocytes that secrete a form of collagen that polymerizes into spongin, a thick fibrous material that stiffens the mesohyl.
- Myocytes ("muscle cells") conduct signals and cause parts of the animal to contract.
- "Grey cells" act as sponges' equivalent of an immune system.
- Archaeocytes (or amoebocytes) are amoeba-like cells that are totipotent, in other words each is capable of transformation into any other type of cell. They also have important roles in feeding and in clearing debris that block the ostia.
Glass sponges' syncytia. Glass sponges present a distinctive variation on this basic plan. Their spicules, which are made of silica, form a scaffolding-like framework between whose rods the living tissue is suspended like a cobweb that contains most of the cell types. This tissue is a syncytium that in some ways behaves like many cells that share a single external membrane, and in others like a single cell with multiple nuclei. The mesohyl is absent or minimal. The syncytium's cytoplasm, the soupy fluid that fills the interiors of cells, is organized into "rivers" that transport nuclei, organelles ("organs" within cells) and other substances. Instead of choanocytes, they have further syncytia, known as choanosyncytia, which form bell-shaped chambers where water enters via perforations. The insides of these chambers are lined with "collar bodies", each consisting of a collar and flagellum but without a nucleus of its own. The motion of the flagella sucks water through passages in the "cobweb" and expels it via the open ends of the bell-shaped chambers. Some types of cells have a single nucleus and membrane each, but are connected to other single-nucleus cells and to the main syncytium by "bridges" made of cytoplasm.
The sclerocytes that build spicules have multiple nuclei, and in glass sponge larvae they are connected to other tissues by cytoplasm bridges; such connections between sclerocytes have not so far been found in adults, but this may simply reflect the difficulty of investigating such small-scale features. The bridges are controlled by "plugged junctions" that apparently permit some substances to pass while blocking others. Water flow and body structures Most sponges work rather like chimneys: they take in water at the bottom and eject it from the osculum ("little mouth") at the top. Since ambient currents are faster at the top, the suction effect that they produce by Bernoulli's principle does some of the work for free. Sponges can control the water flow by various combinations of wholly or partially closing the osculum and ostia (the intake pores) and varying the beat of the flagella, and may shut it down if there is a lot of sand or silt in the water. Although the layers of pinacocytes and choanocytes resemble the epithelia of more complex animals, they are not bound tightly by cell-to-cell connections or a basal lamina (thin fibrous sheet underneath). The flexibility of these layers and re-modeling of the mesohyl by lophocytes allow the animals to adjust their shapes throughout their lives to take maximum advantage of local water currents. The simplest body structure in sponges is a tube or vase shape known as "asconoid", but this severely limits the size of the animal. The body structure is characterized by a stalk-like spongocoel surrounded by a single layer of choanocytes. If it is simply scaled up, the ratio of its volume to surface area increases, because surface increases as the square of length or width while volume increases proportionally to the cube. The amount of tissue that needs food and oxygen is determined by the volume, but the pumping capacity that supplies food and oxygen depends on the area covered by choanocytes. Asconoid sponges seldom exceed 1 mm (0.039 in) in diameter. Some sponges overcome this limitation by adopting the "syconoid" structure, in which the body wall is pleated. The inner pockets of the pleats are lined with choanocytes, which connect to the outer pockets of the pleats by ostia. This increase in the number of choanocytes and hence in pumping capacity enables syconoid sponges to grow up to a few centimeters in diameter. The "leuconoid" pattern boosts pumping capacity further by filling the interior almost completely with mesohyl that contains a network of chambers lined with choanocytes and connected to each other and to the water intakes and outlet by tubes. Leuconid sponges grow to over 1 m (3.3 ft) in diameter, and the fact that growth in any direction increases the number of choanocyte chambers enables them to take a wider range of forms, for example "encrusting" sponges whose shapes follow those of the surfaces to which they attach. All freshwater and most shallow-water marine sponges have leuconid bodies. The networks of water passages in glass sponges are similar to the leuconid structure. In all three types of structure the cross-section area of the choanocyte-lined regions is much greater than that of the intake and outlet channels. This makes the flow slower near the choanocytes and thus makes it easier for them to trap food particles. For example, in Leuconia, a small leuconoid sponge about 10 centimetres (3.9 in) tall and 1 centimetre (0.39 in) in diameter, water enters each of more than 80,000 intake canals at 6 cm per minute. 
However, because Leuconia has more than 2 million flagellated chambers whose combined diameter is much greater than that of the canals, water flow through chambers slows to 3.6 cm per hour, making it easy for choanocytes to capture food. All the water is expelled through a single osculum at about 8.5 cm per second, fast enough to carry waste products some distance away. In zoology a skeleton is any fairly rigid structure of an animal, irrespective of whether it has joints and irrespective of whether it is biomineralized. The mesohyl functions as an endoskeleton in most sponges, and is the only skeleton in soft sponges that encrust hard surfaces such as rocks. More commonly the mesohyl is stiffened by mineral spicules, by spongin fibers or both. Spicules may be made of silica or calcium carbonate, and vary in shape from simple rods to three-dimensional "stars" with up to six rays. Spicules are produced by sclerocyte cells, and may be separate, connected by joints, or fused. Some sponges also secrete exoskeletons that lie completely outside their organic components. For example, sclerosponges ("hard sponges") have massive calcium carbonate exoskeletons over which the organic matter forms a thin layer with choanocyte chambers in pits in the mineral. These exoskeletons are secreted by the pinacocytes that form the animals' skins. Although adult sponges are fundamentally sessile animals, some marine and freshwater species can move across the sea bed at speeds of 1–4 mm (0.039–0.157 in) per day, as a result of amoeba-like movements of pinacocytes and other cells. A few species can contract their whole bodies, and many can close their oscula and ostia. Juveniles drift or swim freely, while adults are stationary. Respiration, feeding and excretion Sponges do not have distinct circulatory, respiratory, digestive, and excretory systems – instead the water flow system supports all these functions. They filter food particles out of the water flowing through them. Particles larger than 50 micrometers cannot enter the ostia and pinacocytes consume them by phagocytosis (engulfing and internal digestion). Particles from 0.5 μm to 50 μm are trapped in the ostia, which taper from the outer to inner ends. These particles are consumed by pinacocytes or by archaeocytes which partially extrude themselves through the walls of the ostia. Bacteria-sized particles, below 0.5 micrometers, pass through the ostia and are caught and consumed by choanocytes. Since the smallest particles are by far the most common, choanocytes typically capture 80% of a sponge's food supply. Archaeocytes transport food packaged in vesicles from cells that directly digest food to those that do not. At least one species of sponge has internal fibers that function as tracks for use by nutrient-carrying archaeocytes, and these tracks also move inert objects. It used to be claimed that glass sponges could live on nutrients dissolved in sea water and were very averse to silt. However, a study in 2007 found no evidence of this and concluded that they extract bacteria and other micro-organisms from water very efficiently (about 79%) and process suspended sediment grains to extract such prey. Collar bodies digest food and distribute it wrapped in vesicles that are transported by dynein "motor" molecules along bundles of microtubules that run throughout the syncytium. Sponges' cells absorb oxygen by diffusion from water into cells as water flows through body, into which carbon dioxide and other soluble waste products such as ammonia also diffuse. 
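The Leuconia figures above follow from a simple continuity argument: for a fixed volume of water per second, flow speed falls as the total cross-sectional area rises. The Python sketch below illustrates only that principle; the flow rate and areas are invented placeholders, not measurements from the article.

flow_rate = 10.0  # cm^3 of water per second (made-up figure)
areas_cm2 = {
    "intake canals (combined)": 100.0,          # assumed
    "choanocyte chambers (combined)": 10000.0,  # assumed, much larger cross-section
    "osculum": 1.2,                             # assumed, single narrow outlet
}
for region, area in areas_cm2.items():
    # Speed = volume flow rate divided by the cross-sectional area it passes through.
    print(f"{region}: {flow_rate / area:.4f} cm/s")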
Archeocytes remove mineral particles that threaten to block the ostia, transport them through the mesohyl and generally dump them into the outgoing water current, although some species incorporate them into their skeletons. A few species that live in waters where the supply of food particles is very poor prey on crustaceans and other small animals. So far only 137 species have been discovered. Most belong to the family Cladorhizidae, but a few members of the Guitarridae and Esperiopsidae are also carnivores. In most cases little is known about how they actually capture prey, although some species are thought to use either sticky threads or hooked spicules. Most carnivorous sponges live in deep waters, up to 8,840 m (5.49 mi), and the development of deep-ocean exploration techniques is expected to lead to the discovery of several more. However, one species has been found in Mediterranean caves at depths of 17–23 m (56–75 ft), alongside the more usual filter feeding sponges. The cave-dwelling predators capture crustaceans under 1 mm (0.039 in) long by entangling them with fine threads, digest them by enveloping them with further threads over the course of a few days, and then return to their normal shape; there is no evidence that they use venom. Most known carnivorous sponges have completely lost the water flow system and choanocytes. However, the genus Chondrocladia uses a highly modified water flow system to inflate balloon-like structures that are used for capturing prey. Freshwater sponges often host green algae as endosymbionts within archaeocytes and other cells, and benefit from nutrients produced by the algae. Many marine species host other photosynthesizing organisms, most commonly cyanobacteria but in some cases dinoflagellates. Symbiotic cyanobacteria may form a third of the total mass of living tissue in some sponges, and some sponges gain 48% to 80% of their energy supply from these micro-organisms. In 2008 a University of Stuttgart team reported that spicules made of silica conduct light into the mesohyl, where the photosynthesizing endosymbionts live. Sponges that host photosynthesizing organisms are most common in waters with relatively poor supplies of food particles, and often have leafy shapes that maximize the amount of sunlight they collect. Sponges do not have the complex immune systems of most other animals. However, they reject grafts from other species but accept them from other members of their own species. In a few marine species, gray cells play the leading role in rejection of foreign material. When invaded, they produce a chemical that stops movement of other cells in the affected area, thus preventing the intruder from using the sponge's internal transport systems. If the intrusion persists, the grey cells concentrate in the area and release toxins that kill all cells in the area. The "immune" system can stay in this activated state for up to three weeks. Sponges have three asexual methods of reproduction: after fragmentation; by budding; and by producing gemmules. Fragments of sponges may be detached by currents or waves. They use the mobility of their pinacocytes and choanocytes and reshaping of the mesohyl to re-attach themselves to a suitable surface and then rebuild themselves as small but functional sponges over the course of several days. The same capabilities enable sponges that have been squeezed through a fine cloth to regenerate. 
A sponge fragment can only regenerate if it contains both collencytes to produce mesohyl and archeocytes to produce all the other cell types. A very few species reproduce by budding. Gemmules are "survival pods" which a few marine sponges and many freshwater species produce by the thousands when dying and which some, mainly freshwater species, regularly produce in autumn. Spongocytes make gemmules by wrapping shells of spongin, often reinforced with spicules, round clusters of archeocytes that are full of nutrients. Freshwater gemmules may also include photosynthesizing symbionts. The gemmules then become dormant, and in this state can survive cold, drying out, lack of oxygen and extreme variations in salinity. Freshwater gemmules often do not revive until the temperature drops, stays cold for a few months and then reaches a near-"normal" level. When a gemmule germinates, the archeocytes round the outside of the cluster transform into pinacocytes, a membrane over a pore in the shell bursts, the cluster of cells slowly emerges, and most of the remaining archeocytes transform into other cell types needed to make a functioning sponge. Gemmules from the same species but different individuals can join forces to form one sponge. Some gemmules are retained within the parent sponge, and in spring it can be difficult to tell whether an old sponge has revived or been "recolonized" by its own gemmules. Most sponges are hermaphrodites (function as both sexes simultaneously), although sponges have no gonads (reproductive organs). Sperm are produced by choanocytes or entire choanocyte chambers that sink into the mesohyl and form spermatic cysts while eggs are formed by transformation of archeocytes, or of choanocytes in some species. Each egg generally acquires a yolk by consuming "nurse cells". During spawning, sperm burst out of their cysts and are expelled via the osculum. If they contact another sponge of the same species, the water flow carries them to choanocytes that engulf them but, instead of digesting them, metamorphose to an ameboid form and carry the sperm through the mesohyl to eggs, which in most cases engulf the carrier and its cargo. A few species release fertilized eggs into the water, but most retain the eggs until they hatch. There are four types of larvae, but all are balls of cells with an outer layer of cells whose flagella or cilia enable the larvae to move. After swimming for a few days the larvae sink and crawl until they find a place to settle. Most of the cells transform into archeocytes and then into the types appropriate for their locations in a miniature adult sponge. Glass sponge embryos start by dividing into separate cells, but once 32 cells have formed they rapidly transform into larvae that externally are ovoid with a band of cilia round the middle that they use for movement, but internally have the typical glass sponge structure of spicules with a cobweb-like main syncytium draped around and between them and choanosyncytia with multiple collar bodies in the center. The larvae then leave their parents' bodies. Sponges in temperate regions live for at most a few years, but some tropical species and perhaps some deep-ocean ones may live for 200 years or more. Some calcified demosponges grow by only 0.2 mm (0.0079 in) per year and, if that rate is constant, specimens 1 m (3.3 ft) wide must be about 5,000 years old. Some sponges start sexual reproduction when only a few weeks old, while others wait until they are several years old.
Coordination of activities. Adult sponges lack neurons or any other kind of nervous tissue. However, most species have the ability to perform movements that are coordinated all over their bodies, mainly contractions of the pinacocytes, squeezing the water channels and thus expelling excess sediment and other substances that may cause blockages. Some species can contract the osculum independently of the rest of the body. Sponges may also contract in order to reduce the area that is vulnerable to attack by predators. In cases where two sponges are fused, for example if there is a large but still unseparated bud, these contraction waves slowly become coordinated in both of the "Siamese twins". The coordinating mechanism is unknown, but may involve chemicals similar to neurotransmitters. However, glass sponges rapidly transmit electrical impulses through all parts of the syncytium, and use this to halt the motion of their flagella if the incoming water contains toxins or excessive sediment. Myocytes are thought to be responsible for closing the osculum and for transmitting signals between different parts of the body. Sponges contain genes very similar to those that contain the "recipe" for the post-synaptic density, an important signal-receiving structure in the neurons of all other animals. However, in sponges these genes are only activated in "flask cells" that appear only in larvae and may provide some sensory capability while the larvae are swimming. This raises questions about whether flask cells represent the predecessors of true neurons or are evidence that sponges' ancestors had true neurons but lost them as they adapted to a sessile lifestyle. Sponges are worldwide in their distribution, living in a wide range of ocean habitats, from the polar regions to the tropics. Most live in quiet, clear waters, because sediment stirred up by waves or currents would block their pores, making it difficult for them to feed and breathe. The greatest numbers of sponges are usually found on firm surfaces such as rocks, but some sponges can attach themselves to soft sediment by means of a root-like base. Sponges are more abundant but less diverse in temperate waters than in tropical waters, possibly because organisms that prey on sponges are more abundant in tropical waters. Glass sponges are the most common in polar waters and in the depths of temperate and tropical seas, as their very porous construction enables them to extract food from these resource-poor waters with the minimum of effort. Demosponges and calcareous sponges are abundant and diverse in shallower non-polar waters. The different classes of sponge live in different ranges of habitat:
- Calcarea: marine; less than 100 m (330 ft) deep; hard surfaces.
- Glass sponges: marine; deep water; soft or firm sediment.
- Demosponges: marine, brackish, and about 150 freshwater species; inter-tidal to abyssal depths (a carnivorous demosponge has been found at 8,840 m (5.49 mi)); any type of surface.
As primary producers, sponges with photosynthesizing endosymbionts produce up to three times more oxygen than they consume, as well as more organic matter than they consume. Such contributions to their habitats' resources are significant along Australia's Great Barrier Reef but relatively minor in the Caribbean. Many sponges shed spicules, forming a dense carpet several meters deep that keeps away echinoderms which would otherwise prey on the sponges.
They also produce toxins that prevent other sessile organisms such as bryozoans or sea squirts from growing on or near them, making sponges very effective competitors for living space. One of many examples includes ageliferin. A few species, the Caribbean fire sponge Tedania ignis, cause a severe rash in humans who handle them. Turtles and some fish feed mainly on sponges. It is often said that sponges produce chemical defenses against such predators. However, experiments have been unable to establish a relationship between the toxicity of chemicals produced by sponges and how they taste to fish, which would diminish the usefulness of chemical defenses as deterrents. Predation by fish may even help to spread sponges by detaching fragments. However, some studies have shown fish showing a preference for non chemically defended sponges, and another study found that high levels of coral predation did predict the presence of chemically defended species. Sponge flies, also known as spongilla-flies (Neuroptera, Sisyridae), are specialist predators of freshwater sponges. The female lays her eggs on vegetation overhanging water. The larvae hatch and drop into the water where they seek out sponges to feed on. They use their elongated mouthparts to pierce the sponge and suck the fluids within. The larvae of some species cling to the surface of the sponge while others take refuge in the sponge's internal cavities. The fully grown larvae leave the water and spin a cocoon in which to pupate. The Caribbean chicken-liver sponge Chondrilla nucula secretes toxins that kill coral polyps, allowing the sponges to grow over the coral skeletons. Others, especially in the family Clionaidae, use corrosive substances secreted by their archeocytes to tunnel into rocks, corals and the shells of dead mollusks. Sponges may remove up to 1 m (3.3 ft) per year from reefs, creating visible notches just below low-tide level. Caribbean sponges of the genus Aplysina suffer from Aplysina red band syndrome. This causes Aplysina to develop one or more rust-colored bands, sometimes with adjacent bands of necrotic tissue. These lesions may completely encircle branches of the sponge. The disease appears to be contagious and impacts approximately 10 percent of A. cauliformis on Bahamian reefs. The rust-colored bands are caused by a cyanobacterium, but it is unknown whether this organism actually causes the disease. Collaboration with other organisms In addition to hosting photosynthesizing endosymbionts, sponges are noted for their wide range of collaborations with other organisms. The relatively large encrusting sponge Lissodendoryx colombiensis is most common on rocky surfaces, but has extended its range into seagrass meadows by letting itself be surrounded or overgrown by seagrass sponges, which are distasteful to the local starfish and therefore protect Lissodendoryx against them; in return the seagrass sponges get higher positions away from the sea-floor sediment. Shrimps of the genus Synalpheus form colonies in sponges, and each shrimp species inhabits a different sponge species, making Synalpheus one of the most diverse crustacean genera. Specifically, Synalpheus regalis utilizes the sponge not only as a food source, but also as a defense against other shrimp and predators. As many as 16,000 individuals inhabit a single loggerhead sponge, feeding off the larger particles that collect on the sponge as it filters the ocean to feed itself. 
Systematics and evolutionary history Linnaeus, who classified most kinds of sessile animals as belonging to the order Zoophyta in the class Vermes, mistakenly identified the genus Spongia as plants in the order Algae. For a long time thereafter sponges were assigned to a separate subkingdom, Parazoa ("beside the animals"), separate from the Eumetazoa which formed the rest of the kingdom Animalia. They have been regarded as a paraphyletic phylum, from which the higher animals have evolved. Other research indicates Porifera is monophyletic. - Hexactinellida (glass sponges) have silicate spicules, the largest of which have six rays and may be individual or fused. The main components of their bodies are syncytia in which large numbers of cell share a single external membrane. - Calcarea have skeletons made of calcite, a form of calcium carbonate, which may form separate spicules or large masses. All the cells have a single nucleus and membrane. - Most Demospongiae have silicate spicules or spongin fibers or both within their soft tissues. However, a few also have massive external skeletons made of aragonite, another form of calcium carbonate. All the cells have a single nucleus and membrane. - Archeocyatha are known only as fossils from the Cambrian period. In the 1970s, sponges with massive calcium carbonate skeletons were assigned to a separate class, Sclerospongiae, otherwise known as "coralline sponges". However, in the 1980s it was found that these were all members of either the Calcarea or the Demospongiae. So far scientific publications have identified about 9,000 poriferan species, of which: about 400 are glass sponges; about 500 are calcareous species; and the rest are demosponges. However, some types of habitat, vertical rock and cave walls and galleries in rock and coral boulders, have been investigated very little, even in shallow seas. Sponges were traditionally distributed in three classes: calcareous sponges (Calcarea), glass sponges (Hexactinellida) and demosponges (Demospongiae). However, studies have shown that the Homoscleromorpha, a group thought to belong to the Demospongiae, is actually phylogenetically well separated. Therefore, they have recently been recognized as the fourth class of sponges. |Type of cells||Spicules||Spongin fibers||Massive exoskeleton||Body form| |Calcarea||Single nucleus, single external membrane||Calcite May be individual or large masses Made of calcite if present. |Asconoid, syconoid, leuconoid or solenoid| |Hexactinellida||Mostly syncytia in all species||Silica May be individual or fused |Demospongiae||Single nucleus, single external membrane||Silica||In many species||In some species. Made of aragonite if present. |Homoscleromorpha||Single nucleus, single external membrane||Silica||In many species||Never||Sylleibid or leuconoid| Although molecular clocks and biomarkers suggest sponges existed well before the Cambrian explosion of life, silica spicules like those of demosponges are absent from the fossil record until the Cambrian. One unsubstantiated report exists of spicules in rocks dated around . Well-preserved fossil sponges from about in the Ediacaran period have been found in the Doushantuo Formation. These fossils, which include spicules, pinacocytes, porocytes, archeocytes, sclerocytes and the internal cavity, have been classified as demosponges. Fossils of glass sponges have been found from around in rocks in Australia, China and Mongolia. 
Early Cambrian sponges from Mexico belonging to the genus Kiwetinokia show evidence of fusion of several smaller spicules to form a single large spicule. Calcium carbonate spicules of calcareous sponges have been found in Early Cambrian rocks from about in Australia. Other probable demosponges have been found in the Early Cambrian Chengjiang fauna, from . Freshwater sponges appear to be much younger, as the earliest known fossils date from the Mid-Eocene period about . Although about 90% of modern sponges are demosponges, fossilized remains of this type are less common than those of other types because their skeletons are composed of relatively soft spongin that does not fossilize well. Earliest sponge symbionts are known from the early Silurian. A chemical tracer is 24-isopropylcholestane, which is a stable derivative of 24-isopropylcholesterol, which is said to be produced by demosponges but not by eumetazoans ("true animals", i.e. cnidarians and bilaterians). Since choanoflagellates are thought to be animals' closest single-celled relatives, a team of scientists examined the biochemistry and genes of one choanoflagellate species. They concluded that this species could not produce 24-isopropylcholesterol but that investigation of a wider range of choanoflagellates would be necessary in order to prove that the fossil 24-isopropylcholestane could only have been produced by demosponges. Although a previous publication reported traces of the chemical 24-isopropylcholestane in ancient rocks dating to , recent research using a much more accurately dated rock series has revealed that these biomarkers only appear before the end of the Marinoan glaciation approximately , and that "Biomarker analysis has yet to reveal any convincing evidence for ancient sponges pre-dating the first globally extensive Neoproterozoic glacial episode (the Sturtian, ~ in Oman)". While it has been argued that this 'sponge biomarker' could have originated from marine algae, recent research suggests that the algae's ability to produce this biomarker evolved only in the Carboniferous; as such, the biomarker remains strongly supportive of the presence of demosponges in the Cryogenian. Archaeocyathids, which some classify as a type of coralline sponge, are very common fossils in rocks from the Early Cambrian about , but apparently died out by the end of the Cambrian . It has been suggested that they were produced by: sponges; cnidarians; algae; foraminiferans; a completely separate phylum of animals, Archaeocyatha; or even a completely separate kingdom of life, labeled Archaeata or Inferibionta. Since the 1990s archaeocyathids have been regarded as a distinctive group of sponges. It is difficult to fit chancelloriids into classifications of sponges or more complex animals. An analysis in 1996 concluded that they were closely related to sponges on the grounds that the detailed structure of chancellorid sclerites ("armor plates") is similar to that of fibers of spongin, a collagen protein, in modern keratose (horny) demosponges such as Darwinella. However, another analysis in 2002 concluded that chancelloriids are not sponges and may be intermediate between sponges and more complex animals, among other reasons because their skins were thicker and more tightly connected than those of sponges. 
In 2008 a detailed analysis of chancelloriids' sclerites concluded that they were very similar to those of halkieriids, mobile bilaterian animals that looked like slugs in chain mail and whose fossils are found in rocks from the very Early Cambrian to the Mid Cambrian. If this is correct, it would create a dilemma, as it is extremely unlikely that totally unrelated organisms could have developed such similar sclerites independently, but the huge difference in the structures of their bodies makes it hard to see how they could be closely related. Relationships to other animal groups In the 1990s sponges were widely regarded as a monophyletic group, all of them having descended from a common ancestor that was itself a sponge, and as the "sister-group" to all other metazoans (multi-celled animals), which themselves form a monophyletic group. On the other hand, some 1990s analyses also revived the idea that animals' nearest evolutionary relatives are choanoflagellates, single-celled organisms very similar to sponges' choanocytes – which would imply that most Metazoa evolved from very sponge-like ancestors and therefore that sponges may not be monophyletic, as the same sponge-like ancestors may have given rise both to modern sponges and to non-sponge members of Metazoa. Analyses since 2001 have concluded that Eumetazoa (more complex than sponges) are more closely related to particular groups of sponges than to the rest of the sponges. Such conclusions imply that sponges are not monophyletic, because the last common ancestor of all sponges would also be a direct ancestor of the Eumetazoa, which are not sponges. A study in 2001 based on comparisons of ribosome DNA concluded that the most fundamental division within sponges was between glass sponges and the rest, and that Eumetazoa are more closely related to calcareous sponges, those with calcium carbonate spicules, than to other types of sponge. In 2007 one analysis based on comparisons of RNA and another based mainly on comparison of spicules concluded that demosponges and glass sponges are more closely related to each other than either is to calcareous sponges, which in turn are more closely related to Eumetazoa. Other anatomical and biochemical evidence links the Eumetazoa with Homoscleromorpha, a sub-group of demosponges. A comparison in 2007 of nuclear DNA, excluding glass sponges and comb jellies, concluded that: Homoscleromorpha are most closely related to Eumetazoa; calcareous sponges are the next closest; the other demosponges are evolutionary "aunts" of these groups; and the chancelloriids, bag-like animals whose fossils are found in Cambrian rocks, may be sponges. The sperm of Homoscleromorpha share with those of Eumetazoa features that those of other sponges lack. In both Homoscleromorpha and Eumetazoa layers of cells are bound together by attachment to a carpet-like basal membrane composed mainly of "type IV" collagen, a form of collagen not found in other sponges – although the spongin fibers that reinforce the mesohyl of all demosponges is similar to "type IV" collagen. The analyses described above concluded that sponges are closest to the ancestors of all Metazoa, of all multi-celled animals including both sponges and more complex groups. However, another comparison in 2008 of 150 genes in each of 21 genera, ranging from fungi to humans but including only two species of sponge, suggested that comb jellies (ctenophora) are the most basal lineage of the Metazoa included in the sample. 
If this is correct, either modern comb jellies developed their complex structures independently of other Metazoa, or sponges' ancestors were more complex and all known sponges are drastically simplified forms. The study recommended further analyses using a wider range of sponges and other simple Metazoa such as Placozoa. The results of such an analysis, published in 2009, suggest that a return to the previous view may be warranted. 'Family trees' constructed using a combination of all available data – morphological, developmental and molecular – concluded that the sponges are in fact a monophyletic group, and with the cnidarians form the sister group to the bilaterians. A very large and internally consistent alignment of 1,719 proteins at the metazoan scale, published in 2017, showed that (i) sponges – represented by Homoscleromorpha, Calcarea, Hexactinellida, and Demospongiae – are monophyletic, (ii) sponges are sister-group to all other multicellular animals, (iii) ctenophores emerge as the second-earliest branching animal lineage, and (iv) placozoans emerge as the third animal lineage, followed by cnidarians sister-group to bilaterians. - Pedro M. Alcolado - Céline Allewaert - Bernard Banaigs - Patricia Bergquist - James Scott Bowerbank - Maurice Burton - Henry John Carter - Max Walker de Laubenfels - Arthur Dendy - Édouard Placide Duchassaing de Fontbressin - Willard D. Hartman - George John Hechtel - Thomas Higgin - John N.A. Hooper - Efthimios Kefalas - Randolph Kirkpatrick - Robert J. Lendlmayer von Lendenfeld - Swee Cheng Lim - Claude Lévi - Edward Alfred Minchin - Giovanni Domenico Nardo - Stuart O. Ridley - Eduard Oscar Schmidt - Émile Topsent A report in 1997 described use of sponges as a tool by bottlenose dolphins in Shark Bay in Western Australia. A dolphin will attach a marine sponge to its rostrum, which is presumably then used to protect it when searching for food in the sandy sea bottom. The behavior, known as sponging, has only been observed in this bay, and is almost exclusively shown by females. A study in 2005 concluded that mothers teach the behavior to their daughters, and that all the sponge-users are closely related, suggesting that it is a fairly recent innovation. The calcium carbonate or silica spicules of most sponge genera make them too rough for most uses, but two genera, Hippospongia and Spongia, have soft, entirely fibrous skeletons. Early Europeans used soft sponges for many purposes, including padding for helmets, portable drinking utensils and municipal water filters. Until the invention of synthetic sponges, they were used as cleaning tools, applicators for paints and ceramic glazes and discreet contraceptives. However, by the mid-20th century, over-fishing brought both the animals and the industry close to extinction. See also sponge diving. Many objects with sponge-like textures are now made of substances not derived from poriferans. Synthetic sponges include personal and household cleaning tools, breast implants, and contraceptive sponges. Typical materials used are cellulose foam, polyurethane foam, and less frequently, silicone foam. The luffa "sponge", also spelled loofah, which is commonly sold for use in the kitchen or the shower, is not derived from an animal but mainly from the fibrous "skeleton" of the sponge gourd (Luffa aegyptiaca, Cucurbitaceae). Other biologically active compounds Lacking any protective shell or means of escape, sponges have evolved to synthesize a variety of unusual compounds. 
One such class is the oxidized fatty acid derivatives called oxylipins. Members of this family have been found to have anti-cancer, anti-bacterial and anti-fungal properties. One example isolated from the Okinawan plakortis sponges, plakoridine A, has shown potential as a cytotoxin to murine lymphoma cells. - Feuda, Roberto; Dohrmann, Martin; Pett, Walker; Philippe, Hervé; Rota-Stabelli, Omar; Lartillot, Nicolas; Wörheide, Gert; Pisani, Davide (2017). "Improved Modeling of Compositional Heterogeneity Supports Sponges as Sister to All Other Animals". Current Biology. 27 (24): 3864–3870.e4. doi:10.1016/j.cub.2017.11.008. PMID 29199080. - Pisani, Davide; Pett, Walker; Dohrmann, Martin; Feuda, Roberto; Rota-Stabelli, Omar; Philippe, Hervé; Lartillot, Nicolas; Wörheide, Gert (15 December 2015). "Genomic data do not support comb jellies as the sister group to all other animals". Proceedings of the National Academy of Sciences. 112 (50): 15402–15407. Bibcode:2015PNAS..11215402P. doi:10.1073/pnas.1518127112. ISSN 0027-8424. PMC 4687580. PMID 26621703. - Simion, Paul; Philippe, Hervé; Baurain, Denis; Jager, Muriel; Richter, Daniel J.; Franco, Arnaud Di; Roure, Béatrice; Satoh, Nori; Quéinnec, Éric (3 April 2017). "A Large and Consistent Phylogenomic Dataset Supports Sponges as the Sister Group to All Other Animals". Current Biology. 27 (7): 958–967. doi:10.1016/j.cub.2017.02.031. ISSN 0960-9822. PMID 28318975. - Giribet, Gonzalo (1 October 2016). "Genomics and the animal tree of life: conflicts and future prospects". Zoologica Scripta. 45: 14–21. doi:10.1111/zsc.12215. ISSN 1463-6409. - Laumer, Christopher E.; Gruber-Vodicka, Harald; Hadfield, Michael G.; Pearse, Vicki B.; Riesgo, Ana; Marioni, John C.; Giribet, Gonzalo (2017-10-11). "Placozoans are eumetazoans related to Cnidaria". bioRxiv 200972. - "Spongiology". Merriam-Webster Dictionary. Retrieved 27 December 2017. - http://www.cell.com/current-biology/pdf/S0960-9822(17)31453-7.pdf Improved Modeling of Compositional Heterogeneity Supports Sponges as Sister to All Other Animals - "Henry George Liddell, Robert Scott, A Greek-English Lexicon". - Vacelet & Duport 2004, pp. 179–190. - Bergquist 1978, pp. 183–185. - Bergquist 1978, pp. 120–127. - Bergquist 1978, p. 179. - A. G. Collins (December 1998). "Evaluating multiple alternative hypotheses for the origin of Bilateria: an analysis of 18S rRNA molecular evidence". Proceedings of the National Academy of Sciences of the United States of America. 95 (26): 15458–15463. Bibcode:1998PNAS...9515458C. doi:10.1073/pnas.95.26.15458. PMC 28064. PMID 9860990. - Casey W. Dunn, Andreas Hejnol, David Q. Matus, Kevin Pang, William E. Browne, Stephen A. Smith, Elaine Seaver, Greg W. Rouse, Matthias Obst, Gregory D. Edgecombe, Martin V. Sorensen, Steven H. D. Haddock, Andreas Schmidt-Rhaesa, Akiko Okusu, Reinhardt Mobjerg Kristensen, Ward C. Wheeler, Mark Q. Martindale & Gonzalo Giribet (April 2008). "Broad phylogenomic sampling improves resolution of the animal tree of life". Nature. 452 (7188): 745–749. Bibcode:2008Natur.452..745D. doi:10.1038/nature06614. PMID 18322464. - Andreas Hejnol, Matthias Obst, Alexandros Stamatakis, Michael Ott, Greg W. Rouse, Gregory D. Edgecombe, Pedro Martinez, Jaume Baguna, Xavier Bailly, Ulf Jondelius, Matthias Wiens, Werner E. G. Muller, Elaine Seaver, Ward C. Wheeler, Mark Q. Martindale, Gonzalo Giribet & Casey W. Dunn (December 2009). "Assessing the root of bilaterian animals with scalable phylogenomic methods". Proceedings of the Royal Society B: Biological Sciences. 
276 (1677): 4261–4270. doi:10.1098/rspb.2009.0896. PMC 2817096. PMID 19759036. - Joseph F. Ryan, Kevin Pang, Christine E. Schnitzler, Anh-Dao Nguyen, R. Travis Moreland, David K. Simmons, Bernard J. Koch, Warren R. Francis, Paul Havlak, Stephen A. Smith, Nicholas H. Putnam, Steven H. D. Haddock, Casey W. Dunn, Tyra G. Wolfsberg, James C. Mullikin, Mark Q. Martindale & Andreas D. Baxevanis (December 2013). "The genome of the ctenophore Mnemiopsis leidyi and its implications for cell type evolution". Science. 342 (6164): 1242592. doi:10.1126/science.1242592. PMC 3920664. PMID 24337300. - Leonid L. Moroz, Kevin M. Kocot, Mathew R. Citarella, Sohn Dosung, Tigran P. Norekian, Inna S. Povolotskaya, Anastasia P. Grigorenko, Christopher Dailey, Eugene Berezikov, Katherine M. Buckley, Andrey Ptitsyn, Denis Reshetov, Krishanu Mukherjee, Tatiana P. Moroz, Yelena Bobkova, Fahong Yu, Vladimir V. Kapitonov, Jerzy Jurka, Yuri V. Bobkov, Joshua J. Swore, David O. Girardo, Alexander Fodor, Fedor Gusev, Rachel Sanford, Rebecca Bruders, Ellen Kittler, Claudia E. Mills, Jonathan P. Rast, Romain Derelle, Victor V. Solovyev, Fyodor A. Kondrashov, Billie J. Swalla, Jonathan V. Sweedler, Evgeny I. Rogaev, Kenneth M. Halanych & Andrea B. Kohn (June 2014). "The ctenophore genome and the evolutionary origins of neural systems". Nature. 510 (7503): 109–114. Bibcode:2014Natur.510..109M. doi:10.1038/nature13400. PMC 4337882. PMID 24847885. - Krutzen M; Mann J; Heithaus M.R.; Connor R. C; Bejder L; Sherwin W.B. (21 June 2005). "Cultural transmission of tool use in bottlenose dolphins". Proceedings of the National Academy of Sciences. 102 (25): 8939–8943. Bibcode:2005PNAS..102.8939K. doi:10.1073/pnas.0500232102. PMC 1157020. PMID 15947077.. News report at Dolphin Moms Teach Daughters to Use Tools, publisher National Geographic). - Bergquist 1978, p. 29. - Bergquist 1978, p. 39. - Hooper, J. N. A., Van Soest, R. W. M., and Debrenne, F. (2002). "Phylum Porifera Grant, 1836". In Hooper, J. N. A.; Van Soest, R. W. M. Systema Porifera: A Guide to the Classification of Sponges. New York: Kluwer Academic/Plenum. pp. 9–14. ISBN 978-0-306-47260-2. - Ruppert, Fox & Barnes 2004, pp. 76–97 - Bakshani, Cassie R; Morales-Garcia, Ana L; Althaus, Mike; Wilcox, Matthew D; Pearson, Jeffrey P; Bythell, John C; Burgess, J Grant (2018-07-04). "Evolutionary conservation of the antimicrobial function of mucus: a first defence against infection". Npj Biofilms and Microbiomes. 4 (1): 14. doi:10.1038/s41522-018-0057-2. ISSN 2055-5008. PMC 6031612. PMID 30002868. - Bergquist, P. R., (1998). "Porifera". In Anderson, D.T. Invertebrate Zoology. Oxford University Press. pp. 10–27. ISBN 978-0-19-551368-4. - Hinde, R. T., (1998). "The Cnidaria and Ctenophora". In Anderson, D.T. Invertebrate Zoology. Oxford University Press. pp. 28–57. ISBN 978-0-19-551368-4. - Exposito, J-Y., Cluzel, C., Garrone, R., and Lethias, C. (1 November 2002). "Evolution of collagens". The Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology. 268 (3): 302–316. doi:10.1002/ar.10162. PMID 12382326. - Ruppert, E.E.; Fox, R.S. & Barnes, R.D. (2004). Invertebrate Zoology (7th ed.). Brooks / Cole. p. 82. ISBN 0030259827. - Ruppert, E.E.; Fox, R.S. & Barnes, R.D. (2004). Invertebrate Zoology (7th ed.). Brooks / Cole. p. 83. ISBN 0030259827. Fig. 5-7 - Leys, S. P. (2003). "The significance of syncytial tissues for the position of the Hexactinellida in the Metazoa". Integrative and Comparative Biology. 43 (1): 19–27. doi:10.1093/icb/43.1.19. 
PMID 21680406. - Ruppert, E.E.; Fox, R.S. & Barnes, R.D. (2004). Invertebrate Zoology (7th ed.). Brooks / Cole. p. 78. ISBN 0030259827. - Ruppert, Fox & Barnes 2004, p. 83. - C. Hickman, C .P. (Jr.), Roberts, L. S., and Larson, A. (2001). Integrated Principles of Zoology (11 ed.). New York: McGraw-Hill. p. 247. ISBN 978-0-07-290961-6. - Bergquist, P. R. (2001). "Porifera (Sponges)". Encyclopedia of Life Sciences. John Wiley & Sons, Ltd. doi:10.1038/npg.els.0001582. ISBN 978-0470016176. - Krautter, M. (1998). "Ecology of siliceous sponges: Application to the environmental interpretation of the Upper Jurassic sponge facies (Oxfordian) from Spain" (PDF). Cuadernos de Geología Ibérica. 24: 223–239. Archived from the original (PDF) on March 19, 2009. Retrieved 2008-10-10. - Yahel, G., Whitney, F., Reiswig, H. M., Eerkes-Medrano, D. I., and Leys, S.P. (2007). "In situ feeding and metabolism of glass sponges (Hexactinellida, Porifera) studied in a deep temperate fjord with a remotely operated submersible". Limnology and Oceanography. 52 (1): 428–440. Bibcode:2007LimOc..52..428Y. CiteSeerX 10.1.1.597.9627. doi:10.4319/lo.2007.52.1.0428. - "4 new species of 'killer' sponges discovered off Pacific coast". CBC News. April 19, 2014. Archived from the original on April 19, 2014. Retrieved 2014-09-04. - Vacelet, J. (2008). "A new genus of carnivorous sponges (Porifera: Poecilosclerida, Cladorhizidae) from the deep N-E Pacific, and remarks on the genus Neocladia" (PDF). Zootaxa. 1752: 57–65. Retrieved 2008-10-31. - Watling, L. (2007). "Predation on copepods by an Alaskan cladorhizid sponge". Journal of the Marine Biological Association of the United Kingdom. 87 (6): 1721–1726. doi:10.1017/S0025315407058560. - Vacelet, J. & Boury-Esnault, N. (1995). "Carnivorous sponges". Nature. 373 (6512): 333–335. Bibcode:1995Natur.373..333V. doi:10.1038/373333a0. - Vacelet, J. & Kelly, M. (2008). "New species from the deep Pacific suggest that carnivorous sponges date back to the Early Jurassic". Nature Precedings. doi:10.1038/npre.2008.2327.1 (inactive 2018-09-26). - News report at Brümmer, F., Pfannkuchen, M., Baltz, A., Hauser, T., and Thiel, V. (2008). "Light inside sponges". Journal of Experimental Marine Biology and Ecology. 367 (2): 61–64. doi:10.1016/j.jembe.2008.06.036. . News report at Walker, Matt (November 10, 2008). "Nature's 'fibre optics' experts". BBC News. Archived from the original on December 17, 2008. Retrieved 2008-10-10. - Ruppert, Fox & Barnes 2004, p. 239. - Ruppert, Fox & Barnes 2004, pp. 90–94. - Ruppert, Fox & Barnes 2004, pp. 87–88. - Smith, D. G. & Pennak, R. W. (2001). Pennak's Freshwater Invertebrates of the United States: Porifera to Crustacea (4 ed.). John Wiley and Sons. pp. 47–50. ISBN 978-0-471-35837-4. - Ruppert, Fox & Barnes 2004, pp. 89–90. - Ruppert, Fox & Barnes 2004, p. 77. - Leys, S., Cheung, E., and Boury-Esnault, N. (2006). "Embryogenesis in the glass sponge Oopsacas minuta: Formation of syncytia by fusion of blastomeres". Integrative and Comparative Biology. 46 (2): 104–117. doi:10.1093/icb/icj016. PMID 21672727. - Nickel, M. (December 2004). "Kinetics and rhythm of body contractions in the sponge Tethya wilhelma (Porifera: Demospongiae)". Journal of Experimental Biology. 207 (Pt 26): 4515–4524. doi:10.1242/jeb.01289. PMID 15579547. - Sakarya; O.; Armstrong; K. A.; Adamska; M.; Adamski; M.; Wang (2007). Vosshall, Leslie, ed. "A Post-Synaptic Scaffold at the Origin of the Animal Kingdom". PLOS One. 2 (6): e506. Bibcode:2007PLoSO...2..506S. doi:10.1371/journal.pone.0000506. 
PMC 1876816. PMID 17551586. - Weaver, James C.; Aizenberg, Joanna; Fantner, Georg E.; Kisailus, David; Woesz, Alexander; Allen, Peter; Fields, Kirk; Porter, Michael J.; Zok, Frank W.; Hansma, Paul K.; Fratzl, Peter; Morse, Daniel E. (2007). "Hierarchical assembly of the siliceous skeletal lattice of the hexactinellid sponge Euplectella aspergillum". Journal of Structural Biology. 158 (1): 93–106. doi:10.1016/j.jsb.2006.10.027. PMID 17175169. - Ruzicka, R. & Gleason, D. F. (2008). "Latitudinal variation in spongivorous fishes and the effectiveness of sponge chemical defenses" (PDF). Oecologia. 154 (4): 785–794. Bibcode:2008Oecol.154..785R. doi:10.1007/s00442-007-0874-0. PMID 17960425. Archived from the original (PDF) on 2008-10-06. Retrieved 2008-11-11. - Gage & Tyler 1996, pp. 91–93 - Dunlap, M.; Pawlik, J. R. (1996). "Video-monitored predation by Caribbean reef fishes on an array of mangrove and reef sponges". Marine Biology. 126 (1): 117–123. doi:10.1007/bf00571383. ISSN 0025-3162. - Loh, Tse-Lynn; Pawlik, Joseph R. (2014-03-18). "Chemical defenses and resource trade-offs structure sponge communities on Caribbean coral reefs". Proceedings of the National Academy of Sciences. 111 (11): 4151–4156. doi:10.1073/pnas.1321626111. ISSN 0027-8424. PMC 3964098. PMID 24567392. - Piper 2007, p. 148. - Gochfeld, DJ; Easson, CG; Slattery, M; Thacker, RW; Olson, JB (2012). "Population Dynamics of a Sponge Disease on Caribbean Reefs". In: Steller D, Lobel L, Eds. Diving for Science 2012. Proceedings of the American Academy of Underwater Sciences 31st Symposium. Retrieved 2013-11-17. - Olson, J. B., Gochfeld, D. J., and Slattery, M. (2006). "Aplysina red band syndrome: a new threat to Caribbean sponges". Diseases of aquatic organisms. 71 (2): 163–168. doi:10.3354/dao071163. PMID 16956064. News report at New disease threatens sponges (Practical Fishkeeping) - Wulff, J. L (June 2008). "Collaboration among sponge species increases sponge diversity and abundance in a seagrass meadow". Marine Ecology. 29 (2): 193–204. Bibcode:2008MarEc..29..193W. doi:10.1111/j.1439-0485.2008.00224.x. - Duffy, J. E. (1996). "Species boundaries, specialization, and the radiation of sponge-dwelling alpheid shrimp" (PDF). Biological Journal of the Linnean Society. 58 (3): 307–324. doi:10.1111/j.1095-8312.1996.tb01437.x. Archived from the original (PDF) on August 3, 2010. - Murphy 2002, p. 51. - "Spongia Linnaeus, 1759". World Register of Marine Species. Retrieved 2012-07-18. - Rowland, S. M. & Stephens, T. (2001). "Archaeocyatha: A history of phylogenetic interpretation". Journal of Paleontology. 75 (6): 1065–1078. doi:10.1666/0022-3360(2001)075<1065:AAHOPI>2.0.CO;2. JSTOR 1307076. Archived from the original on December 6, 2008. - Sperling, E. A.; Pisani, D.; Peterson, K. J. (January 1, 2007). "Poriferan paraphyly and its implications for Precambrian palaeobiology" (PDF). Geological Society, London, Special Publications. 286 (1): 355–368. Bibcode:2007GSLSP.286..355S. doi:10.1144/SP286.25. Archived from the original (PDF) on December 20, 2009. Retrieved 2012-08-22. - Whelan, Nathan V.; Kocot, Kevin M.; Moroz, Leonid L.; Halanych, Kenneth M. (2015-05-05). "Error, signal, and the placement of Ctenophora sister to all other animals". Proceedings of the National Academy of Sciences. 112 (18): 5773–5778. Bibcode:2015PNAS..112.5773W. doi:10.1073/pnas.1503453112. ISSN 0027-8424. PMC 4426464. PMID 25902535. - Hartman, W. D. & Goreau, T. F. (1970). "Jamaican coralline sponges: Their morphology, ecology and fossil relatives". 
Symposium of the Zoological Society of London. 25: 205–243. (cited by MGG.rsmas.miami.edu). - J. Vacelet (1985). "Coralline sponges and the evolution of the Porifera". In Conway Morris, S.; George, J. D.; Gibson, R.; Platt, H. M. The Origins and Relationships of Lower Invertebrates. Oxford University Press. pp. 1–13. ISBN 978-0-19-857181-0. - Bergquist 1978, pp. 153–154. - Gazave, E; Lapébie, P; Renard, E; Vacelet, J; Rocher, C; Ereskovsky, AV; Lavrov, DV; Borchiellini, C (14 December 2010). "Molecular phylogeny restores the supra-generic subdivision of homoscleromorph sponges (porifera, homoscleromorpha)". PLOS One. 5 (12): e14290. Bibcode:2010PLoSO...514290G. doi:10.1371/journal.pone.0014290. PMC 3001884. PMID 21179486. - Gazave, E.; Lapébie, P.; Ereskovsky, A.; Vacelet, J.; Renard, E.; Cárdenas, P.; Borchiellini, C. (May 2012). "No longer Demospongiae: Homoscleromorpha formal nomination as a fourth class of Porifera". Hydrobiologia. 687: 3–10. doi:10.1007/s10750-011-0842-x. - Cavalcanti, F. F.; Klautau, M. (2011). "Solenoid: a new aquiferous system to Porifera". Zoomorphology. 130 (4): 255–260. doi:10.1007/s00435-011-0139-7. - Sperling, E.A., Robinson, J.M., Pisani, D., and Peterson K.J. (2010). "Where's the glass? Biomarkers, molecular clocks, and microRNAs suggest a 200-Myr missing Precambrian fossil record of siliceous sponge spicules". Geobiology. 8 (1): 24–36. doi:10.1111/j.1472-4669.2009.00225.x. PMID 19929965. - Reitner, J. & Wörheide, G. (2002). "Non-Lithistid Fossil Demospongiae – Origins of their Palaeobiodiversity and Highlights in History of Preservation". In Hooper, J. N. A. & Van Soest, R. W. M. Systema Porifera: A Guide to the Classification of Sponges (PDF). New York: Kluwer Academic Plenum. Retrieved November 4, 2008. - Müller, W. E. G., Li, J., Schröder, H. C., Qiao, L., and Wang, X. (2007). "The unique skeleton of siliceous sponges (Porifera; Hexactinellida and Demospongiae) that evolved first from the Urmetazoa during the Proterozoic: a review". Biogeosciences. 4 (2): 219–232. doi:10.5194/bg-4-219-2007. - McMenamin, M. A. S. (2008). "Early Cambrian sponge spicules from the Cerro Clemente and Cerro Rajón, Sonora, México". Geologica Acta. 6 (4): 363–367. - Li, C-W., Chen, J-Y., and Hua, T-E. (1998). "Precambrian Sponges with Cellular Structures". Science. 279 (5352): 879–882. Bibcode:1998Sci...279..879L. doi:10.1126/science.279.5352.879. PMID 9452391. - "Demospongia". University of California Museum of Paleontology. Archived from the original on October 18, 2013. Retrieved 2008-11-27. - Vinn, O; Wilson, M.A.; Toom, U.; Mõtus, M.-A. (2015). "Earliest known rugosan-stromatoporoid symbiosis from the Llandovery of Estonia (Baltica)". Palaeogeography, Palaeoclimatology, Palaeoecology. 31: 1–5. doi:10.1016/j.palaeo.2015.04.023. Retrieved 2015-06-18. - Kodner, R. B., Summons, R. E., Pearson, A., King, N., and Knoll, A. H. (22 July 2008). "Sterols in a unicellular relative of the metazoans". Proceedings of the National Academy of Sciences. 105 (29): 9897–9902. Bibcode:2008PNAS..105.9897K. doi:10.1073/pnas.0803975105. PMC 2481317. PMID 18632573. - Nichols, S. & Wörheide, G. (2005). "Sponges: New Views of Old Animals". Integrative and Comparative Biology. 45 (2): 333–334. doi:10.1093/icb/45.2.333. PMID 21676777. - Love, G.D., Grosjean, E., Stalvies, C., Fike, D.A., Grotzinger, J.P., Bradley, A.S., Kelly, A.E., Bhatia, M., Meredith, W., Snape, C.E., Bowring, S.A., Condon, D.J., and Summons, R.E. (5 February 2009). 
"Fossil steroids record the appearance of Demospongiae during the Cryogenian period". Nature. 457 (7230): 718–721. Bibcode:2009Natur.457..718L. doi:10.1038/nature07673. PMID 19194449. - Antcliffe, J. B. (2013). Stouge, Svend, ed. "Questioning the evidence of organic compounds called sponge biomarkers". Palaeontology. 56: 917–925. doi:10.1111/pala.12030. - Gold, David A. (Jun 29, 2018). "The slow rise of complex life as revealed through biomarker genetics". Emerging Topics in Life Sciences: ETLS20170150. doi:10.1042/ETLS20170150. - Gold, David A.; et al. (March 8, 2016). "Sterol and genomic analyses validate the sponge biomarker hypothesis". Proceedings of the National Academy of Sciences. 113 (10) (Mar 2016, 2684–2689, DOI:): 2684–2689. doi:10.1073/pnas.1512614113. PMC 4790988. PMID 26903629. Retrieved 21 September 2018. - Porter, S. M (2008). "Skeletal microstructure indicates Chancelloriids and Halkieriids are closely related". Palaeontology. 51 (4): 865–879. doi:10.1111/j.1475-4983.2008.00792.x. - Butterfield, N. J. & C. J. Nicholas (1996). "Burgess Shale-type preservation of both non-mineralizing and "shelly" Cambrian organisms from the Mackenzie Mountains, northwestern Canada". Journal of Paleontology. 70 (6): 893–899. doi:10.2307/1306492 (inactive 2018-09-26). JSTOR 1306492. - Janussen, D., Steiner, M., and Zhu, M-Y. (2002). "New Well-preserved Scleritomes of Chancelloridae from the Early Cambrian Yuanshan Formation (Chengjiang, China) and the Middle Cambrian Wheeler Shale (Utah, USA) and paleobiological implications". Journal of Paleontology. 76 (4): 596–606. doi:10.1666/0022-3360(2002)076<0596:NWPSOC>2.0.CO;2. Free full text without images at Janussen, Dorte (2002). "(as above)". Journal of Paleontology. Archived from the original on December 10, 2008. Retrieved 2008-08-04. - Borchiellini, C., Manuel, M., Alivon, E., Boury-Esnault, N., Vacelet J., and Le Parco, Y. (2001). "Sponge paraphyly and the origin of Metazoa". Journal of Evolutionary Biology. 14 (1): 171–179. doi:10.1046/j.1420-9101.2001.00244.x. PMID 29280585. - Sperling, E.A.; Pisani, D.; Peterson, K.J. (2007). "Poriferan paraphyly and its implications for Precambrian paleobiology" (PDF). Journal of the Geological Society of London. 286: 355–368. Bibcode:2007GSLSP.286..355S. doi:10.1144/SP286.25. Archived from the original (PDF) on December 20, 2009. Retrieved 2008-11-04. - Medina, M., Collins, A. G., Silberman, J. D., and Sogin, M. L. (2001). "Evaluating hypotheses of basal animal phylogeny using complete sequences of large and small subunit rRNA". Proceedings of the National Academy of Sciences. 98 (17): 9707–9712. Bibcode:2001PNAS...98.9707M. doi:10.1073/pnas.171316998. PMC 55517. PMID 11504944. - Dunn, Casey W.; Hejnol, Andreas; Matus, David Q.; Pang, Kevin; Browne, William E.; Smith, Stephen A.; Seaver, Elaine; Rouse, Greg W.; Obst, Matthias; Edgecombe, Gregory D.; Sørensen, Martin V.; Haddock, Steven H. D.; Schmidt-Rhaesa, Andreas; Okusu, Akiko; Møbjerg Kristensen, Reinhardt; Wheeler, Ward C.; Martindale, Mark Q.; Giribet, Gonzalo (2008). "Broad phylogenomic sampling improves resolution of the animal tree of life". Nature. 452 (7188): 745–749. Bibcode:2008Natur.452..745D. doi:10.1038/nature06614. PMID 18322464. - Schierwater, B.; Eitel, M.; Jakob, W.; Osigus, J.; Hadrys, H.; Dellaporta, L.; Kolokotronis, O.; Desalle, R. (January 2009). Penny, David, ed. "Concatenated Analysis Sheds Light on Early Metazoan Evolution and Fuels a Modern "Urmetazoon" Hypothesis". PLoS Biology. 7 (1): e20. 
doi:10.1371/journal.pbio.1000020. ISSN 1544-9173. PMC 2631068. PMID 19175291. - Simion, Paul; Philippe, Hervé; Baurain, Denis; Jager, Muriel; Richter, Daniel J.; DiFranco, Arnaud; Roure, Béatrice; Satoh, Nori; Quéinnec, Éric; Ereskovsky, Alexander; Lapébie, Pascal; Corre, Erwan; Delsuc, Frédéric; King, Nicole; Wörheide, Gert; Manuel, Michaël (2017). "A Large and Consistent Phylogenomic Dataset Supports Sponges as the Sister Group to All Other Animals". Current Biology. 27 (7): 958–967. doi:10.1016/j.cub.2017.02.031. PMID 28318975. - Schoenberg, Christine (17 June 2015). "Who is who in sponge science 2015". In: Van Soest RWM et al. (2015) World Porifera Database. doi:10.13140/RG.2.1.1499.1526. Retrieved 15 April 2018. - "Chemistry and the Natural Defences of Coral – ITW with Bernard Banaigs". Tara Expeditions Foundation. Retrieved 15 April 2018. - "Willard D. Hartman". Invertebrate Zoology : Collections : Yale Peabody Museum of Natural History. 7 December 2010. Retrieved 15 April 2018. - John N.A. Hooper; Rob W.M. van Soest (2012). Systema Porifera: A Guide to the Classification of Sponges. Springer Science & Business Media. p. 30. ISBN 978-1-4615-0747-5. - Annals & Magazine of Natural History. Taylor & Francis, Limited. 1875. p. 377. - Maurizio Pansini (2004). Sponge Science in the New Millennium: Papers Contributed to the VI International Sponge Conference, Rapallo (Italy), 29 September-5 October 2002. Universitá di Genova. p. 89. - "THE VOYAGE OF H.M.S. CHALLENGER". www.19thcenturyscience.org. Zoology, Part LIX, Volume XX. 1887. - Smolker; R. A.; Connor, Richard; Mann, Janet; Berggren, Per (1997). "Sponge-carrying by Indian Ocean bottlenose dolphins: Possible tool-use by a delphinid". Ethology. 103 (6): 454–465. doi:10.1111/j.1439-0310.1997.tb00160.x. - Bergquist 1978, p. 88. - McClenachan, L. (2008). "Social conflict, Over-fishing and Disease in the Florida Sponge Fishery, 1849–1939". In Starkey, D. J.; Holm, P.; Barnard, M. Oceans Past: Management Insights from the History of Marine Animal Populations. Earthscan. pp. 25–27. ISBN 978-1-84407-527-0. - Jacobson, N. (2000). Cleavage. Rutgers University Press. p. 62. ISBN 978-0-8135-2715-4. - "Sponges". Cervical Barrier Advancement Society. 2004. Archived from the original on January 14, 2009. Retrieved 2006-09-17. - Porterfield, W. M. (1955). "Loofah — The sponge gourd". Economic Botany. 9 (3): 211–223. doi:10.1007/BF02859814. - Imhoff, J. F. & Stöhr, R. (2003). "Sponge-Associated Bacteria". In Müller, W. E. G. Sponges (Porifera): Porifera. Springer. pp. 43–44. ISBN 978-3-540-00968-9. - Teeyapant, R., Woerdenbag, H. J., Kreis, P., Hacker, J., Wray, V., Witte, L., and Proksch P. (1993). "Antibiotic and cytotoxic activity of brominated compounds from the marine sponge Verongia aerophoba". Zeitschrift für Naturforschung C. 48: 939–45. - Takeuchi, Shinji; Ishibashi, Masami; Kobayashi, Junichi (1994). "Plakoridine A, a new tyramine-containing pyrrolidine alkaloid from the Okinawan marine sponge Plakortis sp". Journal of Organic Chemistry. 59 (13): 3712–3713. doi:10.1021/jo00092a039. - Etchells, L; Sardarian A.; Whitehead R. C. (18 April 2005). "A synthetic approach to the plakoridines modeled on a biogenetic theory". Tetrahedron Letters. 46 (16): 2803–2807. doi:10.1016/j.tetlet.2005.02.124. - Bergquist, Patricia R. (1978). Sponges. London: Hutchinson. ISBN 978-0-520-03658-1. - Hickman, C., Jr.; Roberts, L. & Larson, A. (2003). Animal Diversity (3rd ed.). New York: McGraw-Hill. ISBN 978-0-07-234903-0. - Ereskovsky, Alexander V. (2010). 
The Comparative Embryology of Sponges. Russia: Springer Science+Business Media. ISBN 978-90-481-8575-7. - Piper, Ross (2007). Extraordinary Animals: An Encyclopedia of Curious and Unusual Animals. Greenwood Publishing Group. ISBN 978-0-313-33922-6. - Ruppert, Edward E.; Fox, Richard S.; Barnes, Robert D. (2004). Invertebrate Zoology (7 ed.). Brooks / COLE Publishing. ISBN 978-0-03-025982-1. - Murphy, Richard C. (2002). Coral Reefs: Cities Under The Seas. The Darwin Press, Inc. ISBN 978-0-87850-138-0. - Gage, John D.; Tyler, Paul A. (1996). Deep-sea Biology: A Natural History of Organisms at the Deep-Sea Floor. Cambridge University Press. ISBN 978-0-521-33665-9. - Vacelet, J.; Duport, E. (2004). "Prey capture and digestion in the carnivorous sponge Asbestopluma hypogea (Porifera: Demospongiae)". Zoomorphology. 123 (4): 179–190. doi:10.1007/s00435-004-0100-0. - Water flow and feeding in the phylum Porifera (sponges) – Flash animations of sponge body structures, water flow and feeding - Carsten's Spongepage, Information on the ecology and the biotechnological potential of sponges and their associated bacteria. - History of Tarpon Springs, Florida sponge industry - Nature's 'fibre optics' experts - The Sponge Reef Project - Queensland Museum information about sponges - Queensland Museum Sessile marine invertebrates collections - Queensland Museum Sessile marine invertebrates research - Sponge Guide for Britain and Ireland, Bernard Picton, Christine Morrow & Rob van Soest - World Porifera database, the world list of extant sponges, includes a searchable database.
Speech pathologists love barrier games and they are great for parents, teachers, support workers and child care workers too, really anyone who works with children. Barrier games are a fun way to develop listening skills, oral language skills, social language skills, clear talking and understanding of concepts. They are great for extending the amount of information your child can understand or express within a sentence. Barrier games are fun, cooperative, flexible, portable and inexpensive. They are easy to adjust to different children's skill levels and to make more complex as a child progresses. They can be varied to suit a wide range of age groups and can be used with one child, in a pair or with a group. Here are ten different ways that you can use barrier games to target different skills and some free downloads as well! Barrier games require a listener, a speaker, two identical sets of materials and a barrier such as a large book that will stand up. The barrier is placed between the two players so that each player cannot see the other's materials. The speaker then arranges their materials and describes to the listener what they are doing. The listener arranges their materials in the same way. Your speech pathologist can supply you with picture sets to use and there are links throughout this blog of picture sets that you can download and print. Once you know what to do with the picture sets it is easy to use other materials as well. Materials can include blocks, Lego, miniature objects, animals and figures, sticker sets, picture cards from games, coloured pencils and paper, real objects, maths materials, collage materials. Here's how to use the picture sets: - Print and laminate the sheets, with a set for each person. Leave the background sheet, or large pictures, as a full page and cut up the small pictures into individual pieces. Each player should have one background sheet and one set of small pictures. You will also need a barrier such as a large book or folder. - Next sit facing each other. Set down the background sheet and lay out the small pictures so that your child can see all of their pictures. Place your own background sheet and small pictures in front of yourself. - Check that your child knows the names of all the small pictures. If there are any they do not know, remove them or teach the word to your child. - Explain to your child that you are going to play a game to see if you are good listeners and talkers. Explain that you will put your pictures onto the background and tell the child what to do to make their picture look the same as yours. Tell them that they need to listen carefully, because they will not be able to see what you are doing. Stand up the barrier and explain that this is so that the child cannot see what you are doing and needs to listen carefully. - Place your small pictures on the background one at a time and give your child clear instructions about how to put their pictures in the same position as you go. Make sure you give your child enough time to respond before giving the next instruction. - When you have placed all the pictures on the background take the barrier away and talk to your child about the pictures that they have placed in the correct position. Explain to them that this means they have listened carefully. Explain the correct position of any pictures that the child may not have placed correctly. - Play the game again and this time, tell your child that it is their turn to talk. 
Explain that you will listen carefully and make your pictures look the same as theirs. Put the barrier up again and ask the child to tell you where to put the pictures. If your child’s instructions are not clear, you may need to cue them; for example, if your child says “put the car there” you might say “I've got the car, but I'm not sure where to put it”. - Take the barrier away and look at all the pictures that are correctly positioned and tell your child how this means that they did a good job of talking and that you listened carefully. Talk about any pictures that are in the incorrect position. Model the correct instruction such as “oh, I needed to put the cat under the tree”. Once your child understands how to play barrier games, you can use a range of items from around the house to make your own games. You can use cut out pictures, from catalogues or clip art, objects and small toys from around the house, and toy or sticker sets from “cheap shops”. You can gradually make a game more difficult by increasing the length and complexity of the instructions, and the number of items that need to be placed. You can introduce concepts of space such as: in, on, under, next to, above and below. You can also introduce concepts of colour and size. Here are five ways that you can use barrier games to develop communication skills: 1. Listening and auditory memory. To develop listening and auditory memory skills you need to begin with an instruction that your child can follow easily and gradually increase the length of the instructions. Begin with a simple barrier game and give your child some instructions with two key words such as "put the dog in the car". If your child can do this, repeat five similar two-key-word instructions and note how many they get exactly right without any help. If four or five of five turns are correct try three key words: "put the cat and the dog in the car". If these are mostly correct try four: "put the cat in the truck and the dog in the car". You want to work where your child is getting some correct, but not all, say two or three out of five. If your child can do four key words consistently add some concept words as in part four below and keep counting key words until you find the level to work at. There are more ideas and activities for developing skills with following instructions here. You can print two copies of the following instructions activities to turn them into barrier games. There is more information about developing listening and following instructions here. 2. Extending sentence length. To extend your child's sentence length you can work through the number of key words your child can do as above but get your child to tell you what to do with the pictures. So first ask them where to put one picture, then two and so on. Practice at a level where your child needs a little help but not too much and gradually add an extra word as they get more skilled and confident. If your child finds it hard to form the sentences, taking turns ('I'll tell you, then you tell me') can help, because you can model for your child how to make the sentences. If your child has a go but makes some mistakes, model their sentence back to them the right way, but be encouraging and positive. Here are some simple barrier games for practicing both listening and forming longer sentences. 3. Developing vocabulary. Barrier games are great for practicing vocabulary. For younger children begin with games where they know most of the words and add a few new ones. 
Start with your child listening and then work to them using the words to tell you what to do. Barrier games are also great for familiarising preschool children with vocabulary for themes at pre-school or school. If your child is going to be learning about dinosaurs or Africa or insects, why not make a barrier game and use this to practice the vocabulary your child will be needing to understand and use. The more familiar these words are the easier learning will be, the more confident your child will feel and the more information your child will remember. Ask your child's teacher for a word list and use Google Images to find some pictures to use. For more information about developing vocabulary and some lotto games that can make simple barrier games click here. 4. Developing understanding and use of concepts. Once your child knows how to do barrier games, can follow simple instructions with at least three key words, and form simple sentences, you can begin to add some concepts such as size, colour and position words to your games. Remember to keep in mind how many key words your child can understand and that the concept words count too. "Put the big pig in the truck" would be three key words. "Put the big pig under the truck" would be four, because you need to pay attention to the position word as well. Here are some pictures you can use in your games to practice colour and size concepts. For more information on developing concepts click here and you can also download some games for developing position words. Print two copies to make barrier games! 5. Developing descriptive language. As your child's skills develop you can use barrier games to develop descriptive language. One fun and easy way to do this is to make a rule that you cannot say the names of the pictures, so to "put the dog on the boat" you might have to say "put the brown animal that barks on the blue vehicle that goes in the water". Another way is to use more complex materials. There are some more complex barrier games to print off here. A way to really challenge those descriptive skills is to use three dimensional materials such as blocks or objects such as plastic animals. This requires much more careful description to place the items in the correct spots. You can begin by placing items such as plastic animals onto a printed board such as the one from the farm game, then move to using other objects such as trees, fences or blocks to form the background instead of the printed sheet. Drawing is also challenging. You might start with a grid as a background to help with positioning, then move to a blank page as your child becomes more skilled. Here are some more ideas for developing descriptive language. Variations on barrier games. Rather than one speaker and one listener, two people can take turns giving the instructions. This is a good way to support beginners. If you are working with groups of children you can play barrier games in a group. The children can sit in a half circle and one child sits with their back to the rest of the group and gives the instructions. Alternatively, children can take turns giving one instruction each. You can do barrier games as a whole class activity using photocopied background boards and instructions which involve drawing, pasting or colouring on the sheets. We hope you and your child have lots of fun with barrier games. Check back next fortnight (next week will be an occupational therapy topic) for five more ways to use barrier games with some more links and downloads for: 6. 
Developing grammar skills 7. Transferring new speech sounds to conversation 8. Developing clearer speech 9. Developing theory of mind 10. Developing written language Why not follow so you don't miss out!
Chapter 12 -- KEY TERMS
Measurement: Rules for assigning numbers to objects to represent quantities of attributes.
Nominal scale: Measurement in which numbers are assigned to objects or classes of objects solely for the purpose of identification.
Ordinal scale: Measurement in which numbers are assigned to data on the basis of some order (for example, more than, greater than) of the objects.
Interval scale: Measurement in which the assigned numbers legitimately allow the comparison of the size of the differences among and between members.
Ratio scale: Measurement that has a natural, or absolute, zero and therefore allows the comparison of absolute magnitudes of the numbers.
Hypothetical construct: A concept used in theoretical models to explain how things work. Hypothetical constructs include such things as attitudes, personality, and intentions: things that cannot be seen but that are useful in theoretical explanations.
Conceptual definition: A definition in which a given construct is defined in terms of other constructs in the set, sometimes in the form of an equation that expresses the relationship among them.
Operational definition: A definition of a construct that describes the operations to be carried out in order for the construct to be measured empirically.
Systematic error: Error in measurement that is also known as constant error, since it affects the measurement in a constant way.
Random error: Error in measurement due to temporary aspects of the person or measurement situation, which affects the measurement in irregular ways.
Validity: The extent to which differences in scores on a measuring instrument reflect true differences among individuals, groups, or situations in the characteristic that it seeks to measure, or true differences in the same individual, group, or situation from one occasion to another, rather than systematic or random errors.
Reliability: Ability of a measure to obtain similar scores for the same object, trait, or construct across time, across different evaluators, or across the items forming the measure.
Predictive validity: The usefulness of the measuring instrument as a predictor of some other characteristic or behavior of the individual; it is sometimes called criterion-related validity.
Content validity: The adequacy with which the important aspects of the characteristic are captured by the measure; it is sometimes called face validity.
Construct validity: Assessment of how well the instrument captures the construct, concept, or trait it is supposed to be measuring.
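To make the scale types above concrete, here is a minimal, hypothetical Python sketch (the data values and variable names are invented for illustration) showing which summary statistics remain meaningful at each level of measurement.

```python
from statistics import mean, median

# Hypothetical data measured at the four scale levels.
jersey_numbers = [7, 10, 23, 10]    # nominal: numbers are labels only
satisfaction = [1, 2, 2, 3, 5]      # ordinal: order matters, but gaps may not be equal
temps_celsius = [10.0, 20.0, 30.0]  # interval: differences are meaningful, zero is arbitrary
weights_kg = [50.0, 75.0, 100.0]    # ratio: absolute zero, so ratios are meaningful

# Nominal data supports only identification and counting (e.g., the mode).
mode = max(set(jersey_numbers), key=jersey_numbers.count)

# Ordinal data additionally supports order-based summaries such as the median.
med = median(satisfaction)

# Interval data supports means and comparisons of differences,
# but "30 C is three times as hot as 10 C" is not a legitimate claim.
avg_temp = mean(temps_celsius)
equal_steps = (temps_celsius[2] - temps_celsius[1]) == (temps_celsius[1] - temps_celsius[0])

# Ratio data supports all of the above plus ratios of absolute magnitudes.
ratio = weights_kg[2] / weights_kg[0]  # 100 kg really is twice 50 kg

print(mode, med, avg_temp, equal_steps, ratio)
```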
UCLA physicists have proposed new theories for how the universe's first black holes might have formed and the role they might play in the production of heavy elements such as gold, platinum and uranium. Two papers on their work were published in the journal Physical Review Letters. A long-standing question in astrophysics is whether the universe's very first black holes came into existence less than a second after the Big Bang or whether they formed only millions of years later during the deaths of the earliest stars. Alexander Kusenko, a UCLA professor of physics, and Eric Cotner, a UCLA graduate student, developed a compellingly simple new theory suggesting that black holes could have formed very shortly after the Big Bang, long before stars began to shine. Astronomers have previously suggested that these so-called primordial black holes could account for all or some of the universe's mysterious dark matter and that they might have seeded the formation of supermassive black holes that exist at the centers of galaxies. The new theory proposes that primordial black holes might help create many of the heavier elements found in nature. The researchers began by considering that a uniform field of energy pervaded the universe shortly after the Big Bang. Scientists expect that such fields existed in the distant past. After the universe rapidly expanded, this energy field would have separated into clumps. Gravity would cause these clumps to attract one another and merge together. The UCLA researchers proposed that some small fraction of these growing clumps became dense enough to become black holes. Their hypothesis is fairly generic, Kusenko said, and it doesn't rely on what he called the "unlikely coincidences" that underpin other theories explaining primordial black holes. The paper suggests that it's possible to search for these primordial black holes using astronomical observations. One method involves measuring the very tiny changes in a star's brightness that result from the gravitational effects of a primordial black hole passing between Earth and that star. Earlier this year, U.S. and Japanese astronomers published a paper on their discovery of one star in a nearby galaxy that brightened and dimmed precisely as if a primordial black hole was passing in front of it. In a separate study, Kusenko, Volodymyr Takhistov, a UCLA postdoctoral researcher, and George Fuller, a professor at UC San Diego, proposed that primordial black holes might play an important role in the formation of heavy elements such as gold, silver, platinum and uranium, which could be ongoing in our galaxy and others. The origin of those heavy elements has long been a mystery to researchers. "Scientists know that these heavy elements exist, but they're not sure where these elements are being formed," Kusenko said. "This has been really embarrassing." The UCLA research suggests that a primordial black hole occasionally collides with a neutron star -- the city-sized, spinning remnant of a star that remains after some supernova explosions -- and sinks into its depths. When that happens, Kusenko said, the primordial black hole consumes the neutron star from the inside, a process that takes about 10,000 years. As the neutron star shrinks, it spins even faster, eventually causing small fragments to detach and fly off. Those fragments of neutron-rich material may be the sites in which neutrons fuse into heavier and heavier elements, Kusenko said. 
However, the probability of a neutron star capturing a black hole is rather low, said Kusenko, which is consistent with observations of only some galaxies being enriched in heavy elements. The theory that primordial black holes collide with neutron stars to create heavy elements also explains the observed lack of neutron stars in the center of the Milky Way galaxy, a long-standing mystery in astrophysics. This winter, Kusenko and his colleagues will collaborate with scientists at Princeton University on computer simulations of the heavy elements produced by a neutron star-black hole interaction. By comparing the results of those simulations with observations of heavy elements in nearby galaxies, the researchers hope to determine whether primordial black holes are indeed responsible for Earth's gold, platinum and uranium. The research was supported by the U.S. Department of Energy, the National Science Foundation and Japan's World Premier International Research Center Initiative of the Ministry of Education, Culture, Sports, Science and Technology.
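As a rough illustration of the brightness-change search mentioned above (this sketch is not taken from the UCLA papers), the standard point-lens microlensing model predicts how strongly a background star is magnified as a compact object passes in front of it; the event parameters below are invented for the example.

```python
import math

def point_lens_magnification(u):
    """Magnification of a background star by a point-mass lens at
    impact parameter u, measured in units of the Einstein radius."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

def light_curve(t, t0, u0, tE):
    """Magnification vs. time t for a lens crossing the line of sight.
    t0 = time of closest approach, u0 = minimum impact parameter,
    tE = Einstein-radius crossing time (all in the same time units)."""
    u = math.sqrt(u0**2 + ((t - t0) / tE)**2)
    return point_lens_magnification(u)

# Hypothetical event: closest approach at t = 0, u0 = 0.3, tE = 1 (e.g., in days).
for t in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    print(t, round(light_curve(t, t0=0.0, u0=0.3, tE=1.0), 3))
```

The sharp peak in magnification near the time of closest approach is exactly the kind of temporary brightening and dimming described above.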
Surface area of a cone can be defined as the total area covered by its surface. The total surface area of a cone includes the base area as well as the lateral (curved) surface area of the cone. A cone is a three-dimensional solid with a circular base that narrows to a point, giving it a pyramid-like form. As discussed below, the area of a given object can be classified into three types. In the case of cones, the area comprises two sections: the 'Curved Surface Area' and the 'Total Surface Area'.
Definition of Surface Area
The area or region occupied by the surface of an object is called the surface area of that object. For a given three-dimensional shape, the area can be classified into three types. They are as follows: - Curved Surface Area - Lateral Surface Area - Total Surface Area
Curved Surface Area: The area of all the curved regions of the solid.
Lateral Surface Area: The area of all the regions except the bases of the object, i.e., the top and bottom.
Total Surface Area: The area of all the sides, top and bottom of the solid object.
Curved Surface Area of Cone
The surface which excludes the base of the cone is termed the 'curved surface' of the cone, and the area of that surface is called the 'Curved Surface Area of Cone'. It is calculated as follows:
Curved Surface Area of cone = πrl
where r = radius of the circular base of the cone, and l = slant height of the cone.
Total Surface Area of a Cone
The total surface area of a cone is the total area the cone occupies in three-dimensional space. It is the sum of the curved surface area of the cone and the area of the base of the cone:
Total Surface Area of the cone = CSA + Area of Circular Base = πrl + πr²
Total Surface Area of cone = πr(l + r)
Surface Area of Sphere
A sphere is a 3-d object which is perfectly round. Being a 3-d object means that it extends along three axes, i.e., the x-axis, y-axis, and z-axis; this is the major difference between a circle and a sphere. Like other 3D shapes, a sphere does not contain any edges or vertices. The area which covers the outer surface of a given sphere is called the surface area of that sphere. Visualizing a sphere, we can see it is the three-dimensional shape formed by rotating a circular disc about one of its diameters. Suppose an unpainted ball is to be painted. To paint the whole ball, the quantity of paint required has to be calculated beforehand, so the area of the entire outer surface has to be found. We call this area the total surface area of the ball.
Formula for the Surface Area of Sphere
The formula for the surface area of a sphere depends on its radius. If the radius of the sphere is 'r' and its surface area is S, then the surface area of the sphere is given by
S = 4πr²
In case the diameter is given, the surface area of the sphere is given as
S = 4π(d/2)² = πd²
where 'd' is the diameter of the sphere. You can practice this topic at Cuemath to learn it in a fun way. 
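As a quick numerical check of these formulas, here is a short Python sketch (the input values are arbitrary) that computes the curved and total surface area of a cone from its base radius and vertical height, and the surface area of a sphere.

```python
import math

def cone_surface_areas(r, h):
    """Curved and total surface area of a right circular cone with
    base radius r and vertical height h."""
    l = math.sqrt(r**2 + h**2)     # slant height
    csa = math.pi * r * l          # curved surface area = pi * r * l
    tsa = math.pi * r * (l + r)    # total surface area = pi * r * (l + r)
    return csa, tsa

def sphere_surface_area(r):
    """Surface area of a sphere of radius r: 4 * pi * r^2."""
    return 4 * math.pi * r**2

print(cone_surface_areas(r=3, h=4))   # slant height 5 -> (about 47.12, about 75.40)
print(sphere_surface_area(r=3))       # about 113.10
```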
Applications of the Surface Area of Sphere
Surface area of an object can be used for knowing things that are proportional to the surface area. - If a spherical object is to be painted, then we can find how much paint it will take to cover the object. - To find out the surface tension in the case of liquid bubbles. - If the surface area is known, then the cost of the materials applied on the surface of the sphere can be calculated accurately. As a practical application of the surface area of a sphere, the calculation of the total and curved surface area of the hemisphere can also be included, as it is itself a part of the sphere.
Calculation of Total and Curved Surface Area of Hemisphere
As the word suggests, a hemisphere is a sphere cut into half. Because it is half a sphere capped by a flat circular base, the hemisphere has both a curved surface area and a total surface area, and the two are calculated accordingly. Since the hemisphere is half of the sphere:
Curved Surface Area of Hemisphere = 1/2 × 4πr² (half the total surface area of the sphere) = 2πr²
Total Surface Area of Hemisphere = Curved Surface Area of Hemisphere + Area of Base (circle with radius 'r') = 2πr² + πr² = 3πr²
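The hemisphere formulas can be sketched the same way, halving the sphere's area and adding the circular base (again, the example radius is arbitrary).

```python
import math

def hemisphere_surface_areas(r):
    """Curved and total surface area of a solid hemisphere of radius r."""
    csa = 2 * math.pi * r**2       # half of the sphere's 4 * pi * r^2
    tsa = csa + math.pi * r**2     # add the flat circular base -> 3 * pi * r^2
    return csa, tsa

print(hemisphere_surface_areas(r=3))  # (about 56.55, about 84.82)
```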
In mathematics, the winding number or winding index of a closed curve in the plane around a given point is an integer representing the total number of times that curve travels counterclockwise around the point. The winding number depends on the orientation of the curve, and is negative if the curve travels around the point clockwise. Winding numbers are fundamental objects of study in algebraic topology, and they play an important role in vector calculus, complex analysis, geometric topology, differential geometry, and physics, including string theory. Suppose we are given a closed, oriented curve in the xy plane. We can imagine the curve as the path of motion of some object, with the orientation indicating the direction in which the object moves. Then the winding number of the curve is equal to the total number of counterclockwise turns that the object makes around the origin. When counting the total number of turns, counterclockwise motion counts as positive, while clockwise motion counts as negative. For example, if the object first circles the origin four times counterclockwise, and then circles the origin once clockwise, then the total winding number of the curve is three. Using this scheme, a curve that does not travel around the origin at all has winding number zero, while a curve that travels clockwise around the origin has negative winding number. Therefore, the winding number of a curve may be any integer. [Figure: example curves with winding numbers from −2 to 3.]
A curve in the xy plane can be defined by parametric equations x = x(t) and y = y(t), for 0 ≤ t ≤ 1. If we think of the parameter t as time, then these equations specify the motion of an object in the plane between t = 0 and t = 1. The path of this motion is a curve as long as the functions x(t) and y(t) are continuous. This curve is closed as long as the position of the object is the same at t = 0 and t = 1. We can define the winding number of such a curve using the polar coordinate system. Assuming the curve does not pass through the origin, we can rewrite the parametric equations in polar form: x(t) = r(t) cos θ(t) and y(t) = r(t) sin θ(t). The functions r(t) and θ(t) are required to be continuous, with r > 0. Because the initial and final positions are the same, θ(0) and θ(1) must differ by an integer multiple of 2π. This integer is the winding number: winding number = (θ(1) − θ(0)) / 2π. This defines the winding number of a curve around the origin in the xy plane. By translating the coordinate system, we can extend this definition to include winding numbers around any point p. Winding number is often defined in different ways in various parts of mathematics. All of the definitions below are equivalent to the one given above: A simple combinatorial rule for defining the winding number was proposed by August Ferdinand Möbius in 1865 and again independently by James Waddell Alexander II in 1928. Any curve partitions the plane into several connected regions, one of which is unbounded. The winding numbers of the curve around two points in the same region are equal. The winding number around (any point in) the unbounded region is zero. Finally, the winding numbers for any two adjacent regions differ by exactly 1; the region with the larger winding number appears on the left side of the curve (with respect to motion down the curve). In differential geometry, parametric equations are usually assumed to be differentiable (or at least piecewise differentiable). 
In this case, the polar coordinate θ is related to the rectangular coordinates x and y by θ = arctan(y/x) (up to the choice of branch), so that dθ = (x dy − y dx) / (x² + y²). The one-form dθ (defined on the complement of the origin) is closed but not exact, and it generates the first de Rham cohomology group of the punctured plane. In particular, if ω is any closed differentiable one-form defined on the complement of the origin, then the integral of ω along closed loops gives a multiple of the winding number. In the complex plane, the basic properties of the winding number follow from its expression as a contour integral: for a closed curve γ that does not pass through the point a, the winding number of γ around a equals (1/(2πi)) times the integral of dz/(z − a) taken around γ, and this value is always an integer. In topology, the winding number is an alternate term for the degree of a continuous mapping. In physics, winding numbers are frequently called topological quantum numbers. In both cases, the same concept applies. Maps from the 3-sphere to itself are also classified by an integer which is also called the winding number or sometimes the Pontryagin index. One can also consider the winding number of the path with respect to the tangent of the path itself. As a path followed through time, this would be the winding number with respect to the origin of the velocity vector. In this case the example illustrated at the beginning of this article has a winding number of 3, because the small loop is counted. This is only defined for immersed paths (i.e., for differentiable paths with nowhere vanishing derivatives), and is the degree of the tangential Gauss map. The winding number is closely related to the (2 + 1)-dimensional continuous Heisenberg ferromagnet equations and their integrable extensions: the Ishimori equation, etc. Solutions of the last equations are classified by the winding number or topological charge (topological invariant and/or topological quantum number).
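For a closed curve given numerically as a polygon, the winding number around a point can be computed by summing the signed angles subtended at that point, as in this illustrative Python sketch (the polygon and test points are made up for the example).

```python
import math

def winding_number(vertices, p=(0.0, 0.0)):
    """Winding number of the closed polygon `vertices` around the point p,
    obtained by summing the signed turning angles about p and dividing by 2*pi."""
    px, py = p
    total_angle = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i][0] - px, vertices[i][1] - py
        x2, y2 = vertices[(i + 1) % n][0] - px, vertices[(i + 1) % n][1] - py
        # Signed angle from (x1, y1) to (x2, y2): atan2(cross product, dot product).
        total_angle += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total_angle / (2 * math.pi))

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # traversed counterclockwise
print(winding_number(square))                   # 1
print(winding_number(square[::-1]))             # -1: reversing orientation flips the sign
print(winding_number(square, p=(5.0, 0.0)))     # 0: the point lies outside the curve
```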
Students examine how different balls react when colliding with different surfaces, giving plenty of opportunity for them to see the difference between elastic and inelastic collisions, learn how to calculate momentum, and understand the principle of conservation of momentum. Students build their own small-scale model roller coasters using pipe insulation and marbles, and then analyze them using physics principles learned in the associated lesson. They examine conversions between kinetic and potential energy and frictional effects to design roller coasters that are completely driven by gravity. A class competition using different marble types to represent different passenger loads determines the most innovative and successful roller coasters. Students experiment with an online virtual laboratory set at a skate park. They make predictions of graphs before they use the simulation to create graphs of energy vs. time under different conditions. This simulation experimentation strengthens their comprehension of conservation of energy solely between gravitational potential energy and kinetic energy. Students will explore the definition of energy by making careful observations about simple toys that illustrate basic principles of energy. Students learn about kinetic and potential energy, including various types of potential energy: chemical, gravitational, elastic and thermal energy. They identify everyday examples of these energy types, as well as the mechanism of corresponding energy transfers. They learn that energy can be neither created nor destroyed and that relationships exist between a moving object's mass and velocity. Further, the concept that energy can be neither created nor destroyed is reinforced, as students see the pervasiveness of energy transfer among its many different forms. A PowerPoint(TM) presentation and post-quiz are provided. As a weighted plastic egg is dropped into a tub of flour, students see the effect that different heights and masses of the same object have on the overall energy of that object while observing a classic example of potential (stored) energy transferred to kinetic energy (motion). The plastic egg's mass is altered by adding pennies inside it. Because the egg's shape remains constant, and only the mass and height are varied, students can directly visualize how these factors influence the amounts of energy that the eggs carry for each experiment, verified by measurement of the resulting impact craters. Students learn the equations for kinetic and potential energy and then make predictions about the depths of the resulting craters for drops of different masses and heights. They collect and graph their data, comparing it to their predictions, and verifying the relationships described by the equations. This classroom demonstration is also suitable as a small group activity. Using the LEGO MINDSTORMS(TM) NXT kit, students construct experiments to measure the time it takes a free-falling body to travel a specified distance. Students use the touch sensor, rotational sensor, and the NXT brick to measure the time of flight for the falling object at different release heights. After the object is released from its holder and travels a specified distance, a touch sensor is triggered and the time of the object's descent from release to impact at the touch sensor is recorded and displayed on the screen of the NXT. Students calculate the average velocity of the falling object from each point of release, and construct a graph of average velocity versus time. 
They also create a best fit line for the graph using spreadsheet software. Students use the slope of the best fit line to determine their experimental g value and compare this to the standard value of g (a worked sketch of this step appears after these activity descriptions). Background: students are familiar with static electricity, charge, and sparks. They also know about conservation of energy, forms of energy including potential energy, power, and work. Students will complete a variety of activities using breadboards, which will display various types of circuits and their effect on the flow of electricity. In this hands-on activity, rolling a ball down an incline and having it collide with a cup demonstrates the concepts of mechanical energy, work and power, momentum, and friction. During the activity, students take measurements and use equations that describe these energy of motion concepts to calculate unknown variables, and review the relationships between these concepts. Students are introduced to renewable energy, including its relevance and importance to our current and future world. They learn the mechanics of how wind turbines convert wind energy into electrical energy and the concepts of lift and drag. Then they apply real-world technical tools and techniques to design their own aerodynamic wind turbines that efficiently harvest the most wind energy. Specifically, teams each design a wind turbine propeller attachment. They sketch rotor blade ideas, create CAD drawings (using Google SketchUp) of the best designs and make them come to life by fabricating them on a 3D printer. They attach, test and analyze different versions and/or configurations using a LEGO wind turbine, fan and an energy meter. At activity end, students discuss their results and the most successful designs, the aerodynamic characteristics affecting a wind turbine's ability to efficiently harvest wind energy, and ideas for improvement. The activity is suitable for a class/team competition. Example 3D rotor blade designs are provided. This resource is a lesson and project to guide students through using a roller coaster simulation to explain how energy can be transformed from one form to another (energy transformation). The lesson, resources, project, and energy quiz can be accessed and modified through a Google Slides presentation (http://bit.ly/RollerCoasterEnergyTransformations). In this lesson students investigate how the distance of stretch in a rubber band at rest relates to the distance the rubber band travels after being released. Students will be pulling rubber bands back to five different stretch lengths. They will then measure how far the rubber bands fly when released from the different stretch lengths and then record the results in a data table. Students conduct an experiment to determine the relationship between the speed of a wooden toy car at the bottom of an incline and the height at which it is released. They observe how the photogate-based speedometer instrument "clocks" the average speed of an object (the train). They gather data and create graphs plotting the measured speed against start height. After the experiment, as an optional extension activity, students design brakes to moderate the speed of the cart at the bottom of the hill to within a specified speed range. Students learn the history of the waterwheel and common uses for water turbines today. They explore kinetic energy by creating their own experimental waterwheel from a two-liter plastic bottle.
They investigate the transformations of energy involved in turning the blades of a hydro-turbine into work, and experiment with how weight affects the rotational rate of the waterwheel. Students also discuss and explore the characteristics of hydroelectric plants. Students learn how engineers transform wind energy into electrical energy by building their own miniature wind turbines and measuring the electrical current they produce. They explore how design and position affect the electrical energy production.
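As an illustration of the analysis step in the free-fall activity described above (the "experimental g" obtained from the best-fit line of average velocity versus time), here is a minimal sketch using invented example data; the release heights, fall times, and variable names are assumptions for illustration only, not values from the activity itself. For a drop from rest, the average velocity equals g·t/2, so the slope of the best-fit line is g/2 and the experimental g is twice that slope.

```python
# Hypothetical release heights (m) and measured fall times (s) -- example data only.
heights_m = [0.5, 1.0, 1.5, 2.0]
times_s = [0.319, 0.452, 0.553, 0.639]
avg_velocities = [h / t for h, t in zip(heights_m, times_s)]   # average velocity = height / time

# Least-squares slope of average velocity versus time (the "best fit line" step).
n = len(times_s)
mean_t = sum(times_s) / n
mean_v = sum(avg_velocities) / n
slope = sum((t - mean_t) * (v - mean_v) for t, v in zip(times_s, avg_velocities)) \
        / sum((t - mean_t) ** 2 for t in times_s)

# For free fall from rest, average velocity = (g/2) * t, so experimental g = 2 * slope.
print(f"experimental g ≈ {2 * slope:.2f} m/s^2 (accepted value 9.81 m/s^2)")
```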
RAM (Random Access Memory) is an important component in a computer that functions as a temporary storage area for data and instructions that are being used by the CPU (Central Processing Unit). There are two types of RAM that are most commonly used: Static RAM (SRAM) and Dynamic RAM (DRAM). In this article, we will discuss the differences between SRAM and DRAM, the advantages and disadvantages of each type of RAM, and how SRAM and DRAM work. Static RAM (SRAM) SRAM is a type of RAM that uses flip-flops to store data. A flip-flop is a digital logic circuit that maintains its state (0 or 1) for as long as power is supplied, without needing to be refreshed. Therefore, SRAM can store data without the need for continuous refresh. Advantages of SRAM One of the advantages of SRAM is its high speed. Because data can be accessed directly without waiting for a refresh cycle, SRAM has very fast access times, even faster than DRAM. Some SRAM devices are also dual-ported, meaning they can read and write data simultaneously; this allows SRAM to be used in applications that require access to multiple data locations at the same time. Disadvantages of SRAM The main drawback of SRAM is that it is more expensive than DRAM. In addition, SRAM has a lower data density than DRAM, meaning it requires more space to store the same amount of data. Dynamic RAM (DRAM) DRAM is a type of RAM that uses capacitors to store data. A capacitor is an electronic component that can store electric charge, and each data bit in DRAM is stored in a capacitor. However, capacitors have the disadvantage that their charge leaks away over time. Therefore, the data in DRAM must be refreshed regularly to keep its charge intact. Advantages of DRAM The main advantage of DRAM is that it is cheaper than SRAM. In addition, DRAM has a higher data density, meaning it can store more data in a smaller space than SRAM. Disadvantages of DRAM The main drawback of DRAM is that it is slower than SRAM. Because data must be refreshed regularly, DRAM has a slower access time than SRAM. Additionally, DRAM can only read or write data at one location at a time (it is single-ported), which limits its use in applications that require access to multiple data locations simultaneously. The difference between Static RAM and Dynamic RAM There are several key differences between SRAM and DRAM:
- Method of storage: SRAM stores data in flip-flop latches; DRAM stores data as a charge on a capacitor.
- Refreshing: SRAM does not require refreshing; DRAM requires periodic refreshing to prevent data loss.
- Speed: SRAM is faster because data never has to wait for a refresh cycle; DRAM is slower because of refresh cycles and the extra steps needed to sense the charge on a capacitor.
- Density: SRAM has lower density because each cell is larger (several transistors per bit); DRAM has higher density because each cell is smaller (one transistor and one capacitor per bit).
- Power consumption: SRAM generally consumes more power because each cell uses more transistors; DRAM consumes less power per bit, although it needs additional power for refresh.
- Cost: SRAM is more expensive per bit because its larger cells give lower density; DRAM is less expensive per bit because its smaller cells give higher density.
How Static RAM and Dynamic RAM Work SRAM and DRAM work in different ways. SRAM uses flip-flops that maintain their state without refreshing, whereas DRAM uses capacitors that require a regular refresh to maintain their charge. When the CPU requires data or instructions from RAM, a request signal is sent to RAM; RAM then retrieves the data from memory and sends it to the CPU. In SRAM, when the CPU needs data, the value is read directly from the flip-flops that hold it.
Because the data can be read directly from the flip-flop, the access time is very fast. When the CPU wants to write data to RAM, the flip-flop is set to the new value and the data is stored in memory. In DRAM, when the CPU needs data, the charge stored on the capacitors is sensed and read out. However, because capacitors lose their charge over time, the data must be refreshed regularly to maintain that charge. When the CPU wants to write data to RAM, the capacitor is charged or discharged and the data is stored in memory. The choice between SRAM and DRAM depends on the specific requirements of the system. If the system requires fast data access and only modest capacity, SRAM can be a good choice. However, if the system requires high data density at a lower cost, DRAM may be a better choice. Apart from that, there are also other types of RAM, such as SDRAM (Synchronous Dynamic RAM) and DDR (Double Data Rate) SDRAM, which are faster than conventional asynchronous DRAM and cheaper than SRAM. SDRAM and DDR SDRAM can be good choices for systems that require fast access times and high data densities at a more affordable price. In everyday use, the choice of SRAM or DRAM may not matter much to the end user, but the difference between the two can be seen in the performance of the system and the price of the product. For example, in a gaming computer that requires fast data access, fast SRAM caches can significantly improve system performance, while laptops and mobile devices that need high data density at an affordable price rely on DRAM or SDRAM for main memory. In the tech industry, RAM development continues, and faster and cheaper types of RAM may appear in the future; however, the fundamental difference between SRAM and DRAM remains the basis for the development of new RAM technologies. Examples of SRAM products: - CY7C199CN-15ZXI Cypress Semiconductor SRAM, 32Kx8, 15ns, 5V - AS7C1024-15TIN Alliance Memory SRAM, 1Mx8, 15ns, 5V - IDT71V416L10PHI IDT SRAM, 4Mx16, 10ns, 3.3V Examples of DRAM products: - MT41K256M16TW-107:P Micron DRAM, 4Gb, 16Mx16, DDR3 SDRAM, 933MHz, 1.5V - K4B4G1646D-BYK0 Samsung DRAM, 4Gb, 16Mx16, DDR4 SDRAM, 2400MHz, 1.2V - H5AN4G8NMFRTFC Hynix DRAM, 4Gb, 8Mx32, LPDDR4 SDRAM, 3733MHz, 1.1V In conclusion, Static RAM and Dynamic RAM are the two types of RAM most commonly used in computer systems. SRAM uses flip-flops to store data and has fast access times, but it is more expensive and has a lower data density. DRAM uses capacitors to store data and offers higher data density at a more affordable price, but it has slower access times and must be refreshed regularly. The choice between SRAM and DRAM depends on the specific requirements of the system. Beyond these, other types of RAM such as SDRAM and DDR SDRAM can be good choices for systems that require fast access times and high data density at a more affordable price.
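To make the refresh requirement concrete, here is a deliberately simplified toy model (not taken from the article, and not using real device parameters): a DRAM cell is treated as a capacitor whose charge leaks away exponentially, so a stored "1" can only be read back correctly if a refresh happens before the charge falls below the sense threshold. The time constant and threshold are made-up illustrative numbers.

```python
import math

LEAK_TIME_CONSTANT_MS = 100.0   # hypothetical leakage time constant, not a real device figure
SENSE_THRESHOLD = 0.5           # hypothetical fraction of full charge still readable as "1"

def charge_after(ms_since_refresh):
    """Fraction of the cell's full charge remaining after a given time."""
    return math.exp(-ms_since_refresh / LEAK_TIME_CONSTANT_MS)

def read_bit(stored_bit, ms_since_refresh):
    """A stored '1' is only read back correctly while enough charge remains."""
    if stored_bit == 1 and charge_after(ms_since_refresh) >= SENSE_THRESHOLD:
        return 1
    return 0

print(read_bit(1, 10))    # 1 -- a read shortly after a refresh succeeds
print(read_bit(1, 200))   # 0 -- without a refresh, the stored '1' has leaked away
# An SRAM flip-flop, by contrast, holds its state for as long as power is applied,
# which is why SRAM needs no refresh logic at all.
```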
Posted on Mar 9, 2022 “You can’t divide by zero!” - everyone memorizes this rule by heart without thinking about it. Why can’t they? It’s because the four operations of arithmetic - addition, subtraction, multiplication, and division - are actually not on an equal footing. Mathematicians recognize only two of them - addition and multiplication - as fundamental. These operations and their properties are included in the very definition of the concept of number. All other operations are constructed from these two. Consider, for example, subtraction. What does 5 - 3 mean? A schoolboy will answer it simply: you have to take five objects, subtract (remove) three of them and see how many remain. But mathematicians look at this problem very differently. There is no subtraction, only addition. That’s why the notation 5 - 3 means the number that, when added to the number 3, gives the number 5. So 5 - 3 is just a shortened notation of the equation: x + 3 = 5. There is no subtraction in this equation. There is only finding the right number. It is the same with multiplication and division. The entry 8 : 4 can be understood as the result of dividing eight objects into four equal piles. But in reality it is just a shortened form of the equation 4 × x = 8. This is where it becomes clear why you cannot (or, more precisely, why it is impossible to) divide by zero. The entry 5 : 0 is an abbreviation of 0 × x = 5. It is a task to find a number which, when multiplied by 0, yields 5. But we know that multiplying by 0 always yields 0. This is an inherent property of zero, strictly speaking, part of its definition. There is no number that, when multiplied by 0, would yield anything other than zero. Our problem has no solution. This means that the notation 5 : 0 does not correspond to any specific number; it simply means nothing and therefore has no meaning. The meaninglessness of this entry is briefly expressed by the phrase “You cannot divide by zero”. The most attentive readers will inevitably ask at this point: can zero be divided by zero? The equation 0 × x = 0 is safely solved. For example, we can take x = 0, and then we get 0 × 0 = 0. So 0 : 0 = 0? But let’s not be in a hurry. Let us try to take x = 1. We get 0 × 1 = 0. Right? So 0 : 0 = 1? But we can take any number and get 0 : 0 = 5, 0 : 0 = 317 and so on. And, if any number fits, then we have no reason to choose any one of them. We cannot say to which number the entry 0 : 0 corresponds. And since this is the case, we are forced to admit that this entry is meaningless as well. It turns out that even zero cannot be divided by zero. This is the peculiarity of the division operation. More precisely, it is a peculiarity of the multiplication operation and of the number zero associated with it.
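The same argument can be replayed mechanically. The short snippet below (an illustration added here, not part of the original post) searches for a number that multiplied by 0 gives 5, finds none, shows that every number "works" for 0 : 0, and notes that Python refuses to divide by zero for exactly these reasons.

```python
# Division is defined through multiplication: "a / b" means "the x with b * x == a".
# For 5 / 0 we would need 0 * x == 5, and no such x exists.
print([x for x in range(-1000, 1001) if 0 * x == 5])          # [] -- no candidate at all

# For 0 / 0 we would need 0 * x == 0, and every x works, so no single answer can be chosen.
print(len([x for x in range(-1000, 1001) if 0 * x == 0]))     # 2001 -- all of them

try:
    5 / 0
except ZeroDivisionError as exc:
    print(exc)    # "division by zero" -- the language refuses, just as arithmetic does
```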
The new research by scientists at NASA and the Italian Space Agency has implications for the entire Saturn system as well as other planets and moons. Just as our own Moon floats away from Earth a tiny bit more each year, other moons are doing the same with their host planets. As a moon orbits, its gravity pulls on the planet, causing a temporary bulge in the planet as it passes. Over time, the energy created by the bulging and subsiding transfers from the planet to the moon, nudging it farther and farther out. Our Moon drifts 1.5 inches (3.8 centimeters) farther from Earth each year. Scientists thought they knew the rate at which the giant moon Titan is moving away from Saturn, but they recently made a surprising discovery: Using data from NASA’s Cassini spacecraft, they found Titan drifting a hundred times faster than previously understood — about 4 inches (11 centimeters) per year. The findings may help address an age-old question. While scientists know that Saturn formed 4.6 billion years ago in the early days of the solar system, there’s more uncertainty about when the planet’s rings and its system of more than 80 moons formed. Titan is currently 759,000 miles (1.2 million kilometers) from Saturn. The revised rate of its drift suggests that the moon started out much closer to Saturn, which would mean the whole system expanded more quickly than previously believed. “This result brings an important new piece of the puzzle for the highly debated question of the age of the Saturn system and how its moons formed,” said Valery Lainey, lead author of the work published June 8 in Nature Astronomy. He conducted the research as a scientist at NASA’s Jet Propulsion Laboratory in Southern California before joining the Paris Observatory at PSL University. (Interactive: the full Saturn experience is available at NASA’s Eyes on the Solar System. Credit: NASA/JPL-Caltech) The findings on Titan’s rate of drift also provide important confirmation of a new theory that explains and predicts how planets affect their moons’ orbits. For the last 50 years, scientists have applied the same formulas to estimate how fast a moon drifts from its planet, a rate that can also be used to determine a moon’s age. Those formulas and the classical theories on which they’re based were applied to moons large and small all over the solar system. The theories assumed that in systems such as Saturn’s, with dozens of moons, the outer moons like Titan migrated outward more slowly than moons closer in because they are farther from their host planet’s gravity. Four years ago, theoretical astrophysicist Jim Fuller, now of Caltech, published research that upended those theories. Fuller’s theory predicted that outer moons can migrate outward at a similar rate to inner moons because they become locked in a different kind of orbit pattern that links to the particular wobble of a planet and slings them outward. “The new measurements imply that these kind of planet-moon interactions can be more prominent than prior expectations and that they can apply to many systems, such as other planetary moon systems, exoplanets — those outside our solar system — and even binary star systems, where stars orbit each other,” said Fuller, a coauthor of the new paper. To reach their results, the authors mapped stars in the background of Cassini images and tracked Titan’s position.
To confirm their findings, they compared them with an independent dataset: radio science data collected by Cassini. During ten close flybys between 2006 and 2016, the spacecraft sent radio waves to Earth. Scientists studied how the signals’ frequencies were changed by their interactions with their surroundings to estimate how Titan’s orbit evolved. “By using two completely different datasets, we obtained results that are in full agreement, and also in agreement with Jim Fuller’s theory, which predicted a much faster migration of Titan,” said coauthor Paolo Tortora, of Italy’s University of Bologna. Tortora is a member of the Cassini Radio Science team and worked on the research with the support of the Italian Space Agency. Managed by JPL, Cassini was an orbiter that observed Saturn for more than 13 years before exhausting its fuel supply. The mission plunged it into the planet’s atmosphere in September 2017, in part to protect its moon Enceladus, which Cassini discovered might hold conditions suitable for life. The Cassini-Huygens mission is a cooperative project of NASA, ESA (the European Space Agency) and the Italian Space Agency. JPL, a division of Caltech in Pasadena, manages the mission for NASA’s Science Mission Directorate in Washington. JPL designed, developed and assembled the Cassini orbiter.
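For a sense of scale, a deliberately naive constant-rate extrapolation (added here for illustration; the study itself models how the migration rate changes over time) shows why the revised drift rate implies Titan began much closer to Saturn.

```python
drift_m_per_year = 0.11        # ~4 inches (11 cm) per year, the revised rate quoted above
old_drift_m_per_year = 0.0011  # roughly a hundred times slower, the previously assumed rate
saturn_age_years = 4.6e9       # age of Saturn quoted above
current_distance_km = 1.2e6    # Titan's present distance from Saturn

for label, rate in [("revised rate", drift_m_per_year), ("old rate", old_drift_m_per_year)]:
    total_km = rate * saturn_age_years / 1000.0
    print(f"{label}: naive total outward drift ≈ {total_km:,.0f} km "
          f"({total_km / current_distance_km:.0%} of Titan's current distance)")
# The revised rate gives roughly 500,000 km of outward migration versus about 5,000 km
# at the old rate -- a crude way of seeing why the whole system must have expanded
# more quickly than previously believed.
```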
The earliest Egyptians had buried their dead directly into the baking hot desert sand, where the high, dry temperature desiccated the bodies to effectively mummify them. As the civilization developed, mud-brick structures known as 'mastabas' began to appear. These buildings were trapezoidal structures --- rectangular in plan with inward sloping sides and a flat top. Over time it became the practice to build one slightly smaller mastaba on top of another, which led to the development of the step pyramid. Then there followed a phase during which the architects improved the design of the step pyramid by adding triangular infills for the saw-tooth sides, leading to the sort of smooth pyramid with which most people are familiar. The later stages of this process actually came about surprisingly rapidly --- given the normally ponderous nature of building evolution. (Before the Pyramids) The engineering of the Giza plateau pyramid structures, using the same materials as the builders, could not be replicated today with the degree of perfection they exhibit. These three great Egyptian pyramids contain from one million to three million stones with average weights of three to four tons each. The structures would have to have been started at the bottom, where a deviation of less than an inch on one stone at the bottom would produce a 20-foot error by the time the masons reached the apex. This discussion does not even include the exact astronomical measurements built into the structure. The pyramids' master builders reportedly taught humans their architectural and engineering skills. (Gods, Genes, and Consciousness) The Great Pyramid...is designed around the "2 x Pi x radius" formula of a circle. The perimeter length around the base of the pyramid is the equivalent of the perimeter of a circle, and the height of the pyramid represents the radius of the same circle. The Second Pyramid has simply been made to a different mathematical design from its neighbour; it has an angle of elevation that is based around the Pythagorean 3-4-5 triangle principle, with the unit of '4' representing the vertical height of the pyramid. It is quite apparent that the design principle of both these pyramids revolves around fundamental mathematical formulae, and so the Great Pyramid represents a circle while the Second Pyramid represents a square. The ratio of 22:7 is the closest whole-number fraction of Pi and it would seem that this is the Pi ratio that was used in the design of the Great Pyramid. (Thoth: Architect of the Universe) ...if the Thoth cubit (tc) is actually 52.35 cm, then what impact will this have on the external size of the Great Pyramid? Take the base of this pyramid, which is 230.36 m in length, divide this by 0.5235 and the result is 440.04 tc, which is fairly close to the whole number of 440. In the same manner, the pyramid's height of 146.59 m, when divided by 0.5235, becomes 280.02 tc, which is again quite close to a whole-number value of 280. ...it demonstrates a high level of accuracy to round figures in using the cubit length of 0.5235 m. This is a real unit of measure, as used by the original architect, and therefore this is the unit that should be used when studying these designs. (Thoth: Architect of the Universe)
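The cubit arithmetic quoted above is easy to verify; the quick check below uses only the figures given in the passage (a cubit of 0.5235 m, a base length of 230.36 m, a height of 146.59 m) and is added here purely as an illustration of the claimed Pi relationship.

```python
import math

cubit_m = 0.5235
base_m, height_m = 230.36, 146.59

print(base_m / cubit_m)       # ≈ 440.04 -- close to a whole 440 cubits per side
print(height_m / cubit_m)     # ≈ 280.02 -- close to a whole 280 cubits of height

# The claimed circle design: perimeter / height = (4 * 440) / 280 = 44 / 7,
# i.e. exactly twice the 22:7 approximation of Pi.
print(4 * 440 / 280)          # 6.2857...
print(2 * math.pi)            # 6.2831... -- the two agree to about 0.04 percent
```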
The designer of the Great Pyramid was simply looking for a multiple of the Pi fraction 22:7 that would satisfy four requirements: i) the first ratio figure had to be a multiple of the number 22; ii) the second ratio figure had to be a multiple of the number 7; iii) it would be useful if the chosen ratio could be divided by simple 2s and 4s to fit the simple ancient maths; iv) the chosen ratio had to produce a cubit/yard length that was small enough to be handled easily in everyday usage. The ratio chosen for the Great Pyramid was simply 880:280, which is a 40-times multiple of the 22:7 ratio for Pi...(Thoth: Architect of the Universe) The northern pyramid on the Dahshur site is known for the colour of the sandstone used in its construction. The 'Red' Pyramid, as it is known, has exterior elevations of the more stable angle of 43° 36', and once more it does not exhibit any indication of having ever been used as a tomb. The quality of the construction of the Red Pyramid is far superior to those of Meidum or Saqqara, with the quality of the stonework in the chambers, for example, being particularly fine. The Red Pyramid would still have been a very imposing monument, were it not for the stone robbers who have stripped the casing off. Like the Red Pyramid, the Bent Pyramid is of much superior construction to those at Saqqara and Meidum, as is witnessed by its near complete survival. Nearly all the casing stones have survived intact, which gives us a good example of what the pyramids of Giza must have looked like in the past. Only the colour has changed over the years, from the original brilliant white to the sandy buff hue that we see today. (Thoth: Architect of the Universe) As we can see from the remaining cladding stones that still cover the Bent Pyramid and the upper portions of the Second Pyramid, in the desert climate, good quality stone usually weathers very slowly. Then, after many millennia, someone came along and started pilfering the cladding stones from the pyramids; something that is usually ascribed to the eighth or ninth century AD. From this time onwards, the whole of the paving slab was now exposed to the elements and started to weather, hence a line was formed in the paving stones between the two periods of weathering. In general, it would appear that there was a minimum of ten times as much erosion on the exposed section of each block as on the portion that had been covered with the cladding stones, and this ratio would in turn give us a direct indication of the true age of these pyramids. If a constant erosion rate is presumed, and if the time elapsed since the cladding was stolen is about 1,000 years, then the time required for the erosion of the exposed sections of each slab would equate to about 10,000 years, and quite possibly, much, much longer. (Thoth: Architect of the Universe) In addition, some of the pavement blocks around the Second Pyramid weigh in at up to 200 tonnes and the depth of this pavement is quite astounding. The Second Pyramid was built on an incline and therefore the south-western corner had to be countersunk into the bedrock. Indeed, part of this corner of the pyramid was actually formed from a raised section of remaining bedrock. To the east, the problem was the reverse; the ground here sloped away and was deeply fissured. Thus, the foundations to the pyramid across all of this eastern area had to be raised up to make the site level and stable.
The Bronze Age solution to this defect in the topography of the chosen site was 'simple': just form a thick raft of megalithic blocks, each weighing in at hundreds of tonnes, and then build the pyramid on top of that. (Thoth: Architect of the Universe) Now that we know the sites are intimately linked, we can join up the dots and see that the resulting layout is exactly the same at Avebury as it is at Dahshur - the angles between the monuments are the same at both sites. So we have another direct link between Avebury and Giza; again, it would appear that the same designer was at work on both sites. I started in Giza, where the layout of the pyramids seemed to mimic a planisphere of the stars. From there, the comparison was made with the Wessex monuments, which seem to copy the layout of the pyramids. Finally, there was the Uffington horse, which confirms the planisphere layout in the same way as the Sphinx does at Giza. The stellar layout of the henges is nearly complete. Stonehenge does, indeed, seem to mimic Giza in its role as the constellation of Orion. (Thoth: Architect of the Universe) There were huge numbers of columns, some broken, some virtually intact, but all tumbled and fallen. There were Doric column bases surrounded by tumbled debris. Here and there one or two courses of a wall could be seen, rising up out of the murk. There were dozens of metre-wide hemispherical stones, hollowed inside, of a type that I had never encountered before in Egypt. There were several small sphinxes, one broken jaggedly in half, and large segments of more than one granite obelisk seemed to have been tossed about like matchsticks. There were also quarried granite blocks scattered everywhere. Most were in the 2-3 square metre range but some were much larger - 70 tonnes or more. A notable group of these behemoths, some a staggering 11 metres in length, lay in a line running south-west to north-east in the open waters just outside Qait Bey. When I researched the matter later I learnt that they were amongst the blocks that Empereur had identified as coming from the Pharos: “some of them are broken into two or even three pieces, which shows that they fell from quite a height. In view of the location the ancient writers give for the lighthouse, and taking into consideration the technical difficulty of moving such large objects, it is probable that these are parts of the Pharos itself which lie where they were flung by a particularly violent earthquake.” (Underworld) The simple, honest truth is that during the thousands of years of the Sphinx's existence, often with only its head protruding above the sand, almost anyone could have worked on its face at almost any time. Moreover, Lehner's own photogrammetric study has thrown up at least one piece of evidence which is highly suggestive of major recarving: the Sphinx's head, he writes, is 'too small' in proportion to the body. To deal with the first point first, it is a simple matter of fact that no objective test presently exists for the accurate dating of rock-hewn monuments. In all honesty, therefore, what confronts us at Giza is an entirely anonymous monument, carved out of undatable rock, about which, as the forthright Egyptologist Selim Hassan wrote in 1949, 'no definite facts are known'. Excepting for the mutilated line on the granite stela of Thothmosis IV, which proves nothing, there is not a single ancient inscription which connects the Sphinx with Khafre.
So sound as it may appear, we must treat this evidence as circumstantial until such a time as a lucky turn of the spade will reveal to the world definite reference to the erection of this statue...(Keeper of Genesis) Back in the Sphinx enclosure the first interesting result came from Dobecki, who had conducted seismographic tests around the Sphinx. The sophisticated equipment that he had brought with him picked up numerous indications of 'anomalies and cavities in the bedrock between the paws and along the sides of the Sphinx'. One of these cavities he described as: a fairly large feature; it's about nine metres by twelve metres in dimension, and buried less than five metres in depth. Now the regular shape of this - rectangular - is inconsistent with naturally occurring cavities ... So there's some suggestion that this could be man-made. (Keeper of Genesis) The unifying features of these ancient and anonymous structures are the stark, undecorated austerity of the building style, and the use throughout of ponderous megaliths - many of which are estimated to weigh in the range of 200 tons apiece. There are no small blocks here at all: every single piece of stone is enormous - the least of them weighing more than 50 tons - and it is difficult to understand how such monsters could have been lifted and manoeuvred into place by the ancient Egyptians. Indeed, even today, contractors using the latest construction technology would face formidable challenges if they were commissioned to produce exact replicas of the Sphinx Temple and the Valley Temple. (Keeper of Genesis) The problems are manifold but stem mainly from the extremely large size of the blocks - which can be envisaged in terms of their dimensions and weight as a series of diesel locomotive engines stacked one on top of the other. Such loads simply cannot be hoisted by the typical tower and hydraulic cranes that we are familiar with from building sites in our cities. These cranes, which are pieces of advanced technology, can generally 'pick' a maximum load of 20 tons at what is called 'minimum span' - i.e. at the closest distance to the tower along the 'boom' or 'arm' of the crane. The longer the span the smaller the load, and at 'maximum span' the limit is around 5 tons. Loads exceeding 50 tons require special cranes. Furthermore, there are few cranes in the world today that would be capable of picking 200-ton blocks of quarried limestone. Such cranes would normally have to be of the 'bridge' or 'gantry' type, often seen in factories and at major industrial ports where they are used to move large pieces of equipment and machinery such as bulldozers, military tanks, or steel shipping containers. Built with structural steel members and powered with massive electric motors, the majority of these cranes have a load limit of under 100 tons. (Keeper of Genesis) ...the French master engineer Leherou Kerisel, a consultant for the building of the Cairo Metro, worked out the logistics of hauling into place the 70-ton blocks that were used in the construction of the so-called King's Chamber. According to his calculations the job could just about have been done - although with enormous difficulty - with teams of 600 men arranged in ranks across a very wide ramp buttressed against one face of the Pyramid. From this it follows that teams 1800 men strong would have been required to haul the Valley Temple blocks.
But could 1800 men have been effectively harnessed to such dense and relatively compact loads (the maximum dimensions of each block are 30 feet by 10 feet by 12 feet)? And more to the point, since the temple walls do not exceed 130 feet along each side, how likely is it that such large teams could have been organized to work efficiently - or at all - in the limited space available? Assuming a minimum of three feet of horizontal space per man, each rank of haulers could not have contained more than fifty men. To make up the total of 1800 men needed to move a 200-ton block, therefore, would have called for no less than thirty-six ranks of men pulling in unison, to be harnessed to each block. The potential complications that might have arisen are mind-boggling. Even assuming they could all have been overcome, however, the next question that presents itself is perhaps the most intriguing of all. (Keeper of Genesis) Why? Why bother? Why specify temples built out of unwieldy 200-ton blocks when it would have been much easier, much more feasible and just as aesthetically pleasing, to use smaller blocks of say two or three tons each? There are really only two answers. Either the people who designed these hulking edifices had knowledge of some technique that made it easy for them to quarry, manipulate and position enormous pieces of stone, or their way of thinking was utterly different from our own - in which case their motives and priorities are unlikely to be fathomable in terms of normal cross-cultural comparisons. (Keeper of Genesis) ...the megaliths of the temples demonstrate precisely the same apparent precipitation-induced weathering features as the Sphinx itself. And it is of interest to note that the surviving granite casing blocks seem to have been carved on their inner faces to fit over the limestone core-blocks at a time when these were already heavily marked by erosion. Since the granite casing has the look of other Old Kingdom Egyptian architecture (while the limestone core-blocks do not) this may be taken as further evidence of the theory that an ancient, revered and much-eroded structure was restored and renovated by the Old Kingdom Pharaohs. Robert Schoch certainly favours this view. 'I remain convinced,' comments the Boston University geology professor, 'that the backs of the Old Kingdom granite facing stones were carved to match or complement the earlier weathering features seen on the surfaces of the core limestone blocks of the temples.' (Keeper of Genesis) All the Arab commentators prior to the fourteenth century tell us that the Great Pyramid's casing was a marvel of architecture that caused the edifice to glow brilliantly under the Egyptian sun. It consisted of an estimated 22 acres of 8-foot-thick blocks, each weighing in the region of 16 tons, 'so subtilly jointed that one would have said that it was a single slab from top to bottom'. A few surviving sections can still be seen today at the base of the monument. When they were studied in 1881 by Sir W. M. Flinders Petrie, he noted with astonishment that 'the mean thickness of the joints is 0.020 of an inch; and, therefore, the mean variation of the cutting of the stone from a straight line and from a true square is but 0.01 of an inch on a length of 75 inches up the face, an amount of accuracy equal to the most modern opticians' straight-edges of such a length.'
Another detail that Petrie found very difficult to explain was that the blocks had been carefully and precisely cemented together: 'To merely place such stones in exact contact at the sides would be careful work, but to do so with cement in the joint seems almost impossible...' (Keeper of Genesis) The causeways - one for each of the three Pyramids - are important features of the Giza necropolis, though all have fallen into an advanced state of disrepair. The three causeways, like the Mortuary and Valley Temples, are fashioned out of huge blocks of limestone. Indeed all of these prodigious structures are clearly 'of a piece' from a design point of view and seem to have been the work of builders who thought like gods or giants. There is about them an overwhelming, weary, aching sense of antiquity and it is certainly not hard to imagine that they might be the leavings of a lost civilization. In this regard we are reminded of The Sacred Sermon, a 'Hermetic' text of Egyptian origin that speaks with awe of lordly men 'devoted to the growth of wisdom' who lived 'before the Flood' and whose civilization was destroyed: 'And there shall be memorials mighty of their handiworks upon the earth, leaving dim trace behind when cycles are renewed...' (Keeper of Genesis) ...Egyptian irrigation specialists checking for groundwater drilled in the same area, less than 100 feet away from the Hawass dig, and were able to go down more than 50 feet without impediment before their drill-bit suddenly collided with something hard and massive. After freeing the drill, much to their surprise, they found that they had brought to the surface a large lump of Aswan granite. No granite occurs naturally anywhere in the Nile Delta area where Giza is located, and Aswan - the source of all the granite used by the ancients at Giza - is located 500 miles to the south. The discovery of what appears to be a substantial granite obstacle - or perhaps several obstacles - 50 feet below ground level in the vicinity of the Sphinx is therefore intriguing to say the least. Since 1982, we were surprised to learn, almost no further research has been officially authorized to investigate the numerous tantalizing hints of deeply buried structures and chambers in the vicinity of the Sphinx. The single exception was Thomas Dobecki's seismic work in the early 1990s. ...this resulted in the discovery of what appears to be a large, rectangular chamber beneath the forepaws of the Sphinx. (Keeper of Genesis) On Friday, 26 May 1837, after a couple of days of blasting and clearing, Hill discovered the flat iron plate mentioned above. This is to certify that the piece of iron found by me near the mouth of the air-passage [shaft], in the southern side of the Great Pyramid at Gizeh, on Friday, May 26th, was taken out by me from an inner joint, after having removed by blasting the two outer tiers of the stones of the present surface of the Pyramid; and that no joint or opening of any sort was connected with the above mentioned joint, by which the iron could have been placed in it after the original building of the Pyramid. In 1881 the plate was re-examined by Sir W. M. Flinders Petrie...Though some doubt has been thrown on the piece, merely from its rarity, [he noted] yet the vouchers for it are very precise; and it has a cast of a nummulite [fossilized marine protozoa] on the rust of it, proving it to have been buried for ages beside a block of nummulitic limestone, and therefore to be certainly ancient.
No reasonable doubt can therefore exist about its being a really genuine piece...(Keeper of Genesis) ...[the so-called 'Pyramid Texts' of ancient Egypt] take the form of extensive funerary and rebirth inscriptions carved on the tomb walls of certain Fifth- and Sixth-Dynasty pyramids at Saqqara, about ten miles south of Giza. Egyptologists agree that much if not all of the content of the inscriptions predates the Pyramid Age. It is thus unsettling to discover in these ancient scriptures, supposedly the work of neolithic farmers who had hardly even begun to master copper, that there are abundant references to iron. The name given to it is B'ja - 'the divine metal' - and we always encounter it in distinctive contexts related in one way or another to astronomy, to the stars and to the gods. Iron is also mentioned in the texts as being necessary for the construction of a bizarre instrument called a Meshtyw. Very much resembling a carpenter's adze or cutting tool, this was a ceremonial device which was used to 'strike open the mouth' of the deceased Pharaoh's mummified and embalmed corpse - an indispensable ritual if the Pharaoh's soul were to be re-awakened to eternal life amidst the cycles of the stars. (Keeper of Genesis) Unlike the King's Chamber shafts, those in the Queen's Chamber (a) do not exit on the outside of the monument and (b) were not originally cut through the chamber's limestone walls. Instead the builders left the last five inches intact in the last block over the mouth of each of the shafts - thus rendering them invisible and inaccessible to any casual intruder....the Dixons found three small relics in the shafts. These objects - a rough stone sphere, a small two-pronged hook made out of some form of metal, and a fine piece of cedar wood some 12 centimetres long with strange notches cut into it...These three items are the only relics ever to have been found inside the Great Pyramid. (Keeper of Genesis) Dormion and Goidin had persuaded certain senior officials at the Egyptian Antiquities Organization that a "hidden chamber" could lie behind the west wall of the horizontal corridor leading to the Queen's Chamber. In a rare move, the EAO gave permission for the drilling of a series of small holes to test the theory. Apparently some evidence was found of a large 'cavity' which was filled with unusually fine sand - nothing more... The project was eventually stopped and Dormion and Goidin were never to resume their work in the Great Pyramid. (Keeper of Genesis) The same thing happened again in 1988 when a Japanese scientific team from Waseda University took up the challenge. They were led by Professor Sakuji Yoshimura. This time the Japanese used 'non-destructive techniques' based on a high-tech system of electromagnetic waves and radar equipment. They, too, detected the existence of a 'cavity' off the Queen's Chamber passageway, some three metres under the floor and, as it turned out, very close to where the French had drilled. They also detected a large cavity behind the north-west wall of the Queen's Chamber itself, and a 'tunnel' outside and to the south of the Pyramid which appeared to run underneath the monument. Before any further exploration or drilling could be done, the Egyptian authorities intervened and halted the project....after crawling a total distance of 200 feet into the shaft, the floor and walls became smooth and polished and the robot suddenly - and one might almost say 'in the nick of time' - reached the end of its journey.
As the first images of the 'door' with its peculiar metal fittings appeared on the small television monitor in the Queen's Chamber, Rudolf Gantenbrink immediately realised the massive implications of his find. (Keeper of Genesis) What better candidate is there for that 'Predynastic religious centre near to Memphis' - that 'homeland' of the Egyptian temple - than the sacred city of Heliopolis and its associated Pyramids and other structures on the Giza plateau? (Keeper of Genesis) If we regard the Giza Pyramids (in relation to the Nile) as part of a scaled-down 'map' of the right bank of the Milky Way, then we would need to extend that 'map' some 20 miles to the south in order to arrive at the point on the ground where the Hyades-Taurus should be represented. How likely is it to be an accident that two enormous Pyramids - the so-called 'Bent' and 'Red' Pyramids of Dahshur - are found at this spot? And how likely is it to be an accident, as was demonstrated in The Orion Mystery, that the site plan of these monuments, i.e. their pattern on the ground, correlates very precisely with the pattern in the sky of the two most prominent stars in the Hyades? (Keeper of Genesis) ...there are little round holes in the top slabs of graves at Tiahuanaco, precisely as in Egyptian tombs, to let through the soul, presumed as being essential by the burial ritual of the Egyptians. (The God-Kings & the Titans) We now have the first real evidence as to why the Pyramid of Khafre was placed in its precise location vis-a-vis the Great Pyramid. If the location had varied even slightly, it would not have cast this winter solstice shadow upon the south face of its neighbour. So the winter solstice shadow was a stark signal, displayed upon the gleaming triangular white wall. Just as the sun went down in the West on the shortest day of the year, it gave a mammoth and spectacular public demonstration of the slope of all the interior passages of the Great Pyramid. Thereafter, the shadow would shrink away again and disappear until the next winter solstice returned. (The Crystal Sun) One very unusual feature of the Great Pyramid is a concavity of the core that makes the monument an eight-sided figure, rather than four-sided like every other Egyptian pyramid. That is to say, its four sides are hollowed in or indented along their centre lines, from base to peak. This concavity divides each of the apparent four sides in half, creating a very special and unusual eight-sided pyramid; and it is executed to such an extraordinary degree of precision as to enter the realm of the uncanny. For, viewed from any ground position or distance, this concavity is quite invisible to the naked eye. The hollowing-in can be noticed only from the air, and only at certain times of the day. This explains why the concavity was never discovered until the age of aviation. A 'sun-flash' would have been generated by this mysterious and all but invisible concavity or hollowing of the faces when the Great Pyramid was still gleaming white. I believe such flashes would have happened at sunrise and sunset just prior to and just after the two equinoxes. It is possible that in both cases the 'flashes' would have been deeply golden in colour. On the actual days of the equinoxes themselves, the sun-flashes would have vanished, to be resumed again two or three days later. The cessation of the flash would prove the equinox had arrived, since the sun was then briefly absolutely dead-on.
The Great Pyramid was thus a great centre of light and shadow phenomena marking the four points of the year, and enabling the year to be defined with precision to 365.24 days, a length which was in turn embodied in the perimeter measurements. (The Crystal Sun) ...the length of the granite coffer, which is not a coffin but a standard of measure placed in the king's chamber inside the Great Pyramid, is exactly 2,268 millimetres and that the total volume of the pyramid in cubic cubits multiplied by the historically sacred number 126, again gives us the number of the great constant: 2,268 million. (Our Cosmic Ancestors) We know that, unlike many other calendars, the Egyptian calendar was not based on movements of the Sun, the Moon, or even the planets Jupiter and Saturn, but on apparent motions of the star Sirius. This celestial reference point moves by 1° every 72 years, so that 15° correspond to 1,080 years, or 3 times 360. That tells us that the temple at Karnak was realigned with the star Sirius once every 360 years, so that the priests could maintain their line of vision on certain stars or constellations on certain days of the solar year. (Our Cosmic Ancestors) Obviously, no one would build such monuments, and in such great numbers, over thousands of years, for uncultivated peasants. This work is of necessity that of an elite, and, even more remarkably, an elite that never ceased to renew itself, an elite that seems to have been uniquely endowed with a wealth of scientific knowledge, including an understanding of the laws of Life. What, then, was this inexhaustible source, and what means so powerful and so stable assured such continuity? We are dealing here, not with an evolution of science, but rather, on the contrary, with an immutable basis: for the existence of a language and a form of writing that were already complete from the time of the earliest dynasties of the historical period seems to confirm this. What we see is not the beginnings of research, but the application of a Knowledge already possessed. (The Temple in Man) In the construction of the temple, several kinds of foundations can be observed: 1. The temple set on virgin soil, with no real foundation. The soil is prepared by the symbolic sowing of various materials such as charcoal, resins, bitumen, natural salts combined for this purpose, and other consecrated materials. 2. The temple constructed on chosen blocks from a temple that has been "turned under," like germinated seed that returns to the earth. The blocks are chosen and placed with care, providing, among other things, information on the meaning of the preceding monument and on the orientations of past and future temples. Let us note as well foundations on the unfired bricks of a previous temple, symbolizing water, that is to say, the "mud of the waters." (Figure: Karnak, Temple of Montu, resting on blocks from old temples. Figure: Karnak, Temple of Montu, sandstone doorway resting on fired bricks.) 3. The temple built on a hollow basin or stylobate filled with stones from the preceding work, in apparent disorder. One must be on one's guard, for this disorder is only apparent; great experience is necessary to discover in it the location of the sanctuary, the axes of orientation, and various symbols indicating the esoteric purpose of the new temple. This stylobate plays the role of a vase, in which the final "growth" of the seed thrown in this place will be accomplished. (Fig. 22: Karnak, Temple of Montu built on a hollow stylobate.) 4.
The monument dug in the earth or carved in the rock must also be noted. Here earth and rock are considered the matrix of the temple. (The Temple in Man) The pharaonic architects construct their sanctuaries just as Nature constructs a plant. If a certain cell ought to be hexagonal, it will be so - since it is living, and growing - it adapts itself according to the needs of the moment and of the place. Similarly, certain chambers, apparently square or rectangular in plan, will be slightly rhomboidal or trapezoidal. One need only examine, in their angles, the cut of the stones to establish that for this distortion, an exceptional effort was required to give these angles a few degrees more or less than a right angle. The same kind of rhomboids or trapezoids can be found on the surfaces of the walls or tableaux. We might be inclined to attribute this to an oversight, but these distortions are insistently compensated for or occasionally repeated. The purpose of this is always to specify measure in the spirit that I have described. For example: The north wall of room 12, with its twelve columns, shows a slight concave curve, verifiable along two-thirds of its length. Obviously one is tempted to attribute this curve to a modification of the entire construction. However, each of the blocks making up this wall is cut with a slight curve having the same versed sine. (The Temple in Man) It wasn't just the tens of thousands of blocks weighing 15 tons or more that the builders would have had to worry about. Year in, year out, the real crises would have been caused by the millions of 'average-sized' blocks, weighing say 2.5 tons, that also had to be brought to the working plane. The Pyramid has been reliably estimated to consist of a total of 2.3 million blocks. Assuming that the masons worked ten hours a day, 365 days a year, the mathematics indicate that they would have needed to place 31 blocks in position every hour (about one block every two minutes) to complete the Pyramid in twenty years. Assuming that construction work had been confined to the annual three-month lay-off, the problems multiplied: the delivery rate would have had to roughly quadruple, to about two blocks every minute. To carry an inclined plane to the top of the Great Pyramid at a gradient of 1:10 would have required a ramp 4800 feet long and more than three times as massive as the Great Pyramid itself (with an estimated volume of 8 million cubic metres as against the Pyramid's 2.6 million cubic metres). Heavy weights could not have been dragged up any gradient steeper than this by any normal means. If a lesser gradient had been chosen, the ramp would have had to be even more absurdly and disproportionately massive. The problem was that mile-long ramps reaching a height of 480 feet could not have been made out of 'bricks and earth' as Edwards and other Egyptologists supposed. On the contrary, modern builders and architects had proved that such ramps would have caved in under their own weight if they had consisted of any material less costly and less stable than the limestone ashlars of the Pyramid itself. (Fingerprints of the Gods) Since this obviously made no sense (besides, where had the 8 million cubic metres of surplus blocks been taken after completion of the work?), other Egyptologists had proposed the use of spiral ramps made of mud brick and attached to the sides of the Pyramid. These would certainly have required less material to build, but they would also have failed to reach the top.
They would have presented deadly and perhaps insurmountable problems to the teams of men attempting to drag the big blocks of stone around their hairpin corners. And they would have crumbled under constant use. Most problematic of all, such ramps would have cloaked the whole pyramid, thus making it impossible for the architects to check the accuracy of the setting-out during building. Covering a full 13.1 acres at the base, the Great Pyramid weighed about six million tons - more than all the buildings in the Square Mile of the City of London added together...To this had once been added a 22-acre, mirror-like cladding consisting of an estimated 115,000 highly polished casing stones, each weighing 10 tons, which had originally covered all four of its faces....enough had remained in position to permit the great nineteenth-century archaeologist, W.M. Flinders Petrie, to carry out a detailed study of them. He had been stunned to encounter tolerances of less than one-hundredth of an inch and cemented joints so precise and so carefully aligned that it was impossible to slip even the fine blade of a pocket knife between them. 'Merely to place such stones in exact contact would be careful work', he admitted, 'but to do so with cement in the joint seems almost impossible; it is to be compared to the finest opticians' work on a scale of acres.' (Fingerprints of the Gods) Acting on impulse, I climbed into the granite coffer and lay down, face upwards, my feet pointed towards the south and my head to the north. Hoping that I would remain undisturbed for a few minutes, I folded my hands across my chest and gave voice to a sustained low-pitched tone - something I had tried out several times before at other points in the King's Chamber. On those occasions, in the centre of the floor, I had noticed that the walls and ceiling seemed to collect the sound, to gather and to amplify it and project it back at me so that I could sense the returning vibrations through my feet and scalp and skin. Now in the sarcophagus I was aware of very much the same effect, although seemingly amplified and concentrated many times over. It was like being in the sound-box of some giant, resonant musical instrument designed to emit for ever just one reverberating note. The sound was intense and quite disturbing. (Fingerprints of the Gods) The [Valley] Temple was square in plan, 147 feet along each side. It was built into the slope of the plateau, which was higher in the west than in the east. In consequence, while its western wall stood only a little over 20 feet tall, its eastern wall exceeded 40 feet. Another important and unusual feature of the Valley Temple was that its core structure was built entirely of gigantic limestone megaliths. The majority of these measured about 18 feet long x 10 feet wide x 8 feet high and some were as large as 30 feet long x 12 feet wide x 10 feet high. Routinely exceeding 200 tons in weight, each was heavier than a modern diesel locomotive - and there were hundreds of blocks. At present there are only two land-based cranes in the world that could lift weights of this magnitude. At the very frontiers of construction technology, these are both vast, industrialized machines, with booms reaching more than 220 feet into the air, which require on-board counterweights of 160 tons to prevent them from tipping over. The preparation-time for a single lift is around six weeks and calls for the skills of specialized teams of up to 20 men.
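The block-delivery rates quoted a little earlier (2.3 million blocks laid over twenty years) can be checked with simple arithmetic; the back-of-the-envelope sketch below is an added illustration, not part of the source.

```python
blocks = 2_300_000
years = 20

hours_year_round = years * 365 * 10      # ten-hour days, every day of the year
print(blocks / hours_year_round)         # ≈ 31.5 blocks per hour, about one every two minutes

hours_layoff_only = years * 90 * 10      # work confined to the ~three-month annual lay-off
print(blocks / hours_layoff_only)        # ≈ 128 blocks per hour, roughly two a minute
```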
What the Inventory Stela had to say about the Valley Temple was that it had been standing during the reign of Khafre's predecessor Khufu, when it had been regarded not as a recent but as a remotely ancient building. Moreover, it was clear from the context that it was not thought to have been the work of any earlier pharaoh. Instead, it was believed to have come down from the 'First Time' and to have been built by the 'gods' who had settled in the Nile Valley in that remote epoch. It was referred to quite explicitly as the 'House of Osiris, Lord of Rostau' (Rostau being an archaic name for the Giza necropolis). (Fingerprints of the Gods) Water, water, everywhere - this seemed to be the theme of the Osireion, which lay at the bottom of the huge crater Naville and his men excavated in 1914. It was positioned some 50 feet below the level of the floor of the Seti I Temple... Two pools, one rectangular and the other square, had been cut into the plinth along the centre of its long axis and at either end stairways led down to a depth of about 12 feet below the water level. The plinth also supported the two massive colonnades Naville mentioned in his report, each of which consisted of five chunky rose-coloured granite monoliths about eight feet square by 12 feet high and weighing, on average, around 100 tons. The tops of these huge columns were spanned by granite lintels and there was evidence that the whole building had once been roofed over with a series of even larger monolithic slabs. ... the plinth formed a rectangular island, surrounded on all four sides by a water-filled moat about 10 feet wide. The moat was contained by an immense rectangular enclosure wall, no less than 20 feet thick, made of very large blocks of red sandstone disposed in polygonal jigsaw-puzzle patterns. Into the huge thickness of this wall were set the 17 cells mentioned in Naville's report. Describing himself as overawed by the 'grandeur and stern simplicity' of the monument's central hall, with its remarkable granite monoliths, and by 'the power of those ancients who could bring from a distance and move such gigantic blocks', Naville made a suggestion concerning the function the Osireion might originally have been intended to serve: 'Evidently this huge construction was a large reservoir where water was stored during the high Nile...It is curious that what we may consider as a beginning in architecture is neither a temple nor a tomb, but a gigantic pool, a waterwork...' (Fingerprints of the Gods) The Sphinx was part of a master-plan. And the Khafre Pyramid is maybe the most interesting in that respect because it was definitely built in two stages. If you look at it - maybe you've noticed - you'll see that its base consists of several courses of gigantic blocks similar in style to the blocks of the core masonry of the Valley Temple. Superimposed above the base, the rest of the pyramid is composed of smaller, less precisely engineered stuff. But when you look at it, knowing what you're looking for, you see instantly that it's built in two separate bits. They talk...about two long prior periods. In the first of these Egypt was supposedly ruled by the gods - the Neteru - and in the second it was ruled by the Shemsu Hor, the "Companions of Horus". (Fingerprints of the Gods) Any theory about the Great Pyramid should both satisfy the demands of logic and provide answers for all the relevant discoveries that have prompted so much perplexity in the past.
...current theories regarding the function and construction of the pyramid fall short. A credible theory would have to explain the following conditions found inside the Great Pyramid:
* The selection of granite as the building material for the King's Chamber. It is evident that in choosing granite, the builders took upon themselves an extremely difficult task.
* The presence of four superfluous chambers above the King's Chamber.
* The characteristics of the giant granite monoliths that were used to separate these so-called "construction chambers."
* The presence of exuviae, or the cast-off shells of insects, that coated the chamber above the King's Chamber, turning those who entered black.
* The violent disturbance in the King's Chamber that expanded its walls and cracked the beams in its ceiling but left the rest of the Great Pyramid seemingly undisturbed.
* The fact that the guardians were able to detect the disturbance inside the King's Chamber, when there was little or no exterior evidence of it.
* The reason the guardians thought it necessary to smear the cracks in the ceiling of the King's Chamber with cement.
* The fact that two shafts connect the King's Chamber to the outside.
* The design logic for these two shafts - their function, dimensions, features, and so forth.
Any theory offered for serious consideration concerning the Great Pyramid also would have to provide logical reasons for all the anomalies we have already discussed and several we soon will examine, including:
* The Antechamber.
* The Grand Gallery, with its corbeled walls and steep incline.
* The Ascending Passage, with its enigmatic granite barriers.
* The Well Shaft down to the Subterranean Pit.
* The salt encrustations on the walls of the Queen's Chamber.
* The rough, unfinished floor inside the Queen's Chamber.
* The corbeled niche cut into the east wall of the Queen's Chamber.
* The shafts that originally were not fully connected to the Queen's Chamber.
* The copper fittings discovered by Rudolf Gantenbrink in 1993.
* The green stone ball, grapnel hook, and cedar-like wood found in the Queen's Chamber shafts.
* The plaster of Paris that oozed out of the joints inside the shafts.
* The repugnant odor that assailed early explorers. (The Giza Power Plant)
Petrie's close examination of the casing stones revealed variations so minute that they were barely discernible to the naked eye. The records show that the outer casing blocks were square and flat, with a mean variation of 1/100 inch (.010) over an area of thirty-five square feet. Fitted together, the blocks maintained a gap of 0 to 1/50 inch (.020), which might be compared with the thickness of a fingernail. Inside this gap was cement that bonded the limestone so firmly that the strength of the joint was greater than the limestone itself. The composition of this cement has been a mystery for years. The casing blocks were reported to weigh between sixteen and twenty tons each, with the largest blocks measuring five feet high, twelve feet long, and eight feet deep. Here was a prehistoric monument that was constructed with such precision that you could not find a comparable modern building. More remarkable to me was that the builders evidently found it necessary to maintain a standard of precision that can be found today in machine shops, but certainly not on building sites. In proposing methods of construction, academics have given little or no consideration to the fine tolerances maintained throughout the Great Pyramid's structure.
They pass over the astounding accuracy of the Descending Passage's construction, or at best give it just cursory consideration. These facts have not attracted the critical attention they deserve because there is a big difference between reading these figures in a book and the actual experience of having to maintain this precision in one's work. It was clear to me that modern quarrymen and the ancient pyramid builders were not using the same set of guidelines or standards. They were both cutting and dressing stone for the erection of a building, but the ancient Egyptians somehow found it necessary to maintain tolerances that were a mere four percent of modern requirements. Two questions sprang from this revelation. Why did the ancient pyramid builders find it necessary to hold such close tolerances? And how were they able to consistently achieve them? (The Giza Power Plant) Using the most modern quarrying equipment available for cutting, lifting, and transporting the stone, Booker estimated that the present-day Indiana limestone industry would need to triple its output, and it would take the entire industry, which as I have said includes thirty-three quarries, twenty-seven years to fill the order for 131,467,940 cubic feet of stone. These estimates were based on the assumption that production would proceed without problems. Then we would be faced with the task of putting the limestone blocks in place. The level of accuracy in the base of the Great Pyramid is astounding, and is not demanded, or even expected, by building codes today. Civil engineer Roland Dove, of Roland P. Dove & Associates, explained that .02 inch per foot variance was acceptable in modern building foundations. When I informed him of the minute variation in the foundation of the Great Pyramid, he expressed disbelief and agreed with me that in this particular phase of construction, the builders of the pyramid exhibited a state of the art that would be considered advanced by modern standards. In analyzing the reason for this high degree of perfection, I consider two possible alternative answers. First, the building was for some reason required to conform to precise specifications regarding its dimensions, geometric proportions, and its mass. Second, the builders of the Great Pyramid were highly evolved in their building skills and possessed greatly advanced instruments and tools. The bald fact is that the Great Pyramid - by any standard old or new - is the largest and most accurately constructed building in the world. (The Giza Power Plant) In Pyramids and Temples of Gizeh, Petrie noted, "At El Bersheh (lat. 27° 42') there is a still larger example, where a platform of limestone rock has been dressed down, by cutting it away with tube drills about 18 inches in diameter; the circular grooves occasionally intersecting, prove that it was done merely to remove the rock." The estimated height of the Great Pyramid is 480.95 feet. It is estimated to weigh 5,300,000 tons and contain 2,300,000 blocks of stone. The stones that make up the bulk of the pyramid are limestone, which was quarried locally on the plateau itself and in the Mokattam Hills across the Nile River, twenty miles away. It has been stated that it contains more stone than that used in all the churches, cathedrals, and chapels built in England since the time of Christ. Thirty Empire State Buildings could be built with the estimated 2,300,000 stones.
A wall three-feet high and one-foot thick could be built across the United States and back using the amount of masonry contained in the Great Pyramid. (The Giza Power Plant) The characteristics of the holes, the cores that came out of them, and the tool marks would be an impossibility according to any conventional theory of ancient Egyptian craftsmanship, even with the technology available in Petrie's day. Three distinct characteristics of the hole and core, as illustrated in Figure 21, make the artifacts extremely remarkable: * A taper on both the hole and the core. * A symmetrical helical groove following these tapers showing that the drill advanced into the granite at a feedrate of .10 inch per revolution of the drill. * The confounding fact that the spiral groove cut deeper through the quartz than through the softer feldspar. In conventional machining the reverse would be the case. In 1983 Donald Rahn of Rahn Granite Surface Plate Co. told me that diamond drills, rotating at nine hundred revolutions per minute, penetrate granite at the rate of one inch in five minutes. In 1996, Eric Leither of Tru-Stone Corp. told me that these parameters have not changed since then. The feedrate of modern drills, therefore, calculates to be .0002 inch per revolution, indicating that the ancient Egyptians drilled into granite with a feedrate that was five hundred times greater or deeper per revolution of the drill than modern drills! In contrast, ultrasonic drilling fully explains how the holes and cores found in the Valley Temple at Giza could have been cut, and it is capable of creating all the details that Petrie and I puzzled over. Unfortunately for Petrie, ultrasonic drilling was unknown at the time he made his studies, so it is not surprising that he could not find satisfactory answers to his queries. In my opinion, the application of ultrasonic machining is the only method that completely satisfies logic, from a technical viewpoint, and explains all noted phenomena. The most significant detail of the drilled holes and cores studied by Petrie was that the groove was cut deeper through the quartz than through the feldspar. Quartz crystals are employed in the production of ultrasonic sound and, conversely, are responsive to the influence of vibration in the ultrasonic ranges and can be induced to vibrate at high frequency.(The Giza Power Plant) Crouching through the entrance passage and into the bedrock chamber, I climbed inside the box and - with a flashlight and the parallel- was astounded to find the surface on the inside of the box perfectly smooth and perfectly flat. Placing the edge of the parallel against the surface I shone my flashlight behind it. No light came through the interface. No matter where I moved the parallel - vertically, horizontally, sliding it along as one would a gauge on a precision surface plate - I could not detect any deviation from a perfectly flat surface. It would be impossible to do that kind of work on the inside of an object by hand. Even with modern machinery it would be a very difficult and complicated task! I was even more impressed with other artifacts found at another site in the rock tunnels at the temple of Serapeum at Saqqara, the site of the Step Pyramid and Zoser's Tomb. These tunnels contain twenty-one huge granite and basalt boxes. Each box weighs an estimated sixty-five tons, and, together with the huge lid that sits on top of it, the total weight of each assembly is around one hundred tons. 
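As an aside on the drill feed-rate comparison quoted earlier in this passage, the arithmetic behind the roughly 500-to-1 figure can be checked directly from the numbers given there (900 rpm, one inch of penetration in five minutes, and the 0.10 inch-per-revolution spiral groove). The sketch below is a minimal check of that calculation only, and assumes nothing beyond those quoted figures:

```python
# Feed-rate comparison sketch, using only the figures quoted in the passage.
modern_rpm = 900              # revolutions per minute (quoted for diamond drills)
modern_penetration_in = 1.0   # inches of penetration
modern_time_min = 5.0         # minutes taken for that penetration

modern_feed_per_rev = modern_penetration_in / (modern_rpm * modern_time_min)
# = 1 / 4500 ≈ 0.00022 inch per revolution (the text rounds this to 0.0002)

ancient_feed_per_rev = 0.10   # inch per revolution, inferred from the helical groove

ratio = ancient_feed_per_rev / modern_feed_per_rev
print(f"modern feed: {modern_feed_per_rev:.5f} in/rev")
print(f"ratio: about {ratio:.0f}x")   # ≈ 450x, which the book rounds to "five hundred times"
```

The exact quotient comes out closer to 450; "five hundred times" appears to be the book's rounding of that figure.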
The granite boxes were approximately 13 feet long, 7 1/2 feet wide, and 11 feet high. I jumped down into a crypt and placed my parallel against the outside surface of the box. It was perfectly flat. I shone the flashlight and found no deviation from a perfectly flat surface. I clambered through a broken-out edge into the inside of another giant box and, again, I was astonished to find it astoundingly flat. I looked for errors and could not find any. Checking the lid and the surface on which it sat, I found them both to be perfectly flat. It occurred to me that this gave the manufacturers of this piece a perfect seal - two perfectly flat surfaces pressed together, with the weight of one pushing out the air between the two surfaces. With such a convincing collection of artifacts that prove the existence of precision machinery in ancient Egypt, the idea that the Great Pyramid was built by an advanced civilization that inhabited the Earth thousands of years ago becomes more admissible. I am not proposing that this civilization was more advanced technologically than ours on all levels, but it does appear that as far as masonry work and construction are concerned they were exceeding current capabilities and specifications. Making routine work of precision machining huge pieces of extremely hard igneous rock is astonishingly impressive. (The Giza Power Plant) Such a revisioning occurred in 1986 when a French chemist named Joseph Davidovits rocked the world with a startling new theory on pyramid construction. Davidovits proposed that the blocks used to construct the pyramids and temples in Egypt were actually cast in place by pouring geopolymer materials into molds. In 1982, Davidovits analyzed limestone, given to him by French Egyptologist Jean-Philippe Lauer, which was taken from the Ascending Passage of the Great Pyramid and also the outer casing stones of the pyramid of Teti. In his book The Pyramids: An Enigma Solved, coauthored with Margie Morris, he reported: X-ray chemical analysis detects bulk chemical composition. These tests undoubtedly show that Lauer's samples are man-made. The samples contain mineral elements highly uncommon in natural limestone, and these foreign minerals can take part in the production of geopolymeric binder. (The Giza Power Plant) Although today we stand in amazement before ancient megalithic sites that were built employing huge stones, if we had Leedskalnin's technique for lifting huge stones, it would make sense to us that the ancient masons might make their building blocks as large as possible. Very simply, it would be more economical to build in that manner. If we had a need to fill a five-foot cube, the energy and time required to cut smaller blocks would be much greater than what would be required to cut a large one. As we have seen, the evidence carved into the granite artifacts in Egypt clearly points to manufacturing methods that involved the use of machinery such as lathes, milling machines, ultrasonic drilling machines, and high-speed saws. They also possess attributes that cannot be produced without a system of measurement that is equal to the system of measure we use today. Their accuracy was not produced by chance, but is repeated over and over again. (The Giza Power Plant) It was the discovery of the knowledge of the transcendental number of pi in the Great Pyramid that prompted Taylor to conclude that the perimeter of the Great Pyramid could be analogous to the circumference of the Earth at the equator.
The height would represent the distance from the center of the Earth to the poles. Further studies of the dimensions of the Great Pyramid revealed surprising inferences regarding the knowledge of its builders. When searching for a unit that would fit the pyramid in whole numbers yet still retain the pi proportion, Taylor's answer of 366 base and 116.5 height suggested to him that the Egyptians may have divided the perimeter of the Great Pyramid into segments of the solar year. He also found the figure 366 when he divided the base of the pyramid by 25 inches. This suggested that the British inch was close to the Egyptian unit of measure, with 25 such units making one cubit. To review Taylor's findings:
* A pyramid inch is .001 inch larger than a British inch. There are 25 pyramid inches in a cubit and there were 365.24 cubits in the square base of the Great Pyramid.
* There are 365.24 days in a calendar year.
* One pyramid inch is equal in length to 1/500 millionth of the Earth's axis of rotation.
This relationship suggests that not only were the builders of the Great Pyramid knowledgeable of the dimensions of the planet, they based their measurement system on them. (The Giza Power Plant) What else is unique about the Great Pyramid? Although it is a pyramid in shape, its geometry possesses an astounding approximation to the unique properties of a circle, or sphere. The pyramid's height is in relationship with the perimeter of its base as the radius of a circle is in relationship with its circumference. A perfectly constructed pyramid with an exact angle of 51 degrees, 51 minutes, 14.3 seconds has the value pi incorporated into its shape. William Fix presented well-founded and objective data to support this claim: "We know that someone in very deep antiquity was aware of the size and shape of the earth with great precision. The three key measurements of the earth are incorporated in the dimensions of the Great Pyramid. The perimeter of the Pyramid equals a half minute of equatorial latitude. The perimeter of the sockets equals a half minute of equatorial longitude, or 1/43,200 of the earth's circumference. The height of the Pyramid, including the platform, equals 1/43,200 of the earth's polar radius .... We do not know how they measured it, but that they did so is now an article of knowledge." (The Giza Power Plant) With this experimental evidence available, and with what can be extrapolated from the dimensions and mass of the Great Pyramid, we have an object that fits the criteria established as necessary for an object to draw vibrations from the Earth. That object is the Great Pyramid of Giza! Here is the product of an ancient civilization empowered with the knowledge that as long as the moon continued to orbit the Earth, the special relationship that existed between the two assured the Egyptians of vast amounts of energy. The source of the energy is the Earth itself, in the form of seismic energy. The ancient Egyptians saw tremendous value in this form of energy and expended a considerable amount of effort to tap into it. The benefits they received may have been twofold: energy to fuel their civilization, and the ability to stabilize the Earth's crust by drawing off seismic energy over a period of time rather than allowing it to build up to destructive levels. Covering a large land area, the Great Pyramid is, in fact, in harmonic resonance with the vibration of the Earth - a structure that could act as an acoustical horn for collecting, channeling, and/or focusing terrestrial vibration.
We are led to consider, therefore, that energy associated with the pyramid shape is not drawn from the air or magically generated simply by the geometric form of a pyramid, but that the pyramid acts as a receiver of energy from within the Earth itself. (The Giza Power Plant) ...the Great Pyramid conducts a broad range of vibrational frequencies through its mass. When I consider the mathematical comparison of the dimensions of the Great Pyramid with the dimensions of the Earth, I am led to conclude that this correspondence was no coincidence, but was in fact the expressed intention of the builders. If the dimensions of the Earth determine the wave characteristics of vibrations emanating from the core, then it would obviously be beneficial to incorporate these dimensions in a receiver of these vibrations. The receiver would respond harmonically to the influence of the vibrations and be in a state of resonance with them. The energy of the Earth is tremendous. The seismic disturbances around the globe (for instance, an estimated one million earthquakes occur annually) and the awesome power released by a volcanic eruption attest to the magnitude of this Earth energy. And these accumulated stresses are a constant factor in the Earth's evolution. Let me make no apology for the theory I am proposing. The Great Pyramid was a geomechanical power plant that responded sympathetically with the Earth's vibrations and converted that energy into electricity. (The Giza Power Plant) Rather than suffering from a lack of attention, therefore, the rough top surfaces of those granite beams in the King's Chamber have been given more careful and deliberate attention and work than the beams' sides or bottoms. Before the ancient craftspeople placed them inside the Great Pyramid, each beam may have been "tested" or "tuned" by being suspended on each end in the same position that it would have once it was placed inside the pyramid. The workers would then shape and gouge the topside of each beam in order to tune it before it was permanently positioned inside the pyramid. After cutting three sides square and true to each other, the remaining side could have been cut and shaped until it reached a specific resonating frequency. The removal of material on the upper side of the beam would take into consideration the elasticity of the beam, as a variation of elasticity might result in more material being removed at one point along the beam's length than at another. The fact that the beams above the King's Chamber are all shapes and sizes would support this speculation. In some of the granite beams, I would not be surprised if we found holes gouged out of the granite as the tuners worked on trouble spots. What we find in the King's Chamber, then, are thousands of tons of granite that were precisely tuned to resonate in harmony with the fundamental frequency of the Earth and the pyramid! (The Giza Power Plant) The Grand Gallery, which is considered to be an architectural masterpiece, is an enclosed space in which resonators were installed in the slots along the ledge that runs the length of the gallery. As the Earth's vibration flowed through the Great Pyramid, the resonators converted the vibrational energy to airborne sound. By design, the angles and surfaces of the Grand Gallery walls and ceiling caused reflection of the sound, and its focus into the King's Chamber. Although the King's Chamber also was responding to the energy flowing through the pyramid, much of the energy would flow past it. 
The specific design and utility of the Grand Gallery was to transfer the energy flowing through a large area of the pyramid into the resonant King's Chamber. This sound was then focused into the granite resonating cavity at sufficient amplitude to drive the granite ceiling beams to oscillation. These beams, in turn, compelled the beams above them to resonate in harmonic sympathy. Thus, with the input of sound and the maximization of resonance, the entire granite complex, in effect, became a vibrating mass of energy. The existence of resonators in this gallery is predicted by what has been found inside the King's Chamber and the design of, and phenomena noted in, the Grand Gallery. The mystery of the twenty-seven pairs of slots in the side ramps is logically explained if we theorize that each pair of slots contained a resonator assembly and the slots served to lock these assemblies into place To increase the resonators' frequency, the ancient scientists would have made the dimensions smaller, and correspondingly reduced the distance between the two walls adjacent to each resonator. In fact, the walls of the Grand Gallery actually step inward seven times in their height and most probably the resonators' supports reached almost to the ceiling. (The Giza Power Plant) In the [Cairo] museum's collection are stone jars and bowls so finely machined and perfectly balanced that they inspire awe and wonder. One bowl in particular, a schist bowl with three lobes folded toward the center hub, is an incredible piece of work. (above) Graham Hancock…wrote, "During my travels in Egypt I had examined many stone vessels - dating back in some cases to pre-dynastic times - that had been mysteriously hollowed out of a range of materials such as diorite, basalt, quartz crystal and metamorphic schist. For example, more than 30,000 such vessels had been found in the chambers beneath the Third Dynasty Step Pyramid of Zoser at Saqqara. That meant that they were at least as old as Zoser himself (i.e. around 2650 BC)." It then occurred to me that perhaps these stone artifacts were not domestic vases at all, but were used in some other way. Perhaps they were being used to convert vibration into airborne sound. Given their shape and dimensions - and the fact that there were 30,000 of them found in chambers underneath the Step Pyramid - are these vessels the Helmholtz resonators we are looking for? As if to provide us with clues, one of the bowls in the Cairo Museum has a horn attached to it, and one of the bowls does not have handles normally seen on a domestic vase, or urn, but has trunnion-like appendages machined on each side of it. These trunnions would be needed to hold the bowl securely in a resonator assembly. (The Giza Power Plant) By installing baffles inside the Antechamber, sound waves traveling from the Grand Gallery through the passageway into the King's Chamber would be filtered as they passed through, allowing only a single frequency or harmonic of that frequency to enter the resonant chamber. Those sound waves with a wavelength that did not coincide with the dimensions between the baffles would be filtered out, thereby ensuring that no interference sound waves could enter the resonant King's Chamber to reduce the output of the system. 
To explain the half-round grooves on the west side of the Antechamber and the flat surface on the east, we could speculate that when the installation of these baffles took place, they received a final tuning or "tweaking." (The Giza Power Plant) In the Great Pyramid, there is evidence that strongly indicates that the ancient Egyptian engineers and designers knew about and utilized the principles of a maser to collect the energy that was being drawn through the pyramid from the Earth and deliver it to the outside. This evidence can be found in the King's Chamber. The key to converting or transducing hydrogen gas to usable power in the Giza power system was the introduction of acoustical vibration of the correct frequency and amplitude. (The amplitude is the amount of energy contained within a sound wave.) Based on the previous evidence, sound must have been focused into the King's Chamber to force oscillations of the granite, creating in effect a vibrating mass of thousands of tons of granite. The frequencies inside this chamber, then, would rise above the low frequency of the Earth - through a scale of harmonic steps - to a level that would excite the hydrogen gas to higher energy levels. The King's Chamber is a technical wonder. It is where Earth's mechanical energy was converted, or transduced, into usable power. It is a resonant cavity in which sound was focused. Sound roaring through the passageway at the resonant frequency of this chamber - or its harmonic - at sufficient amplitude would drive these granite beams to vibrate in resonance. Sound waves not of the correct frequency would be filtered in the acoustic filter, more commonly known as the Antechamber (above). (The Giza Power Plant) With the granite beams vibrating at their resonant frequency, the sound energy would be converted through the piezoelectric effect of the silicon-quartz crystals embedded in the granite, creating high-frequency radio waves. Ultrasonic radiation would also be generated by this assembly. The hydrogen generated in the Queen's Chamber, directly below the King's Chamber, would fill the upper chambers and then efficiently absorb this energy as each atom responded in resonance to its input. In the Giza power plant, the Northern Shaft served as a waveguide through which the input microwave signal traveled. A typical waveguide is rectangular in shape, with its width being the wavelength of microwave energy and its height measuring approximately one-half its width. The Northern Shaft waveguide was constructed precisely to pass through the masonry from the north face of the pyramid and into the King's Chamber. That microwave signal could have been collected off the outer surface of the Great Pyramid and directed into the waveguide (above). Amazingly, this waveguide leading to the chamber has dimensions that closely approximate the wavelength of microwave energy - 1,420,405,751.786 hertz (cycles per second). This is the frequency of energy emitted by atomic hydrogen in the universe. These features and facts are gathered in Table 3 (below). (The Giza Power Plant) These features inexorably move us to consider the purpose for the gold-plated iron that was discovered embedded in the limestone near the Southern Shaft. In order to have an efficient conduit for electromagnetic radiation, the entire lengths of the Northern and Southern Shafts would have to have been lined with this material, thereby making a very efficient conduit for the input signal and the power output (above).
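For orientation, the hydrogen frequency quoted above is the well-known 21-centimetre line of neutral hydrogen. Below is a minimal sketch of the frequency-to-wavelength conversion, assuming only the standard relation wavelength = c / frequency; no shaft dimensions from the text are used or implied:

```python
# Convert the quoted hydrogen-line frequency to its free-space wavelength.
c = 299_792_458.0               # speed of light, metres per second
f_hydrogen = 1_420_405_751.786  # hertz, the frequency quoted in the passage

wavelength_m = c / f_hydrogen
print(f"wavelength: {wavelength_m * 100:.1f} cm")   # ≈ 21.1 cm
```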
There is evidence to suggest that the granite box could refract electromagnetic radiation as it passed through the box's north and south walls. Though fully accurate measurements for optical characteristics have not been made on these surfaces, Smyth's measurements show that the grinding on these surfaces produced a concave surface. Manufactured in such a manner, the coffer - positioned in the path of the incoming signal through the Northern Shaft and with oscillating crystals adding energy to the microwave beam - may have served to spread or diverge the signal inside the box as it passed through the first wall. Within the confines of the granite box, the spreading beam would then interact and stimulate the emission of energy from the energized, or "pumped,” hydrogen atoms (right). If we follow a straight line across the King's Chamber from where the Northern Shaft enters, we find a feature cut into the granite wall that closely resembles a horn antenna, much like a microwave receiver. Passing through the opposite wall of the coffer, then, the radiation picked up more energy, was once more refracted, and then focused into this horn antenna. (The Giza Power Plant) Considering the investment the ancient Egyptians made in building such a structure, and its intended purpose as a power plant, it would be nearly unthinkable for them not to have fully tested the machinery that would be put to use. The remarkable similarity in the dimensions of both the passages in the Great Pyramid and the Trial Passages supports my speculation that every piece of equipment critical to the operation of the power plant was first fully developed and tested prior to its installation. The power plant theory currently is the only one that provides a logical pattern of events to explain the purpose for the Trial Passages. (The Giza Power Plant) The Queen's Chamber's characteristics indicate that its specific purpose was to produce fuel, which is of paramount importance for any power plant. Although it would be difficult to pinpoint exactly what process took place inside the Queen's Chamber, it appears a chemical reaction repeatedly took place there. The residual substance the process left behind (the salts on the chamber wall) and what can be deduced from artifacts (grapnel hook and cedarlike wood) and structural details (Gantenbrink's "door" for example) are too prominent to be ignored. They all indicate that the energy created in the King's Chamber was the result of the efficient operation of the hydrogen-generating Queen's Chamber. The equipment that provided the priming pulses was most likely housed in the Subterranean Pit. Before or at the time the "key was turned" to start the priming pulses, a supply of chemicals was pumped into the Northern and Southern Shafts of the Queen's Chamber, filling them until contact was made between the grapnel hook and the electrodes that were sticking out of the "door." Seeping through the "lefts" in the Queen's Chamber, these chemicals combined to produce hydrogen gas, which filled the interior passageways and chambers of the pyramid. The waste from the spent chemicals flowed along the Horizontal Passage and down the Well Shaft. Induced by priming pulses of vibration - tuned to the resonant frequency of the entire structure - the vibration of the pyramid gradually increased in amplitude and oscillated in harmony with the vibrations of the Earth. 
Harmonically coupled with the Earth, vibrational energy then flowed in abundance from the Earth through the pyramid and influenced a series of tuned Helmholtz-type resonators housed in the Grand Gallery, where the vibration was converted into airborne sound. By virtue of the acoustical design of the Grand Gallery, the sound was focused through the passage leading to the King's Chamber. Only frequencies in harmony with the resonant frequency of the King's Chamber were allowed to pass through an acoustic filter that was housed in the Antechamber. (The Giza Power Plant) The King's Chamber was the heart of the Giza power plant, an impressive power center comprised of thousands of tons of granite containing fifty-five percent silicon-quartz crystal. The chamber was designed to minimize any damping of vibration, and its dimensions created a resonant cavity that was in harmony with the incoming acoustical energy. As the granite vibrated in sympathy with the sound, it stressed the quartz in the rock and stimulated electrons to flow by what is known as the piezoelectric effect. The energy that filled the King's Chamber at that point became a combination of acoustical energy and electromagnetic energy. Both forms of energy covered a broad spectrum of harmonic frequencies, from the fundamental infrasonic frequencies of the Earth to the ultrasonic and higher electromagnetic microwave frequencies. The hydrogen freely absorbed this energy, for the designers of the Giza power plant had made sure that the frequencies at which the King's Chamber resonated were harmonics of the frequency at which hydrogen resonates. As a result, the hydrogen atom, which consists of one proton and one electron, efficiently absorbed this energy, and its electron was "pumped" to a higher energy state. (The Giza Power Plant) The Northern Shaft served as a conduit, or a waveguide, and its original metal lining - which passed with extreme precision through the pyramid from the outside - served to channel a microwave signal into the King's Chamber. The microwave signal that flowed through this waveguide may have been the same signal that we know today is created by the atomic hydrogen that fills the universe and that is constantly bombarding the Earth. This microwave signal probably was reflected off the outside face of the pyramid, then was focused down the Northern Shaft. Traveling through the King's Chamber and passing through a crystal box amplifier located in its path, the input signal increased in power as it interacted with the highly energized hydrogen atoms inside the resonating box amplifier and chamber. This interaction forced the electrons back to their natural "ground state." In turn, the hydrogen atoms released a packet of energy of the same type and frequency as the input signal. This "stimulated emission" was entrained with the input signal and followed the same path. The process built exponentially - occurring trillions of times over. What entered the chamber as a low energy signal became a collimated (parallel) beam of immense power as it was collected in a microwave receiver housed in the south wall of the King's Chamber and was then directed through the metal-lined Southern Shaft to the outside of the pyramid. This tightly collimated beam was the reason for all the science, technology, craftsmanship, and untold hours of work that went into designing, testing, and building the Giza power plant. 
(The Giza Power Plant) There can be little doubt that the Pyramid Texts make a clear statement that the dead kings become stars, especially seen in the lower eastern sky. They also tell us that it is the souls of departed kings which become stars: 'be a soul as a living star ….' 'I am a soul ...I (am) a star of gold …' '0 king, you are this Great Star, the companion of Orion, who traverses the sky with Orion, who navigates the Duat with Osiris …' What an incredible description of events related to the pyramids' link to the stars. Suddenly I saw a new meaning for this star. Imagine if we had put a vehicle in space, for whatever reason, and were beaming energy to it - for the vehicle's own use or to be returned to some location on Earth. Would that vehicle not appear as a bright star in the night sky? Assuming that the energy beam would have some divergence (it would grow in size) the farther it traveled from its source, then the larger the microwave receiver - the "star" - would have to be. And more fantastic still, what if the energy were being used to provide power to a space ship? The microwave energy that was projected from the Southern Shaft to Orion's belt stars may have been delivering more than Khufu's soul to the heavens. The energy, which Robert Bauval describes as Khufu's soul traveling to Orion, may have been his actual person along with an entourage! (The Giza Power Plant) These readings described the ancient Atlantean power plants, which on the surface seem far removed from the Egyptian pyramids; however, an interpretation of the readings becomes more meaningful when we compare (what Cayce described as) the "firestone" with granite, out of which the King's Chamber, the power center in the Giza power plant, is constructed: About the firestone - the entity's activities then made such applications as dealt both with the constructive as well as destructive forces in that period. It would be well that there be given something of a description of this so that it may be understood better by the entity in the present. (The Giza Power Plant) In the center of a building which would today be said to be lined with nonconductive stone - something akin to asbestos, with ... other nonconductors such as are now being manufactured in England under a name which is well known to many of those who deal in such things. The building above the stone was oval; or a dome, wherein there could be ... a portion for rolling back, so that the activity of the stars - the concentration of energies that emanate from bodies that are on fire themselves, along with elements that are found and not found in the earth's atmosphere. The concentration through the prisms of glass (as would be called in the present) was in such manner that it acted upon the instruments which were connected with the various modes of travel through induction methods which made much the [same] character of control as would in the present day be termed remote control through radio vibrations or directions; though the kind of force impelled from the stone acted upon the motivating forces in the crafts themselves. The building was constructed so that when the dome was rolled back there might be little or no hindrance in the direct application of power to various crafts that were to be impelled through space - whether within the radius of vision or whether directed under water or under other elements, or through other elements. 
(The Giza Power Plant) The preparation of this stone was solely in the hands of the initiates at the time; and the entity was among those who directed the influences of the radiation which arose in the form of rays that were invisible to the eye but acted upon the stones themselves as set in the motivating forces - whether the aircraft were lifted by the gases of the period; or whether for guiding the more-of-pleasure vehicles that might pass along close to the earth, or crafts on the water or under the water. These, then, were impelled by the concentration of rays from the stone which was centered in the middle of the power station, or powerhouse (as would be the term in the present). In the active forces of these, the entity brought destructive forces by setting up - in various portions of the land - the kind that was to act in producing powers for the various forms of the people's activities in the cities, the towns and the countries surrounding same. These, not intentionally, were tuned too high; and brought the second period of destructive forces to the people in the land - and broke up the land into those isles which later became the scene of further destructive forces in the land. (Dec. 20, 1933) When I looked for an event in Egyptian history that would explain the destruction of this culture and at the same time explain the erosion of the pyramids, I found a clue in the 1985 discovery of volcanic ash twenty feet underground in the Nile delta. This ash was found to be identical to that from an enormous eruption that occurred approximately 3,500 years ago on the Greek island of Santorini. The eruption is estimated to have been 22,000 times more destructive than the atomic bomb that was dropped on Hiroshima. (The Giza Power Plant) Brad Steiger presented a forceful argument that in prehistory nuclear explosions had affected several areas of the Earth. He cited the discovery of fused green glass in deep stratas of the Earth, and in Gabon, Africa, the Euphrates Valley, the Sahara Desert, the Gobi Desert, the Mojave Desert, and Iraq. These vast wastelands of melted sand can be compared with White Sands, New Mexico, where the sands were fused as a result of nuclear bomb testing. Steiger wrote, "Perhaps the most potentially mind-boggling evidence of an advanced prehistoric technology that might have blown its parent- culture away is to be found in those sites which ostensibly bear mute evidence of pre-Genesis nuclear reactions .... At the same time, scientists have found a number of uranium deposits that appear to have been mined or depleted in antiquity,"! Knowing that the energy of the sun comes from the fusion of hydrogen atoms, the thought of hydrogen bombs brings terrible visions of vast destruction, mushroom clouds, and insidious radiation wafting across the land. These visions are included in other books that reference The Mahabharata as testimony of nuclear war in prehistory. In We Are Not the First, Andrew Tomas wrote: " 'A blazing missile possessed of the radiance of smokeless fire was discharged. A thick gloom suddenly encompassed the heavens. Clouds roared into the higher air, showering blood. The world, scorched by the heat of that weapon, seemed to be in fever,' thus describes the Drona Parva a page of the unknown past of mankind. One can almost visualize the mushroom cloud of an atomic bomb explosion and atomic radiation. 
Another passage compares the detonation with a flare-up of ten thousand suns."(The Giza Power Plant) That the Egyptians were acquainted with a cutting jewel far harder than quartz, and that they used this jewel as a sharp pointed graver, is put beyond doubt by the diorite bowls with inscriptions of the fourth dynasty, of which I found fragments at Gizeh; as well as the scratches on polished granite of Ptolemaic age at San. The hieroglyphs are incised, with a very free-cutting point; they are not scraped nor ground out, but are ploughed through the diorite, with rough edges to the line. As the lines are only 1/150 inch wide (the figures being about .2 long), it is evident that the cutting point must have been harder than quartz; and tough enough not to splinter when so fine an edge was being employed, probably only 1/200 inch wide. Parallel lines are graved only 1/30 inch apart from centre to centre. That the blades of the saws were of bronze, we know from the green staining on the sides of saw cuts, and on grains of sand left in a saw cut. The forms of tools were straight saws, circular saws, tubular drills, and lathes. The straight saws varied from .03 to .2 inch thick, according to the work; the largest were 8 feet or more in length, as the cuts run lengthways on the Great Pyramid coffer, which is 7 feet 6 in. long. These tubular drills vary from 1/4 inch to 5 inches in diameter, and from 1/30 to 1/5 thick. The smallest hole yet found in granite is 2 inches diameter, all the lesser holes being in limestone or alabaster, which was probably worked merely with tube and sand. A peculiar feature of these cores is that they are always tapered, and the holes are always enlarged towards the top. The principle of rotating the tool was, for smaller objects, abandoned in favour of rotating the work; and the lathe appears to have been as familiar an instrument in the fourth dynasty, as it is in modern workshops. (The Giza Power Plant) It is significant to note, in this connection, that a piece of wrought-iron was found in the Great Pyramid by one of Col. Howard Vyse's assistants, Mr. J.R. Hill, during the operations carried out at Giza in 1837. Mr. Hill found it embedded in the cement in an inner joint, while removing some of the masonry preparatory to clearing the southern air-channel of the King's Chamber. Dr. Sayed El Gayer, who gained his Ph.D. in extraction metallurgy at the University of Aston in Birmingham, reported that "it is concluded, on the basis of the present investigation, that the iron plate is very ancient. Furthermore, the metallurgical evidence supports the archaeological evidence which suggests that the plate was incorporated within the Pyramid at the time that structure was being built."(The Giza Power Plant) According to John Anthony West, an experienced Egyptologist, the Pharaohs and priests were preoccupied with a principle known as Ma'at - often translated as 'equilibrium' or 'balance'. It was possible, he suggested, that this principle might have been carried over into practical spheres and 'that the Egyptians understood and used techniques of mechanical balance unknown to us'. Such techniques would have enabled them to 'manipulate these immense stones with ease and finesse ... What would be magic to us was method to them.' 
(The Sign and the Seal) Certainly, when I first entered the Great Pyramid at Giza, I felt like a Lilliputian - dwarfed and slightly intimidated, not only by the sheer mass and size of this mountain of stone but also by an almost tangible sense of the accumulated weight of the ages. At some point, for a reason that I cannot explain, I moved to the middle of the floor and gave voice to a sustained low-pitched tone like the song of the fallen obelisk at Karnak. The walls and the ceiling seemed to collect this sound, to gather and amplify it - and then to project it back at me so that I could sense the returning vibrations through my feet and scalp and skin. I felt electrified and energized, excited and at the same time calm, as though I stood on the brink of some tremendous and absolutely inevitable revelation. (The Sign and the Seal) ...every aspect of Egyptian knowledge seems to have been complete at the very beginning. The sciences, artistic and architectural techniques and the hieroglyphic system show virtually no signs of a period of 'development'; indeed, many of the achievements of the earliest dynasties were never surpassed, or even equalled later on. (The Sign and the Seal) One large block in the West wall measures approximately 18 feet x 10 feet x 8 feet and would weigh somewhere between fifty and seventy tons. For no conceivable rational architectural or engineering reason, it is elaborately dressed and slotted into place as if it were no more than a piece of a jigsaw puzzle. It is typical of the stones in the Sphinx temple complex, and quite atypical of all of the rest of Egypt. (The Sign and the Seal) ...although there is again a marked difference of degree, there is no difference in kind between the geometric, astronomically aligned structures of the Giza plateau and the geometric, astronomically aligned structures of the Mississippi Valley. All of them seem bound together by the single purpose of the triumph of the soul over death and by the means deployed to achieve that purpose. (America Before) The Benben is one of the most ancient traditions in Egypt. It was centered on the city of Heliopolis and the temple of the Phoenix. The Benben stone was not the entire tower but the conical stone on the top, which is often presumed to have been of meteoric origins. Because of its dramatic and fiery appearance in the world, in falling from the skies, the Benben was considered to have originally been the property of the gods. From this, the link was made to its being the seed of the gods, part of the ritual of death and rebirth. Legends indicate that the original Benben stone at Heliopolis went missing around 2000 BC. Various pharaohs have erected replacement towers there, offerings to the gods which were presumably the root of the obelisk cult in Egypt. The oldest of these obelisks to have survived is the red granite construction of Senusret I, which still stands in the suburbs of Cairo. The obelisk is quite bare now, but many of these stelae were originally covered in precious metals - electrum or gold. (Jesus: Last of the Pharaohs) There seems to be mounting evidence that there was some symbolic architecture at Giza and Dashur during Zep Tepi and that it referenced Vega and Sirius, because these stars represented the First Time, the beginning of the Great Year of precession. By the third dynasty, King Djoser, with his astronomer-priest Imhotep, built at Saqqara the first major monumental complex of dynastic Egypt.
Then fourth-dynasty founder King Sneferu, and Sneferu's son, King Khufu, built the Bent Pyramid at Dashur, followed by the Great Pyramid at Giza, both constructed on top of much more ancient sacred subterranean passages and platforms from Zep Tepi. Thus, all the truly monumental pyramid architecture of the dynastic period (with the exception, perhaps, of the fourth-dynasty Unfinished Pyramid at Zawiyet el-Aryan) is associated with Zep Tepi. In summary, the astronomically determined dates related to Zep Tepi are these: (1) the layout of the Great Pyramids at Giza, referring back to the centuries around 11,700 BC; (2) the southern culmination of Sirius circa 12,280 BC, marked by the location of the Giza monuments and the Queen's Chamber horizontal passage; and (3) Vega located as North Star at its northern culmination, in 12,070 BC, marked by the subterranean passage of Khufu's Great Pyramid. (Black Genesis)
Starlight can no longer hide behind the glare of the feeding massive black holes of the infant universe. With the help of the James Webb Space Telescope (JWST), astronomers have for the first time detected starlight from two early galaxies that host feeding supermassive black holes, or quasars. The findings could help scientists better understand how supermassive black holes rapidly grow to masses equivalent to millions or billions of suns, and how they and the galaxies that host them grow together. “25 years ago, it was amazing to us that we could observe host galaxies from 3 billion years ago, using large ground-based telescopes,” said team member and Max Planck Institute for Astronomy researcher Knud Jahnke in a statement. “The Hubble Space Telescope allows us to probe the peak epoch of black hole growth 10 billion years ago. And now we have JWST available to see the galaxies where the first supermassive black holes appeared.” The team observed two of these so-called active galaxies, seeing what they were like when the 13.8-billion-year-old universe was less than a billion years old. They were able to calculate the masses of the galaxies and the masses of the supermassive black holes that power the quasars, designated J2236+0032 and J2255+0251. Light from these two galaxies took 12.9 and 12.8 billion years to reach us, so they appear to astronomers as they were 870 and 880 million years after the Big Bang, respectively. The observations revealed that the masses of the galaxies are 130 billion and 30 billion times that of the sun, and the masses of the feeding black holes are 1.4 billion solar masses for J2236+0032 and 200 million solar masses for J2255+0251. They showed that the masses of these early galaxies and their central black holes are related in the same way seen in galaxies observed closer to the Milky Way and, thus, more recent in time. How do supermassive black holes grow with their galaxies? Quasars are some of the most intense objects in the entire universe. Powered by supermassive black holes surrounded by gas and dust - some of which falls into the black hole, some of which is ejected at speeds approaching the speed of light - quasars emit so much light that they often outshine all the stars in their host galaxy combined. Almost every galaxy is believed to have a supermassive black hole at its heart, but not all of these are quasars. For example, the supermassive black hole at the center of the Milky Way, Sagittarius A* (Sgr A*), consumes very little matter, the equivalent of a person eating a grain of rice every million years. This is not enough nourishment to power a quasar. The first quasar was spotted in 1963, and since then, scientists have been unraveling the processes that power their massive light output. In the 2000s, it was discovered that the masses of galaxies and their supermassive black holes are correlated, with the mass of the stars in a galaxy being about 1,000 times greater than the mass of its central black hole. The relation between the masses of supermassive black holes and their galaxies holds for galaxies with supermassive black holes millions of times the mass of the sun and for those with central black holes billions of times the mass of our star.
The connection between the mass of a galaxy and the mass of its supermassive black hole can be attributed to the fact that both grow through a chain of mergers between galaxies, which eventually lead to the black holes at the centers of those galaxies violently colliding with each other and creating a larger black hole. Consequently, after multiple mergers, the mass of a galaxy will be around the average mass of the initial galaxies times the number of galaxies it merged with, while the central black hole mass will be around the mass of the initial black holes times the same number, leading to an almost linear relationship. Another suggestion is that when a supermassive black hole consumes enough material to become a quasar, the radiation it blasts out controls the material available both for powering the quasar and for forming new stars. So when the quasar runs out of food and stops growing, star formation slows in that galaxy as well. Whatever the reason for this relationship, astronomers had not, until now, been able to determine whether it exists for galaxies and their supermassive black holes in the very early universe. This is because, while the luminosity of quasars allows them to be studied at distances of billions of light-years, it also makes it difficult to observe the dimmer starlight from the quasar-hosting galaxies. Ground-based telescopes have difficulty distinguishing the light from quasars and the light from stars in their galaxies because of the effect of Earth’s atmosphere. From its position above the atmosphere, the Hubble Space Telescope has been able to disentangle the light from these galaxies out to around 10 billion light-years away. But to do this for more distant and earlier galaxies, astronomers had to wait for the most powerful space telescope ever put into orbit, the JWST. The quasars J2236+0032 and J2255+0251 were observed with JWST’s main instrument, the Near Infrared Camera (NIRCam), for 2 hours at two different wavelengths. The team took the combined spectrum of quasar light and starlight for both galaxies and then separated out the quasar light, detecting light from the early stars in such galaxies for the first time. Interestingly, the observations of J2236+0032 and J2255+0251 and their galaxies with JWST showed that the supermassive black hole/galaxy mass relationship existed even in the early universe. Currently, this data alone is not enough to reveal the origins of this mass correlation and how supermassive black holes grow to such sizes, but it will inform future investigations. The findings represent only part of JWST’s observations of distant quasars, with the powerful space telescope currently observing an additional dozen supermassive black hole-powered objects and their galaxies. A further 11 hours of observation time has also been granted to this particular exploration of the early universe. The research was published on June 28 in the journal Nature.
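The merger argument described above can be illustrated with a toy calculation. The sketch below is not from the paper; it assumes, purely for illustration, that each starting galaxy hosts a black hole averaging 0.1% of its stellar mass with wide scatter, and shows that random pairwise mergers, which simply add stellar and black-hole masses, pull the combined population toward that same average ratio, which is the near-linear relationship the article describes:

```python
import random

random.seed(42)
MEAN_FRACTION = 0.001  # assumed typical black-hole-to-stellar-mass ratio (illustrative only)

# Build a starting population of galaxies with scattered black-hole fractions.
galaxies = []
for _ in range(1024):
    stars = random.uniform(1e9, 1e11)                      # stellar mass, solar masses
    bh = stars * MEAN_FRACTION * random.uniform(0.5, 1.5)  # wide initial scatter
    galaxies.append({"stars": stars, "bh": bh})

# Randomly merge pairs until one galaxy remains; stellar and black-hole masses simply add.
while len(galaxies) > 1:
    random.shuffle(galaxies)
    a, b = galaxies.pop(), galaxies.pop()
    galaxies.append({"stars": a["stars"] + b["stars"], "bh": a["bh"] + b["bh"]})

final = galaxies[0]
print(f"final black-hole fraction: {final['bh'] / final['stars']:.4%}")
# lands near the assumed 0.10% average: merging washes out the initial scatter
```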
Differences in national income equality around the world as measured by the national Gini coefficient as of 2018. The Gini coefficient is a number between 0 and 100, where 0 corresponds with perfect equality (where everyone has the same income) and 100 corresponds with absolute inequality (where one person has all the income, and everyone else has zero income). There are wide varieties of economic inequality, most notably measured using the distribution of income (the amount of money people are paid) and the distribution of wealth (the amount of wealth people own). Besides economic inequality between countries or states, there are important types of economic inequality between different groups of people. Research suggests that greater inequality hinders economic growth, and that land and human capital inequality reduce growth more than inequality of income. Whereas globalization has reduced global inequality (between nations), it has increased inequality within nations. Research has generally linked economic inequality to political instability, including democratic breakdown and civil conflict.
Share of income of the top 1% for selected developed countries, 1975 to 2015.
In 1820, the ratio between the income of the top and bottom 20 percent of the world's population was three to one. By 1991, it was eighty-six to one. A 2011 study titled "Divided We Stand: Why Inequality Keeps Rising" by the Organisation for Economic Co-operation and Development (OECD) sought to explain the causes for this rising inequality by investigating economic inequality in OECD countries; it concluded that the following factors played a role:
* Changes in the structure of households can play an important role. Single-headed households in OECD countries have risen from an average of 15% in the late 1980s to 20% in the mid-2000s, resulting in higher inequality.
* Assortative mating refers to the phenomenon of people marrying people with a similar background, for example doctors marrying other doctors rather than nurses. The OECD found that 40% of couples where both partners work belonged to the same or neighbouring earnings deciles, compared with 33% some 20 years before.
* In the bottom percentiles, the number of hours worked has decreased.
* The main reason for increasing inequality seems to be the difference between the demand for and supply of skills.
The study made the following conclusions about the level of economic inequality:
* Income inequality in OECD countries is at its highest level for the past half century. The ratio between the bottom 10% and the top 10% has increased from 1:7 to 1:9 in 25 years.
* There are tentative signs of a possible convergence of inequality levels towards a common and higher average level across OECD countries.
* With very few exceptions (France, Japan, and Spain), the wages of the 10% best-paid workers have risen relative to those of the 10% lowest paid.
A 2011 OECD study investigated economic inequality in Argentina, Brazil, China, India, Indonesia, Russia and South Africa. It concluded that key sources of inequality in these countries include "a large, persistent informal sector, widespread regional divides (e.g. urban-rural), gaps in access to education, and barriers to employment and career progression for women."
Countries by total wealth (trillions USD), Credit Suisse.
A study by the World Institute for Development Economics Research at United Nations University reports that the richest 1% of adults alone owned 40% of global assets in the year 2000.
The three richest people in the world possess more financial assets than the lowest 48 nations combined. The combined wealth of the "10 million dollar millionaires" grew to nearly $41 trillion in 2008.Oxfam's 2021 report on global inequality said that the COVID-19 pandemic has increased economic inequality substantially; the wealthiest people across the globe were impacted the least by the pandemic and their fortunes recovered quickest, with billionaires seeing their wealth increase by $3.9 trillion, while at the same time those living on less than $5.50 a day likely increased by 500 million. The report also emphasized that the wealthiest 1% are by far the biggest polluters and main drivers of climate change, and said that government policy should focus on fighting both inequality and climate change simultaneously. According to PolitiFact, the top 400 richest Americans "have more wealth than half of all Americans combined." According to The New York Times on July 22, 2014, the "richest 1 percent in the United States now own more wealth than the bottom 90 percent".Inherited wealth may help explain why many Americans who have become rich may have had a "substantial head start". A 2017 report by the IPS said that three individuals, Jeff Bezos, Bill Gates and Warren Buffett, own as much wealth as the bottom half of the population, or 160 million people, and that the growing disparity between the wealthy and the poor has created a "moral crisis", noting that "we have not witnessed such extreme levels of concentrated wealth and power since the first gilded age a century ago." In 2016, the world's billionaires increased their combined global wealth to a record $6 trillion. In 2017, they increased their collective wealth to 8.9 trillion. In 2018, U.S. income inequality reached the highest level ever recorded by the Census Bureau. The existing data and estimates suggest a large increase in international (and more generally inter-macroregional) components between 1820 and 1960. It might have slightly decreased since that time at the expense of increasing inequality within countries. The United Nations Development Programme in 2014 asserted that greater investments in social security, jobs and laws that protect vulnerable populations are necessary to prevent widening income inequality. There is a significant difference in the measured wealth distribution and the public's understanding of wealth distribution. Michael Norton of the Harvard Business School and Dan Ariely of the Department of Psychology at Duke University found this to be true in their research conducted in 2011. The actual wealth going to the top quintile in 2011 was around 84%, whereas the average amount of wealth that the general public estimated to go to the top quintile was around 58%. According to a 2020 study, global earnings inequality has decreased substantially since 1970. During the 2000s and 2010s, the share of earnings by the world's poorest half doubled. Two researchers claim that global income inequality is decreasing due to strong economic growth in developing countries. According to a January 2020 report by the United Nations Department of Economic and Social Affairs, economic inequality between states had declined, but intra-state inequality has increased for 70% of the world population over the period 1990-2015. In 2015, the OECD reported in 2015 that income inequality is higher than it has ever been within OECD member nations and is at increased levels in many emerging economies. 
According to a June 2015 report by the International Monetary Fund: Widening income inequality is the defining challenge of our time. In advanced economies, the gap between the rich and poor is at its highest level in decades. Inequality trends have been more mixed in emerging markets and developing countries (EMDCs), with some countries experiencing declining inequality, but pervasive inequities in access to education, health care, and finance remain. In October 2017, the IMF warned that inequality within nations, in spite of global inequality falling in recent decades, has risen so sharply that it threatens economic growth and could result in further political polarization. The Fund's Fiscal Monitor report said that "progressive taxation and transfers are key components of efficient fiscal redistribution." In October 2018 Oxfam published a Reducing Inequality Index which measured social spending, tax and workers' rights to show which countries were best at closing the gap between the rich and the poor. Wealth distribution within individual countries The following table shows information about individual wealth distribution in different countries from a 2018 report by Crédit Suisse. The wealth is calculated by various factor, for instance: liabilities, debts, exchange rates and their expected development, real estate prices, human resources, natural resources and technical advancements etc. Median and mean wealth per adult, in US dollars. Countries and subnational areas. Initially in rank order by median wealth. Countries' income inequality according to their most recent reported Gini index values as of 2018. Income inequality is measured by Gini coefficient (expressed in percent %) that is a number between 0 and 1. Here 0 expresses perfect equality, meaning that everyone has the same income, whereas 1 represents perfect inequality, meaning that one person has all the income and others have none. A Gini index value above 50% is considered high; countries including Brazil, Colombia, South Africa, Botswana, and Honduras can be found in this category. A Gini index value of 30% or above is considered medium; countries including Vietnam, Mexico, Poland, the United States, Argentina, Russia and Uruguay can be found in this category. A Gini index value lower than 30% is considered low; countries including Austria, Germany, Denmark, Norway, Slovenia, Sweden, and Ukraine can be found in this category. In the low income inequality category (below 30%) is a wide representation of countries previously being part of Soviet Union or its satellites, like Slovakia, Czech Republic, Ukraine and Hungary. In 2012 the Gini index for income inequality for whole European Union was only 30.6%. Income distribution can differ from wealth distribution within each country. The wealth inequality is also measured in Gini index. There the higher Gini index signify greater inequality within the wealth distribution in country, 0 means total wealth equality and 1 represents situation, where everyone has no wealth, except an individual that has everything. For instance countries like Denmark, Norway and Netherlands, all belonging to the last category (below 30%, low income inequality) also have very high Gini index in wealth distribution, ranging from 70% up to 90%. 
Various proposed causes of economic inequality There are various reasons for economic inequality within societies, including both global market functions (such as trade, development, and regulation) as well as social factors (including gender, race, and education). Recent growth in overall income inequality, at least within the OECD countries, has been driven mostly by increasing inequality in wages and salaries. Economist Thomas Piketty argues that widening economic disparity is an inevitable phenomenon of free marketcapitalism when the rate of return of capital (r) is greater than the rate of growth of the economy (g). A major cause of economic inequality within modern market economies is the determination of wages by the market. Where competition is imperfect; information unevenly distributed; opportunities to acquire education and skills unequal; market failure results. Since many such imperfect conditions exist in virtually every market, there is in fact little presumption that markets are in general efficient. According to Joseph Stiglitz this means that there is an enormous potential role for government to correct such market failures. In his book, The Price of Inequality published in 2012, Stiglitz argues that the economical inequality is inevitable and permanent, because it is caused by the great amount of political power the richest have. "While there may be underlying economic forces at play, politics have shaped the market, and shaped it in ways that advantage the top at the expense of the rest."- The Price of Inequality Thomas Malthus was originally a demographer, but later in his life he focused on studying economy, mainly inequalities across population. In his work he raised questions related to population growth and economy. In his Essay on Principle of Population, published in 1798, Thomas Malthus claims that the population grows at geometrical speed, but the resources can only grow at arithmetical speed. In his theory, also referred to as Malthusianism, he explains that whenever there is spare food or resources, the population will grow faster to fulfill the gap. "The happiness of a country does not depend, absolutely, upon its poverty, or its riches, upon its youth, or its age, upon its being thinly, or fully inhabited, but upon the rapidity with which it is increasing, upon the degree in which the yearly increase of food approaches to the yearly increase of an unrestricted population." - An Essay on the Principle of Population The malthusian argument could be described as "Despite the population getting bigger, the quality of life will not increase". Even with new technologies and more effective ways of providing resources, the population will grow to the size at which the quality is the same as before per capita. This would lead to the point at which there would be not enough food for everyone, cause great famine or war for resources among the people and potentially die out of the whole population. This event is called Malthusian catastrophe and causes reduction of population back to the sustainable level. In his theory, Malthus uses "checks" - terms describing the limiting factors of the population size at any time. He divided them into 2 groups: the preventive checks and the positive checks. A preventive check is a conscious decision to abstain from procreation based on material or spiritual belief, for example a lack of resources or sex abstinence. 
Malthus explained this by his statement that people are perceiving the possible consequences of uncontrolled population growth and so wouldn't knowingly contribute to that. A positive check is, on the other hand, any event that shortens the human life span, for example war, diseases or famine. This also includes poor financial or health situation. The Malthusian catastrophe occurs when the rate of early death is too high in the population Another cause is the rate at which income is taxed coupled with the progressivity of the tax system. A progressive tax is a tax by which the tax rate increases as the taxable base amount increases. In a progressive tax system, the level of the top tax rate will often have a direct impact on the level of inequality within a society, either increasing it or decreasing it, provided that income does not change as a result of the change in tax regime. Additionally, steeper tax progressivity applied to social spending can result in a more equal distribution of income across the board. Tax credits such as the Earned Income Tax Credit in the US can also decrease income inequality. The difference between the Gini index for an income distribution before taxation and the Gini index after taxation is an indicator for the effects of such taxation. Illustration from a 1916 advertisement for a vocational school in the back of a US magazine. Education has been seen as a key to higher income, and this advertisement appealed to Americans' belief in the possibility of self-betterment, as well as threatening the consequences of downward mobility in the great income inequality existing during the Industrial Revolution. An important factor in the creation of inequality is variation in individuals' access to education. Education, especially in an area where there is a high demand for workers, creates high wages for those with this education. However, increases in education first increase and then decrease growth as well as income inequality. As a result, those who are unable to afford an education, or choose not to pursue optional education, generally receive much lower wages. The justification for this is that a lack of education leads directly to lower incomes, and thus lower aggregate saving and investment. Conversely, quality education raises incomes and promotes growth because it helps to unleash the productive potential of the poor. Economic liberalism, deregulation and decline of unions John Schmitt and Ben Zipperer (2006) of the CEPR point to economic liberalism and the reduction of business regulation along with the decline of union membership as one of the causes of economic inequality. In an analysis of the effects of intensive Anglo-American liberal policies in comparison to continental European liberalism, where unions have remained strong, they concluded "The U.S. economic and social model is associated with substantial levels of social exclusion, including high levels of income inequality, high relative and absolute poverty rates, poor and unequal educational outcomes, poor health outcomes, and high rates of crime and incarceration. At the same time, the available evidence provides little support for the view that U.S.-style labor market flexibility dramatically improves labor-market outcomes. Despite popular prejudices to the contrary, the U.S. economy consistently affords a lower level of economic mobility than all the continental European countries for which data is available." 
More recently, the International Monetary Fund has published studies which found that the decline of unionization in many advanced economies and the establishment of neoliberal economics have fueled rising income inequality. The growth in importance of information technology has been credited with increasing income inequality. Technology has been called "the main driver of the recent increases in inequality" by Erik Brynjolfsson, of MIT. In arguing against this explanation, Jonathan Rothwell notes that if technological advancement is measured by high rates of invention, there is a negative correlation between it and inequality. Countries with high invention rates -- "as measured by patent applications filed under the Patent Cooperation Treaty" -- exhibit higher inequality than those with less. In one country, the United States, "salaries of engineers and software developers rarely reach" above $390,000/year (the lower limit for the top 1% earners). Some researchers, such as Juliet B. Schor, highlight the role of for-profit online sharing economy platforms as an accelerator of income inequality and calls into question their supposed contribution in empowering outsiders of the labour market. Taking the example of TaskRabbit, a labour service platform, she shows that a large proportion of providers already have a stable full-time job and participate part-time in the platform as an opportunity to increase their income by diversifying their activities outside employment, which tends to restrict the volume of work remaining for the minority of platform workers. In addition, there is an important phenomenon of labour substitution as manual tasks traditionally performed by workers without a degree (or just a college degree) integrated into the labour market in the traditional economy sectors are now performed by workers with a high level of education, (in 2013, 70% of TaskRabbit's workforce held a Bachelor's degree, 20% a Master's Degree and 5% a PhD. The development of platforms, which are increasingly capturing demand for these manual services at the expense of non-platform companies, may therefore benefit mainly skilled workers who are offered more earning opportunities that can be used as supplemental or transitional work during periods of unemployment. "Elephant curve": Change in real income between 1988 and 2008 at various income percentiles of global income distribution. Trade liberalization may shift economic inequality from a global to a domestic scale. When rich countries trade with poor countries, the low-skilled workers in the rich countries may see reduced wages as a result of the competition, while low-skilled workers in the poor countries may see increased wages. Trade economist Paul Krugman estimates that trade liberalisation has had a measurable effect on the rising inequality in the United States. He attributes this trend to increased trade with poor countries and the fragmentation of the means of production, resulting in low skilled jobs becoming more tradeable. The gender gap in median earnings of full-time employees according to the OECD 2015 In many countries, there is a gender pay gap in favor of males in the labor market. Several factors other than discrimination contribute to this gap. On average, women are more likely than men to consider factors other than pay when looking for work, and may be less willing to travel or relocate.Thomas Sowell, in his book Knowledge and Decisions, claims that this difference is due to women not taking jobs due to marriage or pregnancy. 
A U.S. Census's report stated that in US once other factors are accounted for there is still a difference in earnings between women and men. There is also a globally recognized disparity in the wealth, income, and economic welfare of people of different races. In many nations, data exists to suggest that members of certain racial demographics experience lower wages, fewer opportunities for career and educational advancement, and intergenerational wealth gaps. Studies have uncovered the emergence of what is called "ethnic capital", by which people belonging to a race that has experienced discrimination are born into a disadvantaged family from the beginning and therefore have less resources and opportunities at their disposal. The universal lack of education, technical and cognitive skills, and inheritable wealth within a particular race is often passed down between generations, compounding in effect to make escaping these racialized cycles of poverty increasingly difficult. Additionally, ethnic groups that experience significant disparities are often also minorities, at least in representation though often in number as well, in the nations where they experience the harshest disadvantage. As a result, they are often segregated either by government policy or social stratification, leading to ethnic communities that experience widespread gaps in wealth and aid. As a general rule, races which have been historically and systematically colonized (typically indigenous ethnicities) continue to experience lower levels of financial stability in the present day. The global South is considered to be particularly victimized by this phenomenon, though the exact socioeconomic manifestations change across different regions. Even in economically developed societies with high levels of modernization such as may be found in Western Europe, North America, and Australia, minority ethnic groups and immigrant populations in particular experience financial discrimination. While the progression of civil rights movements and justice reform has improved access to education and other economic opportunities in politically advanced nations, racial income and wealth disparity still prove significant. In the United States for example, a survey[when?] of African-American populations show that they are more likely to drop out of high school and college, are typically employed for fewer hours at lower wages, have lower than average intergenerational wealth, and are more likely to use welfare as young adults than their white counterparts. Mexican-Americans, while suffering less debilitating socioeconomic factors than black Americans, experience deficiencies in the same areas when compared to whites and have not assimilated financially to the level of stability experienced by white Americans as a whole. These experiences are the effects of the measured disparity due to race in countries like the US, where studies show that in comparison to whites, blacks suffer from drastically lower levels of upward mobility, higher levels of downward mobility, and poverty that is more easily transmitted to offspring as a result of the disadvantage stemming from the era of slavery and post-slavery racism that has been passed through racial generations to the present. These are lasting financial inequalities that apply in varying magnitudes to most non-white populations in nations such as the US, the UK, France, Spain, Australia, etc. 
In the countries of the Caribbean, Central America, and South America, many ethnicities continue to deal with the effects fo European colonization, and in general nonwhites tend to be noticeably poorer than whites in this region. In many countries with significant populations of indigenous races and those of Afro-descent (such as Mexico, Colombia, Chile, etc.) income levels can be roughly half as high as those experiences by white demographics, and this inequity is accompanied by systematically unequal access to education, career opportunities, and poverty relief. This region of the world, apart from urbanizing areas like Brazil and Costa Rica, continues to be understudied and often the racial disparity is denied by Latin Americans who consider themselves to be living in post-racial and post-colonial societies far removed from intense social and economic stratification despite the evidence to the contrary. African countries, too, continue to deal with the effects of the Trans-Atlantic Slave Trade, which set back economic development as a whole for blacks of African citizenship more than any other region. The degree to which colonizers stratified their holdings on the continent on the basis of race has had a direct correlation in the magnitude of disparity experienced by nonwhites in the nations that eventually rose from their colonial status. Former French colonies, for example, see much higher rates of income inequality between whites and nonwhites as a result of the rigid hierarchy imposed by the French who lived in Africa at the time. Another example is found in South Africa, which, still reeling from the socioeconomic impacts of Apartheid, experiences some of the highest racial income and wealth inequality in all of Africa. In these and other countries like Nigeria, Zimbabwe, and Sierra Leone, movements of civil reform have initially led to improved access to financial advancement opportunities, but data[when?] shows that for nonwhites this progress is either stalling or erasing itself in the newest generation of blacks that seek education and improved transgenerational wealth. The economic status of one's parents continues to define and predict the financial futures of African and minority ethnic groups.[needs update] Asian regions and countries such as China, the Middle East, and Central Asia have been vastly understudied in terms of racial disparity, but even here the effects of Western colonization provide similar results to those found in other parts of the globe. Additionally, cultural and historical practices such as the caste system in India leave their marks as well. While the disparity is greatly improving in the case of India, there still exists social stratification between peoples of lighter and darker skin tones that cumulatively result in income and wealth inequality, manifesting in many of the same poverty traps seen elsewhere. A Kuznets curve Economist Simon Kuznets argued that levels of economic inequality are in large part the result of stages of development. According to Kuznets, countries with low levels of development have relatively equal distributions of wealth. As a country develops, it acquires more capital, which leads to the owners of this capital having more wealth and income and introducing inequality. Eventually, through various possible redistribution mechanisms such as social welfare programs, more developed countries move back to lower levels of inequality. 
Andranik Tangian argues that the growing productivity due to advanced technologies results in increasing wages' purchase power for most commodities, which enables employers underpay workers in "labor equivalents", maintaining nevertheless an impression of fair pay. This illusion is dismanteled by the wages' decreasing purchase power for the commodities with a significant share of hand labor. This difference between the appropriate and factual pay goes to enterprise owners and top earners, increasing the inequality. As of 2019, Jeff Bezos is the richest person in the world. Wealth concentration is the process by which, under certain conditions, newly created wealth concentrates in the possession of already-wealthy individuals or entities. Accordingly, those who already hold wealth have the means to invest in new sources of creating wealth or to otherwise leverage the accumulation of wealth, and thus they are the beneficiaries of the new wealth. Over time, wealth concentration can significantly contribute to the persistence of inequality within society. Thomas Piketty in his book Capital in the Twenty-First Century argues that the fundamental force for divergence is the usually greater return of capital (r) than economic growth (g), and that larger fortunes generate higher returns. According to a 2020 study by the RAND Corporation, the top 1% of U.S. income earners have taken $50 trillion from the bottom 90% between 1975 and 2018. Economist Joseph Stiglitz argues that rather than explaining concentrations of wealth and income, market forces should serve as a brake on such concentration, which may better be explained by the non-market force known as "rent-seeking". While the market will bid up compensation for rare and desired skills to reward wealth creation, greater productivity, etc., it will also prevent successful entrepreneurs from earning excess profits by fostering competition to cut prices, profits and large compensation. A better explainer of growing inequality, according to Stiglitz, is the use of political power generated by wealth by certain groups to shape government policies financially beneficial to them. This process, known to economists as rent-seeking, brings income not from creation of wealth but from "grabbing a larger share of the wealth that would otherwise have been produced without their effort" Jamie Galbraith argues that countries with larger financial sectors have greater inequality, and the link is not an accident.[why?] A 2019 study published in PNAS found that global warming plays a role in increasing economic inequality between countries, boosting economic growth in developed countries while hampering such growth in developing nations of the Global South. The study says that 25% of gap between the developed world and the developing world can be attributed to global warming. A 2020 report by Oxfam and the Stockholm Environment Institute says that the wealthiest 10% of the global population were responsible for more than half of global carbon dioxide emissions from 1990 to 2015, which increased by 60%. According to a 2020 report by the UNEP, overconsumption by the rich is a significant driver of the climate crisis, and the wealthiest 1% of the world's population are responsible for more than double the greenhouse gas emissions of the poorest 50% combined. Inger Andersen, in the foreword to the report, said "this elite will need to reduce their footprint by a factor of 30 to stay in line with the Paris Agreement targets." 
Countries with a left-leaninglegislature generally have lower levels of inequality. Many factors constrain economic inequality - they may be divided into two classes: government sponsored, and market driven. The relative merits and effectiveness of each approach is a subject of debate. Typical government initiatives to reduce economic inequality include: Public education: increasing the supply of skilled labor and reducing income inequality due to education differentials. Progressive taxation: the rich are taxed proportionally more than the poor, reducing the amount of income inequality in society if the change in taxation does not cause changes in income. Market forces outside of government intervention that can reduce economic inequality include: propensity to spend: with rising wealth & income, a person may spend more. In an extreme example, if one person owned everything, they would immediately need to hire people to maintain their properties, thus reducing the wealth concentration. On the other hand, high-income persons have higher propensity to save. Robin Maialeh then shows that increasing economic wealth decreases propensity to spend and increases propensity to invest which consequently leads to even greater growth rate of already rich agents. Research shows that since 1300, the only periods with significant declines in wealth inequality in Europe were the Black Death and the two World Wars. Historian Walter Scheidel posits that, since the stone age, only extreme violence, catastrophes and upheaval in the form of total war, Communist revolution, pestilence and state collapse have significantly reduced inequality. He has stated that "only all-out thermonuclear war might fundamentally reset the existing distribution of resources" and that "peaceful policy reform may well prove unequal to the growing challenges ahead." A lot of research has been done about the effects of economic inequality on different aspects in society: Health: For long time the higher material living standards lead to longer life, as those people were able to get enough food, water and access to warmth. British researchers Richard G. Wilkinson and Kate Pickett have found higher rates of health and social problems (obesity, mental illness, homicides, teenage births, incarceration, child conflict, drug use) in countries and states with higher inequality. Their research included 24 developed countries, including most of the states from USA, and found that in the more developed countries, such as Finland and Japan, the heath issues are much lower than in states with rather higher inequality rates, such as Utah and New Hampshire. Some studies link a surge in "deaths of despair", suicide, drug overdoses and alcohol related deaths, to widening income inequality. Conversely, other research did not find these effects or concluded that research suffered from issues of confounding variables. Social cohesion: Research has shown an inverse link between income inequality and social cohesion. In more equal societies, people are much more likely to trust each other, measures of social capital (the benefits of goodwill, fellowship, mutual sympathy and social connectedness among groups who make up a social units) suggest greater community involvement. Crime: The cross national research shows that in societies with less economic inequality the homicide rates are consistently lower. A 2016 study finds that interregional inequality increases terrorism. Other research has argued inequality has little effect on crime rates. 
Welfare: Studies have found evidence that in societies where inequality is lower, population-wide satisfaction and happiness tend to be higher. Poverty: Study made by Jared Bernstein and Elise Gould suggest, that the poverty in the USA could has been reduced by the lowering of economic inequality for the past few decades. Debt: Income inequality has been the driving factor in the growing household debt, as high earners bid up the price of real estate and middle income earners go deeper into debt trying to maintain what once was a middle class lifestyle. Economic growth: A 2016 meta-analysis found that "the effect of inequality on growth is negative and more pronounced in less developed countries than in rich countries", though the average impact on growth was not significant. The study also found that wealth inequality is more pernicious to growth than income inequality. Civic participation: Higher income inequality led to less of all forms of social, cultural, and civic participation among the less wealthy. Political instability: Studies indicate that economic inequality leads to greater political instability, including an increased risk of democratic breakdown and civil conflict. Political party responses: One study finds that economic inequality prompts attempts by left-leaning politicians to pursue redistributive policies while right-leaning politicians seek to repress the redistributive policies. Fairness vs. equality According to Christina Starmans et al. (Nature Hum. Beh., 2017), the research literature contains no evidence on people having an aversion on inequality. In all studies analyzed, the subjects preferred fair distributions to equal distributions, in both laboratory and real-world situations. In public, researchers may loosely speak of equality instead of fairness, when referring to studies where fairness happens to coincide with equality, but in many studies fairness is carefully separated from equality and the results are univocal. Already very young children seem to prefer fairness over equality. When people were asked, what would be the wealth of each quintile in their ideal society, they gave a 50-fold sum to the richest quintile than to the poorest quintile. The preference for inequality increases in adolescence, and so do the capabilities to favor fortune, effort and ability in the distribution. Preference for unequal distribution has been developed to the human race possibly because it allows for better co-operation and allows a person to work with a more productive person so that both parties benefit from the co-operation. Inequality is also said to be able to solve the problems of free-riders, cheaters and ill-behaving people, although this is heavily debated. Researches demonstrate that people usually underestimate the level of actual inequality, which is also much higher than their desired level of inequality. In many societies, such as the USSR, the distribution led to protests from wealthier landowners. In the current U.S., many feel that the distribution is unfair in being too unequal. In both cases, the cause is unfairness, not inequality, the researchers conclude. Socialists attribute the vast disparities in wealth to the private ownership of the means of production by a class of owners, creating a situation where a small portion of the population lives off unearnedproperty income by virtue of ownership titles in capital equipment, financial assets and corporate stock. 
By contrast, the vast majority of the population is dependent on income in the form of a wage or salary. In order to rectify this situation, socialists argue that the means of production should be socially owned so that income differentials would be reflective of individual contributions to the social product. Marxian economics attributes rising inequality to job automation and capital deepening within capitalism. The process of job automation conflicts with the capitalist property form and its attendant system of wage labor. In this analysis, capitalist firms increasingly substitute capital equipment for labor inputs (workers) under competitive pressure to reduce costs and maximize profits. Over the long term, this trend increases the organic composition of capital, meaning that less workers are required in proportion to capital inputs, increasing unemployment (the "reserve army of labour"). This process exerts a downward pressure on wages. The substitution of capital equipment for labor (mechanization and automation) raises the productivity of each worker, resulting in a situation of relatively stagnant wages for the working class amidst rising levels of property income for the capitalist class. Meritocracy favors an eventual society where an individual's success is a direct function of his merit, or contribution. Economic inequality would be a natural consequence of the wide range in individual skill, talent and effort in human population. David Landes stated that the progression of Western economic development that led to the Industrial Revolution was facilitated by men advancing through their own merit rather than because of family or political connections. Most modern social liberals, including centrist or left-of-center political groups, believe that the capitalist economic system should be fundamentally preserved, but the status quo regarding the income gap must be reformed. Social liberals favor a capitalist system with active Keynesian macroeconomic policies and progressive taxation (to even out differences in income inequality). Research indicates that people who hold liberal beliefs tend to see greater income inequality as morally wrong. The liberal champions of equality under the law were fully aware of the fact that men are born unequal and that it is precisely their inequality that generates social cooperation and civilization. Equality under the law was in their opinion not designed to correct the inexorable facts of the universe and to make natural inequality disappear. It was, on the contrary, the device to secure for the whole of mankind the maximum of benefits it can derive from it. Henceforth no man-made institutions should prevent a man from attaining that station in which he can best serve his fellow citizens. Robert Nozick argued that government redistributes wealth by force (usually in the form of taxation), and that the ideal moral society would be one where all individuals are free from force. However, Nozick recognized that some modern economic inequalities were the result of forceful taking of property, and a certain amount of redistribution would be justified to compensate for this force but not because of the inequalities themselves.John Rawls argued in A Theory of Justice that inequalities in the distribution of wealth are only justified when they improve society as a whole, including the poorest members. Rawls does not discuss the full implications of his theory of justice. 
Some see Rawls's argument as a justification for capitalism since even the poorest members of society theoretically benefit from increased innovations under capitalism; others believe only a strong welfare state can satisfy Rawls's theory of justice. Classical liberal Milton Friedman believed that if government action is taken in pursuit of economic equality then political freedom would suffer. In a famous quote, he said: A society that puts equality before freedom will get neither. A society that puts freedom before equality will get a high degree of both. Economist Tyler Cowen has argued that though income inequality has increased within nations, globally it has fallen over the 20 years leading up to 2014. He argues that though income inequality may make individual nations worse off, overall, the world has improved as global inequality has been reduced. Social justice arguments Patrick Diamond and Anthony Giddens (professors of Economics and Sociology, respectively) hold that 'pure meritocracy is incoherent because, without redistribution, one generation's successful individuals would become the next generation's embedded caste, hoarding the wealth they had accumulated'. They also state that social justice requires redistribution of high incomes and large concentrations of wealth in a way that spreads it more widely, in order to "recognise the contribution made by all sections of the community to building the nation's wealth." (Patrick Diamond and Anthony Giddens, June 27, 2005, New Statesman) Pope Francis stated in his Evangelii gaudium, that "as long as the problems of the poor are not radically resolved by rejecting the absolute autonomy of markets and financial speculation and by attacking the structural causes of inequality, no solution will be found for the world's problems or, for that matter, to any problems." He later declared that "inequality is the root of social evil." In most western democracies, the desire to eliminate or reduce economic inequality is generally associated with the political left. One practical argument in favor of reduction is the idea that economic inequality reduces social cohesion and increases social unrest, thereby weakening the society. There is evidence that this is true (see inequity aversion) and it is intuitive, at least for small face-to-face groups of people.Alberto Alesina, Rafael Di Tella, and Robert MacCulloch find that inequality negatively affects happiness in Europe but not in the United States. It has also been argued that economic inequality invariably translates to political inequality, which further aggravates the problem. Even in cases where an increase in economic inequality makes nobody economically poorer, an increased inequality of resources is disadvantageous, as increased economic inequality can lead to a power shift due to an increased inequality in the ability to participate in democratic processes. The capabilities approach - sometimes called the human development approach - looks at income inequality and poverty as form of "capability deprivation". Unlike neoliberalism, which "defines well-being as utility maximization", economic growth and income are considered a means to an end rather than the end itself. Its goal is to "wid[en] people's choices and the level of their achieved well-being" through increasing functionings (the things a person values doing), capabilities (the freedom to enjoy functionings) and agency (the ability to pursue valued goals). 
When a person's capabilities are lowered, they are in some way deprived of earning as much income as they would otherwise. An old, ill man cannot earn as much as a healthy young man; gender roles and customs may prevent a woman from receiving an education or working outside the home. There may be an epidemic that causes widespread panic, or there could be rampant violence in the area that prevents people from going to work for fear of their lives. As a result, income inequality increases, and it becomes more difficult to reduce the gap without additional aid. To prevent such inequality, this approach believes it is important to have political freedom, economic facilities, social opportunities, transparency guarantees, and protective security to ensure that people aren't denied their functionings, capabilities, and agency and can thus work towards a better relevant income. Policy responses intended to mitigate No business which depends for existence on paying less than living wages to its workers has any right to continue in this country. The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs....America's federal minimum wage, at 38% of median income, is one of the rich world's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage." General limitations on and taxation of rent-seeking are popular across the political spectrum. A 2017 study in the Journal of Political Economy by Daron Acemoglu, James Robinson and Thierry Verdier argues that American "cutthroat" capitalism and inequality gives rise to technology and innovation that more "cuddly" forms of capitalism cannot. As a result, "the diversity of institutions we observe among relatively advanced countries, ranging from greater inequality and risk-taking in the United States to the more egalitarian societies supported by a strong safety net in Scandinavia, rather than reflecting differences in fundamentals between the citizens of these societies, may emerge as a mutually self-reinforcing world equilibrium. If so, in this equilibrium, 'we cannot all be like the Scandinavians,' because Scandinavian capitalism depends in part on the knowledge spillovers created by the more cutthroat American capitalism." A 2012 working paper by the same authors, making similar arguments, was challenged by Lane Kenworthy, who posited that, among other things, the Nordic countries are consistently ranked as some of the world's most innovative countries by the World Economic Forum's Global Competitiveness Index, with Sweden ranking as the most innovative nation, followed by Finland, for 2012-2013; the U.S. ranked sixth. There are however global initiative like the United Nations Sustainable Development Goal 10 which aims to garner international efforts in reducing economic inequality considerably by 2030. ^Neves, Pedro Cunha; Afonso, Óscar; Silva, Sandra Tavares (2016). "A Meta-Analytic Reassessment of the Effects of Inequality on Growth". World Development. 78: 386-400. doi:10.1016/j.worlddev.2015.10.038. Summary - This paper develops a meta-analysis of the empirical literature that estimates the effect of inequality on growth. It covers studies published in scientific journals during 1994-2014 that examine the impact on growth of inequality in income, land, and human capital distribution. 
We find traces of publication bias in this literature, as authors and journals are more willing to report and publish statistically significant findings, and the results tend to follow a predictable time pattern over time according to which negative and positive effects are cyclically reported. After correcting for these two forms of publication bias, we conclude that the high degree of heterogeneity of the reported effect sizes is explained by study conditions, namely the structure of the data, the type of countries included in the sample, the inclusion of regional dummies, the concept of inequality and the definition of income. In particular, our meta-regression analysis suggests that: cross-section studies systematically report a stronger negative impact than panel data studies; the effect of inequality on growth is negative and more pronounced in less developed countries than in rich countries; the inclusion of regional dummies in the growth regression of the primary studies considerably weakens such effect; expenditure and gross income inequality tend to lead to different estimates of the effect size; land and human inequality are more pernicious to subsequent growth than income inequality is. We also find that the estimation technique, the quality of data on income distribution, and the specification of the growth regression do not significantly influence the estimation of the effect sizes. These results provide new insights into the nature of the inequality-growth relationship and offer important guidelines for policy makers. ^Novotný, Josef (2007). "On the measurement of regional inequality: Does spatial dimension of income inequality matter?". The Annals of Regional Science. 41 (3): 563-80. doi:10.1007/s00168-007-0113-y. S2CID51753883. ^Hatch, Megan E.; Rigby, Elizabeth (2015). "Laboratories of (In)equality? Redistributive Policy and Income Inequality in the American States". Policy Studies Journal. 43 (2): 163-187. doi:10.1111/psj.12094. ^Antony, Jürgen, and Torben Klarl. "Estimating the income inequality-health relationship for the United States between 1941 and 2015: Will the relevant frequencies please stand up?." The Journal of the Economics of Ageing 17 (2020): 100275. ^Neapolitan, Jerome L (1999). "A comparative analysis of nations with low and high levels of violent crime". Journal of Criminal Justice. 27 (3): 259-74. doi:10.1016/S0047-2352(98)00064-6. ^Ezcurra, Roberto; Palacios, David (2016). "Terrorism and spatial disparities: Does interregional inequality matter?". European Journal of Political Economy. 42: 60-74. doi:10.1016/j.ejpoleco.2016.01.004. ^Kang, Songman (2015). "Inequality and crime revisited: Effects of local inequality and economic segregation on crime". Journal of Population Economics. 29 (2): 593-626. doi:10.1007/s00148-015-0579-3. S2CID155852321. ^Corvalana, Alejandro, and Matteo Pazzonab. "Does Inequality Really Increase Crime? Theory and Evidence." In Technical Report. 2019. ^The Way ForwardArchived July 11, 2012, at archive.today By Daniel Alpert, Westwood Capital; Robert Hockett, Professor of Law, Cornell University; and Nouriel Roubini, Professor of Economics, New York University, New America Foundation, October 10, 2011 Landes, David. S. (1969). The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. Cambridge, New York: Press Syndicate of the University of Cambridge. ISBN978-0-521-09418-4. ^O'Donnell, Michael, and Serena Chen. 
"Political Ideology, the Moralizing of Income Inequality, and Its Social Consequences." Available at SSRN 3253666 (2018). ^The relation between economic inequality and political inequality is explained by Robert Alan Dahl in the chapters The Presence of a Market Economy (p. 63), The Distribution of Political Resources (p. 84) und Market Capitalism and Human Dispositions (p. 87) in On Political Equality, 2006, 120 pages, Yale University Press, ISBN978-0-300-12687-7 ^ abAmartya Sen (1999). "Poverty as Capability Deprivation". Development as Freedom. New York: Anchor Books. ^, UNDP (1990) Human Deuelopment Report, Oxford University Press, New York ^Deneulin, Séverine; Alkire, Sabina (2009), "The human development and capability approach", in Deneulin, Séverine; Shahani, Lila (eds.), An introduction to the human development and capability approach freedom and agency, Sterling, Virginia Ottawa, Ontario: Earthscan International Development Research Centre, pp. 22-48, ISBN9781844078066 Deneulin, Séverine; Shahani, Lila (2009). An introduction to the human development and capability approach freedom and agency. Sterling, Virginia Ottawa, Ontario: Earthscan International Development Research Centre. ISBN9781844078066. Gilens, Martin (2012). Affluence and influence: Economic inequality and political power in America. Princeton, New Jersey New York: Princeton University Press Russell Sage Foundation. ISBN9780691162423. Sen, Amartya; Foster, James E. (1997). On economic inequality. Radcliffe Lectures. Oxford New York: Clarendon Press Oxford University Press. ISBN9780198281931. von Braun, Joachim; Diaz-Bonilla, Eugenio (2008). Globalization of food, and agriculture, and the poor. New Delhi Washington D.C: Oxford University Press International Food Policy Research Institute. ISBN9780195695281. Rivera Vicencio, Eduardo, "Inequality,Precariousness and Social Costs of Capitalism. In the Era of Corporate Governmentality" International Journal of Critical Accounting (IJCA), Vol 11, Nº1, pp. 40-70. . Ahamed, Liaquat, "Widening Gyre: The rise and fall and rise of economic inequality", The New Yorker, September 2, 2019, pp. 26-29. "[T]here seems to [be] some sort of cap on inequality - a limit to the economic divisions a country can ultimately cope with." (p. 28.) Hatch, Megan E.; Rigby, Elizabeth (2015). "Laboratories of (In)equality? Redistributive Policy and Income Inequality in the American States". Policy Studies Journal. 43 (2): 163-187. doi:10.1111/psj.12094. Seguino, Stephanie (2000). "Gender Inequality and Economic Growth: A Cross-Country Analysis". World Development. 28 (7): 1211-30. doi:10.1016/S0305-750X(00)00018-8. Smeeding, Timothy M.; Thompson, Jeffrey P. (2011). "Recent Trends in Income Inequality". In Immervoll, Herwig; Peichl, Andreas; Tatsiramos, Konstantinos (eds.). Who Loses in the Downturn? Economic Crisis, Employment and Income Distribution. Research in Labor Economics. 32. pp. 1-50. doi:10.1108/S0147-9121(2011)0000032004. ISBN978-0-85724-749-0. Crayen, Dorothee, and Joerg Baten. "New evidence and new methods to measure human capital inequality before and during the industrial revolution: France and the US in the seventeenth to nineteenth centuries." Economic History Review 63.2 (2010): 452-478. online Hoffman, Philip T., et al. "Real inequality in Europe since 1500." Journal of Economic History 62.2 (2002): 322-355. online Morrisson, Christian, and Wayne Snyder. "The income inequality of France in historical perspective." European Review of Economic History 4.1 (2000): 59-83. 
online Lindert, Peter H., and Steven Nafziger. "Russian inequality on the eve of revolution." Journal of Economic History 74.3 (2014): 767-798. online Nicolini, Esteban A.; Ramos Palencia, Fernando (2016). "Decomposing income inequality in a backward pre-industrial economy: Old Castile (Spain) in the middle of the eighteenth century". Economic History Review. 69 (3): 747-772. doi:10.1111/ehr.12122. S2CID154988112. Piketty, Thomas, and Emmanuel Saez. "The evolution of top incomes: a historical and international perspective." American economic review 96.2 (2006): 200-205. online Piketty, Thomas, and Emmanuel Saez. "Income inequality in the United States, 1913-1998." Quarterly journal of economics 118.1 (2003): 1-41. online Saito, Osamu. "Growth and inequality in the great and little divergence debate: a Japanese perspective." Economic History Review 68.2 (2015): 399-419. Covers 1600-1868 with comparison to Stuart England and Mughal India. Stewart, Frances. "Changing perspectives on inequality and development." Studies in Comparative International Development 51.1 (2016): 60-80. covers 1801 to 2016. Sutch, Richard. "The One Percent across Two Centuries: A Replication of Thomas Piketty's Data on the Concentration of Wealth in the United States." Social Science History 41.4 (2017): 587-613. Strongly rejects all Piketty's estimates for US inequality before 1910 for both top 1% and top 10%. online Van Zanden, Jan Luiten. "Tracing the beginning of the Kuznets curve: Western Europe during the early modern period." Economic History Review 48.4 (1995): 643-664. covers 1400 to 1800. Wei, Yehua Dennis. "Geography of inequality in Asia." Geographical Review 107.2 (2017): 263-275. covers 1981 to 2015.
National income is the value of the aggregate output of the different sectors during a certain time period. In other words, it is the flow of goods and services produced in an economy in a particular year. Thus, the measurement of National Income becomes important. Measurement of National Income There are three ways of measuring the National Income of a country. They are from the income side, the output side and the expenditure side. Thus, we can classify these perspectives into the following methods of measurement of National Income. Methods of Measuring National Income 1. Product Method Under this method, we add the values of output produced or services rendered by the different sectors of the economy during the year in order to calculate the National Income. In this method, we include only the value added by each firm in the production process in the output figure. Hence, we use the value-added method. The value-added output of all the sectors of the economy is the GNP at factor cost. However, this method is unscientific as it adds the value of only those goods and services that are sold in the market or are available for sale in the market Browse more Topics under National Income - The concept of National Income - The concept of Consumption, Saving, and Investment - Economic Growth - Economic Fluctuations 2. Income Method Under this method, we add all the incomes from employment and ownership of assets before taxation received from all the production activities in an economy. Thus, it is also the Factor Income method. We also need to add the undistributed profits of the private sector and the trading surplus of the public sector corporations. However, we need to exclude items not arising from productive activities such as sickness benefits, interest on the national debt, etc. 3. Expenditure Method This method measures the total domestic expenditure of the economy. It consists of two elements, viz. Consumption expenditure and Investment expenditure. Consumption expenditure includes consumption expenditure of the household sector on goods and services and consumption outlays of the business sector and public authorities. Investment expenditure refers to the expenditure on the making of fixed capital such as Plant and Machinery, buildings, etc. Learn more about Income Determination here in detail. Difficulties in Measurement of National Income Following are the difficulties in estimating the National Income - Conceptual difficulties - Statistical difficulties A. Conceptual difficulties - It is difficult to calculate the value of some of the items such as services rendered for free and goods that are to be sold but are used for self-consumption. - Sometimes, it becomes difficult to make a clear distinction between primary, intermediate and final goods. - What price to choose to determine the monetary value of a National Product is always a difficult question? - Whether to include the income of the foreign companies in the National Income or not because they emit a major part of their income outside India? B. Statistical difficulties - In case of changes in the price level, we need to use the Index numbers which have their own inherent limitations. - Statistical figures are not always accurate as they are based on the sample surveys. Also, all the data are not often available. - All the countries have different methods of estimating National Income. Thus, it is not easily comparable. Questions on National Income What is the usefulness of estimating the National Income? 
The usefulness of estimating National Income is as follows:
- It depicts changes in production and output, as well as the effects of government policies on the economy.
- National Income accounts trace the relation between the input of one industry and the output of another.
- It shows the distribution of income among different economic units.
- It also reveals changes in the tastes and preferences of consumers, helping producers decide what to produce and for whom to produce.
- The size of a country's National Income indicates its ability to pay its share for international purposes, such as membership of the IMF, the World Bank, or SAARC.
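To make the three measurement approaches concrete, here is a minimal sketch in Python for a toy economy. All figures and variable names are hypothetical and purely illustrative, not data from any real economy; in practice the three approaches should give the same total.

```python
# Toy illustration of the three approaches to measuring national income.
# All figures are hypothetical and expressed in the same currency units.

# 1. Product (value-added) method: sum the value added at each stage.
farm_output = 100      # wheat sold to the miller
miller_output = 160    # flour sold to the baker
baker_output = 250     # bread sold to households

value_added = farm_output + (miller_output - farm_output) + (baker_output - miller_output)

# 2. Income method: sum the factor incomes generated in production.
wages, rent, interest, profits = 140, 30, 20, 60
factor_income = wages + rent + interest + profits

# 3. Expenditure method: sum final consumption and investment spending.
consumption, investment = 200, 50
expenditure = consumption + investment

print(value_added, factor_income, expenditure)  # all three equal 250
```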
NASA is actively planning to expand human spaceflight and robotic exploration beyond low Earth orbit. To meet this challenge, a capability-driven architecture will be developed to transport explorers to multiple destinations, each with its own unique space environment. Future destinations may include the moon, near-Earth asteroids, and Mars and its moons. NASA is preparing to explore these destinations by first conducting analog missions here on Earth. Analog missions are remote field tests in locations chosen for their physical similarities to the extreme space environments of a target mission. NASA engineers and scientists work with representatives from other government agencies, academia, and industry to gather requirements and develop the technologies necessary to ensure an efficient, effective, and sustainable future for human space exploration. Analog teams test robotic equipment, vehicles, habitats, communications, and power generation and storage. They evaluate mobility, infrastructure, and effectiveness in these harsh environments. Analogs provide NASA with data about the strengths, limitations, and validity of planned human-robotic exploration operations, and help define ways to combine human and robotic efforts to enhance scientific exploration. Test locations include the Antarctic, oceans, deserts, and arctic and volcanic environments. Analog missions and field tests include:

NASA's Extreme Environment Mission Operations (NEEMO)
The National Oceanic and Atmospheric Administration's Aquarius Undersea Laboratory is the analog test site for NASA's Extreme Environment Mission Operations, or NEEMO. NASA uses the laboratory's underwater environment to execute a range of analog "spacewalks," or extravehicular activities, and to assess equipment for exploration concepts in advanced navigation and communication. Long-duration NEEMO missions give astronauts a realistic approximation of situations they will likely encounter on missions in space and an understanding of how to carry out daily operations in a simulated planetary environment. (Caption: Two NEEMO 13 crewmembers participate in an undersea session of extravehicular activity.)

Inflatable Lunar Habitat
NASA conducted a range of analog tests to evaluate the inflatable lunar habitat; astronauts may one day live on the moon in something similar to this conceptual housing structure. The tests were conducted at McMurdo Station in the cold, isolated landscape of Antarctica to provide information about the structure's power consumption and resilience. The analog test was also used to evaluate how easily a suited astronaut could assemble, pack, and transport the habitat. If selected for future missions, the structure would reduce the amount of hardware and fuel needed for transportation and logistics on the moon.

Moses Lake, WA: Short Distance Mobility Exploration Engineering Evaluation Field Tests, Phase 1
In June 2008, astronauts, engineers, and scientists gathered at Moses Lake, Washington, to test spacesuits and rovers. The Short Distance Mobility Exploration Engineering Evaluation analog field tests were designed to measure the benefits of using pressurized versus unpressurized vehicles and to incorporate the findings into upcoming lunar missions. Moses Lake provides a lunar-like environment of sand dunes, rugged terrain, soil inconsistencies, sandstorms, and temperature swings.
Here, NASA tested a newly enhanced extravehicular activity suit and a new line of robotic rovers: the ATHLETE rover, K10, Lunar Truck, Lance Blade, and Lunar Manipulator.

Black Point Lava Flow, AZ: Short Distance Mobility Exploration Engineering Evaluation Field Tests, Phase 2
The terrain and size of Black Point Lava Flow provide an environment geologically similar to the lunar surface. It is here that NASA first introduced the Small Pressurized Rover, a conceptual vehicle with extended range and the capability to travel rugged planetary terrain. The Black Point landscape enables small pressurized rovers to undertake sorties with ranges extending beyond 10 kilometers; the sortie tests include a three-day exploration mission.

In-Situ Resource Utilization Demonstrations
The volcanic terrain, rock distribution, and soil composition of Hawaii's islands provide an ideal simulated environment for testing hardware and operations. NASA performs analogs to identify processes that use hardware or operations to harness local (in-situ) resources for human and robotic exploration. The demonstrations could help reduce risk to lunar missions by proving technologies for end-to-end oxygen extraction, separation, and storage from volcanic material, as well as other technologies that could be used to look for water or ice at the lunar poles. (Caption: Tested in Hawaii in November 2008, ROxygen could produce two-thirds of the oxygen needed to sustain a crew of four on the moon.)

Devon Island, Nunavut, Canada: Haughton Mars Project
The rocky arctic desert setting, geological features, and biological attributes of Haughton Crater, one of Canada's uninhabited treasures, provide NASA with an optimal setting to assess requirements for possible future robotic and human missions to Mars. During the Haughton Mars Project, scientists, engineers, and astronauts perform multiple representative lunar science and exploration surface activities using existing field infrastructure and surface assets. They demonstrate scientific and operational concepts, including extravehicular activity traverses, long-term high-data-rate communication, complex robotic interaction, and onboard rover and suit engineering.
The surface area of a cylinder can be derived from the area of a rectangle. If you 'unroll' the curved side of a cylinder you get a rectangle, similar to a sheet of paper. The width of the rectangle is the height of the cylinder, and the length of the rectangle is the circumference of the cylinder's end. So the curved (lateral) surface area = length * width, where width = height of the cylinder and length = circumference of the end = pi * (diameter of the cylinder). Therefore, the curved surface area of a cylinder = pi * (diameter) * (height), or equivalently 2 * pi * radius * height. The curved surface area covers only the side of the cylinder; the total surface area includes the two circular ends as well.

The base of a cylinder is the circle on the bottom, so its area is the area of a circle: pi * radius^2. The total surface area of a closed cylinder (for example, a solid rod) is the area of the top plus the area of the bottom plus the area of the cylinder wall: A = 2*pi*r^2 + 2*pi*r*h = 2*pi*r*(r + h), where r is the radius and h is the height. The 2*pi*r^2 term is the area of the top and bottom, and the 2*pi*r*h term is the area of the curved part. To find the surface area of an unclosed (open-ended) cylinder: find the surface area of the whole cylinder, find the area of one of the two circles on either end, multiply that circle's area by two, and subtract it from the total surface area.

The volume of a cylinder is the cross-sectional area of the cylinder multiplied by its length. The perpendicular cross-section of a cylinder is a circle, so the volume equals the area of the base multiplied by the height of the cylinder: V = pi * r^2 * h. The volume is proportional to the cross-sectional area because the area of a circle is pi * r^2.
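As a quick check on these formulas, here is a minimal Python sketch; the function name and example dimensions are just illustrative.

```python
import math

def cylinder_measures(radius, height):
    """Return (lateral area, base area, total surface area, volume) of a cylinder."""
    lateral = 2 * math.pi * radius * height   # curved side: circumference * height
    base = math.pi * radius ** 2              # one circular end
    total = lateral + 2 * base                # side plus both ends, 2*pi*r*(r + h)
    volume = base * height                    # cross-sectional area * height
    return lateral, base, total, volume

# Example: a cylinder with radius 3 and height 5 (arbitrary units)
lateral, base, total, volume = cylinder_measures(3, 5)
print(f"lateral={lateral:.2f} base={base:.2f} total={total:.2f} volume={volume:.2f}")
```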
This class explores the essential qualities of computers: how they work and what they can and cannot do. It requires no computer background at all. In this first section we'll look at the basic features of computers and get started playing with computer code. Acknowledgements: thanks to Google for supporting my early research that has helped create this class. Thanks to Mark Guzdial, who popularized the idea of using digital media to introduce computers.

Fundamental Equation of Computers
The fundamental equation of computers is: Computer = Powerful + Stupid
Powerful
-Look through masses of data
-Billions of "operations" per second
Stupid
-Operations are simple and mechanical
-Nothing like "insight" or "understanding" (HAL 9000 video)
Powerful + Stupid ... vividly experienced in our exercises
Stupid, but very useful. How does that work? That's what CS101 is about
-Visit this funny computer world, see how it works
-Understand what computers can do and how they are made useful
-Not intimidated: the computer is not some magic box
Hidden agenda: open eyes for some students to more computer science courses

Computers are very powerful, looking through large amounts of data quickly. Computers can literally perform billions of operations per second. However, the individual "operations" that computers can perform are extremely simple and mechanical, nothing like a human thought or insight. A typical operation in the language of computers is adding two numbers together. So although computers are fast at what they do, the operations they can do are extremely rigid, simple, and mechanical. The computer lacks anything like real insight. Or put another way, computers are not like the HAL 9000 from the movie 2001: A Space Odyssey (HAL 9000 video). If nothing else, you should not be intimidated by the computer as if it were some sort of brain. The computer is a mechanical tool which can do amazing things, but it requires a human to tell it what to do.

High Level - How Does a Computer Work?
Computer is driven by "code" instructions
Instructions are simple and mechanical, e.g. add 2 numbers
The computer "runs" a long series of instructions
This is not insight, purely mechanical
Question: if the computer is so mechanical...

But So Many Useful Features
Think of all the useful computer features (phone, camera)
--Email, instant messaging
If computers are so stupid... how are they so useful?
What connects the two sides?

Programmers Make It Happen
Human programmer thinks of a useful feature
--Creativity, insight about problems, computers
Programmer thinks through the solution
"Algorithm" -- the steps to accomplish it
Breaking it down, writing code for the computer
This is computer programming
Every useful feature you've ever used has this pattern
Best features of both sides: inexpensive/fast + creative insight
CS101 explorations: code and algorithm

Since the computer is totally mechanical and stupid, how does it manage to do so many useful things? The gap between the computer and doing something useful is where the human programmer creates solutions. Programming is about a person using their insight about what would be useful and how it could be done, and breaking the steps down into code the computer can follow. Code refers to the language the computer can understand. For these lectures, we'll write and run short snippets of code to understand the essential qualities of computers, and especially their strengths and limitations. Experimenting with code, the nature of computers will come through very clearly ...
powerful in their own way, but with a limited, mechanical quality. IMHO, this mixed nature of computers is something everyone should understand, in order to use them well and to not be intimidated by them.

Before Coding - Patience
We'll start with some simple coding below
First code examples are not flashy
Code is like lego bricks...
-Individual pieces are super simple
-Eventually build up great combinations
But we have to start small
Within a few hours of lecture, we'll be doing special effects with images such as the following: But for now we just have print()!

Patience
We're going to start by learning a few programming elements, and later we'll recombine these few elements to solve many problems. These first elements are simple, so they are not much to look at on their own. Be patient; soon we'll put these elements together -- like lego bricks -- to make pretty neat projects.
Our code phrases are small...
-Just big enough to experiment with key ideas
-Not full, professional programs
-But big enough to show the real issues and bugs of coding

1. First Code Example - Print
Here is code which calls the "print" function. Click the Run button below, and your computer will run this code, and the output of the code will appear to the right.
Run executes each line once, running from top to bottom
print is a function -- like a verb in the code
Numbers within the parentheses ( ... ) are passed in to the print function
Multiple values are separated by commas
Experiments: change the code and run it after each change to see the new output
-Change a number
-Add more numbers separated by commas inside the print(...)
-Copy the first line and paste it in twice after the last line
-I promise the output will get more interesting!
Syntax: the code is not free form
-Allowed syntax is strict and narrow
-A reflection of the inner, mechanical nature of the computer
-Don't be put off - "When in Rome..."
-We're visiting the world of the computer

2. Print String
Thus far we have numbers, e.g. 6
A string is a sequence of letters written within quotes, to be used as data within the code
-Strings work with the print function, in addition to numbers
-Strings in the computer store text, such as urls or the text of paragraphs, etc.
A comment begins with // and extends through the end of the line. It is a way to write notes about the code, ignored by the computer.
Experiments:
-Edit the text within a string
-Add more strings separated by commas
-Add the string "print" - the inside of a string is just data, not treated as code
Code = instructions that are run
Data = numbers, strings, handled by the code
Note that print is recognized as a function in the code vs. the "hello" string, which is just passive data (like verbs and nouns)
The computer ignores the comments, so they are just a way for you to write notes to yourself about what the code is doing. Comments can also be used to temporarily remove a line of code -- "commenting out" the code -- by placing a "//" at the start of that line.

Thinking About Syntax and Errors (today's key message!)
Syntax -- code is structured for the computer
Very common error -- type in code with a slight syntax problem
Professional programmers make that sort of "error" all the time
Fortunately, very easy to fix ...
don't worry about it
Not a reflection of some flaw in the author
Just the nature of typing ideas into the mechanical computer language
Beginners can be derailed by the syntax step, thinking they are making some big error
Exercise to inoculate you all: a bunch of typical syntax errors + fixing them
Fixing these little errors is a small, normal step

Syntax
The syntax shown above must be rigidly followed or the code will not work: function name, parentheses, each string with opening and closing quotes, commas separating the values for a function call. The rigidity of the syntax is a reflection of the limitations of computers: their natural language is fixed and mechanical. This is important to absorb when working with computers, and I think this is where many people get derailed getting started with computers. You write something that any human could understand, but the computer can only understand code that fits its mechanical syntax. When writing for the computer, it is very common to make trivial, superficial syntax mistakes in the code. The most expert programmers on earth make that sort of error all the time, and think nothing of it. The syntax errors do not reflect some flawed strategy by the author; they are just a natural step in translating our thoughts into the more mechanical language of the computer. As we'll see below, fixing these errors is very fast. It's important not to be derailed by these little superficial errors. To help teach you the patterns, below we have many examples showing typical errors, so you can see what the error messages look like and how to fix them. For each snippet of code below, what is the error? Sometimes the error message generated by the computer points to the problem accurately, but sometimes the error message just reveals that the error has so deeply confused the computer that it cannot create an accurate error message. Firefox currently produces the most helpful error messages, often pointing to the specific line with problems.

Syntax Error Examples
These syntax problems are quick to fix. Change the code below so that, when run, it produces the following output:
1 2 buckle
3 4 knock
For the example problems shown in lecture, the solutions are available as shown below, so you can revisit the problem, practice with it, and still see the solution if you like.
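As a concrete illustration of the exercise above, here is a minimal sketch in standard Python. The lecture's own environment uses the same print(...) call, although it marks comments with // rather than Python's #; this adaptation is mine, not the course's own code. The two calls below produce the requested output.

```python
# Each print() call writes one line of output.
# Values inside the parentheses are separated by commas.
print(1, 2, "buckle")   # -> 1 2 buckle
print(3, 4, "knock")    # -> 3 4 knock
```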
Virtual private networks, or VPNs, allow you to use public networks, such as the Internet, as your own private, secure network connection. Many companies use VPNs to connect branch offices to headquarters via the Internet. VPNs rely on data encapsulation and encryption to work and provide reliable, secure connectivity options for remote access. Understanding how a VPN works requires you to first understand the basic nature of modern networking. Networks use layered protocols, called stacks, to perform various functions. Users interact most directly with the application layer, which is located at the top of the network stack. A web browser, for example, uses the application-level protocol HTTP. The collection of wires and electrical signals that form a network exists at the bottom layer of the stack. In between the high- and low-level protocols are midlevel protocols that package data for delivery to specific machines and make sure the data arrives safely at its destination. When your web browser transmits an HTTP request, your computer's IP stack packages, or encapsulates, that request in a packet that uses the TCP protocol. The TCP packet is then encapsulated within a lower-level IP packet, then again within an Ethernet packet. The Ethernet packet contains the information necessary for the data to be translated into electrical signals and placed onto the network. The IP and TCP protocols contain information necessary for routers to get the data to the correct destination. Once the packet arrives at its destination, its various layers are stripped away by the recipient's network stack, revealing the original HTTP request. Figure 14-7 shows a logical diagram of how encapsulation works. VPN protocols step in before packets are handed off to your computer's network hardware. They encapsulate the data one more time, within a VPN packet. In fact, your computer often sees a VPN as a "virtual network adapter" and passes packets to that "adapter" for final transmission. The VPN adapter encapsulates the data and passes it to your computer's real network interface card (NIC), which places the packet onto the network. Figure 14-8 shows the virtual adapter in action. Once the data arrives at its destination, the recipient's network adapter passes the data to another virtual network adapter, which strips off the VPN packet information and passes the remaining data to the computer's TCP/IP stack, where the data is processed normally. Because your computer treats the virtual adapter as a real network adapter, your computer thinks that it has established a private, point-to-point connection with the computer on the other end of the VPN. In effect, a virtual tunnel exists between the two computers. All data entering the tunnel is packaged by the VPN protocol and sent directly to the other end of the tunnel, where the data can be unpackaged and read. There are two common VPN protocols, which are supported by Windows Server 2003. PPTP was originally created by Microsoft and introduced in Windows NT 4.0's Routing and Remote Access Services. PPTP encrypts the data it encapsulates, but it does not encrypt the VPN's header data. That means an eavesdropper can detect that a PPTP tunnel is in use and can identify the packets "contained" within the tunnel. However, the eavesdropper would still have to break the decryption on the tunnel's contents to read the data moving through the tunnel. PPTP is widely supported within the Windows product line all the way back to Windows 95. 
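To make the layering idea concrete, here is a toy Python sketch of encapsulation. The header labels and their contents are simplified placeholders, not the real TCP/IP, Ethernet, or PPTP wire formats; the point is only the order in which wrappers are added and removed.

```python
def encapsulate(payload: bytes, header: bytes) -> bytes:
    """Wrap a payload in one more protocol layer by prepending a header."""
    return header + payload

# An application-layer message (e.g. an HTTP request) ...
http_request = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"

# ... gets wrapped in TCP, then IP, on a normal network stack.
tcp_segment = encapsulate(http_request, b"[TCP hdr]")
ip_packet = encapsulate(tcp_segment, b"[IP hdr]")

# With a VPN, the virtual adapter adds one more wrapper (the tunnel header,
# encrypted in practice) before the real NIC adds the Ethernet framing.
vpn_packet = encapsulate(ip_packet, b"[VPN hdr]")
ethernet_frame = encapsulate(vpn_packet, b"[Eth hdr]")

print(ethernet_frame)
# The receiving end strips the layers in reverse order to recover the request.
```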
L2TP is the newest VPN protocol and provides only the tunneling aspect of a VPN, not encryption. However, L2TP is usually used in conjunction with IPSec, which encrypts the entire L2TP packet. A primary advantage of L2TP over PPTP is that eavesdroppers cannot tell that a VPN is in use, because IPSec encrypts even the L2TP header information. L2TP also enjoys wider industry support outside of Microsoft. Within the Microsoft product line, L2TP is supported natively on Windows 2000, Windows XP, and Windows Server 2003. VPNs are inherently rather secure end-to-end connections, simply because of the way they work. However, the way you build a VPN solution into your network can enhance the security of your overall network as well. Here are some tips: Use remote access policies to restrict the users who can use the VPN, just as you would restrict users who can dial in to your network. VPN connections can be authenticated through RADIUS, allowing you to use IAS for centralized policy management, even if you aren't using Windows-based VPN servers. Instructions for restricting the users of VPNs are provided later in this chapter. Use L2TP VPNs whenever possible, since they encrypt more of the packets passing through the tunnel. They also provide much stronger encryption than earlier VPN security models. Where your VPN server is placed on your network is an important security consideration. One technique is to place the VPN server behind your firewall, as shown in Figure 14-9. That placement works only under a small number of circumstances, though. Depending on the age and patch level of your clients, they may not be able to access the VPN server through the firewall. You will, however, need to configure the firewall to allow the ports and protocols required for RAS. PPTP uses TCP port 1723 and IP Protocol 47. L2TP uses UDP port 1701. If you're using IPSec with L2TP, you must allow IP Protocols 50 and 51 and UDP port 500. More commonly, administrators place their VPN server directly on the Internet, as shown in Figure 14-10. This placement avoids the problems caused by firewalls, but makes your VPN server a target for attackers, who will try to access your corporate network by exploiting security vulnerabilities in the VPN server itself. If the VPN server runs Windows Server 2003, you can lock it down by enabling VPN filters. As shown in Figure 14-11, you can configure the network interface connected to the Internet so that it drops all packets unrelated to the VPN protocol in use (in this example, PPTP, which uses a destination TCP port of 1723). These settings are available as properties of the RAS interface.
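Summarizing the port and protocol requirements just listed, here is a small Python sketch that could drive a firewall-rule checklist. The data structure and function are purely illustrative, not a real firewall API; the protocol names in parentheses (GRE, ESP, AH, IKE) are the standard assignments for those numbers.

```python
# Ports and protocols the firewall must pass for each VPN type (from the text above).
VPN_FIREWALL_RULES = {
    "PPTP": ["TCP port 1723", "IP protocol 47 (GRE)"],
    "L2TP": ["UDP port 1701"],
    "L2TP/IPSec": ["IP protocol 50 (ESP)", "IP protocol 51 (AH)", "UDP port 500 (IKE)"],
}

def rules_for(vpn_type):
    """Return the firewall openings required for the given VPN protocol."""
    return VPN_FIREWALL_RULES.get(vpn_type, [])

for vpn in ("PPTP", "L2TP/IPSec"):
    print(vpn, "->", ", ".join(rules_for(vpn)))
```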
A new report from Stanford University warns that biodiversity is close to a tipping point that will lead to the next mass extinction. Credit: Smith609, Wikipedia (CC BY-SA 3.0) Earth's current biodiversity – the product of 3.5 billion years of evolutionary trial and error – is very high when looking at the long history of life. But it may be reaching a tipping point. In a review of scientific literature and data analysis published in Science, an international team of scientists cautions that the loss and decline of animals is contributing to what appears to be the early days of the planet's next mass extinction event. Since 1500, over 320 terrestrial vertebrates have gone extinct. Populations of the remaining species show a 25 percent average decline in abundance. The situation is similarly dire for invertebrate animal life. And while previous extinctions have been driven by natural planetary transformations or catastrophic asteroid strikes, the current die-off can be attributed to human activity – a situation that lead author Rodolfo Dirzo, professor of biology at Stanford University, calls an era of "Anthropocene defaunation." Across vertebrates, up to 33 percent of all species are estimated to be globally threatened or endangered. Large animals – described as megafauna and including elephants, rhinoceroses, polar bears and countless other species worldwide – face the highest rates of decline, a trend that matches previous extinction events. Larger animals tend to have lower population growth rates and produce fewer offspring. They need larger habitat areas to maintain viable populations. Their size and meat mass make them easier and more attractive hunting targets for humans. Although these species represent a relatively low percentage of the animals at risk, their loss would have trickle-down effects that could shake the stability of other species and, in some cases, even human health. For instance, previous experiments conducted in Kenya have isolated patches of land from megafauna such as zebras, giraffes and elephants, and observed how an ecosystem reacts to the removal of its largest species. Rather quickly, these areas become overwhelmed with rodents. Grass and shrubs increase and the rate of soil compaction decreases. Seeds and shelter become more easily available, and the risk of predation drops. Consequently, the number of rodents doubles – and so does the abundance of the disease-carrying ectoparasites that they harbour. If rats dominate ecosystems, they could evolve to giant sizes in the future, according to recent research. "Where human density is high, you get high rates of defaunation, high incidence of rodents, and thus high levels of pathogens, which increases the risks of disease transmission," says Dirzo. "Who would have thought that just defaunation would have all these dramatic consequences? But it can be a vicious circle." The scientists also detailed a troubling trend in invertebrate defaunation. Human population has doubled in the past 40 years; during the same period, the number of invertebrate animals – such as beetles, butterflies, spiders and worms – has decreased by 45 percent. As with larger animals, the loss is driven primarily by loss of habitat and global climate disruption, and could have trickle-up effects in our everyday lives. For instance, insects pollinate roughly 75 percent of the world's food crops, an estimated 10 percent of the economic value of the world's food supply.
They also play critical roles in nutrient cycling and decomposing organic materials, which helps to ensure ecosystem productivity. Dirzo says that the solutions are complicated. Immediately reducing rates of habitat change and overexploitation would help, but these approaches need to be tailored to individual regions and situations. He said he hopes that raising awareness of the ongoing mass extinction – and not just of large, charismatic species – and its associated consequences will help spur change. "We tend to think about extinction as loss of a species from the face of Earth, and that's very important, but there's a loss of critical ecosystem functioning in which animals play a central role that we need to pay attention to as well," he said. "Ironically, we have long considered that defaunation is a cryptic phenomenon, but I think we will end up with a situation that is non-cryptic because of the increasingly obvious consequences to the planet and to human wellbeing." Globally, June 2014 was the hottest June since records began in 1880. Experts predict that 2014 will be an El Niño year. According to NOAA scientists, the globally averaged temperature over land and ocean surfaces for June 2014 was the highest for June since records began in 1880. This follows the hottest May on record the previous month. It also marked the 38th consecutive June and 352nd consecutive month with a global temperature above the 20th century average. The last below-average global temperature for June was in 1976 and the last below-average global temperature for any month was February 1985. Most of the world experienced warmer-than-average monthly temperatures, with record warmth across part of southeastern Greenland, parts of northern South America, areas in eastern and central Africa, and sections of southern and southeastern Asia. Drought conditions in the southwest U.S. continued to worsen, with Lake Mead dropping to its lowest levels ever – triggering fears of major water shortages within the next several years. Australia saw nationally-averaged rainfall 32 percent below average and in Western Australia precipitation was 72 percent below average. Ocean surface temperatures for June were 0.64°C (1.15°F) above the 20th century average of 16.4°C (61.5°F), the highest for June on record and the highest departure from average for any month. Notably, large parts of the western equatorial and northeast Pacific Ocean and most of the Indian Ocean were record warm or much warmer than average for the month. Although neither El Niño nor La Niña conditions were present across the central and eastern equatorial Pacific Ocean during June 2014, ocean waters in that region continued to trend above average. NOAA's Climate Prediction Centre estimates there is about a 70 percent chance that El Niño conditions will develop during Northern Hemisphere summer 2014 and 80 percent chance it will develop during the fall and winter. With many of Earth's metals and minerals facing a supply crunch in the decades ahead, deep ocean mining could provide a way of unlocking major new resources. Amid growing commercial interest, the UN's International Seabed Authority has just issued seven exploration licences. Credit: Nautilus Minerals Inc. To build a fantastic utopian future of gleaming eco-cities, flying cars, robots and spaceships, we're going to need metal. A huge amount of it. Unfortunately, our planet is being mined at such a rapid pace that some of the most important elements face critical shortages in the coming decades. 
These include antimony (2022), silver (2029), lead (2031) and many others. To put the impact of our mining and other activities in perspective: on land, humans are now responsible for moving about ten times as much rock and earth as natural phenomena such as earthquakes, volcanoes and landslides. The UN predicts that on current trends, humanity's annual resource consumption will triple by 2050. While substitution in the form of alternative metals could help, a longer term answer is needed. Asteroid mining could eventually provide an abundance from space – but a more immediate, technically viable and commercially attractive solution is likely to arise here on Earth. That's where deep sea mining comes in. Just as offshore oil and gas drilling was developed in response to fossil fuel scarcity on land, the same principle could be applied to unlock massive new metal reserves from the seabed. Oceans cover 72% of the Earth's surface, with vast unexplored areas that may hold a treasure trove of rare and precious ores. Further benefits would include: • Curbing of China's monopoly on the industry. As of 2014, the country is sitting on nearly half the world's known reserves of rare earth metals and produces over 90% of the world's supply. • Limited social disturbance. Seafloor production will not require the social dislocation and resulting impact on culture or disturbance of traditional lands common to many land-based operations. • Little production infrastructure. As the deposits are located on the seafloor, production will be limited to a floating ship with little need for additional land-based infrastructure. The concentration of minerals is an order of magnitude higher than typical land-based deposits with a corresponding smaller footprint on the Earth's surface. • Minimal overburden or stripping. The ore generally occurs directly on the seafloor and will not require large pre-strips or overburden removal. • Improved worker safety. Operations will be mostly robotic and won't require human exposure to typically dangerous mining or "cutting face" activities. Only a hundred or so people will be employed on the production vessel, with a handful more included in the support logistics. Credit: Nautilus Minerals Inc. Interest in deep sea mining first emerged in the 1960s – but consistently low prices of mineral resources at the time halted any serious implementation. By the 2000s, the only resource being mined in bulk was diamonds, and even then, just a few hundred metres below the surface. In recent years, however, there has been renewed interest, due to a combination of rising demand and improvements in exploration technology. The UN's International Seabed Authority (ISA) was set up to manage these operations and prevent them from descending into a free-for-all. Until 2011, only a handful of exploration permits had been issued – but since then, demand has surged. This week, seven new licences were issued to companies based in Brazil, Germany, India, Russia, Singapore and the UK. The number is expected to reach 26 by the end of 2014, covering a total area of seabed greater than 1.2 million sq km (463,000 sq mi). Michael Lodge of the ISA told the BBC: "There's definitely growing interest. Most of the latest group are commercial companies so they're looking forward to exploitation in a reasonably short time – this move brings that closer." So far, only licences for exploration have been issued, but full mining rights are likely to be granted over the next few years. 
The first commercial activity will take place off the coast of Papua New Guinea, where a Canadian company – Nautilus Minerals – plans to extract copper, gold and silver from hydrothermal vents. After 18 months of delays, this was approved outside the ISA system and is expected to commence in 2016. Nautilus has been developing Seafloor Production Tools (SPTs), the first of which was completed in April. This huge robotic machine is known as the Bulk Cutter and weighs 310 tonnes when fully assembled. The SPTs have been designed to work at depths of 1 mile (1.6 km), but operations as far down as 2.5 miles (4 km) should be possible eventually. As with any mining activity, concerns have been raised from scientists and conservationists regarding the environmental impact of these plans, but the ISA says it will continue to demand high levels of environmental assessment from its applicants. Looking ahead, analysts believe that deep sea mining could be widespread in many parts of the world by 2040. Scientists at the National Oceanic and Atmospheric Administration (NOAA) have developed a new high-resolution climate model, showing that southwestern Australia's long-term decline in fall and winter rainfall is caused by manmade greenhouse gas emissions and ozone depletion. "This new high-resolution climate model is able to simulate regional-scale precipitation with considerably improved accuracy compared to previous generation models," said Tom Delworth, a research scientist at NOAA's Geophysical Fluid Dynamics Laboratory in Princeton, N.J., who helped develop the new model and is co-author of the paper. "This model is a major step forward in our effort to improve the prediction of regional climate change, particularly involving water resources." NOAA researchers conducted several climate simulations using this global climate model to study long-term changes in rainfall in various regions across the globe. One of the most striking signals of change emerged over Australia, where a long-term decline in fall and winter rainfall has been observed over parts of southern Australia. Simulating natural and manmade climate drivers, scientists showed that the decline in rainfall is primarily a response to manmade increases in greenhouse gases as well as a thinning of the ozone caused by manmade aerosol emissions. Several natural causes were tested with the model, including volcano eruptions and changes in the sun's radiation. However, none of these natural drivers reproduced the long-term observed drying, indicating this trend is clearly due to human activity. Southern Australia's decline in rainfall began around 1970 and has increased over the last four decades. The model projects a continued decline in winter rainfall throughout the rest of the 21st century, with significant implications for regional water resources. The drying is most severe over the southwest, predicted to see a 40 percent decline in average rainfall by the late 21st century. "Predicting potential future changes in water resources, including drought, are an immense societal challenge," said Delworth. "This new climate model will help us more accurately and quickly provide resource planners with environmental intelligence at the regional level. The study of Australian drought helps to validate this new model, and thus builds confidence in this model for ongoing studies of North American drought." Dubai is already known for its luxury tourist experience, super-tall skyscrapers and extravagant megaprojects. 
Now developers have announced it will host the world's first temperature-controlled city – incorporating the largest mall, largest domed park, cultural theatres and wellness resorts. Known as the "Mall of the World", this gigantic $7bn project will encompass 50 million square feet of floorspace, taking 10 years to construct. Intended as a year-round destination, its capacity will be large enough to accommodate 180 million visitors each year in 100 hotels and serviced apartment buildings. Glass-roofed streets, modelled on New York's Broadway and London's Oxford Street, will stretch for 7 km (4.6 miles). These will be air-conditioned in summer as temperatures soar above 40°C, but the mall and its glass dome will be open to the elements during cooler winter months. Cars will be redundant in this "integrated pedestrian city." Credit: Dubai Holding "The project will follow the green and environmentally friendly guidelines of the Smart Dubai model," explained Ahmad bin Byat, the chief executive of Dubai Holding. "It will be built using state-of-the-art technology to reduce energy consumption and carbon footprint, ensuring high levels of environmental sustainability and operational efficiency." In response to concerns about another real estate bubble, he insisted there was demand for such a project: "The way things are growing I think we are barely coping with the demand ... tourism is growing in Dubai," he said in an interview with Reuters. "This is a long-term project and we are betting strongly on Dubai." Speaking at the launch of the mall, Sheikh Mohammed said: "The growth in family and retail tourism underpins the need to enhance Dubai's tourism infrastructure as soon as possible. This project complements our plans to transform Dubai into a cultural, tourist and economic hub for the 2 billion people living in the region around us – and we are determined to achieve our vision." Mall of the World is one of several hi-tech, futuristic cities that could set the standard for eco-city designs in the coming decades. Others include China's car-free "Great City" (planned to be finished by 2020) and the Masdar City arcology (due in 2025). The largest ever study of its kind has found significant differences between organic food and conventionally-grown crops. Organic food contains almost 70% more antioxidants and significantly lower levels of toxic heavy metals. Conventionally-grown potatoes on the left of the picture and organically grown potatoes on the right. Credit: Newcastle University Analysing 343 studies into the differences between organic and conventional crops, an international team of experts led by Newcastle University, UK, found that a switch to eating organic fruit, vegetable and cereals – and food made from them – would provide additional antioxidants equivalent to eating between 1-2 extra portions of fruit and vegetables a day. The study, published in the British Journal of Nutrition, also shows significantly lower levels of toxic heavy metals in organic crops. Cadmium – one of only three metal contaminants along with lead and mercury for which the European Commission has set maximum permitted contamination levels in food – was found to be almost 50% lower in organic crops than conventionally-grown ones. Professor Carlo Leifert, who led the study, says: “This study demonstrates that choosing food produced according to organic standards can lead to increased intake of nutritionally desirable antioxidants and reduced exposure to toxic heavy metals. 
This constitutes an important addition to the information currently available to consumers, which until now has been confusing and in many cases conflicting." New methods used to analyse the data This is the most extensive analysis of the nutrient content in organic vs conventionally-produced foods ever undertaken and is the result of a groundbreaking new systematic literature review and meta-analysis by the international team. The findings contradict those of a 2009 UK Food Standards Agency (FSA) commissioned study, which found there were no substantial differences or significant nutritional benefits from organic food. The FSA study based its conclusions on just 46 publications covering crops, meat and dairy, while the Newcastle-led meta-analysis is based on data from the 343 peer-reviewed publications on composition differences between organic and conventional crops now available. "The main difference between the two studies is time," explains Professor Leifert, who is Professor of Ecological Agriculture at Newcastle University. "Research in this area has been slow to take off the ground and we have far more data available to us now than five years ago." Dr Gavin Stewart, a Lecturer in Evidence Synthesis and the meta-analysis expert in the Newcastle team, added: "The much larger evidence base available in this synthesis allowed us to use more appropriate statistical methods to draw more definitive conclusions regarding the differences between organic and conventional crops." What the findings mean The study, funded jointly by the European Framework 6 programme and the Sheepdrove Trust, found that concentrations of antioxidants such as polyphenolics were between 18-69% higher in organically-grown crops. Numerous studies have linked antioxidants to a reduced risk of chronic diseases, including cardiovascular and neurodegenerative diseases and certain cancers. Substantially lower concentrations of the toxic heavy metal cadmium were also detected in organic crops (on average 48% lower). Nitrogen concentrations were found to be significantly lower in organic crops. Concentrations of total nitrogen were 10%, nitrate 30% and nitrite 87% lower in organic compared to conventional crops. The study also found that pesticide residues were four times more likely to be found in conventional crops than organic ones. Professor Charles Benbrook, one of the authors of the study and a leading scientist based at Washington State University, explains: "Our results are highly relevant and significant and will help both scientists and consumers sort through the often conflicting information currently available on the nutrient density of organic and conventional plant-based foods." Professor Leifert added: "The organic vs non-organic debate has rumbled on for decades now, but the evidence from this study is overwhelming – organic food is high in antioxidants and lower in toxic metals and pesticides. "But this study should just be a starting point. We have shown without doubt there are composition differences between organic and conventional crops, now there is an urgent need to carry out well-controlled human dietary intervention and cohort studies specifically designed to identify and quantify the health impacts of switching to organic food." The authors of this study welcome the continued public and scientific debate on this important subject.
The entire database generated and used for this analysis is freely available on the Newcastle University website for the benefit of other experts and interested members of the public. A new interactive graphic and analysis released this week by research and journalism organisation Climate Central illustrates how much hotter summers will be in 1,001 U.S. cities by 2100, if current emissions trends continue, and shows which cities they are going to most feel like. "Summer temperatures in most American cities are going to feel like summers now in Texas and Florida — very, very hot," comments Alyson Kenward, lead researcher of the analysis, which looked at projected changes in summer (June-July-August) high temperatures. On average, those temperatures will be 3.9 to 5.6°C (7-10°F) hotter, with some cities as much as 6.7°C (12°F) hotter by the end of the century. Among the most striking examples featured in the interactive are: • Boston, where average summer high temperatures will likely be more than 5.6°C (10°F) hotter than they are now, making it feel as steamy as North Miami Beach is today. • Saint Paul, Minnesota, where summer highs are expected to rise by an average of 6.7°C (12°F), putting it on par with Mesquite, Texas. • Memphis, where summer high temperatures could average a sizzling 37.8°C (100°F), typical of Laredo, Texas. • Las Vegas, with summer highs projected to average a scorching 43.9°C (111°F), like summers today in Riyadh, Saudi Arabia. • Phoenix, where summer high temperatures would average a sweltering 45.6°C (114°F), which will feel like Kuwait City. This analysis only accounts for daytime summer heat — the hottest temperatures of the day, on average between June-July-August — and doesn't incorporate humidity or dewpoint, both of which contribute to how uncomfortable summer heat can feel. Other impacts the map does not include are rising sea levels and a likely increase in storms and severe weather events. Recent articles by Fox News and the Daily Telegraph claimed that scientists have been "tampering" with U.S. temperature data. For those who care about real science (as opposed to conspiracy theories), Skeptical Science has a thorough debunking here. A study co-authored by a University of Guelph scientist that involved fitting bumblebees with tiny radio frequency tags shows long-term exposure to a neonicotinoid pesticide hampers bees’ ability to forage for pollen. Bees fitted with RFID tags. Credit: Richard Gill The research by Nigel Raine, a professor in Guelph’s School of Environmental Sciences, and Richard Gill of Imperial College London is published in the British Ecological Society’s journal Functional Ecology. The study shows how long-term pesticide exposure affects individual bees’ day-to-day behaviour, including pollen collection and which flowers the worker bees chose to visit. “Bees have to learn many things about their environment, including how to collect pollen from flowers,” says Raine, who holds the Rebanks Family Chair in Pollinator Conservation. “Exposure to this neonicotinoid pesticide seems to prevent bees from being able to learn these essential skills.” The researchers monitored bee activity using radio frequency identification (RFID) tags – seen in the photograph above – similar to those used by courier firms to track parcels. They tracked when individual bees left and returned to the colony, how much pollen they collected and from which flowers. The bees from untreated colonies got better at collecting pollen as they learned to forage. 
However, bees exposed to neonicotinoid insecticides became less successful over time at collecting pollen. Neonicotinoid-treated colonies even sent out more foragers to try to compensate for the lack of pollen from individual bees. Besides collecting less pollen, said Raine, "flower preferences of neonicotinoid-exposed bees were different to those of foraging bees from untreated colonies." Raine and Gill studied the effects of two pesticides – imidacloprid, one of three neonicotinoid pesticides currently banned by the European Commission for use on crops attractive to bees, and a pyrethroid (lambda-cyhalothrin) – used both alone and together, on the behaviour of individual bumblebees from 40 colonies over four weeks. "Although pesticide exposure has been implicated as a possible cause for bee decline, until now we had limited understanding of the risk these chemicals pose, especially how they affect natural foraging behaviour," Raine said. Neonicotinoids make up about 30 per cent of the global pesticide market. Plants grown from neonicotinoid-treated seed have the pesticide in all their tissues, including the nectar and pollen. "If pesticides are affecting the normal behaviour of individual bees, this could have serious knock-on consequences for the growth and survival of colonies," explained Raine. He suggests reform of pesticide regulations, including adding bumblebees and solitary bees to risk assessments that currently cover only honeybees. "Bumblebees may be much more sensitive to pesticide impacts as their colonies contain a few hundred workers at most, compared to tens of thousands in a honeybee colony," he added.
Python provides several ways to check whether a string contains a substring and to locate where it occurs: the in operator, the find() and index() methods, the count() method, the startswith() method, regular expressions, and slicing. Each has its own use cases and pros and cons, which are covered briefly below. (Replacing all or n occurrences of a substring in a given string is another fairly common problem of string manipulation and text processing in general.)

1) The in operator. The easiest way to check whether a Python string contains a substring is the in operator, which is used to check data structures for membership in Python. It returns True if the substring exists in the string; otherwise, it returns False.

2) The find() method. str.find(sub) is a built-in string method that searches one string for another. It returns the lowest index in the string at which the substring sub is found; if the substring does not exist inside the string, it returns -1. For example, with stringexample = "kiki", the call stringexample.find("ki") returns 0, the position of the first occurrence. Optional starting and ending indexes restrict the search to a slice of the string, so find() can also look for the substring only within part of the original string.

3) The index() method. string.index() also returns the index of a substring and is almost the same as find(); the only difference is that index() raises an exception if the substring is not present ("ValueError: substring not found"), whereas find() returns -1. Note that indexing in Python starts from 0, not 1; that is why, in the tutorial sentence often used as an example, the position reported for the substring 'is fun' is 19 rather than 20.

4) The count() method. string.count() is a built-in function that, as the name suggests, counts the number of occurrences of a substring in a given string. For example, in the sentence "The brown-eyed man drives a brown car." the substring "brown" occurs twice.

5) The startswith() method. With the startswith() function, you can check whether the string starts with a chosen substring.

6) Slicing and regular expressions. You can also extract parts of data by slicing, which works for any sequence, not only strings, and is more general than the ordinary "substring" methods; in pandas, the str.slice function extracts a substring from a column of a dataframe. Regular expressions provide a further, more flexible option.

In short, Python string methods let you create a substring, check whether a string contains a substring, find the index of a substring, and more; the examples below show how to use them.
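A short runnable sketch pulling these pieces together, using the example sentence from above:

```python
sentence = "The brown-eyed man drives a brown car."

# Membership test with the in operator
print("brown" in sentence)          # True

# find() returns the lowest index of the substring, or -1 if absent
print(sentence.find("brown"))       # 4
print(sentence.find("purple"))      # -1

# index() is the same, but raises ValueError when the substring is missing
print(sentence.index("brown"))      # 4

# count() returns the number of non-overlapping occurrences
print(sentence.count("brown"))      # 2

# startswith() checks the beginning of the string
print(sentence.startswith("The"))   # True
```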
Checking how many pieces a string splits into is another way to tell whether it contains a given substring: by checking the number of strings in the resulting list, we can identify whether the search term is a substring or not. We will also explore ways to check for the existence of a substring in a case-insensitive manner by ignoring case; note that all of the searches shown in this tutorial are case sensitive. The in and not in operators work the same way on any string, for example checking whether a filename such as "cat.png" contains a particular extension.

find() syntax. In str.find(sub, start, end), sub stands for the search term and str is the source string where you want to find it. The method returns the lowest index in the string at which the substring sub is found within the slice s[start:end]; otherwise the function returns -1 to mark the failure. The start and end parameters are optional: if start is not included, it is assumed to equal 0, and end defaults to the length of the string. Incidentally, Python's implementation (CPython) uses a mix of the Boyer-Moore and Horspool algorithms for string searching.

index() syntax. The index() method finds the first occurrence of the substring in the source string. Both methods return the index of the substring if the substring you want to search for is present in the given string; the only difference is that find() returns -1 if the value is not found, while index() raises an exception.

count() syntax. string_name.count(substring[, start[, end]]) returns the number of non-overlapping occurrences of the substring in the range [start, end]. For example, with string = "abcdefghijklmnop", the call string.count(substring, start_index, end_index) counts occurrences only inside the given slice; the count function takes up to three arguments.

Slicing. Extracting part of a string is often called 'slicing'. It follows this template: string[start:end:step], where start is the starting index of the substring (the character at this index is included), end is the index at which the slice stops, and step is the stride. In pandas, extracting a substring from a column of a dataframe and storing it in a new column can be done with the str.slice function, or with the extract function together with a regular expression.

A traditional way to find a substring is to use a loop, scanning each character of the main string one by one; a loop is also how you can locate the second (or nth) occurrence of a keyword, such as calculating the position of the second "web" keyword in a given string, as shown in the example further below.
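A small sketch of slicing and of count() with start/end arguments; the strings are just examples.

```python
s = "abcdefghijklmnop"

# Slicing: string[start:end:step]; start is included, end is excluded
print(s[2:5])        # "cde"
print(s[:3])         # "abc"  (start defaults to 0)
print(s[5:])         # "fghijklmnop"  (end defaults to len(s))
print(s[::2])        # "acegikmo"  (every second character)

# count() with an optional search range
text = "the web is wide; the web is worldwide"
print(text.count("web"))         # 2
print(text.count("web", 10))     # 1  (search starts at index 10)

# in / not in on strings
filename = "cat.png"
print(".png" in filename)        # True
print("jpg" not in filename)     # True
```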
The test sub in str returns True if the substring is found in the given string, so if you only need to know whether sub is a substring of the specified string, the in operator is enough. Related string methods cover neighbouring tasks; for example, if you require the first letter of a string to be capitalized, you can use the capitalize() method. Finding a substring in a string is one of the most commonly faced problems, and the simplest find() call takes only a single parameter, the substring itself, applied to a string such as str1 = "this is beautiful earth!!". The return value from find() is the index of the beginning of the substring if it is found; otherwise -1 is returned. The slicing form of a substring is sub_string = string[start_index : end_index], where start_index tells the operator where to start and end_index tells it where to end, and the character at the start index is included in the substring. For find(), beg is the starting index (0 by default) and end is the ending index (equal to the length of the string by default), so the full signature is str.find(sub[, start[, end]]); it returns the lowest index in the string where the substring "sub" is found within the slice s[start:end]. A Python substring operation is based on the index values of the characters in a string, and since a string is a sequence of characters, you can slice it like any other object to return a portion of it; extended slicing is therefore the most general "substring" tool Python offers. Alternatively, you can scan each character of the main string one by one with a loop, which also lets you find the second (or nth) occurrence of a substring or keyword, for instance the second "web" keyword when the given string contains two of them. Classic exercises combine these pieces: in the Hacker Rank challenge, for example, the user enters a string and a substring and you have to print the number of times that the substring occurs, and the index() method, which raises an exception if the value is not found, is often contrasted with find() in such tasks. If you are looking to find or replace items in a string, these built-in methods can help you search a target string for a specified substring, and regular expressions provide a more flexible and efficient way to find a pattern. The Python string function center() makes string_name centered by taking a width parameter into account; the padding is specified by the parameter fillchar, and the default filler is a space. Luckily, most of these tasks are made easy in Python by its vast array of built-in functions. As a running example, a program using string find might start with value = "cat picture is cat picture".
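A sketch of finding the second occurrence with a start offset, and of the split-based membership check, using the value string mentioned above:

```python
# Finding the second occurrence and checking membership via split().
value = "cat picture is cat picture"

first = value.find("cat")               # 0
second = value.find("cat", first + 1)   # 15 -> the search resumes just past the first hit
print(first, second)

# Splitting on a real substring yields more than one piece.
pieces = value.split("cat")
print(len(pieces) > 1)    # True -> "cat" occurs in value
print(len(pieces) - 1)    # 2    -> number of non-overlapping occurrences
```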
The end argument's default value is the length of the string, and it too is optional. There is no dedicated substring function in Python; instead you use slicing to get the substring of a string, from a specific starting position to a specific ending position. To locate a substring you write index = string.find(substring, start, end), where string is the string in which you have to find the index of the first occurrence of substring. A manual search typically initialises location = -1 and then loops while matches remain.
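A minimal version of that manual loop, repeatedly handing find() the position just past the previous match (the text and search term reuse the earlier example):

```python
# Manual scan: start at location = -1 and loop while find() keeps succeeding.
text = "cat picture is cat picture"
substring = "cat"

location = -1
while True:
    location = text.find(substring, location + 1)
    if location == -1:
        break                          # -1 means no further occurrences
    print("match at index", location)  # prints 0, then 15
```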
Using the Inverse Trigonometric Functions - Concept. In a problem where two trig functions are not inverses of each other (a composition of a trig function with an inverse trigonometric function): (1) replace the inverse function with a variable (which represents an angle), (2) use the definition of the inverse function to draw the angle in the unit circle and identify one coordinate, (3) find the missing coordinate (use the Pythagorean Theorem, for example), and (4) use the coordinates to find the missing value. When you're studying the inverse trig functions you might come across a problem like this: evaluate tangent of inverse sine of four fifths. Notice these are not inverses of one another, so you can't use the inverse identities here. So what I suggest you do is make a little substitution. Let's call this, now remember, inverse trig functions will give you an angle; their output is always going to be an angle. Let's call it alpha; in math we have a tendency to label angles with Greek letters, so let's call it alpha. And I'm going to draw alpha on the unit circle in a second. Let me just use the definition of inverse sine to figure out what kind of an angle alpha is. Remember the definition of inverse sine: y equals the inverse sine of x means x equals sine of y, for y between negative pi over 2 and pi over 2. So alpha equals inverse sine of four fifths means sine of alpha equals four fifths and alpha is between negative pi over 2 and pi over 2. And that suggests that alpha is going to be in the first quadrant, because we have a positive value. So let me draw alpha like this; I've drawn alpha in the first quadrant, the sine value is four fifths, and in order to solve this problem I'm going to have to figure out what the x coordinate would be. But remember this is a point on the unit circle, and on the unit circle x squared plus y squared equals 1. So in this case we'd have x squared plus four fifths squared equals 1, or x squared plus 16 over 25 equals 1. I'll change this 1 to 25 over 25, and then I subtract 16 over 25 from both sides and I get x squared equals 9 over 25. So x is going to be plus or minus 3 over 5; now which is it? Well, we're in the first quadrant, so x has got to be positive, so x is three fifths. Now that doesn't exactly help me figure out what the tangent of alpha is, because remember that's what we're looking for, the tangent of this angle alpha. If I were looking for the cosine of alpha I'd be done; it would be three fifths. But remember that the tangent of alpha is the y coordinate divided by the x coordinate. So the tangent of alpha is going to be the y coordinate of four fifths divided by the x coordinate of three fifths, and that's going to give me four thirds, and that's it; so always make a substitution like this. Let's see another example: sine of arc cosine of negative two thirds. Remember that arc cosine is the same as inverse cosine. So for sine of arc cosine of negative two thirds, this is going to be another angle; let me call it beta, and I should plot beta in a second, but first let's figure out what kind of an angle beta is. Remember that beta equals inverse cosine of negative two thirds means the cosine of beta equals negative two thirds, and beta is between 0 and pi; remember, that's going to be the range of inverse cosine. So if I draw beta on this unit circle I've got to draw beta between 0 and pi, somewhere up here. Now where would the cosine be negative?
Remember that cosine comes from the first coordinate of the point on the unit circle. It's going to be somewhere in the second quadrant, so let me pick a point where it looks like the x coordinate is negative two thirds; how about there? Negative two thirds, something, and this would be my angle. Now all I have to do is figure out what the y coordinate is. Well, I still have the fact that x squared plus y squared equals 1, and I can plug in negative two thirds for x. Negative two thirds squared is 4 over 9, and I can change the 1 to 9 over 9, and thus if I subtract 4 over 9 from both sides I get y squared equals 5 over 9, and y equals plus or minus root 5 over 3. And you can tell that since we're in the second quadrant the y coordinate is going to be positive, so I should choose y equals plus root 5 over 3. Now I'm looking for the sine of beta, and the sine of beta is exactly the y value. So I've got my answer: the sine of beta is root 5 over 3, and I'm done. Don't forget this trick of renaming the arc cosine, or whatever inverse trigonometric function value you are given, because they always give you angles. You can plot that angle on the unit circle, figure out what both coordinates are, and then use that to find the sine, cosine, or tangent of the result.
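The two worked examples condense to the following, writing each inverse-trig value as an angle and reading the missing coordinate off the unit circle:

```latex
\alpha = \sin^{-1}\tfrac{4}{5}
  \;\Rightarrow\; \sin\alpha = \tfrac{4}{5},\quad
  \cos\alpha = +\sqrt{1 - \tfrac{16}{25}} = \tfrac{3}{5},\quad
  \tan\!\left(\sin^{-1}\tfrac{4}{5}\right) = \frac{4/5}{3/5} = \tfrac{4}{3}

\beta = \cos^{-1}\!\left(-\tfrac{2}{3}\right)
  \;\Rightarrow\; \cos\beta = -\tfrac{2}{3},\quad
  \sin\!\left(\cos^{-1}\!\left(-\tfrac{2}{3}\right)\right) = +\sqrt{1 - \tfrac{4}{9}} = \tfrac{\sqrt{5}}{3}
```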
Frequency Function: 10 Ways It Can Be Used. Understanding the Frequency Function's Parameters. The FREQUENCY function in Microsoft Excel takes two different arrays as its arguments: the data array and the bins array. The two work together, and the function cannot be used without supplying values for both of them. Data array: an array of, or reference to, the values whose frequency the user wants to know. If no values are supplied in the data array, FREQUENCY returns zeros. Bins array: an array of, or reference to, the intervals into which the values in the data array are grouped when the frequencies are calculated. If no values are supplied in the bins array, FREQUENCY returns the number of elements in the data array. Finding a Single Frequency. In this example we have a data set and we want to find a single frequency. We know that several values repeat, but we are looking for just one of them; this is one of the places where the FREQUENCY function in Microsoft Excel can be extremely useful. Multiple Frequencies in Data. The previous example showed that it is possible to find a single frequency, and it is natural to be curious about multiple frequencies, which is why this example is included. Here we use the previous data to find the multiple frequencies in it; to do so, we need to know how many times each of the different values is repeated. Frequency of Non-Communication. We are a company that likes to communicate with its customers often, and we are worried that we might be losing touch with some of them. We want to know whether there are any customers we have not contacted within the last week. If customers have stopped contacting us, there are different risks associated with that: it might be that they no longer use our products, so we need to know whether there has been any non-communication with any of our customers. How Frequently Did Employees Make a Sale Yesterday and Today? In this example we want to know how frequently the employees made sales on each of the two days. One value appears 2 times in one place and another appears 4 times in the other, which gives the frequency of yesterday's and today's sales. Using the Frequency Function for an Exam. In this example a teacher would like to use the FREQUENCY function to make grade assessments. We have the students' names with their grades beside them, the grade boundaries in another column, and finally the FREQUENCY function to make the assessment. Breaking Ages into Different Ranges. This example divides ages into specific age groups: in the data range we have people of different ages, and in the bin range we have the age groups we have assigned to them. Height Division with the Frequency Function. We are setting up a new basketball tournament, and many people have signed up for it.
We have people of various heights, and we would like to know how many would fall into each group, so we can decide whether we need more people, improve the whole process, and be more productive when the basketball tournament starts. Handling FREQUENCY with an N/A Error. The same formula can sometimes show an error: the circumstances are otherwise identical, but the result differs because an N/A error appears. In this scenario we again divide the heights into groups, but this time there is an N/A error in the frequency output. This happens because one of the values we added is too large, which makes the formula show an N/A error, and the only thing needed is to reduce that overly large value. If you add up the frequencies, you get 11, which is the number of people listed under "People's height". Frequency of the Company's Performance in All Departments. In this example we are a company with about 1,500 customers, and we know they have different needs based on their requirements. We want to know how frequently customers need a certain service, so we can assess how the company is performing across all 6 departments. In the picture below, you would notice that 2 out of 6 provide installation services, another 2 out of 6 provide technical services, and 1 out of 6 each provide sales and general services. Race Preparation with the Frequency Function. We are preparing for a race, and as a school we are looking for people who can run the 100 meters within a specific time frame. We also want to know whom we could have chosen in case we do not find anyone who can run the distance, and we use percentages to simplify the process with the Excel function. In this example we have six candidates who have performed very well and could participate in the race; in the picture shown below, 1 means 100%. Even though we are using FREQUENCY here, a fixed table is also included this time, and we only use the frequency formula itself. This example is meant to clarify that FREQUENCY cannot be used inside a table when it is entered as a multi-cell array formula.
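Excel's FREQUENCY(data_array, bins_array) returns one count per bin plus an overflow bin. As an illustrative sketch of the same grouping logic outside Excel (the heights and bin boundaries below are made up, chosen only so the counts total 11 as in the height example), the behaviour can be mimicked in Python:

```python
import numpy as np

# Hypothetical data array (heights in cm) and bins array (upper bounds),
# mirroring FREQUENCY's "count of values <= each bound, plus an overflow bin".
heights = np.array([152, 158, 163, 167, 171, 174, 178, 181, 185, 190, 198])
bins    = np.array([160, 170, 180, 190])

counts = np.zeros(len(bins) + 1, dtype=int)
for h in heights:
    counts[np.searchsorted(bins, h)] += 1   # first bin whose bound is >= the value

print(counts)        # [2 2 3 3 1] -> the last entry counts values above every bound
print(counts.sum())  # 11, the total number of people
```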
August 15, 2016 – Our planet is nestled in the center of two immense, concentric doughnuts of powerful radiation: the Van Allen radiation belts, which harbor swarms of charged particles that are trapped by Earth’s magnetic field. On March 17, 2015, an interplanetary shock – a shockwave created by the driving force of a coronal mass ejection, or CME, from the sun – struck Earth’s magnetic field, called the magnetosphere, triggering the greatest geomagnetic storm of the preceding decade. And NASA’s Van Allen Probes were there to watch the effects on the radiation belts. One of the most common forms of space weather, a geomagnetic storm describes any event in which the magnetosphere is suddenly, temporarily disturbed. Such an event can also lead to change in the radiation belts surrounding Earth, but researchers have seldom been able to observe what happens. But on the day of the March 2015 geomagnetic storm, one of the Van Allen Probes was orbiting right through the belts, providing unprecedentedly high-resolution data from a rarely witnessed phenomenon. A paper on these observations was published in the Journal of Geophysical Research on August 15, 2016. Researchers want to study the complex space environment around Earth because the radiation and energy there can impact our satellites in a wide variety of ways – from interrupting onboard electronics to increasing frictional drag to disrupting communications and navigation signals. “We study radiation belts because they pose a hazard to spacecraft and astronauts,” said David Sibeck, a Van Allen Probes mission scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, who was not involved with the paper. “If you knew how bad the radiation could get, you would build a better spacecraft to accommodate that.” Studying the radiation belts is one part of our efforts to monitor, study and understand space weather. NASA launched the twin Van Allen Probes in 2012 to understand the fundamental physical processes that create this harsh environment so that scientists can develop better models of the radiation belts. These spacecraft were specifically designed to withstand the constant bombardment of radiation in this area and to continue to collect data even under the most intense conditions. A set of observations on how the radiation belts respond to a significant space weather storm, from this harsh space environment, is a goldmine. The recent research describes what happened: The March 2015 storm was initiated by an interplanetary shock hurtling toward Earth – a giant shockwave in space set off by a CME, much like a tsunami is triggered by an earthquake. Swelling and shrinking in response to such events and solar radiation, the Van Allen belts are highly dynamic structures within our planet’s magnetosphere. Sometimes, changing conditions in near-Earth space can energize electrons in these ever-changing regions. Scientists don’t yet know whether energization events driven by interplanetary shocks are common. Regardless, the effects of interplanetary shocks are highly localized events – meaning if a spacecraft is not precisely in the right place when a shock hits, it won’t register the event at all. In this case, only one of the Van Allen Probes was in the proper position, deep within the magnetosphere – but it was able to send back key information. The spacecraft measured a sudden pulse of electrons energized to extreme speeds – nearly as fast as the speed of light – as the shock slammed the outer radiation belt. 
This population of electrons was short-lived, and their energy dissipated within minutes. But five days later, long after other processes from the storm had died down, the Van Allen Probes detected an increased number of even higher energy electrons. Such an increase so much later is a testament to the unique energization processes following the storm. “The shock injected – meaning it pushed – electrons from outer regions of the magnetosphere deep inside the belt, and in that process, the electrons gained energy,” said Shri Kanekal, the deputy mission scientist for the Van Allen Probes at Goddard and the leading author of a paper on these results. Researchers can now incorporate this example into what they already know about how electrons behave in the belts, in order to try to understand what happened in this case – and better map out the space weather processes there. There are multiple ways electrons in the radiation belts can be energized or accelerated: radially, locally or by way of a shock. In radial acceleration, electrons are carried by low-frequency waves towards Earth. Local acceleration describes the process of electrons gaining energy from relatively higher frequency waves as the electrons orbit Earth. And finally, during shock acceleration, a strong interplanetary shock compresses the magnetosphere suddenly, creating large electric fields that rapidly energize electrons. Scientists study the different processes to understand what role each process plays in energizing particles in the magnetosphere. Perhaps these mechanisms occur in combination, or maybe just one at a time. Answering this question remains a major goal in the study of radiation belts – a difficult task considering the serendipitous nature of the data collection, particularly in regard to shock acceleration. Additionally, the degree of electron energization depends on the process that energizes them. One can liken the process of shock acceleration, as observed by the Van Allen Probe, to pushing a swing. “Think of ‘pushing’ as the phenomenon that’s increasing the energy,” Kanekal said. “The more you push a swing, the higher it goes.” And the faster electrons will move after a shock. In this case, those extra pushes likely led to the second peak in high-energy electrons. While electromagnetic waves from the shock lingered in the magnetosphere, they continued to raise the electrons’ energy. The stronger the storm, the longer such waves persist. Following the March 2015 storm, resulting electromagnetic waves lasted several days. The result: a peak in electron energy measured by the Van Allen Probe five days later. This March 2015 geomagnetic storm was one of the strongest yet of the decade, but it pales in comparison to some earlier storms. A storm during March 1991 was so strong that it produced long-lived, energized electrons that remained within the radiation belts for multiple years. With luck, the Van Allen Probes may be in the right position in their orbit to observe the radiation belt response to more geomagnetic storms in the future. As scientists gather data from different events, they can compare and contrast them, ultimately helping to create robust models of the little-understood processes occurring in these giant belts. The Johns Hopkins Applied Physics Laboratory in Laurel, Maryland, built and operates the Van Allen Probes for NASA’s Heliophysics Division in the Science Mission Directorate. 
The Van Allen Probes are the second mission in NASA’s Living With a Star program, an initiative managed by Goddard and focused on aspects of the sun-Earth system that directly affect human lives and society. The Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado, provided two key instruments for the Van Allen Probes mission.
This poster commemorates two landmark lunar anniversaries. On July 20th 1969, humans set foot on the Moon for the first time. At the time, little was known about the history and composition of Earth's nearest celestial neighbor. The samples that Neil Armstrong and Buzz Aldrin collected, followed by the science and samples from the other Apollo missions, allowed scientists to measure the age, composition, and other properties of the Moon and to apply that understanding to other objects in our solar system. 50 years later, the Lunar Reconnaissance Orbiter (LRO) celebrates its 10th anniversary in orbit at the Moon. Launched on June 18, 2009, LRO has built on NASA's legacy of lunar science and exploration. In its 10 years, LRO has identified valuable resources on the Moon, including water; measured current and ancient impact rates; determined the time period when volcanic activity went dormant; and made detailed temperature and topographical maps. Data from LRO will be used to identify the next sites for human exploration in the decades to come. Color an outline of NASA's Lunar Reconnaissance Orbiter spacecraft as it orbits the Moon. Meteor impacts can radically alter the surface of a planet. Scientists used data from the Lunar Orbiter Laser Altimeter (LOLA) instrument on board LRO to build a map that highlights lunar craters with greater clarity than ever before. Data from prior lunar missions such as Lunar Prospector suggested that the Moon's polar regions may be hiding ice. One of the primary reasons the Lunar Reconnaissance Orbiter (LRO) was placed into a polar orbit around the Moon was to search for water near the Moon's poles. Scientists use data from several LRO instruments to piece together the story of water on the Moon. The Moon started off hot, but where did the heat come from? Scientists look for scarps, steep slopes on the surface of the Moon's crust, as an indicator of its cooling history. Scientists have found hundreds of previously undetected scarps in LRO images. Lunar Reconnaissance Orbiter Camera (LROC) data have shown that many of the newly discovered scarps could be as young as 50 million years old, suggesting that the Moon continues to cool even today. Looking Back Fifty Years: When Neil Armstrong, Buzz Aldrin, and Michael Collins first embarked on their journey to the Moon on July 16, 1969, no human had ever set foot on another celestial body. Apollo 11 was a landmark step in our exploration of the Moon. The Diviner Lunar Radiometer instrument onboard the Lunar Reconnaissance Orbiter (LRO) has been mapping the temperature of the Moon since its launch in 2009. A picture is worth much more than a thousand words. Although scientists rely on a great variety of instruments to gather data about the Moon, detailed photographs remain one of the dominant sources of information. In 1972, the Apollo 17 crew set up science experiments and gathered samples of lunar rocks and regolith (soil) to bring to Earth for analysis. Forty years later, the Lunar Reconnaissance Orbiter Camera (LROC) imaged the landing site with enough detail to see the tracks of the rover and the footprints the astronauts left behind! These lithograph images show the face of the Moon as photographed by Apollo 11 astronauts. Astronauts Edwin Aldrin, Pete Conrad, and Harrison Schmitt are shown in other photographs on missions to the lunar surface. The shape of a planetary surface tells scientists a lot about how that surface formed and changed over time.
With the Lunar Reconnaissance Orbiter (LRO), NASA has returned to the Moon, enabling new discoveries and bringing the Moon back into the public eye.
The idea that black holes exist dates back to the 18th century with John Michell and astronomer Pierre-Simon de Laplace. Later, Einstein used his General Theory of Relativity to pave the way for these “dark bodies” to enter the realms of science. Black holes are not made up of matter, although they have a large mass. This explains why it has not yet been possible to observe them directly, but only via the effect of their gravity on the surroundings. They distort space and time and have a really irresistible attraction. It is hard to believe that the idea behind such exotic objects is already more than 230 years old. The birthplace of black holes is to be found in the peaceful village of Thornhill in the English county of Yorkshire. In the 18th century, this is where John Michell made his home, next to the medieval church. He was the rector here for 26 years and – as borne out by the inscription on his memorial in the church – highly respected as a scholar as well. In fact, Michell had studied not only theology, Hebrew, and Greek at Cambridge, but had also turned his attention to the natural sciences. His main interest was geology. In one treatise, which was published after the Lisbon earthquake of 1755, he claimed that subterranean waves existed which propagated such an earthquake. This theory caused quite a stir in the academic world, and led to John Michell being accepted as a Fellow of the Royal Society in London, not least because of this theory. He gave a talk before this renowned society in 1783 on the gravitation of stars. He used a thought experiment to explain that light would not leave the surface of a very massive star if the gravitation was sufficiently large. And he deduced: “Should such an object really exist in nature, its light could never reach us.” More than a decade after Michell, another scientist took up this same topic: in his book published in 1796 – Exposition du Système du Monde – the French mathematician, physicist, and astronomer Pierre-Simon de Laplace described the idea of massive stars from which no light could escape; this light consisted of corpuscles, very small particles, according to the generally accepted theory of Isaac Newton. Laplace called such an object corps obscur, i.e. dark body. The physical thought games played by John Michell and Pierre-Simon de Laplace did not meet with much response, however, and were quickly forgotten. It was left to Albert Einstein with his General Theory of Relativity to pave the way for these “dark bodies” to enter the realms of science – without this really being his intention. Although the existence of point singularities, in which matter and radiation from our world would simply disappear, can be derived from the equations he published in 1915, 1939 saw Einstein publish an article in the journal Annals of Mathematics in which he intended to prove that such black holes were impossible. But back in 1916, the astronomer Karl Schwarzschild had taken the Theory of General Relativity as his basis to calculate the size and the behavior of a non-rotating static black hole carrying no electric charge. His name has been given to the mass-dependent radius of such an object, inside which nothing can escape to the outside. This radius would be around one centimeter for Earth. Schwarzschild had a meteoric career during his short life. Born in 1873 as the eldest of six children of a German-Jewish family in Frankfurt, his talent emerged at an early age. 
He was only 16 when he published two papers in a renowned journal on the determination of the orbits of planets and binary stars. His subsequent career in astronomy took him via Munich, Vienna and Göttingen to Potsdam, where he became director of the astrophysical observatory in 1909. A few years later, in the middle of World War I – Karl Schwarzschild was an artillery second lieutenant on the Eastern front in Russia – he derived the exact solutions for Einstein's field equations. He died on 11 May 1916 from an auto-immune disease of the skin. The topic of black holes did not yet find its way into the scientific domain, however. If anything, the interest in Einstein's theoretical construct diminished more and more after the initial hype. This phase lasted approximately from the mid-1920s to the middle of the 1950s. Then followed what the physicist Clifford Will called the "renaissance" of the General Theory of Relativity. It now became important to describe objects which initially were only of interest to the theoreticians. White dwarfs, for example, or neutron stars, where matter exists in very extreme states. Their unexpected properties could be explained with the aid of new concepts derived from this theory. So black holes moved into the focus of attention as well. And scientists working on them became stars – like the British physicist Stephen Hawking. At the beginning of the 1970s, Uhuru heralded a new era for observational astronomy. The satellite surveyed the universe in the range of extremely short wavelength X-ray radiation. Uhuru discovered hundreds of sources, usually neutron stars. But among them was one particular object in the Cygnus (swan) constellation. It was given the designation Cygnus X-1. Researchers discovered it to be a giant star of around 30 solar masses which shone with a blue glow. An invisible object of around 15 solar masses orbits around it – apparently a black hole. This also explains the X-rays recorded: the gravity of the black hole attracts the matter of the main star. This collects in a so-called accretion disk around the massive monster, swirls around it at incredibly high speed, is heated up to several million degrees by the friction – and emits X-rays before it disappears into the space-time chasm. Cygnus X-1 is by no means the only black hole which astronomers have detected indirectly. So far, they have found a whole series of them with between 4 and 16 solar masses. But there is one which is much more massive. It is located at the heart of our Milky Way, around 26,000 light years away, and was discovered at the end of the 1990s. In 2002, a group including Reinhard Genzel from the Max Planck Institute for Extraterrestrial Physics succeeded in making a sensational discovery: at the Very Large Telescope of the European Southern Observatory (ESO), the scientists observed a star which had approached the galactic center to within a mere 17 light hours (just over 18 billion kilometers or 11 billion miles). During the months and years that followed, they were able to observe the orbital motion of this star, which was given the designation S2. It orbits the center of the galaxy (Sagittarius A*) once every 15.2 years at a speed of 5,000 kilometers (3,100 miles) per second. From the motion of S2 and other stars, the astronomers concluded that around 4.5 million solar masses are concentrated in a region the size of our planetary system. There is only one plausible explanation for such a density: a gigantic black hole.
Our Milky Way is no exception: scientists believe that these mass monsters lurk at the centers of most galaxies – some even much larger than Sagittarius A*. A black hole of approximately 6.6 billion solar masses is located inside the giant galaxy known as M87! Like Sagittarius A*, this stellar system 53 million light years away is also part of the observation program of the Event Horizon Telescope. With the discovery of gravitational waves in September 2015, the history of black holes reached its present climax. At that time, waves from two merging black holes with 36 and 29 solar masses were registered. This heralded a new era of astronomy, whose aim is to bring light into the dark universe, and also to shed light on these mysterious black holes.
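The Schwarzschild radius mentioned earlier follows from r_s = 2GM/c². A small sketch checking the quoted figures (roughly one centimeter for Earth's mass, and the sizes implied for Sagittarius A* and the M87 black hole); the constants and masses used below are standard reference values rather than numbers taken from this article:

```python
# Schwarzschild radius r_s = 2 * G * M / c**2 for a few masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
M_earth = 5.972e24   # Earth mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c ** 2

print(schwarzschild_radius(M_earth))        # ~0.009 m, i.e. about one centimeter
print(schwarzschild_radius(4.5e6 * M_sun))  # Sagittarius A*: ~1.3e10 m (about 13 million km)
print(schwarzschild_radius(6.6e9 * M_sun))  # M87: ~2e13 m (about 20 billion km)
```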
What is an Ideal Gas? There are many topics that chemistry students have to learn to prepare for their final examinations, and one of the most important is the derivation of the ideal gas equation. In this article, students will learn what an ideal gas is, what the ideal gas laws are, why the ideal gas equation is important, and what some important ideal gas examples are. Let us first define an ideal gas. An ideal gas is a theoretical gas that comprises a set of randomly moving point particles which interact with one another only through elastic collisions. It is easy to state this definition, but its meaning extends beyond that: the concept is important because an ideal gas obeys all the ideal gas laws exactly, provides a simple equation of state, and is amenable to analysis using statistical mechanics. Further, the requirement of zero interaction can be relaxed: a gas can still be treated as ideal if the interactions between its particles are perfectly elastic or can be regarded as simple point-like collisions. It is also important to note that under many conditions of pressure and temperature, real gases qualitatively behave like an ideal gas, with the gas molecules (or atoms, for a monatomic gas) playing the role of the ideal particles. If one relaxes the definition a little, then many gases such as oxygen, nitrogen, hydrogen, the noble gases, some heavier gases like carbon dioxide, and the mixture of gases in air can all be treated as ideal gases, within reasonable tolerances of the ideal gas definition and the ideal gas equation, over various parameter ranges around standard pressure and temperature. Gases are more likely to behave like an ideal gas, and so to follow the ideal gas law, at lower pressure and higher temperature. Can you guess why this happens? The reason is that the potential energy due to the intermolecular forces of attraction becomes less significant in comparison to the kinetic energy of the particles, and the size of the molecules becomes less significant compared to the empty space between them. This can also be illustrated by the fact that one mole of an ideal gas occupies a volume of 22.710947(13) liters at standard temperature and pressure (S.T.P.), where the standard temperature is 273.15 K and the absolute pressure is 10⁵ Pa; these values have been defined by IUPAC since 1982. Ideal Gas Examples. The definition of an ideal gas and related concepts were discussed above; the common gases just mentioned, such as oxygen, nitrogen, hydrogen, the noble gases, and air under ordinary conditions, are the standard examples of gases that behave nearly ideally. Ideal Gas Laws. In this section, students will find the answer to the question of what the ideal gas laws are. The ideal gas laws are laws that state the behaviour of ideal gases.
These laws were primarily formulated through the observational work of Boyle in the 17th century and Charles in the 18th century. Both of these ideal gas laws are stated below. 1. Boyle's Law: if a given mass of a gas is kept at a constant temperature, then the pressure of that gas is inversely proportional to its volume. 2. Charles's Law: for any given fixed mass of a gas held at constant pressure, the volume of the gas is directly proportional to its temperature. Ideal Gas Equation. Let us look at the ideal gas equation now. The ideal gas equation is written as PV = nRT. In this equation, P refers to the pressure of the ideal gas, V is the volume of the ideal gas, n is the total amount of ideal gas measured in moles, R is the universal gas constant, and T is the temperature. This means that, according to the ideal gas equation, the product of the pressure and volume of a gas is proportional to the product of the amount of gas, the universal gas constant, and the temperature. Here the universal gas constant is denoted by R; it equals the specific gas constant of a gas multiplied by that gas's molar mass. In the S.I. system, the value of the universal gas constant is 8.314 J mol⁻¹ K⁻¹. Deriving the Ideal Gas Equation. Let us assume that the pressure of a gas is P and its volume is V. Also, let the temperature be T, let R be the universal gas constant, and let n be the number of moles of gas. According to Boyle's Law, if n and T are kept constant, the volume is inversely proportional to the pressure exerted by the gas: V ∝ 1/P. According to Charles's Law, if P and n are kept constant, the volume of the gas is directly proportional to the temperature: V ∝ T. According to Avogadro's Law, if both P and T are kept constant, the volume of the gas is directly proportional to the number of moles of the gas: V ∝ n. If we combine all three proportionalities, we get V ∝ nT/P; introducing R as the constant of proportionality gives V = nRT/P, or PV = nRT. Fun Facts About Ideal Gases. Did you know that there are three basic classes of ideal gases? They are the classical or Maxwell-Boltzmann ideal gas, the ideal quantum Bose gas composed of bosons, and the ideal quantum Fermi gas composed of fermions. Most of these gases share the same characteristics, but there are some subtle differences that students should have a clear idea of. Tips for Students to Score Well. Study in short intervals: allow your brain and body to rest so you can absorb the material with full energy and attention. "For every 30 minutes you study, take a short 10–15 minute break to refresh," says Oxford Learning; "short study sessions are more efficient and allow you to maximize your study time." Exercise boosts blood flow to the brain, giving you more energy and allowing you to concentrate better, and concentration and focus might be improved by a yoga or stretching session. Spending too much time on one thing can cause you to lose focus, so one of the most crucial study strategies for college examinations is to switch subjects every 30 minutes or so to avoid learning fatigue. After you've given your brain a pause, proceed to difficult topics. By following this technique you will be able to focus well.
Eat healthy: to save time, students often start consuming junk food, which is not the best technique for studying. Instead, feed your body with a balanced diet of "brain foods" including fresh fruits and vegetables, as well as protein and healthy fats. The same may be said about sleep: get a sound sleep the night before the exam. Take the right approach: different sorts of college examinations require different studying techniques. Multiple-choice exams mean focusing on definitions and concepts, while essay-based assessments require a conceptual knowledge of the topic. Inquire about the exam's format with your professor so you know how to prepare, and go through previous years' question papers, as they give an overview of the exam. Students looking for study material can visit our website www.vedantu.com. Test your knowledge: once you've figured out the format, create a practice test based on what you expect the test will cover. This will give you a better understanding of the topic and help you choose what you should be studying; the practice test may then be used to quiz yourself and your study group. Vedantu provides students with a huge question bank with solutions, available in PDF format, which makes them easy to access. Why should you go with Vedantu? Vedantu has a number of skilled professors in all subjects of study. Vedantu allows users to select online tuition from professors and gives access to all the study resources. The study resources are simple to comprehend and can assist you in achieving high test results. You have the option of learning live online at any time and from any location, and it's entirely up to you how much time you want to devote to studying. Vedantu also ensures that a PTM (parent-teacher meeting) is held quarterly to share the students' progress reports and discuss their advancement with students and their parents. FAQs on the Derivation of the Ideal Gas Equation. 1. Let us assume that one mole of an ideal gas is filled inside a closed container. The container has a volume of 1 cubic meter, and the temperature is set at 300 K. Using this information, find the pressure exerted by the gas on the walls of the container. From PV = nRT, P × 1 = 1 × (25/3) × 300, so P = 2500 Pa (using the rounded value R ≈ 25/3 J mol⁻¹ K⁻¹). 2. What is R? R is called the gas constant. It was first worked out in the mid-1830s by Émile Clapeyron, in what is now better known as the ideal gas law. R is also regarded as a universal constant because it shows up in many situations that have nothing to do with gases, and depending on the units selected, its value can take many different forms. 3. What is the compressibility factor of an ideal gas? The compressibility factor of an ideal gas is always 1. The compressibility factor Z is defined through PV = ZnRT at a given pressure P and temperature T for a given composition, where P is the absolute pressure, T is the temperature, and R is the gas constant; for an ideal gas, Z = 1. 4. What is the compressibility factor for H₂ and He? The compressibility factor of H₂ and He is greater than 1. This is because they are real gases whose intermolecular forces of attraction are very weak, so the finite volume of their molecules dominates and they occupy more volume than an ideal gas would. 5. Cylinder A contains H₂ gas and cylinder B contains CH₄ gas; the two have the same mass and volume, at 30 K and 60 K respectively. Which of the cylinders will have a greater compressibility factor? Assume ideal behaviour for both gases.
Both cylinders will have the same compressibility factor, because it is assumed that both gases behave ideally, and PV = nRT is the equation for an ideal gas, so Z = PV/(nRT) = 1. Therefore, both gases have the same compressibility factor of 1.
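As a quick numeric check of PV = nRT using the container from FAQ 1 above (one mole, one cubic meter, 300 K), here is a minimal sketch; the helper function is illustrative, and the rounded value R ≈ 25/3 J mol⁻¹ K⁻¹ reproduces the 2500 Pa quoted there:

```python
# Ideal gas law rearranged for pressure: P = n * R * T / V
def pressure(n_moles, volume_m3, temperature_K, R=8.314):
    return n_moles * R * temperature_K / volume_m3

print(pressure(1, 1.0, 300))            # ~2494 Pa with R = 8.314 J mol^-1 K^-1
print(pressure(1, 1.0, 300, R=25 / 3))  # 2500 Pa with the rounded constant used in the FAQ
```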
The interaction of the solar wind—variable streams of charged plasma from the Sun—and Earth’s atmosphere produces auroras. These northern and southern lights dance across the night sky and have mesmerized and inspired observers for centuries. For scientists, this dance of light also leads to many questions about how space weather affects Earth’s atmosphere. In late January 2015, NASA-funded scientists launched a rocket-borne experiment into the northern lights in order to learn more about how they heat the planet’s atmosphere. The Auroral Spatial Structures Probe (ASSP) was launched at 5:41 a.m. on January 28 from the Poker Flat Research Range about 50 kilometers (30 miles) north of Fairbanks, Alaska. The research team captured time-lapse photos of the Oriole IV sounding rocket and payload amidst the aurora borealis (above) and at the moment of liftoff (below). “This is likely the most complicated mission the sounding rocket program has ever undertaken and it was not easy by any stretch,” said John Hickman, operations manager of the NASA sounding rocket program office at Wallops Flight Facility. “It was technically challenging every step of the way.” All of the rocket-borne experiments were launched from Poker Flat, a site often used by NASA for suborbital sounding rocket launches. The ASSP carried seven instruments to study the electromagnetic energy that can heat the thermosphere—the second highest layer of the atmosphere—during auroral events. The interaction of waves and particles from the solar wind, Earth’s magnetosphere, and the upper atmosphere can cause “Joule heating.” Essentially, the electrical currents on the edge of space run into a resistant medium (the air in the atmosphere) and generate heat in a process similar to that of a toaster coil or electric stove. This heating can expand the atmosphere upward and increase the friction, or drag, on spacecraft and satellites. The ASSP launch occurred just two days after the successful launches of the Mesosphere-Lower Thermosphere Turbulence Experiment (M-TeX) and the Mesospheric Inversion-layer Stratified Turbulence (MIST) experiment. Two pairs of instrumented rockets were launched about 30 minutes apart to study how turbulence is formed in the presence of inversion layers in the upper atmosphere. This turbulence causes particles to diffuse between atmospheric layers. The MIST launches included the release of harmless trimethyl aluminum vapor to help researchers trace diffusion at high altitude. “Recent solar storms have resulted in major changes to the composition of the upper atmosphere above 80 kilometers (50 miles), where enhancements in nitrogen compounds have been found,” said Richard Collins, upper atmospheric researcher from the University of Alaska. “These compounds can be transported into the middle atmosphere where they can contribute to ozone destruction. However, the meteorological conditions do not always allow such transport to occur. Thus, the impact of solar activity on the Earth is not just about how the Sun is a source of energetic particles, but also how the Earth’s meteorological conditions determine the fate of these particles in the atmosphere.” Top photographs by Jamie Adkins, NASA. Second photo by Lee Wingfield, NASA. Video copyright Ronn Murray. Caption by Mike Carlowicz, adapted from NASA press releases.
Photographers captured these digital photos of a four-stage Black Brant XII sounding rocket and the aurora borealis on December 12, 2010, during the NASA-funded Rocket Experiment for Neutral Upwelling (RENU).
Black Holes Common in Early Universe? Black holes may have been abundant among the first stars in the universe, helping explain the origin of the supermassive monsters that lurk at the heart of galaxies today, researchers say. An international team of astronomers has found that black holes likely contributed at least 20 percent of the infrared cosmic background, light emitted 400 million to 800 million years after the Big Bang that created our universe 13.8 billion years ago. These early pioneers may have been the seeds that later grew into supermassive black holes, which contain millions to billions of times the mass of our sun, researchers said. "It's a relief to find a possible signature of these seeds," study co-author Guenther Hasinger, director of the Institute for Astronomy at the University of Hawaii in Honolulu, told SPACE.com. The earliest black holes: black holes possess gravitational fields so powerful that not even light can escape. They are generally believed to form after a star dies in a gigantic explosion known as a supernova, which crushes the remaining core into a tiny but incredibly dense volume. It's unclear how black holes grow to supermassive proportions, but they can apparently do so quite rapidly. For example, some of them were already well-established by 800 million years or so after the Big Bang. To learn more about the earliest stars and the first black holes, the study team analyzed X-ray and infrared signals using NASA's Chandra X-ray Observatory and Spitzer Space Telescope, respectively. The X-rays that Chandra saw likely came from matter that became superheated as it rushed into black holes, researchers said. The infrared rays Spitzer detected, on the other hand, make up the cosmic infrared background, the collective light from clusters of massive stars in the universe's first stellar generations after the Big Bang, as well as from black holes, which generate vast amounts of energy as they devour gas. The investigators focused on a region known as the Extended Groth Strip, a well-analyzed slice of sky slightly larger than the full moon in the constellation Bootes. They concentrated on spots that shone powerfully in both infrared and X-ray light. Black holes are the only plausible sources that can produce both forms of light at the intensities they looked at, scientists said. "This measurement took us some five years to complete and the results came as a great surprise to us," lead author Nico Cappelluti, an astronomer with the National Institute of Astrophysics in Bologna, Italy, and the University of Maryland, Baltimore County, said in a statement. "Our results indicate black holes are responsible for at least 20 percent of the cosmic infrared background, which indicates intense activity from black holes feeding on gas during the epoch of the first stars," co-author Alexander Kashlinsky, of NASA's Goddard Space Flight Center in Greenbelt, Md., said in a statement. How monsters grow: these early objects could help explain the origins of supermassive black holes, researchers said, and also shed light on another puzzle from the universe's youth, a stage known as reionization. During this era, between about 150 million and 800 million years after the Big Bang, radiation ionized the neutrally charged hydrogen pervading the universe into its constituent protons and electrons. "It is currently thought generally, although not unanimously, that stars were responsible for reionization," Kashlinsky told SPACE.com.
“Our result indicates that black holes were a significant, potentially dominant, contributor to that process." It remains uncertain how massive these early black holes were. They could be mini-quasars containing a few tens of thousands of solar masses, born from the collapse of giant clouds of gas and dust. Or they could be micro-quasars of a few hundred solar masses, spawned from massive dying stars. Mini-quasars would be heavily obscured by clouds and thus likely would not factor into reionization very much, while micro-quasars could easily pump out enough radiation to make a key contribution, Hasinger said. The Euclid mission from the European Space Agency and the eROSITA mission from Russia and Germany might be able to shed more light on these early black holes. In addition, NASA's upcoming James Webb Space Telescope might be able to see these objects individually, confirming whether they are mini-quasars or micro-quasars, Hasinger said.
Algorithms are the backbone of computer programming and play a crucial role in solving problems and making decisions. They are step-by-step procedures or instructions designed to perform specific tasks or calculations. In this article, we will dive into the process of creating an algorithm, exploring the key steps and considerations involved. Understanding the Problem Before creating an algorithm, it is essential to have a clear understanding of the problem you are trying to solve. Break down the problem into smaller components and identify the inputs, outputs, and constraints. This analysis will help you define the scope of the algorithm and guide its development. Designing the Algorithm Once you have a clear understanding of the problem, you can start designing the algorithm. Here are the key steps involved in this process: 1. Define the problem: Clearly state the problem and its objectives. This step helps you focus on the specific task at hand. 2. Plan the approach: Determine the overall strategy or approach you will take to solve the problem. Consider different algorithms and choose the one that best suits the problem requirements. 3. Break it down: Divide the problem into smaller sub-problems or tasks. This step helps in managing complexity and allows you to tackle each component separately. 4. Define the steps: Specify the individual steps or actions required to solve each sub-problem. These steps should be precise, unambiguous, and ordered logically. 5. Use flowcharts or pseudocode: Visualize the algorithm using flowcharts or write it in pseudocode. Flowcharts provide a graphical representation of the algorithm’s flow, while pseudocode is a high-level description of the algorithm using simple language. Implementing the Algorithm After designing the algorithm, it’s time to implement it in a programming language. Here are the steps involved in the implementation process: 1. Choose a programming language: Select a programming language that is suitable for the problem at hand. Consider factors such as performance, availability of libraries or frameworks, and your familiarity with the language. 2. Write the code: Translate the algorithm into code using the chosen programming language. Follow the steps defined in the algorithm design phase and ensure the code accurately represents the logic. 3. Test and debug: Test the code with different inputs and verify that it produces the expected outputs. Debug any issues or errors that arise during the testing process. Optimizing and Refining the Algorithm Creating an algorithm is an iterative process, and it often requires optimization and refinement. Here are some techniques to improve the efficiency and effectiveness of your algorithm: 1. Analyze the complexity: Evaluate the algorithm’s time and space complexity to understand its efficiency. Look for opportunities to reduce unnecessary computations or memory usage. 2. Benchmark and compare: Compare the performance of your algorithm with other existing algorithms for the same problem. This analysis can help you identify areas for improvement. 3. Iterate and refine: Based on the analysis and benchmarking results, refine your algorithm by making necessary adjustments or optimizations. Repeat this process until you achieve the desired performance. Creating an algorithm involves understanding the problem, designing a logical approach, implementing it in a programming language, and refining it through optimization. 
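To make the design-then-implement steps concrete, here is a small illustrative sketch; the problem and the function name are invented for this example rather than taken from the article. The problem "find the largest value in a list" is broken into ordered steps, written as pseudocode-style comments, and then translated into Python and tested:

```python
# Problem: find the largest value in a list of numbers.
# Planned steps (from the design phase):
#   1. If the list is empty, report that there is no answer.
#   2. Assume the first element is the largest so far.
#   3. Walk through the remaining elements, updating the largest so far.
#   4. Return the largest value found.
def find_maximum(values):
    if not values:                  # step 1: handle the empty-input constraint
        raise ValueError("empty input")
    largest = values[0]             # step 2: initial candidate
    for value in values[1:]:        # step 3: scan the rest
        if value > largest:
            largest = value
    return largest                  # step 4

# Test with different inputs, as described in the test-and-debug step.
assert find_maximum([3, 7, 2, 9, 4]) == 9
assert find_maximum([-5, -1, -8]) == -1
```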
By following these steps, you can develop efficient and effective algorithms to solve a wide range of problems. – GeeksforGeeks: geeksforgeeks.org – Khan Academy: khanacademy.org – Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
Partial Quotients Method. Students will be able to divide using the partial quotients method. - Display a simple multiplication chart with no coloring or notations. - Conduct a whole-class discussion on which factors make it easiest to estimate products (e.g., 2, 5, 10, 100, etc.). Highlight multiples and circle related factors on the multiplication chart so students can visually identify the patterns they see in the chart. Students should understand that factors follow a pattern and that some numbers make it easier to compute or estimate products. - Tell students that today they'll use their understanding of factors and their multiples to apply the partial quotients method to solve division problems. Explicit Instruction/Teacher Modeling (10 minutes) - Define factor as a number multiplied by another number to get a product, the answer to a multiplication problem. Multiples are the list of products specific to a factor (e.g., 4, 8, and 12 are all multiples of four). When discussing division, the quotient is the answer to a division problem. Review other key terms about division, such as the divisor and dividend, if necessary. - Display the teaching component from the worksheet Partial Quotients with Two-Digit Divisors. Label the products (i.e., the totals that are subtracted), the quotients (the numbers that are added), the divisor, and the dividend. - Define the partial quotients method as a way to solve division problems by repeatedly finding pieces of the quotient, or partial quotients, and then adding up all the partial quotients to determine the answer to the division problem. - Model the given example again on the board, thinking aloud at each step and using the same color-coding shown in the example. (Tip: use correct place value terminology, such as, "I need to borrow one 100 to make 110 so that I can subtract 70 from 110.") - Discuss ideas that can make the partial quotients method efficient, such as choosing to multiply the divisor by 10 because it is a familiar calculation. (Tip: it can be helpful for students to list the multiples of the divisor, or to round the divisor up, when deciding which partial quotient to use throughout the process.) Guided Practice (15 minutes) - Ask students to use their whiteboards to redo the example problem. Tell students to continue using the terminology you modeled to explain their answer to their elbow partners when they are done. - Challenge a student to redo the problem for the class but create different products to subtract from the dividend. For example, the student can choose to multiply 20 x 17 instead of 10 x 17. - Distribute the worksheet Partial Quotients with Two-Digit Divisors and have students complete problems #1 and #2 in partnerships, making sure to write their answers on their own sheets. - Review the answers as a class, correcting misconceptions about the strategy as necessary. Independent Working Time (12 minutes) - Pass out three dice to each partnership and ask students to roll the three dice to create the dividend and then roll only two dice to get the divisor. Then, ask students to use a sheet of copy paper to solve the division problem using the partial quotients method. - Tell students to complete at least three problems. Challenge them to complete each problem on their own and then share it with their partner. Ask them to note which multiples they chose and how their choices differed from each other.
- Conduct a class discussion in which students make generalizations about the partial quotients method for division. Ask students how they were able to find the quotient faster, and have them think about which method was easier for them. Write their ideas on the board with tally marks next to each idea to represent the number of students who agree with it.
- Provide visuals and definitions for difficult vocabulary, and sentence stems, for ELs and students with disabilities.
- Allow students to use color-coding for each of the partial quotients and products. Ask students to line up the partial quotients so they do not drop digits as they add them up to get the final quotient.
- Have students model their thinking aloud by allowing them to share their explanations. Use some of their language and write it on the board for other students to consider.
- Challenge students to use more dice, or 10-sided dice, when creating their division problems.
- Ask students to connect their ideas about the partial quotients method to the partial products method. Allow them to share their ideas during the Review and Closing section if time allows.
- Have students consider other ways of solving each of the division problems on the worksheet.
- Hand out one large index card to each student and have them create a division problem, either using their shared dice or by writing a three-digit ÷ two-digit problem.
- Collect all the index cards and randomly pass out one card to each student. Have students complete the division problem with the partial quotients method. If time allows, ask them to solve the same problem on the back of the index card a different way (e.g., repeated subtraction, area model, standard algorithm, etc.).
Review and Closing (5 minutes)
- Have students write their name on their index card and give it to another student. Ask that student to check the problem by solving it themselves on their whiteboard.
- Ask students to consider the generalizations they created earlier in the lesson and add more or eliminate some of them.
- Ask students to vote on whether they enjoyed using the method.
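For readers who want to see the arithmetic of the strategy itself, here is a minimal Python sketch of the partial quotients method described in the lesson above. It is illustrative only: the function name, the choice of "easy" chunk sizes, and the 374 ÷ 17 example are my own, not part of the lesson materials.

```python
def partial_quotients(dividend, divisor):
    """Divide by repeatedly subtracting easy multiples of the divisor.

    Mirrors the classroom strategy: pick a friendly multiple (x100, x10,
    x5, x2, x1 of the divisor), subtract it from what is left, record the
    partial quotient, and add the partial quotients at the end.
    """
    easy_chunks = [100, 10, 5, 2, 1]      # friendly multipliers, largest first
    remaining = dividend
    partials = []                         # partial quotients recorded so far
    while remaining >= divisor:
        for chunk in easy_chunks:
            piece = chunk * divisor
            if piece <= remaining:        # largest easy multiple that still fits
                remaining -= piece
                partials.append(chunk)
                print(f"subtract {chunk} x {divisor} = {piece}, {remaining} left")
                break
    quotient = sum(partials)
    print(f"partial quotients {partials} add up to {quotient}, remainder {remaining}")
    return quotient, remaining

# Example: a three-digit dividend and a two-digit divisor, as in the dice activity.
partial_quotients(374, 17)   # 374 / 17 = 22 remainder 0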
Use the grid method for multiplication
In this multiplication worksheet, students use the grid method to solve multiplication problems. Students complete 36 multiplication problems. (3rd - 6th, Math)

Number & Operations: Multi-Digit Multiplication
A set of 14 lessons on multiplication would make a great learning experience for your fourth grade learners. After completing a pre-assessment, kids work through lessons that focus on multiples of 10, double-digit multiplication, and... (4th, Math, CCSS: Adaptable)

A Five Day Approach to Using Technology and Manipulatives to Explore Area and Perimeter
Young mathematicians build an understanding of area and perimeter with their own two hands in a series of interactive geometry lessons. Through the use of different math manipulatives, children investigate the properties of rectangles,... (3rd - 6th, Math, CCSS: Adaptable)

Solve Multiplication Problems: Using Skip Counting
Explore yet another use of number lines as young mathematicians learn to answer multiplication problems by skip counting. This problem solving approach is clearly modeled with the examples presented in the third video of this series... (4 mins, 2nd - 4th, Math, CCSS: Designed)

Understand Multiplication Problems: Using Equal Groups
Understanding multiplication as the sum of equal sized groups is a big step for young mathematicians. This concept is clearly demonstrated in the first video of this series, as students learn to write multiplication equations to... (3 mins, 2nd - 4th, Math, CCSS: Designed)
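The skip-counting approach described in the video blurbs above amounts to repeated jumps along a number line. A tiny Python sketch of that idea, purely for illustration (the function name and example are mine):

```python
def multiply_by_skip_counting(groups, group_size):
    """Multiply by skip counting: start at 0 and take `groups` jumps of `group_size`."""
    position = 0
    jumps = []
    for _ in range(groups):
        position += group_size
        jumps.append(position)
    print(f"skip counting by {group_size}: {jumps}")
    return position

multiply_by_skip_counting(4, 3)   # jumps land on 3, 6, 9, 12, so 4 x 3 = 12
```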
The James Webb Space Telescope headed to space Dec. 25, 2021. With it, astronomers hope to find the first galaxies to form in the universe, and they will search for Earth-like atmospheres around other planets and accomplish many other scientific goals. I am an astronomer and the principal investigator for the Near-Infrared Camera — or NIRCam — aboard the Webb telescope. I have participated in the development and testing of both my camera and the telescope as a whole. To see deep into the universe, the telescope has a very large mirror and must be kept extremely cold. But getting a fragile piece of equipment like this to space is no simple task. There have been many challenges my colleagues and I have had to overcome to design, test and, soon, launch and align the most powerful space telescope ever built. YOUNG GALAXIES AND ALIEN ATMOSPHERES The Webb telescope has a mirror over 20 feet across, a tennis-court-sized sunshade to block solar radiation and four separate camera and sensor systems to collect data. It works kind of like a satellite dish. Light from a star or galaxy will enter the mouth of the telescope and bounce off the primary mirror toward the four sensors: NIRCam, which takes images in the near infrared region; the Near Infrared Spectrograph, which can split the light from a selection of sources into its constituent colors and measure the strength of each; the Mid-Infrared Instrument, which takes images and measures wavelengths in the middle infrared region; and the Near Infrared Imaging Slitless Spectrograph, which splits and measures the light of anything scientists point the satellite at. This design will allow scientists to study how stars form in the Milky Way and the atmospheres of planets outside the solar system. It may even be possible to figure out the composition of these atmospheres. Ever since Edwin Hubble proved that distant galaxies are just like the Milky Way, astronomers have asked: How old are the oldest galaxies? How did they first form? And how have they changed over time? The Webb telescope was originally dubbed the “First Light Machine” because it is designed to answer these very questions. One of the main goals of the telescope is to study distant galaxies close to the edge of the observable universe. It takes billions of years for the light from these galaxies to cross the universe and reach Earth. I estimate that images my colleagues and I will collect with NIRCam could show protogalaxies that formed a mere 300 million years after the Big Bang — when they were just 2% of their current age. Finding the first aggregations of stars that formed after the Big Bang is a daunting task for a simple reason: These protogalaxies are very far away and so appear to be very faint. Webb’s mirror is made of 18 segments and can collect more than six times as much light as the Hubble Space Telescope mirror. Distant objects also appear to be very small, so the telescope must be able to focus the light as tightly as possible. The Webb telescope also has to cope with another complication: Since the universe is expanding, the galaxies that scientists will study with it are moving away from Earth, and the Doppler effect comes into play. Just as the pitch of an ambulance’s siren shifts down and becomes deeper when it passes and starts moving away from you, the wavelength of light from distant galaxies shifts down from visible light to infrared light. Webb detects infrared light — it is essentially a giant heat telescope. 
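To make the redshift point concrete, here is a rough Python sketch of how the observed wavelength scales with redshift, using the standard relation observed wavelength = (1 + z) x emitted wavelength. The redshift value used for a galaxy seen roughly 300 million years after the Big Bang (z of about 13) is an assumption chosen for illustration, not a figure taken from the article.

```python
def observed_wavelength_nm(emitted_nm, z):
    """Cosmological redshift: the observed wavelength is stretched by a factor (1 + z)."""
    return emitted_nm * (1.0 + z)

# Ultraviolet light typical of hot young stars, stretched into Webb's infrared range.
z = 13.0                                   # assumed redshift for a very early galaxy
for emitted in (121.6, 150.0, 300.0):      # Lyman-alpha and two other UV wavelengths, in nm
    obs = observed_wavelength_nm(emitted, z)
    print(f"{emitted:6.1f} nm emitted -> about {obs / 1000:.1f} micrometers observed")
```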
To “see” faint galaxies in infrared light, the telescope needs to be exceptionally cold or else all it would see would be its own infrared radiation. This is where the heat shield comes in. The shield is made of a thin plastic coated with aluminum. It is five layers thick and measures 46.5 feet (17.2 meters) by 69.5 feet (21.2 meters) and will keep the mirror and sensors at minus 390 degrees Fahrenheit (minus 234 Celsius). The Webb telescope is an incredible feat of engineering, but how does one get such a thing safely to space and guarantee that it will work? TEST AND REHEARSE The James Webb Space Telescope will orbit the sun from a point a million miles from Earth — about 4,500 times more distant than the International Space Station, and much too far to be serviced by astronauts. Over the past 12 years, the team has tested the telescope and instruments, shaken them to simulate the rocket launch and tested them again. Everything has been cooled and tested under the extreme operating conditions of orbit. I will never forget when my team was in Houston testing the NIRCam using a chamber designed for the Apollo lunar rover. It was the first time that my camera detected light that had bounced off the telescope’s mirror, and we couldn’t have been happier — even though Hurricane Harvey was fighting us outside. After testing came the rehearsals. The telescope will be controlled remotely by commands sent over a radio link. But because the telescope will be so far away, it takes six seconds for a signal to go one way — there is no real-time control. So for the past three years, my team and I have been going to the Space Telescope Science Institute in Baltimore and running rehearsal missions on a simulator covering everything from launch to routine science operations. The team even has practiced dealing with potential problems that the test organizers throw at us and cutely call “anomalies.” SOME ALIGNMENT REQUIRED The Webb team will continue to rehearse and practice until the launch date in December, but our work is far from done after Webb is folded and loaded into the rocket. We need to wait 35 days after launch for the parts to cool before beginning alignment. After the mirror unfolds, NIRCam will snap sequences of high-resolution images of the individual mirror segments. The telescope team will analyze the images and tell motors to adjust the segments in steps measured in billionths of a meter. Once the motors move the mirrors into position, we will confirm that telescope alignment is perfect. This task is so mission critical that there are two identical copies of NIRCam on board — if one fails, the other can take over the alignment job. This alignment and checkout process should take six months. When finished, Webb will begin collecting data. After 20 years of work, astronomers will at last have a telescope able to peer into the most distant reaches of the universe. Marcia Rieke receives funding from NASA, and her endowed chair is partially funded by the Heising-Simon Foundation. In the next issue, Arizona Alumni Magazine will report on what happens after the launch and more. This article was first published in The Conversation.
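Two of the numbers quoted above (the roughly six-second one-way signal delay and the "more than six times" light-collecting advantage over Hubble) can be checked with a short back-of-the-envelope calculation. The straight distance-over-speed-of-light figure comes out a little over five seconds; the article's "six seconds" is a round number. The mirror areas below are approximate published values (about 25 m² for Webb, about 4 m² for Hubble) used only for illustration.

```python
# One-way light-travel time from Earth to a point about one million miles away.
distance_km = 1_000_000 * 1.609344          # one million miles in kilometres
speed_of_light_km_s = 299_792.458
delay_s = distance_km / speed_of_light_km_s
print(f"one-way signal delay: about {delay_s:.1f} seconds")   # roughly 5.4 s

# Light-collecting area of Webb vs. Hubble (approximate values).
webb_area_m2 = 25.4       # effective collecting area of the 18-segment mirror (approx.)
hubble_area_m2 = 4.0      # Hubble's 2.4 m mirror, minus obstructions (approx.)
print(f"Webb collects roughly {webb_area_m2 / hubble_area_m2:.1f}x as much light as Hubble")
```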
In flow chemistry, a chemical reaction is run in a continuously flowing stream rather than in batch production. In other words, pumps move fluid into a tube, and where tubes join one another, the fluids contact one another. If these fluids are reactive, a reaction takes place. Flow chemistry is a well-established technique for use at a large scale when manufacturing large quantities of a given material. However, the term has only recently been coined for its application on a laboratory scale. Often, microreactors are used.
- 1 Batch vs. flow
- 2 Running flow reactions
- 3 Continuous flow reactors
- 4 Key application areas
- 5 Segmented Flow Chemistry
- 6 See also
- 7 References
- 8 External links
Batch vs. flow
Comparing parameters in batch vs. flow:
- Reaction stoichiometry. In batch production this is defined by the concentration of chemical reagents and their volumetric ratio. In flow this is defined by the concentration of reagents and the ratio of their flow rates.
- Residence time. In batch production this is determined by how long a vessel is held at a given temperature. In flow, the volumetric residence time is used, given by the ratio of the reactor volume to the overall flow rate, since plug flow reactors are most often used.
Running flow reactions
- Reaction temperature can be raised above the solvent's boiling point, as the volume of the laboratory devices is typically small. Typically, non-compressible fluids are used with no gas volume, so that the expansion factor as a function of pressure is small.
- Mixing can be achieved within seconds at the smaller scales used in flow chemistry.
- Heat transfer is intensified, mostly because the area-to-volume ratio is large. Endothermic and exothermic reactions can thereby be thermostated. The temperature gradient can be steep, allowing efficient control over reaction time.
- Safety is increased:
- The thermal mass of the system is dominated by the apparatus, making thermal runaway unlikely.
- The smaller reaction volume is also considered a safety benefit.
- The reactor operates under steady-state conditions.
- Flow reactions can be automated with far less effort than batch reactions. This allows for unattended operation and experimental planning. By coupling the output of the reactor to a detector system, it is possible to go further and create an automated system which can sequentially investigate a range of possible reaction parameters (varying stoichiometry, residence time and temperature) and therefore explore reaction parameters with little or no intervention. Typical drivers are higher yields/selectivities, reduced manpower, or a higher safety level.
- Multi-step reactions can be arranged in a continuous sequence. This can be especially beneficial if intermediate compounds are unstable, toxic, or sensitive to air, since they will exist only momentarily and in very small quantities.
- Position along the flowing stream and reaction time point are directly related to one another. This means that it is possible to arrange the system such that further reagents can be introduced into the flowing reaction stream at precisely the time point in the reaction that is desired.
- It is possible to arrange a flowing system such that purification is coupled with the reaction.
There are three primary techniques that are used: - Solid phase scavenging - Chromatographic separation - Liquid/Liquid Extraction - Reactions which involve reagents containing dissolved gases are easily handled, whereas in batch a pressurized "bomb" reactor would be necessary. - Multi phase liquid reactions (e.g. phase transfer catalysis) can be performed in a straightforward way, with high reproducibility over a range of scales and conditions. - Scale up of a proven reaction can be achieved rapidly with little or no process development work, by either changing the reactor volume or by running several reactors in parallel, provided that flows are recalculated to achieve the same residence times. - Dedicated equipment is needed for precise continuous dosing (e.g. pumps), connections, etc. - Start up and shut down procedures have to be established. - Scale up of micro effects such as the high area to volume ratio is not possible and economy of scale may not apply. Typically, a scale up leads to a dedicated plant. - Safety issues for the storage of reactive material still apply. The drawbacks have been discussed in view of establishing small scale continuous production processes by Pashkova and Greiner. Continuous flow reactors Continuous reactors are typically tube like and manufactured from non-reactive materials such as stainless steel, glass and polymers. Mixing methods include diffusion alone (if the diameter of the reactor is small e.g. <1 mm, such as in microreactors) and static mixers. Continuous flow reactors allow good control over reaction conditions including heat transfer, time and mixing. The residence time of the reagents in the reactor (i.e. the amount of time that the reaction is heated or cooled) is calculated from the volume of the reactor and the flow rate through it: - Residence time = Reactor Volume / Flow Rate Therefore, to achieve a longer residence time, reagents can be pumped more slowly and/or a larger volume reactor used. Production rates can vary from nano liters to liters per minute. Some examples of flow reactors are spinning disk reactors (Colin Ramshaw); spinning tube reactors; multi-cell flow reactors; oscillatory flow reactors; microreactors; hex reactors; and 'aspirator reactors'. In an aspirator reactor a pump propels one reagent, which causes a reactant to be sucked in. This type of reactor was patented around 1941 by the Nobel company for the production of nitroglycerin. Flow reactor scale The smaller scale of micro flow reactors or microreactors can make them ideal for process development experiments. Although it is possible to operate flow processes at a ton scale, synthetic efficiency benefits from improved thermal and mass transfer as well as mass transport. Key application areas Use of gases in flow Laboratory scale flow reactors are ideal systems for using gases, particularly those that are toxic or associated with other hazards. The gas reactions that have been most successfully adapted to flow are Hydrogenation and carbonylation although work has also been performed using other gases, e.g. ethylene and ozone. Reasons for the suitability of flow systems for hazardous gas handling are: - Systems allow the use of a fixed bed catalyst. 
Combined with low solution concentrations, this allows all of the compound to be adsorbed onto the catalyst in the presence of gas.
- Comparatively small amounts of gas are continually exhausted by the system, eliminating the need for many of the special precautions normally required for handling toxic and/or flammable gases.
- The addition of pressure means that a far greater proportion of the gas will be in solution during the reaction than is the case conventionally.
- The greatly enhanced mixing of the solid, liquid and gaseous phases allows the researcher to exploit the kinetic benefits of elevated temperatures without being concerned about the gas being displaced from solution.
Process development changes from a serial approach to a parallel approach. In batch, the chemist works first, followed by the chemical engineer. In flow chemistry this changes to a parallel approach, where chemist and chemical engineer work interactively. Typically there is a plant setup in the lab, which is a tool for both. This setup can be either commercial or non-commercial. The development scale can be small (ml/hour) for idea verification using a chip system, and in the range of a couple of liters per hour for scalable systems like the flow miniplant technology. Chip systems are mainly used for liquid-liquid applications, while flow miniplant systems can deal with solids or viscous material.
Scale up of microwave reactions
Microwave reactors are frequently used for small scale batch chemistry. However, due to the extremes of temperature and pressure reached in a microwave, it is often difficult to transfer these reactions to conventional non-microwave apparatus for subsequent development, leading to difficulties with scaling studies. A flow reactor with suitable high temperature ability and pressure control can directly and accurately mimic the conditions created in a microwave reactor. This eases the synthesis of larger quantities by extending reaction time.
Manufacturing scale solutions
Flow systems can be scaled to the tons per hour scale. Plant redesign (batch to continuous for an existing plant), unit operation (exchanging only one reaction step) and modular multi-purpose (cutting a continuous plant into modular units) are typical realization solutions for flow processes.
Other uses of flow
It is possible to run experiments in flow using more sophisticated techniques, such as solid phase chemistries. Solid phase reagents, catalysts or scavengers can be used in solution and pumped through glass columns, for example, in the synthesis of the alkaloid natural product oxomaritidine using solid phase chemistries. Continuous flow techniques have also been used for controlled generation of nanoparticles. The very rapid mixing and excellent temperature control of microreactors are able to give consistent and narrow particle size distributions of nanoparticles.
Segmented Flow Chemistry
As discussed above, running experiments in continuous flow systems is difficult, especially when one is developing new chemical reactions, which requires screening of multiple components, varying stoichiometry, temperature and residence time. In continuous flow, experiments are performed serially, which means only one experimental condition can be tested at a time. Experimental throughput is highly variable, as typically five times the residence time is needed to reach steady state. For temperature variation, the thermal mass of the reactor as well as peripherals such as fluid baths need to be considered.
More often than not, the analysis time needs to be considered. Segmented flow is an approach that improves upon the speed in which screening, optimization and libraries can be conducted in flow chemistry. Segmented flow uses a "Plug Flow" approach where specific volumetric experimental mixtures are created and then injected into a high pressure flow reactor. Diffusion of the segment (reaction mixture) is minimized by using immiscible solvent on the leading and rear ends of the segment. One of the primary benefits of segmented flow chemistry is the ability to run experiments in a serial/parallel manner where experiments that share the same residence time and temperature can be repeatedly created and injected. In addition, the volume of each experiment is independent to that of the volume of the flow tube thereby saving a significant amount of reactant per experiment. When performing reaction screening and libraries, segment composition is typically varied by composition of matter. When performing reaction optimization, segments vary by stoichiometry. Segmented flow is also used with online LCMS, both analytical and preparative where the segments are detected when exiting the reactor using UV and subsequently diluted for analytical LCMS or injected directly for preparative LCMS. - A. Kirschning (Editor): Chemistry in flow systems and Chemistry in flow systems II Thematic Series in the Open Access Beilstein Journal of Organic Chemistry. - Smith, Christopher D.; Baxendale, Ian R.; Tranmer, Geoffrey K.; Baumann, Marcus; Smith, Stephen C.; Lewthwaite, Russell A. & Ley, Steven V. (2007). "Tagged phosphine reagents to assist reaction work-up by phase-switched scavenging using a modular flow reactor". Org. Biomol. Chem. 5: 1562–1568. doi:10.1039/b703033a. Retrieved 10 May 2013. - Pashkova, A.; Greiner, L. (2011). "Towards Small-Scale Continuous Chemical Production: Technology Gaps and Challenges". Chemie Ingenieur Technik: n/a. doi:10.1002/cite.201100037. - Oxley, Paul; Brechtelsbauer, Clemens; Ricard, Francois; Lewis, Norman; Ramshaw, Colin (2000). "Evaluation of Spinning Disk Reactor Technology for the Manufacture of Pharmaceuticals" (PDF). Ind. Eng. Chem. Res. 39 (7): 2175–2182. doi:10.1021/ie990869u. Retrieved 10 May 2013. - Csajági, Csaba; Borcsek, Bernadett; Niesz, Krisztián; Kovács, Ildikó; Székelyhidi, Zsolt; Bajkó, Zoltán; Ürge, László; Darvas, Ferenc (22 March 2008). "High-Efficiency Aminocarbonylation by Introducing CO to a Pressurized Continuous Flow Reactor". Org. Lett. 10 (8): 1589–1592. doi:10.1021/ol7030894. Retrieved 10 May 2013. - Mercadante, Michael A.; Leadbeater, Nicholas E. (July 2011). "Continuous-flow, palladium-catalysed alkoxycarbonylation reactions using a prototype reactor in which it is possible to load gas and heat simultaneously". Org. Biomol. Chem. 9: 6575–6578. doi:10.1039/c1ob05808h. Retrieved 10 May 2013. - Roydhouse, M. D.; Ghaini, A.; Constantinou, A.; Cantu-Perez, A.; Motherwell, W. B.; Gavriilidis, A. (23 June 2011). "Ozonolysis in Flow Using Capillary Reactors". Org. Process Res. Dev. 15 (5): 989–996. doi:10.1021/op200036d. Retrieved 10 May 2013. - Damm, M.; Glasnov, T. N.; Kappe, C. O. (2010). "Translating High-Temperature Microwave Chemistry to Scalable Continuous Flow Processes". Organic Process Research & Development 14: 215. doi:10.1021/op900297e. - Baxendale, Ian R.; Jon Deeley; Charlotte M. Griffiths-Jones; Steven V. Ley; Steen Saaby; Geoffrey K. Tranmer (2006). 
"A flow process for the multi-step synthesis of the alkaloid natural product oxomaritidine: a new paradigm for molecular assembly". Chemical Communications (24): 2566–2568. doi:10.1039/B600382F. Retrieved 9 May 2013. - Hornung, Christian H.; Guerrero-Sanchez, Carlos; Brasholz, Malte; Saubern, Simon; Chiefari, John; Moad, Graeme; Rizzardo, Ezio & Thang, San H. (March 2011). "Controlled RAFT Polymerization in a Continuous Flow Microreactor". Org. Process Res. Dev. 15 (3): 593–601. doi:10.1021/op1003314. Retrieved 10 May 2013. - Vandenbergh, Joke; Junkers, Thomas (August 2012). "Use of a continuous-flow microreactor for thiol–ene functionalization of RAFT-derived poly(butyl acrylate)". Polym. Chem. 3: 2739–2742. doi:10.1039/c2py20423a. Retrieved 10 May 2013. - Seyler, Helga; Jones, David J.; Holmes, Andrew B.; Wong, Wallace W. H. (2012). "Continuous flow synthesis of conjugated polymers". Chem. Commun. 48: 1598–1600. doi:10.1039/c1cc14315h. Retrieved 10 May 2013. - Marek Wojnicki; Krzysztof Pacławski; Magdalena Luty-Błocho; Krzysztof Fitzner; Paul Oakley; Alan Stretton (2009). "Synthesis of Gold Nanoparticles in a Flow Microreactor". Rudy Metale. - Continuous flow multi-step organic synthesis - a Chemical Science Mini Review by Damien Webb and Timothy F. Jamison discussing the current state of the art and highlighting recent progress and current challenges facing the emerging area of continuous flow techniques for multi-step synthesis. Published by the Royal Society of Chemistry - Continuous flow reactors: a perspective Review by Paul Watts and Charlotte Wiles. Published by the Royal Society of Chemistry - Flow Chemistry: Continuous Synthesis and Purification of Pharmaceuticals and Fine Chemicals Short Course offered at MIT by Professors Timothy Jamison and Klavs Jensen] - Flow Chemistry Reactions & Applications - a list of flow chemistry application notes and reactions from Syrris using modern flow chemistry reactors - Flow Chemistry Applications - a list from Chemtrix
Introduction: In the preceding sections, we studied parabolas with vertices at the origin, and ellipses and hyperbolas with centers at the origin. We restricted ourselves to these cases because these equations have the simplest form. In this section, we consider conics whose vertices and centers are not necessarily at the origin, and determine how this affects their equations. In Section 3.5, we studied transformations of functions that have the effect of shifting their graphs. In general, for any equation in x and y, if we replace x by x – h or by x + h, the graph of the new equation is simply the old graph shifted horizontally. If y is replaced by y – k or by y + k, the graph is shifted vertically.
Shifting Graphs of Equations: If h and k are positive real numbers, then replacing x by x – h or by x + h and replacing y by y – k or by y + k has the following effect(s) on the graph of any equation in x and y:
- x replaced by x – h: shifted right h units
- x replaced by x + h: shifted left h units
- y replaced by y – k: shifted upward k units
- y replaced by y + k: shifted downward k units
Shifted Ellipses: Let's apply horizontal and vertical shifting to the ellipse with equation x²/4 + y²/9 = 1. If we shift it so that its center is at the point (h, k) instead of at the origin, its equation becomes (x – h)²/4 + (y – k)²/9 = 1.
E.g. 1—Sketching the Graph of a Shifted Ellipse: Sketch the graph of the ellipse (x + 1)²/4 + (y – 2)²/9 = 1 and determine the coordinates of the foci. The ellipse is shifted so that its center is at (–1, 2). It is obtained from the ellipse x²/4 + y²/9 = 1 by shifting it left 1 unit and upward 2 units. The endpoints of the minor and major axes of the unshifted ellipse are (2, 0), (–2, 0), (0, 3), (0, –3). We apply the required shifts to these points to obtain the corresponding points on the shifted ellipse:
(2, 0) → (2 – 1, 0 + 2) = (1, 2)
(–2, 0) → (–2 – 1, 0 + 2) = (–3, 2)
(0, 3) → (0 – 1, 3 + 2) = (–1, 5)
(0, –3) → (0 – 1, –3 + 2) = (–1, –1)
This helps us sketch the graph. To find the foci of the shifted ellipse, we first find the foci of the ellipse with center at the origin. As a² = 9 and b² = 4, we have c² = 9 – 4 = 5, so c = √5 and the foci of the unshifted ellipse are (0, ±√5). Shifting left 1 unit and upward 2 units, the foci of the shifted ellipse are (–1, 2 + √5) and (–1, 2 – √5).
Shifted Parabolas: Applying shifts to parabolas leads to the equations and graphs shown.
E.g. 2—Graphing a Shifted Parabola: Determine the vertex, focus, and directrix, and sketch the graph of the parabola x² – 4x = 8y – 28. We complete the square in x to put this equation into one of the forms in Figure 3: x² – 4x + 4 = 8y – 28 + 4, so (x – 2)² = 8y – 24, that is, (x – 2)² = 8(y – 3). This parabola opens upward with vertex at (2, 3). It is obtained from the parabola x² = 8y by shifting right 2 units and upward 3 units. Since 4p = 8, we have p = 2. Thus, the focus is 2 units above the vertex and the directrix is 2 units below the vertex. So, the focus is (2, 5) and the directrix is y = 1.
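A short Python check of Example 1's focus calculation. The helper below assumes an ellipse of the form (x – h)²/b² + (y – k)²/a² = 1 with a² > b² (vertical major axis, as in the example); the function name is mine, not from the text.

```python
from math import sqrt

def foci_of_shifted_ellipse(h, k, a2, b2):
    """Foci of (x - h)^2 / b2 + (y - k)^2 / a2 = 1 with a2 > b2 (vertical major axis).

    c^2 = a^2 - b^2, and the foci lie c units above and below the center (h, k).
    """
    c = sqrt(a2 - b2)
    return (h, k + c), (h, k - c)

# Example 1: center (-1, 2), a^2 = 9, b^2 = 4  ->  foci (-1, 2 +/- sqrt(5))
print(foci_of_shifted_ellipse(-1, 2, 9, 4))
```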
Shifted Hyperbolas: Applying shifts to hyperbolas leads to the equations and graphs shown.
E.g. 3—Graphing a Shifted Hyperbola: A shifted conic has the equation 9x² – 72x – 16y² – 32y = 16. (a) Complete the square in x and y to show that the equation represents a hyperbola. (b) Find the center, vertices, foci, and asymptotes of the hyperbola and sketch its graph. (c) Draw the graph using a graphing calculator.
Example (a): We complete the squares in both x and y: 9(x – 4)² – 16(y + 1)² = 144, that is, (x – 4)²/16 – (y + 1)²/9 = 1. Comparing to Figure 5 (a), we see that this is the equation of a shifted hyperbola.
Example (b): The shifted hyperbola has center (4, –1) and a horizontal transverse axis. Its graph will have the same shape as the unshifted hyperbola x²/16 – y²/9 = 1. Since a² = 16 and b² = 9, we have a = 4, b = 3, and c² = 16 + 9 = 25, so c = 5. The foci lie 5 units to the left and to the right of the center. The vertices lie 4 units to either side of the center. The asymptotes of the unshifted hyperbola are y = ±¾x. So, the asymptotes of the shifted hyperbola are y + 1 = ±¾(x – 4), which gives y = ¾x – 4 and y = –¾x + 2. To help us sketch the hyperbola, we draw the central box. It extends 4 units left and right from the center and 3 units upward and downward from the center. We then draw the asymptotes and complete the graph as shown.
Example (c): To draw the graph using a graphing calculator, we need to solve for y. The given equation is a quadratic equation in y, so we use the quadratic formula to solve for y. Writing the equation in the form 16y² + 32y – 9x² + 72x + 16 = 0, we get y = –1 ± ¾√(x² – 8x).
General Equation of a Shifted Conic: If we expand and simplify the equations of any of the shifted conics illustrated in Figures 1, 3, and 5, then we will always obtain an equation of the form Ax² + Cy² + Dx + Ey + F = 0, where A and C are not both 0.
Degenerate Conic: Conversely, if we begin with an equation of this form, then we can complete the square in x and y to see which type of conic section the equation represents. In some cases, the graph of the equation turns out to be just a pair of lines, a single point, or there may be no graph at all. These cases are called degenerate conics. If the equation is not degenerate, then we can tell whether it represents a parabola, an ellipse, or a hyperbola simply by examining the signs of A and C. The graph of the equation Ax² + Cy² + Dx + Ey + F = 0, where A and C are not both 0, is a conic or a degenerate conic. In the nondegenerate cases, the graph is:
- a parabola if A or C is 0,
- an ellipse if A and C have the same sign (or a circle if A = C),
- a hyperbola if A and C have opposite signs.
E.g. 4—Equation that Leads to a Degenerate Conic: Sketch the graph of the equation 9x² – y² + 18x + 6y = 0. The coefficients of x² and y² are of opposite sign, so it looks as if the equation should represent a hyperbola (like the equation of Example 3). To see whether this is in fact the case, we complete the squares.
Completing the squares gives 9(x + 1)² – (y – 3)² = 0. For this to fit the form of the equation of a hyperbola, we would need a nonzero constant to the right of the equal sign. In fact, further analysis shows that this is the equation of a pair of intersecting lines:
(y – 3)² = 9(x + 1)²
y – 3 = ±3(x + 1)
y = 3x + 6 or y = –3x
The lines can then be graphed.
Degenerate Hyperbola: The equation in Example 4 looked at first glance like the equation of a hyperbola. However, it turned out to represent simply a pair of lines. Hence, we refer to its graph as a degenerate hyperbola.
Degenerate Conics: Degenerate ellipses and parabolas can also arise when we complete the square(s) in an equation that seems to represent a conic. For example, the equation 4x² + y² – 8x + 2y + 6 = 0 looks as if it should represent an ellipse, because the coefficients of x² and y² have the same sign. However, completing the squares leads to 4(x – 1)² + (y + 1)² = –1. This has no solution at all (since the sum of two squares cannot be negative). This equation is therefore degenerate.
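The sign test from the General Equation of a Shifted Conic discussion above is easy to mechanize. The sketch below classifies Ax² + Cy² + Dx + Ey + F = 0 by the signs of A and C only, so it reports the type the equation looks like; as Example 4 shows, completing the square is still needed to rule out a degenerate case. The function name is mine.

```python
def conic_type(A, C):
    """Classify Ax^2 + Cy^2 + Dx + Ey + F = 0 by the signs of A and C.

    Assumes A and C are not both zero; degenerate cases are not detected here.
    """
    if A == 0 or C == 0:
        return "parabola"
    if A == C:
        return "circle"
    if A * C > 0:
        return "ellipse"
    return "hyperbola"

print(conic_type(1, 0))    # parabola-type, e.g. x^2 - 4x = 8y - 28 rearranged
print(conic_type(4, 1))    # ellipse-type, e.g. 4x^2 + y^2 - 8x + 2y + 6 = 0 (degenerate!)
print(conic_type(9, -16))  # hyperbola-type, e.g. 9x^2 - 72x - 16y^2 - 32y = 16
```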
From Wikipedia, the free encyclopedia In fluid dynamics, Bernoulli's principle states that an increase in the speed of a fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid's potential energy. The principle is named after Daniel Bernoulli who published it in his book Hydrodynamica in 1738. Bernoulli's principle can be applied to various types of fluid flow, resulting in various forms of Bernoulli's equation; there are different forms of Bernoulli's equation for different types of flow. The simple form of Bernoulli's equation is valid for incompressible flows (e.g. most liquid flows and gases moving at low Mach number). More advanced forms may be applied to compressible flows at higher Mach numbers (see the derivations of the Bernoulli equation). Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid along a streamline is the same at all points on that streamline. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. Thus an increase in the speed of the fluid – implying an increase in both its dynamic pressure and kinetic energy – occurs with a simultaneous decrease in (the sum of) its static pressure, potential energy and internal energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same on all streamlines because in a reservoir the energy per unit volume (the sum of pressure and gravitational potential ρ g h) is the same everywhere. Bernoulli's principle can also be derived directly from Isaac Newton's Second Law of Motion. If a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline. Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. - 1 Incompressible flow equation - 2 Compressible flow equation - 3 Derivations of the Bernoulli equation - 4 Applications - 5 Misunderstandings about the generation of lift - 6 Misapplications of Bernoulli's principle in common classroom demonstrations - 7 See also - 8 References - 9 Further reading - 10 External links Incompressible flow equation In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered to be constant, regardless of pressure variations in the flow. Therefore, the fluid can be considered to be incompressible and these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow. 
A common form of Bernoulli's equation, valid at any arbitrary point along a streamline, is
v²/2 + gz + p/ρ = constant
where:
- v is the fluid flow speed at a point on a streamline,
- g is the acceleration due to gravity,
- z is the elevation of the point above a reference plane, with the positive z-direction pointing upward – so in the direction opposite to the gravitational acceleration,
- p is the pressure at the chosen point, and
- ρ is the density of the fluid at all points in the fluid.
The constant on the right-hand side of the equation depends only on the streamline chosen, whereas v, z and p depend on the particular point on that streamline. The following assumptions must be met for this Bernoulli equation to apply:
- the flow must be steady, i.e. the fluid velocity at a point cannot change with time,
- the flow must be incompressible – even though pressure varies, the density must remain constant along a streamline;
- friction by viscous forces has to be negligible.
More generally, the equation can be written as
v²/2 + Ψ + p/ρ = constant     (A)
where Ψ is the force potential at the point considered on the streamline. E.g. for the Earth's gravity Ψ = gz. By multiplying with the fluid density ρ, equation (A) can be rewritten as
½ρv² + ρgz + p = constant
where:
- q = ½ρv² is dynamic pressure,
- h = z + p/(ρg) is the piezometric head or hydraulic head (the sum of the elevation z and the pressure head) and
- p0 = p + q is the total pressure (the sum of the static pressure p and dynamic pressure q).
The constant in the Bernoulli equation can be normalised. A common approach is in terms of total head or energy head H:
H = z + p/(ρg) + v²/(2g) = h + v²/(2g)
The above equations suggest there is a flow speed at which pressure is zero, and at even higher speeds the pressure is negative. Most often, gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids – when the pressure becomes too low – cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure. At higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant so that the assumption of constant density is invalid.
In many applications of Bernoulli's equation, the change in the ρgz term along the streamline is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height z along a streamline is so small the ρgz term can be omitted. This allows the above equation to be presented in the following simplified form:
p + q = p0
where p0 is called "total pressure", and q is "dynamic pressure". Many authors refer to the pressure p as static pressure to distinguish it from total pressure p0 and dynamic pressure q. In Aerodynamics, L.J. Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure."
The simplified form of Bernoulli's equation can be summarized in the following memorable word equation:
- static pressure + dynamic pressure = total pressure
Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own unique static pressure p and dynamic pressure q. Their sum p + q is defined to be the total pressure p0. The significance of Bernoulli's principle can now be summarized as "total pressure is constant along a streamline".
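As a worked example of the "static + dynamic = total" form above, here is a small Python sketch for a horizontal constriction (a Venturi), where the elevation term drops out. The pipe diameters, speed, and inlet pressure are invented for illustration only.

```python
def venturi_pressure_drop(p1_pa, v1_m_s, d1_m, d2_m, rho=1000.0):
    """Pressure and speed in the throat of a horizontal Venturi, from Bernoulli + continuity.

    Continuity: A1 * v1 = A2 * v2, so v2 = v1 * (d1 / d2)**2 for circular sections.
    Bernoulli (horizontal, incompressible): p1 + 0.5*rho*v1^2 = p2 + 0.5*rho*v2^2.
    """
    v2 = v1_m_s * (d1_m / d2_m) ** 2
    p2 = p1_pa + 0.5 * rho * (v1_m_s ** 2 - v2 ** 2)
    return p2, v2

# Water (rho ~ 1000 kg/m^3) entering a 50 mm pipe at 2 m/s and 150 kPa,
# passing through a 25 mm throat.
p2, v2 = venturi_pressure_drop(150e3, 2.0, 0.050, 0.025)
print(f"throat speed {v2:.1f} m/s, throat pressure {p2 / 1000:.1f} kPa")   # 8.0 m/s, 120.0 kPa
```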
If the fluid flow is irrotational, the total pressure on every streamline is the same and Bernoulli's principle can be summarized as "total pressure is constant everywhere in the fluid flow". It is reasonable to assume that irrotational flow exists in any situation where a large body of fluid is flowing past a solid body. Examples are aircraft in flight, and ships moving in open bodies of water. However, it is important to remember that Bernoulli's principle does not apply in the boundary layer or in fluid flow through long pipes. If the fluid flow at some point along a streamline is brought to rest, this point is called a stagnation point, and at this point the total pressure is equal to the stagnation pressure. Applicability of incompressible flow equation to flow of gases Bernoulli's equation is sometimes valid for the flow of gases: provided that there is no transfer of kinetic or potential energy from the gas flow to the compression or expansion of the gas. If both the gas pressure and volume change simultaneously, then work will be done on or by the gas. In this case, Bernoulli's equation – in its incompressible flow form – cannot be assumed to be valid. However, if the gas process is entirely isobaric, or isochoric, then no work is done on or by the gas, (so the simple energy balance is not upset). According to the gas law, an isobaric or isochoric process is ordinarily the only way to ensure constant density in a gas. Also the gas density will be proportional to the ratio of pressure and absolute temperature, however this ratio will vary upon compression or expansion, no matter what non-zero quantity of heat is added or removed. The only exception is if the net heat transfer is zero, as in a complete thermodynamic cycle, or in an individual isentropic (frictionless adiabatic) process, and even then this reversible process must be reversed, to restore the gas to the original pressure and specific volume, and thus density. Only then is the original, unmodified Bernoulli equation applicable. In this case the equation can be used if the flow speed of the gas is sufficiently below the speed of sound, such that the variation in density of the gas (due to this effect) along each streamline can be ignored. Adiabatic flow at less than Mach 0.3 is generally considered to be slow enough. Unsteady potential flow For an irrotational flow, the flow velocity can be described as the gradient ∇φ of a velocity potential φ. In that case, and for a constant density ρ, the momentum equations of the Euler equations can be integrated to: which is a Bernoulli equation valid also for unsteady—or time dependent—flows. Here ∂φ/ denotes the partial derivative of the velocity potential φ with respect to time t, and v = | ∇φ | is the flow speed. The function f(t) depends only on time and not on position in the fluid. As a result, the Bernoulli equation at some moment t does not only apply along a certain streamline, but in the whole fluid domain. This is also true for the special case of a steady irrotational flow, in which case f is a constant. Further f(t) can be made equal to zero by incorporating it into the velocity potential using the transformation Note that the relation of the potential to the flow velocity is unaffected by this transformation: ∇Φ = ∇φ. The Bernoulli equation for unsteady potential flow also appears to play a central role in Luke's variational principle, a variational description of free-surface flows using the Lagrangian (not to be confused with Lagrangian coordinates). 
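To put a number on the Mach 0.3 rule of thumb from the subsection above on gas flows, this sketch estimates the fractional density change of air using the isentropic relation ρ0/ρ = (1 + (γ−1)/2 · M²)^(1/(γ−1)). That relation is a standard compressible-flow result and is not stated in the text above, so treat the sketch as an illustrative assumption rather than part of the article.

```python
def density_change_fraction(mach, gamma=1.4):
    """Fractional drop in density from stagnation conditions, isentropic perfect gas."""
    rho0_over_rho = (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (1.0 / (gamma - 1.0))
    return 1.0 - 1.0 / rho0_over_rho

for m in (0.1, 0.3, 0.5):
    print(f"Mach {m}: density varies by roughly {100 * density_change_fraction(m):.1f}%")
# Around Mach 0.3 the variation is only a few percent, which is why the
# incompressible form is usually considered acceptable below that speed.
```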
Compressible flow equation Bernoulli developed his principle from his observations on liquids, and his equation is applicable only to incompressible fluids, and compressible fluids up to approximately Mach number 0.3. It is possible to use the fundamental principles of physics to develop similar equations applicable to compressible fluids. There are numerous equations, each tailored for a particular application, but all are analogous to Bernoulli's equation and all rely on nothing more than the fundamental principles of physics such as Newton's laws of motion or the first law of thermodynamics. Compressible flow in fluid dynamics - p is the pressure - ρ is the density - v is the flow speed - Ψ is the potential associated with the conservative force field, often the gravitational potential In engineering situations, elevations are generally small compared to the size of the Earth, and the time scales of fluid flow are small enough to consider the equation of state as adiabatic. In this case, the above equation becomes where, in addition to the terms listed above: - γ is the ratio of the specific heats of the fluid - g is the acceleration due to gravity - z is the elevation of the point above a reference plane In many applications of compressible flow, changes in elevation are negligible compared to the other terms, so the term gz can be omitted. A very useful form of the equation is then: - p0 is the total pressure - ρ0 is the total density Compressible flow in thermodynamics Here w is the enthalpy per unit mass, which is also often written as h (not to be confused with "head" or "height"). Note that w = ε + p/ where ε is the thermodynamic energy per unit mass, also known as the specific internal energy. So, for constant internal energy ε the equation reduces to the incompressible-flow form. The constant on the right hand side is often called the Bernoulli constant and denoted b. For steady inviscid adiabatic flow with no additional sources or sinks of energy, b is constant along any given streamline. More generally, when b may vary along streamlines, it still proves a useful parameter, related to the "head" of the fluid (see below). When the change in Ψ can be ignored, a very useful form of this equation is: where w0 is total enthalpy. For a calorically perfect gas such as an ideal gas, the enthalpy is directly proportional to the temperature, and this leads to the concept of the total (or stagnation) temperature. When shock waves are present, in a reference frame in which the shock is stationary and the flow is steady, many of the parameters in the Bernoulli equation suffer abrupt changes in passing through the shock. The Bernoulli parameter itself, however, remains unaffected. An exception to this rule is radiative shocks, which violate the assumptions leading to the Bernoulli equation, namely the lack of additional sinks or sources of energy. Derivations of the Bernoulli equation Bernoulli equation for incompressible fluids The Bernoulli equation for incompressible fluids can be derived by either integrating Newton's second law of motion or by applying the law of conservation of energy between two sections along a streamline, ignoring viscosity, compressibility, and thermal effects. - Derivation through integrating Newton's Second Law of Motion The simplest derivation is to first ignore gravity and consider constrictions and expansions in pipes that are otherwise straight, as seen in Venturi effect. Let the x axis be directed down the axis of the pipe. 
Define a parcel of fluid moving through a pipe with cross-sectional area A, the length of the parcel is dx, and the volume of the parcel A dx. If mass density is ρ, the mass of the parcel is density multiplied by its volume m = ρA dx. The change in pressure over distance dx is dp and flow velocity v = dx/. Apply Newton's second law of motion (force = mass × acceleration) and recognizing that the effective force on the parcel of fluid is −A dp. If the pressure decreases along the length of the pipe, dp is negative but the force resulting in flow is positive along the x axis. In steady flow the velocity field is constant with respect to time, v = v(x) = v(x(t)), so v itself is not directly a function of time t. It is only when the parcel moves through x that the cross sectional area changes: v depends on t only through the cross-sectional position x(t). With density ρ constant, the equation of motion can be written as by integrating with respect to x where C is a constant, sometimes referred to as the Bernoulli constant. It is not a universal constant, but rather a constant of a particular fluid system. The deduction is: where the speed is large, pressure is low and vice versa. In the above derivation, no external work–energy principle is invoked. Rather, Bernoulli's principle was derived by a simple manipulation of Newton's second law. - Derivation by using conservation of energy - the change in the kinetic energy Ekin of the system equals the net work W done on the system; The system consists of the volume of fluid, initially between the cross-sections A1 and A2. In the time interval Δt fluid elements initially at the inflow cross-section A1 move over a distance s1 = v1 Δt, while at the outflow cross-section the fluid moves away from cross-section A2 over a distance s2 = v2 Δt. The displaced fluid volumes at the inflow and outflow are respectively A1s1 and A2s2. The associated displaced fluid masses are – when ρ is the fluid's mass density – equal to density times volume, so ρA1s1 and ρA2s2. By mass conservation, these two masses displaced in the time interval Δt have to be equal, and this displaced mass is denoted by Δm: The work done by the forces consists of two parts: - The work done by the pressure acting on the areas A1 and A2 - The work done by gravity: the gravitational potential energy in the volume A1s1 is lost, and at the outflow in the volume A2s2 is gained. So, the change in gravitational potential energy ΔEpot,gravity in the time interval Δt is - Now, the work by the force of gravity is opposite to the change in potential energy, Wgravity = −ΔEpot,gravity: while the force of gravity is in the negative z-direction, the work—gravity force times change in elevation—will be negative for a positive elevation change Δz = z2 − z1, while the corresponding potential energy change is positive. So: And therefore the total work done in this time interval Δt is The increase in kinetic energy is Putting these together, the work-kinetic energy theorem W = ΔEkin gives: After dividing by the mass Δm = ρA1v1 Δt = ρA2v2 Δt the result is: or, as stated in the first paragraph: - (Eqn. 1), Which is also Equation (A) Further division by g produces the following equation. Note that each term can be described in the length dimension (such as meters). This is the head equation derived from Bernoulli's principle: - (Eqn. 2a) The middle term, z, represents the potential energy of the fluid due to its elevation with respect to a reference plane. 
Now, z is called the elevation head and given the designation zelevation. when arriving at elevation z = 0. Or when we rearrange it as a head: The hydrostatic pressure p is defined as with p0 some reference pressure, or when we rearrange it as a head: The term p/ is also called the pressure head, expressed as a length measurement. It represents the internal energy of the fluid due to the pressure exerted on the container. When we combine the head due to the flow speed and the head due to static pressure with the elevation above a reference plane, we obtain a simple relationship useful for incompressible fluids using the velocity head, elevation head, and pressure head. - (Eqn. 2b) If we were to multiply Eqn. 1 by the density of the fluid, we would get an equation with three pressure terms: - (Eqn. 3) We note that the pressure of the system is constant in this form of the Bernoulli Equation. If the static pressure of the system (the far right term) increases, and if the pressure due to elevation (the middle term) is constant, then we know that the dynamic pressure (the left term) must have decreased. In other words, if the speed of a fluid decreases and it is not due to an elevation difference, we know it must be due to an increase in the static pressure that is resisting the flow. All three equations are merely simplified versions of an energy balance on a system. Bernoulli equation for compressible fluids The derivation for compressible fluids is similar. Again, the derivation depends upon (1) conservation of mass, and (2) conservation of energy. Conservation of mass implies that in the above figure, in the interval of time Δt, the amount of mass passing through the boundary defined by the area A1 is equal to the amount of mass passing outwards through the boundary defined by the area A2: Conservation of energy is applied in a similar manner: It is assumed that the change in energy of the volume of the streamtube bounded by A1 and A2 is due entirely to energy entering or leaving through one or the other of these two boundaries. Clearly, in a more complicated situation such as a fluid flow coupled with radiation, such conditions are not met. Nevertheless, assuming this to be the case and assuming the flow is steady so that the net change in the energy is zero, where ΔE1 and ΔE2 are the energy entering through A1 and leaving through A2, respectively. The energy entering through A1 is the sum of the kinetic energy entering, the energy entering in the form of potential gravitational energy of the fluid, the fluid thermodynamic internal energy per unit of mass (ε1) entering, and the energy entering in the form of mechanical p dV work: where Ψ = gz is a force potential due to the Earth's gravity, g is acceleration due to gravity, and z is elevation above a reference plane. A similar expression for ΔE2 may easily be constructed. So now setting 0 = ΔE1 − ΔE2: which can be rewritten as: Now, using the previously-obtained result from conservation of mass, this may be simplified to obtain which is the Bernoulli equation for compressible flow. An equivalent expression can be written in terms of fluid enthalpy (h): In modern everyday life there are many observations that can be successfully explained by application of Bernoulli's principle, even though no real fluid is entirely inviscid and a small viscosity often has a large effect on the flow. - Bernoulli's principle can be used to calculate the lift force on an airfoil, if the behaviour of the fluid flow in the vicinity of the foil is known. 
For example, if the air flowing past the top surface of an aircraft wing is moving faster than the air flowing past the bottom surface, then Bernoulli's principle implies that the pressure on the surfaces of the wing will be lower above than below. This pressure difference results in an upwards lifting force. Whenever the distribution of speed past the top and bottom surfaces of a wing is known, the lift forces can be calculated (to a good approximation) using Bernoulli's equations – established by Bernoulli over a century before the first man-made wings were used for the purpose of flight. Bernoulli's principle does not explain why the air flows faster past the top of the wing and slower past the underside. See the article on aerodynamic lift for more info. - The carburettor used in many reciprocating engines contains a venturi to create a region of low pressure to draw fuel into the carburettor and mix it thoroughly with the incoming air. The low pressure in the throat of a venturi can be explained by Bernoulli's principle; in the narrow throat, the air is moving at its fastest speed and therefore it is at its lowest pressure. - An injector on a steam locomotive (or static boiler). - The pitot tube and static port on an aircraft are used to determine the airspeed of the aircraft. These two devices are connected to the airspeed indicator, which determines the dynamic pressure of the airflow past the aircraft. Dynamic pressure is the difference between stagnation pressure and static pressure. Bernoulli's principle is used to calibrate the airspeed indicator so that it displays the indicated airspeed appropriate to the dynamic pressure. - The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Subsequently Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect. - The maximum possible drain rate for a tank with a hole or tap at the base can be calculated directly from Bernoulli's equation, and is found to be proportional to the square root of the height of the fluid in the tank. This is Torricelli's law, showing that Torricelli's law is compatible with Bernoulli's principle. Viscosity lowers this drain rate. This is reflected in the discharge coefficient, which is a function of the Reynolds number and the shape of the orifice. - The Bernoulli grip relies on this principle to create a non-contact adhesive force between a surface and the gripper. Misunderstandings about the generation of lift Many explanations for the generation of lift (on airfoils, propeller blades, etc.) can be found; some of these explanations can be misleading, and some are false, in particular the idea that air particles flowing above and below a cambered wing should reach simultaneously the trailing edge. This has been a source of heated discussion over the years. In particular, there has been debate about whether lift is best explained by Bernoulli's principle or Newton's laws of motion. Modern writings agree that both Bernoulli's principle and Newton's laws are relevant and either can be used to correctly describe lift. Several of these explanations use the Bernoulli principle to connect the flow kinematics to the flow-induced pressures. 
Misunderstandings about the generation of lift

Many explanations for the generation of lift (on airfoils, propeller blades, etc.) can be found; some of these explanations can be misleading, and some are false, in particular the idea that air particles flowing above and below a cambered wing must reach the trailing edge simultaneously. This has been a source of heated discussion over the years. In particular, there has been debate about whether lift is best explained by Bernoulli's principle or Newton's laws of motion. Modern writings agree that both Bernoulli's principle and Newton's laws are relevant and either can be used to correctly describe lift. Several of these explanations use the Bernoulli principle to connect the flow kinematics to the flow-induced pressures. In cases of incorrect (or partially correct) explanations relying on the Bernoulli principle, the errors generally occur in the assumptions on the flow kinematics and how these are produced. It is not the Bernoulli principle itself that is questioned, because this principle is well established: the airflow above the wing is faster; the question is why it is faster.

Misapplications of Bernoulli's principle in common classroom demonstrations

There are several common classroom demonstrations that are sometimes incorrectly explained using Bernoulli's principle. One involves holding a piece of paper horizontally so that it droops downward and then blowing over the top of it. As the demonstrator blows over the paper, the paper rises. It is then asserted that this is because "faster moving air has lower pressure". One problem with this explanation can be seen by blowing along the bottom of the paper: were the deflection due simply to faster moving air, one would expect the paper to deflect downward, but the paper deflects upward regardless of whether the faster moving air is on the top or the bottom. Another problem is that when the air leaves the demonstrator's mouth it has the same pressure as the surrounding air; the air does not have lower pressure just because it is moving; in the demonstration, the static pressure of the air leaving the demonstrator's mouth is equal to the pressure of the surrounding air. A third problem is that it is false to make a connection between the flow on the two sides of the paper using Bernoulli's equation, since the air above and below are different flow fields and Bernoulli's principle only applies within a flow field.

As the wording of the principle can change its implications, stating the principle correctly is important. What Bernoulli's principle actually says is that within a flow of constant energy, when fluid flows through a region of lower pressure it speeds up, and vice versa. Thus, Bernoulli's principle concerns itself with changes in speed and changes in pressure within a flow field. It cannot be used to compare different flow fields.

A correct explanation of why the paper rises would observe that the plume follows the curve of the paper and that a curved streamline will develop a pressure gradient perpendicular to the direction of flow, with the lower pressure on the inside of the curve. Bernoulli's principle predicts that the decrease in pressure is associated with an increase in speed, i.e. that as the air passes over the paper it speeds up and moves faster than it was moving when it left the demonstrator's mouth. But this is not apparent from the demonstration.

Other common classroom demonstrations, such as blowing between two suspended spheres, inflating a large bag, or suspending a ball in an airstream, are sometimes explained in a similarly misleading manner by saying "faster moving air has lower pressure".

- Terminology in fluid dynamics
- Navier–Stokes equations – for the flow of a viscous fluid
- Euler equations – for the flow of an inviscid fluid
- Hydraulics – applied fluid mechanics for liquids
- Torricelli's law – a special case of Bernoulli's principle
- Daniel Bernoulli
- Coandă effect
- Clancy, L.J., Aerodynamics, Chapter 3.
- Batchelor, G.K. (1967), Section 3.5, pp. 156–64.
- "Hydrodynamica". Britannica Online Encyclopedia. Retrieved 2008-10-30.
- Streeter, V.L., Fluid Mechanics, Example 3.5, McGraw–Hill Inc. (1966), New York.
- "If the particle is in a region of varying pressure (a non-vanishing pressure gradient in the x-direction) and if the particle has a finite size l, then the front of the particle will be ‘seeing’ a different pressure from the rear. More precisely, if the pressure drops in the x-direction (dp/ < 0) the pressure at the rear is higher than at the front and the particle experiences a (positive) net force. According to Newton’s second law, this force causes an acceleration and the particle’s velocity increases as it moves along the streamline... Bernoulli's equation describes this mathematically (see the complete derivation in the appendix)."Babinsky, Holger (November 2003), "How do wings work?" (PDF), Physics Education - "Acceleration of air is caused by pressure gradients. Air is accelerated in direction of the velocity if the pressure goes down. Thus the decrease of pressure is the cause of a higher velocity." Weltner, Klaus; Ingelman-Sundberg, Martin, Misinterpretations of Bernoulli's Law, archived from the original on April 29, 2009 - " The idea is that as the parcel moves along, following a streamline, as it moves into an area of higher pressure there will be higher pressure ahead (higher than the pressure behind) and this will exert a force on the parcel, slowing it down. Conversely if the parcel is moving into a region of lower pressure, there will be an higher pressure behind it (higher than the pressure ahead), speeding it up. As always, any unbalanced force will cause a change in momentum (and velocity), as required by Newton’s laws of motion." See How It Flies John S. Denker http://www.av8n.com/how/htm/airfoils.html - Resnick, R. and Halliday, D. (1960), section 18-4, Physics, John Wiley & Sons, Inc. - Batchelor, G.K. (1967), §5.1, p. 265. - Mulley, Raymond (2004). Flow of Industrial Fluids: Theory and Equations. CRC Press. p. 43–44. ISBN 0-8493-2767-9. - Chanson, Hubert (2004). Hydraulics of Open Channel Flow: An Introduction. Butterworth-Heinemann. p. 22. ISBN 0-7506-5978-5. - Oertel, Herbert; Prandtl, Ludwig; Böhle, M.; Mayes, Katherine (2004). Prandtl's Essentials of Fluid Mechanics. Springer. pp. 70–71. ISBN 0-387-40437-6. - "Bernoulli's Equation". NASA Glenn Research Center. Retrieved 2009-03-04. - Clancy, L.J., Aerodynamics, Section 3.5. - Clancy, L.J. Aerodynamics, Equation 3.12 - Batchelor, G.K. (1967), p. 383 - White, Frank M. Fluid Mechanics, 6th ed. McGraw-Hill International Edition. p. 602. - Clarke C. and Carswell B., Astrophysical Fluid Dynamics - Clancy, L.J., Aerodynamics, Section 3.11 - Landau & Lifshitz (1987, §5) - Van Wylen, G.J., and Sonntag, R.E., (1965), Fundamentals of Classical Thermodynamics, Section 5.9, John Wiley and Sons Inc., New York - Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. ISBN 0-201-02116-1., Vol. 2, §40–3, pp. 40–6 – 40–9. - Tipler, Paul (1991). Physics for Scientists and Engineers: Mechanics (3rd extended ed.). W. H. Freeman. ISBN 0-87901-432-6., p. 138. - Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. ISBN 0-201-02116-1., Vol. 1, §14–3, p. 14–4. - Physics Today, May 1010, "The Nearly Perfect Fermi Gas", by John E. Thomas, p 34. - Clancy, L.J., Aerodynamics, Section 5.5 ("When a stream of air flows past an airfoil, there are local changes in flow speed round the airfoil, and consequently changes in static pressure, in accordance with Bernoulli's Theorem. 
The distribution of pressure determines the lift, pitching moment and form drag of the airfoil, and the position of its centre of pressure.") - Resnick, R. and Halliday, D. (1960), Physics, Section 18–5, John Wiley & Sons, Inc., New York ("Streamlines are closer together above the wing than they are below so that Bernoulli's principle predicts the observed upward dynamic lift.") - Eastlake, Charles N. (March 2002). "An Aerodynamicist's View of Lift, Bernoulli, and Newton" (PDF). The Physics Teacher. 40. "The resultant force is determined by integrating the surface-pressure distribution over the surface area of the airfoil." - Clancy, L.J., Aerodynamics, Section 3.8 - Mechanical Engineering Reference Manual Ninth Edition - Glenn Research Center (2006-03-15). "Incorrect Lift Theory". NASA. Retrieved 2010-08-12. - Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages. ISBN 978-0-415-49271-3. - "Newton vs Bernoulli". - Ison, David. Bernoulli Or Newton: Who's Right About Lift? Retrieved on 2009-11-26 - Phillips, O.M. (1977). The dynamics of the upper ocean (2nd ed.). Cambridge University Press. ISBN 0-521-29801-6. Section 2.4. - Batchelor, G.K. (1967). Sections 3.5 and 5.1 - Lamb, H. (1994) §17–§29 - Weltner, Klaus; Ingelman-Sundberg, Martin. "Physics of Flight – reviewed". "The conventional explanation of aerodynamical lift based on Bernoulli’s law and velocity differences mixes up cause and effect. The faster flow at the upper side of the wing is the consequence of low pressure and not its cause." - "Bernoulli's law and experiments attributed to it are fascinating. Unfortunately some of these experiments are explained erroneously..." Weltner, Klaus; Ingelman-Sundberg, Martin. "Misinterpretations of Bernoulli's Law". Department of Physics, University Frankfurt. Archived from the original on June 21, 2012. Retrieved June 25, 2012. - "This occurs because of Bernoulli’s principle — fast-moving air has lower pressure than non-moving air." Make Magazine http://makeprojects.com/Project/Origami-Flying-Disk/327/1 - " Faster-moving fluid, lower pressure. ... When the demonstrator holds the paper in front of his mouth and blows across the top, he is creating an area of faster-moving air." University of Minnesota School of Physics and Astronomy http://www.physics.umn.edu/outreach/pforce/circus/Bernoulli.html - "Bernoulli's Principle states that faster moving air has lower pressure... You can demonstrate Bernoulli's Principle by blowing over a piece of paper held horizontally across your lips." "Educational Packet" (PDF). Tall Ships Festival – Channel Islands Harbor. Archived from the original (PDF) on December 3, 2013. Retrieved June 25, 2012. - "If the lift in figure A were caused by "Bernoulli principle," then the paper in figure B should droop further when air is blown beneath it. However, as shown, it raises when the upward pressure gradient in downward-curving flow adds to atmospheric pressure at the paper lower surface." Craig, Gale M. "Physical Principles of Winged Flight". Retrieved March 31, 2016. - "In fact, the pressure in the air blown out of the lungs is equal to that of the surrounding air..." Babinsky http://iopscience.iop.org/0031-9120/38/6/001/pdf/pe3_6_001.pdf - "...air does not have a reduced lateral pressure (or static pressure...) 
simply because it is caused to move, the static pressure of free air does not decrease as the speed of the air increases, it misunderstanding Bernoulli's principle to suggest that this is what it tells us, and the behavior of the curved paper is explained by other reasoning than Bernoulli's principle." Peter Eastwell Bernoulli? Perhaps, but What About Viscosity? The Science Education Review, 6(1) 2007 http://www.scienceeducationreview.com/open_access/eastwell-bernoulli.pdf - "Make a strip of writing paper about 5 cm × 25 cm. Hold it in front of your lips so that it hangs out and down making a convex upward surface. When you blow across the top of the paper, it rises. Many books attribute this to the lowering of the air pressure on top solely to the Bernoulli effect. Now use your fingers to form the paper into a curve that it is slightly concave upward along its whole length and again blow along the top of this strip. The paper now bends downward...an often-cited experiment, which is usually taken as demonstrating the common explanation of lift, does not do so..." Jef Raskin Coanda Effect: Understanding Why Wings Work http://karmak.org/archive/2003/02/coanda_effect.html - "Blowing over a piece of paper does not demonstrate Bernoulli’s equation. While it is true that a curved paper lifts when flow is applied on one side, this is not because air is moving at different speeds on the two sides... It is false to make a connection between the flow on the two sides of the paper using Bernoulli’s equation." Holger Babinsky How Do Wings Work Physics Education 38(6) http://iopscience.iop.org/0031-9120/38/6/001/pdf/pe3_6_001.pdf - "An explanation based on Bernoulli’s principle is not applicable to this situation, because this principle has nothing to say about the interaction of air masses having different speeds... Also, while Bernoulli’s principle allows us to compare fluid speeds and pressures along a single streamline and... along two different streamlines that originate under identical fluid conditions, using Bernoulli’s principle to compare the air above and below the curved paper in Figure 1 is nonsensical; in this case, there aren’t any streamlines at all below the paper!" Peter Eastwell Bernoulli? Perhaps, but What About Viscosity? The Science Education Review 6(1) 2007 http://www.scienceeducationreview.com/open_access/eastwell-bernoulli.pdf - "The well-known demonstration of the phenomenon of lift by means of lifting a page cantilevered in one’s hand by blowing horizontally along it is probably more a demonstration of the forces inherent in the Coanda effect than a demonstration of Bernoulli’s law; for, here, an air jet issues from the mouth and attaches to a curved (and, in this case pliable) surface. The upper edge is a complicated vortex-laden mixing layer and the distant flow is quiescent, so that Bernoulli’s law is hardly applicable." David Auerbach Why Aircreft Fly European Journal of Physics Vol 21 p 295 http://iopscience.iop.org/0143-0807/21/4/302/pdf/0143-0807_21_4_302.pdf - "Millions of children in science classes are being asked to blow over curved pieces of paper and observe that the paper "lifts"... They are then asked to believe that Bernoulli's theorem is responsible... Unfortunately, the "dynamic lift" involved...is not properly explained by Bernoulli's theorem." Norman F. Smith "Bernoulli and Newton in Fluid Mechanics" The Physics Teacher Nov 1972 - "Bernoulli’s principle is very easy to understand provided the principle is correctly stated. 
However, we must be careful, because seemingly-small changes in the wording can lead to completely wrong conclusions." See How It Flies John S. Denker http://www.av8n.com/how/htm/airfoils.html#sec-bernoulli - "A complete statement of Bernoulli's Theorem is as follows: "In a flow where no energy is being added or taken away, the sum of its various energies is a constant: consequently where the velocity increasees the pressure decreases and vice versa."" Norman F Smith Bernoulli, Newton and Dynamic Lift Part I School Science and Mathematics Vol 73 Issue 3 http://onlinelibrary.wiley.com/doi/10.1111/j.1949-8594.1973.tb08998.x/pdf - "...if a streamline is curved, there must be a pressure gradient across the streamline, with the pressure increasing in the direction away from the centre of curvature." Babinsky http://iopscience.iop.org/0031-9120/38/6/001/pdf/pe3_6_001.pdf - "The curved paper turns the stream of air downward, and this action produces the lift reaction that lifts the paper." Norman F. Smith Bernoulli, Newton, and Dynamic Lift Part II School Science and Mathematics vol 73 Issue 4 pg 333 http://onlinelibrary.wiley.com/doi/10.1111/j.1949-8594.1973.tb09040.x/pdf - "The curved surface of the tongue creates unequal air pressure and a lifting action. ... Lift is caused by air moving over a curved surface." AERONAUTICS An Educator’s Guide with Activities in Science, Mathematics, and Technology Education by NASA pg 26 http://www.nasa.gov/pdf/58152main_Aeronautics.Educator.pdf - "Viscosity causes the breath to follow the curved surface, Newton's first law says there a force on the air and Newton’s third law says there is an equal and opposite force on the paper. Momentum transfer lifts the strip. The reduction in pressure acting on the top surface of the piece of paper causes the paper to rise." The Newtonian Description of Lift of a Wing David F. Anderson & Scott Eberhardt pg 12 http://www.integener.com/IE110522Anderson&EberhardtPaperOnLift0902.pdf - '"Demonstrations" of Bernoulli's principle are often given as demonstrations of the physics of lift. They are truly demonstrations of lift, but certainly not of Bernoulli's principle.' David F Anderson & Scott Eberhardt Understanding Flight pg 229 https://books.google.com/books?id=52Hfn7uEGSoC&pg=PA229 - "As an example, take the misleading experiment most often used to "demonstrate" Bernoulli's principle. Hold a piece of paper so that it curves over your finger, then blow across the top. The paper will rise. However most people do not realize that the paper would not rise if it were flat, even though you are blowing air across the top of it at a furious rate. Bernoulli's principle does not apply directly in this case. This is because the air on the two sides of the paper did not start out from the same source. The air on the bottom is ambient air from the room, but the air on the top came from your mouth where you actually increased its speed without decreasing its pressure by forcing it out of your mouth. As a result the air on both sides of the flat paper actually has the same pressure, even though the air on the top is moving faster. The reason that a curved piece of paper does rise is that the air from your mouth speeds up even more as it follows the curve of the paper, which in turn lowers the pressure according to Bernoulli." From The Aeronautics File By Max Feil https://www.mat.uc.pt/~pedro/ncientificos/artigos/aeronauticsfile1.ps Archived May 17, 2015, at the Wayback Machine. 
- "Some people blow over a sheet of paper to demonstrate that the accelerated air over the sheet results in a lower pressure. They are wrong with their explanation. The sheet of paper goes up because it deflects the air, by the Coanda effect, and that deflection is the cause of the force lifting the sheet. To prove they are wrong I use the following experiment: If the sheet of paper is pre bend the other way by first rolling it, and if you blow over it than, it goes down. This is because the air is deflected the other way. Airspeed is still higher above the sheet, so that is not causing the lower pressure." Pim Geurts. sailtheory.com http://www.sailtheory.com/experiments.html - "Finally, let’s go back to the initial example of a ball levitating in a jet of air. The naive explanation for the stability of the ball in the air stream, 'because pressure in the jet is lower than pressure in the surrounding atmosphere,' is clearly incorrect. The static pressure in the free air jet is the same as the pressure in the surrounding atmosphere..." Martin Kamela Thinking About Bernoulli The Physics Teacher Vol. 45, September 2007 http://tpt.aapt.org/resource/1/phteah/v45/i6/p379_s1 - "Aysmmetrical flow (not Bernoulli's theorem) also explains lift on the ping-pong ball or beach ball that floats so mysteriously in the tilted vacuum cleaner exhaust..." Norman F. Smith, Bernoulli and Newton in Fluid Mechanics" The Physics Teacher Nov 1972 p 455 - "Bernoulli’s theorem is often obscured by demonstrations involving non-Bernoulli forces. For example, a ball may be supported on an upward jet of air or water, because any fluid (the air and water) has viscosity, which retards the slippage of one part of the fluid moving past another part of the fluid." Bauman, Robert P. "The Bernoulli Conundrum" (PDF). Professor of Physics Emeritus, University of Alabama at Birmingham. Archived from the original (PDF) on February 25, 2012. Retrieved June 25, 2012. - "In a demonstration sometimes wrongly described as showing lift due to pressure reduction in moving air or pressure reduction due to flow path restriction, a ball or balloon is suspended by a jet of air." Craig, Gale M. "Physical Principles of Winged Flight". Retrieved March 31, 2016. - "A second example is the confinement of a ping-pong ball in the vertical exhaust from a hair dryer. We are told that this is a demonstration of Bernoulli's principle. But, we now know that the exhaust does not have a lower value of ps. Again, it is momentum transfer that keeps the ball in the airflow. When the ball gets near the edge of the exhaust there is an asymmetric flow around the ball, which pushes it away from the edge of the flow. The same is true when one blows between two ping-pong balls hanging on strings." Anderson & Eberhardt The Newtonian Description of Lift on a Wing http://lss.fnal.gov/archive/2001/pub/Pub-01-036-E.pdf - "This demonstration is often incorrectly explained using the Bernoulli principle. According to the INCORRECT explanation, the air flow is faster in the region between the sheets, thus creating a lower pressure compared with the quiet air on the outside of the sheets." "Thin Metal Sheets – Coanda Effect". University of Maryland – Physics Lecture-Demonstartion Facility. Archived from the original on June 23, 2012. Retrieved October 23, 2012. 
- "Although the Bernoulli effect is often used to explain this demonstration, and one manufacturer sells the material for this demonstration as "Bernoulli bags," it cannot be explained by the Bernoulli effect, but rather by the process of entrainment." "Answer #256". University of Maryland – Physics Lecture-Demonstartion Facility. Archived from the original on December 13, 2014. Retrieved December 9, 2014. - Batchelor, G.K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2. - Clancy, L.J. (1975). Aerodynamics. Pitman Publishing, London. ISBN 0-273-01120-0. - Lamb, H. (1993). Hydrodynamics (6th ed.). Cambridge University Press. ISBN 978-0-521-45868-9. Originally published in 1879; the 6th extended edition appeared first in 1932. - Landau, L.D.; Lifshitz, E.M. (1987). Fluid Mechanics. Course of Theoretical Physics (2nd ed.). Pergamon Press. ISBN 0-7506-2767-0. - Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group. ISBN 978-0-415-49271-3. |Wikimedia Commons has media related to Bernoulli's principle.|
Instructional design, or instructional systems design (ISD), is the practice of creating "instructional experiences which make the acquisition of knowledge and skill more efficient, effective, and appealing." The process consists broadly of determining the state and needs of the learner, defining the end goal of instruction, and creating some "intervention" to assist in the transition. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed. There are many instructional design models, but many are based on the ADDIE model with its five phases: analysis, design, development, implementation, and evaluation. As a field, instructional design is historically and traditionally rooted in cognitive and behavioral psychology, though recently constructivism has influenced thinking in the field.

History

During World War II, a considerable amount of training material for the military was developed based on the principles of instruction, learning, and human behavior. Tests for assessing a learner's abilities were used to screen candidates for the training programs. After the success of military training, psychologists began to view training as a system, and developed various analysis, design, and evaluation procedures. In 1946, Edgar Dale outlined a hierarchy of instructional methods, organized intuitively by their concreteness. B. F. Skinner's 1954 article "The Science of Learning and the Art of Teaching" suggested that effective instructional materials, called programmed instructional materials, should include small steps, frequent questions, and immediate feedback, and should allow self-pacing. Robert F. Mager popularized the use of learning objectives with his 1962 article "Preparing Objectives for Programmed Instruction". The article describes how to write objectives, including the desired behavior, the learning condition, and the assessment.

In 1956, a committee led by Benjamin Bloom published an influential taxonomy with three domains of learning: cognitive (what one knows or thinks), psychomotor (what one does, physically) and affective (what one feels, or what attitudes one has). These taxonomies still influence the design of instruction.

Robert Glaser introduced "criterion-referenced measures" in 1962. In contrast to norm-referenced tests, in which an individual's performance is compared to group performance, a criterion-referenced test is designed to test an individual's behavior in relation to an objective standard. It can be used to assess the learners' entry-level behavior, and to what extent learners have developed mastery through an instructional program.

In 1965, Robert Gagné (see below for more information) described three domains of learning outcomes (cognitive, affective, psychomotor), five categories of learning outcomes (Verbal Information, Intellectual Skills, Cognitive Strategy, Attitude, Motor Skills), and nine events of instruction in The Conditions of Learning, which remain foundations of instructional design practices. Gagné's work in learning hierarchies and hierarchical analysis led to an important notion in instruction – to ensure that learners acquire prerequisite skills before attempting superordinate ones.
In 1967, after analyzing the failure of training material, Michael Scriven suggested the need for formative assessment – e.g., to try out instructional materials with learners (and revise accordingly) before declaring them finalized.

During the 1970s, the number of instructional design models greatly increased, and the models prospered in different sectors: the military, academia, and industry. Many instructional design theorists began to adopt an information-processing-based approach to the design of instruction. David Merrill, for instance, developed Component Display Theory (CDT), which concentrates on the means of presenting instructional materials (presentation techniques).

Although interest in instructional design continued to be strong in business and the military, there was little evolution of ID in schools or higher education. However, educators and researchers began to consider how the personal computer could be used in a learning environment or a learning space. PLATO (Programmed Logic for Automatic Teaching Operation) is one example of how computers began to be integrated into instruction. Many of the first uses of computers in the classroom were for "drill and skill" exercises. There was a growing interest in how cognitive psychology could be applied to instructional design.

The influence of constructivist theory on instructional design became more prominent in the 1990s as a counterpoint to the more traditional cognitive learning theory. Constructivists believe that learning experiences should be "authentic" and produce real-world learning environments that allow the learner to construct their own knowledge. This emphasis on the learner was a significant departure from traditional forms of instructional design. Performance improvement was also seen as an important outcome of learning that needed to be considered during the design process. The World Wide Web emerged as an online learning tool, with hypertext and hypermedia being recognized as good tools for learning. As technology advanced and constructivist theory gained popularity, technology's use in the classroom began to evolve from mostly drill-and-skill exercises to more interactive activities that required more complex thinking on the part of the learner. Rapid prototyping was first seen during the 1990s. In this process, an instructional design project is prototyped quickly and then vetted through a series of try-and-revise cycles. This is a big departure from traditional methods of instructional design, which took far longer to complete.

Academic degrees focused on integrating technology, the internet, and human–computer interaction with education gained momentum with the introduction of Learning Design and Technology (LDT) majors. Universities such as Bowling Green State University, Penn State, Purdue, San Diego State University, Stanford, University of Georgia, and Carnegie Mellon University have established undergraduate and graduate degrees in technology-centered methods of designing and delivering education.

Instructional media history

| Era | Media | Development | Impact |
|---|---|---|---|
| 1900s | Visual media | School museum as supplementary material (the first school museum opened in St. Louis in 1905) | Materials are viewed as supplementary curriculum materials; the district-wide media center is the modern equivalent. |
| 1914-1923 | Visual media: films, slides, photographs | Visual instruction movement | The effect of visual instruction was limited because of teacher resistance to change, the quality of the film, cost, etc. |
| Mid 1920s to 1930s | Radio broadcasting, sound recordings, sound motion pictures | Radio audiovisual instruction movement | Education at large was not affected. |
| World War II | Training films, overhead projector, slide projector, audio equipment, simulators and training devices | Military and industry at this time had a strong demand for training. | Growth of the audiovisual instruction movement in schools was slow, but audiovisual devices were used extensively in military services and industry. |
| Post World War II | Communication medium | It was suggested to consider all aspects of a communication process (influenced by communication theories). | This viewpoint was at first ignored, but eventually helped to expand the focus of the audiovisual movement. |
| 1950s to mid-1960s | Television | Growth of instructional television | Instructional television was not adopted to a greater extent. |
| 1950s-1990s | Computer | Computer-assisted instruction (CAI) research started in the 1950s and became popular in the 1980s, a few years after computers became available to the general public. | The effect of CAI was rather small and the use of computers was far from innovative. |
| 1990s-2000s | Internet, simulation | The internet offered opportunities to train many people over long distances. Desktop simulation gave rise to levels of Interactive Multimedia Instruction (IMI). | Online training increased rapidly, to the point where entire curricula were delivered through web-based training. Simulations are valuable but expensive, with the highest level being used primarily by the military and the medical community. |
| 2000s-2010s | Mobile devices, social media | On-demand training moved to people's personal devices; social media allowed for collaborative learning. | The effects of both are too new to be measured. |

Robert Gagné

Robert Gagné's work is widely used and cited in the design of instruction, as exemplified by more than 130 citations in prominent journals in the field during the period from 1985 through 1990. Synthesizing ideas from behaviorism and cognitivism, he provided a clear template that is easy to follow for designing instructional events. Instructional designers who follow Gagné's theory will likely have tightly focused, efficient instruction.

- Cognitive Domain
- Verbal information – is stated: state, recite, tell, declare
- Intellectual skills – label or classify the concepts
- Intellectual skills – apply the rules and principles
- Intellectual skills – problem solve by generating solutions or procedures
- Discrimination: discriminate, distinguish, differentiate
- Concrete Concept: identify, name, specify, label
- Defined Concept: classify, categorize, type, sort (by definition)
- Rule: demonstrate, show, solve (using one rule)
- Higher order rule: generate, develop, solve (using two or more rules)
- Cognitive strategies – are used for learning: adopt, create, originate
- Affective Domain
- Attitudes – are demonstrated by preferring options: choose, prefer, elect, favor
- Psychomotor Domain
- Motor skills – enable physical performance: execute, perform, carry out

According to Gagné, learning occurs in a series of learning events. Each of the nine learning events is a condition for learning that must be accomplished before the next in order for learning to take place.
Similarly, instructional events should mirror the learning events:

- Gaining attention: To ensure reception of coming instruction, the teacher gives the learners a stimulus. Before the learners can start to process any new information, the instructor must gain the attention of the learners. This might entail using abrupt changes in the instruction.
- Informing learners of objectives: The teacher tells the learner what they will be able to do because of the instruction. The teacher communicates the desired outcome to the group.
- Stimulating recall of prior learning: The teacher asks for recall of existing relevant knowledge.
- Presenting the stimulus: The teacher gives emphasis to distinctive features.
- Providing learning guidance: The teacher helps the students in understanding (semantic encoding) by providing organization and relevance.
- Eliciting performance: The teacher asks the learners to respond, demonstrating learning.
- Providing feedback: The teacher gives informative feedback on the learners' performance.
- Assessing performance: The teacher requires more learner performance, and gives feedback, to reinforce learning.
- Enhancing retention and transfer: The teacher provides varied practice to generalize the capability.

Some educators believe that Gagné's taxonomy of learning outcomes and events of instruction oversimplify the learning process by over-prescribing. However, using them as part of a complete instructional package can assist many educators in becoming more organized and staying focused on the instructional goals.

Robert Gagné's work has been the foundation of instructional design since the beginning of the 1960s, when he conducted research and developed training materials for the military. Among the first to coin the term "instructional design", Gagné developed some of the earliest instructional design models and ideas. These models have laid the groundwork for more present-day instructional design models from theorists like Dick, Carey, and Carey (the Dick and Carey Systems Approach Model), Jerold Kemp (the Kemp Instructional Design Model), and David Merrill (First Principles of Instruction). Each of these models is based on a core set of learning phases that includes (1) activation of prior experience, (2) demonstration of skills, (3) application of skills, and (4) integration of these skills into real-world activities.

Gagné's main focus for instructional design was how instruction and learning could be systematically connected to the design of instruction. He emphasized the design principles and procedures that need to take place for effective teaching and learning. His initial ideas, along with the ideas of other early instructional designers, were outlined in Psychological Principles in Systematic Development, written by Roberts B. Miller and edited by Gagné. Gagné believed in internal learning and motivation, which paved the way for theorists like Merrill, Li, and Jones, who designed the Instructional Transaction Theory, Reigeluth and Stein's Elaboration Theory, and most notably, Keller's ARCS Model of Motivational Design.

Prior to Robert Gagné, learning was often thought of as a single, uniform process. There was little or no distinction made between "learning to load a rifle and learning to solve a complex mathematical problem". Gagné offered an alternative view, which developed the idea that different learners required different learning strategies.
Understanding and designing instruction based on a learning style defined by the individual brought about new theories and approaches to teaching. Gagné's understanding and theories of human learning added significantly to understanding the stages in cognitive processing and instruction. For example, Gagné argued that instructional designers must understand the characteristics and functions of short-term and long-term memory to facilitate meaningful learning. This idea encouraged instructional designers to include cognitive needs as a top-down instructional approach.

Gagné (1966) defines curriculum as a sequence of content units arranged in such a way that the learning of each unit may be accomplished as a single act, provided the capabilities described by specified prior units (in the sequence) have already been mastered by the learner. His definition of curriculum has been the basis of many important initiatives in schools and other educational environments. In the late 1950s and early 1960s, Gagné had expressed and established an interest in applying theory to practice, with particular interest in applications for teaching, training and learning. Increasing the effectiveness and efficiency of practice was of particular concern. His ongoing attention to practice while developing theory continues to influence education and training.

Gagné's work has had a significant influence on American education and on military and industrial training. Gagné was one of the early developers of the concept of instructional systems design, which suggests the components of a lesson can be analyzed and should be designed to operate together as an integrated plan for instruction. In "Educational Technology and the Learning Process" (Educational Researcher, 1974), Gagné defined instruction as "the set of planned external events which influence the process of learning and thus promote learning".

Learning design

The concept of learning design arrived in the literature of technology for education in the late 1990s and early 2000s with the idea that "designers and instructors need to choose for themselves the best mixture of behaviourist and constructivist learning experiences for their online courses". But the concept of learning design is probably as old as the concept of teaching. Learning design might be defined as "the description of the teaching-learning process that takes place in a unit of learning (e.g., a course, a lesson or any other designed learning event)". As summarized by Britain, learning design may be associated with:

- The concept of learning design
- The implementation of the concept made by learning design specifications like PALO, IMS Learning Design, LDL, SLD 2.0, etc.
- The technical realisations around the implementation of the concept, like TELOS, RELOAD LD-Author, etc.

Models

Perhaps the most common model used for creating instructional materials is the ADDIE Model. This acronym stands for the five phases contained in the model (Analyze, Design, Develop, Implement, and Evaluate).
Brief History of ADDIE’s Development – The ADDIE model was initially developed by Florida State University to explain “the processes involved in the formulation of an instructional systems development (ISD) program for military interservice training that will adequately train individuals to do a particular job and which can also be applied to any interservice curriculum development activity.” The model originally contained several steps under its five original phases (Analyze, Design, Develop, Implement, and [Evaluation and] Control), whose completion was expected before movement to the next phase could occur. Over the years, the steps were revised and eventually the model itself became more dynamic and interactive than its original hierarchical rendition, until its most popular version appeared in the mid-80s, as we understand it today. The five phases are listed and explained below: Analyze – The first phase of content development is Analysis. Analysis refers to the gathering of information about one’s audience, the tasks to be completed, how the learners will view the content, and the project’s overall goals. The instructional designer then classifies the information to make the content more applicable and successful. Design – The second phase is the Design phase. In this phase, instructional designers begin to create their project. Information gathered from the analysis phase, in conjunction with the theories and models of instructional design, is meant to explain how the learning will be acquired. For example, the design phase begins with writing a learning objective. Tasks are then identified and broken down to be more manageable for the designer. The final step determines the kind of activities required for the audience in order to meet the goals identified in the Analyze phase. Develop – The third phase, Development, involves the creation of the activities that will be implemented. It is in this stage that the blueprints of the design phase are assembled. Implement – After the content is developed, it is then Implemented. This stage allows the instructional designer to test all materials to determine if they are functional and appropriate for the intended audience. Evaluate – The final phase, Evaluate, ensures the materials achieved the desired goals. The evaluation phase consists of two parts: formative and summative assessment. The ADDIE model is an iterative process of instructional design, which means that at each stage the designer can assess the project's elements and revise them if necessary. This process incorporates formative assessment, while the summative assessments contain tests or evaluations created for the content being implemented. This final phase is vital for the instructional design team because it provides data used to alter and enhance the design. Connecting all phases of the model are external and reciprocal revision opportunities. As in the internal Evaluation phase, revisions should and can be made throughout the entire process. Most of the current instructional design models are variations of the ADDIE process. An adaptation of the ADDIE model, which is used sometimes, is a practice known as rapid prototyping. Proponents suggest that through an iterative process the verification of the design documents saves time and money by catching problems while they are still easy to fix. 
This approach is not novel to the design of instruction, but appears in many design-related domains, including software design, architecture, transportation planning, product development, message design, user experience design, etc. In fact, some proponents of design prototyping assert that a sophisticated understanding of a problem is incomplete without creating and evaluating some type of prototype, regardless of the analysis rigor that may have been applied up front. In other words, up-front analysis is rarely sufficient to allow one to confidently select an instructional model. For this reason, many traditional methods of instructional design are beginning to be seen as incomplete, naive, and even counter-productive. However, some consider rapid prototyping to be a somewhat simplistic type of model. As this argument goes, at the heart of instructional design is the analysis phase: after you thoroughly conduct the analysis, you can then choose a model based on your findings. That is where most people get snagged; they simply do not do a thorough enough analysis. (Adapted from an article by Chris Bressi on LinkedIn.)

Dick and Carey

Another well-known instructional design model is the Dick and Carey Systems Approach Model. The model was originally published in 1978 by Walter Dick and Lou Carey in their book entitled The Systematic Design of Instruction. Dick and Carey made a significant contribution to the instructional design field by championing a systems view of instruction, in contrast to defining instruction as the sum of isolated parts. The model addresses instruction as an entire system, focusing on the interrelationship between context, content, learning and instruction. According to Dick and Carey, "Components such as the instructor, learners, materials, instructional activities, delivery system, and learning and performance environments interact with each other and work together to bring about the desired student learning outcomes". The components of the Systems Approach Model, also known as the Dick and Carey Model, are as follows:

- Identify Instructional Goal(s): A goal statement describes a skill, knowledge or attitude (SKA) that a learner will be expected to acquire.
- Conduct Instructional Analysis: Identify what a learner must recall and what a learner must be able to do to perform a particular task.
- Analyze Learners and Contexts: Identify general characteristics of the target audience, including prior skills, prior experience, and basic demographics; identify characteristics directly related to the skill to be taught; and perform analysis of the performance and learning settings.
- Write Performance Objectives: Objectives consist of a description of the behavior, the condition and the criteria. The component of an objective that describes the criteria will be used to judge the learner's performance.
- Develop Assessment Instruments: Purpose of entry behavior testing, purpose of pretesting, purpose of post-testing, purpose of practice items/practice problems.
- Develop Instructional Strategy: Pre-instructional activities, content presentation, learner participation, assessment.
- Develop and Select Instructional Materials
- Design and Conduct Formative Evaluation of Instruction: Designers try to identify areas of the instructional materials that need improvement.
- Revise Instruction: To identify poor test items and to identify poor instruction.
- Design and Conduct Summative Evaluation

With this model, components are executed iteratively and in parallel, rather than linearly.
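As a purely illustrative aside (not part of any published model or tool), the iterative, non-linear character described above can be sketched in a few lines of Python: the design components are treated as data, and a hypothetical formative-evaluation check drives repeated revision rounds until nothing remains outstanding.

```python
# Illustrative sketch only: the design components listed above treated as data,
# with a simple formative-evaluation/revision loop standing in for the model's
# iterative (rather than strictly linear) execution. The "work" done per round
# and the completeness check are hypothetical placeholders; the evaluation and
# revision steps of the model are represented by the loop itself.

COMPONENTS = [
    "Identify instructional goals",
    "Conduct instructional analysis",
    "Analyze learners and contexts",
    "Write performance objectives",
    "Develop assessment instruments",
    "Develop instructional strategy",
    "Develop and select instructional materials",
]


def formative_evaluation(draft):
    """Return the components that still need improvement (placeholder check)."""
    return [name for name, done in draft.items() if not done]


def design_cycle(capacity_per_round=3, max_rounds=10):
    """Revise outstanding components a few at a time until the evaluation passes."""
    draft = {name: False for name in COMPONENTS}
    for round_number in range(1, max_rounds + 1):
        for name in formative_evaluation(draft)[:capacity_per_round]:
            draft[name] = True  # stand-in for actually revising that component
        remaining = formative_evaluation(draft)
        print(f"round {round_number}: {len(remaining)} component(s) still open")
        if not remaining:
            break
    return draft


if __name__ == "__main__":
    design_cycle()
```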
The instructional design model Guaranteed Learning was formerly known as the Instructional Development Learning System (IDLS). The model was originally published in 1970 by Peter J. Esseff, PhD, and Mary Sullivan Esseff, PhD, in their book entitled IDLS—Pro Trainer 1: How to Design, Develop, and Validate Instructional Materials. Peter (1968) and Mary (1972) Esseff both received their doctorates in Educational Technology from the Catholic University of America under the mentorship of Dr. Gabriel Ofiesh, a founding father of the Military Model mentioned above. Esseff and Esseff synthesized existing theories to develop their approach to systematic design, "Guaranteed Learning", also known as the "Instructional Development Learning System" (IDLS). In 2015, the Drs. Esseff created an eLearning course to enable participants to take the GL course online under the direction of Dr. Esseff.

The components of the Guaranteed Learning Model are the following:

- Design a task analysis
- Develop criterion tests and performance measures
- Develop interactive instructional materials
- Validate the interactive instructional materials
- Create simulations or performance activities (case studies, role plays, and demonstrations)

Other useful instructional design models include the Smith/Ragan Model, the Morrison/Ross/Kemp Model, and the OAR Model of instructional design in higher education, as well as Wiggins' theory of backward design. Learning theories also play an important role in the design of instructional materials. Theories such as behaviorism, constructivism, social learning and cognitivism help shape and define the outcome of instructional materials. Also see: Managing Learning in High Performance Organizations, by Ruth Stiehl and Barbara Bessey, from The Learning Organization, Corvallis, Oregon. ISBN 0-9637457-0-0.

Motivational design

Motivation is defined as an internal drive that activates behavior and gives it direction. The term motivation theory is concerned with the processes that describe why and how human behavior is activated and directed.

Intrinsic and Extrinsic Motivation

- Intrinsic: defined as the doing of an activity for its inherent satisfactions rather than for some separable consequence. When intrinsically motivated, a person is moved to act for the fun or challenge entailed rather than because of external rewards. Intrinsic motivation reflects the desire to do something because it is enjoyable. If we are intrinsically motivated, we would not be worried about external rewards such as praise.
- Examples: writing short stories because you enjoy writing them, reading a book because you are curious about the topic, and playing chess because you enjoy effortful thinking.
- Extrinsic: reflects the desire to do something because of external rewards such as awards, money and praise. People who are extrinsically motivated may not enjoy certain activities. They may only wish to engage in certain activities because they wish to receive some external reward.
- Examples: the writer who only writes poems to be submitted to poetry contests, a person who dislikes sales but accepts a sales position because he or she desires to earn an above-average salary, and a person selecting a major in college based on salary and prestige rather than personal interest.

John Keller has devoted his career to researching and understanding motivation in instructional systems. These decades of work constitute a major contribution to the instructional design field. First, by applying motivation theories systematically to design theory.
Second, by developing a unique problem-solving process he calls ARCS motivational design.

The ARCS Model of Motivational Design was created by John Keller while he was researching ways to supplement the learning process with motivation. The model is based on Tolman's and Lewin's expectancy-value theory, which presumes that people are motivated to learn if there is value in the knowledge presented (i.e. it fulfills personal needs) and if there is an optimistic expectation for success. The model consists of four main areas: Attention, Relevance, Confidence, and Satisfaction.

Attention and relevance, according to John Keller's ARCS motivational theory, are essential to learning. The first two of the four key components for motivating learners, attention and relevance, can be considered the backbone of the ARCS theory, with the latter components relying upon the former.

The attention mentioned in this theory refers to the interest displayed by learners in taking in the concepts and ideas being taught. This component is split into three categories: perceptual arousal, using surprise or uncertain situations; inquiry arousal, offering challenging questions and/or problems to answer/solve; and variability, using a variety of resources and methods of teaching. Within each of these categories, John Keller has provided further sub-divisions of types of stimuli to grab attention. Grabbing attention is the most important part of the model because it initiates the motivation for the learners. Once learners are interested in a topic, they are willing to invest their time, pay attention, and find out more.

Relevance, according to Keller, must be established by using language and examples that the learners are familiar with. The three major strategies Keller presents are goal orientation, motive matching, and familiarity. Like the Attention category, Keller divided the three major strategies into subcategories, which provide examples of how to make a lesson plan relevant to the learner. Learners will cast concepts aside if their attention cannot be grabbed and sustained and if relevance is not conveyed.

The confidence aspect of the ARCS model focuses on establishing positive expectations for achieving success among learners. The confidence level of learners is often correlated with motivation and the amount of effort put forth in reaching a performance objective. For this reason, it is important that learning design provides students with a method for estimating their probability of success. This can be achieved in the form of a syllabus and grading policy, rubrics, or a time estimate to complete tasks. Additionally, confidence is built when positive reinforcement for personal achievements is given through timely, relevant feedback.

Finally, learners must obtain some type of satisfaction or reward from a learning experience. This satisfaction can be from a sense of achievement, praise from a higher-up, or mere entertainment. Feedback and reinforcement are important elements, and when learners appreciate the results, they will be motivated to learn. Satisfaction is based upon motivation, which can be intrinsic or extrinsic. To keep learners satisfied, instruction should be designed to allow them to use their newly learned skills as soon as possible in as authentic a setting as possible.
Although Keller's ARCS model currently dominates instructional design with respect to learner motivation, in 2006 Hardré and Miller proposed a need for a new design model that includes current research in human motivation, treats motivation comprehensively, integrates various fields of psychology, and gives designers the flexibility to apply it to a myriad of situations. Hardré proposes an alternate model for designers called the Motivating Opportunities Model, or MOM. Hardré's model incorporates cognitive, needs, and affective theories as well as social elements of learning to address learner motivation. MOM has seven key components spelling the acronym 'SUCCESS' – Situational, Utilization, Competence, Content, Emotional, Social, and Systemic.

Influential researchers and theorists

Alphabetic by last name

- Bloom, Benjamin – Taxonomies of the cognitive, affective, and psychomotor domains – 1950s
- Bonk, Curtis – Blended learning – 2000s
- Bransford, John D. – How People Learn: Bridging Research and Practice – 1990s
- Bruner, Jerome – Constructivism – 1950s-1990s
- Clark, Richard – Clark-Kozma "Media vs Methods debate", "Guidance" debate
- Gagné, Robert M. – Nine Events of Instruction (Gagné and Merrill Video Seminar)
- Hannum, Wallace H.
- Heinich, Robert – Instructional Media and the New Technologies of Instruction, 3rd ed. – Educational Technology – 1989
- Jonassen, David – problem-solving strategies – 1990s
- Kemp, Jerold E. – Created a cognitive learning design model – 1980s
- Langdon, Danny G – The Instructional Designs Library: 40 Instructional Designs, Educational Technology Publications
- Mager, Robert F. – ABCD model for instructional objectives – 1962
- Criterion Referenced Instruction and Learning Objectives
- Marzano, Robert J. – "Dimensions of Learning", Formative Assessment – 2000s
- Mayer, Richard E. – Multimedia Learning – 2000s
- Merrill, M. David – Component Display Theory / Knowledge Objects / First Principles of Instruction
- Papert, Seymour – Constructionism, LOGO – 1970s-1980s
- Piaget, Jean – Cognitive development – 1960s
- Reigeluth, Charles – Elaboration Theory, "Green Books" I, II, and III – 1990s–2010s
- Schank, Roger – Constructivist simulations – 1990s
- Simonson, Michael – Instructional Systems and Design via Distance Education – 1980s
- Skinner, B.F. – Radical Behaviorism, Programed Instruction – 1950s-1970s
- Vygotsky, Lev – Learning as a social activity – 1930s
- Wilson, Brent G. – Constructivist learning environments – 1990s

- ADDIE Model
- blended learning
- educational assessment
- confidence-based learning
- design-based learning
- educational animation
- educational psychology
- educational technology
- e-learning framework
- electronic portfolio
- First Principles of Instruction
- human–computer interaction
- instructional theory
- interaction design
- Learning environment
- learning object
- learning science
- Learning space
- multimedia learning
- instructional design coordinator
- interdisciplinary teaching
- rapid prototyping
- lesson study
- Understanding by Design

- Merrill, M. D.; Drake, L.; Lacy, M. J.; Pratt, J. (1996). "Reclaiming instructional design" (PDF). Educational Technology. 36 (5): 5–7.
- Ed Forest: Instructional Design, Educational Technology
- Mayer, Richard E (1992). "Cognition and instruction: Their historic meeting within educational psychology". Journal of Educational Psychology. 84 (4): 405–412.
doi:10.1037/0022-06220.127.116.115. - Duffy, T. M., & Cunningham, D. J. (1996). Constructivism: Implications for the design and delivery of instruction. In D. Jonassen (Ed.), Handbook of Research for Educational Communications and Technology (pp. 170-198). New York: Simon & Schuster Macmillan - Duffy, T. M. , & Jonassen, D. H. (1992). Constructivism: New implications for instructional technology. In T. Duffy & D. Jonassen (Eds.), Constructivism and the technology of instruction (pp. 1-16). Hillsdale, NJ: Erlbaum. - Reiser, R. A., & Dempsey, J. V. (2012). Trends and issues in instructional design and technology. Boston: Pearson. - Clark, B. (2009). The history of instructional design and technology. - Thalheimer, Will. People remember 10%, 20%...Oh Really? October 8, 2006. - Bloom's Taxonomy. Retrieved from Wikipedia on April 18, 2012 at Bloom's Taxonomy - Instructional Design Theories. Instructionaldesign.org. Retrieved on 2011-10-07. - Reiser, R. A. (2001). "A History of Instructional Design and Technology: Part II: A History of Instructional Design". ETR&D, Vol. 49, No. 2, 2001, pp. 57–67. - History of instructional media. Uploaded to YouTube by crozitis on Jan 17, 2010. Retrieved from https://www.youtube.com/watch?v=y-fKcf4GuOU - A hypertext history of instructional design. Retrieved April 11, 2012 - Markham, R. "History of instructional design". Retrieved on April 11, 2012 - History and timeline of instructional design. Retrieved April 11, 2012 - Braine, B., (2010). "Historical Evolution of Instructional Design & Technology". Retrieved on April 11, 2012 from http://timerime.com/en/timeline/415929/Historical+Evolution+of+Instructional+Design++Technology/ - Webbees. "Xterior - Windschermen, Windschermen". www.xterior-windschermen.nl. Retrieved 2016-11-05. - Trentin G. (2001). Designing Online Courses. In C.D. Maddux & D. LaMont Johnson (Eds) The Web in Higher Education: Assessing the Impact and Fulfilling the Potential, pp. 47-66, The Haworth Press Inc., New York, London, Oxford, ISBN 0-7890-1706-7. - http://www.bgsu.edu/technology-architecture-and-applied-engineering/departments-and-programs/visual-communication-and-technology-education/learning-design-and-technology.html BGSU LDT - http://ed.psu.edu/lps/ldt Penn State LDT - http://online.purdue.edu/ldt/learning-design-technology Purdue LDT - http://jms.sdsu.edu/index.php/admissions/ldt_admissions_requirements SDSU LDT - https://ed.stanford.edu/academics/masters-handbook/program-requirements/ldt Stanford LDT - https://coe.uga.edu/academics/degrees/med/learning-design-technology UGA LDT - Anglin, G. J., & Towers, R. L. (1992). Reference citations in selected instructional design and technology journals, 1985-1990. Educational Technology Research and Development, 40, 40-46. - Perry, J. D. (2001). Learning and cognition. [On-Line]. Available: http://education.indiana.edu/~p540/webcourse/gagne.html - Gagné, R. M. (1985). The conditions of learning (4th ed.). New York: Holt, Rinehart & Winston. - Gagné, R. M., & Driscoll, M. P. (1988). Essentials of learning for instruction. Englewood Cliffs, NJ: Prentice-Hall. - Haines, D. (1996). Gagné. [On-Line]. Available: http://education.indiana.edu/~educp540/haines1.html - Dowling, L. J. (2001). Robert Gagné and the Conditions of Learning. Walden University. - Dick, W., & Carey, L. (1996). The systematic design of instruction. 4th ed. 
- Instructional Design Models and Theories. Retrieved April 9, 2012 from http://www.instructionaldesigncentral.com/htm/IDC_instructionaldesignmodels.htm#kemp
- Psychological Principles in System Development-1962. Retrieved on April 15, 2012 from http://www.nwlink.com/~donclark/history_isd/gagne.html
- Merrill, D.M., Jones, M.K., & Chongqing, L. (December 1990). Instructional Transaction Theory. Retrieved from http://www.speakeasydesigns.com/SDSU/student/SAGE/compsprep/ITT_Intro.pdf
- Elaboration Theory (Charles Reigeluth). Retrieved April 9, 2012 from http://www.instructionaldesign.org/theories/elaboration-theory.html
- Wiburg, K. M. (2003). [Web log message]. Retrieved from http://www.internettime.com/itimegroup/Is it Time to Exchange Skinner's Teaching Machine for Dewey's.htm
- Richey, R. C. (2000). The legacy of Robert M. Gagné. Syracuse, NY: ERIC Clearinghouse on Information & Technology.
- Gagné, R.M. (n.d.). Biographies. Retrieved April 18, 2012, from Answers.com Web site: http://www.answers.com/topic/robert-mills-gagn
- Conole G., and Fill K., "A learning design toolkit to create pedagogically effective learning activities". Journal of Interactive Media in Education, 2005 (08).
- Carr-Chellman A. and Duchastel P., "The ideal online course," British Journal of Educational Technology, 31(3), 229–241, July 2000.
- Koper R., "Current Research in Learning Design," Educational Technology & Society, 9 (1), 13–22, 2006.
- Britain S., "A Review of Learning Design: Concept, Specifications and Tools". A report for the JISC E-learning Pedagogy Programme, May 2004.
- IMS Learning Design webpage. Imsglobal.org. Retrieved on 2011-10-07.
- Branson, R. K., Rayner, G. T., Cox, J. L., Furman, J. P., King, F. J., Hannum, W. H. (1975). Interservice procedures for instructional systems development. (5 vols.) (TRADOC Pam 350-30 NAVEDTRA 106A). Ft. Monroe, VA: U.S. Army Training and Doctrine Command, August 1975. (NTIS No. ADA 019 486 through ADA 019 490).
- Piskurich, G.M. (2006). Rapid Instructional Design: Learning ID fast and right.
- Saettler, P. (1990). The evolution of American educational technology.
- Stolovitch, H.D., & Keeps, E. (1999). Handbook of human performance technology.
- Kelley, T., & Littman, J. (2005). The ten faces of innovation: IDEO's strategies for beating the devil's advocate & driving creativity throughout your organization. New York: Doubleday.
- Hokanson, B., & Miller, C. (2009). Role-based design: A contemporary framework for innovation and creativity in instructional design. Educational Technology, 49(2), 21–28.
- Dick, Walter, Lou Carey, and James O. Carey (2005). The Systematic Design of Instruction (6th ed.). Allyn & Bacon. pp. 1–12. ISBN 0-205-41274-2.
- Ed Forest. "Dick and Carey Instructional Model".
- Esseff, Peter J.; Esseff, Mary Sullivan (1998). Instructional Development Learning System (IDLS) (8th ed.). ESF Press. pp. 1–12. ISBN 1-58283-037-1.
- ESF, Inc. – Train-the-Trainer – ESF ProTrainer Materials – 813.814.1192. ESF-ProTrainer.com (2007-11-06). Retrieved on 2011-10-07.
- Smith, P. L. & Ragan, T. J. (2004). Instructional design (3rd Ed.). Danvers, MA: John Wiley & Sons.
- Morrison, G. R., Ross, S. M., & Kemp, J. E. (2001). Designing effective instruction, 3rd ed. New York: John Wiley.
- Joeckel, G., Jeon, T., Gardner, J. (2010). Instructional Challenges In Higher Education: Online Courses Delivered Through A Learning Management System By Subject Matter Experts. In Song, H. (Ed.) Distance Learning Technology, Current Instruction, and the Future of Education: Applications of Today, Practices of Tomorrow. (link to article)
- R. Ryan; E. Deci. "Intrinsic and Extrinsic Motivations". Contemporary Educational Psychology. Retrieved April 1, 2012.
- Brad Bell. "Intrinsic Motivation and Extrinsic Motivation with Examples of Each Types of Motivation". Blue Fox Communications. Retrieved April 1, 2012.
- Keller, John. "arcsmodel.com". John M. Keller. Retrieved April 1, 2012.
- Ely, Donald (1983). Development and Use of the ARCS Model of Motivational Design. Libraries Unlimited. pp. 225–245.
- Hardré, Patricia; Miller, Raymond B. (2006). "Toward a current, comprehensive, integrative, and flexible model of motivation for instructional design". Performance Improvement Quarterly. 19 (3).
- Hardré, Patricia (2009). "The motivating opportunities model for Performance SUCCESS: Design, Development, and Instructional Implications". Performance Improvement Quarterly. 22 (1). doi:10.1002/piq.20043.

External links
- Instructional Design – An overview of Instructional Design
- ISD Handbook
- Edutech wiki: Instructional design model
- Debby Kalk, Real World Instructional Design Interview
- How to build effective eLearning courses (ebook)
- Trends in Instructional Design and eLearning Solutions
The CGI Process

The basic principle of the Common Gateway Interface (CGI) is that a Web server passes client request information to CGI programs in system environment variables (and in some cases through standard input or command line arguments), and all standard output of CGI programs is returned to Web clients. A CGI program typically works in three stages:
- Parsing CGI input
- Processing the request
- Generating the response

Throughout this topic there are references to conversion modes, which deal with how data is presented to a CGI program and how data returned by the CGI program is processed by the HTTP server. To learn more about conversion modes, see CGI data conversions.

Parsing CGI input

When the environment variables have been set by the HTTP server, it starts the CGI program. (For a complete list of environment variables set by the HTTP server, see Environment variables set by HTTP Server.) It is then up to this CGI program to find out where to get the information needed to fulfill the request. The two most common ways a CGI program may be called from an HTML document are:
- By using an HTML form and the request method (environment variable REQUEST_METHOD) POST.
- By using an HTML anchor tag that specifies the URL of the CGI program and appends the variables to this URL; everything after the question mark in such a URL is interpreted as the program's query string.

The CGI script has to perform the following tasks in order to retrieve the necessary information:
- Find out the REQUEST_METHOD used by the client.
- If the REQUEST_METHOD used was the GET method, the CGI program knows that all additional values may be retrieved from the QUERY_STRING environment variable.
- If the REQUEST_METHOD used was POST, the CGI program knows that additional information was passed using STDIN. It will then have to query the CONTENT_LENGTH environment variable to know how much information it has to read from STDIN.

Data read from the QUERY_STRING variable (%%MIXED%% mode) is encoded as follows:
- A plus sign (+) represents a space.
- A percent sign (%) followed by the American National Standard Code for Information Interchange (ASCII) hexadecimal equivalent of a symbol represents a special character, such as a period (.) or slash (/).
- An ampersand (&) separates fields and sends multiple values for a field such as check boxes.

Parsing breaks the fields at the ampersands and decodes the ASCII hexadecimal characters. The results look like this:

NAME=Eugene T. Fox
ADDR=email@example.com
INTEREST=RCO

You can use the QtmhCvtDb() API to parse the information into a structure; the CGI program can then refer to the structure fields. If using %%MIXED%% input mode, the "%xx" encoding values are in ASCII and must be converted into the "%xx" EBCDIC encoding values before calling QtmhCvtDb(). If using %%EBCDIC%% mode, the server does this conversion for you: the system converts the ASCII "%xx" first to the ASCII character and then to the EBCDIC character, and ultimately sets the "%xx" value in the EBCDIC CCSID.

The main advantage of using the GET method is that you can access the CGI program with a query without using a form; however, the query string of the GET method cannot exceed 8 KB. The main advantage of the POST method is that the query length is effectively unlimited, so you do not have to worry about the client or server truncating data.
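To make these decoding rules concrete, here is a minimal, generic parsing sketch in C. It is not the QtmhCvtDb() API and it ignores the ASCII/EBCDIC conversion issues described above; it simply reads the raw request data (QUERY_STRING for GET, or CONTENT_LENGTH bytes from standard input for POST), splits it at ampersands, and decodes '+' and '%xx' escapes. The helper name decode_field is an illustrative choice, not part of any HTTP Server API.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Decode one NAME=value field in place: '+' becomes a space and
 * "%xx" becomes the character with that hexadecimal code. */
static void decode_field(char *s)
{
    char *out = s;
    while (*s) {
        if (*s == '+') {
            *out++ = ' ';
            s++;
        } else if (*s == '%' && s[1] && s[2]) {
            char hex[3] = { s[1], s[2], '\0' };
            *out++ = (char)strtol(hex, NULL, 16);
            s += 3;
        } else {
            *out++ = *s++;
        }
    }
    *out = '\0';
}

int main(void)
{
    char *data = NULL;
    const char *method = getenv("REQUEST_METHOD");

    if (method != NULL && strcmp(method, "POST") == 0) {
        /* POST: the length of the data waiting on STDIN is in CONTENT_LENGTH. */
        const char *len_str = getenv("CONTENT_LENGTH");
        long len = len_str ? strtol(len_str, NULL, 10) : 0;
        if (len <= 0)
            return 1;
        data = malloc((size_t)len + 1);
        if (data == NULL || fread(data, 1, (size_t)len, stdin) != (size_t)len)
            return 1;
        data[len] = '\0';
    } else {
        /* GET (or default): the encoded data is in QUERY_STRING. */
        const char *query = getenv("QUERY_STRING");
        if (query == NULL)
            return 1;
        data = malloc(strlen(query) + 1);
        if (data == NULL)
            return 1;
        strcpy(data, query);
    }

    /* Fields are separated by '&'; decode each field after splitting. */
    printf("Content-Type: text/plain\n\n");
    for (char *field = strtok(data, "&"); field != NULL;
         field = strtok(NULL, "&")) {
        decode_field(field);
        printf("%s\n", field);
    }
    free(data);
    return 0;
}

Note that the split on '&' is done before the '%xx' decoding; reversing the order would misinterpret an encoded ampersand (%26) inside a value as a field separator.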
Processing the request

Processing the request is the second stage of a CGI program. In this stage, the program takes the parsed data and performs the appropriate action. For example, a CGI program designed to process an application form might perform the following functions:
- Take the input from the parsing stage
- Convert abbreviations into more meaningful information
- Plug the information into an e-mail template
- Use SNDDST to send the e-mail

Generating the response

When the CGI program has finished processing, it has to send its result back to the HTTP server that invoked the program. In this way the output is sent, indirectly, to the client that initially requested the information. Because the CGI program issues its result through STDOUT, the HTTP server has to read the information from there and interpret what to do with it.

A CGI program writes a CGI header, followed by an entity body, to standard output. The CGI header is the information that describes the data in the entity body; the entity body is the data that the server sends to the client. A single newline character always ends the CGI header. The newline character for ILE C is \n; for ILE RPG or ILE COBOL, it is hexadecimal '15'. The following are some examples of Content-Type headers:

Content-Type: text/html\n\n
Content-Type: text/html; charset=iso-8859-2\n\n

If the response is a static document, the CGI program returns either the URL of the document, using the CGI Location header, or a Status header. The CGI program does not send an entity body when it uses the Location header. If the host name in the Location URL is the local host, HTTP Server retrieves the specified document and sends a copy to the Web client; if the host name is not the local host, HTTP Server processes it as a redirect to the Web client.

A response that uses the Status header should have both a Content-Type and a Status line in the CGI header. When Status is in the CGI header, an entity body should be sent with the data to be returned by the server; the entity body contains information that the CGI program provides to the client for error processing. The Status line consists of a 3-digit HTTP status code followed by a string of alphanumeric characters (A-Z, a-z, 0-9, and space). The HTTP status code must be a valid 3-digit number from the HTTP/1.1 specification. For example:

Content-Type: text/html\n
Status: 400 Invalid data\n
\n
<html><head><title>Invalid data</title></head>
<body>
<h1>Invalid data typed</h1>
<br><pre>
The data entered must be valid numeric digits for id number
<br></pre>
</body></html>
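As a rough illustration of this response stage, here is a minimal, self-contained CGI program in C. It is a sketch only, not tied to the ILE APIs mentioned above: it writes a Content-Type header, the newline that ends the CGI header, and then an HTML entity body. The commented-out lines show the Location and Status alternatives described above, and the use of REQUEST_METHOD is just a stand-in for real processing results.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Data produced by the "processing the request" stage; here we
     * simply echo the request method as a stand-in for real results. */
    const char *method = getenv("REQUEST_METHOD");
    if (method == NULL)
        method = "(none)";

    /* The CGI header: one or more header lines, ended by a blank line.
     * Alternatives to a normal document response:
     *   printf("Location: http://example.com/thanks.html\n\n");  redirect, no entity body
     *   printf("Content-Type: text/html\nStatus: 400 Invalid data\n\n");  error response
     */
    printf("Content-Type: text/html\n\n");

    /* The entity body: whatever the server should forward to the client. */
    printf("<html><head><title>Example response</title></head><body>\n");
    printf("<h1>Request processed</h1>\n");
    printf("<p>Request method: %s</p>\n", method);
    printf("</body></html>\n");

    return 0;
}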
Use the periodic table, an activity series, or solubility rules to predict whether single-replacement or double-replacement reactions will occur. Up to now, we have presented chemical reactions as a topic, but we have not discussed how the products of a chemical reaction can be predicted.

Precipitation reactions occur when the cations of one reactant and the anions of a second reactant, both found in aqueous solution, combine to form an insoluble ionic solid that we call a precipitate.

Determination of the percent composition of pennies using redox and double displacement (precipitation) reactions – objectives: the lab experiment will consist of oxidation-reduction and double displacement reactions as well as titration techniques; all of these components will be used to determine the pennies' composition.

Double displacement reactions can be further classified as neutralization, precipitation, and gas-formation reactions. Neutralization reactions are a specific kind of double displacement reaction: an acid-base reaction occurs when an acid reacts with an equivalent quantity of base.

Redox reactions, or oxidation-reduction reactions, have a number of similarities to acid-base reactions. Fundamentally, redox reactions are a family of reactions that are concerned with the transfer of electrons between species.

A single replacement reaction (or single displacement reaction) involves one element taking the place of another element in a compound; for example, many metals react with dilute acids to form salts and hydrogen gas.

This resource is a 54-slide PowerPoint presentation covering how to balance an equation, synthesis reactions, decomposition reactions, single displacement reactions, double displacement reactions, and combustion/oxidation reactions.

Double displacement reactions are never redox: the charges of the ions do not change when they switch partners. Look for polyatomic ions that have been broken apart, because they usually contain an element that has been oxidized or reduced.

There are three types of reactions that fall under the double displacement category: precipitation, neutralization, and gas formation. A precipitation reaction forms an insoluble solid.

Displacement reactions are also classified as oxidation-reduction reactions; for example, in the first reaction given above, elemental lead is oxidized to lead(II) and copper is reduced from copper(II) to elemental copper.

Students will perform a set of double replacement reactions. They will be given the opportunity to record observations, write formulas for compounds, and balance the chemical equations for a set of double replacement reactions.

The copper experiment deals with two fundamental types of chemical reactions: redox reactions and metathesis (double-displacement) reactions. A typical pre-lab review question asks for an example, other than the ones listed in the experiment, of redox and metathesis reactions.

Types of chemical reactions include exothermic, endothermic, combination, decomposition, single replacement, double replacement, combustion, acid-base, oxidation-reduction, and hydrolysis. An acid-base reaction is a type of double displacement reaction that occurs between an acid and a base, and some combination and decomposition reactions are redox.
Gas-forming reactions, reactions between ionic compounds and acids, are double displacement reactions that result in the formation of a gas, often carbon dioxide (Figure 4.16). Figure 4.16 shows the reaction of calcium carbonate with an acid, which produces carbon dioxide gas.

Use questions and models to determine the relationships between variables in investigations. Data sets and experimental outcomes should be presented to students for analysis within the context of the entire course. Describe the composition of the atom and the experiments that led to that knowledge.

Classify each reaction as synthesis, decomposition, single displacement, double displacement, or combustion.

Kathrin Gibson, CHEM 1411-115, October 26, 2015: Determination of the percent composition of pennies using redox and double displacement (precipitation) reactions. Introduction: oxidation involves the loss of electrons or of hydrogen, the gain of oxygen, or an increase in oxidation state.

Introduction: titration can be traced to the beginnings of volumetric analysis in the late 18th century. The study of analytical chemical science began in France, and the first burette was made by François Antoine Henri Descroizilles.

Precipitation reactions also play a central role in many chemical analysis techniques, including spot tests used to identify metal ions and gravimetric methods for determining the composition of matter (see the last module of this chapter).

Oxidation-reduction (redox) reactions are another important class of chemical reactions: in redox reactions, electrons are transferred from one substance to another. Redox reactions are among the most common and most important chemical reactions in everyday life.

The four main types of reactions are direct combination, analysis (decomposition), single displacement, and double displacement. If you are asked for the five main types of reactions, it is these four plus either acid-base or redox, depending on whom you ask.

Mixing a solution of MgSO4 and a solution of Ba(OH)2, for example, has the pattern of a double displacement reaction and produces a double precipitate: it forms Mg(OH)2 and BaSO4, and a solubility table reveals that both products are insoluble.

CH 105 – Chemistry and Society, chemical reactions (02/13/2008): an electron transfer occurs in oxidation/reduction reactions. Soluble ions of salts can also move toward each other in solution and form an insoluble solid; this reaction is called a precipitation reaction.

Basic idea of precipitation reactions – note: when working with precipitation reactions, the solubility rules for ionic compounds are used to determine whether a precipitate forms.
In the penny-composition experiment, if zinc completely reacts with the HCl, then the theoretical yield of copper should be equivalent to the actual yield.

The five main types of redox reactions are combination, decomposition, displacement, combustion, and disproportionation. Learning objectives: explain the processes involved in a redox reaction and describe what happens to its various components.

Combustion is a type of redox reaction in which the reductant (fuel) and the oxidant (often, but not necessarily, molecular oxygen) react vigorously and produce significant amounts of heat, light, and often flames.
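To tie these snippets together, here is a worked illustration in LaTeX notation of the two reaction classes discussed above, using the zinc/hydrochloric acid and magnesium sulfate/barium hydroxide examples already mentioned. In the single-replacement reaction, zinc is oxidized from 0 to +2 while hydrogen is reduced from +1 to 0, so it is also a redox reaction; in the double-displacement reaction, the solubility rules predict that both products are insoluble and precipitate, and no oxidation states change.

% Single replacement (also redox): Zn 0 -> +2 (oxidized), H +1 -> 0 (reduced)
\mathrm{Zn(s) + 2\,HCl(aq) \longrightarrow ZnCl_2(aq) + H_2(g)}

% Double displacement (precipitation): both products insoluble by the solubility rules
\mathrm{MgSO_4(aq) + Ba(OH)_2(aq) \longrightarrow BaSO_4(s)\downarrow + Mg(OH)_2(s)\downarrow}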
NASA scientists announced the discovery of a massive 'super-Earth' planet located 180 light-years from Earth, found using the planet-hunting Kepler space telescope. The new planet is about two and a half times the size of Earth, with a diameter of approximately 20,000 miles, and is estimated to be 12 times as massive as Earth. Planets this large do not exist in our solar system. The planet circles a star about 180 light-years from Earth and is unlivable, being either a water world or a planet wrapped in a thick atmosphere. During a test run of the telescope in early 2014, a team of astrophysicists led by Andrew Vanderburg detected a planet passing in front of a star designated HIP 116454. Follow-up observations with the Canadian MOST satellite and ground-based telescopes confirmed the planet's presence. The new planet lies in the constellation Pisces. Vanderburg, of the Harvard-Smithsonian Center for Astrophysics and lead author of the study, said the new discovery is ripe for follow-up studies. He also pointed to the reactivation of Kepler, comparing the mission to a phoenix rising from the ashes.

NASA launched Kepler in March 2009 to discover how often Earth-like planets occur throughout the Milky Way. So far it has confirmed almost 1,000 such planets and identified more than 3,500 additional candidates. However, in 2013 one of the reaction wheels that kept the telescope precisely pointed broke down. Engineers succeeded in stabilizing the spacecraft by using the pressure of sunlight on its solar panels, and in November 2013 NASA announced that Kepler could remain in operation for at least the next three to four years. The spacecraft has contributed significantly to broadening our understanding of the abundance of habitable, Earth-size planets in the Milky Way, and it has also revolutionized the understanding of the internal structure of stars through asteroseismology.
1969 Curaçao uprising The 1969 Curaçao uprising (known as Trinta di Mei, "Thirtieth of May", in Papiamentu, the local language) was a series of riots on the Caribbean island of Curaçao, then part of the Netherlands Antilles, a semi-independent country in the Kingdom of the Netherlands. The uprising took place mainly on May 30, but continued into the night of May 31 – June 1, 1969. The riots arose from a strike by workers in the oil industry. A protest rally during the strike turned violent, leading to widespread looting and destruction of buildings and vehicles in the central business district of Curaçao's capital, Willemstad. Several causes for the uprising have been cited. The island's economy, after decades of prosperity brought about by the oil industry, particularly a Shell refinery, was in decline and unemployment was rising. Curaçao, a former colony of the Netherlands, became part of the semi-independent Netherlands Antilles under a 1954 charter, which redefined the relationship between the Netherlands and its former colonies. Under this arrangement, Curaçao was still part of the Kingdom of the Netherlands. Anti-colonial activists decried this status as a continuation of colonial rule but others were satisfied the political situation was beneficial to the island. After slavery was abolished in 1863, black Curaçaoans continued to face racism and discrimination. They did not participate fully in the riches resulting from Curaçao's economic prosperity and were disproportionately affected by the rise in unemployment. Black Power sentiments in Curaçao were spreading, mirroring developments in the United States and across the Caribbean, of which Curaçaoans were very much aware. The Democratic Party dominated local politics but could not fulfill its promise to maintain prosperity. Radical and socialist ideas became popular in the 1960s. In 1969, a labor dispute arose between a Shell sub-contractor and its employees. This dispute escalated and became increasingly political. A demonstration by workers and labor activists on May 30 became violent, sparking the uprising. The riots left two people dead and much of central Willemstad destroyed, and hundreds of people were arrested. The protesters achieved most of their immediate demands: higher wages for workers and the Netherlands Antillean government's resignation. It was a pivotal moment in the history of Curaçao and of the vestigial Dutch Empire. New parliamentary elections in September gave the uprising's leaders seats in parliament, the Estates of the Netherlands Antilles. A commission investigated the riots; it blamed economic issues, racial tensions, and police and government misconduct. The uprising prompted the Dutch government to undertake new efforts to fully decolonize the remains of its colonial empire. Suriname became independent in 1975 but leaders of the Netherlands Antilles resisted independence, fearing the economic repercussions. The uprising stoked long-standing distrust of Curaçao in nearby Aruba, which seceded from the Netherlands Antilles in 1986. Papiamentu gained social prestige and more widespread use after the uprising. It was followed by a renewal in Curaçaoan literature, much of which dealt with local social issues and sparked discussions about Curaçao's national identity. Curaçao is an island in the Caribbean Sea. It is a country ( Dutch: land) within the Kingdom of the Netherlands. In 1969, Curaçao had a population of around 141,000, of whom 65,000 lived in the capital, Willemstad. 
Until 2010, Curaçao was the most populous island and seat of government of the Netherlands Antilles, a country and former Dutch colony composed of six Caribbean islands, which in 1969 had a combined population of around 225,000. In the 19th century the island's economy was in poor shape. It had few industries other than the manufacture of dyewood, salt, and straw hats. After the Panama Canal was built and oil was discovered in Venezuela's Maracaibo Basin, Curaçao's economic situation improved. Shell opened an oil refinery on the island in 1918; the refinery was continually expanded until 1930. The plant's production peaked in 1952, when it employed around 11,000 people. This economic boom made Curaçao one of the wealthiest islands in the region and raised living standards there above even those in the Netherlands. This wealth attracted immigrants, particularly from other Caribbean islands, Suriname, Madeira, and the Netherlands. In the 1960s, the number of people working in the oil industry fell and by 1969, Shell's workforce in Curaçao had dropped to around 4,000. This was a result both of automation and of sub-contracting. Employees of sub-contractors typically received lower wages than Shell workers. Unemployment on the island rose from 5,000 in 1961 to 8,000 in 1966, with nonwhite, unskilled workers particularly affected. The government's focus on attracting tourism brought some economic growth but did little to reduce unemployment. The rise of the oil industry led to the importation of civil servants, mostly from the Netherlands. This led to a segmentation of white, Protestant Curaçaoan society into landskinderen, those whose families had been in Curaçao for generations, and makamba, immigrants from Europe who had closer ties to the Netherlands. Dutch immigrants undermined native white Curaçaoans' political and economic hegemony. As a result, the latter began to emphasize their Antillean identity and use of Papiamentu, the local Creole language. Dutch cultural dominance in Curaçao was a source of conflict; for example, the island's official language was Dutch, which was used in schools, creating difficulties for many students. Another issue that would come to the fore in the uprising was the Netherlands Antilles', and specifically Curaçao's, relationship with the Netherlands. The Netherlands Antilles' status had been changed in 1954 by the Charter for the Kingdom of the Netherlands. Under the Charter, the Netherlands Antilles, like Suriname until 1975, was part of the Kingdom of the Netherlands but not of the Netherlands itself. Foreign policy and national defense were Kingdom matters and presided over by the Council of Ministers of the Kingdom of the Netherlands, which consisted of the full Council of Ministers of the Netherlands with one minister plenipotentiary for each of the countries Netherlands Antilles and Suriname. Other issues were governed at the country or island level. Although this system had its proponents, who pointed to the fact that managing its own foreign relations and national defense would be too costly for a small country like the Netherlands Antilles, many Antilleans saw it as a continuation of the area's subaltern colonial status. There was no strong pro-independence movement in the Antilles as most local identity discourses centered around insular loyalty. The Dutch colonization of Curaçao began with the importation of African slaves in 1641, and in 1654 the island became the Caribbean's main slave depot.
Only in 1863, much later than Britain or France, did the Netherlands abolish slavery in its colonies. A government scholarship program allowed some Afro-Curaçaoans to attain social mobility but the racial hierarchy from the colonial era remained largely intact and blacks continued to face discrimination and were disproportionately affected by poverty. Although 90% of Curaçao's population was of African descent, the spoils of the economic prosperity that began in the 1920s benefited whites and recent immigrants much more than black native Curaçaoans. Like the rest of the Netherlands Antilles, Curaçao was formally democratic but political power was mostly in the hands of white elites. The situation of black Curaçaoans was similar to that of blacks in the United States and Caribbean countries such as Jamaica, Trinidad and Tobago, and Barbados. The movement leading up to the 1969 uprising used many of the same symbols and rhetoric as Black Power and civil rights movements in those countries. A high Antillean government official would later claim that the island's wide-reaching mass media was one of the uprising's causes. People in Curaçao were aware of events in the US, Europe, and Latin America. Many Antilleans, including students, traveled abroad and many Dutch and American tourists visited Curaçao and many foreigners worked in Curaçao's oil industry. The uprising would parallel anti-colonial, anti-capitalist, and anti-racist movements throughout the world. It was particularly influenced by the Cuban Revolution. Government officials in Curaçao falsely alleged that Cuban communists were directly involved in sparking the uprising, but the revolution did have an indirect influence in that it inspired many of the uprising's participants. Many of the uprising's leaders wore khaki uniforms similar to those worn by Fidel Castro. Black Power movements were emerging throughout the Caribbean and in the US at the time; foreign Black Power figures were not directly involved in the 1969 uprising but they inspired many of its participants. Local politics also contributed to the uprising. The center-left Democratic Party (DP) had been in power in the Netherlands Antilles since 1954. The DP was more closely connected to the labor movement than its major rival, the National People's Party (NVP). This relationship was strained by the DP's inability to satisfy expectations it would improve workers' conditions. The DP was mainly associated with the white segments of the working class and blacks criticized it for primarily advancing white interests. The 1960s also saw the rise of radicalism in Curaçao. Many students went to the Netherlands to study and some returned with radical left-wing ideas and founded the Union Reformista Antillano (URA) in 1965. The URA established itself as a socialist alternative to the established parties, although it was more reformist than revolutionary in outlook. Beyond parliamentary politics, Vitó, a weekly magazine at the center of a movement aiming to end the economic and political exploitation of the masses thought to be a result of neo-colonialism, published analyses of local economic, political, and social conditions. Vitó started being published in Papiamentu rather than in Dutch in 1967, and gained a mass following. It had close ties with radical elements in the labor movement. Papa Godett, a leader in the dock workers' union, worked with Stanley Brown, the editor of Vitó. 
Although the progressive priest Amado Römer had warned that "great changes still need to come through a peaceful revolution, because, if this doesn't happen peacefully, the day is not far off when the oppressed [...] will rise up", Curaçao was thought an unlikely site for political turmoil despite low wages, high unemployment, and economic disparities between blacks and whites. The relative tranquility was attributed by the island's government to the strength of family ties. In a 1965 pitch to investors, the government ascribed the absence of a communist party and labor unions' restraint to the fact that "Antillean families are bound together by unusually strong ties and therefore extremist elements have little chance to interfere in labor relations". Labor relations, including those between Shell and the refinery's workers, had indeed generally been peaceful. After two minor strikes in the 1920s and another in 1936, a contract committee for Shell workers was established. In 1942, workers of Dutch nationality gained the right to elect representatives to this committee. In 1955, the Puerto Rican section of the American labor federation Congress of Industrial Organizations (CIO) aided workers in launching the Petroleum Workers' Federation of Curaçao (PWFC). In 1957, the Federation reached a collective bargaining agreement with Shell for workers at the refinery. The PWFC was part of the General Conference of Trade Unions (AVVC), the island's largest labor confederation. The AVVC generally took a moderate stance in labor negotiations and was often criticized for this, and for its close relationship to the Democratic Party, by the more radical parts of the Curaçaoan labor movement. Close relations between unions and political leaders were widespread in Curaçao, though few unions were explicitly allied with a particular party and the labor movement was starting to gain independence. The Curaçao Federation of Workers (CFW), another union in the AVVC, represented construction workers employed by the Werkspoor Caribbean Company, a Shell sub-contractor. The CFW was to play an important role in the events that led to the uprising. Among the unions criticizing the AVVC was the General Dock Workers Union (AHU), which was led by Papa Godett and Amador Nita and was guided by a revolutionary ideology seeking to overthrow the remnants of Dutch colonialism, especially discrimination against blacks. Godett was closely allied with Stanley Brown, the editor of Vitó. The labor movement before the 1969 uprising was very fragmented and personal animosity between labor leaders further exacerbated this situation. In May 1969, there was a labor dispute between the CFW and Werkspoor. It revolved around two central issues. For one, Antillean Werkspoor employees received lower wages than workers from the Netherlands or other Caribbean islands as the latter were compensated for working away from home. Secondly, Werkspoor employees performed the same work as Shell employees but received lower wages. Werkspoor's response pointed to the fact that it could not afford to pay higher wages under its contract with Shell. Vitó was heavily involved in the dispute, helping to keep the conflict in the public consciousness. Though the dispute between CFW and Werkspoor received the most attention, that month significant labor unrest occurred throughout the Netherlands Antilles. On May 6, around 400 Werkspoor employees went on strike.
The Antillean Werkspoor workers received support and solidarity from non-Antilleans at Werkspoor and from other Curaçaoan unions. On May 8, this strike ended with an agreement to negotiate a new contract with government mediation. These negotiations failed, leading to a second strike that began on May 27. The dispute became increasingly political as labor leaders felt the government should intervene on their behalf. The Democratic Party was in a dilemma, as it did not wish to lose support from the labor movement and was wary of drawn-out and disruptive labor disputes, but also felt that giving in to excessive demands by labor would undermine its strategy to attract investments in industry. As the conflict progressed, radical leaders including Amador Nita and Papa Godett gained influence. On May 29, as a moderate labor figure was about to announce a compromise and postpone a strike, Nita took that man's notes and read a declaration of his own. He demanded the government resign and threatened a general strike. The same day, between thirty and forty workers marched to Fort Amsterdam, the Antillean government's seat, contending that the government, which had refused to intervene in the dispute, was contributing to the repression of wages. While the strike was led by the CFW, the PWFC under pressure from its members, showed solidarity with the strikers and decided to call for a strike to support the Werkspoor workers. On the morning of May 30, more unions announced strikes in support of the CFW's dispute with Werkspoor. Between three and four thousand workers gathered at a strike post. While the CFW emphasized that this was merely an economic dispute, Papa Godett, the dock workers' leader and Vitó activist, advocated a political struggle in his speech to the strikers. He criticized the government's handling of the labor dispute and demanded its removal. He called for another march to Fort Amsterdam, which was seven miles (11 km) away in Punda in downtown Willemstad. "If we don't succeed without force, then we have to use force. [...] The people is the government. The present government is no good and we will replace it", he proclaimed. The march was five thousand people strong when it started moving towards the city center. As it progressed through the city, people who were not associated with the strike joined, most of them young, black, and male, some oil workers, some unemployed. There were no protest marshals and leaders had little control over the crowd's actions. They had not anticipated any escalation. Among the slogans the crowd chanted were "Pan y rekonosimiento" ("Bread and recognition"), "Ta kos di kapitalista, kibra nan numa" ("These are possessions of capitalists, just destroy them"), and "Tira piedra. Mata e kachónan di Gobièrnu. Nos mester bai Punda, Fòrti. Mata e makambanan" ("Throw stones. Death to the government dogs. Let's go to Punda, to the fort. Death to the makamba"). The march became increasingly violent. A pick-up truck with a white driver was set on fire and two stores were looted. Then, large commercial buildings including a Coca-Cola bottling plant and a Texas Instruments factory were attacked, and marchers entered the buildings to halt production. Texas Instruments had a poor reputation because it had prevented unionization among its employees. Housing and public buildings were generally spared. Once it became aware, the police moved to stop the rioting and called for assistance from the local volunteer militia and from Dutch troops stationed in Curaçao. 
The police, with only sixty officers at the scene, was unable to halt the march and ended up enveloped by the demonstration, with car drivers attempting to hit them. The police moved to secure a hill on the march route and were pelted with rocks. Papa Godett was shot in the back by the police; he later said the police had orders to kill him, while law enforcement said officers acted only to save their own lives. Godett was taken to the hospital by members of the demonstration and parts of the march broke away to follow them. One of two fire trucks dispatched to assist the police was set on fire and pushed towards the police lines. The striker steering it was shot and killed. The main part of the march moved to Punda, Willemstad's central business district where it separated into smaller groups. The protesters chanted "Awe jiu di Korsou a bira konjo" ("Now the people of Curaçao are really fed up") and "Nos lo sinja nan respeta nos" ("We will teach them to respect us"). Some protesters crossed a bridge to the other side of Sint Anna Bay, an area known as Otrabanda. The first building burned in Otrabanda was a shop Vitó had criticized for having particularly poor working conditions. From this store, flames spread to other buildings. Stores on both sides of the bay were looted and subsequently set on fire, as were an old theater and the bishop's palace. Women took looted goods home in shopping carts. There was an attempt to damage the bridge that crossed the bay. The government imposed a curfew and a ban on liquor sales. The Prime Minister of the Netherlands Antilles Ciro Domenico Kroon went into hiding during the riots while Governor Cola Debrot and the Deputy Governor Wem Lampe were also absent. Minister of Justice Ronchi Isa requested the assistance of elements of the Netherlands Marine Corps stationed in Curaçao. While constitutionally required to honor this request under the Charter, the Kingdom's Council of Ministers did not officially approve it until later. The soldiers, however, immediately joined police, local volunteers and firemen as they fought to stop the rioting, put out fires in looted buildings, and guarded banks and other key buildings while thick plumes of smoke emanated from the city center. Many of the buildings in this part of Willemstad were old and therefore vulnerable to fire while the compact nature of the central business district further hampered firefighting efforts. In the afternoon, clergymen made a statement via radio urging the looters to stop. Meanwhile, union leaders announced that they had reached a compromise with Werkspoor. Shell workers would receive equal wages regardless of whether they were employed by contractors and regardless of their national origin. Although the protesters achieved their economic aims, rioting continued throughout the night and slowly abated on May 31. The uprising's focus shifted from economic demands to political goals. Union leaders, both radical and moderate, demanded the government's resignation and threatened a general strike. Workers broke into a radio station, forcing it to broadcast this demand; they argued that failed economic and social policies had led to the grievances and the uprising. On May 31, Curaçaoan labor leaders met with union representatives from Aruba, which was then also part of the Netherlands Antilles. The Aruban delegates agreed with the demand for the government's resignation, announcing Aruban workers would also go on a general strike if it was ignored. 
By the night of May 31 to June 1, the violence had ceased. Another 300 Dutch marines arrived from the Netherlands on June 1 to maintain order. The uprising cost two lives—the dead were identified as A. Gutierrez and A. Doran—and 22 police officers and 57 others were injured. The riots led to 322 arrests, including the leaders Papa Godett and Amador Nita of the dock workers' union, and Stanley Brown, the editor of Vitó. Godett was kept under police surveillance while he recovered in the hospital from his bullet wound. During the disturbances, 43 businesses and 10 other buildings were burned and 190 buildings were damaged or looted. Thirty vehicles were destroyed by fire. The damage caused by the uprising was valued at around US$40 million. The looting was highly selective, mainly targeting businesses owned by whites while avoiding tourists. In some cases rioters led tourists out of the disturbance to their hotels to protect them. Nevertheless, the riots drove away most tourists and damaged the island's reputation as a tourist destination. On May 31, Amigoe di Curaçao, a local newspaper, declared that with the uprising, "the leaden mask of a carefree, untroubled life in the Caribbean Sea was ripped from part of Curaçao, perhaps forever". The riots evoked a wide range of emotions among the island's population; "Everyone was crying" when it ended, said one observer. There was pride that Curaçaoans had finally stood up for themselves. Some were ashamed it had come to a riot or of having taken part. Others were angry at the rioters, the police, or at the social wrongs that had given rise to the riots. The uprising achieved both its economic and political demands. On June 2 all parties in the Estates of the Netherlands Antilles, pressured by the Chamber of Commerce that feared further strikes and violence, agreed to dissolve that body. On June 5, the Prime Minister Ciro Domenico Kroon submitted his resignation to the Governor. Elections for the Estates were set for September 5. On June 26, an interim government headed by new Prime Minister Gerald Sprockel took charge of the Netherlands Antilles. Trinta di Mei (Thirtieth of May in Papiamentu) became a pivotal moment in the history of Curaçao, contributing to the end of white political dominance. While Peter Verton as well as William Averette Anderson and Russell Rowe Dynes characterize the events as a revolt, historian Gert Oostindie considers this term too broad. All of these writers agree revolution was never a possibility. Anderson, Dynes, and Verton regard the uprising as part of a broader movement, the May Movement or May 30 Movement, which began with the strikes in early 1969 and continued in electoral politics and with another wave of strikes in December 1969. The uprising's leaders, Godett, Nita, and Brown, formed a new political party, the May 30 Labor and Liberation Front (Frente Obrero Liberashon 30 Di Mei, FOL), in June 1969. Brown was still in prison when the party was founded. The FOL fielded candidates in the September election against the Democratic Party, the National People's Party, and the URA with Godett as its top candidate. The FOL campaigned on the populist, anti-colonial, and anti-Dutch messages voiced during the uprising, espousing black pride and a positive Antillean identity. One of its campaign posters depicted Kroon, the former Prime Minister and the Democratic Party's main candidate, shooting protesters. 
The FOL received 22% of the vote in Curaçao and won three of the island's twelve seats in the Estates, which had a total of twenty-two seats. The three FOL leaders took those seats. In December, Ernesto Petronia of the Democratic Party became the Netherlands Antilles' first black Prime Minister and FOL formed part of the coalition government. In 1970, the Dutch government appointed Ben Leito as the first black Governor of the Netherlands Antilles. In October of the same year, a commission similar to the Kerner Commission in the United States was established to investigate the uprising. Five of its members were Antillean and three were Dutch. It released its report in May 1970 after gathering data, conducting interviews, and holding hearings. It deemed the uprising unexpected, finding no evidence it had been pre-planned. The report concluded that the primary causes of the riots were racial tensions and disappointed economic expectations. The report was critical of the conduct of the police and on its recommendation a Lieutenant Governor with police experience was appointed. Patronage appointments were reduced in keeping with the commission's recommendations but most of its suggestions, and its criticism of government and police conduct, were ignored. The commission also pointed to a contradiction between the demands for national independence and economic prosperity: according to the report, independence would almost certainly lead to economic decline. On June 1, 1969, in The Hague, the seat of the Dutch government, between 300 and 500 people, including some Antillean students, marched in support of the uprising in Curaçao and clashed with police. The protesters denounced the deployment of Dutch troops and called for Antillean independence. The 1969 uprising became a watershed moment in the decolonization of Dutch possessions in the Americas. The Dutch parliament discussed the events in Curaçao on June 3. The parties in government and the opposition agreed that no other response to the riots was possible under the Kingdom's charter. The Dutch press was more critical. Images of Dutch soldiers patrolling the streets of Willemstad with machine guns were shown around the world. Much of the international press viewed Dutch involvement as a neo-colonial intervention. The Indonesian War of Independence, in which the former Dutch East Indies broke away from the Netherlands in the 1940s and in which some 150,000 Indonesians and 5,000 Dutch died, was still on the Dutch public's minds. In January 1970, consultations about independence between Dutch Minister for Surinamese and Antillean Affairs Joop Bakker, Surinamese Prime Minister Jules Sedney, and Petronia began. The Dutch government, fearing after Trinta di Mei it could be forced into a military intervention, wanted to release the Antilles and Suriname into independence; according to Bakker, "It would be preferably today rather than tomorrow that the Netherlands would get rid of the Antilles and Suriname". Yet, the Netherlands insisted it did not wish to force independence on the two countries. Deliberations over the next years revealed that independence would be a difficult task, as the Antilleans and the Surinamese were concerned about losing Dutch nationality and Dutch development aid. In 1973, both countries rejected a Dutch proposal for a path to independence. 
In Suriname's case, this impasse was overcome suddenly in 1974 when new administrations took power both in the Netherlands and in Suriname, and rapid negotiations resulted in Surinamese independence on November 25, 1975. The Netherlands Antilles resisted any swift move to independence. It insisted that national sovereignty would only be an option once it had "attained a reasonable level of economic development", as its Prime Minister Juancho Evertsz put it in 1975. Surveys in the 1970s and 1980s showed most of Curaçao's inhabitants agreed with this reluctance to pursue independence: clear majorities favored continuing the Antilles' ties to the Netherlands, but many were in favor of loosening them. By the end of the 1980s, the Netherlands accepted that the Antilles would not be fully decolonized in the near future. The 1969 uprising in Curaçao encouraged separatist sentiments in Aruba that had existed since the 1930s. Unlike the black-majority Curaçao, most Arubans were of mixed European and Native descent. Though Aruba is just 73 miles (117 km) away from Curaçao, there was a long-standing resentment with significant racial undertones about being ruled from Willemstad. Aruban distrust of Curaçao was further stoked by the uprising's Black Power sentiments. The Aruban island government started working towards separation from the Antilles in 1975 and in 1986, Aruba became a separate country within the Kingdom of the Netherlands. Eventually, in 2010, insular nationalism led to the Netherlands Antilles being completely dissolved and Curaçao becoming a country as well. Trinta di Mei also reshaped Curaçao's labor movement. A strike wave swept Curaçao in December 1969. Around 3,500 workers participated in eight wildcat strikes that took place within ten days. New, more radical leaders were able to gain influence in the labor movement. As a result of unions' involvement in Trinta di Mei and the December strikes, Curaçaoans had considerably more favorable views of labor leaders than of politicians, as a poll in August 1971 found. In the following years, unions built their power and gained considerable wage increases for their members, forcing even the notoriously anti-union Texas Instruments to negotiate with them. Their membership also grew; the CFW, for instance, went from a pre-May 1969 membership of 1,200 to around 3,500 members in July 1970. The atmosphere after the uprising led to the formation of four new unions. The labor movement's relationship to politics was changed by Trinta di Mei. Unions had been close to political parties and the government for several reasons: They had not existed for very long and were still gaining their footing. Secondly, the government played an important role in economic development and, finally, workers' and unions' position vis-à-vis employers was comparatively weak and they relied on the government's help. The events of 1969 both expressed and hastened the development of a more distant relationship between labor and the state. Government and unions became more distinct entities, although they continued to try to influence one another. Labor was now willing to take a militant position against the state and both parties realized that labor was a force in Curaçaoan society. The government was accused of letting workers down and of using force to suppress their struggle. Unions' relationship with employers changed in a similar way; employers were now compelled to recognize labor as an important force. 
The 1969 uprising put an end to white dominance in politics and administration in Curaçao, and led to the ascendance of a new black political elite. Nearly all of the governors, prime ministers, and ministers in the Netherlands Antilles and Curaçao since 1969 have been black. Although there has been no corresponding change in the island's business elite, upward social mobility increased considerably for well-educated Afro-Curaçaoans and led to improved conditions for the black middle class. The rise of black political elites was controversial from the start. Many FOL supporters were wary of the party entering into government with the Democratic Party, which they had previously denounced as corrupt. The effects of the emergence of new elites for lower-class black Curaçaoans have been limited. Although workers received some new legal protections, their living standards stagnated. In a 1971 survey, three quarters of the respondents said their economic situation had remained the same or worsened. This is mostly the result of difficult conditions that hamper most Caribbean economies, but critics have also blamed mismanagement and corruption by the new political elites. Among the lasting effects of the uprising was an increase in the prestige of Papiamentu, which became more widely used in official contexts. Papiamentu was spoken by most Curaçaoans but its use was shunned; children who spoke it on school playgrounds were punished. According to Frank Martinus Arion, a Curaçaoan writer, "Trinta di Mei allowed us to recognize the subversive treasure we had in our language". It empowered Papiamentu speakers and sparked discussions about the use of the language. Vitó, the magazine that had played a large part in the build-up to the uprising, had long called for Papiamentu becoming Curaçao's official language once it became independent of the Netherlands. It was recognized as an official language on the island, along with English and Dutch, in 2007. Curaçaoan parliamentary debate is now conducted in Papiamentu and most radio and television broadcasts are in this language. Primary schools teach in Papiamentu but secondary schools still teach in Dutch. Trinta di Mei also accelerated the standardization and formalization of Papiamentu orthography, a process that had begun in the 1940s. The events of May 30, 1969, and the situation that caused them were reflected in local literature. Papiamentu was considered by many devoid of any artistic quality, but after the uprising literature in the language blossomed. According to Igma M. G. van Putte-de Windt, it was only in the 1970s after the May 30 uprising that an "Antillean dramatic expression in its own right" emerged. Days before the uprising, Stanley Bonofacio premiered Kondená na morto ("Sentenced to death"), a play about the justice system in the Netherlands Antilles. It was banned for a time after the riots. In 1970, Edward A. de Jongh, who watched the riots as he walked the streets, published the novel 30 di Mei 1969: E dia di mas historiko ("May 30, 1969: The Most Historic Day") describing what he perceived as the underlying causes of the uprising: unemployment, the lack of workers' rights, and racial discrimination. In 1971, Pacheco Domacassé wrote the play Tula about a 1795 slave revolt in Curaçao and in 1973 he wrote Konsenshi di un pueblo (A People's Conscience), which deals with government corruption and ends in a revolt reminiscent of the May 30 uprising. 
Curaçaoan poetry after Trinta di Mei, too, was rife with calls for independence, national sovereignty, and social justice. The 1969 uprising opened up questions concerning Curaçaoan national identity. Prior to Trinta di Mei, one's place in society was determined largely by race; afterward these hierarchies and classifications were put into question. This led to debates about whether Afro-Curaçaoans were the only true Curaçaoans and to what extent Sephardic Jews and the Dutch, who had been present throughout Curaçao's colonial period, and more recent immigrants belonged. In the 1970s, there were formal attempts at nation-building; an island anthem was introduced in 1979, an island Hymn and Flag Day were instituted in 1984, and resources were devoted to promoting the island's culture. Papiamentu became central to Curaçaoan identity. More recently, civic values, rights of participation, and a common political knowledge are said to have become important issues in determining national identity. - Anderson & Dynes 1975, p. 3, Oostindie 2015, p. 241, Sharpe 2014, p. 117. - Anderson & Dynes 1975, pp. 33–35. - Oostindie 2015, pp. 243–244. - Anderson & Dynes 1975, p. 35. - Anderson & Dynes 1975, p. 55. - Anderson & Dynes 1975, p. 57. - Anderson & Dynes 1975, pp. 56–57. - Anderson & Dynes 1975, pp. 35–36. - Anderson & Dynes 1975, pp. 48–49. - Oostindie 2015, p. 241. - Oostindie & Klinkers 2003, pp. 10, 84–85, Oostindie 2015, p. 240. - Anderson & Dynes 1975, pp. 43–43, Oostindie & Klinkers 2003, pp. 85–86, Oostindie 2015, p. 240. - Anderson & Dynes 1975, p. 48, Oostindie 2015, p. 242. - Oostindie & Klinkers 2003, p. 59, Blakely 1993, p. 29. - Oostindie 2015, p. 247. - Oostindie 2015, pp. 241–242. - Oostindie 2015, p. 247, Sharpe 2009, p. 942. - Anderson & Dynes 1975, pp. 11–13. - Anderson & Dynes 1975, pp. 7–9, Oostindie 2015, pp. 249–250. - Anderson & Dynes 1975, pp. 9–10, Oostindie 2015, p. 249. - Anderson & Dynes 1975, pp. 50–53, Oostindie 2015, p. 247. - Anderson & Dynes 1975, pp. 62–63, Oostindie 2015, p. 247. - Anderson & Dynes 1975, pp. 63–65, Oostindie 2015, p. 248, Verton 1976, p. 89. - Oostindie 2015, p. 244. - Anderson & Dynes 1975, p. 4. - Anderson & Dynes 1975, pp. 36–37. - Anderson & Dynes 1975, pp. 59–60. - Römer 1981, pp. 147–148. - Anderson & Dynes 1975, pp. 59–62. - Anderson & Dynes 1975, pp. 69–70. - Anderson & Dynes 1975, pp. 71–72. - Anderson & Dynes 1975, pp. 137–138. - Oostindie 2015, pp. 248–249. - Anderson & Dynes 1975, pp. 72–73, Oostindie 2015, p. 245. - Anderson & Dynes 1975, p. 74. - Anderson & Dynes 1975, p. 5. - Anderson & Dynes 1975, pp. 76–77, Oostindie 2015, p. 245, "Striking Oil Workers Burn, Loot in Curacao". Los Angeles Times. May 31, 1969, p. 2. - Oostindie 2015, p. 245, Verton 1976, p. 91. - Anderson & Dynes 1975, pp. 78–79. - Anderson & Dynes 1975, pp. 79–80. - Oostindie 2015, p. 239, Verton 1976, p. 90. - Anderson & Dynes 1975, pp. 80–81. - Verton 1976, p. 90. - Maidenberg, H. J. "Premier Silent as Rioters in Curacao Insist He Quit". The New York Times. June 2, 1969, p. 1. - Anderson & Dynes 1975, p. 81, Oostindie & Klinkers 2013, p. 98, "Striking Oil Workers Burn, Loot in Curacao". Los Angeles Times. May 1969, p. 2. - Anderson & Dynes 1975, pp. 81–83, 85. - Anderson & Dynes 1975, p. 83. - Anderson & Dynes 1975, p. 85. - "Strikers on Curacao Insist Regime Quit; Two Dead in Rioting". The New York Times. May 31, 1969, p. 1. - Anderson & Dynes 1975, pp. 83, 86. - Anderson & Dynes 1975, pp. 83, 88, "Strikers on Curacao Insist Regime Quit; Two Dead in Rioting". 
"The Standardisation of Papiamentu: New Trends, Problems and Perspectives". Bulletin Suisse de Linguistique Appliquée. 69 (1): 59–74. ISSN 1023-2044. - Eckkrammer, Eva (2007). "Papiamentu, Cultural Resistance, and Socio-Cultural Challenges: The ABC Islands in a Nutshell". Journal of Caribbean Literatures. 5 (1): 73–93. ISSN 2167-9460. JSTOR 40986319. - Oostindie, Gert; Verton, Peter (1998). "Ki sorto di Reino/What kind of Kingdom?: Antillean and Aruban views and expectations of the Kingdom of the Netherlands". New West Indian Guide / Nieuwe West-Indische Gids. 72 (1–2): 117–125. doi: 10.1163/13822373-90002599. ISSN 2213-4360. - Oostindie, Gert; Klinkers, Inge (2003). Decolonising the Caribbean: Dutch Policies in a Comparative Perspective. Amsterdam: Amsterdam University Press. ISBN 90-5356-654-6. - Oostindie, Gert (2015). "Black Power, Popular Revolt, and Decolonization in the Dutch Caribbean". In Quinn, Kate (ed.). Black Power in the Caribbean. Gainesville, FL: University Press of Florida. pp. 239–260. doi: 10.5744/florida/9780813049090.003.0012. ISBN 978-0-8130-4909-0. - Römer, R.A. (1981). "Labour Unions and Labour Conflict in Curaçao". New West Indian Guide / Nieuwe West-Indische Gids. 55 (1): 138–153. doi: 10.1163/22134360-90002122. ISSN 2213-4360. - Sharpe, Michael Orlando (2009). "Curaçao, 1969 Uprising". In Ness, Immanuel (ed.). The International Encyclopedia of Revolution and Protest: 1500 to the Present. Malden, MA: Wiley-Blackwell. pp. 942–943. doi: 10.1002/9781405198073.wbierp0429. ISBN 978-1-4051-8464-9. - Sharpe, Michael Orlando (2015). "Race, Color, and Nationalism in Aruban and Curaçaoan Political Identities". In Essed, Philomena; Hoving, Isabel (eds.). Dutch Racism. Amsterdam/New York: Brill. pp. 117–131. doi: 10.1163/9789401210096_007. ISBN 978-9-0420-3758-8. - van Putte-de Windt, Igma M. G. (1994). "Forms of Dramatic Expression in the Leeward Islands". In Arnold, A. James (ed.). A History of Literature in the Caribbean: English- and Dutch-speaking countries. Comparative History of Literatures in European Languages. Amsterdam/Philadelphia: John Benjamins Publishing Company. pp. 597–614. doi: 10.1075/chlel.xv.56put. ISBN 1-58811-041-9. - Verton, Peter (1976). "Emancipation and Decolonization: The May Revolt and Its Aftermath in Curaçao". Revista/Review Interamericana. 6 (1): 88–101. - Verton, Peter (1977). "Modernization in Twentieth Century Curaçao: New Elites and Their Followers". Revista/Review Interamericana. 7 (2): 248–259.
Variance is defined as the average value of the squared differences from the mean. As the name indicates, it is a measure of variability from the mean. It shows how far a data set is spread out around the mean. If the elements are far apart from each other, a higher variance value is expected. On the other hand, if the elements are closer to the mean value and to each other, there is much less variability and hence the variance will be lower. Variance is represented by σ². To calculate the variance of a given dataset, first we have to find the mean value by adding all the elements and dividing the sum by the total number of elements. Once the mean value is determined, the variance can be calculated by the formula given below.

σ² = Σ(Xi – x̅)² / N

- σ² = population variance
- x̅ = population mean
- Xi = individual values
- N = size of the population

After finding the value of the mean x̅, it is subtracted from each element Xi and the result is squared to get the squared difference. Once the squared differences of all the elements are added, the sum is divided by the total number of elements in the population. The result is the variance of the dataset. Now, if we have to calculate the variance by taking a sample of the population, there is a change in the equation. For calculating the variance from sample data, the equation to be used is given below.

S² = Σ(Xi – x̅)² / (n – 1)

- S² = sample variance
- x̅ = sample mean
- Xi = individual values
- n = size of the sample

As you can see, 1 is subtracted from the sample size n. This is done according to Bessel's correction. What is Bessel's Correction? When we are using sample data from the population, there will be errors while calculating the variance and standard deviation, as we are only provided with a selection of elements and not the entire population. The value of the variance calculated from the sample data might be slightly different from the actual variance calculated by taking the whole population. This difference will be higher especially if the number of elements in the sample is low or the corresponding population size is high. So we need to correct the error to get closer to the actual variance. As we are working with the sample mean x̅, the elements will be much closer to the sample mean, and hence the squared differences will be smaller. This will lead to a smaller value in the numerator of the sample variance equation compared to the population variance equation. If we divide this sum of squared differences by the sample size n, this will lead to a variance value that is smaller than the actual value. But the variance value should be accurate. For this, we subtract 1 from the sample size so as to get a smaller denominator in the sample equation, thereby producing a larger variance value from the equation. This correction of the denominator from n to n – 1 is known as Bessel's correction. All values of variance that are not zero will be positive. We get a variance value of zero if the elements are identical. Standard deviation can be defined as a statistic used to measure the dispersion of a given dataset in relation to its mean, and it is expressed as the square root of the variance. In simple words, the standard deviation shows how spread out the elements are in a data set. It is also known as the root mean square deviation. It is a measure of the dispersion of the dataset.
If the elements are more spread out, there will be a higher measure of deviation and hence the standard deviation will be high; conversely, if the elements are less spread out, there will be a lower measure of deviation. The equations for the population and the sample standard deviation are given below:

σ = √[ Σ(Xi – x̅)² / N ]    (population)
S = √[ Σ(Xi – x̅)² / (n – 1) ]    (sample)

As can be seen from the equations, the standard deviation is simply the square root of the variance. We have used Bessel's correction here too, to adjust for the effect of the deflated numerator in the sample standard deviation equation. A standard deviation is always a non-negative number and it is always expressed in the same unit as the original data. The lowest value of the standard deviation is zero, which occurs when the elements are identical. Variance and standard deviation play a key role in statistical data analysis, which has applications in various fields including finance, business, trade, and polls.

Variance vs Standard Deviation

Both variance and standard deviation are measures of the spread of the elements in a data set around its mean value. For calculating both, we need to know the mean of the data. However, variance and standard deviation are not exactly the same. In fact, there are clear differences between the two parameters. Let us look at each difference in detail:
- The variance describes how far the elements are dispersed from the mean, whereas the standard deviation measures the amount of this dispersion of elements. In other words, the variance indicates the variability of the elements and the standard deviation quantifies it.
- One of the most important differences between variance and standard deviation is their units.
- The variance is a squared value. Hence its unit will not be the same as that of the dataset; the unit also gets squared. For instance, if the data set is in the unit kilometer, the variance has a unit of square kilometers.
- On the other hand, the standard deviation has the same unit as the original data. There is no difference in units.
- Variance indicates how far the individual elements are spread out in a dataset, and standard deviation indicates how much the observations differ from the mean value.
- The variance is the square of the standard deviation, so whenever the standard deviation is greater than 1 the variance will be the larger of the two values.
- The variance is the average of the squared differences of each element from the mean, and the standard deviation is the square root of the average of the squared differences from the mean.
- Variance is represented by σ² and the standard deviation is represented by σ.

Why do we square the differences from the mean? For both variance and standard deviation, we take the square of the difference of each element from the mean. This is to prevent the cancelling out of numbers of different signs. Sometimes the value of an element will be smaller than the mean, so the difference will give a negative number. When we take the average of these differences as they are, the negative and positive numbers might cancel each other out. So why not take the absolute value instead of the square? Let us illustrate this with an example. Suppose the differences from the mean for data set 1, which has 4 elements, are –1, 2, –4, 9. If we take the absolute values and divide by 4, we get: [ |–1| + |2| + |–4| + |9| ] / 4 = 4. Now if another data set 2 has the differences –4, 4, 4, 4, we get [ |–4| + |4| + |4| + |4| ] / 4 = 4. Now both have the same measure of variability even though data set 1 has a greater spread than data set 2.
By squaring each difference, we ensure that this problem will not arise. If we square each difference:
- The variance of data set 1 will be (1 + 4 + 16 + 81) / 4 = 25.5
- The variance of data set 2 will be (16 + 16 + 16 + 16) / 4 = 16
Here, data set 1, which has more spread, has a greater variance than data set 2.

Calculation of Variance and Standard Deviation With An Example

The distance between the school and the home of each student is estimated for a class that has 10 students. The distances in kilometers are listed as: 9, 4, 8, 10, 5, 3, 7, 8, 9, 7. N = 10. Mean distance = [ 9 + 4 + 8 + 10 + 5 + 3 + 7 + 8 + 9 + 7 ] / 10, so mean = x̅ = 70 / 10 = 7 km.

| X | X – Mean | (X – Mean)² |
| 9 | 2 | 4 |
| 4 | –3 | 9 |
| 8 | 1 | 1 |
| 10 | 3 | 9 |
| 5 | –2 | 4 |
| 3 | –4 | 16 |
| 7 | 0 | 0 |
| 8 | 1 | 1 |
| 9 | 2 | 4 |
| 7 | 0 | 0 |

Variance = [ 4 + 9 + 1 + 9 + 4 + 16 + 0 + 1 + 4 + 0 ] / 10 = 48 / 10 = 4.8 square km. Standard deviation = (4.8) ^ (1/2) ≈ 2.19 km.
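As a quick sanity check on the worked example, the following short Python sketch reproduces the population variance and standard deviation above, and also shows the sample (Bessel-corrected) versions for comparison; the variable names are illustrative only.

```python
# Population vs. sample variance and standard deviation for the worked example.
distances_km = [9, 4, 8, 10, 5, 3, 7, 8, 9, 7]

n = len(distances_km)
mean = sum(distances_km) / n                      # 70 / 10 = 7 km
squared_diffs = [(x - mean) ** 2 for x in distances_km]

population_variance = sum(squared_diffs) / n      # divide by N
sample_variance = sum(squared_diffs) / (n - 1)    # Bessel's correction: divide by n - 1

population_std = population_variance ** 0.5
sample_std = sample_variance ** 0.5

print(mean)                 # 7.0
print(population_variance)  # 4.8
print(population_std)       # ~2.19
print(sample_variance)      # ~5.33 (slightly larger, as the correction intends)
print(sample_std)           # ~2.31
```

The same numbers can be obtained with pvariance, variance, pstdev, and stdev from Python's standard statistics module.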
The monetary system

Course: Introduction to Macroeconomics

Types of currency

The functions of money

Currency is the stock of assets that can be readily mobilized for transactions. Beware of common parlance:
- "Have money" or "pay in cash" refers to cash. Around 10% of money is made up of cash.
- "Having money" or "earning a lot of money" refers to wealth or income. However, money is neither wealth (assets or fortune) nor income.
Money essentially has three functions:
- Store of value (money is a means of transferring purchasing power from the present to the future).
- Unit of account (the unit in which economic transactions are measured).
- Medium of exchange (the means used to purchase goods and services).
In a barter economy, exchange requires a double coincidence of wants, and economic agents can carry out only simple transactions. Money makes more indirect exchanges possible and reduces transaction costs. Liquidity: the ease with which an asset can be converted into the medium of exchange of the economic system. Currency is the most liquid asset of all.

Types of currency

Money can be seen as a good with important positive externalities: the more people who accept it, the more useful it is for each individual. Commodity money: most societies of the past used one good or another with an intrinsic value (for example gold, but also camels, furs, or salt, as the case may be). Fiduciary money (or fiat money): tangible money devoid of any intrinsic value, which owes its status as money to the fact that the State has conferred legal tender on it (all coins and banknotes in circulation). Scriptural money: intangible money, represented by an accounting entry that can be transformed into fiduciary money at any time (all assets held at the bank or post office). NB: credit cards are simply media allowing the transfer of scriptural money over time.

Monetary aggregates

How is money measured? Four monetary aggregates are commonly distinguished, from the narrowest to the broadest: M0, M1, M2, and M3. Normally "money" is understood to mean the aggregate M1. M0 (or, hereafter, H) is also referred to as the monetary base.
Central banks and the money supply

History of money and banking

The history of money is intimately linked to that of banks. From the 16th–17th centuries:
- Gold was deposited with a goldsmith to protect it from theft → a receipt was received in exchange.
- Receipts came to be used as a means of payment (the beginning of fiat money).
- Gold on deposit was lent out against interest → goldsmiths became bankers.
- Several notes were issued for each unit of gold held → confidence in the repayment ability of the goldsmith-bankers → reserves.
- Reserves were increased against the risk of bankruptcy, and national banks were created to control the issue of banknotes, which were granted legal tender status.
Until the First World War, banknotes were convertible into metal at the issuing institution at a rate established by the central bank (the gold standard system). Convertibility was abandoned in the 1920s (coins and banknotes have a value conferred on them by general agreement, but no intrinsic value, and they can no longer be exchanged for gold).

The role of central banks

In most countries, the quantity of money available (the money supply) is controlled by the state, and its supervision is delegated to an institution more or less independent of political power: the central bank. Central banks (i) supervise the banking system and (ii) ensure the stability of the monetary system by regulating the quantity of money in circulation in the economy, thereby influencing interest rates and sometimes the exchange rate. These instruments may affect inflation and the levels of output and employment in the short term, as will be discussed in the following chapters. The set of actions put in place by the central bank to influence the money supply and to supervise the proper functioning of the monetary system constitutes monetary policy.

The European Central Bank and the Eurosystem

The European Central Bank (ECB), located in Frankfurt, Germany, was created on June 1, 1998 by the European countries that make up the European Monetary Union (11 at the time, later 12, and now 17), i.e. the countries that decided to adopt a single currency (the euro) and a common monetary policy. The ECB and all the central banks of the European Union Member States that have adopted the euro make up the Eurosystem. The main objective of the ECB is to promote financial and price stability in order to ensure non-inflationary growth. The ECB is supposed to be independent of political power.

The Bank of England and the Federal Reserve

The Bank of England is the central bank of the United Kingdom. Established in 1694, it obtained independence in interest rate management only in 1997. As with the ECB, the main objective of the Bank of England is to promote price stability, but it is the government that defines this objective in concrete terms. The Federal Reserve is the central bank of the United States. Created in 1913, the "Fed" is composed of a Board of Governors (of which Ben Bernanke has been the chairman since 2006), the Federal Open Market Committee (FOMC), and twelve regional banks (the Federal Reserve Banks). The FOMC is the committee responsible for monetary policy and is made up of the seven members of the Board of Governors and the twelve presidents of the regional banks (of which only five have voting rights at any given time).
Role of the money supply

The size of the money supply is expected to change in line with the quantity of transactions taking place in the market, i.e. in line with the development of GDP (a proxy for the volume of transactions). As will become clearer in the rest of the lecture, when the money supply and GDP do not move at the same pace or in the same direction, economic problems (inflation, unemployment) may begin to arise. Central banks therefore undertake monetary policy actions aimed at influencing the evolution of the money supply. Measuring the money supply enables the CB (central bank) to know whether it is necessary to act on the quantity of money available in the economy in the event of imbalances (inflation in particular). But the quantity of money in circulation in the economic system is greater than the monetary base, which is the part directly controlled by the CB, and to understand its evolution it is also necessary to analyse the role played by the commercial banks.

Central bank instruments

There are three main tools. Central banks govern the amount of currency in circulation in the country through money market interventions (open-market operations), which involve buying or selling government bonds (or other non-monetary assets):
- Selling bonds → reduction of the money supply.
- Buying bonds → increase of the money supply.
Central banks also control the activities of commercial banks, especially the issuance of scriptural (book) money, by means of the discount rate and the reserve requirement ratio.

The central bank's balance sheet

The central bank is a true bank of banks: its main customers are the commercial banks (not households or companies); it provides them with banknotes and scriptural money and manages interbank payments on their giro accounts.

Creation of the monetary base

The monetary base (a liability on the CB's balance sheet) is the currency issued by the central bank (coins and banknotes) plus the giro accounts. It is directly under the control of the CB. The central bank exchanges banknotes (or giro account assets) with commercial banks only against a counterpart:
- foreign currency, or
- acknowledgements of debt (i.e. the bank extends credit).
The CB therefore buys gold, foreign currency, or securities with its own currency (the constitution of reserves). It never provides secondary banks with currency without a counterpart. By building up reserves, the CB gives itself the means to recover the currency it has issued at a later date, either:
- by selling its assets (gold or foreign exchange), or
- by having the loans made to the banks reimbursed.
Constitution of reserves → increase in the supply of money. Dissolution of reserves → decrease in the supply of money.

Commercial banks and the supply of money

The amount of money in circulation is influenced by the central bank's interventions in the asset market and its control over commercial banks, and also by the commercial banks' decisions on deposits and loans to their customers. At any given time commercial banks must hold a certain fraction of the deposits received as reserves, and they can lend the rest, thereby creating (scriptural) money and reinjecting money into the system. The fraction of deposits that banks are obliged to hold in the form of reserves is the reserve requirement ratio.
This coefficient is set by the central bank and can be changed according to the needs of monetary policy. NB: the effective reserve ratio of banks does not necessarily coincide with the reserve requirement ratio (banks may hold more reserves than those imposed by the central bank).

The balance sheet of commercial banks

The financial position of banks at a given point in time is summarized in their balance sheets. By convention, assets are shown on the left side of the balance sheet and liabilities on the right side.

Bank Money Creation and the Money Multiplier

The creation of scriptural money takes place through the mechanism of credit: each time a bank that has excess reserves uses them to make loans, scriptural money is created. In general, the loans in turn give rise to deposits that make it possible to make new loans (once the required reserves have been subtracted) → the phenomenon of multiplication of credits and deposits.

The Credit Multiplication Mechanism

1. Banks are assumed to hold only sight deposits, and the amount of reserves covers their holdings of cash and giro accounts. In the initial situation, the bank has no excess reserves. The reserve requirement ratio is r = 20%.
2. Now suppose that the central bank decides to increase the monetary base by repurchasing securities from the bank (for an amount of 100) and credits the giro accounts of the bank as a counterpart.
3. The bank uses its excess reserves to make new loans (for an amount of 100). This has two consequences:
- a) The granting of loans leads to new deposits (by hypothesis, there is no transformation of scriptural money into cash at this stage).
- b) The creation of new deposits requires an increase in required reserves.
4. The remainder of the excess reserves gives rise to new loans amounting to 80. Again, there is no loss of reserves, only the creation of money and the transformation of excess reserves into required reserves.
5. The process may continue until there are no more excess reserves, i.e. until all the initial excess reserves have been transformed into required reserves. The result is a total (maximum) creation of 500: in total, 500 of scriptural money was created from the 100 of excess reserves that initially appeared. The credit multiplier is therefore equal to 5. The credit multiplier is defined as the ratio between the total change in credits, ΔCR, and the initial change in excess reserves, ΔRE. Assuming that there are no "leakages" from the credit multiplier circuit, i.e. that the private sector does not seek to convert part of the newly created scriptural money into cash, the credit multiplier is the inverse of the reserve requirement ratio: m = ΔCR / ΔRE = 1 / r. In the example, r = 20%, so m = 1 / 0.2 = 5. There are several ways to demonstrate that m = 1/r. The simplest is to note that the process stops only when all the initial excess reserves have been absorbed into required reserves, i.e. when ΔRE = r · ΔD = r · ΔCR, and therefore ΔCR = ΔRE / r. Alternatively, summing the successive rounds of lending gives ΔCR = ΔRE · [1 + (1 − r) + (1 − r)² + …] = ΔRE / r.

The Money Multiplier: full version

Example (same setup, but with a different assumption). 1. Same assumptions as above, except that we abandon the assumption that agents in the non-bank sector do not seek to exchange some of the newly created scriptural money for cash. In this case, when the bank makes a loan, it must expect that part of the scriptural money created in return will be exchanged for cash. This means that the bank will have to draw on its excess reserves to provide the requested cash.
Excess reserves will decline more rapidly because of this leakage out of the credit multiplier circuit, and the credit multiplier will therefore be lower. Let us call c the leakage coefficient, i.e. the share of new credits converted into cash.
3'. Bank A uses its excess reserves to make new loans (for an amount of 100). However, the non-bank sector wants to keep half of this in cash. Assumption: c = 50%.
4'. The remaining excess reserves give rise to new loans for an amount of 40.
5'. The process can continue until there are no more excess reserves. In its full version (with leakage), the credit multiplier becomes m = ΔCR / ΔRE = 1 / [c + r(1 − c)], because the process stops when required reserve accumulation and cash leakage have exhausted the initial excess reserves: ΔRE = ΔRR + ΔCash. Replacing the increase in required reserves and the requests for conversion into cash with their respective values gives ΔRR = r · ΔD = r(1 − c) · ΔCR (since only the share of credits that remains on deposit needs to be backed by reserves) and ΔCash = c · ΔCR. From then on, ΔRE = [c + r(1 − c)] · ΔCR, and hence ΔCR = ΔRE / [c + r(1 − c)]. In the example, with r = 20% and c = 50%, the multiplier is 1 / 0.6 ≈ 1.67, so the initial 100 of excess reserves generates about 167 of new credits.

The CB's control instruments

Open market operations: as already mentioned, central banks can intervene in asset markets. When a CB buys (sells) government bonds, the amount of money in circulation increases (decreases). Modification of the reserve ratio: if the minimum fraction of deposits that must be kept as reserves increases (decreases), the quantity of money in circulation decreases (increases). NB: traditionally, central banks have used this measure only very rarely, except in China in recent years. Examples of reserve requirement ratios: 10% in the USA, 20% in Switzerland, none in the UK, none in Australia. Change in the discount rate (or key rate): commercial banks borrow money from the central bank when their reserves are too low in relation to the reserve requirement. If the discount rate rises (falls), the money supply falls (rises). Most central banks have lowered this rate since the beginning of the last financial crisis; today the key rates are extremely low.

Effectiveness of the central bank's control of the money supply

Central banks exercise only partial control over the supply of money. In particular, central banks cannot control:
- the amount of deposits that households decide to convert into cash (leakage from the credit circuit). If private agents hold in cash a fraction c of their deposits, the multiplier becomes approximately 1 / (r + c), a good approximation of the true multiplier if r and c are small;
- the aggregate amount of loans made by commercial banks.
In summary, since the amount of money in circulation in the economy depends in part on the behaviour of bankers and depositors, central bank control can only be imperfect. In what follows we will nevertheless assume that the central bank perfectly controls the supply of money.

Summary

The term money refers to all assets used by households to purchase goods and services. Money has three functions: medium of exchange, unit of account, and store of value. Commodity money is money that has an intrinsic value; fiat money has no intrinsic value. The central bank controls the supply of money through open market operations, by changing the reserve requirement ratio, and by changing the discount rate. The central bank cannot control the amount of loans granted by commercial banks or the deposit decisions of households. As a result, the central bank's control over the money supply is only partial. Some causes of inflation (an increase in the general price level) are related to the determinants of the demand (next chapter) and supply (this chapter) of money.
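To make the two multiplier formulas concrete, the following Python sketch simulates the round-by-round credit creation for both cases — no leakage (r = 20%) and leakage into cash (r = 20%, c = 50%) — and compares the totals with the closed-form multipliers. The function and variable names are illustrative, not part of the course notation.

```python
def total_credit(initial_excess_reserves: float, r: float, c: float = 0.0,
                 tol: float = 1e-9) -> float:
    """Simulate round-by-round credit creation.

    r: reserve requirement ratio (share of each new deposit held as reserves)
    c: leakage coefficient (share of each new loan converted into cash)
    """
    excess = initial_excess_reserves
    credit = 0.0
    while excess > tol:
        loan = excess                      # lend out all current excess reserves
        credit += loan
        cash_leak = c * loan               # withdrawn as cash, leaves the circuit
        new_deposits = (1 - c) * loan      # the rest comes back as deposits
        required = r * new_deposits        # reserves that must be set aside
        excess = loan - cash_leak - required
    return credit

# Case 1: no leakage -> multiplier 1/r = 5, so 100 of excess reserves -> 500 of credit
print(total_credit(100, r=0.20))                 # ~500.0

# Case 2: 50% leakage -> multiplier 1/(c + r*(1 - c)) = 1/0.6, about 1.67
print(total_credit(100, r=0.20, c=0.50))         # ~166.7
print(100 / (0.50 + 0.20 * (1 - 0.50)))          # closed form, ~166.7
```

The loop converges quickly because the excess reserves shrink by a constant factor at every round, which is exactly the geometric-series argument used above.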
Distance Between Two Points on a Graph Study Guide

Introduction to Distance Between Two Points on a Graph

Whenever you can, count. —Sir Francis Galton (1822–1911), English Geneticist and Statistician

In this lesson, you'll learn how to find the distance between two points on a graph by counting, using the Pythagorean theorem, and using the distance formula. A line on the coordinate plane continues forever in both directions, but we can find the distance between two points on the line. When the points are on a horizontal line, such as y = 3, we can simply count the units from one point to the other. What is the distance from point (–5,3) to point (7,3)? We can count the unit boxes between the two points. There are 12 units from (–5,3) to (7,3), which means that the distance between the points is 12 units. If the y values of two points are the same, the distance between the two points is equal to the difference between their x values. If the x values of two points are the same, the distance between the two points is equal to the difference between their y values. The distance between (5,4) and (5,–10) is 14 units, because the x values of the points are the same, and 4 – (–10) = 14 units. Often though, we need to find the distance between two points that are not on a horizontal or vertical line. For instance, how can we find the distance between two points on the line y = x? The Pythagorean theorem describes the relationship between the sides of a right triangle. It states that the sum of the squares of the bases of the triangle is equal to the square of the hypotenuse of the triangle: a² + b² = c². What is a triangle formula doing in a lesson about the distance between two points on a line? Drawing a triangle on a graph can help us find the distance between two points. The following graph shows the line y = (4/3)x + 2, which contains the points (3,6) and (6,10). We can't find the distance between these points just by counting. However, we can draw a vertical line down from (6,10) and a horizontal line from (3,6). These lines meet at (6,6) and form a right triangle. The bases of the triangle are the line segment from (3,6) to (6,6) and the line segment from (6,6) to (6,10). Because a horizontal line connects (3,6) to (6,6), we can find its length just by counting. The length of this line is 3 units. In the same way, the distance from (6,6) to (6,10) is 4 units. Now that we know the length of each base of the triangle, we can use the Pythagorean theorem to find the length of the hypotenuse of the triangle. In the formula a² + b² = c², a and b are the bases. Substitute 3 for a and 4 for b: 3² + 4² = c², 9 + 16 = c², 25 = c². To find the value of c, take the square root of both sides of the equation. A distance can never be negative, so we only need the positive square root of 25, which is 5. The distance between (3,6) and (6,10) is 5 units. You might be thinking, "I don't want to draw a triangle every time I need to find the distance between two points." Well, let's look at exactly how we found the distance between (3,6) and (6,10). First, we added a point to the graph to form a triangle. The point represented the difference in the x values between the points (3 units, from 3 to 6) and the difference in the y values between the points (4 units, from 6 to 10). Then, we squared those differences, added them, and took the square root. To make finding distance easier, we can write these steps as a formula: the distance formula. The distance formula states that D = √((x₂ – x₁)² + (y₂ – y₁)²).
That doesn't look much easier, so let's break the formula down into pieces. We want to find the distance from point 1, (3,6), to point 2, (6,10). To find the difference between the x values of the points, we subtract the first x value, which we can write as x₁, from the second x value, which we can write as x₂. To find the difference between the y values of the points, we subtract the first y value, y₁, from the second y value, y₂. Difference between the x values: (x₂ – x₁) = (6 – 3) = 3. Difference between the y values: (y₂ – y₁) = (10 – 6) = 4. Now that we have the differences between each value, we square them and add them. This is the part of the formula that comes from the Pythagorean theorem. The square of 3 is 9 and the square of 4 is 16, and 9 + 16 = 25. Finally, this sum is equal to the square of the distance between the two points, so to find the distance, we must take the square root of the sum: √25 = 5. Now, not only do you know how to use the distance formula, you know where it comes from! Find the distance between (–2,4) and (3,16). We will call (–2,4) point 1 and (3,16) point 2. Substitute these values into the distance formula: D = √((x₂ – x₁)² + (y₂ – y₁)²). Because (–2,4) is the first point, x₁ is –2 and y₁ is 4. The value of x₂ is 3 and the value of y₂ is 16: D = √((3 – (–2))² + (16 – 4)²). Remember the order of operations: parentheses come before exponents, so perform the subtraction first: D = √((5)² + (12)²). Exponents come before addition, so square 5 and 12 next: D = √(25 + 144), D = √169, D = 13 units. The distance between (–2,4) and (3,16) is 13 units. Sometimes, the distance between two points is not a whole number, and we are left with a radical. Let's find the distance between (1,2) and (7,12). Enter the x and y values of each point into the distance formula: D = √((7 – 1)² + (12 – 2)²), D = √((6)² + (10)²), D = √(36 + 100), D = √136. The number 136 is not a perfect square. When the distance formula leaves you with a radical like this, the only way to simplify it is to factor out a perfect square. Check to see if the radicand is divisible by 4, 9, 25, or some other square. The number 136 is divisible by 4, which means that √136 is divisible by √4: √136 = √4·√34, because we can factor a radicand into two radicands in the same way that we factor a whole number into two whole numbers. The square root of 4 is 2, so √4·√34 is equal to 2√34. The distance between (1,2) and (7,12) is 2√34 units. Let's look at one last example: the distance between (2,1) and (–2,10). Enter the x and y values of each point into the distance formula: D = √(((–2) – 2)² + (10 – 1)²), D = √((–4)² + (9)²), D = √(16 + 81), D = √97. The number 97 is not divisible by any whole-number perfect square (4, 9, 16, 25, 36, 49, 64, or 81), so √97 cannot be simplified. The distance between (2,1) and (–2,10) is √97 units. Find practice problems and solutions for these concepts at Distance Between Two Points on a Graph Practice Questions.
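As a quick check of the worked examples, the following short Python sketch applies the distance formula to each pair of points; the helper function name is illustrative.

```python
import math

def distance(p1, p2):
    """Distance formula: sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(distance((3, 6), (6, 10)))     # 5.0
print(distance((-2, 4), (3, 16)))    # 13.0
print(distance((1, 2), (7, 12)))     # ~11.66, i.e. 2*sqrt(34)
print(distance((2, 1), (-2, 10)))    # ~9.85, i.e. sqrt(97)
```

Python's math.hypot(x2 - x1, y2 - y1) computes the same quantity in a single call.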
DAWN's research is focused on the specific period in the history of the Universe known as Cosmic Dawn. This previously unexplored period, 300–600 million years after the Big Bang, is when the first stars, black holes, and galaxies are believed to have formed. DAWN will study the Cosmic Dawn with a suite of new telescopes designed to probe this period, as well as through theory and simulations. James Webb Space Telescope: With a planned launch in 2020, the James Webb Space Telescope (JWST) is designed to study the stars and gas in the first galaxies. JWST's primary mirror is 6.5 m in diameter, and it is equipped with 4 infrared instruments for imaging and spectroscopy. DAWN scientists have taken part in the construction of three of these instruments (NIRSpec, MIRI and NIRISS), and will be involved in the analysis of the first data from the telescope. Check out more at https://jwst.nasa.gov/index.html. Atacama Large Millimeter Array (ALMA): Astronomers at DAWN use ALMA to study the emission lines and continuum from the cold and dusty environments that compose the interstellar medium (ISM). The reason is that molecular hydrogen, the fuel for star formation, can only exist in these cold regions of a galaxy. The investigation of these environments is complementary to the study of hot gas and stellar emission and is imperative in furthering our understanding of galaxy evolution. The telescope is composed of 66 high-precision antennas, each with a diameter of 12 m, which act together as a single telescope. ALMA is designed to observe emission from cold dust and gas in the early Universe, complementing JWST's observations of the hot gas and stars in the first galaxies. For more information check out https://www.almaobservatory.org/en/home/. Euclid: Following its launch in 2020, ESA's Euclid mission will map out a large part of the sky, with the primary goal of constraining the nature of dark energy. As part of its calibration plan, Euclid will observe 40 square degrees to great depth, the so-called Euclid Deep Fields. These fields will be a treasure trove for finding rare, bright early galaxies. DAWN scientists play a leading role in the exploration of these deep fields, and will have early access to samples of early galaxies for follow-up observations with JWST and ALMA. Read more about Euclid at http://sci.esa.int/euclid/. Supporting extragalactic surveys: In addition to Euclid, DAWN scientists are involved in a number of the largest existing extragalactic surveys, which will be important sources of targets for JWST and ALMA. These include the COSMOS survey, the Hubble Frontier Fields survey, BUFFALO, 3D-HST, SMUVS, UltraVISTA, and SPLASH. Hubble Frontier Fields: http://www.stsci.edu/hst/campaigns/frontier-fields/HST-Survey
Failure to sufficiently cool an electronic component will lead to overheating, which can reduce its lifespan or even permanently damage the product's internal system. Selecting a cooling method for electronics generally comes down to four main options: natural convection, forced convection, fluid phase change, and liquid cooling. The right choice depends on identifying the electronic system's needs and the conditions in which it will operate. Liquid cooling is often used for high-powered electronics, including PCs and transistors such as insulated-gate bipolar transistors (IGBTs), electric motors, and other systems where forced air isn't sufficient to keep components cool and functioning at full capacity. The liquid used to cool these devices can be water, deionized water, glycol, dielectric fluids (e.g., fluorocarbons), or PAO. The main benefits of liquid cooling are:

A high level of efficiency, sustaining low temperatures over time. Liquid cooling, most often using water, is much more efficient than air. Water has better heat-transfer capabilities and, for an identical flow rate and temperature limit, can carry more energy. Moreover, a liquid-cooling system keeps the temperature down at all times, unlike air-based systems. In the latter, a fan starts to operate when components are already overheated, removing the excess heat rather than anticipating the issue.

Requires less space than air-cooled systems. Typical liquid-cooling systems include thin tubes that don't occupy much space. For air-cooling systems, however, more space is needed to accommodate multiple fans.

Lower noise. In a traditional cooling system, fans generate noise. A water-cooling system usually requires only one fan, which has the role of circulating the air. Because the water does most of the cooling work, noise is kept to a minimum.

Role of Engineering Simulation to Validate a Cooling System

Engineering simulation, or computer-aided engineering (CAE), is used for complex virtual testing through numerical simulation. CAE enables the analysis of a design under different conditions early in the development process. The technology is used in iterative design processes and reduces the number of physical prototypes required to investigate a design. In the following case study, four high-powered IGBT modules are simulated.

Case Study: Simulation of Water-Cooling IGBTs

IGBTs are useful in high-load applications such as switched-mode power supplies, traction motor control, and induction heating. The liquid-cooling simulation evaluates consumed and removed energy, junction temperatures, and the heat path. Ultimately, the goal is to find out how the heat travels from the sources within the system and around the components to the exit point. Particular areas of interest are those that incur high thermal resistance and heightened temperature gradients. Once the information is acquired from the simulation results, design modifications can be made for optimization. The simulation type selected for this project is conjugate heat transfer (CHT) (Fig. 1). This is based on a thermal solver that computes velocities, pressure, temperatures, and other characteristics in both solid and fluid parts. CHT analysis is considered the most effective simulation type in electronics cooling applications.

CAD Model and Mesh

The CAD model used includes a water-cooling block, surrounding air, and an internal water channel. This design was directly imported onto the SimScale cloud-based CAE platform (Fig. 2).
Once the CAD was uploaded, a mesh was created using an automatic hex-dominant algorithm with five inflated layers on no-slip surfaces and moderate refinement. Boundary Conditions for CHT Analysis Boundary conditions for the analysis featured a velocity inlet and a pressure outlet for the water-cooling block. The velocity inlet tells the CAE software how much flow is coming into the water block. The inlet and outlet allowed for natural air convection to occur within the system. The pressure outlet represents the fluid exit where the temperature was defined. Heat Source and Thermal Properties The project dictated that each silicon IGBT, with a thermal interface, water block, and brass connectors, dissipated 368 W of energy. The heat generation was modeled as a volumetric heat source, and the materials were given standard values for thermal conductivity and specific heat capacities for each material, silicon, paste, aluminum, and brass. Evaluating the Liquid-Cooled Electronic Design Analyzing the temperature distribution, it could be seen that the design allowed for uneven cooling, large temperature gradients, and unacceptable overall temperatures. The temperature gradient revealed that the biggest thermal resistances were due to conductivity within the thermal interface (Fig. 3). The IGBTs generated 1472 W of heat. The water-cooling system removed 92.6% of the heat and natural convection covered the remaining 7.4%. The maximum temperature of an IGBT should be around 32°C (90°F) to avoid damage through overheating. To increase the system’s longevity, a temperature below 21°C (70°F) is required. The first design revealed that the junction temperatures were too high (Fig. 4). In addition, the CHT analysis results showed that the flow channel produced significant pressure losses due to the design’s sharp edges and contractions (Fig. 5). Optimizing the Liquid-Cooled Electronic Design As seen in the simulation results, the initial design of the electronic system would have experienced overheating. The reason is that due to the low conductivity of phase-change material (PCM) in the interface, the heat flow faced too much resistance around the component. A simple design modification would be to reduce the thickness or improve the grade of the PCM altogether. Also, to reduce pressure losses and increase heat-transfer rates, it’s recommended to change the cooling block cavity design from channels to pins, enlarge contractions, and reduce the number of sharp edges or add fillets. These changes were subsequently made to the liquid-cooled electronic design, and the CHT analysis was rerun. The final results showed significant improvements (Fig. 6). Liquid-cooling systems have proven to be very effective in preventing electronics from overheating. Using the right system and placement, however, requires additional effort to ensure the best results possible over time. Testing designs with engineering simulation often makes the difference in this regard, allowing for quick optimization and design iteration. In the case study described here, after the resistance areas were pinpointed and amended, the IGBTs’ internal temperatures were brought down to acceptable levels at the expense of PCM volume. Still, even more improvements are possible with an in-depth understanding of heat transfer. In addition, pressure loss was reduced and, as a result, energy on pumping the water decreased by approximately 50%. Arnaud Girin is Technical Marketing Specialist at SimScale.
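As a rough back-of-the-envelope check, separate from the CHT study itself, the heat-balance figures quoted above can be combined with Q = ṁ·cp·ΔT to estimate how much water flow such a loop needs for a chosen coolant temperature rise. The Python sketch below uses the 1472 W dissipation and 92.6% water-side share reported in the case study; the water properties and the 5 K temperature rise are assumptions made here for illustration only.

```python
# Estimate the water flow needed to carry the heat removed by the cooling loop.
# 1472 W and 92.6% are quoted in the case study; cp, density, and the 5 K
# inlet-to-outlet temperature rise are illustrative assumptions.

Q_total = 1472.0          # W, heat generated by the four IGBT modules
water_share = 0.926       # fraction removed by the liquid loop
Q_water = Q_total * water_share

cp_water = 4186.0         # J/(kg*K), specific heat of water (assumed)
rho_water = 998.0         # kg/m^3, density of water near room temperature (assumed)
dT = 5.0                  # K, assumed coolant temperature rise

m_dot = Q_water / (cp_water * dT)           # mass flow, kg/s
volume_flow_lpm = m_dot / rho_water * 60e3  # volumetric flow, litres per minute

print(round(m_dot, 4), "kg/s")              # ~0.065 kg/s
print(round(volume_flow_lpm, 2), "L/min")   # ~3.9 L/min
```

A smaller allowed temperature rise, or a larger share of the heat routed through the water loop, would raise the required flow proportionally.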
Introduction to Sociology/Economy

Economy refers to the ways people use their environment to meet their material needs. It is the realized economic system of a country or other area. It includes the production, exchange, distribution, and consumption of goods and services of that area. A given economy is the end result of a process that involves its technological evolution, history and social organization, as well as its geography, natural resource endowment, and ecology, among other factors. These factors give context, content, and set the conditions and parameters in which an economy functions. As long as someone has been making and distributing goods or services, there has been some sort of economy; economies grew larger as societies grew and became more complex. The ancient economy was mainly based on subsistence farming. According to Herodotus, and most modern scholars, the Lydians were the first people to introduce the use of gold and silver coin. It is thought that these first stamped coins were minted around 650-600 BC. For most people the exchange of goods occurred through social relationships. There were also traders who bartered in the marketplaces. The Babylonians and their city-state neighbors developed economic ideas comparable to those employed today. They developed the first known codified legal and administrative systems, complete with courts, jails, and government records. Several centuries after the invention of cuneiform, the use of writing expanded beyond debt/payment certificates and inventory lists to be applied for the first time, about 2600 BC, to messages and mail delivery, history, legend, mathematics, and astronomical records. Ways to divide private property when it is contended, amounts of interest on debt, rules as to property and monetary compensation concerning property damage or physical damage to a person, fines for wrongdoing, and compensation in money for various infractions of formalized law were standardized for the first time in history. In medieval times, what we now call the economy was not far from the subsistence level. Most exchange occurred within social groups. On top of this, the great conquerors raised venture capital to finance their land conquests. The capital investment would be returned to the investor when goods from the newly discovered or captured lands were returned by the conquerors. The endeavors of Marco Polo (1254-1324), Christopher Columbus (1451-1506) and Vasco da Gama (1469-1524) set the foundations for a global economy. Note that while these historical figures are often referred to as "discoverers" among dominant groups, there is ample archeological evidence suggesting that the places they "found" were visited by many other expeditions dating back to BC times and that many others attempted to conquer these same lands. Rather than a process of discovery, what separated them from earlier attempts was the devastating effect of massive plagues upon Native populations, which allowed them to conquer native lands that had fought off such assaults previously (see Lies My Teacher Told Me for summaries of many of these findings). Following these events, the first enterprises were trading establishments. In 1513 the first stock exchange was founded in Antwerp. The European conquests became branches of the European states, the so-called "colonies". The rising nation-states Spain, Portugal, France, Great Britain, and the Netherlands tried to control the trade through customs duties and taxes in order to protect their national economies.
Mercantilism was a first approach to intermediate between private wealth and public interest. The first economist in the true meaning of the word was the Scotsman Adam Smith (1723-1790). He defined the elements of a national economy: products are offered at a natural price generated by the use of competition - supply and demand - and the division of labour. He maintained that the basic motive for free trade is human self interest. In Europe, capitalism (see below) started to replace the system of mercantilism and led to economic growth. The period today is called the industrial revolution because the system of production and division of labour enabled the mass production of goods. Capitalism is an economic and social system in which capital and the non-labor factors of production or the means of production are privately controlled; labor, goods and capital are traded in markets; profits are taken by owners or invested in technologies and industries; and wages are paid to labor. Capitalism as a system developed incrementally from the 16th century on in Europe, although capitalist-like organizations existed in the ancient world, and early aspects of merchant capitalism flourished during the Late Middle Ages. Capitalism gradually spread throughout Europe and other parts of the world. In the 19th and 20th centuries, it provided the main means of industrialization throughout much of the world. The origins of modern markets can be traced back to the Roman Empire and the Islamic Golden Age and Muslim Agricultural Revolution where the first market economy and earliest forms of merchant capitalism took root between the 8th–12th centuries. A vigorous monetary economy was created by Muslims on the basis of the expanding levels of circulation of a stable high-value currency and the integration of monetary areas that were previously independent. Innovative new business techniques and forms of business organisation were introduced by economists, merchants and traders during this time. Such innovations included the earliest trading companies, big businesses, contracts, bills of exchange, long-distance international trade, the first forms of partnerships, and the earliest forms of credit, debt, profit, loss, capital, capital accumulation, circulating capital, capital expenditure, revenue, cheques, promissory notes, trusts, startup companies, savings accounts, pawning, loaning, exchange rates, bankers, money changers, deposits, the double-entry bookkeeping system], and lawsuits. Organizational enterprises similar to corporations independent from the state also existed in the medieval Islamic world. Many of these early capitalist concepts were adopted and further advanced in medieval Europe from the 13th century onwards. The economic system employed between the 16th and 18th centuries is commonly described as mercantilism. This period was associated with geographic "discoveries" by merchant overseas traders, especially from England, and the rapid growth in overseas trade. Mercantilism was a system of trade for profit, although commodities were still largely produced by non-capitalist production methods. While some scholars see mercantilism as the earliest stage of modern capitalism, others argue that modern capitalism did not emerge until later. 
For example, Karl Polanyi, noted that "mercantilism, with all its tendency toward commercialization, never attacked the safeguards which protected [the] two basic elements of production - labor and land - from becoming the elements of commerce"; thus mercantilist attitudes towards economic regulation were closer to feudalist attitudes, "they disagreed only on the methods of regulation." Moreover Polanyi argued that the hallmark of capitalism is the establishment of generalized markets for what he referred to as the "fictitious commodities": land, labor, and money. Accordingly, "not until 1834 was a competitive labor market established in England, hence industrial capitalism as a social system cannot be said to have existed before that date." The commercial stage of capitalism began with the founding of the British East India Company and the Dutch East India Company. During this era, merchants, who had traded under the previous stage of mercantilism, invested capital in the East India Companies and other colonies, seeking a return on investment, setting the stage for capitalism. During the Industrial Revolution, the industrialist replaced the merchant as a dominant actor in the capitalist system and effected the decline of the traditional handicraft skills of artisans, guilds, and journeymen. Also during this period, the surplus generated by the rise of commercial agriculture encouraged increased mechanization of agriculture. Industrial capitalism marked the development of the factory system of manufacturing, characterized by a complex division of labor between and within the work process and the routinization of work tasks. In the late 19th century, the control and direction of large areas of industry came into the hands of trusts, financiers and holding companies. This period was dominated by an increasing number of oligopolistic firms earning supernormal profits. Major characteristics of capitalism in this period included the establishment of large industrial monopolies; the ownership and management of industry by financiers divorced from the production process; and the development of a complex system of banking, an equity market, and corporate holdings of capital through stock ownership. Inside these corporations, a division of labor separates shareholders, owners, managers, and actual laborers. By the last quarter of the 19th century, the emergence of large industrial trusts had provoked legislation in the US to reduce the monopolistic tendencies of the period. Gradually, during this era, the US government played a larger and larger role in passing antitrust laws and regulation of industrial standards for key industries of special public concern. By the end of the 19th century, economic depressions and boom and bust business cycles had become a recurring problem. In particular, the Long Depression of the 1870s and 1880s and the Great Depression of the 1930s affected almost the entire capitalist world, and generated discussion about capitalism’s long-term survival prospects. During the 1930s, Marxist commentators often posited the possibility of capitalism's decline or demise, often in contrast to the ability of the Soviet Union to avoid suffering the effects of the global depression. In his book The Protestant Ethic and the Spirit of Capitalism (1904-1905), Max Weber sought to trace how a particular form of religious spirit, infused into traditional modes of economic activity, was a condition of possibility of modern western capitalism. 
For Weber, the 'spirit of capitalism' was, in general, that of ascetic Protestantism; this ideology was able to motivate extreme rationalization of daily life, a propensity to accumulate capital driven by a religious ethic of economic advancement, and thus also a propensity to reinvest capital: this was sufficient, then, to create "self-mediating capital" as conceived by Marx. This is pictured in Proverbs 22:29, "Seest thou a man diligent in his calling? He shall stand before kings," and in Colossians 3:23, "Whatever you do, do your work heartily, as for the Lord rather than for men." In The Protestant Ethic, Weber further stated that "moneymaking – provided it is done legally – is, within the modern economic order, the result and the expression of diligence in one's calling…" And, "If God show you a way in which you may lawfully get more than in another way (without wrong to your soul or to any other), if you refuse this, and choose the less gainful way, you cross one of the ends of your calling, and you refuse to be God's steward, and to accept His gifts and use them for him when He requireth it: you may labour to be rich for God, though not for the flesh and sin" (p. 108).

How capitalism works

The economics of capitalism developed out of the interactions of the following four items:
1. Commodities: There are two types of commodities: capital goods and consumer goods. Capital goods are products not produced for immediate consumption (e.g. land, raw materials, tools, machines and factories); they serve as the raw materials for consumer goods (e.g. televisions, cars, computers, houses) to be sold to others.
2. Money: Money is primarily a standardized means of exchange which serves to reduce all goods and commodities to a standard value. It eliminates the cumbersome system of barter by separating the transactions involved in the exchange of products, thus greatly facilitating specialization and trade by encouraging the exchange of commodities.
3. Labour power: Labour includes all mental and physical human resources, including entrepreneurial capacity and management skills, which are needed to transform one type of commodity into another.
4. Means of production: All manufacturing aids to production, such as tools, machinery, and buildings.

Individuals engage in the economy as consumers, labourers, and investors, providing both money and labour power. For example, as consumers, individuals influence production patterns through their purchase decisions, as producers will change production to produce what consumers want to buy. As labourers, individuals may decide which jobs to prepare for and in which markets to look for work. As investors they decide how much of their income to save and how to invest their savings. These savings, which become investments, provide much of the money that businesses need to grow.

Business firms decide what to produce and where this production should occur. They also purchase capital goods to convert them into consumer goods. Businesses try to influence consumer purchase decisions through marketing as well as through the creation of new and improved products. What drives the capitalist economy is the constant search for profits (revenues minus expenses). This need for profits, known as the profit motive, ensures that companies produce the goods and services that consumers desire and are able to buy. In order to be successful, firms must sell a certain quantity of their products at a price high enough to yield a profit.
A business may consequently lose money if sales fall too low or if costs run too high. The profit motive also encourages firms to operate efficiently by using their resources in the most productive manner. By using less material, labour or capital, a firm can cut its production costs, which can lead to increased profits.

Following Adam Smith, Karl Marx distinguished the use value of commodities from their exchange value in the market. Capital, according to Marx, is created with the purchase of commodities for the purpose of creating new commodities with an exchange value higher than the sum of the original purchases. For Marx, the use of labor power had itself become a commodity under capitalism; the exchange value of labor power, as reflected in the wage, is less than the value it produces for the capitalist. This difference in values, he argues, constitutes surplus value, which the capitalists extract and accumulate. The extraction of surplus value from workers is called exploitation. In his book Capital, Marx argues that the capitalist mode of production is distinguished by how the owners of capital extract this surplus from workers: all prior class societies had extracted surplus labor, but capitalism was new in doing so via the sale-value of produced commodities. Marx argues that a core requirement of a capitalist society is that a large portion of the population must not possess sources of self-sustenance that would allow them to be independent, and must instead be compelled, in order to survive, to sell their labor for a living wage. In conjunction with his criticism of capitalism was Marx's belief that exploited labor would be the driving force behind a revolution to a socialist-style economy. For Marx, this cycle of the extraction of surplus value by the owners of capital, or the bourgeoisie, becomes the basis of class struggle. This argument is intertwined with Marx's version of the labor theory of value, which asserts that labor is the source of all value, and thus of profit. How capitalists generate profit on this account is illustrated in the short numerical sketch at the end of this passage.

The market is a term used by economists to describe a central exchange through which people are able to buy and sell goods and services. In a capitalist economy, the prices of goods and services are controlled mainly through supply and demand and competition. Supply is the amount of a good or service produced by a firm and available for sale. Demand is the amount that people are willing to buy at a specific price. Prices tend to rise when demand exceeds supply and fall when supply exceeds demand, so that the market coordinates itself through pricing until a new equilibrium price and quantity is reached.

Competition arises when many producers are trying to sell the same or similar kinds of products to the same buyers. Competition is important in capitalist economies because it leads to innovation and more reasonable prices, as firms that charge lower prices or improve the quality of their products can take buyers away from competitors (i.e., increase market share). Furthermore, without competition, a monopoly or cartel may develop. A monopoly occurs when a firm supplies the total output in the market; when this occurs, the firm can limit output and raise prices because it has no fear of competition. A cartel is a group of firms that act together in a monopolistic manner to control output and raise prices. Many countries have competition laws and anti-trust laws that prohibit monopolies and cartels from forming.
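To make the surplus-value account above concrete, here is a minimal Python sketch with purely hypothetical figures; the wage, input cost and sale value are invented for illustration and do not come from Marx or from any source cited here.

# Hypothetical figures illustrating Marx's surplus-value accounting.
wage = 80                 # exchange value of labour power: the daily wage paid
constant_capital = 120    # materials and machinery used up during the day
output_value = 260        # exchange value of the commodities produced and sold

value_added_by_labour = output_value - constant_capital   # 140
surplus_value = value_added_by_labour - wage              # 60, retained by the owner of capital
rate_of_surplus_value = surplus_value / wage              # 0.75, i.e. 75%

print(surplus_value, rate_of_surplus_value)

On this reading the worker is paid 80 but adds 140 of value during the day, and the difference of 60 is the surplus that the capitalist accumulates or reinvests.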
In many capitalist nations, public utilities (communications, gas, electricity, etc.) are able to operate as monopolies under government regulation, owing to high economies of scale.

Income in a capitalist economy depends primarily on what skills are in demand and what skills are currently being supplied. People who have skills that are in scarce supply are worth a lot more in the market and can attract higher incomes. Competition among employers for workers, and among workers for jobs, helps determine wage rates. Firms need to pay high enough wages to attract the appropriate workers; however, when jobs are scarce, workers may accept lower wages than when jobs are plentiful. Labour unions and the government also influence wages in capitalist nations. Unions act to represent labourers in negotiations with employers over such things as wage rates and acceptable working conditions. Most countries have an established minimum wage, and other government agencies work to establish safety standards.

Unemployment is a necessary component of a capitalist economy, as it ensures an excess pool of laborers. Without unemployed individuals in a capitalist economy, capitalists would be unable to exploit their workers because workers could demand to be paid what they are worth. What's more, once people leave the employed workforce, the longer they stay out of it, the longer it takes them to find work and the lower their salaries tend to be when they return. Thus, not only do the unemployed help drive down the wages of those who are employed, they also suffer financially when they do return to the paid workforce.

In capitalist nations, the government allows for private property, and individuals are allowed to work where they please. The government also generally permits firms to determine what wages they will pay and what prices they will charge for their products. The government also carries out a number of important economic functions. For instance, it issues money, supervises public utilities and enforces private contracts. Laws, such as competition policy, protect competition and prohibit unfair business practices. Government agencies regulate the standards of service in many industries, such as airlines and broadcasting, as well as financing a wide range of programs. In addition, the government regulates the flow of capital and uses tools such as the interest rate to control factors such as inflation and unemployment.

Criticisms of Capitalism

Critics argue that capitalism is associated with the unfair distribution of wealth and power; a tendency toward market monopoly or oligopoly (and government by oligarchy); imperialism, counter-revolutionary wars and various forms of economic and cultural exploitation; repression of workers and trade unionists; and phenomena such as social alienation, economic inequality, unemployment, and economic instability. Critics have also argued that there is an inherent tendency towards oligopolistic structures when laissez-faire policies are combined with capitalist private property. Capitalism is regarded by many socialists as irrational, in that production and the direction of the economy are unplanned, creating inconsistencies and internal contradictions which, they argue, should be controlled through public policy. In the early 20th century, Vladimir Lenin argued that state use of military power to defend capitalist interests abroad was an inevitable corollary of monopoly capitalism.
Economist Branko Horvat states, "it is now well known that capitalist development leads to the concentration of capital, employment and power. It is somewhat less known that it leads to the almost complete destruction of economic freedom." Ravi Batra argues that excessive income and wealth inequalities are a fundamental cause of financial crisis and economic depression, which will lead to the collapse of capitalism and the emergence of a new social order. Environmentalists have argued that capitalism requires continual economic growth and will inevitably deplete the finite natural resources of the earth, as well as other broadly utilized resources. Murray Bookchin has argued that capitalist production externalizes environmental costs to all of society and is unable to adequately mitigate its impact upon ecosystems and the biosphere at large. Labor historians and scholars, such as Immanuel Wallerstein, have argued that unfree labor - by slaves, indentured servants, prisoners, and other coerced persons - is compatible with (and in many ways necessary for) capitalist relations.

A common response to the criticism that capitalism leads to inequality is the argument that capitalism also leads to economic growth and generally improved standards of living. Capitalism does promote economic growth, as measured by gross domestic product (GDP), capacity utilization or standard of living. This argument was central, for example, to Adam Smith's advocacy of letting a free market control production and prices and allocate resources. Many theorists have noted that the increase in global GDP over time coincides with the emergence of the modern world capitalist system. While the measurements are not identical, proponents argue that increasing GDP (per capita) is empirically shown to bring about improved standards of living, such as better availability of food, housing, clothing, and health care. Despite these claims, however, capitalist systems to date - despite increasing GDP in many places overall - have never spurred economic growth and/or improved standards of living for whole populations; rather, they have offered significant growth and improvement for some while others remain without basic necessities (e.g., food, running or clean water, shelter, indoor plumbing) even in the most successful capitalist countries, such as the United States. The ability of capitalism to spur economic growth and improve standards of living thus appears, so far, to represent another way in which capitalism creates and sustains social inequalities within and between societies.

Socialism refers to various theories of economic organization advocating public or direct worker ownership and administration of the means of production and allocation of resources, and a society characterized by equal access to resources for all individuals, with a method of compensation based on the amount of labor expended. Most socialists share the view that capitalism unfairly concentrates power and wealth among a small segment of society that controls capital and derives its wealth through exploitation; creates an unequal society; does not provide equal opportunities for everyone to maximise their potentialities; and does not utilise technology and resources to their maximum potential, nor in the interests of the public. In one example of socialism, the Soviet Union, state ownership was combined with central planning.
In this scenario, the government determined which goods and services were produced, how they were to be produced, the quantities, and the sale prices. Centralized planning is an alternative to allowing the market (supply and demand) to determine prices and production. In the West, neoclassical liberal economists such as Friedrich Hayek and Milton Friedman said that socialist planned economies would fail because planners could not have the business information inherent to a market economy (cf. economic calculation problem), nor could managers in Soviet-style socialist economies match the motivation of profit. Consequent to Soviet economic stagnation in the 1970s and 1980s, socialists began to accept parts of these critiques. Polish economist Oskar Lange, an early proponent of market socialism, proposed a central planning board establishing prices and controls of investment. The prices of producer goods would be determined through trial and error. The prices of consumer goods would be determined by supply and demand, with the supply coming from state-owned firms that would set their prices equal to the marginal cost, as in perfectly competitive markets. The central planning board would distribute a "social dividend" to ensure reasonable income equality. In western Europe, particularly in the period after World War II, many socialist parties in government implemented what became known as mixed economies. In the biography of the 1945 UK Labour Party Prime Minister Clement Attlee, Francis Beckett states: "the government... wanted what would become known as a mixed economy". Beckett also states that "Everyone called the 1945 government 'socialist'." These governments nationalised major and economically vital industries while permitting a free market to continue in the rest. These were most often monopolistic or infrastructural industries like mail, railways, power and other utilities. In some instances a number of small, competing and often relatively poorly financed companies in the same sector were nationalised to form one government monopoly for the purpose of competent management, of economic rescue (in the UK, British Leyland, Rolls Royce), or of competing on the world market. Typically, this was achieved through compulsory purchase of the industry (i.e. with compensation). In the UK, the nationalisation of the coal mines in 1947 created a coal board charged with running the coal industry commercially so as to be able to meet the interest payable on the bonds which the former mine owners' shares had been converted into. Marxist and non-Marxist social theorists agree that socialism developed in reaction to modern industrial capitalism, but disagree on the nature of their relationship. Émile Durkheim posits that socialism is rooted in the desire to bring the state closer to the realm of individual activity, in countering the anomie of a capitalist society. In socialism, Max Weber saw acceleration of the rationalisation started in capitalism. As a critic of socialism, he warned that placing the economy entirely in the state's bureaucratic control would result in an "iron cage of future bondage". The Marxist conception of socialism is that of a specific historical phase that will displace capitalism and be a precursor to communism. The major characteristics of socialism are that the proletariat will control the means of production through a workers' state erected by the workers in their interests. 
Economic activity would still be organised through the use of incentive systems, and social classes would still exist, but to a lesser and diminishing extent than under capitalism. For orthodox Marxists, socialism is the lower stage of communism, based on the principle of "from each according to his ability, to each according to his contribution", while upper-stage communism is based on the principle of "from each according to his ability, to each according to his need"; the upper stage becomes possible only after the socialist stage further develops economic efficiency and the automation of production has led to a superabundance of goods and services.

Socialism is not a concrete philosophy of fixed doctrine and program; its branches advocate a degree of social interventionism and economic rationalisation (usually in the form of economic planning), and they sometimes oppose each other. Some socialists advocate complete nationalisation of the means of production, distribution, and exchange; others advocate state control of capital within the framework of a market economy. Socialists inspired by the Soviet model of economic development have advocated the creation of centrally planned economies directed by a state that owns all the means of production. Others, including Yugoslavian, Hungarian, German and Chinese Communists in the 1970s and 1980s, instituted various forms of market socialism, combining co-operative and state ownership models with free market exchange and a free price system (but not free prices for the means of production). Social democrats propose selective nationalisation of key national industries in mixed economies, while maintaining private ownership of capital and private business enterprise. Social democrats also promote tax-funded welfare programs and regulation of markets. Many social democrats, particularly in European welfare states, refer to themselves as socialists, introducing a degree of ambiguity to the understanding of what the term means.

Modern socialism originated in the late 18th-century intellectual and working-class political movement that criticised the effects of industrialisation and private ownership on society. The utopian socialists, including Robert Owen (1771–1858), tried to found self-sustaining communes by secession from capitalist society. Henri de Saint-Simon (1760–1825), who coined the term socialisme, was an early thinker who advocated technocracy and industrial planning. The first socialists predicted a world improved by harnessing technology and better social organisation; many contemporary socialists share this belief. Early socialist thinkers tended to favour an authentic meritocracy combined with rational social planning.

The financial crisis of 2007–2009 led to mainstream discussion of whether "Marx was right". Time magazine ran an article titled "Rethinking Marx" and put Karl Marx on the cover of its 28 January 2009 European edition. While the mainstream media tended to conclude that Marx was wrong, this was not the view of socialists and left-leaning commentators.

Examples of Socialism

The People's Republic of China, North Korea, Laos and Vietnam are Asian states remaining from the first wave of socialism in the 20th century. States with socialist economies have largely moved away from centralised economic planning in the 21st century, placing a greater emphasis on markets, as in the case of the Chinese socialist market economy and the Vietnamese socialist-oriented market economy.
In China, the Chinese Communist Party has led a transition from the command economy of the Mao period to an economic program called the socialist market economy, or "socialism with Chinese characteristics." Under Deng Xiaoping, the leadership of China embarked upon a program of market-based reform that was more sweeping than Soviet leader Mikhail Gorbachev's perestroika program of the late 1980s. Deng's program, however, maintained state ownership rights over land, state or cooperative ownership of much of the heavy industrial and manufacturing sectors, and state influence in the banking and financial sectors. Elsewhere in Asia, some elected socialist parties and communist parties remain prominent, particularly in India and Nepal. The Communist Party of Nepal in particular calls for multi-party democracy, social equality, and economic prosperity. In Singapore, a majority of GDP is still generated from the state sector, made up of government-linked companies. In Japan, there has been a resurgence of interest in the Japanese Communist Party among workers and youth.

In Europe, the Left Party in Germany has grown in popularity, becoming the fourth biggest party in parliament in the general election on 27 September 2009. Communist candidate Dimitris Christofias won a crucial presidential runoff in Cyprus, defeating his conservative rival with a majority of 53% of the vote. In Greece, in the general election on 4 October 2009, the Communist KKE got 7.5% of the votes and the new socialist grouping, Syriza or "Coalition of the Radical Left", won 4.6%, or 361,000 votes. In Ireland, in the 2009 European election, Joe Higgins of the Socialist Party took one of four seats in the capital Dublin European constituency. In Denmark, the Socialist People's Party more than doubled its parliamentary representation to 23 seats from 11, making it the fourth largest party. In France, the Revolutionary Communist League candidate in the 2007 presidential election, Olivier Besancenot, received 1,498,581 votes, or 4.08%, double that of the Communist candidate. The LCR abolished itself in 2009 to initiate a broad anti-capitalist party, the New Anticapitalist Party, whose stated aim is to "build a new socialist, democratic perspective for the twenty-first century".

In some Latin American countries, socialism has re-emerged in recent years, with an anti-imperialist stance, the rejection of the policies of neoliberalism, and the nationalisation or part-nationalisation of oil production, land and other assets. Venezuelan President Hugo Chávez, Bolivian President Evo Morales, and Ecuadorian President Rafael Correa, for instance, refer to their political programs as socialist. An April 2009 Rasmussen Reports poll conducted during the financial crisis of 2007–2009 suggested there had been a growth of support for socialism in the United States. The poll results stated that 53% of American adults thought capitalism was better than socialism, and that "Adults under 30 are essentially evenly divided: 37% prefer capitalism, 33% socialism, and 30% are undecided". The question posed by Rasmussen Reports did not define either capitalism or socialism.

Criticisms of Socialism

Criticisms of socialism range from claims that socialist economic and political models are inefficient or incompatible with civil liberties to condemnation of specific socialist states.
In the economic calculation debate, the classical liberal Friedrich Hayek argued that a socialist command economy could not adequately transmit information about prices and productive quotas, owing to the lack of a price mechanism, and as a result it could not make rational economic decisions. Ludwig von Mises argued that a socialist economy was not possible at all, because of the impossibility of rational pricing of capital goods in a socialist economy, since the state is the only owner of the capital goods. Hayek further argued that the social control over the distribution of wealth and private property advocated by socialists cannot be achieved without reduced prosperity for the general populace and a loss of political and economic freedoms.

There are a number of ways to measure the economic activity of a nation. These measures include:
- Consumer spending
- Exchange rate
- Gross domestic product (GDP)
- GDP per capita
- Gross national product (GNP)
- Stock market
- Interest rate
- National debt
- Rate of inflation
- Balance of trade

The gross domestic product (GDP) of a country is a measure of the size of its economy. While often useful, it should be noted that GDP only includes economic activity for which money is exchanged. GDP and GDP per capita are widely used indicators of a country's wealth, and GDP per capita varies enormously from country to country.

The Gini coefficient (also known as the Gini index or Gini ratio) is a measure of statistical dispersion intended to represent the income distribution of a nation's residents. The Gini coefficient measures the inequality among values of a frequency distribution. A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has the same income). A Gini coefficient of one (or 100%) expresses maximal inequality among values (for example, where only one person has all the income). A value greater than one may occur if some persons represent a negative contribution to the total (e.g., have negative income or wealth), but for larger groups values close to or above 1 are very unlikely in practice. The Gini coefficient was originally proposed as a measure of inequality of income or wealth. For OECD countries in the late 2000s, taking into account the effect of taxes and transfer payments, the income Gini coefficient ranged from 0.24 to 0.49, with Slovenia the lowest and Chile the highest. The countries of Africa had the highest pre-tax Gini coefficients in 2008–2009, with South Africa the world's highest, variously estimated at between 0.63 and 0.7. The global income inequality Gini coefficient in 2005, for all human beings taken together, has been estimated to be between 0.61 and 0.68. (A short computational sketch of the Gini coefficient is given at the end of this passage.)

An informal economy is economic activity that is neither taxed nor monitored by a government, and it is contrasted with the formal economy described above. The informal economy is thus not included in a government's gross national product (GNP). Although the informal economy is often associated with developing countries, all economic systems contain an informal economy in some proportion. Informal economic activity is a dynamic process which includes many aspects of economic and social theory, including exchange, regulation, and enforcement. By its nature, it is necessarily difficult to observe, study, define, and measure. The terms "under the table" and "off the books" typically refer to this type of economy. The term black market refers to a specific subset of the informal economy.
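As flagged above, here is a minimal Python sketch of how a Gini coefficient can be computed from a list of incomes, using the common definition based on mean absolute differences; the income figures are invented purely for illustration.

def gini(incomes):
    # Gini coefficient as half of the relative mean absolute difference.
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))   # 0.0   - perfect equality
print(gini([0, 0, 0, 100]))     # 0.75  - one person holds everything; the maximum for n = 4 is (n - 1) / n
print(gini([5, 10, 20, 65]))    # 0.475 - a spread comparable to the more unequal OECD figures cited above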
Examples of informal economic activity include: the sale and distribution of illegal drugs and unreported payments for house cleaning or baby sitting. - Herodotus. Histories, I, 94 - http://rg.ancients.info/lion/article.html Goldsborough, Reid. "World's First Coin" - Charles F. Horne, Ph.D. (1915). "The Code of Hammurabi : Introduction". Yale University. http://www.yale.edu/lawweb/avalon/medieval/hammint.htm. Retrieved September 14 2007. - Sheila C. Dow (2005), "Axioms and Babylonian thought: a reply", Journal of Post Keynesian Economics 27 (3), p. 385-391. - Braudel, Fernand. 1982. The Wheels of Commerce, Vol. 2, Civilization & Capitalism 15th-18th Century. University of California Press. Los Angeles. - Banaji, Jairus. 2007. Islam, the Mediterranean and the rise of capitalism. Journal of Historical Materialism. 15, 47–74. - Scott, John. 2005. Industrialism: A Dictionary of Sociology Oxford University Press. - Erdkamp, Paul (2005), "The Grain Market in the Roman Empire", (Cambridge University Press) - Hasan, M (1987) "History of Islam". Vol 1. Lahore, Pakistan: Islamic Publications Ltd. p. 160. - The Cambridge economic history of Europe, p. 437. Cambridge University Press, ISBN 0521087090. - Subhi Y. Labib (1969), "Capitalism in Medieval Islam", The Journal of Economic History 29 (1), pp. 79–96. - Banaji, Jairus. (2007), "Islam, the Mediterranean and the rise of capitalism", Historical Materialism 15 (1), pp. 47–74, Brill Publishers. - Lopez, Robert Sabatino, Irving Woodworth Raymond, Olivia Remie Constable (2001), Medieval Trade in the Mediterranean World: Illustrative Documents, Columbia University Press, ISBN 0231123574. - Kuran, Timur (2005), "The Absence of the Corporation in Islamic Law: Origins and Persistence", American Journal of Comparative Law 53, pp. 785–834 [798–9]. - Labib, Subhi Y. (1969), "Capitalism in Medieval Islam", The Journal of Economic History 29 (1), pp. 79–96 [92–3]. - Spier, Ray. (2002), "The history of the peer-review process", Trends in Biotechnology 20 (8), p. 357-358 . - Arjomand, Said Amir. (1999), "The Law, Agency, and Policy in Medieval Islamic Society: Development of the Institutions of Learning from the Tenth to the Fifteenth Century", Comparative Studies in Society and History 41, pp. 263–93. Cambridge University Press. - Amin, Samir. (1978), "The Arab Nation: Some Conclusions and Problems", MERIP Reports 68, pp. 3–14 [8, 13]. - Burnham, Peter (2003). Capitalism: The Concise Oxford Dictionary of Politics. Oxford University Press. - Polanyi, Karl. 1944. The Great Transformation. Beacon Press,Boston. - [Economy Professor http://www.economyprofessor.com/economictheories/monopoly-capitalism.php] - Engerman, Stanley L. 2001. The Oxford Companion to United States History. Oxford University Press. - Ragan, Christopher T.S., and Richard G. Lipsey. Microeconomics. Twelfth Canadian Edition ed. Toronto: Pearson Education Canada, 2008. Print. - Robbins, Richard H. Global problems and the culture of capitalism. Boston: Allyn & Bacon, 2007. Print. - Capital. v. 3. Chapter 47: Genesis of capitalist ground rent - Karl Marx. Chapter Twenty-Five: The General Law of Capitalist Accumulation. Das Kapital. - Dobb, Maurice. 1947. Studies in the Development of Capitalism. New York: International Publishers Co., Inc. - Harvey, David. 1989. The Condition of Postmodernity. - Wheen, Francis Books That Shook the World: Marx's Das Kapital1st ed. London: Atlantic Books, 2006 - swedberg, richard. 2007. “the market.” Contexts 6:64-66. 
- Arranz, Jose M., Carlos Garcia-Serrano, and Maria A. Davia. 2010. “Worker Turnover and Wages in Europe: The Influence of Unemployment and Inactivity.” The Manchester School 78:678-701. - Brander, James A. Government policy toward business. 4th ed. Mississauga, Ontario: John Wiley & Sons Canada, Ltd., 2006. Print. - [http://www.marxists.org/archive/lenin/works/1916/imp-hsc/index.htm Lenin, Vladimir. 1916. Imperialism: The Highest Stage of Capitalism - Horvat, B. The Political Economy of Socialism. Armonk, NY. M.E.Sharpe Inc. - Cass. 1999. Towards a Comparative Political Economy of Unfree Labor - Lucas, Robert E. Jr. The Industrial Revolution: Past and Future. Federal Reserve Bank of Minneapolis 2003 Annual Report. http://www.minneapolisfed.org/pubs/region/04-05/essay.cfm - DeLong, J. Bradford. Estimating World GDP, One Million B.C. – Present. http://www.j-bradford-delong.net/TCEH/1998_Draft/World_GDP/Estimating_World_GDP.html Accessed: 2008-02-26 - Nardinelli, Clark. Industrial Revolution and the Standard of Living. http://www.econlib.org/library/Enc/IndustrialRevolutionandtheStandardofLiving.html Accessed: 2008-02-26 - Prasad, Monica. 2012. The Land of Too Much: American Abundance and the Paradox of Poverty; Harvard University Press - Prasad, Monica. 2006. The Politics of Free Markets: The Rise of Neoliberal Economic Policies in Britain, France, Germany, and the United States;University of Chicago Press - Newman, Michael. 2005. Socialism: A Very Short Introduction, Oxford University Press, ISBN 0-19-280431-6 - Socialism, (2009), in Encyclopædia Britannica. Retrieved October 14, 2009, from Encyclopædia Britannica Online: http://www.britannica.com/EBchecked/topic/551569/socialism, "Main" summary: "Socialists complain that capitalism necessarily leads to unfair and exploitative concentrations of wealth and power in the hands of the relative few who emerge victorious from free-market competition—people who then use their wealth and power to reinforce their dominance in society." - Marx and Engels Selected Works, Lawrence and Wishart, 1968, p. 40. Capitalist property relations put a "fetter" on the productive forces. - John Barkley Rosser and Marina V. Rosser, Comparative Economics in a Transforming World Economy (Cambridge, MA.: MIT Press, 2004). - Beckett, Francis, Clem Attlee, (2007) Politico's. - Socialist Party of Great Britain. 1985. The Strike Weapon: Lessons of the Miners’ Strike. Socialist Party of Great Britain. London. http://www.worldsocialism.org/spgb/pdf/ms.pdf - Hardcastle, Edgar. 1947. The Nationalisation of the Railways Socialist Standard. 43:1 http://www.marxists.org/archive/hardcastle/1947/02/railways.htm - Schaff, Kory. 2001. Philosophy and the problems of work: a reader. Rowman & Littlefield. Lanham, MD. ISBN 0-7425-0795-5 - Walicki, Andrzej. 1995. Marxism and the leap to the kingdom of freedom: the rise and fall of the Communist utopia. Stanford University Press. Stanford, CA. ISBAN 0-8047-2384-2 - "Market socialism," Dictionary of the Social Sciences. Craig Calhoun, ed. Oxford University Press 2002 - "Market socialism" The Concise Oxford Dictionary of Politics. Ed. Iain McLean and Alistair McMillan. Oxford University Press, 2003 - Stiglitz, Joseph. "Whither Socialism?" 
Cambridge, MA: MIT Press, 1995 - Karl Marx: did he get it all right?, The Times (UK), October 21, 2008, http://www.timesonline.co.uk/tol/news/politics/article4981065.ece - Capitalism has proven Karl Marx right again, The Herald (Scotland), 17 Sep 2008, http://www.heraldscotland.com/capitalism-has-proven-karl-marx-right-again-1.889708 - Gumbell, Peter. Rethinking Marx, Time magazine, 28 January 2009, http://www.time.com/time/specials/packages/article/0,28804,1873191_1873190_1873188,00.html - Capitalist crisis - Karl Marx was right Editorial, The Socialist, 17 Sep 2008, www.socialistparty.org.uk/articles/6395 - Cox, David. Marx is being proved right. The Guardian, 29 January 2007, http://www.guardian.co.uk/commentisfree/2007/jan/29/marxisbeingprovedright - Communist Party of Nepal' - [http://www.countryrisk.com/editorials/archives/cat_singapore.html CountryRisk Maintaining Singapore's Miracle - Japan's young turn to Communist Party as they decide capitalism has let them down - Daily Telegraph October 18, 2008 - "Communism on rise in recession-hit Japan", BBC, May 4, 2009 - Germany’s Left Party woos the SPD' - Germany: Left makes big gains in poll http://www.greenleft.org.au/2009/813/41841 - Christofias wins Cyprus presidency' - Danish centre-right wins election http://news.bbc.co.uk/1/hi/world/europe/7091941.stm - Has France moved to the right? http://www.socialismtoday.org/110/france.html - Le Nouveau parti anticapitaliste d'Olivier Besancenot est lancé Agence France-Presse, June 29, 2008 - Rasmussen Reports http://www.rasmussenreports.com/public_content/politics/general_politics/april_2009/just_53_say_capitalism_better_than_socialism , accessed October 23, 2009. - Hayek, Friedrich. 1994. The Road to Serfdom. University of Chicago Press. ISBN 0-226-32061-8 - Hoppe, Hans-Hermann A Theory of Socialism and Capitalism. Kluwer Academic Publishers. page 46 in PDF.
The valency of an element is the number of electrons in the outermost orbit (or valency shell) involved in the formation of a chemical bond. The number of electrons in the valency shell is never greater than 7. The outermost electronic configuration is responsible for the variability of the valency: some elements have variable valencies. Examples of elements with variable valencies are iron (2 and 3), tin (2 and 4) and copper (1 and 2). The other elements with variable valencies are as shown in table 7.1. The oxidation state of an element equals its valency, or the charge carried by its ion when the element ionizes in solution. An example of this relation is iron (II), whose oxidation state (or oxidation number) is 2 and whose valency is 2. The same applies to iron (III).

Valencies of the normal elements may be deduced from the group number they occupy in the Periodic Table. The valencies of elements of groups I to IV are equal to the group numbers they occupy in the Periodic Table. The valencies of elements of groups V to VII are equal to 8 minus the group number. For example, the valency of chlorine, which is in group VII, is 1, i.e. (8 - 7) = 1. The valency of oxygen in group VI is 2, i.e. (8 - 6) = 2. Elements in group 0 (or VIII) have zero valency, i.e. (8 - 8) = 0.

A chemical formula is a method of representing the molecule of a compound by using chemical symbols. It is a way of expressing information about the atoms that constitute a particular chemical compound. The chemical formula identifies each constituent element by its chemical symbol and indicates the number of atoms of each element found in each single molecule of that compound. The symbol for a hydrogen atom is H. When two hydrogen atoms join together they form a molecule, H2. The number 2 to the right and below the symbol shows the number of atoms the molecule contains. P4 and S8 represent 4 atoms of phosphorus and 8 atoms of sulphur contained in one molecule of phosphorus and one molecule of sulphur respectively. Although the formula for a chlorine molecule is Cl2, it cannot be expressed as 2Cl: 2Cl means two atoms of chlorine, not a molecule of chlorine. H2O stands for a molecule of water, which consists of two hydrogen atoms and one oxygen atom. H2SO4 stands for a molecule of sulphuric acid containing 2 atoms of hydrogen, 1 atom of sulphur and 4 atoms of oxygen. CaCO3 stands for a molecule of calcium carbonate containing 1 atom of calcium, 1 atom of carbon and 3 atoms of oxygen. Where it is necessary to show the number of molecules of a compound, this is achieved by writing the appropriate number before the formula of the compound. A few examples are shown below:
2H2O means two molecules of water
3H2SO4 means three molecules of sulphuric acid
5CaCO3 means five molecules of calcium carbonate
It is important to note that the figure appearing before the formula multiplies the whole of it. For example, 3H2SO4 stands for 3 molecules of sulphuric acid containing six atoms of hydrogen, three atoms of sulphur and twelve atoms of oxygen. It is a mistake to think that the number 3 before the formula multiplies only the symbol which immediately follows it (that is, H2); the 3 multiplies the whole of the formula.

A binary compound is a compound made of only two types of reacting species; for example, sodium chloride (NaCl), which is made of only sodium and chlorine, is a binary compound. Look at table 7.1. The size of the charge on an ion is a measure of its valency or combining power. You will notice that, ignoring the signs of the charges, the value of the charge on an ion is equal to the valency of the atom.
You will need to memorise the valencies of these metals as far as possible so as to be able to write the formulae of their compounds correctly. The following rules apply:
- Metals (or their positively charged ions) must come first in the formula, followed by non-metals (or their negatively charged ions).
- Where the formula is to include a radical, the radical must be treated as a single atom and must be bracketed if need be.
- The ammonium ion is to be treated as if it were a metal.
- Positive charges must be equal to negative charges for a neutral molecule or compound.
- Single elements, say Na, K, Si, Ag, etc., should not be bracketed.
- The valencies of the two species should be exchanged and written as subscripts.
- A valency of 1 is simply assumed and not written in the formula.

This is best shown by using some examples. The following procedure must be followed when writing the formulae of binary compounds: first, write down the correct symbols for the atoms of the elements or ions that make up the compound; second, write down the valencies of the atoms of the elements; then exchange the valencies and write them as subscripts in the final formula of the compound. Remember that a valency of 1 is not expressed in the formula, and that at this final step the radicals must be bracketed if necessary. (A short sketch of this 'exchange the valencies' step is given at the end of this passage.)

The term "nomenclature" refers to a system of naming. The system of naming in use is that recommended by IUPAC (the International Union of Pure and Applied Chemistry). The modern system of naming reveals the type of elements present in a given compound. Some old or trivial names have remained common, and important compounds have historical names that do not seem to fit the system, for example water (H2O), ammonia (NH3), methane (CH4) and mineral acids such as sulphuric (VI) acid (H2SO4), nitric (V) acid (HNO3) and hydrochloric acid (HCl). Organic acids such as ethanoic acid (CH3COOH) are also included in this group. These names are trivial but they have been adopted in modern nomenclature. If these exceptions are omitted, there are basic generalizations that are useful:
- If there is a metal in the compound, it must be named first. The ammonium ion, NH4+, is regarded as if it were a metal in the compounds in which it occurs, such as NH4NO3, NH4Cl, etc.
- For elements with variable valencies, such as iron and lead, Roman numerals are included in the name to indicate the valency of the metal or ion which is present. For example, iron (III) chloride contains Fe3+ while iron (II) chloride contains Fe2+. The same applies to lead (II) and lead (IV) compounds, and so on.
- Compounds containing two elements only (binary compounds) have names ending in -ide; for example sodium chloride (NaCl), calcium bromide (CaBr2), magnesium nitride (Mg3N2), etc. The important exception to this is the hydroxides, which contain the hydroxide (OH) ion.
- Compounds containing a polyatomic ion (usually containing oxygen) have names that end with -ate; for example calcium carbonate (CaCO3), potassium nitrate (KNO3), magnesium sulphate (MgSO4), sodium ethanoate (CH3COONa), etc.
- The names of some compounds use prefixes to tell you the number of a particular atom in the molecule. This is useful if two elements form more than one compound, for example: carbon monoxide (CO), carbon dioxide (CO2), nitrogen dioxide (NO2), dinitrogen tetroxide (N2O4), sulphur dioxide (SO2), sulphur trioxide (SO3), etc. The following prefixes indicate the number of atoms in cases like this: mono – one; di – two; tri – three; tetra – four; penta – five; hexa – six; hepta – seven; octa – eight; nona – nine; and deca – ten.
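As flagged above, here is a minimal Python sketch of the 'exchange the valencies' step for binary compounds and for compounds containing a radical. The function name and arguments are my own illustration; the valencies themselves would normally be read from table 7.1.

from math import gcd

def write_formula(positive, positive_valency, negative, negative_valency, negative_is_radical=False):
    # Exchange the valencies, reduce them to the simplest ratio, and use them as subscripts.
    divisor = gcd(positive_valency, negative_valency)
    n_pos = negative_valency // divisor    # subscript on the metal / positive ion
    n_neg = positive_valency // divisor    # subscript on the non-metal / negative ion

    def part(symbol, count, bracket):
        if count == 1:
            return symbol                  # a valency of 1 is not written
        return ("(" + symbol + ")" if bracket else symbol) + str(count)

    return part(positive, n_pos, False) + part(negative, n_neg, negative_is_radical)

print(write_formula("Na", 1, "Cl", 1))                             # NaCl
print(write_formula("Mg", 2, "Cl", 1))                             # MgCl2
print(write_formula("Al", 3, "O", 2))                              # Al2O3
print(write_formula("Ca", 2, "NO3", 1, negative_is_radical=True))  # Ca(NO3)2
print(write_formula("Mg", 2, "O", 2))                              # MgO - equal valencies cancel

Dividing by the greatest common divisor is what reduces a result such as Mg2O2 to the simplest formula MgO.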
The empirical formula is the simplest formula of any compound. It expresses the simplest ratio of all the atoms or ions that make up a certain compound. For example, the empirical formula of the compound with the formula C2H4 is CH2. This means that the simplest ratio of C:H is 1:2. This ratio also indicates the ratio in which carbon and hydrogen atoms combine to form the compound C2H4.

The molecular formula is the formula which shows the actual number of all atoms present in a given compound. For example, the molecular formula of the above compound is C2H4, meaning that two atoms of carbon and four atoms of hydrogen form the compound. Likewise, the molecular formula of water is H2O, meaning that the compound is made up of two atoms of hydrogen and one atom of oxygen. The empirical formula of a compound is thus the simplest formula which shows its composition by mass and the ratio of the numbers of the different atoms present in the molecule. The empirical formula differs from the molecular formula of the same compound, since only the molecular formula agrees with the molar mass of the compound.

In water, 2 hydrogen atoms combine with 1 oxygen atom to form one molecule, so 2 moles of hydrogen atoms combine with 1 mole of oxygen atoms. Moles can be changed to grams using relative atomic masses (RAMs). So we can write: 2 grams of hydrogen combine with 16 grams of oxygen. In the same way, 1 g of hydrogen combines with 8 g of oxygen, and 4 g of hydrogen combine with 32 g of oxygen. The masses of each substance taking part in the reaction are always in the same ratio. Therefore, from the molecular formula of a compound you can tell: how many moles of the different atoms combine; how many grams of the different elements combine; the number of each kind of atom of the different elements that combine to make up the compound; and the percentage of each atom in the compound, based on the RAM of each atom. From the empirical formula you can tell the simplest ratio, or proportion, of the different atoms that combine to form the compound.

The empirical formula of both ethene (C2H4) and propene (C3H6), with molar masses 28.0 g and 42.0 g respectively, is CH2 (i.e. the same), although the two compounds possess different molecular formulae and masses. In general, the empirical formula mass multiplied by a whole number, n, gives the molar mass of the compound, so as long as the value of n is known, the molecular formula can be deduced. For example, given that the molar mass of ethene is 28.0 g and its empirical formula is CH2 (formula mass 14.0 g), n = 28.0 / 14.0 = 2, so the molecular formula is C2H4. Similarly, carbon dioxide has a molar mass of 44 g and an empirical formula of CO2 (formula mass 44 g), so n = 1 and its molecular formula is also CO2.

We can find the percentage by mass of each element in a compound by experiment. Using this information, it is then possible to find the simplest formula of that compound. To do this we shall also need to know the relative atomic mass of each element present in the compound. To work out the empirical formula you need to know the masses of the elements that combine. For example, magnesium combines with oxygen to form magnesium oxide. The masses that combine can be found like this: weigh a crucible and lid, empty. Then add a coil of magnesium ribbon and weigh again. Heat the crucible, raising the lid carefully at intervals to let oxygen in; the magnesium ribbon burns brightly. When burning is complete, let the crucible cool (still with its lid on) and then weigh again. The increase in mass is due to oxygen.
Mass of crucible + lid = 25.2 g
Mass of crucible + lid + magnesium = 27.6 g
Mass of crucible + lid + magnesium oxide = 29.2 g
Mass of magnesium = 27.6 g – 25.2 g = 2.4 g
Mass of magnesium oxide = 29.2 g – 25.2 g = 4.0 g
Mass of oxygen, therefore = 4.0 g – 2.4 g = 1.6 g
So 2.4 g of magnesium combines with 1.6 g of oxygen. The RAMs are Mg = 24 and O = 16. Changing masses to moles: 2.4/24 moles of magnesium atoms combine with 1.6/16 moles of oxygen atoms, i.e. 0.1 moles of magnesium combine with 0.1 moles of oxygen atoms. So the atoms combine in a ratio of 0.1 : 0.1, or simply 1 : 1. The empirical formula of magnesium oxide is therefore MgO.

Similarly, the empirical formula can be determined from a given percentage composition or from the masses of the elements that constitute a compound. An experiment shows that 32 g of sulphur combines with 32 g of oxygen to form the compound sulphur dioxide. What is its empirical formula? Divide each mass by the RAM of the respective element. This gives the ratio of the numbers of atoms of each element that make up the compound. Then divide each of these ratios by the smallest to get the whole-number ratio; this gives the simplest possible ratio of the constituent elements. Sometimes you may not get a whole-number ratio; if this happens, round off the ratio to the nearest whole number. Finally, write down the formula using the ratio obtained. Study the following table and make sure you understand the procedure.

An oxide of carbon contains 27.3% carbon. Find the empirical formula of the oxide. Think big! What other element is present in the oxide of carbon, and how do you know its percentage? In order to work out the simplest formula we divide each percentage by the relative atomic mass of each element. This allows a comparison of the numbers of atoms of each element that are present, giving a ratio of each element with respect to the others, as worked out in the table below. To get a whole-number ratio, we again divide each of these ratios by the smallest. The result shows the simplest ratio of atoms, in this case one carbon atom to two oxygen atoms. The simplest formula, therefore, is CO2.

A compound X is a hydrocarbon: it contains only carbon and hydrogen atoms. 0.84 g of X was completely burned in air, producing 2.64 g of carbon dioxide (CO2) and 1.08 g of water (H2O). Find the empirical formula. In CO2, 12/44 of the mass is carbon, and all of the carbon came from X, so 2.64 g of CO2 contains (12/44) × 2.64 g = 0.72 g of carbon. X therefore contained 0.72 g of carbon and 0.12 g of hydrogen (0.84 g – 0.72 g = 0.12 g). Since we have deduced the masses of the respective elements in the compound, we can now work out the empirical formula as usual (it comes to CH2).

The molecular formula is more useful than the empirical formula because it gives more information. For some molecular compounds both formulae are the same; for others they are different. To find the molecular formula you need the empirical formula and the molecular weight of the compound. In some cases the molecular weight is given, while in others the vapour density is given. To get the molecular weight, multiply the vapour density by two:
Molecular weight = Vapour density × 2
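Here is a minimal Python sketch of the two calculations above: the empirical formula from combining masses (or percentages), and the multiplier n that links the empirical formula to the molecular formula. The function names and the data are for illustration only.

def empirical_ratio(masses, rams):
    # Divide each mass (or percentage) by the RAM, then by the smallest result,
    # and round to the nearest whole number, as in the worked examples above.
    # (A ratio such as 1.5 would need scaling up rather than rounding.)
    moles = {element: masses[element] / rams[element] for element in masses}
    smallest = min(moles.values())
    return {element: round(value / smallest) for element, value in moles.items()}

def molecular_multiplier(empirical_formula_mass, molar_mass):
    # n such that n x (empirical formula mass) = molar mass.
    return round(molar_mass / empirical_formula_mass)

print(empirical_ratio({"Mg": 2.4, "O": 1.6}, {"Mg": 24, "O": 16}))   # {'Mg': 1, 'O': 1} -> MgO
print(empirical_ratio({"S": 32, "O": 32}, {"S": 32, "O": 16}))       # {'S': 1, 'O': 2} -> SO2
print(empirical_ratio({"C": 27.3, "O": 72.7}, {"C": 12, "O": 16}))   # {'C': 1, 'O': 2} -> CO2
print(molecular_multiplier(14.0, 28.0))                              # 2, so (CH2) x 2 = C2H4 for ethene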
Binary, Hexadecimal, and Other Base Numbers

Learning and understanding a different base-numbering system has one very large similarity to learning a new language: you need to truly understand the language/base system you currently use. Before we jump into binary, hexadecimal, and other base numbers, let's understand how our base-10 numbering system, or decimal, works. We have a base-10 numbering system because we have 10 fingers. That's the general belief anyway, and I don't see a reason to disagree. That makes it easy to flash all of our fingers twice to get 20, three times to get 30, etc. Now, we say "thirty" but seeing it written as "30" helps this make more sense. You break that into two parts and it's literally "3 sets of 10". Mathematically - … 100000*0 + 10000*0 + 1000*0 + 100*0 + 10*3 + 1*0 = 30. If we do a bigger example, like 582,173, it breaks down the same way. In essence, 500,000 + 80,000 + 2,000 + 100 + 70 + 3 = 582,173. And to make it a little bit more generic, we can look at those multiples of 10 as powers of 10: 582,173 = 5*10^5 + 8*10^4 + 2*10^3 + 1*10^2 + 7*10^1 + 3*10^0. Do you see why it's called base 10? The base of the exponent is 10! And with each spot you move to the left, the exponent increases by one. It's important to note that the first spot is raised to the power of 0, which always makes the multiplier 1. Also, while this example only shows the exponent going up to 5, there is no limit to the size of the exponent, but it will always increment by one. Just in case you have any questions about why 10^0 = 1, let me stop you right now. I have no idea. We're electrical engineers, not mathematicians. But. What's important is that anything raised to the 0 is 1. Anything. Except infinity. Infinity… always complicating things. And something that may not seem important now but will in a little bit, is that with any base system, you need to represent each digit in that system with ONE character. With base 10, we have 10 characters to represent every number, 0-9. And don't forget, 0 is a number, which is how you get 10 characters with 0-9.

Now, let's move on to binary. While we have 10 fingers, computers do not. Not yet, anyway. They can only think in terms of "yes" and "no" or "on" and "off". Therefore, their counting system, their entire language, is based on either 1 (yes - represented as a high voltage) or 0 (no - represented as a low voltage). With that, they have to count in a way that works with this limitation. With base 10, we had the base number as 10. With binary, we only have 2 options, so we use base 2, and each place value is a power of 2: 1, 2, 4, 8, 16, 32, and so on. Let's use the number 53 to show how this would be represented in binary: 53 is 110101, i.e. one 32, one 16, no 8, one 4, no 2 and one 1, which yields 32+16+4+1 = 53. I want to note that since this is a 0-1 option, you can only have zero sets or 1 set of each multiplier with binary, but this isn't the case with other numbering systems. Remember how in decimal, we have 10 characters to represent 10 values from 0-9? In binary, we have 2 characters to represent 2 values from 0-1. In binary, you will never see the symbols 2-9 that you've grown accustomed to in the decimal system. And this is literally all there is! That doesn't necessarily mean it's easy, though, so let's run through another binary example in a little more depth before moving on.

How to convert from Decimal to Binary?

Let's use 37 as our next example. First, find the largest power of two that still fits into the number you want to convert. In this case, it's 32. Now, 37 - 32 leaves us with 5.
As we move to increasingly smaller place values, we compare, finding the next power of two that fits into what's left. We can see that 16 and 8 are bigger than 5, so let's put "0" in for those. But 4 is smaller than 5, so we have 1 set of 4. With 5 minus 4 equalling 1, things get pretty simple by this point. We know that 2 is bigger than one and 1 is equal to one, so we put in a zero and a 1 at the end. 32+4+1 = 37! And, because I make mistakes, I usually run over to our Binary/Hex/Decimal Converter tool and verify that I did it correctly. Another bonus of using that tool is that you can come up with any number you want and practice with it while also making sure you get the right answer. Going the other way, from binary to decimal, is actually easier if you write out the place values (32, 16, 8, 4, 2, 1). Let's use 101110 as an example: filling in the place values gives 32+8+4+2 = 46!

So, what is hexadecimal and why is it a thing? Hexadecimal is base-16 (hex = 6, dec = 10, so hexadec = 16, I guess). So, whereas binary has "2" as a base and decimal has "10" as a base, hexadecimal has "16" as a base. Before we go into the complexities of representing a number bigger than 9 with a single character, I just want to emphasize that this works exactly the same as binary and decimal, just with a different base number. Okay, this is where it can get hard to wrap your head around it, as we can easily think of numbers less than 9 but struggle to think of numbers greater than 9 without using 2 digits. Remember from above where we discussed that the base-10 system uses 10 characters, 0-9, and the base-2 system uses 2 characters, 0-1? Well, base-16 uses 16 characters, 0-F. Look at the table below and you will probably be confused, so just give it a glance before moving on. You can always go back up and look at it again. This table compares counting from 0 to 15 in hexadecimal and decimal.

Counting from 0-15 in Hexadecimal and Decimal
hex 1 = … + 10^1*0 + 10^0*1 = 1
hex 2 = … + 10^1*0 + 10^0*2 = 2
hex 3 = … + 10^1*0 + 10^0*3 = 3
...
hex 9 = … + 10^1*0 + 10^0*9 = 9 (note the transition from 9 to 10 in base-10)
hex A = … + 10^1*1 + 10^0*0 = 10
hex B = … + 10^1*1 + 10^0*1 = 11
hex C = … + 10^1*1 + 10^0*2 = 12
hex D = … + 10^1*1 + 10^0*3 = 13
hex E = … + 10^1*1 + 10^0*4 = 14
hex F = … + 10^1*1 + 10^0*5 = 15

See what happens after 9? In decimal, it's "10*1 + 1*0 = 10" - which only makes sense in base-10. In hexadecimal, this doesn't make sense because we're using base-16, not base-10, so we need an additional 6 characters. So we use letters instead! A = 10, B = 11, C = 12, D = 13, E = 14, and F = 15. If you look at our Binary/Hex/Decimal Converter, we only offer up to Base-36, because we ran out of alphabet letters to use and have no idea why you would need to go up that high anyway (if you do, reach out to us, we'd be interested to hear what shenanigans you're up to!). Please note that the use of letters is mathematically arbitrary. If we had decided that 10 would be represented by "$" instead, then we would use "$". But, with English anyway, the alphabet gives a good selection of familiar, ordered symbols that almost everyone recognizes and whose order everyone already knows.

Let's use 2EA as an example and convert it from hexadecimal to decimal - use the table above as a reference for the values of the hexadecimal digits. In this case, it would be 2*256 + 14*16 + 10*1 = 746. Or, another way to look at it is 2*(16^2) + 14*(16^1) + 10*(16^0) = 746. For our base-10 brain, this is hard to keep straight and may seem pointless. Who would use base-16? Actually, we would!
Because it's easier than binary and it's *very* easy to convert between binary and hexadecimal. The reason hexadecimal is so easy is that each hexadecimal digit can be represented by four binary digits (bits), and vice versa. In other words, hexadecimal and binary are extremely easy to convert between. Every value from 0 to 15, one full hexadecimal digit, can be represented perfectly with four bits (remember that a bit is a "binary digit").

Now, computers speak in binary and that isn't a big deal for them. But when you're looking at a random number, is it easier to read and remember a long string of 1s and 0s or a short string of hexadecimal digits? If you said you'd prefer the binary, then awesome, you're a robot. For humans, the hexadecimal version is easier. And since you can represent one hexadecimal digit with exactly four binary digits, it scales beautifully.

While this only discussed decimal, binary, and hexadecimal specifically, the basic concepts work when dealing with any base-N number system, from 2 to infinity. Our recommendation is to work through a couple of conversions with different base numbers to give these concepts some time to sink in and become second nature for you.

A couple of key points to remember:
- The base number is the multiplier.
- The exponent of the base number always starts at 0 and always increments by one for each step.
- Any number raised to the power of 0 is 1. In other words, any number with an exponent of 0 is 1.
- Counting starts at 0, not 1, which is why base-10 uses the digits 0-9, not 1-10.
- Not understanding this the first time through is okay! Try practicing some conversions and then come back and read through this again as necessary.
- Finally, always double-check your work with a converter, because mistakes happen.

What about fractions? Or negative numbers? Or really, really big numbers? Or even letters? Yes, these are issues and we are working on some tutorials now to cover how to deal with these in binary. As it is, you've got whole numbers down and you should be proud of yourself. We'll update this page once we get those other tutorials done so you can learn more about how computers deal with numbers and letters.

Can't I just use a converter, why do I need to know this? Doing these conversions by hand isn't practical, but it's important to have a fundamental, intuitive understanding of the different base number systems when working with microcontrollers.
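If you'd like to practice the same conversions with a few lines of code, here is a short Python sketch (not part of the original tutorial, just an illustration) that mirrors the hand calculations above: expanding 37 into binary place values, reading 101110 back as decimal, converting the hex value 2EA, and showing the four-bits-per-hex-digit pairing.

```python
# A small sketch (illustration only) that mirrors the hand conversions above.

def to_base(n, base):
    """Convert a non-negative integer to a digit string in the given base (2-36)."""
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        out = digits[n % base] + out   # the remainder gives the next digit, smallest place first
        n //= base
    return out

def from_base(s, base):
    """Convert a digit string in the given base back to a decimal integer."""
    value = 0
    for ch in s:
        value = value * base + int(ch, 36)   # int(ch, 36) maps 0-9 and A-Z to 0-35
    return value

print(to_base(37, 2))           # 100101 -> 32 + 4 + 1 = 37
print(from_base("101110", 2))   # 46     -> 32 + 8 + 4 + 2
print(from_base("2EA", 16))     # 746    -> 2*256 + 14*16 + 10*1
print(to_base(746, 16))         # 2EA

# Each hex digit maps to exactly four bits, which is why hex is so handy:
for h in "0123456789ABCDEF":
    print(h, format(int(h, 16), "04b"))
```

Python's built-in int("2EA", 16), bin(), and hex() do the same job, but writing it out by hand makes the place-value pattern obvious.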
Logo is one of the easiest computer languages, and one that is especially suitable for elementary school children. (See our Logo Bibliography on this website for references to classroom uses of Logo.) For example, the star-pattern in Figure 1 can be drawn on a computer by using just two Logo commands, FORWARD 80 RIGHT 156, and repeating these instructions until the pattern is complete. (For more star patterns, see exercise 27 in Logo Exercises.)

Logo was written for children. In experiments at MIT's Artificial Intelligence Laboratory, children drew geometric figures by giving instructions to a mechanical robot called a turtle (see Figure 2). As the turtle moved across the floor on large sheets of paper, a pen at its center traced a path. The children could command the turtle to move from one point to another by giving it an angle to turn through and a distance to move. Figure 2. A robot at MIT's Artificial Intelligence Laboratory

Now the same thing can be done on a computer screen by moving a small triangular pointer called a turtle. The turtle has a position, and at each position on the screen the turtle points in some direction, called its heading. The heading is some number of degrees from 0 to 360. The turtle's start position is at the center of the screen, heading north. This position is called home. Regardless of where the turtle is on the screen, if you type HOME and then press ENTER, the turtle will go to the center of the screen and face north, as in Figure 3.

You can move the turtle by giving it commands. Type FORWARD 30 and press ENTER, and the turtle will move forward 30 turtle steps, tracing its path. Type RIGHT 90 and press ENTER, and the turtle will turn 90° to the right. Type FORWARD 50 and press ENTER, and the turtle will move forward 50 steps, tracing its path. These moves are shown on the screen in Figure 4.

To obtain two screens, one with the turtle and one for typing commands, type SPLITSCREEN (SS) and press ENTER. Or, to obtain a full screen for typing commands, type FULLSCREEN (FS). To return the turtle to its home position and clear the screen of all turtle tracks, type CLEARSCREEN (CS) and press ENTER. Sometimes it is helpful to hide the turtle to get a better view of a geometric figure. You can make the turtle invisible by typing HIDETURTLE (HT), and you can make it appear again by typing SHOWTURTLE (ST).

The turtle can be turned right or left any number of degrees, and it can be moved back as well as forward. Several commands and their abbreviations are shown in the table below. The abbreviations may be typed instead of the spelled-out commands. The commands and their abbreviations are typed in upper case in these instructions, but they may be typed in lower case on your computer. The ENTER key must be pressed before the computer will carry out any command, but in the remaining instructions, the ENTER step will usually not be stated.

A line can be drawn by moving the turtle forward and backward. The following commands will produce the line shown in Figure 5 and leave the turtle in its start position at the center of the screen. One of the advantages of Logo is that we can define new commands, called procedures. For example, we can give a list of commands a name, and then whenever we type the name, the turtle will carry out these commands. We can think of creating a procedure as teaching the turtle a new word. Here is a procedure for drawing the line in Figure 5. This procedure has been named LINE. The TO tells the computer that you are defining a new command.
The END tells the computer that you have finished the definition. The TO and END instructions must be on separate lines as shown in the above example. After you type END and press ENTER, the computer's response, LINE defined, will show on the screen. Now if you type LINE, the turtle will draw a line in the direction in which it is heading and will finish in the position in which it started. If your newly defined procedure does not work or you wish to make a revision, you can edit it by typing EDIT followed by a quotation mark and the procedure's name. For example, if you have defined LINE, typing EDIT "LINE will produce the Logo Editor screen for making changes in the procedure. To exit from the Logo Editor, click on the box in the upper corner of the screen.

Example A (This is the first of several worked examples. Try obtaining the answers for these examples before looking at the solutions.) LINE is used 3 times in the following set of commands. The turtle's initial position is the center of the screen, and its heading is north. Sketch the figure that will be drawn by the turtle in response to these commands. Solution: The turtle draws a vertical line and makes a right turn of 60°, then repeats this action 2 more times as shown in Figure 6. The turtle's final heading is 180° (south) because it has turned through three 60° angles.

There are times when we want the turtle to move to a new location, but not to draw a path. This can be accomplished by using the command PENUP (PU). When we want the turtle to draw again, we use the command PENDOWN (PD). These commands were used, together with the command LINE, to instruct the turtle to draw the two parallel lines shown in Figure 7. Figure 7. Two parallel lines

A powerful feature of computers is their ability to repeat a sequence of commands many times. One way of obtaining this kind of repetition is through the REPEAT command. For example, instead of using the commands LINE and RT 60 three times to produce three lines at 60° angles, as in Example A, we can use one REPEAT command. This command must include a number, to tell the turtle the number of times the instructions are to be repeated, and a list of instructions, which are typed inside square brackets. REPEAT 3 [LINE RT 60]

The following commands produce the square in Figure 8, whose sides have length 60. Notice that two commands are written on each line. Several commands may be typed on a line before ENTER is pressed. Since FD 60 RT 90 is repeated 4 times, we can accomplish the same result by using the REPEAT command. REPEAT 4 [FD 60 RT 90] Now let's use the REPEAT command to define a procedure for drawing a square. By typing SQ, we instruct the turtle to draw a square whose sides have length 60.

The command HOME is very helpful in drawing polygonal figures because regardless of where the turtle is on the screen, this command will send the turtle back to its start position to complete a closed curve. Sketch the figure that will be drawn by the following commands. RT 90 FD 35 LT 90 FD 50 HOME Solution: These commands instruct the turtle to draw a right triangle. The two legs of the triangle are drawn first, and then the hypotenuse is formed by sending the turtle home.

Regular polygons are easy to draw by using Logo commands. To draw any regular polygon, the turtle will make a sequence of equal forward moves and equal turns until it has turned a total of 360°. In general, the size of the turn will be 360° divided by the number of sides in the polygon.
For example, to draw the regular hexagon in Figure 9, the turtle makes six forward moves of 50 steps, each followed by a right turn of 60°. These 60° angles are the exterior angles of the hexagon. Each interior angle of the hexagon is 120°, the supplement of a 60° turn. Notice that it does not matter where the turtle begins or what direction it is heading; the sequence of six moves and turns listed in Figure 9 will produce a regular hexagon. Figure 9. Regular hexagon

The 12 commands for drawing this hexagon can be condensed into one command by using REPEAT, as shown in the following procedure. Once this procedure has been defined, we can obtain the hexagon shown above by typing HEXAGON. What regular polygon will be drawn by the following commands? REPEAT 10 [FD 25 RT 360/10] Solution: Since the turtle will make 10 turns of 360 ÷ 10 = 36 degrees each, a regular decagon with sides of length 25 will be produced.

Circles and Arcs

As the number of sides in a regular polygon increases, the shape of the polygon becomes closer to a circle. The procedure shown in Figure 10 instructs the turtle to draw a regular polygon with 360 sides, which we will call CIRCLE. An arc is obtained by drawing part of a circle. The number of 1° turns the turtle makes is the number of degrees in the arc. The following program produces the 90° arc shown in Figure 11.

Symmetric figures can be obtained by interchanging all RIGHT and LEFT commands in a procedure. When the original figure is combined with the revised figure, the result will be a figure with a vertical line of symmetry. Let's see how this works. The procedure RIGHTVENT produces the vent in Figure 12. Now if we use the commands in the procedure RIGHTVENT but change each RT to LT and each LT to RT, then the new procedure, called LEFTVENT, will produce a vent that points to the left, as shown in Figure 13. We can now instruct the turtle to draw both of these figures by typing RIGHTVENT LEFTVENT and pressing ENTER. The resulting figure (Figure 14) has one line of symmetry, the north-south centerline of the screen.

A figure with rotation symmetry can be created by rotating a given figure. The next set of commands instructs the turtle to draw the flag shown in Figure 15. Then the procedure FLAG is used with the REPEAT command to create a figure with five rotation symmetries (Figure 16). REPEAT 5 [FLAG RT 72]

Have you ever had to line a sheet of paper by drawing many carefully spaced lines or to produce a grid? The solution to the following problem will provide ideas for accomplishing such tasks. How can the turtle be instructed to draw an 8 x 8 grid?

Understanding the Problem. A drawing will help you understand the problem and devise a plan. An 8 x 8 grid is shown in Figure 17. a. What is the fewest number of vertical and horizontal lines needed to form this grid?

Devising a Plan. The need for horizontal and vertical lines suggests defining a procedure to draw a line segment and then using the REPEAT command. The following commands, which include the procedure LINE that was defined in Figure 7, can be used to draw nine vertical lines. REPEAT 9 [LINE PENUP RT 90 FD 15 LT 90 PENDOWN] Figure 18 shows the nine vertical lines. b. Where will the turtle be located after the ninth line is drawn?

Carrying Out the Plan. After drawing the nine vertical lines, the turtle will be 15 steps to the right of the center of the ninth line and headed north.
Since half of each line's length is 60 units, the commands PENUP BK 60 LT 90 FD 75 PENDOWN will put the turtle in position to draw the nine horizontal lines by repeating the same commands as were used for drawing the nine vertical lines. c. Where is this position?

Looking Back. The commands for drawing the preceding grid are defined below as a procedure called GRID. d. What changes would need to be made in the procedure GRID to decrease the space between the vertical and horizontal lines from 15 to 8 units?
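The original Logo procedures are not reproduced here, but if you would like to experiment with the grid problem without a Logo interpreter, here is a rough translation into Python's built-in turtle module. It assumes, as the text implies, that each grid line is 120 turtle steps long (60 steps either side of the turtle's starting point) and that the lines are 15 steps apart; changing SPACING is one way to experiment with question (d).

```python
import turtle

SPACING = 15      # distance between grid lines; try 8 to explore question (d)
HALF_LINE = 60    # assumes each grid line is 120 steps long, drawn 60 steps each way

def line(t):
    # Like the Logo LINE procedure: draw a line centred on the turtle
    # and finish back at the starting position, with the same heading.
    t.forward(HALF_LINE)
    t.backward(2 * HALF_LINE)
    t.forward(HALF_LINE)

def nine_lines(t):
    # REPEAT 9 [LINE PENUP RT 90 FD SPACING LT 90 PENDOWN]
    for _ in range(9):
        line(t)
        t.penup()
        t.right(90)
        t.forward(SPACING)
        t.left(90)
        t.pendown()

def grid(t):
    nine_lines(t)                    # nine vertical lines
    t.penup()
    t.backward(HALF_LINE)            # PENUP BK 60
    t.left(90)                       # LT 90
    t.forward(HALF_LINE + SPACING)   # FD 75 when SPACING is 15
    t.pendown()
    nine_lines(t)                    # nine horizontal lines

t = turtle.Turtle()
t.speed(0)
t.setheading(90)   # point the turtle north, like Logo's HOME heading
grid(t)
turtle.done()
```

The key idea is the same as in the Logo plan: draw nine parallel lines, reposition the turtle once with the pen up, and draw nine more at right angles.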
Computer Networking Basics...

A network is a collection of two or more standalone computers that communicate with one another over a shared network medium to share resources and information.

Network Interface Cards: A network interface card (NIC) allows a network-capable device, such as a computer or printer, to access a computer network. Every NIC has a unique MAC (media access control) address, which is assigned to the card for communications on the physical network segment.

Hubs: A hub is a device for connecting multiple Ethernet cables so that the attached devices can communicate, making them part of a single network segment.

Switches: A switch is a networking device that connects network segments or network devices. It forwards and filters frames (chunks of data) between its ports based on the MAC addresses in those frames. It works faster than a hub.

Routers: A router is a networking device that works on the basis of routing tables and routing policies configured by the administrator. A router has its own processor, operating system, memory, and so on.

On the basis of scale or extent of reach of the network:

LAN (Local Area Network): LANs are networks usually confined to a small geographic area, such as a building, a college, or a house, used to share information and resources. Data transfer in a LAN is very fast compared to a MAN or WAN.

MAN (Metropolitan Area Network): A MAN is a network of computers within a city or metropolitan area.

WAN (Wide Area Network): Wide area networking combines multiple LANs that are geographically separate by using private or public network transports (for example, through an ISP).

• On the basis of hardware technology: optical fiber, Ethernet, wireless LAN.

• Functional relationship:

Client-Server: In a client/server network, information and resources are stored in a single central location, the server. Client machines access those resources from the server over a network connection. This model is used for tasks such as file sharing, print processing, Internet connectivity, and e-mail service.

Peer-to-peer: A peer-to-peer network is the most basic way to connect multiple computers so that users can share information or resources directly with one another. It is typically used to connect computers in a workgroup.

The OSI Model: A network protocol defines rules and conventions for communication between network devices. There are seven layers in the OSI model: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. When data is transferred in packet form, it passes through all seven layers on its way to the destination.

Physical Layer: The physical layer is the lowest layer of the OSI model. It provides the hardware means of sending and receiving data on a carrier, including defining cables, cards, and other physical aspects. Protocols: ISDN.

Data Link Layer: At this layer, data packets are encoded into and decoded from bits. The layer also determines the size and format of the data sent. The data link layer provides error-free transfer of data frames from one node to another over the physical layer. Sublayers: Logical Link Control, Media Access Control.

Network Layer: This layer determines how the data packets will travel through different networks. The network layer controls the operation of the subnet, deciding which physical path the data should take based on network conditions, priority of service, and other factors. Protocols: IP, ARP, RARP, ICMP, RIP.

Transport Layer: This layer allows data to be broken into smaller segments so that it can be delivered and addressed to other nodes (workstations).
Protocols: TCP, UDP, SPX (NWLink), NetBIOS/NetBEUI.

Session Layer: The session layer establishes, maintains, and manages the communication session between computers. Protocols: NetBIOS names.

Presentation Layer: This layer is responsible for encoding and decoding data sent to the destination node.

Application Layer: This layer allows a user's applications to communicate with the operating system and network services of a node. Protocols: DNS, FTP, SMTP, MIME, NFS.
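To make the upper layers a little more concrete, here is a small, self-contained Python sketch (not part of the original notes) that sends one message over TCP on the local machine. The comments point out roughly which OSI layers the pieces correspond to; the port number 50007 is just an arbitrary choice for the example.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # IP address (network layer) and port (transport layer)

# Application layer: our "protocol" is simply to echo back whatever bytes arrive.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCK_STREAM = TCP (transport layer)
server.bind((HOST, PORT))
server.listen(1)

def serve_once():
    conn, addr = server.accept()      # TCP connection established with the client
    with conn:
        data = conn.recv(1024)        # receive the payload carried by TCP segments
        conn.sendall(data)            # echo it back
    server.close()

threading.Thread(target=serve_once, daemon=True).start()

# Client side: same transport (TCP over IP); the data link and physical layers
# (Ethernet/Wi-Fi, NIC, MAC addresses, cabling) are handled by the operating system.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello, network")  # application data handed down the stack
    print(client.recv(1024))           # prints b'hello, network'
```

Running it prints the same bytes after a round trip through the stack, which is the whole point: each layer adds its own job (addressing, reliable delivery, framing) without the application having to think about it.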
Introduction to ANOVA in R

The following article on ANOVA in R provides an outline for comparing the mean values of different groups. Analysis of Variance (ANOVA) is a very common technique used to compare the mean values of different groups. The ANOVA model is used for hypothesis testing: an assumption about a population parameter is stated, and the statistical method is used to determine whether the data support or contradict that hypothesis. The hypothesis is derived from the investigator's assumptions and the information available about the population.

ANOVA, or Analysis of Variance, is used for hypothesis testing where the means of a variable in multiple independent groups need to be compared. For example, in a lab studying a new medication for obesity, researchers will compare the results of the experimental and standard treatments. In an obesity study, valuable results can be derived when the mean obesity rate of the population is compared across different age groups. In this case, one would like to observe the mean obesity rate among age groups such as (5 to 18), (19 to 35), and (36 to 50). The ANOVA method is applied because there are more than two independent groups, and it is used to compare the mean obesity of those independent groups. In R, the aov() function is used, and the syntax is aov(formula, data = dataframe).

In this article, we will learn about the ANOVA model and further discuss the one-way and two-way ANOVA models along with examples.

- This technique is used to test a hypothesis while analyzing multiple groups of data. There are multiple statistical approaches; however, ANOVA in R is applied when the comparison needs to be made across more than two independent groups, as in our previous example with three different age groups.
- The ANOVA technique compares the means of the independent groups to provide researchers with the result of the hypothesis test. In order to get accurate results, the sample mean, sample size, and standard deviation of each individual group must be taken into account.
- It is possible to observe the mean of each of the three groups individually for comparison. However, this approach has limitations and may prove misleading, because separate pairwise comparisons don't consider the data as a whole and can inflate the Type I error rate.

R provides the functions needed to conduct an ANOVA analysis and examine variability among independent groups of data. There are five stages in conducting the analysis. In the first stage, the data are arranged in CSV format, with a column for each variable; one column holds the dependent variable and the remaining columns hold the independent variables. In the second stage, the data are read into R (for example, in RStudio) and named appropriately. In the third stage, the dataset is attached so that its individual variables can be referenced by name. In the fourth and fifth stages, the ANOVA model is defined and the results are analyzed.

In the sections below I've provided a couple of case-study examples in which ANOVA techniques should be used.

- Six insecticides were tested on 12 fields each, and the researchers counted the number of bugs that remained in each field. Now the farmers need to know if the insecticides make any difference, and if so, which one they should use. You answer this question by using the aov() function to perform an ANOVA.
- Fifty patients received one of five cholesterol-reducing drug treatments (trt). Three of the treatment conditions involved the same drug administered as 20 mg once per day (1 time), 10 mg twice per day (2 times), or 5 mg four times per day (4 times).
The two remaining conditions (drugD and drugE) represented competing drugs. Which drug treatment produced the greatest cholesterol reduction (response)?

- The one-way method is the most basic ANOVA technique, in which variance analysis is applied and the mean values of multiple population groups are compared.
- One-way ANOVA got its name because the data are classified in one way, by a single factor. In a one-way ANOVA there is a single continuous dependent variable and a single independent (grouping) variable.
- For example, we will perform the ANOVA technique on the cholesterol dataset. The dataset consists of two variables: trt (the treatment factor, with 5 levels) and response. The independent variable is the drug-treatment group, and the dependent variable is the cholesterol reduction (response).

From these results, you can confirm that taking the 5 mg dose 4 times a day was better than taking a 20 mg dose once a day, and that drug D produced better results than drug E.

This example uses the cholesterol dataset in the multcomp package:

aov_model <- aov(response ~ trt, data = cholesterol)

The ANOVA F test for treatment (trt) is significant (p < .0001), providing evidence that the five treatments aren't all equally effective. The plotmeans() function in the gplots package can be used to produce a graph of the group means and their confidence intervals, which clearly shows the treatment differences:

plotmeans(response ~ trt, data = cholesterol, xlab="Treatment", ylab="Response", main="Mean Plot\nwith 95% CI")

Let's examine the output from TukeyHSD() for pairwise differences between group means. The mean cholesterol reductions for 1 time and 2 times aren't significantly different from each other (p = 0.138), whereas the difference between 1 time and 4 times is significantly different (p < .001).

par(mar=c(5,8,4,2)) # increase left margin
plot(TukeyHSD(aov_model), las = 2)

Confidence in the results depends on the degree to which your data satisfy the assumptions underlying the statistical tests. In a one-way ANOVA, the dependent variable is assumed to be normally distributed and to have equal variance in each group. You can use a Q-Q plot to assess the normality assumption:

library(car)
qqPlot(lm(response ~ trt, data=cholesterol), simulate=TRUE, main="Q-Q Plot", labels=FALSE)

The dotted lines show a 95% confidence envelope; the points fall within it, suggesting that the normality assumption has been met fairly well. ANOVA also assumes that variances are equal across groups or samples. Bartlett's test can be used to verify that assumption: bartlett.test(response ~ trt, data=cholesterol). Bartlett's test indicates that the variances in the five groups don't differ significantly (p = 0.97).

ANOVA is also sensitive to outliers, so test for outliers using the outlierTest() function in the car package (for example, outlierTest(aov_model)). If the function isn't available, you may need to update your packages or reinstall car first: update.packages(checkBuilt = TRUE) or install.packages("car", dependencies = TRUE). From the output, you can see that there's no indication of outliers in the cholesterol data (NA occurs when p > 1). Taking the Q-Q plot, Bartlett's test, and the outlier test together, the data appear to fit the ANOVA model quite well.

Another variable is added in the two-way ANOVA test. When there are two independent variables, we need to use two-way ANOVA rather than the one-way ANOVA technique used in the previous case, where we had one continuous dependent variable and a single independent variable. In order to use two-way ANOVA, multiple assumptions need to be satisfied.
- Availability of independent observations
- Observations should be normally distributed
- Variance should be equal across observations
- Outliers should not be present
- Independent errors

To demonstrate the two-way ANOVA, another variable called BP is added to the dataset. This variable records each patient's blood pressure. We would like to verify whether BP, in addition to the dosage given to the patients, has a statistically significant effect.

df <- read.csv("file.csv")
anova_two_way <- aov(response ~ trt + BP, data = df)

From the output (summary(anova_two_way)), it can be concluded that the effects of both trt and BP are statistically significant. Hence, the null hypothesis can be rejected.

Benefits of ANOVA in R

The ANOVA test determines the difference in means between two or more independent groups. This technique is very useful for analyses involving multiple groups or items, which is essential for market analysis. Using the ANOVA test, one can get the necessary insights from the data. For example, during a product survey, information such as shopping lists and customer likes and dislikes is collected from the users. The ANOVA test helps us to compare groups of the population; the groups could be male vs. female or various age groups. The ANOVA technique helps distinguish which groups of the population truly have different mean values.

Conclusion – ANOVA in R

ANOVA is one of the most commonly used methods for hypothesis testing. In this article, we have performed an ANOVA test on a dataset consisting of fifty patients who received cholesterol-reducing drug treatments, and we have further seen how a two-way ANOVA can be performed when an additional independent variable is available. This is a guide to ANOVA in R. Here we discussed the one-way and two-way ANOVA models along with examples and the benefits of ANOVA.
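The article's examples are in R, but if you ever want to cross-check the same kind of analysis in another environment, the sketch below shows the general shape of it in Python, using made-up data (it is an illustration only, not the cholesterol dataset from the multcomp package). It runs a one-way ANOVA, Bartlett's test, and pairwise Tukey comparisons, then a two-way ANOVA with a second factor.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Made-up data: 10 patients per treatment level, plus a second factor (BP group).
df = pd.DataFrame({
    "trt": np.repeat(["1time", "2times", "4times", "drugD", "drugE"], 10),
    "BP": np.tile(["low", "high"], 25),
    "response": np.concatenate([rng.normal(m, 2.5, 10) for m in [5, 8, 13, 11, 6]]),
})

# One-way ANOVA: does the mean response differ across the five treatments?
groups = [g["response"].values for _, g in df.groupby("trt")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# Bartlett's test for equal variances (an ANOVA assumption).
print(stats.bartlett(*groups))

# Pairwise Tukey HSD comparisons, analogous to TukeyHSD() in R.
print(pairwise_tukeyhsd(df["response"], df["trt"]))

# Two-way ANOVA with the extra BP factor, analogous to aov(response ~ trt + BP).
model = ols("response ~ C(trt) + C(BP)", data=df).fit()
print(anova_lm(model, typ=2))
```

The conclusions follow the same logic described above: a significant F test says the group means aren't all equal, and the pairwise table tells you which specific groups differ.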
Understanding Geometry (Mathematical Reasoning)

In many cases, geometry isn't treated in middle school with the intensity needed for success with the subject in high school. This book follows the National Math Standards and presents an introduction to the basics of geometry. Students won't just learn the properties of geometry; they will learn the reasoning behind the properties and learn the basics of geometric proofs and coordinate geometry. A glossary is included in this 256-page book, which can be used as a text in junior high and is reproducible for family or classroom use.

While offering some instruction, these programs include less teaching material or do not cover the full range of grade-level skills that the comprehensive programs offer. At such an early age, mathematical reasoning begins with simple activities such as distinguishing between even and odd amounts and learning the ordinal numbers. This series is correlated to the NCTM standards and accordingly incorporates topics such as patterns, number concepts/number lines, graphs, fractions, probability, geometry, and problem solving, as well as basic operations. The full-color activity sheets should especially appeal to the younger student. Not only are the illustrations eye-catching, but they do an excellent job of demonstrating mathematical representations and relationships. Although manipulatives are not required, a link to a virtual manipulative website is provided in case additional concrete-level reinforcement is needed. A wide variety of exercises and activities are utilized to keep your child both interested and motivated. For instance, Thinker Doodles and Mind Benders activities from parent company Critical Thinking are scattered throughout.

Beginning 1 focuses on numbers 1-5 while Beginning 2 focuses on 0-10, and they are intended for use with 3 and 4 year olds. The concepts are presented in a spiral fashion, so students will see a concept repeated at different intervals throughout the book. In most cases, children who finish these first two books should be ready to tackle kindergarten math. Books A through F are for kindergarten through 5th grade and can be used as a core math curriculum. We also like to recommend them as a critical thinking supplement for any math program that is light in that particular area. These books incorporate the same spiral approach found in the Beginning books. Level G has been added for 6th grade and goes beyond drill and practice. It incorporates discussion-based problem solving to prepare students for the reasoning required for upper-level math.

Book 2 of the original series, Mathematical Reasoning Through Verbal Analysis, can be used after the series as a supplement to your math curriculum. A more analytical or math reasoning approach is needed for higher-level math, and this book will help fill that need. This book can also be used as a reference and resource tool to fill in gaps. Since the book is nicely organized by "strands" (topics) and, additionally, into subtopics, you can easily find problem-solving exercises to help repair any deficiency detected, or an alternate way of teaching a concept (useful if your child has developed a case of "math block" about a particular concept). Topics cover the entire realm of elementary math: understanding numbers, counting, sequencing, geometry, basic arithmetic operations, measurement, numerical comparisons (greater than, less than, equal to), and tables and graphs. The teacher's manual contains teaching suggestions and the answers.
These books are reproducible for family use and range in length from 240-380+ pages.
The area of a parallelogram (not every four-sided polygon) is base times height.

A regular pyramid has a regular polygon as its base, not necessarily an equilateral triangle, and it has an apex above the centre of the base.

The surface area of a pyramid is the area of all the faces of the pyramid. For a pyramid with its apex over the centre and a regular polygon as its base (the bottom of a pyramid is the base; it is regular if all sides are the same length), the surface area is: B + 1/2(P * H), where B is the area of the base, P is the perimeter (the distance around) of the base, and H is the slant height of the pyramid.

The area formula is different for every polygon. For example, for a triangle, it's (1/2) x (base) x (height). For a rectangle, it's (the entire) (base) x (height). Base: depending on the shape of the base, use the formula for a square, a regular polygon, etc. Height: this must be measured (or calculated) perpendicular to the base.

The volume of a pyramid is base area times height divided by three. The height is perpendicular to the base and ends at the highest point (apex) of the pyramid. To find the area of the base, one simply uses 2-dimensional geometry. Triangle: base of the triangle times height of the triangle over 2. Any regular polygon: apothem (the perpendicular distance from the centre to a side) times perimeter divided by 2.

For a prism, V = base area * height; for a pyramid, V = base area * height / 3.

Area of a triangle = base * height / 2. Therefore the base = area * 2 / height.

A regular pyramid has a regular polygon base and a vertex over the center of the base.

The relation between the height of a triangle, its base and its area is given by: Area = 0.5 * Base * Height. Therefore, we have: Height = (2 * Area) / Base.

You can find the height of a parallelogram given the area and base measures by working backwards from the area formula. The area of a parallelogram is found with the formula: Area = Base * Height. To solve this equation for Height, we divide both sides by the base. Area / Base = (Base * Height) / Base. Simplify: Area / Base = Height.

Area = B + ((P * H) / 2), where B is the base area, P is the base perimeter, and H is the slant height of the pyramid.

You didn't say it was a regular pentagon. For an arbitrary pentagon, you would calculate its area as you would for any polygon: divide it up into triangles, and add up the areas of the triangles. The area of a triangle is 1/2 times the base times the height, the height being the length of the perpendicular dropped to the base from the opposite vertex.

Area = 1/2 * Base * Height, so Base = 2 * Area / Height.

Surface area = 1/2 * perimeter * slant height + B, where the perimeter is the perimeter of the base and B is the area of the base.
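Since these formulas are scattered across several answers, here is a compact sketch (an illustration, not part of the original Q&A; the example measurements are made up) that puts them together for a regular pyramid: the base area of a regular polygon from its apothem and perimeter, the lateral area from the slant height, and the volume from the vertical height.

```python
import math

def regular_polygon_area(n_sides, side_length):
    """Area of a regular polygon: apothem * perimeter / 2."""
    perimeter = n_sides * side_length
    apothem = side_length / (2 * math.tan(math.pi / n_sides))
    return apothem * perimeter / 2

def pyramid_surface_area(base_area, base_perimeter, slant_height):
    """Total surface area of a regular pyramid: B + (1/2) * P * slant height."""
    return base_area + 0.5 * base_perimeter * slant_height

def pyramid_volume(base_area, height):
    """Volume of any pyramid: (1/3) * base area * perpendicular height."""
    return base_area / 3 * height

# Example: a square pyramid with 6-unit sides, slant height 5, vertical height 4.
base = regular_polygon_area(4, 6)             # 36.0 square units
print(pyramid_surface_area(base, 4 * 6, 5))   # 36 + 0.5 * 24 * 5 = 96.0
print(pyramid_volume(base, 4))                # 36 * 4 / 3 = 48.0
```

Note the distinction that trips people up: the surface-area formula uses the slant height, measured along a triangular face, while the volume formula uses the perpendicular height from the base up to the apex.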
I don't know about you, but when I try to learn new things, I find that I learn better if games and activities are used. This motivates me and encourages me to take risks in a safe environment. Because this works well for me, I use a similar process when teaching. I have found it especially successful when teaching French to beginners. The use of humor and creativity is also a great way to connect with your students. This helps them to take chances and encourages them to keep going.

Tips for teaching FSL to beginners - Getting started

These tips for getting started will help establish a positive environment for those beginning to learn French. Learning vocabulary and some basic phrases is key if you want your students to develop an understanding of this new language. Games are a great way to introduce these key elements and have fun at the same time. Try word puzzles, matching games, or even playing bingo if you're working towards basic vocabulary recognition. Teaching the intricacies of French grammar can be challenging at times, especially for younger children. Remembering masculine and feminine nouns can make your head spin, but fear not! If you start with thinking and memory games that match vocabulary and images, a connection can be made so that it is easier to remember whether an object is masculine or feminine. A personal dictionary can also help. Focusing on simple speaking and writing activities is an excellent way to give children a foundation in FSL. Additionally, storyboards, photo boards, and puzzles are all great tools for beginners as they provide students with more visual representations of French concepts that can help improve their understanding.

Ideas to encourage speaking

For practice speaking, here are some suggestions.
- role playing conversations between two people
- theater exercises such as charades or "guess that phrase"
- sharing stories about their day, talking about something on vacation or giving compliments in French
- FSL songs or stories that feature grammar rules you've covered in class
- acting out skits
- choral reading of French stories
- games like "Go Fish"

Ideas to encourage writing

For writing, here are some activities to consider.
- write sentences describing a picture or a story they made up
- build sentences with magnetic tiles
- fill in the blanks with French vocabulary words to create simple stories
- get creative and assign an obstacle course of sorts where they must correctly form sentences by walking around the room holding flash cards to spell out words
- create a storyboard with pictures representing different verbs or nouns
- try some scaffolded writing activities with sentence prompts or word banks
- choose a selection of vocabulary words and create a scenario for others to act out
- writing tasks that focus on grammar are also important to practice basic sentence structure

It's important to provide encouragement and support while your students are learning so they won't be scared to take risks with speaking and writing. Motivation and a safe environment can go together. Here is a free matching activity and game that might be fun to try. Click on the image to get a copy. You can find many other French language games and activities in my TPT store.

Last time, I wrote about using a second language after not using it for many years and how it was like riding a bicycle. It would be rusty, but with some practice it could come back. Imagine now that you were starting to learn a language with no previous experience to fall back on.
You don't know any of the vocabulary, the way that the sentences are formed is different, and all the nouns are either masculine or feminine, but you don't have any way of figuring out which gender they are. Imagine the feelings you would have if you needed to communicate. This can be the way young children feel when they enter an immersion program. Note: I will be referring to French throughout this post as that is the second language I have familiarity with. However, these thoughts can apply to other languages as well.

When young children enter into an immersion program, they don't have someone translating for them. They have to figure out what is being said through pictures, stories, gestures, and songs. As they begin to do various activities and their ears become attuned to the accents and the ways the sentences are spoken, they will gradually develop a vocabulary that they can use to begin communicating themselves. Here are some ideas to help kids feel more comfortable when learning a new language. Some of these ideas will also work for older learners.

It's important to keep in mind that beginning French learners can be scared to take risks in speaking and writing French, especially if they are older and more self-conscious. If you are teaching French Immersion or tutoring beginning learners, it can be challenging when your students don't understand what you are saying and they are unfamiliar with the sentence structure and grammar rules. You have to remain patient and provide activities that will engage their attention and stimulate their French comprehension. French immersion can be a tricky subject for beginning French learners, especially when it comes to grammar and remembering which words are masculine or feminine. To help ease their transition into French speaking, try encouraging them to take risks in their French by providing fun speaking and writing activities. The goal is to help them get comfortable with the language and encourage them to take risks speaking and writing.

Listening carefully and repeating stories or poems, playing guessing games to learn vocabulary, conjugating verbs, creating songs and rhymes, as well as writing French postcards are among the many captivating tasks you can use to engage your new French speakers. If your students are reluctant to participate, try starting with gesture-based activities like Simon Says and Follow the Leader; challenge the children to listen carefully and respond in French. Instead of educational games, use French-style charades or improvisations where they build French sentences around their body movements. Beginners can sometimes even find it intimidating to take a risk and speak any French out loud – so here are some activities you can use in your classroom to develop French fluency among students. A good activity for speaking French is role-play of everyday tasks, like grocery shopping or ordering a meal, which shows students different ways they can use French in their daily lives. Other useful activities include group work to help students practice conversation, playing Pictionary or matching word games for spelling and vocabulary building, creating board stories or comic strips for writing practice, and making silly sentences. These activities are great for making French both challenging and amusing for beginning learners. Fortunately, there are plenty of fun activities that can be done to help them understand French better. I have only provided a small sampling of ideas.
Throughout the years, I have created many resources that have been helpful in the classroom and with tutoring young students. You can check out my TPT store French categories to find out more about them. Here's one that can help with ordering food at a fast food place. This was created with one of my students. Click on the image to check it out. Don't forget that helping students feel safe makes a world of difference when exploring French -- positive reinforcement and plenty of encouragement will foster enthusiasm for speaking and writing in French.

Did you learn a second language when you were in school? How comfortable would you be using it now? Imagine for a moment that you were thrust into a situation where you needed to communicate and the only language spoken was the one you learned years ago at school. I suspect you would be tongue-tied and maybe even a bit petrified to attempt to speak at all. But, there is hope.

It can be like riding a bicycle

Learning a second language can be tricky at first, but once you get the hang of it, it will be there in the future as you need it. Often people say it is like riding a bicycle. With a bit of practice it will come back from the cobwebs of our memories. It can sometimes be difficult to re-learn how to speak and write in a second language after not using it for a long time. Even the most experienced second language speakers feel shy or rusty when it's been a while. However, like riding a bicycle, all of those skills you developed come back to you quickly when you start using the second language again. To help with confidence when speaking and writing in your second language, try taking an online course that reviews basic grammar and conversation topics. This can help refresh your memory and get conversations flowing again. Plus, we all need to practice our second language from time to time so that we don't lose the skill entirely!

Sometimes you need a reason why

I still remember when I started to refresh my French after not using it since high school. My kids were entering into French Immersion and I wanted to be able to understand what they were working on and help them out. I took a couple of courses through an online university and with a bit of practice, I became comfortable with the language again. I started to help out in the classroom and this made it easier to see how to use simpler forms of the language to communicate with the kids. I also was able to practice my French with the kids without fear of any mistakes I might make with gender usage. Note: I still find it tough to remember which nouns are masculine and which are feminine. I often keep a dictionary nearby to check this out or I go to an online dictionary.

I am so glad that I did brush up on my French, because when I first started teaching, I ended up in a long-term substitute situation where I needed to teach Grade 1 French Immersion for 4 months. With the help of my colleagues and with my knowledge of how to teach different subjects, I was able to create materials and lessons that worked. It was scary, but I realized that I could do it. That immersion into my own kids' classrooms helped me to learn simpler ways of communicating with my students and I was able to transfer that to my classroom situation. Following the 4 months in Grade 1, I ended up teaching French Immersion music for 9 years. This meant I needed to learn all the specific French jargon and terminology for music. Talk about choosing to jump into the fire! But I did it.
Who knew that Frère Jacques could be sung so many ways in Kindergarten? I used it to teach emotions, beat, rhythm, echoing, and many other things when I first had the kids who knew no French. They thought that I was very silly, but they had fun joining me.

You may need to refresh more than once

Fast forward several years, and my French was rusty again from lack of use. I decided to do something about that because my grandchildren were entering French Immersion. I started to brush up on my French and volunteer in my grandson's classroom. I started creating resources for my older grandson who was going to go into Late Immersion and I started to tutor some other students who were going into Late Immersion. It was much faster getting my fluency this time. The grammar made sense and the vocabulary came back quickly. Creating the resources and using them with beginners also helped me to find out where things needed to be modified to make them work better. If you are interested in checking out some French resources that work for young learners or those beginning in Late Immersion or FSL, check out my French categories in my TPT store.

Merry Christmas From Our Home To Yours

It is Christmas! Christmas has been very different for the past couple of years due to the pandemic and restrictions. It is nice to have a more normal Christmas break this year. I am really happy to have a family Christmas. Last year, my daughter's family tested positive for Covid on Christmas Eve and we needed to quickly make different plans. The long-term care home where my mother-in-law lives went into very strict restrictions on New Year's Eve, and for 3 months I was the only person allowed to see her. Not what we were hoping for. It made the holidays bittersweet. Our Christmas wishes became musical videos for each other. Here is the one we shared last Christmas. Our kids also made music videos for us. Here is a parody our son made of ours. Here are the family videos made by some of our kids and their families. Carols and music are a special part of Christmas. I hope you enjoy listening to these ones. I know they have special meaning for us, but they can also bring joy to others as we think of Christmas and family time. Finally, this is my daughter and her kids singing Away In The Manger for a virtual Christmas service for her church. I could go on and on about music and how it has impacted our family, but let's just say it is more than a hobby. It is something that is of great meaning to us. As you can see from this photo, even the grandchildren play musical instruments. I am pretty sure we will have other musical moments to share in the future. I hope that you are able to have some special moments with family this holiday season. Enjoy your holiday break. See you in the New Year.

If you ask a group of young children what a map is, you're likely to get a variety of answers. Some will say it's a picture of a place, while others will say it shows how to get from one place to another. Some will even tell you that it's a way to find buried treasure! While all of these answers are technically correct, they only scratch the surface of what maps are and what they can do. But what all these answers have in common is that they recognize the importance of maps in our lives. Mapping is important for kids to learn about. It helps them develop their geography and spatial awareness skills, and can also be a lot of fun! There are a few key mapping terms and skills that need to be taught in order for them to be able to use maps effectively.
Words like map, title, legend, compass rose, grid, scale, and symbols need to be explained and activities need to be done to practice using these terms.

Key Mapping Terms

In a nutshell, here's a quick explanation of these terms.
- A map is a two-dimensional representation of a three-dimensional space.
- The different parts of a map include the title, symbols, legend, compass rose, grids and scale.
- The title tells you what the map is of.
- The symbols are shapes or small pictures that represent real things.
- The legend explains the symbols used on the map.
- The compass rose shows you which way is north, south, east, and west.
- The grid is a set of lines that go up and down and side to side that help you find places on the map.
- The scale shows the distance between places or objects. It can also be used for 3D maps to make sure that objects are appropriate sizes.

Once kids know what the terms mean, they can start to figure out where things are on the map. This Map Skills booklet below will help explain these terms. Here are some sample pages. To practice mapping skills, there are many different activities that children can do. Here are a few ideas to try. I have broken them down into different features.

Identifying symbols and reading legends

For identifying symbols and reading legends, provide a variety of different types of maps and have the children find the different symbols shown in the legend. Do the reverse as well. Find symbols on the maps and then identify them using the legend. It is important that they realize that sometimes the symbols do not look like the real objects, but with the legend, they will still be able to identify what they are.

Using a compass rose

Using a compass rose can be lots of fun. If you have access to an area outside, it can be used to pretend to find treasure. For example: Take 10 steps North and then turn East for 20 steps. Turn South and follow along the fence for another 15 steps and turn Southwest. You are 12 steps away from the buried treasure. After physically practicing changing directions and moving, it is important to transfer this skill to a map. Perhaps they can find different places on a neighbourhood map and practice giving directions to others to help them find it too.

For practice using grids, create grids on graph paper and practice drawing lines to connect the letters and numbers to see where they intersect. Put some objects on a grid and then play games like I Spy and have the kids tell you the coordinates for the space the object is on. You can also play games like Battleship. All of these activities will help them to become familiar with using a grid. Then you can move over to actual maps and do activities there.

Working with scale

Scale is a harder concept for kids to understand. It can be used in two different ways when creating a map or a community model. Scale on a map is used to represent distances. This ties in with measurement and understanding different linear measures such as inches/miles and centimetres/kilometres. Getting comfortable with using a scale with distance will take lots of practice. Using grid paper to practice drawing out different measurements will help with visualizing this. Measuring out distances on actual maps and doing the conversions is also necessary. For creating a community model, it is important for kids to see objects in relation to each other to understand how scale works. For example, if a toy car represents a real car, a toilet paper roll would be too big to represent a power pole.
Creating 3D models helps with visualizing scales of objects and what fits together.

Creating their own maps

Finally, primary students can also learn how to make their own maps. This activity helps them understand how scale works, and how different features can be represented on a map. Kids love to be creative. Perhaps they could create their own neighbourhood map or treasure map and let others try to locate things using the skills they have learned. Maps are essential tools for navigation in the real world, whether we're trying to find our way around a new city or just planning a cross-country road trip. As children become more familiar with the features of a map and practice using their skills, they'll be better equipped to navigate their way through the world around them later on.

It's that time of year again! Christmas is just 2 weeks away and a new year is around the corner. The New Year is a special occasion for kids, and there are plenty of ways to make it special in the classroom. With games and activities to teach skills and concepts, you can use special occasions to start out the year with fun. This is also a great time to refresh and set goals and prepare for new themes and units. Here are some ideas for celebrating special days in the new year.

New Year's Day

Although New Year's Day is usually a holiday, it can be the focus on the first day back after the winter break. New Year's Day is considered a day for setting goals and resolutions. Here are some ideas for making this meaningful. 1. Create a school goal, a personal goal, and a home goal and write them down. Put them on fancy paper and place it inside a personal planning folder. Throughout the year, look at them and see if they are still working. This is a good time to reflect on realistic goals and on follow-through. If they are working, celebrate. If not, make some adjustments and carry on. At the end of the school year, revisit the goals again. Grab a free copy by clicking the image below. 2. Set some class goals for the new year and maybe even a goal tracker to see how well the class is doing. There could be a reward schedule also for various accomplishments along the way. Creating a photo booth album for the class could also be fun. Check out this selection of different photo booth frames.

Groundhog Day is a fun occasion that is great for teaching many different science concepts. It is a perfect time for doing a weather focus, lessons on seasons, hibernation, shadows, and of course, groundhogs. It is also a time to talk about predictions. There are many other activities you can do as well. You can read books about groundhogs, guess whether or not the groundhog will see his shadow and make a graph of the predictions, and check out whether or not the groundhog did predict an early spring or more winter. You can also do other fun math activities with a groundhog theme.

Chinese New Year / Lunar New Year

Chinese New Year or the Lunar New Year is important in many different countries. This is a time to learn about different cultures and traditions. Read books, watch videos, and try some traditional foods as part of your celebrations. And for a fun math activity, have your students use dots (or coins) to create patterns with the lucky number 8. In North America we are most familiar with Chinese New Year and the animal zodiac. There are lots of activities that can be done to explore this further. Other places that celebrate the Lunar New Year may have different traditions and activities that they follow.
It might be interesting to make some comparisons of how they are the same and different.

Valentine's Day is always a fun day for kids. It is the perfect time to talk about friendship and acts of kindness for others. One year my class tried to come up with 4 or 5 acts of friendship each and we made hearts with these on them and posted them on the bulletin board. It was great to see how this created a positive focus in the classroom. There are many language games that can be done such as sight word bingo, rhyming games, vocabulary activities, and conversation starters. Students can practice writing poems or making conversation hearts. It is also a great time to teach how to write friendly letters.

Hundreds Day is a day of celebration in many primary classrooms because it marks the hundredth day of school. There are so many different activities that can be done to celebrate this day. Hundreds Day is a perfect occasion for math activities! Students can count by ones, twos, fives, and tens to 100. They can also make patterns with 100 objects or solve word problems involving 100 objects. This is also a great day to introduce place value concepts such as ones, tens, and hundreds. Dressing up as someone who is one hundred is also a popular activity to try. It is also a great time to think about what life might have been like a hundred years ago.

No matter what special days you choose to celebrate in your classroom, remember that the most important thing is to have fun! Enjoy these special occasions with your kids. They'll be sure to remember them for years to come. If you are interested in any of the resources in the images above, you can check them out here.

If you want excitement, watch how kids react to the first sign of snow. When I woke up a few days ago, there was a light dusting of snow on the ground. Little did I know when I headed to school that it would be a few inches deep by lunch time. The kids kept looking out the window and watching the clock waiting for recess break so they could get outside and play. Of course this meant allowing more time for bundling up and preparing to go outside, then unbundling and dealing with snowy gear when they came back inside, as well as the many stories they had to tell about playing in the snow. Teachable moments are rampant at times like this. I like to use these events as springboards into different activities. You can still meet the requirements of the curriculum by adding them in; they just have a fun twist to capture the excitement and focus of the kids. I learned early on to take advantage of this excitement instead of trying to squash it so that they could get back to work. Here are a few different ideas that I would do.

Story telling and writing

I would build in time to allow them to share their stories and then I would use that to help them write stories. Story writing using the fun activities they did outside can help even the most hesitant writer to put pen to paper. Once I had my class imagine what it would be like if the city froze. We talked about all kinds of crazy scenarios and possibilities and after brainstorming as a group, each person did some more brainstorming on their own. Then, they wrote stories and tried to add in many details and descriptive words to paint the picture in the reader's mind. Sharing the stories later was so much fun. Here is the template we used for the stories. Grab a free copy of Frozen templates by subscribing to my newsletter.

Math And Science Activities

Sometimes, I would take a math or science approach.
This might include measuring the snow, seeing how long it takes to melt when brought inside, building a fort outside, seeing who can throw a snowball the farthest, making snow families, or checking the temperature at different times of the day to see if it gets colder or warmer. If you live in a place that doesn't get snow, you could try doing some activities that mimic the ones we did. For example:

- Use rolled up socks as pretend snowballs and see who can throw them the farthest.
- Shave up some ice, form snowballs, and try to make a small snowman.
- Use ice cubes to build small forts.
- Check the temperatures in different parts of the world for a few days in a row and then graph the results.
- Imagine what a snow day would be like and write about it.

There are several winter language and math activities that you can do, but adding in the real life moments just makes them so much more fun. Here are some other winter resources that might be of interest as the cold, white days continue: Winter Sports Bundle, Winter Word Work Language Activities, and Winter Parts Of Speech Silly Sentences. For lots more ideas, check out my winter math and literacy category.

Winter novel studies are also a great way to include a winter theme in your language arts. Here are some novel studies that might interest you: Emma's Magic Winter, The Kids In Ms. Coleman's Class - Snow War, and Horrible Harry And The Holidaze.

Grab the excitement and wonder of winter and add it to your lessons for more engagement and motivation. I would love to hear some of the other ways you weave winter into your lessons. Don't forget to grab your free copy of the Frozen writing templates.

Have you ever wondered why some communities look the way they do? Why there are buildings in some places and not others? And why some communities have more services than others? These are big questions that are important to answer when teaching children about communities and community planning. I loved creating a 3D community with my students. It took time to plan from beginning to end, many discussions and decisions, and space in the room for the completed project, but the final result was worth it. You can read more about it here.

Types of communities

Before doing any kind of planning it is important to know what kind of community you want to create. Kids need to look at different types of communities and see how they are the same and different. There are 3 main types of communities to explore - urban, suburban, and rural. They have unique characteristics that need to be considered when doing community planning. Here are some to get started with.

- Urban communities are usually bustling with activity.
- They are densely populated, with a mix of commercial and residential buildings.
- High rises and busy streets are also often seen in urban communities.
- There is a wide range of industries and services available, from retail to healthcare to manufacturing.
- Services and industries are often located close together, making it convenient for people to work and shop.
- There are many restaurants and shops in urban areas to meet the demands of the population.
- Suburban communities are not as densely populated.
- They have less industry and fewer services than urban areas.
- They tend to have more parks and recreation facilities.
- There are many single dwelling homes with some small apartment buildings, townhouses, or other types of multiple dwelling homes.
- There are some local schools.
- There may be some small shops and restaurants.
- Rural communities are the least populated.
- They are primarily agricultural lands.
- They may also be used for industrial purposes.
- Houses are spaced further apart.
- There is much more space and privacy for people living there.
- Transportation access is important because of the distances from many services such as schools and hospitals.
- Access to natural resources such as a water supply is necessary.

When planning a community, it's important to consider the needs of the population. What types of services and industries are required? Where should different types of buildings be located? What is needed to make the community work for its residents? These are all important questions that need to be considered in order to create a successful community. It's important to help kids understand the various types of services and businesses that are found in each type of community. Things like schools, hospitals, and public transportation are essential for any community to function properly. Locations of these services are different depending on the type of community, but they need to be accessible. By understanding these things, kids can develop a better appreciation for the importance of planning in any community.

Ultimately, a community needs to meet the needs of its population in order to be successful. This means that there must be a balance between residential, commercial, and industrial development. There also needs to be enough green space and amenities to support the community's residents. By taking all of these factors into consideration, planners can create communities that thrive. Kids love the hands-on activities of planning and creating a community. Here are some samples of one community that was created by one of my grade 1/2 classes. If you would like to see a copy of the plan that we used and some of the materials included, check it out here. My class had a great experience creating this community. I wish you success should you venture to create one in your classroom.

Do you sometimes wonder if teaching about money is important anymore? Do you think children need to know how to use coins and other currency? These questions and many others often start to surface nowadays. Handling money and using it to pay for things is becoming less common now, with so many of our transactions being done online or with debit machines and plastic. This doesn't mean that teaching about money is becoming less important. It means learning about money and practicing how to use it is more necessary if children are to be able to handle money situations in the real world.

It is sad to see that many adults can't handle money correctly anymore. They rely on the machines to tell them how much they need to pay and how much change to give. They struggle to count out money to make purchases. Standing in line at the local fast food place the other day, I watched the worker struggle to make change correctly and call her manager to help. I could see that the customer was getting frustrated. Unfortunately, this is going to become even more common if we don't teach our students how to count money and correctly make change.

When it comes to teaching kids about money, there are a few key things to focus on. Identifying coins, counting money, and making change are essential skills that kids need to learn. Here are some tips to help.

Identifying coins is key to being able to handle money.
After all, those quarters don't look anything like pennies! Do lots of activities that involve matching coins. You could do memory games, bingo, I Have, Who Has? games or any games that make coin recognition automatic. It is also necessary to recognize how money is written so that kids can recognize price tags and costs of different things. Counting coins is another skill that is important. Play money can be used for this, or real coins if you have access to enough of them. 1. Practice counting coins of equal value so that it helps with using the coins later. Count by ones with pennies, by fives with nickels, by tens with dimes, and by twenty-fives with quarters. 2. Practice making dollars with the coins. How many of each coin is needed to make a dollar? 3. Practice counting coins of different values and seeing what they total up to. Making change is a difficult skill for kids to master. There are a few other skills or steps needed first. It requires being very familiar with coin values and different coin combinations that make the same value. Activities that help with creating money amounts using different coin combinations and trading of coins to make similar amounts is a good first step. It is important to be able to add and subtract multiple digit numbers as well so that this skill can be applied to using money. Counting up is also important. Counting up from the amount paid until it matches money given is one way of making change. In Canada, we no longer have pennies, so it is necessary to also round up or down when paying with cash. Machines have been adjusted to help with providing the correct change, but it still requires understanding when to round up or down when paying. Sadly, many people cannot do this. Connecting to real life situations Teaching the skills is one thing, but providing opportunities for kids to see its use in the real world is necessary so they can make the connections that will help them to internalize them. If you give a child a handful of coins or bills, they often don't really understand the value of what they are holding. A cheque in a birthday card means even less to them. I remember watching as my grandchildren opened cards received from uncles or others and they didn't even look at the paper cheque that was inside. They just handed it over to their parents. Although in some way they realized it was money, they didn't understand its value or use. The more we give them practice handling and using money the more we will prepare them for how to use it and the better prepared they will be to understand its value and how to use it wisely in their everyday lives. This could involve setting up a store in your classroom, pretending to be at a restaurant, or even setting up mock debit machines and debit cards for kids to use. (If you are interested in trying out a using a menu, I have a free copy of Elisa's Café available for subscribers below.) Resources to help I had the opportunity to do a simplified version of parts of the entrepreneur study with my Grade 3 class one year. We were learning about money and it became a unit of money lessons that were created with my class. We also made and sold items for a spring fundraiser and used the money to pay for a bus trip up island to meet up with another class in a different town. Talk about making it a real life experience! You can find out more about this here. Here are some resources that could help with practicing money skills. American and Canadian versions are available. 
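The counting-up and rounding ideas above can even be turned into a tiny program, which can be a fun extension for older students or a quick way to check your own change-making. To be clear, this is only an illustrative sketch I am including here, not part of any resource mentioned in this post: the coin list, prices, and function names are made up, and the rounding rule simply models the Canadian practice of rounding cash totals to the nearest five cents.

```python
# A minimal sketch (not from any classroom resource) of two skills described above:
# counting up to make change, and rounding cash totals to the nearest nickel
# because Canada no longer uses pennies. Names and amounts are hypothetical;
# everything is in cents to avoid decimal rounding surprises.

COINS = [200, 100, 25, 10, 5]  # toonie, loonie, quarter, dime, nickel (in cents)

def round_to_nickel(cents: int) -> int:
    """Round a cash total to the nearest 5 cents, as cash purchases are in Canada."""
    return 5 * round(cents / 5)

def make_change(price_cents: int, paid_cents: int) -> list[int]:
    """Count up from the rounded price to the amount handed over, listing the
    coins given back (greedy, largest coin first)."""
    owed = round_to_nickel(paid_cents) - round_to_nickel(price_cents)
    change = []
    for coin in COINS:
        while owed >= coin:
            change.append(coin)
            owed -= coin
    return change

price, paid = 368, 500            # a hypothetical $3.68 item paid with a $5 bill
print(round_to_nickel(price))     # 370 -> the cash price rounds up to $3.70
print(make_change(price, paid))   # [100, 25, 5] -> $1.30 in change
```

Running it on that made-up $3.68 purchase shows the price rounding to $3.70 and the change coming back as a loonie, a quarter, and a nickel - exactly the kind of counting up we want students to be able to do in their heads.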
Counting Money - How Much Money American version Canadian Coins Match Up Money Lessons For Children Unit Rounding Up And Down With Money Money Word Problem Task Cards For Kids Don't forget to grab your free copy of Elisa's Café by signing up for my newsletter. For free resources, tips, and ideas, sign up for my newsletter. Kids hopping in the hallways, stretching to reach the tops of doorways, and making a human ruler stretched along the wall are sure signs that a class is learning about measurement, or that the teacher has disappeared and the kids are acting crazy. Measurement can be lots of fun if it is done with creativity and hands on activities. Kids love to have opportunities to try out new ideas. As soon as you put a measuring tape in a child's hand, you can bet they will start to measure everything around them. Of course, it's important that you show them how to use the equipment correctly if you want accuracy. Non-standard and standard measurement There's nothing more fun than a ruler that's constantly being moved around the classroom. So when it comes to teaching measurement, I always start by making sure my students understand the importance of a standard measure. In order to do this, they should do lots of activities using non-standard units first that give different results. One of my favourite activities is measuring with shoes. I choose two students with shoe sizes that are very different. We pretend to measure a length where we are going to build a fence. The number of shoe lengths is quite different for each student, so it is easy for the kids to see that we need something more standard to make sure we get the right amount of material needed. This is the perfect time to introduce rulers with inches, feet, and yards, or centimetres and metres, depending on the standard units where they live. Once they get the idea of standard measuring units, add in measuring tapes. There are so many activities that can be done with these tools. See below for more ideas. Measuring is an essential math skill that children need to learn in order to understand concepts like volume, area, and length. There are many different ways to measure things, and it can be tricky for kids to understand all of the different units. However, there are some games and activities that can help make learning about measurement a little bit easier - and even fun! Linear measurement activities Measuring things around the classroom is a great way to get kids interested, and there are plenty of games and activities you can use to keep them engaged. Here are a few ideas. 1. Set up stations around the room with various objects to measure and let the kids rotate around to each station. 2. Do a "measurement scavenger hunt" where kids have to find objects that match specific measurement criteria (e.g., an object that is exactly 10 cm long). 3. Use string to measure things around the class like furniture, doorways or cupboards. Let the kids use a different type of measurement each time e.g. feet/inches or metres/centimetres. 4. Have kids line up in a straight line and then measure them using a standard ruler. 5. Have kids estimate the length of various objects using their arms or feet and then measure the objects to see how accurate they were. 6. Have kids measure their own height or the height of a partner. 7. Estimate and measure! Have the children choose an object - it could be anything from a toy car to a pillow - and then estimate its length. 
Once they've written down their estimate, they can use a ruler or tape measure to find out its actual length. Volume and weight measurement activities Understanding volume/capacity and weight is another form of measurement that is necessary for real world use. It is important to have an idea of how much something weighs, how much is needed of various ingredients for cooking meals, how much soil is needed for planting a garden, etc. Doing hands on games and activities will help kids understand this and hopefully apply it to their own life experiences. Here are a few ideas for getting started. 1. Using candy or other small treats, measure out equal amounts into separate containers using standard measurements like cups, tablespoons or millilitres. Let the kids enjoy eating their treats as a reward for completing the task! 2. Get creative cooking! Set up small groups for cooking. Let the kids measure out ingredients using standard or metric measurements. Not only will they be learning about measurement, but they'll also get a delicious treat at the end! 3. Fill up different containers with water (or sand if you're outdoors) and have kids estimate how many litres (or gallons) each container holds. Then use a measuring cup to check their estimates. 4. Build towers! This game is perfect for exploring volume measurement. Give each child a specified amount of building blocks - 1 cup, 2 cups, 3 cups, etc. - and see how tall of a tower they can build with their blocks without letting any spill over. This is also a great opportunity to talk about capacity versus weight - how many blocks does it take to make 1 kilogram? 1 pound? 5. Give each child an object of a different size and have them guess which object is the heaviest, lightest, tallest, etc. Then check to see if the guesses are correct. Other types of measurement activities There are other forms of measurement that we use regularly as well. Time and temperature, for example. There are also many other ways that we use measurement in various subject areas. It is important to spend some time discussing different types of measurement - linear, area, weight, capacity and so on - and what units are used. Depending on the time available, activities could be done to look at more of these uses. Whenever possible, use real-life examples to illustrate measurement concepts. As much as possible, let kids get involved with the actual measuring. This will help them better understand the concepts and make it more enjoyable. If you are looking for some measurement resources for your classroom, here are some suggestions. You can find more by visiting my Measurement category in my TPT store. Measurement Anchor Charts And Conversions Linear Measurement Charts And Examples Measurement Games Team Events This booklet helps to explain the difference between non-standard and standard measurement. It also gives examples. The possibilities are endless! Teaching measurement doesn't have to be boring - by doing activities like these, your kids will be having so much fun they won't even realize they're learning! So get out there and let the kids hop in hallways, stretch to reach to tops of doorways, make human rulers, and start measuring! For more free resources, tips, and ideas, sign up for my newsletter. About Me Charlene Sequeira I am a wife, mother of 4, grandmother of 9, and a retired primary and music teacher. I love working with kids and continue to volunteer at school and teach ukulele.
In graphing inequalities on a number line and a coordinate plane, teachers may want to practice entering answers as a student. To graph inequalities on a number line, students solve the inequality in the first answer space. Then, students choose an open or closed circle from the drop-down menu above the number line. To graph the solution, the student will click in the direction of the solution on the number line. To help students understand graphing inequalities on a coordinate plane, show your students that they may choose from the drop-down menu at the top of the graph to choose Line or Shade. A student may choose solid or dotted lines from the button on the right side of the drop-down menu. Students will click on the first point on the line. The arrows on the side will allow a student to move a point. Students will then click on the location of the second point on the line, and GMM will connect the points with a line as Line was chosen from the drop-down menu above the graph. Next, the students will need to shade the correct portion of the graph by choosing Shade from the drop-down menu above the graph. Students enter the answer by clicking enter or clicking the green checkmark. To remove a point, or to start over, simply click on the trash can icon on the bottom left.
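Before students enter their answers, it can help to show the class what a finished number-line graph should look like. The short sketch below draws one with a few lines of plotting code; it is purely illustrative and has nothing to do with GMM's own software - the inequality x > 2 and all of the plotting choices are made up for demonstration.

```python
# Illustrative sketch only (not Get More Math code): draw the inequality x > 2
# on a number line with an open circle at the boundary and an arrow pointing
# toward the solutions, mirroring the open/closed circle and direction choices
# students make in the activity described above.
import matplotlib.pyplot as plt

boundary = 2          # boundary of the made-up inequality x > 2
closed = False        # open circle because x = 2 is not included
to_the_right = True   # solutions lie to the right of the boundary

fig, ax = plt.subplots(figsize=(6, 1.5))
ax.axhline(0, color="black", linewidth=1)                      # the number line itself
ax.plot(boundary, 0, marker="o", markersize=12,
        markerfacecolor="black" if closed else "white",
        markeredgecolor="black")                               # open vs. closed circle
end = boundary + 5 if to_the_right else boundary - 5
ax.annotate("", xy=(end, 0), xytext=(boundary, 0),
            arrowprops=dict(arrowstyle="->", linewidth=2))     # direction of solutions
ax.set_xticks(range(boundary - 5, boundary + 6))
ax.set_yticks([])
ax.set_title("x > 2")
plt.show()
```

Flipping the closed and to_the_right flags reproduces the other combinations students choose from the drop-down menus: a closed circle for "or equal to" inequalities, and an arrow pointing left when the solutions are smaller than the boundary.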
SODIUM. Sodium is normally present in food and in the body in its ionic (charged) form rather than as metallic sodium. Sodium is a positively charged ion or cation (Na+), and it forms salts with a variety of negatively charged ions (anions). Table salt or sodium chloride (NaCl) is an example of a sodium salt. In solution, NaCl dissociates into its ions, Na+ and Cl-. Other sodium salts include those of both inorganic (e.g., nitrite or bicarbonate) and organic anions (e.g., citrate or glutamate) in aqueous solution, these salts also dissociate into Na+ and the respective anion. Types and Amounts of Common Foods that Contain the Recommended Levels of Sodium Only small amounts of salt or sodium occur naturally in foods, but sodium salts are added to foods during food processing or during preparation as well as at the table. Most sodium is added to foods as sodium chloride (ordinary table salt), but small amounts of other salts such as sodium bicarbonate (baking soda and baking powder), monosodium glutamate, sodium sulfide, sodium nitrate, and sodium citrate are also added. Studies in a British population found that 75 percent of sodium intake came from salts added during manufacturing and processing, 15 percent from table salt added during cooking and at the table, and only 10 percent from natural foods (Sanchez-Castillo et al., 1987). Most sources of drinking water are low in sodium. However, the use of home water softening systems may greatly increase the sodium content of water; the system should be installed so that water for cooking and drinking bypasses the water softening system. The estimated minimum safe daily intake of sodium for an adult (0.5 grams) can be obtained from ¼ teaspoon of salt, ¼ of a large dill pickle, ⅕ can of condensed tomato soup, one frankfurter, or fifteen potato chips. The effect of salt added in processing is noted by the calculation that, whereas one would need to consume 333 cups of fresh green peas (with no salt added during cooking or at the table) in order to consume 0.5 grams of sodium, the estimated minimum safe daily intake of sodium is provided by only 1.4 cups of canned or 2.9 cups of frozen green peas. Whereas the estimated minimum safe intake for an adult is 0.5 g/day of sodium (1.3 g/day of sodium chloride), average Americans consume between 2 and 5 g/day of sodium (between 5 and 13 g/day of sodium chloride) (National Research Council, 1989). Sodium chloride, or salt, intake varies widely among cultures and among individuals. In Japan, where consumption of salt-preserved fish and the use of salt for seasoning are customary, salt intake is high, ranging from 14 to 20 g/day (Kono et al., 1983). On the other hand, the unacculturated Yanomamo Indians, who inhabit the tropical rain forest of northern Brazil and southern Venezuela, do not use salt in their diet and have an estimated sodium chloride intake of less than 0.3 g/day (Oliver et al., 1975). In the United States, individuals who consume diets high in processed foods tend to have high sodium chloride intakes, whereas vegetarians consuming unprocessed food may ingest less than 1 g/day of salt. Individuals with salt intakes less than 0.5 g/day do not normally exhibit chronic deficiencies, but appear to be able to regulate sodium chloride retention adequately. Recommended Intake of Sodium The daily minimum requirement of sodium for an adult is the amount needed to replace the obligatory loss of sodium. 
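The sodium and sodium chloride figures used throughout this entry are related by a simple mass fraction: sodium accounts for roughly 39 percent of the weight of sodium chloride (22.99 g/mol of 58.44 g/mol). The short sketch below illustrates that arithmetic only; the code and variable names are not part of the original entry, although the example amounts (0.5 g of sodium and a 6 g salt ceiling) do appear in the text that follows.

```python
# Illustrative sketch (not part of the original entry) of the sodium/salt
# conversion used throughout this article. Sodium is about 39 percent of the
# mass of sodium chloride, so 0.5 g of sodium corresponds to roughly 1.3 g of
# salt, and a 6 g salt limit corresponds to roughly 2.4 g of sodium.
NA_MOLAR_MASS = 22.99      # g/mol, sodium
NACL_MOLAR_MASS = 58.44    # g/mol, sodium chloride
NA_FRACTION = NA_MOLAR_MASS / NACL_MOLAR_MASS   # ~0.393

def sodium_to_salt(grams_sodium: float) -> float:
    """Grams of sodium chloride containing the given mass of sodium."""
    return grams_sodium / NA_FRACTION

def salt_to_sodium(grams_salt: float) -> float:
    """Grams of sodium contained in the given mass of sodium chloride."""
    return grams_salt * NA_FRACTION

print(round(sodium_to_salt(0.5), 1))   # 1.3  (the estimated safe minimum intake, as salt)
print(round(salt_to_sodium(6.0), 1))   # 2.4  (a 6 g/day salt ceiling, as sodium)
```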
The minimum obligatory loss of sodium by an adult in the absence of profuse sweating or gastrointestinal or renal disease has been estimated to be approximately 115 mg/day, which is due to loss of about 23 mg/day in the urine and feces and of 46 to 92 mg/day through the skin (National Research Council, 1989). Because of large variations in the degrees of physical activity and in environmental conditions, the estimated level of safe minimum intake for a 70-kg adult was set at 500 mg/day of sodium (equivalent to 1,300 mg/day of sodium chloride) by the National Research Council (1989). Although there is no established optimal range of intake of sodium chloride, it is recommended that daily salt intake should not exceed 6 grams because of the association of high intake with hypertension (National Research Council, 1989). The Dietary Guidelines for Americans, published in 2000, include a recommendation to choose and prepare foods with less salt. Individuals who wish to lower their sodium or salt intakes should use less salt at the table and during cooking, avoid salty foods such as potato chips, soy sauce, pickled foods, and cured meat, and avoid processed foods such as canned pasta sauces, canned vegetables, canned soups, crackers, bologna, and sausages. Individuals should also become aware of and avoid "hidden" sources of sodium such as softened water, products made with baking soda, and foods containing additives in the form of sodium salts. The need for sodium chloride is increased during pregnancy and lactation, with the estimated safe minimum intake being increased by 69 mg/day and 135 mg/day, respectively, for women during pregnancy and lactation. The estimated minimum requirement for sodium is 120 mg/day for infants between birth and 5 months of age and 200 mg/day for infants 6 to 11 months of age (National Research Council, 1989); these intakes are easily met by human milk or infant formulas. The estimated minimum requirements of sodium for children range from 225 mg/day at one year of age to 500 mg/day at 10 to 18 years of age.

General Overview of Role of Sodium in Normal Physiology

Total body sodium has been estimated at 100 grams (4.3 moles) for a 70-kg adult. In general, the cytoplasm of cells is relatively rich in potassium (K+) and poor in sodium (Na+) and chloride (Cl-) ions. The concentrations of sodium (and potassium and chloride) ions in cells and the circulating fluids are held remarkably constant, and small deviations from normal levels in humans are associated with malfunction or disease. Na+, K+, and Cl- are referred to as electrolytes because of their role in the generation of gradients and electrical potential differences across cell membranes. Sodium and sodium gradients across cell membranes play several important roles in the body. First, sodium gradients are important in many transport processes. Sodium tends to enter cells down its electrochemical gradient (toward the intracellular compartment that has a lower Na+ concentration and a more negative charge compared to the extracellular fluid compartment). This provides a secondary driving force for absorption of Cl- in the same direction as Na+ movement or for the secretion of K+ or hydrogen ions (H+) in the opposite direction in exchange for Na+.
The sodium gradient is also used to drive the coupled transport of Na+ and glucose, galactose, and amino acids by certain carrier proteins in cell membranes; because as Na+ enters down its electrochemical gradient, uptake of glucose/galactose or amino acids can occur against their concentration gradient. Second, sodium ions, along with potassium ions, play important roles in generating resting membrane potentials and in generating action potentials in nerve and muscle cells. Nerve and muscle cell membranes contain gated channels through which Na+ or K+ can flow. In the resting state, these cell membranes are highly impermeable to Na+ and permeable to K+ (i.e., Na+ channels are closed and K+ channels are open). These gated channels open or close in response to chemical messengers or to the traveling current (applied voltage). Action potentials are generated in nerve and muscle due to opening of Na+ channels followed by their closing and the re-opening of K+ channels. A third important function of sodium is its osmotic role as a major determinant of extracellular fluid volume. The volume of the extracellular fluid compartment is determined primarily by the total amount of osmotic particles present. Because Na+, along with Cl-, is the major determinant of osmolarity of extracellular fluid, disturbances in Na+ balance will change the volume of the extracellular fluid compartment. Finally, because Na+ is a fixed cation, it also plays a role in acid-base balance in the body. An excess of fixed cations (versus fixed anions) requires an increase in the concentration of bicarbonate ions. Consequences of Deficiency or Excessive Intake Levels Sodium balance in the body is well controlled via regulation of Na+ excretion by the kidneys. The kidneys respond to a deficiency of Na+ in the diet by decreasing its excretion, and they respond to an excess of Na+ by increasing its excretion in the urine. Physiological regulatory mechanisms for conservation of Na+ seem to be better developed in humans than mechanisms for excretion of Na+, and pathological states characterized by inappropriate retention of Na+ are more common than those characterized by Na+ deficiency. Retention of Na+ occurs when Na+ intake exceeds the renal excretory capacity. This can occur with rapid ingestion of large amounts of salt (for example, ingestion of seawater) or with too-rapid intravenous infusion of saline. Hypernatremia (abnormally high plasma concentration of Na+) and hypervolemia (abnormally increased volume of blood), resulting in acute hypertension, usually occur in these situations, and the Na+ regulatory mechanisms will cause natriuresis (urinary excretion of Na+) and water retention. The body may be depleted of Na+ under extreme conditions of heavy and persistent sweating or when conditions such as trauma, chronic vomiting or diarrhea, or renal disease produce an inability to retain Na+. Sodium depletion produces hyponatremia (abnormally low plasma concentration of Na+) and hypovolemia (abnormally decreased volume of blood) which place the individual at risk of shock. Medical treatment includes replacement of Na+ and water to restore the circulatory volume. If the loss of Na+ is not due to renal disease, mechanisms to conserve Na+ and water are activated. Loss of Na+ can also be caused by the administration of diuretics, which inhibit Na+ and Cl- reabsorption, or by untreated diabetes mellitus, which causes diuresis. 
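The membrane-potential role described earlier in this section can be made quantitative with the Nernst equation, E = (RT/zF)·ln([ion]outside/[ion]inside), which gives the equilibrium potential that each ion gradient would produce on its own. The equation and the typical mammalian concentrations used below are standard textbook values rather than figures taken from this entry, so the following sketch should be read as illustrative only.

```python
# Illustrative sketch (values not from this entry): Nernst equilibrium potentials
# for Na+ and K+ across a cell membrane, using typical textbook concentrations.
# E = (R*T / (z*F)) * ln(concentration outside / concentration inside)
import math

R = 8.314      # J/(mol*K), gas constant
T = 310.0      # K, approximately body temperature
F = 96485.0    # C/mol, Faraday constant

def nernst_mV(out_mM: float, in_mM: float, z: int = 1) -> float:
    """Equilibrium potential in millivolts for an ion with charge z."""
    return 1000.0 * (R * T) / (z * F) * math.log(out_mM / in_mM)

# Typical mammalian values: Na+ ~145 mM outside / ~12 mM inside,
# K+ ~4 mM outside / ~140 mM inside (assumed textbook figures, not from the article).
print(round(nernst_mV(145.0, 12.0)))   # about +67 mV for Na+
print(round(nernst_mV(4.0, 140.0)))    # about -95 mV for K+
```

The positive Na+ value and the negative K+ value are consistent with the description above: opening Na+ channels drives the membrane potential upward during an action potential, while open K+ channels pull it back toward a negative resting value.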
Regulatory Processes that Govern the Uptake and Excretion of Sodium The kidneys are the main site of regulation of Na+ balance. The intestines play a relatively minor role. Under normal circumstances, about 99 percent of dietary Na+ and Cl- are absorbed, and the remainder is excreted in the feces. Absorption of Na+ and Cl- occurs along the entire length of the intestines; 90 to 95 percent is absorbed in the small intestine and the rest in the colon. Intestinal absorption of Na+ and Cl- is subject to regulation by the nervous system, hormones, and paracrine agonists released from neurons in the enteric nervous system in the wall of the intestines. The most important of these factors is aldosterone, a steroid hormone produced and secreted by the zona glomerulosa cells of the adrenal cortex. Aldosterone stimulates absorption of Na+ and secretion of K+, mainly by the colon and, to a lesser extent, by the ileum. The kidneys respond to a deficiency of Na+ in the diet by decreasing its excretion, and they respond to an excess by increasing its excretion in the urine. Urinary loss of Na+ is controlled by varying the rate of Na+ reabsorption from the filtrate by renal tubular cells. Individuals consuming diets that are low in Na+ efficiently reabsorb Na+ from the renal filtrate and have low rates of excretion of Na+. When there is an excess of Na+ from high dietary intake, little Na+ is reabsorbed by renal tubular cells, resulting in the excretion of the excess Na+ in the urine. As much as 13 g/day of Na+ can be excreted in the urine. The most important regulator of renal excretion of Na+ and Cl- is the renin-angiotensin-aldosterone system (Laragh, 1985). Sensors in the nephrons of the kidney respond to changes in Na+ load by influencing the synthesis and secretion of renin (Levens et al., 1981). A decrease in renal perfusion or Na+ load will increase the release of renin. In the circulation, renin acts to initiate the formation of active angiotensin II from angiotensinogen, a protein produced by the liver. Angiotensin II conserves body Na+ by stimulating Na+ reabsorption by the renal tubules and indirectly via stimulating secretion of aldosterone. Secretion of aldosterone by the adrenal cortex is stimulated by a low plasma Na+ concentration and by angiotensin II. Aldosterone stimulates cells of the renal tubules to reabsorb Na+. Because of the close association of Na+ and Cl- concentrations with effective circulating volume, Na+ (and Cl-) retention results in proportionate water retention, and Na+ (and Cl-) loss results in proportionate water loss. Expansion or contraction of the extracellular volume affects the activation of vascular pressure receptors, as well as the release of natriuretic peptides by certain tissues, and result in changes, mediated largely by antidiuretic hormone (ADH), in renal excretion of Na+, Cl-, and water. A deficiency of sodium chloride and hypovolemia have also been shown to produce an increase in appetite for salt, which will increase sodium chloride intake. Evidence that Sodium Intake May Be Related to Risk of Hypertension Both epidemiological and experimental studies implicate habitual high dietary salt intake in the development of hypertension (Weinberger, 1996). Primary hypertension, or abnormally high blood pressure, is a significant risk factor for cardiovascular disease, stroke, and renal failure in industrialized societies. 
Diets that are high in fat, high in sodium, low in potassium, low in calcium, and low in magnesium may contribute to the development of hypertension (Reusser and McCarron, 1994). Although epidemiological and experimental evidence suggest a positive correlation between habitual high-salt consumption and hypertension, controversy remains regarding the importance of sodium salts in the regulation of blood pressure and the mechanisms by which salt influences blood pressure. This is not surprising, because the response of blood pressure depends on an interplay of various factors, such as genetic susceptibility, body mass, cardiovascular factors, regulatory mechanisms mediated through the neural and hormonal systems, and renal function. A large comprehensive study on the role of sodium in hypertension was carried out in fifty-two geographically separate centers in thirty-two countries by the INTERSALT Cooperative Research Group (Stamler, 1997). Four centers included in the study had median values for Na+ excretion that were under 1.3 g/day. Subjects in these four unacculturated centers had low blood pressure, rare or absent hypertension, and no age-related rise in blood pressure as occurred in populations in the other forty-eight centers in which mean values for Na+ excretion were between 2.4 and 5.6 grams Na+ per day. Although blood pressure and sodium intake appeared to be associated when all fifty-two centers were included, the correlation between systolic blood pressure and excretion of sodium was not significant when the four centers with the lowest median values of sodium excretion were excluded from the analysis. Intervention studies of dietary salt restriction to lower blood pressure have produced mixed results. This may be explained by the facts that not all hypertensive patients are salt-sensitive and that many cases of hypertension are due to other causes. Nevertheless, various clinical trials indicate some beneficial effects of dietary restriction of sodium on blood pressure (Cutler et al., 1997; Reusser and McCarron, 1994) with response being greater in older patients, patients with the highest degree of restriction, and in nonoverweight, mildly hypertensive patients. Researchers are currently attempting to identify the genetic basis of salt-sensitive hypertension and to identify polymorphisms associated with salt-sensitive hypertensive individuals. More than thirty different gene variations could be responsible for essential hypertension, and hypertension is considered to have a complex genetic basis. Further insight into the basis of hypertension may help to determine individuals for whom lowering salt intake would be beneficial and to facilitate the prescription of appropriate drugs. See also Dietary Guidelines ; Fast Food ; Fish, Salted ; Health and Disease ; Meat, Salted ; Preserving ; Salt . Church, Charles F., and Helen N.Church. Food Values of Portions Commonly Used : Bowes and Church. Philadelphia: J. B. Lippincott, 1970. Cutler, Jeffrey A., Dean Follmann, and P. Scott Allender. "Randomized Trials of Sodium Reduction: An Overview." American Journal of Clinical Nutrition 65 (1997, Supp.): 643S–651S. Kono, Suminori, Masato Ikeda, and Michiharu Ogata. "Salt and Geographical Mortality of Gastric Cancer and Stroke in Japan." Journal of Epidemiology and Community Health 37 (1983): 43–46. Laragh, John H. "Atrial Natriuretic Hormone, the Renin-Aldosterone Axis, and Blood Pressure—Electrolyte Homeostasis." New England Journal of Medicine 313 (1985): 1330–1340. 
Levens, Nigel R., Michael J. Peach, and Robert M. Carey. "Role of Intrarenal Renin-Angiotensin System in the Control of Renal Function." Circulation Research 48 (1981): 157–167.
National Research Council. Recommended Dietary Allowances. 10th ed. Washington, D.C.: National Academy Press, 1989, pp. 247–261.
Oliver, Walter J., Erik L. Cohen, and James V. Neel. "Blood Pressure, Sodium Intake and Sodium-Related Hormones in the Yanomamo Indians, a 'No-Salt' Culture." Circulation 52 (1975): 146–151.
Reusser, Molly E., and David A. McCarron. "Micronutrient Effects on Blood Pressure Regulation." Nutrition Reviews 52 (1994): 367–375.
Sanchez-Castillo, C. P., S. Warrender, T. P. Whitehead, and W. P. James. "An Assessment of the Sources of Dietary Salt in a British Population." Clinical Science 72 (1987): 95–102.
Sheng, Hwai-Ping. "Sodium, Chloride, and Potassium." In Biochemical and Physiological Aspects of Human Nutrition, edited by Martha H. Stipanuk, pp. 686–710. Philadelphia: W. B. Saunders Co., 2000.
Stamler, Jeremiah. "The INTERSALT Study: Background, Methods, Findings, and Implications." American Journal of Clinical Nutrition 65 (1997, Supp.): 626S–642S.
United States Department of Agriculture. Nutrition and Your Health: Dietary Guidelines for Americans. 5th ed. Washington, D.C.: U.S. Government Printing Office, 2000.
Weinberger, Myron H. "Salt Sensitivity of Blood Pressure in Humans." Hypertension 27 (1996): 481–490.

Martha H. Stipanuk

Brief Outline of the History of Salt

Common salt is the chemical compound NaCl. Salt makes up nearly 80 percent of the dissolved material in seawater and is also widely distributed in solid deposits. It is found in many evaporative deposits, where it crystallizes out of evaporating brine lakes, and in ancient bedrock, where large extinct salt lakes and seas evaporated millions of years ago. Salt was in general use long before history began to be recorded. Salt has been used widely for the curing, seasoning, and preserving of foods.

Most people have never seen sodium metal. But it is almost impossible not to see many compounds of sodium every day. Ordinary table salt, baking soda, baking powder, household lye (such as Drano), soaps and detergents, aspirin and other drugs, and countless other consumer products are sodium products. Sodium is a member of the alkali metals family. The alkali family consists of elements in Group 1 (IA) of the periodic table. The periodic table is a chart that shows how chemical elements are related to one another. Other Group 1 (IA) elements are lithium, potassium, rubidium, cesium, and francium. The members of the alkali metals family are among the most active elements. Compounds of sodium have been known, of course, throughout human history. But sodium metal was not prepared until 1807. The reason is that sodium attaches itself very strongly to other elements. Its compounds are very difficult to break apart. It was not until 1807 that English chemist Sir Humphry Davy (1778-1829) found a way to extract sodium from its compounds.
Sodium metal itself has relatively few uses. It reacts with other substances easily, sometimes explosively. However, many sodium compounds have many uses in industry, medicine, and everyday life.

Discovery and naming

Sodium carbonate, or soda (Na2CO3), was probably the sodium compound best known to ancient peoples. It is the most common ore of sodium found in nature. This explains why glass was one of the first chemical products made by humans. Glass is made by heating sodium carbonate and calcium oxide (lime) together. When the mixture cools, it forms the hard, clear, transparent material called glass. Glass was being manufactured on a large scale in Egypt as early as 1370 B.C. The Egyptians called soda natron. Much later, the Romans used a similar name for the compound, natrium. These names explain the chemical symbol used for sodium, Na. The name sodium probably originated from an Arabic word suda, meaning "headache." Soda was sometimes used as a cure for headaches among early peoples. The word suda also carried over into Latin to become sodanum, which also means "headache remedy." In the early 1800s, Davy found a way to extract a number of active elements from their compounds. Sodium was one of these elements. Davy's method involved melting a compound of the active element, then passing an electric current through the molten (melted) compound. Davy used sodium hydroxide (NaOH) to make sodium.

Sodium is a silvery-white metal with a waxy appearance. It is soft enough to be cut with a knife. The surface is bright and shiny when first cut, but quickly becomes dull as sodium reacts with oxygen in the air. A thin film of sodium oxide (Na2O) forms that hides the metal itself. Sodium's melting point is 97.82°C (208.1°F) and its boiling point is 881.4°C (1,618°F). Its density is slightly less than that of water, 0.968 grams per cubic centimeter. Sodium is a good conductor of electricity.

Sodium is a very active element. It combines with oxygen at room temperature. When heated, it combines very rapidly, burning with a brilliant golden-yellow flame. Sodium also reacts violently with water. (See accompanying sidebar.) It is so active that it is normally stored under a liquid with which it does not react. Kerosene or naphtha are liquids commonly used for this purpose. Sodium also reacts with most other elements and with many compounds. It reacts with acids to produce hydrogen gas. It also dissolves in mercury to form a sodium amalgam. An amalgam is an alloy of mercury and at least one other metal.

Occurrence in nature

Sodium never occurs as a free element in nature. It is much too active. It always occurs as part of a compound. The most common source of sodium in the Earth is halite. Halite is nearly pure sodium chloride (NaCl). It is also called rock salt. Halite can be found in underground deposits similar to coal mines. Those deposits were formed when ancient oceans evaporated (dried up), leaving sodium chloride behind. Earth movements eventually buried those deposits. Now they can be mined to remove the sodium chloride.

Sodium and water aren't friends

Oil and vinegar don't mix. But sodium and water really don't mix! Sodium reacts violently with water. The effect is fascinating. When sodium metal is first placed into water, it floats. But it immediately begins to react with water, releasing hydrogen gas: 2Na + 2H2O → 2NaOH + H2. A great deal of energy is released in this reaction. It is enough to set fire to the hydrogen gas. The sodium metal reacts with water. So much heat is released that the sodium melts.
It turns into a tiny ball of liquid sodium. At the same time, the sodium releases hydrogen from water. The hydrogen gas catches fire and causes the ball of sodium to go sizzling across the surface of the water.

Sodium chloride can also be obtained from seawater and brine. Brine is similar to seawater, but it contains more dissolved salt. Removing sodium chloride from seawater or brine is easy. All that is needed is to let the water evaporate. The sodium chloride is left behind. It only needs to be separated from other chemicals that were also dissolved in the water.

There is only one naturally occurring isotope of sodium, sodium-23. Isotopes are two or more forms of an element. Isotopes differ from each other according to their mass number. The number written to the right of the element's name is the mass number. The mass number represents the number of protons plus neutrons in the nucleus of an atom of the element. The number of protons determines the element, but the number of neutrons in the atom of any one element can vary. Each variation is an isotope. Six radioactive isotopes of sodium are known also. A radioactive isotope is one that breaks apart and gives off some form of radiation. Radioactive isotopes are produced when very small particles are fired at atoms. These particles stick in the atoms and make them radioactive. Two radioactive isotopes of sodium—sodium-22 and sodium-24—are used in medicine and other applications. They can be used as tracers to follow sodium in a person's body. A tracer is a radioactive isotope whose presence in a system can easily be detected. The isotope is injected into the system at some point. Inside the system, the isotope gives off radiation. That radiation can be followed by means of detectors placed around the system. Sodium-24 also has non-medical applications. For example, it is used to test for leaks in oil pipe lines. These pipe lines are usually buried underground. It may be difficult to tell when a pipe begins to leak. One way to locate a leak is to add some sodium-24 to the oil. If oil leaks out of the pipe, so does the sodium-24. The leaking oil may not be visible, but the leaking sodium-24 is easily detected. It is located by instruments that are designed to detect radiation.

One way to obtain pure sodium metal is by passing an electric current through molten (melted) sodium chloride: 2NaCl → 2Na + Cl2. This method is similar to the one used by Humphry Davy in 1807. But there is not much demand for sodium metal. Sodium compounds are much more common. A second and similar method is used to make a compound known as sodium hydroxide (NaOH). The sodium hydroxide is then used as a starting point for making other sodium compounds. The method for making sodium hydroxide is called the chlor-alkali process. The name comes from the fact that both chlorine and an alkali metal (sodium) are produced at the same time. In this case, an electric current is passed through a solution of sodium chloride dissolved in water: 2NaCl + 2H2O → 2NaOH + H2 + Cl2. Three useful products are obtained from this reaction: chlorine gas (Cl2), hydrogen gas (H2), and sodium hydroxide (NaOH). The chlor-alkali process is one of the most important industrial processes used today.

Sodium metal has a relatively small, but important, number of uses. For example, it is sometimes used as a heat exchange medium in nuclear power plants. A heat exchange medium is a material that picks up heat in one place and carries it to another place. Water is a common heat exchange medium.
Some home furnaces burn oil or gas to heat water that travels through pipes and radiators in the house. The water gives off its heat through the radiators. Sodium does a similar job in nuclear power plants. Heat is produced by nuclear fission reactions at the core (center) of a nuclear reactor. In a nuclear fission reaction, large atoms break down to form smaller atoms. As they do so, large amounts of heat energy are given off. Liquid sodium is sealed into pipes that surround the core of the reactor. As heat is generated, it is absorbed (taken up) by the sodium. The sodium is then forced through the pipes into a nearby room. In that room, the sodium pipes are wrapped around pipes filled with water. The heat in the sodium converts the water to steam. The steam is used to operate devices that generate electricity. Another use of sodium metal is in producing other metals. For example, sodium can be combined with titanium tetrachloride (TiCl4) to make titanium metal: Sodium is also used to make artificial rubber. (Real rubber is made from the collected sap of rubber trees and is expensive.) The starting material for artificial rubber is usually a small molecule. The small molecule reacts with itself over and over again. It becomes a much larger molecule called a polymer. The polymer is the material that makes up the artificial rubber. Sodium metal is used as a catalyst in this reaction. A catalyst is a substance used to speed up or slow down a chemical reaction without undergoing any change itself. The combination of an electric current and sodium vapor produces a yellowish glow in street lamps. Sodium is frequently used in making light bulbs. Sodium is first converted to a vapor (gas) and injected into a glass bulb. An electric current is passed through a wire or filament in the gas-filled bulb. The electric current causes the sodium vapor to give off a yellowish glow. Many street lamps today are sodium vapor lamps. Their advantage is that they do not produce as much glare as do ordinary lights. Almost all sodium compounds dissolve in water. When it rains, sodium compounds dissolve and are carried into the ground. Eventually, the compounds flow into rivers and then into the oceans. The ocean is salty partly because sodium compounds have been dissolved for many centuries. But that means that finding sodium compounds on land is somewhat unusual. They tend to be more common in desert areas because deserts experience low rainfall. So sodium compounds are less likely to be washed away. Huge beds of salt and sodium carbonate are sometimes found in desert areas. Dozens of sodium compounds are used today in all fields. Some of the most important of these compounds are discussed below. Sodium chloride (NaCl). The most familiar use of sodium chloride is as a flavor enhancer in food. It is best known as table salt. Large amounts of sodium chloride are also added to prepared foods, such as canned, bottled, frozen, and dried foods. One purpose of adding sodium chloride to these foods is to improve their flavors. But another purpose is to prevent them from decaying. Sodium chloride kills bacteria in foods. It has been used for hundreds of years as a food preservative. The "pickling" or "salting" of a food, for example, means the adding of salt to that food to keep it from spoiling. This process is one reason people eat so much salt in their foods today. Most people eat a lot of prepared foods. Those prepared foods contain a lot of salt. 
People are often not aware of all the salt they take in when they eat such foods. Sodium chloride is also the starting point for making other sodium compounds. In fact, this application is probably the number one use for sodium chloride.

Sodium carbonate (Na2CO3). Sodium carbonate is also known by other names, such as soda, soda ash, sal soda, and washing soda. It is also used as the starting point in making other sodium compounds. A growing use is in water purification and sewage treatment systems. The sodium carbonate is mixed with other chemicals that react to form a thick, gooey solid. The solid sinks to the bottom of a tank, carrying impurities present in water or waste water. Sodium carbonate is also used to make a very large number of commercial products, such as glass, pulp and paper, soaps and detergents, and textiles.

Sodium bicarbonate (NaHCO3). When sodium bicarbonate is dissolved in water, it produces a fizzing reaction. That reaction can be used in many household situations. For example, the fizzy gas can help bread batter rise. The "rising" of the batter is caused by bubbles released when sodium bicarbonate (baking soda) is added to milk in the batter. Certain kinds of medications, such as Alka-Seltzer, also include sodium bicarbonate. The fizzing is one of the effects of taking Alka-Seltzer that helps settle the stomach. Sodium bicarbonate is also used in mouthwashes, cleaning solutions, wool and silk cleaning systems, fire extinguishers, and mold preventatives in the timber industry.

Examples of lesser known compounds are as follows:
- sodium alginate (NaC6H7O6): a thickening agent in ice cream and other prepared foods; manufacture of cement; coatings for paper products; water-based paints
- sodium bifluoride (NaHF2): preservative for animal specimens; antiseptic (germ-killer); etching of glass; manufacture of tin plate
- sodium diuranate, or "uranium yellow" (Na2U2O7): used to produce yellowish-orange glazes for ceramics
- sodium fluorosilicate (Na2SiF6): used to make "fluoride" toothpastes that protect against cavities; insecticides and rodenticides (rat-killers); moth repellent; wood and leather preservative; manufacture of laundry soaps and "pearl-like" enamels
- sodium metaborate (NaBO2): herbicide
- sodium paraperiodate (Na3H2IO6): helps tobacco to burn more completely and cleanly; helps paper products retain strength when wet
- sodium stearate (NaOOCC17H35): keeps plastics from breaking down; waterproofing agent; additive in toothpastes and cosmetics
- sodium zirconium glycolate (NaZrH3(H2COCOO)3): deodorant; germicide (germ-killer); fire-retardant

Sodium has a number of important functions in plants, humans, and animals. In humans, for example, sodium is involved in controlling the amount of fluid present in cells. An excess or lack of sodium can cause cells to gain or lose water. Either of these changes can prevent cells from carrying out their normal functions.

People sometimes talk about the amount of "sodium" in their diet. Or they may refer to the amount of "salt" in their diet. The two terms are similar, but not exactly alike. In the body, sodium occurs most often as sodium chloride. A common name for sodium chloride is salt. The Committee on Dietary Allowances of the U.S. Food and Nutrition Board recommends that a person take in about 1,100 to 3,300 milligrams of sodium per day. The human body actually needs only about 500 milligrams of sodium.
Studies show that the average American takes in about 2,300 to 6,900 milligrams of sodium per day. This high level of sodium intake troubles many health experts. Too much sodium can affect the body's ability to digest fats, for example. The most serious problem, however, may be hypertension. Hypertension is another name for "high blood pressure." A person with high blood pressure may be at risk for stroke, heart attack, or other serious health problems. Sodium is also involved in sending nerve messages to and from cells. These impulses control the way muscles move. Again, an excess or lack of sodium can result in abnormal nerve and muscle behavior. Sodium is also needed to control the digestion of foods in the stomach and intestines.

Known to most people in the form of table salt, sodium is one of the minerals that the body needs in relatively large quantities. Humankind's taste for sodium reaches far back into the distant past. Much like today, sodium was popular in antiquity as a food preservative and an ingredient in snacks. In some ancient societies, sodium was even used as a form of currency. In modern times, most Americans and other Westerners consume far too much of the mineral, and it is easy to see why. One obvious culprit is table salt, which has a high sodium content. The mineral is also found in many of America's favorite foods (or the chemicals used to preserve those foods). Sodium can be found in potato chips and a variety of other snacks, processed foods, meat, fish, butter and margarine, soft drinks, dairy products, canned vegetables, and bread, just to name a few sources. A single slice of pizza can supply the body with all the sodium it needs for one day (about 500 mg), while a teaspoon of table salt contains four times that amount.

A certain intake of sodium is considered essential to life. The mineral is a vital component of all bodily fluids, including blood and sweat. Often working in combination with other minerals such as potassium, sodium helps to manage the distribution and pH balance of these fluids inside the body and plays an important role in blood pressure regulation. Sodium is referred to as an electrolyte because it possesses a mild electrical charge when dissolved in bodily fluids. Due to this charge, sufficient amounts of the mineral are necessary for the normal functioning of nerve transmissions and muscle contractions. Sodium also helps the body to retain water and prevent dehydration, and may have some activity as an antibacterial.

The important benefits associated with sodium become apparent in cases of sodium deficiency, which is relatively uncommon. Sodium deficiency is most likely to occur in cases of starvation, diarrhea, intense sweating, or other conditions that cause rapid loss of water from the body. People who suffer from low sodium levels may experience a wide range of bothersome or serious health problems, including digestive disorders, muscle twitching or weakness, memory loss, fatigue, and lack of concentration or appetite. Arthritis may also develop. These problems usually occur when fluids that belong in the bloodstream take a wrong turn and enter cells.
Most Americans consume anywhere from 3,000 mg to 20,000 mg of sodium a day. These amounts are much more than the body needs to function at an optimal level. Many nutrition experts are concerned about the rise in sodium intake in the general population in the last twenty years. Much of this increase is due to the popularity of fast foods and salty snacks, including the sale of high-sodium snack foods in school cafeterias or vending machines. While sodium deficiencies are rare, supplements may be required in people with certain medical conditions such as Addison's disease, adrenal gland tumors, kidney disease, or low blood pressure. More sodium may also be needed by those who experience severe dehydration or by people who take diuretic drugs. Though taking extra amounts of sodium is not known to improve health or cure disease, the mineral may have some therapeutic value when used externally. A number of medical studies in people suggest that soaking in water from the Dead Sea may be beneficial in the treatment of various diseases such as rheumatoid arthritis, psoriatic arthritis, and osteoarthritis of the knees. Located in Israel, the Dead Sea is many times saltier than ocean water and rich in other minerals such as magnesium, potassium, and calcium. In one small study, published in 1995 by researchers from the Soroka Medical Center in Israel, nine people with rheumatoid arthritis showed significant improvement in their condition after bathing in the Dead Sea for 12 days. The control group in the study, whose members did not bathe in the Dead Sea, failed to improve. The beneficial effects of the Dead Sea soaks lasted for up to three months after the participants had stopped bathing in the famous body of water. Despite intriguing findings such as these, no one knows for certain if sodium plays a major role in the therapeutic powers associated with the Dead Sea soaks. Sodium has a reputation as a germ killer. Some people use a sodium solution as an antibacterial mouthwash to combat microorganisms that cause sore throat or inflamed gums. Plain saltwater soaks have also been recommended as a remedy for sweaty feet. Salt is believed to have a drying effect by soaking up excess perspiration. In ages past, saltwater soaks were used to relieve sore or aching muscles. In the late 1990s the National Academy of Sciences established the recommended daily allowance (RDA) of sodium as between 1,100 and 3,300 milligrams. To prepare a sodium mouthwash, mix 1 tsp of table salt with a glass of warm water. The solution should be swished around in the mouth for about a minute or so. Then spit the mixture out. Try not to swallow the solution, as it contains about 2,000 mg of sodium. Sodium is available in tablet form, but supplements should only be taken under the supervision of a doctor. As mentioned earlier, most people already get far too much sodium in their diets. A trip to the Dead Sea is not necessary in order to enjoy its potential benefits. Dead Sea bath salts are also available. People who wish to take sodium supplements or increase their sodium intake should talk to a doctor first if they have high blood pressure (or a family history of the disease), congestive heart failure (or other forms of heart or blood vessel disease), hepatic cirrhosis, edema, epilepsy, kidney disease, or bleeding problems. Studies investigating the role of sodium in the development of high blood pressure have produced mixed results.
However, sodium is widely believed to contribute to the development of the disease in susceptible people. For this reason, most doctors and major health organizations around the world recommend a diet low in sodium. Eating a low-sodium diet may actually help to lower blood pressure, especially when that diet includes sufficient amounts of potassium. A 20-year-long follow-up study to the National Health and Nutrition Examination Survey that was conducted between 1971 and 1975 reported in 2002 that high levels of sodium in the diet are an independent risk factor for congestive heart failure (CHF) in overweight adults. The authors of the study suggested that lowering the rate of sodium intake may play an important role in lowering the risk of CHF in overweight populations as well as individuals. Another good reason for limiting one's intake of sodium is the link between high levels of dietary sodium and an increased risk of stomach cancer. This risk is increased if a person's diet is also low in fresh fruits and vegetables. Apart from an increase in blood pressure, high levels of sodium may cause confusion, anxiety, edema, nausea, vomiting, restlessness, weakness, and loss of potassium and calcium. People who are concerned about consuming too much sodium should try to keep their sodium intake below 2,500 mg per day. This is the level recommended by the US Department of Health and Human Services and the US Department of Agriculture in their 2000 Dietary Guidelines for Americans. Ways to reduce sodium intake include the following:
- Reading the Nutrition Facts labels on processed food items. The amount of sodium in a specific processed food, such as cake mix or canned soup, can vary widely from brand to brand.
- Retraining the taste buds. A taste for salt is acquired. A gradual decrease in the use of salt to season foods gives the taste buds time to adjust.
- Using other spices and herbs to season food.
- Cooking from scratch rather than using processed foods.
- Substituting fresh fruits and vegetables for salty snack foods.
- Tasting food at the table before adding salt. Many people salt their food automatically before eating it, which often adds unnecessary sodium to the daily intake.
- Choosing foods that are labeled "low sodium" or "sodium free."
- Watching the sodium content of over-the-counter medications, and asking a pharmacist for information about the sodium content of prescription drugs.
Restricting sodium intake is not usually recommended for women who are pregnant or breast-feeding. Dietary sodium is not associated with any bothersome or significant short-term side effects. In some people, however, salt tablets may cause upset stomach or affect kidney function. Sodium may promote the loss of calcium and potassium from the body. In addition, sodium in the diet may need to be restricted for such medications as antihypertensives (drugs to control blood pressure) and anticoagulants (blood thinners) to be fully effective.
Pelletier, Kenneth R., MD. The Best Alternative Medicine, Part I: Food for Thought. New York: Simon & Schuster, 2002. Sifton, David W. PDR Family Guide to Natural Medicines and Healing Therapies. New York: Three Rivers Press, 1999. Becker, Elizabeth, and Marian Burros. "Eat Your Vegetables? Only at a Few Schools." New York Times, January 13, 2003. He, J., L. G. Ogden, L. A. Bazzano, et al. "Dietary Sodium Intake and Incidence of Congestive Heart Failure in Overweight US Men and Women: First National Health and Nutrition Examination Survey Epidemiologic Follow-up Study."
Archives of Internal Medicine 162 (July 22, 2002): 1619-1624. Ngoan, L. T., T. Mizoue, Y. Fujino, et al. "Dietary Factors and Stomach Cancer Mortality." British Journal of Cancer 87 (July 1, 2002): 37-42. Nielsen, S. J., A. M. Siega-Riz, and B. M. Popkin. "Trends in Food Locations and Sources Among Adolescents and Young Adults." Preventive Medicine 35 (August 2002): 107-113. Sukenik, S. "Balneotherapy for Rheumatic Diseases at the Dead Sea Area." Israeli Journal of Medicine and Science (1996): S16–9. Sukenik, S., D. Flusser, and S. Codish, et al. "Balneotherapy at the Dead Sea Area for Knee Osteoarthritis." Israeli Journal of Medicine and Science (1999): 83–5. National Academy of Sciences. 500 Fifth Street, NW, Washington, DC 20001. <www4.nationalacademies.org/nas>. Rebecca J. Frey, PhD. "Sodium." Gale Encyclopedia of Alternative Medicine. Encyclopedia.com. (July 21, 2017). http://www.encyclopedia.com/medicine/encyclopedias-almanacs-transcripts-and-maps/sodium

sodium, a metallic chemical element; symbol Na [Lat. natrium]; at. no. 11; at. wt. 22.98977; m.p. 97.81°C; b.p. 892.9°C; sp. gr. 0.971 at 20°C; valence +1. Sodium is a soft, silver-white metal. Extremely reactive chemically, it is one of the alkali metals in Group 1 of the periodic table. Like potassium, which it closely resembles, it oxidizes rapidly in air; it also reacts violently with water, liberating hydrogen (which may ignite) and forming the hydroxide. It must be stored out of contact with air and water and should be handled carefully. Sodium combines directly with the halogens. The metal is usually prepared by electrolysis of the fused chloride (the Downs process); formerly, the chief method of preparation was by electrolysis of the fused hydroxide (the Castner process). Metallic sodium has limited use. It is used in sodium arc lamps for street lighting; pure or alloyed with potassium, it has found use as a heat-transfer liquid, e.g., in certain nuclear reactors. It is used principally in the manufacture of tetraethyl lead (a gasoline antiknock compound) and of sodamide, NaNH2, sodium cyanide, NaCN, sodium peroxide, Na2O2, and sodium hydride, NaH. Sodium compounds are extensively used in industry and for many nonindustrial purposes. Among the most important compounds are chloride (common salt, NaCl), bicarbonate (baking soda, NaHCO3), carbonate (soda ash, or washing soda, Na2CO3), hydroxide (caustic soda, or lye, NaOH), nitrate (Chile saltpeter, NaNO3), thiosulfate (hypo, Na2S2O3·5H2O), phosphates, and borax (Na2B4O7·10H2O). Sodium hydroxide is used wherever a cheap alkali is needed, for example, in making soap. Substances containing sodium impart a characteristic yellow color to a flame. Because of its activity sodium is not found uncombined in nature. It occurs abundantly and widely distributed in its compounds, which are present in rocks and soil, in the oceans, in salt lakes, in mineral waters, and in deposits in various parts of the world. Sodium compounds are found in the tissues of plants and animals. Sodium is an essential element in the diet, but some people must limit the amount of sodium in their food for medical reasons. Discovery of sodium is usually credited to Sir Humphry Davy, who prepared the metal from its hydroxide in 1807; its compounds have been known since antiquity. "sodium." The Columbia Encyclopedia, 6th ed. Encyclopedia.com. (July 21, 2017). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/sodium
melting point: 97.8°C; boiling point: 883°C; density: 0.971 g/cm3; most common ions: Na+. Sodium is a soft, silvery alkali metal and reacts vigorously with water to generate hydrogen gas. The word sodium is derived from "sodanum" (a Medieval Latin name for a headache remedy), and "natrium" (Latin for "soda") is the origin of the element's symbol. Humphry Davy isolated the element in 1807 via the electrolysis of caustic soda, NaOH. Currently, sodium metal is obtained from the electrolysis of a molten mixture of sodium chloride and calcium chloride (in an electrochemical cell called the Downs cell). In nature it is never found in its elemental form, but sodium compounds are quite common. Sodium is the most abundant alkali metal and the seventh most abundant element in Earth's crust (22,700 ppm). Sodium burns yellow-orange in the flame test. The demand for metallic sodium is declining. Its primary use had been in the production of tetraethyl lead, an antiknocking gasoline additive; however, because of its damaging effects on the environment, tetraethyl lead is being phased out. Sodium is used to produce sodamide from reaction with ammonia and to reduce TiCl4, ZrCl4, and KCl to Ti, Zr, and K, respectively. An alloy of Na and K is used in nuclear reactors as a heat transfer agent. Several sodium compounds are economically important. NaCl (ordinary salt) is a de-icing compound, a condiment, and a food preservative. NaOH finds use in the manufacture of soaps, detergents, and cleansers. Na2CO3 (washing soda) is used to make glass, soaps, fire extinguishers, and "scrubbers" that remove SO2 from gases generated in power plants before it escapes into the atmosphere. The paper industry uses Na2SO4 (salt cake) to make brown wrapping paper and corrugated boxes. Appropriate sodium ion levels (along with potassium levels) are essential for proper cell function in biological systems. See also Alkali Metals. Nathan J. Barrows. "Sodium." Chemistry: Foundations and Applications. Encyclopedia.com. (July 21, 2017). http://www.encyclopedia.com/science/news-wires-white-papers-and-books/sodium
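For quick reference, the physical constants quoted in the entries above can be gathered into a small record; the sketch below (Python) also derives the metal's molar volume from the quoted atomic weight and density. The numbers are simply those printed in the entries (note that the Columbia entry quotes a slightly higher boiling point), so the record is illustrative rather than authoritative.

```python
# Physical constants for sodium as quoted in the encyclopedia entries above.
sodium = {
    "symbol": "Na",
    "atomic_number": 11,
    "atomic_weight_g_per_mol": 22.98977,
    "melting_point_C": 97.8,
    "boiling_point_C": 883.0,     # the Columbia entry quotes 892.9 C instead
    "density_g_per_cm3": 0.971,
    "most_common_ion": "Na+",
}

# Molar volume follows directly from the atomic weight and density.
molar_volume = sodium["atomic_weight_g_per_mol"] / sodium["density_g_per_cm3"]
print(f"Molar volume of sodium metal: {molar_volume:.1f} cm^3/mol")   # ~23.7
```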
so·di·um / ˈsōdēəm/ • n. the chemical element of atomic number 11, a soft silver-white reactive metal of the alkali metal group. (Symbol: Na) "sodium." The Oxford Pocket Dictionary of Current English. Encyclopedia.com. (July 21, 2017). http://www.encyclopedia.com/humanities/dictionaries-thesauruses-pictures-and-press-releases/sodium-0
For the first time since exoplanets, or planets around stars other than the sun, were discovered almost 20 years ago, X-ray observations have detected an exoplanet passing in front of its parent star. An advantageous alignment of a planet and its parent star in the system HD 189733, which is 63 light-years from Earth, enabled NASA's Chandra X-ray Observatory and the European Space Agency's XMM Newton Observatory to observe a dip in X-ray intensity as the planet transited the star. "Thousands of planet candidates have been seen to transit in only optical light," said Katja Poppenhaeger of Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Mass., who led a new study to be published in the Aug. 10 edition of The Astrophysical Journal. "Finally being able to study one in X-rays is important because it reveals new information about the properties of an exoplanet." The team used Chandra to observe six transits and data from XMM Newton observations of one. The planet, known as HD 189733b, is a hot Jupiter, meaning it is similar in size to Jupiter in our solar system but in very close orbit around its star. HD 189733b is more than 30 times closer to its star than Earth is to the sun. It orbits the star once every 2.2 days. HD 189733b is the closest hot Jupiter to Earth, which makes it a prime target for astronomers who want to learn more about this type of exoplanet and the atmosphere around it. They have used NASA's Kepler space telescope to study it at optical wavelengths, and NASA's Hubble Space Telescope to confirm it is blue in color as a result of the preferential scattering of blue light by silicate particles in its atmosphere. The study with Chandra and XMM Newton has revealed clues to the size of the planet's atmosphere. The spacecraft saw light decreasing during the transits. The decrease in X-ray light was three times greater than the corresponding decrease in optical light. "The X-ray data suggest there are extended layers of the planet's atmosphere that are transparent to optical light but opaque to X-rays," said co-author Jurgen Schmitt of Hamburger Sternwarte in Hamburg, Germany. "However, we need more data to confirm this idea." The researchers also are learning about how the planet and the star can affect one another. Astronomers have known for about a decade ultraviolet and X-ray radiation from the main star in HD 189733 are evaporating the atmosphere of HD 189733b over time. The authors estimate it is losing 100 million to 600 million kilograms of mass per second. HD 189733b's atmosphere appears to be thinning 25 percent to 65 percent faster than it would be if the planet's atmosphere were smaller. "The extended atmosphere of this planet makes it a bigger target for high-energy radiation from its star, so more evaporation occurs," said co-author Scott Wolk, also of CfA. The main star in HD 189733 also has a faint red companion, detected for the first time in X-rays with Chandra. The stars likely formed at the same time, but the main star appears to be 3 billion to 3 1/2 billion years younger than its companion star because it rotates faster, displays higher levels of magnetic activity and is about 30 times brighter in X-rays than its companion. "This star is not acting its age, and having a big planet as a companion may be the explanation," said Poppenhaeger. "It's possible this hot Jupiter is keeping the star's rotation and magnetic activity high because of tidal forces, making it behave in some ways like a much younger star." - K. Poppenhaeger, J.H.M.M. 
Schmitt, S.J. Wolk. Transit observations of the Hot Jupiter HD 189733b at X-ray wavelengths. The Astrophysical Journal, 2013 (accepted).
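The size inference follows from simple transit geometry: the fraction of starlight blocked scales roughly as the square of the planet-to-star radius ratio, so a dip three times deeper in X-rays implies an apparent radius about sqrt(3) times larger than in optical light. The sketch below (Python) runs the numbers; the ~2.4% optical transit depth for HD 189733b is an assumed, typical literature value and is not quoted in the article.

```python
import math

# Transit depth ~ (R_planet / R_star)**2, so a deeper dip means a larger
# apparent planet. The article reports an X-ray dip ~3x the optical dip.
optical_depth = 0.024           # assumed optical transit depth (~2.4%)
xray_depth = 3 * optical_depth  # as reported in the article

r_ratio_optical = math.sqrt(optical_depth)  # apparent R_p/R_star in optical light
r_ratio_xray = math.sqrt(xray_depth)        # apparent R_p/R_star in X-rays

print(f"Optical radius ratio:   {r_ratio_optical:.3f}")
print(f"X-ray radius ratio:     {r_ratio_xray:.3f}")
print(f"Apparent size increase: {r_ratio_xray / r_ratio_optical:.2f}x")
# The planet looks about sqrt(3) = 1.7 times larger in X-rays, which is why the
# authors infer extended atmospheric layers that are opaque to X-rays.
```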
[Image caption: Matter is usually classified into three classical states, with plasma sometimes added as a fourth state. From top to bottom: quartz (solid), water (liquid), nitrogen dioxide (gas), and a plasma globe (plasma).]

Before the 20th century, the term matter included ordinary matter composed of atoms and excluded other energy phenomena such as light or sound. This concept of matter may be generalized from atoms to include any objects having mass even when at rest, but this is ill-defined because an object's mass can arise from its (possibly massless) constituents' motion and interaction energies. Thus, matter does not have a universal definition, nor is it a fundamental concept in physics today. Matter is also used loosely as a general term for the substance that makes up all observable physical objects. All the objects from everyday life that we can bump into, touch or squeeze are composed of atoms. This atomic matter is in turn made up of interacting subatomic particles—usually a nucleus of protons and neutrons, and a cloud of orbiting electrons. Typically, science considers these composite particles matter because they have both rest mass and volume. By contrast, massless particles, such as photons, are not considered matter, because they have neither rest mass nor volume. However, not all particles with rest mass have a classical volume, since fundamental particles such as quarks and leptons (sometimes equated with matter) are considered "point particles" with no effective size or volume. Nevertheless, quarks and leptons together make up "ordinary matter", and their interactions contribute to the effective volume of the composite particles that make up ordinary matter. Matter commonly exists in four states (or phases): solid, liquid, gas, and plasma. However, advances in experimental techniques have revealed other previously theoretical phases, such as Bose–Einstein condensates and fermionic condensates. A focus on an elementary-particle view of matter also leads to new phases of matter, such as the quark–gluon plasma. For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, was first put forward by the Greek philosophers Leucippus (~490 BC) and Democritus (~470–380 BC). Matter should not be confused with mass, as the two are not quite the same in modern physics. For example, mass is a conserved quantity, which means that its value is unchanging through time, within closed systems. However, matter is not conserved in such systems, although this is not obvious in ordinary conditions on Earth, where matter is approximately conserved. Still, special relativity shows that matter may disappear by conversion into energy, even inside closed systems, and it can also be created from energy, within such systems. However, because mass (like energy) can neither be created nor destroyed, the quantity of mass and the quantity of energy remain the same during a transformation of matter (which represents a certain amount of energy) into non-material (i.e., non-matter) energy. This is also true in the reverse transformation of energy into matter. Different fields of science use the term matter in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings, from a time when there was no reason to distinguish mass and matter. As such, there is no single universally agreed scientific meaning of the word "matter".
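The conversion between matter and energy described above is governed by E = mc2. A minimal sketch of the arithmetic (the one-gram figure is an arbitrary illustrative amount, not taken from the text):

```python
# Mass-energy equivalence: E = m * c**2.
C = 299_792_458.0   # speed of light, m/s

def energy_from_mass(mass_kg: float) -> float:
    """Energy in joules released if the given rest mass is fully converted."""
    return mass_kg * C**2

print(f"{energy_from_mass(0.001):.2e} J")   # ~9.0e13 J for one gram of matter
# For scale, that is roughly the energy released by about 20 kilotons of TNT.
```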
Scientifically, the term "mass" is well-defined, but "matter" is not. Sometimes in the field of physics "matter" is simply equated with particles that exhibit rest mass (i.e., that cannot travel at the speed of light), such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality. - 1 Definition - 2 Structure - 3 Phases - 4 Antimatter - 5 Other types - 6 Historical development - 7 See also - 8 References - 9 Further reading - 10 External links The observation that matter occupies space goes back to antiquity. However, an explanation for why matter occupies space is recent, and is argued to be a result of the phenomenon described in the Pauli exclusion principle. Two particular examples where the exclusion principle clearly relates matter to the occupation of space are white dwarf stars and neutron stars, discussed further below. In the context of relativity, mass is not an additive quantity, in the sense that one can add the rest masses of particles in a system to get the total rest mass of the system. Thus, in relativity usually a more general view is that it is not the sum of rest masses, but the energy–momentum tensor that quantifies the amount of matter. This tensor gives the rest mass for the entire system. "Matter" therefore is sometimes considered as anything that contributes to the energy–momentum of a system, that is, anything that is not purely gravity. This view is commonly held in fields that deal with general relativity such as cosmology. In this view, light and other massless particles and fields are part of matter. The reason for this is that in this definition, electromagnetic radiation (such as light) as well as the energy of electromagnetic fields contributes to the mass of systems, and therefore appears to add matter to them. For example, light radiation (or thermal radiation) trapped inside a box would contribute to the mass of the box, as would any kind of energy inside the box, including the kinetic energy of particles held by the box. Nevertheless, isolated individual particles of light (photons) and the isolated kinetic energy of massive particles, are normally not considered to be matter. A difference between matter and mass therefore may seem to arise when single particles are examined. In such cases, the mass of single photons is zero. For particles with rest mass, such as leptons and quarks, isolation of the particle in a frame where it is not moving, removes its kinetic energy. A source of definition difficulty in relativity arises from two definitions of mass in common use, one of which is formally equivalent to total energy (and is thus observer dependent), and the other of which is referred to as rest mass or invariant mass and is independent of the observer. Only "rest mass" is loosely equated with matter (since it can be weighed). Invariant mass is usually applied in physics to unbound systems of particles. However, energies which contribute to the "invariant mass" may be weighed also in special circumstances, such as when a system that has invariant mass is confined and has no net momentum (as in the box example above). Thus, a photon with no mass may (confusingly) still add mass to a system in which it is trapped. 
The same is true of the kinetic energy of particles, which by definition is not part of their rest mass, but which does add rest mass to systems in which these particles reside (an example is the mass added by the motion of gas molecules of a bottle of gas, or by the thermal energy of any hot object). Since such mass (kinetic energies of particles, the energy of trapped electromagnetic radiation and stored potential energy of repulsive fields) is measured as part of the mass of ordinary matter in complex systems, the "matter" status of "massless particles" and fields of force becomes unclear in such systems. These problems contribute to the lack of a rigorous definition of matter in science, although mass is easier to define as the total stress–energy above (this is also what is weighed on a scale, and what is the source of gravity). A definition of "matter" based on its physical and chemical structure is: matter is made up of atoms. As an example, deoxyribonucleic acid molecules (DNA) are matter under this definition because they are made of atoms. This definition can extend to include charged atoms and molecules, so as to include plasmas (gases of ions) and electrolytes (ionic solutions), which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition. Protons, neutrons and electrons definition A definition of "matter" more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules are made of, meaning anything made of positively charged protons, neutral neutrons, and negatively charged electrons. This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that are not simply atoms or molecules, for example white dwarf matter—typically, carbon and oxygen nuclei in a sea of degenerate electrons. At a microscopic level, the constituent "particles" of matter such as protons, neutrons, and electrons obey the laws of quantum mechanics and exhibit wave–particle duality. At an even deeper level, protons and neutrons are made up of quarks and the force fields (gluons) that bind them together (see Quarks and leptons definition below). Quarks and leptons definition As seen in the above discussion, many early definitions of what can be called ordinary matter were based upon its structure or building blocks. On the scale of elementary particles, a definition that follows this tradition can be stated as: ordinary matter is everything that is composed of elementary fermions, namely quarks and leptons. The connection between these formulations follows. Leptons (the most famous being the electron), and quarks (of which baryons, such as protons and neutrons, are made) combine to form atoms, which in turn form molecules. Because atoms and molecules are said to be matter, it is natural to phrase the definition as: ordinary matter is anything that is made of the same things that atoms and molecules are made of. (However, notice that one also can make from these building blocks matter that is not atoms or molecules.) Then, because electrons are leptons, and protons, and neutrons are made of quarks, this definition in turn leads to the definition of matter as being quarks and leptons, which are the two types of elementary fermions. Carithers and Grannis state: Ordinary matter is composed entirely of first-generation particles, namely the [up] and [down] quarks, plus the electron and its neutrino. 
(Higher-generation particles quickly decay into first-generation particles, and thus are not commonly encountered.) This definition of ordinary matter is more subtle than it first appears. All the particles that make up ordinary matter (leptons and quarks) are elementary fermions, while all the force carriers are elementary bosons. The W and Z bosons that mediate the weak force are not made of quarks or leptons, and so are not ordinary matter, even if they have mass. In other words, mass is not something that is exclusive to ordinary matter. The quark–lepton definition of ordinary matter, however, identifies not only the elementary building blocks of matter, but also includes composites made from the constituents (atoms and molecules, for example). Such composites contain an interaction energy that holds the constituents together, and may constitute the bulk of the mass of the composite. As an example, to a great extent, the mass of an atom is simply the sum of the masses of its constituent protons, neutrons and electrons. However, digging deeper, the protons and neutrons are made up of quarks bound together by gluon fields (see dynamics of quantum chromodynamics) and these gluon fields contribute significantly to the mass of hadrons. In other words, most of what composes the "mass" of ordinary matter is due to the binding energy of quarks within protons and neutrons. For example, the sum of the masses of the three quarks in a nucleon is approximately 12.5 MeV/c2, which is low compared to the mass of a nucleon (approximately 938 MeV/c2). The bottom line is that most of the mass of everyday objects comes from the interaction energy of its elementary components.

Smaller building blocks issue
The Standard Model groups matter particles into three generations, where each generation consists of two quarks and two leptons. The first generation is the up and down quarks, the electron and the electron neutrino; the second includes the charm and strange quarks, the muon and the muon neutrino; the third generation consists of the top and bottom quarks and the tau and tau neutrino. The most natural explanation for this would be that quarks and leptons of higher generations are excited states of the first generations. If this turns out to be the case, it would imply that quarks and leptons are composite particles, rather than elementary particles.

In particle physics, fermions are particles that obey Fermi–Dirac statistics. Fermions can be elementary, like the electron—or composite, like the proton and neutron. In the Standard Model, there are two types of elementary fermions: quarks and leptons, which are discussed next. Quarks are particles of spin-1⁄2, implying that they are fermions. They carry an electric charge of −1⁄3 e (down-type quarks) or +2⁄3 e (up-type quarks). For comparison, an electron has a charge of −1 e. They also carry colour charge, which is the equivalent of the electric charge for the strong interaction. Quarks also undergo radioactive decay, meaning that they are subject to the weak interaction. Quarks are massive particles, and therefore are also subject to gravity.
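A quick calculation makes the binding-energy point concrete. Taking values near the middle of the quark-mass ranges quoted in the table below, the rest masses of a proton's three valence quarks account for only about one percent of the proton's roughly 938 MeV/c2 mass; the chosen quark values are illustrative midpoints, not figures stated in the text.

```python
# How much of a proton's mass comes from its valence quarks' rest masses?
# Quark masses are picked near the middle of the ranges quoted below;
# the proton mass is the standard ~938 MeV/c^2.
up_quark_MeV = 2.4        # within the 1.5-3.3 range
down_quark_MeV = 4.8      # within the 3.5-6.0 range
proton_mass_MeV = 938.3

quark_sum = 2 * up_quark_MeV + down_quark_MeV   # proton = uud
fraction = quark_sum / proton_mass_MeV

print(f"Sum of valence-quark masses: ~{quark_sum:.1f} MeV/c^2")
print(f"Fraction of the proton's mass: ~{fraction:.1%}")
# Only about 1%; the rest comes from the binding (gluon-field) energy.
```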
The quarks and their approximate properties (masses in MeV/c2, with a rough everyday comparison and the corresponding antiparticle):
- up (u): spin 1⁄2, charge +2⁄3 e, mass 1.5 to 3.3 (~5 electrons); antiparticle: antiup
- charm (c): spin 1⁄2, charge +2⁄3 e, mass 1,160 to 1,340 (~1 proton); antiparticle: anticharm
- top (t): spin 1⁄2, charge +2⁄3 e, mass 169,100 to 173,300 (~180 protons, or ~1 tungsten atom); antiparticle: antitop
- down (d): spin 1⁄2, charge −1⁄3 e, mass 3.5 to 6.0 (~10 electrons); antiparticle: antidown
- strange (s): spin 1⁄2, charge −1⁄3 e, mass 70 to 130 (~200 electrons); antiparticle: antistrange
- bottom (b): spin 1⁄2, charge −1⁄3 e, mass 4,130 to 4,370 (~5 protons); antiparticle: antibottom

Baryons are strongly interacting fermions, and so are subject to Fermi–Dirac statistics. Amongst the baryons are the protons and neutrons, which occur in atomic nuclei, but many other unstable baryons exist as well. The term baryon usually refers to triquarks—particles made of three quarks. "Exotic" baryons made of four quarks and one antiquark are known as the pentaquarks, but their existence is not generally accepted.

Baryonic matter is the part of the universe that is made of baryons (including all atoms). This part of the universe does not include dark energy, dark matter, black holes or various forms of degenerate matter, such as compose white dwarf stars and neutron stars. Microwave light seen by the Wilkinson Microwave Anisotropy Probe (WMAP) suggests that only about 4.6% of that part of the universe within range of the best telescopes (that is, matter that may be visible because light could reach us from it) is made of baryonic matter. About 23% is dark matter, and about 72% is dark energy.

In physics, degenerate matter refers to the ground state of a gas of fermions at a temperature near absolute zero. The Pauli exclusion principle requires that only two fermions can occupy a quantum state, one spin-up and the other spin-down. Hence, at zero temperature, the fermions fill up sufficient levels to accommodate all the available fermions—and in the case of many fermions, the maximum kinetic energy (called the Fermi energy) and the pressure of the gas becomes very large, and depends on the number of fermions rather than the temperature, unlike normal states of matter. Degenerate matter is thought to occur during the evolution of heavy stars. The demonstration by Subrahmanyan Chandrasekhar that white dwarf stars have a maximum allowed mass because of the exclusion principle caused a revolution in the theory of star evolution. Degenerate matter includes the part of the universe that is made up of neutron stars and white dwarfs.

Strange matter is a particular form of quark matter, usually thought of as a liquid of up, down, and strange quarks. It is contrasted with nuclear matter, which is a liquid of neutrons and protons (which themselves are built out of up and down quarks), and with non-strange quark matter, which is a quark liquid that contains only up and down quarks. At high enough density, strange matter is expected to be color superconducting. Strange matter is hypothesized to occur in the core of neutron stars, or, more speculatively, as isolated droplets that may vary in size from femtometers (strangelets) to kilometers (quark stars). There are two meanings of the term "strange matter":
- The broader meaning is just quark matter that contains three flavors of quarks: up, down, and strange. In this definition, there is a critical pressure and an associated critical density, and when nuclear matter (made of protons and neutrons) is compressed beyond this density, the protons and neutrons dissociate into quarks, yielding quark matter (probably strange matter).
- The narrower meaning is quark matter that is more stable than nuclear matter.
The idea that this could happen is the "strange matter hypothesis" of Bodmer and Witten. In this definition, the critical pressure is zero: the true ground state of matter is always quark matter. The nuclei that we see in the matter around us, which are droplets of nuclear matter, are actually metastable, and given enough time (or the right external stimulus) would decay into droplets of strange matter, i.e. strangelets.

Leptons are particles of spin-1⁄2, meaning that they are fermions. They carry an electric charge of −1 e (charged leptons) or 0 e (neutrinos). Unlike quarks, leptons do not carry colour charge, meaning that they do not experience the strong interaction. Leptons also undergo radioactive decay, meaning that they are subject to the weak interaction. Leptons are massive particles, and therefore are subject to gravity. The leptons listed here (masses in MeV/c2, with a rough comparison and the corresponding antiparticle):
- muon (μ−): spin 1⁄2, charge −1 e, mass 105.7 (~200 electrons); antiparticle: antimuon (μ+)
- tau (τ−): spin 1⁄2, charge −1 e, mass 1,777 (~2 protons); antiparticle: antitau (τ+)
- electron neutrino: spin 1⁄2, charge 0, mass < 0.000460 (< 1⁄1000 electron); antiparticle: electron antineutrino
- muon neutrino: spin 1⁄2, charge 0, mass < 0.19 (< 1⁄2 electron); antiparticle: muon antineutrino
- tau neutrino: spin 1⁄2, charge 0, mass < 18.2 (< 40 electrons); antiparticle: tau antineutrino

In bulk, matter can exist in several different forms, or states of aggregation, known as phases, depending on ambient pressure, temperature and volume. A phase is a form of matter that has a relatively uniform chemical composition and physical properties (such as density, specific heat, refractive index, and so forth). These phases include the three familiar ones (solids, liquids, and gases), as well as more exotic states of matter (such as plasmas, superfluids, supersolids, Bose–Einstein condensates, ...). A fluid may be a liquid, gas or plasma. There are also paramagnetic and ferromagnetic phases of magnetic materials. As conditions change, matter may change from one phase into another. These phenomena are called phase transitions, and are studied in the field of thermodynamics. In nanomaterials, the vastly increased ratio of surface area to volume results in matter that can exhibit properties entirely different from those of bulk material, and not well described by any bulk phase (see nanomaterials for more details). Phases are sometimes called states of matter, but this term can lead to confusion with thermodynamic states. For example, two gases maintained at different pressures are in different thermodynamic states (different pressures), but in the same phase (both are gases).

An open question, known as the baryon asymmetry problem, is why there is far more matter than antimatter in the observable universe. In particle physics and quantum chemistry, antimatter is matter that is composed of the antiparticles of those that constitute ordinary matter. If a particle and its antiparticle come into contact with each other, the two annihilate; that is, they may both be converted into other particles with equal energy in accordance with Einstein's equation E = mc2. These new particles may be high-energy photons (gamma rays) or other particle–antiparticle pairs. The resulting particles are endowed with an amount of kinetic energy equal to the difference between the rest mass of the products of the annihilation and the rest mass of the original particle–antiparticle pair, which is often quite large. Antimatter is not found naturally on Earth, except very briefly and in vanishingly small quantities (as the result of radioactive decay, lightning or cosmic rays).
This is because antimatter that came to exist on Earth outside the confines of a suitable physics laboratory would almost instantly meet the ordinary matter that Earth is made of, and be annihilated. Antiparticles and some stable antimatter (such as antihydrogen) can be made in tiny amounts, but not in enough quantity to do more than test a few of its theoretical properties. There is considerable speculation both in science and science fiction as to why the observable universe is apparently almost entirely matter, and whether other places are almost entirely antimatter instead. In the early universe, it is thought that matter and antimatter were equally represented, and the disappearance of antimatter requires an asymmetry in physical laws called charge-parity (CP) violation. CP violation can be obtained from the Standard Model, but at this time the apparent asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. Possible processes by which it came about are explored in more detail under baryogenesis.

Ordinary matter, in the quarks and leptons definition, constitutes about 4% of the energy of the observable universe. The remaining energy is theorized to be due to exotic forms, of which 23% is dark matter and 73% is dark energy. In astrophysics and cosmology, dark matter is matter of unknown composition that does not emit or reflect enough electromagnetic radiation to be observed directly, but whose presence can be inferred from gravitational effects on visible matter. Observational evidence of the early universe and the big bang theory require that this matter have energy and mass, but not be composed of either elementary fermions (as above) or gauge bosons. The commonly accepted view is that most of the dark matter is non-baryonic in nature. As such, it is composed of particles as yet unobserved in the laboratory. Perhaps they are supersymmetric particles, which are not Standard Model particles, but relics formed at very high energies in the early phase of the universe and still floating about.

In cosmology, dark energy is the name given to the antigravitating influence that is accelerating the rate of expansion of the universe. It is known not to be composed of known particles like protons, neutrons or electrons, nor of the particles of dark matter, because these all gravitate. Fully 70% of the matter density in the universe appears to be in the form of dark energy. Twenty-six percent is dark matter. Only 4% is ordinary matter. So less than 1 part in 20 is made out of matter we have observed experimentally or described in the standard model of particle physics. Of the other 96%, apart from the properties just mentioned, we know absolutely nothing. — Lee Smolin: The Trouble with Physics, p. 16

Exotic matter is a hypothetical concept of particle physics. It covers any material that violates one or more classical conditions or is not made of known baryonic particles. Such materials would possess qualities like negative mass or being repelled rather than attracted by gravity.

The pre-Socratics were among the first recorded speculators about the underlying nature of the visible world. Thales (c. 624 BC–c. 546 BC) regarded water as the fundamental material of the world. Anaximander (c. 610 BC–c. 546 BC) posited that the basic material was wholly characterless or limitless: the Infinite (apeiron). Anaximenes (flourished 585 BC, d. 528 BC) posited that the basic stuff was pneuma or air. Heraclitus (c. 535–c.
475 BC) seems to say the basic element is fire, though perhaps he means that all is change. Empedocles (c. 490–430 BC) spoke of four elements of which everything was made: earth, water, air, and fire. Meanwhile, Parmenides argued that change does not exist, and Democritus argued that everything is composed of minuscule, inert bodies of all shapes called atoms, a philosophy called atomism. All of these notions had deep philosophical problems. Aristotle (384 BC – 322 BC) was the first to put the conception on a sound philosophical basis, which he did in his natural philosophy, especially in Physics book I. He adopted as reasonable suppositions the four Empedoclean elements, but added a fifth, aether. Nevertheless, these elements are not basic in Aristotle's mind. Rather they, like everything else in the visible world, are composed of the basic principles matter and form. The word Aristotle uses for matter, ὑλη (hyle or hule), can be literally translated as wood or timber, that is, "raw material" for building. Indeed, Aristotle's conception of matter is intrinsically linked to something being made or composed. In other words, in contrast to the early modern conception of matter as simply occupying space, matter for Aristotle is definitionally linked to process or change: matter is what underlies a change of substance. For example, a horse eats grass: the horse changes the grass into itself; the grass as such does not persist in the horse, but some aspect of it—its matter—does. The matter is not specifically described (e.g., as atoms), but consists of whatever persists in the change of substance from grass to horse. Matter in this understanding does not exist independently (i.e., as a substance), but exists interdependently (i.e., as a "principle") with form and only insofar as it underlies change. It can be helpful to conceive of the relationship of matter and form as very similar to that between parts and whole. For Aristotle, matter as such can only receive actuality from form; it has no activity or actuality in itself, similar to the way that parts as such only have their existence in a whole (otherwise they would be independent wholes). René Descartes (1596–1650) originated the modern conception of matter. He was primarily a geometer. Instead of, like Aristotle, deducing the existence of matter from the physical reality of change, Descartes arbitrarily postulated matter to be an abstract, mathematical substance that occupies space: So, extension in length, breadth, and depth, constitutes the nature of bodily substance; and thought constitutes the nature of thinking substance. And everything else attributable to body presupposes extension, and is only a mode of extended — René Descartes, Principles of Philosophy For Descartes, matter has only the property of extension, so its only activity aside from locomotion is to exclude other bodies: this is the mechanical philosophy. Descartes makes an absolute distinction between mind, which he defines as unextended, thinking substance, and matter, which he defines as unthinking, extended substance. They are independent things. In contrast, Aristotle defines matter and the formal/forming principle as complementary principles that together compose one independent thing (substance). In short, Aristotle defines matter (roughly speaking) as what things are actually made of (with a potential independent existence), but Descartes elevates matter to an actual independent thing in itself. 
The continuity and difference between Descartes' and Aristotle's conceptions is noteworthy. In both conceptions, matter is passive or inert. In the respective conceptions matter has different relationships to intelligence. For Aristotle, matter and intelligence (form) exist together in an interdependent relationship, whereas for Descartes, matter and intelligence (mind) are definitionally opposed, independent substances. Descartes' justification for restricting the inherent qualities of matter to extension is its permanence, but his real criterion is not permanence (which equally applied to color and resistance), but his desire to use geometry to explain all material properties. Like Descartes, Hobbes, Boyle, and Locke argued that the inherent properties of bodies were limited to extension, and that so-called secondary qualities, like color, were only products of human perception. Isaac Newton (1643–1727) inherited Descartes' mechanical conception of matter. In the third of his "Rules of Reasoning in Philosophy", Newton lists the universal qualities of matter as "extension, hardness, impenetrability, mobility, and inertia". Similarly in Optics he conjectures that God created matter as "solid, massy, hard, impenetrable, movable particles", which were "...even so very hard as never to wear or break in pieces". The "primary" properties of matter were amenable to mathematical description, unlike "secondary" qualities such as color or taste. Like Descartes, Newton rejected the essential nature of secondary qualities. Newton developed Descartes' notion of matter by restoring to matter intrinsic properties in addition to extension (at least on a limited basis), such as mass. Newton's use of gravitational force, which worked "at a distance", effectively repudiated Descartes' mechanics, in which interactions happened exclusively by contact. Though Newton's gravity would seem to be a power of bodies, Newton himself did not admit it to be an essential property of matter. Carrying the logic forward more consistently, Joseph Priestley argued that corporeal properties transcend contact mechanics: chemical properties require the capacity for attraction. He argued matter has other inherent powers besides the so-called primary qualities of Descartes, et al. Since Priestley's time, there has been a massive expansion in knowledge of the constituents of the material world (viz., molecules, atoms, subatomic particles), but there has been no further development in the definition of matter. Rather the question has been set aside. Noam Chomsky summarizes the situation that has prevailed since that time: What is the concept of body that finally emerged?[...] The answer is that there is no clear and definite conception of body.[...] Rather, the material world is whatever we discover it to be, with whatever properties it must be assumed to have for the purposes of explanatory theory. Any intelligible theory that offers genuine explanations and that can be assimilated to the core notions of physics becomes part of the theory of the material world, part of our account of body. If we have such a theory in some domain, we seek to assimilate it to the core notions of physics, perhaps modifying these notions as we carry out this enterprise. — Noam Chomsky, 'Language and problems of knowledge: the Managua lectures, p. 
144 So matter is whatever physics studies and the object of study of physics is matter: there is no independent general definition of matter, apart from its fitting into the methodology of measurement and controlled experimentation. In sum, the boundaries between what constitutes matter and everything else remain as vague as the demarcation problem of delimiting science from everything else.

Late nineteenth and early twentieth centuries
The common definition in terms of occupying space and having mass is in contrast with most physical and chemical definitions of matter, which rely instead upon its structure and upon attributes not necessarily related to volume and mass. At the turn of the nineteenth century, the knowledge of matter began a rapid evolution. Aspects of the Newtonian view still held sway. James Clerk Maxwell discussed matter in his work Matter and Motion. He carefully separates "matter" from space and time, and defines it in terms of the object referred to in Newton's first law of motion. However, the Newtonian picture was not the whole story. In the 19th century, the term "matter" was actively discussed by a host of scientists and philosophers, and a brief outline can be found in Levere. A textbook discussion from 1870 suggests matter is what is made up of atoms: Three divisions of matter are recognized in science: masses, molecules and atoms. A Mass of matter is any portion of matter appreciable by the senses. A Molecule is the smallest particle of matter into which a body can be divided without losing its identity. An Atom is a still smaller particle produced by division of a molecule. Rather than simply having the attributes of mass and occupying space, matter was held to have chemical and electrical properties. The famous physicist J. J. Thomson wrote about the "constitution of matter" and was concerned with the possible connection between matter and electrical charge. There is an entire literature concerning the "structure of matter", ranging from the "electrical structure" in the early 20th century, to the more recent "quark structure of matter", introduced today with the remark: Understanding the quark structure of matter has been one of the most important advances in contemporary physics. In this connection, physicists speak of matter fields, and speak of particles as "quantum excitations of a mode of the matter field". And here is a quote from de Sabbata and Gasperini: "With the word "matter" we denote, in this context, the sources of the interactions, that is spinor fields (like quarks and leptons), which are believed to be the fundamental components of matter, or scalar fields, like the Higgs particles, which are used to introduce mass in a gauge theory (and that, however, could be composed of more fundamental fermion fields)."

The modern conception of matter has been refined many times in history, in light of the improvement in knowledge of just what the basic building blocks are, and in how they interact. In the late 19th century with the discovery of the electron, and in the early 20th century, with the discovery of the atomic nucleus, and the birth of particle physics, matter was seen as made up of electrons, protons and neutrons interacting to form atoms. Today, we know that even protons and neutrons are not indivisible; they can be divided into quarks, while electrons are part of a particle family called leptons.
Both quarks and leptons are elementary particles, and are currently seen as being the fundamental constituents of matter. These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best explanation for all of physics, but despite decades of efforts, gravity cannot yet be accounted for at the quantum level; it is only described by classical physics (see quantum gravity and graviton). Interactions between quarks and leptons are the result of an exchange of force-carrying particles (such as photons) between quarks and leptons. The force-carrying particles are not themselves building blocks. As one consequence, mass and energy (which cannot be created or destroyed) cannot always be related to matter (which can be created out of non-matter particles such as photons, or even out of pure energy, such as kinetic energy). Force carriers are usually not considered matter: the carriers of the electric force (photons) possess energy (see Planck relation) and the carriers of the weak force (W and Z bosons) are massive, but neither are considered matter either. However, while these particles are not considered matter, they do contribute to the total mass of atoms, subatomic particles, and all systems that contain them. The term "matter" is used throughout physics in a bewildering variety of contexts: for example, one refers to "condensed matter physics", "elementary matter", "partonic" matter, "dark" matter, "anti"-matter, "strange" matter, and "nuclear" matter. In discussions of matter and antimatter, normal matter has been referred to by Alfvén as koinomatter (Gk. common matter). It is fair to say that in physics, there is no broad consensus as to a general definition of matter, and the term "matter" usually is used in conjunction with a specifying modifier. - R. Penrose (1991). "The mass of the classical vacuum". In S. Saunders, H.R. Brown. The Philosophy of Vacuum. Oxford University Press. p. 21. ISBN 0-19-824449-5. - "Matter (physics)". McGraw-Hill's Access Science: Encyclopedia of Science and Technology Online. Retrieved 2009-05-24. - P. Davies (1992). The New Physics: A Synthesis. Cambridge University Press. p. 1. ISBN 0-521-43831-4. - G. 't Hooft (1997). In search of the ultimate building blocks. Cambridge University Press. p. 6. ISBN 0-521-57883-3. - "RHIC Scientists Serve Up "Perfect" Liquid" (Press release). Brookhaven National Laboratory. 18 April 2005. Retrieved 2009-09-15. - J. Olmsted; G.M. Williams (1996). Chemistry: The Molecular Science (2nd ed.). Jones & Bartlett. p. 40. ISBN 0-8151-8450-6. - J. Mongillo (2007). Nanotechnology 101. Greenwood Publishing. p. 30. ISBN 0-313-33880-9. - P.C.W. Davies (1979). The Forces of Nature. Cambridge University Press. p. 116. ISBN 0-521-22523-X. - S. Weinberg (1998). The Quantum Theory of Fields. Cambridge University Press. p. 2. ISBN 0-521-55002-5. - M. Masujima (2008). Path Integral Quantization and Stochastic Quantization. Springer. p. 103. ISBN 3-540-87850-5. - S.M. Walker; A. King (2005). What is Matter?. Lerner Publications. p. 7. ISBN 0-8225-5131-4. - J.Kenkel; P.B. Kelter; D.S. Hage (2000). Chemistry: An Industry-based Introduction with CD-ROM. CRC Press. p. 2. ISBN 1-56670-303-4. All basic science textbooks define matter as simply the collective aggregate of all material substances that occupy space and have mass or weight. - K.A. Peacock (2008). The Quantum Revolution: A Historical Perspective. 
Greenwood Publishing Group. p. 47. ISBN 0-313-33448-X. - M.H. Krieger (1998). Constitutions of Matter: Mathematically Modeling the Most Everyday of Physical Phenomena. University of Chicago Press. p. 22. ISBN 0-226-45305-7. - S.M. Caroll (2004). Spacetime and Geometry. Addison Wesley. pp. 163–164. ISBN 0-8053-8732-3. - P. Davies (1992). The New Physics: A Synthesis. Cambridge University Press. p. 499. ISBN 0-521-43831-4. Matter fields: the fields whose quanta describe the elementary particles that make up the material content of the Universe (as opposed to the gravitons and their supersymmetric partners). - G. F. Barker (1870). "Divisions of matter". A text-book of elementary chemistry: theoretical and inorganic. John F Morton & Co. p. 2. ISBN 978-1-4460-2206-1. - M. de Podesta (2002). Understanding the Properties of Matter (2nd ed.). CRC Press. p. 8. ISBN 0-415-25788-3. - B. Povh; K. Rith; C. Scholz; F. Zetsche; M. Lavelle (2004). "Part I: Analysis: The building blocks of matter". Particles and Nuclei: An Introduction to the Physical Concepts (4th ed.). Springer. ISBN 3-540-20168-8. - B. Carithers, P. Grannis (1995). "Discovery of the Top Quark" (PDF). Beam Line (SLAC National Accelerator Laboratory) 25 (3): 4–16. - D. Green (2005). High PT physics at hadron colliders. Cambridge University Press. p. 23. ISBN 0-521-83509-7. - L. Smolin (2007). The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Mariner Books. p. 67. ISBN 0-618-91868-X. - The W boson mass is 80.398 GeV; see Figure 1 in C. Amsler et al. (Particle Data Group) (2008). "Review of Particle Physics: The Mass and Width of the W Boson" (PDF). Physics Letters B 667: 1. Bibcode:2008PhLB..667....1P. doi:10.1016/j.physletb.2008.07.018. - I.J.R. Aitchison; A.J.G. Hey (2004). Gauge Theories in Particle Physics. CRC Press. p. 48. ISBN 0-7503-0864-8. - B. Povh; K. Rith; C. Scholz; F. Zetsche; M. Lavelle (2004). Particles and Nuclei: An Introduction to the Physical Concepts. Springer. p. 103. ISBN 3-540-20168-8. - T. Hatsuda (2008). "Quark–gluon plasma and QCD". In H. Akai. Condensed matter theories 21. Nova Publishers. p. 296. ISBN 1-60021-501-7. - K.W Staley (2004). "Origins of the Third Generation of Matter". The Evidence for the Top Quark. Cambridge University Press. p. 8. ISBN 0-521-82710-8. - Y. Ne'eman; Y. Kirsh (1996). The Particle Hunters (2nd ed.). Cambridge University Press. p. 276. ISBN 0-521-47686-0. [T]he most natural explanation to the existence of higher generations of quarks and leptons is that they correspond to excited states of the first generation, and experience suggests that excited systems must be composite - C. Amsler et al. (Particle Data Group) (2008). "Reviews of Particle Physics: Quarks" (PDF). Physics Letters B 667: 1. Bibcode:2008PhLB..667....1P. doi:10.1016/j.physletb.2008.07.018. - "Five Year Results on the Oldest Light in the Universe". NASA. 2008. Retrieved 2008-05-02. - H.S. Goldberg; M.D. Scadron (1987). Physics of Stellar Evolution and Cosmology. Taylor & Francis. p. 202. ISBN 0-677-05540-4. - H.S. Goldberg; M.D. Scadron (1987). Physics of Stellar Evolution and Cosmology. Taylor & Francis. p. 233. ISBN 0-677-05540-4. - J.-P. Luminet; A. Bullough; A. King (1992). Black Holes. Cambridge University Press. p. 75. ISBN 0-521-40906-3. - A. Bodmer (1971). "Collapsed Nuclei". Physical Review D 4 (6): 1601. Bibcode:1971PhRvD...4.1601B. doi:10.1103/PhysRevD.4.1601. - E. Witten (1984). "Cosmic Separation of Phases". Physical Review D 30 (2): 272. 
Bibcode:1984PhRvD..30..272W. doi:10.1103/PhysRevD.30.272. - C. Amsler et al. (Particle Data Group) (2008). "Review of Particle Physics: Leptons" (PDF). Physics Letters B 667: 1. Bibcode:2008PhLB..667....1P. doi:10.1016/j.physletb.2008.07.018. - C. Amsler et al. (Particle Data Group) (2008). "Review of Particle Physics: Neutrinos Properties" (PDF). Physics Letters B 667: 1. Bibcode:2008PhLB..667....1P. doi:10.1016/j.physletb.2008.07.018. - S. R. Logan (1998). Physical Chemistry for the Biomedical Sciences. CRC Press. pp. 110–111. ISBN 0-7484-0710-3. - P.J. Collings (2002). "Chapter 1: States of Matter". Liquid Crystals: Nature's Delicate Phase of Matter. Princeton University Press. ISBN 0-691-08672-9. - D.H. Trevena (1975). "Chapter 1.2: Changes of phase". The Liquid Phase. Taylor & Francis. ISBN 978-0-85109-031-3. - National Research Council (US) (2006). Revealing the hidden nature of space and time. National Academies Press. p. 46. ISBN 0-309-10194-8. - J.P. Ostriker; P.J. Steinhardt (2003). "New Light on Dark Matter". Science 300 (5627): 1909–13. arXiv:astro-ph/0306402. Bibcode:2003Sci...300.1909O. doi:10.1126/science.1085976. PMID 12817140. - K. Pretzl (2004). "Dark Matter, Massive Neutrinos and Susy Particles". Structure and Dynamics of Elementary Matter. Walter Greiner. p. 289. ISBN 1-4020-2446-0. - K. Freeman; G. McNamara (2006). "What can the matter be?". In Search of Dark Matter. Birkhäuser Verlag. p. 105. ISBN 0-387-27616-5. - J.C. Wheeler (2007). Cosmic Catastrophes: Exploding Stars, Black Holes, and Mapping the Universe. Cambridge University Press. p. 282. ISBN 0-521-85714-7. - J. Gribbin (2007). The Origins of the Future: Ten Questions for the Next Ten Years. Yale University Press. p. 151. ISBN 0-300-12596-8. - P. Schneider (2006). Extragalactic Astronomy and Cosmology. Springer. p. 4, Fig. 1.4. ISBN 3-540-33174-3. - T. Koupelis; K.F. Kuhn (2007). In Quest of the Universe. Jones & Bartlett Publishers. p. 492; Fig. 16.13. ISBN 0-7637-4387-9. - M. H. Jones; R. J. Lambourne; D. J. Adams (2004). An Introduction to Galaxies and Cosmology. Cambridge University Press. p. 21; Fig. 1.13. ISBN 0-521-54623-0. - D. Majumdar (2007). "Dark matter — possible candidates and direct detection". arXiv:hep-ph/0703310 [hep-ph]. - K.A. Olive (2003). "Theoretical Advanced Study Institute lectures on dark matter". arXiv:astro-ph/0301505 [astro-ph]. - K.A. Olive (2009). "Colliders and Cosmology". European Physical Journal C 59 (2): 269–295. arXiv:0806.1208. Bibcode:2009EPJC...59..269O. doi:10.1140/epjc/s10052-008-0738-8. - J.C. Wheeler (2007). Cosmic Catastrophes. Cambridge University Press. p. 282. ISBN 0-521-85714-7. - L. Smolin (2007). The Trouble with Physics. Mariner Books. p. 16. ISBN 0-618-91868-X. - S. Toulmin; J. Goodfield (1962). The Architecture of Matter. University of Chicago Press. pp. 48–54. - Discussed by Aristotle in Physics, esp. book I, but also later; as well as Metaphysics I–II. - For a good explanation and elaboration, see R.J. Connell (1966). Matter and Becoming. Priory Press. - H. G. Liddell; R. Scott; J. M. Whiton (1891). A lexicon abridged from Liddell & Scott's Greek–English lexicon. Harper and Brothers. p. 725. - R. Descartes (1644). "The Principles of Human Knowledge". Principles of Philosophy I. p. 53. - though even this property seems to be non-essential (René Descartes, Principles of Philosophy II , "On the Principles of Material Things", no. 4.) - R. Descartes (1644). "The Principles of Human Knowledge". Principles of Philosophy I. pp. 8, 54, 63. - D.L. 
Schindler (1986). "The Problem of Mechanism". In D.L. Schindler. Beyond Mechanism. University Press of America. - E.A. Burtt, Metaphysical Foundations of Modern Science (Garden City, New York: Doubleday and Company, 1954), 117–118. - J.E. McGuire and P.M. Heimann, "The Rejection of Newton's Concept of Matter in the Eighteenth Century", The Concept of Matter in Modern Philosophy ed. Ernan McMullin (Notre Dame: University of Notre Dame Press, 1978), 104–118 (105). - Isaac Newton, Mathematical Principles of Natural Philosophy, trans. A. Motte, revised by F. Cajori (Berkeley: University of California Press, 1934), pp. 398–400. Further analyzed by Maurice A. Finocchiaro, "Newton's Third Rule of Philosophizing: A Role for Logic in Historiography", Isis 65:1 (Mar. 1974), pp. 66–73. - Isaac Newton, Optics, Book III, pt. 1, query 31. - McGuire and Heimann, 104. - N. Chomsky (1988). Language and problems of knowledge: the Managua lectures (2nd ed.). MIT Press. p. 144. ISBN 0-262-53070-8. - McGuire and Heimann, 113. - Nevertheless, it remains true that the mathematization regarded as requisite for a modern physical theory carries its own implicit notion of matter, which is very like Descartes', despite the demonstrated vacuity of the latter's notions. - M. Wenham (2005). Understanding Primary Science: Ideas, Concepts and Explanations (2nd ed.). Paul Chapman Educational Publishing. p. 115. ISBN 1-4129-0163-4. - J.C. Maxwell (1876). Matter and Motion. Society for Promoting Christian Knowledge. p. 18. ISBN 0-486-66895-9. - T.H. Levere (1993). "Introduction". Affinity and Matter: Elements of Chemical Philosophy, 1800–1865. Taylor & Francis. ISBN 2-88124-583-8. - G.F. Barker (1870). "Introduction". A Text Book of Elementary Chemistry: Theoretical and Inorganic. John P. Morton and Company. p. 2. - J. J. Thomson (1909). "Preface". Electricity and Matter. A. Constable. - O.W. Richardson (1914). "Chapter 1". The Electron Theory of Matter. The University Press. - M. Jacob (1992). The Quark Structure of Matter. World Scientific. ISBN 981-02-3687-5. - V. de Sabbata; M. Gasperini (1985). Introduction to Gravitation. World Scientific. p. 293. ISBN 9971-5-0049-3. - The history of the concept of matter is a history of the fundamental length scales used to define matter. Different building blocks apply depending upon whether one defines matter on an atomic or elementary particle level. One may use a definition that matter is atoms, or that matter is hadrons, or that matter is leptons and quarks depending upon the scale at which one wishes to define matter. B. Povh; K. Rith; C. Scholz; F. Zetsche; M. Lavelle (2004). "Fundamental constituents of matter". Particles and Nuclei: An Introduction to the Physical Concepts (4th ed.). Springer. ISBN 3-540-20168-8. - J. Allday (2001). Quarks, Leptons and the Big Bang. CRC Press. p. 12. ISBN 0-7503-0806-0. - B.A. Schumm (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. p. 57. ISBN 0-8018-7971-X. - See for example, M. Jibu; K. Yasue (1995). Quantum Brain Dynamics and Consciousness. John Benjamins Publishing Company. p. 62. ISBN 1-55619-183-9., B. Martin (2009). Nuclear and Particle Physics (2nd ed.). John Wiley & Sons. p. 125. ISBN 0-470-74275-5. and K. W. Plaxco; M. Gross (2006). Astrobiology: A Brief Introduction. Johns Hopkins University Press. p. 23. ISBN 0-8018-8367-9. - P. A. Tipler; R. A. Llewellyn (2002). Modern Physics. Macmillan. pp. 89–91, 94–95. ISBN 0-7167-4345-0. - P. Schmüser; H. Spitzer (2002). "Particles". 
In L. Bergmann et al. Constituents of Matter: Atoms, Molecules, Nuclei. CRC Press. pp. 773 ff. ISBN 0-8493-1202-7. - P. M. Chaikin; T. C. Lubensky (2000). Principles of Condensed Matter Physics. Cambridge University Press. p. xvii. ISBN 0-521-79450-1. - W. Greiner (2003). W. Greiner, M.G. Itkis, G. Reinhardt, M.C. Güçlü, ed. Structure and Dynamics of Elementary Matter. Springer. p. xii. ISBN 1-4020-2445-2. - P. Sukys (1999). Lifting the Scientific Veil: Science Appreciation for the Nonscientist. Rowman & Littlefield. p. 87. ISBN 0-8476-9600-6. - Lillian Hoddeson; Michael Riordan, eds. (1997). The Rise of the Standard Model. Cambridge University Press. ISBN 0-521-57816-7. - Timothy Paul Smith (2004). "The search for quarks in ordinary matter". Hidden Worlds. Princeton University Press. p. 1. ISBN 0-691-05773-7. - Harald Fritzsch (2005). Elementary Particles: Building blocks of matter. World Scientific. p. 1. ISBN 981-256-141-2. - Bertrand Russell (1992). "The philosophy of matter". A Critical Exposition of the Philosophy of Leibniz (Reprint of 1937 2nd ed.). Routledge. p. 88. ISBN 0-415-08296-X. - Stephen Toulmin and June Goodfield, The Architecture of Matter (Chicago: University of Chicago Press, 1962). - Richard J. Connell, Matter and Becoming (Chicago: The Priory Press, 1966). - Ernan McMullin, The Concept of Matter in Greek and Medieval Philosophy (Notre Dame, Indiana: Univ. of Notre Dame Press, 1965). - Ernan McMullin, The Concept of Matter in Modern Philosophy (Notre Dame, Indiana: University of Notre Dame Press, 1978). - Visionlearning Module on Matter - Matter in the universe: How much Matter is in the Universe? - NASA on superfluid core of neutron star - Matter and Energy: A False Dichotomy – Conversations About Science with Theoretical Physicist Matt Strassler
Exponential growth is an increase in some quantity that follows the relationship N(t) = A·e^(kt), where A and k are positive real-valued constants. Before diving further into the mathematics, let’s look at a graphical example of exponential growth. This plot assumes that A = 3 and k = 1. The function’s initial value at t = 0 is A = 3. The variable k is the growth constant; the larger the value of k, the faster the growth will occur. The exponential behavior explored above is the solution to the differential equation below: dN/dt = kN. The differential equation states that the rate of change of the quantity is directly proportional to its current size. Initially, the small population (3 in the above graph) grows at a relatively slow rate. However, as the population grows, the growth rate increases rapidly.
Exponential Growth: Example Problems
Exponential growth can be found in a range of natural phenomena, from the growth of bacterial populations to the speed of computer processors.
Problem 1: A colony of bacteria doubles its population every 4 hours. If the colony originally has ten bacteria, how large will the colony be 24 hours later?
Solution: Since the colony has an original population of 10, then A = 10. Knowing that the population will be 20 four hours later, we can solve for the growth constant.
- N(t) = A·e^(kt)
- 20 = 10·e^(k·4 hours)
- ln(2) = (4 hours)·k
- k ≈ 0.173 per hour
Then, the growth constant can be used to determine the population’s size one day later.
- N(24 hours) = 10·e^(0.173 per hour · 24 hours)
- N(24 hours) ≈ 635
(Using the unrounded growth constant k = ln(2)/4, 24 hours is exactly six doublings, so the answer is 10·2^6 = 640.) Amazingly, the original handful of bacteria will blossom into a colony of several hundred in one day’s time. That’s the power of exponential growth.
Problem 2: A client deposits $100 in a savings account at the local bank. The account grows by 1% interest, compounded annually. What will the value of the account be after ten years?
Solution: The initial size of the account is $100, so A = 100. The account’s value will be $101 after one year, due to the interest. Knowing this, we can calculate the growth constant.
- N(t) = A·e^(kt)
- 101 = 100·e^(k·1 year)
- ln(1.01) = (1 year)·k
- k ≈ 0.00995 per year
To find the value of the account at ten years, set t = 10.
- N(10 years) = $100·e^(0.00995 per year · 10 years)
- N(10 years) ≈ $110.46
Matthews, John A. “Exponential Growth.” 2014: 387–387. Print. Stephanie Glen. "Exponential Growth: Simple Definition, Step by Step Examples" From CalculusHowTo.com: Calculus for the rest of us! https://www.calculushowto.com/exponential-growth/
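For readers who want to verify the arithmetic, here is a minimal Python sketch (not part of the original article; the function and variable names are our own) that evaluates N(t) = A·e^(kt) for both example problems:

import math

def exponential_growth(initial_amount, growth_constant, elapsed_time):
    # N(t) = A * e^(k * t): continuous exponential growth
    return initial_amount * math.exp(growth_constant * elapsed_time)

# Problem 1: a colony of 10 bacteria that doubles every 4 hours
k_bacteria = math.log(2) / 4                        # growth constant, per hour (about 0.173)
print(exponential_growth(10, k_bacteria, 24))       # about 640 bacteria after 24 hours

# Problem 2: $100 earning 1% interest, compounded annually, for 10 years
k_account = math.log(1.01)                          # growth constant, per year (about 0.00995)
print(round(exponential_growth(100, k_account, 10), 2))   # 110.46

Using the unrounded growth constants reproduces the exact values of 640 bacteria and $110.46.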
Scarcity Definition of Economics
Marshall’s materialist definition of economics was unable to convince Lionel Robbins. Therefore, Robbins attempted to define economics in a better sense in his book “The Nature and Significance of Economic Science”. His efforts gave us what is widely regarded as the classic ‘scarcity definition of economics’. According to Lionel Robbins, “Economics is the science, which studies human behavior as a relationship between ends and scarce means, which have alternative uses.” If you decipher the definition, you will understand that Robbins’ definition is based on four fundamental characteristics of human existence: unlimited wants, scarce means (resources), alternative uses of scarce means, and the economic problem. In the scarcity definition, “ends” refers to human wants. As you know, human wants are unlimited. This basic characteristic is the cause of all economic activities. Moreover, human wants can never be fully satisfied: if one want is fulfilled, another will emerge eventually. This ensures that there is continuity in economic activity. In the definition, “means” refers to available resources. Resources could be anything, such as raw materials, time, money and labor, that helps to satisfy human wants. While human wants are unlimited, the resources available to satisfy them are limited. The world does not offer everything in abundance. If resources were available in abundance, they would become free goods, such as air and water, and there would be no need to study economics. In economics, we are concerned not with absolute scarcity but with relative scarcity. Absolute scarcity means that goods are not available at all. Relative scarcity, however, measures scarcity in relation to demand. For example, there is a huge demand for green tea in the market. If the green tea stock available in the market is unable to meet the existing demand, then we can say that there is a scarcity of green tea. Hence, in economics, demand determines everything: if there is no demand, the question of scarcity of a good does not arise.
Alternative Uses of Scarce Means
As stated above, available resources are scarce, whereas human wants are unlimited. Though economic resources can be used to produce various things, if we choose one thing, we must give up others. For example, you can use your land either to cultivate or to build buildings. If you choose to cultivate, you must give up the buildings. This peculiar economic problem arises because of the alternative uses and scarcity of resources.
The Economic Problem
The demand for scarce resources is very high due to unlimited human wants. At the same time, the scarce resources have alternative uses. Now we have to decide what to produce, how to produce and for whom to produce. This is called the economic problem. Almost all pricing and valuation theories in economics are developed to answer these questions. For example, suppose you have $1,000, and you need to buy various things to keep your life comfortable. But you cannot buy everything you want with the $1,000. Economics is all about spending your $1,000 optimally to keep yourself satisfied at the maximum level possible. If you do not choose carefully what to buy and what not to buy, the available resources, i.e. the $1,000 in our example, will not be utilized properly.
Economic Problem and Technical Problem
It is imperative that you understand the difference between an economic problem and a technical problem. A technical problem has something to do with accomplishing a task. 
In simple words, a technical problem arises when you try to complete a given project with limited resources. In this case, you need to find the best possible way to carry out the project so that you utilize the scarce resources optimally. An economic problem, by contrast, has to do with decision making. It arises when you must decide which project to carry out with the scarce resources. You have many choices, because the same resources could be used for any of the projects, but the resources available are not enough to carry out all of them. Now you have to decide in which project you should invest the scarce resources, remembering that if you choose one you must give up the others. Hence, choosing the best project is the economic problem. Therefore, we can conclude that economics is a science that helps a society solve its economic problems.
Merits of Robbins’ Definition
Robbins’ definition undoubtedly gives us a complete sense of what economics is. In particular, his conceptions of scarce resources and unlimited human wants are very realistic, and are applicable at all times and in all situations. In this manner, Robbins’ scarcity definition demolished the old structure of economics and made us look at economics in a new dimension.
- First of all, Robbins’ explanation of the economic problem is very clear and precise. According to Robbins, the three important factors that cause economic problems are unlimited wants, scarce resources and multiple uses of resources. Indeed, economics revolves around these three factors.
- Another important aspect of Robbins’ definition is that it tries to analyze human behavior. When there are many choices in front of you, you need to make a rational decision as to which one is more important. Thus, human behavior plays a vital role in economic decision-making.
- Robbins’ definition is universal in application. All nations and economic systems face the problems of unlimited human wants and scarce resources. That is why Robbins’ definition is the more realistic one.
- There were definitions of economics before Lionel Robbins’. However, those definitions were based on many assumptions and conditions. For example, neo-classical economists said that for a human activity to be included in the subject of economics, it must satisfy an important condition: the activity must create material goods that contribute to human welfare. By restricting itself to material goods, that view ignores services rendered by a teacher, an advocate or a doctor, even though those services are also economic activities. Robbins’ definition is a positive study: it explains economics based on ‘what it is’ and not on ‘what ought to be’.
- Robbins’ definition undoubtedly widened the scope of economics, as it included the production of immaterial goods in the subject matter of economics. According to Robbins, economic activities include both the production of machines and the production of ‘philosophy’.
- The concept of opportunity cost can be seen as the gift of the scarcity definition, because Robbins’ definition explains how and why economic choices arise. When you have choices and limited resources, you can select only one or a few of them and must give up the others. In this way, opportunity cost gains significance in this definition.
- The scarcity definition of Robbins also touches the concept of scale of preference implicitly. When you have many wants and scarce resources, you list out all of your wants. 
Obviously, the top one in the list gives you the maximum satisfaction. This is called the scale of preference. The scale of preference is an important concept in economics, and it is implied by this definition.
Criticism of Robbins’ Definition
1. Robbins’ scarcity definition appears final and complete. However, important critics such as Lindley Fraser, R.W. Souter, T. Parsons and Barbara Wootton criticized the definition vehemently. According to them, the scarcity definition ignored the normative aspect of human behavior. A study of ‘what is’ becomes incomplete if we ignore ‘what ought to be’. Critics said that ignoring the normative aspect of human behavior made economics an ‘obscure science’.
2. D.H. Robertson feels that Robbins could have considered idle resources as well. According to Robbins’ definition, the scarce resources are fully employed due to unlimited human wants. But in the real world, this is not the case: there are always idle resources. Unemployed people are an example of idle resources. Idle resources arise basically due to defects in the economic system, and there is no perfect economic system that ensures full employment of resources. In addition, D.H. Robertson says that the definition widened the scope of economics too much, as it included services by a cricket captain or a cinema actor, and in this way the definition includes non-economic activity. Hence, Robertson feels that the definition is ‘at once too narrow and too wide’.
3. According to critics, the concept of scarcity had held pride of place in economics for a long time before Robbins explained it. For example, the physiocrats and classical writers had already discussed the concept of scarcity. However, this criticism carries little weight, because Robbins handled the concept of scarcity in a different way and beautifully explained what economics is.
4. Boulding has criticized Robbins for not including the welfare aspect in his definition. However, this argument does not hold, because Robbins’ definition states that scarce resources are allocated optimally to produce goods that give maximum satisfaction. Hence, the concept of welfare enters Robbins’ definition implicitly.
5. Robbins’ definition works well at the micro level; that is, it deals with individuals facing unlimited wants and scarce resources. But when we take a nation as a whole into consideration, Robbins’ definition is rather vague in addressing aggregate economic problems. However, this argument also does not hold, as Robbins’ definition is universal in application: a nation’s economic problems are more or less similar to an individual’s, except in scale.
6. Robbins’ scarcity definition limits the scope of the subject. Robbins confined the scope of economics to satisfying unlimited wants and allocating scarce resources. But economics in reality is a growing science; new branches and concepts are included in the subject matter of economics day by day. This growth aspect of economics was completely ignored by Robbins.
7. Another important criticism of the scarcity definition concerns the economics of abundance. According to Lionel Robbins, scarcity is the main cause of the economic problem. But John Kenneth Galbraith, a noted American economist, denied this statement in his book “The Affluent Society”: in the 1930s, the economic problem emerged out of abundance, when there were too many goods but nobody to buy them for lack of purchasing power. Hence, Galbraith doubts the universal application of the scarcity definition.
© 2013 Sundaram Ponnusamy
Have you ever wondered how airplanes stay up and what allows them to fly at such high altitudes? The Bernoulli Principle allows us to figure that out. An airplane gets its lift from the Bernoulli Principle, the aerodynamic principle that describes how objects can be moved and controlled by flowing air. Bernoulli’s Principle is important in many areas of physics and science. It states that as the velocity of a fluid increases, the pressure exerted by the fluid decreases. In practical terms, it means that a slow-flowing fluid exerts more pressure than a fast-flowing fluid: pressure and velocity are inversely related, so where the pressure is high the velocity is low, and where the pressure is low the velocity is high. This explains why, while you are taking a shower, the shower curtain tends to stick to you or drift towards you: the fast-moving water from the shower creates an area of low pressure, and the higher-pressure air outside pushes the curtain towards that low-pressure region. The principle also applies to airflow, and one of the most dramatic everyday examples is the airplane. Air moves faster over the upper surface of the wing, which causes the air pressure there to decrease; the pressure under the wing is therefore higher than the pressure above it, so the wing is pushed upwards and the plane stays in the air. Flying birds use this principle all the time when they hunt, mate, or fly away. The Bernoulli Principle allows the birds to fly and stay aloft: the natural curve in a bird’s wing produces the same effect. Humans can use biomimicry to help design better, more efficient airplanes for whatever the need may be. We can make a model demonstrating the Bernoulli Principle in many ways. One way to demonstrate Bernoulli’s Principle is to use a hairdryer and a ping pong ball: the hairdryer is our source of wind, and the ping pong ball is used because, one, it is lightweight, which makes it easy to show that the lift must be stronger than its weight, and two, it is easily accessible. The speed of the wind creates a difference in pressure (high pressure where the velocity is low, low pressure where the velocity is high), and this makes the ball fly around, or, as others would say, hover. We can identify the Bernoulli Principle taking effect by watching what happens. We notice there is a spin on the ball, and the ball bounces around as though confined to a field: the area of low pressure in the airstream, with higher pressure around it, pulls the ball towards the middle and gives it a floating effect. Now, what can we document? We can make a graph and see whether a difference in spacing between the objects influences how high or low the ping pong ball goes. There are many ways to give this experiment a twist. Another way to perform it is to tie strings onto two different cans and hang them up by the strings. We can then use the same blow dryer to blow wind between them, and the cans will pull together because of the different pressures in the air. 
This also shows how, when there is a difference between high and low pressure in the air, the high-pressure air moves towards the low-pressure air, acting almost like a magnet. To conduct my experiment I will be using cans and a hairdryer. I will hang the cans on strings taped to a doorway, and I will make a graph to record my data. I will start with 12 centimeters between the strings, then grab my stopwatch and see how long it takes for the two cans to touch. I will do this 5 times to get the mean. Then I will move the cans 1 more centimeter apart each time, so the next trial will be at 13 centimeters, and I will continue to do this until there is no more room for the wind to act on the cans and no movement is detected. I will then try to find any imperfections, correct them, and improve my tests. After that I will add variation and determine whether the variations have any effect on the experiment or whether the principle is even still in play. Finally, I will see whether there is a difference if I use a different temperature, and I will record the data. For this experiment I hypothesize that the Bernoulli Principle will work up to a point and then immediately stop working after that. I hypothesize this because there are no other variables in the experiment; the only thing that is changing is the distance between the cans. All in all, the Bernoulli Principle is an important part of our daily life. It plays a great role in transportation and sports and does things that we may never have thought of before.
Materials: string, two empty cans, hairdryer, tape, stopwatch
1. Get two pieces of string and tie them to the tabs of the two empty soda cans.
2. Hang the two empty cans by the strings, 12 centimeters apart.
3. Turn the hairdryer on and aim it in between the two cans (for variation, try different temperatures).
4. Time how long it takes for the cans to touch, and stop the stopwatch when they do.
5. Record your data on whether the cans moved or not.
6. Move the cans a centimeter farther apart and repeat steps 1-4.
RESULTS AND DISCUSSION
My hypothesis for this experiment was that as I moved the cans apart the effect would work up to a point and, after that point, just stop working completely. As I expected, it did stop working after I reached 13 cm. At that point the cans started moving erratically and hit each other only by chance, but for consistency I still recorded the times they received. I believe this experiment could have been improved if the cans had a little more weight, because I noticed they were moving too fast and were not able to gain any momentum to put the Bernoulli Principle into play. This model demonstrates how the Bernoulli Principle can attract objects instead of repelling them: the fast air from the hairdryer blowing in between the cans created an area of low pressure there, and the higher-pressure air surrounding the cans pushed them together. My results actually impressed me, though I wasn’t sure the effect was going to be as dramatic as it turned out to be. Some possible errors are that the cans were not the exact same length, and that the cans did not have the same center of mass; when I blew the air onto them, they would start to spin around, and I’m not sure whether that was due to the unbalanced cans or to friction from the wind. I’m not sure if that made a difference to the experiment in the end, though. 
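As a rough, back-of-the-envelope check on the physics (this calculation is not part of the original write-up, and the airspeed, air density, and can area used here are assumed, illustrative values), Bernoulli's relation p + ½ρv² = constant lets us estimate the pressure deficit the hairdryer creates between the cans and the resulting push on each can:

RHO_AIR = 1.2   # kg/m^3, approximate density of room-temperature air (assumed)

def pressure_drop(v_fast, v_slow=0.0, rho=RHO_AIR):
    # Bernoulli along a streamline: p + 0.5*rho*v**2 is constant, so the
    # fast-moving air sits at lower pressure than the surrounding still air.
    return 0.5 * rho * (v_fast**2 - v_slow**2)   # pressure deficit in pascals

jet_speed = 10.0    # m/s, a guess at the hairdryer's airspeed between the cans
side_area = 0.005   # m^2, roughly 10 cm x 5 cm of each can facing the jet

delta_p = pressure_drop(jet_speed)   # about 60 Pa
push = delta_p * side_area           # about 0.3 N pushing each can toward the jet
print(delta_p, push)

Even a few tenths of a newton is plenty to swing a nearly empty can hanging on a string, which is consistent with the cans being pushed together in the experiment.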
CONCLUSION
Doing this experiment really helped me learn and actually understand what the Bernoulli Principle is. It was not only fun but it also allowed me to use my problem-solving skills. One of the first problems that came up was that tying the string to the cans was tedious and the string kept sliding off. All in all, the Bernoulli Principle is a really fascinating subject to cover, from aerospace engineering to designing planes for all the different needs that humans have, from making jets faster to making planes more efficient. The Bernoulli Principle was here before us and was not discovered until the 1700s.
Studying the Bernoulli principle. (2021, Oct 12). Retrieved October 26, 2021.
A History of the Filibuster in the United States Senate
Curated/Reviewed by Matthew A. McIntosh
A filibuster is a parliamentary procedure used in the United States Senate to prevent a measure from being brought to a vote. The most common form of filibuster occurs when one or more senators attempt to delay or block a vote on a bill by extending debate on the measure. The Senate rules permit a senator, or a series of senators, to speak for as long as they wish, and on any topic they choose, unless “three-fifths of the Senators duly chosen and sworn” (currently 60 out of 100) vote to bring the debate to a close by invoking cloture under Senate Rule XXII. The ability to block a measure through extended debate was a side effect of an 1806 rule change, and was infrequently used during much of the 19th and 20th centuries. In 1970, the Senate adopted a “two-track” procedure to prevent filibusters from stopping all other Senate business. The minority then felt politically safer in threatening filibusters more regularly, which became normalized over time to the point that 60 votes are now required to end debate on nearly every controversial legislative item. Efforts to limit the practice include laws that explicitly limit the time for Senate debate, notably the Congressional Budget and Impoundment Control Act of 1974 that created the budget reconciliation process. Changes in 2013 and 2017 now require only a simple majority to invoke cloture on nominations, although most legislation still requires 60 votes. At times, the “nuclear option” has been proposed to eliminate the 60-vote threshold for certain matters before the Senate. The nuclear option is a parliamentary procedure that allows the Senate to override one of its standing rules, including the 60-vote rule to close debate, by a simple majority (51+ votes, or 50 votes with the Vice President casting the tie-breaking vote), rather than the two-thirds supermajority normally required to amend the rules. One or more senators may still occasionally hold the floor for an extended period, sometimes without the advance knowledge of the Senate leadership. However, these “filibusters” usually result only in brief delays and do not determine outcomes, since the Senate’s ability to act ultimately depends upon whether there are sufficient votes to invoke cloture and proceed to a final vote on passage.
Constitutional Design: Simple Majority Voting
Although not explicitly mandated, the Constitution and its framers clearly envisioned that simple majority voting would be used to conduct business. The Constitution provides, for example, that a majority of each House constitutes a quorum to do business. Meanwhile, a small number of super-majority requirements were explicitly included in the original document, including conviction on impeachment charges (2/3 of Senate), expelling a member of Congress (2/3 of the chamber in question), overriding presidential vetoes (2/3 of both Houses), ratifying treaties (2/3 of Senate) and proposing constitutional amendments (2/3 of both Houses). Through negative textual implication, the Constitution also gives a simple majority the power to set procedural rules: “Each House may determine the Rules of its Proceedings, punish its Members for disorderly Behaviour, and, with the Concurrence of two thirds, expel a Member.” Commentaries in The Federalist Papers confirm this understanding. In Federalist No. 
58, the Constitution’s primary drafter James Madison defended the document against routine super-majority requirements, either for a quorum or a “decision”: “It has been said that more than a majority ought to have been required for a quorum; and in particular cases, if not in all, more than a majority of a quorum for a decision. That some advantages might have resulted from such a precaution, cannot be denied. It might have been an additional shield to some particular interests, and another obstacle generally to hasty and partial measures. But these considerations are outweighed by the inconveniences in the opposite scale.” “In all cases where justice or the general good might require new laws to be passed, or active measures to be pursued, the fundamental principle of free government would be reversed. It would be no longer the majority that would rule: the power would be transferred to the minority. Were the defensive privilege limited to particular cases, an interested minority might take advantage of it to screen themselves from equitable sacrifices to the general weal, or, in particular emergencies, to extort unreasonable indulgences.” In Federalist No. 22, Alexander Hamilton described super-majority requirements as being one of the main problems with the previous Articles of Confederation, and identified several evils which would result from such a requirement: “To give a minority a negative upon the majority (which is always the case where more than a majority is requisite to a decision), is, in its tendency, to subject the sense of the greater number to that of the lesser. … The necessity of unanimity in public bodies, or of something approaching towards it, has been founded upon a supposition that it would contribute to security. But its real operation is to embarrass the administration, to destroy the energy of the government, and to substitute the pleasure, caprice, or artifices of an insignificant, turbulent, or corrupt junto, to the regular deliberations and decisions of a respectable majority. In those emergencies of a nation, in which the goodness or badness, the weakness or strength of its government, is of the greatest importance, there is commonly a necessity for action. The public business must, in some way or other, go forward. If a pertinacious minority can control the opinion of a majority, respecting the best mode of conducting it, the majority, in order that something may be done, must conform to the views of the minority; and thus the sense of the smaller number will overrule that of the greater, and give a tone to the national proceedings. Hence, tedious delays; continual negotiation and intrigue; contemptible compromises of the public good. And yet, in such a system, it is even happy when such compromises can take place: for upon some occasions things will not admit of accommodation; and then the measures of government must be injuriously suspended, or fatally defeated. It is often, by the impracticability of obtaining the concurrence of the necessary number of votes, kept in a state of inaction. Its situation must always savor of weakness, sometimes border upon anarchy.” Accidental Creation and Early Use of the Filibuster In 1789, the first U.S. Senate adopted rules allowing senators to move the previous question (by simple majority vote), which meant ending debate and proceeding to a vote. 
But Vice President Aaron Burr argued that the previous-question motion was redundant, had only been exercised once in the preceding four years, and should be eliminated. The Senate agreed and modified its rules in 1806, after Burr had left office. Because the change created no alternative mechanism for terminating debate, filibusters became theoretically possible. During most of the pre-Civil War period the filibuster was seldom used, as northern senators desired to maintain southern support over fears of disunion and secession and made compromises over slavery in order to avoid confrontation, while new states were admitted to the Union in pairs to preserve the sectional balance in the Senate, most notably in the Missouri Compromise of 1820. Until the late 1830s, however, the filibuster remained a solely theoretical option, never actually exercised. The first Senate filibuster occurred in 1837, when a group of Whig senators filibustered to prevent allies of the Democratic President Andrew Jackson from expunging a resolution of censure against him. In 1841, a defining moment came during debate on a bill to charter a new national bank. After Whig Senator Henry Clay tried to end the debate via a majority vote, Democratic Senator William R. King threatened a filibuster, saying that Clay “may make his arrangements at his boarding house for the winter”. Other senators sided with King, and Clay backed down. At the time, both the Senate and the House of Representatives allowed filibusters as a way to prevent a vote from taking place. Subsequent revisions to House rules limited filibuster privileges in that chamber, but the Senate continued to allow the tactic. In practice, narrow majorities could enact legislation by changing the Senate rules, but only on the first day of the session in January or March.
The Emergence of Cloture, 1917-1969
In 1917, during World War I, a rule allowing cloture of a debate was adopted by the Senate on a 76–3 roll call vote at the urging of President Woodrow Wilson, after a group of 12 anti-war senators managed to kill a bill that would have allowed Wilson to arm merchant vessels in the face of unrestricted German submarine warfare. From 1917 to 1949, the requirement for cloture was two-thirds of senators voting. Despite that formal requirement, however, political scientist David Mayhew has argued that in practice, it was unclear whether a filibuster could be sustained against majority opposition. The first cloture vote occurred in 1919, to end debate on the Treaty of Versailles, leading to the treaty’s rejection against the wishes of the cloture rule’s first champion, President Wilson. During the 1930s, Senator Huey Long of Louisiana used the filibuster to promote his populist policies. He recited Shakespeare and read out recipes for “pot-likkers” during his filibusters, which occupied 15 hours of debate. In 1946, five southern Democrats — senators John H. Overton (LA), Richard B. Russell (GA), Millard E. Tydings (MD), Clyde R. Hoey (NC), and Kenneth McKellar (TN) — blocked a vote on a bill (S. 101) proposed by Democrat Dennis Chávez of New Mexico that would have created a permanent Fair Employment Practice Committee (FEPC) to prevent discrimination in the workplace. The filibuster lasted weeks, and Senator Chávez was forced to remove the bill from consideration after a failed cloture vote, even though he had enough votes to pass the bill. 
In 1949, the Senate made invoking cloture more difficult by requiring two-thirds of the entire Senate membership to vote in favor of a cloture motion. Moreover, future proposals to change the Senate rules were themselves specifically exempted from being subject to cloture. In 1953, Senator Wayne Morse of Oregon set a record by filibustering for 22 hours and 26 minutes while protesting the Tidelands Oil legislation. Then-Democratic Senator Strom Thurmond of South Carolina broke this record in 1957 by filibustering the Civil Rights Act of 1957 for 24 hours and 18 minutes, during which he read laws from different states and recited George Washington’s farewell address in its entirety, although the bill ultimately passed. In 1959, anticipating more civil rights legislation, the Senate under the leadership of Majority Leader Lyndon Johnson restored the cloture threshold to two-thirds of those voting. Although the 1949 rule had eliminated cloture on rules changes themselves, Johnson acted at the very beginning of the new Congress on January 5, 1959, and the resolution was adopted by a 72–22 vote with the support of three top Democrats and three of the four top Republicans. The presiding officer, Vice President Richard Nixon, supported the move and stated his opinion that the Senate “has a constitutional right at the beginning of each new Congress to determine rules it desires to follow”. The 1959 change also eliminated the 1949 exemption for rules changes, allowing cloture to once again be invoked on future changes. One of the most notable filibusters of the 1960s occurred when Southern Democrats attempted to block the passage of the Civil Rights Act of 1964 by filibustering for 75 hours, including a 14-hour, 13-minute address by Senator Robert Byrd of West Virginia. The filibuster failed when the Senate successfully invoked cloture for only the second time since 1927. From 1917 to 1970, the Senate took a cloture vote nearly once a year (on average); during this time, there were a total of 49 cloture votes.
The Two-Track System, 60-Vote Rule, and the Rise of the Routine Filibuster (1970 onward)
After a series of filibusters in the 1960s over civil rights legislation, the Senate put a “two-track system” into place in 1970 under the leadership of Democratic Majority Leader Mike Mansfield and Democratic Majority Whip Robert Byrd. Before this system was introduced, a filibuster would stop the Senate from moving on to any other legislative activity. Tracking allows the majority leader—with unanimous consent or the agreement of the minority leader—to have more than one main motion pending on the floor as unfinished business. Under the two-track system, the Senate can have two or more pieces of legislation or nominations pending on the floor simultaneously by designating specific periods during the day when each one will be considered. The notable side effect of this change was that, by no longer bringing Senate business to a complete halt, filibusters on particular motions became politically easier for the minority to sustain. As a result, the number of filibusters began increasing rapidly, eventually leading to the modern era in which an effective supermajority requirement exists to pass legislation, with no practical requirement that the minority party actually hold the floor or extend debate. 
In 1975, the Senate revised its cloture rule so that three-fifths of sworn senators (60 votes out of 100) could limit debate, except for changing Senate rules which still requires a two-thirds majority of those present and voting to invoke cloture. However, by returning to an absolute number of all Senators (60) rather than a proportion of those present and voting, the change also made any filibusters easier to sustain on the floor by a small number of senators from the minority party without requiring the presence of their minority colleagues. This further reduced the majority’s leverage to force an issue through extended debate. The Senate also experimented with a rule that removed the need to speak on the floor in order to filibuster (a “talking filibuster”), thus allowing for “virtual filibusters”. Another tactic, the post-cloture filibuster—which used points of order to delay legislation because they were not counted as part of the limited time allowed for debate—was rendered ineffective by a rule change in 1979. As the filibuster has evolved from a rare practice that required holding the floor for extended periods into a routine 60-vote supermajority requirement, Senate leaders have increasingly used cloture motions as a regular tool to manage the flow of business, often even in the absence of a threatened filibuster. Thus, the presence or absence of cloture attempts is not necessarily a reliable indicator of the presence or absence of a threatened filibuster. Because filibustering does not depend on the use of any specific rules, whether a filibuster is present is always a matter of judgment. Abolition for Nominations, 2013 and 2017 In 2005, a group of Republican senators led by Majority Leader Bill Frist proposed having the presiding officer, Vice President Dick Cheney, rule that a filibuster on judicial nominees was unconstitutional, as it was inconsistent with the President’s power to name judges with the advice and consent of a simple majority of senators. Senator Trent Lott, the junior senator from Mississippi, used the word “nuclear” to describe the plan, and so it became known as the “nuclear option,” and the term thereafter came to refer to the general process of changing cloture requirements via the establishment of a new Senate precedent (by simple majority vote, as opposed to formally amending the Senate rule by two-thirds vote). However, a group of 14 senators—seven Democrats and seven Republicans, collectively dubbed the “Gang of 14″—reached an agreement to temporarily defuse the conflict. From April to June 2010, under Democratic control, the Senate Committee on Rules and Administration held a series of monthly public hearings on the history and use of the filibuster in the Senate. During the 113th Congress, two packages of amendments were adopted on January 25, 2013, one temporary for that Congress and one permanent. Changes to the permanent Senate rules (Senate Resolution 16) allowed, among other things, elimination of post-cloture debate on a motion to proceed to a bill once cloture has been invoked on the motion, provided that certain thresholds of bipartisan support are met. Despite these modest changes, 60 votes were still required to overcome a filibuster, and the “silent filibuster”—in which a senator can delay a bill even if they leave the floor—remained in place. 
On November 21, 2013, Senate Democrats used the “nuclear option,” voting 52–48 — with all Republicans and three Democrats opposed — to eliminate the use of the filibuster on executive branch nominees and judicial nominees, except to the Supreme Court until 2017. The Democrats’ stated motivation was what they saw as an expansion of filibustering by Republicans during the Obama administration, especially with respect to nominations for the United States Court of Appeals for the District of Columbia Circuit and out of frustration with filibusters of executive branch nominees for agencies such as the Federal Housing Finance Agency. In 2015, Republicans took control of the Senate and kept the 2013 rules in place. On April 6, 2017, Senate Republicans eliminated the sole remaining exception to the 2013 change by invoking the “nuclear option” for Supreme Court nominees. This was done in order to allow a simple majority to confirm Neil Gorsuch to the Supreme Court. The vote to change the rules was 52 to 48 along party lines. In January 2021, following a shift to a 50-50 Democratic majority supported by Vice President Harris’s tie-breaking vote, the legislative filibuster became a sticking point for the adoption of a new organizing resolution when Mitch McConnell, the Senate Minority Leader, proposed to filibuster the organizing resolution until it should include language maintaining a 60-vote threshold to invoke cloture. As a result of this delay, committee memberships were held over from the 116th Congress, leaving some committees without a chair, some committees chaired by Republicans, and new Senators without committee assignments. After a stalemate that lasted a week, McConnell received assurances from two Democratic senators that they would continue to support the 60 vote threshold. Because of those assurances, on January 25, 2021 McConnell officially announced that he would hand over control of the 50-50 Senate to Democrats. The only bills that are not currently subject to effective 60-vote requirements are those considered under provisions of law that limit time for debating them. These limits on debate allow the Senate to hold a simple-majority vote on final passage without obtaining the 60 votes normally needed to close debate. As a result, many major legislative actions in recent decades have been adopted through one of these methods, especially reconciliation. Budget reconciliation is a procedure created in 1974 as part of the congressional budget process. In brief, the annual budget process begins with adoption of a budget resolution (passed by simple majority in each house, not signed by President, does not carry force of law) that sets overall funding levels for the government. The Senate may then consider a budget reconciliation bill, not subject to filibuster, that reconciles funding amounts in any annual appropriations bills with the amounts specified in the budget resolution. However, under the Byrd rule no non-budgetary “extraneous matter” may be considered in a reconciliation bill. The presiding officer, relying always (as of 2017) on the opinion of the Senate parliamentarian, determines whether an item is extraneous, and a 60-vote majority is required to include such material in a reconciliation bill. During periods of single-party control in Congress and the Presidency, reconciliation has increasingly been used to enact major parts of a party’s legislative agenda by avoiding the 60-vote rule. 
Notable examples of such successful use include:
- Omnibus Budget Reconciliation Act of 1993, Pub.L. 103–66 (1993) — the Clinton budget bill, passed the Senate 51–50. Raised taxes on some high earners.
- Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA), Pub.L. 107–16 (2001) — first set of Bush tax cuts, passed the Senate 58–33.
- Jobs and Growth Tax Relief Reconciliation Act of 2003, Pub.L. 108–27 (2003) — accelerated and extended Bush tax cuts, passed the Senate 51–50.
- Deficit Reduction Act of 2005, Pub.L. 109–171 (2006) — slowed growth in Medicare and Medicaid spending and changed student loan formulas, passed the Senate 51–50.
- Tax Increase Prevention and Reconciliation Act of 2005 (TIPRA), Pub.L. 109–222 (2006) — extended lower rates on capital gains and relief from the alternative minimum tax, passed the Senate 54–44.
- Health Care and Education Reconciliation Act of 2010, Pub.L. 111–152 (2010) — second portion of Obamacare, passed the Senate 56–43. This law made budget-related amendments to the main Obamacare law, the Patient Protection and Affordable Care Act, which had previously passed with 60 votes. It also included significant student loan changes.
- Tax Cuts and Jobs Act of 2017 (2017) — the Trump tax cuts, passed the Senate 51–48.
- American Rescue Plan Act of 2021 (2021) — COVID-19 relief, passed the Senate 50–49.
Trade Promotion Authority
Beginning in 1975 with the Trade Act of 1974, and later through the Trade Act of 2002 and the Trade Preferences Extension Act of 2015, Congress has from time to time provided so-called “fast track” authority for the President to negotiate international trade agreements. After the President submits an agreement, Congress can then approve or deny the agreement, but cannot amend or filibuster it. On the House and Senate floors, each body can debate the bill for no more than 20 hours, thus the Senate can act by simple majority vote once the time for debate has expired.
Congressional Review Act
The Congressional Review Act, enacted in 1995, allows Congress to review and repeal administrative regulations adopted by the Executive Branch within 60 legislative days. This procedure will most typically be used successfully shortly after a party change in the presidency. It was used once in 2001 to repeal an ergonomics rule promulgated under Bill Clinton, was not used in 2009, and was used 14 times in 2017 to repeal various regulations adopted in the final year of the Barack Obama presidency. The Act provides that a rule disapproved by Congress “may not be reissued in substantially the same form” until Congress expressly authorizes it. However, CRA disapproval resolutions require only 51 votes, while a new authorization for the rule would require 60 votes. Thus, the CRA effectively functions as a “one-way ratchet” against the subject matter of the rule in question being re-promulgated, such as by the administration of a future President of the opposing party.
National Emergencies Act
The National Emergencies Act, enacted in 1976, formalizes the emergency powers of the President. The law requires that when a joint resolution to terminate an emergency has been introduced, it must be considered on the floor within a specified number of days. The time limitation overrides the normal 60-vote requirement to close debate, and thereby permits a joint resolution to be passed by a simple majority of both the House and Senate. 
As originally designed, such joint resolutions were not subject to presidential veto. However, following the Supreme Court’s decision in INS v. Chadha (1983), which ruled that the legislative veto was unconstitutional, Congress revised the law in 1985 to make the joint resolutions subject to presidential veto.

War Powers Resolution

The War Powers Resolution, enacted in 1973 over Richard Nixon’s veto, generally requires the President to withdraw troops committed overseas within 60 days, which the President may extend once for 30 additional days, unless Congress has declared war, otherwise authorized the use of force, or is unable to meet as a result of an armed attack upon the United States. Both the House and Senate must vote on any joint resolution authorizing forces, or requiring that forces be removed, within a specified time period, thus establishing a simple-majority threshold in the Senate.
Originally published by Wikipedia, 02.28.2010, under a Creative Commons Attribution-ShareAlike 3.0 Unported license.
- What is the first step of the scientific method?
- What are the steps in hypothesis testing?
- What is the most important step in hypothesis testing?
- Which hypothesis is written correctly?
- What are the four steps of hypothesis testing?
- What are the types of hypothesis testing?
- What are the six steps of hypothesis testing?
- What does a p-value indicate?
- What is the aim of hypothesis testing?
- What are H0 and H1?

What is the first step of the scientific method?

The first step in the scientific method is to make objective observations. These observations are based on specific events that have already happened and can be verified by others as true or false. The observations are then used to form a hypothesis.

What are the steps in hypothesis testing?

Step 1: Specify the null hypothesis. Step 2: Specify the alternative hypothesis. Step 3: Set the significance level (α). Step 4: Calculate the test statistic and the corresponding p-value. Step 5: Draw a conclusion.

What is the most important step in hypothesis testing?

The most important (and often the most challenging) step in hypothesis testing is selecting the test statistic.

Which hypothesis is written correctly?

A hypothesis is usually written in the form of an if/then statement, according to the University of California. This statement gives a possibility (if) and explains what may happen because of the possibility (then). The statement could also include “may.”

What are the four steps of hypothesis testing?

Step 1: State the hypotheses. Step 2: Set the criteria for a decision. Step 3: Compute the test statistic. Step 4: Make a decision.

What are the types of hypothesis testing?

A hypothesis is an approximate explanation that relates to a set of facts and that can be tested by further investigation. There are basically two types, namely the null hypothesis and the alternative hypothesis. Research generally starts with a problem.

What are the six steps of hypothesis testing?

One common breakdown is actually a seven-step process of statistical hypothesis testing. Step 1: State the null hypothesis. Step 2: State the alternative hypothesis. Step 3: Set α, the significance level. Step 4: Collect data. Step 5: Calculate a test statistic. Step 6: Construct acceptance and rejection regions. Step 7: Based on steps 5 and 6, draw a conclusion about the null hypothesis.

What does a p-value indicate?

In statistics, the p-value is the probability of obtaining results at least as extreme as the observed results of a statistical hypothesis test, assuming that the null hypothesis is correct.

What is the aim of hypothesis testing?

The purpose of hypothesis testing is to determine whether there is enough statistical evidence in favor of a certain belief, or hypothesis, about a parameter.

What are H0 and H1?

There are two hypotheses, called the null hypothesis (H0) and the alternative hypothesis (H1); they are mutually exclusive. There is also a claim, which is something asserted about the population proportion or the population mean.
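As a concrete illustration of these steps, here is a minimal sketch in Python using SciPy's one-sample t-test. The sample values, the hypothesized mean, and the 0.05 significance level are invented for illustration and are not taken from the text above.

```python
from scipy import stats

sample = [5.1, 4.9, 5.3, 5.6, 4.8, 5.2, 5.0, 5.4]  # hypothetical measurements
mu0 = 5.0     # null hypothesis H0: the population mean equals 5.0
alpha = 0.05  # significance level (step 3)

# Step 4: calculate the test statistic and the corresponding p-value
result = stats.ttest_1samp(sample, popmean=mu0)

# Step 5: draw a conclusion by comparing the p-value with alpha
if result.pvalue < alpha:
    print(f"t = {result.statistic:.3f}, p = {result.pvalue:.3f}: reject H0")
else:
    print(f"t = {result.statistic:.3f}, p = {result.pvalue:.3f}: fail to reject H0")
```

With these made-up numbers the p-value is large, so the sketch fails to reject H0; the point is only to show where each step of the procedure lands in code.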
The social sciences and economics deal with huge numbers of people and track statistical trends rather than exact figures. Even modern physics is indebted to statistics in its analysis. I will present the basic concepts and formulas in this “Statistics 101” series.

The Overview of Statistics

Variables
- Entities that can take more than one value (as opposed to constants)
- Independent variables / dependent variables

For example, suppose a new treatment has been developed for AIDS. When the new treatment is tested, a sample of patients is assigned to one of two groups: the new method vs. the traditional method. In this case, the independent variable is the method of treatment, and the dependent variable is the recovery rate under each method. In general, researchers look at how dependent variables vary when they change the independent variables.

Data
- Data is the raw material.
- Numerical data: can be expressed as a number; also called quantitative data or measurement data
- Nominal data: the number does not have any specific meaning, e.g., an athlete’s jersey number
- Ordinal data: the order of the numbers has a specific meaning
- Interval data: equal differences between numbers are meaningful
- Ratio data: include a meaningful zero point (usually meaning total absence of the underlying property), so each number can be understood as a ratio
- Categorical data: qualitative data (such as opinions)

[Note] Examples of numerical data

The decibel (dB) is a measure of sound level. 20 dB is louder than 10 dB, so the decibel is ordinal data (not nominal data). But 20 dB is not twice as loud as 10 dB (20 dB corresponds to ten times the sound intensity of 10 dB, and 30 dB to a hundred times). Therefore, the decibel is not interval data.

Fahrenheit (°F) is a unit of temperature. The difference between 100°F and 110°F is the same as the difference between 120°F and 130°F. But 100°F is not twice as hot as 50°F, and 0°F does not mean the absolute absence of temperature. Fahrenheit is ordinal and interval data but not ratio data.

The meter (m) is a unit of length, and 0 m means a complete absence of length. Also, 10 m is twice as long as 5 m. The meter is ordinal, interval, and ratio data.

Population: the group of individuals one wishes to study.
Sample: the individuals selected from the population who are actually studied.
Data set: the collection of all data taken from the sample.
Outliers: very large or very small values in the data set that are not typical.
Bias: a systematic favoritism present in the data collection and analysis process, resulting in misleading results. It may come from how samples are selected or how data are collected.
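As a quick check of the decibel note above, here is a small Python sketch (my own illustration, not part of the original series) that converts decibel differences into intensity ratios.

```python
def db_to_intensity_ratio(db_difference):
    """Convert a difference in decibels to a ratio of sound intensities."""
    return 10 ** (db_difference / 10)

# A 10 dB difference corresponds to 10x the intensity, not 2x:
print(db_to_intensity_ratio(20 - 10))   # 10.0
# A 20 dB difference corresponds to 100x the intensity:
print(db_to_intensity_ratio(30 - 10))   # 100.0
```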
Image of a dusty star-forming galaxy.

Before our universe reached its 1 billionth birthday, an unusual galaxy formed and began whipping up new stars at astounding speeds. Then, a mere 800 million years later, the ultramassive galaxy suddenly fell silent, according to a new study. The enormous galaxy, called XMM-2599, stood out as a rarity in the early days of the universe. “In general, early-formed galaxies should be smaller in mass, because the current model of structure formation is hierarchical — small, low-mass galaxies would be expected to form first, and then they would merge to form bigger, more massive galaxies at a later time,” co-author Danilo Marchesini, a professor of physics and astronomy at Tufts University, said in a statement. But XMM-2599, with six times the mass of the Milky Way, completely defies these predictions. Some numerical models predict that such monster galaxies existed in the early universe, but those predicted galaxies are expected to be actively forming stars, Gillian Wilson, a professor of physics and astronomy at the University of California, Riverside (UCR), said in a statement from the university. “What makes XMM-2599 so interesting, unusual, and surprising is that it is no longer forming stars.” And no one knows why. Marchesini, Wilson and their colleagues described the perplexing discovery in a new study published today (Feb. 5) in The Astrophysical Journal Letters.

The team spotted XMM-2599 by measuring the electromagnetic radiation emanating from distant stars, which allows researchers to determine the chemical and physical properties of galaxies. The radiation must often travel across vast expanses of space before reaching Earthbound instruments, and the journey can take a long time. That means that, by taking spectroscopic measurements, scientists can glimpse what our universe looked like in the distant past. Using their measurements, the team developed mathematical models to predict how XMM-2599 would have formed through time. “Even before the universe was 2 billion years old, XMM-2599 had already formed a mass of more than 300 billion suns, making it an ultramassive galaxy,” lead author Benjamin Forrest, a postdoctoral researcher in the UCR Department of Physics and Astronomy, said in the UCR statement. The model suggests that the galaxy generated most of its stars in a “huge frenzy” when the universe was less than 1 billion years old, Forrest said. During peak production, the galaxy churned out more than 1,000 solar masses each year; over the same span, our Milky Way forms only about one new star per year, the statement noted.

The models predicted that XMM-2599 should have continued to produce new stars, as most galaxies did in that epoch of cosmic history. Instead, the monster galaxy fell dormant, perhaps due to a lack of fuel or because of activity from the black hole at its center, Wilson said. “We have caught XMM-2599 in its inactive phase,” Wilson said. While the galaxy no longer makes new stars, it cannot lose any of its accrued mass, she added. “As time goes by, could [XMM-2599] gravitationally attract nearby star-forming galaxies and become a bright city of galaxies?” In theory, XMM-2599 could become a central figure in one of the “brightest and most massive clusters of galaxies in the local universe,” co-author Michael Cooper, a professor of astronomy at the University of California, Irvine, said in the UCR statement. “Alternatively, it could continue to exist in isolation. 
Or we could have a scenario that lies between these two outcomes.” The authors don’t yet know why XMM-2599 stopped producing stars, or how the galaxy might evolve in the future. They do conclude that, given how suddenly the galaxy fell inactive, the existence of XMM-2599 “[challenges] our current understanding of how ultra-massive galaxies form and evolve in the early universe.”
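A rough back-of-the-envelope check of the figures quoted above: building roughly 300 billion solar masses of stars at a peak rate of about 1,000 solar masses per year takes on the order of 300 million years, consistent with the galaxy assembling most of its stars before the universe turned 1 billion years old. The figures come from the article; the arithmetic sketch below, in Python, is mine.

```python
stellar_mass_msun = 3.0e11     # "a mass of more than 300 billion suns"
peak_sfr_msun_per_yr = 1.0e3   # "more than 1,000 solar masses each year"

formation_time_yr = stellar_mass_msun / peak_sfr_msun_per_yr
print(f"Roughly {formation_time_yr / 1e6:.0f} million years of peak star formation")
# Roughly 300 million years, comfortably inside the first billion years.
```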
Newton’s Laws of Motion

Background

Sir Isaac Newton (1643-1727), an English scientist and mathematician famous for his discovery of the law of gravity, also discovered the three laws of motion. He published them in his book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) in 1687. Today these laws are known as Newton’s laws of motion and describe the motion of all objects on the scale we experience in our everyday lives.

“If I have ever made any valuable discoveries, it has been owing more to patient attention, than to any other talent.” - Sir Isaac Newton

Newton’s Laws of Motion
1. An object in motion tends to stay in motion and an object at rest tends to stay at rest unless acted upon by an unbalanced force.
2. Force equals mass times acceleration (F = ma).
3. For every action there is an equal and opposite reaction.

Newton’s First Law

An object at rest tends to stay at rest and an object in motion tends to stay in motion unless acted upon by an unbalanced force.

What does this mean? Basically, an object will “keep doing what it was doing” unless acted on by an unbalanced force. If the object was sitting still, it will remain stationary. If it was moving at a constant velocity, it will keep moving. It takes force to change the motion of an object.

What is meant by unbalanced force? If the forces on an object are equal and opposite, they are said to be balanced, and the object experiences no change in motion. If they are not equal and opposite, then the forces are unbalanced and the motion of the object changes.

Some examples from real life: Two teams are playing tug of war. They are both exerting equal force on the rope in opposite directions. This balanced force results in no change of motion. A soccer ball is sitting at rest. It takes an unbalanced force of a kick to change its motion.

Newton’s First Law is also called the Law of Inertia. Inertia is the tendency of an object to resist changes in its state of motion. The First Law states that all objects have inertia. The more mass an object has, the more inertia it has (and the harder it is to change its motion).

More examples from real life: A powerful locomotive begins to pull a long line of boxcars that were sitting at rest. Since the boxcars are so massive, they have a great deal of inertia and it takes a large force to change their motion. Once they are moving, it takes a large force to stop them. On your way to school, a bug flies into your windshield. Since the bug is so small, it has very little inertia and exerts a very small force on your car (so small that you don’t even feel it).

If objects in motion tend to stay in motion, why don’t moving objects keep moving forever? Things don’t keep moving forever because there is almost always an unbalanced force acting on them. A book sliding across a table slows down and stops because of the force of friction. If you throw a ball upwards, it will eventually slow down and fall because of the force of gravity. In outer space, away from gravity and any sources of friction, a rocket ship launched with a certain speed and direction would keep going in that same direction and at that same speed.

Newton’s Second Law

Force equals mass times acceleration: F = ma. Acceleration is a measurement of how quickly an object is changing speed.

What does F = ma mean? Force is directly proportional to mass and acceleration. Imagine a ball of a certain mass moving at a certain acceleration. 
This ball has a certain force. Now imagine we make the ball twice as big (double the mass) but keep the acceleration constant. F = ma says that this new ball has twice the force of the old ball. Now imagine the original ball moving at twice the original acceleration. F = ma says that the ball will again have twice the force of the ball at the original acceleration. (A short numerical sketch of this follows at the end of this section.)

Newton’s Third Law

For every action there is an equal and opposite reaction.

What does this mean? For every force acting on an object, there is an equal force acting in the opposite direction. Right now, gravity is pulling you down in your seat, but Newton’s Third Law says your seat is pushing up against you with equal force. This is why you are not moving. There is a balanced force acting on you: gravity pulling down, your seat pushing up.

Think about it: What happens if you are standing on a skateboard or a slippery floor and push against a wall? You slide in the opposite direction (away from the wall), because you pushed on the wall but the wall pushed back on you with equal and opposite force. Why does it hurt so much when you stub your toe? When your toe exerts a force on a rock, the rock exerts an equal force back on your toe. The harder you hit your toe against it, the more force the rock exerts back on your toe (and the more your toe hurts).

Review

Newton’s First Law: Objects in motion tend to stay in motion and objects at rest tend to stay at rest unless acted upon by an unbalanced force.
Newton’s Second Law: Force equals mass times acceleration (F = ma).
Newton’s Third Law: For every action there is an equal and opposite reaction.

Inertia: the tendency of an object to resist changes in its state of motion.
Acceleration: a change in velocity; a measurement of how quickly an object is changing speed, direction, or both.
Velocity: the rate of change of position along a straight line with time.
Force: a push or a pull acting on an object.

Presentation prepared by: Smriti Sharma, Class IX C
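The two-ball thought experiment above can be checked numerically. The following short Python sketch is an illustration of F = ma, not part of the original presentation; the masses and accelerations are made up.

```python
def force(mass_kg, acceleration_m_s2):
    """Newton's second law: F = m * a, returning force in newtons."""
    return mass_kg * acceleration_m_s2

f_original     = force(1.0, 2.0)  # a 1 kg ball accelerating at 2 m/s^2
f_double_mass  = force(2.0, 2.0)  # double the mass, same acceleration
f_double_accel = force(1.0, 4.0)  # original mass, double the acceleration

print(f_original, f_double_mass, f_double_accel)  # 2.0 4.0 4.0 newtons
# Doubling either the mass or the acceleration doubles the force,
# exactly as the thought experiment describes.
```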
Logic programming is a programming paradigm that is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, Answer set programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses:

H :- B1, ..., Bn.

and are read declaratively as logical implications:

H if B1 and ... and Bn.

H is called the head of the rule and B1, ..., Bn is called the body. Facts are rules that have no body, and are written in the simplified form:

H.

In the simplest case in which H, B1, ..., Bn are all atomic formulae, these clauses are called definite clauses or Horn clauses. However, there exist many extensions of this simple case, the most important one being the case in which conditions in the body of a clause can also be negations of atomic formulae. Logic programming languages that include this extension have the knowledge representation capabilities of a non-monotonic logic.

In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be under the control of the programmer. However, in the Prolog family of languages, logic programs also have a procedural interpretation as goal-reduction procedures. Consider, for example, the following clause:

fallible(X) :- human(X).

based on an example used by Terry Winograd to illustrate the programming language Planner. As a clause in a logic program, it can be used both as a procedure to test whether X is fallible by testing whether X is human, and as a procedure to find an X that is fallible by finding an X that is human. Even facts have a procedural interpretation. For example, the clause:

human(socrates).

can be used both as a procedure to show that socrates is human, and as a procedure to find an X that is human by "assigning" socrates to X.

The declarative reading of logic programs can be used by a programmer to verify their correctness. Moreover, logic-based program transformation techniques can also be used to transform logic programs into logically equivalent programs that are more efficient. In the Prolog family of logic programming languages, the programmer can also use the known problem-solving behaviour of the execution mechanism to improve the efficiency of programs.

The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s. However, the first proposal to use the clausal form of logic for representing computer programs was made by Cordell Green. This used an axiomatization of a subset of LISP, together with a representation of an input-output relation, to compute the relation by simulating the execution of the program in LISP. Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional programming language which places no constraints on the order in which operations are performed.

Logic programming in its present form can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in Artificial Intelligence. Advocates of declarative representations were notably working at Stanford, associated with John McCarthy, Bertram Raphael and Cordell Green, and in Edinburgh, with John Alan Robinson (an academic visitor from Syracuse University), Pat Hayes, and Robert Kowalski. 
Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and Seymour Papert. Although it was based on the proof methods of logic, Planner, developed at MIT, was the first language to emerge within this proceduralist paradigm. Planner featured pattern-directed invocation of procedural plans from goals (i.e. goal-reduction or backward chaining) and from assertions (i.e. forward chaining). The most influential implementation of Planner was the subset of Planner, called Micro-Planner, implemented by Gerry Sussman, Eugene Charniak and Terry Winograd. It was used to implement Winograd's natural-language understanding program SHRDLU, which was a landmark at that time. To cope with the very limited memory systems at the time, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Planner gave rise to the programming languages QA-4, Popler, Conniver, QLISP, and the concurrent language Ether.

Hayes and Kowalski in Edinburgh tried to reconcile the logic-based declarative approach to knowledge representation with Planner's procedural approach. Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover. Kowalski, on the other hand, developed SLD resolution, a variant of SL-resolution, and showed how it treats implications as goal-reduction procedures. Kowalski collaborated with Colmerauer in Marseille, who developed these ideas in the design and implementation of the programming language Prolog. The Association for Logic Programming was founded in 1986 to promote logic programming. Prolog gave rise to the programming languages ALF, Fril, Gödel, Mercury, Oz, Ciao, Visual Prolog, XSB, and λProlog, as well as a variety of concurrent logic programming languages, constraint logic programming languages and Datalog.

Logic programming can be viewed as controlled deduction. An important concept in logic programming is the separation of programs into their logic component and their control component. With pure logic programming languages, the logic component alone determines the solutions produced. The control component can be varied to provide alternative ways of executing a logic program. This notion is captured by the slogan

Algorithm = Logic + Control

where "Logic" represents a logic program and "Control" represents different theorem-proving strategies.

In the simplified, propositional case in which a logic program and a top-level atomic goal contain no variables, backward reasoning determines an and-or tree, which constitutes the search space for solving the goal. The top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or". Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal is considered at a time. Other search strategies, such as parallel search, intelligent backtracking, or best-first search to find an optimal solution, are also possible. 
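To make the and-or search concrete, here is a minimal propositional backward-chaining sketch in Python (my own illustration, not Prolog and not part of the original article). Clause bodies act as "and" branches, while alternative clauses for the same head act as "or" branches; the tiny example program and its proposition names are invented.

```python
# Each head maps to a list of alternative clause bodies ("or" branches);
# each body is a list of subgoals that must all succeed ("and" branches).
program = {
    "a": [["b", "c"], ["d"]],   # a :- b, c.    a :- d.
    "b": [[]],                  # b.            (a fact: empty body)
    "c": [["e"]],               # c :- e.
    "d": [[]],                  # d.
}

def solve(goal):
    """Return True if `goal` can be derived from the program.

    No variables and no cycle detection: a deliberately simplified sketch
    of the propositional case described in the text.
    """
    for body in program.get(goal, []):              # "or": try each clause in turn
        if all(solve(subgoal) for subgoal in body): # "and": every subgoal must hold
            return True
    return False                                    # no clause worked: backtrack

print(solve("a"))   # True  (via the clause a :- d., since c :- e. fails)
print(solve("e"))   # False (no clause defines e)
```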
In the more general case, where sub-goals share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently instantiated so that only one procedure applies. Such strategies are used, for example, in concurrent logic programming.

For most practical applications, as well as for applications that require non-monotonic reasoning in artificial intelligence, Horn clause logic programs need to be extended to normal logic programs, with negative conditions. A clause in a normal logic program has the form:

H :- A1, ..., An, not B1, ..., not Bn.

and is read declaratively as a logical implication:

H if A1 and ... and An and not B1 and ... and not Bn.

where H and all the Ai and Bi are atomic formulas. The negation in the negative literals not Bi is commonly referred to as "negation as failure", because in most implementations, a negative condition not Bi is shown to hold by showing that the positive condition Bi fails to hold. For example:

canfly(X) :- bird(X), not abnormal(X).
abnormal(X) :- wounded(X).
bird(john).
bird(mary).
wounded(john).

Given the goal of finding something that can fly:

:- canfly(X).

there are two candidate solutions, which solve the first subgoal bird(X), namely X = john and X = mary. The second subgoal not abnormal(john) of the first candidate solution fails, because wounded(john) succeeds and therefore abnormal(john) succeeds. However, the second subgoal not abnormal(mary) of the second candidate solution succeeds, because wounded(mary) fails and therefore abnormal(mary) fails. Therefore, X = mary is the only solution of the goal.

Micro-Planner had a construct, called "thnot", which when applied to an expression returns the value true if (and only if) the evaluation of the expression fails. An equivalent operator is normally built in to modern Prolog implementations. It is normally written as \+ Goal, where Goal is some goal (proposition) to be proved by the program. This operator differs from negation in first-order logic: a negation such as \+ X == 1 fails when the variable X has been bound to the atom 1, but it succeeds in all other cases, including when X is unbound. This makes Prolog's reasoning non-monotonic: X = 1, \+ X == 1 always fails, while \+ X == 1, X = 1 can succeed, binding X to 1, depending on whether X was initially bound (note that standard Prolog executes goals in left-to-right order).

The logical status of negation as failure was unresolved until Keith Clark showed that, under certain natural conditions, it is a correct (and sometimes complete) implementation of classical negation with respect to the completion of the program. Completion amounts roughly to regarding the set of all the program clauses with the same predicate on the left-hand side, say

H :- Body1.
...
H :- Bodyk.

as a definition of the predicate:

H iff (Body1 or ... or Bodyk)

where "iff" means "if and only if". Writing the completion also requires explicit use of the equality predicate and the inclusion of a set of appropriate axioms for equality. However, the implementation of negation by failure needs only the if-halves of the definitions without the axioms of equality. For example, the completion of the program above is:

canfly(X) iff bird(X) and not abnormal(X).
abnormal(X) iff wounded(X).
bird(X) iff X = john or X = mary.
wounded(X) iff X = john.

As an alternative to the completion semantics, negation as failure can also be interpreted epistemically, as in the stable model semantics of answer set programming. In this interpretation not(Bi) means literally that Bi is not known or not believed. 
The epistemic interpretation has the advantage that it can be combined very simply with classical negation, as in "extended logic programming", to formalise such phrases as "the contrary can not be shown", where "contrary" is classical negation and "can not be shown" is the epistemic interpretation of negation as failure. The fact that Horn clauses can be given a procedural interpretation and, vice versa, that goal-reduction procedures can be understood as Horn clauses + backward reasoning means that logic programs combine declarative and procedural representations of knowledge. The inclusion of negation as failure means that logic programming is a kind of non-monotonic logic. Despite its simplicity compared with classical logic, this combination of Horn clauses and negation as failure has proved to be surprisingly expressive. For example, it provides a natural representation for the common-sense laws of cause and effect, as formalised by both the situation calculus and event calculus. It has also been shown to correspond quite naturally to the semi-formal language of legislation. In particular, Prakken and Sartor credit the representation of the British Nationality Act as a logic program with being "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences". The programming language Prolog was developed in 1972 by Alain Colmerauer. It emerged from a collaboration between Colmerauer in Marseille and Robert Kowalski in Edinburgh. Colmerauer was working on natural language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of 1971, Colmerauer and Kowalski discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They observed that some theorem provers, like hyper-resolution, behave as bottom-up parsers and others, like SL-resolution (1971), behave as top-down parsers. It was in the following summer of 1972, that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications. This dual declarative/procedural interpretation later became formalised in the Prolog notation which can be read (and used) both declaratively and procedurally. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, where H, B1, ..., Bn are all atomic predicate logic formulae, and that SL-resolution could be restricted (and generalised) to LUSH or SLD-resolution. Kowalski's procedural interpretation and LUSH were described in a 1973 memo, published in 1974. Colmerauer, with Philippe Roussel, used this dual interpretation of clauses as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp. Edinburgh Prolog became the de facto standard and strongly influenced the definition of ISO standard Prolog. 
Abductive logic programming is an extension of normal logic programming that allows some predicates, declared as abducible predicates, to be "open" or undefined. A clause in an abductive logic program has the form:

H :- B1, ..., Bn, A1, ..., An.

where H is an atomic formula that is not abducible, all the Bi are literals whose predicates are not abducible, and the Ai are atomic formulas whose predicates are abducible. The abducible predicates can be constrained by integrity constraints, which can have the form:

false :- B1, ..., Bn.

where the Bi are arbitrary literals (defined or abducible, and atomic or negated). For example:

canfly(X) :- bird(X), normal(X).
false :- normal(X), wounded(X).
bird(john).
bird(mary).
wounded(john).

where the predicate normal is abducible.

Problem solving is achieved by deriving hypotheses expressed in terms of the abducible predicates as solutions of problems to be solved. These problems can be either observations that need to be explained (as in classical abductive reasoning) or goals to be solved (as in normal logic programming). For example, the hypothesis normal(mary) explains the observation canfly(mary). Moreover, the same hypothesis entails the only solution X = mary of the goal of finding something that can fly:

:- canfly(X).

Abductive logic programming has been used for fault diagnosis, planning, natural language processing and machine learning. It has also been used to interpret negation as failure as a form of abductive reasoning.

Because mathematical logic has a long tradition of distinguishing between object language and metalanguage, logic programming also allows metalevel programming. The simplest metalogic program is the so-called "vanilla" meta-interpreter:

solve(true).
solve((A,B)) :- solve(A), solve(B).
solve(A) :- clause(A,B), solve(B).

where true represents an empty conjunction, and clause(A,B) means there is an object-level clause of the form A :- B. Metalogic programming allows object-level and metalevel representations to be combined, as in natural language. It can also be used to implement any logic that is specified by means of inference rules. Metalogic is used in logic programming to implement metaprograms, which manipulate other programs, databases, knowledge bases or axiomatic theories as data.

Constraint logic programming combines Horn clause logic programming with constraint solving. It extends Horn clauses by allowing some predicates, declared as constraint predicates, to occur as literals in the body of clauses. A constraint logic program is a set of clauses of the form:

H :- C1, ..., Cn, B1, ..., Bn.

where H and all the Bi are atomic formulas, and the Ci are constraints. Declaratively, such clauses are read as ordinary logical implications:

H if C1 and ... and Cn and B1 and ... and Bn.

However, whereas the predicates in the heads of clauses are defined by the constraint logic program, the predicates in the constraints are predefined by some domain-specific model-theoretic structure or theory. Procedurally, subgoals whose predicates are defined by the program are solved by goal-reduction, as in ordinary logic programming, but constraints are checked for satisfiability by a domain-specific constraint-solver, which implements the semantics of the constraint predicates. An initial problem is solved by reducing it to a satisfiable conjunction of constraints.

The following constraint logic program represents a toy temporal database of john's history as a teacher:

teaches(john, hardware, T) :- 1990 ≤ T, T < 1999.
teaches(john, software, T) :- 1999 ≤ T, T < 2005.
teaches(john, logic, T) :- 2005 ≤ T, T ≤ 2012.
rank(john, instructor, T) :- 1990 ≤ T, T < 2010. 
rank(john, professor, T) :- 2010 ≤ T, T < 2014.

Here ≤ and < are constraint predicates, with their usual intended semantics. The following goal clause queries the database to find out when john both taught logic and was a professor:

:- teaches(john, logic, T), rank(john, professor, T).

The solution is 2010 ≤ T, T ≤ 2012.

Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and finance. It is closely related to abductive logic programming.

Concurrent logic programming integrates concepts of logic programming with concurrent programming. Its development was given a big impetus in the 1980s by its choice for the systems programming language of the Japanese Fifth Generation Project (FGCS). A concurrent logic program is a set of guarded Horn clauses of the form:

H :- G1, ..., Gn | B1, ..., Bn.

The conjunction G1, ..., Gn is called the guard of the clause, and | is the commitment operator. Declaratively, guarded Horn clauses are read as ordinary logical implications:

H if G1 and ... and Gn and B1 and ... and Bn.

However, procedurally, when there are several clauses whose heads H match a given goal, then all of the clauses are executed in parallel, checking whether their guards G1, ..., Gn hold. If the guards of more than one clause hold, then a committed choice is made to one of the clauses, and execution proceeds with the subgoals B1, ..., Bn of the chosen clause. These subgoals can also be executed in parallel. Thus concurrent logic programming implements a form of "don't care nondeterminism", rather than "don't know nondeterminism".

For example, the following concurrent logic program defines a predicate shuffle(Left, Right, Merge), which can be used to shuffle two lists Left and Right, combining them into a single list Merge that preserves the ordering of the two lists Left and Right:

shuffle([], [], []).
shuffle(Left, Right, Merge) :-
    Left = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Rest, Right, ShortMerge).
shuffle(Left, Right, Merge) :-
    Right = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Left, Rest, ShortMerge).

Here, [] represents the empty list, and [Head | Tail] represents a list with first element Head followed by list Tail, as in Prolog. (Notice that the first occurrence of | in the second and third clauses is the list constructor, whereas the second occurrence of | is the commitment operator.) The program can be used, for example, to shuffle the lists [ace, queen, king] and [1, 4, 2] by invoking the goal clause:

shuffle([ace, queen, king], [1, 4, 2], Merge).

The program will non-deterministically generate a single solution, for example Merge = [ace, queen, 1, king, 4, 2].

Arguably, concurrent logic programming is based on message passing and consequently is subject to the same indeterminacy as other concurrent message-passing systems, such as Actors (see Indeterminacy in concurrent computation). Carl Hewitt has argued that concurrent logic programming is not based on logic, in his sense that computational steps cannot be logically deduced. However, in concurrent logic programming, any result of a terminating computation is a logical consequence of the program, and any partial result of a partial computation is a logical consequence of the program and the residual goal (process network). 
Consequently, the indeterminacy of computations implies that not all logical consequences of the program can be deduced.

Concurrent constraint logic programming combines concurrent logic programming and constraint logic programming, using constraints to control concurrency. A clause can contain a guard, which is a set of constraints that may block the applicability of the clause. When the guards of several clauses are satisfied, concurrent constraint logic programming makes a committed choice to the use of only one.

Inductive logic programming is concerned with generalizing positive and negative examples in the context of background knowledge: machine learning of logic programs. Recent work in this area, combining logic programming, learning and probability, has given rise to the new field of statistical relational learning and probabilistic inductive logic programming.

Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions HiLog and λProlog.

Basing logic programming within linear logic has resulted in the design of logic programming languages that are considerably more expressive than those based on classical logic. Horn clause programs can only represent state change by the change in arguments to predicates. In linear logic programming, one can use the ambient linear logic to support state change. Some early designs of logic programming languages based on linear logic include LO [Andreoli & Pareschi, 1991], Lolli, ACL, and Forum [Miller, 1996]. Forum provides a goal-directed interpretation of all of linear logic.

F-logic extends logic programming with objects and the frame syntax. Logtalk extends the Prolog programming language with support for objects, protocols, and other OOP concepts. Highly portable, it supports most standard-compliant Prolog systems as backend compilers.

Transaction logic is an extension of logic programming with a logical theory of state-modifying updates. It has both a model-theoretic semantics and a procedural one. An implementation of a subset of Transaction logic is available in the Flora-2 system. Other prototypes are also available.
Debt Ceiling & Political Posturing

The history of the U.S. debt ceiling and potential default is a complex and significant aspect of the country’s financial and political landscape. Here’s a summary of its history:

1. Origins and Early Years:
- The United States’ first debt ceiling was established with the Second Liberty Bond Act of 1917, which aimed to finance the country’s involvement in World War I.
- Initially, the debt ceiling was introduced as a means to provide a limit on the total amount of debt the U.S. government could accumulate.
- Over the following decades, Congress regularly raised the debt ceiling to accommodate increasing spending and the financing of wars and economic programs.

2. The 1980s and the Modern Debt Ceiling Era:
- In the early 1980s, the U.S. national debt began to rise rapidly due to tax cuts, increased defense spending, and other factors.
- During this period, lawmakers began using the debt ceiling as a political tool, using it to negotiate budgetary concessions or policy changes.
- The debt ceiling was regularly increased throughout the 1980s and 1990s, often as part of larger budget agreements.

3. Debt Ceiling Crises:
- In recent years, the U.S. has faced several debt ceiling crises, leading to significant debates and concerns about potential default.
- In 2011, a contentious battle over raising the debt ceiling led to Standard & Poor’s downgrading the U.S. credit rating from AAA to AA+.
- In 2013, the debt ceiling was temporarily suspended through the Bipartisan Budget Act, which allowed the Treasury to borrow without limit until a specified date.
- In 2017, the debt ceiling was reinstated, and the Treasury employed “extraordinary measures” to meet financial obligations until Congress raised the limit.
- In 2019, a bipartisan agreement suspended the debt ceiling until July 2021.
- In 2021, a bipartisan agreement raised the debt limit by $2.5 trillion to a total of $31.4 trillion.

4. Potential Default:
- If the debt ceiling is not raised or temporarily suspended, the U.S. government may face the risk of default.
- Default would occur if the Treasury cannot meet its financial obligations, such as paying interest on existing debt or making other necessary payments.
- A default by the U.S. government would have severe consequences for the economy, including higher borrowing costs, market instability, and a loss of confidence in U.S. Treasury securities.

How could markets react in the three possible scenarios related to the debt ceiling?

The reaction of the stock market to a debt ceiling compromise can vary depending on several factors, including the specific terms of the compromise, market sentiment, and the overall economic conditions at the time. Here are some potential scenarios and their possible market reactions:

1. Positive Resolution and Avoidance of Default:
- If a debt ceiling compromise is reached well in advance of the deadline and assures investors that the U.S. will not default on its obligations, it can provide relief and stability to the stock market.
- In this case, the market may respond positively, with increased investor confidence, leading to a potential rally in stock prices.
- The avoidance of default reduces the risk of market disruptions and can support a favorable environment for businesses and investors.

2. Last-Minute Compromise and Uncertainty:
- If a debt ceiling compromise is reached at the eleventh hour or after significant uncertainty, it may lead to market volatility. 
- Investors tend to dislike uncertainty, and a drawn-out negotiation process can create anxiety and potential sell-offs in the stock market.
- However, a compromise that is reached, even at the last minute, can still provide relief and prevent a default. The market may stabilize and regain confidence over time.

3. Unsatisfactory Compromise or Continuing Political Gridlock:
- If the debt ceiling compromise is seen as insufficient or fails to address underlying fiscal concerns, it could lead to market skepticism and negative reactions.
- Investors may view the compromise as a temporary solution that only postpones addressing the fundamental issues, potentially leading to long-term concerns about fiscal stability.
- In such cases, the stock market may react negatively, with declines in stock prices and increased volatility as investors reassess their risk appetite.

It’s important to remember that the stock market’s reaction to debt ceiling compromises is influenced by various factors, including broader economic indicators, geopolitical events, and investor sentiment. While historical patterns can provide some insights, market reactions are inherently unpredictable and can vary in each instance.

Analysts at Deutsche Bank put the probability that the U.S. government will default on its loans at 2%, despite days of failed negotiations. Steven Zeng and Brett Ryan wrote in an analyst note Tuesday that an outright default is the least likely of the possible outcomes. (https://www.investopedia.com/debt-ceiling-default-unlikely-7501717) They put a 45% chance that lawmakers will come to a resolution before June 1st. They also put an equal chance that lawmakers will kick the can down the road and extend the debt ceiling through September, which would give them time to come to a more permanent agreement on spending. Steven and Brett also put an 8% chance that the President ignores the debt limit under the 14th Amendment, which states that the “validity of the public debt of the United States…shall not be questioned.” https://constitution.congress.gov/browse/amendment-14/section-4/

Please keep in mind when you read the above that there have been three similar standoffs in the last 40 years, all of which ended without a default. I believe they will continue to kick the can down the road. The one consideration I think everyone should look at is the level of debt we have as a country and how we eventually begin to correct that issue. I don’t have the answers, but it could come down to raising taxes, means-testing Social Security, etc.

We have engaged in numerous discussions and commenced planning for the diversification of your retirement income sources. Several of you have taken some steps with Roth conversions and saving money in taxable accounts (taxed as long-term and short-term capital gains). If anything, this debt ceiling fight brings to light many of the conversations we have had in the past. For many of you, we are planning for a retirement of 30+ years, and I would expect many changes over that 30-year period. It’s important to note that the information provided here is based on historical context only; I am simply looking at the historical patterns of the debt ceiling.

Philip Lockwood | Founder + Managing Partner
Address: 3100 Ingersoll Ave. Des Moines, IA 50312
Website: Lockwood Financial Strategies

Securities offered through Parkland Securities, LLC, member FINRA (FINRA.org) and SIPC (SIPC.org). Investment Advisory services offered through SPC, a Registered Investment Advisor. 
Lockwood Financial Strategies, LLC is independent of Parkland Securities, LLC and SPC. Securities offered through Parkland Securities, LLC, member FINRA/SIPC.
posted on 07 May 2016 from the St Louis Fed A central issue in economics concerns how output (equivalent to income) is distributed across economic agents (e.g., workers, entrepreneurs). A first step in addressing this issue is understanding how output (or income) is distributed in the United States and understanding how the distribution has changed over time. Measuring income inequality, however, is not a trivial endeavor. Multiple sources of income - salary, capital gains income, employer-provided health insurance and other non-salaried compensation, etc. - make simply measuring income itself problematic. Nonetheless, using a number of different definitions of income and employing various metrics, researchers have attempted to quantify income inequality in the U.S. Economists have identified two broad periods in income inequality over the post-World War II period - first in the 1970s and then, more recently, prior to the Great Recession. In the sections that follow, we describe how income inequality is measured and then how it changed over these two periods. Income Inequality and How It's Measured Assessing income inequality boils down in effect to measuring the income gaps between high and low earners. Income inequality implies that the lower-income population receives disproportionately less income than the higher-income population: The larger the disparity, the greater the degree of income inequality. To measure inequality, economists often sort the population by income percentiles and measure the difference across these percentiles. For example, the top 10 percent of earners would be the 90th percentile. A related way of dividing the population is quintiles, which split the distribution into five even buckets (the bottom quintile is the 20th percentile); quintiles are commonly used percentiles for studying inequality except at the top of the income distribution, where the income difference between 98th and 99th percentiles is large. To summarize inequality across the entire distribution, economists use the Gini coefficient. The Gini coefficient measures income concentration at each percentile of the population and ranges from 0 (perfectly equal) to 1 (perfectly unequal). In order to study income inequality, one needs income at an individual level. While gross domestic product is the usual aggregate indicator for income, there are many definitions of income and many data sources available at the individual level. Economists often use the Internal Revenue Service's Statistics of Income program (SOI) or the Census Bureau's Current Population Survey (CPS). Studies using different data sources reach various conclusions on income inequality, depending on the definition used for income. For example, economists Thomas Piketty and Emmanuel Saez compiled a dataset using SOI data back to 1913. They focused on the share of income earned by the top percentiles to avoid poor data quality in the lower percentiles. The SOI definition of income is market income, the cash income reported on tax forms. The SOI data more accurately measure the top of the income distribution, but less accurately measure low-income statistics because low-income households are not always required to file income taxes. Another source of individual income data is the CPS. Every March, the CPS - a monthly survey of 75,000 households - provides the information used in the Annual Social and Economic Supplement, which is the primary source for census data on income and poverty. 
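As a brief aside before continuing with the data sources, here is a minimal Python sketch of the Gini coefficient described above. The household income figures are invented for illustration; they are not CPS, SOI, or CBO data.

```python
def gini(incomes):
    """Gini coefficient: 0 means perfectly equal, approaching 1 means perfectly unequal."""
    x = sorted(incomes)
    n = len(x)
    total = sum(x)
    # Rank-based formula over the incomes sorted in ascending order
    weighted_sum = sum((rank + 1) * value for rank, value in enumerate(x))
    return (2 * weighted_sum) / (n * total) - (n + 1) / n

equal = [50_000] * 5
unequal = [10_000, 20_000, 30_000, 40_000, 400_000]
print(gini(equal))     # 0.0
print(gini(unequal))   # about 0.64 -- most income concentrated at the top
```

The same function applied to real microdata (for example, sorted household incomes from a survey) is one way the summary measure discussed in this article can be computed in practice.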
The CPS data are reported in money income - market income plus other cash income, excluding noncash benefits such as employer-provided health insurance. While the CPS provides quality low- and middle-income data, incomes above a certain threshold are not reported, to protect individual privacy. This makes it less ideal for high-income estimates. The Congressional Budget Office (CBO) also constructed a dataset that merges the CPS and SOI and draws on each source's strengths - the CPS for low incomes and the SOI for high incomes. The CBO reports market income, before-tax income (market income plus government transfers) and after-tax income (before-tax income less federal taxes). Most studies find that after-tax income is distributed most equally, followed by before-tax income and then market income. Moreover, it is generally accepted that the U.S. economy is similar to other developed nations' in terms of pretax and transfer income inequality. In other words, U.S. income inequality is not intrinsically different from what is seen in other countries, and any differences are mainly driven by the more limited income-redistributing fiscal policies in the U.S.

Trends in Income Inequality

From the end of World War II to the early 1970s, income inequality in the U.S. was relatively low. The graph shows that from 1947 to 1970, the Gini coefficient was flat or declining. Piketty and Saez, using SOI data with a longer history, found that income inequality peaked in the 1920s, then decreased after the Great Depression, when top capital incomes fell and were unable to recover. Although the U.S. economy rebounded during World War II, wage controls prevented growth in top incomes. Once the war ended, a progressive tax structure and reforms such as Social Security and unionization kept low- and middle-income growth strong.

Starting in the 1970s, wage growth at the top of the income distribution outpaced the rest of the distribution, and inequality began to rise. The Gini coefficient grew from 0.394 in 1970 to 0.482 in 2013. The CBO estimates that between 1979 and 2011, market income grew 56 percent in the 81st through 99th percentiles and 174 percent in the 99th percentile. In contrast, market income growth averaged 16 percent in the bottom four quintiles. Government transfers and federal taxes did have a redistributive effect during this period, but inequality in after-tax income still grew substantially.

The 1970s increase in inequality was different from the increase during the 1920s. During the period from 1940 to 1970, top-income composition shifted from capital income to wage income. In the top 0.01 percent, the share of total income from capital income fell from 70 percent in 1929 to just above 20 percent in 1998, while wage income rose over the same period from 10 percent to about 45 percent. High growth in top wages is partly explained by the Tax Reform Act of 1986, which lowered the top marginal income tax rates. The short-term impact of tax reform is circled in red on the graph. Longer-lasting wage growth came from the reporting of stock options and other forms of income as wages on tax returns.

After the increase in the 1970s, inequality continued to rise. In the 2001 and 2007-09 recessions, top incomes fell sharply as stock market crashes decreased the value of capital gains and stock options. However, losses to top incomes were temporary. During the recovery from 2002 through 2007, for example, the top 1 percent captured about two-thirds of overall income growth, Piketty and Saez estimated.
Further, even though top incomes fell 36.3 percent in the 2007-09 recession, the incomes of the bottom 99 percent also decreased 11.6 percent. This decrease is the largest two-year fall in the incomes of the bottom 99 percent since the Great Depression. So far, the top 1 percent has captured 58 percent of income gains from 2009 to 2014. The newest data on income show that growth from 2013 to 2014 was more equal. The incomes of the bottom 99 percent grew 3.3 percent, the best rate in more than 10 years, and the Gini coefficient on household income decreased slightly, marking the first nonrecession decrease since 1998.

Economists use Gini coefficients, percentiles and detailed survey data to study trends in income inequality. They find that inequality has been rising in the U.S. since the 1970s, reaching its highest level since the 1920s in 2013. This result is robust to the definition of income and the measure of inequality chosen. Understanding the facts about inequality is the first step in assessing what can and should be done. While there is a general consensus that some redistributive transfers from the top of the income distribution to the bottom are desirable, the optimal amount of redistribution remains an open question.

References
DeNavas-Walt, Carmen; and Proctor, Bernadette D. "Income and Poverty in the United States: 2014." Current Population Reports, September 2015. See www.census.gov/content/dam/Census/library/publications/2015/demo/p60-252.pdf.
"The Distribution of Household Income and Federal Taxes, 2011." Congress of the United States: Congressional Budget Office, November 2014. See www.cbo.gov/sites/default/files/113th-congress-2013-2014/reports/49440-Distribution-of-Income-and-Taxes.pdf.
Piketty, Thomas; and Saez, Emmanuel. "Income Inequality in the United States, 1913-1998." The Quarterly Journal of Economics, Vol. 118, No. 1, 2003, pp. 1-39. See http://eml.berkeley.edu/~saez/pikettyqje.pdf.
Saez, Emmanuel. "Striking It Richer: The Evolution of Top Incomes in the United States," updated with 2014 preliminary estimates. University of California, Berkeley, June 2015. See http://eml.berkeley.edu/~saez/saez-UStopincomes-2014.pdf.
Stone, Chad; Trisi, Danilo; Sherman, Arloc; and DeBot, Brandon. "A Guide to Statistics on Historical Trends in Income Inequality." Center on Budget and Policy Priorities, October 2015. See www.cbpp.org/sites/default/files/atoms/files/11-28-11pov_0.pdf.
for National Geographic News

A pulsar that had previously been invisible to orbiting and ground-based observatories has been discovered thanks to one of astronomy's newest pairs of "glasses," the Fermi Gamma-ray Space Telescope. A pulsar is a type of neutron star, the small, dense remnant of a massive star that exploded as a supernova. Unlike ordinary neutron stars, pulsars send out jets of radiation from their magnetic poles that sweep across Earth's line of sight as the star spins on its axis.

The newfound pulsar, which sits 4,600 light-years away in the constellation Cepheus, rotates at about a million miles an hour, and its beam of gamma rays sweeps past Earth about three times a second. Fermi, a collaboration between NASA, the U.S. Department of Energy, and international partners, was launched in June 2008 to scan the skies for gamma rays, the most energetic wavelengths in the electromagnetic spectrum. Pulsars have been spotted before based on radio waves and x-rays, but the new pulsar is the first object ever found solely based on gamma rays, according to Fermi scientists.

"We're learning that the Fermi telescope is the perfect instrument for finding young pulsars that were hidden from us before," said Alice Harding, a co-author on the study and a scientist at NASA's Goddard Space Flight Center in Maryland. Harding said the mission could discover a new class of previously invisible pulsars, identify the mysterious sources of so-called gamma ray bursts, and expand estimates of the number of supernovae in our galaxy.

Radio beams from the first pulsar were discovered in 1968, and astronomers have counted 1,800 pulsars since then. From 1991 through 2000, NASA's Energetic Gamma Ray Experiment Telescope (EGRET) cataloged hundreds of gamma ray sources, some of which turned out to be pulsars. But many of EGRET's gamma ray sources remain unidentified, Harding said.
- 60 minutes

Kids love to count. Help your students gain a greater understanding of what each number represents numerically, and make counting from 0-10 a breeze. Students will be able to identify and write numbers one to ten.

Introduction (10 minutes)
- Have the students come together as a group.
- To motivate the students, begin by saying, "Today, we will be learning about numbers. Raise your hand if you know a number."
- Randomly select students to share the number they know. This taps into their prior knowledge.
- Write down the numbers that are shared on the board.
- Say, "I will share with you a poem by Mother Goose that uses all the numbers from one to ten."
- Read the poem One, Two, Buckle My Shoe.
- Have the students recite the poem after you.

Explicit Instruction/Teacher Modeling (10 minutes)
- Draw a large circle on your board.
- Place 10 magnetic shapes to the right of the circle.
- Explain to the students that there are no items in the circle, and that this represents the number zero. Zero represents nothing at all.
- Move one magnetic shape into the circle, then write the number 1 above the circle. Have the students repeat, "one."
- Add another magnetic shape into the circle. Count one, two. Erase the 1 and write 2. Have the students repeat, "one, two."
- Repeat until all 10 magnetic shapes are in the circle, changing the number at the top to reflect the number of items in the circle.
- Use your index finger to count all 10 magnetic shapes in the circle.

Guided Practice/Interactive Modeling (10 minutes)
- Have the students return to their desks.
- Each student should have a piece of construction paper and glue.
- Each student should have 55 beads, buttons, or foam shapes.
- Model the upcoming activity on the whiteboard using magnetic shapes.
- Make a list on the board: 1-10. Have the students copy you as you place the correct amount of items next to each number.
- Below is an example of how the students' work should look. Each asterisk represents a bead, button, or foam shape.
1 *
2 **
3 ***
4 ****
5 *****
6 ******
7 *******
8 ********
9 *********
10 **********

Independent Working Time (10 minutes)
- Provide each student with one of the Know Your Numbers 1 to 10 worksheets and a pencil.
- Read the instructions for section one.
- Allow the students a few minutes to complete section one.
- Read the instructions for section two.
- Allow the students a few minutes to complete section two.
- Read the instructions for section three.
- Allow the students a few minutes to complete section three.
- Collect the worksheets for grading.
- Enrichment: Give above-level students the entire set of ten worksheets from Know Your Numbers 1 to 10. Allow them to complete these at their own pace.
- Support: Have struggling students complete the Number 1 Tracing worksheet during Independent Working Time for extra practice.

Assessment (10 minutes)
- Conduct mini-conferences with the students individually at your desk.
- Give them each a random number of objects and ask them to use their index finger to count the items aloud.

Review and Closing (10 minutes)
- Have the students come together on the floor in a circle in groups of 10.
- Assign a student to start and assign another student to end.
- Have the students count off 1 to 10 in each group.
Wait for the Punchline…

A line chart allows the visual representation of data in the form of points connected to form a line. Whether the line is straight or curved depends on the data used to create it. Line graphs are the simplest tools used to represent quantitative data variables.

What's on the Line--Variations

Simple or not, don't mistake line chart examples as being basic! They come in different varieties, and each is better suited to present certain kinds of data. Here are a few examples:

Simple Line (Basic)

As evident from the chart above, simple line graphs comprise only one line that shows how two variables are related - in this case, the days of the week and when I went to bed in the morning. The name should clue you in: this is the simplest and most classic type of line chart.

Step Line

Line charts where vertical and horizontal line segments connect the points are called step line charts. Unsurprisingly, the resulting pattern resembles a staircase or steps. The width of each step is equal to the interval between values of the independent variable–days, in the chart above. The step direction, on the other hand, has three different modes (a short code sketch of these modes follows the use cases below):
- Default or center mode, in which the data points are at the center of the horizontal segments.
- Forward mode, in which the data points start where the horizontal segments begin.
- Backward mode, in which the data points sit at the tail end of the horizontal segments.

Compound Line (Multi-Axis)

After working out how to make a line chart for simple data sets, let's see the chart type you can use to present complicated data: A simple line chart expands into a compound line when you use it to showcase data subdivided into types. Therefore, compound line charts are good for presenting multiple data sets in one chart. The second type of compound line chart is a multi-axis chart: it displays two sets of variables–hours of sleep and zombieness levels–against the same independent variable, i.e., day of the week.

Sparkline

A sparkline is what you get when you reduce a common line graph to its core components. Usually, there aren't any axes on these tiny graphs, and they emphasize the change of value. Have multiple data sets but a single set of axes? Stack small sparklines vertically on a page to draw eyes to the focal point, i.e., the pattern emerging from the changing values.

In the Pipeline—Use Cases

So, now you know what a line chart is and its different types. Let's discuss their uses. Line charts are used for data visualization across industries. They're easy to interpret and simple to plot. Below are a few use cases of these charts:
- Real Estate - Potential homeowners can use line charts to track seasonality in the real estate market. Some might want to know how an ongoing recession is affecting the availability of homes, while others could use them to determine the best time to buy or sell houses.
- Stock Market - The stock market involves even more buying and selling, and more line charts to support or argue against each decision. You can follow the changing value of a particular stock or predict upward or downward patterns with a trend line chart.
- Banks/Finance - Line charts can demonstrate everything from the number of borrowers per month and over the years to how well a company's stock is doing.
- R&D - Scientists often present the results of experimentation by plotting them on line charts. The analysis of said charts can indicate the direction for their next batch of experiments.
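As referenced above, the difference between a simple line and a step line (and the three step directions) is easy to see in code. Below is a minimal sketch using Python's matplotlib with invented bedtime data; matplotlib's where='mid', 'post', and 'pre' options roughly correspond to the center, forward, and backward modes described earlier:

```python
import matplotlib.pyplot as plt

# Invented bedtime data: day of the week vs. hour went to bed.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
bedtime = [23.5, 24.0, 23.0, 25.0, 26.5, 27.0, 23.5]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Simple (basic) line chart: points joined directly.
ax1.plot(days, bedtime, marker="o")
ax1.set_title("Simple line")
ax1.set_ylabel("Bedtime (hour)")

# Step line chart: the where= argument picks the step direction.
# 'mid' ~ center/default mode, 'post' ~ forward mode, 'pre' ~ backward mode.
for where, style in [("mid", "-"), ("post", "--"), ("pre", ":")]:
    ax2.step(days, bedtime, where=where, linestyle=style, label=f"where='{where}'")
ax2.set_title("Step line (three step directions)")
ax2.legend()

plt.tight_layout()
plt.show()
```

Plotting all three step modes on the same axes makes it clear that the data points never move; only where the horizontal segments sit relative to each point changes.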
The Lineup—Most Common Uses

While it's clear line charts support the monitoring of variables in a data set, their most common uses may not be as obvious. So, let's peek at a few of them:
- Line graphs, like bar charts, are great tools for tracking changing values over time.
- Suspect correlations within your data? Think you could highlight the differences within? Use line charts for that!
- Viewers can use line charts to plot past and current performance and analyze them to forecast future values.
- Their simplicity makes line charts ideal for beginners, whether in trading or elsewhere.
- Line charts are great stepping stones for readers who want to learn advanced techniques, like point-and-figure charts or Japanese candlestick patterns.

Coloring within the Lines—Best Use Cases

If line charts are now your new faves, you'll need to ensure yours help the audience visualize data in the best way. In that case, remember these best practices:

Keep it Clean

Present with the audience in mind. Since most people are math-averse, it's a safe bet they won't appreciate line charts studded with figures. So, use a simple, clean format that can convince even those who are not good at math. Of course, you should include all information that will help them interpret your chart. But avoid a cluttered look. Opt for several smaller line charts rather than a single data-dense one for the best reception.

We Don't Talk Anymore

Yes, charts are tools for visualizing data. However, that doesn't mean you can't supplement them with text. In fact, without some text for context, your chart may not make much sense to everyone. So, impart additional meaning by adding minimal and necessary text to your charts.

Your Place or…

Placement matters, whether we're talking about the keys, legends, data, symbols, or titles on charts. All such graphical elements are there for one purpose, i.e., to aid in visualization. Therefore, position them correctly to get maximum use out of them. The same goes for data. Since a sparkline chart is based on a minimalist design, position it close to its data for better impact.

Look for the One

Use the best chart type for each job. For instance, step line charts are best for highlighting changes that occur at irregular intervals, such as tax or interest rates. That's because, unlike simple line charts that focus on data trends over time, step line charts also emphasize:
- Periods where the value stays the same
- Exactly when changes occur and by how much

Likewise, sparklines are better suited for highlighting max and min values and showing trends in series, such as:
- Economic cycles
- Seasonal increases and decreases

Sometimes, a stacked line chart doesn't work; only a bar chart will do. Look at the line chart below. It displays the number of things and the number of times my neighbor has borrowed stuff from me - "borrowed," because it's a never-ending and never-returning process. However, by plotting a line chart, I'd be misleading the readers since:
- The categories don't have anything to do with each other
- The horizontal axis does not focus on a measurement occurring with regularity

A stacked bar chart with one distinctly colored bar for each type of stuff borrowed per month would make measuring all the borrowing much easier. It will also achieve two other important things:
- Eliminate the risk of misinformation
- Clarify I need to move

Pick Granularity that Pops

With too many data points, your line chart will look like a confused jumble. So, take care not to overdo them.
It's better to create multiple sparklines to cover several ranges/series rather than smushing it all into one chart. Observe the graphs below to see what you risk with overpopulated line charts:
- A 15 pixels/point sparkline with 40 data points
- A 2 pixels/point sparkline with 300 data points
- A 1 pixel/point sparkline with 600 data points
- A less-than-1 pixel/point sparkline with 900 data points

Whether you're adding too many data points, lines, or colors to your line charts, stop to rethink the original objective. Effective line charts emphasize data changes and direction. They're not displays of the size of the values themselves! Jumbled-up and difficult-to-read charts won't help you make your point and can ruin even the best data.

Kiss, Marry, or Kill

Kiss: all measurements on your chart's horizontal axis should sit kissing close to regularly occurring intervals and their corresponding data points. The horizontal axis mostly represents durations. However, it can cover other kinds of measurements, such as experiment iterations, baseball pitches in a game, etc. As long as the horizontal axis meets these requirements, you can use it to demonstrate all measurement types.

Marry the lines in your chart to colors for different purposes:
- Identifying deviations
- Highlighting target goals
- Defining individuals within categories
- Indicating outliers in data sets

Kill the zero baseline in line charts that don't require one. Only add it if it helps you tell your data's story more clearly.

Down the Line—Product Details

Do line charts sound just awesome? But is making them one by one eating at your hours? Then begin using Image Charts as your line chart creator right away! By doing so, you'll get to enjoy this tool's many amazing features, including on-brand automated chart-making. Find several examples below:
- You can assign specific colors to the series in your data sets, or specify different colors for different values in the same series.
- Choose the line thickness and style that show off your data in the best way.
- Add various-sized markers, i.e., different point styles, to the data points in your line charts.
- Find out how to create a line chart in Excel here.
- JS chart construction and distribution are now possible too, along with Chart.js line chart examples.
- Distribute and share via multiple methods, including MailChimp, Gmail, Teams, Slack, Zapier, Visualforce, and email.
- Tell bigger and more interesting stories by turning your charts into GIFs.
- Create keys to show the specification of each attribute in the URL.
- Embiggen *snort* your line charts with larger pixel values.
- Create sparklines and scatter charts to better showcase your data.

Parallel Lines—Why & How to Automate with Image Charts

Automation via Image Charts makes proactive collaboration easier. The simple interface is so intuitive that it brings even complete beginners up to speed. In addition, the development hub houses short and simple tutorials even free users can access. Besides these benefits - and the wide spectrum of features mentioned above - Image Charts users also love being able to create charts with just one URL. So, ready to automate? Do it through:

Zapier

Connect with thousands of apps through Zapier. Make and share charts without writing a single line of code! A typical zap runs like this:
- Choose your data source as your Trigger, such as Typeform, Google Sheets, Email, CRM, etc.
- Select Image Charts.
- Set your chart's custom options, including advanced features like round-edged bars.
- Choose the chart destination, such as Slack, Teams, Email, etc., and add your new chart's URL to your chosen vehicle for delivery and distribution.

Make

Leverage the Make integration with Image Charts today. This is how scenarios work on Make:
- The first module is for you to pick your data source.
- In the second module, you connect to Image Charts and set up the options you want in your chart.
- The final module involves choosing your distribution vehicle, such as Gmail, Slack, or Teams, and adding your URL to send it forward.

The tool maximizes its user-friendliness by keeping the elements required to create most charts the same. For instance, all chart URLs are based on the same format. Adding chart type, data, size, labels, and other parameters, such as slices (for pie charts), gradients per slice, line thickness and styles, and marker type and size, allows you to fine-tune your chart to your liking. Once the URL transforms the info you input into a chart, you can paste it into the browser and share it in multiple ways. Find detailed instructions here. (A small sketch of assembling such a URL appears after the FAQs below.)

The End of the Line

Are line charts versatile? Yes! Are they useful? Highly! With Image Charts as your line chart maker, you can create all types of line charts, from basic line charts to sparklines and scatter plots. So no more waiting for graph-making experts; do what's best for your data!

Great Pickup Lines--FAQs

1. What are a line chart and its uses?
The simplest line chart definition would be a graph or plot that connects data points from within a series using a line. The sequential values in such a trend line chart can help you identify patterns. They're mostly used when you need answers to queries such as:
- Will we meet our quarterly/yearly goal?
- Are any patterns developing in the data over time?
- Which category is outperforming the others?
- Is there an upward trend or a downward one?

2. What is the benefit of a line chart?
Line charts are ideal for demonstrating the rate of change over time and trends. Therefore, they can be useful in predicting and forecasting as-yet-unrecorded data. Finally, you can also use line charts to demonstrate one independent variable's effect on several dependent ones.

3. What are the limitations of line charts?
Like all chart types, line charts have limitations. One of them concerns data granularity. The more points you stuff into one chart, the less clear it will become. Therefore, users should weigh the bar chart vs. line chart pros and cons and look at other types, like stacked bar charts or sparklines, when presenting such data.

4. How do you use a line graph?
You can use a line chart to display change in your data sets over time. The points are connected by line segments on two axes. Since one data set depends on the other, your line chart will help you determine the relationship between the two.
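As a rough illustration of the one-URL idea mentioned above, here is a minimal Python sketch that assembles a line-chart URL. The endpoint and parameter names (cht for chart type, chd for data, chs for size) follow the Google Image Charts-style convention that Image Charts advertises compatibility with, but treat them as assumptions and check the current Image Charts documentation before relying on them:

```python
from urllib.parse import urlencode

# Assumed endpoint and parameter names (Google Image Charts-style convention);
# verify against the current Image Charts documentation before relying on them.
BASE_URL = "https://image-charts.com/chart"

def line_chart_url(values, size="700x300"):
    """Assemble a line-chart URL from a list of numeric values."""
    params = {
        "cht": "lc",                                     # assumed: 'lc' = line chart
        "chd": "t:" + ",".join(str(v) for v in values),  # assumed: text-encoded data
        "chs": size,                                     # assumed: width x height in pixels
    }
    return f"{BASE_URL}?{urlencode(params)}"

# Example: hours of sleep per day (made-up data). Paste the printed URL
# into a browser, or drop it into Slack, Teams, or an email.
print(line_chart_url([7, 6.5, 8, 5, 4, 9, 8.5]))
```

Labels, colors, line styles, and markers would be added the same way, as additional query parameters on the same URL.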
Two point charges, A and B, lie along a line separated by a distance L. The point x is the midpoint of their separation. Which combination of charges would yield the greatest repulsive force between the charges? −2q and −4q +1q and −3q −1q and −4q −2q and +4q +1q and +7q
A laser emits a single, 2.0-ms pulse of light that has a frequency of 2.83 × 10^11 Hz and a total power of 75 000 W. How many photons are in the pulse? 8.0 × 10^23 1.6 × 10^24 2.4 × 10^25 3.2 × 10^25 4.0 × 10^26
Which combination of charges will yield zero electric field at the point x? +1q and −1q +2q and −3q +1q and −4q −1q and +4q +4q and +4q
A solid, conducting sphere of radius a carries an excess charge of +6 µC. This sphere is located at the center of a hollow, conducting sphere with an inner radius of b and an outer radius of c as shown. The hollow sphere also carries a total excess charge of +6 µC. Determine the excess charge on the inner surface of the outer sphere (a distance b from the center of the system). Determine the excess charge on the outer surface of the outer sphere (a distance c from the center of the system). Which one of the following figures shows a qualitatively accurate sketch of the electric field lines in and around this system?
A charged conductor is brought near an uncharged insulator. Which one of the following statements is true? Both objects will repel each other. Both objects will attract each other. Neither object exerts an electrical force on the other. The objects will repel each other only if the conductor has a negative charge. The objects will attract each other only if the conductor has a positive charge.
Three charged particles A, B, and C are located near one another. Both the magnitude and direction of the force that particle A exerts on particle B are independent of the sign of charge B. the sign of charge A. the distance between C and B. the distance between A and B. the magnitude of the charge on B.
Two positive point charges Q and 2Q are separated by a distance R. If the charge Q experiences a force of magnitude F when the separation is R, what is the magnitude of the force on the charge 2Q when the separation is 2R?
Four point charges are held fixed at the corners of a square as shown in the figure. Which of the five arrows shown below most accurately shows the direction of the net force on the charge −Q due to the presence of the three other charges?
Which one of the following statements is true concerning the magnitude of the electric field at a point in space? It is a measure of the total charge on the object. It is a measure of the electric force on any charged object. It is a measure of the ratio of the charge on an object to its mass. It is a measure of the electric force per unit mass on a test charge. It is a measure of the electric force per unit charge on a test charge.
At which point (or points) is the electric field zero N/C for the two point charges shown on the x axis? The electric field is never zero in the vicinity of these charges. The electric field is zero somewhere on the x axis to the left of the +4q charge. The electric field is zero somewhere on the x axis to the right of the −2q charge. The electric field is zero somewhere on the x axis between the two charges, but this point is nearer to the −2q charge. The electric field is zero at two points along the x axis; one such point is to the right of the −2q charge and the other is to the left of the +4q charge.
Which one of the following statements is true concerning the strength of the electric field between two oppositely charged parallel plates? It is zero midway between the plates. It is a maximum midway between the plates. It is a maximum near the positively charged plate. It is a maximum near the negatively charged plate. It is constant between the plates except near the edges.
Complete the following statement: The magnitude of the electric field at a point in space does not depend upon the distance from the charge causing the field. the sign of the charge causing the field. the magnitude of the charge causing the field. the force that a unit positive charge will experience at that point. the force that a unit negative charge will experience at that point.
Which one of the following statements best explains why it is possible to define an electrostatic potential in a region of space that contains an electrostatic field? Work must be done to bring two positive charges closer together. Like charges repel one another and unlike charges attract one another. A positive charge will gain kinetic energy as it approaches a negative charge. The work required to bring two charges together is independent of the path taken. A negative charge will gain kinetic energy as it moves away from another negative charge.
The electric potential at a certain point in space is 12 V. What is the electric potential energy of a −3.0 µC charge placed at that point?
Two positive point charges are separated by a distance R. If the distance between the charges is reduced to R/2, what happens to the total electric potential energy of the system? It is doubled. It remains the same. It increases by a factor of 4. It is reduced to one-half of its original value. It is reduced to one-fourth of its original value.
Two point charges are arranged along the x axis as shown in the figure. At which of the following values of x is the electric potential equal to zero? Note: At infinity, the electric potential is zero.
Three point charges −Q, −Q, and +3Q are arranged along a line as shown in the sketch. What is the electric potential at the point P?
If the work required to move a +0.35 C charge from point A to point B is +125 J, what is the potential difference between the two points?
Four point charges are individually brought from infinity and placed at the corners of a square as shown in the figure. Each charge has the identical value +Q. The length of the diagonal of the square is 2a. What is the magnitude of the electric field at P, the center of the square? What is the electric potential at P, the center of the square?
Two charges of opposite sign and equal magnitude Q = 2.0 C are held 2.0 m apart as shown in the figure. Determine the magnitude of the electric field at the point P. 2.2 × 10^9 V/m 5.6 × 10^8 V/m 4.4 × 10^8 V/m 2.8 × 10^8 V/m
Determine the electric potential at the point P. 1.1 × 10^9 V 2.2 × 10^9 V 4.5 × 10^9 V 9.0 × 10^9 V
Which one of the following statements concerning electrostatic situations is false? E is zero everywhere inside a conductor. Equipotential surfaces are always perpendicular to E. It takes zero work to move a charge along an equipotential surface. If V is constant throughout a region of space then E must be zero in that region. No force component acts along the path of a charge as it is moved along an equipotential surface.
The magnitude of the charge on the plates of an isolated parallel plate capacitor is doubled.
Which one of the following statements is true concerning the capacitance of this parallel-plate system? The capacitance is decreased to one half of its original value. The capacitance is increased to twice its original value. The capacitance remains unchanged. The capacitance depends on the electric field between the plates. The capacitance depends on the potential difference across the plates.
A parallel plate capacitor with plates of area A and plate separation d is charged so that the potential difference between its plates is V. If the capacitor is then isolated and its plate separation is decreased to d/2, what happens to the potential difference between the plates? The potential difference is increased by a factor of four. The potential difference is twice its original value. The potential difference is one half of its original value. The potential difference is one fourth of its original value. The potential difference is unchanged.
Which one of the following changes will necessarily increase the capacitance of a capacitor? decreasing the charge on the plates increasing the charge on the plates placing a dielectric between the plates increasing the potential difference between the plates decreasing the potential difference between the plates
A potential difference of 120 V is established between two parallel metal plates. The magnitude of the charge on each plate is 0.020 C. What is the capacitance of this capacitor?
Two positive charges are located at points A and B as shown in the figure. The distance from each charge to the point P is a = 2.0 m. Which statement is true concerning the direction of the electric field at P? The direction is toward A. The direction is toward B. The direction is directly away from A. The direction makes a 45° angle above the horizontal direction. The direction makes a 135° angle below the horizontal direction.
1.35 × 10^4 V 1.89 × 10^4 V 2.30 × 10^4 V 2.70 × 10^4 V 3.68 × 10^4 V
Which one of the following circuits has the largest resistance?
Which one of the following statements concerning resistance is true? The resistance of a semiconductor increases with temperature. Resistance is a property of resistors, but not conductors. The resistance of a metal wire changes with temperature. The resistance is the same for all samples of the same material. The resistance of a wire is inversely proportional to the length of the wire.
Complete the following statement: The unit kilowatt • hour measures
Which one of the following quantities can be converted to kilowatt • hours (kWh)?
A 40-W and a 60-W light bulb are designed for use with the same voltage. What is the ratio of the resistance of the 60-W bulb to the resistance of the 40-W bulb?
A 220-Ω resistor is connected across an ac voltage source V = (150 V) sin[2π(60 Hz)t]. What is the average power delivered to this circuit?
The figure shows the variation of the current through the heating element with time in an iron when it is plugged into a standard 120 V, 60 Hz outlet. What is the peak voltage? What is the rms value of the current in this circuit? What is the approximate average power dissipated in the iron?
Which one of the following statements concerning resistors in series is true? The voltage across each resistor is the same. The current through each resistor is the same. The power dissipated by each resistor is the same. The rate at which charge flows through each resistor depends on its resistance. The total current through the resistors is the sum of the current through each resistor.
Some light bulbs are connected in parallel to a 120 V source as shown in the figure. Each bulb dissipates an average power of 60 W. The circuit has a fuse F that burns out when the current in the circuit exceeds 9 A. Determine the largest number of bulbs that can be used in this circuit without burning out the fuse.
Three resistors are connected as shown in the figure. The potential difference between points A and B is 26 V. How much current flows through the 3-Ω resistor? How much current flows through the 2-Ω resistor?
Three resistors are placed in a circuit as shown. The potential difference between points A and B is 30 V. What is the equivalent resistance between the points A and B? What is the potential drop across the 30-Ω resistor?
How much energy is stored in the combination of capacitors shown?
The figure shows a simple RC circuit consisting of a 100.0-V battery in series with a 10.0-µF capacitor and a resistor. Initially, the switch S is open and the capacitor is uncharged. Two seconds after the switch is closed, the voltage across the resistor is 37 V. How much charge is on the capacitor 2.0 s after the switch is closed? 1.1 × 10^−3 C 2.9 × 10^−3 C 3.7 × 10^−4 C 5.2 × 10^−4 C 6.6 × 10^−4 C
Which one of the following statements concerning the magnetic force on a charged particle in a magnetic field is true? It is a maximum if the particle is stationary. It is zero if the particle moves perpendicular to the field. It is a maximum if the particle moves parallel to the field. It acts in the direction of motion for a positively charged particle. It depends on the component of the particle's velocity that is perpendicular to the field.
Complete the following statement: The magnitude of the magnetic force that acts on a charged particle in a magnetic field is independent of the sign of the charge. the magnitude of the charge. the magnitude of the magnetic field. the direction of motion of the particle. the velocity components of the particle.
Which one of the following will not generate electromagnetic waves or pulses? a steady direct current an accelerating electron a proton in simple harmonic motion an alternating current charged particles traveling in a circular path in a mass spectrometer
Which one of the following statements best explains why a constant magnetic field can do no work on a moving charged particle? The magnetic field is conservative. The magnetic force is a velocity dependent force. The magnetic field is a vector and work is a scalar quantity. The magnetic force is always perpendicular to the velocity of the particle. The electric field associated with the particle cancels the effect of the magnetic field on the particle.
An electron traveling horizontally enters a region where a uniform magnetic field is directed into the plane of the paper as shown. Which one of the following phrases most accurately describes the motion of the electron once it has entered the field? upward and parabolic upward and circular downward and circular upward, along a straight line downward and parabolic
An electron enters a region that contains a magnetic field directed into the page as shown. The velocity vector of the electron makes an angle of 30° with the +y axis. What is the direction of the magnetic force on the electron when it enters the field?
up, out of the page at an angle of 30° below the positive x axis at an angle of 30° above the positive x axis at an angle of 60° below the positive x axis at an angle of 60° above the positive x axis
A current-carrying, rectangular coil of wire is placed in a magnetic field. The magnitude of the torque on the coil is not dependent upon which one of the following quantities? the magnitude of the current in the loop the direction of the current in the loop the length of the sides of the loop the area of the loop the orientation of the loop
A long, straight wire carries a current I. If the magnetic field at a distance d from the wire has magnitude B, what is the magnitude of the magnetic field at a distance 2d from the wire?
Two long, straight wires are perpendicular to the plane of the paper as shown in the drawing. Each wire carries a current of magnitude I. The currents are directed out of the paper toward you. Which one of the following expressions correctly gives the magnitude of the total magnetic field at the origin of the x, y coordinate system?
A wire is bent into the shape of a circle of radius r = 0.10 m and carries a 20.0-A current in the direction shown. What is the direction of the magnetic field at the center of the loop? to the right of the page to the left of the page toward the top of the page into the plane of the paper out of the plane of the paper
What is the magnitude of the magnetic field at the center of the loop? 2.0 × 10^−5 T 1.3 × 10^−5 T 2.0 × 10^−4 T 1.3 × 10^−4 T
Determine the magnetic moment of the loop.
A conducting loop of wire is placed in a magnetic field that is normal to the plane of the loop. Which one of the following actions will not result in an induced current in the loop? Rotate the loop about an axis that is parallel to the field and passes through the center of the loop. Increase the strength of the magnetic field. Decrease the area of the loop. Decrease the strength of the magnetic field. Rotate the loop about an axis that is perpendicular to the field and passes through the center of the loop.
A 0.50-T magnetic field is directed perpendicular to the plane of a circular loop of radius 0.25 m. What is the magnitude of the magnetic flux through the loop?
A long, straight wire is in the same plane as a rectangular, conducting loop. The wire carries a constant current I as shown in the figure. Which one of the following statements is true if the wire is suddenly moved toward the loop? There will be no induced emf and no induced current. There will be an induced emf, but no induced current. There will be an induced current that is clockwise around the loop. There will be an induced current that is counterclockwise around the loop. There will be an induced electric field that is clockwise around the loop.
A circuit is pulled with a 16-N force toward the right to maintain a constant speed v. At the instant shown, the loop is partially in and partially out of a uniform magnetic field that is directed into the paper. As the circuit moves, a 6.0-A current flows through a 4.0-Ω resistor. Which one of the following statements concerning this situation is true? The temperature of the circuit remains constant. The induced current flows clockwise around the circuit. Since the circuit moves with constant speed, the force F does zero work. If the circuit were replaced with a wooden loop, there would be no induced emf. As the circuit moves through the field, the field does work to produce the current. With what speed does the circuit move?
Determine the energy stored in a 95-mH inductor that carries a 1.4-A current.
Two coils share a common axis as shown in the figure. The mutual inductance of this pair of coils is 6.0 mH. If the current in coil 1 is changing at the rate of 3.5 A/s, what is the magnitude of the emf generated in coil 2? 5.8 × 10^−4 V 1.7 × 10^−3 V 3.5 × 10^−3 V 1.5 × 10^−2 V 2.1 × 10^−2 V
Two coils, 1 and 2, with iron cores are positioned as shown in the figure. Coil 1 is part of a circuit with a battery and a switch. Immediately after the switch S is closed, which one of the following statements is true? An induced current will flow from right to left in R. An induced current will flow from left to right in r. A magnetic field that points toward B appears inside coil 1. An induced magnetic field that points toward B appears inside coil 2. A current will pass through r, but there will be no current through R.
Assume the switch S has been closed for a long time. Which one of the following statements is true?
In the drawing, a coil of wire is wrapped around a cylinder from which an iron core extends upward. The ends of the coil are connected to an ac voltage source. After the alternating current is established in the coil, an aluminum ring of resistance R is placed onto the iron core and released. Which one of the following statements concerning this situation is false? The induced current in the ring is an alternating current. The temperature of the ring will increase. At any instant, the direction of the induced current in the ring is in the same direction as that in the coil. The induced magnetic field in the ring may be directed either upward or downward at an instant when the direction of the magnetic field generated by the current in the coil is upward. The ring may remain suspended at the position shown with no vertical movement of its center of mass.
A transformer changes 120 V across the primary to 1200 V across the secondary. If the secondary coil has 800 turns, how many turns does the primary coil have?
A loop with a resistance of 2.0 Ω is pushed to the left at a constant speed of 4.0 m/s by a 32 N force. At the instant shown in the figure, the loop is partially in and partially out of a uniform magnetic field. An induced current flows from left to right through the resistor. The length and width of the loop are 2.0 m and 1.0 m, respectively. What is the direction of the magnetic field? to the left to the right out of the paper into the paper
Determine the magnitude of the induced current through the resistor.
A flexible, circular conducting loop of radius 0.15 m and resistance 4.0 Ω lies in a uniform magnetic field of 0.25 T. The loop is pulled on opposite sides by equal forces and stretched until its enclosed area is essentially zero m^2, as suggested in the drawings. It takes 0.30 s to close the loop. Which one of the following phrases best describes the direction of the induced magnetic field generated by the current induced in the loop while the loop is being stretched? into the page out of the page The induced field is zero.
An ac voltage source that has a frequency f is connected across the terminals of a capacitor. Which one of the following statements correctly indicates the effect on the capacitive reactance when the frequency is increased to 4f? The capacitive reactance increases by a factor of four. The capacitive reactance increases by a factor of eight. The capacitive reactance is unchanged. The capacitive reactance decreases by a factor of eight.
The capacitive reactance decreases by a factor of four.
The current in a certain ac circuit is independent of the frequency at a given voltage. Which combination of elements is most likely to comprise the circuit? a combination of inductors and resistors a combination of inductors and capacitors
The graph shows the voltage across and the current through a single circuit element connected to an ac generator. Determine the frequency of the generator. Determine the rms voltage across this element. Determine the rms current through this element. What is the reactance of this element? Identify the circuit element. The element is a 25-Ω resistor. The element is a 35-Ω resistor. The element is a 0.45-H inductor. The element is a 360-µF capacitor. The element is a 510-µF capacitor.
Note the following circuit elements: (1) resistors, (2) capacitors, and (3) inductors. Which of these elements uses no energy, on average, in an ac circuit? both 2 and 3 both 1 and 3
A 7.70-µF capacitor and a 1250-Ω resistor are connected in series to a generator operating at 50.0 Hz and producing an rms voltage of 208 V. What is the average power dissipated in this circuit?
A series RCL circuit contains a 222-Ω resistor, a 1.40-µF capacitor, and a 0.125-H inductor. The 444-Hz ac generator in the circuit has an rms voltage of 208 V. What is the average electric power dissipated by the circuit?
The electric field E of an electromagnetic wave traveling in the positive x direction is illustrated in the figure. This is the wave of the radiation field of an antenna. What is the direction and the phase relative to the electric field of the magnetic field at a point where the electric field is in the negative y direction? Note: The wave is shown in a region of space that is a large distance from its source. +y direction, in phase −z direction, 90° out of phase +z direction, 90° out of phase −z direction, in phase +z direction, in phase
A television station broadcasts at a frequency of 86 MHz. The circuit contains an inductor with an inductance L = 1.2 × 10^−6 H and a variable capacitance C. Determine the value of C that allows this television station to be tuned in. 2.9 × 10^−12 F 5.8 × 10^−12 F 1.8 × 10^−11 F 3.6 × 10^−11 F 1.1 × 10^−10 F
Which one of the following types of wave is intrinsically different from the other four?
Which one of the following statements concerning electromagnetic waves is false? Electromagnetic waves carry energy. X-rays have longer wavelengths than radio waves. In vacuum, all electromagnetic waves travel at the same speed. Lower frequency electromagnetic waves can be produced by oscillating circuits. They consist of mutually perpendicular electric and magnetic fields that oscillate perpendicular to the direction of propagation.
Which one of the following statements concerning the wavelength of an electromagnetic wave in a vacuum is true? The wavelength is independent of the speed of the wave for a fixed frequency. The wavelength is inversely proportional to the speed of the wave. The wavelength is the same for all types of electromagnetic waves. The wavelength is directly proportional to the frequency of the wave. The wavelength is inversely proportional to the frequency of the wave.
Complete the following sentence: The various colors of visible light differ in their speeds in a vacuum. frequency and wavelength. frequency and their speed in a vacuum.
When a radio telescope observes a region of space between two stars, it detects electromagnetic radiation that has a wavelength of 0.21 m.
This radiation was emitted by hydrogen atoms in the gas and dust located in that region. What is the frequency of this radiation? 7.1 × 10^10 Hz 2.1 × 10^14 Hz 3.0 × 10^8 Hz 6.9 × 10^11 Hz 1.4 × 10^9 Hz
A radio wave sent from the surface of the earth reflects from the surface of the moon and returns to the earth. The elapsed time between the generation of the wave and the detection of the reflected wave is 2.6444 s. Determine the distance from the surface of the earth to the surface of the moon. Note: The speed of light is 2.9979 × 10^8 m/s. 3.7688 × 10^8 m 3.8445 × 10^8 m 3.9638 × 10^8 m 4.0551 × 10^8 m 7.9276 × 10^8 m
An electromagnetic wave has an electric field with peak value 250.0 N/C. What is the average energy delivered to a surface with area 2.00 m^2 by this wave in one minute?
An incandescent light bulb radiates uniformly in all directions with a total average power of 1.0 × 10^2 W. What is the maximum value of the magnetic field at a distance of 0.50 m from the bulb? 8.4 × 10^−7 T 5.2 × 10^−7 T 3.1 × 10^−7 T 1.6 × 10^−7 T
Electromagnetic waves are radiated uniformly in all directions from a source. The rms electric field of the waves is measured 35 km from the source to have an rms value of 0.42 N/C. Determine the average total power radiated by the source. 4.1 × 10^5 W 8.3 × 10^5 W 3.0 × 10^6 W 7.2 × 10^6 W 1.7 × 10^7 W
The most convincing evidence that electromagnetic waves are transverse waves is that they can be polarized. they carry energy through space. they can travel through a material substance. they do not require a physical medium for propagation. all electromagnetic waves travel with the same speed through vacuum.
Two polarizing sheets have their transmission axes parallel so that the intensity of unpolarized light transmitted through both of them is a maximum. Through what angle must either sheet be rotated if the transmitted intensity is 25% of the incident intensity?
The figure shows the time variation of the magnitude of the electric field of an electromagnetic wave produced by a wire antenna. Determine the rms value of the electric field magnitude. What is the rms value of the magnitude of the magnetic field? 1.4 × 10^−8 T 2.4 × 10^−8 T 3.3 × 10^−8 T 4.6 × 10^−8 T 5.4 × 10^−8 T
Determine the frequency of the wave. 1.0 × 10^9 Hz 1.3 × 10^8 Hz 2.5 × 10^8 Hz 3.8 × 10^8 Hz 5.0 × 10^8 Hz
Determine the wavelength of the wave.
What is the average intensity of this electromagnetic wave?
The speed of light in material A is 1.25 times as large as it is in material B. What is the ratio of the refractive indices, nA/nB, of these materials?
A beam of light passes from air into water. Which is necessarily true? The frequency is unchanged and the wavelength increases. The frequency is unchanged and the wavelength decreases. The wavelength is unchanged and the frequency decreases. Both the wavelength and frequency increase. Both the wavelength and frequency decrease.
Complete the following statement: Fiber optics make use of total internal reflection.
A glass block with an index of refraction of 1.7 is immersed in an unknown liquid. A ray of light inside the block undergoes total internal reflection as shown in the figure. Which one of the following relations best indicates what may be concluded concerning the index of refraction of the liquid, nL? nL < 1.0 nL > 1.1 nL > 1.3 nL < 1.1 nL < 1.3
A child is looking at a reflection of the sun in a pool of water. When she puts on a pair of Polaroid sunglasses with a vertical transmission axis, she can no longer see the reflection.
At what angle is she looking at the pool of water?
This is the end of the test.
Students play a little game to catch germs. Then they sort them into like groups. Then they complete the building of a bar graph. The Education.com site will be used for this activity. There are free activities throughout the site, but you will need to create an account to access them. There is a fee to access all the resources.

- Be able to sort like items and then create a bar graph.
- Enable: To enable is to give someone or something the authority or means to do something.
- Sort: To sort is to categorize things with a common feature.
- Bar graph: A bar graph is a diagram in which the numerical values of variables are represented by the height or length of lines or rectangles of equal width.

To prepare for this lesson:
- The teacher will log onto www.education.com and create a free account.
- The teacher should create a class within their account. Students will receive a code for login. Cards with this login information can be downloaded for students.
- Search for Bar Graphing with Roly. Click the assign button.
- Have students log onto www.education.com with their login code. Show students where the assignment is located.
- Prior to the assignment, introduce bar graphs with a short video.

Note: Education.com charges a fee, but one month is free. See the Accommodations Page and Charts on the 21things4students.net site in the Teacher Resources.

Directions for this activity:
- The teacher provides the student with login information.
- Students log in with the code to Education.com.
- Students locate the assignment.
- Students complete the assignment.

Different options for assessing the students:
- Check for understanding.
- Have students collect their own data on a given topic and create a bar graph of their own.
- Create questions using data from the bar graph and have them pair with a student to solve.

5a. Students formulate problem definitions suited for technology-assisted methods such as data analysis, abstract models and algorithmic thinking in exploring and finding solutions.
5b. Students collect data or identify relevant data sets, use digital tools to analyze them, and represent data in various ways to facilitate problem-solving and decision-making.
5c. Students break problems into component parts, extract key information, and develop descriptive models to understand complex systems or facilitate problem-solving.
5d. Students understand how automation works and use algorithmic thinking to develop a sequence of steps to create and test automated solutions.

MITECS: Michigan adopted the "ISTE Standards for Students," called MITECS (Michigan Integrated Technology Competencies for Students), in 2018.

Devices and Resources
- Device: PC, Chromebook, Mac, iPad
- Browser: Chrome, Safari, Firefox, Edge, ALL
- Apps, Extensions, Add-ons: Bar Graphing with Roly

CONTENT AREA RESOURCES
- Students will complete a writing piece explaining the bar graph.
- The students will create a paper graph.
- The students use a graph for Math Talks.
- Have students write math equations using the data.

This task card was created by Julie Hoehing, Lake Shore Public Schools, January 2019.
Equations involving linear or even quadratic polynomials are fairly straightforward, but if polynomials of higher degrees are involved, the process can be difficult or impossible to do by hand. In these cases, a graphing calculator or computer may be necessary. Nevertheless, we can find the roots of higher-degree polynomials approximately by graphing the functions (even by hand), and in some cases, we can employ a technique called synthetic division to find them and to factor the polynomial.

A polynomial with a root at x = a has the binomial (x – a) as a factor. Thus, if f(x) is a polynomial of degree n where f(a) = 0, then f(x) = (x – a) g(x), where g(x) is a polynomial of degree n – 1. Consider a simple example: f(x) = x^2 – 1. This quadratic polynomial has a root at x = 1, so it has a factor (x – 1): x^2 – 1 = (x – 1)(x + 1). Here, our remaining polynomial g(x) is simply x + 1, which corresponds to the root at x = –1.

For a general polynomial f(x) = c_n x^n + c_(n–1) x^(n–1) + … + c_1 x + c_0, where c_n = 1, the polynomial can be factored into n binomials: f(x) = (x – a_1)(x – a_2)…(x – a_n). Here, the constants a_i are the roots of f. In other words, a polynomial of degree n has n roots--but these roots may not all be unique, as is the case with x^3, which can be factored into the expression (x – 0)(x – 0)(x – 0). (Furthermore, these roots may be complex, as is the case with the quadratic function x^2 + 1--we will not address this situation, however.) Although this expression factors into three binomials, the corresponding roots are all equal to zero.

Once you find a root of a polynomial function, you can factor out the corresponding binomial from the polynomial, leaving a lower-degree polynomial as the other factor. Sometimes, this makes finding the remaining roots simple--although sometimes the problem remains just as difficult. To find this lower-degree polynomial, we can use a technique called synthetic division, which is more or less long division of polynomials. Let's consider an example: Say we graphed the function f(x) or otherwise determined that one of the roots is at x = 2. We thus know that a factor of f is the binomial x – 2, so f(x) = (x – 2) g(x). Here's the procedure for finding g(x).

1. Set up the division. Draw an inverted division bracket as shown below. Outside the bracket, write the coordinate of the root; inside, write the coefficients of the polynomial that you are factoring from higher-order terms to lower-order terms (include zero-valued coefficients).
2. Carry the first coefficient. Carry the highest-order coefficient below the bracket.
3. Multiply the value of the root by the last value you wrote below the bracket, then write the product under the next coefficient inside the bracket. Now add the values in this column and write the value under the bracket in the same column.
4. Repeat step 3 until you reach the end of the bracket. The final number that you write under the bracket should be zero; if not, either you have made a mistake, or the root value outside the bracket is not a root of the polynomial.
5. Write the new factored polynomial. Use the zero value outside the bracket to write the binomial factor; use the numbers under the bracket as the coefficients for the new polynomial.

Practice Problem: The polynomial f(x) = x^5 – 2x^4 – 5x^3 + 6x^2 has a root at x = –2. Find the other roots of the polynomial.

Solution: First, we can factor out x^2 from the polynomial to simplify it somewhat: f(x) = x^2 (x^3 – 2x^2 – 5x + 6). Because the polynomial has a root at x = –2, we know it has a factor (x + 2). Let's use synthetic division to divide out this binomial. First, set up the problem. Now, perform the division. Only the result is shown below.
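As a cross-check on the hand procedure above, here is a minimal sketch of synthetic division in Python. The coefficients [1, -2, -5, 6] are those of the cubic factor x^3 – 2x^2 – 5x + 6 from the practice problem (after factoring out x^2), and -2 is the given root; both are restated here only for illustration:

```python
def synthetic_division(coeffs, root):
    """Divide a polynomial by (x - root) using synthetic division.

    coeffs lists the polynomial's coefficients from the highest-order term
    to the constant term. Returns (quotient_coeffs, remainder); the
    remainder is zero exactly when `root` is a root of the polynomial.
    """
    quotient = [coeffs[0]]          # carry down the leading coefficient
    for c in coeffs[1:]:
        quotient.append(c + root * quotient[-1])  # multiply by the root, then add the column
    return quotient[:-1], quotient[-1]

# Cubic factor from the practice problem: x^3 - 2x^2 - 5x + 6, root at x = -2.
q, r = synthetic_division([1, -2, -5, 6], -2)
print(q, r)   # [1, -4, 3] 0  ->  quotient x^2 - 4x + 3, remainder 0
```

The quotient coefficients [1, -4, 3] are exactly the numbers that end up under the bracket in the hand procedure, which leads directly to the factored form stated next.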
Thus, the polynomial can be expressed as follows: x^2(x + 2)(x^2 – 4x + 3). Factor the quadratic: x^2 – 4x + 3 = (x – 1)(x – 3). The roots of the polynomial are thus 0 (which occurs "twice"), 1, –2, and 3. Asymptotes of Rational Functions One feature of rational functions that is worth noting is the presence (or absence) of asymptotes: lines that the function gets arbitrarily close to but never touches or crosses. Consider the simple rational function f(x) = 1/x; a plot of this function is shown below. Note that f(x) is defined for all x except zero--and this manifests itself as the function getting arbitrarily close to the y-axis but never touching it. In other words, pick your value of x; no matter how small it is, f(x) is defined as long as x isn't zero. Thus, the y-axis (x = 0) is a vertical asymptote of the function. Furthermore, note that no value of x, no matter how large, can make the function equal to zero. Nevertheless, the larger x is, the closer f(x) gets to the x-axis. Thus, the x-axis (y = 0) is a horizontal asymptote of f(x). These asymptotes are added to the graph below. Asymptotes can be any line on the coordinate plane, and finding the expressions for those lines can sometimes be difficult. Simple cases, like the one above, also crop up. Note that whenever the domain (or range) has the form (a, b) and (b, c), an asymptote exists for the independent (or dependent) variable value b. Note that for f(x) = 1/x, for example, the domain is (–∞, 0) and (0, ∞). Thus, an asymptote exists at x = 0. (Note also that not all asymptotes correspond to functions! Any asymptote x = a is a line, but it is not a function.) An additional point to note about asymptotes is that the function may actually cross an asymptote at some point, but this crossing is not related to the part of the function that exhibits asymptotic behavior. That is, the function might approach the asymptote as the independent variable gets very large, but it might also cross the asymptote at a small value of x, like x = 0. This crossing does not mean the asymptote doesn't exist. Practice Problem: Graph the function and find its asymptotes. You need not determine the equations of those asymptotes. Solution: First, graph the function. Clearly, the function has an asymptote at x = 1: this corresponds to the root of the polynomial in the denominator of the function. Slightly less obvious, however, is the presence of another, "diagonal" asymptote. As it turns out, this asymptote corresponds to the line y = x + 1 (how precisely this is calculated is beyond the scope of this article). A non-vertical, non-horizontal asymptote is called a slant asymptote.
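The practice function itself is not reproduced in the text, but a function consistent with the stated asymptotes (vertical at x = 1, slant asymptote y = x + 1) is f(x) = x^2/(x – 1). Under that assumption, the slant asymptote is simply the quotient from dividing the numerator by the denominator, as the short sketch below shows; the leftover remainder term vanishes as |x| grows.

```python
# Illustrative only: find a slant asymptote by polynomial division.
# Assumed example function: f(x) = x^2 / (x - 1).
import numpy as np

numerator = [1, 0, 0]      # x^2
denominator = [1, -1]      # x - 1

quotient, remainder = np.polydiv(numerator, denominator)
print(quotient)    # [1. 1.]  ->  y = x + 1, the slant asymptote
print(remainder)   # [1.]     ->  leftover 1/(x - 1), negligible for large |x|
```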
There's 100 times less of a radioactive element on the ocean floor than expected. According to commonly accepted theory, heavy elements are cooked inside stars and distributed via supernova explosions. One particular unstable isotope, Plutonium-244, with a half-life of 81 million years, should be a good tracer of supernova explosions in the earth's vicinity. Since the earth is believed to have formed 4.5 billion years ago, any primordial Pu-244 should be long gone. New Pu-244 should have come from supernovae since earth's birth. But there's a problem. PhysOrg starts an article with the rumble of a paradigm shaking: "Scientists plumbing the depths of the ocean have made a surprise finding that could change the way we understand supernovae, exploding stars way beyond our solar system." What happened? Dr. Anton Wallner of Australian National University and 12 colleagues went looking for Pu-244 on the ocean floor, thinking there should be some specks from supernova explosions during the past 100 million years. Earth should receive a sprinkling of this isotope because, according to theory, abundances of heavy elements should reach a steady state in interstellar dust. "We've analysed galactic dust from the last 25 million years that has settled on the ocean and found there is much less of the heavy elements such as plutonium and uranium than we expected. The findings are at odds with current theories of supernovae, in which some of the materials essential for human life, such as iron, potassium and iodine are created and distributed throughout space.... We found 100 times less plutonium-244 than we expected," Dr Wallner said. Scratching his head, Wallner wondered if theories of nucleosynthesis of heavy elements are wrong. Maybe it takes the collision of neutron stars to form this isotope. Where do large radioisotopes come from? "The fact that these heavy elements like plutonium were present, and uranium and thorium are still present on earth suggests that such an explosive event must have happened close to the earth around the time it formed," said Dr Wallner. "Radioactive elements in our planet such as uranium and thorium provide much of the heat that drives continental movement, perhaps other planets don't have the same heat engine inside them," he said. His findings not only question nucleosynthesis, therefore, but add an additional constraint on habitability. Without the radiogenic heat to drive plate tectonics, it is unlikely a planet would be suitable for complex life. Wallner et al.'s study is published in Nature Communications. We offer this astro-geo-physical tidbit to individuals who may wish to explore the implications. One possibility Wallner did not think to consider was whether the earth is younger than he assumes. Review of "Is Genesis History?", which premiered nationally in theaters in a one-night Fathom Media event on Feb 23. It's very difficult to get a hearing for intelligent design these days, let alone Genesis 1-11. And even if you got a public platform for Genesis, limiting it to a rational discussion of creation in six literal days and a global flood would seem miraculous. But that's what Compass Cinema pulled off in secular theaters around the country for a one-night film event by Fathom Media. Some towns had to open additional theater space because of the demand. That happened in Littleton, Colorado, I know, and in my hometown, a second theater was added.
Both filled up, indicating significant interest in the subject.On ContentThe film consists largely of conversations by Del Tackett with leaders in the Biblical creation movement about scientific evidence supporting the historicity of Genesis accounts of Creation and the Flood. The conversations occur at Grand Canyon, in museums and zoos, on a dinosaur dig and other locations. Del is founder of The Truth Project, a popular worldview apologetics course. In 2008, Del was among two dozen Bible scholars who participated in a special scholar’s rafting trip through the Grand Canyon sponsored by Canyon Ministries. I was on that trip and got to know Del, finding him to be a humble, godly, and very intelligent man, an excellent teacher who is genuinely interested in scientific questions. It appears that trip was very stimulating for him and all the other Bible scholars. We all saw profound evidence for the Flood with our own eyes as veteran guide Tom Vail steered us down the river. Details of the evidence were expounded by PhD geologist Andrew Snelling, who is also prominent in the film. Perhaps that trip was a turning point in Del’s thinking to take him beyond mere Biblical apologetics into full embrace of the historical Genesis.Tom Vail and Del Tackett conversing in Grand Canyon, 6/28/08.Photo by David CoppedgeThe experts interviewed—all credentialed scientists and scholars— are listed on the film’s website. Topics range from strata in the Grand Canyon, the nature of the Hebrew text, the extent of variability of original created kinds, the Biblical epochs contrasted with secular geological epochs, fossils and soft tissue in dinosaur bones, questions in recent-creation astronomy, world history after the Flood, and more. The variety of topics (necessarily covered briefly though sufficiently for film) provides a well-rounded answer to the point of the film: can Genesis be reasonably considered a true history of the world, given that the mainstream believes in a big bang and billions of years? Tying each scientist’s answer together is recognition of the importance of paradigms. The evidence does not speak for itself; it is always evaluated through a paradigmatic framework, especially for historical questions that cannot be repeated in a lab. Genesis provides an eyewitness account of earth history by the Creator himself. Secular science, without that advantage, constantly changes its stories. Several of the scientists remarked that what they were taught in school as fact is no longer believed. In conclusion, a pastor connects Genesis to the gospel and urges building one’s view of world history on the Bible’s reliable record instead of the shifting sands of science.On DeliveryProduction quality was OK but not great. Being used to the top-notch Illustra Media films, I thought the energy level was somewhat low, the music mediocre, and much of the scenery poorly shot. There was not a trajectory of interest leading to a climax; just a series of interviews at about the same energy level throughout. For its purposes, though, the producer needed to concentrate on the facts being shared; the audience benefited from each expert having enough time to explain his views. Each of the experts delivered key points with authenticity and credibility. Many of the face shots were a bit too close for my comfort, and often shaky, but not to a disturbing degree. Graphical elements were few. The interviews were tied together with pencil drawings that merged into live action, with a few others used to illustrate points. 
A few scenic shots and drone shots were eye-catching, but I could imagine someone watching the film once, and then listening to the soundtrack alone the second time. It was a film for the ears more than for the eyes.Line drawing from “Is Genesis History?” (Compass Cinema)On ImpactAs a presentation of the young-earth creationist view, the film was probably more effective by being low-key, fact-rich, and personalized than by trying to generate artificial interest with special effects, dramatic music and emotion. It was clear these were reasonable men, not scientific renegades or nuts as opponents are prone to portray them (see ridicule in the Baloney Detector). They all have PhDs from secular universities in their respective fields.Even though it was low-key, the film had very interesting moments. From reactions I sensed around me, the dinosaur soft-tissue demonstration by Kevin Anderson, when he pulled on stretchy tissue found inside a Triceratops horn that he and Mark Armitage had dug out in Montana with their own hands, was a high point. The images seemed to elicit gasps of astonishment, illustrating that facts of sufficient import need no extra dressing. The widespread flat layers of strata pointed out by geologists Steve Austin and Andrew Snelling also had visual punch for the catastrophist/Flood position. The audience was probably also surprised when Snelling revealed results of radiometric ages from the same rock he had collected that differed by millions of years depending on the method used. I thought Tackett’s opening was very effective. He stands in a deep canyon, hiking along a stream, sensing the vast spans of time that must have passed, only to reveal that the whole canyon was younger than he was! As the drone camera backs away, he explains that the canyon formed within a couple of days in a catastrophic mudflow at Mt. St. Helens since the 1980 eruption. The similarities to Grand Canyon in the subsequent episode are apparent, showing that you can’t always trust your senses if you didn’t know the true story of what happened. Later, Kurt Wise applied that point well to warn of flaws in uniformitarian interpretations of present processes.Group led by Dr. Steve Austin into the “Mini-Grand Canyon” at Mt. St. Helens, Aug. 4, 2012Photo by David CoppedgeAudience responses I heard outside the theater were uniformly positive. Everyone was smiling and commenting that they thought it was really good and were glad they came. There’s a lot of science in this film without being technical. Each viewer probably had their own favorite moments, whether the towering dinosaur reconstructions, swimming sharks, stars and galaxies, Grand Canyon, or a fossil dig in progress. I would like to see two impacts of this production: first, to encourage pastors to take a bold stand for Genesis as “true to what is there,” and second, to encourage young budding scientists to follow in the footsteps of these men whose dedication has given many Christians confidence for having reasonable faith, because God’s word is trustworthy. It fits the world as we see it. It’s history; His story.Update 2/24/17: Just got word that there will be encore showings on March 2. See the website for details. This indicates that interest was high. The publicity team says, “Our premier last night vaulted Is Genesis History to one of Fathom’s best releases!”It was good to see a spirit of unity among the participating scientists. They come from different organizations; sometimes that can result in divided loyalties. 
They also avoided infighting over different young-earth models, focusing instead on points of agreement. The producer and participants also wisely avoided disparaging comments about theistic evolutionists or old-earthers. Hopefully that will encourage viewers of those persuasions to consider the evidence itself. On the day of the showing, Paul Nelson felt it necessary to issue a "dissent" about his part in the film. On Evolution News & Views, he claims his views were misrepresented, mistakenly portraying a false dichotomy between old-age secularism and young-earth creationism as if those were the only two options. While his feelings are understandable and his arguments sound, I'm not sure it was necessary or helpful to call this a "dissent." He could have called it a "clarification." Nothing Paul says in the film is false; it's only incomplete. The word "dissent" appears to put him at odds with not only the producer, but with Del Tackett and all the other honorable scientists who appear in the film. Paul is a good, wise and honorable teacher himself; let's hope this issue will be resolved to everyone's satisfaction in the DVD version. I'm glad to know many of the scientists in the film personally, and even more glad to know that many of them are supportive of Creation-Evolution Headlines. If you missed the film, I hope this review encourages you to see it if and when it comes out on DVD. —David Coppedge, reviewer
Pulp-Based Computing: While there's little information on these projects just yet, one thing is clear. The folks in MIT's Media Lab Fluid Interfaces Group are exploring electrically active inks and fibers during the papermaking process to create a new form of paper-based computing. Apparently the paper would react in the same way as regular paper; however, it would also carry digital information. While the project is only in its early stages and appears to be hooked up to a basic Arduino prototyping platform, theoretically this could be used to create a new type of Wacom tablet. Remember when Steven Levy wrote about losing his MacBook Air? A paper interface would take some serious getting used to. Siftables: Created by David Merrill and Jeevan Kalanithi, Siftables is a series of blocks that contain built-in motion sensors, graphical displays and wireless communication. The blocks can be programmed to interact with digital information and media to form a collective interface. Siftables have been used to create art displays, painting tools, calculators, games and even a music sequencer. Bug Labs also offers a similar open source block system for modular device interfaces. For more on alternative interfaces featured during 2009, check out our articles on the BiDi screen and the wearable Internet. Perceptive Pixel Multi-Touch Wall (Jeff Han's new project) and Microsoft Surface: In the world of alternative interfaces, these two workstations are extremely well known. They are certainly not the inexpensive, mainstream touch interfaces we'd hoped for; their size and price make them unobtainable to the average user. However, for commercial uses, they've certainly got that wow factor. The products are used for storyboarding, geo-spatial command, broadcast media, museum exhibits, and hotels, and Surface is even in Disneyland's Tomorrowland.
Scratch Input: Recently featured in Technology Review for his presentation at the SIGGRAPH Conference, Carnegie Mellon Ph.D. student Chris Harrison created a gestural input interface using existing surfaces and an acoustic input technique. In other words, Harrison's interface uses scratches to communicate with his machine. By taping a modified stethoscope to a wall, Harrison got users to perform six scratch input gestures at about 90% accuracy with less than 5 minutes of training. If Scratch Input were utilized by a mobile manufacturer, a phone owner could simply rest their device on a tabletop and use it to scribble out messages. Dana Oshiro. Editor's note: This story is part of a series we call Redux, where we'll re-publish some of our best posts of 2009. As we look back at the year – and ahead to what next year holds – we think these are the stories that deserve a second glance. It's not just a best-of list, it's also a collection of posts that examine the fundamental issues that continue to shape the Web. We hope you enjoy reading them again and we look forward to bringing you more Web products and trends analysis in 2010. Happy holidays from Team ReadWriteWeb! Ever since Jeff Han demoed his Multi-Touch Workstation at the 2006 TED Conference, the world has been waiting for a high-resolution sensory work experience. As a generation of hunched night creatures with intimate knowledge of our chiropractors, we've suffered and conformed to our traditional interfaces for too long. Touch was the future of workstations. But as articulated by ReadWriteWeb, the upcoming Apple tablet is not the workstation of the near future. It simply isn't practical. For those of us who still want to gawk at the cool regardless of its practicality, here is an assortment of 2009's most interesting interfaces. Sixth Sense: Sixth Sense is an extremely inexpensive interface ($350 to build the prototype) and it consists of some colored finger markers, a projector, and a camera on a necklace. Demoed at the TED conference, this interface has amazing potential. We reviewed this product as part of our post The Wearable Internet Will Blow Mobile Phones Away. Given Nikon's release yesterday of the first camera with a built-in pico projector and Mobileburn's demo of the Samsung Anycall Show phone, these little projectors are about to start popping up everywhere. For Minority Report fans, we may actually see these projector-based interfaces used for everyday tasks; however, it's more likely they'll be used to produce amazing entertainment for gamers. Alex is founder of BuildingGreen, Inc. and executive editor of Environmental Building News. He also wrote Insulation: The BuildingGreen Guide to Insulation Products and Practices, which provides in-depth guidance on the selection of insulation materials. To keep up with his latest articles and musings, you can sign up for his Twitter feed. I'm often asked the question, "How much insulation should I install in my house?" It's a great question. Let me offer some recommendations: First of all… it depends. It depends to a significant extent on where you live.
And it depends on whether we’re talking about a new house or trying to squeeze insulation into an existing house.To simplify the discussion, let’s assume, for the time being, that we’re talking about new constructionAs for location, I’ll provide recommendations for three different climates, based on the U.S. Department of Energy (DOE) and International Energy Conservation Code (IECC) climate zones. These DOE climate zones range from Zone 1 at the extreme southern tip of Florida, to Zone 7, which covers the tip of Maine, stretches across the northern reaches of Michigan, Wisconsin, Minnesota, and North Dakota, and includes a few high-elevation places in the Rockies (see map). In my recommendations, I group these into three larger zones for simplicity.Cold climates: Zones 5-7Zones 5-7 cover much of the northern half of the U.S., from roughly the Mason-Dixon Line at the East Coast across the northern third of Missouri and the northern edge of eastern Kansas, then dipping south in the higher-elevation Plains States through northern New Mexico, northern Arizona, nearly all of Nevada (except the Las Vegas area), and the northeastern corner of California and the eastern three-quarters of Oregon and Washington.For these locations, I follow the widely quoted recommendations from Building Science Corporation and aim for the 5-10-20-40-60 rule. These numbers refer to the R-value recommendations for windows, foundation slabs, foundation walls, above-ground walls, and attics (or roofs), respectively.These recommendations are for “true” R-values, not the nominal values listed on insulation packaging. For example, if you install R-19 fiberglass batts in 2×6 frame walls, with the studs 16 inches on-center, double top-plates, and other elements of “standard” framing, the actual R-value of the whole wall with the R-19 insulation will be about R-15. The whole-wall R-value is lower because of thermal bridging through the wood framing.To achieve R-40 in the walls requires a lot of insulation — far more than is found in standard construction. This level of insulation, if combined with strategies for minimizing air leakage, will result in a house that will be affordable to heat even if energy prices double or triple. And if combined with some passive solar heating will result in a house that should never come close to freezing in winter, even if the heat is turned off.With window R-values, the recommendation refers to the “unit R-value,” a measure that averages the center-of-glass R-value and the R-value at the window edges — where the heat loss is greater (at least with high-performance windows). These unit R-values are the inverse of the U-factors listed on NFRC (National Fenestration Rating Council) labels found on most new windows: R = 1/U.Hot climates: Zones 1-2Zones 1-2 include the hottest areas in the U.S., covering most of Florida and a band west to central Texas, as well as southern Arizona and the Imperial Valley of extreme southeastern California.Here, I recommend a 3-5-10-20-60 rule: R-3 windows, R-5 under slabs and for any below-grade foundation walls, R-10 for above-grade foundation walls and slab perimeter (full foundations are rare in these climates), R-20 for above-ground walls, and R-60 for attics. These recommendations come from an informal conversation with John Straube of Building Science Corporation. Again, these are true R-values (unit values for windows).It will surprise some to see the recommendation for attics to be the same as in cold climates. 
This is because of the difference in temperature (delta-T) between the living space and the attic on a hot summer day can be as high as wintertime delta-T in a cold-climate between indoors and outdoors. With windows, I further recommend a solar heat gain coefficient (SHGC) of 0.3 or lower to minimize unwanted solar gain.Moderate climates: Zones 3-4Zones 3-4 include much of the southern half of the country, with the boundary between Zones 4 and 5 dipping south across the center of New Mexico and Arizona. This moderate region excludes Florida and the Gulf Coast, but includes most of California and the western edge of Oregon and Washington.For these locations, I recommend intermediate insulation values between those for cold climates and hot climates. I suggest a 4-5-10-30-60 rule: R-4 windows, R-5 under slabs, R-10 foundation walls or slab perimeter, R-30 above-grade walls, and R-60 in the attic or roof.What about existing houses?In new construction, the incremental cost of increasing insulation levels are relatively modest. With existing houses, retrofit insulation costs are usually much higher, so it is usually difficult to justify such high insulation levels. The exception is attics, where adding lots of additional insulation is usually quite affordable.So, in existing homes, determining reasonable insulation levels is project-specific. In a full gut-rehab (where the house is taken down to the structure, or the frame is opened up on either the interior or exterior), achieving close to the recommended insulation levels for new construction may be possible (though higher costs for extending window and door jambs and, sometimes, roof overhangs also need to be considered).And with windows, whether to replace or improve existing windows is a key question. Look for recommendations in future blogs. I’ve now had a year with the Geyser heat-pump water heater (HPWH). With the exception of the puddle on the floor in July 2011, it has performed consistently.Its performance has not been thrilling, though. In the summer, it was making hot water at about 0.13 – 0.15 kWh/gallon, with incoming water in the mid-60°Fs and basement air temperature around 70°F. In the winter, with basement temperatures in the low to mid 50°Fs, and incoming water at 50°F or a bit below, this consumption ratio increased to 0.25 kWh/gallon.I switched to using only the upper electric element in mid-January 2012, and the consumption ratio was 0.31 kWh/gallon, so the HPWH was saving about 20% — actually more, since the HPWH was heating the entire tank, and the electric element only was heating the upper 30% of the tank. This was verifiable by the way with the infrared camera — a sharp temperature gradient below the element location. Water heating strategiesTo sum up how I’m thinking on how to make domestic hot water, given my preference to think in terms of electrically powered buildings to mate with renewable power generation:– Low DHW users, say up to 20 gallons/day, should use electric resistance in either a superinsulated tank or maybe distributed instantaneous electric heaters. (Caveat emptor: lots and lots of amps!)– Medium DHW users, say 20 – 50 gallons/day, should consider a heat-pump water heater. Pick the highest efficiency and one with a large tank, which keeps the electric backup off.– Large users of DHW should consider a solar thermal system. Look at the Wagner system, which is a clever packaged drainback system, as one possibility. 
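The "true" whole-wall R-value point made earlier (an R-19 batt wall performing closer to R-15) comes from area-weighting the insulated cavities against the wood framing that bridges them. The sketch below is only illustrative; the 25% framing fraction and the path R-values are assumptions chosen for the example, not figures from the article.

```python
# Illustrative parallel-path estimate of a whole-wall R-value with thermal bridging.
framing_fraction = 0.25    # studs, plates, and headers as a share of wall area (assumed)
r_framing_path = 9.0       # wood-stud path incl. sheathing, drywall, air films (assumed)
r_cavity_path = 21.0       # R-19 batt path incl. the same extra layers (assumed)

# Heat flows through both paths in parallel, so area-weight the U-values (U = 1/R),
# then convert back to a single whole-wall R-value.
u_wall = framing_fraction / r_framing_path + (1 - framing_fraction) / r_cavity_path
print(f"Whole-wall R-value: {1 / u_wall:.1f}")   # roughly R-15/16, well below the nominal R-19
```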
A HPWH is also a dehumidifier. The other thing we've learned with the Stiebel Eltron HPWH is the effect it has on the basement humidity. We know a HPWH will both cool and remove moisture from the air, but we didn't know whether it would leave that air at a higher or lower relative humidity. It could possibly cool the air and not remove enough moisture to keep the relative humidity from rising as the air was cooled. Image 2 (below) shows data recorded during a 3 1/2 hour run of the HPWH, and the conditions of the air at the start and the end. What we see is that the basement both cools and drops in relative humidity. As my friend and South Mountain colleague John Guadagno says, good stuff, good stuff! The reason it's good is that the moisture content of materials is based on the relative humidity of the surrounding air, and lower moisture content means lower opportunity for mold. I agree with JG! Marc Rosenbaum is director of engineering at South Mountain Company on the island of Martha's Vineyard in Massachusetts. He writes a blog called Thriving on Low Carbon. Better performance from a Stiebel Eltron unit: If I didn't give myself the flexibility with the 85-gallon tank to do the HPWH or add solar thermal hot water, I would have installed a 50-gallon Marathon instead. We have very good data from the Eliakim's Way homes that show about 0.21 – 0.23 kWh/gallon. So over a year I'm not sure my HPWH and the larger tank saved me anything over a smaller electric water heater. One very significant factor is our very low hot water usage of about 13 gallons/day. This means that the HPWH spends a significant portion of its operating time working against the standby losses, which means that it's cycling in the 110°F – 120°F water temperature range, where it is least efficient. And of course that energy is not being used to heat hot water to replace hot water we've used. Where does a heat-pump water heater make sense? It's worth noting that a HPWH takes heat from the house, at least during the heating season, so how you heat the house matters. Here are some cases to consider:
1 – The HPWH is in a basement with a gas furnace and leaky uninsulated ducts that keep the basement at 70°F. The HPWH is operating efficiently because it is taking heat from nice warm air, and that heat is only indirectly getting to the living space. Probably a good application.
2 – The HPWH is in the thermal envelope of a direct gain passive solar house with a wood stove backup. Again, the heat pump is operating in a favorable temperature regime, and the source of the heat is either the sun or firewood. And often during the winter the space may be overheated and the cooling is not objectionable.
3 – The HPWH is in the thermal envelope of an electrically heated house. Each unit of energy removed from the air is replaced by electric resistance heat. Not a good choice.
4 – The HPWH is in the thermal envelope of a house heated with minisplit heat pumps that operate at a COP of 2.5. The HPWH COP of 2 is effectively reduced to 1.4 because of the energy required by the heat pump to offset the cooling effect of the HPWH.
If the house is in heating mode for six months of the year, and the rest of the time the cooling effect of the heat pump is negligible or welcome, then this changes to 1.7.And finally, the more the climate shifts towards being cooling-dominated, the better the HPWH looks. A HPWH in your house in Florida supplies free cooling and dehumidification as it heats water. I have data on another HPWH, the Accelera made by Stiebel Eltron.It was installed late this past winter in a deep energy retrofit that South Mountain Company did on a small house in Chilmark. The basement had about R-20 walls and an R-3 floor. There is a ducted minisplit heat pump air handler and insulated ducts in the basement as well as the Stiebel Eltron.The Stiebel Eltron has an 80-gallon tank which has the refrigerant heating coil wrapped around the outside of the tank beneath the insulation. It has a 1.7-kW backup electric element with a separate thermostat. This unit was set to make 130°F water. We installed a water meter on the cold water inlet and measured the electrical usage with the Powerhouse Dynamics eMonitor. Over the first six months, the household averaged 45 gallons/day of domestic hot water usage.As in other Martha’s Vineyard homes, the incoming water temperature varies, starting at 50°F in early March and rising into the low 60°Fs in August. Basement temperature began in the upper 50°Fs and rose to the upper 60°Fs.The HPWH made 7,980 gallons of hot water and used 477 kWh of electricity, a consumption ratio of 0.060 kWh/gallon — over three times more efficient than the 50-gallon Marathon tanks at Eliakim’s Way, which used 0.20 kWh/gallon over the same months in six houses that averaged 43 gallons per day.This performance is in a whole other ballpark than that of the Geyser. Also, the unit seems to have low standby losses. On days with no usage it was using about 1/2 kWh. My biggest question is, how long will this expensive device (the list price is about $2,600) last? by Robin Allen MSPH, RDN, LDNThe recent foodborne illness outbreaks of E.coli and Norovirus has me greatly concerned. I love dining out but is it safe? How my food is being prepared? And what is the condition of those preparing my food? Have they washed their hands? Are they well? What about all those buffets and potlucks? How long has that food been sitting out? Are other people handling their food safely? I thought I would take this time to write some reminders to keep your holiday safe! The last thing anyone wants is to get sick around the holidays.I attended a #CDCFoodChat Twitter chat on food safety and picked up some good information to share. I highly recommend you read this informative sharing of information. First what has been in the news?There have been outbreaks of Escherichia Coli (E.-coli) in several states. E.coli are a group of bacteria mostly harmless but can cause diarrhea, abdominal cramping, nausea, and vomiting. Food and water contaminated with human or animal feces are the modes of transmission. Swallowing tiny amounts of these foods that have been contaminated (yes it is disgusting) spreads the infection.Outbreaks of E.coli occurred in November 2015 in Oregon, Washington State, California, Illinois, Maryland, Ohio, Pennsylvania, Minnesota, and New York. No specific food has yet been identified for this Illness, but Chipotle Grill appears to be the source of the outbreak. 
Outbreaks of E. coli have been linked recently to rotisserie chicken salad, raw clover sprouts (on two occasions), ground beef, ready-to-eat salad, organic spinach, and Spring Blend. Even my favorite, raw cookie dough, has been associated with an outbreak. Foods most likely to be associated with E. coli include unpasteurized (raw) milk, unpasteurized apple cider, and soft cheeses made from raw milk, as well as undercooked hamburger or a contaminated piece of lettuce. People have also gotten sick by swallowing lake water while swimming, at petting zoos and other animal exhibits, and by eating food prepared by people who did not wash their hands well after using the toilet. How can you minimize your risk for E. coli? Wash, Wash, Wash! Below are tips from Foodsafety.gov.
Wash your hands thoroughly after using the bathroom, changing diapers, and before preparing or eating food.
Wash your hands after any contact with animals, even your pets.
Always wash your hands before preparing and feeding an infant, before touching an infant's mouth, or touching pacifiers or other things that go into an infant's mouth.
Keep all objects that enter infants' mouths (such as pacifiers and teethers) clean. If soap and water aren't available, use an alcohol-based hand sanitizer. These alcohol-based products can quickly reduce the number of germs on hands in some situations, but they are not a substitute for washing with soap and running water.
Follow the clean, separate, cook, chill guidelines found at FoodSafety.gov.
Cook meats thoroughly. Cook ground beef and meat to a temperature of at least 160°F (70°C).
Prevent cross-contamination in food preparation areas. Do not cut vegetables on the same cutting board as raw meat. Thoroughly wash hands, counters, cutting boards, and utensils after they touch raw meat.
Avoid consuming raw milk, unpasteurized dairy products, or unpasteurized juices (like fresh apple cider).
Avoid swallowing water when swimming and when playing in lakes, ponds, streams, swimming pools, and backyard "kiddie" pools.
Another recent foodborne illness has been the norovirus, or Norwalk virus. The norovirus is the most common cause of acute gastroenteritis in the United States and causes about 50% of all food-related illnesses. Norovirus is highly contagious and is usually spread by person-to-person contact. However, norovirus can also be spread by consuming contaminated food or water or touching items that are contaminated. A food worker who comes to work infected with norovirus and handles food can cause or spread the virus. Contamination with norovirus can occur at any point in the food process, when it is being grown, shipped, handled, or prepared. Foods commonly associated with outbreaks of norovirus are produce, leafy greens, ready-to-eat foods handled by infected workers, salads, sandwiches, ice, cookies, fresh fruits, and shellfish (such as oysters). Any food can be contaminated if an infected person has handled it with vomit or feces on their hands. Recently a norovirus outbreak, again associated with Chipotle Grill, sickened 141 Boston College students. The Boston College basketball coach blamed a recent loss on the fact that 8 players were sick with norovirus.
Another outbreak of norovirus occurred in Simi Valley, CA, where 234 people became ill, also associated with Chipotle Grill. According to the Centers for Disease Control and Prevention (CDC), food workers can follow some simple tips to prevent norovirus from spreading:
Avoid preparing food for others while you are sick and for at least 48 hours after symptoms stop.
Wash your hands carefully and often with soap and water.
Rinse fruits and vegetables and cook shellfish thoroughly.
Clean and sanitize kitchen utensils, counters, and surfaces routinely.
Wash table linens, napkins, and other laundry separately.
Here are some more tips from the CDC to prevent foodborne illness from crashing your holiday: Clean, Separate, Cook, Chill, and Stuff with care.
Buffets and the Two-Hour Rule: Perishable foods (like meat and poultry) should not sit at room temperature for more than two hours. Keep track of how long foods have been sitting on the buffet table and discard anything there two hours or more.
Hot and Cold: Keep Hot Foods HOT and Cold Foods COLD. Hot foods on a buffet should be held at 140°F or warmer. Keep hot foods hot with chafing dishes, slow cookers, and warming trays. Do not re-heat food in your slow cooker. Cold foods should be held at 40°F or colder. Keep cold foods cold on a buffet by nesting the serving dishes into bowls of ice. Otherwise, use small serving trays and replace them frequently. If you're transporting cold foods, use a cooler with ice or commercial freezing gel.
Leftovers: Discard all perishable foods (meats, poultry) left at room temperature longer than 2 hours. Immediately refrigerate or freeze remaining leftovers in shallow containers. If you have additional questions about how to store leftovers, download the FoodKeeper app. This app offers storage guidance on more than 400 items and cooking tips for meat, poultry, seafood and eggs.
Don't let foodborne illness be an uninvited guest at your table this holiday season!
References:
http://www.cdc.gov/norovirus/downloads/global-burden-report.pdf (accessed 12-21-15)
CDC Food Safety (accessed 12-21-15)
Foodsafety.gov (accessed 12-21-15)
http://www.foodsafety.gov/keep/foodkeeperapp/ (accessed 12-21-15)
https://www.foodsafety.gov (accessed 12-21-15)
This post was written by Robin Allen, a member of the Military Families Learning Network (MFLN) Nutrition and Wellness team, which aims to support the development of professionals working with military families. Find out more about the MFLN Nutrition and Wellness concentration on our website, on Facebook, on Twitter and on LinkedIn.
The 2018 case counts for infectious syphilis by AHS zone:
South Zone: 31 cases, an increase of 138.5 percent compared to 2017
Calgary Zone: 206 cases, an increase of 7.3 percent compared to 2017
Central Zone: 88 cases, an increase of 266.7 percent compared to 2017
Edmonton Zone: 977 cases, an increase of 305.4 percent compared to 2017
North Zone: 208 cases, an increase of 324.5 percent compared to 2017
In 2018, a total of 1,536 cases of infectious syphilis were reported, compared to 161 in 2014, almost a tenfold increase. The government notes that the rate of infectious syphilis has not been this high in Alberta since 1948. While congenital syphilis cases were rare before the outbreak, there were 22 congenital syphilis cases between 2014 and 2018, one of which was stillborn.
Congenital syphilis, which occurs when a child is born to a mother with syphilis, is a severe, disabling and life-threatening disease. Consistent and correct condom use is an important protection against STIs such as syphilis. As with other STIs, the symptoms of syphilis may not be obvious. Health experts recommend sexually active people, regardless of gender, age or sexual orientation, get tested every three to six months if they: have a sexual partner with a known STI; have a new sexual partner or multiple or anonymous sexual partners; have a previous history of an STI diagnosis; or have been sexually assaulted. It is critical that anyone who is pregnant seeks early prenatal care and testing for syphilis during pregnancy. Anyone experiencing STI-related symptoms should seek testing and speak to a family doctor to find testing and treatment options. EDMONTON, AB – Infectious and congenital syphilis rates have escalated across the province over the past five years, with a sharp increase in 2018. Due to the rapid increase in syphilis cases, Alberta's chief medical officer of health, Dr. Deena Hinshaw, has declared a provincial outbreak and is encouraging Albertans to get tested and protect themselves. "We need to emphasize for all Albertans: Sexually Transmitted Infections (STIs) are a risk to anyone who is sexually active, particularly people who have new sex partners and are not using protection. I encourage anyone who is sexually active to get tested regularly. Anyone in Alberta can access STI testing and treatment for free," said Dr. Deena Hinshaw, Chief Medical Officer of Health. Guwahati: The Indian women's cricket team suffered a five-wicket defeat to England in the second T20 International, surrendering the series with a sixth straight loss in the shortest format. Chasing 112 for an unassailable 2-0 lead in the three-match series, England completed the task in 19.1 overs, holding nerves after losing a few quick wickets. Opener Danielle Wyatt was England's star performer with the bat, top-scoring with an unbeaten 64 off 55 balls. During her stay in the middle, Wyatt struck six boundaries, and was ably supported by Lauren Winfield (29). While Wyatt held one end firm on the way to her fourth T20 half-century, England needed three back-to-back boundaries by Winfield to take the game away from India. Opting to bowl, England produced a brilliant performance to prevent the hosts from putting up a big score at the Barsapara Cricket Stadium, with Katherine Brunt emerging as the most successful bowler. The veteran seamer returned figures of 3/17, sending back stand-in skipper Smriti Mandhana (12) and Jemimah Rodrigues to put India on the backfoot. The wicket of Mandhana was important for England as the opener had powered India to 24 for no loss in 2.3 overs before Brunt had her caught behind. Coming in to bat at one drop, the young Rodrigues (2) did not last long, getting bowled by Brunt. In the next over, the dismissal of Harleen Deol by left-arm spinner Linsey Smith (2/11) left the hosts in a spot of bother at 34 for three. The experienced Mithali Raj, in the last leg of her career, top-scored with 20 off 27 balls, while Deepti Sharma and Bharati Fulmali contributed 18 each.
England were off to a steady start but slow left-armer Radha Yadav did not let the opening partnership flourish, disturbing Tammy Beaumont's stumps in the fifth over. Leg-spinner Poonam Yadav had Amy Jones caught and bowled in a soft dismissal and Ekta Bisht picked up two wickets, including the big one of skipper Heather Knight, to leave the visitors in trouble at 56 for four. But Wyatt and Winfield saw England through with their 47-run partnership for the fifth wicket. India bowled tightly and conceded just three extras in comparison to England's 18. England won the first match by 41 runs. Bryant, the son of former NBA player Joe "Jellybean" Bryant, modeled his game after Michael Jordan and came closer to replicating His Airness's silky offensive style than anyone we've seen. He finished his 20-year career with more points than MJ and stood apart by managing to hit impossible shots from all over the floor, despite having defenses draped over him. What Michael was to Kobe, Kobe became to the next generation of players. One possible sign: The number of guys wearing No. 8, which Bryant wore for the first 10 years of his NBA career, has more than tripled — from seven in 1995-96, the year before Bryant's rookie season, to 23 this season. (This list shows 25 players, but two were cut before they actually played a game for the team they entered the preseason with.) In the second half of his career, Bryant wore No. 24, which was more popular than No. 8 before Bryant donned it; Bryant's adoption of it doesn't seem to have had much influence leaguewide. But even though younger NBA players adopted Bryant's number, few players have adopted his style of play — a ball-dominant one that involved taking tough contested shots inside the arc — as some offenses around the league have become more free-flowing and hyper-efficient. The current players who draw perhaps the most frequent comparisons to Kobe, Thunder guard Russell Westbrook and Raptors swingman DeMar DeRozan (both of whom are from the L.A. area and played there collegiately), each count Bryant as a mentor of sorts and possess a handful of the same skills and flaws that he had. In Westbrook's case, he's so talented that he sometimes can dominate the ball too much — even when he has another superstar, or two, on the court with him. And much like Bryant did, DeRozan makes a living from midrange, a shot that goes against the grain of today's league, where most star wing players have developed a respectable shot from 3-point range. Translation: On any given night, both guys are capable of shooting it less efficiently than other stars because they're taking far tougher shots than just about everyone else. (DeRozan, in particular, has the highest degree of shot difficulty in the NBA among those who've taken at least 200 attempts, according to data from Second Spectrum, which uses high-level tracking equipment in NBA arenas to compile data.) That willingness to launch (and miss) scores of contested shots is vintage Kobe. "I don't care about that crap, and I'm sure he doesn't either," said then-Lakers coach Byron Scott after Bryant broke a record for the most missed shot attempts in NBA history.
"I don't mean to cut you off, but to me, it speaks of his aggressiveness and longevity." It also speaks to his being wired far differently than many other players, who refuse to take shots that have little chance of going down for fear of hurting their field-goal percentages, which factor into future contracts and potential earnings. During the 2015-16 campaign, his farewell season, Bryant attempted more fadeaway jumpers than any guard in the league despite missing 16 games that year. And during the final three-season span of his career, Bryant ranked dead-last among 357 players (those who attempted at least 500 shots total from the 2013-14 season through the end of the 2015-16 season) in Second Spectrum's Quantified Shot Quality metric, which estimates the odds of a shot going in by tracking shot and defender distance. Put another way, this means he took the hardest collection of shots in the NBA in that window. (He also shot worse than expected on those attempts.) It's worth mentioning a couple of things here. First, it's not really fair to focus more on Bryant's misses (especially late in his career, when he was clearly diminished and arguably the worst volume shooter in the league) than his makes — he was an absolutely devastating scorer in his prime — and defensive accomplishments. Secondly, even his missed shots often turned out to be a good thing. Kirk Goldsberry, then of Grantland, created the "Kobe Assist," a metric that highlighted how Bryant's shot attempts attracted so much defensive attention that they opened up easy putback opportunities for his teammates. There's no telling how much more productive Bryant could have been in this era, one in which coaches, teams and even the league itself are more aggressive about resting their players in hopes of safeguarding them from injury. Bryant, of course, famously pushed himself to play through pain, especially during the final days of the 2012-13 season, in which he tore his Achilles tendon while playing enormous minutes during a playoff push. Both the increased focus on efficiency and the new-age strategy of holding players out for rest make it less likely that we'll see another star with such a devil-may-care attitude on scoring and health. On some level, that's what made Bryant's finale — in which he scored 60 points on 50 shots, both NBA records for a player's last game — so fitting. Having the courage to fire up tough shots from all over the floor, and worrying about the statistical consequences later, if at all, doesn't happen much anymore. In fact, it's an attitude that might've gone extinct with Bryant's exit from the league. Retiring a number is the ultimate recognition of a former player's contribution and legacy to a franchise. But for Kobe Bryant, one number apparently doesn't do his years with the Los Angeles Lakers justice: Tonight, he'll become the first player in NBA history to have two different numbers lifted to the rafters by the same team. It's a fitting honor for a man who played more than 1,300 games, scored more than 33,500 points and won five titles for Los Angeles — yet couldn't settle on one number to wear. But if there's one thing we end up remembering the Laker legend for, it should be that he went out as arguably the NBA's last true gunslinger.
OSU redshirt-senior defensive lineman Kosta Karageorge (Credit: OSU Athletics). Ohio State football coach Urban Meyer released a statement Friday afternoon praising the hard work of senior defensive lineman Kosta Karageorge — who was reported missing Thursday morning — and asking for anyone with knowledge of his whereabouts to come forward. "Our thoughts continue to be with the family of Kosta Karageorge and we pray that he is safe and that he is found soon," Meyer said in his statement. "He is a young man who joined the football team in August and was a hard worker on the field and pleasant off the field. He has been an important player in practice for us, right up until the time he was reported missing. If anyone knows anything about his whereabouts, please help his family and contact the authorities." According to a Facebook post from Karageorge's mother, the player — who also wrestled at OSU — was last seen at about 2 a.m. on Wednesday. The post explained that Karageorge's family had traced his phone to the Grandview area of Columbus. The OSU release containing Meyer's statement also had a statement from team physician Dr. Jim Borchers. According to a report by The Columbus Dispatch, Karageorge's sister, Sophia Karageorge, said the family was concerned he might have been suffering from symptoms related to concussions. In his statement, Borchers said he was unable to comment on Kosta Karageorge's health, but reinforced his confidence in OSU's medical practices. "First and foremost, our primary concern is for health, safety and welfare of Kosta," Borchers said in his statement. "While we are not able to discuss or comment about the medical care regarding our student-athletes, we are confident in our medical procedures and policies to return athletes to participation following injury or illness." A tweet sent from Karageorge's personal Twitter account (@kostadinos81) provided a number to contact with any information about where he might be. "Kosta was last seen around 2 am November 26. His family is asking for prayers & any info regarding his whereabouts. Please call: 614-747-1729," the tweet said. Karageorge was listed by OSU among 24 seniors set to be honored during the Buckeyes' matchup with Michigan on Saturday, marking the last game of their regular season and their last game of the season at Ohio Stadium. Kickoff is set for noon.
In accounting, we need to report our net cash. This is cash we actually have free to use. The direct method is a means to prepare our statements with regard to cash. This lesson will define the method and provide an example. Be More Direct! In accounting, there are a couple of ways to prepare a statement of cash flows. A cash flow statement is a financial statement. It is a summary of how changes to balance sheet accounts impact the cash account. This reporting is done for a given period (month, quarter) and shows how cash is being used to fuel operations. The direct method of creating the statement of cash flows calculates a NET cash amount by subtracting operating cash payments from total cash receipts. When using the direct method, you need to list both the sources of cash and the uses for the cash. These items include cash paid to suppliers, wages paid to employees, and cash payments received from customers. The direct method looks at both the Balance Sheet and Income Statement from period to period (say Quarter 3 to Quarter 4) to determine changes in cash flow. Let's take a look at how each piece is evaluated. Recall from accounting that credits to accounts receivable (AR) result from cash collections; debits to AR result from credit sales. Not all sales are paid in cash! Think of it this way: if you had Accounts Receivable of $100 one month, then only $50 the next month, it means that the $50 difference was collected in cash. We'll ignore bad debt and all that for now. In terms of building the statement of cash flows, the following is true of cash received: Cash received = beginning accounts receivable + credit sales – ending accounts receivable. Accounts Payable and Inventory We also have to factor in the cash that we spent. However, since a lot of transactions are purchases on account, we have to look at the accounts payable side of the ledger also! We also need to look at inventory, since we may have purchased inventory during the period, either on credit or with cash. In order to get the costs paid for the cash flow statement, we look at both the Balance Sheet and Income Statement. First, from the Income Statement, we gather the cost of goods sold. We then ADD this amount to the CHANGE in inventory from period to period. However, a lot of inventory is purchased on credit. Therefore, we then SUBTRACT the change in Accounts Payable over the period. Therefore: Payments = (Cost of Goods Sold + Change in Inventory) – Change in Accounts Payable. Why is this? If Accounts Payable went down from period to period, the difference represents extra cash we spent to pay off our debts, so subtracting the negative change increases the payments figure; if Accounts Payable went up, some of our purchases have not yet been paid in cash, so we subtract the increase. Rent can factor into this also. We need to make sure we account for the change in prepaid rent from period to period and ADD that to the current rent expense. Salaries/Cash Paid to Employees Salaries/Wages are more like Accounts Payable in this sense. We need to factor in the wage expense (as we have to pay our employees), but SUBTRACT the change in accrued wages from period to period. Just like Accounts Payable or Accounts Receivable, the change in accrued wages indicates the amount we doled out in cash. This should make much more sense with an example. Really Good Bread, Inc. had the following Balance Sheet and Income Statement. In order to highlight the concepts in this lesson, we added a column for differences. In a real Balance Sheet or Income Statement, this column would not appear.
| Account | Quarter 4 | Quarter 3 | Difference |
| Property, Plant, and Equipment (PP&E) | 950,000 | 1,000,000 | -50,000 |
And now the Income Statement:
| Cost of Goods Sold | 2,750,000 |
Build the Statement of Cash Flows
Now we can put together our statement. It will include cash received, cash paid for inventory, cash paid for rent, cash paid to employees, and a total cash flow from operations.
Cash received from customers
Remember, this is our Sales minus the change in Accounts Receivable. Looking at the Balance Sheet, there was a change from Q3 to Q4 of 50,000 (it dropped). Therefore, cash received from customers is: 3,500,000 – (-50,000) = 3,550,000
Cash paid for inventory
In order to compute cash paid for inventory, we add Cost of Goods Sold PLUS the change in Inventory, MINUS the change in Accounts Payable (inventory rose by 100,000 and Accounts Payable rose by 60,000): 2,750,000 + 100,000 – 60,000 = 2,790,000
Cash paid for rent
This requires adding the rent expense to the change in prepaid rent: 140,000 + 30,000 = 170,000
Cash paid to employees
To get this value, subtract the change in wages payable from the wage expense: 275,000 – 30,000 = 245,000
This gives us enough to build out the statement of cash flows using the direct method. Remember, cash PAID is a negative value since it is cash going out the door!
| Cash received from customers | 3,550,000 |
| Cash paid for inventory | -2,790,000 |
| Cash paid for rent | -170,000 |
| Cash paid to employees | -245,000 |
| Cash flows from operations | 345,000 |
A statement of cash flows is a summary of how the balance sheet accounts impact the cash account. The direct method can be used to generate the statement. Starting from the income statement amounts, it SUBTRACTS increases in Accounts Receivable, Inventory, and Prepaid Rent and ADDS increases in Accounts Payable and Wages Payable to arrive at a net cash amount. In this way, we have a statement that shows a picture of the cash flow from operations that does not include credit transactions.
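The worked example can be reproduced with a few lines of arithmetic. The sketch below is only an illustration of the direct-method formulas from this lesson; the balance-sheet changes that do not appear in the surviving table rows are taken from the calculations in the text above, so treat them as stated assumptions rather than a full ledger.

```python
# Direct-method operating cash flow for the Really Good Bread example.
# Positive changes mean the account balance increased from Quarter 3 to Quarter 4.
sales, cogs, rent_expense, wage_expense = 3_500_000, 2_750_000, 140_000, 275_000
change_ar = -50_000          # Accounts Receivable dropped by 50,000
change_inventory = 100_000
change_ap = 60_000           # Accounts Payable increased by 60,000
change_prepaid_rent = 30_000
change_wages_payable = 30_000

cash_received = sales - change_ar                              # 3,550,000
cash_for_inventory = cogs + change_inventory - change_ap       # 2,790,000
cash_for_rent = rent_expense + change_prepaid_rent             # 170,000
cash_to_employees = wage_expense - change_wages_payable        # 245,000

operating_cash_flow = (cash_received - cash_for_inventory
                       - cash_for_rent - cash_to_employees)
print(operating_cash_flow)   # 345,000
```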
The Indian Wars brought disastrous consequences to the native people of America, who suffered from discriminatory practices on a large scale. At that time, Jackson wrote that Indians were considered to have "no legal rights to any lands" after refusing to settle in specified locations (101). Such hostile behavior turned people's lives into horror, causing numerous casualties and forcing them to abandon their ways of living as the United States continued to grow. The nation's economy was the driving force behind this persecution of the native population. The primary motivation for the conflict was the fact that Native Americans controlled a significant part of the lands that were essential for the United States' economy and expansion (American Yawp). While the encounters turned hostile once settlers began to look for prospects in the West, the fighting broke out mainly between organized forces. The American military units sent to forcibly remove Native Americans from their lands became their main combat opponents, as refusal to leave was met with armed coercion (American Yawp). The outcome was disastrous for the Indians, who lacked the firepower to match their opponents. After many defeats on the battlefield, tribal leaders agreed to peace terms that limited their rights significantly (American Yawp). Native Americans' culture, freedom, and social structures took a substantial hit as a consequence of these events. In conclusion, the Indian Wars that grew out of Americans' drive for expansion led to a massive loss of life among many native tribes. Settlers who sought to explore the West did not plan on sharing the land with the existing communities and asked the government to send the Army for assistance. As an outcome, Native Americans were driven off their lands into reservations, while those refusing to accept such a fate were hunted by U.S. military forces.

Jackson, Helen H. A Century of Dishonor. Digital Scanning, Inc., 2001.

The American Yawp. Stanford University Press, 2019.

Industrialization And Its Effects On The World

Was industrialization good for everyone? If so, why? If not, who benefited from it, and who suffered because of it? The nineteenth century was the period in which a new, industrial society was established. This transformation was driven by the Industrial Revolution, which was complete in England by the 1830s and in France, Germany, and Austria by the 1870s (Locke and Wright 176). As a result of the industrialization process, the strict constraints of pre-industrial society were eliminated, above all the dependence on natural and climatic conditions under which imperfect technologies could not prevent hunger and epidemics (Locke and Wright 180). Therefore, even though industrialization had some short-term drawbacks, such as a decline in employment, from a global viewpoint it was to the advantage of all. Population growth stimulated the development of the economy. The industrial state can be characterized by the emergence of a large working class. In the 1860s and 1880s, political parties took shape in most European countries and turned into mass political organizations that included workers, who benefited from them (Cleveland para. 20). The first Liberal Party was formed in England in 1861 (Locke and Wright 183). Two-party systems emerged in England and the USA: liberals and conservatives in the UK, and the Republican and Democratic parties in the USA (Locke and Wright 184).
A two-party system was essential for implementing the changes that made the nation's political evolution stable and predictable. The prevalence of capital exports over product exports was a defining trait of industrialization. These processes led to the internationalization of economic life (Cleveland para. 12). In turn, this was the impetus for lifestyle changes: the appearance of cars, trams, telephones, and cinema changed people's way of life.

Cleveland, Grover. Veto of the Texas Seed Bill. 1887.

Locke, Joseph, and Ben Wright. The American Yawp: A Massively Collaborative Open U.S. History Textbook, Vol. 1: To 1877. Stanford University Press, 2019.

Globalization And Its Impact On The World

A phenomenon that gathered speed after World War II, globalization has tremendously impacted the international economy, society, and culture by enabling greater interconnectedness and the cross-border exchange of people and ideas. Globalization is a complex phenomenon that has benefited developed countries economically while distributing wealth unevenly to underdeveloped nations and disenfranchising people inside rich nations. Increased cross-border trade in products and services is one noticeable effect of globalization, which helps developing nations that are open to trade and international investment experience economic progress. Less developed countries in Africa and Latin America, however, have not seen the same economic advantages during their development as more developed nations, and poverty and its effects continue to pose a significant challenge in these areas. Goods Across the World by Bridgette Byrd O'Connor highlights the impact of globalization on the retail sector. Cross-border trade has experienced substantial expansion, enabling retailers to source goods globally (O'Connor, 2019a). Outsourcing retail jobs to countries with lower labor costs has resulted in unemployment among retail workers, despite the benefit of a wider selection of products at lower prices for consumers. The World Trade Organization (WTO), established in 1995, aims to promote international trade and remove trade barriers among nations. The WTO has effectively decreased tariffs and other trade obstacles, but some argue it prioritizes developed nations over developing ones (O'Connor, 2019c). Developing countries continue to rely on primary commodity exports, such as oil and minerals, while developed nations profit from the trade of value-added products. The phenomenon of the "spiky world" provides evidence for the assertion that globalization has not yielded uniform benefits for all individuals. According to O'Connor's (2019b) argument in "Is the World Flat or Spiky?", globalization has resulted in the clustering of economic activity in select global cities, including New York, London, and Tokyo. These cities are now global hubs for business, innovation, and culture, drawing highly skilled and accomplished individuals from diverse regions (O'Connor, 2019b). Underdeveloped regions in developing countries, meanwhile, remain largely left behind by the global economy. On the other hand, with the growing interconnectedness of the world, various problems appear that must be solved by joint effort. First of all, this is reflected in the growing awareness of the need to implement a more systematic and safe environmental policy (Dalby, 2013). In conditions of strong interdependence in the world, it is necessary to take into account how each aspect of activity affects the environment (Dalby, 2013).
Many opponents of globalization point out that people face problems caused by it that can lead to a significant deterioration in their lives (Pinker, 2018). In fact, however, the world is developing rapidly, and the health and happiness of people in many parts of the world are improving (Pinker, 2018). This is because populations all over the planet actively interact and share knowledge, which helps them maintain stability and security. We should not forget, though, that any human action has consequences for the environment; it is therefore important to preserve the natural balance. In conclusion, globalization has brought significant changes and advancements to our world. Globalization has enabled international trade, cultural exchange, and technological progress, increasing prosperity and interconnectedness. Nevertheless, it has intensified economic disparity, political unrest, and ecological deterioration. Policymakers must consider the adverse effects of globalization and strive to establish a fair and sustainable global framework. Individuals should promote ethical consumption and advocate for social and environmental justice. Solving the challenges presented by globalization is necessary to establish a fair and impartial world for everyone.

Dalby, Simon. Geographies of Global Environmental Security. Falkner, 2013.

O'Connor, Bridgette Byrd. "Goods Across the World." World History Project, 2019a.

O'Connor, Bridgette Byrd. "Is the World Flat or Spiky?" World History Project, 2019b.

O'Connor, Bridgette Byrd. "WTO Resistance." World History Project, 2019c.

Pinker, Steven. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Penguin, 2018.
Relativity, theory of

The theory of relativity is an approach for studying the nature of the universe. It was devised by German-born American physicist Albert Einstein (1879–1955) in the first quarter of the twentieth century. The theory is usually separated into two parts: the special theory and the general theory. The outlines of the special theory were first published by Einstein in 1905 and dealt with physical systems in uniform velocity. (The term velocity refers both to the speed with which an object is moving and to the direction in which it is moving.) The theory applies, for example, to physical events that might take place in a railroad train traveling down a track at a constant 50 miles (80 kilometers) per hour. The general theory was announced by Einstein in 1915. It deals with physical systems in accelerated motion—that is, in systems whose velocity is constantly changing. The general theory could be used to describe events taking place in a railroad train that accelerates from a speed of 50 miles (80 kilometers) per hour to 100 miles (160 kilometers) per hour. Obviously, the general theory applies to a larger category of events than does the special theory and, therefore, has many more applications.

The term classical physics is used to describe a whole set of concepts and beliefs about the natural world held by physicists prior to about 1900. According to classical physics, every effect could be traced to some specific and identifiable cause. If an apple fell out of a tree, then that effect could be traced to some specific cause—in this case, gravitational attraction. Also, physicists believed that physical objects had constant properties that did not change unless they were altered or destroyed. For example, suppose that you had a meter stick that was exactly 1.000 meter long. You could trust that meter stick to find the correct length of a line whether you made the measurement in your laboratory at the university or in an airplane flying at 500 miles (800 kilometers) per hour above Earth's surface.

Even before 1900, though, a few physicists had begun to question some of these assumptions. These physicists based their questions on some very obvious points. Consider, for example, the following scenario: two railroad train cars are traveling next to each other at the same speed. In such a case, a person in one train could look into the windows of the second train and observe the passengers in its cars. To the observer seated in the first train, it appears as if the second train is at rest. Suppose the second train begins to speed up or slow down. In that case, it seems to be moving slowly away from the first train—forward or backward—perhaps at the rate of a few miles (kilometers) per hour. In reality, though, the train is traveling at a speed of 50, 60, or 70 miles (80, 100, or 110 kilometers) per hour or faster.

Before the turn of the twentieth century, a few physicists began to explore the significance of this strange experience of relative motion. In 1895, for example, Irish physicist George Francis FitzGerald (1851–1901) analyzed the effects of relative motion mathematically and came to a startling conclusion. The length of an object, FitzGerald announced, depended on how fast it was traveling! That is, your trusty meter stick might truly measure 1.000 meter (39.37 inches) when it is at rest. But find a way to get it moving at speeds of a few thousand meters per second, and it will begin to shrink. At some point, it may measure only 0.999 meter, or 0.900 meter, or even 0.500 meter.
Even then, the length of the stick would depend on the person doing the measuring. The shrinkage taking place as the speed of the meter stick increases would be noticeable only to someone at rest relative to the meter stick itself. A person traveling with the meter stick would notice no change at all.

The special theory

The mathematics used by FitzGerald to reach this conclusion is beyond the scope of this book. In fact, the details of all theories of relativity are quite complex, and only some general conclusions can be described here. Einstein's work on relativity is of primary importance because he was the first physicist to work out in detail all of the implications of the physical properties of moving systems. He began his analysis with only two simple assumptions. First, he assumed that the speed of light is always the same. That is, suppose you could measure the speed of light in your back yard, on a Boeing 747 flying over Detroit, in a rocket ship on its way to the Moon, or on the outer edges of a black hole. No matter where the measurement is made, Einstein said, the speed of light is always the same. Einstein's second assumption is that the laws of physics are always the same everywhere. Should you someday be able to travel to Mars or to that black hole, you will not have to learn a whole new set of physical laws. They will be the same as those we use here on Earth. How did Einstein decide on just these two assumptions and not other possible assumptions? The answer is that he had a hunch—he made a guess as to what he thought would be most basic about anything we could study in the universe. A part of his genius is that his hunches were apparently correct: a whole new kind of physics, relativistic physics, has been built up on them. And the new science seems to work very well, suggesting that its basic assumptions are probably correct.

Conclusions from special relativity

Einstein was a theoretical physicist; he did not spend any time in laboratories trying out his ideas with experiments. Instead, he tried to determine—using logic and mathematics alone—what the consequences would be of his initial assumptions. Eventually he was able to derive mathematical equations that described the physical properties of systems in motion. Some of the conclusions he drew were the following:
- The length of an object is a function of the speed at which it is traveling. The faster the object travels, the shorter the object becomes.
- The mass of an object is also a function of the speed at which it is traveling. The faster the object travels, the heavier it becomes.
- Time slows down as an object increases in speed.
Think for a moment about the logical consequences of just these three points. First, none of the effects is of much practical importance until an object is traveling close to the speed of light. If you tried to detect changes in length, mass, or time in a moving train, you'd have no success at all. It is not until one approaches speeds of about 167,000 miles (270,000 kilometers) per second (about 90 percent of the speed of light) that such effects are noticeable. But what effects they are! An object traveling at the speed of light would have its length reduced to zero, its mass increased to infinity, and the passage of time reduced to zero. Any clocks attached to the object would stop.
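The article states these conclusions only qualitatively. For readers who want numbers, the short sketch below applies the standard textbook formulas behind all three effects, each of which involves the Lorentz factor; the formulas and the 90-percent-of-light-speed case are standard physics, but the code itself is only an illustration added here.

import math

C = 299_792_458.0  # speed of light in metres per second

def lorentz_factor(v):
    # gamma = 1 / sqrt(1 - v^2 / c^2); it grows without bound as v approaches c.
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

gamma = lorentz_factor(0.9 * C)   # the "about 90 percent of the speed of light" case, roughly 2.29

rest_length = 1.000   # metres (the trusty meter stick)
rest_mass = 1.0       # kilograms
proper_time = 1.0     # seconds on the moving clock

print(rest_length / gamma)   # contracted length: about 0.44 m to an observer at rest
print(rest_mass * gamma)     # relativistic mass: about 2.29 kg
print(proper_time * gamma)   # 1 second on the moving clock spans about 2.29 s for the observer at rest

At everyday speeds the Lorentz factor is so close to 1 that, as the article notes, the effects are undetectable on a moving train.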
Energy and mass

Einstein made one other remarkable discovery in working out the meaning of relativity: he found that the two concepts we think of as energy and mass are really two manifestations of the same phenomenon. This discovery marked a real revolution in thinking. Prior to Einstein's time, scientists thought of mass as being the "stuff" of which objects are made, and they thought of energy as the force that caused matter to move. No one would have imagined that the two had anything at all in common. What Einstein showed was that it was possible to take a piece of matter and convert it all into energy. Or, conversely, one could capture a burst of energy and convert it into a piece of matter. He even developed a formula for showing how much mass is equivalent to how much energy: E = mc², where E is the amount of energy involved, m the amount of mass, and c the velocity of light. The implications of the theory of relativity are unbelievably extensive. Einstein went on to suggest other revolutionary ways of looking at the natural world. For example, scientists had always taken it for granted that the natural world can be described in three dimensions—the three dimensions that we all live in: length, width, and height. All of physics and most of mathematics had traditionally been built on that concept. Einstein suggested that the world had to be viewed in terms of four dimensions: the three dimensions with which we are familiar and time. That is, if we want to study any object in complete detail, we have to be able to state not only its length, its width, and its height, but also its place on the world's time line. That is, the object is traveling through time as we study it; under many circumstances, changes in its place on the world time line must be taken into consideration. In addition, Einstein suggested an entirely new way of thinking about space and time. He said that rather than imagining the universe as the inside of an enormous balloon, we should think about it as consisting of curved surfaces over which light and other objects travel.

Tests of relativity

One of the most remarkable things about Einstein's theories is the speed with which they were accepted by other physicists. As revolutionary as his ideas were, physicists quickly saw the logic of Einstein's arguments. Some physicists and many nonscientists, however, wanted to see experimental evidence in the real world that his ideas were correct. One proof for Einstein's theory is his equation representing the relationship of energy and mass, E = mc². It is upon this principle that nuclear weapons and nuclear power plants operate. But other pieces of experimental proof were eventually discovered for Einstein's theories. One of those was obtained in 1919. Einstein had predicted that light would be bent out of a straight path if it passes near a very massive object. He said that the gravitational field of the object would have an effect on light much as it does on other objects. An opportunity to test that prediction came during a solar eclipse on May 29, 1919. Astronomers waited until the Sun was completely blocked out during the eclipse, then took a photograph of the stars behind the Sun. They found that the stars appeared to be in a somewhat different position than had been expected. The reason for the apparent displacement of the stars' position was that the light they emitted was bent slightly as it passed the Sun on its way to Earth.
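To give a sense of scale for the mass-energy formula above, a one-line calculation (a rough illustration added here, using the rounded value c = 3.0 x 10^8 m/s) shows how much energy is locked in a single gram of matter.

C = 3.0e8  # approximate speed of light, metres per second

def mass_to_energy(mass_kg):
    # E = m * c^2, returned in joules
    return mass_kg * C ** 2

print(mass_to_energy(0.001))  # about 9e13 joules from one gram of matter,
                              # roughly the yield of a 20-kiloton fission explosion

That enormous conversion factor, c squared, is why nuclear reactions release so much more energy than chemical ones.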
Significance of relativity theory

Einstein's theories have had some practical applications, as demonstrated by the use of E = mc² in solving problems of nuclear energy. But far more important has been their effect on the way that scientists, and even some nonscientists, view the universe. His theories have changed the way we understand gravity and the universe in general. In that respect, the theories of relativity produced a revolution in physics matched only once or twice in all of previous human history.
Dark energy, repulsive force that is the dominant component (73 percent) of the universe. The remaining portion of the universe consists of ordinary matter and dark matter. Dark energy, in contrast to both forms of matter, is relatively uniform in time and space and is gravitationally repulsive, not attractive, within the volume it occupies. The nature of dark energy is still not well understood. A kind of cosmic repulsive force was first hypothesized by Albert Einstein in 1917 and was represented by a term, the “cosmological constant,” that Einstein reluctantly introduced into his theory of general relativity in order to counteract the attractive force of gravity and account for a universe that was assumed to be static (neither expanding nor contracting). After the discovery in the 1920s by American astronomer Edwin Hubble that the universe is not static but is in fact expanding, Einstein referred to the addition of this constant as his “greatest blunder.” However, the measured amount of matter in the mass-energy budget of the universe was improbably low, and thus some unknown “missing component,” much like the cosmological constant, was required to make up the deficit. Direct evidence for the existence of this component, which was dubbed dark energy, was first presented in 1998. Dark energy is detected by its effect on the rate at which the universe expands and its effect on the rate at which large-scale structures such as galaxies and clusters of galaxies form through gravitational instabilities. The measurement of the expansion rate requires the use of telescopes to measure the distance (or light travel time) of objects seen at different size scales (or redshifts) in the history of the universe. These efforts are generally limited by the difficulty in accurately measuring astronomical distances. Since dark energy works against gravity, more dark energy accelerates the universe’s expansion and retards the formation of large-scale structure. One technique for measuring the expansion rate is to observe the apparent brightness of objects of known luminosity like Type Ia supernovas. Dark energy was discovered in 1998 with this method by two international teams that included American astronomers Adam Riess (the author of this article) and Saul Perlmutter and Australian astronomer Brian Schmidt. The two teams used eight telescopes including those of the Keck Observatory and the MMT Observatory. Type Ia supernovas that exploded when the universe was only two-thirds of its present size were fainter and thus farther away than they would be in a universe without dark energy. This implied the expansion rate of the universe is faster now than it was in the past, a result of the current dominance of dark energy. (Dark energy was negligible in the early universe.) Studying the effect of dark energy on large-scale structure involves measuring subtle distortions in the shapes of galaxies arising from the bending of space by intervening matter, a phenomenon known as “weak lensing.” At some point in the last few billion years, dark energy became dominant in the universe and thus prevented more galaxies and clusters of galaxies from forming. This change in the structure of the universe is revealed by weak lensing. Another measure comes from counting the number of clusters of galaxies in the universe to measure the volume of space and the rate at which that volume is increasing. 
The goals of most observational studies of dark energy are to measure its equation of state (the ratio of its pressure to its energy density), variations in its properties, and the degree to which dark energy provides a complete description of gravitational physics. In cosmological theory, dark energy is a general class of components in the stress-energy tensor of the field equations in Einstein's theory of general relativity. In this theory, there is a direct correspondence between the matter-energy of the universe (expressed in the tensor) and the shape of space-time. Both the matter (or energy) density (a positive quantity) and the internal pressure contribute to a component's gravitational field. While familiar components of the stress-energy tensor such as matter and radiation provide attractive gravity by bending space-time, dark energy causes repulsive gravity through negative internal pressure. If the ratio of the pressure to the energy density is less than −1/3, a possibility for a component with negative pressure, that component will be gravitationally self-repulsive. If such a component dominates the universe, it will accelerate the universe's expansion.

The simplest and oldest explanation for dark energy is that it is an energy density inherent to empty space, or a "vacuum energy." Mathematically, vacuum energy is equivalent to Einstein's cosmological constant. Despite the rejection of the cosmological constant by Einstein and others, the modern understanding of the vacuum, based on quantum field theory, is that vacuum energy arises naturally from the totality of quantum fluctuations (i.e., virtual particle-antiparticle pairs that come into existence and then annihilate each other shortly thereafter) in empty space. However, the observed value of the cosmological vacuum energy density is ~10⁻¹⁰ ergs per cubic centimetre, whereas the value predicted from quantum field theory is ~10¹¹⁰ ergs per cubic centimetre. This discrepancy of 120 orders of magnitude was known even before the discovery of the far weaker dark energy. While a fundamental solution to this problem has not yet been found, probabilistic solutions have been posited, motivated by string theory and the possible existence of a large number of disconnected universes. In this paradigm the unexpectedly low value of the constant is understood as a result of an even greater number of opportunities (i.e., universes) for the occurrence of different values of the constant and the random selection of a value small enough to allow for the formation of galaxies (and thus stars and life).

Another popular theory for dark energy is that it is a transient vacuum energy resulting from the potential energy of a dynamical field. Known as "quintessence," this form of dark energy would vary in space and time, thus providing a possible way to distinguish it from a cosmological constant. It is also similar in mechanism (though vastly different in scale) to the scalar field energy invoked in the inflationary theory of the big bang. Another possible explanation for dark energy is topological defects in the fabric of the universe. In the case of intrinsic defects in space-time (e.g., cosmic strings or walls), the production of new defects as the universe expands is mathematically similar to a cosmological constant, although the value of the equation of state for the defects depends on whether the defects are strings (one-dimensional) or walls (two-dimensional).
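The −1/3 threshold quoted above follows from the acceleration equation of standard cosmology; the equation below is added here for reference in its textbook form and is not part of the original article.

\[
\frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right),
\qquad
w \;\equiv\; \frac{p}{\rho c^{2}}
\]

For a component with positive energy density that dominates the expansion, the right-hand side is positive (an accelerating expansion) exactly when its pressure is negative enough that w < −1/3; a cosmological constant corresponds to w = −1.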
There have also been attempts to modify gravity to explain both cosmological and local observations without the need for dark energy. These attempts invoke departures from general relativity on scales of the entire observable universe. A major challenge to understanding accelerated expansion with or without dark energy is to explain the relatively recent occurrence (in the past few billion years) of near-equality between the density of dark energy and dark matter even though they must have evolved differently. (For cosmic structures to have formed in the early universe, dark energy must have been an insignificant component.) This problem is known as the “coincidence problem” or the “fine-tuning problem.” Understanding the nature of dark energy and its many related problems is one of the most formidable challenges in modern physics.
Production is a process of combining various material inputs and immaterial inputs (plans, know-how) in order to make something for consumption (the output). It is the act of creating output, a good or service which has value and contributes to the utility of individuals. Economic well-being is created in a production process, meaning all economic activities that aim directly or indirectly to satisfy human needs. The degree to which the needs are satisfied is often accepted as a measure of economic well-being. In production there are two features which explain increasing economic well-being: an improving quality-price ratio of commodities, and increasing incomes from growing and more efficient market production.

The most important forms of production are market production, public production, and household production. In order to understand the origin of economic well-being we must understand these three production processes. All of them produce commodities which have value and contribute to the well-being of individuals. The satisfaction of needs originates from the use of the commodities which are produced. Need satisfaction increases when the quality-price ratio of the commodities improves and more satisfaction is achieved at less cost. Improving the quality-price ratio of commodities is for a producer an essential way to enhance production performance, but this kind of gain, distributed to customers, cannot be measured with production data. Economic well-being also increases due to the growth of incomes that are gained from growing and more efficient market production. Market production is the only production form which creates and distributes incomes to stakeholders. Public production and household production are financed by the incomes generated in market production. Thus market production has a double role in creating well-being: the role of producing and developing commodities and the role of creating income. Because of this double role, market production is the "primus motor" of economic well-being and is therefore the focus of the review here.

As a source of economic well-being

In principle there are two main activities in an economy, production and consumption. Similarly there are two kinds of actors, producers and consumers. Well-being is made possible by efficient production and by the interaction between producers and consumers. In the interaction, consumers can be identified in two roles, both of which generate well-being. Consumers can be both customers of the producers and suppliers to the producers. The customers' well-being arises from the commodities they are buying, and the suppliers' well-being is related to the income they receive as compensation for the production inputs they have delivered to the producers.

Stakeholders of production

Stakeholders of production are persons, groups or organizations with an interest in a producing company. Economic well-being originates in efficient production and it is distributed through the interaction between the company's stakeholders. The stakeholders of companies are economic actors which have an economic interest in a company. Based on the similarities of their interests, stakeholders can be classified into three groups in order to differentiate their interests and mutual relations: customers, suppliers, and the producer community (the labour force, society and owners). The interests of these stakeholders and their relations to companies are described briefly below.
Our purpose is to establish a framework for further analysis. The customers of a company are typically consumers, other market producers or producers in the public sector. Each of them has their individual production functions. Due to competition, the price-quality ratios of commodities tend to improve, and this brings the benefits of better productivity to customers. Customers get more for less. In households and the public sector this means that more need satisfaction is achieved at less cost. For this reason the productivity of customers can increase over time even though their incomes remain unchanged. The suppliers of companies are typically producers of materials, energy, capital, and services. They all have their individual production functions. The changes in prices or qualities of supplied commodities have an effect on both actors' (company and suppliers) production functions. We come to the conclusion that the production functions of the company and its suppliers are in a state of continuous change. The incomes are generated for those participating in production, i.e., the labour force, society and owners. These stakeholders are referred to here as producer communities or, in shorter form, as producers. The producer communities have a common interest in maximizing their incomes. These parties that contribute to production receive increased incomes from the growing and developing production.

The well-being gained through commodities stems from the price-quality relations of the commodities. Due to competition and development in the market, the price-quality relations of commodities tend to improve over time. Typically the quality of a commodity goes up and the price goes down over time. This development favourably affects the production functions of customers. Customers get more for less. Consumer customers get more satisfaction at less cost. This type of well-being generation can only partially be calculated from the production data; how it arises is described later in this article. The producer community (labour force, society, and owners) earns income as compensation for the inputs they have delivered to the production. When the production grows and becomes more efficient, the income tends to increase. In production this brings about an increased ability to pay salaries, taxes and profits. The growth of production and improved productivity generate additional income for the producing community. Similarly, the high income level achieved in the community is a result of the high volume of production and its good performance. This type of well-being generation – as mentioned earlier – can be reliably calculated from the production data.

Main processes of a producing company

A producing company can be divided into sub-processes in different ways; yet the following five are identified as main processes, each with a logic, objectives, theory and key figures of its own. It is important to examine each of them individually, yet as a part of the whole, in order to be able to measure and understand them. The main processes of a company are as follows:
- real process
- income distribution process
- production process
- monetary process
- market value process
Production output is created in the real process, gains of production are distributed in the income distribution process, and these two processes together constitute the production process.
The production process and its sub-processes, the real process and the income distribution process, occur simultaneously, and only the production process is identifiable and measurable by traditional accounting practices. The real process and the income distribution process can be identified and measured by extra calculation, and this is why they need to be analyzed separately in order to understand the logic of production and its performance. The real process generates the production output from input, and it can be described by means of the production function. It refers to a series of events in production in which production inputs of different quality and quantity are combined into products of different quality and quantity. Products can be physical goods, immaterial services and most often combinations of both. The characteristics built into the product by the producer imply surplus value to the consumer, and on the basis of the market price this value is shared by the consumer and the producer in the marketplace. This is the mechanism through which surplus value originates for the consumer and the producer alike. It is worth noting that surplus values to customers cannot be measured from any production data. Instead, the surplus value to a producer can be measured. It can be expressed both in terms of nominal and real values. The real surplus value to the producer is an outcome of the real process, real income, and measured proportionally it means productivity. The concept of the "real process," meaning the quantitative structure of the production process, was introduced in Finnish management accounting in the 1960s. Since then it has been a cornerstone of Finnish management accounting theory (Riistama et al. 1971). The income distribution process of production refers to a series of events in which the unit prices of constant-quality products and inputs alter, causing a change in income distribution among those participating in the exchange. The magnitude of the change in income distribution is directly proportionate to the change in prices of the output and inputs and to their quantities. Productivity gains are distributed, for example, to customers as lower product sales prices or to staff as higher pay. The production process consists of the real process and the income distribution process. For the owner, a result and a criterion of success is profitability. The profitability of production is the share of the real process result the owner has been able to keep in the income distribution process. Factors describing the production process are the components of profitability, i.e., returns and costs. They differ from the factors of the real process in that the components of profitability are given at nominal prices, whereas in the real process the factors are at periodically fixed prices. The monetary process refers to events related to financing the business. The market value process refers to a series of events in which investors determine the market value of the company in the investment markets.

Production growth and performance

Production growth is often defined as an increase in the output of a production process. It is usually expressed as a growth percentage depicting growth of the real production output. The real output is the real value of products produced in a production process, and when we subtract the real input from the real output we get the real income. The real output and the real income are generated by the real process of production from the real inputs.
The real process can be described by means of the production function. The production function is a graphical or mathematical expression showing the relationship between the inputs used in production and the output achieved. Both graphical and mathematical expressions are presented and demonstrated. The production function is a simple description of the mechanism of income generation in a production process. It consists of two components: a change in production input and a change in productivity.

The figure illustrates an income generation process (exaggerated for clarity). The value T2 (value at time 2) represents the growth in output from the value T1 (value at time 1). Each time of measurement has its own graph of the production function for that time (the straight lines). The output measured at time 2 is greater than the output measured at time 1 for both of the components of growth: an increase of inputs and an increase of productivity. The portion of growth caused by the increase in inputs is shown on line 1 and does not change the relation between inputs and outputs. The portion of growth caused by an increase in productivity is shown on line 2 with a steeper slope. So increased productivity represents greater output per unit of input.

The growth of production output does not reveal anything about the performance of the production process. The performance of production measures production's ability to generate income. Because the income from production is generated in the real process, we call it the real income. Similarly, as the production function is an expression of the real process, we could also call it "income generated by the production function". The real income generation follows the logic of the production function. Two components can also be distinguished in the income change: the income growth caused by an increase in production input (production volume) and the income growth caused by an increase in productivity. The income growth caused by increased production volume is determined by moving along the production function graph. The income growth corresponding to a shift of the production function is generated by the increase in productivity. The change in real income thus signifies a move from point 1 to point 2 on the production function (above). When we want to maximize production performance, we have to maximize the income generated by the production function.

The sources of productivity growth and production volume growth are explained as follows. Productivity growth is seen as the key economic indicator of innovation. The successful introduction of new products and new or altered processes, organization structures, systems, and business models generates growth of output that exceeds the growth of inputs. This results in growth in productivity, or output per unit of input. Income growth can also take place without innovation through replication of established technologies. With only replication and without innovation, output will increase in proportion to inputs (Jorgenson et al. 2014, 2). This is the case of income growth through production volume growth. Jorgenson et al. (2014, 2) give an empirical example. They show that the great preponderance of economic growth in the US since 1947 involves the replication of existing technologies through investment in equipment, structures, and software and expansion of the labor force. Furthermore, they show that innovation accounts for only about twenty percent of US economic growth.
In the case of a single production process (described above) the output is defined as the economic value of products and services produced in the process. When we want to examine an entity of many production processes we have to sum up the value-added created in the single processes. This is done in order to avoid the double counting of intermediate inputs. Value-added is obtained by subtracting the intermediate inputs from the outputs. The most well-known and used measure of value-added is GDP (Gross Domestic Product). It is widely used as a measure of the economic growth of nations and industries.

Absolute (total) and average income

Production performance can be measured as an average or an absolute income. Expressing performance both in average (avg.) and absolute (abs.) quantities is helpful for understanding the welfare effects of production. For measurement of the average production performance, we use the familiar productivity ratio:
- Real output / Real input
The absolute measure of performance is obtained by subtracting the real input from the real output as follows:
- Real income (abs.) = Real output – Real input
The growth of the real income is the increase of the economic value which can be distributed between the production stakeholders. With the aid of the production model we can perform the average and absolute accounting in one calculation. Maximizing production performance requires using the absolute measure, i.e. the real income and its derivatives, as a criterion of production performance. The differences between the absolute and average performance measures can be illustrated by the following graph showing marginal and average productivity.

The figure is a traditional expression of average productivity and marginal productivity. The maximum for production performance is achieved at the volume where marginal productivity is zero. The maximum for production performance is the maximum of the real incomes. In this illustrative example the maximum real income is achieved when the production volume is 7.5 units. The maximum average productivity is reached when the production volume is 3.0 units. It is worth noting that the maximum average productivity is not the same as the maximum of real income. The figure above is a somewhat exaggerated depiction because the whole production function is shown. In practice, decisions are made in a limited range of the production function, but the principle is still the same: the maximum real income is aimed for. An important conclusion can be drawn. When we try to maximize the welfare effects of production we have to maximize real income formation. Maximizing productivity leads to a suboptimum, i.e. to losses of income. A practical example illustrates the case. When a jobless person obtains a job in market production we may assume it is a low-productivity job. As a result, average productivity decreases but the real income per capita increases. Furthermore, the well-being of society also grows. This example reveals the difficulty of interpreting a total productivity change correctly. The combination of volume increase and total productivity decrease leads in this case to improved performance because we are in the "diminishing returns" area of the production function. If we are in the "increasing returns" area of the production function, the combination of production volume increase and total productivity increase leads to improved production performance.
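The gap between the two maxima is easy to reproduce numerically. The sketch below uses an assumed S-shaped production function chosen purely for illustration (the 3.0- and 7.5-unit volumes quoted above belong to the article's own figure, not to this curve) and searches a grid for the volume that maximizes average productivity and the volume that maximizes real income.

def real_output(x):
    # Hypothetical curve with increasing and then diminishing returns.
    return 3.0 * x ** 2 - 0.25 * x ** 3

grid = [i / 100 for i in range(1, 1201)]   # candidate input volumes 0.01 .. 12.00

volume_at_max_avg_productivity = max(grid, key=lambda x: real_output(x) / x)
volume_at_max_real_income      = max(grid, key=lambda x: real_output(x) - x)

print(volume_at_max_avg_productivity)   # about 6.0: output per unit of input peaks here
print(volume_at_max_real_income)        # about 7.8: absolute real income peaks at a larger volume

As in the article's figure, real income keeps rising past the point of maximum average productivity, which is why maximizing productivity alone leads to a suboptimum.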
Unfortunately, in practice we do not know which part of the production function we are on. Therefore, a correct interpretation of a performance change is obtained only by measuring the real income change. A production model is a numerical description of the production process and is based on the prices and the quantities of inputs and outputs. There are two main approaches to operationalizing the concept of the production function. We can use mathematical formulae, which are typically used in macroeconomics (in growth accounting), or arithmetical models, which are typically used in microeconomics and management accounting. We do not present the former approach here but refer to the survey "Growth Accounting" by Hulten (2009). We use arithmetical models here because, like the models of management accounting, they are illustrative and easily understood and applied in practice. Furthermore, they are integrated with management accounting, which is a practical advantage. A major advantage of the arithmetical model is its capability to depict the production function as a part of the production process. Consequently the production function can be understood, measured, and examined as a part of the production process. There are different production models according to different interests. Here we use a production income model and a production analysis model in order to demonstrate the production function as a phenomenon and a measurable quantity.

Production income model

The success of a going concern can be gauged in many ways, and there are no criteria that are universally applicable to success. Nevertheless, there is one criterion by which we can generalise the rate of success in production. This criterion is the ability to produce surplus value. As a criterion of profitability, surplus value refers to the difference between returns and costs, taking into consideration the costs of equity in addition to the costs included in the profit and loss statement as usual. Surplus value indicates that the output has more value than the sacrifice made for it; in other words, the output value is higher than the value (production costs) of the inputs used. If the surplus value is positive, the owner's profit expectation has been surpassed. The table presents a surplus value calculation. We call this set of production data a basic example, and we use the data throughout the article in illustrative production models. The basic example is a simplified profitability calculation used for illustration and modelling. Even in reduced form, it comprises all the phenomena of a real measurement situation, most importantly the change in the output-input mix between two periods. Hence, the basic example works as an illustrative "scale model" of production without any features of a real measurement situation being lost. In practice, there may be hundreds of products and inputs, but the logic of measuring does not differ from that presented in the basic example. In this context we define the quality requirements for the production data used in productivity accounting. The most important criterion of good measurement is the homogenous quality of the measurement object. If the object is not homogenous, then the measurement result may include changes in both quantity and quality, but their respective shares will remain unclear. In productivity accounting this criterion requires that every item of output and input must appear in the accounting as homogenous. In other words, the inputs and the outputs are not allowed to be aggregated in measuring and accounting.
If they are aggregated, they are no longer homogenous and hence the measurement results may be biased. Both the absolute and the relative surplus value have been calculated in the example. The absolute value is the difference of the output and input values, and the relative value is their ratio. The surplus value calculation in the example is at nominal prices, calculated at the market price of each period.

Production analysis model

The model used here is a typical production analysis model, with the help of which it is possible to calculate the outcome of the real process, the income distribution process and the production process. The starting point is a profitability calculation using surplus value as a criterion of profitability. The surplus value calculation is the only valid measure for understanding the connection between profitability and productivity, or the connection between the real process and the production process. A valid measurement of total productivity necessitates considering all production inputs, and the surplus value calculation is the only calculation to conform to this requirement. If we omit an input in productivity or income accounting, this means that the omitted input can be used without limit in production without any cost impact on the accounting results.

Accounting and interpreting

The process of calculating is best understood by applying the term ceteris paribus, i.e. "all other things being the same," meaning that the impact of only one changing factor is introduced at a time to the phenomenon being examined. Therefore, the calculation can be presented as a process advancing step by step. First, the impacts of the income distribution process are calculated, and then the impacts of the real process on the profitability of production. The first step of the calculation is to separate the impacts of the real process and the income distribution process, respectively, from the change in profitability (285.12 – 266.00 = 19.12). This takes place by simply creating one auxiliary column (4) in which a surplus value calculation is compiled using the quantities of Period 1 and the prices of Period 2. In the resulting profitability calculation, Columns 3 and 4 depict the impact of a change in the income distribution process on profitability, and Columns 4 and 7 the impact of a change in the real process on profitability. The accounting results are easily interpreted and understood. We see that the real income has increased by 58.12 units, of which 41.12 units come from productivity growth and the remaining 17.00 units from production volume growth. The total increase of real income (58.12) is distributed to the stakeholders of production, in this case 39.00 units to the customers and the suppliers of inputs, and the remaining 19.12 units to the owners. Here we can draw an important conclusion. Income formation in production is always a balance between income generation and income distribution. The income change created in the real process (i.e. by the production function) is always distributed to the stakeholders as economic values within the review period. Accordingly, the changes in real income and income distribution are always equal in terms of economic value. Based on the accounted changes in productivity and production volume, we can determine explicitly which part of the production function production is on.
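Before turning to the interpretation rules below, the balance just described (income generated equals income distributed) can be verified directly from the figures quoted above; the few lines here are only a sanity check of that arithmetic.

productivity_effect = 41.12   # real income growth from productivity
volume_effect       = 17.00   # real income growth from production volume
real_income_change  = productivity_effect + volume_effect              # 58.12

to_customers_and_suppliers = 39.00                                     # distributed through price changes
to_owners                  = 19.12                                     # change in profitability (285.12 - 266.00)
distribution_change        = to_customers_and_suppliers + to_owners    # 58.12

assert abs(real_income_change - distribution_change) < 1e-9            # generation and distribution balance
print(real_income_change, distribution_change)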
The rules of interpretation are the following:

The production is on the "increasing returns" part of the production function when
- productivity and production volume increase, or
- productivity and production volume decrease.

The production is on the "diminishing returns" part of the production function when
- productivity decreases and volume increases, or
- productivity increases and volume decreases.

In the basic example, the combination of volume growth (+17.00) and productivity growth (+41.12) shows explicitly that the production is on the "increasing returns" part of the production function (Saari 2006a, 138–144). Another production model (Production Model Saari 1989) also gives details of the income distribution (Saari 2011, 14). Because the accounting techniques of the two models are different, they give differing, although complementary, analytical information. The accounting results are, however, identical. We do not present the model here in detail; we only use its detailed data on income distribution when the objective functions are formulated in the next section.

An efficient way to improve the understanding of production performance is to formulate different objective functions according to the objectives of the different interest groups. Formulating the objective function necessitates defining the variable to be maximized (or minimized). After that, other variables are considered constraints. The most familiar objective function is profit maximization, which is also included in this case. Profit maximization is an objective function that stems from the owner's interest, and all other variables are constraints in relation to maximizing profits.

The procedure for formulating objective functions

The procedure for formulating different objective functions, in terms of the production model, is introduced next. In the income formation from production the following objective functions can be identified:
- Maximizing the real income
- Maximizing the producer income
- Maximizing the owner income.
These cases are illustrated using the numbers from the basic example. The following symbols are used in the presentation: the equal sign (=) signifies the starting point of the computation or the result of computing, and the plus or minus sign (+ / -) signifies a variable that is to be added to or subtracted from the function. A producer here means the producer community, i.e. the labour force, society and owners. Objective function formulations can be expressed in a single calculation which concisely illustrates the logic of the income generation, the income distribution and the variables to be maximized. The calculation resembles an income statement, starting with the income generation and ending with the income distribution. The income generation and the distribution are always in balance, so that their amounts are equal; in this case the amount is 58.12 units. The income which has been generated in the real process is distributed to the stakeholders during the same period. There are three variables which can be maximized: the real income, the producer income and the owner income. Producer income and owner income are practical quantities because they are addable quantities and they can be computed quite easily. Real income is normally not an addable quantity and in many cases it is difficult to calculate.

The dual approach for the formulation

Here we have to add that the change of real income can also be computed from the changes in income distribution.
We have to identify the unit price changes of outputs and inputs and calculate their profit impacts (i.e. unit price change x quantity). The change of real income is the sum of these profit impacts and the change of owner income. This approach is called the dual approach because the framework is seen in terms of prices instead of quantities (ONS 3, 23). The dual approach has been recognized in growth accounting for a long time, but its interpretation has remained unclear. The following question has remained unanswered: "Quantity based estimates of the residual are interpreted as a shift in the production function, but what is the interpretation of the price-based growth estimates?" (Hulten 2009, 18). We have demonstrated above that the real income change is achieved by quantitative changes in production and that the income distribution change to the stakeholders is its dual. In this case the duality means that the same accounting result is obtained by accounting the change of the total income generation (real income) and by accounting the change of the total income distribution.
- Kotler, P., Armstrong, G., Brown, L., and Adam, S. (2006). Marketing, 7th ed. Pearson Education Australia/Prentice Hall.
- Genesca & Grifell 1992, Saari 2006
- Courbois & Temple 1975, Gollop 1979, Kurosawa 1975, Saari 1976, 2006
- Courbois, R.; Temple, P. (1975). La methode des "Comptes de surplus" et ses applications macroeconomiques. 160 des Collect, INSEE, Serie C (35). p. 100.
- Craig, C.; Harris, R. (1973). "Total Productivity Measurement at the Firm Level". Sloan Management Review (Spring 1973): 13–28.
- Genesca, G.E.; Grifell, T.E. (1992). "Profits and Total Factor Productivity: A Comparative Analysis". Omega: The International Journal of Management Science 20 (5/6): 553–568. doi:10.1016/0305-0483(92)90002-O.
- Gollop, F.M. (1979). "Accounting for Intermediate Input: The Link Between Sectoral and Aggregate Measures of Productivity Growth". Measurement and Interpretation of Productivity (National Academy of Sciences).
- Hulten, C.R. (January 2000). "Total Factor Productivity: A Short Biography". National Bureau of Economic Research.
- Hulten, C.R. (September 2009). "Growth Accounting". National Bureau of Economic Research.
- Jorgenson, D.W.; Ho, M.S.; Samuels, J.D. (2014). Long-term Estimates of U.S. Productivity and Growth (PDF). Tokyo: Third World KLEMS Conference.
- Kurosawa, K. (1975). "An aggregate index for the analysis of productivity". Omega 3 (2): 157–168. doi:10.1016/0305-0483(75)90115-2.
- Loggerenberg van, B.; Cucchiaro, S. (1982). "Productivity Measurement and the Bottom Line". National Productivity Review 1 (1): 87–99. doi:10.1002/npr.4040010111.
- Pineda, A. (1990). A Multiple Case Study Research to Determine and Respond to Management Information Need Using Total-Factor Productivity Measurement (TFPM). Virginia Polytechnic Institute and State University.
- Riistama, K.; Jyrkkiö, E. (1971). Operatiivinen laskentatoimi (Operative accounting). Weilin + Göös. p. 335.
- Saari, S. (2006a). Productivity. Theory and Measurement in Business. Productivity Handbook (in Finnish). MIDO OY. p. 272.
- Saari, S. (2011). Production and Productivity as Sources of Well-being. MIDO OY. p. 25.
- Saari, S. (2006). Productivity. Theory and Measurement in Business (PDF). Espoo, Finland: European Productivity Conference.
References

- Kotler, P., Armstrong, G., Brown, L., and Adam, S. (2006). Marketing, 7th ed. Pearson Education Australia/Prentice Hall.
- Genesca & Grifell 1992; Saari 2006
- Courbois & Temple 1975; Gollop 1979; Kurosawa 1975; Saari 1976, 2006
- Courbois, R.; Temple, P. (1975). La méthode des "Comptes de surplus" et ses applications macroéconomiques. 160 des Collect, INSEE, Série C (35). p. 100.
- Craig, C.; Harris, R. (1973). "Total Productivity Measurement at the Firm Level". Sloan Management Review (Spring 1973): 13–28.
- Genesca, G.E.; Grifell, T.E. (1992). "Profits and Total Factor Productivity: A Comparative Analysis". Omega: The International Journal of Management Science 20 (5/6): 553–568. doi:10.1016/0305-0483(92)90002-O.
- Gollop, F.M. (1979). "Accounting for Intermediate Input: The Link Between Sectoral and Aggregate Measures of Productivity Growth". Measurement and Interpretation of Productivity. National Academy of Sciences.
- Hulten, C.R. (January 2000). "Total Factor Productivity: A Short Biography". National Bureau of Economic Research.
- Hulten, C.R. (September 2009). "Growth Accounting". National Bureau of Economic Research.
- Jorgenson, D.W.; Ho, M.S.; Samuels, J.D. (2014). Long-term Estimates of U.S. Productivity and Growth (PDF). Tokyo: Third World KLEMS Conference.
- Kurosawa, K. (1975). "An aggregate index for the analysis of productivity". Omega 3 (2): 157–168. doi:10.1016/0305-0483(75)90115-2.
- Loggerenberg van, B.; Cucchiaro, S. (1982). "Productivity Measurement and the Bottom Line". National Productivity Review 1 (1): 87–99. doi:10.1002/npr.4040010111.
- Pineda, A. (1990). A Multiple Case Study Research to Determine and Respond to Management Information Need Using Total-Factor Productivity Measurement (TFPM). Virginia Polytechnic Institute and State University.
- Riistama, K.; Jyrkkiö, E. (1971). Operatiivinen laskentatoimi (Operative accounting). Weilin + Göös. p. 335.
- Saari, S. (2006a). Productivity. Theory and Measurement in Business. Productivity Handbook (in Finnish). MIDO OY. p. 272.
- Saari, S. (2011). Production and Productivity as Sources of Well-being. MIDO OY. p. 25.
- Saari, S. (2006). Productivity. Theory and Measurement in Business (PDF). Espoo, Finland: European Productivity Conference.

See also

- A list of production functions
- Production function
- Production theory basics
- Production, costs, and pricing
- Production possibility frontier
- Productivity model
- Productivity improving technologies (historical)
- Productive and unproductive labour
- Productive forces
- Computer-aided manufacturing
- Distribution (economics)
- Mode of production
- Johann Heinrich von Thünen
- Division of labour
- Mass production
- Assembly line
- Second Industrial Revolution
- Industrial Revolution
Lesson 15: Distinguishing Volume and Surface Area

Let's work with surface area and volume in context.

- I can decide whether I need to find the surface area or volume when solving a problem about a real-world situation.

15.1 The Science Fair

Mai's science teacher told her that when there is more ice touching the water in a glass, the ice melts faster. She wants to test this statement so she designs her science fair project to determine if crushed ice or ice cubes will melt faster in a drink. She begins with two cups of warm water. In one cup, she puts a cube of ice. In a second cup, she puts crushed ice with the same volume as the cube.

What is your hypothesis? Will the ice cube or crushed ice melt faster, or will they melt at the same rate? Explain your reasoning.

15.2 Revisiting the Box of Chocolates

The other day, you calculated the volume of this heart-shaped box of chocolates. The depth of the box is 2 inches. How much cardboard is needed to create the box?

15.3 Card Sort: Surface Area or Volume

Your teacher will give you cards with different figures and questions on them.

- Sort the cards into two groups based on whether it would make more sense to think about the surface area or the volume of the figure when answering the question. Pause here so your teacher can review your work.
- Your teacher will assign you a card to examine more closely. What additional information would you need to be able to answer the question on your card?
- Estimate reasonable measurements for the figure on your card. Use your estimated measurements to calculate the answer to the question.

Are you ready for more?

A cake is shaped like a square prism. The top is 20 centimeters on each side, and the cake is 10 centimeters tall. It has frosting on the sides and on the top, and a single candle on the top at the exact center of the square. You have a knife and a 20-centimeter ruler.

- Find a way to cut the cake into 4 fair portions, so that all 4 portions have the same amount of cake and frosting.
- Find another way to cut the cake into 4 fair portions.
- Find a way to cut the cake into 5 fair portions.

15.4 A Wheelbarrow of Concrete

A wheelbarrow is being used to carry wet concrete. Here are its dimensions.

- What volume of concrete would it take to fill the tray?
- After dumping the wet concrete, you notice that a thin film is left on the inside of the tray. What is the area of the concrete coating the tray? (Remember, there is no top.)

Lesson 15 Summary

Sometimes we need to find the volume of a prism, and sometimes we need to find the surface area.

Here are some examples of quantities related to volume:
- How much water a container can hold
- How much material it took to build a solid object

Volume is measured in cubic units, like in³ or m³.

Here are some examples of quantities related to surface area:
- How much fabric is needed to cover a surface
- How much of an object needs to be painted

Surface area is measured in square units, like in² or m².

Lesson 15 Practice Problems

Here is the base of a prism.
- If the height of the prism is 5 cm, what is its surface area? What is its volume?
- If the height of the prism is 10 cm, what is its surface area? What is its volume?
- When the height doubled, what was the percent increase for the surface area? For the volume?

Select all the situations where knowing the volume of an object would be more useful than knowing its surface area.
- Determining the amount of paint needed to paint a barn.
- Determining the monetary value of a piece of gold jewelry.
- Filling an aquarium with buckets of water.
- Deciding how much wrapping paper a gift will need.
- Packing a box with watermelons for shipping.
- Charging a company for ad space on your race car.
- Measuring the amount of gasoline left in the tank of a tractor.

Han draws a triangle with a angle, a angle, and a side of length 4 cm as shown. Can you draw a different triangle with the same conditions?

Angle is half as large as angle . Angle is one fourth as large as angle . Angle has measure 240 degrees. What is the measure of angle ?

The Colorado state flag consists of three horizontal stripes of equal height. The side lengths of the flag are in the ratio . The diameter of the gold-colored disk is equal to the height of the center stripe. What percentage of the flag is gold?
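As a worked illustration of the volume/surface-area distinction, the cake from the "Are you ready for more?" prompt above can be checked numerically. This is a small sketch using only the dimensions stated in that activity (a 20 cm by 20 cm square top, 10 cm tall, frosting on the top and the four sides but not the bottom); the variable names are ours.

```python
# Cake from the "Are you ready for more?" activity: a square prism,
# 20 cm on each side of the top and 10 cm tall, frosted on top and sides.

side = 20    # cm, edge of the square top
height = 10  # cm

# Volume answers "how much cake there is" (a cubic-unit quantity).
volume = side * side * height                   # 20 * 20 * 10 = 4000 cm^3

# Frosted area answers "how much frosting covers it" (a square-unit quantity):
# the top plus the four rectangular sides; the bottom is not frosted.
frosted_area = side * side + 4 * side * height  # 400 + 800 = 1200 cm^2

print(volume, "cm^3")        # 4000 cm^3
print(frosted_area, "cm^2")  # 1200 cm^2
```

Each of 4 fair portions would then need 1000 cm³ of cake and 300 cm² of frosting, which is the target the cuts have to hit.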
Surrender of Japan

The surrender of Japan was announced by Imperial Japan on August 15 and formally signed on September 2, 1945, bringing the hostilities of World War II to a close. By the end of July 1945, the Imperial Japanese Navy was incapable of conducting major operations and an Allied invasion of Japan was imminent. Together with the United Kingdom and China, the United States called for the unconditional surrender of the Japanese armed forces in the Potsdam Declaration on July 26, 1945—the alternative being "prompt and utter destruction". While publicly stating their intent to fight on to the bitter end, Japan's leaders (the Supreme Council for the Direction of the War, also known as the "Big Six") were privately making entreaties to the still-neutral Soviet Union to mediate peace on terms more favorable to the Japanese. Meanwhile, the Soviets were preparing to attack Japanese forces in Manchuria and Korea (in addition to southern Sakhalin and the Kuril Islands) in fulfillment of promises they had secretly made to the United States and the United Kingdom at the Tehran and Yalta Conferences.

On August 6, 1945, at 8:15 AM local time, the United States detonated an atomic bomb over the Japanese city of Hiroshima. Sixteen hours later, American President Harry S. Truman called again for Japan's surrender, warning them to "expect a rain of ruin from the air, the like of which has never been seen on this earth." Late in the evening of August 8, 1945, in accordance with the Yalta agreements, but in violation of the Soviet–Japanese Neutrality Pact, the Soviet Union declared war on Japan, and soon after midnight on August 9, 1945, the Soviet Union invaded the Imperial Japanese puppet state of Manchukuo. Later in the day, the United States dropped a second atomic bomb, this time on the Japanese city of Nagasaki. Following these events, Emperor Hirohito intervened and ordered the Supreme Council for the Direction of the War to accept the terms the Allies had set down in the Potsdam Declaration for ending the war. After several more days of behind-the-scenes negotiations and a failed coup d'état, Emperor Hirohito gave a recorded radio address across the Empire on August 15. In the radio address, called the Jewel Voice Broadcast (玉音放送, Gyokuon-hōsō), he announced the surrender of Japan to the Allies.

On August 28, the occupation of Japan by the Supreme Commander for the Allied Powers began. The surrender ceremony was held on September 2, aboard the United States Navy battleship USS Missouri (BB-63), at which officials from the Japanese government signed the Japanese Instrument of Surrender, thereby ending the hostilities. Allied civilians and military personnel alike celebrated V-J Day, the end of the war; however, some isolated soldiers and personnel from Imperial Japan's far-flung forces throughout Asia and the Pacific islands refused to surrender for months and years afterwards, some even refusing into the 1970s. The role of the atomic bombings in Japan's unconditional surrender, and the ethics of the two attacks, is still debated.

The state of war formally ended when the Treaty of San Francisco came into force on April 28, 1952. Four more years passed before Japan and the Soviet Union signed the Soviet–Japanese Joint Declaration of 1956, which formally brought an end to their state of war.
By 1945, the Japanese had suffered an unbroken string of defeats for nearly two years in the South West Pacific, the Marianas campaign, and the Philippines campaign. In July 1944, following the loss of Saipan, General Hideki Tōjō was replaced as prime minister by General Kuniaki Koiso, who declared that the Philippines would be the site of the decisive battle. After the Japanese loss of the Philippines, Koiso in turn was replaced by Admiral Kantarō Suzuki. The Allies captured the nearby islands of Iwo Jima and Okinawa in the first half of 1945. Okinawa was to be a staging area for Operation Downfall, the American invasion of the Japanese Home Islands. Following Germany's defeat, the Soviet Union quietly began redeploying its battle-hardened European forces to the Far East, in addition to about forty divisions that had been stationed there since 1941, as a counterbalance to the million-strong Kwantung Army.

The Allied submarine campaign and the mining of Japanese coastal waters had largely destroyed the Japanese merchant fleet. With few natural resources, Japan was dependent on raw materials, particularly oil, imported from Manchuria and other parts of the East Asian mainland, and from the conquered territory in the Dutch East Indies. The destruction of the Japanese merchant fleet, combined with the strategic bombing of Japanese industry, had wrecked Japan's war economy. Production of coal, iron, steel, rubber, and other vital supplies was only a fraction of that before the war.

As a result of the losses it had suffered, the Imperial Japanese Navy (IJN) had ceased to be an effective fighting force. Following a series of raids on the Japanese shipyard at Kure, Japan, the only major warships in fighting order were six aircraft carriers, four cruisers, and one battleship, none of which could be fueled adequately. Although 19 destroyers and 38 submarines were still operational, their use was limited by the lack of fuel.

Faced with the prospect of an invasion of the Home Islands, starting with Kyūshū, and the prospect of a Soviet invasion of Manchuria—Japan's last source of natural resources—the War Journal of the Imperial Headquarters concluded:

We can no longer direct the war with any hope of success. The only course left is for Japan's one hundred million people to sacrifice their lives by charging the enemy to make them lose the will to fight.

As a final attempt to stop the Allied advances, the Japanese Imperial High Command planned an all-out defense of Kyūshū codenamed Operation Ketsugō. This was to be a radical departure from the defense in depth plans used in the invasions of Peleliu, Iwo Jima, and Okinawa. Instead, everything was staked on the beachhead; more than 3,000 kamikazes would be sent to attack the amphibious transports before troops and cargo were disembarked on the beach. If this did not drive the Allies away, they planned to send another 3,500 kamikazes along with 5,000 Shin'yō suicide boats and the remaining destroyers and submarines—"the last of the Navy's operating fleet"—to the beach.
If the Allies had fought through this and successfully landed on Kyūshū, only 3,000 planes would have been left to defend the remaining islands, although Kyūshū would be "defended to the last" regardless. The strategy of making a last stand at Kyūshū was based on the assumption of continued Soviet neutrality.

A set of caves was excavated near Nagano on Honshū, the largest of the Japanese islands. In the event of invasion, these caves, the Matsushiro Underground Imperial Headquarters, were to be used by the army to direct the war and to house the Emperor and his family.

Supreme Council for the Direction of the War

Japanese policy-making centered on the Supreme Council for the Direction of the War (created in 1944 by earlier Prime Minister Kuniaki Koiso), the so-called "Big Six"—the Prime Minister, Minister of Foreign Affairs, Minister of the Army, Minister of the Navy, Chief of the Army General Staff, and Chief of the Navy General Staff. At the formation of the Suzuki government in April 1945, the council's membership consisted of:

- Prime Minister: Admiral Kantarō Suzuki
- Minister of Foreign Affairs: Shigenori Tōgō
- Minister of the Army: General Korechika Anami
- Minister of the Navy: Admiral Mitsumasa Yonai
- Chief of the Army General Staff: General Yoshijirō Umezu
- Chief of the Navy General Staff: Admiral Koshirō Oikawa (later replaced by Admiral Soemu Toyoda)

All of these positions were nominally appointed by the Emperor and their holders were answerable directly to him. Nevertheless, from 1936 the Japanese Army and Navy held, effectively, a legal right to nominate (or refuse to nominate) their respective ministers. Thus, they could prevent the formation of undesirable governments, or by resignation bring about the collapse of an existing government. Emperor Hirohito and Lord Keeper of the Privy Seal Kōichi Kido also were present at some meetings, following the Emperor's wishes. As Iris Chang reports, "the Japanese deliberately destroyed, hid or falsified most of their secret wartime documents."

Divisions within the Japanese leadership

For the most part, Suzuki's military-dominated cabinet favored continuing the war. For the Japanese, surrender was unthinkable—Japan had never been invaded or lost a war in its history. Only Mitsumasa Yonai, the Navy minister, was known to desire an early end to the war. According to historian Richard B. Frank:

Although Suzuki might indeed have seen peace as a distant goal, he had no design to achieve it within any immediate time span or on terms acceptable to the Allies. His own comments at the conference of senior statesmen gave no hint that he favored any early cessation of the war ... Suzuki's selections for the most critical cabinet posts were, with one exception, not advocates of peace either.

After the war, Suzuki and others from his government and their apologists claimed they were secretly working towards peace, and could not publicly advocate it. They cite the Japanese concept of haragei—"the art of hidden and invisible technique"—to justify the dissonance between their public actions and alleged behind-the-scenes work. However, many historians reject this. Robert J. C. Butow wrote: Because of its very ambiguity, the plea of haragei invites the suspicion that in questions of politics and diplomacy a conscious reliance upon this 'art of bluff' may have constituted a purposeful deception predicated upon a desire to play both ends against the middle.
While this judgment does not accord with the much-lauded character of Admiral Suzuki, the fact remains that from the moment he became Premier until the day he resigned no one could ever be quite sure of what Suzuki would do or say next. Japanese leaders had always envisioned a negotiated settlement to the war. Their prewar planning expected a rapid expansion and consolidation, an eventual conflict with the United States, and finally a settlement in which they would be able to retain at least some new territory they had conquered. By 1945, Japan's leaders were in agreement that the war was going badly, but they disagreed over the best means to negotiate its end. There were two camps: the so-called "peace" camp favored a diplomatic initiative to persuade Joseph Stalin, the leader of the Soviet Union, to mediate a settlement between the Allies and Japan; and the hardliners who favored fighting one last "decisive" battle that would inflict so many casualties on the Allies that they would be willing to offer more lenient terms. Both approaches were based on Japan's experience in the Russo–Japanese War, forty years earlier, which consisted of a series of costly but largely indecisive battles, followed by the decisive naval Battle of Tsushima. In February 1945, Prince Fumimaro Konoe gave Emperor Hirohito a memorandum analyzing the situation, and told him that if the war continued, the imperial family might be in greater danger from an internal revolution than from defeat. According to the diary of Grand Chamberlain Hisanori Fujita, the Emperor, looking for a decisive battle (tennōzan), replied that it was premature to seek peace "unless we make one more military gain". Also in February, Japan's treaty division wrote about Allied policies towards Japan regarding "unconditional surrender, occupation, disarmament, elimination of militarism, democratic reforms, punishment of war criminals, and the status of the emperor." Allied-imposed disarmament, Allied punishment of Japanese war criminals, and especially occupation and removal of the Emperor, were not acceptable to the Japanese leadership. On April 5, the Soviet Union gave the required 12 months' notice that it would not renew the five-year Soviet–Japanese Neutrality Pact (which had been signed in 1941 following the Nomonhan Incident). Unknown to the Japanese, at the Tehran Conference in November–December 1943, it had been agreed that the Soviet Union would enter the war against Japan once Nazi Germany was defeated. At the Yalta conference in February 1945, the United States had made substantial concessions to the Soviets to secure a promise that they would declare war on Japan within three months of the surrender of Germany. Although the five-year Neutrality Pact did not expire until April 5, 1946, the announcement caused the Japanese great concern, because Japan had amassed its forces in the South to repel the inevitable US attack, thus leaving its Northern islands vulnerable to Soviet invasion. Russian Foreign Minister Vyacheslav Molotov, in Moscow, and Yakov Malik, Soviet ambassador in Tokyo, went to great lengths to assure the Japanese that "the period of the Pact's validity has not ended". At a series of high-level meetings in May, the Big Six first seriously discussed ending the war—but none of them on terms that would have been acceptable to the Allies. 
Because anyone openly supporting Japanese surrender risked assassination by zealous army officers, the meetings were closed to anyone except the Big Six, the Emperor, and the Privy Seal—no second- or third-echelon officers could attend. At these meetings, despite the dispatches from Japanese ambassador Satō in Moscow, only Foreign minister Tōgō realized that Roosevelt and Churchill might have already made concessions to Stalin to bring the Soviets into the war against Japan. As a result of these meetings, Tōgō was authorized to approach the Soviet Union, seeking to maintain its neutrality, or (despite the very remote probability) to form an alliance. In keeping with the custom of a new government declaring its purposes, following the May meetings the Army staff produced a document, "The Fundamental Policy to Be Followed Henceforth in the Conduct of the War," which stated that the Japanese people would fight to extinction rather than surrender. This policy was adopted by the Big Six on June 6. (Tōgō opposed it, while the other five supported it.) Documents submitted by Suzuki at the same meeting suggested that, in the diplomatic overtures to the USSR, Japan adopt the following approach: It should be clearly made known to Russia that she owes her victory over Germany to Japan, since we remained neutral, and that it would be to the advantage of the Soviets to help Japan maintain her international position, since they have the United States as an enemy in the future. On June 9, the Emperor's confidant Marquis Kōichi Kido wrote a "Draft Plan for Controlling the Crisis Situation," warning that by the end of the year Japan's ability to wage modern war would be extinguished and the government would be unable to contain civil unrest. "... We cannot be sure we will not share the fate of Germany and be reduced to adverse circumstances under which we will not attain even our supreme object of safeguarding the Imperial Household and preserving the national polity." Kido proposed that the Emperor take action, by offering to end the war on "very generous terms." Kido proposed that Japan withdraw from the formerly European colonies it had occupied provided they were granted independence, that Japan disarm provided this not occur under Allied supervision, and that Japan for a time be "content with minimum defense." Kido's proposal did not contemplate Allied occupation of Japan, prosecution of war criminals or substantial change in Japan's system of government. With the Emperor's authorization, Kido approached several members of the Supreme Council, the "Big Six." Tōgō was very supportive. Suzuki and Admiral Mitsumasa Yonai, the Navy minister, were both cautiously supportive; each wondered what the other thought. General Korechika Anami, the Army minister, was ambivalent, insisting that diplomacy must wait until "after the United States has sustained heavy losses" in Operation Ketsugō. In June, the Emperor lost confidence in the chances of achieving a military victory. The Battle of Okinawa was lost, and he learned of the weakness of the Japanese army in China, of the Kwantung Army in Manchuria, of the navy, and of the army defending the Home Islands. The Emperor received a report by Prince Higashikuni from which he concluded that "it was not just the coast defense; the divisions reserved to engage in the decisive battle also did not have sufficient numbers of weapons." According to the Emperor: I was told that the iron from bomb fragments dropped by the enemy was being used to make shovels. 
This confirmed my opinion that we were no longer in a position to continue the war.

On June 22, the Emperor summoned the Big Six to a meeting. Unusually, he spoke first: "I desire that concrete plans to end the war, unhampered by existing policy, be speedily studied and that efforts made to implement them." It was agreed to solicit Soviet aid in ending the war. Other neutral nations, such as Switzerland, Sweden, and the Vatican City, were known to be willing to play a role in making peace, but they were so small they were believed unable to do more than deliver the Allies' terms of surrender and Japan's acceptance or rejection. The Japanese hoped that the Soviet Union could be persuaded to act as an agent for Japan in negotiations with America and Britain.

Attempts to deal with the Soviet Union

On June 30, Tōgō told Naotake Satō, Japan's ambassador in Moscow, to try to establish "firm and lasting relations of friendship." Satō was to discuss the status of Manchuria and "any matter the Russians would like to bring up." Well aware of the overall situation and cognizant of their promises to the Allies, the Soviets responded with delaying tactics to encourage the Japanese without promising anything. Satō finally met with Soviet Foreign Minister Vyacheslav Molotov on July 11, but without result. On July 12, Tōgō directed Satō to tell the Soviets that:

His Majesty the Emperor, mindful of the fact that the present war daily brings greater evil and sacrifice upon the peoples of all the belligerent powers, desires from his heart that it may be quickly terminated. But so long as England and the United States insist upon unconditional surrender, the Japanese Empire has no alternative but to fight on with all its strength for the honor and existence of the Motherland.

The Emperor proposed sending Prince Konoe as a special envoy, although he would be unable to reach Moscow before the Potsdam Conference. Satō advised Tōgō that in reality, "unconditional surrender or terms closely equivalent thereto" was all that Japan could expect. Moreover, in response to Molotov's requests for specific proposals, Satō suggested that Tōgō's messages were not "clear about the views of the Government and the Military with regard to the termination of the war," thus questioning whether Tōgō's initiative was supported by the key elements of Japan's power structure. On July 17, Tōgō responded:

Although the directing powers, and the government as well, are convinced that our war strength still can deliver considerable blows to the enemy, we are unable to feel absolutely secure peace of mind ... Please bear particularly in mind, however, that we are not seeking the Russians' mediation for anything like an unconditional surrender.

In reply, Satō clarified:

It goes without saying that in my earlier message calling for unconditional surrender or closely equivalent terms, I made an exception of the question of preserving [the imperial family].

On July 21, speaking in the name of the cabinet, Tōgō repeated:

With regard to unconditional surrender we are unable to consent to it under any circumstances whatever. ... It is in order to avoid such a state of affairs that we are seeking a peace, ... through the good offices of Russia. ... it would also be disadvantageous and impossible, from the standpoint of foreign and domestic considerations, to make an immediate declaration of specific terms.
American cryptographers had broken most of Japan's codes, including the Purple code used by the Japanese Foreign Office to encode high-level diplomatic correspondence. As a result, messages between Tokyo and Japan's embassies were provided to Allied policy-makers nearly as quickly as to the intended recipients. Security concerns dominated Soviet decisions concerning the Far East. Chief among these was gaining unrestricted access to the Pacific Ocean. The year-round ice-free areas of the Soviet Pacific coastline—Vladivostok in particular—could be blockaded by air and sea from Sakhalin island and the Kurile Islands. Acquiring these territories, thus guaranteeing free access to the Soya Strait, was their primary objective. Secondary objectives were leases for the Chinese Eastern Railway, Southern Manchuria Railway, Dairen, and Port Arthur. To this end, Stalin and Molotov strung out the negotiations with the Japanese, giving them false hope of a Soviet-mediated peace. At the same time, in their dealings with the United States and Britain, the Soviets insisted on strict adherence to the Cairo Declaration, re-affirmed at the Yalta Conference, that the Allies would not accept separate or conditional peace with Japan. The Japanese would have to surrender unconditionally to all the Allies. To prolong the war, the Soviets opposed any attempt to weaken this requirement. This would give the Soviets time to complete the transfer of their troops from the Western Front to the Far East, and conquer Manchuria (Manchukuo), Inner Mongolia (Mengjiang), Korea, Sakhalin, the Kuriles, and possibly, Hokkaidō (starting with a landing at Rumoi). In 1939, Albert Einstein and Leó Szilárd wrote a letter to President Roosevelt warning him that the Germans might be researching the development of atomic weaponry and that it was necessary that the United States fund research and development of its own such project. Roosevelt agreed, and the result was the Manhattan Project—a top-secret research program administered by General Leslie Groves, with scientific direction from J. Robert Oppenheimer. The first bomb was tested successfully in the Trinity explosion on July 16, 1945. As the project neared its conclusion, American planners began to consider the use of the bomb. Groves formed a committee that met in April and May 1945 to draw up a list of targets. One of the primary criteria was that the target cities must not have been damaged by conventional bombing. This would allow for an accurate assessment of the damage done by the atomic bomb. The targeting committee's list included 18 Japanese cities. At the top of the list were Kyoto, Hiroshima, Yokohama, Kokura, and Niigata. Ultimately, Kyoto was removed from the list at the insistence of Secretary of War Henry L. Stimson, who had visited the city on his honeymoon and knew of its cultural and historical significance. The Allies' atomic bomb program was considered to be so sensitive that not even the Vice President of the United States was told of its existence. As a result, Harry S. Truman only learned about the Manhattan Project and its purpose after becoming President upon Franklin Roosevelt's death on April 12. In May, Truman approved the formation of an "Interim Committee", an advisory group that would report on the atomic bomb. It consisted of George L. Harrison, Vannevar Bush, James Bryant Conant, Karl Taylor Compton, William L. Clayton, and Ralph Austin Bard, advised by scientists Oppenheimer, Enrico Fermi, Ernest Lawrence, and Arthur Compton. 
In a June 1 report, the Committee concluded that the bomb should be used as soon as possible against a war plant surrounded by workers' homes, and that no warning or demonstration should be given. The Committee's mandate did not include the use of the bomb—its use upon completion was presumed. Following a protest by scientists involved in the project, in the form of the Franck Report, the Committee re-examined the use of the bomb. In a June 21 meeting, it reaffirmed that there was no alternative.

Events at Potsdam

The leaders of the major Allied powers met at the Potsdam Conference from July 16 to August 2, 1945. The participants were the Soviet Union, the United Kingdom, and the United States, represented by Stalin, Winston Churchill (later Clement Attlee), and Truman respectively.

Although the Potsdam Conference was mainly concerned with European affairs, the war against Japan was also discussed in detail. Truman learned of the successful Trinity test early in the conference, and shared this information with the British delegation. The successful test caused the American delegation to reconsider the necessity and wisdom of Soviet participation, for which the U.S. had lobbied hard at the Tehran and Yalta Conferences. High on the United States' list of priorities was shortening the war and reducing American casualties—Soviet intervention seemed likely to do both, but at the cost of possibly allowing the Soviets to capture territory beyond that which had been promised to them at Tehran and Yalta, and causing a postwar division of Japan similar to that which had occurred in Germany.

In dealing with Stalin, Truman decided to give the Soviet leader vague hints about the existence of a powerful new weapon without going into details. However, the other Allies were unaware that Soviet intelligence had penetrated the Manhattan Project in its early stages, so Stalin already knew of the existence of the atomic bomb, but did not appear impressed by its potential.

The Potsdam Declaration

It was decided to issue a statement, the Potsdam Declaration, defining "Unconditional Surrender" and clarifying what it meant for the position of the emperor and for Hirohito personally. The American and British governments strongly disagreed on this point—the United States wanted to abolish the position and possibly try him as a war criminal, while the British wanted to retain the position, perhaps with Hirohito still reigning. The Potsdam Declaration went through many drafts until a version acceptable to all was found.

On July 26, the United States, Britain and China released the Potsdam Declaration announcing the terms for Japan's surrender, with the warning, "We will not deviate from them. There are no alternatives. We shall brook no delay." For Japan, the terms of the declaration specified:

- the elimination "for all time [of] the authority and influence of those who have deceived and misled the people of Japan into embarking on world conquest"
- the occupation of "points in Japanese territory to be designated by the Allies"
- that the "Japanese sovereignty shall be limited to the islands of Honshū, Hokkaidō, Kyūshū, Shikoku and such minor islands as we determine." As had been announced in the Cairo Declaration in 1943, Japan was to be reduced to her pre-1894 territory and stripped of her pre-war empire including Korea and Taiwan, as well as all her recent conquests.
- that "[t]he Japanese military forces, after being completely disarmed, shall be permitted to return to their homes with the opportunity to lead peaceful and productive lives." - that "[w]e do not intend that the Japanese shall be enslaved as a race or destroyed as a nation, but stern justice shall be meted out to all war criminals, including those who have visited cruelties upon our prisoners." On the other hand, the declaration stated that: - "The Japanese Government shall remove all obstacles to the revival and strengthening of democratic tendencies among the Japanese people. Freedom of speech, of religion, and of thought, as well as respect for the fundamental human rights shall be established." - "Japan shall be permitted to maintain such industries as will sustain her economy and permit the exaction of just reparations in kind, but not those which would enable her to rearm for war. To this end, access to, as distinguished from control of, raw materials shall be permitted. Eventual Japanese participation in world trade relations shall be permitted." - "The occupying forces of the Allies shall be withdrawn from Japan as soon as these objectives have been accomplished and there has been established, in accordance with the freely expressed will of the Japanese people, a peacefully inclined and responsible government." The only use of the term "unconditional surrender" came at the end of the declaration: - "We call upon the government of Japan to proclaim now the unconditional surrender of all Japanese armed forces, and to provide proper and adequate assurances of their good faith in such action. The alternative for Japan is prompt and utter destruction." Contrary to what had been intended at its conception, the Declaration made no mention of the Emperor at all. Allied intentions on issues of utmost importance to the Japanese, including whether Hirohito was to be regarded as one of those who had "misled the people of Japan" or even a war criminal, or alternatively, whether the Emperor might become part of a "peacefully inclined and responsible government" were thus left unstated. The "prompt and utter destruction" clause has been interpreted as a veiled warning about American possession of the atomic bomb (which had been tested successfully on the first day of the conference). On the other hand, the declaration also made specific references to the devastation that had been wrought upon Germany in the closing stages of the European war. To contemporary readers on both sides who were not yet aware of the atomic bomb's existence, it was easy to interpret the conclusion of the declaration simply as a threat to bring similar destruction upon Japan using conventional weapons. On July 27, the Japanese government considered how to respond to the Declaration. The four military members of the Big Six wanted to reject it, but Tōgō persuaded the cabinet not to do so until he could get a reaction from the Soviets. In a telegram, Shun'ichi Kase, Japan's ambassador to Switzerland, observed that "unconditional surrender" applied only to the military and not to the government or the people, and he pleaded that it should be understood that the careful language of Potsdam appeared "to have occasioned a great deal of thought" on the part of the signatory governments—"they seem to have taken pains to save face for us on various points." The next day, Japanese newspapers reported that the Declaration, the text of which had been broadcast and dropped by leaflet into Japan, had been rejected. 
In an attempt to manage public perception, Prime Minister Suzuki met with the press, and stated:

I consider the Joint Proclamation a rehash of the Declaration at the Cairo Conference. As for the Government, it does not attach any important value to it at all. The only thing to do is just kill it with silence (mokusatsu). We will do nothing but press on to the bitter end to bring about a successful completion of the war.

The meaning of mokusatsu, literally "kill with silence," can range from "ignore" to "treat with contempt"—which rather accurately described the range of reactions within the government.

On July 30, Ambassador Satō wrote that Stalin was probably talking to Roosevelt and Churchill about his dealings with Japan, and he wrote: "There is no alternative but immediate unconditional surrender if we are to prevent Russia's participation in the war." On August 2, Tōgō wrote to Satō: "it should not be difficult for you to realize that ... our time to proceed with arrangements of ending the war before the enemy lands on the Japanese mainland is limited, on the other hand it is difficult to decide on concrete peace conditions here at home all at once."

Hiroshima, Manchuria, and Nagasaki

August 6: Hiroshima

On August 6 at 8:15 AM local time, the Enola Gay, a Boeing B-29 Superfortress piloted by Colonel Paul Tibbets, dropped an atomic bomb (code-named Little Boy by the U.S.) on the city of Hiroshima in southwest Honshū. Throughout the day, confused reports reached Tokyo that Hiroshima had been the target of an air raid, which had leveled the city with a "blinding flash and violent blast". Later that day, they received U.S. President Truman's broadcast announcing the first use of an atomic bomb, and promising:

We are now prepared to obliterate more rapidly and completely every productive enterprise the Japanese have above ground in any city. We shall destroy their docks, their factories, and their communications. Let there be no mistake; we shall completely destroy Japan's power to make war. It was to spare the Japanese people from utter destruction that the ultimatum of July 26 was issued at Potsdam. Their leaders promptly rejected that ultimatum. If they do not now accept our terms they may expect a rain of ruin from the air, the like of which has never been seen on this earth …

The Japanese Army and Navy had their own independent atomic-bomb programs and therefore the Japanese understood enough to know how very difficult building it would be. Therefore, many Japanese and in particular the military members of the government refused to believe the United States had built an atomic bomb, and the Japanese military ordered their own independent tests to determine the cause of Hiroshima's destruction. Admiral Soemu Toyoda, the Chief of the Naval General Staff, argued that even if the United States had made one, they could not have many more. American strategists, having anticipated a reaction like Toyoda's, planned to drop a second bomb shortly after the first, to convince the Japanese that the U.S. had a large supply.

August 8–9: Soviet invasion and Nagasaki

When the Russians invaded Manchuria, they sliced through what had once been an elite army and many Russian units only stopped when they ran out of gas. The Soviet 16th Army — 100,000 strong — launched an invasion of the southern half of Sakhalin Island.
Their orders were to mop up Japanese resistance there, and then — within 10 to 14 days — be prepared to invade Hokkaido, the northernmost of Japan’s home islands. The Japanese force tasked with defending Hokkaido, the 5th Area Army, was under strength at two divisions and two brigades, and was in fortified positions on the east side of the island. The Soviet plan of attack called for an invasion of Hokkaido from the west. The Soviet declaration of war also changed the calculation of how much time was left for maneuver. Japanese intelligence was predicting that U.S. forces might not invade for months. Soviet forces, on the other hand, could be in Japan proper in as little as 10 days. The Soviet invasion made a decision on ending the war extremely time sensitive. These "twin shocks"—the atomic bombing of Hiroshima and the Soviet entry—had immediate profound effects on Prime Minister Suzuki and Foreign Minister Tōgō Shigenori, who concurred that the government must end the war at once. However, the senior leadership of the Japanese Army took the news in stride, grossly underestimating the scale of the attack. With the support of Minister of War Anami, they started preparing to impose martial law on the nation, to stop anyone attempting to make peace. Hirohito told Kido to "quickly control the situation" because "the Soviet Union has declared war and today began hostilities against us." The Supreme Council met at 10:30. Suzuki, who had just come from a meeting with the Emperor, said it was impossible to continue the war. Tōgō Shigenori said that they could accept the terms of the Potsdam Declaration, but they needed a guarantee of the Emperor's position. Navy Minister Yonai said that they had to make some diplomatic proposal—they could no longer afford to wait for better circumstances. In the middle of the meeting, shortly after 11:00, news arrived that Nagasaki, on the west coast of Kyūshū, had been hit by a second atomic bomb (called "Fat Man" by the United States). By the time the meeting ended, the Big Six had split 3–3. Suzuki, Tōgō, and Admiral Yonai favored Tōgō's one additional condition to Potsdam, while Generals Anami, Umezu, and Admiral Toyoda insisted on three further terms that modified Potsdam: that Japan handle their own disarmament, that Japan deal with any Japanese war criminals, and that there be no occupation of Japan. Following the atomic bombing of Nagasaki, Truman issued another statement: The British, Chinese, and United States Governments have given the Japanese people adequate warning of what is in store for them. We have laid down the general terms on which they can surrender. Our warning went unheeded; our terms were rejected. Since then the Japanese have seen what our atomic bomb can do. They can foresee what it will do in the future. The world will note that the first atomic bomb was dropped on Hiroshima, a military base. That was because we wished in this first attack to avoid, insofar as possible, the killing of civilians. But that attack is only a warning of things to come. If Japan does not surrender, bombs will have to be dropped on her war industries and, unfortunately, thousands of civilian lives will be lost. I urge Japanese civilians to leave industrial cities immediately, and save themselves from destruction. I realize the tragic significance of the atomic bomb. Its production and its use were not lightly undertaken by this Government. But we knew that our enemies were on the search for it. We know now how close they were to finding it. 
And we knew the disaster which would come to this Nation, and to all peace-loving nations, to all civilization, if they had found it first. That is why we felt compelled to undertake the long and uncertain and costly labor of discovery and production. We won the race of discovery against the Germans. Having found the bomb we have used it. We have used it against those who attacked us without warning at Pearl Harbor, against those who have starved and beaten and executed American prisoners of war, against those who have abandoned all pretense of obeying international laws of warfare. We have used it in order to shorten the agony of war, in order to save the lives of thousands and thousands of young Americans. We shall continue to use it until we completely destroy Japan's power to make war. Only a Japanese surrender will stop us.

Imperial intervention, Allied response, and Japanese reply

The full cabinet met at 14:30 on August 9, and spent most of the day debating surrender. As the Big Six had done, the cabinet split, with neither Tōgō's position nor Anami's attracting a majority. Anami told the other cabinet ministers that, under torture, a captured American P-51 fighter pilot had told his interrogators that the United States possessed 100 atom bombs and that Tokyo and Kyoto would be bombed "in the next few days". The pilot, Marcus McDilda, was lying. He knew nothing of the Manhattan Project and simply told his interrogators what he thought they wanted to hear to end the torture. The lie, which caused him to be classified as a high-priority prisoner, probably saved him from beheading. In reality, the United States would have had the third bomb ready for use around August 19, and a fourth in September 1945. The third bomb probably would have been used against Tokyo.

The cabinet meeting adjourned at 17:30 with no consensus. A second meeting lasting from 18:00 to 22:00 also ended with no consensus. Following this second meeting, Suzuki and Tōgō met the Emperor, and Suzuki proposed an impromptu Imperial conference, which started just before midnight on the night of August 9–10. Suzuki presented Anami's four-condition proposal as the consensus position of the Supreme Council. The other members of the Supreme Council spoke, as did Kiichirō Hiranuma, the president of the Privy Council, who outlined Japan's inability to defend itself and also described the country's domestic problems, such as the shortage of food. The cabinet debated, but again no consensus emerged. At around 02:00 (August 10), Suzuki finally addressed Emperor Hirohito, asking him to decide between the two positions. The participants later recollected that the Emperor stated:

I have given serious thought to the situation prevailing at home and abroad and have concluded that continuing the war can only mean destruction for the nation and prolongation of bloodshed and cruelty in the world. I cannot bear to see my innocent people suffer any longer. ... I was told by those advocating a continuation of hostilities that by June new divisions would be in place in fortified positions [at Kujūkuri Beach, east of Tokyo] ready for the invader when he sought to land. It is now August and the fortifications still have not been completed. ... There are those who say the key to national survival lies in a decisive battle in the homeland. The experiences of the past, however, show that there has always been a discrepancy between plans and performance. I do not believe that the discrepancy in the case of Kujūkuri can be rectified.
Since this is also the shape of things, how can we repel the invaders? [He then made some specific reference to the increased destructiveness of the atomic bomb] It goes without saying that it is unbearable for me to see the brave and loyal fighting men of Japan disarmed. It is equally unbearable that others who have rendered me devoted service should now be punished as instigators of the war. Nevertheless, the time has come to bear the unbearable. ... I swallow my tears and give my sanction to the proposal to accept the Allied proclamation on the basis outlined by the Foreign Minister.

According to General Sumihisa Ikeda and Admiral Zenshirō Hoshina, Privy Council President Kiichirō Hiranuma then turned to the Emperor and asked him: "Your majesty, you also bear responsibility (sekinin) for this defeat. What apology are you going to make to the heroic spirits of the imperial founder of your house and your other imperial ancestors?"

Once the Emperor had left, Suzuki pushed the cabinet to accept the Emperor's will, which it did. Early that morning (August 10), the Foreign Ministry sent telegrams to the Allies (by way of the Swiss Federal Political Department and Max Grässli in particular) announcing that Japan would accept the Potsdam Declaration, but would not accept any peace conditions that would "prejudice the prerogatives" of the Emperor. That effectively meant no change in Japan's form of government—that the Emperor of Japan would remain in a position of real power.

The Allied response was written by James F. Byrnes and approved by the British, Chinese, and Soviet governments, although the Soviets agreed only reluctantly. The Allies sent their response (via the Swiss Political Affairs Department) to Japan's qualified acceptance of the Potsdam Declaration on August 12. On the status of the Emperor it said:

From the moment of surrender the authority of the Emperor and the Japanese government to rule the state shall be subject to the Supreme Commander of the Allied powers who will take such steps as he deems proper to effectuate the surrender terms. ...The ultimate form of government of Japan shall, in accordance with the Potsdam Declaration, be established by the freely expressed will of the Japanese people.

President Truman ordered military operations (including the B-29 bombings) to continue until official word of Japanese surrender was received. However, news correspondents incorrectly interpreted a comment by Carl Andrew Spaatz that the B-29s were not flying on August 11 (because of bad weather) as a statement that a ceasefire was in effect. To avoid giving the Japanese the impression that the Allies had abandoned peace efforts and resumed bombing, Truman then ordered a halt to further bombings.

The Japanese cabinet considered the Allied response, and Suzuki argued that they must reject it and insist on an explicit guarantee for the imperial system. Anami returned to his position that there be no occupation of Japan. Afterward, Tōgō told Suzuki that there was no hope of getting better terms, and Kido conveyed the Emperor's will that Japan surrender. In a meeting with the Emperor, Yonai spoke of his concerns about growing civil unrest:

I think the term is inappropriate, but the atomic bombs and the Soviet entry into the war are, in a sense, divine gifts. This way we don't have to say that we have quit the war because of domestic circumstances.

That day, Hirohito informed the imperial family of his decision to surrender.
One of his uncles, Prince Asaka, then asked whether the war would be continued if the kokutai (imperial sovereignty) could not be preserved. The Emperor simply replied "of course."

The Big Six and the cabinet spent August 13 debating their reply to the Allied response, but remained deadlocked. Meanwhile, the Allies grew doubtful, waiting for the Japanese to respond. The Japanese had been instructed that they could transmit an unqualified acceptance in the clear, but in fact they sent out coded messages on matters unrelated to the surrender parley. The Allies took this coded response as non-acceptance of the terms. Via Ultra intercepts, the Allies also detected increased diplomatic and military traffic, which was taken as evidence that the Japanese were preparing an "all-out banzai attack." President Truman ordered a resumption of attacks against Japan at maximum intensity "so as to impress Japanese officials that we mean business and are serious in getting them to accept our peace proposals without delay."

The United States Third Fleet began shelling the Japanese coast. In the largest bombing raid of the Pacific War, more than 400 B-29s attacked Japan during daylight on August 14, and more than 300 that night. A total of 1,014 aircraft were used with no losses. In the longest bombing mission of the war, B-29s from the 315th Bombardment Wing flew 6,100 km (3,800 mi) to destroy the Nippon Oil Company refinery at Tsuchizaki on the northern tip of Honshū. This was the last operational refinery in the Japanese Home Islands and it produced 67% of their oil. After the war, the bombing raids were justified as already in progress when word of the Japanese surrender was received, but this is only partially true.

At the suggestion of American psychological operations experts, B-29s spent August 13 dropping leaflets over Japan, describing the Japanese offer of surrender and the Allied response. The leaflets had a profound effect on the Japanese decision-making process.

As August 14 dawned, Suzuki, Kido, and the Emperor realized the day would end with either an acceptance of the American terms or a military coup. The Emperor met with the most senior Army and Navy officers. While several spoke in favor of fighting on, Field Marshal Shunroku Hata did not. As commander of the Second General Army, the headquarters of which had been in Hiroshima, Hata commanded all the troops defending southern Japan—the troops preparing to fight the "decisive battle". Hata said he had no confidence in defeating the invasion and did not dispute the Emperor's decision. The Emperor asked his military leaders to cooperate with him in ending the war.

At a conference with the cabinet and other councilors, Anami, Toyoda, and Umezu again made their case for continuing to fight, after which the Emperor said:

I have listened carefully to each of the arguments presented in opposition to the view that Japan should accept the Allied reply as it stands and without further clarification or modification, but my own thoughts have not undergone any change. ... In order that the people may know my decision, I request you to prepare at once an imperial rescript so that I may broadcast to the nation. Finally, I call upon each and every one of you to exert himself to the utmost so that we may meet the trying days which lie ahead.

The cabinet immediately convened and unanimously ratified the Emperor's wishes. They also decided to destroy vast amounts of material pertaining to war crimes and the war responsibility of the nation's highest leaders.
Immediately after the conference, the Foreign Ministry transmitted orders to its embassies in Switzerland and Sweden to accept the Allied terms of surrender. These orders were picked up and received in Washington at 02:49, August 14.

Difficulty with senior commanders on the distant war fronts was anticipated. Three princes of the Imperial Family who held military commissions were dispatched on August 14 to deliver the news personally. Prince Tsuneyoshi Takeda went to Korea and Manchuria, Prince Yasuhiko Asaka to the China Expeditionary Army and China Fleet, and Prince Kan'in Haruhito to Shanghai, South China, Indo-China and Singapore.

The text of the Imperial Rescript on surrender was finalized by 19:00 August 14, transcribed by the official court calligrapher, and brought to the cabinet for their signatures. Around 23:00, the Emperor, with help from an NHK recording crew, made a gramophone record of himself reading it. The record was given to court chamberlain Yoshihiro Tokugawa, who hid it in a locker in the empress's secretary's office.

Attempted military coup d'état (August 12–15)

Late on the night of August 12, 1945, Major Kenji Hatanaka, along with Lieutenant Colonels Masataka Ida, Masahiko Takeshita (Anami's brother-in-law), and Inaba Masao, and Colonel Okitsugu Arao, the Chief of the Military Affairs Section, spoke to War Minister Korechika Anami (the army minister and "most powerful figure in Japan besides the Emperor himself"), and asked him to do whatever he could to prevent acceptance of the Potsdam Declaration. General Anami refused to say whether he would help the young officers in treason. As much as they needed his support, Hatanaka and the other rebels decided they had no choice but to continue planning and to attempt a coup d'état on their own. Hatanaka spent much of August 13 and the morning of August 14 gathering allies, seeking support from the higher-ups in the Ministry, and perfecting his plot.

Shortly after the conference on the night of August 13–14 at which the surrender finally was decided, a group of senior army officers including Anami gathered in a nearby room. All those present were concerned about the possibility of a coup d'état to prevent the surrender—some of those present may have even been considering launching one. After a silence, General Torashirō Kawabe proposed that all senior officers present sign an agreement to carry out the Emperor's order of surrender—"The Army will act in accordance with the Imperial Decision to the last." It was signed by all the high-ranking officers present, including Anami, Hajime Sugiyama, Yoshijirō Umezu, Kenji Doihara, Torashirō Kawabe, Masakazu Kawabe, and Tadaichi Wakamatsu. "This written accord by the most senior officers in the Army ... acted as a formidable firebreak against any attempt to incite a coup d'état in Tokyo."

Around 21:30 on August 14, Hatanaka's rebels set their plan into motion. The Second Regiment of the First Imperial Guards had entered the palace grounds, doubling the strength of the battalion already stationed there, presumably to provide extra protection against Hatanaka's rebellion. But Hatanaka, along with Lt. Col. Jirō Shiizaki, convinced the commander of the 2nd Regiment of the First Imperial Guards, Colonel Toyojirō Haga, of their cause, by telling him (falsely) that Generals Anami and Umezu, and the commanders of the Eastern District Army and Imperial Guards Divisions were all in on the plan.
Hatanaka also went to the office of Shizuichi Tanaka, commander of the Eastern region of the army, to try to persuade him to join the coup. Tanaka refused, and ordered Hatanaka to go home. Hatanaka ignored the order. Originally, Hatanaka hoped that simply occupying the palace and showing the beginnings of a rebellion would inspire the rest of the Army to rise up against the move to surrender. This notion guided him through much of the last days and hours and gave him the blind optimism to move ahead with the plan, despite having little support from his superiors. Having set all the pieces into position, Hatanaka and his co-conspirators decided that the Guard would take over the palace at 02:00. The hours until then were spent in continued attempts to convince their superiors in the Army to join the coup. At about the same time, General Anami committed seppuku, leaving a message that read: "I—with my death—humbly apologize to the Emperor for the great crime." Whether the crime involved losing the war, or the coup, remains unclear. At some time after 01:00, Hatanaka and his men surrounded the palace. Hatanaka, Shiizaki and Captain Shigetarō Uehara (of the Air Force Academy) went to the office of Lt. General Takeshi Mori to ask him to join the coup. Mori was in a meeting with his brother-in-law, Michinori Shiraishi. The cooperation of Mori, as commander of the 1st Imperial Guards Division, was crucial. When Mori refused to side with Hatanaka, Hatanaka killed him, fearing Mori would order the Guards to stop the rebellion. Uehara killed Shiraishi. These were the only two murders of the night. Hatanaka then used General Mori's official stamp to authorize Imperial Guards Division Strategic Order No. 584, a false set of orders created by his co-conspirators, which would greatly increase the strength of the forces occupying the Imperial Palace and Imperial Household Ministry, and "protecting" the Emperor. The palace police were disarmed and all the entrances blocked. Over the course of the night, Hatanaka's rebels captured and detained eighteen people, including Ministry staff and NHK workers sent to record the surrender speech. The rebels, led by Hatanaka, spent the next several hours fruitlessly searching for Imperial Household Minister Sōtarō Ishiwatari, Lord of the Privy Seal Kōichi Kido, and the recordings of the surrender speech. The two men were hiding in the "bank vault", a large chamber underneath the Imperial Palace. The search was made more difficult by a blackout in response to Allied bombings, and by the archaic organization and layout of the Imperial Household Ministry. Many of the names of the rooms were unrecognizable to the rebels. The rebels did find the chamberlain Tokugawa. Although Hatanaka threatened to disembowel him with a samurai sword, Tokugawa lied and told them he did not know where the recordings or men were. During their search, the rebels cut nearly all of the telephone wires, severing communications between the palace grounds and the outside world. At about the same time, another group of Hatanaka's rebels led by Captain Takeo Sasaki went to Prime Minister Suzuki's office, intent on killing him. When they found it empty, they machine-gunned the office and set the building on fire, then left for his home. Hisatsune Sakomizu had warned Suzuki, and he escaped minutes before the would-be assassins arrived. After setting fire to Suzuki's home, they went to the estate of Kiichirō Hiranuma to assassinate him.
Hiranuma escaped through a side gate and the rebels burned his house as well. Suzuki spent the rest of August under police protection, spending each night in a different bed. Around 03:00, Hatanaka was informed by Lieutenant Colonel Masataka Ida that the Eastern District Army was on its way to the palace to stop him, and that he should give up. Finally, seeing his plan collapsing around him, Hatanaka pleaded with Tatsuhiko Takashima, Chief of Staff of the Eastern District Army, to be given at least ten minutes on the air on NHK radio, to explain to the people of Japan what he was trying to accomplish and why. He was refused. Colonel Haga, commander of the 2nd Regiment of the First Imperial Guards, discovered that the Army did not support this rebellion, and he ordered Hatanaka to leave the palace grounds. Just before 05:00, as his rebels continued their search, Major Hatanaka went to the NHK studios, and, brandishing a pistol, tried desperately to get some airtime to explain his actions. A little over an hour later, after receiving a telephone call from the Eastern District Army, Hatanaka finally gave up. He gathered his officers and walked out of the NHK studio. At dawn, Tanaka learned that the palace had been invaded. He went there and confronted the rebellious officers, berating them for acting contrary to the spirit of the Japanese army. He convinced them to return to their barracks. By 08:00, the rebellion was entirely dismantled, having succeeded in holding the palace grounds for much of the night but failing to find the recordings. Hatanaka, on a motorcycle, and Shiizaki, on horseback, rode through the streets, tossing leaflets that explained their motives and their actions. Less than an hour before the Emperor's broadcast, at around 11:00 on August 15, Hatanaka placed his pistol to his forehead and shot himself. Shiizaki stabbed himself with a dagger, and then shot himself. In Hatanaka's pocket was found his death poem: "I have nothing to regret now that the dark clouds have disappeared from the reign of the Emperor."

Broadcast of the Imperial Rescript on surrender

The Gyokuon-hōsō, the radio broadcast in which Hirohito read the Imperial Rescript on the Termination of the War, August 15, 1945.

After pondering deeply the general trends of the world and the actual conditions obtaining in Our Empire today, We have decided to effect a settlement of the present situation by resorting to an extraordinary measure. We have ordered Our Government to communicate to the Governments of the United States, Great Britain, China and the Soviet Union that Our Empire accepts the provisions of their Joint Declaration. To strive for the common prosperity and happiness of all nations as well as the security and well-being of Our subjects is the solemn obligation which has been handed down by Our Imperial Ancestors and which lies close to Our heart. Indeed, We declared war on America and Britain out of Our sincere desire to ensure Japan's self-preservation and the stabilization of East Asia, it being far from Our thought either to infringe upon the sovereignty of other nations or to embark upon territorial aggrandizement. But now the war has lasted for nearly four years.
Despite the best that has been done by everyone—the gallant fighting of the military and naval forces, the diligence and assiduity of Our servants of the State, and the devoted service of Our one hundred million people—the war situation has developed not necessarily to Japan's advantage, while the general trends of the world have all turned against her interest. Moreover, the enemy has begun to employ a new and most cruel bomb, the power of which to do damage is, indeed, incalculable, taking the toll of many innocent lives. Should we continue to fight, not only would it result in an ultimate collapse and obliteration of the Japanese nation, but also it would lead to the total extinction of human civilization. Such being the case, how are We to save the millions of Our subjects, or to atone Ourselves before the hallowed spirits of Our Imperial Ancestors? This is the reason why We have ordered the acceptance of the provisions of the Joint Declaration of the Powers.... The hardships and sufferings to which Our nation is to be subjected hereafter will be certainly great. We are keenly aware of the inmost feelings of all of you, Our subjects. However, it is according to the dictates of time and fate that We have resolved to pave the way for a grand peace for all the generations to come by enduring the unendurable and suffering what is unsufferable.

The low quality of the recording, combined with the Classical Japanese language used by the Emperor in the Rescript, made the recording very difficult to understand for most listeners. This speech marked the end of imperial Japan's ultranationalist ideology, and was a major turning point in Japanese history. Public reaction to the Emperor's speech varied; many Japanese simply listened to it, then went on with their lives as best they could, while some Army and Navy officers chose suicide over surrender. At a base north of Nagasaki, some Japanese Army officers, enraged at the prospect of surrender, pulled 16 captured American airmen out of the base prison and hacked them to death with swords. A large, weeping crowd gathered in front of the Imperial Palace in Tokyo, with their cries sometimes interrupted by the sound of gunshots as military officers present committed suicide. On August 17, Suzuki was replaced as prime minister by the Emperor's uncle, Prince Higashikuni, perhaps to forestall any further coup or assassination attempts; Mamoru Shigemitsu replaced Tōgō as foreign minister. Japan's forces were still fighting against the Soviets as well as the Chinese, and managing their cease-fire and surrender was difficult. The last air combat by Japanese fighters against American reconnaissance bombers took place on August 18. The Soviet Union continued to fight until early September, taking the Kuril Islands.

Beginning of occupation and the surrender ceremony

Allied civilians and servicemen alike rejoiced at the news of the end of the war. A photograph, V-J Day in Times Square, of an American sailor kissing a woman in New York, and a news film of the Dancing Man in Sydney have come to epitomize the immediate celebrations. August 14 and 15 are celebrated as Victory over Japan Day in many Allied countries. The Soviet Union had intentions of occupying Hokkaidō. Unlike the Soviet occupations of East Germany and North Korea, however, these plans were frustrated by the opposition of President Truman.
Japanese officials left for Manila on August 19 to meet Supreme Commander of the Allied Powers Douglas MacArthur, and to be briefed on his plans for the occupation. On August 28, 150 U.S. personnel flew to Atsugi, Kanagawa Prefecture, and the occupation of Japan began. They were followed by USS Missouri, whose accompanying vessels landed the 4th Marines on the southern coast of Kanagawa. Other Allied personnel followed. MacArthur arrived in Tokyo on August 30, and immediately decreed several laws: No Allied personnel were to assault Japanese people. No Allied personnel were to eat the scarce Japanese food. Flying the Hinomaru or "Rising Sun" flag was severely restricted. The formal surrender occurred on September 2, 1945 around 9 a.m. Tokyo time, when representatives from the Empire of Japan signed the Japanese Instrument of Surrender in Tokyo Bay aboard the USS Missouri. Japanese Foreign Minister Shigemitsu signed for the Japanese government, while Gen. Umezu signed for the Japanese armed forces. On the Missouri that day was the American flag flown in 1853 on the USS Powhatan by Commodore Matthew C. Perry on the first of his two expeditions to Japan. Perry's expeditions had resulted in the Convention of Kanagawa, which forced the Japanese to open the country to American trade. After the formal surrender on September 2 aboard the Missouri, investigations into Japanese war crimes began quickly. At a meeting with General MacArthur later in September, Emperor Hirohito offered to take blame for the war crimes, but his offer was rejected, and he was never tried. Legal procedures for the International Military Tribunal for the Far East were issued on January 19, 1946. In addition to August 14 and 15, September 2, 1945 is also known as V-J Day. President Truman declared September 2 to be V-J Day, but noted that "It is not yet the day for the formal proclamation of the end of the war nor of the cessation of hostilities." In Japan, August 15 is often called Shūsen-kinenbi (終戦記念日), which literally means the "memorial day for the end of the war," but the government's name for the day (which is not a national holiday) is Senbotsusha o tsuitō shi heiwa o kinen suru hi (戦没者を追悼し平和を祈念する日, "day for mourning of war dead and praying for peace").

Further surrenders and continued Japanese military resistance

Following the signing of the instrument of surrender, many further surrender ceremonies took place across Japan's remaining holdings in the Pacific. Japanese forces in South East Asia surrendered on September 12, 1945 in Singapore. Taiwan's Retrocession Day (October 25) marked the end of Japanese rule of Taiwan and the subsequent rule by the Republic of China government. It was not until 1947 that all prisoners held by America and Britain were repatriated. As late as April 1949, China still held more than 60,000 Japanese prisoners. Some, such as Shozo Tominaga, were not repatriated until the late 1950s. The logistical demands of the surrender were formidable. After Japan's capitulation, more than 5,400,000 Japanese soldiers and 1,800,000 Japanese sailors were taken prisoner by the Allies. The damage done to Japan's infrastructure, combined with a severe famine in 1946, further complicated the Allied efforts to feed the Japanese POWs and civilians. The state of war between the United States and Japan officially ended when the Treaty of San Francisco took effect on April 28, 1952. Japan and the Soviet Union formally made peace four years later, when they signed the Soviet–Japanese Joint Declaration of 1956.
Some Japanese holdouts, especially on small Pacific Islands, refused to surrender at all (believing the declaration to be propaganda or considering surrender against their code). Some may never have heard of it. Teruo Nakamura, the last known holdout, emerged from his hidden retreat in Indonesia in December 1974, while two other Japanese soldiers, who had joined Communist guerrillas at the end of the war, fought in southern Thailand until 1991. |Surrender ceremonies throughout the Pacific theater| - Aftermath of World War II - Japanese holdouts - Post–World War II economic expansion - Hypothetical Axis victory in World War II - Japanese dissidence during the Shōwa period - Japanese American service in World War II - Frank, 90. - Skates, 158, 195. - Bellamy, Chris (2007). Absolute War: Soviet Russia in the Second World War. Alfred A. Knopf. p. 676. ISBN 978-0-375-41086-4. - Frank, 87–88. - Frank, 81. - Pape, Robert A. (Fall 1993). "Why Japan Surrendered". International Security 18 (2): 154–201. doi:10.2307/2539100. - Feifer, 418. - Reynolds, 363. - Frank, 89, citing Daikichi Irokawa, The Age of Hirohito: In Search of Modern Japan (New York: Free Press, 1995; ISBN 978-0-02-915665-0). Japan consistently overstated its population as 100 million, when in fact the 1944 census counted 72 million. - Skates, 100–115. - Hasegawa, 295–296 - McCormack, 253. - Frank, 87. - Frank, 86. - Spector 33. - The exact role of the Emperor has been a subject of much historical debate. Following PM Suzuki's orders, many key pieces of evidence were destroyed in the days between Japan's surrender and the start of the Allied occupation. Starting in 1946, following the constitution of the Tokyo tribunal, the imperial family began to argue that Hirohito was a powerless figurehead, which brought some historians to accept this point of view. Others, like Herbert Bix, John W. Dower, Akira Fujiwara, and Yoshiaki Yoshimi, argue that he actively ruled from behind the scenes. According to Richard Frank, "Neither of these polar positions is accurate", and the truth appears to lie somewhere in between.—Frank, 87. - Iris Chang (2012). The Rape of Nanking: The Forgotten Holocaust of World War II. Basic Books. p. 177. - For more details on what was destroyed see Page Wilson (2009). Aggression, Crime and International Security: Moral, Political and Legal Dimensions of International Relations. Taylor & Francis. p. 63. - Alan Booth. Lost: Journeys through a Vanishing Japan. Kodansha Globe, 1996, ISBN 978-1-56836-148-2. Page 67. - Frank, 92. - Frank, 91–92. - Butow, 70–71. - Spector, 44–45. - Frank, 89. - Bix, 488–489. - Michael J. Hogan (March 29, 1996). Hiroshima in History and Memory. Cambridge University Press. p. 86. - Hasegawa, 39. - Hasegawa, 39, 68. - Frank, 291. - Soviet-Japanese Neutrality Pact, April 13, 1941. (Avalon Project at Yale University) Declaration Regarding Mongolia, April 13, 1941. (Avalon Project at Yale University) - Soviet Denunciation of the Pact with Japan. Avalon Project, Yale Law School. Text from United States Department of State Bulletin Vol. XII, No. 305, April 29, 1945. Retrieved February 22, 2009. - "Molotov's note was neither a declaration of war nor, necessarily, of intent to go to war. Legally, the treaty still had a year to run after the notice of cancellation. But the Foreign Commissar's tone suggested that this technicality might be brushed aside at Russia's convenience." "So Sorry, Mr. Sato". Time, April 16, 1945. - Russia and Japan, declassified CIA report from April 1945. 
- Slavinskiĭ (page 153-4), quoting from Molotov's diary, recounts the conversation between Molotov and Satō, the Japanese ambassador to Moscow: After Molotov has read the statement, Satō "permits himself to ask Molotov for some clarifications", saying he thinks his government expects that during that year April 25, 1945 – April 25, 1946, the Soviet government will maintain the same relations with Japan it had maintained up to present, "bearing in mind that the Pact remains in force". Molotov replies that "Factually Soviet-Japanese relations revert to the situation in which they were before conclusion of the Pact". Satō observes that in that case the Soviet and Japanese government interpret the question differently. Molotov replies that "there is some misunderstanding" and explains that "on expiry of the five year period … Soviet-Japanese relations will obviously revert to the status quo ante conclusion of the Pact". After further discussion, Molotov states: "The period of the Pact's validity has not ended". Boris Nikolaevich Slavinskiĭ, The Japanese-Soviet Neutrality Pact: A Diplomatic History 1941–1945, Translated by Geoffrey Jukes, 2004, Routledge. (Extracts on-line). Page 153-4. Later in his book (page 184), Slavinskiĭ further summarizes the chain of events: - "Even after Germany's exit from the war, Moscow went on saying the Pact was still operative, and that Japan had no cause for anxiety about the future of Soviet-Japanese relations." - May 21, 1945: Malik (Soviet ambassador to Tokyo) tells Sukeatsu Tanakamura, representing Japanese fishing interests in Soviet waters, that the treaty continues in force. - May 29, 1945: Molotov tells Satō: "we have not torn up the pact". - June 24, 1945: Malik tells Kōki Hirota that the Neutrality Pact … will continue … until it expires. - Frank, 93. - Frank, 95. - Frank, 93–94. - Frank, 96. - Toland, John. The Rising Sun. Modern Library, 2003. ISBN 978-0-8129-6858-3. Page 923. - Frank, 97, quoting The Diary of Marquis Kido, 1931–45: Selected Translations into English, p 435–436. - Frank, 97–99. - Frank, 100, quoting Terasaki, 136–37. - Frank, 102. - Frank, 94. - Frank, 221, citing Magic Diplomatic Summary No. 1201. - Frank, 222–3, citing Magic Diplomatic Summary No. 1205, 2 (PDF). - Frank, 226, citing Magic Diplomatic Summary No. 1208, 10–12. - Frank, 227, citing Magic Diplomatic Summary No. 1209. - Frank, 229, citing Magic Diplomatic Summary No. 1212. - Frank, 230, citing Magic Diplomatic Summary No. 1214, 2–3 (PDF). - "Some messages were deciphered and translated the same day and most within a week; a few in cases of key change took longer"—The Oxford Guide to World War II, ed. I.C.B. Dear. Oxford: Oxford University Press, 2007. ISBN 978-0-19-534096-9 S.v. "MAGIC". - Hasegawa, 60. - Hasegawa, 19. - Hasegawa, 25. - Hasegawa, 32. - Hasegawa, 86. - Hasegawa, 115–116. - Frank, 279. - United States Army Corps of Engineers, Manhattan Engineer District (1946). "The atomic bombings of Hiroshima and Nagasaki". OCLC 77648098. Retrieved January 23, 2011. - Quiner, Tom. "What lesson can we learn from Japan?". Retrieved December 30, 2013. - Frank, 254. - Hasegawa, 67. - David F. Schmitz. Henry L. Stimson: The First Wise Man. Rowman & Littlefield, 2001, ISBN 978-0-8420-2632-1. Page 182. - Hasegawa, 90. - Frank, 256. - Frank, 260. - Hasegawa, 152–153. - "American officials meeting in Washington on August 10, 1945 … decided that a useful dividing line between the U.S. 
and Soviet administrative occupation zones would be the 38th parallel across the midsection of the [Korean] peninsula, thereby leaving Korea's central city, Seoul, within the U.S. zone. This arrangement was suggested to the Soviet side shortly after the USSR entered both the Pacific War and the Korean peninsula. The Soviets accepted that dividing line, even though their attempt to obtain a corresponding northern Japan occupation zone on the island of Hokkaido was rejected by Washington." – Edward A. Olsen. Korea, the Divided Nation. Greenwood Publishing Group, 2005. ISBN 978-0-275-98307-9. Page 62. - Rhodes, 690. - Hasegawa, 145–148. - Hasegawa, 118–119. - Weintraub, 288. - Frank, 234. - Frank, 236, citing Magic Diplomatic Summary No. 1224. - Frank, 236, citing Magic Diplomatic Summary No. 1225, 2 (PDF). - Tucker, Spencer. A Global Chronology of Conflict: From the Ancient World to the Modern Middle East: From the Ancient World to the Modern Middle East, p. 2086 (ABC-CLIO, 2009). - White House Press Release Announcing the Bombing of Hiroshima, August 6, 1945. The American Experience: Truman. PBS.org. Sourced to The Harry S. Truman Library, "Army press notes," box 4, Papers of Eben A. Ayers. - "While senior Japanese officers did not dispute the theoretical possibility of such weapons, they refused to concede that the United States had vaulted over the tremendous practical problems to create an atomic bomb." On August 7, the Imperial Staff released a message saying that Hiroshima had been struck by a new type of bomb. A team led by Lieutenant General Seizō Arisue was sent to Hiroshima on August 8 to sort out several competing theories as to the cause of the explosion, including that Hiroshima was struck by a magnesium or liquid-oxygen bomb.—Frank, 270–271. - Frank, 270–271. - Frank, 283–284. - Soviet Declaration of War on Japan, August 8, 1945. (Avalon Project at Yale University) - The Soviets delivered a declaration of war to Japanese ambassador Satō in Moscow two hours before the invasion of Manchuria. However, despite assurances to the contrary they did not deliver Satō's cable notifying Tokyo of the declaration, and cut the embassy phone lines. This was revenge for the Japanese sneak attack on Port Arthur 40 years earlier. The Japanese found out about the attack from radio broadcast from Moscow.—Butow, 154–164; Hoyt, 401. - Wilson, Ward (30 May 2013). "The Bomb Didn’t Beat Japan... Stalin Did". foreignpolicy.com. Retrieved 18 June 2016. - Sadao Asada. "The Shock of the Atomic Bomb and Japan's Decision to Surrender: A Reconsideration". The Pacific Historical Review, Vol. 67, No. 4 (Nov. 1998), pp. 477–512. - Frank, 288–9. - Diary of Kōichi Kido, 1966, p. 1223. - Frank, 290–91. - Radio Report to the American People on the Potsdam Conference by President Harry S. Truman, Delivered from the White House at 10 p.m, August 9, 1945 - Hasagawa, 207–208. - Jerome T. Hagen. War in the Pacific: America at War, Volume I. Hawaii Pacific University, ISBN 978-0-9762669-0-7. Chapter, "The Lie of Marcus McDilda", 159–162. - Hasegawa 298. - A few hours before the Japanese surrender was announced, Truman had a discussion with the Duke of Windsor and Sir John Balfour (British ambassador to the U.S.). According to Balfour, Truman "remarked sadly that he now had no alternative but to order an atomic bomb dropped on Tokyo".—Frank, 327, citing Bernstein, Eclipsed by Hiroshima and Nagasaki, p 167. - Hasagawa, 209. - Frank, 295–296. - Bix, 517, citing Yoshida, Nihonjin no sensôkan, 42–43. - Hoyt, 405. - Frank, 302. 
- Frank, 303. - While the ceasefire was in effect, Spaatz made a momentous decision. Based on evidence from the European Strategic Bombing Survey, he ordered the strategic bombing to refocus its efforts away from firebombing Japanese cities, to concentrate on wiping out Japanese oil and transportation infrastructure. Frank, 303–307. - Frank, 310. - Terasaki, 129. - Bix, 129. - Frank, 313. - Smith, 188. - Wesley F. Craven and James L. Cate, The Army Air Forces in World War II, Vol. 5, pp. 732–33. (Catalog entry, U Washington.) - Smith, 183. - Smith, 187. - Smith 187–188 notes that though the daytime bombers had already attacked Japan, the night bombers had not yet taken off when radio notification of the surrender was received. Smith also notes that, despite substantial efforts, he has found no historical documentation relating to Carl Spaatz's order to go ahead with the attack. - Frank, 314. - Frank, 315. - Bix, 558. - MacArthur, Douglas. "Reports of General MacArthur Vol II - Part II". US Army Center of Military History. Retrieved 16 February 2016. On the same day that the Rescript to the armed forces was issued, three Imperial Princes left Tokyo by air as personal representatives of the Emperor to urge compliance with the surrender decision upon the major overseas commands. The envoys chosen all held military rank as officers of the Army, and they had been guaranteed safety of movement by General MacArthur's headquarters. General Prince Yasuhiko Asaka was dispatched as envoy to the headquarters of the expeditionary forces in China, Maj. Gen. Prince Haruhiko Kanin to the Southern Army, and Lt. Col. Prince Tsuneyoshi Takeda to the Kwantung Army in Manchuria. - Fuller, Richard Shokan: Hirohito's Samurai 1992 p.290 ISBN 1-85409-151-4 - Hasegawa, 244. - Hoyt, 409. - Frank, 316. - Frank, 318. - Hoyt 407–408. - Frank, 317. - Frank, 319. - Butow, 220. - Hoyt, 409–410. - The Pacific War Research Society, 227. - The Pacific War Research Society, 309. - Butow, 216. - Hoyt, 410. - The Pacific War Research Society, 279. - Wainstock, 115. - The Pacific War Research Society, 246. - Hasegawa, 247. - The Pacific War Research Society, 283. - Hoyt, 411. - The Pacific War Research Society, 303. - The Pacific War Research Society, 290. - The Pacific War Research Society, 311. - "Text of Hirohito's Radio Rescript", The New York Times, p. 3, 15 August 1945, retrieved 8 August 2015 - Dower, 34. - "The Emperor's Speech: 67 Years Ago, Hirohito Transformed Japan Forever". The Atlantic. Retrieved May 23, 2013. - Dower, 38–39. - Spector, 558. (Spector incorrectly identifies Higashikuni as the Emperor's brother.) - The Last to Die | Military Aviation | Air & Space Magazine. Airspacemag.com. Retrieved on 2010-08-05. - Which day they celebrate V-J day depends on the local time at which they received word of Japan's surrender. British Commonwealth countries celebrate the 15th, whereas the United States celebrates the 14th. - Hasegawa, 271ff - Individuals and prefectural offices could apply for permission to fly it. The restriction was partially lifted in 1948 and completely lifted the following year. - The USS Missouri was anchored at 35° 21′ 17″ N 139° 45′ 36″E' - USS Missouri Instrument of Surrender, WWII, Pearl Harbor, Historical Marker Database, www.hmdb.org, Retrieved 2012-03-27. - "1945 Japan surrenders". Retrieved 2015-08-14. - "The framed flag in lower right is that hoisted by Commodore Matthew C. Perry on 14 July 1853, in Yedo (Tokyo) Bay, on his first expedition to negotiate the opening of Japan." 
Formal Surrender of Japan, 2 September 1945—Surrender Ceremonies Begin. United States Naval Historical Center. Retrieved February 25, 2009. - Dower, 41. - "1945: Japan signs unconditional surrender" On This Day: September 2, BBC. - The Tokyo War Crimes Trials (1946–1948). The American Experience: MacArthur. PBS. Retrieved February 25, 2009. - "Radio Address to the American People after the Signing of the Terms of Unconditional Surrender by Japan," Harry S. Truman Library and Museum (1945-09-01). - 厚生労働省:全国戦没者追悼式について (in Japanese). Ministry of Health, Labour and Welfare. 2007-08-08. Retrieved 2008-02-16. - Ng Yuzin Chiautong (1972). "Historical and Legal Aspects of the International Status of Taiwan (Formosa)". World United Formosans for Independence (Tokyo). Retrieved 2010-02-25. - "Taiwan's retrocession procedurally clear: Ma". The China Post. CNA. 2010-10-26. Retrieved 2015-08-14. - Dower, 51. - Cook 40, 468. - Weinberg, 892. - Cook 403 gives the total number of Japanese servicemen as 4,335,500 in Japan on the day of the surrender, with an additional 3,527,000 abroad. - Frank, 350–352. - Cook contains an interview with Iitoyo Shogo about his experiences as POW of the British at Galang Island—known to prisoners as "Starvation Island". - "Preface". Ministry of Foreign Affairs of Japan. - H. P. Wilmott, Robin Cross & Charles Messenger, World War II, Dorling Kindersley, 2004, p. 293. ISBN 978-0-7566-0521-6 - Bix, Herbert (2001). Hirohito and the Making of Modern Japan. Perennial. ISBN 978-0-06-093130-8. - Butow, Robert J. C. (1954). Japan's Decision to Surrender. Stanford University Press. ISBN 978-0-8047-0460-1. - Cook, Haruko Taya; Theodore F. Cook (1992). Japan at War: An Oral History. New Press. ISBN 978-1-56584-039-3. - Dower, John (1999). Embracing Defeat: Japan in the Wake of World War II. W.W. Norton. ISBN 978-0-393-04686-1. - Feifer, George (2001). The Battle of Okinawa: The Blood and the Bomb. Guilford, CT: The Lyons Press. ISBN 978-1-58574-215-8. - Ford, Daniel (September 1995). "The Last Raid: How World War Two Ended". Air & Space Smithsonian. pp. 74–81. Archived from the original on August 10, 2004. - Frank, Richard B. (1999). Downfall: the End of the Imperial Japanese Empire. New York: Penguin. ISBN 978-0-14-100146-3. - Glantz, David M. (February 1983). "August Storm: The Soviet 1945 Strategic Offensive in Manchuria". Fort Leavenworth, KA: Leavenworth Paper No.7, Command and General Staff College. Archived from the original on July 23, 2011. Retrieved May 31, 2012. - Glantz, David M. (June 1983). "August Storm: Soviet Tactical and Operational Combat in Manchuria, 1945". Fort Leavenworth, KA: Leavenworth Paper No.8, Command and General Staff College. Archived from the original on March 16, 2003. Retrieved May 31, 2012. - Glantz, David M. (1995) "The Soviet Invasion of Japan". Quarterly Journal of Military History, vol. 7, no. 3, Spring 1995. - Glantz, David M. (2003). The Soviet Strategic Offensive in Manchuria, 1945 (Cass Series on Soviet (Russian) Military Experience, 7). Routledge. ISBN 978-0-7146-5279-5. - Hasegawa, Tsuyoshi (2005). Racing the Enemy: Stalin, Truman, and the Surrender of Japan. Harvard University Press. ISBN 978-0-674-01693-4. - Hoyt, Edwin P. (1986). Japan's War: The Great Pacific Conflict, 1853–1952. New York: Cooper Square Press. ISBN 978-0-8154-1118-5. - The Pacific War Research Society (1968) . Japan's Longest Day (English language ed.). Palo Alto, California: Kodansha International. - Reynolds, Clark G. (1968). 
The Fast Carriers; The Forging of an Air Navy. New York, Toronto, London, Sydney: McGraw-Hill. - Rhodes, Richard (1986). The Making of the Atomic Bomb. Simon and Schuster. ISBN 978-0-671-44133-3. - Skates, John Ray (1994). The Invasion of Japan: Alternative to the Bomb. Columbia, SC: University of South Carolina Press. ISBN 978-0-87249-972-0. - Smith, John B.; Malcolm McConnell (2002). The Last Mission: The Secret Story of World War II's Final Battle. New York: Broadway Books. ISBN 978-0-7679-0778-1. - Slavinskiĭ, Boris Nikolaevich (2004). The Japanese-Soviet Neutrality Pact: A Diplomatic History, 1941–1945. Nissan Institute/Routledge Japanese studies series. London; New York: RoutledgeCurzon. ISBN 978-0-415-32292-8. - Spector, Ronald H. (1985). Eagle against the Sun. Vintage. ISBN 978-0-394-74101-7. - Thomas, Gordon, and Witts, Max Morgan (1977). Enola Gay. 1978 reprint, New York: Pocket Books. ISBN 0-671-81499-0. - Hidenari Terasaki (寺崎英成) (1991). Shōwa Tennō dokuhakuroku: Terasaki Hidenari, goyō-gakari nikki (昭和天皇独白録 寺崎英成・御用掛日記). Tokyo: Bungei Shunjū. ISBN 978-4-16-345050-6. (Japanese) - Wainstock, Dennis (1996). The Decision to Drop the Atomic Bomb. Greenwood Publishing Group. ISBN 978-0-275-95475-8. - Weinberg, Gerhard L. (1999). A World at Arms: A Global History of World War II. Cambridge University Press. ISBN 978-0-521-55879-2. - Weintraub, Stanley (1995). The Last Great Victory: The End of World War II. Dutton Adult. ISBN 978-0-525-93687-9. - Japanese Instruments of Surrender - Original document: surrender of Japan - on YouTube - Hirohito's Determination of surrender 終戦 Shūsen (Japanese) - Minutes of private talk between British Prime Minister Winston Churchill and Marshal Joseph Stalin at the Potsdam Conference on July 17, 1945 - Article concerning Japan's surrender
Braja Sorensen Team, November 26, 2020

Dividing fractions worksheets for 6th grade, with answers. These free printable worksheets, available as PDF or image downloads, give students pencil-and-paper practice with every variation of the skill: dividing proper fractions (with denominators between 2 and 25), dividing improper fractions, dividing whole numbers by fractions, dividing mixed numbers by mixed numbers, and dividing decimals. Each set includes an answer key, and students should simplify all answers. The worksheets work equally well as classwork or homework.

The problems follow the standard procedure for dividing fractions: convert any mixed numbers into improper fractions, multiply the dividend by the reciprocal of the divisor, cancel or simplify common factors, and write the product in simplest form (a short worked example follows below). Because developing a concrete, conceptual model of division helps students understand the "why" behind the procedure rather than just memorize it, the collection also includes visual fraction models and word problems aligned to Common Core standard 6.NS.1, which asks students to interpret and compute quotients of fractions and to solve word problems involving division of fractions by fractions. Related materials cover equivalent fractions, comparing fractions, adding unlike fractions, multiplying fractions and mixed numbers, and converting between fractions and decimals, and a study guide that mirrors the end-of-unit assessment helps students prepare for testing.
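To make that procedure concrete, here is a small worked example, sketched in Python with the standard library's fractions module; the specific numbers (2 3/4 divided by 1 1/2) are chosen only for illustration.

    from fractions import Fraction

    # Step 1: convert the mixed numbers to improper fractions.
    dividend = Fraction(2) + Fraction(3, 4)   # 2 3/4 = 11/4
    divisor = Fraction(1) + Fraction(1, 2)    # 1 1/2 = 3/2

    # Step 2: multiply the dividend by the reciprocal of the divisor.
    reciprocal = Fraction(divisor.denominator, divisor.numerator)  # 2/3
    quotient = dividend * reciprocal          # 11/4 * 2/3 = 22/12

    # Step 3: Fraction keeps results in simplest form automatically.
    print(quotient)             # 11/6, i.e. the mixed number 1 5/6
    print(dividend / divisor)   # dividing directly gives the same answer

Worked by hand the same way: 2 3/4 divided by 1 1/2 is 11/4 x 2/3 = 22/12 = 11/6 = 1 5/6.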
August 18, 2016

In the past, the remoteness and high elevation of the Himalayas and Tibetan plateau were believed to protect the vast expanse of land from the chronic pollution that haunts the densely populated regions of China and South Asia. It was thought that the world's highest mountain range would act as a barrier to stop pollution from reaching the higher glaciated peaks. But new research from scientists at the Chinese Academy of Sciences shows pollution is spreading over the world's highest mountain range and across central Tibet. The team recorded major spikes in pollution this April on the northern slopes of Everest, and traced the source back to Nepal and northern India.

Brown clouds blight South Asia

Every year, particularly during the dry winter season from October until May, the heavily populated Indo-Gangetic Plain in South Asia is plagued by severe air pollution. In recent years, people in India and Nepal, in particular, have suffered from periods of suffocating air pollution, known scientifically as Atmospheric Brown Clouds (ABCs). In 2015 India's air pollution levels overtook China's. According to an analysis by Greenpeace of NASA satellite data, the average particulate matter exposure in India exceeded China's; more importantly, China's exposure fell 15% between 2014 and 2015, while India's increased by 2% per year. Pollution is worse in the winter because there is no rain to wash pollutants from the air. In Kathmandu, for example, days with clear views of the Himalayas have become very rare due to the heavy brown clouds shrouding the valley. Emissions from burning fossil fuels and biomass for cooking and heating build up during the dry season when brown clouds extend from the Indian Ocean to the Himalayan ridge. The soot, sulphates and other harmful aerosols in Atmospheric Brown Clouds pose a major threat to the water and food security of Asia, according to a 2008 study by UNEP. The soot settles on the glaciers, darkening the snow and increasing absorption of energy. This speeds up the melting of glaciers and snowpack in the Hindu Kush-Himalayas, which provide water for millions of people living downstream. Pollutants also absorb sunlight and heat the atmosphere, and so are thought to be as "important as greenhouse gas warming in accounting for the anomalously large warming trend observed in the elevated regions," states the report. With the westerlies blowing across the Indo-Gangetic Plain, pollutants spread across Nepal and climb up the valleys and slopes of the Himalayan ranges. Ice-core samples taken from both the southern and northern slopes of the Himalayas have also revealed rising soot concentrations during times of rapid industrialization in recent decades, indicating pollution could travel over the high mountain range. In recent years Chinese scientists have found more definitive evidence that the Himalayas do not block the passage of pollution into the central region of the Tibetan plateau. Thanks to observations at various observation stations on the northern slopes of the Himalayas since 2009, they have found a similar concentration and type of pollutants on both the south and north sides of Mount Everest. Professor Kang Shichang, director of the State Key Laboratory of Cryosphere Science at CAS, has been monitoring the atmosphere of the Qinghai-Tibetan Plateau for over 15 years.
In April this year, Kang observed a sudden spike in black carbon at the Qomolangma (Mount Everest) Station, an observation site 4,276 metres above sea level in the northern foothills of Mount Everest. Typically the Everest station records black carbon concentrations of about 0.3 microgrammes per cubic metre, but on April 9-18, levels spiked to 1.2-2.4 microgrammes per cubic metre. While severe for this unpopulated area, this level of black carbon is still relatively low for China; in the country's urban areas, black carbon concentrations tend to range between 6-11 microgrammes per cubic metre, according to a 2012 report by the United States Environmental Protection Agency. The extremely high levels of black carbon in the atmosphere during this period were far beyond the background figure on Everest and thus could be classed as a "pollution event," said Kang. Using satellite images and computer simulations of the air circulation system based on meteorological data, Kang and his research team concluded that 97% of the air masses passing through the station during that period came from northern India and the neighbouring area in Nepal. "The transport of the air masses passed most parts of Nepal before finally climbing over the Himalayas to reach the northern slopes of Mount Everest," states the analysis from Kang's team. The team traced the pollution to Nepal as the major source, followed by northern India, using images from the MODIS instrument on NASA's Aqua satellite. "An apparent rise of Aerosol Optical Depth (AOD), particularly fine aerosol particles, at the Everest station occurred, which indicated the increasing of fine particles from burned biomass [such as from cooking stoves fed by wood or crops, or from forest fires]," the report stated. During April, when the spike in pollution on Everest was recorded, Nepal experienced a region-wide heavy haze from forest fires after prolonged drought. Both NASA satellites and local anecdotal evidence showed widespread forest fires and the burning of agricultural crops in Nepal from 7 April through the following week. Transboundary air pollution does not only affect the area around Everest. In March 2009, the Namco station in central Tibet, around 800 kilometres northeast of the Everest station, recorded a sudden rise in AOD to 0.42 (by comparison, AOD varied between 0.428 and 0.550 during 2000-2013 in China's most polluted urban region, around Beijing). This was a huge spike compared to the low base level of 0.029 AOD. The results and analysis of Kang's team were published in Atmospheric Chemistry and Physics in mid-2015. The research detailed "how polluted air masses from atmospheric brown cloud (ABC) over South Asia reach the Tibetan Plateau within a few days," driven by a combination of long-distance and local meteorological processes. Every year in the months before the monsoon, from March until May, there is a high chance of severe transboundary air pollution, Kang said, based on almost a decade of observations. "The Himalayas are not an impermeable barrier," said Arnico Panday. Panday is programme coordinator of the Atmosphere Initiative at the International Centre for Integrated Mountain Development (ICIMOD), and a former visiting professor at the Massachusetts Institute of Technology.
In an email interview, he stated, "There are passes and river valleys that cross the Himalaya at numerous places, so it is easy for air from the south side to reach the north side." Kang emphasised the importance of regional cooperation to find a solution to long-distance transboundary air pollution in the Himalayan region. So far, however, the Malé Declaration on Control and Prevention of Air Pollution and its Likely Transboundary Effects for South Asia, adopted by some South Asian countries in 1998, has not produced any results. "Right now most people in the source regions don't even know the consequences of the pollution they are emitting," said Panday. There are still huge data gaps, and gaps in the scientific understanding of the relationship between source regions and locations of impacts, he explained. Indeed, transboundary air pollution is a problem for many regions across the world. As early as 1979, European countries signed the Geneva Convention on Long-range Transboundary Air Pollution to combat region-wide smog. The pollution from forest fires in Indonesia has had a devastating effect upon neighbouring countries. Eastern China is allegedly a source of pollution for Korea and Japan, and even for places as far away as California. Technological cooperation and exchange are essential to control the glacier melt and the expansion of the atmospheric brown cloud. "We are planning to deepen our cooperation with ICIMOD and enhance the joint atmospheric observations on the Third Pole," Kang said. "We also expect to include the efforts of Indian researchers into our network," he added. According to Kang, some of China's experience, such as the strict ban on field burning and the effective recycling of crop stubble in eastern China, could be promoted in South Asian countries facing similar problems. In addition, mitigation efforts, including the promotion of energy-saving cooking stoves, shifting to higher-standard fuel, and dismantling polluting brick kilns, are just as important. Meanwhile, the glaciers in the high Himalayas are experiencing pollution like never before.
Neural networks are computational models inspired by the structure and function of the human brain. They are composed of interconnected artificial neurons (also known as nodes or units) organized in layers. These networks learn from data by adjusting the weights and biases associated with the connections between neurons. Here’s a high-level overview of how neural networks operate: - Input Layer: The neural network begins with an input layer that receives the raw data or features. Each neuron in the input layer represents a feature or attribute of the data. - Hidden Layers: After the input layer, one or more hidden layers can be present in the network. Hidden layers are composed of neurons that receive input from the previous layer and apply a mathematical transformation to produce an output. Hidden layers enable the network to learn complex patterns and relationships in the data. - Weights and Biases: Each connection between neurons in adjacent layers has an associated weight and bias. The weights determine the strength of the connection, while the biases introduce an offset. Initially, these weights and biases are assigned randomly. - Activation Function: Neurons in the hidden layers and output layer typically apply an activation function to the weighted sum of their inputs plus the bias. The activation function introduces non-linearity into the network, enabling it to learn and model complex relationships. - Forward Propagation: During the forward propagation phase, the neural network computes an output based on the input data. The outputs are calculated by propagating the inputs through the layers, applying the activation functions, and using the current weights and biases. - Loss Function: The output of the neural network is compared to the desired output using a loss function. The loss function quantifies the difference between the predicted output and the actual output. The goal of training is to minimize this loss. - Backpropagation: Backpropagation is the process of adjusting the weights and biases of the network based on the computed loss. It works by calculating the gradient of the loss function with respect to the weights and biases, and then updating them in the direction that reduces the loss. This process is typically done using optimization algorithms like gradient descent. - Training: The network goes through multiple iterations of forward propagation and backpropagation to update the weights and biases, gradually reducing the loss. This iterative process is known as training. The training is typically performed on a labeled dataset, where the desired outputs are known, allowing the network to learn from the provided examples. - Prediction: Once the neural network has been trained, it can be used for making predictions on new, unseen data. The forward propagation process is applied to the new input data, and the network produces an output based on the learned weights and biases. - Evaluation and Iteration: The performance of the trained neural network is evaluated using various metrics and validation datasets. If the performance is not satisfactory, the network can be adjusted by modifying the architecture, tuning hyperparameters, or acquiring more training data. This iterative process continues until the desired performance is achieved. 
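To make the forward-propagation and backpropagation loop described above concrete, here is a minimal sketch in Python with NumPy, not a production implementation: a two-layer network with sigmoid activations learning the XOR function under a mean-squared-error loss. The layer sizes, learning rate, and epoch count are illustrative assumptions, not values prescribed by the text.

    import numpy as np

    # Input layer: four 2-feature samples; desired outputs for XOR.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Weights and biases, randomly initialized (2 inputs -> 4 hidden -> 1 output).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        # Activation function: introduces the non-linearity mentioned above.
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for epoch in range(10000):
        # Forward propagation through the hidden and output layers.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Loss function: mean squared error between prediction and target.
        loss = np.mean((output - y) ** 2)

        # Backpropagation: gradients of the loss with respect to the weights
        # and biases (constant factors are folded into the learning rate).
        d_output = (output - y) * output * (1 - output)
        d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

        # Gradient-descent update.
        W2 -= lr * hidden.T @ d_output
        b2 -= lr * d_output.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_hidden
        b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

    # Prediction: once training has converged, the outputs should approach [0, 1, 1, 0].
    print(np.round(output, 2))

Frameworks such as PyTorch or TensorFlow automate the backpropagation step, but the flow is the same: forward pass, loss, gradients, parameter update, repeated until the loss stops improving.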
It’s important to note that this is a simplified explanation of neural networks, and there are many variations and additional concepts involved in different types of neural networks, such as convolutional neural networks (CNNs) for image processing or recurrent neural networks (RNNs) for sequential data. Pandas and Python together form a powerful toolkit for data analysis and manipulation due to several key factors: Data Structures: Pandas provides two primary data structures: Series and DataFrame. Series is a one-dimensional labeled array capable of holding any data type, while DataFrame is a two-dimensional labeled data structure with columns of potentially different data types. These data structures offer flexible ways to store, manipulate, and analyze data, similar to tables in a relational database. Data Cleaning and Transformation: Pandas offers a wide range of functions and methods to clean and transform data. It provides tools for handling missing data, removing duplicates, reshaping data, splitting and combining datasets, and applying various data transformations such as filtering, sorting, and aggregation. These capabilities make it easier to preprocess and prepare data for analysis. Efficient Data Operations: Pandas is built on top of the NumPy library, which provides efficient numerical operations in Python. It leverages the underlying array-based operations to perform vectorized computations, enabling fast and efficient processing of large datasets. This efficiency is particularly valuable when dealing with complex data operations and computations. Flexible Indexing and Selection: Pandas allows flexible indexing and selection of data, both by label and by position. It provides various methods to access specific rows, columns, or subsets of data based on criteria, making it easy to filter and extract relevant information. The ability to slice, filter, and manipulate data based on conditions is crucial for data analysis and manipulation tasks. Integration with Other Libraries: Pandas seamlessly integrates with other libraries commonly used in the Python ecosystem, such as Matplotlib for visualization, scikit-learn for machine learning, and many others. This interoperability allows data scientists and analysts to leverage the strengths of different libraries and create powerful workflows for data analysis, modeling, and visualization. Extensive Functionality: Pandas offers a vast array of functions and methods for data analysis and manipulation. It includes capabilities for data alignment, merging, reshaping, time series analysis, statistical computations, handling categorical data, and much more. This rich functionality provides a comprehensive toolkit to address a wide range of data-related tasks and challenges. Active Community and Ecosystem: Pandas has a large and active community of users and developers who contribute to its development and provide support. This active ecosystem ensures that Pandas is continuously improved, maintained, and extended with new features and functionalities. The availability of extensive documentation, tutorials, and online resources further enhances its usability and learning curve. In combination with Python’s simplicity, readability, and wide adoption as a general-purpose programming language, these factors make Pandas and Python a powerful toolkit for data analysis, manipulation, and exploration. They enable data professionals to efficiently work with data, derive insights, and build data-driven applications. 
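As a small illustration of several of these points (the DataFrame structure, cleaning, selection, and aggregation), here is a sketch; the column names and values are invented for the example and not taken from any particular dataset.

    import pandas as pd

    # A DataFrame: a two-dimensional labeled structure with columns of
    # potentially different types (strings and floats here).
    df = pd.DataFrame({
        "customer": ["Ann", "Bob", "Ann", "Cara", "Bob"],
        "region":   ["East", "West", "East", "East", None],
        "amount":   [120.0, 80.0, None, 200.0, 95.0],
    })

    # Data cleaning: handle missing values and drop exact duplicate rows.
    df["region"] = df["region"].fillna("Unknown")
    df["amount"] = df["amount"].fillna(df["amount"].median())
    df = df.drop_duplicates()

    # Flexible selection: filter rows by a condition.
    east = df[df["region"] == "East"]

    # Aggregation: total and average amount per region, sorted by total.
    summary = (df.groupby("region")["amount"]
                 .agg(["sum", "mean"])
                 .sort_values("sum", ascending=False))
    print(east)
    print(summary)

Because these operations are vectorized on top of NumPy, the same few lines scale from a toy table like this one to millions of rows.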
As an AI language model, ChatGPT is already a powerful tool that can be used for a wide range of tasks, including natural language processing, question answering, language translation, and more. However, if you want to build a more specialized AI system using ChatGPT, here are some steps you can follow:
- Define your problem: Start by clearly defining the problem you want your AI system to solve. This could be anything from classifying images to answering customer service inquiries.
- Collect and prepare data: To build an AI system, you need to train it on a large dataset of examples. Collect data that is relevant to your problem and then preprocess it to ensure it is in a suitable format for training.
- Fine-tune ChatGPT: Once you have your dataset, you can fine-tune ChatGPT to perform the specific task you want it to do. Fine-tuning involves training the model on your dataset so that it learns the patterns and relationships in your data.
- Evaluate your model: Once you have trained your model, you need to evaluate its performance on a separate test dataset. This will help you determine whether the model is accurately solving the problem you defined in step 1.
- Deploy your model: Finally, you can deploy your AI system so that it can be used in the real world. This could involve integrating it into an existing application, creating a standalone service, or building a custom user interface.
Keep in mind that building an AI system is a complex process that requires a strong understanding of machine learning and natural language processing concepts. If you're new to these fields, it's a good idea to start with some tutorials and introductory materials before diving into a full-scale AI project.

A database is an organized collection of data that is stored and managed using a computer system. It is designed to make it easy to access, manage, and update large amounts of data in a structured way. Databases can be used to store a wide variety of information, such as customer data, financial records, product information, employee information, and more. They are often used by businesses, organizations, and individuals to keep track of important information that they need to access and analyze on a regular basis. Databases can be organized in different ways, such as in tables, documents, graphs, or other formats, depending on the needs of the user. They can also be accessed and manipulated using specialized software called a database management system (DBMS). Some popular examples of DBMS include MySQL, Oracle, SQL Server, and MongoDB.

Here are some tips for creating basic SQL queries, along with examples:
- Start with a clear understanding of the data you need to retrieve. Identify the specific fields (columns) you need to include in your query. Example: If you want to retrieve a list of customers from a database, you might need their names, email addresses, and phone numbers. In this case, your query would include the fields "Name", "Email", and "Phone Number".
- Use the SELECT statement to specify the fields you want to retrieve.
  SELECT Name, Email, Phone_Number FROM Customers;
  This query will retrieve the "Name", "Email", and "Phone_Number" fields from the "Customers" table.
- Use the FROM statement to specify the table you want to retrieve data from.
  SELECT * FROM Orders;
  This query will retrieve all the fields from the "Orders" table.
- Use the WHERE statement to filter the results based on specific conditions.
For example: SELECT * FROM Orders WHERE Order_Date >= '2022-01-01'; This query retrieves all the fields from the “Orders” table where the “Order_Date” is equal to or greater than '2022-01-01'.
- Use the ORDER BY statement to sort the results based on specific fields. For example: SELECT * FROM Customers ORDER BY Name ASC; This query retrieves all the fields from the “Customers” table and sorts them in ascending order based on the “Name” field.
Hope these tips and examples help you get started with creating basic SQL queries!
Power BI Desktop is a powerful business intelligence tool developed by Microsoft. It allows users to create interactive visualizations, reports, and dashboards by connecting to various data sources. Here are some key features and benefits of Power BI Desktop:
- Data connectivity: Power BI Desktop allows users to connect to a variety of data sources, including Excel spreadsheets, cloud-based data sources, and on-premises databases.
- Data modeling: Power BI Desktop provides a robust data modeling engine that allows users to transform, clean, and combine data from different sources. This enables users to create unified views of their data that are optimized for reporting and analysis.
- Visualization: Power BI Desktop includes a range of visualization options that make it easy to create compelling reports and dashboards. These visualizations include charts, tables, maps, and custom visuals.
- Sharing and collaboration: Power BI Desktop allows users to share their reports and dashboards with others in their organization. This makes it easy to collaborate on data analysis and decision-making.
- Mobile support: Power BI Desktop reports and dashboards can be accessed on mobile devices using the Power BI mobile app. This makes it easy to view and interact with data on the go.
- Integration with other Microsoft products: Power BI Desktop integrates with other Microsoft products, such as Excel, SharePoint, and Teams. This allows users to leverage existing investments in Microsoft technology.
Overall, Power BI Desktop is a powerful business intelligence tool that enables users to turn their data into actionable insights. It provides a range of features and benefits that make it a great choice for organizations of all sizes.
Data scientists and data analysts are both important roles in the field of data science, but they have different responsibilities and skill sets. A data analyst is responsible for collecting, processing, and performing basic statistical analysis on data to identify patterns and trends. They typically use tools such as spreadsheets, databases, and data visualization software to perform these tasks. Data analysts are primarily focused on finding insights from data that can be used to inform business decisions. On the other hand, data scientists are responsible for developing and implementing complex machine learning algorithms and statistical models to solve business problems. They are skilled in programming languages like Python and R and use tools such as deep learning frameworks to build predictive models that can be used to identify patterns in large datasets. Data scientists are typically more focused on developing new insights and creating predictive models that can help businesses make more informed decisions. Overall, while there is some overlap between the two roles, data analysts tend to focus more on descriptive analytics, while data scientists focus on predictive analytics and developing new models.
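To make the descriptive-versus-predictive distinction concrete, here is a rough Python sketch. The figures and column names are invented; it pairs a quick descriptive summary in Pandas with a simple predictive model in scikit-learn.

```python
# Hypothetical illustration of the split described above: descriptive analytics
# summarizes what already happened, while predictive analytics fits a model to
# estimate what might happen next. The data and column names are invented.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "ad_spend": [100, 150, 200, 250, 300, 350],
    "revenue": [1100, 1400, 1900, 2300, 2800, 3300],
})

# Descriptive: summary statistics of historical data
print(df.describe())

# Predictive: a simple regression model trained on the same data
model = LinearRegression()
model.fit(df[["ad_spend"]], df["revenue"])
print(model.predict(pd.DataFrame({"ad_spend": [400]})))  # estimated revenue at ad_spend = 400
```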
Tableau is a powerful data visualization tool that allows users to turn raw data into compelling visualizations. Here are some tips to help you convert raw data into effective visualizations using Tableau: - Understand your data: Before creating any visualizations, it is important to understand the data you are working with. What is the purpose of the data? What insights are you hoping to gain from it? This will help you determine which types of visualizations are most appropriate for your data. - Choose the right visualization type: There are many different types of visualizations available in Tableau, including bar charts, line charts, scatter plots, and more. Choose the type that best represents your data and the insights you want to convey. - Use color effectively: Color can be a powerful tool in data visualization, but it can also be distracting if not used correctly. Use color to highlight important data points or to group related data together. - Keep it simple: While it can be tempting to add lots of bells and whistles to your visualizations, it is important to keep them simple and easy to understand. Avoid cluttering your visualizations with too much information or unnecessary design elements. - Make it interactive: Tableau allows you to create interactive visualizations that allow users to explore the data on their own. Add filters, tooltips, and other interactive elements to make your visualizations more engaging and informative. - Tell a story: Data visualizations are most effective when they tell a story. Use your visualizations to guide viewers through the data and help them draw meaningful conclusions. Overall, creating effective data visualizations using Tableau requires a combination of technical skills, creativity, and an understanding of the data you are working with. By following these tips, you can create compelling visualizations that help you and others gain new insights from raw data. Business Intelligence (BI) refers to the use of technology, data analysis, and strategic decision-making to help organizations gain valuable insights into their business operations. BI can be used to identify trends, forecast future outcomes, and make data-driven decisions that can help organizations achieve sustainable and profitable growth. Here are some ways in which business intelligence can lead to sustainable and profitable growth: - Improved data accuracy and accessibility: BI tools can help organizations collect and analyze accurate data from different sources, such as social media, customer feedback, and financial reports. This data can be used to identify trends, patterns, and insights that can inform strategic decision-making. - Better forecasting and planning: BI tools can help organizations forecast future demand, sales, and revenue, allowing them to make informed decisions about resource allocation, product development, and marketing strategies. - Improved operational efficiency: BI tools can help organizations identify inefficiencies in their operations, such as supply chain bottlenecks, and suggest ways to optimize processes to reduce costs and increase profitability. - Enhanced customer insights: BI tools can help organizations analyze customer behavior and preferences, allowing them to tailor their products and services to meet customer needs and improve customer satisfaction. 
Competitive advantage: BI can provide organizations with a competitive advantage by helping them stay ahead of industry trends, identify new business opportunities, and respond to market changes faster than their competitors. In summary, business intelligence can lead to sustainable and profitable growth by providing organizations with valuable insights into their operations, enabling them to make data-driven decisions, and helping them stay ahead of their competitors. There are several data analysis techniques that can be used to make calculated business decisions faster. Here are a few: - Descriptive analytics: This technique is used to summarize and describe historical data. It is useful for understanding patterns, trends, and relationships in the data. Descriptive analytics can help businesses to identify key performance indicators (KPIs) and track their progress over time. - Predictive analytics: This technique uses statistical algorithms and machine learning to predict future outcomes based on historical data. Predictive analytics can help businesses to forecast demand, identify potential risks and opportunities, and optimize their operations. - Prescriptive analytics: This technique uses optimization algorithms to recommend the best course of action based on a set of constraints and objectives. Prescriptive analytics can help businesses to make decisions that are aligned with their goals and resources. - Data mining: This technique involves exploring and analyzing large data sets to uncover patterns, relationships, and insights that can inform business decisions. Data mining can help businesses to identify customer segments, optimize pricing strategies, and improve marketing campaigns. - Business intelligence (BI): This technique involves the use of software tools to collect, analyze, and visualize data in order to provide insights that can inform business decisions. BI can help businesses to monitor their performance, track KPIs, and identify areas for improvement. Ultimately, the most effective data analysis technique will depend on the specific needs and goals of the business. A combination of these techniques may be necessary to make calculated business decisions faster. Google Data Studio is a powerful tool that allows you to visualize and analyze your data in a meaningful way. Here are some tips for using it effectively: - Connect your data sources: Before you can start creating reports, you need to connect your data sources to Google Data Studio. This could include data from Google Analytics, Google Ads, Google Sheets, and other sources. - Define your metrics: Before you start building reports, you need to decide what metrics are important to your business. These could include things like website traffic, conversion rates, revenue, and more. - Create a report: Once you have your data sources and metrics defined, you can start creating your report. You can use a variety of visualization tools, including bar charts, line charts, tables, and more. - Use filters: Filters can help you refine your data and focus on specific segments or time periods. You can create filters based on dimensions like time, location, device, and more. - Add context: It’s important to provide context for your data, so viewers understand what they’re seeing. You can add text boxes, images, and other visual elements to provide context and insights. - Share your report: Once you’ve created your report, you can share it with others in your organization or with clients. 
You can also schedule regular email updates to keep stakeholders informed.
- Monitor your data: Finally, it’s important to monitor your data and make adjustments as needed. Use your reports to identify trends and insights, and make data-driven decisions based on what you learn.
Overall, Google Data Studio is a powerful tool for data analysis, and with the right approach, you can use it to gain valuable insights and make informed decisions for your business.
Building dashboards in Excel involves creating a visual representation of your data that allows you to quickly and easily analyze and understand key metrics. Here are the general steps you can follow to create a dashboard in Excel:
- Identify the key metrics: Determine what metrics you want to track in your dashboard, such as revenue, expenses, customer acquisition, website traffic, etc.
- Gather and organize the data: Collect the data you need for each metric and organize it in a structured format, such as a table or a pivot table.
- Choose the type of chart: Decide what type of chart will best represent each metric, such as a line chart, bar chart, pie chart, or scatter chart.
- Create the charts: Use Excel’s charting tools to create the charts for each metric.
- Design the dashboard layout: Decide how you want to arrange the charts on the dashboard and design a layout that is visually appealing and easy to read.
- Add interactivity: Use Excel’s interactive features, such as slicers or drop-down menus, to allow users to filter the data and customize the dashboard based on their needs.
- Test and refine: Test your dashboard with a small group of users to ensure it is easy to use and understand, and make any necessary refinements based on their feedback.
- Share the dashboard: Once your dashboard is complete, share it with the intended audience, either by sharing the Excel file or by publishing it to a web-based platform like SharePoint or Power BI.
Overall, building a dashboard in Excel requires a combination of data analysis, charting skills, and design expertise. With practice and patience, you can create a dashboard that effectively communicates your data and helps you make informed decisions.
Excel offers a variety of tools for data analysis. Some of the most commonly used ones include:
- PivotTables: This tool allows you to summarize and analyze large amounts of data quickly and easily. It enables you to create interactive tables and charts that can help you identify patterns and trends in your data.
- Data Tables: This tool enables you to perform what-if analysis by calculating multiple versions of a formula based on different inputs.
- Scenario Manager: This tool helps you to create and compare different scenarios to assess the impact of changes on your data.
- Solver: This tool enables you to find the optimal solution for a problem by adjusting values of input cells within defined constraints.
- Conditional Formatting: This tool enables you to apply formatting to cells based on specific criteria, making it easier to identify and analyze patterns in your data.
- Statistical Functions: Excel offers a wide range of statistical functions such as AVERAGE, MAX, MIN, COUNT, STDEV, etc. that can help you analyze your data.
- Charts and Graphs: Excel also provides a variety of charts and graphs that can be used to visually represent your data and identify patterns and trends.
Overall, Excel is a powerful tool for data analysis, and its many features and functions can help you gain valuable insights from your data.
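As a rough analogue of the PivotTable summarization described above, here is a small Pandas sketch using pivot_table. The order data and column names are invented; it only illustrates the concept and is not a substitute for Excel’s own tooling.

```python
# Excel's PivotTable summarization has a close analogue in pandas.pivot_table.
# The order data below is invented for illustration.
import pandas as pd

orders = pd.DataFrame({
    "month":   ["Jan", "Jan", "Feb", "Feb", "Feb"],
    "product": ["A", "B", "A", "A", "B"],
    "revenue": [120, 80, 150, 90, 60],
})

# Rows = month, columns = product, values = summed revenue (like an Excel PivotTable)
pivot = pd.pivot_table(orders, index="month", columns="product",
                       values="revenue", aggfunc="sum", fill_value=0)
print(pivot)
```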
Excel Solver is a powerful tool for data analysis that allows you to find the optimal solution for complex problems. The Solver add-in in Microsoft Excel helps you find an optimal value for a target cell by adjusting the values of input cells, subject to constraints and limits that you specify. This is commonly used in many fields, including finance, engineering, and operations research. The Solver tool works by identifying a target cell that needs to be optimized, such as maximizing profits or minimizing costs. It then uses mathematical algorithms to determine the best values for a set of decision variables, which are inputs that can be changed within certain constraints. These constraints might include limits on resources, such as labor or materials, or other business or technical requirements. Solver can be used for a variety of applications, including financial modeling, production planning, and scheduling. It can also be used for more advanced problems, such as linear programming and non-linear optimization. To use Solver, users must first set up a model within Excel that includes the target cell, decision variables, and constraints. Then, they can use the Solver tool to find the optimal solution based on their objectives and constraints. The Solver tool offers different solving methods, such as Simplex LP and GRG Nonlinear, and can be customized to fit different problem types and sizes. In data analysis, Excel Solver can be used for a variety of purposes, such as: - Optimization: Excel Solver can be used to optimize the output of a model based on a set of input variables. For example, you might use Solver to find the optimal combination of product pricing and marketing spend that maximizes sales. - Regression Analysis: Excel Solver can be used to perform regression analysis to identify the relationship between two or more variables. This is useful in analyzing data to identify trends and make predictions. - Monte Carlo Simulation: Excel Solver can be used to perform Monte Carlo simulations, which involve creating a large number of random scenarios to analyze the potential outcomes of a particular decision or event. - Linear Programming: Excel Solver can be used to solve linear programming problems, which involve maximizing or minimizing a linear objective function subject to constraints. Overall, Excel Solver is a powerful tool for data analysis that can help you make better decisions based on the insights you derive from your data.
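The kind of small linear program that Solver’s Simplex LP method is typically aimed at can also be expressed in Python with SciPy, as in the sketch below. The product-mix coefficients and resource limits are invented for illustration.

```python
# A small linear program of the kind Excel Solver's "Simplex LP" method handles,
# expressed with scipy.optimize.linprog. The product-mix numbers are invented.
from scipy.optimize import linprog

# Maximize profit 20*x1 + 30*x2 subject to resource limits.
# linprog minimizes by convention, so the objective coefficients are negated.
c = [-20, -30]

# Constraints: 2*x1 + 4*x2 <= 100 (labor hours), 3*x1 + 2*x2 <= 90 (materials)
A_ub = [[2, 4], [3, 2]]
b_ub = [100, 90]

result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print(result.x)     # optimal quantities of each product
print(-result.fun)  # maximum profit (sign flipped back)
```

The sign flip mirrors how Solver lets you choose "Max" while the underlying algorithm works with a single canonical form.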
The Boston Tea Party was an organized political protest that took place in Massachusetts during the American Revolution. The following are some facts about the Boston Tea Party:
What Was the Boston Tea Party?
The Boston Tea Party was an act of protest against the Tea Act of 1773, which had been recently passed by the British Government. During the Boston Tea Party, several hundred participants dressed in disguise, rowed in small boats out to three cargo ships anchored in Boston Harbor, climbed aboard and dumped 90,000 pounds of tea into the water.
What Was the Date of the Boston Tea Party?
The Boston Tea Party took place on the night of December 16, 1773.
What Caused the Boston Tea Party?
Due to a series of costly wars, the British government was deeply in debt by the 1760s and hoped to make some much-needed money off the sale of British tea in the colonies. Colonists were drinking 1.2 million pounds of tea a year and it became clear that adding a small tax to this tea could generate a lot of extra money for the government. The British government passed and then repealed a few taxes before it finally passed the Townshend Act of 1767. The Townshend Act placed a tax on all tea sold in the colonies, among other goods. The colonists resented the government’s attempts to make money off them and complained that it was unfair. To appease the colonists, the government repealed the tax on most goods sold in the colonies except for the tea tax. Then in May of 1773, the British government passed the Tea Act, which allowed for tea to be shipped by British companies duty-free to the North American colonies, thus allowing the companies to sell it for a cheaper price. The tax on tea for colonists still remained though. One reason behind the Tea Act was to help save the floundering East India Company, whose tea sales dwindled after the colonists began boycotting British tea. Another reason behind the Tea Act was that, since the tea tax was still in place, selling the colonists discounted British tea could be a subtle way to persuade them to comply with the unpopular tax. The colonists, though, opposed the tax as a matter of principle, not financial cost, so they refused to comply.
Boston Tea Party Summary:
Still angry about the unfair tea tax, the colonists refused to let the Dartmouth, a merchant ship filled with tea, dock in Boston Harbor at Griffin’s Wharf in November of 1773. The colonists sent a message to the Custom House to send the ship away without any payment for the tea. The Collector of Customs refused. Colonists held a meeting at Faneuil Hall on November 29, 1773 but it was moved to the Old South Meeting House to accommodate the large crowd. At the meeting, the colonists all agreed that the tea should be sent back and the tax should not be paid. They assigned 25 men to guard the docks and prevent the ships from unloading while the meeting was adjourned until the next day. The following day, the colonists met again in the Old South Meeting House and listened to a message delivered via John Copley from the tea company. The company suggested storing the tea in a warehouse until further instruction from Parliament. This idea was immediately rejected because it would mean paying the tax on the tea once it landed. The local sheriff, Stephen Greenleaf, then delivered a proclamation from Governor Hutchinson ordering them to stop blocking the ships from landing. The colonists refused to comply with Hutchinson’s demands.
In the first week of December, two more tea ships arrived: the Eleanor and the Beaver. The meetings continued while colonists tried to find a way to prevent the ships from docking. The last meeting was held on December 16 and included over 5,000 people. The colonists sent a message to the governor asking him to allow the ships to return to England without payment. As the owner of one of the ships, Francis Rotch, left the Old South Meeting House to give the governor the message, the colonists waited. When Rotch returned hours later with the governor’s reply, a definite “no”, they realized they had run out of options. Little did they know, the Sons of Liberty, a radical political group based in Boston, had anticipated this response and had a secret plan laid out. Shortly after the governor’s reply was announced, members of the Sons of Liberty, sitting in the audience, stood up and shouted “Hurrah for Griffin’s Wharf!” and “Boston Harbor a Teapot Tonight!” as they began disguising themselves as Native Americans and rushed out of the meeting house towards the harbor. Other people joined the Sons of Liberty along the way and together the mob rowed out to the ships and dumped 90,000 pounds of tea, about 1 million dollars worth in today’s money, into Boston Harbor.
What Happened After the Boston Tea Party?
The Boston Tea Party was a brave move that proved the colonists were not to be pushed around. The British government was furious over the protest, with Governor Thomas Hutchinson calling it “the boldest stroke that had been struck against British rule in America.” Parliament called it “vandalism” and England’s attorney general officially charged a number of patriot leaders, including Samuel Adams and John Hancock, with the crime of high treason and high misdemeanor, even though there is no proof any of them participated in the protest. John Adams also had not participated but was delighted when he saw the tea in Boston harbor the next morning, according to the book American Tempest: How the Boston Tea Party Sparked a Revolution: “John Adams, who had been in court in Plymouth for a week and rode back into Boston the morning after the Tea Party, said he did not know any Tea Party participants. As he rode into town, he saw splintered tea chests and huge clots of tea leaves covering the water as far as his eyes could see. They washed ashore along a fifty-mile stretch of coastline as well as on the offshore islands. ‘This,’ he entered in his diary when he reached his home, ‘is the most magnificent movement of all….There is a dignity, a majesty, a sublimity in this last effort of the Patriots I greatly admire. The people should never rise without doing something to be remembered – something notable. And striking. This destruction of the tea is so bold, so daring, so firm, intrepid and inflexible, and it must have so important consequences, and so lasting, that I cannot but consider it as an epocha [sic] in history.’” John Hancock wrote a letter to his London agent a few days later gleefully reporting that New York and Philadelphia were also refusing to let cargo ships carrying tea land there and declared that, “No circumstance could possibly have taken place more effectively to unite the colonies than this maneuver of the tea.” Yet, not everyone was impressed by the Boston Tea Party, according to the book American Tempest: How the Boston Tea Party Sparked a Revolution: “Not many American leaders in the South rallied to the defense of the Boston Tea Party Patriots.
Far from uniting colonists, the Tea Party had alienated many property owners, who held private property to be sacrosanct and did not tolerate its destruction or violation. George Washington concluded that Bostonians were mad, and like other Virginians and most Britons, he condemned the Boston Tea Party as vandalism and wanton destruction of private property – an unholy disregard for property rights. After the repeal of the Townshend Acts, Virginians saw no reason to persist in boycotting their British countrymen, and they resumed drinking tea…” There were, eventually, legal repercussions for the Boston Tea Party. It took a few months but Parliament soon cracked down on Boston, according to the book American Insurgents, American Patriots: The Revolution of the People: “Several months passed before the colonists learned the extent of their punishment. In a series of statutes known as the Coercive Acts, Parliament – like so many other uncertain imperial powers over the centuries – decided that provocation of this sort justified an overwhelming show of toughness. The punitive legislation closed the port of Boston to all commerce except for coastal trade in basic supplies like firewood, restructured the Massachusetts government in ways that curtailed free speech in town meetings, and filled the colony’s council with Crown appointees determined to restore law and order to the troubled commonwealth. To enforce the new system, the Crown dispatched to Boston an army of occupation under the command of General Thomas Gage.” These “Coercive Acts” which consisted of several acts, including the Quebec Act, the Quartering Act and two additional Intolerable Acts, made life very difficult for Bostonians and Massachusetts residents. Morale began to run low, food was scarce and some colonists began to wonder if paying for the destroyed tea might appease the British government. Fortunately, the other colonies, including Nova Scotia, Georgia and even Virginia, began to send food and supplies to Boston to ease their suffering. In February of 1775, the British Government passed the Conciliatory Resolution which stated that any colony that wanted to contribute its share of the “common defense” to parliament would be exempted from further taxes except for regulation of trade. An attempt at reconciliation was made in 1778 when the British government repealed the tea tax with passage of the Taxation of Colonies Act 1778 but by then it was too late, the colonies were already in the middle of their Revolutionary War with Britain. Boston Tea Party Quotes: “We have been much agitated in consequence of the arrival of tea ships by the East India Company, and after every effort was made to induce the consignees to return it from whence it came and all proving ineffectual, in a very few hours the whole of the tea on board…was thrown into the salt water. The particulars I must refer you to Captain Scott for indeed I am not acquainted with them myself, so as to give detail. No one circumstance could possibly have taken place more effectively to unite the colonies than this maneuver of the tea.” – John Hancock, letter to his London agent, December 21, 1773 “This is the most magnificent movement of all….There is a dignity, a majesty, a sublimity in this last effort of the Patriots I greatly admire. The people should never rise without doing something to be remembered – something notable. And striking. 
This destruction of the tea is so bold, so daring, so firm, intrepid and inflexible, and it must have so important consequences, and so lasting, that I cannot but consider it as an epocha [sic] in history.” – John Adams, diary, December 1773
“We then were ordered by our commander to open the hatches and take out all the chests of tea and throw them overboard, and we immediately proceeded to execute his orders, first cutting and splitting the chests with our tomahawks, so as thoroughly to expose them to the effects of the water. In about three hours from the time we went on board, we had thus broken and thrown overboard every tea chest to be found in the ship, while those in the other ships were disposing of the tea in the same way, at the same time. We were surrounded by British armed ships, but no attempt was made to resist us.” – George Hewes, interview with James Hawkes, 1834
Sources:
“American Tempest: How the Boston Tea Party Sparked a Revolution”; Harlow G. Unger; 2011
“American Insurgents, American Patriots: The Revolution of the People”; T. H. Breen; 2010
Massachusetts Historical Society: The Coercive Acts: http://www.masshist.org/revolution/coercive.php
American National Biography Online: George Robert Twelve Hewes: http://www.anb.org/articles/20/20-01899.html
Eyewitness to History: The Boston Tea Party: http://www.eyewitnesstohistory.com/teaparty.htm
Mass.gov: Boston Tea Party: http://www.mass.gov/?pageID=mg2terminal&L=5&L0=Home&L1=State+Government&L2=About+Massachusetts&L3=Interactive+State+House&L4=Key+Events&sid=massgov2&b=terminalcontent&f=interactive_statehouse_boston_tea_party&csid=massgov2
An exchange rate is a rate at which one currency will be exchanged for another. It affects trade and the movement of money between countries. Exchange rates are impacted by both the domestic currency value and the foreign currency value. The exchange rate between two currencies is commonly determined by the economic activity, market interest rates, gross domestic product, and unemployment rate in each country. It also plays a vital role in a country’s level of trade, which is critical to almost every free market economy in the world. A higher-valued currency makes a country’s imports less expensive and its exports more expensive in foreign markets, while a lower-valued currency makes a country’s imports more expensive and its exports less expensive in foreign markets. Also, a higher exchange rate can be expected to worsen a country’s balance of trade, while a lower exchange rate can be expected to improve it. Numerous factors determine exchange rate movement. Many of these factors are related to the trading relationship between the two countries. Remember, exchange rates are relative, and are expressed as a comparison of the currencies of two countries. The following are some of the principal determinants of the exchange rate between two countries.
Terms of Trade
A ratio comparing export prices to import prices, the terms of trade is related to current accounts and the balance of payments. If the price of a country’s exports rises by a greater rate than that of its imports, its terms of trade have favorably improved. Improving terms of trade show greater demand for the country’s exports. This, in turn, results in rising revenues from exports, which provides increased demand for the country’s currency (and an increase in the currency’s value). If the price of exports rises by a smaller rate than that of its imports, the currency’s value will decrease in relation to its trading partners.
Differentials in Interest Rates
Interest rates, inflation, and exchange rates are all highly correlated. By manipulating interest rates, central banks exert influence over both inflation and exchange rates, and changing interest rates impact inflation and currency values. Higher interest rates offer lenders in an economy a higher return relative to other countries. Therefore, higher interest rates attract foreign capital and cause the exchange rate to rise. The impact of higher interest rates is mitigated, however, if inflation in the country is much higher than in others, or if additional factors serve to drive the currency down. The opposite relationship exists for decreasing interest rates – that is, lower interest rates tend to decrease exchange rates.
Differentials in Inflation
Typically, a country with a consistently lower inflation rate exhibits a rising currency value, as its purchasing power increases relative to other currencies. During the last half of the 20th century, the countries with low inflation included Japan, Germany, and Switzerland, while the U.S. and Canada achieved low inflation only later. Those countries with higher inflation typically see depreciation in their currency relative to the currencies of their trading partners. This is also usually accompanied by higher interest rates. While most exchange rates are floating and will rise or fall based on supply and demand in the market, some exchange rates are pegged or fixed to the value of a specific country’s currency.
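As a small worked example of the terms-of-trade ratio described earlier in this section, here is a short calculation using invented export and import price indices (the passage itself gives no figures).

```python
# A worked example of the terms-of-trade ratio described above.
# The export and import price indices below are invented for illustration.
export_price_index = 110.0  # export prices up 10% from the base year
import_price_index = 104.0  # import prices up 4% from the base year

# Terms of trade are commonly expressed as (export prices / import prices) * 100;
# a value above 100 indicates the terms of trade have improved.
terms_of_trade = (export_price_index / import_price_index) * 100
print(round(terms_of_trade, 1))  # prints 105.8
```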
Exchange rate changes affect businesses and the cost of supplies and demand for their products in the international marketplace.
In historical contexts, New Imperialism characterizes a period of colonial expansion by European powers, the United States, and Japan during the late 19th and early 20th centuries. The period featured an unprecedented pursuit of overseas territorial acquisitions. At the time, states focused on building their empires with new technological advances and developments, expanding their territory through conquest, and exploiting the resources of the subjugated countries. During the era of New Imperialism, the Western powers (and Japan) individually conquered almost all of Africa and parts of Asia. The new wave of imperialism reflected ongoing rivalries among the great powers, the economic desire for new resources and markets, and a "civilizing mission" ethos. Many of the colonies established during this era gained independence during the era of decolonization that followed World War II. The qualifier "new" is used to differentiate modern imperialism from earlier imperial activity, such as the so-called first wave of European colonization between 1402 and 1815. In the first wave of colonization, European powers conquered and colonized the Americas and Siberia; they then later established more outposts in Africa and Asia. The American Revolution (1775–83) and the collapse of the Spanish Empire in Latin America during the 1820s ended the first era of European imperialism. Especially in Great Britain, these revolutions helped show the deficiencies of mercantilism, the doctrine of economic competition for finite wealth which had supported earlier imperial expansion. In 1846, the Corn Laws were repealed and manufacturers gained, as the regulations enforced by the Corn Laws had slowed their businesses. With the repeal in place, the manufacturers were then able to trade more freely. Thus, Britain began to adopt the concept of free trade. During this period, between the 1815 Congress of Vienna after the defeat of Napoleonic France and the end of the Franco-Prussian War in 1871, Britain reaped the benefits of being the world's sole modern, industrial power. As the "workshop of the world", Britain could produce finished goods so efficiently that they could usually undersell comparable, locally manufactured goods in foreign markets, even supplying a large share of the manufactured goods consumed by such nations as the German states, France, Belgium, and the United States. The erosion of British hegemony after the Franco-Prussian War, in which a coalition of German states led by Prussia defeated France, was occasioned by changes in the European and world economies and in the continental balance of power following the breakdown of the Concert of Europe, established by the Congress of Vienna. The establishment of nation-states in Germany and Italy resolved territorial issues that had kept potential rivals embroiled in internal affairs at the heart of Europe, to Britain's advantage. The years from 1871 to 1914 would be marked by an extremely unstable peace. France’s determination to recover Alsace-Lorraine, annexed by Germany as a result of the Franco-Prussian War, and Germany’s mounting imperialist ambitions would keep the two nations constantly poised for conflict. This competition was sharpened by the Long Depression of 1873–1896, a prolonged period of price deflation punctuated by severe business downturns, which put pressure on governments to promote home industry, leading to the widespread abandonment of free trade among Europe's powers (in Germany from 1879 and in France from 1881).
The Berlin Conference of 1884–1885 sought to destroy the competition between the powers by defining "effective occupation" as the criterion for international recognition of a territory claim, specifically in Africa. The imposition of direct rule in terms of "effective occupation" necessitated routine recourse to armed force against indigenous states and peoples. Uprisings against imperial rule were put down ruthlessly, most spectacularly in the Herero Wars in German South-West Africa from 1904 to 1907 and the Maji Maji Rebellion in German East Africa from 1905 to 1907. One of the goals of the conference was to reach agreements over trade, navigation, and boundaries of Central Africa. However, of all of the 15 nations in attendance of the Berlin Conference, none of the countries represented were African. The main dominating powers of the conference were France, Germany, Great Britain and Portugal. They remapped Africa without considering the cultural and linguistic borders that were already established. At the end of the conference, Africa was divided into 50 different colonies. The attendants established who was in control of each of these newly divided colonies. They also planned, noncommittally, to end the slave trade in Africa. In Britain, the age of new imperialism marked a time for significant economic changes. Because the country was the first to industrialize, Britain was technologically ahead of many other countries throughout the majority of the nineteenth century. By the end of the nineteenth century, however, other countries, chiefly Germany and the United States, began to challenge Britain's technological and economic power. After several decades of monopoly, the country was battling to maintain a dominant economic position while other powers became more involved in international markets. In 1870, Britain contained 31.8% of the world's manufacturing capacity while the United States contained 23.3% and Germany contained 13.2%. By 1910, Britain's manufacturing capacity had dropped to 14.7%, while that of the United States had risen to 35.3% and that of Germany to 15.9%. As countries like Germany and America became more economically successful, they began to become more involved with imperialism, resulting in the British struggling to maintain the volume of British trade and investment overseas. Britain further faced strained international relations with three expansionist powers (Japan, Germany, and Italy) during the early twentieth century. Before 1939, these three powers never directly threatened Britain itself, but the indirect dangers to the Empire were clear. By the 1930s, Britain was worried that Japan would threaten its holdings in the Far East as well as territories in India, Australia and New Zealand. Italy held an interest in North Africa, which threatened British Egypt, and German dominance of the European continent held some danger for Britain's security. Britain worried that the expansionist powers would cause the breakdown of international stability; as such, British foreign policy attempted to protect the stability in a rapidly changing world. With its stability and holdings threatened, Britain decided to adopt a policy of concession rather than resistance, a policy that became known as appeasement. In Britain, the era of new imperialism affected public attitudes toward the idea of imperialism itself. Most of the public believed that if imperialism was going to exist, it was best if Britain was the driving force behind it. 
The same people further thought that British imperialism was a force for good in the world. In 1940, the Fabian Colonial Research Bureau argued that Africa could be developed both economically and socially, but until this development could happen, Africa was best off remaining with the British Empire. Rudyard Kipling's 1891 poem, "The English Flag," contains the stanza:
Winds of the World, give answer! They are whimpering to and fro--
And what should they know of England who only England know?--
The poor little street-bred people that vapour and fume and brag,
They are lifting their heads in the stillness to yelp at the English Flag!
These lines show Kipling's belief that the British who actively took part in imperialism knew more about British national identity than the ones whose entire lives were spent solely in the imperial metropolis. While there were pockets of anti-imperialist opposition in Britain in the late nineteenth and early twentieth centuries, resistance to imperialism was nearly nonexistent in the country as a whole. In many ways, this new form of imperialism formed a part of the British identity until the end of the era of new imperialism around the Second World War. New Imperialism gave rise to new social views of colonialism. Rudyard Kipling, for instance, urged the United States to "Take up the White Man's burden" of bringing European civilization to the other peoples of the world, regardless of whether these "other peoples" wanted this civilization or not. This part of The White Man's Burden exemplifies Britain's perceived attitude towards the colonization of other countries:
Take up the White Man's burden—
In patience to abide,
To veil the threat of terror
And check the show of pride;
By open speech and simple,
An hundred times made plain
To seek another's profit,
And work another's gain.
While Social Darwinism became popular throughout Western Europe and the United States, the paternalistic French and Portuguese "civilizing mission" (in French: mission civilisatrice; in Portuguese: Missão civilizadora) appealed to many European statesmen both in and outside France. Despite apparent benevolence existing in the notion of the "White Man's Burden", the unintended consequences of imperialism might have greatly outweighed the potential benefits. Governments became increasingly paternalistic at home and neglected the individual liberties of their citizens. Military spending expanded, usually leading to an "imperial overreach", and imperialism created clients of ruling elites abroad that were brutal and corrupt, consolidating power through imperial rents and impeding social change and economic development that ran against their ambitions. Furthermore, "nation building" oftentimes created cultural sentiments of racism and xenophobia. Many of Europe's major elites also found advantages in formal, overseas expansion: large financial and industrial monopolies wanted imperial support to protect their overseas investments against competition and domestic political tensions abroad, bureaucrats sought government offices, military officers desired promotion, and the traditional but waning landed gentries sought increased profits for their investments, formal titles, and high office. Such special interests have perpetuated empire building throughout history.
Observing the rise of trade unionism, socialism, and other protest movements during an era of mass society both in Europe and later in North America, elites sought to use imperial jingoism to co-opt the support of part of the industrial working class. The new mass media promoted jingoism in the Spanish–American War (1898), the Second Boer War (1899–1902), and the Boxer Rebellion (1900). The left-wing German historian Hans-Ulrich Wehler has defined social imperialism as "the diversions outwards of internal tensions and forces of change in order to preserve the social and political status quo", and as a "defensive ideology" to counter the "disruptive effects of industrialization on the social and economic structure of Germany". In Wehler's opinion, social imperialism was a device that allowed the German government to distract public attention from domestic problems and preserve the existing social and political order. The dominant elites used social imperialism as the glue to hold together a fractured society and to maintain popular support for the social status quo. According to Wehler, German colonial policy in the 1880s was the first example of social imperialism in action, and was followed up by the 1897 Tirpitz Plan for expanding the German Navy. In this point of view, groups such as the Colonial Society and the Navy League are seen as instruments for the government to mobilize public support. The demands for annexing most of Europe and Africa in World War I are seen by Wehler as the pinnacle of social imperialism. The notion of rule over foreign lands commanded widespread acceptance among metropolitan populations, even among those who associated imperial colonization with oppression and exploitation. For example, the 1904 Congress of the Socialist International concluded that the colonial peoples should be taken in hand by future European socialist governments and led by them into eventual independence. In the 17th century, the British businessmen arrived in India and, after taking a small portion of land, formed the East India Company. The British East India Company annexed most of the subcontinent of India, starting with Bengal in 1757 and ending with Punjab in 1849. Many princely states remained independent. This was aided by a power vacuum formed by the collapse of the Mughal Empire in India and the death of Mughal Emperor Aurangzeb and increased British forces in India because of colonial conflicts with France. The invention of clipper ships in the early 1800s cut the trip to India from Europe in half from 6 months to 3 months; the British also laid cables on the floor of the ocean allowing telegrams to be sent from India and China. In 1818, the British controlled most of the Indian subcontinent and began imposing their ideas and ways on its residents, including different succession laws that allowed the British to take over a state with no successor and gain its land and armies, new taxes, and monopolistic control of industry. The British also collaborated with Indian officials to increase their influence in the region. Some Hindu and Muslim Sepoys rebelled in 1857, resulting in the Indian Mutiny. After this revolt was suppressed by the British, India came under the direct control of the British crown. After the British had gained more control over India, they began changing around the financial state of India. 
Previously, Europe had to pay for Indian textiles and spices in bullion; with political control, Britain directed farmers to grow cash crops for the company for exports to Europe while India became a market for textiles from Britain. In addition, the British collected huge revenues from land rent and taxes on its acquired monopoly on salt production. Indian weavers were replaced by new spinning and weaving machines and Indian food crops were replaced by cash crops like cotton and tea. The British also began connecting Indian cities by railroad and telegraph to make travel and communication easier as well as building an irrigation system for increasing agricultural production. When Western education was introduced in India, Indians were quite influenced by it, but the inequalities between the British ideals of governance and their treatment of Indians became clear. In response to this discriminatory treatment, a group of educated Indians established the Indian National Congress, demanding equal treatment and self-governance. John Robert Seeley, a Cambridge Professor of History, said, "Our acquisition of India was made blindly. Nothing great that has ever been done by Englishmen was done so unintentionally or accidentally as the conquest of India". According to him, the political control of India was not a conquest in the usual sense because it was not an act of a state. The new administrative arrangement, crowned with Queen Victoria's proclamation as Empress of India in 1876, effectively replaced the rule of a monopolistic enterprise with that of a trained civil service headed by graduates of Britain's top universities. The administration retained and increased the monopolies held by the company. The India Salt Act of 1882 included regulations enforcing a government monopoly on the collection and manufacture of salt; in 1923 a bill was passed doubling the salt tax. After taking control of much of India, the British expanded further into Burma, Malaya, Singapore and Borneo, with these colonies becoming further sources of trade and raw materials for British goods. Formal colonisation of the Dutch East Indies (now Indonesia) commenced at the dawn of the 19th century when the Dutch state took possession of all Dutch East India Company (VOC) assets. Before that time the VOC merchants were in principle just another trading power among many, establishing trading posts and settlements (colonies) in strategic places around the archipelago. The Dutch gradually extended their sovereignty over most of the islands in the East Indies. Dutch expansion paused for several years during an interregnum of British rule between 1806 and 1816, when the Dutch Republic was occupied by the French forces of Napoleon. The Dutch government-in-exile in England ceded rule of all its colonies to Great Britain. However, Jan Willem Janssens, the Governor of the Dutch East Indies at the time, fought the British before surrendering the colony; he was eventually replaced by Stamford Raffles. The Dutch East Indies became the prize possession of the Dutch Empire. It was not the typical settler colony founded through massive emigration from the mother countries (such as the USA or Australia) and hardly involved displacement of the indigenous islanders, with a notable and dramatic exception in the island of Banda during the VOC era. Neither was it a plantation colony built on the import of slaves (such as Haiti or Jamaica) or a pure trade post colony (such as Singapore or Macau).
It was more of an expansion of the existing chain of VOC trading posts. Instead of mass emigration from the homeland, the sizeable indigenous populations were controlled through effective political manipulation supported by military force. Servitude of the indigenous masses was enabled through a structure of indirect governance, keeping existing indigenous rulers in place. This strategy was already established by the VOC, which independently acted as a semi-sovereign state within the Dutch state, using the Indo Eurasian population as an intermediary buffer. "The mode of government now adopted in Java is to retain the whole series of native rulers, from the village chief up to princes, who, under the name of Regents, are the heads of districts about the size of a small English county. With each Regent is placed a Dutch Resident, or Assistant Resident, who is considered to be his "elder brother," and whose "orders" take the form of "recommendations," which are, however, implicitly obeyed. Along with each Assistant Resident is a Controller, a kind of inspector of all the lower native rulers, who periodically visits every village in the district, examines the proceedings of the native courts, hears complaints against the head-men or other native chiefs, and superintends the Government plantations." France annexed all of Vietnam and Cambodia in the 1880s; in the following decade, France completed its Indochinese empire with the annexation of Laos, leaving the kingdom of Siam (now Thailand) with an uneasy independence as a neutral buffer between British and French-ruled lands. In 1839, China found itself fighting the First Opium War with Great Britain after the Governor-General of Hunan and Hubei, Lin Zexu, seized the illegally traded opium. China was defeated, and in 1842 agreed to the provisions of the Treaty of Nanking. Hong Kong Island was ceded to Britain, and certain ports, including Shanghai and Guangzhou, were opened to British trade and residence. In 1856, the Second Opium War broke out; the Chinese were again defeated and forced to the terms of the 1858 Treaty of Tientsin and the 1860 Convention of Peking. The treaty opened new ports to trade and allowed foreigners to travel in the interior. Missionaries gained the right to propagate Christianity, another means of Western penetration. The United States and Russia obtained the same prerogatives in separate treaties. Towards the end of the 19th century, China appeared on the way to territorial dismemberment and economic vassalage, the fate of India's rulers that had played out much earlier. Several provisions of these treaties caused long-standing bitterness and humiliation among the Chinese: extraterritoriality (meaning that in a dispute with a Chinese person, a Westerner had the right to be tried in a court under the laws of his own country), customs regulation, and the right to station foreign warships in Chinese waters. In 1904, the British invaded Lhasa, a pre-emptive strike against Russian intrigues and secret meetings between the 13th Dalai Lama's envoy and Tsar Nicholas II. The Dalai Lama fled into exile to China and Mongolia. The British were greatly concerned at the prospect of a Russian invasion of the Crown colony of India, though Russia – badly defeated by Japan in the Russo-Japanese War and weakened by internal rebellion – could not realistically afford a military conflict against Britain. China under the Qing dynasty, however, was another matter. 
Natural disasters, famine and internal rebellions had enfeebled China in the late Qing. In the late 19th century, Japan and the Great Powers easily carved out trade and territorial concessions. These were humiliating submissions for the once-powerful China. Still, the central lesson of the war with Japan was not lost on the Russian General Staff: an Asian country using Western technology and industrial production methods could defeat a great European power. Jane E. Elliott criticized the allegation that China refused to modernize or was unable to defeat Western armies as simplistic, noting that China embarked on a massive military modernization in the late 1800s after several defeats, buying weapons from Western countries and manufacturing their own at arsenals, such as the Hanyang Arsenal during the Boxer Rebellion. In addition, Elliott questioned the claim that Chinese society was traumatized by the Western victories, as many Chinese peasants (90% of the population at that time) living outside the concessions continued about their daily lives, uninterrupted and without any feeling of "humiliation". The British observer Demetrius Charles de Kavanagh Boulger suggested a British-Chinese alliance to check Russian expansion in Central Asia. During the Ili crisis, when Qing China threatened to go to war against Russia over the Russian occupation of Ili, the British officer Charles George Gordon was sent to China by Britain to advise China on military options against Russia should a potential war break out between China and Russia. The Russians observed the Chinese building up their arsenal of modern weapons during the Ili crisis; the Chinese bought thousands of rifles from Germany. In 1880, massive amounts of military equipment and rifles were shipped via boats to China from Antwerp as China purchased torpedoes, artillery, and 260,260 modern rifles from Europe. The Russian military observer D. V. Putiatia visited China in 1888 and found that in Northeastern China (Manchuria) along the Chinese-Russian border, the Chinese soldiers were potentially able to become adept at "European tactics" under certain circumstances, and the Chinese soldiers were armed with modern weapons like Krupp artillery, Winchester carbines, and Mauser rifles. Compared to Russian-controlled areas, more benefits were given to the Muslim Kirghiz in the Chinese-controlled areas. Russian settlers fought against the Muslim nomadic Kirghiz, which led the Russians to believe that the Kirghiz would be a liability in any conflict against China. The Muslim Kirghiz were sure that in an upcoming war China would defeat Russia. The Qing dynasty forced Russia to hand over disputed territory in Ili in the Treaty of Saint Petersburg (1881), in what was widely seen by the West as a diplomatic victory for the Qing. Russia acknowledged that Qing China potentially posed a serious military threat. Mass media in the West during this era portrayed China as a rising military power due to its modernization programs and as a major threat to the Western world, invoking fears that China would successfully conquer Western colonies like Australia. Russian sinologists, the Russian media, the threat of internal rebellion, the pariah status inflicted by the Congress of Berlin, and the negative state of the Russian economy all led Russia to concede and negotiate with China in St Petersburg, and return most of Ili to China.
Historians have judged the Qing dynasty's vulnerability and weakness to foreign imperialism in the 19th century to be based mainly on its maritime naval weakness while it achieved military success against westerners on land, the historian Edward L. Dreyer said that "China’s nineteenth-century humiliations were strongly related to her weakness and failure at sea. At the start of the Opium War, China had no unified navy and no sense of how vulnerable she was to attack from the sea; British forces sailed and steamed wherever they wanted to go. ... In the Arrow War (1856–60), the Chinese had no way to prevent the Anglo-French expedition of 1860 from sailing into the Gulf of Zhili and landing as near as possible to Beijing. Meanwhile, new but not exactly modern Chinese armies suppressed the midcentury rebellions, bluffed Russia into a peaceful settlement of disputed frontiers in Central Asia, and defeated the French forces on land in the Sino-French War (1884–85). But the defeat of the fleet, and the resulting threat to steamship traffic to Taiwan, forced China to conclude peace on unfavorable terms." The British and Russian consuls schemed and plotted against each other at Kashgar. In 1906, Tsar Nicholas II sent a secret agent to China to collect intelligence on the reform and modernization of the Qing dynasty. The task was given to Carl Gustaf Emil Mannerheim, at the time a colonel in the Russian army, who travelled to China with French Sinologist Paul Pelliot. Mannerheim was disguised as an ethnographic collector, using a Finnish passport. Finland was, at the time, a Grand Duchy. For two years, Mannerheim proceeded through Xinjiang, Gansu, Shaanxi, Henan, Shanxi and Inner Mongolia to Beijing. At the sacred Buddhist mountain of Wutai Shan he even met the 13th Dalai Lama. However, while Mannerheim was in China in 1907, Russia and Britain brokered the Anglo-Russian Agreement, ending the classical period of the Great Game. The correspondent Douglas Story observed Chinese troops in 1907 and praised their abilities and military skill. The rise of Japan as an imperial power after the Meiji Restoration led to further subjugation of China. In a dispute over regional suzerainty, war broke out between China and Japan, resulting in another humiliating defeat for the Chinese. By the Treaty of Shimonoseki in 1895, China was forced to recognize Korea's exit from the Imperial Chinese tributary system, leading to the proclamation of the Korean Empire, and the island of Taiwan was ceded to Japan. In 1897, taking advantage of the murder of two missionaries, Germany demanded and was given a set of mining and railroad rights around Jiaozhou Bay in Shandong province. In 1898, Russia obtained access to Dairen and Port Arthur and the right to build a railroad across Manchuria, thereby achieving complete domination over a large portion of northeast China. The United Kingdom, France, and Japan also received a number of concessions later that year. The erosion of Chinese sovereignty contributed to a spectacular anti-foreign outbreak in June 1900, when the "Boxers" (properly the society of the "righteous and harmonious fists") attacked foreign legations in Beijing. This Boxer Rebellion provoked a rare display of unity among the colonial powers, who formed the Eight-Nation Alliance. Troops landed at Tianjin and marched on the capital, which they took on 14 August; the foreign soldiers then looted and occupied Beijing for several months. 
German forces were particularly severe in exacting revenge for the killing of their ambassador, while Russia tightened its hold on Manchuria in the northeast until its crushing defeat by Japan in the Russo-Japanese War of 1904–1905. Although extraterritorial jurisdiction was abandoned by the United Kingdom and the United States in 1943, foreign political control of parts of China finally ended only with the incorporation of Hong Kong and the small Portuguese territory of Macau into the People's Republic of China in 1997 and 1999 respectively. Mainland Chinese historians refer to this period as the century of humiliation.

"The Great Game" (also known in Russia as the Tournament of Shadows; Russian: Турниры теней, Turniry Teney) was the strategic, economic, and political rivalry between the British Empire and the Russian Empire for supremacy in Central Asia, at the expense of Afghanistan, Persia, and the Central Asian khanates and emirates. The classic Great Game period is generally regarded as running approximately from the Russo-Persian Treaty of 1813 to the Anglo-Russian Convention of 1907, during which states such as the Emirate of Bukhara fell. A less intensive phase followed the Bolshevik Revolution of 1917, causing some trouble with Persia and Afghanistan until the mid-1920s. In the post-Second World War, post-colonial period, the term has continued in informal use to describe the geopolitical machinations of the Great Powers and regional powers as they vie for power and influence in the region, especially in Afghanistan and Iran/Persia.

Between 1850 and 1914, Britain brought nearly 30% of Africa's population under its control, compared to 15% for France, 9% for Germany, 7% for Belgium, and 1% for Italy; Nigeria alone contributed 15 million subjects to Britain, more than the whole of French West Africa or the entire German colonial empire. The only regions not under European control in 1914 were Liberia and Ethiopia. Britain's formal occupation of Egypt in 1882, triggered by concern over the Suez Canal, contributed to a preoccupation with securing control of the Nile, leading to the conquest of neighboring Sudan in 1896–98, which in turn led to confrontation with a French military expedition at Fashoda in September 1898. In 1899, Britain set out to complete its takeover of the future South Africa, begun in 1814 with the annexation of the Cape Colony, by invading the gold-rich Afrikaner republics of the Transvaal and the neighboring Orange Free State. The chartered British South Africa Company had already seized the land to the north, renamed Rhodesia after its head, the Cape tycoon Cecil Rhodes. British gains in southern and East Africa prompted Rhodes and Alfred Milner, Britain's High Commissioner in South Africa, to urge a "Cape to Cairo" empire linked by rail: the strategically important Suez Canal would be firmly connected to the mineral-rich south, although Belgian control of the Congo Free State and German control of German East Africa prevented such an outcome until the end of World War I, when Great Britain acquired the latter territory.

Britain's quest for southern Africa and its diamonds led to social tensions and divisions that lasted for years. To work their prosperous mines, British businessmen hired both white and black South Africans. But when it came to jobs, white South Africans received the higher-paid and less dangerous ones, leaving black South Africans to risk their lives in the mines for limited pay.
This separation of the two groups of South Africans, whites and blacks, was the beginning of a segregation between them that lasted until 1990. Paradoxically, the United Kingdom, a staunch advocate of free trade, emerged in 1914 with not only the largest overseas empire, thanks to its long-standing presence in India, but also the greatest gains in the conquest of Africa, reflecting its advantageous position at the outset of the scramble.

Until 1876, Belgium had no colonial presence in Africa. It was then that its king, Leopold II, created the International African Society. Operating under the pretense of an international scientific and philanthropic association, it was in fact a private holding company owned by Leopold. Henry Morton Stanley was employed to explore and colonize the Congo River basin of equatorial Africa in order to capitalize on its plentiful resources, such as ivory, rubber, diamonds, and metals. Up to this point, Africa had been known as "the Dark Continent" because of the difficulties Europeans had in exploring it. Over the next few years, Stanley overpowered and made treaties with over 450 native tribes, acquiring for Leopold over 2,340,000 square kilometres (905,000 sq mi) of land, nearly 67 times the size of Belgium. Neither the Belgian government nor the Belgian people had any interest in imperialism at the time, and the land came to be personally owned by King Leopold II. At the Berlin Conference in 1884 he was allowed to name this land the Congo Free State. The other European countries at the conference permitted this on the conditions that he suppress the East African slave trade, promote humanitarian policies, guarantee free trade, and encourage missions to Christianize the people of the Congo.

However, Leopold II's primary aim was to make a large profit from the natural resources, particularly ivory and rubber. To that end he issued several brutal decrees that have been characterized as genocide. He forced the natives to supply him with rubber and ivory without any payment in return. Their wives and children were held hostage until the workers returned with enough rubber or ivory to fill their quota; if they could not, their families were killed. When villages refused, they were burned down, the children of the village murdered, and the men's hands cut off. These policies led to uprisings, but they were feeble compared to European military and technological might and were consequently crushed. The forced labor was also resisted in other ways: by fleeing into the forests to seek refuge or by setting the rubber forests on fire to prevent the Europeans from harvesting the rubber. No population figures exist from before or after the period, but it is estimated that as many as 10 million people died from violence, famine, and disease, out of a total population that some sources place at 16 million.

King Leopold II profited handsomely from the enterprise, with a profit margin of roughly 700 percent on the rubber he took from the Congo and exported. He used propaganda to keep the other European nations at bay, for he had broken almost every part of the agreement made at the Berlin Conference. For example, he had some Congolese pygmies sing and dance at the 1897 World's Fair in Belgium to show how he was supposedly civilizing and educating the natives of the Congo. Under significant international pressure, the Belgian government annexed the territory in 1908 and renamed it the Belgian Congo, removing it from the personal control of the king.
Of all the colonies conquered during the wave of New Imperialism, the Congo Free State was considered to have suffered the worst human rights abuses.

Chile's interest in expanding into the islands of the Pacific Ocean dates to the presidency of José Joaquín Prieto (1831–1841) and the ideology of Diego Portales, who considered Chile's expansion into Polynesia a natural consequence of its maritime destiny. Nonetheless, the first stage of the country's expansion into the Pacific began only a decade later, in 1851, when, in response to an American incursion into the Juan Fernández Islands, Chile's government formally organized the islands into a subdelegation of Valparaíso. That same year, Chile's economic interest in the Pacific was renewed after its merchant fleet briefly succeeded in creating an agricultural goods exchange market connecting the Californian port of San Francisco with Australia. By 1861, Chile had established a lucrative enterprise across the Pacific: its national currency circulated abundantly throughout Polynesia, its merchants traded in the markets of Tahiti, New Zealand, Tasmania, and Shanghai, negotiations were conducted with the Spanish Philippines, and altercations reportedly occurred between Chilean and American whalers in the Sea of Japan. This period ended with the destruction of the Chilean merchant fleet by Spanish forces in 1866, during the Chincha Islands War.

Chile's Polynesian aspirations were reawakened in the aftermath of the country's decisive victory over Peru in the War of the Pacific, which left the Chilean fleet the dominant maritime force on the Pacific coast of the Americas. Valparaíso had also become the most important port on the Pacific coast of South America, giving Chilean merchants the capacity to find markets in the Pacific for the new mineral wealth acquired from the Atacama. During this period, the Chilean intellectual and politician Benjamín Vicuña Mackenna (a senator in the National Congress from 1876 to 1885) was an influential voice in favor of Chilean expansion into the Pacific; he considered that Spain's discoveries in the Pacific had been stolen by the British and envisioned that Chile's duty was to create an empire in the Pacific reaching Asia. It was in the context of this imperialist fervor that, in 1886, Captain Policarpo Toro of the Chilean Navy proposed to his superiors the annexation of Easter Island, a proposal supported by President José Manuel Balmaceda because of the island's apparent strategic location and economic value. After Toro transferred the rights to the island's sheep-ranching operations from Tahiti-based businesses to the Chilean-based Williamson-Balfour Company in 1887, the annexation of Easter Island culminated in the signing of the "Agreement of Wills" between Rapa Nui chieftains and Toro, acting in the name of the Chilean government, in 1888. By occupying Easter Island, Chile joined the ranks of the imperial nations. By 1900, nearly all of Oceania's islands were under the control of Britain, France, the United States, Germany, Japan, Mexico, Ecuador, and Chile.

The extension of European control over Africa and Asia added a further dimension to the rivalry and mutual suspicion that characterized international diplomacy in the decades preceding World War I. France's seizure of Tunisia in 1881 initiated fifteen years of tension with Italy, which had hoped to take the country itself and which retaliated by allying with Germany and waging a decade-long tariff war with France.
Britain's takeover of Egypt a year later caused a marked cooling of her relations with France. The most striking conflicts of the era were the Spanish–American War of 1898 and the Russo-Japanese War of 1904–05, each signaling the advent of a new imperial great power: the United States and Japan, respectively. The Fashoda incident of 1898 represented the worst Anglo-French crisis in decades, but France's buckling in the face of British demands foreshadowed improved relations as the two countries set about resolving their overseas claims. British policy in South Africa and German actions in the Far East contributed to dramatic policy shifts which, in the 1900s, aligned hitherto isolationist Britain first with Japan as an ally, and then with France and Russia in the looser Triple Entente. German efforts to break the Entente by challenging French hegemony in Morocco resulted in the Tangier Crisis of 1905 and the Agadir Crisis of 1911, adding to tension and anti-German sentiment in the years preceding World War I. In the Pacific, conflicts between Germany, the United States, and the United Kingdom contributed to the First and Second Samoan Civil Wars.

One of the biggest motivations behind New Imperialism was the idea of humanitarianism and of "civilizing" the supposedly "lower" peoples of Africa and other undeveloped places. For many Christian missionaries this was a religious motive, an attempt to save the souls of "uncivilized" people, grounded in the belief that Christians and the people of the United Kingdom were morally superior. Most of the missionaries who supported imperialism did so because they felt the only true religion was their own; Roman Catholic missionaries, for instance, opposed British missionaries because the British missionaries were Protestant. At times, however, imperialism did benefit the people of the colonies, as missionaries helped to end some of the slavery in certain areas. Europeans therefore claimed they were present only to protect the weaker tribal groups they had conquered. The missionaries and other leaders urged an end to practices such as cannibalism, child marriage, and other "savage things". This humanitarian ideal was described in poems such as "The White Man's Burden" and other literature. The humanitarianism was often sincere but misguided: even where imperialists genuinely meant well, their choices were not always best for the areas they were conquering or for the natives living there.

The Dutch Ethical Policy (Dutch: Ethische Politiek) was the dominant reformist and liberal strand of colonial policy in the Dutch East Indies during the 20th century. In 1901, the Dutch Queen Wilhelmina announced that the Netherlands accepted an ethical responsibility for the welfare of its colonial subjects. The announcement was a sharp contrast with the former official doctrine that Indonesia was mainly a wingewest (a region for making profit). It marked the start of modern development policy, implemented and practised by Alexander Willem Frederik Idenburg; whereas other colonial powers usually spoke of a civilizing mission, which mainly involved spreading their culture to colonized peoples, the Dutch Ethical Policy emphasised improvement in material living conditions.
The policy suffered, however, from serious underfunding, inflated expectations, and a lack of acceptance in the Dutch colonial establishment, and it had largely ceased to exist by the onset of the Great Depression in 1929. It did, however, create an educated indigenous elite able to articulate and eventually establish independence from the Netherlands.

The "accumulation theory", adopted by Karl Kautsky and John A. Hobson and popularized by Vladimir Lenin, centered on the accumulation of surplus capital during and after the Industrial Revolution: restricted opportunities at home, the argument goes, drove financial interests to seek more profitable investments in less-developed lands with lower labor costs, unexploited raw materials, and little competition. Hobson's analysis fails to explain colonial expansion on the part of less industrialized nations with little surplus capital, such as Italy, or the great powers of the next century, the United States and Russia, which were in fact net borrowers of foreign capital. Moreover, the military and bureaucratic costs of occupation frequently exceeded the financial returns. In Africa (exclusive of what would become the Union of South Africa in 1909), the amount of capital investment by Europeans was relatively small before and after the 1880s, and the companies involved in tropical African commerce exerted limited political influence.

The "world-systems theory" approach of Immanuel Wallerstein sees imperialism as part of a general, gradual extension of capital investment from the "core" of the industrial countries to a less developed "periphery". Protectionism and formal empire were the major tools of "semi-peripheral", newly industrialized states, such as Germany, seeking to usurp Britain's position at the "core" of the global capitalist system. Echoing Wallerstein's global perspective to an extent, the imperial historian Bernard Porter views Britain's adoption of formal imperialism as a symptom and an effect of her relative decline in the world, not of her strength: "Stuck with outmoded physical plants and outmoded forms of business organization, [Britain] now felt the less favorable effects of being the first to modernize."

The concept of the "new imperialism" was espoused by such diverse writers as John A. Hobson, V. I. Lenin, Leonard Woolf, Parker T. Moon, Robert L. Schuyler, and William L. Langer. These students of imperialism, whatever their purpose in writing, all saw a fundamental difference between the imperialist impulses of the mid- and late-Victorian eras. Langer perhaps best summarized the importance of distinguishing late-nineteenth-century imperialism when he wrote in 1935: "[...] this period will stand out as the crucial epoch during which the nations of the western world extended their political, economic and cultural influence over Africa and over large parts of Asia ... in the larger sense the story is more than the story of rivalry between European imperialisms; it is the story of European aggression and advance in the non-European parts of the world."

Commentators have identified three broad waves of European colonial and imperial expansion, connected with specific territories. The first targeted the Americas, North and South, as well as the Caribbean; the second focused on Asia; and the third wave extended European control into Africa.