Chapter 7: Direct Variation (Advanced Math)

Section 7.1: Direct Variation, Slope, and Tangent
When two variable quantities have a constant ratio, their relationship is called direct variation. The constant ratio is called the variation constant. Page 359 in your textbook: for the roof, the height of a post depends on its horizontal distance from the eaves. Although the height increases as the distance increases, the ratio of these two quantities is constant. Ratio of quantities = variation constant: Height/Distance = 0.75, so Height = 0.75(Distance). You can use this equation to find the height of a post needed to support the roof at another point.

Sample Question #1: Suppose the carpenter of the roof shown on page 359 decides that a post should be 3 feet from the eaves. What should the height of the post be? H = 0.75(D) = 0.75(3) = 2.25. The post should be 2.25 feet long.

Try this one on your own: Suppose the carpenter of the roof shown on page 359 wants only three support posts between the eaves and the post at the peak. Assuming that the posts are evenly spaced, how long should the shortest support post be? The distance from the eaves to the first post is 10/4, or 2.5 feet. H = 0.75(d) = 0.75(2.5) = 1.875. The shortest post should be 1.875 feet long.

Slope: The slope of a line is the ratio of rise to run for any two points on the line. Rise = vertical change between two points; run = horizontal change between two points; slope = rise/run.

Sample Question #2: Find the slope of the roof shown on page 359. Use the eaves and the peak as the two points. Step one: draw a sketch. Step two: find the slope. The roof rises 7.5 feet across a distance of 10 feet, so slope = rise/run = 7.5/10 = 0.75.

Try this one on your own: Suppose the post from the horizontal beam to the peak of the roof is 9 feet high and the length of the beam from the eaves to the post is 18 feet. Slope = rise/run = 9/18 = 1/2.

Tangent: The tangent ratio of an acute angle compares the two legs of a right triangle. Tangent = leg opposite / leg adjacent.

Sample Question #3: Look again at the roof on page 359. What is the tangent ratio of the angle between the roof and the horizontal beam? Tan = 7.5/10 = 0.75. Try this one on your own: for the 9-foot rise over the 18-foot beam, tangent = 9/18 = 1/2.

Section 7.2: A Direct Variation Model
Read the assigned pages, get with a partner, and begin your homework.

Section 7.3: Circumference and Arc Length (overhead)

Section 7.4: Direct Variation with y = kx
You can model direct variation with the equation y = kx, called the general form for direct variation: y = dependent variable, k = variation constant, x = control variable.

Sample #1: The amount Cindy earns varies directly with the number of hours she works. She earns $336 in a 40-hour work week. How much will Cindy earn in 130 hours? y = kx; 336 = k(40); k = 8.4; so y = 8.4x and y = 8.4(130) = 1092. In 130 hours, Cindy would make $1,092.

Try this one on your own: Efren set an empty fish tank on a bathroom scale. When he added 20 quarts of water, the weight increase was 41¾ pounds. He plans to have 28 quarts of water in the tank when he sets the tank up for his fish. If the weight of the water varies directly with the number of quarts used, how much will the water in the tank weigh? About 58 pounds.
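To make the y = kx steps above concrete, here is a minimal Python sketch (not part of the original lesson; the function and variable names are illustrative) that finds the variation constant from one known pair and then evaluates the model at a new value:

```python
def variation_constant(x, y):
    """Return k for the direct-variation model y = k * x, given one known (x, y) pair."""
    return y / x

def predict(k, x):
    """Evaluate y = k * x."""
    return k * x

# Sample #1: Cindy earns $336 for 40 hours of work.
k_pay = variation_constant(40, 336)       # 8.4 dollars per hour
print(predict(k_pay, 130))                # 1092.0 -> $1,092 for 130 hours

# Try-it: 20 quarts of water weigh 41.75 pounds.
k_water = variation_constant(20, 41.75)   # 2.0875 pounds per quart
print(predict(k_water, 28))               # about 58.45 -> roughly 58 pounds for 28 quarts
```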
Direct Variation Graphs
The graph of a direct variation equation y = kx is a line that passes through the origin. The variation constant, k, is the slope of the line.

Sample #2: Is direct variation a good model for the data shown in each graph? Explain why or why not. Try these on your own: Is direct variation a good model for the data shown in each graph? Explain why or why not.

Sample #3: The direct variation graph shown is a line with slope 3. Write an equation of the line: y = 3x. Try this one on your own: The direct variation graph shown is a line with slope 3/2. Write an equation for the graph: y = (3/2)x.

Sample #4: Graph y = −2x. Try this one on your own: graph y = −(2/3)x.

Section 7.5: Using Dimensional Analysis
When you cancel units of measurement as if they are numbers, you are using a problem-solving strategy called dimensional analysis. A conversion factor is a ratio of two equal quantities that are measured in different units. Example: 12 inches in 1 foot, a ratio of 12:1.

Sample #1: The distance Milo travels varies directly with the number of hours he drives. Milo drives 145 miles in 3 hours. Identify the control variable and the dependent variable, and express the variation constant as a rate. Control/independent variable = time traveled (hours); dependent variable = distance traveled (miles); 145 miles / 3 hours ≈ 48.3 miles per hour.

Try this one on your own: The thickness of a stack of copier paper varies directly with the number of sheets in the stack. A new pack of copier paper is 2 inches thick and contains 500 sheets. Identify the control variable and dependent variable, and express the variation constant as a rate. Control/independent variable = number of sheets; dependent variable = thickness of the stack; variation constant = 2 inches / 500 sheets = 0.004 inches per sheet.

Sample #2: How many centimeters are in 3 inches? There are 2.54 centimeters in 1 inch, so 3 inches × 2.54 cm/1 in = 7.62 centimeters. Write a direct variation equation that helps you convert inches to centimeters: x = number of inches, y = number of centimeters, k = 2.54; equation: y = 2.54x. What is the variation constant? 2.54 centimeters per inch.

Try this one on your own: Pedro Zepeda weighs 70.5 kg. There are 2.205 pounds per 1 kilogram. How many pounds are in 70.5 kg? About 155.5 pounds. Write a direct variation equation for converting kilograms to pounds: y = 2.205x. What is the variation constant? 2.205 pounds per kilogram.

Sample #3: In 1990 a French train set a speed record of 515 km/h on the Atlantique line. Find its speed in miles per hour. 1 kilometer ≈ 0.621 miles, so 515 km / 1 hr × 0.621 mi / 1 km ≈ 320 miles per hour.

Try this one on your own: A large crude-oil pipeline from Canada to the United States has a flow of 8.3 million gallons per day. How many liters per day does the pipeline handle? There are about 3.785 liters per 1 gallon, so the pipeline handles about 31.4 million liters of crude oil per day.

Sample #4: Convert 30 mi/hr to ft/s. 1 mile = 5,280 feet; 1 hour = 60 minutes; 1 minute = 60 seconds. Rate × conversion factors = new rate: 30 mi/hr × 5,280 ft/1 mi × 1 hr/60 min × 1 min/60 s = 44 ft/s.
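A short Python sketch of the dimensional-analysis chains in Samples #3 and #4 (not from the lesson; the helper name and structure are my own, and the conversion factors are the standard ones quoted above):

```python
def convert(rate, factors):
    """Multiply a rate by a chain of conversion factors (each written as new units / old units)."""
    for factor in factors:
        rate *= factor
    return rate

# Sample #4: 30 mi/hr -> ft/s
ft_per_s = convert(30, [5280 / 1,    # feet per mile
                        1 / 60,      # hours per minute
                        1 / 60])     # minutes per second
print(ft_per_s)                      # 44.0 ft/s

# Sample #3: 515 km/h -> mi/h (1 km is about 0.621 mi)
print(round(convert(515, [0.621])))  # about 320 mi/h
```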
Section 7.6: Areas of Circles and Sectors
Area formula: what is it? Are the units squared or cubed? (A = πr², measured in square units.)

Sample #1: Find the area of a circle whose radius is 3 units. Round the answer to the nearest square unit. A = πr² = π(3)² ≈ 28 square units.

Try this one on your own: Find the area of a circle whose radius is 7.5 units. Round the answer to the nearest square unit. A = π(7.5)² ≈ 177 square units.

Sample #2: In a center-pivot irrigation system, a moving arm sprinkles water over a circular region. How long must the arm be to water an area of 586,000 square meters? Since A = πr², r = √(A/π) = √(586,000/π) ≈ 432 meters.

Try this one on your own: The amount of water a pipe can carry depends on the area of the opening at the end. An engineer wants a large sewer pipe to have an opening of 13 square feet. What should the radius of the pipe be? Round the answer to the nearest tenth of a foot. r = √(13/π) ≈ 2.0 feet.

Sectors: The region of a circle formed by a central angle and its arc is called a sector. (Area of sector) / (area of circle) = (measure of the central angle) / 360°.

Sample #3: A fixed camera that is part of the TV security system at a shopping-mall parking lot has a range of about 250 feet. The angle of vision of the camera is 100 degrees. Over how large an area can the camera see when it is in operation? Steps to solve: draw a picture, then set up the equation: area = (100/360) · π(250)² ≈ 54,500 square feet.

Sample #4: About how many times more pizza do you get when you buy the large size instead of the small size? The large is 16 inches in diameter; the small is 12 inches in diameter. Comparing areas, (16/12)² ≈ 1.8 times as much pizza.

Try this one on your own: The camera in Sample #3 replaced an older camera that had a viewing angle of 110 degrees but a range of only 175 feet. How many times greater is the viewing area of the newer camera than that of the older camera? 1.86 times.
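The sector-area comparison in the last problem can be checked with a few lines of Python (an illustration, not part of the lesson; the function name is mine):

```python
import math

def sector_area(radius, central_angle_deg):
    """Area of a circular sector: (central angle / 360) * pi * r^2."""
    return (central_angle_deg / 360) * math.pi * radius ** 2

new_cam = sector_area(250, 100)     # about 54,500 square feet
old_cam = sector_area(175, 110)     # about 29,400 square feet
print(round(new_cam / old_cam, 2))  # 1.86 -> matches the answer above
```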
In vascular plants, the roots are the organs of a plant that are modified to provide anchorage for the plant and take in water and nutrients into the plant body, which allows plants to grow taller and faster. They are most often below the surface of the soil, but roots can also be aerial or aerating, that is, growing up above the ground or especially above water. Root morphology is divided into four zones: the root cap, the apical meristem, the elongation zone, and the root hair zone. The root cap of new roots helps the root penetrate the soil. These root caps are sloughed off as the root goes deeper, creating a slimy surface that provides lubrication. The apical meristem behind the root cap produces new root cells that elongate. Then, root hairs form that absorb water and mineral nutrients from the soil. The first root in seed-producing plants is the radicle, which expands from the plant embryo after seed germination. When dissected, the arrangement of the cells in a root is root hair, epidermis, epiblem, cortex, endodermis, pericycle and, lastly, the vascular tissue in the centre of a root, which transports the water absorbed by the root to other parts of the plant. Perhaps the most striking characteristic of roots that distinguishes them from other plant organs such as stem-branches and leaves is that roots have an endogenous origin, i.e., they originate and develop from an inner layer of the mother axis, such as the pericycle. In contrast, stem-branches and leaves are exogenous, i.e., they start to develop from the cortex, an outer layer. In response to the concentration of nutrients, roots also synthesise cytokinin, which acts as a signal as to how fast the shoots can grow. Roots often function in storage of food and nutrients. The roots of most vascular plant species enter into symbiosis with certain fungi to form mycorrhizae, and a large range of other organisms including bacteria also closely associate with roots.

Root system architecture (RSA)
In its simplest form, the term root system architecture (RSA) refers to the spatial configuration of a plant's root system. This system can be extremely complex and is dependent upon multiple factors such as the species of the plant itself, the composition of the soil and the availability of nutrients. Root architecture plays the important role of providing a secure supply of nutrients and water as well as anchorage and support. The configuration of root systems serves to structurally support the plant, compete with other plants and for uptake of nutrients from the soil. Roots grow to specific conditions, which, if changed, can impede a plant's growth. For example, a root system that has developed in dry soil may not be as efficient in flooded soil, yet plants are able to adapt to other changes in the environment, such as seasonal changes.

Terms and components
The main terms used to classify the architecture of a root system are:
- Branch magnitude: number of links (exterior or interior)
- Topology: pattern of branching (herringbone, dichotomous, radial)
- Link length: distance between branches
- Root angle: the radial angle of a lateral root's base around the parent root's circumference, the angle of a lateral root from its parent root, and the angle an entire system spreads
- Link radius: diameter of the root

All components of the root architecture are regulated through a complex interaction between genetic responses and responses due to environmental stimuli.
These developmental stimuli are categorised as intrinsic, the genetic and nutritional influences, or extrinsic, the environmental influences, and are interpreted by signal transduction pathways. Extrinsic factors affecting root architecture include gravity, light exposure, water and oxygen, as well as the availability or lack of nitrogen, phosphorus, sulphur, aluminium and sodium chloride. The main hormones (intrinsic stimuli) and respective pathways responsible for root architecture development include:
- Auxin: lateral root formation, maintenance of apical dominance and adventitious root formation.
- Cytokinins: regulate root apical meristem size and promote lateral root elongation.
- Ethylene: promotes crown root formation.
- Gibberellins: together with ethylene, promote crown primordia growth and elongation; together with auxin, promote root elongation. Gibberellins also inhibit lateral root primordia initiation.

Early root growth is one of the functions of the apical meristem located near the tip of the root. The meristem cells more or less continuously divide, producing more meristem, root cap cells (these are sacrificed to protect the meristem), and undifferentiated root cells. The latter become the primary tissues of the root, first undergoing elongation, a process that pushes the root tip forward in the growing medium. Gradually these cells differentiate and mature into specialized cells of the root tissues.

Growth from apical meristems is known as primary growth, which encompasses all elongation. Secondary growth encompasses all growth in diameter, a major component of woody plant tissues and many nonwoody plants. For example, storage roots of sweet potato have secondary growth but are not woody. Secondary growth occurs at the lateral meristems, namely the vascular cambium and cork cambium. The former forms secondary xylem and secondary phloem, while the latter forms the periderm. In plants with secondary growth, the vascular cambium, originating between the xylem and the phloem, forms a cylinder of tissue along the stem and root. The vascular cambium forms new cells on both the inside and outside of the cambium cylinder, with those on the inside forming secondary xylem cells, and those on the outside forming secondary phloem cells. As secondary xylem accumulates, the "girth" (lateral dimensions) of the stem and root increases. As a result, tissues beyond the secondary phloem, including the epidermis and cortex, in many cases tend to be pushed outward and are eventually "sloughed off" (shed). At this point, the cork cambium begins to form the periderm, consisting of protective cork cells. The walls of cork cells contain suberin thickenings, an extracellular complex biopolymer. The suberin thickenings function by providing a physical barrier, protection against pathogens, and by preventing water loss from the surrounding tissues. In addition, suberin also aids the process of wound healing in plants. It is also postulated that suberin could be a component of the apoplastic barrier (present at the outer cell layers of roots) which prevents toxic compounds from entering the root and reduces radial oxygen loss (ROL) from the aerenchyma during waterlogging. In roots, the cork cambium originates in the pericycle, a component of the vascular cylinder. The vascular cambium produces new layers of secondary xylem annually. The xylem vessels are dead at maturity but are responsible for most water transport through the vascular tissue in stems and roots.
Tree roots usually grow to three times the diameter of the branch spread, and only half of them lie underneath the trunk and canopy. The roots from one side of a tree usually supply nutrients to the foliage on the same side. Some families, however, such as Sapindaceae (the maple family), show no correlation between root location and where the root supplies nutrients on the plant. Roots use the process of plant perception to sense their physical environment as they grow, including the sensing of light and physical barriers. Plants also sense gravity and respond through auxin pathways, resulting in gravitropism. Over time, roots can crack foundations, snap water lines, and lift sidewalks. Research has shown that roots have the ability to recognize 'self' and 'non-self' roots in the same soil environment. The correct environment of air, mineral nutrients and water directs plant roots to grow in any direction to meet the plant's needs. Roots will shy or shrink away from dry or other poor soil conditions.

Shade avoidance response
In order to avoid shade, plants utilize a shade avoidance response. When a plant is under dense vegetation, the presence of other vegetation nearby will cause the plant to avoid lateral growth and experience an increase in upward shoot growth, as well as downward root growth. In order to escape shade, plants adjust their root architecture, most notably by decreasing the length and number of lateral roots emerging from the primary root. Experimentation with mutant variants of Arabidopsis thaliana found that plants sense the red to far-red light ratio that enters the plant through photoreceptors known as phytochromes. Nearby plant leaves absorb red light and reflect far-red light, which causes the ratio of red to far-red light to fall. The phytochrome PhyA that senses this red to far-red light ratio is localized in both the root system and the shoot system of plants, but knockout mutant experimentation found that root-localized PhyA does not sense the light ratio, whether directly or axially, that leads to changes in the lateral root architecture. Research instead found that shoot-localized PhyA is the phytochrome responsible for causing these architectural changes of the lateral root. Research has also found that phytochrome completes these architectural changes through the manipulation of auxin distribution in the root of the plant. When a low enough red to far-red ratio is sensed by PhyA, the PhyA in the shoot will be mostly in its active form. In this form, PhyA stabilizes the transcription factor HY5, so that it is no longer degraded as it is when PhyA is in its inactive form. This stabilized transcription factor is then transported to the roots of the plant through the phloem, where it proceeds to induce its own transcription as a way to amplify its signal. In the roots of the plant, HY5 functions to inhibit an auxin response factor known as ARF19, a response factor responsible for the translation of PIN3 and LAX3, two well-known auxin-transporting proteins. Thus, through manipulation of ARF19, the level and activity of the auxin transporters PIN3 and LAX3 are inhibited. Once they are inhibited, auxin levels will be low in areas where lateral root emergence normally occurs, resulting in failure of the lateral root primordium to emerge through the root pericycle.
With this complex manipulation of auxin transport in the roots, lateral root emergence will be inhibited and the root will instead elongate downwards, promoting vertical plant growth in an attempt to avoid shade. Research on Arabidopsis has led to the discovery of how this auxin-mediated root response works. In an attempt to discover the role that phytochrome plays in lateral root development, Salisbury et al. (2007) worked with Arabidopsis thaliana grown on agar plates. Salisbury et al. used wild-type plants along with varying protein knockout and gene knockout Arabidopsis mutants to observe the results these mutations had on the root architecture, protein presence, and gene expression. To do this, Salisbury et al. used GFP fluorescence along with other forms of both macroscopic and microscopic imagery to observe any changes various mutations caused. From this research, Salisbury et al. were able to theorize that shoot-located phytochromes alter auxin levels in roots, controlling lateral root development and overall root architecture. In the experiments of van Gelderen et al. (2018), they wanted to see if and how the shoot of Arabidopsis thaliana alters and affects root development and root architecture. To do this, they took Arabidopsis plants, grew them in agar gel, and exposed the roots and shoots to separate sources of light. From here, they altered the different wavelengths of light the shoot and root of the plants were receiving and recorded the lateral root density, the number of lateral roots, and the general architecture of the lateral roots. To identify the function of specific photoreceptors, proteins, genes, and hormones, they utilized various Arabidopsis knockout mutants and observed the resulting changes in lateral root architecture. Through their observations and various experiments, van Gelderen et al. were able to develop a mechanism for how root detection of red to far-red light ratios alters lateral root development.

A true root system consists of a primary root and secondary roots (or lateral roots).
- The diffuse root system: the primary root is not dominant; the whole root system is fibrous and branches in all directions. Most common in monocots. The main function of the fibrous root is to anchor the plant.

The roots, or parts of roots, of many plant species have become specialized to serve adaptive purposes besides the two primary functions described in the introduction.
- Adventitious roots arise out-of-sequence from the more usual root formation of branches of a primary root, and instead originate from the stem, branches, leaves, or old woody roots. They commonly occur in monocots and pteridophytes, but also in many dicots, such as clover (Trifolium), ivy (Hedera), strawberry (Fragaria) and willow (Salix). Most aerial roots and stilt roots are adventitious. In some conifers adventitious roots can form the largest part of the root system.
- Aerating roots (or knee roots or knees or pneumatophores): roots rising above the ground, especially above water, such as in some mangrove genera (Avicennia, Sonneratia). In some plants like Avicennia the erect roots have a large number of breathing pores for exchange of gases.
- Aerial roots: roots entirely above the ground, such as in ivy (Hedera) or in epiphytic orchids. Many aerial roots are used to take in water and nutrients directly from the air – from fogs, dew or humidity in the air. Some rely on leaf systems to gather rain or humidity and even store it in scales or pockets.
Other aerial roots, such as mangrove aerial roots, are used for aeration and not for water absorption. Other aerial roots are used mainly for structure, functioning as prop roots, as in maize or anchor roots or as the trunk in strangler fig. In some Epiphytes – plants living above the surface on other plants, aerial roots serve for reaching to water sources or reaching the surface, and then functioning as regular surface roots. - Canopy roots/arboreal roots: roots that form when tree branches support mats of epiphytes and detritus, which hold water and nutrients in the canopy. They grow out into these mats, likely to utilize the available nutrients and moisture. - Contractile roots: roots that pull bulbs or corms of monocots, such as hyacinth and lily, and some taproots, such as dandelion, deeper in the soil through expanding radially and contracting longitudinally. They have a wrinkled surface. - Coarse roots: roots that have undergone secondary thickening and have a woody structure. These roots have some ability to absorb water and nutrients, but their main function is transport and to provide a structure to connect the smaller diameter, fine roots to the rest of the plant. - Dimorphic root systems: roots with two distinctive forms for two separate functions - Fine roots: typically primary roots <2 mm diameter that have the function of water and nutrient uptake. They are often heavily branched and support mycorrhizas. These roots may be short lived, but are replaced by the plant in an ongoing process of root 'turnover'. - Haustorial roots: roots of parasitic plants that can absorb water and nutrients from another plant, such as in mistletoe (Viscum album) and dodder. - Propagative roots: roots that form adventitious buds that develop into aboveground shoots, termed suckers, which form new plants, as in Canada thistle, cherry and many others. - Proteoid roots or cluster roots: dense clusters of rootlets of limited growth that develop under low phosphate or low iron conditions in Proteaceae and some plants from the following families Betulaceae, Casuarinaceae, Elaeagnaceae, Moraceae, Fabaceae and Myricaceae. - Stilt roots: adventitious support roots, common among mangroves. They grow down from lateral branches, branching in the soil. - Storage roots: roots modified for storage of food or water, such as carrots and beets. They include some taproots and tuberous roots. - Structural roots: large roots that have undergone considerable secondary thickening and provide mechanical support to woody plants and trees. - Surface roots: roots that proliferate close below the soil surface, exploiting water and easily available nutrients. Where conditions are close to optimum in the surface layers of soil, the growth of surface roots is encouraged and they commonly become the dominant roots. - Tuberous roots: fleshy and enlarged lateral roots for food or water storage, e.g. sweet potato. A type of storage root distinct from taproot. - Photosynthetic roots: roots that are green and photosynthesize, providing sugar to the plant. They are similar to phylloclades. Several orchids have these, such as Dendrophylax and Taeniophyllum. - Root nodules: roots that harbor nitrogen-fixing soil bacteria. These are often very short and rounded. Root nodules are found in virtually all legumes. - Coralloid roots: similar to root nodules, these provide nitrogen to the plant. They are often larger than nodules, branched, and located at or near the soil surface, and harbor nitrogen-fixing cyanobacteria. 
They are only found in cycads.

The distribution of vascular plant roots within soil depends on plant form, the spatial and temporal availability of water and nutrients, and the physical properties of the soil. The deepest roots are generally found in deserts and temperate coniferous forests; the shallowest in tundra, boreal forest and temperate grasslands. The deepest observed living root, at least 60 metres below the ground surface, was observed during the excavation of an open-pit mine in Arizona, USA. Some roots can grow as deep as the tree is high. The majority of roots on most plants are, however, found relatively close to the surface, where nutrient availability and aeration are more favourable for growth. Rooting depth may be physically restricted by rock or compacted soil close below the surface, or by anaerobic soil conditions. Recorded maximum rooting depths include:
- Boscia albitrunca, Kalahari Desert, 68 m (Jennings 1974)
- Juniperus monosperma, Colorado Plateau, 61 m (Cannon 1960)
- Eucalyptus sp., Australian forest, 61 m (Jennings 1971)
- Acacia erioloba, Kalahari Desert, 60 m (Jennings 1974)
- Prosopis juliflora, Arizona desert, 53.3 m (Phillips 1963)

The fossil record of roots (or rather, infilled voids where roots rotted after death) spans back to the late Silurian, about 430 million years ago. Their identification is difficult, because casts and molds of roots are so similar in appearance to animal burrows. They can be discriminated using a range of features. The evolutionary development of roots likely happened from the modification of shallow rhizomes (modified horizontal stems) which anchored primitive vascular plants, combined with the development of filamentous outgrowths (called rhizoids) which anchored the plants and conducted water to the plant from the soil.

Light has been shown to have some impact on roots, but it has not been studied as much as the effect of light on other plant systems. Early research in the 1930s found that light decreased the effectiveness of indole-3-acetic acid on adventitious root initiation. Studies of the pea in the 1950s showed that lateral root formation was inhibited by light, and in the early 1960s researchers found that light could induce positive gravitropic responses in some situations. The effects of light on root elongation have been studied for monocotyledonous and dicotyledonous plants, with the majority of studies finding that light inhibited root elongation, whether pulsed or continuous. Studies of Arabidopsis in the 1990s showed negative phototropism and inhibition of the elongation of root hairs in light sensed by phyB.

Certain plants, namely those in the Fabaceae, form root nodules in order to associate and form a symbiotic relationship with nitrogen-fixing bacteria called rhizobia. Owing to the high energy required to fix nitrogen from the atmosphere, the bacteria take carbon compounds from the plant to fuel the process. In return, the plant takes nitrogen compounds produced from ammonia by the bacteria.

Soil temperature is a factor that affects root initiation and length. Root length is usually impacted more dramatically by temperature than overall mass is, with cooler temperatures tending to cause more lateral growth because downward extension is limited by cooler temperatures at subsoil levels. Needs vary by plant species, but in temperate regions cool temperatures may limit root systems. Cool-temperature species like oats, rapeseed, rye and wheat fare better in lower temperatures than summer annuals like maize and cotton.
Researchers have found that plants like cotton develop wider and shorter taproots in cooler temperatures. The first root originating from the seed usually has a wider diameter than root branches, so smaller root diameters are expected if temperatures increase root initiation. Root diameter also decreases as the root elongates.

Plants can interact with one another in their environment through their root systems. Studies have demonstrated that plant-plant interaction occurs among root systems via the soil as a medium. Researchers have tested whether plants growing in ambient conditions would change their behavior if a nearby plant was exposed to drought conditions. Since the nearby plants showed no changes in stomatal aperture, researchers believe the drought signal spread through the roots and soil, not through the air as a volatile chemical signal.

Soil microbiota can suppress both disease and beneficial root symbionts (mycorrhizal fungi are easier to establish in sterile soil). Inoculation with soil bacteria can increase internode extension and yield and quicken flowering. The migration of bacteria along the root varies with natural soil conditions. For example, research has found that the root systems of wheat seeds inoculated with Azotobacter showed higher populations in soils favorable to Azotobacter growth. Some studies have been unsuccessful in increasing the levels of certain microbes (such as P. fluorescens) in natural soil without prior sterilization.

Grass root systems are beneficial at reducing soil erosion by holding the soil together. Perennial grasses that grow wild in rangelands contribute organic matter to the soil when their old roots decay, and attacks by beneficial fungi, protozoa, bacteria, insects and worms release nutrients. Scientists have observed significant variation in the microbial cover of roots, with only around 10 percent of three-week-old root segments covered. On younger roots coverage was even lower, but even on three-month-old roots the coverage was only around 37%. Before the 1970s, scientists believed that the majority of the root surface was covered by microorganisms.

Researchers studying maize seedlings found that calcium absorption was greatest in the apical root segment, and potassium absorption greatest at the base of the root. Along other root segments absorption was similar. Absorbed potassium is transported to the root tip, and to a lesser extent to other parts of the root, then also to the shoot and grain. Calcium transport from the apical segment is slower; it is mostly transported upward and accumulated in the stem and shoot. Researchers found that partial deficiencies of K or P did not change the fatty acid composition of phosphatidylcholine in Brassica napus L. plants. Calcium deficiency, on the other hand, led to a marked decline in polyunsaturated compounds, which would be expected to have negative impacts on the integrity of the plant membrane, could affect some of its properties such as permeability, and is needed for the ion-uptake activity of the root membranes.

The term root crops refers to any edible underground plant structure, but many root crops are actually stems, such as potato tubers. Edible roots include cassava, sweet potato, beet, carrot, rutabaga, turnip, parsnip, radish, yam and horseradish. Spices obtained from roots include sassafras, angelica, sarsaparilla and licorice. Sugar beet is an important source of sugar. Yam roots are a source of estrogen compounds used in birth control pills.
The fish poison and insecticide rotenone is obtained from roots of Lonchocarpus spp. Important medicines from roots are ginseng, aconite, ipecac, gentian and reserpine. Several legumes that have nitrogen-fixing root nodules are used as green manure crops, which provide nitrogen fertilizer for other crops when plowed under. Specialized bald cypress roots, termed knees, are sold as souvenirs, lamp bases and carved into folk art. Native Americans used the flexible roots of white spruce for basketry. Tree roots can heave and destroy concrete sidewalks and crush or clog buried pipes. The aerial roots of strangler fig have damaged ancient Mayan temples in Central America and the temple of Angkor Wat in Cambodia. Vegetative propagation of plants via cuttings depends on adventitious root formation. Hundreds of millions of plants are propagated via cuttings annually including chrysanthemum, poinsettia, carnation, ornamental shrubs and many houseplants. Roots can also protect the environment by holding the soil to reduce soil erosion. This is especially important in areas such as sand dunes. - Absorption of water - Cypress knee - Drought rhizogenesis - Fibrous root system - Mycorrhiza – root symbiosis in which individual hyphae extending from the mycelium of a fungus colonize the roots of a host plant. - Mycorrhizal network - Plant physiology - Rhizosphere – region of soil around the root influenced by root secretions and microorganisms present - Root cutting - Rooting powder - Tanada effect - Harley Macdonald & Donovan Stevens (3 September 2019). Biotechnology and Plant Biology. EDTECH. pp. 141–. ISBN 978-1-83947-180-3. - "Plant parts=Roots". University of Illinois Extension. - Yaacov Okon (24 November 1993). Azospirillum/Plant Associations. CRC Press. pp. 77–. ISBN 978-0-8493-4925-6. - "Backyard Gardener: Understanding Plant Roots". University of Arizona Cooperative Extension. - Gangulee HC, Das KS, Datta CT, Sen S. College Botany. 1. Kolkata: New Central Book Agency. - Dutta AC, Dutta TC. BOTANY For Degree Students (6th ed.). Oxford University Press. - Sheldrake, Merlin (2020). Entangled Life. Bodley Head. p. 148. ISBN 978-1847925206. - Malamy JE (2005). "Intrinsic and environmental response pathways that regulate root system architecture". Plant, Cell & Environment. 28 (1): 67–77. doi:10.1111/j.1365-3040.2005.01306.x. PMID 16021787. - Caldwell MM, Dawson TE, Richards JH (January 1998). "Hydraulic lift: consequences of water efflux from the roots of plants". Oecologia. 113 (2): 151–161. Bibcode:1998Oecol.113..151C. doi:10.1007/s004420050363. PMID 28308192. S2CID 24181646. - Fitter AH (1991). "The ecological significance of root system architecture: an economic approach". In Atkinson D (ed.). Plant Root Growth: An Ecological Perspective. Blackwell. pp. 229–243. - Malamy JE, Ryan KS (November 2001). "Environmental regulation of lateral root initiation in Arabidopsis". Plant Physiology. 127 (3): 899–909. doi:10.1104/pp.010406. PMC 129261. PMID 11706172. - Russell PJ, Hertz PE, McMillan B (2013). Biology: The Dynamic Science. Cengage Learning. p. 750. ISBN 978-1-285-41534-5. Archived from the original on 2018-01-21. Retrieved 2017-04-24. - "Suberin – an overview | ScienceDirect Topics". www.sciencedirect.com. Retrieved 2021-08-31. - "Suberin Form & Function – Mark Bernards – Western University". www.uwo.ca. Retrieved 2021-08-31. - Watanabe, Kohtaro; Nishiuchi, Shunsaku; Kulichikhin, Konstantin; Nakazono, Mikio (2013). "Does suberin accumulation in plant roots contribute to waterlogging tolerance?". 
Frontiers in Plant Science. 4: 178. doi:10.3389/fpls.2013.00178. ISSN 1664-462X. PMC 3683634. PMID 23785371. - van den Driessche, R. (1974-07-01). "Prediction of mineral nutrient status of trees by foliar analysis". The Botanical Review. 40 (3): 347–394. doi:10.1007/BF02860066. ISSN 1874-9372. S2CID 29919924 – via Springer. - Nakagawa Y, Katagiri T, Shinozaki K, Qi Z, Tatsumi H, Furuichi T, et al. (February 2007). "Arabidopsis plasma membrane protein crucial for Ca2+ influx and touch sensing in roots". Proceedings of the National Academy of Sciences of the United States of America. 104 (9): 3639–44. Bibcode:2007PNAS..104.3639N. doi:10.1073/pnas.0607703104. PMC 1802001. PMID 17360695. - UV-B light sensing mechanism discovered in plant roots, San Francisco State University, December 8, 2008 - Marchant A, Kargul J, May ST, Muller P, Delbarre A, Perrot-Rechenmann C, Bennett MJ (April 1999). "AUX1 regulates root gravitropism in Arabidopsis by facilitating auxin uptake within root apical tissues". The EMBO Journal. 18 (8): 2066–73. doi:10.1093/emboj/18.8.2066. PMC 1171291. PMID 10205161. - Hodge A (June 2009). "Root decisions". Plant, Cell & Environment. 32 (6): 628–40. doi:10.1111/j.1365-3040.2008.01891.x. PMID 18811732. - Carminati A, Vetterlein D, Weller U, Vogel H, Oswald SE (2009). "When roots lose contact". Vadose Zone Journal. 8 (3): 805–809. doi:10.2136/vzj2008.0147. - Chen R, Rosen E, Masson PH (June 1999). "Gravitropism in higher plants". Plant Physiology. 120 (2): 343–50. doi:10.1104/pp.120.2.343. PMC 1539215. PMID 11541950. - Pandey, Bipin K.; Huang, Guoqiang; Bhosale, Rahul; Hartman, Sjon; Sturrock, Craig J.; Jose, Lottie; Martin, Olivier C.; Karady, Michal; Voesenek, Laurentius A. C. J.; Ljung, Karin; Lynch, Jonathan P. (2021-01-15). "Plant roots sense soil compaction through restricted ethylene diffusion". Science. 371 (6526): 276–280. Bibcode:2021Sci...371..276P. doi:10.1126/science.abf3013. ISSN 0036-8075. PMID 33446554. S2CID 231606782. - Salisbury FJ, Hall A, Grierson CS, Halliday KJ (May 2007). "Phytochrome coordinates Arabidopsis shoot and root development". The Plant Journal. 50 (3): 429–38. doi:10.1111/j.1365-313x.2007.03059.x. PMID 17419844. - van Gelderen K, Kang C, Paalman R, Keuskamp D, Hayes S, Pierik R (January 2018). "Far-Red Light Detection in the Shoot Regulates Lateral Root Development through the HY5 Transcription Factor". The Plant Cell. 30 (1): 101–116. doi:10.1105/tpc.17.00771. PMC 5810572. PMID 29321188. - Nowak EJ, Martin CE (1997). "Physiological and anatomical responses to water deficits in the CAM epiphyte Tillandsia ionantha (Bromeliaceae)". International Journal of Plant Sciences. 158 (6): 818–826. doi:10.1086/297495. hdl:1808/9858. JSTOR 2475361. S2CID 85888916. - Nadkarni NM (November 1981). "Canopy roots: convergent evolution in rainforest nutrient cycles". Science. 214 (4524): 1023–4. Bibcode:1981Sci...214.1023N. doi:10.1126/science.214.4524.1023. PMID 17808667. S2CID 778003. - Pütz N (2002). "Contractile roots". In Waisel Y., Eshel A., Kafkafi U. (eds.). Plant roots: The hidden half (3rd ed.). New York: Marcel Dekker. pp. 975–987. - Canadell J, Jackson RB, Ehleringer JB, Mooney HA, Sala OE, Schulze ED (December 1996). "Maximum rooting depth of vegetation types at the global scale". Oecologia. 108 (4): 583–595. Bibcode:1996Oecol.108..583C. doi:10.1007/BF00329030. PMID 28307789. S2CID 2092130. - Stonea EL, Kaliszb PJ (1 December 1991). "On the maximum extent of tree roots". Forest Ecology and Management. 46 (1–2): 59–102. 
doi:10.1016/0378-1127(91)90245-Q. - Retallack GJ (1986). "The fossil record of soils" (PDF). In Wright VP (ed.). Paleosols: their Recognition and Interpretation. Oxford: Blackwell. pp. 1–57. Archived (PDF) from the original on 2017-01-07. - Hillier R, Edwards D, Morrissey LB (2008). "Sedimentological evidence for rooting structures in the Early Devonian Anglo–Welsh Basin (UK), with speculation on their producers". Palaeogeography, Palaeoclimatology, Palaeoecology. 270 (3–4): 366–380. Bibcode:2008PPP...270..366H. doi:10.1016/j.palaeo.2008.01.038. - Amram Eshel; Tom Beeckman (17 April 2013). Plant Roots: The Hidden Half, Fourth Edition. CRC Press. pp. 1–. ISBN 978-1-4398-4649-0. - Kurata, Tetsuya (1997). "Light-stimulated root elongation in Arabidopsis thaliana". Journal of Plant Physiology. 151 (3): 345–351. doi:10.1016/S0176-1617(97)80263-5. hdl:2115/44841. - Postgate, J. (1998). Nitrogen Fixation (3rd ed.). Cambridge, UK: Cambridge University Press. - Encyclopedia of Soil Science - Chamovitz, Daniel. (21 November 2017). What a plant knows : a field guide to the senses. ISBN 9780374537128. OCLC 1041421612. - Falik O, Mordoch Y, Ben-Natan D, Vanunu M, Goldstein O, Novoplansky A (July 2012). "Plant responsiveness to root-root communication of stress cues". Annals of Botany. 110 (2): 271–80. doi:10.1093/aob/mcs045. PMC 3394639. PMID 22408186. - Bowen GD, Rovira AD (1976). "Microbial Colonization of Plant Roots". Annu. Rev. Phytopathol. 14: 121–144. doi:10.1146/annurev.py.14.090176.001005. - Plant Roots and their Environment. Elsevier. 1988. p. 17. - Plant Roots and their Environment. Elsevier. 1988. p. 25. - Zahniser, David (February 21, 2008) "City to pass the bucks on sidewalks?" Archived 2015-04-17 at the Wayback Machine Los Angeles Times - Baldocchi DD, Xu L (October 2007). "What limits evaporation from Mediterranean oak woodlands–The supply of moisture in the soil, physiological control by plants or the demand by the atmosphere?". Advances in Water Resources. 30 (10): 2113–22. Bibcode:2007AdWR...30.2113B. doi:10.1016/j.advwatres.2006.06.013. - Brundrett, M. C. (2002). "Coevolution of roots and mycorrhizas of land plants". New Phytologist. 154 (2): 275–304. doi:10.1046/j.1469-8137.2002.00397.x. PMID 33873429. - Clark L (2004). "Primary Root Structure and Development – lecture notes" (PDF). Archived from the original (PDF) on 3 January 2006. - Coutts MP (1987). "Developmental processes in tree root systems". Canadian Journal of Forest Research. 17 (8): 761–767. doi:10.1139/x87-122. - Raven JA, Edwards D (2001). "Roots: evolutionary origins and biogeochemical significance". Journal of Experimental Botany. 52 (Suppl 1): 381–401. doi:10.1093/jxb/52.suppl_1.381. PMID 11326045. - Schenk HJ, Jackson RB (2002). "The global biogeography of roots". Ecological Monographs. 72 (3): 311–328. doi:10.2307/3100092. JSTOR 3100092. - Sutton RF, Tinus RW (1983). "Root and root system terminology". Forest Science Monograph. 24: 137. - Phillips WS (1963). "Depth of roots in soil". Ecology. 44 (2): 424. doi:10.2307/1932198. JSTOR 1932198. - Caldwell MM, Dawson TE, Richards JH (1998). "Hydraulic lift: consequences of water efflux from the roots of plants". Oecologia. 113 (2): 151–161. Bibcode:1998Oecol.113..151C. doi:10.1007/s004420050363. PMID 28308192. S2CID 24181646.
Congruent triangles are a special type of similar triangle where corresponding sides are congruent. In similar triangles, corresponding angles are congruent but corresponding sides are proportional. In this activity, students will look at three methods of constructing similar triangles and will test these properties using dilations or stretches.
- Draw the triangle. Then, find x (or d) and the measure of each side of the triangle. 33) Triangle KLM is equilateral with KM = d + 2, LM = 12 − d, and KL = 4d − 13. 34) Triangle ABC is equilateral with AB = 3x − 2, BC = 2x + 4, and CA = x + 10. 35) Triangle DEF is isosceles, angle D is the vertex angle, DE = x + 7, DF = 3x − 1, and EF = 2x + 5.
- Chapter 10: Congruent and Similar Triangles. Introduction: Recognizing and using congruent and similar shapes can make calculations and design work easier. For instance, in the design at the corner, only two different shapes were actually drawn. State whether the triangles are congruent by SSS, SAS, ASA, AAS, or HL. Worksheet #80: Overlapping Congruent Triangles. Sum of the Interior Angles of a Triangle Worksheet 2. Sum of the Interior Angles of a Triangle Worksheet 3: this angle worksheet features 12 different triangles. The measure of each angle is represented by an algebraic expression. That's right, you're not given the measure of any of the three angles in the triangle. Geometry assignment: complete each congruence statement by naming the corresponding angle or side.
- If the triangles meet the condition of the postulate or theorem, then you have congruent triangles. They are the SSS postulate, SAS postulate, ASA postulate, AAS theorem, and Hypotenuse-Leg theorem. SSS postulate: If three sides of a triangle are congruent to three sides of a second triangle, then the two triangles are congruent. Example: Describe the symbol for triangles and how congruent triangles are depicted. Be sure to emphasize the order of the letters. 8. [Slide 6] Explain that the students will need to be able to write congruence statements. [Press enter] The first step is to determine whether the triangles are in fact congruent by looking for corresponding parts.
- Angle-Angle Similarity (AA~) Postulate (Postulate 9-1): If two angles of one triangle are congruent to two angles of another triangle, then the triangles are similar. If ∠S ≅ ∠M and ∠R ≅ ∠L, then △SRT ~ △MLP. Theorem: If an angle of one triangle is congruent to an angle of a second triangle, and the sides that include the two angles are proportional, then the triangles are similar.
Similar Triangles. Strand: Triangles. Topic: Exploring congruent triangles. Primary SOL: G.7 The student, given information in the form of a figure or statement, will prove two triangles are similar. Related SOL: G.3a. Materials: Which Triangles Are Similar? activity sheet (attached); Similar Triangles: Shortcuts activity sheet (attached).
- In a right-angled triangle, the side opposite the right angle is called the hypotenuse and the other two sides are called its legs or arms. In a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares on its legs. Two plane figures, say F1 and F2, are said to be congruent if the trace-copy of F1. Similar Triangles and Polygons (with worksheets, videos, games).
Similar triangles are two triangles that have congruent corresponding angles and whose corresponding sides are in proportion. To distinguish methods for proving triangles similar from methods for proving that triangles are congruent, we use SAS~ and SSS~ to identify the similarity theorems. Theorem 5.3.3 (SAS~): If an angle of one triangle is congruent to an angle of a second triangle and the pairs of sides including the angles are proportional, then the triangles are similar. Yes, the triangles are congruent because of SSS: 5² + 12² = 13². All congruent triangles are similar.
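As a quick illustration (not part of either worksheet), the 5-12-13 check and problem 33 above can be verified directly:

```python
# The SSS claim above: legs 5 and 12 with hypotenuse 13 satisfy the
# Pythagorean theorem, so both right triangles share all three side lengths.
print(5**2 + 12**2 == 13**2)    # True

# Worksheet problem 33: triangle KLM is equilateral, so all three expressions
# must give the same side length.  Setting d + 2 = 12 - d gives d = 5.
d = 5
sides = [d + 2, 12 - d, 4*d - 13]
print(sides)                    # [7, 7, 7] -- consistent, each side measures 7
```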
A polygon is a closed figure formed by three or more line segments, called sides. Each side is joined to two other sides at its endpoints, and the endpoints are called vertices. In this discussion, the term "polygon" means "convex polygon," that is, a polygon in which the measure of each interior angle is less than 180°. The figures below are examples of such polygons. The simplest polygon is a triangle. We know that the sum of the interior angles in a triangle is 180°. A quadrilateral can be divided into 2 triangles, a pentagon can be divided into 3 triangles, and a hexagon can be divided into 4 triangles, as shown below. If a polygon has n sides, it can be divided into n − 2 triangles. Since the sum of the measures of the interior angles of a triangle is 180°, it follows that the sum of the measures of the interior angles of an n-sided polygon is (n − 2)(180°). For example, the sum for a quadrilateral (n = 4) is (4 − 2)(180°) = 360°, and the sum for a hexagon (n = 6) is (6 − 2)(180°) = 720°. A polygon in which all sides are congruent and all interior angles are congruent is called a regular polygon. For example, in a regular octagon (8 sides), the sum of the measures of the interior angles is (8 − 2)(180°) = 1080°. Therefore, the measure of each angle is 1080° ÷ 8 = 135°. The perimeter of a polygon is the sum of the lengths of its sides. The area of a polygon refers to the area of the region enclosed by the polygon.
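A small Python sketch of the two formulas just derived (the function names are mine, not from the passage):

```python
def interior_angle_sum(n):
    """Sum of the interior angles of a convex polygon with n sides, in degrees."""
    return (n - 2) * 180

def regular_interior_angle(n):
    """Measure of each interior angle of a regular polygon with n sides, in degrees."""
    return interior_angle_sum(n) / n

print(interior_angle_sum(4))      # 360 (quadrilateral)
print(interior_angle_sum(6))      # 720 (hexagon)
print(regular_interior_angle(8))  # 135.0 (regular octagon)
```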
Number Problems, 1st Grade: In this first grade math worksheet, 1st graders examine the pictures of apples and apple cores and then write a number sentence that illustrates a subtraction problem.

Word Problems Involving Subtraction: A classic math instructional activity like this one involves a warm-up, guided practice, application, and assessment. Get those 2nd graders good at assessing and solving single-step word problems through basic addition and subtraction... (1st–2nd grade math, CCSS: Adaptable)

Study Jams! + & − Without Regrouping: Addition and subtraction are essential skills for all young mathematicians. Explain the step-by-step process with respect to place value using these real-world examples. Focus is on numbers in the tens, hundreds, and thousands, making... (1st–4th grade math, CCSS: Adaptable)

How Many Times Did You Add That?: Math whizzes practice multiplying using repeated addition. They watch a video clip from the Hershey's chocolate plant, read Hershey's Multiplication, and use grid paper to investigate multiplication as repeated addition. A page of five... (1st–4th grade math, CCSS: Designed)
Most of the 5000 exoplanets discovered so far have been found using methods that don't actually see the planet at all. Brightness dimmings and star wobbles only get us so far: these indirect methods limit our ability to study planets in detail, so astronomers are working on gigantic starshades to resolve planets directly. Direct imaging is a technique that can be used to take pictures of exoplanets. But this is a much more challenging task than indirect methods, because the light from the exoplanet is much fainter than the light from the star it orbits. However, direct imaging is the only way to obtain detailed information about the physical properties of exoplanets, such as their geography, shape, and atmosphere. One way to improve the sensitivity of direct imaging is to use a starshade. A starshade is a large, deployable structure, shaped a lot like a gigantic flower, that blocks out the light from a star, allowing a telescope to see the fainter light from an orbiting planet. Starshades are still in the development stage, but they have the potential to revolutionize the field of exoplanet research. How does a starshade work? It works by creating a shadow in the space between the star and the telescope, with the telescope sitting inside that shadow. The shadow is created by the starshade's petals, which are arranged in a circular or hexagonal pattern. The shape of the petals is designed to reduce diffraction and minimize the amount of stray light that enters the shadow. The starshade is positioned in front of the telescope, and the telescope is pointed at the star. With the light from the star blocked, the reflected light from the planet is revealed and the telescope can then see the faint glow. There are several challenges associated with using starshades to find exoplanets. One challenge is the size of the starshade: it needs to be large enough to block out the light from the star, but it also needs to be light enough to be deployed in space. Another challenge is alignment: the starshade needs to be aligned very precisely with the telescope in order to create a sharp shadow. There are several ongoing projects to develop starshades. One is the HabEx (Habitable Exoplanet Observatory) concept, a NASA mission study that would use a starshade to search for Earth-like planets around nearby stars. Another is the Starshade Rendezvous concept for NASA's Nancy Grace Roman Space Telescope, which would attempt to directly image the habitable zones of nearby Sun-like stars. The goal is to determine whether Earth-like exoplanets exist in the habitable zones of the nearest Sun-like stars and have biosignature gases in their atmospheres. The development of starshades is still in the early stages, but they have the potential to make major advances in our understanding of exoplanets. Starshades could help us to find new exoplanets, to study their atmospheres, and to search for signs of life.
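To give a rough sense of the geometry (an illustration with assumed, representative numbers rather than figures from any specific mission): the smallest star-planet separation a starshade can reveal, its inner working angle, is approximately the starshade radius divided by the starshade-telescope distance.

```python
import math

RAD_TO_MAS = (180 / math.pi) * 3600 * 1000   # radians -> milliarcseconds

def inner_working_angle_mas(shade_radius_m, separation_m):
    """Approximate geometric inner working angle of a starshade, in milliarcseconds."""
    return (shade_radius_m / separation_m) * RAD_TO_MAS

# Assumed, illustrative values: a starshade roughly 26 m across flying about
# 37,000 km from the telescope gives an inner working angle near 70 milliarcseconds,
# comparable to the star-planet separations of interest for nearby systems.
print(round(inner_working_angle_mas(13.0, 3.7e7), 1))
```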
A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction, to direct current (DC), which flows in only one direction. The process is known as rectification. Physically, rectifiers take a number of forms, including vacuum tube diodes, mercury-arc valves, copper and selenium oxide rectifiers, semiconductor diodes, silicon-controlled rectifiers and other silicon-based semiconductor switches. Historically, even synchronous electromechanical switches and motors have been used. Early radio receivers, called crystal radios, used a "cat's whisker" of fine wire pressing on a crystal of galena (lead sulfide) to serve as a point-contact rectifier or "crystal detector". Rectifiers have many uses, but are often found serving as components of DC power supplies and high-voltage direct current power transmission systems. Rectification may serve in roles other than to generate direct current for use as a source of power. As noted, detectors of radio signals serve as rectifiers. In gas heating systems flame rectification is used to detect the presence of a flame. Because of the alternating nature of the input AC sine wave, the process of rectification alone produces a DC current that, though unidirectional, consists of pulses of current. Many applications of rectifiers, such as power supplies for radio, television and computer equipment, require a steady constant DC current (as would be produced by a battery). In these applications the output of the rectifier is smoothed by an electronic filter (usually a capacitor) to produce a steady current. More complex circuitry that performs the opposite function, converting DC to AC, is called an inverter.

Before the development of silicon semiconductor rectifiers, vacuum tube thermionic diodes and copper oxide- or selenium-based metal rectifier stacks were used. With the introduction of semiconductor electronics, vacuum tube rectifiers became obsolete, except for some enthusiasts of vacuum tube audio equipment. For power rectification from very low to very high current, semiconductor diodes of various types (junction diodes, Schottky diodes, etc.) are widely used. Other devices that have control electrodes as well as acting as unidirectional current valves are used where more than simple rectification is required, e.g., where variable output voltage is needed. High-power rectifiers, such as those used in high-voltage direct current power transmission, employ silicon semiconductor devices of various types. These are thyristors or other controlled switching solid-state switches, which effectively function as diodes to pass current in only one direction. Rectifier circuits may be single-phase or multi-phase (three being the most common number of phases). Most low-power rectifiers for domestic equipment are single-phase, but three-phase rectification is very important for industrial applications and for the transmission of energy as DC (HVDC).
Half-wave rectification (M1U)
In half-wave rectification of a single-phase supply, also called an uncontrolled one-pulse midpoint circuit, either the positive or negative half of the AC wave is passed, while the other half is blocked. Because only one half of the input waveform reaches the output, the mean voltage is lower. Half-wave rectification requires a single diode in a single-phase supply, or three in a three-phase supply. Rectifiers yield a unidirectional but pulsating direct current; half-wave rectifiers produce far more ripple than full-wave rectifiers, and much more filtering is needed to eliminate harmonics of the AC frequency from the output. The no-load output DC voltage of an ideal half-wave rectifier for a sinusoidal input voltage is:

Vdc = Vav = Vpeak / π ≈ 0.318 · Vpeak, and Vrms = Vpeak / 2

where:
- Vdc, Vav – the DC or average output voltage,
- Vpeak – the peak value of the phase input voltage,
- Vrms – the root mean square (RMS) value of the output voltage.

Full-wave rectification (B2U)
A full-wave bridge rectifier converts the whole of the input waveform to one of constant polarity (positive or negative) at its output. Full-wave rectification converts both polarities of the input waveform to pulsating DC (direct current), and yields a higher average output voltage. Two diodes and a center-tapped transformer, or four diodes in a bridge configuration and any AC source (including a transformer without center tap), are needed. Single semiconductor diodes, double diodes with common cathode or common anode, and four-diode bridges are manufactured as single components. For single-phase AC, if the transformer is center-tapped, then two diodes back-to-back (cathode-to-cathode or anode-to-anode, depending upon output polarity required) can form a full-wave rectifier. Twice as many turns are required on the transformer secondary to obtain the same output voltage as for a bridge rectifier, but the power rating is unchanged. The average and RMS no-load output voltages of an ideal single-phase full-wave rectifier are:

Vdc = Vav = 2 · Vpeak / π ≈ 0.637 · Vpeak, and Vrms = Vpeak / √2

Very common double-diode rectifier vacuum tubes contained a single common cathode and two anodes inside a single envelope, achieving full-wave rectification with positive output. The 5U4 and 5Y3 were popular examples of this configuration.

Single-phase rectifiers are commonly used for power supplies for domestic equipment. However, for most industrial and high-power applications, three-phase rectifier circuits are the norm. As with single-phase rectifiers, three-phase rectifiers can take the form of a half-wave circuit, a full-wave circuit using a center-tapped transformer, or a full-wave bridge circuit. Thyristors are commonly used in place of diodes to create a circuit that can regulate the output voltage. Many devices that provide direct current actually generate three-phase AC. For example, an automobile alternator contains six diodes, which function as a full-wave rectifier for battery charging.

Three-phase, half-wave circuit (M3U)
An uncontrolled three-phase, half-wave midpoint circuit requires three diodes, one connected to each phase. This is the simplest type of three-phase rectifier but suffers from relatively high harmonic distortion on both the AC and DC connections. This type of rectifier is said to have a pulse-number of three, since the output voltage on the DC side contains three distinct pulses per cycle of the grid frequency. The peak value of this three-pulse DC voltage is calculated from the RMS value VLN of the input phase voltage (line to neutral voltage, 120 V in North America, 230 V within Europe at mains operation): Vpeak = √2 · VLN.
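A quick Python check of the ideal no-load relations above, using an illustrative 230 V RMS input (the variable names are mine):

```python
import math

v_rms_in = 230.0                    # RMS value of the sinusoidal input, volts (illustrative)
v_peak = math.sqrt(2) * v_rms_in    # about 325 V

v_dc_half = v_peak / math.pi        # ideal half-wave average output, about 104 V
v_dc_full = 2 * v_peak / math.pi    # ideal full-wave average output, about 207 V
v_rms_half = v_peak / 2             # RMS of the half-wave output, about 163 V
v_rms_full = v_peak / math.sqrt(2)  # RMS of the full-wave output equals the input RMS, 230 V

print(round(v_peak), round(v_dc_half), round(v_dc_full))   # 325 104 207
```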
The average no-load output voltage results from the integral under the graph of a positive half-wave with a period duration of 2π/3 (from 30° to 150°): V_av = (3·√3/(2·π))·V_peak ≈ 1.17·V_rms, where V_rms = V_peak/√2 is the RMS value of the input phase voltage. Three-phase, full-wave circuit using center-tapped transformer (M6) If the AC supply is fed via a transformer with a center tap, a rectifier circuit with improved harmonic performance can be obtained. This rectifier now requires six diodes, one connected to each end of each transformer secondary winding. This circuit has a pulse-number of six, and in effect can be thought of as a six-phase, half-wave circuit. Before solid-state devices became available, the half-wave circuit and the full-wave circuit using a center-tapped transformer were very commonly used in industrial rectifiers using mercury-arc valves. This was because the three or six AC supply inputs could be fed to a corresponding number of anode electrodes on a single tank, sharing a common cathode. With the advent of diodes and thyristors, these circuits have become less popular and the three-phase bridge circuit has become the most common circuit. Three-phase bridge rectifier uncontrolled (B6U) For an uncontrolled three-phase bridge rectifier, six diodes are used, and the circuit again has a pulse-number of six. For this reason, it is also commonly referred to as a six-pulse bridge. The B6 circuit can be seen, simplified, as a series connection of two M3 three-pulse midpoint circuits. For low-power applications, double diodes in series, with the anode of the first diode connected to the cathode of the second, are manufactured as a single component for this purpose. Some commercially available double diodes have all four terminals available so the user can configure them for single-phase split supply use, half a bridge, or three-phase rectification. For higher-power applications, a single discrete device is usually used for each of the six arms of the bridge. For the very highest powers, each arm of the bridge may consist of tens or hundreds of separate devices in parallel (where very high current is needed, for example in aluminium smelting) or in series (where very high voltages are needed, for example in high-voltage direct current power transmission). The pulsating DC voltage results from the differences of the instantaneous positive and negative phase voltages; these line-to-line voltages are phase-shifted by 30° relative to the phase voltages and have a peak value of √3·V_peak. The ideal, no-load average output voltage of the B6 circuit results from the integral under the graph of a DC voltage pulse with a period duration of π/3 (from 60° to 120°) and peak value √3·V_peak: V_av = (3·√3/π)·V_peak ≈ 2.34·V_rms. If the three-phase bridge rectifier is operated symmetrically (as positive and negative supply voltage), the center point of the rectifier on the output side (the so-called isolated reference potential), relative to the center point of the transformer (or the neutral conductor), carries a potential difference in the form of a triangular common-mode voltage. For this reason, the two centers must never be connected to each other, otherwise short-circuit currents would flow. The ground of the three-phase bridge rectifier in symmetrical operation is thus decoupled from the neutral conductor or the earth of the mains voltage. Powered by a transformer, earthing of the center point of the bridge is possible, provided that the secondary winding of the transformer is electrically isolated from the mains voltage and the star point of the secondary winding is not earthed. In this case, however, (negligible) leakage currents flow through the transformer windings.
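To make these ideal no-load relationships concrete, here is a small illustrative Python sketch (not part of the original article) that evaluates the half-wave, full-wave, three-pulse (M3), and six-pulse (B6) averages from a given RMS phase voltage; the function name and the 230 V example value are assumptions chosen for illustration.

```python
import math

def ideal_no_load_voltages(v_phase_rms):
    """Ideal (lossless, no-load) average output voltages for common
    rectifier circuits, given the RMS phase (line-to-neutral) voltage."""
    v_peak = math.sqrt(2) * v_phase_rms  # peak of the phase voltage
    return {
        "half-wave (M1) V_av": v_peak / math.pi,                            # ~0.318 * V_peak
        "full-wave (B2) V_av": 2 * v_peak / math.pi,                        # ~0.637 * V_peak
        "M3 three-pulse V_av": 3 * math.sqrt(3) / (2 * math.pi) * v_peak,   # ~1.17 * V_rms
        "B6 six-pulse V_av":   3 * math.sqrt(3) / math.pi * v_peak,         # ~2.34 * V_rms
    }

if __name__ == "__main__":
    for name, value in ideal_no_load_voltages(230.0).items():  # 230 V mains, for example
        print(f"{name}: {value:.1f} V")
```

Running the sketch with 230 V reproduces the ratios quoted in the text (about 104 V, 207 V, 269 V, and 538 V respectively).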
The common-mode voltage is formed out of the respective average values of the differences between the positive and negative phase voltages, which form the pulsating DC voltage. The peak value of the triangular common-mode voltage amounts to ¼ of the peak value of the phase input voltage and is calculated from minus half of the DC voltage at 60° of the period: V_cm,peak = 0.25·V_peak. The RMS value of the common-mode voltage follows from the form factor for triangular oscillations: V_cm,rms = V_cm,peak/√3. If the circuit is operated asymmetrically (as a simple supply voltage with just one positive pole), both the positive and negative poles (or the isolated reference potential) pulsate relative to the center (or the ground) of the input voltage, analogously to the positive and negative waveforms of the phase voltages. However, the differences in the phase voltages result in the six-pulse DC voltage (over the duration of a period). The strict separation of the transformer center from the negative pole (otherwise short-circuit currents will flow), or a possible grounding of the negative pole when powered by an isolating transformer, apply correspondingly as in symmetrical operation. Three-phase bridge rectifier controlled (B6C) The controlled three-phase bridge rectifier uses thyristors in place of diodes. The output voltage is reduced by the factor cos(α): V_av = (3·√3/π)·V_peak·cos(α). Or, expressed in terms of the line-to-line input voltage: V_av = (3/π)·V_LLpeak·cos(α), where: - V_LLpeak – the peak value of the line-to-line input voltages, - V_peak – the peak value of the phase (line-to-neutral) input voltages, - α – the firing angle of the thyristor (0 if diodes are used to perform rectification). The above equations are only valid when no current is drawn from the AC supply or in the theoretical case when the AC supply connections have no inductance. In practice, the supply inductance causes a reduction of DC output voltage with increasing load, typically in the range 10–20% at full load. The effect of supply inductance is to slow down the transfer process (called commutation) from one phase to the next. As a result, at each transition between a pair of devices there is a period of overlap during which three (rather than two) devices in the bridge are conducting simultaneously. The overlap angle is usually referred to by the symbol μ (or u), and may be 20 to 30° at full load. With supply inductance taken into account, the output voltage of the rectifier is reduced to: V_av = (3·√3/π)·V_peak·(cos(α) + cos(α + μ))/2. The overlap angle μ is directly related to the DC current, and the above equation may be re-expressed as: V_av = (3·√3/π)·V_peak·cos(α) − (3/π)·ω·L_c·I_d, where: - L_c – the commutating inductance per phase, - I_d – the direct current. Although better than single-phase rectifiers or three-phase half-wave rectifiers, six-pulse rectifier circuits still produce considerable harmonic distortion on both the AC and DC connections. For very high-power rectifiers the twelve-pulse bridge connection is usually used. A twelve-pulse bridge consists of two six-pulse bridge circuits connected in series, with their AC connections fed from a supply transformer that produces a 30° phase shift between the two bridges. This cancels many of the characteristic harmonics the six-pulse bridges produce. The 30 degree phase shift is usually achieved by using a transformer with two sets of secondary windings, one in star (wye) connection and one in delta connection.
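As a hedged illustration of the controlled-bridge relations just described (a sketch under the stated idealizations, not the article's own code), the snippet below evaluates the B6C output voltage for a chosen firing angle, with and without the commutation drop caused by supply inductance; all parameter values in the example call are made up.

```python
import math

def b6c_output_voltage(v_phase_rms, alpha_deg, f_hz=50.0, l_c=0.0, i_d=0.0):
    """Average DC output of a controlled three-phase bridge (B6C).

    v_phase_rms : RMS line-to-neutral input voltage
    alpha_deg   : thyristor firing angle in degrees (0 -> diode bridge)
    l_c         : commutating inductance per phase, in henries
    i_d         : DC load current, in amperes
    """
    v_peak = math.sqrt(2) * v_phase_rms
    v_do = 3 * math.sqrt(3) / math.pi * v_peak          # uncontrolled, no-load value
    alpha = math.radians(alpha_deg)
    commutation_drop = 3 / math.pi * (2 * math.pi * f_hz) * l_c * i_d
    return v_do * math.cos(alpha) - commutation_drop

# Example with assumed values: 230 V phase, alpha = 15 degrees, 1 mH per phase, 100 A load.
print(round(b6c_output_voltage(230.0, 15.0, l_c=1e-3, i_d=100.0), 1))
```

With the assumed numbers this yields roughly 490 V, showing how both the firing angle and the load-dependent commutation term pull the output below the ideal 538 V.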
The simple half-wave rectifier can be built in two electrical configurations with the diode pointing in opposite directions: one version connects the negative terminal of the output directly to the AC supply and the other connects the positive terminal of the output directly to the AC supply. By combining both of these with separate output smoothing it is possible to get an output voltage of nearly double the peak AC input voltage. This also provides a tap in the middle, which allows use of such a circuit as a split rail power supply. A variant of this is to use two capacitors in series for the output smoothing on a bridge rectifier, then place a switch between the midpoint of those capacitors and one of the AC input terminals. With the switch open, this circuit acts like a normal bridge rectifier. With the switch closed, it acts like a voltage doubling rectifier. In other words, this makes it easy to derive a voltage of roughly 320 V (±15%, approx.) DC from any 120 V or 230 V mains supply in the world; this can then be fed into a relatively simple switched-mode power supply. However, for a given desired ripple, the value of both capacitors must be twice the value of the single one required for a normal bridge rectifier; when the switch is closed each one must filter the output of a half-wave rectifier, and when the switch is open the two capacitors are connected in series with an equivalent value of half one of them. Cascaded diode and capacitor stages can be added to make a voltage multiplier (Cockcroft-Walton circuit). These circuits are capable of producing a DC output voltage tens of times that of the peak AC input voltage, but are limited in current capacity and regulation. Diode voltage multipliers, frequently used as a trailing boost stage or primary high voltage (HV) source, are used in HV laser power supplies, powering devices such as cathode ray tubes (CRT) (like those used in CRT based television, radar and sonar displays), photon amplifying devices found in image intensifying and photo multiplier tubes (PMT), and magnetron based radio frequency (RF) devices used in radar transmitters and microwave ovens. Before the introduction of semiconductor electronics, transformerless vacuum tube receivers powered directly from AC power sometimes used voltage doublers to generate roughly 300 VDC from a 100–120 V power line. Rectifier efficiency (η) is defined as the ratio of DC output power to the input power from the AC supply. Even with ideal rectifiers with no losses, the efficiency is less than 100% because some of the output power is AC power rather than DC, which manifests as ripple superimposed on the DC waveform. For a half-wave rectifier the efficiency is very poor: η = P_dc/P_ac = ((V_peak/π)·(I_peak/π)) / ((V_peak/2)·(I_peak/2)) (the divisors are 2 rather than √2 because no power is delivered on the negative half-cycle). Thus the maximum efficiency for a half-wave rectifier is η = 4/π² ≈ 40.5%. Similarly, for a full-wave rectifier, η = 8/π² ≈ 81%. Efficiency is reduced by losses in transformer windings and power dissipation in the rectifier element itself. Efficiency can be improved with the use of smoothing circuits which reduce the ripple and hence reduce the AC content of the output. Three-phase rectifiers, especially three-phase full-wave rectifiers, have much greater efficiencies because the ripple is intrinsically smaller. In some three-phase and multi-phase applications the efficiency is high enough that smoothing circuitry is unnecessary.
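A short sketch of the ideal-efficiency arithmetic above (illustrative only; the resistive-load and lossless-diode assumptions are those implied by the text):

```python
import math

def ideal_efficiency(full_wave: bool) -> float:
    """Ratio of DC output power to AC input power for an ideal rectifier
    feeding a resistive load (no diode or transformer losses)."""
    v_peak = 1.0                                    # normalized peak voltage
    v_dc = (2 if full_wave else 1) * v_peak / math.pi
    v_rms = v_peak / (math.sqrt(2) if full_wave else 2)
    return (v_dc ** 2) / (v_rms ** 2)               # P_dc / P_ac for the same load R

print(f"half-wave: {ideal_efficiency(False):.1%}")  # ~40.5%
print(f"full-wave: {ideal_efficiency(True):.1%}")   # ~81.1%
```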
A real rectifier characteristically drops part of the input voltage (a voltage drop, for silicon devices, of typically 0.7 volts plus an equivalent resistance, in general non-linear)—and at high frequencies, distorts waveforms in other ways. Unlike an ideal rectifier, it dissipates some power. An aspect of most rectification is a loss from the peak input voltage to the peak output voltage, caused by the built-in voltage drop across the diodes (around 0.7 V for ordinary silicon p–n junction diodes and 0.3 V for Schottky diodes). Half-wave rectification and full-wave rectification using a center-tapped secondary produces a peak voltage loss of one diode drop. Bridge rectification has a loss of two diode drops. This reduces output voltage, and limits the available output voltage if a very low alternating voltage must be rectified. As the diodes do not conduct below this voltage, the circuit only passes current through for a portion of each half-cycle, causing short segments of zero voltage (where instantaneous input voltage is below one or two diode drops) to appear between each "hump". Peak loss is very important for low voltage rectifiers (for example, 12 V or less) but is insignificant in high-voltage applications such as HVDC. Rectifier output smoothing While half-wave and full-wave rectification can deliver unidirectional current, neither produces a constant voltage. Producing steady DC from a rectified AC supply requires a smoothing circuit or filter. In its simplest form this can be just a reservoir capacitor or smoothing capacitor, placed at the DC output of the rectifier. There is still an AC ripple voltage component at the power supply frequency for a half-wave rectifier, twice that for full-wave, where the voltage is not completely smoothed. Sizing of the capacitor represents a tradeoff. For a given load, a larger capacitor reduces ripple but costs more and creates higher peak currents in the transformer secondary and in the supply that feeds it. The peak current is set in principle by the rate of rise of the supply voltage on the rising edge of the incoming sine-wave, but in practice it is reduced by the resistance of the transformer windings. In extreme cases where many rectifiers are loaded onto a power distribution circuit, peak currents may cause difficulty in maintaining a correctly shaped sinusoidal voltage on the ac supply. To limit ripple to a specified value the required capacitor size is proportional to the load current and inversely proportional to the supply frequency and the number of output peaks of the rectifier per input cycle. The load current and the supply frequency are generally outside the control of the designer of the rectifier system but the number of peaks per input cycle can be affected by the choice of rectifier design. A half-wave rectifier only gives one peak per cycle, and for this and other reasons is only used in very small power supplies. A full wave rectifier achieves two peaks per cycle, the best possible with a single-phase input. For three-phase inputs a three-phase bridge gives six peaks per cycle. Higher numbers of peaks can be achieved by using transformer networks placed before the rectifier to convert to a higher phase order. To further reduce ripple, a capacitor-input filter can be used. This complements the reservoir capacitor with a choke (inductor) and a second filter capacitor, so that a steadier DC output can be obtained across the terminals of the filter capacitor. The choke presents a high impedance to the ripple current. 
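The capacitor-sizing proportionality described above can be turned into a rough rule of thumb. The sketch below is an approximation (it assumes the capacitor discharges at a constant load current between output peaks; the numbers in the example are illustrative, not from the article) for estimating the reservoir capacitance needed for a target peak-to-peak ripple:

```python
def reservoir_capacitance(i_load, v_ripple_pp, f_supply=50.0, peaks_per_cycle=2):
    """Rough reservoir-capacitor estimate: C ~ I / (f_ripple * dV).

    i_load          : DC load current in amperes
    v_ripple_pp     : allowed peak-to-peak ripple voltage
    f_supply        : AC supply frequency in hertz
    peaks_per_cycle : 1 half-wave, 2 full-wave, 6 three-phase bridge
    """
    f_ripple = f_supply * peaks_per_cycle
    return i_load / (f_ripple * v_ripple_pp)        # farads

# Example with assumed values: 1 A load, 2 V ripple, 50 Hz full-wave bridge.
c = reservoir_capacitance(1.0, 2.0)
print(f"{c * 1e6:.0f} uF")                          # ~5000 uF
```

The formula makes the trade-offs stated in the text explicit: more load current or less allowed ripple means a larger capacitor, while a higher ripple frequency (more peaks per cycle) means a smaller one.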
For use at power-line frequencies inductors require cores of iron or other magnetic materials, and add weight and size. Their use in power supplies for electronic equipment has therefore dwindled in favour of semiconductor circuits such as voltage regulators. A more usual alternative to a filter, and essential if the DC load requires very low ripple voltage, is to follow the reservoir capacitor with an active voltage regulator circuit. The reservoir capacitor must be large enough to prevent the troughs of the ripple dropping below the minimum voltage required by the regulator to produce the required output voltage. The regulator serves both to significantly reduce the ripple and to deal with variations in supply and load characteristics. It would be possible to use a smaller reservoir capacitor (these can be large on high-current power supplies) and then apply some filtering as well as the regulator, but this is not a common strategy. The extreme of this approach is to dispense with the reservoir capacitor altogether and put the rectified waveform straight into a choke-input filter. The advantage of this circuit is that the current waveform is smoother and consequently the rectifier no longer has to deal with the current as a large current pulse, but instead the current delivery is spread over the entire cycle. The disadvantage, apart from extra size and weight, is that the voltage output is much lower – approximately the average of an AC half-cycle rather than the peak. The primary application of rectifiers is to derive DC power from an AC supply (AC to DC converter). Virtually all electronic devices require DC, so rectifiers are used inside the power supplies of virtually all electronic equipment. Converting DC power from one voltage to another is much more complicated. One method of DC-to-DC conversion first converts power to AC (using a device called an inverter), then uses a transformer to change the voltage, and finally rectifies power back to DC. A frequency of typically several tens of kilohertz is used, as this requires much smaller inductance than at lower frequencies and obviates the use of heavy, bulky, and expensive iron-cored units. Rectifiers are also used for detection of amplitude modulated radio signals. The signal may be amplified before detection. If not, a very low voltage drop diode or a diode biased with a fixed voltage must be used. When using a rectifier for demodulation the capacitor and load resistance must be carefully matched: too low a capacitance makes the high frequency carrier pass to the output, and too high makes the capacitor just charge and stay charged. Rectifiers supply polarised voltage for welding. In such circuits control of the output current is required; this is sometimes achieved by replacing some of the diodes in a bridge rectifier with thyristors, effectively diodes whose voltage output can be regulated by switching on and off with phase fired controllers. Thyristors are used in various classes of railway rolling stock systems so that fine control of the traction motors can be achieved. Gate turn-off thyristors are used to produce alternating current from a DC supply, for example on the Eurostar Trains to power the three-phase traction motors. Before about 1905 when tube type rectifiers were developed, power conversion devices were purely electro-mechanical in design. Mechanical rectification systems used some form of rotation or resonant vibration (e.g. 
vibrators) driven by electromagnets, which operated a switch or commutator to reverse the current. These mechanical rectifiers were noisy and had high maintenance requirements. The moving parts had friction, which required lubrication and replacement due to wear. Opening mechanical contacts under load resulted in electrical arcs and sparks that heated and eroded the contacts. They also were not able to handle AC frequencies above several thousand cycles per second. To convert alternating into direct current in electric locomotives, a synchronous rectifier may be used. It consists of a synchronous motor driving a set of heavy-duty electrical contacts. The motor spins in time with the AC frequency and periodically reverses the connections to the load at an instant when the sinusoidal current goes through a zero-crossing. The contacts do not have to switch a large current, but they must be able to carry a large current to supply the locomotive's DC traction motors. These consisted of a resonant reed, vibrated by an alternating magnetic field created by an AC electromagnet, with contacts that reversed the direction of the current on the negative half cycles. They were used in low power devices, such as battery chargers, to rectify the low voltage produced by a step-down transformer. Another use was in battery power supplies for portable vacuum tube radios, to provide the high DC voltage for the tubes. These operated as a mechanical version of modern solid state switching inverters, with a transformer to step the battery voltage up, and a set of vibrator contacts on the transformer core, operated by its magnetic field, to repeatedly break the DC battery current to create a pulsing AC to power the transformer. Then a second set of rectifier contacts on the vibrator rectified the high AC voltage from the transformer secondary to DC. A motor-generator set, or the similar rotary converter, is not strictly a rectifier as it does not actually rectify current, but rather generates DC from an AC source. In an "M-G set", the shaft of an AC motor is mechanically coupled to that of a DC generator. The DC generator produces multiphase alternating currents in its armature windings, which a commutator on the armature shaft converts into a direct current output; or a homopolar generator produces a direct current without the need for a commutator. M-G sets are useful for producing DC for railway traction motors, industrial motors and other high-current applications, and were common in many high-power D.C. uses (for example, carbon-arc lamp projectors for outdoor theaters) before high-power semiconductors became widely available. The electrolytic rectifier was a device from the early twentieth century that is no longer used. A home-made version is illustrated in the 1913 book The Boy Mechanic but it would only be suitable for use at very low voltages because of the low breakdown voltage and the risk of electric shock. A more complex device of this kind was patented by G. W. Carpenter in 1928 (US Patent 1671970). When two different metals are suspended in an electrolyte solution, direct current flowing one way through the solution sees less resistance than in the other direction. Electrolytic rectifiers most commonly used an aluminum anode and a lead or steel cathode, suspended in a solution of tri-ammonium ortho-phosphate. The rectification action is due to a thin coating of aluminum hydroxide on the aluminum electrode, formed by first applying a strong current to the cell to build up the coating. 
The rectification process is temperature-sensitive, and for best efficiency should not operate above 86 °F (30 °C). There is also a breakdown voltage where the coating is penetrated and the cell is short-circuited. Electrochemical methods are often more fragile than mechanical methods, and can be sensitive to usage variations, which can drastically change or completely disrupt the rectification processes. Similar electrolytic devices were used as lightning arresters around the same era by suspending many aluminium cones in a tank of tri-ammonium ortho-phosphate solution. Unlike the rectifier above, only aluminium electrodes were used, and used on A.C., there was no polarization and thus no rectifier action, but the chemistry was similar. The modern electrolytic capacitor, an essential component of most rectifier circuit configurations was also developed from the electrolytic rectifier. The development of vacuum tube technology in the early 20th century resulted in the invention of various tube-type rectifiers, which largely replaced the noisy, inefficient mechanical rectifiers. A rectifier used in high-voltage direct current (HVDC) power transmission systems and industrial processing between about 1909 to 1975 is a mercury-arc rectifier or mercury-arc valve. The device is enclosed in a bulbous glass vessel or large metal tub. One electrode, the cathode, is submerged in a pool of liquid mercury at the bottom of the vessel and one or more high purity graphite electrodes, called anodes, are suspended above the pool. There may be several auxiliary electrodes to aid in starting and maintaining the arc. When an electric arc is established between the cathode pool and suspended anodes, a stream of electrons flows from the cathode to the anodes through the ionized mercury, but not the other way (in principle, this is a higher-power counterpart to flame rectification, which uses the same one-way current transmission properties of the plasma naturally present in a flame). These devices can be used at power levels of hundreds of kilowatts, and may be built to handle one to six phases of AC current. Mercury-arc rectifiers have been replaced by silicon semiconductor rectifiers and high-power thyristor circuits in the mid 1970s. The most powerful mercury-arc rectifiers ever built were installed in the Manitoba Hydro Nelson River Bipole HVDC project, with a combined rating of more than 1 GW and 450 kV. Argon gas electron tube The General Electric Tungar rectifier was a mercury vapor (ex.:5B24) or argon (ex.:328) gas-filled electron tube device with a tungsten filament cathode and a carbon button anode. It operated similarly to the thermionic vacuum tube diode, but the gas in the tube ionized during forward conduction, giving it a much lower forward voltage drop so it could rectify lower voltages. It was used for battery chargers and similar applications from the 1920s until lower-cost metal rectifiers, and later semiconductor diodes, supplanted it. These were made up to a few hundred volts and a few amperes rating, and in some sizes strongly resembled an incandescent lamp with an additional electrode. The 0Z4 was a gas-filled rectifier tube commonly used in vacuum tube car radios in the 1940s and 1950s. It was a conventional full-wave rectifier tube with two anodes and one cathode, but was unique in that it had no filament (thus the "0" in its type number). The electrodes were shaped such that the reverse breakdown voltage was much higher than the forward breakdown voltage. 
Once the breakdown voltage was exceeded, the 0Z4 switched to a low-resistance state with a forward voltage drop of about 24 V. Diode vacuum tube (valve) The thermionic vacuum tube diode, originally called the Fleming valve, was invented by John Ambrose Fleming in 1904 as a detector for radio waves in radio receivers, and evolved into a general rectifier. It consisted of an evacuated glass bulb with a filament heated by a separate current, and a metal plate anode. The filament emitted electrons by thermionic emission (the Edison effect), discovered by Thomas Edison in 1884, and a positive voltage on the plate caused a current of electrons through the tube from filament to plate. Since only the filament produced electrons, the tube would only conduct current in one direction, allowing the tube to rectify an alternating current. Vacuum diode rectifiers were widely used in power supplies in vacuum tube consumer electronic products, such as phonographs, radios, and televisions, for example the All American Five radio receiver, to provide the high DC plate voltage needed by other vacuum tubes. "Full-wave" versions with two separate plates were popular because they could be used with a center-tapped transformer to make a full-wave rectifier. Vacuum rectifiers were made for very high voltages, such as the high voltage power supply for the cathode ray tube of television receivers, and the kenotron used for power supply in X-ray equipment. However, compared to modern semiconductor diodes, vacuum rectifiers have high internal resistance due to space charge and therefore high voltage drops, causing high power dissipation and low efficiency. They are rarely able to handle currents exceeding 250 mA owing to the limits of plate power dissipation, and cannot be used for low voltage applications, such as battery chargers. Another limitation of the vacuum tube rectifier is that the heater power supply often requires special arrangements to insulate it from the high voltages of the rectifier circuit. In musical instrument amplification (especially for electric guitars), the slight delay or "sag" between a signal increase (for instance, when a guitar chord is struck hard and fast) and the corresponding increase in output voltage is a notable effect of tube rectification, and results in compression. The choice between tube rectification and diode rectification is a matter of taste; some amplifiers have both and allow the player to choose. The cat's-whisker detector was the earliest type of semiconductor diode. It consisted of a crystal of some semiconducting mineral, usually galena (lead sulfide), with a light springy wire touching its surface. Invented by Jagadish Chandra Bose and developed by G. W. Pickard around 1906, it served as the radio wave rectifier in the first widely used radio receivers, called crystal radios. Its fragility and limited current capability made it unsuitable for power supply applications. It became obsolete around 1920, but later versions served as microwave detectors and mixers in radar receivers during World War 2. Selenium and copper oxide rectifiers Once common until replaced by more compact and less costly silicon solid-state rectifiers in the 1970s, these units used stacks of metal plates and took advantage of the semiconductor properties of selenium or copper oxide. 
While selenium rectifiers were lighter in weight and used less power than comparable vacuum tube rectifiers, they had the disadvantages of a finite life expectancy and increasing resistance with age, and were only suitable for use at low frequencies. Both selenium and copper oxide rectifiers have somewhat better tolerance of momentary voltage transients than silicon rectifiers. Typically these rectifiers were made up of stacks of metal plates or washers, held together by a central bolt, with the number of stacks determined by voltage; each cell was rated for about 20 V. An automotive battery charger rectifier might have only one cell; the high-voltage power supply for a vacuum tube might have dozens of stacked plates. Current density in an air-cooled selenium stack was about 600 mA per square inch of active area (about 90 mA per square centimeter). Silicon and germanium diodes In the modern world, silicon diodes are the most widely used rectifiers for lower voltages and powers, and have largely replaced earlier germanium diodes. For very high voltages and powers, the added need for controllability has in practice led to replacing simple silicon diodes with high-power thyristors (see below) and their newer actively gate-controlled cousins. High power: thyristors (SCRs) and newer silicon-based voltage sourced converters In high-power applications, from 1975 to 2000, most mercury-arc valve rectifiers were replaced by stacks of very high power thyristors, silicon devices with two extra layers of semiconductor in comparison to a simple diode. In medium-power transmission applications, even more complex and sophisticated voltage sourced converter (VSC) silicon semiconductor rectifier systems, such as insulated gate bipolar transistors (IGBT) and gate turn-off thyristors (GTO), have made smaller high voltage DC power transmission systems economical. All of these devices function as rectifiers. As of 2009 it was expected that these high-power silicon "self-commutating switches", in particular IGBTs and a variant thyristor (related to the GTO) called the integrated gate-commutated thyristor (IGCT), would be scaled up in power rating to the point that they would eventually replace simple thyristor-based AC rectification systems for the highest power-transmission DC applications. A major area of research is to develop higher frequency rectifiers that can rectify into terahertz and light frequencies. These devices are used in optical heterodyne detection, which has myriad applications in optical fiber communication and atomic clocks. Another prospective application for such devices is to directly rectify light waves picked up by tiny antennas, called nantennas, to produce DC electric power. It is thought that arrays of nantennas could be a more efficient means of producing solar power than solar cells. A related area of research is to develop smaller rectifiers, because a smaller device has a higher cutoff frequency. Research projects are attempting to develop a unimolecular rectifier, a single organic molecule that would function as a rectifier.
Worksheet. December 15th , 2020. Free printable measuring angles worksheets a … Complementary and supplementary word problems worksheet. Measuring angles worksheet geometry. It may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math. Free printable math worksheet angles. His or her job is to use a standard protractor to measure the angles in degrees, extending the lines with a straight edge if necessary. Measuring angles worksheet, word docs & powerpoints. In these worksheets, students use a protractor to draw and measure angles and determine if the angles are acute, obtuse, straight or 90 degrees. Learn some basic geometry with this worksheet all about the angle. Worksheetworks.com is an online resource used every day by thousands of teachers, students and parents. To gain access to our editable content join the geometry teacher community! In these exercises, students measure angles with a real protractor.in the last two worksheets, students also classify the angles as being acute, obtuse or a right angle. Printable geometry worksheet for 4th grade and up to practice using a protractor. Some of the worksheets for this concept are 11 arcs and central angles, measuring angles and arcs, name block geometry work date, nag10110 to, measuring angles, arcs and angles formed by secants and tangents from a, grade 5 geometry work, geometry 10 2 angles and arcs. See more ideas about math, math geometry, teaching math. Line up one side of the angle with the zero line of the protractor (line with the number 0). This worksheet includes only acute and obtuse angles whose measures are offered in increments of five degrees. The degrees where the other side crosses the number scale is the angle.you can learn how to use this handy skills simply and easily using this angles worksheet. Using the measuring angles worksheet, students measuring ten angles using a protractor and then identify the type of angle (acute, obtuse, right, or straight). Geometry worksheets angles worksheets for practice and study. Measure these angles with a protractor. This worksheet provides the student with a set of angles. Geometry involves a lot of measuring, and angles are one of the most important measurements to take. Using a real protractor to measure angles. With fun activities, including measuring spiderwebs, steering wheels, and laser beams, your child is sure to enjoy our kind of geometry. There are a range of worksheets to help children learn to classify angles and measure angles using a protractor. Sum of the angles in a triangle is 180 degree worksheet. Drawing and measuring angles with a protractor. The size of the angle is the turn from one arm of the angle to the other, and to measure this, we require a protractor that comes with an outer and an inner scale. Angles are a part of the foundation of upper grades geometry lessons. The easiest way to measure angles is with a protractor. put the midpoint of the protractor on the vertex of the angle you are measuring. As they measure, students will learn that acute angles are less than 90°. Let your math practice ring authentic with this pdf worksheet that helps students of grade 4 bring home the covetous skill of reading the inner scale of the protractor. The angles worksheets are randomly created and will never repeat so you have an endless supply of quality angles worksheets to use in the classroom or at home. 
Here you will find hundreds of lessons, a community of teachers for support, and materials that are always up to date with the latest standards. Our measuring angles worksheets make angle practice easy. Here is a graphic preview for all of the angles worksheets. You can select different variables to customize these angles worksheets for your needs. Here is your free content for this lesson! In addition to measuring angles, these resources will help students learn the difference between acute, obtuse, and right angles. Types of angles help students learn to differentiate between acute, obtuse, and right angles with these printables. Similar to the above listing, the resources below are aligned to related standards in the Common Core for mathematics that together support the following learning outcome: understand concepts of angle and measure angles. Angles are an important concept in geometry, and hence it becomes vital for grade 4 and grade 5 children to learn to measure them. Obtuse angles are over 90°. Angles worksheets develop protractor usage skills. When teaching measuring angles to your KS2 students, the easiest way to do so is by using a protractor: put the midpoint of the protractor on the vertex of the angle you are measuring, and line up one side of the angle with the zero line of the protractor (the line with the number 0). The links below will connect you to sections of our site with geometry activities and printables on angles, angle types, and angle measurement. Students need a lot of practice when measuring angles with a protractor when they start learning.
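Since these pages repeatedly ask students to classify a measured angle, a tiny illustrative helper (not part of any worksheet; the thresholds simply follow the definitions given above) might look like this:

```python
def classify_angle(degrees: float) -> str:
    """Classify an angle measured with a protractor, in degrees."""
    if 0 < degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if 90 < degrees < 180:
        return "obtuse"
    if degrees == 180:
        return "straight"
    return "reflex or invalid for a standard protractor"

for measured in (35, 90, 120, 180):
    print(measured, classify_angle(measured))
```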
These Function Table Worksheets are great for giving students practice in computing the outputs for different linear equations. You may select between four different types of equations. These Function Table Worksheets will generate 12 function table problems per worksheet. These Function Table Worksheets are appropriate for 4th Grade, 5th Grade, 6th Grade, and 7th Grade. Our grade 3 geometry worksheets review the properties of, and classification of, two-dimensional shapes, particularly circles, triangles, quadrilaterals and polygons. We also focus on the definition and classification of lines and angles. The areas and perimeters of rectangular shapes are reviewed, as are the concepts of congruence and symmetry. This section contains all of the graphic previews for the Polynomial Functions Worksheets. We currently have worksheets covering naming polynomials, factoring, The Remainder Theorem, Irrational and Imaginary Root Theorems, Descartes' Rule of Signs, The Rational Root Theorem, polynomial equations, basic shapes and graphs of polynomials, graphing polynomial functions, and The Binomial Theorem. These Polynomial Functions Worksheets are a good resource for students in the 9th Grade through the 12th Grade. These Division Worksheets produce problems in which you must divide a 3 digit decimal number by a single digit number. You may select between 12, 15, 18, 21, 24 or 30 problems for these division worksheets. This section contains all of the graphic previews for the Triangle Worksheets. We have a triangle fact sheet, identifying triangles, area and perimeters, the triangle inequality theorem, triangle inequalities of angles and angles, triangle angle sum, the exterior angle theorem, angle bisectors, median of triangles, finding a centroid from a graph and a set of vertices for your use. These geometry worksheets are a good resource for children in the 5th Grade through the 10th Grade. This Graph Paper generator will produce a blank page of trigonometric graph paper with the x-axis from zero to 2 Pi and two grids per page. You may select the type of label you wish to use for the X-Axis. These mixed problems worksheets are great for working on adding, subtracting, multiplying, and dividing two fractions on the same worksheet. You may select between three different degrees of difficulty and randomize or keep in order the operations for the problems. These mixed problems worksheets will produce 12 problems per page.
Even though the planet is not very far from Earth, Mercury will not be an easy place to view through a telescope, or even to visit, due to its closeness to our Sun. Will this be a problem for future travel agencies? By: Vanessa Uy Our Solar System's smallest and innermost planet, Mercury, has for ages proved notoriously difficult for Earth-based astronomical observation. This piece of rock, with a diameter of 3,030 miles (4,800 km), is hardly bigger than our Moon (2,160 miles). Mercury is also not that far from Earth, sometimes coming in as near as 48,337,000 miles, and it is fairly bright when viewed from ground level. The main reason the planet Mercury is so difficult for Earth-based observers is the planet's closeness to our Sun. The angle between Mercury (which appears as a fairly bright star when viewed with the naked eye at ground level) and our Sun is always less than that between the two hands of a watch at 1 o'clock. This "quirk of geometry" makes itself known every time you try to observe the planet Mercury: at daytime, the Sun's blazing light complicates optical observation of the planet, and when nighttime comes, the planet disappears from view almost as quickly as our Sun does. Mercury can be seen alone only when it is low above the horizon, just before sunrise or soon after sunset. Observations at such low angles are seldom satisfactory because of the great distance that the planet Mercury's light must travel through the Earth's murky and turbulent lower atmosphere. Despite the handicaps of Earth-based observations, astronomers were able to measure the rate of rotation of the planet Mercury via Doppler radar. In 1965 the great radio telescope at Arecibo, Puerto Rico, measured the rotation of the planet Mercury by the Doppler shifts of wavelength in radar echoes from its surface. But more sophisticated, and therefore more reliable, surface observations of the planet Mercury necessitate the use of unmanned interplanetary probes. Through Earth-based optical telescopes, the planet Mercury always appeared as a nearly featureless blob. Then came the Mariner X (Mariner 10) flybys, whose first-ever close-up photographs of the planet Mercury's surface produced an astonished double take among the astronomical community back in 1974. The volumes of data gathered by the Mariner X space probe had the astronomical community concluding back then that the planet Mercury is like our Moon on the outside, but may well be like the planet Earth on the inside. Like our Moon, the surface of Mercury is pocked with craters and lava-filled basins. But Mariner X also detected an Earth-like magnetic field. Scientists knew that planetary magnetism was produced by a "dynamo effect," the rapid rotation of iron-cored planets like the Earth. But the planet Mercury rotates far too slowly, once every 58.6 Earth days, for the "dynamo effect" to work. So back in 1974, scientists postulated that a large iron core could also produce magnetism in a slowly rotating body. Mariner X's close-up photographs also revealed scarps, or cliffs, towering some two miles high and snaking for hundreds of miles through Mercury's cratered regions. These findings made scientists think back in 1974 that the scarps are wrinkles that formed some 4 billion years ago, when the planet's core began to shrink and made the planet's surface crack.
Despite the wealth of data collected by the Mariner X spacecraft, the many mysteries surrounding the phenomena that occur on the planet Mercury necessitate the use of more sophisticated space probes with more advanced instruments in upcoming planetary exploration programs. Then came the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) probe. The NASA spacecraft was launched on August 3, 2004 to further study the planet Mercury from orbit and to augment the data collected from the Mariner X program that ended back in March 1975. The current MESSENGER mission is the first to visit the planet Mercury in over 30 years. The MESSENGER spacecraft is fitted with the latest generation of scientific instruments that allows it to study from orbit not only the chemical composition of Mercury's surface, but also the planet's environment, geologic history, the nature of the magnetic field, the size and state of the core, the volatile inventory at the poles, and the nature of Mercury's exosphere and magnetosphere over a nominal orbital mission of one Earth year. The current MESSENGER spacecraft has vastly improved optics for improved scanning capability. The cameras supplied to MESSENGER are capable of resolving surface features that are only 18 meters (59 feet) across, a vast improvement over the 1.6-kilometer (0.99 mile) resolution of Mariner X. MESSENGER will also be able to image the entire planet, as opposed to the previous Mariner X mission, which was only able to observe the one hemisphere that was lit during the spacecraft's flybys. After being launched on a Boeing Delta II rocket, the MESSENGER spacecraft's journey to the planet Mercury required an extremely large velocity change, or delta-v (known colloquially to aerospace types as "delta vee"), to perform a Hohmann transfer, because Mercury lies deeper in the Sun's gravity well. A spacecraft travelling to Mercury is greatly accelerated as it falls toward the Sun, so most of the fuel expenditure is used to slow it down so that the spacecraft can enter Mercury's orbit. MESSENGER's voyage to the planet Mercury therefore requires extensive use of gravity assists to lower the spacecraft's fuel expenditure, although this greatly prolongs the trip. To save rocket fuel even further, because there are still no refilling stations for hydrazine and nitrogen tetroxide along the spacecraft's flight path en route to Mercury, the thrust used for insertion into orbit around Mercury will be minimized, resulting in a notably elliptical orbit. Besides the advantage of saving its own propellants, such an orbit allows the MESSENGER spacecraft to measure solar wind and magnetic field strength at various distances from Mercury. Despite the notably elliptical orbit, the improved instrumentation of MESSENGER still allows close-up measurements and photographs of Mercury's surface. As of January 14, 2008, MESSENGER had mapped another 30% of Mercury's surface in addition to the photos taken by Mariner X back in 1974 to 1975. Full orbital insertion of the MESSENGER spacecraft around Mercury will happen on March 18, 2011. As a follow-up to the MESSENGER mission, the European Space Agency is planning a joint mission with Japan called BepiColombo, which will orbit the planet Mercury with two space probes: one to map the planet and the other to study the planet's magnetosphere.
The original plan to include a lander has been shelved due to budget constraints and its dubious scientific value. A Russian Soyuz rocket will launch the "bus" carrying the two probes in 2013 from ESA's Guiana Space Centre, to take advantage of the fuel savings gained by launching from an equatorial location. As with the MESSENGER spacecraft, the BepiColombo "bus" will make close approaches to other planets en route to Mercury for orbit-changing gravitational assists. The BepiColombo "bus" will first fly past our Moon, then past the planet Venus, and will make several approaches to the planet Mercury before entering orbit.
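To illustrate why trips to Mercury demand such a large delta-v, here is a simplified, illustrative sketch (a back-of-the-envelope model, not MESSENGER's or BepiColombo's actual trajectory design): it treats Earth and Mercury as circular, coplanar orbits around the Sun and ignores the planets' own gravity wells and any gravity assists.

```python
import math

MU_SUN = 1.327e20        # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11            # astronomical unit, m

def hohmann_delta_v(r1, r2, mu=MU_SUN):
    """Total delta-v (m/s) for a two-impulse Hohmann transfer between
    circular, coplanar orbits of radii r1 (start) and r2 (target)."""
    a_transfer = (r1 + r2) / 2.0
    v1_circ = math.sqrt(mu / r1)
    v2_circ = math.sqrt(mu / r2)
    v1_transfer = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))  # speed at departure radius
    v2_transfer = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))  # speed at arrival radius
    return abs(v1_transfer - v1_circ) + abs(v2_circ - v2_transfer)

# Earth's orbit (~1 AU) to Mercury's orbit (~0.387 AU), heliocentric only.
dv = hohmann_delta_v(1.0 * AU, 0.387 * AU)
print(f"idealized heliocentric delta-v: {dv / 1000:.1f} km/s")
```

Even this idealized calculation gives on the order of 17 km/s, which is why real missions lean so heavily on planetary flybys instead of carrying all that capability as propellant.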
What's an AquaRAP? AquaRAP is short for Aquatic Rapid Assessment Program. The program is designed to quickly collect, analyze, and disseminate scientific data on freshwater aquatic ecosystems for use in conservation planning. Field expeditions typically last only three to four weeks, while data analysis and report preparation is expected within six to eight months after the expedition. AquaRAP teams survey such taxonomic groups as fishes, macro-crustaceans, aquatic insects, aquatic plants, and plankton. The scientists also study water chemistry and hydrology. Focusing on entire watersheds, the team studies the biological diversity, degree of endemism, uniqueness, and ecological connections within each system. Scientists carefully choose specific survey sites by consulting satellite imagery and overflights before a trip. In the field, they survey specific taxonomic groups as well as indicator species, taxa whose presence can help identify a habitat type or its condition. AquaRAP provides a primarily qualitative assessment to determine how a system is faring and what threats it faces. This survey usually precedes long-term scientific inventory and research which is often based on the data collected by the AquaRAP team. Why are scientists focusing on the Caura River Basin? As one of the most pristine ecosystems in South America, the Caura Basin and its inhabitants remain a tantalizing mystery to conservation biologists. The lack of human activity in the region leads the team to believe the Caura is a thriving ecosystem that may even contain new species. Located in southern Venezuela, the Caura comprises 5 percent of the entire country and supports the region's 15,000 inhabitants. The aquatic ecosystems provide freshwater, food, transportation and habitat for local communities as well as for wildlife. However, serious threats exist from a proposed water diversion project, encroachment by miners and deforestation for agricultural pursuits. What species will the biologists study? Previous studies of the Caura region are few in number, but they indicate that high levels of biodiversity endemism (species found nowhere else) exist. Approximately 257 bird species, 208 mammal species and up to 450 fish species are part of this unique ecosystem. This AquaRAP survey will likely yield high numbers of terrestrial and aquatic organisms, and may even uncover some species that are new to science. The AquaRAP team will conduct rapid surveys of aquatic organisms, including fishes, shrimp, crabs, plankton, zoobenthoths, and riparian vegetation, as well as a general survey of ecology and geomorphology. Team limnologists (fresh water scientists) will also evaluate water, chemistry and quality. What will be done with data collected? AquaRAP aspires to make the results of the surveys available to decision-makers, scientists, conservation groups, and the general public as "RAPidly" as possible. The challenge for AquaRAP participants is to cover vast areas in a short amount of time to consolidate data in order to complete apreliminary report before leaving Venezuela. In addition, the data will be used to generate a final report, which will make recommendations regarding the conservation and management of these critical resources. The data collected by CI will be used to establish a long term monitoring program for the region. A member of the indigenous community is accompanying the AquaRAP team to work with the scientists to create ways that the data can be made useful to the people who live there. 
What is life like on the expedition? According to RAP Coordinator, Jensen Montambault, the Caura expedition will be "the most rugged AquaRAP ever!" This is mainly due to the site's remote location. After flying over the site earlier this year she said, "all you can see is the thick forest canopy for miles around. Since the Ye'kwana and Sanema people mainly use the river for transportation, there are absolutely no roads." Scientists will be living under primitive conditions during the expedition, traveling in dugout canoe by day and camping along the river's edge most nights. Whitewater rapids will pose the greatest danger to the team, therefore team members will wear life jackets while in the canoes. A helicopter will transport the team down the Salto Parà, the largest waterfall in the river system which is not navigable by boat. The team carries a satellite phone for emergency communication. Throughout the expedition, care will be taken not to disturb indigenous communities, who have graciously permitted this study of their homeland. IN DEPTH: Be sure to read the field dispatches to learn more about life on the expedition.
Graph aggregate demand and supply. The downward sloping demand curve becomes the aggregate demand curve; the upward sloping supply curve becomes the aggregate supply curve. Instead of "price" on the Y-axis, we have "price level". Instead of "quantity" on the X-axis, we have "Real GDP", a measure of the size of the economy. Find out how aggregate demand is calculated in macroeconomic models. See what kinds of factors can cause the aggregate demand curve to shift left or right. The Aggregate Demand Curve. In Unit 2, we learned that a demand curve illustrates the relationship between quantity demanded and the price of one product. Aggregate demand represents the quantity demanded of all products in a certain country or area at different price levels. The aggregate demand curve is downward sloping, just like one product's demand curve. Short-run aggregate supply curve. The short-run aggregate supply (SAS) curve is considered a valid description of the supply schedule of the economy only in the short run. The short run is the period that begins immediately after an increase in the price level and that ends when input prices have increased in the same proportion as the increase in the price level. AGGREGATE DEMAND, AGGREGATE SUPPLY, AND THE PHILLIPS CURVE. The model of aggregate demand and aggregate supply provides an easy explanation for the menu of possible outcomes described by the Phillips curve. The Phillips curve simply shows the combinations of inflation and unemployment that arise in the short run as shifts in the aggregate-demand curve move the … Understanding how aggregate demand is different from demand for a specific good or service. Justifications for the aggregate demand curve being downward sloping. What is Aggregate Supply and Demand? Aggregate supply and demand refers to the concept of supply and demand, but applied at a macroeconomic scale. This model combines the IS and LM curves to form the aggregate demand curve, which is negatively sloped; hence when prices are high, demand is lower. Therefore, each point on the aggregate demand curve is an outcome of this model. Aggregate demand occurs at the point where the IS and LM curves intersect at a particular price. The 'natural rate of unemployment' is the rate of unemployment at equilibrium; at this rate wages are in equilibrium, and aggregate demand and aggregate supply are also in balance. If the demand for labor decreases, then wages will fall and the labor employed falls. It follows that at the given wage rate, those who want to work will work. This model is called the aggregate demand/aggregate supply model. This module will explain aggregate supply, aggregate demand, and the equilibrium between them. The following modules will discuss the causes of shifts in aggregate supply and aggregate demand.
The Aggregate Supply Curve and Potential GDP. Aggregate supply is the overall total production of goods and services in a particular economy. It can be shown via a supply curve; this curve shows the relationship between overall production and the price level, i.e. the amount of goods and services supplied at different price levels. Aggregate Demand, Aggregate Supply, and the Business Cycle. Having explained the theoretical framework, we are now ready to explain business cycle behavior using the Aggregate Demand/Aggregate Supply model. Generally, economic expansions and contractions are driven by shifts in the Aggregate Demand or Aggregate Supply curves. Unlike the aggregate demand curve, the aggregate supply curve does not usually shift independently; this is because the equation for the aggregate supply curve contains no terms that are indirectly related to either the price level or output, only terms from within the model itself. The aggregate demand curve illustrates the relationship between two factors: the quantity of output that is demanded and the aggregate price level. Aggregate demand is expressed contingent upon a fixed level of the nominal money supply. There are many factors that can shift the AD curve. In the simple Keynesian version of the model, the aggregate supply curve is horizontal until it reaches the point of full employment, where it becomes vertical. At AD1, output is below full employment. There is a deflationary gap between AD* and AD1 on the vertical AS curve, which means that equilibrium output is less than full employment. Demand-side policies can shift AD1 to AD*; beyond that, however, there is no rise in output. Aggregate Supply. The Aggregate Demand-Aggregate Supply model is designed to answer the questions of what determines the level of economic activity in the economy (i.e. what determines real GDP and employment), and what causes economic activity to speed up or slow down. The concepts of supply and demand can be applied to the economy as a whole. Aggregate supply, also known as total output, is the total supply of goods and services produced within an economy at a given overall price level in a given period; it is represented by the aggregate supply curve. The best way to graph a supply and demand curve in Microsoft Excel is to use the XY Scatter chart; a line graph is good when trying to find the point where both sets of data intersect, and a column chart is good for displaying the variation between the data. Aggregate demand and supply analysis yields the following conclusions: 1. A shift in the aggregate demand curve affects output only in the short run and has no effect in the long run. 2. A temporary supply shock affects output and inflation only in the short run and has no effect in the long run (holding the aggregate demand curve constant). In this unit on Aggregate Supply, you learned the following concepts: 1.
The axes of the aggregate supply and aggregate demand model (ASAD graph). 2. The three ranges of the aggregate supply curve and what each range indicates on the ASAD graph. 3. Short-run equilibrium and long-run equilibrium on the ASAD graph. Aggregate Demand and Supply review: AD is the abbreviation for aggregate demand, and spending is called consumption. On a correct graph of aggregate demand, the Y-axis is labelled with the price level and the X-axis with real GDP, and the relationship between these two variables is inverse. Aggregate Demand Curve: aggregate demand falls when the price level increases because the higher price level causes the demand for money to rise, which causes the interest rate to rise. It is the higher interest rate that causes aggregate output to fall. At all points along the AD curve, both the goods market and the money market are in equilibrium. Supply and demand, in economics, is the relationship between the quantity of a commodity that producers wish to sell at various prices and the quantity that consumers wish to buy. It is the main model of price determination used in economic theory: the price of a commodity is determined by the interaction of supply and demand in a market. The aggregate supply curve is a curve showing the relationship between a nation's price level and the quantity of goods supplied by its producers. The Short Run Aggregate Supply (SRAS) curve is an upward-sloping curve and represents how firms will respond to what they perceive as changing demand conditions. To wrap up on the subject of aggregate demand and supply: we defined aggregate demand and explained what shifts aggregate demand and aggregate supply; it is always crucial to draw large, clear, and well-labelled graphs; and these concepts are important in formulating economic policy. Long-run equilibrium occurs at the intersection of the aggregate demand curve and the long-run aggregate supply curve.
For the three aggregate demand curves shown, long-run equilibrium occurs at three different price levels, but always at the same level of output: potential (full-employment) real GDP.
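Since this section repeatedly stresses plotting AD and AS against the price level and real GDP (and mentions doing so with an Excel XY scatter chart), here is a minimal sketch of the same exercise in Python with matplotlib. The curve equations and numbers are illustrative assumptions, not figures from the text; they simply give a downward-sloping AD curve, an upward-sloping SRAS curve, and a vertical LRAS at an assumed potential GDP.

```python
# Illustrative AD-AS plot: price level on the y-axis, real GDP on the x-axis.
# All functional forms and numbers below are assumptions for the sketch.
import numpy as np
import matplotlib.pyplot as plt

gdp = np.linspace(50, 150, 200)      # real GDP (index units)
potential_gdp = 100                  # assumed full-employment output

ad = 200 - 1.0 * gdp                 # downward-sloping aggregate demand
sras = 20 + 0.8 * gdp                # upward-sloping short-run aggregate supply

# Short-run equilibrium: where AD and SRAS intersect (solve 200 - q = 20 + 0.8q).
q_star = (200 - 20) / (1.0 + 0.8)
p_star = 200 - 1.0 * q_star

plt.plot(gdp, ad, label="AD")
plt.plot(gdp, sras, label="SRAS")
plt.axvline(potential_gdp, linestyle="--", label="LRAS (potential GDP)")
plt.scatter([q_star], [p_star], zorder=3, label="short-run equilibrium")
plt.xlabel("Real GDP")
plt.ylabel("Price level")
plt.legend()
plt.title("Aggregate demand and aggregate supply (illustrative)")
plt.show()
```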
For example, a vector would be used to show the distance and direction something moved in. If you ask for directions, and a person says "Walk one kilometer towards North", that's a vector. If they just said "Walk one kilometer", without giving a direction, it would be a scalar. Vectors are usually represented as a line with an arrow at the end. Examples of vectors - John walks north 20 meters. The direction "north" together with the distance "20 meters" is a vector. - An apple falls down at 10 meters per second. The direction "down" combined with the speed "10 meters per second" is a vector. This kind of vector is also called "velocity". Examples of things that are not vectors (scalars) - The distance between the two places is 10 kilometers. This distance is not a vector because it does not contain a direction. - The number of fruit in a box is not a vector. - A person pointing at a building is not a vector because there is only a direction. There is no magnitude (the distance from the person's finger to the building, for example). - The length of an object. - A car drives at 100 kilometers per hour. That is not describing a vector, as there is only a number, but no direction. More examples of vectors - Displacement is a vector. Displacement is the distance that something moves in a certain direction. A measure of distance alone is a scalar. - Force that includes direction is a vector. - Velocity is a vector, because it is a speed in a certain direction. - Acceleration is the rate of change of velocity. An object is accelerating if it is changing speed or changing direction. How to add vectors Adding vectors on paper using the head-to-tail method The head-to-tail method of adding vectors is useful for doing an estimate on paper of the result of adding two vectors. To do it: - Each vector is drawn as an arrow with an amount of length behind it, where each unit of length on the paper represents a certain magnitude of the vector. - Draw the next vector, with the tail (end) of the second vector at the head (front) of the first vector. - Repeat for all further vectors: draw the tail of the next vector at the head of the previous one. - Draw a line from the tail of the first vector to the head of the last vector - that's the resultant (sum) of all the vectors. It's called the "head to tail" method, because each head from the previous vector leads in to the tail of the next one. See link for an example created with Java. Using component form Using the component forms of vectors, to add two vectors literally means adding the components of the vectors to create a new vector. For example, let a and b be two two-dimensional vectors. This implies that both vectors can be written in terms of their respective components; thus, a = a_x î + a_y ĵ and b = b_x î + b_y ĵ. Suppose c is the sum of vectors a and b, so that c = a + b. This simply implies that c, in component form, gives the following: c = (a_x + b_x)î + (a_y + b_y)ĵ. Note that î and ĵ are called unit vectors. Here is an example of addition of two vectors, using their component forms (a quick computational check of this sum appears after the references below): a = 3î - ĵ, b = 2î + 2ĵ, c = a + b = (a_x + b_x)î + (a_y + b_y)ĵ = (3 + 2)î + ((-1) + 2)ĵ = 5î + ĵ. Related pages References - "The Head-to-Tail Method". http://www.nhn.ou.edu/~walkup/demonstrations/WebTutorials/HeadToTailMethod.htm. Retrieved 5 October 2010.
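As a quick check of the component-form rule described above, here is a small Python sketch that adds two 2-D vectors component by component. The Vector2 class and its names are illustrative choices of mine, not something from the article.

```python
# Component-form addition of 2-D vectors: c = a + b means
# c_x = a_x + b_x and c_y = a_y + b_y.
from dataclasses import dataclass

@dataclass
class Vector2:
    x: float  # component along the unit vector i-hat
    y: float  # component along the unit vector j-hat

    def __add__(self, other: "Vector2") -> "Vector2":
        return Vector2(self.x + other.x, self.y + other.y)

a = Vector2(3, -1)   # a = 3i - j
b = Vector2(2, 2)    # b = 2i + 2j
c = a + b
print(c)             # Vector2(x=5, y=1), i.e. c = 5i + j, matching the worked example
```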
Presentation on theme: "Solutions Chapter 14. solution Homogeneous mixture of 2 or more substances in a single physical state –particles in a solution are very small –particles."— Presentation transcript: solution Homogeneous mixture of 2 or more substances in a single physical state –particles in a solution are very small –particles in a solution are evenly distributed –particles in a solution will not separate solute The substance that is dissolved examples: sugar, salt solvent Substance that does the dissolving –example: water, ethanol Aqueous solutions-use water as solvent Like dissolves like A solute will dissolve best in a solvent with similar intermolecular forces. If the intermolecular forces are too different the solute will not dissolve in that solvent. Calculating the strength of a solution Often the strength of a solution can be expressed in terms of percent. Percent Solutions can be calculated 2 ways. % by volume This compares the volume of solute to the total volume of solution. % by mass This compares the mass of solute to the total mass of solution. Volume Percent Volume of solute present in a total volume of solution. Volume Percent (v/v) = volume of solute / volume of solution x 100% Calculating volume percent A solution is prepared by dissolving 36 ml of ethanol in water to a final volume of 150 ml what is the solution’s volume percent? % (v/v) ethanol = 36 ml ethanol / 150 ml total x 100% Volume Percent ethanol = 24 % Volume percent If 15.0ml of acetone is diluted to 500ml with water what is the % (v/v) of the prepared solution? % (v/v) = 15.0ml / 500ml x 100% % v/v = 3.0% acetone Mass percent Way to describe solutions composition mass of solute present in given mass of solution mass percent = mass of solute mass of solution grams of solute grams of solute + grams of solvent X 100 Mass percent A solution is prepared by dissolving 1.0g of sodium chloride in 48 g of water. The solution has a mass of 49 g, and there is 1.0g of solute (NaCl) present. Find the mass percent of solute. Mass Percent A solution is prepared by mixing 1.00g of ethanol, C 2 H 5 OH, with 100.0 g of water. Calculate the mass percent of ethanol in this solution. Solubility The extent to which a solute will dissolve –expressed in grams of solute per 100g of solvent –‘likes dissolve likes’ Not every substance dissolves in every other substance –soluble- capable of being dissolved salt –insoluble- does not dissolve in another oil does not dissolve in water Solubility & liquids Miscible- two liquids that dissolve in each other completely immiscible- liquids that are insoluble in one another –oil & vinegar The compositions of the solvent and solute will determine if the substance will dissolve –stirring –temperature –surface area of the dissolving particles A solution is prepared by mixing 2.8 g of sodium chloride with 100 g of water. What is the mass percent of NaCl? What is the volume percent alcohol when you add sufficient water to 700mL of isopropyl alcohol to obtain 1000mL of solution? 
Saturated solution: contains the maximum amount of solute for a given quantity of solvent – no more solute will dissolve. Unsaturated solution: contains less solute than a saturated solution – could hold more. Supersaturated solution: contains more solute than it can normally 'hold' – too much. Dilute solution: contains a small amount of solute. Concentrated solution: contains a large amount of solute. Solubility: table salt – at room temperature, 37.7 g can be dissolved in 100 ml of H2O; sugar – at room temperature, 200 g can be dissolved in 100 ml of H2O. Solubility curve: determines the solubility of substances at specific temperatures. With rising temperature, solids increase in solubility; with an increase in temperature, gases decrease in solubility (ex: fish die in warm water). [Solubility-curve graph: solute (g) per 100 g H2O plotted against temperature, with saturated, unsaturated, and supersaturated regions marked.] On the line – saturated (cannot hold any more); above the line – supersaturated (holding more than it normally can); below the line – unsaturated (can hold more solute). 92 g of NaNO3 are added to 100 ml of water at 25°C and mixed. What type of solution is it? 80 g of NaNO3 are added to 100 ml of water at 25°C and mixed. What type of solution is it? What is the solubility of NaNO3 in 100 g of H2O at 20°C? What is the solubility of NaNO3 in 200 g of H2O at 20°C? Concentration of solutions: the concentration of a solution is the amount of solute in a given amount of solvent. The most common measurement of concentration is molarity (mole fraction is another, but it is not discussed in this class). Molarity: the number of moles of solute per volume of solution in liters. Molarity (M) = moles of solute / liters of solution (mol/L). Molarity: calculate the molarity of a solution prepared by dissolving 11.5 g of solid NaOH in enough water to make 1.50 L of solution. Given: mass of solute = 11.5 g NaOH; volume of solution = 1.50 L. Molarity is moles of solute per liter of solution. Convert the mass of solute to moles (using the molar mass of NaOH, 40.0 g/mol), then divide by the volume: 11.5 g NaOH x (1 mol NaOH / 40.0 g NaOH) = 0.288 mol NaOH; 0.288 mol NaOH / 1.50 L solution = 0.192 M NaOH. Molarity: calculate the molarity of a solution prepared by dissolving 1.56 g of gaseous HCl in enough water to make 26.8 mL of solution. Given: mass of solute (HCl) = 1.56 g; volume of solution = 26.8 mL. Molarity is moles per liter, so we change 1.56 g HCl to moles of HCl and 26.8 mL to liters. Molar mass of HCl = 36.5 g/mol: 1.56 g HCl x (1 mol HCl / 36.5 g HCl) = 0.0427 mol HCl = 4.27 x 10^-2 mol HCl. Change the volume from mL to liters (1 L = 1000 mL): 26.8 mL x (1 L / 1000 mL) = 0.0268 L = 2.68 x 10^-2 L. Finally, divide the moles of solute by the liters of solution: molarity = 4.27 x 10^-2 mol HCl / 2.68 x 10^-2 L = 1.59 M HCl. Molarity: calculate the molarity of a solution prepared by dissolving 1.00 g of ethanol, C2H5OH, in enough water to give a final volume of 101 mL. Molarity = moles of solute / L of solution. Moles of ethanol: MM ethanol = 46.08 g/mol; 1.00 g ethanol / 46.08 g/mol = 0.0217 mol. Solution volume = 101 mL; convert to liters: 101 mL / 1000 mL per liter = 0.101 L. Molarity (M) = moles / L = 0.0217 mol / 0.101 L = 0.215 M ethanol. Molarity: one saline solution contains 0.90 g NaCl in exactly 1.0 L of solution. What is the molarity of the solution?
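The molarity examples above all follow the same two-step recipe (grams to moles via the molar mass, millilitres to litres, then divide), so a short Python sketch can check them. The function is my own illustration; the molar masses are the rounded values used in the slides.

```python
# Molarity = moles of solute / liters of solution.

def molarity(mass_g: float, molar_mass_g_per_mol: float, volume_ml: float) -> float:
    moles = mass_g / molar_mass_g_per_mol   # grams -> moles
    liters = volume_ml / 1000               # mL -> L
    return moles / liters

print(molarity(11.5, 40.0, 1500))    # ~0.192 M NaOH
print(molarity(1.56, 36.5, 26.8))    # ~1.59  M HCl
print(molarity(1.00, 46.08, 101))    # ~0.215 M ethanol
print(molarity(0.90, 58.44, 1000))   # ~0.015 M NaCl (the saline practice problem)
```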
Calculate the moles of NaCl: MM NaCl = 58.44 g/mol; 0.90 g NaCl / 58.44 g/mol = 0.015 mol. Volume = 1.0 L. Molarity = 0.015 mol / 1.0 L = 0.015 M NaCl. Molarity: a solution has a volume of 250 mL and contains 7.0 x 10^-1 mol NaCl. What is its molarity? Convert the volume to liters: 250 mL / 1000 mL per L = 0.25 L. M = 7.0 x 10^-1 mol / 0.25 L; molarity of NaCl = 2.8 M. Finding moles to calculate grams: how many grams of solute are needed to prepare 300. mL of 3.2 M KCl solution? Use the molarity relationship to find the number of moles: moles = L x M. Convert the volume to L: 300. mL x (1 L / 1000 mL) = 0.300 L. Calculate the moles: moles = 0.300 L x 3.2 M = 0.96 mol. Convert moles to grams: 0.96 mol KCl x (74.5 g/mol) = 72 g. Finding volume: how many liters of 0.442 M MgS can be made with 27.3 g of MgS? MM of MgS = 56 g/mol. Use the relationship L = mol / M. Convert g to mol: 27.3 g x (1 mol / 56 g) = 0.488 mol. Calculate liters: L = 0.488 mol / 0.442 M = 1.10 L. Dilution: diluting a solution reduces the number of moles of solute per unit volume; the total number of moles of solute in solution does not change. Diluting solutions: M1V1 = M2V2, where M1 = molarity of the stock solution (initial), V1 = volume of the stock solution (initial), M2 = molarity of the dilute solution, and V2 = volume of the dilute solution. M1V1 = M2V2: how many milliliters of aqueous 2.00 M MgSO4 solution must be diluted with water to prepare 100.00 mL of aqueous 0.400 M MgSO4? M1 = 2.00 M MgSO4, M2 = 0.400 M MgSO4, V2 = 100.00 mL MgSO4, V1 = ? Solve M1V1 = M2V2 for V1: V1 = (M2 x V2) / M1 = (0.400 M x 100.00 mL) / 2.00 M = 20.0 mL. M1V1 = M2V2: how many milliliters of a solution of 4.00 M KI are needed to prepare 0.250 L of 0.760 M KI? M1 = 4.00 M, V1 = ?, M2 = 0.760 M, V2 = 0.250 L. V1 = (M2 x V2) / M1 = (0.760 M)(0.250 L) / 4.00 M = 0.0475 L. If 0.250 L of a 5.00 M HBr solution is diluted to 2.00 L with water, what will the resulting concentration be? M1 = 5.00 M, V1 = 0.250 L, M2 = ?, V2 = 2.00 L. M2 = (M1 x V1) / V2 = (5.00 M)(0.250 L) / (2.00 L) = 0.625 M.
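The dilution relationship M1V1 = M2V2 is easy to wrap in a tiny helper. This sketch (function names are my own) rearranges the same formula for whichever quantity is missing and reproduces the three worked dilution problems.

```python
# M1 * V1 = M2 * V2  (any consistent volume units on both sides)

def stock_volume_needed(m1: float, m2: float, v2: float) -> float:
    """Volume of stock solution needed: V1 = M2 * V2 / M1."""
    return m2 * v2 / m1

def diluted_concentration(m1: float, v1: float, v2: float) -> float:
    """Concentration after dilution: M2 = M1 * V1 / V2."""
    return m1 * v1 / v2

print(stock_volume_needed(2.00, 0.400, 100.00))  # 20.0 mL of 2.00 M MgSO4
print(stock_volume_needed(4.00, 0.760, 0.250))   # 0.0475 L of 4.00 M KI
print(diluted_concentration(5.00, 0.250, 2.00))  # 0.625 M HBr after dilution
```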
Galaxies contain huge clouds of dust and gas. Sometimes those clouds of gas become so large that their own gravity makes them collapse in on themselves. Somewhere inside the center of those clouds, through the process of fusion, stars are born. Their temperature rises to ten million degrees Celsius, and their life sparks. Like humans, stars are born at a certain time and place, and their birthplaces are what astronomers call nebulae, or star nurseries. The name nebula comes from Latin, and it translates to English as "cloud." Nebulae do resemble clouds, because of how they appear to us at night. Have you ever heard of the Orion Nebula? It is the most famous of all the nebulae, and you can see it even without a telescope. It is a big stellar nursery with around seven hundred stars being born inside of it. Different Types Of Nebulae Astronomers group nebulae into four main categories. The first is diffuse nebulae, which is where most nebulae belong; their main characteristic is that they have no defined boundaries. The second type is called planetary nebulae; in the early days of astronomy, these nebulae were often mistaken for planets. These nebulae are formed by the ionized gas that comes from other stars in the later stages of their life. The third type occurs when the nuclear fusion inside a star ends, resulting in the death of the star in an event known as a supernova. It is a place where a life ends in a gigantic explosion, and a new one can form again. These nebulae are called supernova remnants. The fourth and final type is the dark nebulae, made of clouds filled with massive amounts of dust that don't let any light pass through them. Their shape is outlined by the light that surrounds them. How Do Stars Die? Usually, stars need to be millions of years old before they die. A star's longevity generally depends on its size at its birth. Stars are in a constant balance between the force of gravity and the energy produced by the fusion at the star's core. When that nuclear fusion comes toward its end, when the hydrogen is no more, the star starts collapsing, leaving only its own core behind. It becomes a white dwarf star, and after it has cooled down long enough, it becomes a fragment of what it once was, now called a black dwarf. During these same processes, some far more massive stars can fuse even heavier elements at their core until there is nothing else to gain from fusion, and such stars explode in what astronomers call a supernova explosion. When those stars die, they release a giant explosion that shines through the whole galaxy, releasing the materials and elements from which the universe was created. About the Author Antonia is a sociologist and an anglicist by education, but a writer and a behavior enthusiast by inclination. If she's not writing, editing or reading, you can usually find her snuggling with her huge dog or being obsessed with a new true-crime podcast. She also has a (questionably) healthy appreciation for avocados and Seinfeld.
Write down a number, any number. Now multiply it by 2, add 3 and write down the result. Now take the number you just wrote, double it again, add 3 and write down the new result. Do this infinitely many times and…congratulations, you defined a sequence using recursion! All you needed was a simple rule (multiply by 2 then add 3), which is formally known as a recurrence relation. But what exactly is a recurrence relation, and how could one "solve" an equation such as x_{n+1} = 2x_n + 3? How can one classify such relations, and, most importantly, how can you use them to compute, say, the 7th term of the sequence you just built? To answer that, we first need to clarify recursion. Simply put, recursion happens when something is defined in terms of itself or something of the same type. Rather surprisingly, one can observe such a phenomenon without putting pen to paper or pressing a key. This visual recursion is known as the Droste effect, or "picture within a picture". An image appears recursively within itself, giving the impression of an infinite loop which really only goes as far as the quality can allow. In mathematics, the most common form of recursion is generating a sequence of numbers (such as the sequence of odd integers). This is done by applying an equation which gives a term as a function of preceding terms in the sequence. Usually it's nice and it only depends on the previous number, for example x_n = x_{n-1} + 2, but at times it can rely on even earlier terms, such as x_n = x_{n-1} + x_{n-2} + 17n. Much like Spongebob holding an image in the meme above, each equation represents a rule by which an entire thing can be constructed. This rule is the "recurrence relation". A key observation is that, in the Droste effect example, we needed an original image on which to recurse. A mathematical sequence also requires some initial terms to get the recursive ball rolling. These terms are known as "boundary conditions", and any respectable sequence has at least one such term (usually just one, but there may be more). Now actually solving recurrence relations can be a dark art, but there is a subclass of these which can be solved rather quickly. That subclass is referred to as "linear" recurrence relations and contains some of the most famous recursive formulas. Any equation where a general term x_n (or sometimes x_{n+1}, depending on the convention) is written as a sum of preceding terms, each multiplied by constants that do not change regardless of n (those are known as "constant coefficients"), or multiplied by a function depending on n, can be put into this class. For example x_n = (-3)x_{n-1} + 4n^2, x_n = x_{n-1} – 2x_{n-3} + 5x_{n-4}, or the three equations mentioned before in the article are linear. However, relations such as x_n = (x_{n-1})^2 + (x_{n-2})^5 or x_n = x_{n-1}x_{n-4} + x_{n-2} are not. A sequence (x_n) for which the equation is true for any n ≥ 0 is considered a "solution". It is this type of recurrence relation that we will learn to solve today, starting from the simplest ones: linear recurrence relations of first order. The reason they are called "first order" is that every term in the sequence, except for the first one, can be written as the same function taking only one input: the previous term. The most common cases are of the type x_{n+1} = ax_n + b or x_{n+1} = ax_n + bn. It should be pointed out that a cannot be zero, as otherwise you are not defining a recursion, but rather a constant or linear sequence as boring as watching Arteta's Arsenal play.
The method we will use, which can be generalised to higher orders where more preceding terms are referenced, makes use of homogeneous recurrence relations. In layman's terms, these are equations containing only the terms of the sequence, each multiplied by constant coefficients; the unattached constant or expression dependent on n is removed (or, better put, is zero). One can see that, for the first order case, the homogeneous linear recurrence relation is x_{n+1} = ax_n. After finding a sequence (u_n) that "solves" this homogeneous equation, we can find a particular solution (v_n) to the initial recurrence relation by continuously "trying the next most complex thing" (constants, then 1st-degree polynomials involving n, then 2nd-degree polynomials involving n and so on), a trick which some may recognise from solving differential equations. Adding the 2 relations up and factorising properly on each side, one can obtain a general solution (x_n), such that x_n = u_n + v_n for each n ≥ 0. We then only need to plug in the "boundary conditions"; that is, replace x_n and v_n with the corresponding values to find more information about u_n. This will be enough to deduce the general term of the solution we actually need. If this explanation sounds confusing, it will hopefully make sense in practice. As a first application, we will work on the sequence created at the start of the article. The equation mentioned there, x_{n+1} = 2x_n + 3, wasn't entirely random, since it is the exact mathematical translation of the rule we used: multiply by 2, then add 3. Firstly, we solve the corresponding homogeneous equation, which is u_{n+1} = 2u_n, using the revolutionary method of… "putting it into itself": u_n = 2u_{n-1} = 2(2u_{n-2}) = 2^2 u_{n-2} = … = 2^n u_0 (Notice that the exponent of the constant coefficient and the index of the term always add up to n: for 2^1 u_{n-1} we get 1 + (n-1) = n, for 2^2 u_{n-2} we have 2 + (n-2) = n, and so on until for 2^n u_0 we have n + 0 = n). As we have no useful information available about u_0, we will note it down as an unknown constant A (it could be anything), and write the solution of the homogeneous equation as u_n = 2^n A. To find the particular solution (v_n), we first look through constants to see if they can provide a solution. Namely, we try v_n = C and replace the values conveniently in the original equation, finding either the value of C or a contradiction: v_{n+1} = 2v_n + 3 becomes C = 2C + 3, or C – 2C = 3, which gives C = -3. Hence v_n = -3 is a particular solution of the recurrence relation, and the general sequence (x_n) thus has the following formula: x_n = u_n + v_n = 2^n A – 3 (Knowing that u_{n+1} = 2u_n and v_{n+1} = 2v_n + 3, you can check that indeed x_{n+1} = 2x_n + 3. You can also check that the first few "general" terms, namely A – 3, 2A – 3, 4A – 3 and so on, obey the rule). As a "boundary condition", we can use the first number you wrote down, namely the initial term x_0. We will assume that it's 1 (it probably isn't, I'm not a clairvoyant), and replace n with 0 and x_n with 1 to find A: 1 = 2^0 A – 3, which gives A – 3 = 1, so A = 4. So the general term for that sequence you wrote down might have been 4*2^n – 3 or, if you prefer, 2^(n+2) – 3. Now instead of writing down 999 numbers to find the 1000th one, you can simply plug n = 1000 into this formula to compute it directly. Amazing! First order linear recurrence relations have surprising applications in real world finance, as well. Suppose that your friend down the pub opens an investment account with an initial sum of £1000.
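To see the closed form and the recursion agree, here is a small Python sketch, assuming the boundary condition x_0 = 1 used above. It computes the sequence both by iterating the rule x_{n+1} = 2x_n + 3 and by the closed form 2^(n+2) − 3 just derived.

```python
# First-order linear recurrence x_{n+1} = 2*x_n + 3 with x_0 = 1.

def by_iteration(n: int, x0: int = 1) -> int:
    x = x0
    for _ in range(n):          # apply the rule n times
        x = 2 * x + 3
    return x

def by_closed_form(n: int) -> int:
    return 2 ** (n + 2) - 3     # derived above: x_n = 4*2^n - 3

for n in (0, 1, 2, 3, 7, 20):
    assert by_iteration(n) == by_closed_form(n)
    print(n, by_closed_form(n))  # e.g. x_7 = 509, with no need to list 1, 5, 13, ...
```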
They tell you their account should grow at a fixed interest rate of 1% per month, and that they add 5 pounds to it at the end of every month. By writing down the recurrence relation x_{n+1} = (1 + 0.01)x_n + 5 (careful: 1% = 0.01) and using the boundary condition x_0 = 1000 and the same method as above, you too can compute how deep your buddy's pockets will be after 36 months, or 3 years. (Spoiler alert: not that much). We will now take a look at "second order" linear recurrence relations, named so because, as you may have guessed, the terms in the sequence are written as an equation of the 2 preceding terms. Their most common form is x_{n+1} + ax_n + bx_{n-1} = f(n); we will analyse the simpler cases where the right-hand side is a constant. Here both a and b must be non-zero: b since otherwise the relation is just first order, and a because otherwise the rule would be incomplete, skipping every other term and ruining the recursion. The method applied in the first order case works just fine here; however, finding the homogeneous solution is a bit trickier. We assume that our solution is not a dull as dishwater sequence of 0's (formally, it's non-trivial) and just skip straight to taking u_n = Aλ^n. We have Aλ^(n+1) + aAλ^n + bAλ^(n-1) = 0. Each term has a common factor of Aλ^(n-1), so we can rewrite this as Aλ^(n-1)(λ^2 + aλ + b) = 0. Now, if we divide by Aλ^(n-1) (this is possible, as both A and λ are non-zero), we will finish the derivation of what is known as the "auxiliary equation": λ^2 + aλ + b = 0. If this derivation proves difficult to memorise, just remember that we can "reach" the auxiliary equation by replacing the terms of the sequence with powers of λ. The quadratic itself is important to remember, however, since in order to find the homogeneous solution we need to solve it and compare the roots, giving us 2 cases: - The roots λ_1 and λ_2 are different. Then the homogeneous solution is u_n = Cλ_1^n + Dλ_2^n. - The roots λ_1 = λ_2 = λ are the same. By "trying the next most complex thing" (just as we did before), we get the homogeneous solution u_n = (C + nD)λ^n. Like Michael Schumacher at the wheel of a Ferrari, these solutions work brilliantly (but you may check if you're a Hamilton fan). More importantly, you need to compute 2 constants to get your general term; hence you need at least 2 boundary conditions in this case (after all, 3 + 7 and 4 + 6 both give 10, but 3 + 2*7 ≠ 4 + 2*6). We will now look at a rather artificial example, meant to show the quirks of repeated roots and of the process for finding a particular solution: x_{n+1} – 2x_n + x_{n-1} = 1. The auxiliary equation is pretty nice here: λ^2 – 2λ + 1 = (λ – 1)^2 = 0, with repeated root 1. By looking at the scheme above, our homogeneous solution is u_n = An + B. For the particular solution, we will yet again try things that are as simple as possible. Notice that taking x_n as a constant would just cancel the left-hand side out. A 1st-degree linear polynomial already solves the homogeneous equation, so we can ignore it as a component for the particular solution since it cannot contribute (remember: A and B may still be any number). We hence try a 2nd-degree polynomial, which means taking v_n = Cn^2. Substituting in we obtain: C(n+1)^2 – 2Cn^2 + C(n-1)^2 = 1, which simplifies to 2C = 1 or C = 1/2. The general solution is hence x_n = An + B + (1/2)n^2. Indeed, you can check the equation and verify that this is the correct solution (we won't do the computation here, but if you know the rules of squaring and are careful enough, a lot of terms should cancel out nicely).
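The article leaves the 36-month balance as an exercise, so here is a short sketch, assuming exactly the rule above: 1% interest is applied to the balance and then £5 is deposited at the end of each month, starting from £1000. That ordering is my reading of the recurrence x_{n+1} = 1.01·x_n + 5, not something the article states explicitly.

```python
# Savings recurrence x_{n+1} = 1.01 * x_n + 5, with x_0 = 1000 pounds.

def balance_after(months: int, x0: float = 1000.0,
                  rate: float = 0.01, deposit: float = 5.0) -> float:
    x = x0
    for _ in range(months):
        x = (1 + rate) * x + deposit   # interest on the balance, then the deposit
    return x

print(round(balance_after(36), 2))     # about 1646 pounds after 3 years
```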
To finish things off in a beautiful way, we will look at the most famous recurrence relation: Fibonacci. The idea is simple: we write down 1, then 1 again, and every number after that is the sum of the 2 preceding it (1, 1, 2, 3, 5, 8, 13, 21, and so on). While cute at the surface, this recursion hides something incredible, which can only be revealed via the methods we have learned thus far. Firstly, from the rule we can pinpoint the recurrence relation x_{n+2} = x_{n+1} + x_n, or, in a more useful form, x_{n+2} – x_{n+1} – x_n = 0, as well as the boundary conditions x_0 = x_1 = 1. The resulting auxiliary equation, as one can hopefully see, is λ^2 – λ – 1 = 0. This is a particular case of the general quadratic formula aλ^2 + bλ + c = 0 for a = 1, b = -1, c = -1. b^2 – 4ac = 5 > 0, so we have 2 real roots, namely (-b + (b^2 - 4ac)^(1/2))/2a and (-b – (b^2 - 4ac)^(1/2))/2a or, in our case, (1 + √5)/2 and (1 – √5)/2. Notice that both roots are irrational, even though the quadratic has integer coefficients. While we had more work to do to solve the quadratic, this is where the fun begins, as Anakin Skywalker once said: the relation is already homogeneous. Great, no need to do any more guess work, we can just plug the roots in: x_n = A((1 + √5)/2)^n + B((1 – √5)/2)^n. The two initial terms now tell us that A + B = 1 and A(1 + √5)/2 + B(1 – √5)/2 = 1, from where we obtain (by solving the system of equations) that A = (1 + √5)/(2√5) and B = -(1 – √5)/(2√5). Which gives us, for any n ≥ 0: x_n = ( ((1 + √5)/2)^(n+1) – ((1 – √5)/2)^(n+1) ) / √5. Quite incredibly, an entire sequence of integers is generated only by irrational numbers, even if they have absolutely no overlap! This observation is at the heart of the golden spiral, a geometrical object that at every quarter turn gets wider by a factor of (1 + √5)/2, an unfathomable number, but which can be created using squares we can actually draw. Whether you want to calculate the student loan you will give back to Boris with interest, the exponentially increasing price tag of Kylian Mbappe, or just draw a very attractive spiral, recurrence relations may be there with you. Linear recurrence relations, in particular, can have applications of the most surprising kind, one of which is going to be explored in Part 3 of the series (coming soon), as we dive into a mathematical model of the "Gambler's ruin". We hope to see you there! Article 1: Probability is everywhere. But what is it exactly?
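As a sanity check on the closed form just derived, this sketch compares it with the plain recursive definition for the first several terms (indexing from x_0 = x_1 = 1, as above). Floating-point arithmetic is used for the irrational roots, so the closed form is rounded to the nearest integer.

```python
# Fibonacci via the recurrence x_{n+2} = x_{n+1} + x_n versus the closed form
# x_n = (phi^(n+1) - psi^(n+1)) / sqrt(5), phi = (1+sqrt 5)/2, psi = (1-sqrt 5)/2.
from math import sqrt

def fib_iterative(n: int) -> int:
    a, b = 1, 1                     # x_0 and x_1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed_form(n: int) -> int:
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi ** (n + 1) - psi ** (n + 1)) / sqrt(5))

for n in range(10):
    assert fib_iterative(n) == fib_closed_form(n)
print([fib_closed_form(n) for n in range(10)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```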
During the economic boom of the Roaring Twenties, the traditional values of rural America were challenged by the Jazz Age, symbolized by women smoking, drinking, and wearing short skirts. The average American was busy buying automobiles and household appliances, and speculating in the stock market, where big money could be made. Those appliances were bought on credit, however. Although businesses had made huge gains — 65 percent — from the mechanization of manufacturing, the average worker’s wages had only increased 8 percent. The imbalance between the rich and the poor, with 0.1 percent of society earning the same total income as 42 percent, combined with production of more and more goods and rising personal debt, could not be sustained. On Black Tuesday, October 29, 1929, the stock market crashed, triggering the Great Depression, the worst economic collapse in the history of the modern industrial world. It spread from the United States to the rest of the world, lasting from the end of 1929 until the early 1940s. With banks failing and businesses closing, more than 15 million Americans (one-quarter of the workforce) became unemployed. President Herbert Hoover, underestimating the seriousness of the crisis, called it “a passing incident in our national lives,” and assured Americans that it would be over in 60 days. A strong believer in rugged individualism, Hoover did not think the federal government should offer relief to the poverty-stricken population. Focusing on a trickle-down economic program to help finance businesses and banks, Hoover met with resistance from business executives who preferred to lay off workers. Blamed by many for the Great Depression, Hoover was widely ridiculed: an empty pocket turned inside out was called a “Hoover flag;” the decrepit shantytowns springing up around the country were called “Hoovervilles.” Franklin Delano Roosevelt, the rich governor from New York, offered Americans a New Deal, and was elected in a landslide victory in 1932. He took quick action to attack the Depression, declaring a four-day bank holiday, during which Congress passed the Emergency Banking Relief Act to stabilize the banking system. During the first 100 days of his administration, Roosevelt laid the groundwork for his New Deal remedies that would rescue the country from the depths of despair. The New Deal programs created a liberal political alliance of labor unions, blacks and other minorities, some farmers and others receiving government relief, and intellectuals. The hardship brought on by the Depression affected Americans deeply. Since the prevailing attitude of the 1920s was that success was earned, it followed that failure was deserved. The unemployment brought on by the Depression caused self-blame and self-doubt. Men were harder hit psychologically than women were. Since men were expected to provide for their families, it was humiliating to have to ask for assistance. Although some argued that women should not be given jobs when many men were unemployed, the percentage of women working increased slightly during the Depression. Traditionally female fields of teaching and social services grew under New Deal programs. Children took on more responsibilities, sometimes finding work when their parents could not. As a result of living through the Depression, some people developed habits of careful saving and frugality, others determined to create a comfortable life for themselves. 
African Americans suffered more than whites, since their jobs were often taken away from them and given to whites. In 1930, 50 percent of blacks were unemployed. However, Eleanor Roosevelt championed black rights, and New Deal programs prohibited discrimination. Discrimination continued in the South, however, and as a result a large number of black voters switched from the Republican to the Democratic party during the Depression. The Great Depression and the New Deal changed forever the relationship between Americans and their government. Government involvement and responsibility in caring for the needy and regulating the economy came to be expected.
Vision is the most important sense for birds, since good eyesight is essential for safe flight. Birds have a number of adaptations which give visual acuity superior to that of other vertebrate groups; a pigeon has been described as "two eyes with wings". Birds likely being descendants of theropod dinosaurs, the avian eye resembles that of other reptiles, with ciliary muscles that can change the shape of the lens rapidly and to a greater extent than in the mammals. Birds have the largest eyes relative to their size in the animal kingdom, and movement is consequently limited within the eye's bony socket. In addition to the two eyelids usually found in vertebrates, bird's eyes are protected by a third transparent movable membrane. The eye's internal anatomy is similar to that of other vertebrates, but has a structure, the pecten oculi, unique to birds. Some bird groups have specific modifications to their visual system linked to their way of life. Birds of prey have a very high density of receptors and other adaptations that maximise visual acuity. The placement of their eyes gives them good binocular vision enabling accurate judgement of distances. Nocturnal species have tubular eyes, low numbers of colour detectors, but a high density of rod cells which function well in poor light. Terns, gulls, and albatrosses are among the seabirds that have red or yellow oil droplets in the colour receptors to improve distance vision especially in hazy conditions. The eye of a bird most closely resembles that of the reptiles. Unlike the mammalian eye, it is not spherical, and the flatter shape enables more of its visual field to be in focus. A circle of bony plates, the sclerotic ring, surrounds the eye and holds it rigid, but an improvement over the reptilian eye, also found in mammals, is that the lens is pushed further forward, increasing the size of the image on the retina. Eyes of most birds are large, not very round and capable of only limited movement in the orbits, typically 10-20° (but in some passerines, >80°) horizontally. That's why head movements in birds play a bigger role than eye movements. Two eyes usually move independently, and in some species they can move coordinatedly in opposite directions. Birds with eyes on the sides of their heads have a wide visual field, useful for detecting predators, while those with eyes on the front of their heads, such as owls, have binocular vision and can estimate distances when hunting. The American woodcock probably has the largest visual field of any bird, 360° in the horizontal plane, and 180° in the vertical plane. The eyelids of a bird are not used in blinking. Instead the eye is lubricated by the nictitating membrane, a third concealed eyelid that sweeps horizontally across the eye like a windscreen wiper. The nictitating membrane also covers the eye and acts as a contact lens in many aquatic birds when they are under water. When sleeping, the lower eyelid rises to cover the eye in most birds, with the exception of the horned owls where the upper eyelid is mobile. The eye is also cleaned by tear secretions from the lachrymal gland and protected by an oily substance from the Harderian glands which coats the cornea and prevents dryness. The eye of a bird is larger compared to the size of the animal than for any other group of animals, although much of it is concealed in its skull. The ostrich has the largest eye of any land vertebrate, with an axial length of 50 mm (2.0 in), twice that of the human eye. 
Bird eye size is broadly related to body mass. A study of five orders (parrots, pigeons, petrels, raptors and owls) showed that eye mass is proportional to body mass, but as expected from their habits and visual ecology, raptors and owls have relatively large eyes for their body mass. Behavioural studies show that many avian species focus on distant objects preferentially with their lateral and monocular field of vision, and birds will orientate themselves sideways to maximise visual resolution. For a pigeon, resolution is twice as good with sideways monocular vision than forward binocular vision, whereas for humans the converse is true. The performance of the eye in low light levels depends on the distance between the lens and the retina, and small birds are effectively forced to be diurnal because their eyes are not large enough to give adequate night vision. Although many species migrate at night, they often collide with even brightly lit objects like lighthouses or oil platforms. Birds of prey are diurnal because, although their eyes are large, they are optimised to give maximum spatial resolution rather than light gathering, so they also do not function well in poor light. Many birds have an asymmetry in the eye's structure which enables them to keep the horizon and a significant part of the ground in focus simultaneously. The cost of this adaptation is that they have myopia in the lower part of their visual field. Birds with relatively large eyes compared to their body mass, such as common redstarts and European robins sing earlier at dawn than birds of the same size and smaller body mass. However, if birds have the same eye size but different body masses, the larger species sings later than the smaller. This may be because the smaller bird has to start the day earlier because of weight loss overnight. Overnight weight loss for small birds is typically 5-10% and may be over 15% on cold winter nights. In one study, robins put on more mass in their dusk feeding when nights were cold. Nocturnal birds have eyes optimised for visual sensitivity, with large corneas relative to the eye's length, whereas diurnal birds have longer eyes relative to the corneal diameter to give greater visual acuity. Information about the activities of extinct species can be deduced from measurements of the sclerotic ring and orbit depth. For the latter measurement to be made, the fossil must have retained its three-dimensional shape, so activity pattern cannot be determined with confidence from flattened specimens like Archaeopteryx, which has a complete sclerotic ring but no orbit depth measurement. The main structures of the bird eye are similar to those of other vertebrates. The outer layer of the eye consists of the transparent cornea at the front, and two layers of sclera — a tough white collagen fibre layer which surrounds the rest of the eye and supports and protects the eye as a whole. The eye is divided internally by the lens into two main segments: the anterior segment and the posterior segment. The anterior segment is filled with a watery fluid called the aqueous humour, and the posterior segment contains the vitreous humour, a clear jelly-like substance. The lens is a transparent convex or 'lens' shaped body with a harder outer layer and a softer inner layer. It focuses the light on the retina. The shape of the lens can be altered by ciliary muscles which are directly attached to the lens capsule by means of the zonular fibres. 
In addition to these muscles, some birds also have a second set, Crampton's muscles, that can change the shape of the cornea, thus giving birds a greater range of accommodation than is possible for mammals. This accommodation can be rapid in some diving water birds such as in the mergansers. The iris is a coloured muscularly operated diaphragm in front of the lens which controls the amount of light entering the eye. At the centre of the iris is the pupil, the variable circular area through which the light passes into the eye. The retina is a relatively smooth curved multi-layered structure containing the photosensitive rod and cone cells with the associated neurons and blood vessels. The density of the photoreceptors is critical in determining the maximum attainable visual acuity. Humans have about 200,000 receptors per mm2, but the house sparrow has 400,000 and the common buzzard 1,000,000. The photoreceptors are not all individually connected to the optic nerve, and the ratio of nerve ganglia to receptors is important in determining resolution. This is very high for birds; the white wagtail has 100,000 ganglion cells to 120,000 photoreceptors. Rods are more sensitive to light, but give no colour information, whereas the less sensitive cones enable colour vision. In diurnal birds, 80% of the receptors may be cones (90% in some swifts) whereas nocturnal owls have almost all rods. As with other vertebrates except placental mammals, some of the cones may be double cones. These can amount to 50% of all cones in some species. Towards the centre of the retina is the fovea (or the less specialised, area centralis) which has a greater density of receptors and is the area of greatest forward visual acuity, i.e. sharpest, clearest detection of objects. In 54% of birds, including birds of prey, kingfishers, hummingbirds and swallows, there is second fovea for enhanced sideways viewing. The optic nerve is a bundle of nerve fibres which carry messages from the eye to the relevant parts of the brain. Like mammals, birds have a small blind spot without photoreceptors at the optic disc, under which the optic nerve and blood vessels join the eye. The pecten is a poorly understood body consisting of folded tissue which projects from the retina. It is well supplied with blood vessels and appears to keep the retina supplied with nutrients, and may also shade the retina from dazzling light or aid in detecting moving objects. Pecten oculi is abundantly filled with melanin granules which have been proposed to absorb stray light entering the bird eye to reduce background glare. Slight warming of pecten oculi due to absorption of light by melanin granules has been proposed to enhance metabolic rate of pecten. This is suggested to help increase secretion of nutrients into the vitreous body, eventually to be absorbed by the avascular retina of birds for improved nutrition. Extra-high enzymic activity of alkaline phosphatase in pecten oculi has been proposed to support high secretory activity of pecten to supplement nutrition of the retina. The choroid is a layer situated behind the retina which contains many small arteries and veins. These provide arterial blood to the retina and drain venous blood. The choroid contains melanin, a pigment which gives the inner eye its dark colour, helping to prevent disruptive reflections. There are two sorts of light receptors in a bird's eye, rods and cones. 
Rods, which contain the visual pigment rhodopsin are better for night vision because they are sensitive to small quantities of light. Cones detect specific colours (or wavelengths) of light, so they are more important to colour-orientated animals such as birds. Most birds are tetrachromatic, possessing four types of cone cells each with a distinctive maximal absorption peak. In some birds, the maximal absorption peak of the cone cell responsible for the shortest wavelength extends to the ultraviolet (UV) range, making them UV-sensitive. In addition to that, the cones at the bird's retina are arranged in a characteristic form of spatial distribution, known as hyperuniform distribution, which maximizes its light and color absorption. This form of spatial distributions are only observed as a result of some optimization process, which in this case can be described in terms of birds’ evolutionary history. The four spectrally distinct cone pigments are derived from the protein opsin, linked to a small molecule called retinal, which is closely related to vitamin A. When the pigment absorbs light the retinal changes shape and alters the membrane potential of the cone cell affecting neurons in the ganglia layer of the retina. Each neuron in the ganglion layer may process information from a number of photoreceptor cells, and may in turn trigger a nerve impulse to relay information along the optic nerve for further processing in specialised visual centres in the brain. The more intense a light, the more photons are absorbed by the visual pigments; the greater the excitation of each cone, and the brighter the light appears. By far the most abundant cone pigment in every bird species examined is the long-wavelength form of iodopsin, which absorbs at wavelengths near 570 nm. This is roughly the spectral region occupied by the red- and green-sensitive pigments in the primate retina, and this visual pigment dominates the colour sensitivity of birds. In penguins, this pigment appears to have shifted its absorption peak to 543 nm, presumably an adaptation to a blue aquatic environment. The information conveyed by a single cone is limited: by itself, the cell cannot tell the brain which wavelength of light caused its excitation. A visual pigment may absorb two wavelengths equally, but even though their photons are of different energies, the cone cannot tell them apart, because they both cause the retinal to change shape and thus trigger the same impulse. For the brain to see colour, it must compare the responses of two or more classes of cones containing different visual pigments, so the four pigments in birds give increased discrimination. Each cone of a bird or reptile contains a coloured oil droplet; these no longer exist in mammals. The droplets, which contain high concentrations of carotenoids, are placed so that light passes through them before reaching the visual pigment. They act as filters, removing some wavelengths and narrowing the absorption spectra of the pigments. This reduces the response overlap between pigments and increases the number of colours that a bird can discern. Six types of cone oil droplets have been identified; five of these have carotenoid mixtures that absorb at different wavelengths and intensities, and the sixth type has no pigments. The cone pigments with the lowest maximal absorption peak, including those that are UV-sensitive, possess the 'clear' or 'transparent' type of oil droplets with little spectral tuning effect. 
The colours and distributions of retinal oil droplets vary considerably among species, and is more dependent on the ecological niche utilised (hunter, fisher, herbivore) than genetic relationships. As examples, diurnal hunters like the barn swallow and birds of prey have few coloured droplets, whereas the surface fishing common tern has a large number of red and yellow droplets in the dorsal retina. The evidence suggests that oil droplets respond to natural selection faster than the cone's visual pigments. Even within the range of wavelengths that are visible to humans, passerine birds can detect colour differences that humans do not register. This finer discrimination, together with the ability to see ultraviolet light, means that many species show sexual dichromatism that is visible to birds but not humans. Migratory songbirds use the Earth's magnetic field, stars, the Sun, and other unknown cues to determine their migratory direction. An American study suggested that migratory Savannah sparrows used polarised light from an area of sky near the horizon to recalibrate their magnetic navigation system at both sunrise and sunset. This suggested that skylight polarisation patterns are the primary calibration reference for all migratory songbirds. However, it appears that birds may be responding to secondary indicators of the angle of polarisation, and may not be actually capable of directly detecting polarisation direction in the absence of these cues. Many species of birds are tetrachromatic, with dedicated cone cells for perceiving wavelengths in the ultraviolet and violet regions of the light spectrum. These cells contain a combination of short wave sensitive (SWS1) opsins, SWS1-like opsins (SWS2), and long-wave filtering carotenoid pigments for selectively filtering and receiving light between 300 and 400 nm. There are two types of short wave color vision in birds: violet sensitive (VS) and ultraviolet sensitive (UVS). Single nucleotide substitutions in the SWS1 opsin sequence are responsible blue-shifting the spectral sensitivity of the opsin from violet sensitive (λmax = 400) to ultraviolet sensitive (λmax = 310–360). This is the proposed evolutionary mechanism by which ultraviolet vision originally arose. The major clades of birds that have UVS vision are Palaeognathae (ratites and tinamous), Charadriiformes (shorebirds, gulls, and alcids), Trogoniformes (trogons), Psittaciformes (parrots), and Passeriformes (perching birds, representing more than half of all avian species). UVS vision can be useful for courtship. Birds that do not exhibit sexual dichromatism in visible wavelengths are sometimes distinguished by the presence of ultraviolet reflective patches on their feathers. Male blue tits have an ultraviolet reflective crown patch which is displayed in courtship by posturing and raising of their nape feathers. Male blue grosbeaks with the brightest and most UV-shifted blue in their plumage are larger, hold the most extensive territories with abundant prey, and feed their offspring more frequently than other males. Mediterranean storm petrels do not show sexual dimorphism in UV-patterns, but the correlation between UV-reflectance and male body condition suggests a possible role in sexual selection. The bill's appearance is important in the interactions of the blackbird. 
Although the UV component seems unimportant in interactions between territory-holding males, where the degree of orange is the main factor, females respond more strongly to males whose bills show good UV reflectance. UVS vision has also been demonstrated to serve functions in foraging, prey identification, and frugivory. Similar advantages to those afforded to trichromatic primates over dichromatic primates in frugivory are generally considered to exist in birds. The waxy surfaces of many fruits and berries may reflect UV light that advertises their presence to UVS birds. However, the widespread evidence in support of colour-mediated frugivory is equivocal and may be scale-dependent. Common kestrels are able to locate the trails of voles with vision; these small rodents lay scent trails of urine and feces that reflect UV light, making them visible to the kestrels. However, this view has been challenged by the finding of low UV sensitivity in raptors and weak UV reflection of mammal urine. While tetrachromatic vision is not exclusive to birds (insects, reptiles, and crustaceans are also sensitive to short wavelengths), some predators of UVS birds cannot see ultraviolet light. This raises the possibility that ultraviolet vision gives birds a channel in which they can privately signal, thereby remaining inconspicuous to predators. However, recent evidence does not appear to support this hypothesis. Contrast (or, more precisely, Michelson contrast) is defined as the difference in luminance between two stimulus areas, divided by the sum of the luminances of the two. Contrast sensitivity is the inverse of the smallest contrast that can be detected; a contrast sensitivity of 100 means that the smallest contrast that can be detected is 1% (a short worked example follows this passage). Birds have considerably lower contrast sensitivity than mammals: humans have been shown to detect contrasts as low as 0.5–1%, whereas most birds tested require about 10% contrast to show a behavioural response. A contrast sensitivity function describes an animal's ability to detect the contrast of grating patterns of different spatial frequency (i.e. different detail). In stationary viewing experiments, contrast sensitivity is highest at a medium spatial frequency and lower for higher and lower spatial frequencies. Birds can resolve rapid movements better than humans, for whom flickering at a rate greater than 50 light-pulse cycles per second appears as continuous movement. Humans cannot therefore distinguish individual flashes of a fluorescent light bulb oscillating at 60 light-pulse cycles per second, but budgerigars and chickens have flicker thresholds of more than 100 light-pulse cycles per second. A Cooper's hawk can pursue agile prey through woodland and avoid branches and other objects at high speed; to humans such a chase would appear as a blur. Birds can also detect slow-moving objects. The movement of the sun and the constellations across the sky is imperceptible to humans but is detected by birds. The ability to detect these movements allows migrating birds to orient themselves properly. To obtain steady images while flying or when perched on a swaying branch, birds hold the head as steady as possible with compensating reflexes. Maintaining a steady image is especially relevant for birds of prey. Because the image can be centered on the deep fovea of only one eye at a time, most falcons, when diving, use a spiral path to approach their prey after they have locked on to a target individual.
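As a quick illustration of the Michelson-contrast definition given above, the sketch below computes the contrast of a simple two-field stimulus and the corresponding contrast sensitivity; the luminance values are made-up numbers chosen only for illustration.

# Michelson contrast: (L_max - L_min) / (L_max + L_min)
# Luminance values are illustrative only (arbitrary units).
L_max = 110.0
L_min = 90.0
contrast = (L_max - L_min) / (L_max + L_min)   # 0.10, i.e. 10%
# Contrast sensitivity is the inverse of the smallest detectable contrast:
sensitivity = 1.0 / contrast                   # 10
print(contrast, sensitivity)
# A bird that needs ~10% contrast has a sensitivity of ~10, whereas a human
# detecting 0.5-1% contrast has a sensitivity of roughly 100-200.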
The alternative of turning the head for a better view slows down the dive by increasing drag, while spiralling does not reduce speeds significantly. When an object is partially blocked by another, humans unconsciously tend to make up for it and complete the shapes (see amodal perception). It has, however, been demonstrated that pigeons do not complete occluded shapes. A study based on altering the grey level of a perch that was coloured differently from the background showed that budgerigars do not detect edges based on colours. The perception of magnetic fields by migratory birds has been suggested to be light-dependent. Birds move their heads to detect the orientation of the magnetic field, and studies of the neural pathways have suggested that birds may be able to "see" the magnetic fields. The right eye of a migratory bird contains photoreceptive proteins called cryptochromes. Light excites these molecules to produce unpaired electrons that interact with the Earth's magnetic field, thus providing directional information. The visual ability of birds of prey is legendary, and the keenness of their eyesight is due to a variety of factors. Raptors have large eyes for their size, 1.4 times greater than the average for birds of the same weight, and the eye is tube-shaped to produce a larger retinal image. The resolving power of an eye depends both on the optics (large eyes with large apertures suffer less from diffraction and can form larger retinal images thanks to a long focal length) and on the density of receptor spacing. The retina has a large number of receptors per square millimetre, which determines the degree of visual acuity. The more receptors an animal has, the higher its ability to distinguish individual objects at a distance, especially when, as in raptors, each receptor is typically attached to a single ganglion. Many raptors have foveas with far more rods and cones than the human fovea (65,000/mm² in the American kestrel versus 38,000/mm² in humans), and this provides these birds with spectacular long-distance vision. It has been proposed that the shape of the deep central fovea of raptors can create a telephoto optical system, increasing the size of the retinal image in the fovea and thereby increasing the spatial resolution. Behavioural studies show that some large-eyed raptors (the wedge-tailed eagle and Old World vultures) have roughly twice the spatial resolution of humans, but many medium- and small-sized raptors have comparable or lower spatial resolution. The forward-facing eyes of a bird of prey give binocular vision, which is assisted by a double fovea. The raptor's adaptations for optimum visual resolution (an American kestrel can see a 2 mm insect from the top of an 18 m tree) have a disadvantage in that its vision is poor in low light levels, and it must roost at night. Raptors may have to pursue mobile prey in the lower part of their visual field, and therefore do not have the lower-field myopia adaptation demonstrated by many other birds. Scavenging birds like vultures do not need such sharp vision, so a condor has only a single fovea with about 35,000 receptors per mm². Vultures do, however, have high physiological activity of many important enzymes to suit their distant clarity of vision. The crested caracara also has only a single fovea, as this species forages on the ground for carrion and insects. However, caracaras have a higher degree of binocular overlap than other falcons, potentially enabling them to manipulate objects, such as rocks, whilst foraging.
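To put the kestrel figure quoted above into perspective, the short calculation below converts "a 2 mm insect at 18 m" into a visual angle; it is a rough back-of-the-envelope sketch, not a claim about how the original measurement was made.

import math
# Angular size of a 2 mm object viewed from 18 m
size = 0.002        # metres
distance = 18.0     # metres
angle_rad = size / distance                  # small-angle approximation
angle_arcmin = math.degrees(angle_rad) * 60  # about 0.38 arcminutes
print(round(angle_arcmin, 2))
# For comparison, a human with 20/20 vision resolves detail of roughly 1 arcminute,
# so the kestrel's reported performance is about 2-3 times finer.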
Like the other birds investigated, raptors also have coloured oil droplets in their cones. The generally brown, grey and white plumage of this group, and the absence of colour displays in courtship, suggest that colour is relatively unimportant to these birds. In most raptors, a prominent eye ridge and its feathers extend above and in front of the eye. This "eyebrow" gives birds of prey their distinctive stare. The ridge physically protects the eye from wind, dust, and debris and shields it from excessive glare. The osprey lacks this ridge, although the arrangement of the feathers above its eyes serves a similar function; it also possesses dark feathers in front of the eye, which probably serve to reduce glare from the water surface when the bird is hunting for its staple diet of fish. Owls have very large eyes for their size, 2.2 times greater than the average for birds of the same weight, and they are positioned at the front of the head. The eyes have a field overlap of 50–70%, giving better binocular vision than for diurnal birds of prey (overlap 30–50%). The tawny owl's retina has about 56,000 light-sensitive rods per square millimetre (36 million per square inch); earlier claims that it could see in the infrared part of the spectrum have been dismissed. Adaptations to night vision include the large size of the eye, its tubular shape, large numbers of closely packed retinal rods, and an absence of cones, since cone cells are not sensitive enough for a low-photon nighttime environment. There are few coloured oil droplets, which would reduce the light intensity, but the retina contains a reflective layer, the tapetum lucidum. This increases the amount of light each photosensitive cell receives, allowing the bird to see better in low light conditions. Owls normally have only one fovea, and that is poorly developed except in diurnal hunters like the short-eared owl. Besides owls, bat hawks, frogmouths and nightjars also display good night vision. Some bird species nest deep in cave systems which are too dark for vision, and find their way to the nest with a simple form of echolocation. The oilbird is the only nocturnal bird to echolocate, but several Aerodramus swiftlets also utilise this technique, with one species, the Atiu swiftlet, also using echolocation outside its caves. Seabirds such as terns and gulls that feed at the surface or plunge for food have red oil droplets in the cones of their retinas. This improves contrast and sharpens distance vision, especially in hazy conditions. Birds that have to look through an air/water interface have more deeply coloured carotenoid pigments in the oil droplets than other species. This helps them to locate shoals of fish, although it is uncertain whether they are sighting the phytoplankton on which the fish feed, or other feeding birds. Birds that fish by stealth from above the water have to correct for refraction, particularly when the fish are observed at an angle. Reef herons and little egrets appear to be able to make the corrections needed when capturing fish, and are more successful when strikes are made at an acute angle; this higher success may be due to the inability of the fish to detect their predators. Other studies indicate that egrets work within a preferred angle of strike, and that the probability of misses increases when the angle departs too far from the vertical, leading to an increased difference between the apparent and real depth of the prey.
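As a quick sanity check on the tawny owl figure above, the snippet below converts rods per square millimetre into rods per square inch; the only inputs are the density quoted in the text and the standard 25.4 mm-per-inch conversion.

rods_per_mm2 = 56_000
mm_per_inch = 25.4
rods_per_inch2 = rods_per_mm2 * mm_per_inch ** 2   # about 36.1 million
print(round(rods_per_inch2 / 1e6, 1), "million rods per square inch")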
Birds that pursue fish under water, like auks and divers, have far fewer red oil droplets, but they have special flexible lenses and use the nictitating membrane as an additional lens. This allows greater optical accommodation for good vision in air and water. Cormorants have a greater range of visual accommodation, at 50 dioptres, than any other bird, but the kingfishers are considered to have the best all-round (air and water) vision. Tubenosed seabirds, which come ashore only to breed and spend most of their life wandering close to the surface of the oceans, have a long narrow area of visual sensitivity on the retina. This region, the area giganto cellularis, has been found in the Manx shearwater, Kerguelen petrel, great shearwater, broad-billed prion and common diving-petrel. It is characterised by the presence of ganglion cells which are regularly arrayed and larger than those found in the rest of the retina, and which morphologically appear similar to the cells of the retina in cats. The location and cellular morphology of this novel area suggest a function in the detection of items in a small binocular field projecting below and around the bill. It is not concerned primarily with high spatial resolution, but may assist in the detection of prey near the sea surface as a bird flies low over it. The Manx shearwater, like many other seabirds, visits its breeding colonies at night to reduce the chances of attack by aerial predators. Two aspects of its optical structure suggest that the eye of this species is adapted to vision at night. In the shearwater's eyes the lens does most of the bending of light necessary to produce a focused image on the retina. The cornea, the outer covering of the eye, is relatively flat and so of low refractive power. In a diurnal bird like the pigeon, the reverse is true: the cornea is highly curved and is the principal refractive component. The ratio of refraction by the lens to that by the cornea is 1.6 for the shearwater and 0.4 for the pigeon; the figure for the shearwater is consistent with that for a range of nocturnal birds and mammals. The shorter focal length of shearwater eyes gives them a smaller, but brighter, image than is the case for pigeons, so the latter have sharper daytime vision. Although the Manx shearwater has adaptations for night vision, the effect is small, and it is likely that these birds also use smell and hearing to locate their nests. It used to be thought that penguins were far-sighted on land. Although the cornea is flat and adapted to swimming underwater, the lens is very strong and can compensate for the reduced corneal focusing when out of water. Almost the opposite solution is used by the hooded merganser, which can bulge part of the lens through the iris when submerged.
Summary: Students' background understanding of electricity and circuit-building is reinforced as they create wearable, light-up e-textile pins. They also tap their creative and artistic abilities as they plan and produce attractive end-product "wearables." Using fabric, LED lights, conductive thread (made of stainless steel) and small battery packs, students design and fabricate their own unique light-up pins. This involves putting together the circuitry so the sewn-in LEDs light up. Connecting electronics with stitching instead of soldering gives students a unique and tangible understanding of how electrical circuits operate. Electrical engineers play an important role in developing the countless pieces of technology we use in our daily lives. Engineers design the electrical circuits and batteries that are inside these everyday devices and appliances. Engineers must take seriously the responsibility to design circuits that work safely and dependably, which requires them to have an excellent understanding of electricity and the physics behind circuits. It is highly recommended that students have thorough background knowledge about how electricity and circuits work, since this activity does not explain electricity or circuits, but reinforces students' existing understanding of electricity by applying it to the creation of LED pins. See the Additional Multimedia Support section for some background resources. In addition, it is helpful if students are able to hand sew using needle and thread. After this activity, students should be able to: - Explain the basic concepts of electricity that are necessary to fabricate a product that requires a circuit. - Connect and build a working circuit using positive and negative traces. - Demonstrate basic sewing skills. More Curriculum Like This: Students are introduced to several key concepts of electronic circuits. They learn about some of the physics behind circuits, the key components in a circuit and their pervasiveness in our homes and everyday lives. Students learn about current electricity and necessary conditions for the existence of an electric current. Students construct a simple electric circuit and a galvanic cell to help them understand voltage, current and resistance. Students explore the composition and practical application of parallel circuitry, compared to series circuitry. Students design and build parallel circuits and investigate their characteristics, and apply Ohm's law. Students learn that charge movement through a circuit depends on the resistance and arrangement of the circuit components. In one associated hands-on activity, students build and investigate the characteristics of series circuits. In another activity, students design and build flashlights. Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards. All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org). In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
- Students will develop an understanding of the characteristics and scope of technology. (Grades K - 12) - Students will develop an understanding of the role of troubleshooting, research and development, invention and innovation, and experimentation in problem solving. (Grades K - 12) Each student needs: - 1 (or 2) LilyPad rainbow LEDs, such as from this set of seven LEDs in six colors for $5 from SparkFun at https://www.sparkfun.com/products/13903 - LilyPad coin cell battery holder, switched, 20 mm; such as from SparkFun for $4 at https://www.sparkfun.com/products/13883 - 3V 20-mm coin cell battery; such as CR2032 for $2 from SparkFun at https://www.sparkfun.com/products/338 - sewing needle, such as from this set of five needles for $2 from SparkFun at https://www.sparkfun.com/products/10405 - 1 pin back (see Figure 6), available at craft stores - Pre-Activity Safety Quiz, one per student - Sew What?! Instructions Sheet and/or Instructions Placemat, one per group - Post-Activity Comprehension Quiz, one per student To share with the entire class: - conductive thread, 2+ feet per student; such as 30-ft stainless steel thread bobbin for $3 from SparkFun at https://www.sparkfun.com/products/10867 - thin fabric (like muslin) to make the pins; ~3 x 3-inch piece per pin; the fabric color and pattern can also vary, as long as the fabric is thin enough for an LED light to shine through; pin shape and size can vary at the discretion of the teacher/students, as long as the 20-mm battery holder and pin fit on the back; see the Fabric Pin Templates-Color and Fabric Pin Templates-B&W for printable patterns - felt sheets; each pin needs a piece of felt cut to the same size as its thin fabric front - assorted items and craft supplies for decorating fabric, such as markers, pens, glitter glue, fabric stamps, sequins, paint, buttons, beads - hot glue gun and hot glue sticks - (optional) clear nail polish, to seal any fraying fabric Note: SparkFun provides a shopping cart list of the necessary SparkFun materials. What would life be like without cell phones? Without computers? Or without televisions and electronic games? All of these devices—and all pieces of technology, for that matter!—have something in common: They require a source of electricity and circuitry in order to function! Electrical engineers are the people who design the circuits inside all these devices, products and appliances. They must determine the best source of electricity for each circuit they design. For example, rechargeable batteries might be the power source that makes the most sense for cell phones, whereas wall outlets might be the most efficient and logical power source for kitchen appliances. Above all, electrical engineers must make sure that their designs are practical, functional, reliable and safe. Sometimes new products come to market and do not perform as intended or have flaws in their designs.
Remember the exploding Samsung Galaxy Note 7 smartphone? That doesn't happen very often because of all the careful planning and pre-release testing that goes into new products. In this activity, you will act as electrical engineers who are designing wearable, light-up pins that require battery-powered circuits. It is up to you to create designs that are both practical and functional—but don't worry—these pins won't explode! conductive thread: Thread spun of stainless steel (or silver-coated nylon) that carries current like wire, so it can be used to create circuits that are flexible, require no soldering, and are thus suitable for textile-based projects that use embedded electronics. Conductive thread tends to be more "twisty," and so more difficult to sew with, than non-conductive (regular) thread. e-textiles: Clothing or accessories that include electrical components such as LEDs. LED: An abbreviation for light-emitting diode, a type of semiconductor device called a diode that converts electrical energy into light. short circuit: An electrical circuit that permits a current to travel along an unintended path with no/low electrical impedance, usually resulting from the accidental contact of components that diverts the current. trace: A physical pathway of conductive material that permits electricity to travel between electronic components. Depending on students' prior knowledge about pertinent physics, electricity and circuits concepts, it may be useful to spend time going over some of these concepts before introducing this activity. Refer to the Additional Multimedia Support section for some useful resources. Before the Activity - Make activity planning decisions: What type of fabric to provide to students? Will they create their own designs or will you provide printable patterns? Will fabric printing be required? Will you provide students with one LED for each pin, or two (more complicated)? Consider pairing up students so they can help each other through the process. Consider introducing the activity several days in advance in order to give students the chance to gather their own fabric if they feel inclined to be that creative. - Depending on student age, arrange to have some adult assistants in the classroom to operate the glue guns, guided by students telling them where to place glue and what to glue together. - If desired, use the printable patterns, Fabric Pin Templates-B&W and Fabric Pin Templates-Color. - Make copies of the Pre-Activity Safety Quiz and Post-Activity Comprehension Quiz, one each per student, and the Sew What?! Instructions Sheet and/or Instructions Placemat, for each group. - Gather materials and verify that electronic equipment is in working order. If possible, have enough materials so that each student can make at least one light-up pin. - Bring students up to speed with a background understanding of electricity and/or circuits. With the Students - Present to the class the Introduction/Motivation content. Explain that the plan for the day is to design and create one (or more) LED pin each using LED lights, mini circuits, and conductive thread. - Review with the class the safety hazards associated with the project (refer to the Safety Issues section and the Pre-Activity Safety Quiz Answers) and then administer the pre-activity quiz. - Hand out the supplies, including the instructions sheet and/or placemat for student reference while connecting the circuits and fabricating the pins.
- Cut Pin Template, Design & Decorate: Give students time to design and create the fronts of their pins, which are made of thin fabric. This is a time for students to be creative and decide where in the design to incorporate the (one or two) LED that will shine through the thin fabric. - Minimum size requirement: Mention that pin designs must be large enough to fit the 20-mm coin cell battery holder and a safety pin on the back. - Patterns or free form: Either provide pre-printed pin patterns or let students create their own designs (size, shape) that accommodate the batteries and pin backs. - Decoration: Have students use markers, pens, glitter glue and other craft supplies to draw and color the pin designs on thin fabric. - Underneath the front of the pin, a layer of felt is necessary to attach the battery, circuit and LEDs. So have students each cut out a piece of felt that is of similar size and shape as their thin-fabric pin fronts. - Place Components: Use hot glue to position the technology components before the sewing step. Point out that the pin fabrication is somewhat like a layered sandwich and many of the various components must line up just right in layers on top of each other. Key steps: - On one side of the felt piece, use a small dot of hot glue on the back of a battery holder to glue it to the felt (see Figure 2). Make sure that the on/off switch is upright, and not on the glued side! Also, DO NOT cover the holes with glue, since they must remain unclogged for later sewing onto the felt using the conductive thread (the glue is just to tack it in place to make the sewing step easier). Also, DO NOT add the battery yet. - On the other side of the felt piece, which is the side that will be in contact with the front fabric, place the LED light(s) near, but not on top of, the battery holder. Make sure that the (+) and (-) symbols align with the symbols on top of the battery holder. Also make sure that each LED aligns with the intended pin designs on the front fabric. Then glue the LED board(s) in place, but DO NOT cover the holes. - Thread the Needle: Tell the students: We will be sewing paths with conductive thread for the electricity to travel from the battery pack to illuminate the LED light; these are called traces. Doing this connects the positive (+) sides of the components together, and then we’ll do the same for the negative (-) components. Essentially, we are “sewing a circuit” using conductive thread, which is made from stainless steel. - Students each thread their needles with 2 feet of conductive thread. - Then pull the thread through the eye and double it up. Pull the end that’s threaded through the eye so it meets up evenly with the other end of the thread. Then knot together those two tail ends (see Figure 3). This leaves you with a one-foot-long double-threaded needle, ready to sew. - Sew Positive Trace: Next, students complete the following steps: - On your battery holder, find the hole marked with a (+) symbol closest to the LED board on the other side of the felt. Push the needle up through the hole and the felt, then to the outside of the hole. This creates a stitch that holds the battery holder down and makes an electrical connection between the conductive thread and the metal around the hole. Repeat three times to make a secure connection (see Figure 4). - Then, using a straight stitch, follow your path to the next positive hole on the LED apparatus. Loop around the positive (+) hole three or four times to make a secure electrical connection. 
Once finished at the LED, tie a knot in the thread to secure its stitching to the felt. Trim away excess thread. (See Figure 5.) - Sew Negative Trace: Next, students tie a knot in the remaining thread on the needle, as done initially, or get a new piece of conductive thread, if necessary. Connect the negative (-) side of the battery holder to the negative side of the LED board, using the same technique used for sewing the positive trace. - Test and Troubleshoot the Circuit: Insert the coin cell battery into the battery holder with the positive side facing up (marked with a +). Turn ON the battery holder switch to see if the LED shines bright! If the LED is not lighting up or is dim, refer to the Troubleshooting Tips section. - Finishing Touches: Once the circuit is working: - Use hot glue to attach the decorated thin-fabric pin front to the felt layer. Before gluing, be sure the LED is lined up with the design as intended (see Figure 6). - Turn over the pin and hot glue a pin back to the same side of the felt where the battery holder is attached (see Figure 6). Then the project is complete! Show off your wearable, light-up art pin! - Conclude by administering the post-activity quiz, as described in the Assessment section. - Working with needles creates the obvious risk of skin pricks and pokes, but also keep needles well away from eyes. The same goes for the sharp pin backs. - Hot glue guns attain high temperatures and can burn skin that touches the hot part of the gun or any hot glue. Depending on student age, you may want to limit glue gun use to adults, with students guiding them on where to place glue and what to glue together. - It is important that batteries are NOT inserted into their battery packs until all positive and negative traces are completed. Though unlikely, electrical injury, such as shocks, could occur. - If fabric frays at the pin edges, use clear nail polish to seal the threads. - If students have trouble threading the needles, use a needle threader or a needle with a larger eye. Just be sure to avoid using a needle that is too big for the holes provided on the LilyPad components. - If a circuit is not lighting up (or is dim), try a new battery and make sure that the project is completely switched "ON." - Check the sewing for any loose conductive thread or ends that may be touching other threads or electronic components—all situations that might cause a short circuit. It pays to practice tidy stitching to keep the conductive thread from traveling to places where it shouldn't. See Figure 7 for examples of incorrectly sewn (wired) circuitry. - Check to make sure that the LED is in good working condition by replacing the LED bulb. - If you suspect the circuit connection is the issue, get additional troubleshooting tips at SparkFun's Lighting Up a Basic Circuit experiment page for e-textiles circuits. - What might happen if the positive (+) end of the battery pack were connected to the negative (-) end of the LED, and vice versa? - How could we change this design to attach multiple LEDs to the pin? (Answer: See SparkFun's extensive instructions for how to add more than one LED to a circuit.) Pre-Quiz: After presenting the Introduction/Motivation content, review with the class the safety hazards associated with the project, such as the use of needles and thread, hot glue, and batteries. Then administer the three-question Pre-Activity Safety Quiz. Alternatively, give students the quiz first, and then review the answers as a class.
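For teachers who want to quantify why a long or untidy run of conductive thread can leave an LED dim, the sketch below applies Ohm's law to a simplified version of this circuit. The battery voltage matches the 3 V coin cell in the materials list, but the thread resistance per foot, the fixed circuit resistance, and the LED forward voltage are assumed ballpark values, not specifications of the LilyPad parts.

# Rough estimate of how extra thread resistance dims the LED (values are assumptions).
battery_v = 3.0            # 3 V coin cell, per the materials list
led_forward_v = 2.0        # assumed LED forward voltage
other_resistance = 30.0    # assumed battery internal + LED board resistance, ohms
thread_ohms_per_ft = 15.0  # assumed resistance of stainless conductive thread

def led_current_ma(trace_length_ft):
    r_total = other_resistance + thread_ohms_per_ft * trace_length_ft
    return 1000 * (battery_v - led_forward_v) / r_total  # Ohm's law on the remaining 1 V

print(round(led_current_ma(0.5), 1), "mA with short, tidy traces")
print(round(led_current_ma(3.0), 1), "mA with long or wandering traces")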
Or conduct the quiz as an interactive discussion. Refer to the Safety Issues section and the Pre-Activity Safety Quiz Answers. Activity Embedded Assessment - Monitor Design: As students work through the project, circulate around the room to monitor their progress, answer any questions, and help with any troubleshooting. Post-Quiz: At activity end, administer the three-question Post-Activity Comprehension Quiz. Review students' answers and examine their finished pins to gauge their depth of understanding. Move on to more advanced e-textiles projects that involve more complicated circuits and art. For example, see SparkFun's multiple LED circuits version of this activity that incorporates more than one LED in a circuit. - For lower grades, use just one LED per pin and have student pairs work together on one pin. Focus less on the electrical circuit portion of the project and more on the art design portion so that students have a fun and successful STEM technology and art experience. - For higher grades, see the Activity Extensions section for more involved projects. Challenge students to come up with designs that require more complex circuits and traces, and use multiple LEDs. Additional Multimedia Support: Some helpful background information tutorials to introduce students to electricity concepts: Activity adapted from SparkFun's e-Textile Art Pin activity at https://learn.sparkfun.com/resources/86. "Circuits." TeachEngineering. Accessed December 29, 2016. https://www.teachengineering.org/lessons/view/cub_housing_lesson02 "Completing the Circuit." TeachEngineering. Accessed December 29, 2016. https://www.teachengineering.org/activities/view/cub_electricity_lesson03_activity1 "One Path." TeachEngineering. Accessed January 04, 2016. https://www.teachengineering.org/lessons/view/cub_electricity_lesson05 Contributors: Angela Sheehan; Emma Biesiada. Copyright © 2017 by Regents of the University of Colorado; original © 2014 SparkFun Education. Supporting Program: SparkFun Education. Last modified: February 16, 2018
Multiplication Drills Worksheets and Exercises. Looking for multiplication worksheets for 3rd- and 4th-grade children? Our free worksheets feature a variety of exercises, including factors up to 10 and 12, 2-digit times 1-digit, and 2-digit times 2-digit multiplication. With 20 exercises per page, your child can practice and improve their multiplication skills in no time. These worksheets are perfect for home or classroom use, and they are designed to be engaging and interactive. Start exploring our free worksheets today and give your child the tools they need to succeed in math! Factors up to 10: Our printable grade 3 worksheets are designed to help students practice multiplication with factors up to 10. With problems arranged horizontally or vertically, these worksheets offer students a range of challenges to improve their multiplication skills. Factors up to 12: Teachers and homeschoolers can challenge their students with our multiplication worksheet that focuses on factors up to 12. With a time limit, students can test their multiplication skills and work on improving their accuracy and speed. 2-digit times 1-digit: Our grade 4 multiplication worksheet includes 2-digit by 1-digit multiplication problems that are arranged in columns or rows. This worksheet helps students practice their multiplication skills and improve their accuracy with larger numbers. 2-digit times 2-digit: Practice 2-digit by 2-digit multiplication with our free printable worksheet. With problems that challenge students to multiply larger numbers, this worksheet is perfect for students looking to improve their multiplication skills.
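For anyone who would rather generate fresh drill sheets than reuse a fixed worksheet, a minimal sketch is shown below; the problem counts and factor ranges mirror the exercise types described above, but the function name and layout are purely illustrative.

import random

def make_drill(kind="facts", count=20):
    # Produce 'count' multiplication problems of the requested kind.
    problems = []
    for _ in range(count):
        if kind == "facts":            # factors up to 12
            a, b = random.randint(2, 12), random.randint(2, 12)
        elif kind == "2x1":            # 2-digit times 1-digit
            a, b = random.randint(10, 99), random.randint(2, 9)
        else:                          # 2-digit times 2-digit
            a, b = random.randint(10, 99), random.randint(10, 99)
        problems.append(f"{a} x {b} = ____")
    return problems

for line in make_drill("2x1", 5):
    print(line)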
Scientists have bolstered Albert Einstein's theory of general relativity by exploring the strange mysteries of white dwarf stars. Astronomers have long theorized about the relationship between a white dwarf star's mass and radius, but they haven't been able to observe the stars' mass-radius relationship until now, a new study shows. As white dwarf stars gain mass, they shrink in size, unlike most known celestial objects. In this new work, researchers used a novel method that incorporated data from thousands of white dwarfs to observe the strange phenomenon and provide further evidence for the theory of general relativity. When stars like our sun run out of fuel, they shed their outer layers and are stripped down to their Earth-sized core. This core is known as a white dwarf star, which is believed to be the final evolutionary state of a stellar object. But these stellar remnants hold a mystery: as white dwarfs increase in mass, they shrink in size. A white dwarf can therefore end up with a mass similar to that of the sun, but packed into a body the size of the Earth. If a white dwarf gains enough mass, it becomes so compact that it eventually collapses into a neutron star, a highly dense stellar corpse with a radius that usually does not extend beyond 18 miles (30 kilometers). The odd mass-radius relationship within white dwarf stars has been theorized about since the 1930s. The reason white dwarfs shrink as they gain mass is thought to lie in the state of their electrons: as a white dwarf is compressed, its electrons are squeezed ever closer together, and the quantum-mechanical pressure they exert is what supports the star. This mechanism is a combination of quantum mechanics — a fundamental theory in physics on the motion and interaction of subatomic particles — and Albert Einstein's theory of general relativity, which deals with gravitational effects. "The mass-radius relation is a spectacular combination of quantum mechanics and gravity, but it's counterintuitive for us," Nadia Zakamska, an associate professor in the Department of Physics and Astronomy at Johns Hopkins University, who supervised the new study, said in a statement. "We think as an object gains mass, it should get bigger." In this new study, the team from Johns Hopkins University developed a method to observe the mass-radius relationship in white dwarfs. Using data collected by the Sloan Digital Sky Survey and the Gaia space observatory, the researchers looked at 3,000 white dwarf stars. The team measured the gravitational redshift effect, which is the effect of gravity on light, for these stars. As light climbs away from a massive object, its wavelength lengthens, causing it to appear redder. By looking at the gravitational redshift effect, they were able to determine the radial velocities of white dwarf stars that share a similar radius. Radial velocity is the speed at which a star moves toward or away from the observer along the line of sight. By determining the stars' radial velocities, they were also able to determine the change in the stars' mass. "The theory has existed for a long time, but what's notable is that the dataset we used is of unprecedented size and unprecedented accuracy," Zakamska added. "These measurement methods, which in some cases were developed years ago, all of a sudden work so much better and these old theories can finally be probed." The method used in the study essentially turned a theory into an observational phenomenon.
Additionally, it can be used to study more stars in the future, and can help astronomers analyze the chemical composition of white dwarf stars. "Because the star gets smaller as it gets more massive, the gravitational redshift effect also grows with mass," Zakamska said. "And this is a bit easier to comprehend—it's easier to get out of a less dense, bigger object than it is to get out of a more massive, more compact object. And that's exactly what we saw in the data." The study was accepted for publication in The Astrophysical Journal and has been posted online to the preprint server arXiv.org.
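As a rough illustration of why the redshift grows as a white dwarf shrinks, the sketch below evaluates the standard weak-field gravitational redshift expression z = GM/(Rc²) for a sun-mass white dwarf with an Earth-like radius; the constants are textbook values, and the specific star is hypothetical rather than one from the study.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M = 1.989e30       # one solar mass, kg
R = 6.371e6        # Earth's radius in metres, used as an illustrative white dwarf radius

z = G * M / (R * c**2)        # weak-field gravitational redshift
v_apparent = z * c            # shows up in spectra as an apparent velocity
print(f"z ~ {z:.2e}, apparent shift ~ {v_apparent/1000:.0f} km/s")
# Halving the radius at fixed mass doubles z, so heavier (smaller) white dwarfs
# show larger redshifts, which is the trend the study measured.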
About the Book: This bestselling series has been updated to ensure teachers can deliver the revised Cambridge Secondary 1 programme for Mathematics with confidence. This brand-new Workbook for Stage 7 provides extra practice questions. There is a parallel exercise for each exercise in the Student's Book. Students can write their answers in the Workbook, which can be used for homework or in the classroom. Contents: Introduction; SECTION 1: Place value, ordering and rounding; Expressions; Shapes and geometrical reasoning; Length, mass and capacity; Collecting and displaying data; Addition and subtraction; SECTION 2: Integers, powers and roots; Equations and simple functions; Measurement and construction; Time; Averages; Multiplication and division 1; SECTION 3: Fractions, decimals and percentages; Sequences; Angle properties; Area and perimeter of rectangles; Probability; Multiplication and division 2; SECTION 4: Ratio and proportion; Formulae and substitution; Coordinates; Cubes and cuboids; Experimental and theoretical probability; Division and fractions of a quantity.
Normal Force Calculator. The normal force is the force that acts perpendicularly to a surface; it is a typical example of Newton's third law of motion: if one object exerts a force on a second object, the second object exerts a force of equal magnitude and opposite direction on the first object (action equals reaction). This online calculator calculates the strength of the normal force from the mass of the object, the gravitational field strength and the angle of the inclined surface measured from the horizontal; it helps you find the force that a surface exerts to prevent an object from falling through it. For an object lying on a flat surface, the formula is N = m · g, because the normal force counteracts the force of gravity entirely. That is only the case when there is no outside force acting on the object, or, if there is, the outside force is parallel to the surface. Let's see what happens if there is an external force that isn't parallel to the surface. If there is an outside force directed downwards, you need to add its perpendicular vector component to the weight of the object; this increases the normal force, because the outside force is pushing the object into the ground. (If the box were on a soft surface, this additional force might make it collapse.) If the force is directed straight upwards and equal to the gravitational force, the normal force is zero. In calculations including an external force, you should take into consideration only the vector component perpendicular to the surface, since that is the component that changes the normal force. Example: imagine there is a box lying on the ground that you want to move. It weighs 100 kg. Where an object rests on an incline, the normal force is perpendicular to the plane the object rests on; the gravitational force on the object is not opposite and equal to the normal force, but one of gravity's vector components is. For an object on an incline with an external upward force, N = m · g − F · sin(x), where N is the normal force, m the mass, g the gravitational field strength, x the angle of incline, and F the external force. In this case, where the applied force is aligned with the slope, it is perpendicular to the force normal and therefore does not affect it. You can also calculate the normal force for a sinking object that settles on a solid floor. For this example we are going to assume the friction we are calculating is between a flat block and the ground. The coefficient of friction is usually determined empirically, or in other words, through physical experiments; this example won't go into detail about how that is done. Note that the static coefficient μ(s) is generally not the same as the kinetic coefficient μ(k). The last step in this process is to analyze the results; this is a key step in any science problem or question. A related calculator will find the missing variable in the physics equation for force (F = m · a) when two of the variables are known. For example, the force required to accelerate an object with a mass of 20 kg at 3 m/s² is F = 20 kg · 3 m/s² = 60 N; in other words, a single newton is equal to the force needed to accelerate one kilogram at one meter per second squared. Another calculator determines the pressure generated by a force acting over a surface that is in direct contact with the applied load.
Therefore, the applied force, because it runs parallel to the slope, is completely along your "x" axis and does not affect the force normal (which is completely along your "new y" axis). The coefficient of friction between two surfaces is a difficult value to measure. From the definition of velocity, we can also find the velocity of a falling object: v = v₀ + g · t, where v₀ is the initial velocity (measured in m/s or ft/s), t stands for the fall time (measured in seconds), and g is the free fall acceleration (expressed in m/s² or ft/s²). Returning to the box example: you push on it at a 45-degree angle with 250 N of force.
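Putting the pieces above together, here is a minimal sketch that works the 100 kg box example; it assumes the 250 N push is angled 45 degrees downward into the floor (the text does not state the direction explicitly) and uses g = 9.81 m/s².

import math

m = 100.0        # kg, mass of the box from the example
g = 9.81         # m/s^2, gravitational field strength
F = 250.0        # N, applied force
angle = math.radians(45)

# Push angled downward into the floor: its vertical component adds to the weight.
N_push_down = m * g + F * math.sin(angle)   # about 1158 N
# Pull angled upward: the vertical component is subtracted, as in N = m*g - F*sin(x).
N_pull_up = m * g - F * math.sin(angle)     # about 804 N
print(round(N_push_down, 1), round(N_pull_up, 1))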
A giant comet found far out in the solar system may be 1,000 times more massive than a typical comet, making it potentially the largest ever found in modern times. The object, officially designated a comet on June 23, is called Comet C/2014 UN271 or Bernardinelli-Bernstein after its discoverers, University of Pennsylvania graduate student Pedro Bernardinelli and astronomer Gary Bernstein. Astronomers estimate this icy body has a diameter of 62 miles to 124 miles (100 to 200 km), making it about 10 times wider than a typical comet. This estimate is quite rough, however, as the comet remains far away from Earth and its size was calculated based on how much sunlight it reflects. The comet will make its closest approach to our planet in 2031 but will remain at quite a distance even then. "We have the privilege of having discovered perhaps the largest comet ever seen — or at least larger than any well-studied one — and caught it early enough for people to watch it evolve as it approaches and warms up," Bernstein said in a June 25 statement from the National Science Foundation's National Optical-Infrared Astronomy Research Laboratory, or NOIRLab. First spotted in archival images from the Dark Energy Survey taken in 2014, Comet Bernardinelli-Bernstein is now located at roughly the distance of Uranus, about 20 astronomical units (AU) from the sun. (One AU is the Earth-sun distance — about 93 million miles, or 150 million kilometers.) The comet shines at magnitude 20, putting it out of reach of most amateur astronomers' telescopes; by comparison, most people can see objects of magnitude 5 or 6 with the naked eye in dark conditions. When the comet swings closer to Earth in 2031, it will still be at 11 AU, which is a little more distant than Saturn's average orbit from the sun. Even then, amateur skywatchers will still need to use very large telescopes to see it, NSF stated. What makes Comet Bernardinelli-Bernstein so special, aside from its size, is the fact that it hasn't visited the inner solar system in three million years, roughly the same era in which the famous human ancestor "Lucy" was walking the Earth. The comet originated some 40,000 AU away from the sun in the Oort Cloud, which is a huge, distant region of space thought to hold trillions of comets. The comet popped up during a scan of archival images from the Dark Energy Survey, which uses a wide-field 570-megapixel CCD imager mounted on the Víctor M. Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory in Chile. The survey's main goal is mapping 300 million galaxies across a swath of the night sky, but its deep-sky observations have also yielded several comets and trans-Neptunian objects (TNOs), which are icy worlds orbiting beyond Neptune. Bernardinelli and Bernstein spotted the comet using computing resources at the National Center for Supercomputing Applications and Fermilab, identifying 800 TNOs from archival survey data. While the images of the comet didn't show a classic tail between 2014 and 2018, an independent observation from the Las Cumbres Observatory network in 2021 (after the comet's existence was made public) showed the comet now has a coma of gas and dust surrounding it. Studying the comet will not only give us more insight into how this massive object formed and evolves, but it also could shed light on the early history of giant planet movements in the solar system, NSF officials noted in the same press release.
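To make the magnitude figures above concrete, the short sketch below uses the standard astronomical magnitude scale (each step of 5 magnitudes corresponds to a factor of 100 in brightness) to compare the comet at magnitude 20 with a magnitude-6 star at the naked-eye limit.

# Brightness ratio between two apparent magnitudes: 100 ** ((m2 - m1) / 5)
m_comet = 20.0
m_naked_eye_limit = 6.0
ratio = 100 ** ((m_comet - m_naked_eye_limit) / 5)
print(f"A magnitude-6 star is ~{ratio:,.0f} times brighter than the comet")
# Roughly 400,000 times brighter, which is why very large telescopes are needed.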
"Astronomers suspect that there may be many more undiscovered comets of this size waiting in the Oort Cloud far beyond Pluto and the Kuiper Belt," NSF stated. "These giant comets are thought to have been scattered to the far reaches of the solar system by the migration of Jupiter, Saturn, Uranus and Neptune early in their history." While planned cometary observation campaigns are in their early stages, a typical big event usually gets attention from the largest telescopes in space and around the world. By 2031, several newer observatories may be online to look at Comet Bernardinelli-Bernstein. Upcoming major ground-based observatories include the NSF's and Department of Energy's Vera C. Rubin Observatory, whose first light is expected in 2022; the European Southern Observatory's Extremely Large Telescope, whose first light is expected by 2025; and the Giant Magellan Telescope which should be and running by the late 2020s. It's harder to predict if any spacecraft will be able to observe the comet's approach, because space missions tend to be shorter than the lifespans of ground-based scopes. It's possible, however, that a future telescope or mission could be funded by 2031 for comet observations that is not yet approved or even planned. The major space agencies may also task existing spacecraft across the solar system to look at Comet Bernardinelli-Bernstein, as happened near Mars in 2014 when Comet Siding-Spring zoomed past the Red Planet. NASA's James Webb Space Telescope is scheduled to launch in late 2021 for a prime mission of at least 5 1/2 years, although Webb could run for a decade or more if it remains healthy and funding is maintained, NASA says (opens in new tab). The Hubble Space Telescope (currently facing a problematic computer glitch) is famous for comet observations and may be available in 2031, although predictions say it could be healthy through the mid-2020s and will be deorbited no later than the 2030s. Follow Elizabeth Howell on Twitter @howellspace. Follow us on Twitter @Spacedotcom and on Facebook.
Microbes. They are invisible to the naked eye, but they play a critical role in keeping our planet habitable. They are everywhere, in abundant numbers, but are still difficult to find. They come in a multitude of varieties, but too often are difficult to distinguish from one another. Wherever there is water (fresh or salt), there are usually microbes—microscopic, single-celled organisms. In the ocean, they form an unseen cornucopia at the center of a food web that ultimately nourishes larger organisms, fish, and people. Their fundamental role in the ocean’s food supply makes them critical targets for study, and scientists would like to know much more about them. They would like to identify them and count them. They would like to learn more about how marine microorganisms (part of what we call plankton) eat, grow, reproduce, and interact with other organisms. They would like to determine how changes in the ocean might affect the microbial communities’ vitality and viability. Finding minuscule life forms in a seemingly infinite ocean isn’t trivial. But in recent years, oceanographers have been developing new techniques and instruments to identify and count marine microorganisms. Year by year, we are learning more and more about them and discovering that they are even more numerous, varied, and important than we previously thought. A diverse microbial community Some marine microbes are bacteria, or prokaryotes—simple cells with no specialized organelles, which are among the smallest of living things. Others are eukaryotes—larger and more complex cells with a nucleus, mitochondria, and other organelles. Eukaryotic microbes, also called protists, include both producers, such as algae, and consumers, such as protozoa. They thrive in a variety of habitats—living suspended in the water, in bottom sediments, or on other objects. They form communities, or assemblages, of different species that photosynthesize, consume each other, and are, in turn, consumed by other things in the ocean’s food web. In the last few years, we have considerably advanced our knowledge of the structure and function of these assemblages—particularly planktonic assemblages that we sample by collecting the water they inhabit. We now know that these plankton assemblages are diverse, composed of species with widely different sizes, growth rates, and nutrition. Not surprisingly, we know more about the larger protists (greater than 100 microns) than the smaller ones (under 20 microns). Larger protists are easily visible using light or electron microscopes. They have features that remain intact throughout procedures to sample, preserve, and examine them, which can break or distort cells. These features are often lacking in the smaller organisms; and if they are present, they are harder to see and characterize. Identifying protists has always involved some type of microscopic analysis, with someone looking at the shapes, or morphology, of the cells. But now we also use molecular methods—techniques that give scientists the ability to detect and identify the presence of even small protists based upon their DNA in water samples. Scientists have begun to describe the genetic composition of communities of species that live and interact in the same water. Our next objective is to overcome several technical challenges so that we can routinely monitor changes in protist populations over time. 
Sampling the invisible So far, all of our detection and identification techniques, both morphologic and molecular, have relied on collecting samples from remote sites and analyzing them in laboratories. But these techniques don’t give us all the information we need. Collecting samples from ships means physically taking separate water samples, at separate times, in separate places. Samples taken this way are, quite literally, just single samples—of one location at one time. They don’t provide a continuous picture of protists in a given area of the ocean. And they don’t allow us to detect how the protists respond to rapidly changing environmental conditions. What researchers want is the ability to collect and analyze samples over long time periods in the ocean, to have a continuous sampling and recording procedure, and to obtain data in as close to real time as possible. Overcoming engineering hurdles Several technical challenges, however, still make it difficult to remotely detect and count microbes in their own environment. One is the number of organisms, or microscopic cells, in a given water sample. In most marine planktonic environments, microbes are present in low numbers and organisms targeted for study may only be a small proportion of the total population. To overcome this low density, researchers in the laboratory must often concentrate several liters of water into a much smaller volume for analysis by passing it through filters designed to retain the protists, then resuspending them in smaller volumes for analysis. Once water samples are collected and concentrated, microbes can be analyzed in several ways, so automated systems must be designed to accommodate the analysis method. For instance, if scientists want to use only the organisms’ genetic material to identify them, collection systems must be able to break open cells and collect their DNA. If they want to study the whole organisms, though, the systems must keep the cells intact. In fact, researchers are already developing instruments that can either detect a genetic signal from a microbial population or monitor one of its biological activities—and do it autonomously, without requiring scientists to be on the scene. They can be pre-programmed to collect water samples over time periods ranging from hours to months and spaces ranging from inches to miles—depending on the particular microbes and biological activities the scientists want to study. These instruments inject water into flexible bags containing a solution that preserves the cells for later examination. SID, ESP, and FlowCytobot Three examples of instruments for remote analysis of marine microbes do solve many of the technical problems. The Environmental Sample Processor, ESP, developed by Chris Scholin at Monterey Bay Aquarium Research Institute (MBARI), attaches to a mooring anchored to the ocean bottom and collects and preserves water samples. It extracts nucleic acids from the protists in the water and detects specific organisms by their DNA. It can also preserve samples for microscope analysis in the laboratory. Researchers have already used it to detect species that cause harmful algal blooms and to distinguish types of planktonic larvae in the ocean. It will soon have even greater capacity to detect and distinguish organisms. The Submersible Incubation Device, SID, a moored instrument developed by Craig Taylor at WHOI, determines levels of photosynthesis in the water around it by robotically measuring carbon dioxide taken up by phytoplankton in the samples. 
Up to 50 of these experiments can be performed before the instrument needs to be removed from the ocean to analyze the samples and determine what species are present. A third instrument, FlowCytobot, is a submersible flow cytometer, a device that counts single cells flowing through it. Developed by Robert Olson at WHOI, it is also anchored to the seafloor near the coast. It counts and analyzes microbial cells in the water continuously for up to two months. FlowCytobot identifies microbes by the way they scatter light, or by the way certain pigments in the cells emit fluorescent light. Because it samples continuously, scientists can see changes in plankton populations over time that cannot be detected by traditional sampling.

A coastal observatory network

The ultimate goal is a continuous, remote system that can detect, distinguish, and count microbes in the environment. In the laboratory, scientists can do all these things by filtering samples, identifying DNA within them, and examining microbes under microscopes. But designing, programming, and building a system to carry out all of these steps remotely is a challenge. One of the difficulties is that DNA analysis requires heat, which requires power. Remotely deployed instruments depend on batteries for power, and adding batteries quickly makes instruments too heavy, big, and costly to build. To overcome this hurdle, scientists have sought a viable alternative: long-term installations of instruments powered by cables from a nearby shore. In recent years, several coastal ocean observatories have been built that have cables linking power nodes on the ocean floor with shore-based facilities. One of these is near Woods Hole, at the Martha's Vineyard Coastal Observatory (MVCO). Instruments plugged into seafloor nodes receive power from the cables and transmit data back via the cables. This level of available power has stimulated the development of new biological sensors and methods that will let scientists take measurements continuously and accurately.

In the lab, we are working to develop and assemble several instrument modules into the FlowCytobot automated system to install at the MVCO. The system will detect microbial cells, identify them genetically, and obtain accurate counts of particular species. It will let us monitor specific microbial populations that play significant roles in the food web and detect changes taking place on a daily basis.

The development of new sensors is also important to national efforts to build an infrastructure of ocean observation systems. Ocean observatories are the wave of the future in many fields of oceanography. Some will monitor coastal water; others will monitor the open ocean. Many already exist, and many more are being planned through several national programs. These programs will incorporate existing coastal observatories into a network, expanding their research capabilities and building more at key coastal sites. We will use the observatories, each with seafloor cables supplying power, to collect and share information on a previously intractable microbial world: the broad group of tiny cells that control the coastal ocean's food supply.

A Quick Guide to Ocean Observation System Acronyms

A network of ocean observation systems will collect continuous, reliable information about our coastal ocean.
Moored and mobile installations will carry instruments and sensors that sample and measure environmental variables, then transmit the data to computer systems that store, analyze, and model the data to describe and predict oceanic conditions. Some of the following acronyms denote permanent observatories in coastal water, some denote deep or open water observing systems, and some denote programs or organizations that have responsibility for planning or oversight of ocean-observing networks.

Ocean Observing Systems: The U.S. has more than forty observatory installations to monitor coastal ocean conditions, with more planned.
Integrated and Sustained Ocean Observing System: The nationwide network of ocean observatories on platforms such as ships, airplanes, satellites, and buoys.
Global Ocean Observing System: An international program to create a permanent global system.
Ocean Research Interactive Observatory Networks: The national program that coordinates the science, technology, and outreach of the observatory network.
Consortium of Coastal Ocean Observatories: Observatories on the U.S. East Coast.
Martha's Vineyard Coastal Observatory: Operated by WHOI; provides real-time oceanographic and meteorological data.
Gulf of Maine Ocean Observing System: National pilot program posting hourly oceanographic data from the Gulf of Maine.
Long-term Ecosystem Observatory: Observatory operated by Rutgers University off the coast of New Jersey, collecting data from satellites, aircraft, ships, moorings, and autonomous underwater vehicles.
In this article, we will learn how to find outliers in Excel.

What are outliers? Outliers are values that lie outside the range of the general data, meaning they are either much higher or much lower than the rest of the data values.

Why do we need to remove outliers? Outliers distort our data and its representation. When we graph the complete data set, a few extremely high or extremely low values compress the scale, so the rest of the values and the overall range of the graph become hard to read. Let us create some simple data. In this example, we can clearly see that some values lie well outside the rest, e.g. 200.

Let's discuss the procedure for finding and removing outliers. Start by finding the first and third quartiles:
1st quartile: =QUARTILE(A2:A14,1)
3rd quartile: =QUARTILE(A2:A14,3)

How do we calculate the interquartile range? Because the interquartile range (IQR) is a range, you just subtract: IQR = Q3 - Q1. Calculating the IQR lets us compute the lower and upper bounds of the data:
Lower bound = Q1 - 1.5 * IQR
Upper bound = Q3 + 1.5 * IQR

Now find the outliers by using the lower and upper bounds:
=OR(A2<$F$2,A2>$G$2)
Anything lower than the lower bound or higher than the upper bound is an outlier.

A box plot, also known as a box-and-whisker plot, is a simple way to visualize the distribution of data and identify outliers. To create a box plot in Excel, you'll need the quartiles (QUARTILE), the median (MEDIAN), and the MIN and MAX functions.

Another method to find outliers is to use the standard deviation. If a value is more than a certain number of standard deviations away from the mean, it's considered an outlier. To find outliers using this method, you'll need to calculate the mean, the standard deviation, and a z-score for each value in your data set.

A third option is to use conditional formatting to highlight outliers in your data set. To do this, you can use a formula that compares each value to the mean or median, and then format the cells that are above or below a certain threshold.

The method you choose will depend on the type of data you're working with and the results you're trying to achieve. Each of these methods has its own strengths and limitations, so it's important to understand each one and choose the method that best meets your needs.
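If you want to sanity-check the same rule outside Excel, here is a minimal Python sketch of the quartile/IQR fences described above, together with the z-score variant. The sample values, the 1.5 multiplier, and the 3-standard-deviation cutoff are illustrative assumptions, and NumPy's default quantile interpolation may differ slightly from Excel's QUARTILE, so the bounds need not match Excel to the last decimal.

import numpy as np

# Illustrative data set with one obvious outlier (200); not the article's actual sheet values
values = np.array([10, 12, 11, 14, 9, 13, 12, 15, 11, 10, 13, 12, 200], dtype=float)

# Quartiles and interquartile range (Excel: =QUARTILE(A2:A14,1) and =QUARTILE(A2:A14,3))
q1, q3 = np.quantile(values, [0.25, 0.75])
iqr = q3 - q1

# IQR fences: anything below Q1 - 1.5*IQR or above Q3 + 1.5*IQR is flagged
lower_bound = q1 - 1.5 * iqr
upper_bound = q3 + 1.5 * iqr
iqr_outliers = values[(values < lower_bound) | (values > upper_bound)]

# Standard-deviation (z-score) variant: flag values more than 3 standard deviations from the mean
z_scores = (values - values.mean()) / values.std(ddof=1)
z_outliers = values[np.abs(z_scores) > 3]

print("IQR bounds:", lower_bound, upper_bound)
print("IQR outliers:", iqr_outliers)
print("z-score outliers:", z_outliers)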
(This topic is also in Section 1.3 in Finite Mathematics, Applied Calculus and Finite Mathematics and Applied Calculus.)

Q: What is a linear function?
A: A linear function is one whose graph is a straight line (hence the term "linear").

Q: How do we recognize a linear function algebraically?
A: A linear function is one that can be written in the form f(x) = mx + b (equivalently, y = mx + b), where m and b are fixed numbers.

Here is a partial table of values of the linear function f(x) = 3x - 1; fill in the missing values. Plotting a few of these points gives the following graph.

The Role of b in the equation y = mx + b

Let us look more closely at the above linear function, y = 3x - 1, and its graph, shown above. This linear equation has m = 3 and b = -1. Notice that setting x = 0 gives y = -1, the value of b. On the graph, the corresponding point (0, -1) is the point where the graph crosses the y-axis, and we say that b = -1 is the y-intercept of the graph.

The Role of m in the equation y = mx + b

Notice from the table that the value of y increases by m = 3 for every increase of 1 in x. This is caused by the term 3x in the formula: for every increase of 1 in x we get an increase of 3 × 1 = 3 in y. On the graph, the value of y increases by exactly 3 for every increase of 1 in x, so the graph is a straight line rising by 3 units for every 1 unit we go to the right. We say that we have a rise of 3 units for each run of 1 unit. Similarly, we have a rise of 6 for a run of 2, a rise of 9 for a run of 3, and so on. Thus we see that m = 3 is a measure of the steepness of the line; we call m the slope of the line.

Here is the graph of y = 0.5x + 2, so that b = 2 (y-intercept) and m = 0.5 (slope). Notice that the graph cuts the y-axis at b = 2, and goes up 0.5 units for every one unit to the right. Here is a more general picture showing two "generic" lines in the graph of y = mx + b: one with positive slope, and one with negative slope.

Mathematicians traditionally use Δ (delta, the Greek equivalent of the Roman letter D) to stand for "difference," or "change in." For example, we write Δx to stand for "the change in x."

Let us take another look at the linear equation y = 3x - 1. Now we know that y increases by 3 for every 1-unit increase in x. Similarly, y increases by 3 × 2 = 6 for every 2-unit increase in x. In general, y increases by 3Δx units for every change of Δx units in x:

Δy = 3 Δx   (change in y = 3 × change in x),   so   Δy/Δx = 3 = slope.

Q: How do these changes show up on the graph?
A: Here again is the graph of y = 3x - 1, showing two different choices for Δx and the associated Δy.

Slope of a Line

The slope of a line is given by the ratio

slope = Δy/Δx (rise over run).

Definition of the Slope: For positive m, the graph rises m units for every 1-unit move to the right, and rises Δy = m Δx units for every Δx units moved to the right. For negative m, the graph drops |m| units for every 1-unit move to the right, and drops |m| Δx units for every Δx units moved to the right.

Q: Two points, say (x1, y1) and (x2, y2), determine a line in the xy-plane. How do we find its slope?
A: Look at the following figure. As you can see in the figure, the rise is Δy = y2 - y1, the change in the y-coordinate from the first point to the second, while the run is Δx = x2 - x1, the change in the x-coordinate.
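To make the Δ notation concrete, here is a small worked check; the two x-values (2 and 5) are chosen purely for illustration and are not taken from the tutorial's table. For y = 3x - 1:

\[
\begin{aligned}
\Delta x &= 5 - 2 = 3,\\
\Delta y &= \bigl(3(5) - 1\bigr) - \bigl(3(2) - 1\bigr) = 14 - 5 = 9,\\
\frac{\Delta y}{\Delta x} &= \frac{9}{3} = 3 = m.
\end{aligned}
\]

The change in y is indeed three times the change in x, exactly as Δy = 3Δx predicts.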
Computing the Slope of a Line

We can compute the slope m of the line through the points (x1, y1) and (x2, y2) using

m = Δy/Δx = (y2 - y1)/(x2 - x1).

Before trying the exercises, you should go on to the next tutorial: Part B: Finding the Equation of a Line.
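For readers who like to verify such calculations numerically, here is a minimal Python sketch of the two-point slope formula. The function name and the sample points are illustrative choices, not part of the tutorial:

def slope(p1, p2):
    """Slope of the line through p1 = (x1, y1) and p2 = (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

# Two points on the line y = 3x - 1 (chosen for illustration)
print(slope((0, -1), (2, 5)))  # 3.0, matching the slope m = 3 read off from y = 3x - 1

The guard against x2 == x1 reflects the fact that a vertical line has no defined slope.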
Common Core 9th Grade Writing Standards for English

LiteracyTA provides writing skills that Common Core educators use to teach Common Core 9th Grade Writing Standards for English. The Common Core literacy standards are the what. The skills below and the related eCoach discussions are the how. In the table below, you will find next to each Common Core writing standard practical skills, classroom resources, rich conversations and teaching ideas that move all students toward achieving Common Core standards.

Common Core Literacy Standards

Text types and purposes:
- Analyzing Research Prompts
- Analyzing Text-Dependent Prompts
- Argument Writing Plan
- Building a Reading and Writing Plan
- Debate a Side
- One Minute Speech
- Problem and Solution Organizer
- Taking Research Notes
- Timed Writing Process
- A Writing Process for All
- Understanding Argument Writing Standards
- Engage in Structured Debates
- 5 Steps to Teaching Argumentative Writing
- 6 Steps for Teaching Types of Writing
- Cause and Effect Organizer
- Compare and Contrast Organizer
- Informative Essay Writing Plan
- Summary: The Pathway to College Success
- 8 Research Lessons for African-American History Month
- Understanding the Writing Standards: A Narrative (or is it Informative?)
- 3 Ways to Prepare Students for the New State Tests
- Narrative Essay Writing Plan
- Narrative Story Writing Plan
- Let's Start Collaborating: Short Writing Tasks
- CCR-Aligned Reading and Writing Processes

Production and distribution of writing:
- Peer Review Groups
- Three Step Writing Process
- Getting Started with Pair Peer Review
- New and Exciting Writing Resources on TA
- Supporting Your Tech Initiatives: Technology in Your Standards
- Bringing the Socrative Seminar to the 21st Century
- Digitally Mark Texts and Take Notes Using Mobile Devices, Laptops, and PCs
- Add LiteracyTA to Your Mobile Device
- Facilitating On-line Collaboration and Argument Analysis with Google+ Hangout

Research to build and present knowledge:
- Six Organizers at a Glance
- Top 10 Topics to Write About/Research in 2015-2016
- Examining Common Core Anchor Standard W7
- Team Teaching with Your Librarian
- Four Square Organizer
- Finally, A Way to Understand Text Complexity
- Fictional Character Web
- Story Summary Poster
- Three Group Socratic Seminar
- Writing about Rhetorical Devices
- Fun and Free Reading Program

Range of writing.
Essay Topics for Class 9

Essay topics for Class 9th students: Essay on Wonders of Science, Essay on Conservation of Environment, Essay on Relationship, Essay on My Best Friend, Essay on My Family, Essay on My Favorite Teacher, Essay on My Mother, Essay on Myself, Essay on Har Ghar Tiranga, Essay on 15 August, Essay on Flag Code of India, Essay on Draupadi Murmu, Essay on Agneepath Scheme, Essay on Rainy Season, Essay on Rainy Day, Essay on Holi Festival, Essay on Why Holika Dahan is Celebrated a Day before Holi, Essay on Is Holi a Harvest or Religious Festival, Essay on Online Shopping, Essay on Advantages and Disadvantages of Online Shopping, Essay on Family Planning in India, Essay on Farmers' Suicide in India, Essay on Save Fuel, Essay on Global Terrorism, Essay on Junk Food, Essay on Google, Essay on Robotics and Machine Learning, Essay on Lohri Festival, Essay on Online Education, Essay on Supercomputer, Essay on Parakram Diwas in India, Essay on Central Vigilance Commission, Essay on Internet of Things (IoT), Essay on Hyperloop, Essay on Growing Trends of Privatization, Essay on One Nation One Election, Essay on Online Schooling: Can It Be the Future of Education?, Essay on Impact of Rising Oil Prices on the Indian Economy, Essay on E-Diplomacy, Essay on Growing Pollution in Rivers, Essay on Why I Want to Become a Physical Therapist, Essay on Why I Want to Become an Engineer, Essay on Adult Education, Essay on Forest: A Precious Gift of Nature, Essay on Role of Youth in Nation Building, Essay on Artificial Intelligence, Essay on Cybercrime, Essay on Wildlife Conservation, Essay on Indian Politics, Essay on Alcohol Ban, Essay on Water Scarcity, Essay on Teamwork, Essay on Life in an Indian Village, Essay on Traffic Jam, Essay on Afforestation, Essay on Women's Education in India, Essay on Reading Is a Good Habit, Essay on Vocational Education, Essay on Life after School, Essay on Make in India Scheme, Essay on Democracy in India, Essay on Life of Soldiers, Essay on Indian Economy, Essay on Indian Army, Essay on Fundamental Rights, Essay on How I Spent My Summer Vacation, Essay on What I Learned during Lockdown, Essay on How to Curb Covid-19, Essay on Coronavirus Disease, Essay on Covid-19, Essay on Lockdown, Essay on Why Republic Day Is Celebrated on 26 January, Essay on What I Did during Lockdown, Essay on How I Spent the Lockdown Period, Essay on Digital India, Cashless India Essay, Essay on Child Is Father of the Man, Essay on Causes, Effects and Prevention of Coronavirus, Essay on Dr. Sarvepalli Radhakrishnan, Durga Puja Essay, Dussehra Essay, Essay on Summer Vacation, Essay on My Plans for Summer Vacation.

Practicing essay writing enriches students' knowledge and creativity. To write a good essay, students need to think through and explore the connected points of the subject so that the essay is sensible and impressive. Here, we have compiled a list of top essay topics for Class 9 students. You can read these essays by opening them in a new tab.
Essay topics for Class 9:
- Essay on Beat Plastic Pollution
- Essay on Spring Season
- Essay on Rainy Season
- Essay on Har Ghar Tiranga
- Essay on My Vision for India @ 2047
- Essay on Online Shopping
- Essay on Summer Vacation
- Essay on My School
- Essay on My Family
- Essay on My Mother
- Essay on My Father
- My Favourite Teacher
- My Aim in Life
- Importance of Reading Books
- My Visit to Taj Mahal
- Importance of Trees
- Essay on Computer
- Advantages and Disadvantages of Internet
- Essay on Mobile Phones
- My Best Friend

You can read these essays by clicking on an essay topic, which will redirect you to the essay page. These are the most important essay topics for Class 9 students.

Essay topics for Grade 9:
- Essay on Mahatma Gandhi
- Missile Man of India
- Essay on Republic Day
- Essay on Independence Day
- Essay on Constitution Day
- Essay on Mother's Day
- Essay on Father's Day
- Essay on Teachers' Day
- Essay on Holi
- Essay on Krishna Janmashtami
- Essay on Raksha Bandhan
- Essay on Diwali
- Essay on Christmas Festival
- Essay on Pollution
- Essay on Environmental Pollution
- Essay on Noise Pollution
- Essay on Water Pollution

The above essay topics are based on important personalities, important national and international days, and popular festivals celebrated in India and across the world. We hope you like these essay topics for Grade 9 students and that they help you in your exams.

Essay topics for Class 9 students:
- Essay on International Day of Yoga
- Essay on Water Day
- Essay on World Environment Day
- Essay on Earth Day
- Essay on Population Day
- Essay on Sports Day
- Essay on Christmas Day
- Essay on Human Rights Day
- Essay on Benefits of Yoga

The essay topics above are based on important national and international days. These days are celebrated worldwide on different dates during the calendar year to highlight the importance of the subject of each date, which makes them very important essay topics for Class 9 students. We hope you liked this article on essay topics for Class 9 students and that it helps you in your exam preparation.
Example of a Great Essay | Explanations, Tips & Tricks

Published on February 9, 2015 by Shane Bryson. Revised on July 23, 2023 by Shona McCombes.

This example guides you through the structure of an essay. It shows how to build an effective introduction, focused paragraphs, clear transitions between ideas, and a strong conclusion. Each paragraph addresses a single central point, introduced by a topic sentence, and each point is directly related to the thesis statement.

An Appeal to the Senses: The Development of the Braille System in Nineteenth-Century France

The invention of Braille was a major turning point in the history of disability. The writing system of raised dots used by visually impaired people was developed by Louis Braille in nineteenth-century France. In a society that did not value disabled people in general, blindness was particularly stigmatized, and lack of access to reading and writing was a significant barrier to social participation. The idea of tactile reading was not entirely new, but existing methods based on sighted systems were difficult to learn and use. As the first writing system designed for blind people's needs, Braille was a groundbreaking new accessibility tool. It not only provided practical benefits, but also helped change the cultural status of blindness. This essay begins by discussing the situation of blind people in nineteenth-century Europe. It then describes the invention of Braille and the gradual process of its acceptance within blind education. Subsequently, it explores the wide-ranging effects of this invention on blind people's social and cultural lives.

Lack of access to reading and writing put blind people at a serious disadvantage in nineteenth-century society.
Text was one of the primary methods through which people engaged with culture, communicated with others, and accessed information; without a well-developed reading system that did not rely on sight, blind people were excluded from social participation (Weygand, 2009). While disabled people in general suffered from discrimination, blindness was widely viewed as the worst disability, and it was commonly believed that blind people were incapable of pursuing a profession or improving themselves through culture (Weygand, 2009). This demonstrates the importance of reading and writing to social status at the time: without access to text, it was considered impossible to fully participate in society. Blind people were excluded from the sighted world, but also entirely dependent on sighted people for information and education. In France, debates about how to deal with disability led to the adoption of different strategies over time. While people with temporary difficulties were able to access public welfare, the most common response to people with long-term disabilities, such as hearing or vision loss, was to group them together in institutions (Tombs, 1996). At first, a joint institute for the blind and deaf was created, and although the partnership was motivated more by financial considerations than by the well-being of the residents, the institute aimed to help people develop skills valuable to society (Weygand, 2009). Eventually blind institutions were separated from deaf institutions, and the focus shifted towards education of the blind, as was the case for the Royal Institute for Blind Youth, which Louis Braille attended (Jimenez et al, 2009). The growing acknowledgement of the uniqueness of different disabilities led to more targeted education strategies, fostering an environment in which the benefits of a specifically blind education could be more widely recognized. Several different systems of tactile reading can be seen as forerunners to the method Louis Braille developed, but these systems were all developed based on the sighted system. The Royal Institute for Blind Youth in Paris taught the students to read embossed roman letters, a method created by the school’s founder, Valentin Hauy (Jimenez et al., 2009). Reading this way proved to be a rather arduous task, as the letters were difficult to distinguish by touch. The embossed letter method was based on the reading system of sighted people, with minimal adaptation for those with vision loss. As a result, this method did not gain significant success among blind students. Louis Braille was bound to be influenced by his school’s founder, but the most influential pre-Braille tactile reading system was Charles Barbier’s night writing. A soldier in Napoleon’s army, Barbier developed a system in 1819 that used 12 dots with a five line musical staff (Kersten, 1997). His intention was to develop a system that would allow the military to communicate at night without the need for light (Herron, 2009). The code developed by Barbier was phonetic (Jimenez et al., 2009); in other words, the code was designed for sighted people and was based on the sounds of words, not on an actual alphabet. Barbier discovered that variants of raised dots within a square were the easiest method of reading by touch (Jimenez et al., 2009). This system proved effective for the transmission of short messages between military personnel, but the symbols were too large for the fingertip, greatly reducing the speed at which a message could be read (Herron, 2009). 
For this reason, it was unsuitable for daily use and was not widely adopted in the blind community. Nevertheless, Barbier's military dot system was more efficient than Hauy's embossed letters, and it provided the framework within which Louis Braille developed his method. Barbier's system, with its dashes and dots, could form over 4000 combinations (Jimenez et al., 2009). Compared to the 26 letters of the Latin alphabet, this was an absurdly high number. Braille kept the raised dot form, but developed a more manageable system that would reflect the sighted alphabet. He replaced Barbier's dashes and dots with just six dots in a rectangular configuration (Jimenez et al., 2009). The result was that the blind population in France had a tactile reading system using dots (like Barbier's) that was based on the structure of the sighted alphabet (like Hauy's); crucially, this system was the first developed specifically for the purposes of the blind.

While the Braille system gained immediate popularity with the blind students at the Institute in Paris, it had to gain acceptance among the sighted before its adoption throughout France. This support was necessary because sighted teachers and leaders had ultimate control over the propagation of Braille resources. Many of the teachers at the Royal Institute for Blind Youth resisted learning Braille's system because they found the tactile method of reading difficult to learn (Bullock & Galst, 2009). This resistance was symptomatic of the prevalent attitude that the blind population had to adapt to the sighted world rather than develop their own tools and methods. Over time, however, with the increasing impetus to make social contribution possible for all, teachers began to appreciate the usefulness of Braille's system (Bullock & Galst, 2009), realizing that access to reading could help improve the productivity and integration of people with vision loss. It took approximately 30 years, but the French government eventually approved the Braille system, and it was established throughout the country (Bullock & Galst, 2009).

Although blind people remained marginalized throughout the nineteenth century, the Braille system granted them growing opportunities for social participation. Most obviously, Braille allowed people with vision loss to read the same alphabet used by sighted people (Bullock & Galst, 2009), allowing them to participate in certain cultural experiences previously unavailable to them. Written works, such as books and poetry, had previously been inaccessible to the blind population without the aid of a reader, limiting their autonomy. As books began to be distributed in Braille, this barrier was reduced, enabling people with vision loss to access information autonomously. The closing of the gap between the abilities of the blind and the sighted contributed to a gradual shift in blind people's status, lessening the cultural perception of the blind as essentially different and facilitating greater social integration.

The Braille system also had important cultural effects beyond the sphere of written culture. Its invention later led to the development of a music notation system for the blind, although Louis Braille did not develop this system himself (Jimenez et al., 2009). This development helped remove a cultural obstacle that had been introduced by the popularization of written musical notation in the early 1500s.
While music had previously been an arena in which the blind could participate on equal footing, the transition from memory-based performance to notation-based performance meant that blind musicians were no longer able to compete with sighted musicians (Kersten, 1997). As a result, a tactile musical notation system became necessary for professional equality between blind and sighted musicians (Kersten, 1997).

Braille paved the way for dramatic cultural changes in the way blind people were treated and the opportunities available to them. Louis Braille's innovation was to reimagine existing reading systems from a blind perspective, and the success of this invention required sighted teachers to adapt to their students' reality instead of the other way around. In this sense, Braille helped drive broader social changes in the status of blindness. New accessibility tools provide practical advantages to those who need them, but they can also change the perspectives and attitudes of those who do not.

References

Bullock, J. D., & Galst, J. M. (2009). The story of Louis Braille. Archives of Ophthalmology, 127(11), 1532. https://doi.org/10.1001/archophthalmol.2009.286
Herron, M. (2009, May 6). Blind visionary. Retrieved from https://eandt.theiet.org/content/articles/2009/05/blind-visionary/
Jiménez, J., Olea, J., Torres, J., Alonso, I., Harder, D., & Fischer, K. (2009). Biography of Louis Braille and invention of the Braille alphabet. Survey of Ophthalmology, 54(1), 142–149. https://doi.org/10.1016/j.survophthal.2008.10.006
Kersten, F. G. (1997). The history and development of Braille music methodology. The Bulletin of Historical Research in Music Education, 18(2). Retrieved from https://www.jstor.org/stable/40214926
Mellor, C. M. (2006). Louis Braille: A touch of genius. Boston: National Braille Press.
Tombs, R. (1996). France: 1814–1914. London: Pearson Education Ltd.
Weygand, Z. (2009). The blind in French society from the Middle Ages to the century of Louis Braille. Stanford: Stanford University Press.

Frequently asked questions about writing an essay

An essay is a focused piece of writing that explains, argues, describes, or narrates. In high school, you may have to write many different types of essays to develop your writing skills. Academic essays at college level are usually argumentative: you develop a clear thesis about your topic and make a case for your position using evidence, analysis and interpretation.

The structure of an essay is divided into an introduction that presents your topic and thesis statement, a body containing your in-depth analysis and arguments, and a conclusion wrapping up your ideas. The structure of the body is flexible, but you should always spend some time thinking about how you can organize your essay to best serve your ideas.

Your essay introduction should include three main things, in this order:
- An opening hook to catch the reader's attention.
- Relevant background information that the reader needs to know.
- A thesis statement that presents your main point or argument.

The length of each part depends on the length and complexity of your essay.

A thesis statement is a sentence that sums up the central point of your paper or essay. Everything else you write should relate to this key idea. A topic sentence is a sentence that expresses the main point of a paragraph. Everything else in the paragraph should relate to the topic sentence.

At college level, you must properly cite your sources in all essays, research papers, and other academic texts (except exams and in-class exercises). Add a citation whenever you quote, paraphrase, or summarize information or ideas from a source. You should also give full source details in a bibliography or reference list at the end of your text. The exact format of your citations depends on which citation style you are instructed to use. The most common styles are APA, MLA, and Chicago.

Reader question: Just wanted to ask if it's correct to give a topic to each paragraph.
Jack Caulfield (Scribbr Team): Yes, you should usually try to focus each paragraph on a clear topic and introduce it with a topic sentence suggesting what it will be about. Note that this doesn't mean titling each paragraph, just making sure they're organized appropriately in terms of topic.
English 9th Std (TN 9th English, English Medium): online study material with important questions and answer keys, book-back exercise answers and solutions, question papers, and textbook downloads (PDF).

- Unit 1 Prose: Learning the Game - by Sachin Tendulkar
- Unit 1 Poem: Stopping by Woods on a Snowy Evening - by Robert Frost
- Unit 1 Supplementary: The Envious Neighbour - a Japanese folk tale
- Unit 2 Prose: I Can't Climb Trees Anymore - by Ruskin Bond
- Unit 2 Poem: A Poison Tree - by William Blake
- Unit 2 Supplementary: The Fun They Had - by Isaac Asimov
- Unit 3 Drama: Old Man River - by Dorothy Deming
- Unit 3 Poem: On Killing a Tree - by Gieve Patel
- Unit 3 Supplementary: Earthquake - by M. S. Mahadevan
- Unit 4 Prose: Seventeen Oranges - by Bill Naughton
- Unit 4 Poem: The Spider and the Fly - by Mary Botham Howitt
- Unit 4 Supplementary: The Cat and the Painkiller - by Mark Twain
- Unit 5 Prose: Water - The Elixir of Life - by Sir C. V. Raman
- Unit 5 Poem: The River - by Caroline Ann Bowles
- Unit 5 Supplementary: Little Cyclone: The Story of a Grizzly Cub - by William Temple Hornaday
- Unit 6 Prose: From Zero to Infinity - biography of Srinivasa Ramanujan
- Unit 6 Poem: The Comet - by Norman Littleford
- Unit 6 Supplementary: Mother's Voice - by Vasil Berezhnoy
- Unit 7 Prose: A Birthday Letter - by Jawaharlal Nehru
- Unit 7 Poem: The Stick-Together Families - by Edgar Albert Guest
- Unit 7 Supplementary: The Christmas Truce - by Aaron Shepard

Results for: STAAR expository writing, 9th grade (all resource types).
- Expository Essay: Planning Guide Foldable (STAAR 7th and 9th Grade)
- Expository Essay Planning Page - 9th Grade STAAR - Word Document File
- STAAR 9th Grade Expository Essay Topic
- STAAR 9th Grade Expository Essay Writing
- 7th and 9th Grade Expository Prompts (prompts in the format of the Texas test)
- Expository Essay Graphic Organizer STAAR EOC English I
- Expository Essay Graphic Organizer
- Expository and Persuasive Essays Teacher Resource Packet - from Old ELA STAAR
- Expository Writing Prompts I (7th - 9th Grade)
- STAAR Expository Essay Preparation
- STAAR Expository Essay Writing 101
- Expository Essay - STAAR - EOC
- STAAR Expository Essay
- STAAR Expository Essay Template - Fill-in-the-blank
- STAAR Test Expository Writing Prompt #4
- STAAR Test Expository Writing Prompt #1
- Expository Essay Bundle - STAAR - EOC

Balbharati Solutions for English Kumarbharati 9th Standard Maharashtra State Board

Maharashtra State Board 9th Standard English solutions guide. Shaalaa provides the Maharashtra State Board 9th Standard English Solutions Digest. Shaalaa is a site that many of your classmates are using to perform well in exams. You can solve 9th Standard English Book Solutions (Maharashtra State Board) textbook questions and use Shaalaa.com to verify your answers. This will help you practise better and become more confident.

Maharashtra State Board 9th Standard English Textbook Solutions

Questions and answers for the 9th Standard English textbook are on this page. Balbharati Solutions for 9th Standard English Digest (Maharashtra State Board) will help students understand the concepts better.

Balbharati Solutions for 9th Standard English: Chapterwise List

The answers to the Balbharati books are the best study material for students. Listed below are the chapter-wise Balbharati English 9th Standard Solutions (Maharashtra State Board).
- Chapter 1.1: Life
- Chapter 1.2: A Synopsis-The Swiss Family Robinson
- Chapter 1.3: Have you ever seen...?
- Chapter 1.4: Have you thought of the verb 'have'
- Chapter 1.5: The Necklace
- Chapter 2.1: Invictus
- Chapter 2.2: A True Story of Sea Turtles
- Chapter 2.3: Somebody's Mother
- Chapter 2.4: The Fall of Troy
- Chapter 2.5: Autumn
- Chapter 2.6: The Past in the Present
- Chapter 3.1: Silver
- Chapter 3.2: Reading Works of Art
- Chapter 3.3: The Road Not Taken
- Chapter 3.4: How the First Letter was Written
- Chapter 4.1: Please Listen!
- Chapter 4.2: The Storyteller
- Chapter 4.3: Intellectual Rubbish
- Chapter 4.4: My Financial Career
- Chapter 4.5: Tansen

Balbharati 9th Standard solutions for other subjects (SSC, English Medium, Maharashtra State Board):
- Balbharati solutions for Hindi - Lokbharati 9th Standard Maharashtra State Board [हिंदी - लोकभारती ९ वीं कक्षा]
- Balbharati solutions for Marathi - Aksharbharati 9th Standard Maharashtra State Board [मराठी - अक्षरभारती इयत्ता ९ वी]
- Balbharati solutions for Mathematics 1 Algebra 9th Standard Maharashtra State Board
- Balbharati solutions for Mathematics 2 Geometry 9th Standard Maharashtra State Board
- Balbharati solutions for Sanskrit Composite - Anand 9th Standard (सयुक्त-सस्कृतम् - आनन्दः नवमी कक्षा)
- Balbharati solutions for Sanskrit Entire - Amod 9th Standard (सम्पूर्ण-संस्कृतम् - आमोदः नवमी कक्षा)
- Balbharati solutions for Science and Technology 9th Standard Maharashtra State Board
- Balbharati solutions for Science and Technology 9th Standard Maharashtra State Board [विज्ञान और प्रौद्योगिकी ९ वीं कक्षा]
- Balbharati solutions for Social Science Geography 9th Standard Maharashtra State Board
- Balbharati solutions for Social Science History and Political Science 9th Standard Maharashtra State Board
Maharashtra State Board 9th Std English Textbook Solutions | English Kumarbharati 9th Textbook Answers

Balbharati Maharashtra State Board Class 9 English solutions for Maharashtra State Board students. On this page, you can find chapterwise 9th Standard English textbook solutions. The 9th Standard Balbharati solutions answer all the questions given in the Balbharati textbooks in a step-by-step process. Our English tutors have helped us put together this material for our 9th Standard students.
The solutions on Shaalaa will help you solve all the Balbharati 9th Standard English questions without any problems. Every chapter has been broken down systematically for the students, which gives fast learning and easy retention. Shaalaa provides free Balbharati solutions for English Kumarbharati 9th Standard Maharashtra State Board, carefully crafted to help you understand the concepts and learn how to answer properly in your board exams. You can also share our link for the free 9th Standard English Balbharati solutions with your classmates. If you have any doubts while going through the solutions, you can go through our Video Tutorials for English; the tutorials should help you better understand the concepts.

Balbharati 9th Standard English Guide Book Back Answers

The following Maharashtra State Board Balbharati 9th Standard English Book Answers Solutions Guide (PDF, free download, English medium) will be helpful to you. The answer material is developed per the latest exam pattern and is part of the Balbharati 9th Standard Books Solutions. You will be aware of all topics and concepts discussed in the book and gain more conceptual knowledge from the study material. If you have any questions about the Maharashtra State Board New Syllabus 9th Standard English Guide (textbook back questions and answers, notes, chapter-wise important questions, model questions, etc.), please get in touch with us.

Comprehensive Balbharati Solutions for Maharashtra State Board English 9th Standard Guide

The Balbharati English 9th Standard Maharashtra State Board solutions are useful because they offer a clear improvement guideline: accurate, well-organized answers make it much easier to obtain the results you want in the exam.
The English Balbharati 9th Standard solutions you acquire from this page are fully formatted and ready to use, which makes studying simpler and more convenient. Our Balbharati English answer guide for the 9th Standard Maharashtra State Board covers all 20 chapters, so you can prepare for the exam without worrying about missing anything. The solutions cover everything from Life, A Synopsis-The Swiss Family Robinson, Have you ever seen...?, Have you thought of the verb 'have', The Necklace, Invictus, A True Story of Sea Turtles, Somebody's Mother, The Fall of Troy, Autumn, The Past in the Present, Silver, Reading Works of Art, The Road Not Taken, How the First Letter was Written, Please Listen!, The Storyteller, Intellectual Rubbish, My Financial Career, and Tansen to the other topics. If you'd like to handle this exam efficiently, browse the 9th Standard English Maharashtra State Board answer guide; you will find it comprehensive, professional, and convenient, and you can prepare for the exam reliably and thoroughly.
9th English Question Papers

- 9th English Annual Exam Question Paper 2023 | Mr. A. Mohammed Ali – Preview & Download (MAT.NO. 221842)
- 9th English Annual Exam Model Question Paper with Answer Key | Way to Success – Preview & Download (MAT.NO. 216332)
- 9th English Half Yearly Exam Model Question Paper with Answer Key | Way to Success – Preview & Download (MAT.NO. 220511)
- 9th English Quarterly Exam Model Question Paper with Answer Key | Way to Success – Preview & Download (MAT.NO. 220512)
- 9th English Quarterly Model Question Paper with Answer Key | Rasi Publications – Preview & Download (MAT.NO. 221174)
- 9th English Annual Exam Question Paper 2022 | Mr. A. Mohammed Ali – Preview & Download (MAT.NO. 220682)
- 9th English Quarterly Question Paper 2022 with Answer Key | Mr. Z. Mohammed Abrar – Preview & Download (MAT.NO. 221190)
- 9th English Model Exam Question Paper 2020 | L. Mohanraj – Preview & Download (MAT.NO. 215680)
- 9th English Half Yearly Exam Question Paper 2019 | RK Tuition Centre – Preview & Download (MAT.NO. 216831)
- 9th English Pre Half Yearly Exam Question Paper 2019 | Mr. Mask – Preview & Download (MAT.NO. 216333)
- English Paper 1 – Preview & Download (MAT.NO. 215310)
- English Paper 2 – Preview & Download (MAT.NO. 215311)
- 9th English First Midterm Exam Question Paper 2019 | Mr. Kumaresan – Preview & Download (MAT.NO. 216764)
- 9th English First Midterm Model Question Paper 2019 | Kamarajar Tuition Center – Preview & Download (MAT.NO. 214592)
- 9th English Second Midterm Exam Question Paper 2019 | Mr. Mask – Preview & Download (MAT.NO. 216102)
- 9th English Third Midterm Exam Question Paper 2020 | Mr. Kumaresan – Preview & Download (MAT.NO. 218037)
- 9th English Unit Test Question Paper with Answer Key (Unit 1) | Rasi Publications – Preview & Download (MAT.NO. 221030)
- 9th English Unit 1 – Unit Test Question Paper | Mr. T. S. Saravanan – Preview & Download (MAT.NO. 214304)

What can you help me with? No matter what assignment you need to get done, be it math or English, our essay writing service covers them all. Assignments take time, patience, and thorough in-depth knowledge. Are you worried you don't have everything it takes? Our writers will help with any kind of subject once they receive the requirements. One of the tasks we can take care of is research papers; they can take days, if not weeks, to complete. If you don't have the time for endless reading, contact our online essay writing help service. With EssayService, stress-free academic success is close at hand. Another assignment we can take care of is a case study. Acing it requires good analytical skills: you'll need to hand-pick specific information which, in most cases, isn't easy to find. Why waste your energy on this when there are so many exciting activities out there? Our writing help can also do your critical thinking essays. They aren't the easiest task to complete, but they're the perfect occasion to show your deep understanding of the subject through a lens of critical analysis. Hire our writer services to ace your review. Are you struggling to understand your professors' directions for homework assignments? Hire professional writers with years of experience to earn a better grade and impress your parents. Send us the instructions and your deadline, and you're good to go.

Write my essay for me: will I get caught if I buy an essay? The most popular question from clients and people on the forums is how not to get caught having bought an essay rather than written it yourself. Students are very afraid that they will be exposed and expelled from the university, or that they will simply lose their money because they will have to redo the work themselves. If you've chosen a good online research and essay writing service, you don't have to worry. The writers conduct their own exploratory research, add scientific facts, and back it up with personal knowledge. None of them copy information from the Internet or steal ready-made articles. Even if this is not enough for the client, he can personally go to an anti-plagiarism website and check the finished document.
Of course, the staff of the sites themselves carry out such checks, but no one can stop you from verifying the uniqueness of the article for yourself.

The Blue Supermoon: A Gem in the Night Sky

Wednesday's full moon, the second in August, was a celestial rarity. It was also bigger and brighter than usual. By Katrina Miller

The lunar fanfare of August wrapped up with a treat: a blue supermoon that occurred on Wednesday at 9:36 p.m. Eastern time. The blue moon is the second of two full moons in a single month. Each month usually hosts only one full moon, but blue moons sometimes arise because the lunar cycle is 29.5 days long — just short of the length of an average calendar month. This difference means that some months see two full moons. That is exactly what happened this month: the first full moon popped up on Aug. 1 and the second on Wednesday.

What is a blue supermoon? A supermoon occurs when the full moon phase of the lunar cycle syncs up with the perigee, the point at which the moon is nearest to the Earth. Supermoons appear brighter and bigger than regular full moons. According to NASA, the apparent size increase is 14 percent, which is about the difference between a nickel and a quarter. Supermoons are generally seen every three or four months. This one was the third this year and the second this August. Blue moons, on the other hand, only happen every two or three years (hence the phrase "once in a blue moon"). Blue supermoons are even rarer, occurring once every 10 years or so. The last one was in 2018 during a lunar eclipse, and the next blue supermoons will occur as a pair in 2037.

Will the moon actually look blue? No. The term "blue moon" doesn't really describe its color; it's mostly its usual milky gray. (Certain phenomena, like wildfires and volcanic eruptions, can tint the moon blue, the same visual effect that gave North American skies an orange hue this summer.) According to NASA, the term "blue moon" used to refer to the third full moon in a season that had four full moons. The newer definition — the second full moon in a month — was coined by the magazine Sky & Telescope in 1946.

Who could see it? Unlike some other celestial events, everyone on Earth sees the same phases of the lunar cycle at night, so the blue supermoon was visible anywhere it was not obscured by clouds. NASA recommends using binoculars or a telescope to see more of the moon's texture during any of its phases. On Wednesday evening, skygazers might also have spotted a bright dot to the upper right of the moon. That's Saturn, a few days shy of reaching its closest point to Earth. The ringed planet will swing clockwise around the moon during the night.

Katrina Miller is a science reporting fellow for The Times. She recently earned her Ph.D. in particle physics from the University of Chicago.
Interstellar space travel is manned or unmanned travel between stars. Interstellar travel is much more difficult than travel within the Solar System, though travel in starships is a staple of science fiction. There is no suitable technology at present, although there is a proposed interstellar probe with an ion engine whose energy would be supplied by a laser base station. Given sufficient travel time and engineering work, both unmanned and sleeper-ship interstellar travel seem possible. Both present considerable technological and economic challenges which are unlikely to be met in the near future, particularly for manned probes. NASA, ESA and other space agencies have done research into these topics for several years, and have worked out some theoretical approaches. Energy requirements appear to make interstellar travel impractical for "generation ships", but less so for heavily shielded sleeper ships.

The difficulties of interstellar travel

The main challenge facing interstellar travel is the vast distances that have to be covered. This means that a very great speed and/or a very long travel time is needed. The time it takes with most realistic propulsion methods would be from decades to millennia. Hence an interstellar ship would be much more exposed to the hazards found in interplanetary travel, including vacuum, radiation, weightlessness, and micrometeoroids. At high speeds the vehicle would be penetrated by many microscopic particles of matter unless heavily shielded; this in itself would greatly increase the propulsion problems. The long travel times make it difficult to design manned missions. The fundamental limits of space-time present another challenge. Also, interstellar trips would be hard to justify for economic reasons.

Required energy

A significant factor is the energy needed for a reasonable travel time. A lower bound for the required energy is the kinetic energy K = ½mv², where m is the final mass and v is the travel speed. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the required energy at least doubles, because the energy needed to halt the ship equals the energy needed to accelerate it to travel speed. The velocity for a manned round trip of a few decades to even the nearest star is thousands of times greater than those of present space vehicles. This means that, because of the v² term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 450 PJ (4.5 × 10¹⁷ J, or about 125 billion kWh), not accounting for losses. The source of energy has to be carried, since solar panels do not work far from the Sun and other stars. The magnitude of this energy may make interstellar travel impossible. One engineer stated, "At least 100 times the total energy output of the entire world [in a given year] would be required for the voyage (to Alpha Centauri)."

Interstellar medium

Interstellar dust and gas may cause considerable damage to the craft, due to the high relative speeds and large kinetic energies involved. Larger objects (such as bigger dust grains) are far less common, but would be much more destructive.

Travel time

It can be argued that an interstellar mission which cannot be completed within 50 years should not be started at all. Instead, the resources should be invested in designing a better propulsion system.
This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion. On the other hand, a case can be made for starting a mission without delay, because the non-propulsion problems may turn out to be more difficult than the propulsion engineering. Intergalactic travel involves distances about a million-fold greater than interstellar distances, making it radically more difficult than even interstellar travel.

Kennedy's calculation

Andrew Kennedy has shown that voyages undertaken before the minimum wait time will be overtaken by those who leave at the minimum, while those who leave after the minimum will never overtake those who left at the minimum. Kennedy's calculation depends on r, the mean annual increase in world power production. From any point in time to a given destination, there is a minimum to the total time to destination, and voyagers could arrive without being overtaken by later voyagers by waiting a time t before leaving. The calculation relates the time it takes to get to a destination leaving now, T_now, to the time after waiting, T_t, through the growth in the velocity of travel. Taking a journey to Barnard's Star, six light-years away, as an example, Kennedy shows that with a world mean annual economic growth rate of 1.4% and a corresponding growth in the velocity of travel, the earliest human civilization might reach the star is 1,110 years from the year 2007.

Interstellar distances

Astronomical distances are often measured in the time it would take a beam of light to travel between two points (see light-year). Light in a vacuum travels approximately 300,000 kilometers per second, or 186,000 miles per second. The distance from Earth to the Moon is 1.3 light-seconds. With current spacecraft propulsion technologies, a craft can cover the distance from the Earth to the Moon in around eight hours (New Horizons). That means light travels approximately thirty thousand times faster than current spacecraft propulsion technologies. The distance from Earth to other planets in the Solar System ranges from three light-minutes to about four light-hours. Depending on the planet and its alignment to Earth, for a typical unmanned spacecraft these trips will take from a few months to a little over a decade. The distance to other stars is much greater. If the distance from the Earth to the Sun is scaled down to one meter, the distance to Alpha Centauri A would be 271 kilometers, or about 169 miles. The nearest known star to the Sun is Proxima Centauri, which is 4.23 light-years away. The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/600th of a light-year in 30 years and is currently moving at 1/18,000th the speed of light. At this rate, a journey to Proxima Centauri would take about 72,000 years. Of course, this mission was not specifically intended to travel fast to the stars, and current technology could do much better. The travel time could be reduced to a few millennia using solar sails, or to a century or less using nuclear pulse propulsion. Special relativity offers the possibility of shortening the travel time: if a starship with sufficiently advanced engines could reach velocities near the speed of light, relativistic time dilation would make the voyage much shorter for the traveller. However, it would still take many years of elapsed time as viewed by the people remaining on Earth, and on returning to Earth the travellers would find that far more time had elapsed on Earth than had for them (the twin paradox).
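The back-of-the-envelope figures above are easy to re-derive. The short Python sketch below is only illustrative: it reproduces the kinetic-energy estimate for one metric ton at a tenth of light speed and the Voyager-style transit time to Proxima Centauri, using the constants quoted in the text; small differences from the quoted round numbers are just rounding.

```python
# Rough checks of the figures quoted above (illustrative only).
C = 299_792_458.0          # speed of light, m/s
LIGHT_YEAR = 9.4607e15     # metres in one light-year
SECONDS_PER_YEAR = 3.156e7

# Kinetic energy of one metric ton at 0.1 c:  K = 1/2 * m * v^2
m = 1000.0                 # kg
v = 0.1 * C
K = 0.5 * m * v ** 2
print(f"K = {K:.2e} J = {K / 3.6e6:.2e} kWh")     # ~4.5e17 J, ~1.25e11 kWh

# Transit time to Proxima Centauri at Voyager 1's speed (~1/18,000 of c)
distance = 4.23 * LIGHT_YEAR                      # metres
speed = C / 18_000                                # m/s, roughly 17 km/s
years = distance / speed / SECONDS_PER_YEAR
print(f"Transit time: about {years:,.0f} years")  # on the order of the quoted 72,000 years
```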
Communications

The round-trip delay time is the minimum time between a probe signal reaching Earth and the probe getting instructions from Earth. Given that information can travel no faster than the speed of light, this delay is about 32 hours for Voyager 1, and from near Proxima Centauri it would be about 8 years. Faster reactions would have to be programmed to be carried out automatically. Of course, in the case of a manned flight the crew can respond immediately to their observations. However, the round-trip delay time makes them not only extremely distant but, in terms of communication, extremely isolated from Earth. Another factor is the energy needed for interstellar communications to arrive reliably: gas and particles would degrade signals (interstellar extinction), and there would be limits to the energy available to send the signal.

Manned missions

The mass of any craft capable of carrying humans would inevitably be substantially larger than that necessary for an unmanned interstellar probe. The vastly greater travel times involved would require a life support system. The first interstellar missions are unlikely to carry life forms.

Prime targets for interstellar travel

There are 59 known stellar systems within 20 light years from the Sun, containing 81 visible stars. The following could be considered prime targets for interstellar missions:

| Stellar system | Distance (ly) | Remarks |
| --- | --- | --- |
| Alpha Centauri | 4.3 | Closest system. Three stars (G2, K1, M5). Component A is similar to the Sun (a G2 star). Alpha Centauri B has one confirmed planet. |
| Barnard's Star | 6.0 | Small, low-luminosity M5 red dwarf. Next closest to the Solar System. |
| Sirius | 8.7 | Large, very bright A1 star with a white dwarf companion. |
| Epsilon Eridani | 10.8 | Single K2 star slightly smaller and colder than the Sun. Has two asteroid belts, might have a giant planet and one much smaller planet, and may possess a solar-system-type planetary system. |
| Tau Ceti | 11.8 | Single G8 star similar to the Sun. High probability of possessing a solar-system-type planetary system: current evidence shows 5 planets, with potentially two in the habitable zone. |
| Gliese 581 | 20.3 | Multiple-planet system. The unconfirmed exoplanet Gliese 581 g and the confirmed exoplanet Gliese 581 d are in the star's habitable zone. |
| Vega | 25.0 | At least one planet, and of a suitable age to have evolved primitive life. |

Existing and near-term astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.

References

- Project Daedalus — Origins
- Landis, Geoffrey A. Interstellar ion probe, supplied with energy by the laser beam.
- O'Neill, Ian (2008). "Interstellar travel may remain in science fiction". Universe Today. http://www.universetoday.com/2008/08/19/bad-news-insterstellar-travel-may-remain-in-science-fiction/
- Williams, Lance (2012). "Electromagnetic control of spacetime and gravity: the hard problem of interstellar travel". Astronomical Review (2). http://astroreview.com/issue/2012/article/electromagnetic-control-of-spacetime-and-gravity-the-hard-problem-of-interstellar-travel
- Kondo, Yoji. Interstellar Travel and Multi-Generation Space Ships. ISBN 1-896522-99-8, p. 31.
- Kennedy, Andrew (2006). "Interstellar travel: the wait calculation and the incentive trap of progress".
Journal of the British Interplanetary Society (JBIS) 59 (7): 239–246. http://www.jbis.org.uk/paper.php?p=2006.59.239
- Forward, Robert L. (1996). "Ad Astra!". Journal of the British Interplanetary Society (JBIS) 49 (1): 23–32.
- "The exoplanet next door". Nature News & Comment. Nature.com. doi:10.1038/nature11572. http://www.nature.com/news/the-exoplanet-next-door-1.11605. Retrieved October 17, 2012.
- "Star: eps Eridani". Extrasolar Planets Encyclopaedia (Die Enzyklopädie der extrasolaren Planeten). Retrieved 2011-01-15.
Exploding supernovas and the resulting "winds" were vital in the formation of today's galaxies, says an international team of astronomers. Scientists from the U.S., Canada, U.K. and Switzerland collaborated on the study, appearing this week in the journal Nature, which used millions of hours of supercomputer time to simulate the evolution of the universe. The research was undertaken to address a problem in the prevailing theory of galaxy formation, called the cold dark matter theory, first conceived in the 1980s. Previous simulations based on the theory have suggested that galaxies should have more stars than they actually do, especially so-called dwarf galaxies, which have less than one per cent of the stars found in galaxies like our Milky Way. Fabio Governato, an associate professor of astronomy at the University of Washington and lead author of the study, says previous simulations haven't included a detailed description of how stars form and die. "We performed new computer simulations, run over several national supercomputing facilities, and included a better description of where and how star formation happens in galaxies," said Governato, in a statement. Those supercomputer facilities included the SHARCNET computers at McMaster University in Hamilton, Ont. 'More accurate simulations' What the scientists found in the simulation was that massive stars exploding in supernovas generated stellar winds, pushing enormous amounts of gas out of the centre of galaxies. The expulsion of gas prevented millions of new stars from forming, consigning the galaxy to dwarf status. The supernova explosions were the missing piece of the puzzle, the researchers said, and their simulation supports the cold dark matter theory. "The cold dark matter theory works amazingly well at telling where, when and how many galaxies should form," Governato said. "What we did was find a better description of processes that we know happen in the real universe, resulting in more accurate simulations," he said. The theory of dark matter holds that most of the matter in the universe — as much as 75 per cent — is dark material that can't be observed in the electromagnetic spectrum. "Cold" in this case refers to particles following the Big Bang that have speeds much lower than the speed of light. In the cold dark matter theory of galaxy formation, clumps of matter coalesce into structures that eventually form massive halos, and galaxies form within the halos.
Perhaps the most influential study of conformity came from Solomon E. Asch (1951). Asch gave groups of seven or nine college students what appeared to be a test of perceptual judgment: matching the length of a line segment to comparison lines. Each subject saw a pair of cards set up in front of the room, similar to the ones that follow. Stimuli like those used by Asch. What was Asch's classic experiment on conformity? Subjects received the following instructions: This is a task involving the discrimination of lengths of lines. Before you is a pair of cards. On the left is a card with one line. The card at the right has three lines different in length; they are numbered 1, 2, and 3, in order. One of the three lines at the right is equal to the standard line at the left; you will decide in each case which is the equal line. You will state your judgment in terms of the number of the line. There will be 18 such comparisons in all... As the number of comparisons is few and the group small, I will call upon each of you in turn to announce your judgments. In a group of nine, eight subjects were actually confederates of the experimenter. The experiment was rigged so that the genuine (naïve) subject was called upon next-to-last in the group. The experimenter's confederates had been instructed, in advance, to make deliberately ridiculous judgments on many of the trials, but to agree unanimously with one another. On 12 of the 18 trials, they said in loud voices (for example) that the 4 1/2" line was exactly equal to the 3" standard line. The pressure of the group had a dramatic effect. Although people could pick the correct line 99% of the time when making the judgments by themselves, they went along with the erroneous group judgment 75% of the time, even when it was plainly wrong. How did Asch's subjects rationalize making obviously wrong judgments? The conforming subjects did not fool themselves into thinking the wrong line was equal to the standard line. They could see the difference. However, they were influenced by eight people in a row making the "wrong" decision. Asked later why they had made such obviously incorrect judgments, subjects reported, "They must have been looking at line widths" or "I assumed it was an optical illusion" or "If eight out of nine people made the same choice, I must have missed something in the instructions." Asch obtained the conformity effect even when the confederate declared an eleven-inch line to be equivalent to a four-inch standard. He found that small groups (even groups of three, containing two confederates and one naïve subject) were sufficient to induce the effect. How many subjects remained independent and did not conform? About a quarter of the subjects remained independent throughout the testing and never changed their judgments to fit those of the group. One could argue that Asch's experiment showed stubborn independence in some people, just as it showed conformity in others. A subject who did not conform reported to Asch later: I've never had any feeling that there was any virtue in being like others. I'm used to being different. I often come out well by being different. I don't like easy group opinions. What happened when there was a dissenter in the group? If responses were written in private? Asch later tested the effect of having a dissenter in the group.
He found that if only one of seven confederates disagreed with the group decision, this was enough to free most subjects from the conformity effect. However, if the dissenter defected later, joining the majority after the first five trials, rates of conformity increased again. The public nature of the judgment also seemed to have an effect. If subjects were invited to write their responses in private, while the majority made oral responses, this destroyed the conformity effect. Asch's experiment inspired a lot of follow-up research by other experimenters. What factors increased conformity? Factors found to increase conformity included the following:
1. Attractiveness of other members in the group (people tended to go along with a group of attractive people).
2. Complexity or difficulty of the task (people were more likely to conform if the judgment was difficult).
3. Group cohesiveness (people conformed more if friendships or mutual dependencies were set up beforehand).
How do we explain the power of the pressure to conform? Ross, Bierbrauer and Hoffman (1976) point out that the conformity situation may have been more pressure-packed than most people appreciate. To appreciate further the nature of this dilemma, let us imagine an introductory lecture in psychology. The instructor is describing the Asch study and has just shown a picture of the experimental stimuli. Suddenly he is interrupted by a student who remarks, "But line A is the correct answer..." Predictably, the class would laugh aloud and thereby communicate their enjoyment of their peer's joke. Suppose, however, that the dissenter failed to smile or to otherwise confirm that he was trying to be funny. Suppose, instead, that he insisted, "Why are you all laughing at me? I can see perfectly, and line A is correct." Once convinced of the dissenter's sincerity, the class response almost certainly would be a mixture of discomfort, bewilderment, concern, and doubt about the dissenter's mental and perceptual competence. It is this response that the Asch dissenters risked and, accordingly, it is not surprising that many chose to avoid it through conformity. What did NBC find out in 1997? Was the Asch conformity effect possibly due to the era in which it was carried out? After all, the early 1950s were famous for emphasizing conformity, such as the "corporate man" who did everything possible to eliminate his individuality and fit into a business setting. To see if the same experiment would work with a later generation of subjects, NBC News had social psychologist Anthony Pratkanis replicate the Asch experiment in front of a hidden camera for its Dateline show in 1997. Sure enough, the experiment still worked, and the percentage of conformists was almost identical to what Asch found. Most students, even some who looked creative or rebellious on the outside, went along with obviously incorrect group judgments. Later they explained that they did not want to look foolish, so they just "caved in."
Probability for equally likely outcomes (the f/N rule): used to measure the probability of possible outcomes (measuring uncertainty). Suppose an experiment has N possible outcomes, all equally likely. An event that can occur in f ways has probability f/N of occurring:
Probability of an event = f/N, where f is the number of ways the event can occur and N is the total number of possible outcomes.
experiment: an action whose outcome cannot be predicted with certainty.
event: some specified result that may or may not occur when an experiment is performed.
Example from class: with N = 75,617 families, an event probability of f/N = 0.100 is interpreted as 10.0% of the families making between the two given amounts.
Frequentist interpretation of probability (the meaning of probability): when outcomes are equally likely, probabilities are nothing more than percentages (relative frequencies). When the number of tosses is small, the proportion of heads fluctuates a lot. When the number of tosses is large, the proportion stabilizes (near 50/50).
Interpretation of probability: a probability near 0 (e.g., 0.2) indicates that the event in question is very unlikely to occur when the experiment is performed. A probability near 1 (e.g., 0.8) suggests that the event is quite likely to occur. Although the frequentist interpretation is helpful for understanding the meaning of probability, it cannot be used as a definition of probability. One common way to define probabilities is to specify a probability model: a mathematical description of the experiment based on certain primary aspects and assumptions. The equal-likelihood model is an example of a probability model. Its primary aspect and assumption are that all possible outcomes are equally likely to occur.
Basic properties of probabilities
Property 1: The probability of an event is always between 0 and 1, inclusive.
Property 2: The probability of an event that cannot occur is 0. (Such an event is called an impossible event.)
Property 3: The probability of an event that must occur is 1. (Such an event is called a certain event.)
Example: the numbers 5 and -0.23 could not possibly be probabilities.
P(E): the probability of event E.
Event and sample space
Event (E): a collection of outcomes for the experiment, that is, any subset of the sample space.
Sample space (S): the collection of all possible outcomes for an experiment.
A specified event occurs if that event contains the outcome observed. Example: if the card selected turns out to be the king of spades, the second and fourth events occur, whereas the first and third events do not.
Venn diagrams are one of the best ways to portray events and relationships among events visually: S is drawn as a rectangle, and events such as E are disks (circles) inside the rectangle.
Complement of E ("not E"): everything outside of E; the event "E does not occur".
A & B (the intersection of A and B, "A and B"): the event "both A and B occur"; all outcomes common to event A and event B.
A or B (the union of A and B): the event "either A or B or both occur"; it consists of all outcomes either in event A or in event B or in both. Equivalently, "at least one of events A and B occurs".
Mutually exclusive events: two or more events are mutually exclusive if no two of them have outcomes in common; their intersection is the empty set (a set with no elements in it).
4.2 Homework notes: "less than 7%" means 7 is not included; "at least 8%" means 8 and more; write each event in parentheses; C' is the complement of C. If E is an event, then P(E) represents the probability that event E occurs.
Read as "the probability of E".
Special addition rule (applies only to mutually exclusive events): P(A or B) = P(A) + P(B).
The complement rule: P(E) + P(not E) = 1, or equivalently P(E) = 1 - P(not E).
General addition rule (works for any two events, mutually exclusive or not): P(A or B) = P(A) + P(B) - P(A & B). Without the general addition rule, the same probability can also be found directly by counting f, the number of outcomes in "A or B". The general addition rule is consistent with the special addition rule: if two events are mutually exclusive, P(A & B) = 0 and both rules yield the same result.
General addition rule for more than two events, e.g., for three events:
P(A or B or C) = P(A) + P(B) + P(C) - P(A & B) - P(A & C) - P(B & C) + P(A & B & C)
P(S) = 1: the probability of the sample space is 1, because it contains all possible outcomes (e.g., 36/36 = 1).
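A small Python sketch of the rules above, using a pair of dice as the equally likely sample space. The events A and B chosen here are illustrative examples of my own, not ones from the notes; the code checks the f/N rule, the complement rule, and that the general addition rule agrees with direct counting.

```python
from fractions import Fraction

# Sample space: all 36 equally likely outcomes of rolling two dice.
S = [(a, b) for a in range(1, 7) for b in range(1, 7)]
N = len(S)

def prob(event):
    """f/N rule: probability = (# outcomes in the event) / (# outcomes in S)."""
    return Fraction(len(event), N)

# Illustrative events (not from the notes): A = "sum is 7", B = "first die is 6".
A = [o for o in S if o[0] + o[1] == 7]
B = [o for o in S if o[0] == 6]

# Complement rule: P(not A) = 1 - P(A)
not_A = [o for o in S if o not in A]
assert prob(not_A) == 1 - prob(A)

# General addition rule: P(A or B) = P(A) + P(B) - P(A & B)
A_and_B = [o for o in A if o in B]
A_or_B = [o for o in S if o in A or o in B]
assert prob(A_or_B) == prob(A) + prob(B) - prob(A_and_B)

print(prob(A), prob(B), prob(A_or_B))   # 1/6, 1/6, 11/36
```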
Speed is the absolute value of velocity: speed = |v(t)|. This is the definition of speed, but hardly enough to be sure students know about speed and its relationship to velocity and acceleration. Velocity is a vector quantity; that is, it has both a direction and a magnitude. The magnitude of the velocity vector is the speed. Speed is a non-negative number and has no direction associated with it. Velocity has a magnitude and a direction. Speed has the same units as velocity, but speed is just a number.

The question that seems to trouble students the most is to determine whether the speed is increasing or decreasing. The short answer is: Speed is increasing when the velocity and acceleration have the same sign. Speed is decreasing when the velocity and acceleration have different signs. You should demonstrate this in some real context, such as driving a car (see below). Also, you can explain it graphically.

The figure below shows the graph of the velocity (blue graph) of a particle moving on the interval [0, e]. The red graph is |v(t)|, the speed. The sections where v(t) < 0 are reflected over the x-axis. (The graphs overlap on [b, d].) It is now quite easy to see that the speed is increasing on the intervals [0, a], [b, c] and [d, e]. Another way of approaching the concept is this: the speed is the non-directed length of the vertical segment from the velocity's graph to the t-axis. Picture the segment shown moving across the graph. When it is getting longer (either above or below the t-axis) the speed increases. Thinking of the speed as the non-directed distance from the velocity to the axis makes answering the two questions below easy:
- What are the values of t at which the speed obtains its (local) maximum values? Answer: t = a, c, and e.
- When do the minimum speeds occur? What are they? Answer: the minimum speed is zero, and it occurs at t = b and t = d.

Students often benefit from a verbal explanation of all this. Picture a car moving along a road going forwards (in the positive direction), so its velocity is positive.
- If you step on the gas, acceleration pulls you in the direction you are moving and your speed increases. (v > 0, a > 0, speed increases)
- Going too fast is not good, so you put on your brakes; you now accelerate in the opposite direction (decelerate?), but you are still moving forward, just more slowly. (v > 0, a < 0, speed decreases)
- Finally, you stop. Then you shift into reverse and start moving backwards (negative velocity) and you push on the gas to accelerate in the negative direction, so your speed increases. (v < 0, a < 0, speed increases)
- Then you put on the brakes (accelerate in the positive direction) and your speed decreases again. (v < 0, a > 0, speed decreases)

Here is an activity that will help your students discover this relationship. Give Part 1 to half the class and Part 2 to the other half. Part 3 (on the back of Part 1 and Part 2) is the same for both groups. – Added 12-19-17

Also see: A Note on Speed for the purely analytic approach. Update: "A Note on Speed" added 4-21-2018

Pingback: Adapting 2021 AB 2 | Teaching Calculus

Just one small nitpick, although perhaps it's my monitor… in the first graph, to me it looks like the graph of the velocity is mostly in orange, not red (only the part where b < x < d, where the velocity and speed overlap, seems to be red.)

On the AP test, can students just draw a graph of speed (as the absolute value of velocity) and then notice that speed is increasing or decreasing? AP readers are loath to have to interpret a student's graph.
In these questions they want to see the value of the acceleration and velocity at the time given. If not the value then some indication as to their signs that the student has worked out. If they have the same sign, then the speed is increasing, if they have different signs then decreasing. Sorry. I do like your approach with the reflection of the graphs!!! I also like to discuss with my students, using the graph, the acceleration of the function. You alluded to this in your example. Speed would be increasing when the velocity and acceleration have the same sign… Looking at the graph we would see where the function was below the axis and where the slope of the tangent line was negative. Looking at the graph we would see where the function was above the axis and where the slope of the tangent line was positive. (note: stress using endpoints “brackets” when discussing increasing/decreasing) Thanks again for sharing! Thanks Paul. I just added a Discussion/worksheet about speed to the “Resources” pages (click tab at top of page). I think I meant to include it with the original post. This is very helpful Lin. My students and I thank you.
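A quick numerical check of the sign rule from the post. The velocity below is a made-up example of my own (v(t) = t² − 4t + 3, not a function from the post or the comments); the loop simply reports, on a grid of times, whether v and a = v′ share a sign, i.e., whether the speed is increasing.

```python
# A made-up velocity function to illustrate the rule:
# speed increases exactly where v(t) and a(t) = v'(t) have the same sign.
def v(t):
    return t**2 - 4*t + 3        # zero at t = 1 and t = 3

def a(t):
    return 2*t - 4               # derivative of v; zero at t = 2

for i in range(0, 51):
    t = i / 10                   # sample [0, 5] in steps of 0.1
    vt, at = v(t), a(t)
    if vt * at > 0:
        trend = "speed increasing"
    elif vt * at < 0:
        trend = "speed decreasing"
    else:
        trend = "v or a is zero (turning point for the speed)"
    print(f"t = {t:>4}: v = {vt:6.2f}, a = {at:5.2f} -> {trend}")
```

Reading the output reproduces the pattern of the car story: the speed falls on (0, 1) and (2, 3), where v and a have opposite signs, and rises on (1, 2) and (3, 5), where they agree.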
The Federal Reserve (sometimes called "The Fed") is a large central bank in Washington, D.C. that was founded in 1913. It lends money to other, smaller banks. The Federal Reserve Board is a group of financial leaders who work for the Federal Reserve and decide how much to charge these banks for borrowing money (this charge is called an "interest rate"). The Federal Reserve interest rate is decided by the Federal Reserve Board after studying the condition of the US economy. When the economy is growing too fast, the Federal Reserve makes borrowing more expensive by increasing the interest rate, which means people and companies spend less, which discourages inflation. When economic growth slows, the interest rate is decreased so that borrowing will increase and there will be growth.
A minimum wage is the lowest remuneration that employers must legally pay their workers. Equivalently, it is the price floor below which workers may not sell their labor. Although minimum wage laws are in effect in many jurisdictions, differences of opinion exist about the benefits and drawbacks of a minimum wage. Supporters of the minimum wage say it increases the standard of living of workers, reduces poverty, reduces inequality, boosts morale and forces businesses to be more "efficient". In contrast, opponents of the minimum wage say it increases poverty, increases unemployment (particularly among unskilled or inexperienced workers) and is damaging to businesses, because excessively high minimum wages require businesses to raise the prices of their product or service to accommodate the extra expense of paying a higher wage. - Minimum wage laws - Informal minimum wages - Setting minimum wage - Supply and demand - Criticism of the neoclassical model - Empirical studies - Card and Krueger - Research subsequent to Card and Kruegers work - Statistical meta analyses - Debate over consequences - Surveys of economists - Basic income - Guaranteed minimum income - Refundable tax credit - Collective bargaining - US movement Modern minimum wage laws trace their origin to the Ordinance of Labourers (1349), which was a decree by King Edward III that set a maximum wage for laborers in medieval England. King Edward III, who was a wealthy landowner, was dependent, like his lords, on serfs to work the land. In the autumn of 1348, the Black Plague reached England and decimated the population. The severe shortage of labor caused wages to soar and encouraged King Edward III to set a wage ceiling. Subsequent amendments to the ordinance, such as the Statute of Labourers (1351), increased the penalties for paying a wage above the set rates. While the laws governing wages initially set a ceiling on compensation, they were eventually used to set a living wage. An amendment to the Statute of Labourers in 1389 effectively fixed wages to the price of food. As time passed, the Justice of the Peace, who was charged with setting the maximum wage, also began to set formal minimum wages. The practice was eventually formalized with the passage of the Act Fixing a Minimum Wage in 1604 by King James I for workers in the textile industry. By the early 19th century, the Statutes of Labourers was repealed as increasingly capitalistic England embraced laissez-faire policies which disfavored regulations of wages (whether upper or lower limits). The subsequent 19th century saw significant labor unrest affect many industrial nations. As trade unions were decriminalized during the century, attempts to control wages through collective agreement were made. However, this meant that a uniform minimum wage was not possible. In Principles of Political Economy in 1848, John Stuart Mill argued that because of the collective action problems that workers faced in organisation, it was a justified departure from laissez-faire policies (or freedom of contract) to regulate people's wages and hours by law. It was not until the 1890s that the first modern legislative attempts to regulate minimum wages were seen in New Zealand and Australia. The movement for a minimum wage was initially focused on stopping sweatshop labor and controlling the proliferation of sweatshops in manufacturing industries. The sweatshops employed large numbers of women and young workers, paying them what were considered to be substandard wages. 
The sweatshop owners were thought to have unfair bargaining power over their employees, and a minimum wage was proposed as a means to make them pay fairly. Over time, the focus changed to helping people, especially families, become more self-sufficient. Minimum wage laws The first modern national minimum wage law was enacted by the government of New Zealand in 1894, followed by Australia in 1896 and the United Kingdom in 1909. In the United States, statutory minimum wages were first introduced nationally in 1938, and they were reintroduced and expanded in the United Kingdom in 1998. There is now legislation or binding collective bargaining regarding minimum wage in more than 90 percent of all countries. In the European Union, 22 member states out of 28 currently have national minimum wages. Other countries, such as Sweden, Finland, Denmark, Switzerland, Austria, and Italy, have no minimum wage laws, but rely on employer groups and trade unions to set minimum earnings through collective bargaining. Minimum wage rates vary greatly across many different jurisdictions, not only in setting a particular amount of money—for example $7.25 per hour ($14,500 per year) under certain US state laws (or $2.13 for employees who receive tips, which is known as the tipped minimum wage), $9.47 in the US state of Washington, or £6.50 (for those aged 21+) in the United Kingdom—but also in terms of which pay period (for example Russia and China set monthly minimum wages) or the scope of coverage. Currently the American federal minimum wage rests at seven dollars, twenty-five cents ($7.25) per hour. However, some states do not recognize the minimum wage law such as Louisiana and Tennessee. Other states operate below the federal minimum wage such as Georgia and Wyoming. Some jurisdictions even allow employers to count tips given to their workers as credit towards the minimum wage levels. India was one of the first developing countries to introduce minimum wage policy. It also has one of the most complicated systems with more than 1,200 minimum wage rates. Informal minimum wages Customs and extra-legal pressures from governments or labor unions can produce a de facto minimum wage. So can international public opinion, by pressuring multinational companies to pay Third World workers wages usually found in more industrialized countries. The latter situation in Southeast Asia and Latin America was publicized in the 2000s, but it existed with companies in West Africa in the middle of the twentieth century. Setting minimum wage Among the indicators that might be used to establish an initial minimum wage rate are ones that minimize the loss of jobs while preserving international competitiveness. Among these are general economic conditions as measured by real and nominal gross domestic product; inflation; labor supply and demand; wage levels, distribution and differentials; employment terms; productivity growth; labor costs; business operating costs; the number and trend of bankruptcies; economic freedom rankings; standards of living and the prevailing average wage rate. In the business sector, concerns include the expected increased cost of doing business, threats to profitability, rising levels of unemployment (and subsequent higher government expenditure on welfare benefits raising tax rates), and the possible knock-on effects to the wages of more experienced workers who might already be earning the new statutory minimum wage, or slightly more. 
Among workers and their representatives, political considerations weigh in as labor leaders seek to win support by demanding the highest possible rate. Other concerns include purchasing power, inflation indexing and standardized working hours. In the United States, the minimum wage was promulgated by the Fair Labor Standards Act of 1938. According to the Economic Policy Institute, the minimum wage in the United States would have been $18.28 in 2013 if the minimum wage had kept pace with labor productivity. To adjust for increased rates of worker productivity in the United States, raising the minimum wage to $22 (or more) an hour has been proposed.

Supply and demand

An analysis of supply and demand of the type shown in many mainstream economics textbooks implies that by mandating a price floor above the equilibrium wage, minimum wage laws should cause unemployment. This is because a greater number of people are willing to work at the higher wage while a smaller number of jobs will be available at the higher wage. Companies can be more selective in those whom they employ, so the least skilled and least experienced will typically be excluded. An imposition or increase of a minimum wage will generally only affect employment in the low-skill labor market, as the equilibrium wage is already at or below the minimum wage, whereas in higher-skill labor markets the equilibrium wage is too high for a change in the minimum wage to affect employment. According to the supply and demand model shown in many textbooks on economics, increasing the minimum wage decreases the employment of minimum-wage workers. One such textbook says: If a higher minimum wage increases the wage rates of unskilled workers above the level that would be established by market forces, the quantity of unskilled workers employed will fall. The minimum wage will price the services of the least productive (and therefore lowest-wage) workers out of the market. …The direct results of minimum wage legislation are clearly mixed. Some workers, most likely those whose previous wages were closest to the minimum, will enjoy higher wages. This is known as the "ripple effect": when the minimum wage is increased, the wages of other workers tend to rise as well, because relativities between wages need to be maintained. Others, particularly those with the lowest prelegislation wage rates, will be unable to find work. They will be pushed into the ranks of the unemployed or out of the labor force. Some argue that by increasing the federal minimum wage, however, the economy will be adversely affected because small businesses cannot keep up with the need to subsequently increase all workers' wages. The textbook illustrates the point with a supply and demand diagram similar to the one above. In the diagram it is assumed that workers are willing to labor for more hours if paid a higher wage. Economists graph this relationship with the wage on the vertical axis and the quantity (hours) of labor supplied on the horizontal axis. Since higher wages increase the quantity supplied, the supply of labor curve is upward sloping, and is shown as a line moving up and to the right. A firm's cost is a function of the wage rate. It is assumed that the higher the wage, the fewer hours an employer will demand of an employee. This is because, as the wage rate rises, it becomes more expensive for firms to hire workers and so firms hire fewer workers (or hire them for fewer hours). The demand for labor curve is therefore shown as a line moving down and to the right.
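As a concrete, entirely made-up illustration of the textbook model just described, the short Python sketch below uses linear supply and demand curves for labor, finds the market-clearing wage, and then imposes a wage floor above it. All parameters are illustrative assumptions, not estimates from any study.

```python
# Illustrative linear labor market (all parameters are made-up assumptions).
# Supply:  hours offered   L_s(w) = 100 * w          (upward sloping)
# Demand:  hours demanded  L_d(w) = 1200 - 50 * w    (downward sloping)

def supply(w):
    return 100 * w

def demand(w):
    return 1200 - 50 * w

# Market-clearing wage: supply = demand -> 100w = 1200 - 50w -> w* = 8
w_eq = 1200 / 150
L_eq = supply(w_eq)
print(f"Equilibrium: wage = {w_eq:.2f}, employment = {L_eq:.0f} hours")

# Impose a binding wage floor above the equilibrium.
w_min = 10.0
hired = min(supply(w_min), demand(w_min))    # employers choose the short side
surplus = supply(w_min) - demand(w_min)      # hours offered but not hired
print(f"With floor {w_min:.2f}: employment = {hired:.0f}, excess labor supplied = {surplus:.0f}")
```

In this stylized example the floor raises the wage of those still employed from 8 to 10 but cuts hours hired from 800 to 700, which is exactly the prediction that the critics discussed later in the article dispute.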
Combining the demand and supply curves for labor allows us to examine the effect of the minimum wage. We will start by assuming that the supply and demand curves for labor will not change as a result of raising the minimum wage. This assumption has been questioned. If no minimum wage is in place, workers and employers will continue to adjust the quantity of labor supplied according to price until the quantity of labor demanded is equal to the quantity of labor supplied, reaching equilibrium price, where the supply and demand curves intersect. Minimum wage behaves as a classical price floor on labor. Standard theory says that, if set above the equilibrium price, more labor will be willing to be provided by workers than will be demanded by employers, creating a surplus of labor, i.e. unemployment. In other words, the simplest and most basic economics says this about commodities like labor (and wheat, for example): Artificially raising the price of the commodity tends to cause the supply of it to increase and the demand for it to lessen. The result is a surplus of the commodity. When there is a wheat surplus, the government buys it. Since the government does not hire surplus labor, the labor surplus takes the form of unemployment, which tends to be higher with minimum wage laws than without them. So the basic theory says that raising the minimum wage helps workers whose wages are raised, and hurts people who are not hired (or lose their jobs) because companies cut back on employment. But proponents of the minimum wage hold that the situation is much more complicated than the basic theory can account for. One complicating factor is possible monopsony in the labor market, whereby the individual employer has some market power in determining wages paid. Thus it is at least theoretically possible that the minimum wage may boost employment when affected employees spend more in other sectors of the economy. Though single employer market power is unlikely to exist in most labor markets in the sense of the traditional 'company town,' asymmetric information, imperfect mobility, and the personal element of the labor transaction give some degree of wage-setting power to most firms. Criticism of the neoclassical model The argument that a minimum wage decreases employment is based on a simple supply and demand model of the labor market. A number of economists (for example Pierangelo Garegnani, Robert L. Vienneau, and Arrigo Opocher & Ian Steedman), building on the work of Piero Sraffa, argue that that model, even given all its assumptions, is logically incoherent. Michael Anyadike-Danes and Wynne Godley argue, based on simulation results, that little of the empirical work done with the textbook model constitutes a potentially falsifiable theory, and consequently empirical evidence hardly exists for that model. Graham White argues, partially on the basis of Sraffianism, that the policy of increased labor market flexibility, including the reduction of minimum wages, does not have an "intellectually coherent" argument in economic theory. Gary Fields, Professor of Labor Economics and Economics at Cornell University, argues that the standard textbook model for the minimum wage is ambiguous, and that the standard theoretical arguments incorrectly measure only a one-sector market. Fields says a two-sector market, where "the self-employed, service workers, and farm workers are typically excluded from minimum-wage coverage... 
[and with] one sector with minimum-wage coverage and the other without it [and possible mobility between the two]," is the basis for better analysis. Through this model, Fields shows the typical theoretical argument to be ambiguous and says "the predictions derived from the textbook model definitely do not carry over to the two-sector case. Therefore, since a non-covered sector exists nearly everywhere, the predictions of the textbook model simply cannot be relied on." An alternate view of the labor market has low-wage labor markets characterized as monopsonistic competition wherein buyers (employers) have significantly more market power than do sellers (workers). This monopsony could be a result of intentional collusion between employers, or naturalistic factors such as segmented markets, search costs, information costs, imperfect mobility and the personal element of labor markets. In such a case a simple supply and demand graph would not yield the quantity of labor clearing and the wage rate. This is because while the upward sloping aggregate labor supply would remain unchanged, instead of using the upward labor supply curve shown in a supply and demand diagram, monopsonistic employers would use a steeper upward sloping curve corresponding to marginal expenditures to yield the intersection with the supply curve resulting in a wage rate lower than would be the case under competition. Also, the amount of labor sold would also be lower than the competitive optimal allocation. Such a case is a type of market failure and results in workers being paid less than their marginal value. Under the monopsonistic assumption, an appropriately set minimum wage could increase both wages and employment, with the optimal level being equal to the marginal product of labor. This view emphasizes the role of minimum wages as a market regulation policy akin to antitrust policies, as opposed to an illusory "free lunch" for low-wage workers. Another reason minimum wage may not affect employment in certain industries is that the demand for the product the employees produce is highly inelastic. For example, if management is forced to increase wages, management can pass on the increase in wage to consumers in the form of higher prices. Since demand for the product is highly inelastic, consumers continue to buy the product at the higher price and so the manager is not forced to lay off workers. Economist Paul Krugman argues this explanation neglects to explain why the firm was not charging this higher price absent the minimum wage. Three other possible reasons minimum wages do not affect employment were suggested by Alan Blinder: higher wages may reduce turnover, and hence training costs; raising the minimum wage may "render moot" the potential problem of recruiting workers at a higher wage than current workers; and minimum wage workers might represent such a small proportion of a business's cost that the increase is too small to matter. He admits that he does not know if these are correct, but argues that "the list demonstrates that one can accept the new empirical findings and still be a card-carrying economist." Economists disagree as to the measurable impact of minimum wages in practice. This disagreement usually takes the form of competing empirical tests of the elasticities of supply and demand in labor markets and the degree to which markets differ from the efficiency that models of perfect competition predict. 
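The monopsony argument sketched above can also be made numerical. The following Python example, again with made-up linear curves rather than data, computes the wage and employment a single wage-setting employer would choose, and shows that a minimum wage set between that wage and the competitive wage raises employment rather than lowering it.

```python
# Made-up monopsony example: one employer faces the whole labor supply curve.
# Labor supply (wage workers require):      w_s(L) = 2 + 0.01 * L
# Marginal revenue product of labor (MRP):  mrp(L) = 20 - 0.02 * L

def w_s(L):
    return 2 + 0.01 * L

def mrp(L):
    return 20 - 0.02 * L

# Monopsonist: marginal cost of labor is 2 + 0.02*L (twice the slope of supply).
# Hires where marginal cost = MRP: 2 + 0.02L = 20 - 0.02L  ->  L = 450
L_monopsony = (20 - 2) / 0.04
w_monopsony = w_s(L_monopsony)          # pays only the supply wage: 6.5

# Competitive benchmark: wage = MRP: 2 + 0.01L = 20 - 0.02L  ->  L = 600, w = 8
L_comp = (20 - 2) / 0.03
w_comp = w_s(L_comp)

# A minimum wage between the two: the employer hires until MRP falls to w_min,
# as long as enough workers are willing to work at that wage.
w_min = 7.5
L_at_floor = min((20 - w_min) / 0.02, (w_min - 2) / 0.01)

print(f"Monopsony:   w = {w_monopsony:.2f}, L = {L_monopsony:.0f}")
print(f"Competitive: w = {w_comp:.2f}, L = {L_comp:.0f}")
print(f"Floor {w_min}: L = {L_at_floor:.0f}  (employment rises relative to monopsony)")
```

Here a floor of 7.5, set between the monopsony wage of 6.5 and the competitive wage of 8, moves employment from 450 up to 550 hours, which is the sense in which an appropriately set minimum wage could increase both wages and employment in this model.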
Economists have done empirical studies on different aspects of the minimum wage, including: Until the mid-1990s, a general consensus existed among economists, both conservative and liberal, that the minimum wage reduced employment, especially among younger and low-skill workers. In addition to the basic supply-demand intuition, there were a number of empirical studies that supported this view. For example, Gramlich (1976) found that many of the benefits went to higher income families, and that teenagers were made worse off by the unemployment associated with the minimum wage. Brown et al. (1983) noted that time series studies to that point had found that for a 10 percent increase in the minimum wage, there was a decrease in teenage employment of 1–3 percent. However, the studies found wider variation, from 0 to over 3 percent, in their estimates for the effect on teenage unemployment (teenagers without a job and looking for one). In contrast to the simple supply and demand diagram, it was commonly found that teenagers withdrew from the labor force in response to the minimum wage, which produced the possibility of equal reductions in the supply as well as the demand for labor at a higher minimum wage and hence no impact on the unemployment rate. Using a variety of specifications of the employment and unemployment equations (using ordinary least squares vs. generalized least squares regression procedures, and linear vs. logarithmic specifications), they found that a 10 percent increase in the minimum wage caused a 1 percent decrease in teenage employment, and no change in the teenage unemployment rate. The study also found a small, but statistically significant, increase in unemployment for adults aged 20–24. Wellington (1991) updated Brown et al.'s research with data through 1986 to provide new estimates encompassing a period when the real (i.e., inflation-adjusted) value of the minimum wage was declining, because it had not increased since 1981. She found that a 10% increase in the minimum wage decreased the absolute teenage employment by 0.6%, with no effect on the teen or young adult unemployment rates. Some research suggests that the unemployment effects of small minimum wage increases are dominated by other factors. In Florida, where voters approved an increase in 2004, a follow-up comprehensive study after the increase confirmed a strong economy with increased employment above previous years in Florida and better than in the US as a whole. When it comes to on-the-job training, some believe the increase in wages is taken out of training expenses. A 2001 empirical study found that there is "no evidence that minimum wages reduce training, and little evidence that they tend to increase training." Some empirical studies have tried to ascertain the benefits of a minimum wage beyond employment effects. In an analysis of census data, Joseph Sabia and Robert Nielson found no statistically significant evidence that minimum wage increases helped reduce financial, housing, health, or food insecurity. This study was undertaken by the Employment Policies Institute, a think tank funded by the food, beverage and hospitality industries. In 2012, Michael Reich published an economic analysis that suggested that a proposed minimum wage hike in San Diego might stimulate the city's economy by about $190 million. 
The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs....America's federal minimum wage, at 38% of median income, is one of the rich world's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage. ... High minimum wages, however, particularly in rigid labour markets, do appear to hit employment. France has the rich world’s highest wage floor, at more than 60% of the median for adults and a far bigger fraction of the typical wage for the young. This helps explain why France also has shockingly high rates of youth unemployment: 26% for 15- to 24-year-olds." Card and Krueger In 1992, the minimum wage in New Jersey increased from $4.25 to $5.05 per hour (an 18.8% increase), while in the adjacent state of Pennsylvania it remained at $4.25. David Card and Alan Krueger gathered information on fast food restaurants in New Jersey and eastern Pennsylvania in an attempt to see what effect this increase had on employment within New Jersey. Basic economic theory would have implied that relative employment should have decreased in New Jersey. Card and Krueger surveyed employers before the April 1992 New Jersey increase, and again in November–December 1992, asking managers for data on the full-time equivalent staff level of their restaurants both times. Based on data from the employers' responses, the authors concluded that the increase in the minimum wage slightly increased employment in the New Jersey restaurants. One possible explanation that the current minimum wage laws may not affect unemployment in the United States is that the minimum wage is set close to the equilibrium point for low and unskilled workers. Thus, according to this explanation, in the absence of the minimum wage law unskilled workers would be paid approximately the same amount and an increase above this equilibrium point could likely bring about increased unemployment for the low and unskilled workers. Card and Krueger expanded on this initial article in their 1995 book Myth and Measurement: The New Economics of the Minimum Wage. They argued that the negative employment effects of minimum wage laws are minimal if not non-existent. For example, they look at the 1992 increase in New Jersey's minimum wage, the 1988 rise in California's minimum wage, and the 1990–91 increases in the federal minimum wage. In addition to their own findings, they reanalyzed earlier studies with updated data, generally finding that the older results of a negative employment effect did not hold up in the larger datasets. Research subsequent to Card and Krueger's work In subsequent research, David Neumark and William Wascher attempted to verify Card and Krueger's results by using administrative payroll records from a sample of large fast food restaurant chains in order to verify employment. They found that the minimum wage increases were followed by decreases in employment. On the other hand, an assessment of data collected and analyzed by Neumark and Wascher did not initially contradict the Card and Krueger results, but in a later edited version they found a four percent decrease in employment, and reported that "the estimated disemployment effects in the payroll data are often statistically significant at the 5- or 10-percent level although there are some estimators and subsamples that yield insignificant—although almost always negative" employment effects. 
However, this paper's conclusions were rebutted in a 2000 paper by Card and Krueger. A 2011 paper has reconciled the difference between Card and Krueger's survey data and Neumark and Wascher's payroll-based data. The paper shows that both datasets evidence conditional employment effects that are positive for small restaurants, but are negative for large fast-food restaurants. In 1996 and 1997, the federal minimum wage was increased from $4.25 to $5.15, thereby increasing the minimum wage by $0.90 in Pennsylvania but by just $0.10 in New Jersey; this allowed for an examination of the effects of minimum wage increases in the same area, subsequent to the 1992 change studied by Card and Krueger. A study by Hoffman and Trace found the result anticipated by traditional theory: a detrimental effect on employment. Further application of the methodology used by Card and Krueger by other researchers yielded results similar to their original findings, across additional data sets. A 2010 study by three economists (Arindrajit Dube of the University of Massachusetts Amherst, William Lester of the University of North Carolina at Chapel Hill, and Michael Reich of the University of California, Berkeley), compared adjacent counties in different states where the minimum wage had been raised in one of the states. They analyzed employment trends for several categories of low-wage workers from 1990 to 2006 and found that increases in minimum wages had no negative effects on low-wage employment and successfully increased the income of workers in food services and retail employment, as well as the narrower category of workers in restaurants. However, a 2011 study by Baskaya and Rubinstein of Brown University found that at the federal level, "a rise in minimum wage have [sic] an instantaneous impact on wage rates and a corresponding negative impact on employment", stating, "Minimum wage increases boost teenage wage rates and reduce teenage employment." Another 2011 study by Sen, Rybczynski, and Van De Waal found that "a 10% increase in the minimum wage is significantly correlated with a 3−5% drop in teen employment." A 2012 study by Sabia, Hansen, and Burkhauser found that "minimum wage increases can have substantial adverse labor demand effects for low-skilled individuals", with the largest effects on those aged 16 to 24. A 2013 study by Meer and West concluded that "the minimum wage reduces net job growth, primarily through its effect on job creation by expanding establishments ... most pronounced for younger workers and in industries with a higher proportion of low-wage workers." This study by Meer and West was later critiqued for its trends of assumption in the context of narrowly defined low-wage groups. The authors replied to the critiques and released additional data which addressed the criticism of their methodology, but did not resolve the issue of whether their data showed a causal relationship. Another 2013 study by Suzana Laporšek of the University of Primorska, on youth unemployment in Europe claimed there was "a negative, statistically significant impact of minimum wage on youth employment." A 2013 study by labor economists Tony Fang and Carl Lin which studied minimum wages and employment in China, found that "minimum wage changes have significant adverse effects on employment in the Eastern and Central regions of China, and result in disemployment for females, young adults, and low-skilled workers". 
Several researchers have conducted statistical meta-analyses of the employment effects of the minimum wage. In 1995, Card and Krueger analyzed 14 earlier time-series studies on minimum wages and concluded that there was clear evidence of publication bias (in favor of studies that found a statistically significant negative employment effect). They point out that later studies, which had more data and lower standard errors, did not show the expected increase in t-statistic (almost all the studies had a t-statistic of about two, just above the level of statistical significance at the .05 level). Though a serious methodological indictment, opponents of the minimum wage largely ignored this issue; as Thomas Leonard noted, "The silence is fairly deafening." In 2005, T.D. Stanley showed that Card and Krueger's results could signify either publication bias or the absence of a minimum wage effect. However, using a different methodology, Stanley concluded that there is evidence of publication bias and that correction of this bias shows no relationship between the minimum wage and unemployment. In 2008, Hristos Doucouliagos and T.D. Stanley conducted a similar meta-analysis of 64 U.S. studies on disemployment effects and concluded that Card and Krueger's initial claim of publication bias is still correct. Moreover, they concluded, "Once this publication selection is corrected, little or no evidence of a negative association between minimum wages and employment remains." Debate over consequences Minimum wage laws affect workers in most low-paid fields of employment and have usually been judged against the criterion of reducing poverty. Minimum wage laws receive less support from economists than from the general public. Despite decades of experience and economic research, debates about the costs and benefits of minimum wages continue today. Various groups have great ideological, political, financial, and emotional investments in issues surrounding minimum wage laws. For example, agencies that administer the laws have a vested interest in showing that "their" laws do not create unemployment, as do labor unions whose members' finances are protected by minimum wage laws. On the other side of the issue, low-wage employers such as restaurants finance the Employment Policies Institute, which has released numerous studies opposing the minimum wage. The presence of these powerful groups and factors means that the debate on the issue is not always based on dispassionate analysis. Additionally, it is extraordinarily difficult to separate the effects of minimum wage from all the other variables that affect employment. The following table summarizes the arguments made by those for and against minimum wage laws: A widely circulated argument that the minimum wage was ineffective at reducing poverty was provided by George Stigler in 1949: In 2006, the International Labour Organization (ILO) argued that the minimum wage could not be directly linked to unemployment in countries that have suffered job losses. In April 2010, the Organisation for Economic Co-operation and Development (OECD) released a report arguing that countries could alleviate teen unemployment by "lowering the cost of employing low-skilled youth" through a sub-minimum training wage. A study of U.S. states showed that businesses' annual and average payrolls grow faster and employment grew at a faster rate in states with a minimum wage. The study showed a correlation, but did not claim to prove causation. 
Although the business community and the Conservative Party strongly opposed the UK's national minimum wage when it was introduced in 1999, the Conservatives reversed their opposition in 2000. Accounts differ as to the effects of the minimum wage. The Centre for Economic Performance found no discernible impact on employment levels from the wage increases, while the Low Pay Commission found that employers had reduced their rate of hiring and the hours their employees worked, and had found ways to make current workers more productive (especially in service companies). The Institute for the Study of Labor found that prices in the minimum wage sector rose significantly faster than prices in non-minimum wage sectors in the four years following the implementation of the minimum wage. Neither trade unions nor employer organizations now contest the minimum wage, although the latter had opposed it heavily until 1999. In 2014, supporters of the minimum wage cited a study that found that job creation within the United States was faster in states that raised their minimum wages. In 2014, supporters of the minimum wage also cited news organizations that reported that the state with the highest minimum wage saw more job creation than the rest of the United States. In 2014, in Seattle, Washington, liberal and progressive business owners who had supported the city's new $15 minimum wage said they might hold off on expanding their businesses and thus creating new jobs, due to the uncertain timescale of the wage increase implementation. However, at least two of the business owners quoted subsequently did expand. The dollar value of the minimum wage loses purchasing power over time due to inflation. Proposals such as indexing the minimum wage to average wages have the potential to keep its dollar value relevant and predictable. With regard to the economic effects of introducing minimum wage legislation in Germany in January 2015, recent developments have shown that the feared increase in unemployment has not materialized; however, in some economic sectors and regions of the country there was a decline in job opportunities, particularly for temporary and part-time workers, and some low-wage jobs have disappeared entirely. Because of this overall positive development, the Deutsche Bundesbank revised its opinion and ascertained that “the impact of the introduction of the minimum wage on the total volume of work appears to be very limited in the present business cycle”. Surveys of economists According to a 1978 article in the American Economic Review, 90% of the economists surveyed agreed that the minimum wage increases unemployment among low-skilled workers. By 1992 the survey found 79% of economists in agreement with that statement, and by 2000, 45.6% were in full agreement with the statement and 27.9% agreed with provisos (73.5% total). The authors of the 2000 study also reweighted data from a 1990 sample to show that at that time 62.4% of academic economists agreed with the statement above, while 19.5% agreed with provisos and 17.5% disagreed. They state that the reduction in consensus on this question is "likely" due to the Card and Krueger research and subsequent debate. A similar survey in 2006 by Robert Whaples polled PhD members of the American Economic Association (AEA). Whaples found that 46.8% of respondents wanted the minimum wage eliminated, 37.7% supported an increase, 14.3% wanted it kept at the current level, and 1.3% wanted it decreased.
Another survey in 2007 conducted by the University of New Hampshire Survey Center found that 73% of labor economists surveyed in the United States believed 150% of the then-current minimum wage would result in employment losses and 68% believed a mandated minimum wage would cause an increase in hiring of workers with greater skills. 31% felt that no hiring changes would result. Surveys of labor economists have found a sharp split on the minimum wage. Fuchs et al. (1998) polled labor economists at the top 40 research universities in the United States on a variety of questions in the summer of 1996. Their 65 respondents were nearly evenly divided when asked if the minimum wage should be increased. They argued that the different policy views were not related to views on whether raising the minimum wage would reduce teen employment (the median economist said there would be a reduction of 1%), but on value differences such as income redistribution. Daniel B. Klein and Stewart Dompe conclude, on the basis of previous surveys, "the average level of support for the minimum wage is somewhat higher among labor economists than among AEA members." In 2007, Klein and Dompe conducted a non-anonymous survey of supporters of the minimum wage who had signed the "Raise the Minimum Wage" statement published by the Economic Policy Institute. 95 of the 605 signatories responded. They found that a majority signed on the grounds that it transferred income from employers to workers, or equalized bargaining power between them in the labor market. In addition, a majority considered disemployment to be a moderate potential drawback to the increase they supported. In 2013, a diverse group of 37 economics professors was surveyed on their view of the minimum wage's impact on employment. 34% of respondents agreed with the statement, "Raising the federal minimum wage to $9 per hour would make it noticeably harder for low-skilled workers to find employment." 32% disagreed and the remaining respondents were uncertain or had no opinion on the question. 47% agreed with the statement, "The distortionary costs of raising the federal minimum wage to $9 per hour and indexing it to inflation are sufficiently small compared with the benefits to low-skilled workers who can find employment that this would be a desirable policy", while 11% disagreed. Economists and other political commentators have proposed alternatives to the minimum wage. They argue that these alternatives may address the issue of poverty better than a minimum wage, as it would benefit a broader population of low wage earners, not cause any unemployment, and distribute the costs widely rather than concentrating it on employers of low wage workers. A basic income (or negative income tax) is a system of social security that periodically provides each citizen with a sum of money that is sufficient to live on frugally. It is argued that recipients of the basic income would have considerably more bargaining power when negotiating a wage with an employer as there would be no risk of destitution for not taking the employment. As a result, the jobseeker could spend more time looking for a more appropriate or satisfying job, or they could wait until a higher-paying job appeared. Alternately, they could spend more time increasing their skills in university, which would make them more suitable for higher-paying jobs, as well as provide numerous other benefits. 
Experiments on Basic Income and NIT in Canada and the USA show that people spent more time studying while the program was running. Proponents argue that a basic income that is based on a broad tax base would be more economically efficient, as the minimum wage effectively imposes a high marginal tax on employers, causing losses in efficiency. Guaranteed minimum income A guaranteed minimum income is another proposed system of social welfare provision. It is similar to a basic income or negative income tax system, except that it is normally conditional and subject to a means test. Some proposals also stipulate a willingness to participate in the labor market, or a willingness to perform community services. Refundable tax credit A refundable tax credit is a mechanism whereby the tax system can reduce the tax owed by a household to below zero, and result in a net payment to the taxpayer beyond their own payments into the tax system. Examples of refundable tax credits include the earned income tax credit and the additional child tax credit in the US, and working tax credits and child tax credits in the UK. Such a system is slightly different from a negative income tax, in that the refundable tax credit is usually only paid to households that have earned at least some income. This policy is more targeted against poverty than the minimum wage, because it avoids subsidizing low-income workers who are supported by high-income households (for example, teenagers still living with their parents). In the United States, earned income tax credit rates, also known as EITC or EIC, vary by state—some are refundable while other states do not allow a refundable tax credit. The federal EITC program has been expanded by a number of presidents including Jimmy Carter, Ronald Reagan, George H.W. Bush, and Bill Clinton. In 1986, President Reagan described the EITC as "the best anti poverty, the best pro-family, the best job creation measure to come out of Congress." The ability of the earned income tax credit to deliver larger monetary benefits to the poor workers than an increase in the minimum wage and at a lower cost to society was documented in a 2007 report by the Congressional Budget Office. The Adam Smith Institute prefers cutting taxes on the poor and middle class instead of raising wages as an alternative to the minimum wage. Italy, Sweden, Norway, Finland, and Denmark are examples of developed nations where there is no minimum wage that is required by legislation. Such nations, particularly the Nordics, have very high union participation rates. Instead, minimum wage standards in different sectors are set by collective bargaining. In January 2014, seven Nobel economists—Kenneth Arrow, Peter Diamond, Eric Maskin, Thomas Schelling, Robert Solow, Michael Spence, and Joseph Stiglitz—and 600 other economists wrote a letter to the US Congress and the US President urging that, by 2016, the US government should raise the minimum wage to $10.10. They endorsed the Minimum Wage Fairness Act which was introduced by US Senator Tom Harkin in 2013. U.S. Senator Bernie Sanders introduced a bill in 2015 that would raise the minimum wage to $15, and in his 2016 campaign for president ran on a platform of increasing it. Although Sanders did not become the nominee, the Democratic National Committee adopted his $15 minimum wage push in their 2016 party platform. 
Former McDonald's USA CEO Ed Rensi, reacting to proposals to raise the minimum wage to $15, said during an interview on the FOX Business Network's Mornings with Maria that employers faced with a $15 minimum wage would look into replacing workers with machines, since automation would be more cost-effective than keeping employees who are ineffective at that wage, and that an increase to $15 an hour would cause job losses at an extraordinary level. Rensi also argued that the effect would not be limited to the fast food industry: franchising, which he regards as the best business model in the United States, depends on people with low job skills who need room to grow, and if employers cannot pay them a reasonable wage those workers will be replaced with machines. In late March 2016, Governor of California Jerry Brown reached a deal to raise the minimum wage to $15 by 2022 for big businesses and by 2023 for smaller businesses. In contrast, the relatively high minimum wage in Puerto Rico has been blamed by various politicians and commentators as a highly significant factor in the Puerto Rican government-debt crisis. One study concluded that 'Employers are disinclined to hire workers because the US federal minimum wage is very high relative to the local average'. As of December 2014, unions were exempt from recent minimum wage increases in Chicago, Illinois, SeaTac, Washington, and Milwaukee County, Wisconsin, as well as the California cities of Los Angeles, San Francisco, Long Beach, San Jose, Richmond, and Oakland.
Stonehenge is part of what is now arguably the most extensive and complex megalithic site in Europe. It was actually purchased in 1915 for a sum equivalent today to £680,000 by Cecil Chubb, who later gave it to the nation(aa). Two depictions of Stonehenge exist which go back as far as medieval times, with a third recently added by Professor Christian Heck(ai). Little serious study of the monument was undertaken until the 17th century antiquarians, predecessors of the archaeologists, took an interest. New technology has now revealed the existence of another henge less than a kilometre from Stonehenge (BBC Focus October 2010). Other recent discoveries in the vicinity include the 3,550-year-old skeleton of a teenage boy buried with a rare amber necklace – a clear indication of status. Furthermore, dental analysis revealed that he had come from the Mediterranean region. October 2015 gave us a report(ad) that a semi-permanent structure was discovered about a mile east of Stonehenge and dated to be 1,300 years earlier than the megalithic edifice. The two big questions relating to Stonehenge are its exact purpose and the method of construction. Allied to that is the question of how the ‘bluestones’ were transported from Wales. Was it by humans or by glaciers(aj)? What may have been a much earlier precursor to Stonehenge’s calendrical features, tentatively dated as 10,000 years old, has been identified in Scotland’s Aberdeenshire(f). This is now arguably the world’s oldest lunar calendar. We were next presented with evidence that an early form of ball bearing may have been used to move the large stones of which the monument was constructed(d). Stone balls were also discovered near a megalithic monument in Scotland, and in Malta stone balls have been found in the vicinity of the ancient temples there – some still in situ under the stones. In 2004, Gordon Pipes put forward a radical new method of construction(ac), which requires minimal manpower and equipment. In 2009, Pipes expanded on his ‘stone-rowing’ idea in book form. More discoveries are expected as investigations continue. In 2014 it was announced that although most attention is focussed on the rising sun at the summer solstice, it is now thought that Stonehenge was more likely to have been concerned with the midwinter setting sun(m). Another theory has recently been advanced by Thomas O. Mills, which suggests that Stonehenge was aligned with the position of the North Pole as it was situated around 10,000 BC, as proposed by Charles Hapgood(u). Paul D. Burley has published a two-part paper(q)(r) on Stonehenge, which draws attention to the fact that most commentators have focused on the solar or lunar significance of the site’s alignments, which he feels is in stark contrast to other European megalithic monuments that appear to have been designed with stellar alignments in mind. Burley is the author of Stonehenge: As Above, So Below. In 1995 Duncan Steel suggested in his book, Rogue Asteroids and Doomsday Comets, that Stonehenge I had been constructed as a predictor of the Earth’s intersection with the path of a comet and its attendant debris, which had a 19-year periodicity(x). Stonehenge, among other megalithic structures, has been linked by various writers with Plato’s Atlantis.
One extreme example of this is the suggestion that if the number of Aubrey Holes, 56, is multiplied by the diameter of the Aubrey Circle we get 16,200 feet, which is “the exact diameter of Plato’s Atlantis”. Now, a ten-minute search on the Internet reveals FIVE different figures for the diameter of the Circle, ranging from 271.6’ to 288’. Combine that with the uncertainty attached to the value of the unit of measurement employed by Plato, and it is clear that any claim of a connection between the Aubrey Holes and Atlantis is at best tenuous and at worst foolish. Jürgen Spanuth suggested that the five trilithons “most probably represented five sets of twins.” [0015.85], an idea echoed later by Dieter Braasch(as). Spanuth was adamant that a commonly held view linking Stonehenge with Hyperborea was incorrect, as the Hyperboreans came from Jutland. The late Philip Coppens echoed(b) the views of a fellow Belgian, Marcel Mestdagh, that there might be a connection between monuments within the Stonehenge Heritage Site and Atlantis, namely Woodhenge, which comprised posts arranged in six concentric circles. The suggestion is that this arrangement is in some manner a reflection of the concentric features of Atlantis described by Plato. I can only consider this to be highly speculative, somewhat akin to the suggestion(c) that Stonehenge I was an earthquake predictor. For those interested, a recently reconstructed German counterpart of Woodhenge has had the original dated to 2300 BC(aq). In the meantime, however, we will have to be content with a recent book by Professor Mike Parker-Pearson, Stonehenge: Exploring the Greatest Stone Age Mystery, which includes all the discoveries revealed by the recent ten years of investigation. A 2014 offering from Professor David P. Gregg, The Stonehenge Codes, throws further light on the mathematics used for the building and development of Stonehenge, arguing that the same polygon geometry was applied consistently over a 1,500-year period. Gregg has also identified an earlier Babylonian influence. His book has a considerable numerical content that many will find heavy going. The text of the book is available online(j). The July 2014 edition of the BBC Focus magazine offers evidence that the history of the Stonehenge location can be traced to nearer the end of the Ice Age. It has been generally accepted for many years that the bluestones (spotted dolerites) at Stonehenge had been brought from the Preseli Mountains of Wales. Now (Nov. 2013) evidence has been presented that identifies the precise outcrop, Carn Goedog, as their source(h). Further investigation has produced the claim by Paul Devereux that the rock there was chosen because of its acoustic qualities(I), raising the possibility that Stonehenge was the site of the first ‘rock’ concert. A more wide-ranging essay on the subject of archaeoacoustics is available online(ak). However, in November 2015, a report threw doubt on the existence of a neolithic quarry in the Preseli Hills(ag).
Confusingly, the following month it was reported(ah) that studies carried out in Wales suggested that the stones had been erected there first, before their transportation to Wiltshire. In May 2016 the controversial matter of the method of transportation from Wales was claimed to have been resolved when it was demonstrated by students from University College London, supervised by Parker-Pearson, that the bluestones could have been mounted on a sycamore sleigh and dragged along timbers, requiring far less effort than was previously expected(ao). Parker-Pearson believes that originally the stones had been part of a Welsh tomb which was dismantled and brought to Wiltshire as the successors of its builders migrated eastward(ap). There is now a search underway to locate the site of the original monument in Wales. After centuries of being described as one of the wonders of the megalithic world, the construction skills of Stonehenge’s builders have been harshly criticised by Professor Ronald Hutton of Bristol University, who went as far as to describe them as ‘cowboy builders’(n). In 2012, Gordon Freeman, a Canadian scientist, published Hidden Stonehenge, in which he offers an extensive study of a native American “medicine wheel” in Alberta and compares its astronomical alignments with those of Stonehenge, revealing ‘incredible’ similarities. His book highlights the use of sophisticated astronomical knowledge at both locations in the very distant past, suggesting cultural links millennia before Columbus! A site in Australia discovered in the first half of the last century by Frederic Slater (President of the Australian Archaeological Society) and dubbed ‘Australia’s Stonehenge’ was bulldozed in 1940 on the orders of the Australian Government! The location, obviously never as impressive as its namesake on Salisbury Plain, has been identified again, and drawings made over seventy years ago have enabled a computer-generated image of the site to be produced(t). A father and son team, Steven & Evan Strong, have recently relocated the damaged site(af). In May 2013, Melville Nicholls published a Kindle ebook, Children of the Sea God, in which he argues strongly for a Stonehenge built by Atlanteans, better known as the Bell Beaker People! Robert John Langdon has now proposed(g) that Stonehenge was constructed around 8500 BC by megalith builders who had migrated from Doggerland/Atlantis as it became submerged, and that the Altar Stone at Stonehenge points to Doggerland! Shoji Yoshinori has suggested that Stonehenge was intended as a model of Atlantis(k), as had the late Philip Coppens(b). It is quite obvious that more convincing evidence is required if any claim of a Stonehenge/Atlantis connection is to gain greater traction. As recently as the summer of 2014, evidence was accidentally discovered(o) that suggested that the Stonehenge megalithic stones once formed a complete circle.
Commenting on the discovery, Susan Greaney from English Heritage said “A lot of people assume we’ve excavated the entire site and everything we’re ever going to know about the monument is known, but actually there’s quite a lot we still don’t know and there’s quite a lot that can be discovered just through non-excavation methods.” An extensive digital mapping project carried out at Stonehenge by researchers from the University of Birmingham and the Ludwig Boltzmann Institute of Vienna has revealed “that the area around Stonehenge is teeming with previously unseen archaeology and that the application of new technology can transform how archaeologists and the wider public understand one of the best-studied landscapes on Earth.”(p) December 2014 saw the date of an encampment site just 1.5 miles from Stonehenge confirmed at around 4000 BC(s). Marden Henge, situated between Stonehenge and Avebury, is reckoned to be ten times bigger than Stonehenge and has now (2015) seen the start of a three-year, £100,000 dig by 80 archaeologists hoping to unlock its secrets(a). Dr. Jim Leary, a leading archaeologist working at the site, is convinced that Marden may turn out to be more significant than Stonehenge(w). Earlier in 2015, Tim Daw, a steward at the Stonehenge site, claimed that he had discovered a previously unknown alignment, involving a line of stones at 80 degrees to the axis of the monument. His theory is that the tallest stone at Stonehenge points towards the midsummer sunset, and this has been observed to be correct(v). The archaeological importance of Stonehenge was boosted further in September 2015 with the announcement that a line of nearly 100 buried stones had been discovered just a mile away, beside the Durrington Walls ‘superhenge’(y). There are images available, including a short video clip, relating to this new discovery(z). In November 2015 the New York Times published an updated overview(ae) of the various excavations that have taken place in the vicinity of Stonehenge. Sarah Ewbank has now offered us a fascinating new theory regarding the original purpose and plan of Stonehenge. In a fully illustrated website(al) she reveals that the structure was conceived as “a ‘Cathedral-like’ building with a massive oak-framed roof, and a huge hall at its centre.” Further discoveries are listed on the Heritage England website(ab). What is not listed there is the information that Stonehenge was constructed by giants on the instruction of the Devil! This b.s. tidbit was imparted to us in April 2016 by Dr. Dennis Lindsay on the TV show of disgraced US evangelist Jim Bakker(am). Another blog from Jason Colavito exposed further Stonehenge nonsense, this time from New Zealander Ted Harper, who has recently claimed that the Wiltshire monument and the Great Pyramid both warn of a meteor strike in 2020. Theories relating to Stonehenge and Atlantis seem to proliferate at comparable rates. In a new book, The Memory Code, Lynne Kelly proposes that the Wiltshire monument is a giant mnemonic(ar), as were other megalithic sites.
(m) BBC Focus Magazine, July 2014, p.51
(x) http://www.archaeologyuk.org/ba/ba45/ba45feat.html (offline Mar. 2016) see Archive 2657
(ai) http://www.archaeologyuk.org/ba/ba92/feat1.shtml (offline Mar. 2016) see Archive 2832
Be a pro and use this guide to calculate Standard Deviation in Excel in a jiffy. In statistics, Standard Deviation is used to find how far each value in a data set varies from the mean or average value. This helps us understand how closely each number is clustered around the mean. There are several types of professionals who use standard deviation as a fundamental risk measure, such as Insurance analysts, portfolio managers, Statisticians, Market researchers, Real estate agents, stock investors, etc. However, calculating Standard Deviation in Excel can be a daunting process for those who are new to Excel or aren’t familiar with it. That is why, in this article, we are going to explain what Standard Deviation is and how to calculate it in Excel. What is Standard Deviation? The mean value indicates the average value in the data set. And the Standard Deviation represents the difference between the values of the data set and their average value. In other words, standard deviation tells you whether your data is close to the mean or varies a lot. For example, if a teacher were to tell you that the average score of her students is 60 (mean). And if you have the list of her students’ scores, you can use standard deviation to find out how accurate she is. There are two types of Standard deviations: population standard deviation and sample standard deviation. The sample standard deviation (SSD) is calculated from the random samples of the population or data while population standard deviation (PSD) is calculated from the entire population data. The higher the standard deviation, the more spread out the data is from the mean, and the lower the standard deviation, the closer the values are to the average/mean. If the standard deviation is 0, all the data points in the data set are equal. Also, a higher standard deviation means the mean is less accurate, and a lower standard deviation indicates the mean is more reliable. There are two different ways for calculating Standard Deviation in Excel using formulas or built-in functions. Population vs. Sample Sample calculations are common because sometimes it’s not possible to calculate the entire data set. Before you start calculating standard deviation, you must know what kind of data you have – entire data or sample of the data. Because you have to use different formulas and functions for sample and population data. - Population data refers to the entire data set. It means the data is available for all the members of a group. The population standard deviation calculates the distance of each value in a data set from the population mean. An example of population data is a census. - Sample data on other hand is the subset of the population. It is a subset that represents the entire population. The sample is a smaller group of data that is derived from the population. Sample data is used when the entire population is not available or it is enough for statistics. A good example of a sample is a survey. Calculate Standard Deviations using Manual Calculations Manually calculating standard deviations is a bit of a lengthy process, but can be easily done using formulas. First, you need to calculate the variance of data and then find the square root of the variance. 
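For reference, before walking through the steps, here are the two formulas the next sections describe, written out in text form (the original article shows them as images, so these are reconstructions of the standard definitions, with μ as the mean, Xi as each value, and N as the number of values):
Population standard deviation: σ = √( Σ(Xi − μ)² / N )
Sample standard deviation: s = √( Σ(Xi − μ)² / (N − 1) )
If you want to follow the manual steps below in a worksheet, one possible layout (an assumption for illustration, not the article's exact screenshots) is to keep the data in B2:B11, the mean in B13, the deviations in column D, and the squared deviations in column E. The step formulas then look like this:
Mean (in B13): =AVERAGE(B2:B11)
Deviation from the mean (in D2, filled down): =B2-$B$13
Squared deviation (in E2, filled down): =D2^2
Sum of squared deviations: =SUM(E2:E11)
Population variance: =SUM(E2:E11)/COUNT(B2:B11)
Sample variance: =SUM(E2:E11)/(COUNT(B2:B11)-1)
Population standard deviation: =SQRT(SUM(E2:E11)/COUNT(B2:B11))
Sample standard deviation: =SQRT(SUM(E2:E11)/(COUNT(B2:B11)-1))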
Population Standard Deviation The formula for calculating the population standard deviation is given below: - μ is the arithmetic mean - Xi is individual values in the set of data - N is the total number of X values (population) in the data set Sample Standard Deviation The formula for calculating the sample standard deviation is given below: - μ is the arithmetic mean - Xi is individual values in the sample data set - N is the total number of X values (sample) in the data set When calculating sample standard deviation, only the sample of the data set is considered from an entire data set. Hence, Bessel’s correction (N-1) is used instead of N to give a better estimation of the standard deviation. To calculate the standard deviation for the below data set, follow these steps: 1. Calculate the Mean (Average) First, you need to calculate the mean/average (μ) of all values in the data set. To do that, you add up all the values and then divide the sum by the count of the values in the data set. You can find the average using the manual arithmetic expression: You can also use the AVERAGE function in Excel to calculate the mean: Here, B2:B11 in the formula represents the cells (denoted as the column number followed by the row number) that hold the data for which we intend to calculate the mean. Replace these values with the cells in your sheet while using the formula. 2. For Each Number, Calculate the Distance to the Mean: After that, you need to find the deviation or distance to the mean by subtracting the Mean from each value in the data set. To do that, enter the below formula: $B$13 is the mean of the data. The above formula is entered in cell D2 and then applied to the whole column by dragging it downwards to find the deviation for the whole column (D2:D13). Place your cursor at the bottom-right corner of the cell, here D2. A ‘+’ symbol will appear; drag it downward. 3. Square the difference Now, square the difference for each value by using the below formula: Then apply the formula to the whole column by dragging it downward. Squaring the difference will also turn the negative values into positive values. 4. Sum the Squared Differences Next, sum all the squares of the deviation about the mean (Xi-μ)2. Here’s the formula for adding up the squared differences: If you have large data set, you can calculate the count of values by: 5. Calculate the Variance To calculate the variance, you need to divide the squared difference by the number of values. So far, the steps for calculating sample and population standard deviation were the same. In this step, the formulas are going to change a little for both standard deviations, as explained before. For sample variance, you need to divide the sum of squared differences by the number of values minus 1: You can use either of the formulas to calculate sample variance. For population variance, you need to divide the sum of squared differences by just the number of values: 6. Get the Square Root of the Variance Finally, you need to take a square root of the above variance to get the Standard Deviation. Sample Standard deviation: To get the sample standard deviation, take the square root of the sample variance: Population Standard Deviation: To get population standard deviation, calculate the square root of the population variance: Calculate the Standard Error in Excel The standard error of the mean or simply the standard error is another measure of variability, very similar to standard deviation, yet different. 
It is a measure of how far the sample mean is likely to be from the true population mean. The difference between standard error and the standard deviation is that standard error uses statistics (sample data) while standard deviation uses parameters (population data). The Standard Deviation usually measures the variability within a single sample while Standard Error measures variability across multiple samples of a population. Here’s how you can calculate Standard Error (SE): The general formula for standard error is the standard deviation divided by the square root of the number of values in the data set. = Sample Standard Deviation / Square root of number of values (n) To calculate Standard Error (SE), enter the below formula: =STDEV(B2:B11)/SQRT(COUNT(B2:B11)) STDEV(B2:B11) finds the standard deviation of the sample (B2:B11), and the SD is divided by the square root of the number of values (n) in B2:B11. Instead of COUNT(B2:B11), you can also directly type in the number of values in the data set (10). Calculate the Standard Deviation using Excel Built-in Functions Microsoft Excel includes six different functions for calculating standard deviation. These functions make it extremely easy to calculate the Standard Deviation in Excel, cutting down on the time you need to spend on the calculations. The only catch is that you need to know which function to use when. The function you need to use depends upon the data you have – sample or population. When you type =STDEV in a blank Excel cell, it will show you the following six versions of the standard deviation functions: - STDEV: It is used for calculating the standard deviation for sample data. This function is the oldest Excel function (before Excel 2007) for finding standard deviation. It still exists in the latest Excel versions for compatibility purposes. - STDEVP: This is another older version of the standard deviation function that exists for compatibility with Excel 2007 and earlier. It calculates standard deviation based on population data. - STDEV.S: This is a newer version of the STDEV function (available since Excel 2010). It is used for calculating the standard deviation for sample data. - STDEV.P: This is a newer version of the STDEVP function in Excel (available since Excel 2010). It is used for calculating the standard deviation for entire population data. - STDEVA: This formula calculates the sample standard deviation of a dataset by including text and logical values. It is very similar to STDEV.S. All FALSE values and Text values are taken as ‘0’ and TRUE is taken as ‘1’. - STDEVPA: This formula calculates the population standard deviation by including text and logical values. It is similar to STDEV.P. All FALSE values and Text values are taken as ‘0’ and TRUE is taken as ‘1’. STDEV, STDEVP, STDEV.S, and STDEV.P ignore the text and logical (TRUE or FALSE) values in the dataset. In most cases, you will only need STDEV.P or STDEV.S to perform standard deviation calculations.
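To make the choice concrete, here is how each of the six functions would be called on the running example range B2:B11 used earlier (the range is simply the article's sample data; substitute your own):
Sample standard deviation (recommended, Excel 2010 and later): =STDEV.S(B2:B11)
Population standard deviation (recommended, Excel 2010 and later): =STDEV.P(B2:B11)
Sample standard deviation (legacy, kept for compatibility): =STDEV(B2:B11)
Population standard deviation (legacy, kept for compatibility): =STDEVP(B2:B11)
Sample standard deviation counting TRUE as 1 and FALSE/text as 0: =STDEVA(B2:B11)
Population standard deviation counting TRUE as 1 and FALSE/text as 0: =STDEVPA(B2:B11)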
To give you a better understanding, here’s a summary of the six functions: |Function name||Standard Deviation||Handling of Logical & Text values||Excel version| |STDEV.S||Sample standard deviation||Ignored||2010 – 2021| |STDEV.P||Population standard deviation||Ignored||2010 – 2021| |STDEV||Sample standard deviation||Ignored||2003 – 2021 (Available for compatibility with 2007 and earlier) |STDEVP||Population standard deviation||Ignored||2003 – 2021 (Available for compatibility with 2007 and earlier) |STDEVA||Sample standard deviation||Evaluated (TRUE=1, FALSE=0, Text = 0)||2003 – 2021| |STDEVPA||Population standard deviation||Evaluated (TRUE=1, FALSE=0, Text = 0)||2003 – 2021| Excel STDEV.P Function for Population Standard Deviation If your data set represents the entire population, you can use the STDEV.P function to calculate the population standard deviation. The syntax for the STDEV.P function is: - number1 is the first number argument that corresponds to the first data point of the population data. - [number2],.. is the second number argument that corresponds to the second data point of the population data and so on. The function must contain two or more values in the arguments and the function can take up to 255 numeric arguments. You can input numbers, arrays, and cell references as arguments. B2:B11 is the range of cells that contains the population data. The above formula will return the standard deviation for the given population. As you can see, we get exactly the same result 26.58289 (population standard deviation) as the above manual method. This function automatically performs all the step-by-step calculations from the above manual method in the background and gives you the result. In case your data contains any boolean values (TRUE or FALSE) or text values, this function ignores those values and calculates the standard deviation with the remaining values. As you can see below, the same above formula produced a different result because it ignored the values in the cells B8 and B11. Excel STDEV.S Function for Sample Standard Deviation If your data set represents the sample population, you can use the STDEV.S function to calculate the sample standard deviation. For example, you have conducted a test for a large number of students but you only have a test score of 10 students, so you can use STDEV.S to find the sample standard deviation and apply it to the entire population. The syntax for the STDEV.S function is: - number1 is the first number argument corresponding to the first data point of the sample data. - [number2],… is the second number argument that corresponds to the second data point of the sample data and so on. You can input numbers, arrays, and cell references as arguments. The above formula sums up all the squares of the deviation about the mean and divides it by the count minus 1 (n-1) in the background and returns the below result. STDEV.S function also ignores the text and logical values if there are any in the data set as shown below. Excel STDEVA Function for Sample Standard Deviation The STDEVA is another function used to calculate the standard deviation for a sample, but it differs from the STDEV.S only in the way it handles logical and text values. In all the above functions, logical values and text values are ignored, but the STDEVA function converts those values into 1s and 0s. - The logical values TRUE are counted as ‘1’ and FALSE are counted as ‘0’. The values can be contained within cells, arrays, or entered directly into the function as arguments. 
- All text strings including empty strings (“”), text representations of numbers, and any other text are evaluated as ‘0’. To evaluate the standard deviation for a sample, including logical values and text, use this formula: If there are no logical or text values in the data set, it will return the usual standard deviation. Excel STDEVPA Function for Population Standard Deviation Excel also has a function called STDEVPA for calculating standard deviation for a population by including text and logical values. This function is similar to the STDEVA in handling text and boolean values. - The logical values TRUE are counted as ‘1’ and FALSE are counted as ‘0’. - All text strings including empty strings (“”), text representations of numbers, and any other text are evaluated as ‘0’. To evaluate standard deviation for a population, including logical values and text, use the below formula: Excel STDEV Function Excel’s STDEV Function is very similar to the STDEV.S function, it can calculate the standard deviation for sample data. If you are working in Excel 2007 or earlier version, you need to use the STDEV function to calculate the standard deviation. Syntax for STDEV function: =STDEV(number1, [number 2],...) STDEV function exists in the newer versions of Excel for compatibility purposes which means it will be removed in the future. So, Microsoft is recommending that users use STDEV.S instead of STDEV. Excel STDEVP Function Excel’s STDEVP Function works exactly the same way as the STDEV.P function. If you are working in Excel 2007 or earlier version, you have to use the STDEVP function to calculate the standard deviation for population data. The Syntax for STDEVP function: =STDEVP(number1, [number 2],...) STDEVP may also be removed from the future Excel version. Calculate Standard Deviation in Excel Using Insert Function If all these functions are hard to remember, you can use the Insert Function option to quickly calculate the standard deviation. Also, you can avoid errors while writing the formula by using the Insert function feature to automatically insert the desired formula in your chosen result cell. Here’s how you can do that: First, select the cell where you want the output. Then, go to the ‘Formulas’ tab and select the ‘Insert Function’ button in the ribbon. This will open the Insert Function dialog box. There, search for ‘Standard Deviation’ in the ‘Search for a function’ field or choose the ‘Statistical’ category from the ‘Or select a category’ drop-down, and click ‘Go’. Then, scroll down the list of functions within the ‘Select a function’ window, choose a standard deviation function (STDEV.P, STDEV.S, STDEVA, or STDEPA), and click ‘OK’. This will open up Function Arguments dialog windows with two fields Number 1 and Number 2. In the Number 1 field, enter the range for which you want to calculate the standard deviation. Or, click the upward-facing arrow in the text field and highlight the range from the worksheet. Each number argument can only take up to 255 cell counts. If the number of cells exceeds 255, you can use Number 2, Number 3, etc. Then, click ‘OK’. Once you click OK, it will calculate the standard deviation using the selected function and show you the result in the cell you originally selected. Get Standard Deviation using Data Analysis Tool in Excel You can also get standard deviation as part of the Descriptive Statistics summary of your data using the data analysis tool. 
Excel’s Data Analysis tool can automatically generate various key statistical values, including mean, median, Variance, standard deviation, standard error, etc. For the below data set, we want to calculate descriptive statistics. Here’s how you can do that: To get Descriptive Statistics, go to the ‘Data’ tab and click the ‘Data Analysis’ tool from the Analysis section. In the Data Analysis dialog window, select ‘Descriptive Statistics’ under Analysis Tools and click ‘OK’. This will open the Descriptive Statistics dialog box in which you need to configure the Input and Output options. First, enter the range of variables/values you want to analyze in the ‘Input Range’ field. You can manually enter the range in the field or click the upward-facing arrow button at the end of the field to choose a range. After that, select the range from the sheet and click the downward arrow button to confirm the range. Next, choose how you want to organize your variables (rows or columns). Here, we are selecting ‘Columns’ because our input range is in columns. If you selected or entered the range (in the Input Range) with headers, you should tick the ‘Labels in first row’ option. In the ‘Output Range’ field, enter the range where you want to display the statistical result. If you want to display the result in the current worksheet or another worksheet in the current workbook, click the ‘Output Range’ radio button and specify the range in the field next to it. If you want to show the results in a new spreadsheet, simply select the ‘New Worksheet Ply’ radio button. Or, if you want to display the results in a new workbook, select the ‘New Workbook’ option. Finally, check the ‘Summary statistics’ option and click ‘OK’. And you will get all the necessary statistics you want including Standard Error, Standard Deviation, Sample Variance, etc. How to Calculate Standard Deviation with an IF Criteria Besides the above-mentioned six Standard Deviation functions, Excel also has two more functions called DSTDEV and DSTDEVP for calculating standard deviation with an IF condition. - DSTDEV function is used for calculating the standard deviation of data that is extracted from the sample data set matching the given criteria. - DSTDEVP function is used for calculating the standard deviation of data that is extracted from the population data set matching the given criteria. DSTDEV Function for Sample Standard Deviation The syntax for the DSTDEV function: =DSTDEV(database, field, criteria) - Database – The range of cells (table) where the data entries with values you want to calculate standard deviation are from. The range must include headers in the first row. - Field: This specifies the field or column where the numbers you want to calculate the standard deviation are located. You need to specify the field name (i.e. column label or header) enclosed in double quotes or field number (i.e. column number) within the table. - Criteria: The range of cells that contains where your criteria are. The criteria range must contain at least one column label matching your database headers and one cell below the column label that specifies the condition from the column. It can include multiple rows to specify multiple conditions. 
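Before the worked example below, here is a minimal sketch of how the three arguments fit together; the table range, field name, and criteria layout used here are illustrative assumptions, not the article's exact worksheet. Suppose the table occupies A1:C21 with the headers Name, Subject, and Score, and the criteria range G1:G2 holds the header Subject with the value Math beneath it:
Sample standard deviation of the matching scores: =DSTDEV(A1:C21, "Score", G1:G2)
Population standard deviation of the matching scores: =DSTDEVP(A1:C21, "Score", G1:G2)
The field can also be given as the column's position in the table instead of its label, for example =DSTDEV(A1:C21, 3, G1:G2).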
Suppose you have the below dataset where you need to calculate standard deviation based on conditions: For example, to find the sample standard deviation of scores obtained in math subject by all students, enter either of the below formulas: The above formula finds all the scores corresponding to Math and calculates the standard deviation for those scores. You need to create a separate criteria range (G1:H2) as shown below and enter the formula in an empty cell. DSTDEVP Function for Population Standard Deviation The syntax for the DSTDEVP function: =DSTDEVP(database, field, criteria) - Database – The range of cells (table) where the data entries with values you want to calculate standard deviation are from. - Field: This specifies the field or column where the values you want to calculate the standard deviation are located. - Criteria: The range of cells that contains where your criteria are. For example, to find the population standard deviation of scores obtained in math subject by all students, enter the below formulas: To find the population standard deviation of scores obtained in the math subject by students who are 14 and older, enter the below formula: You can also use the following wildcards in the text related criteria to calculate standard deviation: *– Matches any character with any amount ?– Matches any single character ~– Finds * or ? character in the search. You can include more than one row or column in the criteria range to calculate standard deviation using OR or AND logic. If you add more than one row in the criteria range, the functions will use OR logic (TRUE if at least one of the conditions is TRUE) to calculate the standard deviation. Whereas if you add more than one column in the criteria range, the functions will use the AND logic (TRUE if all of the conditions are TRUE) to evaluate. Calculate Weighted Standard Deviation in Excel Normally, when we calculate a standard deviation for a data set, all the values in the data set carry equal weights or importance. However, in some cases, each value has a different weight in the data set and some values have higher weights than others because some values are more important than others. In such cases, you cannot use the above built-in functions to calculate the standard deviation. So, you need to use other SUM, SUMPRODUCT, and SQRT functions to calculate the weighted standard deviation manually. Let us see how to calculate weighted standard deviation in Excel. Suppose you have a dataset where the first column contains data values and the second column contains weights of each of those values: First, you need to calculate the weighted mean or average for the given data. You can do that with the following formula: - value_range – Range or array of numbers. - weight_range – Range cells with weights In the above formula, the SUMPRODUCT function multiplies each of the values (column A) with its weight (column B) and sums up the results. Then, the SUMPRODUCT result is divided by the sum of the data weights to produce the weighted mean. Once you have the weighted mean, you can calculate the weighted standard deviation. To calculate the standard deviation for population data, enter the below formula: =SQRT(SUMPRODUCT((value_range - weighted_mean)^2, weight_range)/SUM( weight_range)) In the above formula, the calculated weighted mean is subtracted from each data value (column A) and each result is squared. After that, each square result is multiplied by data weight (column B) using the SUMPRODUCT function. 
Then, the SUMPRODUCT result is divided by the SUM of the weights. After that, the square root of the result is calculated using the SQRT function to find the standard deviation value. To calculate the standard deviation for sample data, enter the below formula: =SQRT(SUMPRODUCT((value_range - weighted_mean)^2, weight_range)/(SUM(weight_range)-1)) The only difference from the population formula above is that 1 is subtracted from the sum of the weights in the denominator, mirroring the Bessel's correction used for sample data. Add Standard Deviation Bars In Excel You can also add standard deviation bars to visualize the margin of your standard deviation using Excel error bars. Error bars are part of the chart elements in Excel that let you represent data variability and measurement uncertainty. To add standard deviation bars to your chart, follow these steps: First, create a chart or graph for your data set. To do that, select the range of cells, go to the ‘Insert’ tab and choose a graph option from the Charts group. Once the chart is inserted, select the graph by clicking anywhere on the graph, then click the ‘Chart Elements’ (+) button. Then, click the ‘Error Bars’ option and select ‘Standard Deviation’. As a result, the standard deviation bars will be inserted for all data points as shown below. Calculating Variance in Excel When calculating the standard deviation of your data, you often need to calculate variance as well. Variance measures the variability in the data and is the square of the standard deviation. Like standard deviation, variance also has 6 built-in functions that you can use, but Microsoft recommends using VAR.S and VAR.P to calculate the variance of sample data and population data respectively. To calculate the variance based on sample data, use the below formula: =VAR.S(B2:B15) The above formula calculates sample variance for the range B2:B15 and displays the result in cell E2. To calculate the variance based on population data, use the below formula: =VAR.P(B2:B15) That’s it. That’s the only crash course you’ll ever need for calculating Standard Deviation in Excel. Now, go on and show off your newly acquired skills!
A metal (from Greek "μέταλλον" – métallon, "mine, quarry, metal") is a material (an element, compound, or alloy) that is typically hard, opaque, shiny, and has good electrical and thermal conductivity. Metals are generally malleable — that is, they can be hammered or pressed permanently out of shape without breaking or cracking — as well as fusible (able to be fused or melted) and ductile (able to be drawn out into a thin wire). About 91 of the 118 elements in the periodic table are metals (some elements appear in both metallic and non-metallic forms). The meaning of "metal" differs for various communities. For example, astronomers use the blanket term "metal" for convenience to collectively describe all elements other than hydrogen and helium (the main components of stars, which in turn comprise most of the visible matter in the universe). Thus, in astronomy and physical cosmology, the metallicity of an object is the proportion of its matter made up of chemical elements other than hydrogen and helium. In addition, many elements and compounds that are not normally classified as metals become metallic under high pressures; these are known as metallic allotropes of non-metals. - 1 Structure and bonding - 2 Properties - 3 Alloys - 4 Categories - 5 Extraction - 6 Recycling of metals - 7 Metallurgy - 8 Applications - 9 Trade - 10 History - 11 See also - 12 References - 13 External links Structure and bonding The atoms of metallic substances are closely positioned to neighboring atoms in one of two common arrangements. The first arrangement is known as body-centered cubic. In this arrangement, each atom is positioned at the center of eight others. The other is known as face-centered cubic. In this arrangement, each atom is positioned at the center of twelve others. The ongoing arrangement of atoms in these structures forms a crystal. Some metals adopt both structures depending on the temperature. Atoms of metals readily lose their outer shell electrons, resulting in a free flowing cloud of electrons within their otherwise solid arrangement. This provides the ability of metallic substances to easily transmit heat and electricity. While this flow of electrons occurs, the solid characteristic of the metal is produced by electrostatic interactions between each atom and the electron cloud. This type of bond is called a metallic bond. Metals are usually inclined to form cations through electron loss, reacting with oxygen in the air to form oxides over various timescales (iron rusts over years, while potassium burns in seconds). Examples: - 4 Na + O2 → 2 Na2O (sodium oxide) - 2 Ca + O2 → 2 CaO (calcium oxide) - 4 Al + 3 O2 → 2 Al2O3 (aluminium oxide). The transition metals (such as iron, copper, zinc, and nickel) are slower to oxidize because they form a passivating layer of oxide that protects the interior. Others, like palladium, platinum and gold, do not react with the atmosphere at all. Some metals form a barrier layer of oxide on their surface which cannot be penetrated by further oxygen molecules and thus retain their shiny appearance and good conductivity for many decades (like aluminium, magnesium, some steels, and titanium). The oxides of metals are generally basic, as opposed to those of nonmetals, which are acidic.
Blatant exceptions are largely oxides with very high oxidation states such as CrO3, Mn2O7, and OsO4, which have strictly acidic reactions. Painting, anodizing or plating metals are good ways to prevent their corrosion. However, a more reactive metal in the electrochemical series must be chosen for coating, especially when chipping of the coating is expected. Water and the two metals form an electrochemical cell, and if the coating is less reactive than the coatee, the coating actually promotes corrosion. Metals in general have high electrical conductivity, high thermal conductivity, and high density. Typically they are malleable and ductile, deforming under stress without cleaving. In terms of optical properties, metals are shiny and lustrous. Sheets of metal beyond a few micrometres in thickness appear opaque, but gold leaf transmits green light. Although most metals have higher densities than most nonmetals, there is wide variation in their densities, Lithium being the least dense solid element and osmium the densest. The alkali and alkaline earth metals in groups I A and II A are referred to as the light metals because they have low density, low hardness, and low melting points. The high density of most metals is due to the tightly packed crystal lattice of the metallic structure. The strength of metallic bonds for different metals reaches a maximum around the center of the transition metal series, as those elements have large amounts of delocalized electrons in tight binding type metallic bonds. However, other factors (such as atomic radius, nuclear charge, number of bonds orbitals, overlap of orbital energies and crystal form) are involved as well. The electrical and thermal conductivities of metals originate from the fact that their outer electrons are delocalized. This situation can be visualized by seeing the atomic structure of a metal as a collection of atoms embedded in a sea of highly mobile electrons. The electrical conductivity, as well as the electrons' contribution to the heat capacity and heat conductivity of metals can be calculated from the free electron model, which does not take into account the detailed structure of the ion lattice. When considering the electronic band structure and binding energy of a metal, it is necessary to take into account the positive potential caused by the specific arrangement of the ion cores – which is periodic in crystals. The most important consequence of the periodic potential is the formation of a small band gap at the boundary of the Brillouin zone. Mathematically, the potential of the ion cores can be treated by various models, the simplest being the nearly free electron model. Mechanical properties of metals include ductility, i.e. their capacity for plastic deformation. Reversible elastic deformation in metals can be described by Hooke's Law for restoring forces, where the stress is linearly proportional to the strain. Forces larger than the elastic limit, or heat, may cause a permanent (irreversible) deformation of the object, known as plastic deformation or plasticity. This irreversible change in atomic arrangement may occur as a result of: - The action of an applied force (or work). An applied force may be tensile (pulling) force, compressive (pushing) force, shear, bending or torsion (twisting) forces. - A change in temperature (heat). 
A temperature change may affect the mobility of the structural defects such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and twins in both crystalline and non-crystalline solids. The movement or displacement of such mobile defects is thermally activated, and thus limited by the rate of atomic diffusion. Viscous flow near grain boundaries, for example, can give rise to internal slip, creep and fatigue in metals. It can also contribute to significant changes in the microstructure like grain growth and localized densification due to the elimination of intergranular porosity. Screw dislocations may slip in the direction of any lattice plane containing the dislocation, while the principal driving force for "dislocation climb" is the movement or diffusion of vacancies through a crystal lattice. In addition, the nondirectional nature of metallic bonding is also thought to contribute significantly to the ductility of most metallic solids. When the planes of an ionic bond slide past one another, the resultant change in location shifts ions of the same charge into close proximity, resulting in the cleavage of the crystal; such shift is not observed in covalently bonded crystals where fracture and crystal fragmentation occurs. An alloy is a mixture of two or more elements in which the main component is a metal. Most pure metals are either too soft, brittle or chemically reactive for practical use. Combining different ratios of metals as alloys modifies the properties of pure metals to produce desirable characteristics. The aim of making alloys is generally to make them less brittle, harder, resistant to corrosion, or have a more desirable color and luster. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steel) make up the largest proportion both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low, mid and high carbon steels, with increasing carbon levels reducing ductility and toughness. The addition of silicon will produce cast irons, while the addition of chromium, nickel and molybdenum to carbon steels (more than 10%) results in stainless steels. Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known since prehistory—bronze gave the Bronze Age its name—and have many applications today, most importantly in electrical wiring. The alloys of the other three metals have been developed relatively recently; due to their chemical reactivity they require electrolytic extraction processes. The alloys of aluminium, titanium and magnesium are valued for their high strength-to-weight ratios; magnesium can also provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratio is more important than material cost, such as in aerospace and some automotive applications. Alloys specially designed for highly demanding applications, such as jet engines, may contain more than ten elements. In chemistry, the term base metal is used informally to refer to a metal that oxidizes or corrodes relatively easily, and reacts variably with dilute hydrochloric acid (HCl) to form hydrogen. Examples include iron, nickel, lead and zinc. Copper is considered a base metal as it oxidizes relatively easily, although it does not react with HCl. It is commonly used in opposition to noble metal. 
In alchemy, a base metal was a common and inexpensive metal, as opposed to precious metals, mainly gold and silver. A longtime goal of the alchemists was the transmutation of base metals into precious metals. The term "ferrous" is derived from the Latin word meaning "containing iron". This can include pure iron, such as wrought iron, or an alloy such as steel. Ferrous metals are often magnetic, but not exclusively. Noble metals are metals that are resistant to corrosion or oxidation, unlike most base metals. They tend to be precious metals, often due to perceived rarity. Examples include gold, platinum, silver and rhodium. A precious metal is a rare metallic chemical element of high economic value. Chemically, the precious metals are less reactive than most elements, have high luster and high electrical conductivity. Historically, precious metals were important as currency, but are now regarded mainly as investment and industrial commodities. Gold, silver, platinum and palladium each have an ISO 4217 currency code. The best-known precious metals are gold and silver. While both have industrial uses, they are better known for their uses in art, jewelry, and coinage. Other precious metals include the platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum, of which platinum is the most widely traded. The demand for precious metals is driven not only by their practical use, but also by their role as investments and a store of value. Palladium was, as of summer 2006, valued at a little under half the price of gold, and platinum at around twice that of gold. Silver is substantially less expensive than these metals, but is often traditionally considered a precious metal for its role in coinage and jewelry. Metals are often extracted from the Earth by means of mining, resulting in ores that are relatively rich sources of the requisite elements. Ore is located by prospecting techniques, followed by the exploration and examination of deposits. Mineral sources are generally divided into surface mines, which are mined by excavation using heavy equipment, and subsurface mines. Once the ore is mined, the metals must be extracted, usually by chemical or electrolytic reduction. Pyrometallurgy uses high temperatures to convert ore into raw metals, while hydrometallurgy employs aqueous chemistry for the same purpose. The methods used depend on the metal and their contaminants. When a metal ore is an ionic compound of that metal and a non-metal, the ore must usually be smelted — heated with a reducing agent — to extract the pure metal. Many common metals, such as iron, are smelted using carbon as a reducing agent. Some metals, such as aluminium and sodium, have no commercially practical reducing agent, and are extracted using electrolysis instead. Sulfide ores are not reduced directly to the metal but are roasted in air to convert them to oxides. Recycling of metals Demand for metals is closely linked to economic growth. During the 20th century, the variety of metals uses in society grew rapidly. Today, the development of major nations, such as China and India, and advances in technologies, are fuelling ever more demand. The result is that mining activities are expanding, and more and more of the world's metal stocks are above ground in use, rather than below ground as unused reserves. An example is the in-use stock of copper. Between 1932 and 1999, copper in use in the USA rose from 73g to 238g per person. 
Metals are inherently recyclable, so in principle, can be used over and over again, minimizing these negative environmental impacts and saving energy at the same time. For example, 95% of the energy used to make aluminium from bauxite ore is saved by using recycled material. However, levels of metals recycling are generally low. In 2010, the International Resource Panel, hosted by the United Nations Environment Programme (UNEP) published reports on metal stocks that exist within society and their recycling rates. The report authors observed that the metal stocks in society can serve as huge mines above ground. However, they warned that the recycling rates of some rare metals used in applications such as mobile phones, battery packs for hybrid cars and fuel cells are so low that unless future end-of-life recycling rates are dramatically stepped up these critical metals will become unavailable for use in modern technology. Metallurgy is a domain of materials science that studies the physical and chemical behavior of metallic elements, their intermetallic compounds, and their mixtures, which are called alloys. Some metals and metal alloys possess high structural strength per unit mass, making them useful materials for carrying large loads or resisting impact damage. Metal alloys can be engineered to have high resistance to shear, torque and deformation. However the same metal can also be vulnerable to fatigue damage through repeated use or from sudden stress failure when a load capacity is exceeded. The strength and resilience of metals has led to their frequent use in high-rise building and bridge construction, as well as most vehicles, many appliances, tools, pipes, non-illuminated signs and railroad tracks. Metals are good conductors, making them valuable in electrical appliances and for carrying an electric current over a distance with little energy lost. Electrical power grids rely on metal cables to distribute electricity. Home electrical systems, for the most part, are wired with copper wire for its good conducting properties. The thermal conductivity of metal is useful for containers to heat materials over a flame. Metal is also used for heat sinks to protect sensitive equipment from overheating. The high reflectivity of some metals is important in the construction of mirrors, including precision astronomical instruments. This last property can also make metallic jewelry aesthetically appealing. Some metals have specialized uses; radioactive metals such as uranium and plutonium are used in nuclear power plants to produce energy via nuclear fission. Mercury is a liquid at room temperature and is used in switches to complete a circuit when it flows over the switch contacts. Shape memory alloy is used for applications such as pipes, fasteners and vascular stents. The nature of metals has fascinated mankind for many centuries, because these materials provided people with tools of unsurpassed properties both in war and in their preparation and processing. Sterling gold and silver were known to man since the Stone Age. Lead and silver were fused from their ores as early as the fourth millennium BC. Ancient Latin and Greek writers such as Theophrastus, Pliny the Elder in his Natural History, or Pedanius Dioscorides, did not try to classify metals. The ancients never attained the concept "metal" as a distinct elementary substance of fixed, characteristic chemical and physical properties. 
Following Empedocles, all substances within the sublunary sphere were assumed to vary in their constituent classical elements of earth, water, air and fire. Following the Pythagoreans, Plato assumed that these elements could be further reduced to plane geometrical shapes (triangles and squares) bounding space and relating to the regular polyhedra in the sequence earth:cube, water:icosahedron, air:octahedron, fire:tetrahedron. However, this philosophical extension did not become as popular as the simple four elements, after it was rejected by Aristotle. Aristotle also rejected the atomic theory of Democritus, since he classified the implied existence of a vacuum necessary for motion as a contradiction (a vacuum implies nonexistence, therefore cannot exist). Aristotle did, however, introduce underlying antagonistic qualities (or forces) of dry vs. wet and cold vs. heat into the composition of each of the four elements. The word "metal" originally meant "mines" and only later gained the general meaning of products from materials obtained in mines. In the first centuries A.D. a relation between the planets and the existing metals was assumed as Gold:Sun, Silver:Moon, Electrum:Jupiter, Iron:Mars, Copper:Venus, Tin:Mercury, Lead: Saturn. After electrum was determined to be a combination of silver and gold, the relations Tin:Jupiter and Mercury:Mercury were substituted into the previous sequence. Arabic and medieval alchemists believed that all metals, and in fact, all sublunar matter, were composed of the principle of sulfur, carrying the combustible property, and the principle of mercury, the mother of all metals and carrier of the liquidity or fusibility, and the volatility properties. These principles were not necessarily the common substances sulfur and mercury found in most laboratories. This theory reinforced the belief that the all metals were destined to become gold in the bowels of the earth through the proper combinations of heat, digestion, time, and elimination of contaminants, all of which could be developed and hastened through the knowledge and methods of alchemy. Paracelsus added the third principle of salt, carrying the nonvolatile and incombustible properties, in his tria prima doctrine. These theories retained the four classical elements as underlying the composition of sulfur, mercury and salt. The first systematic text on the arts of mining and metallurgy was De la Pirotechnia by Vannoccio Biringuccio, which treats the examination, fusion, and working of metals. Sixteen years later, Georgius Agricola published De Re Metallica in 1555, a clear and complete account of the profession of mining, metallurgy, and the accessory arts and sciences, as well as qualifying as the greatest treatise on the chemical industry through the sixteenth century. He gave the following description of a metal in his De Natura Fossilium (1546). Metal is a mineral body, by nature either liquid or somewhat hard. The latter may be melted by the heat of the fire, but when it has cooled down again and lost all heat, it becomes hard again and resumes its proper form. In this respect it differs from the stone which melts in the fire, for although the latter regain its hardness, yet it loses its pristine form and properties. Traditionally there are six different kinds of metals, namely gold, silver, copper, iron, tin and lead. There are really others, for quicksilver is a metal, although the Alchemists disagree with us on this subject, and bismuth is also. 
The ancient Greek writers seem to have been ignorant of bismuth, wherefore Ammonius rightly states that there are many species of metals, animals, and plants which are unknown to us. Stibium when smelted in the crucible and refined has as much right to be regarded as a proper metal as is accorded to lead by writers. If when smelted, a certain portion be added to tin, a bookseller's alloy is produced from which the type is made that is used by those who print books on paper. Each metal has its own form which it preserves when separated from those metals which were mixed with it. Therefore neither electrum nor Stannum [not meaning our tin] is of itself a real metal, but rather an alloy of two metals. Electrum is an alloy of gold and silver, Stannum of lead and silver. And yet if silver be parted from the electrum, then gold remains and not electrum; if silver be taken away from Stannum, then lead remains and not Stannum. Whether brass, however, is found as a native metal or not, cannot be ascertained with any surety. We only know of the artificial brass, which consists of copper tinted with the colour of the mineral calamine. And yet if any should be dug up, it would be a proper metal. Black and white copper seem to be different from the red kind. Metal, therefore, is by nature either solid, as I have stated, or fluid, as in the unique case of quicksilver. But enough now concerning the simple kinds. - μέταλλον Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library - metal, on Oxford Dictionaries - metal. Encyclopædia Britannica - John C. Martin. "What we learn from a star's metal content". New Analysis RR Lyrae Kinematics in the Solar Neighborhood. Retrieved September 7, 2005. - Holleman, A. F.; Wiberg, E. "Inorganic Chemistry" Academic Press: San Diego, 2001. ISBN 0-12-352651-5. - Mortimer, Charles E. (1975). Chemistry: A Conceptual Approach (3rd ed.). New York:: D. Van Nostrad Company. - Ductility – strength of materials - "Los Alamos National Laboratory – Sodium". Retrieved 2007-06-08. - "Los Alamos National Laboratory – Aluminum". Retrieved 2007-06-08. - The Recycling Rates of Metals: A Status Report 2010, International Resource Panel, United Nations Environment Programme - Tread lightly: Aluminium attack Carolyn Fry, Guardian.co.uk, 22 February 2008. - Metal Stocks in Society: Scientific Synthesis 2010, International Resource Panel, United Nations Environment Programme - Frank Kreith and Yogi Goswami, eds. (2004). The CRC Handbook of Mechanical Engineering, 2nd edition. Boca Raton. p. 12-2. - Structure of merchandise imports - Der Große Brockhaus (in German). 7: L-MIJ (Sixteenth, altogether newly prepared ed.). Wiesbaden: Bibliographisches Institut & F. A. Brockhaus. 1955. p. 715. - John Maxson Stillman, The Story of Early Chemistry D. Appleton (1924) - Georgius Agricola, De Re Metallica (1556) Tr. Herbert Clark Hoover & Lou Henry Hoover (1912); Footnote quoting De Natura Fossilium (1546), p. 180 |Wikisource has the text of the 1879 American Cyclopædia article Metal.| |Periodic table (Large version)|
Science Fair Wizard - Pick a topic - Determine a problem - Investigate your problem - Formulate a hypothesis - Design an experiment - Test your hypothesis - Presenting your data - Using Statistics to Analyze Data - Writing up your conclusions - Re-testing your hypothesis - Write your research paper - Construct your exhibit - Prepare your presentation - Show Time! Pre-science fair checklist - Submit your paperwork Step 7C: Writing up your conclusions Your conclusion should revisit the purpose of your experiment and hypothesis in light of your data analysis. Make sure you address your original question or problem when you interpret your data. Your conclusions should be valid (that is, logical) and limited to the results of the experiment. Evaluate your data. Explain the effect of experimental error or any procedural changes on your results. Were there variables you couldn’t control for, such as age or other characteristics? Why is this information (your data analysis) important or significant? What is the relevance of your data to everyday life? Science is a process that doesn’t only try to answer a question but generates more questions. What new questions do you have as a result of your experiment? Does your data support your hypothesis or not? If your hypothesis is incorrect, think about the reasons why this might have happened. This doesn’t mean you didn’t carry out your experiment correctly. Revisit your notes. Did you change anything about the procedure and materials that could explain what happened? Tip: Use the worksheets you have been completing along the way to refresh your memory about your library research and procedure as you write your conclusion. These worksheets will be helpful when it is time to put together your Science Project Paper. Scientists at Argonne National Laboratory can help you with your project. ( just ask ) The digital library project - Excel - Tutorials - Basic Probability Rules - Single Event Probability - Complement Rule - Levels of Measurement - Independent and Dependent Variables - Entering Data - Central Tendency - Data and Tests - Displaying Data - Discussing Statistics In-text - SEM and Confidence Intervals - Two-Way Frequency Tables - Empirical Rule - Finding Probability - Accessing SPSS - Chart and Graphs - Frequency Table and Distribution - Descriptive Statistics - Converting Raw Scores to Z-Scores - Converting Z-scores to t-scores - Split File/Split Output - Partial Eta Squared - Downloading and Installing G*Power: Windows/PC - One-Way ANOVA - Two-Way ANOVA - Repeated Measures ANOVA - Test of Association - Pearson's r - Point Biserial - Mediation and Moderation - Simple Linear Regression - Multiple Linear Regression - Binomial Logistic Regression - Multinomial Logistic Regression - Independent Samples T-test - Dependent Samples T-test - Testing Assumptions - T-tests using SPSS - T-Test Practice - Predictive Analytics This link opens in a new window - Quantitative Research Questions - Null & Alternative Hypotheses - One-Tail vs. Two-Tail - Alpha & Beta - Associated Probability - Decision Rule - Statement of Conclusion - Statistics Group Sessions Statement of the Conclusion When writing your results, you’re going to write the decision regarding the null, but you also want to state the results in layman’s terms. Tie the statistical results back to the original claim and interpret what those statistics mean, without all the quantitative jargon. 1) Claim : Females run faster than males. Results of the test : t o > t c Decision : Reject Null Hypothesis. 
Conclusion : There is sufficient evidence to suggest that females run faster than males. 2) Claim : There is a difference in the highest level of education obtained based on socioeconomic status. Results of the test : p > α Decision : Fail to Reject Null Hypothesis. Conclusion : There is not enough evidence to suggest that highest level of education differs based on socioeconomic status. 3) Claim : The number of calories consumed and the number of hours spent exercising each week are significant predictors of weight. Results of the test : p < α Decision : Reject Null Hypothesis. Conclusion : The results of the hypothesis test suggest that a person’s weight can be predicted given caloric intake and the number of hours spent exercising each week. Was this resource helpful? - << Previous: Decision Rule - Next: Statistics Group Sessions >> - Last Updated: Nov 12, 2023 7:28 AM - URL: https://resources.nu.edu/statsresources Drawing Conclusions from Statistics - Describe the role of random sampling and random assignment in drawing cause-and-effect conclusions One limitation to the study mentioned previously about the babies choosing the “helper” toy is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample ) from a much larger group of individuals (the population ) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day. Example 1 : The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels. In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling . Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size, the margin of error ). A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed. 
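As a rough arithmetic check on that interval, the "one over the square root of the sample size" approximation can be computed directly. The short Python sketch below only reuses the sample size and count quoted above; note that the article rounds the margin to 3 percentage points, while the raw approximation is closer to 3.2.

from math import sqrt

# 2004 GSS "feeling rushed" item quoted above
n = 977
p_hat = 817 / n          # sample proportion, about 0.836

margin = 1 / sqrt(n)     # rough margin of error, about 0.032

print(round(p_hat, 3))              # 0.836
print(round(p_hat - margin, 3))     # about 0.804
print(round(p_hat + margin, 3))     # about 0.868
# The article rounds the margin to 3 percentage points, giving 80.6%-86.6%.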
The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error. Cause and Effect In many research studies, the primary question of interest concerns differences between groups. Then the question becomes how were the groups formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or maybe the difference we observe in the groups is so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find? Example 2 : A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic (internal) or extrinsic (external) motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 2, where higher scores indicate more creativity. In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations? Figure 2 reveals that both motivation groups saw considerable variability in creativity scores, and these scores have considerable overlap between the groups. In other words, it’s certainly not always the case that those with intrinsic motivations have higher creativity than those with extrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”) The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores in the groups. We can measure variability with statistics using, for instance, the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group.
We see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group, on average, the difference is not extremely large. We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group. But does this always work? No, so by “luck of the draw” the groups may be a little different prior to answering the motivation survey. So then the question is, is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem was going to get the same creativity score no matter which group they were assigned to, so that the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points? We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing everyone’s creativity score on an index card, shuffling up the index cards, and then dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores don’t change, random assignment leads to a difference in means at least as large as 4.14. Figure 3 shows the results from 1,000 such hypothetical random assignments for these scores. Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random assignment process alone to produce such a large difference in group means.
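The reshuffling procedure described above is straightforward to simulate. The Python sketch below follows that recipe; because the 47 raw creativity scores are not listed in the text, it draws synthetic stand-in scores from the reported group means and standard deviations, so its output will only approximate the study's p-value of about 0.002.

import random

random.seed(1)

# Synthetic stand-ins for the 23 extrinsic and 24 intrinsic scores,
# drawn from the reported means and standard deviations (not the real data)
extrinsic_scores = [random.gauss(15.74, 5.25) for _ in range(23)]
intrinsic_scores = [random.gauss(19.88, 4.40) for _ in range(24)]

observed_diff = (sum(intrinsic_scores) / 24) - (sum(extrinsic_scores) / 23)

all_scores = extrinsic_scores + intrinsic_scores
count = 0
for _ in range(1000):
    random.shuffle(all_scores)          # "shuffle the index cards"
    new_extrinsic = all_scores[:23]     # deal 23 to the extrinsic group
    new_intrinsic = all_scores[23:]     # and 24 to the intrinsic group
    diff = (sum(new_intrinsic) / 24) - (sum(new_extrinsic) / 23)
    if diff >= observed_diff:
        count += 1

# Proportion of random re-assignments that produced a difference at least
# as large as the observed one: the approximate p-value
print(count / 1000)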
Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations. Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily—we could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar the individuals in this study, but we would still want to know more about how these individuals were selected to participate. Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error. So where does this leave us with regard to the coffee study mentioned previously (the Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012 found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (women 15% lower) than those who drank none)? We can answer many of the questions: - This was a 14-year study conducted by researchers at the National Cancer Institute. - The results were published in the June issue of the New England Journal of Medicine , a respected, peer-reviewed journal. - The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study. - About 52,000 people died during the course of the study. - People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups. - The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%). - Whether coffee was caffeinated or decaffeinated did not appear to affect the results. - This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee. This study needs to be reviewed in the larger context of similar studies and consistency of results across studies, with the constant caution that this was not a randomized experiment. 
Whereas a statistical analysis can still “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions. Explore these outside resources to learn more about applied statistics: - Video about p-values: P-Value Extravaganza - Interactive web applets for teaching and learning statistics - Inter-university Consortium for Political and Social Research where you can find and analyze data. - The Consortium for the Advancement of Undergraduate Statistics Think It Over - Find a recent research article in your field and answer the following: What was the primary research question? How were individuals selected to participate in the study? Were summary results provided? How strong is the evidence presented in favor or against the research question? Was random assignment used? Summarize the main conclusions from the study, addressing the issues of statistical significance, statistical confidence, generalizability, and cause and effect. Do you agree with the conclusions drawn from this study, based on the study design and the results presented? - Is it reasonable to use a random sample of 1,000 individuals to draw conclusions about all U.S. adults? Explain why or why not. CC licensed content, Original - Modification, adaptation, and original content. Authored by : Pat Carroll and Lumen Learning. Provided by : Lumen Learning. License : CC BY: Attribution CC licensed content, Shared previously - Statistical Thinking. Authored by : Beth Chance and Allan Rossman, California Polytechnic State University, San Luis Obispo. Provided by : Noba. Located at : http://nobaproject.com/modules/statistical-thinking . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike - The Replication Crisis. Authored by : Colin Thomas William. Provided by : Ivy Tech Community College. License : CC BY: Attribution related to whether the results from the sample can be generalized to a larger population. the collection of individuals on which we collect data. a larger collection of individuals that we would like to generalize our results to. using a probability-based method to select a subset of individuals for the sample from the population. the expected amount of random variation in a statistic; often defined for 95% confidence level. using a probability-based method to divide a sample into treatment groups. the probability of observing a particular outcome in a sample, or more extreme, under a conjecture about the larger population or process. related to whether we say one variable is causing changes in the other variable, versus other variables that may be related to these two variables. General Psychology Copyright © by OpenStax and Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted. Share This Book Have a language expert improve your writing Run a free plagiarism check in 10 minutes, generate accurate citations for free. - Knowledge Base - Research paper Writing a Research Paper Conclusion | Step-by-Step Guide Published on October 30, 2022 by Jack Caulfield . Revised on April 13, 2023. 
- Restate the problem statement addressed in the paper - Summarize your overall arguments or findings - Suggest the key takeaways from your paper The content of the conclusion varies depending on whether your paper presents the results of original empirical research or constructs an argument through engagement with sources . Table of contents Step 1: restate the problem, step 2: sum up the paper, step 3: discuss the implications, research paper conclusion examples, frequently asked questions about research paper conclusions. The first task of your conclusion is to remind the reader of your research problem . You will have discussed this problem in depth throughout the body, but now the point is to zoom back out from the details to the bigger picture. While you are restating a problem you’ve already introduced, you should avoid phrasing it identically to how it appeared in the introduction . Ideally, you’ll find a novel way to circle back to the problem from the more detailed ideas discussed in the body. For example, an argumentative paper advocating new measures to reduce the environmental impact of agriculture might restate its problem as follows: Meanwhile, an empirical paper studying the relationship of Instagram use with body image issues might present its problem like this: “In conclusion …” Avoid starting your conclusion with phrases like “In conclusion” or “To conclude,” as this can come across as too obvious and make your writing seem unsophisticated. The content and placement of your conclusion should make its function clear without the need for additional signposting. Receive feedback on language, structure, and formatting Professional editors proofread and edit your paper by focusing on: - Academic style - Vague sentences - Style consistency See an example Having zoomed back in on the problem, it’s time to summarize how the body of the paper went about addressing it, and what conclusions this approach led to. Depending on the nature of your research paper, this might mean restating your thesis and arguments, or summarizing your overall findings. Argumentative paper: Restate your thesis and arguments In an argumentative paper, you will have presented a thesis statement in your introduction, expressing the overall claim your paper argues for. In the conclusion, you should restate the thesis and show how it has been developed through the body of the paper. Briefly summarize the key arguments made in the body, showing how each of them contributes to proving your thesis. You may also mention any counterarguments you addressed, emphasizing why your thesis holds up against them, particularly if your argument is a controversial one. Don’t go into the details of your evidence or present new ideas; focus on outlining in broad strokes the argument you have made. Empirical paper: Summarize your findings In an empirical paper, this is the time to summarize your key findings. Don’t go into great detail here (you will have presented your in-depth results and discussion already), but do clearly express the answers to the research questions you investigated. Describe your main findings, even if they weren’t necessarily the ones you expected or hoped for, and explain the overall conclusion they led you to. Having summed up your key arguments or findings, the conclusion ends by considering the broader implications of your research. This means expressing the key takeaways, practical or theoretical, from your paper—often in the form of a call for action or suggestions for future research. 
Argumentative paper: Strong closing statement An argumentative paper generally ends with a strong closing statement. In the case of a practical argument, make a call for action: What actions do you think should be taken by the people or organizations concerned in response to your argument? If your topic is more theoretical and unsuitable for a call for action, your closing statement should express the significance of your argument—for example, in proposing a new understanding of a topic or laying the groundwork for future research. Empirical paper: Future research directions In a more empirical paper, you can close by either making recommendations for practice (for example, in clinical or policy papers), or suggesting directions for future research. Whatever the scope of your own research, there will always be room for further investigation of related topics, and you’ll often discover new questions and problems during the research process . Finish your paper on a forward-looking note by suggesting how you or other researchers might build on this topic in the future and address any limitations of the current paper. Full examples of research paper conclusions are shown in the tabs below: one for an argumentative paper, the other for an empirical paper. - Argumentative paper - Empirical paper While the role of cattle in climate change is by now common knowledge, countries like the Netherlands continually fail to confront this issue with the urgency it deserves. The evidence is clear: To create a truly futureproof agricultural sector, Dutch farmers must be incentivized to transition from livestock farming to sustainable vegetable farming. As well as dramatically lowering emissions, plant-based agriculture, if approached in the right way, can produce more food with less land, providing opportunities for nature regeneration areas that will themselves contribute to climate targets. Although this approach would have economic ramifications, from a long-term perspective, it would represent a significant step towards a more sustainable and resilient national economy. Transitioning to sustainable vegetable farming will make the Netherlands greener and healthier, setting an example for other European governments. Farmers, policymakers, and consumers must focus on the future, not just on their own short-term interests, and work to implement this transition now. As social media becomes increasingly central to young people’s everyday lives, it is important to understand how different platforms affect their developing self-conception. By testing the effect of daily Instagram use among teenage girls, this study established that highly visual social media does indeed have a significant effect on body image concerns, with a strong correlation between the amount of time spent on the platform and participants’ self-reported dissatisfaction with their appearance. However, the strength of this effect was moderated by pre-test self-esteem ratings: Participants with higher self-esteem were less likely to experience an increase in body image concerns after using Instagram. This suggests that, while Instagram does impact body image, it is also important to consider the wider social and psychological context in which this usage occurs: Teenagers who are already predisposed to self-esteem issues may be at greater risk of experiencing negative effects. 
Future research into Instagram and other highly visual social media should focus on establishing a clearer picture of how self-esteem and related constructs influence young people’s experiences of these platforms. Furthermore, while this experiment measured Instagram usage in terms of time spent on the platform, observational studies are required to gain more insight into different patterns of usage—to investigate, for instance, whether active posting is associated with different effects than passive consumption of social media content. If you’re unsure about the conclusion, it can be helpful to ask a friend or fellow student to read your conclusion and summarize the main takeaways. - Do they understand from your conclusion what your research was about? - Are they able to summarize the implications of your findings? - Can they answer your research question based on your conclusion? You can also get an expert to proofread and feedback your paper with a paper editing service . Scribbr Citation Checker New The AI-powered Citation Checker helps you avoid common mistakes such as: - Missing commas and periods - Incorrect usage of “et al.” - Ampersands (&) in narrative citations - Missing reference entries The conclusion of a research paper has several key elements you should make sure to include: - A restatement of the research problem - A summary of your key arguments and/or findings - A short discussion of the implications of your research No, it’s not appropriate to present new arguments or evidence in the conclusion . While you might be tempted to save a striking argument for last, research papers follow a more formal structure than this. All your findings and arguments should be presented in the body of the text (more specifically in the results and discussion sections if you are following a scientific structure). The conclusion is meant to summarize and reflect on the evidence and arguments you have already presented, not introduce new ones. Cite this Scribbr article If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator. Caulfield, J. (2023, April 13). Writing a Research Paper Conclusion | Step-by-Step Guide. Scribbr. Retrieved November 13, 2023, from https://www.scribbr.com/research-paper/research-paper-conclusion/ Is this article helpful? Other students also liked, writing a research paper introduction | step-by-step guide, how to create a structured research paper outline | example, checklist: writing a great research paper, what is your plagiarism score. The second to last step in conducting a research study is to interpret the findings in the Discussion section, draw conclusions, and make recommendations. It is important that everything in this last section is based off of the results of the data analysis. In an empirical research study, the conclusions and recommendations must be directly related to the data that was collected and analyzed. Simply put, look at the key topics in the conclusion and recommendations. If that topic was not precisely assessed by the questionnaire, then you cannot draw a conclusion or make a recommendation about that topic. A paper can only make valid conclusions and recommendations on those variables that the study has empirical data to support. For example, almost every single paper written in Nigeria that I read recommends that the government should provide more teaching materials and increase the salaries of teachers. 
However, better instructional materials and adequate pay are rarely even distantly related to the topic of the paper! Draw conclusions and make recommendations only directly related to the purpose and results of the study. Everybody in Nigeria knows that schools need more materials and teachers need higher salaries Draw conclusions and make recommendations that are meaningful, unique, and relate to the results of the study. Most papers require five different sections for the Discussion, although the order may vary depending on the requirements of the paper. Each section is described separately below. This section might be called Discussion or it might be called Summary of Findings. The purpose of this section is to highlight the major statistical findings from the results section and interpret them. First, restate the overall purpose of the study. Then explain the main finding as related to the overall purpose of the study. Next, summarize other interesting findings from the results section. Explain how the statistical findings relate to that purpose of the study. One way to do this is to take every research question and hypothesis in turn and explain in plain terms what the statistical results mean. Also describe how the results are related to education in general. All explanations must be supported by the results of the data analysis. Generally, the Discussion section does not need to include any numbers. No statistics need to be repeated from the results, nor does the discussion need to refer to table numbers. Instead, simply explain the results in language that is easy for a non-researcher to understand. Also try to integrate the findings into the results of other research studies. An example paragraph from a Discussion section is given below: Next, give recommendations based on the results of the study. What practical steps can educators take to implement the key findings of the research study? Remember, these recommendations must be supported by the statistical findings from the data analysis. If the statistical results found that a new teaching program improves mathematical exam scores, then the only valid recommendation that can be made is that the new teaching program should be implemented in order to improve exam scores. However, if the data analysis found that the new teaching program does not improve mathematical exam scores, then the researcher cannot conclude that the new teaching program should be implemented, because the program was found to be ineffective in improving exam scores. Educators can only change their own behavior; they cannot change the government. Therefore, the most beneficial recommendations will be ones that educators themselves can implement. Below is a sample recommendation. Notice how the first sentence provides the empirical support for the recommendation. After the recommendations have been written, reread each recommendation. Consider which statistical result from the results section supports that recommendation. If there is no statistical result to support the recommendation, then it must be canceled. All studies have limitations in terms of the sample, measurement or manipulation of key variables, and procedure for data collection. This section should report the limitations that resulted from the research methods. How could the research be conducted with a different research design? How may the participants and sampling techniques not be representative of the target population? How might the target population be limited? 
How were the instruments inadequate? Were there any problems with the treatment? What problems resulted from the study's procedures? What other unexpected problems arose in the data collection? I frequently read that the study was limited by time, money, or other resources. However, every single research study ever conducted was limited by money, resources, and time. These factors are external to the study and should not be mentioned. A sample Limitations section is given below. Every research study provides one or two answers about education, but also opens the door for five to ten additional questions. Based on the Discussion/Summary of Findings and Limitations of the study, what additional research should be conducted? What questions arose because of the major finding of your study? How can other research studies improve on the limitations that were described in the Limitations section? A sample Suggestions for Further Research section is below. The final section of the paper is the Conclusion section. Briefly summarize the overall conclusion of the data analysis based on the purpose of the study. Also explain the importance of the major finding to educational practice. An example conclusion is given below. Copyright 2013, Katrina A. Korb, All Rights Reserved
Each computer directly connected to the Internet has at least one specific IP address. However, users do not want to work with numerical addresses such as 188.8.131.52 but with a domain name or more specifically addresses (called FQDN addresses) such as www.commentcamarche.net. It is possible to associate names in normal language with numerical addresses thanks to a system called DNS (Domain Name System). At the beginning of TCP/IP, since the networks were not very extensive, or in other words the number of computers connected to the same network was low, network administrators created files called manual conversion tables. These manual conversion tables were sequential files, generally called hosts or hosts.txt, associating on each line the IP address of the machine and the related literal name called the host name. However, the previous system of conversion tables required manual updating of the tables for all computers in the event of an addition or modification of a machine name. So with the explosion in the size of networks and their interconnection, it was necessary to implement a management system for names which was hierarchical and easier to administrate. The system called Domain Name System (DNS) was developed in November 1983 by Paul Mockapetris (RFC 882 and RFC 883) then revised in 1987 in RFCs 1034 and 1035. DNS has been subject to many RFCs. This system offers: The structure of the DNS system relies on a tree structure where the higher level domains (called TLD, for Top Level Domains) are defined, attached to a root node represented by a dot. Each node of the tree is called a domain name. Each node has a label with a maximum length of 63 characters. All domain names therefore make up an inverse tree where each node is separated from the following node by a dot ("."). The end of a branch is called the host, and corresponds to a machine or entity on the network. The host name given to it must be unique in the respective domain, or if the need arises in the sub-domain. For example a domain's web server generally bears the name www. The word "domain" formally corresponds to the suffix of a domain name, i.e. the tree structure's collection of node labels, with the exception of the host. The absolute name relating to all the node labels of a tree structure, separated by dots, and finished by a final dot is called the FQDN address (Fully Qualified Domain Name). The maximum depth of the tree structure is 127 levels and the maximum length of a FQDN name is 255 characters. The FQDN address makes it possible to uniquely locate a machine on the network of networks. So, www.commentcamarche.net. is an FQDN address. The machines called domain name servers make it possible to establish the link between domain names and IP addresses of machines on a network. Every domain has a domain name server, called a primary domain name server, as well as a secondary domain name server, able to take over from the primary domain name server in the event of unavailability. Every domain name server is declared in the domain name server of the immediately higher level, meaning authority can implicitly be delegated over the domains. The name system is a distributed architecture, where each entity is responsible for the management of its domain name. Therefore, there is no organization with responsibility for the management of all domain names. The servers relating to the top level domains (TLD) are called "root name servers". 
There are 13 of them, distributed around the planet with the names "a.root-servers.net" to "m.root-servers.net". A domain name server defines a zone, i.e. a collection of domains over which the server has authority. The domain name system is transparent for the user, nevertheless, the following points must be remembered: The most commonly used server is called BIND (Berkeley Internet Name Domain). This is free software available under UNIX systems, initially developed by the University of Berkeley in California and now maintained by ISC (Internet Systems Consortium). The consistent mechanism for finding the IP address relating to a host name is called "domain name resolution". The application making it possible to conduct this operation (generally integrated in the operating system is called "resolving". When an application wants to connect to a known host by its domain name (e.g. "www.commentcamarche.net"), it interrogates a domain name server defined in its network configuration. In fact, each machine connected to the network has the IP addresses of its service provider's two domain name servers in its configuration. A request is then sent to the first domain name server (called the "primary domain name server"). If this domain name server has the record in its cache, it sends it to the application, if not, it interrogates a root server (in our case a server relating to the TLD ".net"). The root name server sends a list of domain name servers with authority over the domain (in this case, the IP addresses of the primary and secondary domain name servers for commentcamarche.net). The primary domain name server with authority over the domain will then be interrogated and will return the corresponding record to the domain host (in our case www). A DNS is a distributed database containing records known as RR (Resource Records), relating to domain names. They alone are concerned with reading the information after the people responsible for the administration of a domain, the operation of domain name servers being totally transparent to users. Because of the cache system enabling the DNS system to be distributed, the records for each domain have a lifetime known as TTL (Time to Live) enabling the intermediary servers to know the information's expiry date and therefore know if it is necessary to verify it or not. Generally, a DNS record contains the following information: |Domain name (FQDN)||TTL||Type||Class||RData| www.commentcamarche.net. IN MX 10 mail.commentcamarche.net. There are two categories of TLD (Top Level Domains): |AE||United Arab Emirates| |AG||Antigua and Barbuda| |CD||Democratic Republic of Congo| |CF||Central African Republic| |EDU||Organisation with educational links| |FK||Falkland Islands (Malvinas)| |FX||France (European Territory)| |HM||Heard and McDonald Islands| |IM||Isle of Man| |IO||British Indian Ocean Territory| |KN||Saint Kitts and Nevis| |MP||Northern Mariana Islands| |NET||Organisation with Internet links| |ORG||Non referenced organization| |PG||Papua New Guinea| |PM||Saint-Pierre and Miquelon| |PR||Puerto Rico (USA)| |SJ||Svalbard and Jan Mayen Islands| |ST||Sao Tomé and Principe| |TC||Turks and Caicos Islands| |TF||French Austral Territories| |TT||Trinidad and Tobago| |UM||US Minor Outlying Islands| |VC||Saint-Vincent and the Grenadines| |VG||British Virgin Islands| |VI||American Virgin Islands| |WF||Wallis and Futuna|
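To make the resolution process described above concrete, here is a minimal Python sketch (not part of the original article). It checks the structural limits of an FQDN stated earlier (labels of at most 63 characters, at most 255 characters overall, with an optional trailing dot for the root) and asks the operating system's resolver, which in turn queries the configured domain name servers, for the address of a host name. The function names are mine; only the standard library socket module is used.

```python
import socket

def is_valid_fqdn(name: str) -> bool:
    """Check the structural limits described above: dot-separated labels
    of at most 63 characters each and at most 255 characters overall."""
    if name.endswith("."):          # absolute (FQDN) form ends with the root dot
        name = name[:-1]
    if len(name) > 255:
        return False
    labels = name.split(".")
    return all(0 < len(label) <= 63 for label in labels)

def resolve(hostname: str) -> str:
    """Ask the system resolver (which queries the configured DNS servers)
    for an IPv4 address matching the host name."""
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    host = "www.commentcamarche.net"
    print(is_valid_fqdn(host + "."))   # True
    print(resolve(host))               # prints whichever address the resolver returns
```

In practice the lookup is cached along the way, as the article explains, so repeated calls for the same name rarely reach the authoritative servers.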
Presentation on theme: "Essential Standard 1.00: Understand economic activities of individuals and families. Objective 1.02 Understand characteristics of financial goals, steps."— Presentation transcript: Essential Standard 1.00: Understand economic activities of individuals and families. Objective 1.02 Understand characteristics of financial goals, steps in decision making and factors that affect financial decisions. Essential Questions What are the steps in goal setting and decision making? How do individuals needs, wants, values, standards, and priorities affect financial goals and decisions? FINANCIAL GOALS Financial goals are accomplished through and give direction to financial planning. Learning to set financial goals is an important part of learning to live independently. Financial Goals should be SMART: Specific Measurable Attainable Realistic Time-bound Decision Making Steps 1. Identify the problem & decision to be made 2. Identify resources and gather information 3. Identify the options (alternatives) *team members brainstorm 4. Identify the pros and cons of each option 5. Choose the best option 6. Put the decision into action..just do it! 7. Evaluate the outcomes of the decision Do the Positives (+) outweigh the Negatives (-)? 5 Six Steps in Economic Decision- Making Process + Advantages Disadvantages Use T-account to determine positives and negatives Good Decision Making Following logical steps when making decisions helps individuals make informed choices. When decisions are made from habit or on impulse, there is a greater likelihood of negative outcomes. Good decisions lead to the achievement of goals and a feeling of self-control and self-confidence. Good decisions are a key to successful independent living. Who makes those decisions? Be sure YOU are making the best decisions! FACTORS THAT AFFECT FINANCIAL DECISIONS Family factors Cultural factors Social factors Societal and demographic factors Economic factors Technology The media The marketplace Legal and moral factors Personal factors Personal Factors Breakdown Needs and wants Distinguishing wants from needs helps individuals and families set more realistic goals and make better decisions. Values Understanding and prioritizing values helps individuals and families set goals and make decisions that lead to greater personal satisfaction. Standards --- Measures of quality or excellence With regard to standards for success, individuals have different views of what it means to be successful. Priorities Each individual or family needs to set priorities by deciding what is more important at any point in time. Principles of Financial Planning from the Jump$tart Coalition Money doubles by the “Rule of 72” Your credit past is your credit future Start saving young Stay insured Budget your money Don’t borrow what you can’t repay Map your financial future Don’t expect something for nothing High returns equal high risks Know your take-home pay Compare interest rates Pay yourself first http://www.jumpstartcoalition.org/files2010/2010_J$_Calendar.pdf Factors Affecting Decisions Family structure Income level Lifestyle Size Age Stage of life cycle Health status Emergencies Cultural factors Cultural and ethnic groups impact Values Beliefs Lifestyle Family structures Clothing choices What are the advantages and disadvantages of cultural diversity within a family, a school, a workplace, a community? 
Social factors Education level Family structure Immigration Ethnicity Rural, urban, suburban community Peer pressure Community relationships and involvement Societal and Demographic Factors Demography is the statistical characteristics of a population Age Sex Race Birth, marriage, death rates Where people live Economic Factors Employment rate Kind and number of jobs available Inflation A period of rapid increase in the price of goods and services Recession An extended period of slow economic growth * Review the Business Cycle-prosperity, recession, depression, recovery Government Regulations & Spending Fiscal policies affect personal & business spending Cash for clunkers, stimulus $, tax rates Technology The use of mechanical or electronic devices to manipulate Information (Computers, Ipads, Fax machines, fiber optics, GPS systems, smart phones) Objects (i.e Robots, automated assembly lines, hybrid cars) On going change impacts Training needs Replacing obsolete technology Types of jobs available The Media Impacts the ways people and businesses communicate and operate locally, nationally, and globally. Communications that reach large audiences with the aid of publication devices that include Internet Television Voice, text, & data transmissions Publications The Marketplace Supply Goods and services available to the consumer Demand Consumer desire to purchase as compared to availability Market response How quickly the market adjusts to supply versus demand Legal and Moral Factors Laws that impact spending Taxation Investment and Retirement Accounts Insurance requirements Beliefs in what is right and wrong What is appropriate behavior of employers, employees, and individuals at home, work, and within the community. Charitable Giving and Community Service Personal Factors Needs Items to survive – food, clothing, & shelter Wants Not essential but desirable – cell phone, Music CDs, Values A person’s belief about what is important and desirable
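One of the Jump$tart principles listed earlier states that money doubles by the "Rule of 72". As a quick illustration (a minimal sketch, not part of the original slides), the rule estimates the number of years needed for money to double by dividing 72 by the annual growth rate in percent; the function name below is mine.

```python
def years_to_double(annual_rate_percent: float) -> float:
    """Rule-of-72 estimate: years for money to double at a given annual rate."""
    return 72 / annual_rate_percent

# At 6% annual growth, money doubles in roughly 72 / 6 = 12 years;
# at 9%, in roughly 8 years.
print(years_to_double(6))  # 12.0
print(years_to_double(9))  # 8.0
```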
www.evamaths.blogspot.in Further Volume and Surface Area Objectives * To find the volume and surface area of spheres, cones, pyramids and cylinders. * To solve problems involving volume and surface area of spheres, cones, pyramids and cylinders. Section 1: Volume Recap from grade B and C work Volume of cuboid = length × width × height height length width Volume of prism = cross-sectional area × length cross-sectional area length Volume of cylinder = r 2 h , where r is the radius and h is the height of the cylinder. height, h radius, r Example: A cuboid measures 15 cm by 12 cm by 8 cm. Find the capacity of the cuboid. Give your answers in litres. Solution: Volume = 15 × 12 × 8 = 1440 cm3. As 1 litre = 1000 cm3, the capacity of the cuboid = 1.44 litres. www.evamaths.blogspot.in Example 2: A cylinder has a volume of 965 cm3. If the height of the cylinder is 16 cm, find the radius. Give your answer to 2 significant figures. Solution: Substitute the information from the question into the formula for the volume of a cylinder: Volume of cylinder = r 2 h 965 = r 2 16 965 = 16 r 2 965 = 50.26548 r 2 19.198 = r 2 4.38156 = r So the radius of the cylinder is 4.4 cm (to 2 SF) Past examination question A can of drink has the shape of a cylinder. The can has a radius of 4 cm and a height of 15 cm. Calculate the volume of the cylinder. Give your answer correct to three significant figures. Past examination question Diagram NOT accurately drawn 5 cm 4 cm 7 cm 3 cm Calculate the volume of the triangular prism. www.evamaths.blogspot.in www.evamaths.blogspot.in Volume of a sphere 4 3 Volume of a sphere = r 3 (This formula is given on the GCSE formula sheet). radius, r A hemisphere is half a sphere. Example The radius of a sphere is 6.7 cm. Find the volume. Solution: Substitute r = 6.7 cm into the formula 4 Volume = r 3 3 6.7 cm 4 V = 6.7 3 3 V = 1259.833 (remember to use the cube button on your calculator) V = 1260 cm3 (to 3 SF) Example 2: Find the volume of the hemisphere shown in the diagram. Solution: The diameter of the hemisphere is 18.4 cm. Therefore the radius is 9.2 cm. 1 diameter = 18.4 cm Volume of the hemisphere = volume of sphere 2 1 4 = r 3 2 3 1 4 = 9.2 3 2 3 1 = 3261.76 2 = 1630 cm3 (to 3 SF) www.evamaths.blogspot.in Example 3: A sphere has a volume of 86.5 cm3. Find the radius of the sphere. Solution: 4 3 Substitute into the formula for the volume of a sphere: Volume = r 3 4 3 86.5 = r 3 So 86.5 = 4.18879r 3 i.e. 20.65035 = r 3 So r = 2.74 cm (to 3 SF) (cube rooting) The sphere has radius 2.74 cm. Examination style question The object shown is made up from a cylinder and a hemisphere. The cylinder has radius 5.0 cm and height 22 cm. Find the volume of the object. Solution: Volume of cylinder = r 2 h 22 cm = 5 2 22 = 1728 cm3 (to nearest whole number) The hemisphere must also have radius 5 cm. 1 5.0 cm Volume of the hemisphere = volume of sphere 2 1 4 = r 3 2 3 1 4 = 53 2 3 = 262 cm3 Therefore total volume of the object = 1728 + 262 = 1990 cm3. Problem style example A tank measures 15 cm by 10 cm by 10 cm The tank is half-full of water. 10 cm 10 cm 15 cm A solid metal sphere with radius 2 cm is placed into the tank. Assuming that the sphere sinks to the bottom of the tank, calculate the amount by which the water level in the tank rises. www.evamaths.blogspot.in Solution As the sphere will be completely submerged, it will displace its volume of water. 4 3 4 Volume of sphere = r = 23 = 33.51 cm3. 3 3 Therefore the water displaced is 33.51 cm3. 
The water displaced has the form of a cuboid with measurements 15 cm by 10 cm by h cm, where h is the height by which the water level rises. So 15 × 10 × h = 33.51 i.e. h = 0.22 cm The water rises by 0.22 cm. Examination question A solid plastic toy is made in the shape of a cylinder which is joined to a hemisphere at both ends. 5 cm The diameter of the toy at the joins is 5 cm. 10 cm The length of the cylindrical part of the toy is 10 cm. Calculate the volume of plastic needed to make the toy. Give your answer correct to three significant figures. www.evamaths.blogspot.in Examination question (Problem style) A water tank is 50 cm long, 34 cm wide and 24 cm high. It contains water to a depth of 18 cm. 18 cm 24 cm 34 cm 50 cm Four identical spheres are placed in the tank and are fully submerged. The water level rises by 4.5cm. Calculate the radius of the spheres. Volume of a pyramid Pyramids come in a range of shapes. They can have bases which are any shape e.g. triangular, square, rectangular, circular etc. The volume of any pyramid can be found using the formula: 1 Volume of pyramid = base area height 3 (This formula is NOT given to you in the exam – you will need to learn it!) www.evamaths.blogspot.in Example: (non-calculator paper) The pyramid shown has a square base. The square has sides of length 12 cm. The height of the pyramid is 10 cm. Find the volume. 10 cm Solution: The area of the square base is 12 × 12 = 144 cm2 So, the volume of the pyramid is: 1 Volume = 144 10 12 cm 3 = 48 × 10 = 480 cm3. Example 2: The diagram shows a triangular-based pyramid. The base of the pyramid is a right-angled triangle. The volume of the pyramid is 325 cm3. Find the height of the pyramid. Solution: The base of the pyramid is as shown: 8 cm 9 cm 8 cm 9 cm 1 The area of the base is 9 8 36 cm2. 2 Substitute information into the formula for the volume of a pyramid. 1 Volume of pyramid = base area height 3 1 325 = 36 height 3 325 = 12 × height. So, height = 325 ÷ 12 = 27.08 cm (to 4 SF). Volume of a cone A cone is a pyramid with a circular base. The formula for the volume of a cone is: height, h 1 2 Volume of cone = r h 3 where r is the radius of the cone and h is the height of the cone. radius, r www.evamaths.blogspot.in Example 1 (non-calculator paper) The base of a cone has a radius of 4 cm. The height of the cone is 6 cm. Find the volume of the cone. 6 cm Leave your answer in terms of . Solution: Substitute the information into the formula for the volume of a cone: 1 4 cm Volume of cone = r 2 h 3 1 = 42 6 3 = 2 16 (start by finding 1/3 of 6) volume = 32π cm3. Example 2: A cone has a volume of 1650 cm3. The cone has a height of 28 cm. Find the radius of the cone. Give your answer correct to 2 significant figures. 28 cm Solution: Substitute information into the formula: 1 Volume of cone = r 2 h 3 radius, r 1 1650 = r 2 28 3 1 1650 = 29.32153r 2 (evaluating 28 ) 3 r 2 56.2726 i.e. r = 7.5 cm (to 2 SF) The radius of the cone is therefore 7.5 cm. Problem solving: Worked examination question The radius of the base of a cone is x cm and its height is h cm. The radius of a sphere is 2x cm. Diagrams NOT accurately drawn h cm x cm 2x cm The volume of the cone and the volume of the sphere are equal. Express h in terms of x. Give your answer in its simplest form. 
www.evamaths.blogspot.in www.evamaths.blogspot.in Solution: 1 2 1 The volume of the cone is r h = πx 2 h 3 3 4 4 The volume of the sphere is r 3 (2 x) 3 (note: the brackets around 2x are important) 3 3 4 = 8x 3 (cubing both 2 and x) 3 32 3 = x 3 As the sphere and the cone have the same volume, we can form an equation: 1 2 32 x h x 3 3 3 x h 32x 3 2 (multiplying both sides by 3) x h 32x 2 3 (dividing both sides by π) h 32x (diving both sides by x 2 ) Past examination question A child’s toy is made out of plastic. The toy is solid. The top of the toy is a cone of height 10 cm and base radius 4 cm. The bottom of the toy is a hemisphere of radius 4 cm. 10 cm Calculate the volume of plastic needed to make the toy. 4 cm www.evamaths.blogspot.in Volume of a frustrum A frustrum is a cone with a smaller cone sliced off the top. Examination style question The diagram shows a large cone of height 24 cm and base radius 4 m. 1.5 cm 24 cm 4 cm A small cone of radius 1.5 cm is cut off the top leaving a frustrum. Calculate the volume of the frustrum. Solution: 1 The volume of the large cone is: 4 2 24 402.12 cm3 3 To find the volume of the small cone, we need its height. 1.5 3 The radius of the small cone is of the radius of the large cone. 4 8 3 Therefore the height of the small cone is of the height of the large cone, i.e. the small cone has 8 3 height 24 9 cm 8 1 So the volume of the small cone is 1.5 2 9 21.21 cm3 3 The volume of the frustrum is 402.12 – 21.21 = 381 cm3 (to 3F) Section 2: Surface Area Recap: Grade B and C You should be familiar with finding the surface area of prisms (such as cuboids, triangular prisms, etc). The surface area of a prism is found by adding together the area of each face. www.evamaths.blogspot.in Examination style question 3 cm Find the total surface area of the solid prism shown in the diagram. The cross-section is an isosceles trapezium. 5 cm 4 cm 8 cm 9 cm Solution: The prism has six faces – two are trapeziums and 4 are rectangles. The area of the front and back faces are: The formula for the area of a trapezium 1 is: (3 9) 4 6 4 24 cm2 1 2 (sum of parallel sides) height 2 The two sides faces each have an area equal to 5 × 8 = 40 cm2 The area of the top face is 3 × 8 = 24 cm2 The area of the base is 9 × 8 = 72 cm2 So the total surface area is 24 + 24 + 40 + 40 + 24 + 72 = 224 cm2. Surface area of cylinders, spheres, cones and pyramids Cylinders A solid cylinder has 3 faces – a circular face at either end and a curved face around the middle: Surface area of a cylinder = 2 rh 2 r 2 height, h curved area of top surface area and bottom (This formula is not on the formula sheet). radius, r Sphere A sphere has a single curved face. Surface area of a sphere = 4 r 2 (This formula is on the formula sheet) radius, r www.evamaths.blogspot.in Cone A solid cone has two surfaces – the curved surface and the circular base. The formula for the curved surface area is: curved surface area = rl height, h slant length, l where l is the slant length. The values of l, r and h are related by Pythagoras’ theorem: h2 r 2 l 2 . radius, r Pyramid There is no general formula for the total surface area of a pyramid. Just take each face in turn and use the relevant formula for finding the area of that face’s shape. Worked example 1: Find the total surface area of the solid hemisphere shown. 5.5 cm Solution: The hemisphere has a radius of 5.5 cm. It has 2 surfaces – a circular base and a curved surface. 
The area of the circular base is r 2 5.52 95.033cm 2 1 1 The area of the curved surface is 4 r 2 4 5.52 190.066 cm 2 2 formula for surface 2 area of a whole sphere 2 So, total surface area = 285 cm (to 3 SF) Worked example 2 The diagram shows an object made from two cones, one on top of the other. The top cone has a height of 8 cm and the bottom cone has a height of 10 cm. Both cones have a radius of 5 cm. Find the total surface area of the object. 8 cm Solution: The formula for the curved surface area of a cone is: rl . We can find the slant length, l, for each cone using Pythagoras’ theorem – we know the radius and the height of each cone. 10 cm Top cone: l 2 52 82 25 64 89 l 89 9.434cm Therefore, 5 cm Curved surface area = 5 9.434 148.2cm www.evamaths.blogspot.in Bottom cone: l 2 52 102 25 100 125 l 125 11.180cm Therefore, Curved surface area = 5 11.180 175.6cm So total surface area is 324cm2 (to 3SF) Worked example 3: (non-calculator) A cylinder is made from metal. It has a base but no lid. The height of the cylinder is 8 cm. 8 cm The radius of the cylinder is 3 cm. Find the amount of metal required to make the cylinder. Leave your answer in terms of . 3 cm Solution: The area of the base is r 2 32 9 The curved surface area is 2 rh 2 3 8 48 So the area of metal required = 9 48 57 cm 2 Examination style question 1: A solid object is formed by joining a hemisphere to a cylinder. Both the hemisphere and the cylinder have a diameter of 4.2 cm. The cylinder has a height of 5.6 cm. Calculate the total surface area of the whole object. 5.6 cm Give your answer to 3 SF. 4.2 cm Examination style question 2: A sphere has a volume of 356 cm3. Calculate the surface area of the sphere. Pages to are hidden for "Further Volume n Surface"Please download to view full document
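As an informal cross-check of the worked examples above, the short Python sketch below re-computes several of the answers using the same volume and surface-area formulas. The helper function names are mine and the printed values are rounded, so treat this as a sanity check under the stated assumptions rather than part of the original notes.

```python
import math

def cylinder_volume(r, h):         # V = pi r^2 h
    return math.pi * r**2 * h

def sphere_volume(r):              # V = (4/3) pi r^3
    return 4 / 3 * math.pi * r**3

def cone_volume(r, h):             # V = (1/3) pi r^2 h
    return math.pi * r**2 * h / 3

def pyramid_volume(base_area, h):  # V = (1/3) * base area * height
    return base_area * h / 3

def hemisphere_surface_area(r):    # circular base + half a sphere's curved surface
    return math.pi * r**2 + 2 * math.pi * r**2

# Cylinder of volume 965 cm^3 and height 16 cm: r = sqrt(V / (pi h))
print(round(math.sqrt(965 / (math.pi * 16)), 1))        # 4.4 cm
# Sphere of radius 6.7 cm
print(round(sphere_volume(6.7)))                        # 1260 cm^3 (to 3 SF)
# Square-based pyramid with 12 cm sides and height 10 cm
print(pyramid_volume(12 * 12, 10))                      # 480.0 cm^3
# Pyramid of volume 325 cm^3 on a 36 cm^2 base: h = 3V / (base area)
print(round(3 * 325 / 36, 2))                           # 27.08 cm
# Cone of volume 1650 cm^3 and height 28 cm: r = sqrt(3V / (pi h))
print(round(math.sqrt(3 * 1650 / (math.pi * 28)), 1))   # 7.5 cm
# Frustum: large cone (r = 4, h = 24) minus small cone (r = 1.5, h = 9)
print(round(cone_volume(4, 24) - cone_volume(1.5, 9)))  # 381 cm^3
# Solid hemisphere of radius 5.5 cm
print(round(hemisphere_surface_area(5.5)))              # 285 cm^2
```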
An azimuth (// (listen); from Arabic اَلسُّمُوت as-sumūt, “the directions”, the plural form of the Arabic noun السَّمْت as-samt, meaning "the direction") is an angular measurement in a spherical coordinate system. The vector from an observer (origin) to a point of interest is projected perpendicularly onto a reference plane; the angle between the projected vector and a reference vector on the reference plane is called the azimuth. When used as a celestial coordinate, the azimuth is the horizontal direction of a star or other astronomical object in the sky. The star is the point of interest, the reference plane is the local area (e.g. a circular area 5 km in radius at sea level) around an observer on Earth's surface, and the reference vector points to true north. The azimuth is the angle between the north vector and the star's vector on the horizontal plane. - 1 Navigation - 2 Cartographical azimuth - 3 Calculating azimuth - 4 Mapping - 5 Astronomy - 6 Other systems - 7 Other uses of the word - 8 Etymology of the word - 9 See also - 10 Notes - 11 References - 12 External links In land navigation, azimuth is usually denoted alpha, α, and defined as a horizontal angle measured clockwise from a north base line or meridian. Azimuth has also been more generally defined as a horizontal angle measured clockwise from any fixed reference plane or easily established base direction line. Today, the reference plane for an azimuth is typically true north, measured as a 0° azimuth, though other angular units (grad, mil) can be used. Moving clockwise on a 360 degree circle, east has azimuth 90°, south 180°, and west 270°. There are exceptions: some navigation systems use south as the reference vector. Any direction can be the reference vector, as long as it is clearly defined. Quite commonly, azimuths or compass bearings are stated in a system in which either north or south can be the zero, and the angle may be measured clockwise or anticlockwise from the zero. For example, a bearing might be described as "(from) south, (turn) thirty degrees (toward the) east" (the words in brackets are usually omitted), abbreviated "S30°E", which is the bearing 30 degrees in the eastward direction from south, i.e. the bearing 150 degrees clockwise from north. The reference direction, stated first, is always north or south, and the turning direction, stated last, is east or west. The directions are chosen so that the angle, stated between them, is positive, between zero and 90 degrees. If the bearing happens to be exactly in the direction of one of the cardinal points, a different notation, e.g. "due east", is used instead. True north-based azimuths The cartographical azimuth (in decimal degrees) can be calculated when the coordinates of 2 points are known in a flat plane (cartographical coordinates): Remark that the reference axes are swapped relative to the (counterclockwise) mathematical polar coordinate system and that the azimuth is clockwise relative to the north. This is the reason why the X and Y axis in the above formula are swapped. If the azimuth becomes negative, one can always add 360°. The formula in radians would be slightly easier: - Caveat: Most computer libraries (C/C++, Python, Java, ...) reverse the order of the atan2 parameters. When the coordinates (X1, Y1) of one point, the distance L, and the azimuth α to another point (X2, Y2) are known, one can calculate its coordinates: This is typically used in triangulation. 
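The Python sketch below illustrates the flat-plane calculation just described, including the atan2 parameter-order caveat and the inverse (triangulation) step. It also includes, looking ahead to the next section, a commonly used spherical-Earth approximation for the azimuth between two latitude/longitude points; that formula is the standard great-circle initial bearing, offered here as an assumption rather than as the article's own derivation, and the function names are mine.

```python
import math

def planar_azimuth(x1, y1, x2, y2):
    """Azimuth in decimal degrees from point 1 to point 2 in flat
    cartographical coordinates (X east, Y north). Note the swapped
    arguments: Python's math.atan2(y, x) expects y first, so passing
    (dX, dY) yields an angle measured clockwise from north."""
    az = math.degrees(math.atan2(x2 - x1, y2 - y1))
    return az + 360 if az < 0 else az

def point_from_azimuth(x1, y1, distance, azimuth_deg):
    """Inverse problem: coordinates of the point at a given distance
    and azimuth from (x1, y1), as used in triangulation."""
    a = math.radians(azimuth_deg)
    return x1 + distance * math.sin(a), y1 + distance * math.cos(a)

def spherical_azimuth(lat1_deg, lat2_deg, dlon_deg):
    """Initial great-circle bearing on a spherical Earth from a point at
    latitude lat1 (longitude taken as zero) to a point at latitude lat2,
    longitude dlon east. Spheroidal corrections are not applied here."""
    p1, p2, dl = map(math.radians, (lat1_deg, lat2_deg, dlon_deg))
    az = math.degrees(math.atan2(
        math.sin(dl) * math.cos(p2),
        math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl),
    ))
    return az + 360 if az < 0 else az

print(planar_azimuth(0, 0, 1, 1))          # 45.0 (north-east)
print(point_from_azimuth(0, 0, 10, 90))    # (10.0, ~0.0): due east of the origin
print(round(spherical_azimuth(0, 0, 90)))  # 90: due east along the equator
```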
We are standing at latitude , longitude zero; we want to find the azimuth from our viewpoint to Point 2 at latitude , longitude L (positive eastward). We can get a fair approximation by assuming the Earth is a sphere, in which case the azimuth α is given by A better approximation assumes the Earth is a slightly-squashed sphere (an oblate spheroid); azimuth then has at least two very slightly different meanings. Normal-section azimuth is the angle measured at our viewpoint by a theodolite whose axis is perpendicular to the surface of the spheroid; geodetic azimuth is the angle between north and the geodesic; that is, the shortest path on the surface of the spheroid from our viewpoint to Point 2. The difference is usually immeasurably small; if Point 2 is not more than 100 km away, the difference will not exceed 0.03 arc second. Normal-section azimuth is simpler to calculate; Bomford says Cunningham's formula is exact for any distance. If f is the flattening, and e the eccentricity, for the chosen spheroid (e.g., 1⁄298.257223563 for WGS84) then If φ1 = 0 then To calculate the azimuth of the sun or a star given its declination and hour angle at our location, we modify the formula for a spherical earth. Replace φ2 with declination and longitude difference with hour angle, and change the sign (since the hour angle is positive westward instead of east). There is a wide variety of azimuthal map projections. They all have the property that directions (the azimuths) from a central point are preserved. Some navigation systems use south as the reference plane. However, any direction can serve as the plane of reference, as long as it is clearly defined for everyone using that system. Used in celestial navigation, an azimuth is the direction of a celestial body from the observer. In astronomy, an azimuth is sometimes referred to as a bearing. In modern astronomy azimuth is nearly always measured from the north. (The article on coordinate systems, for example, uses a convention measuring from the south.) In former times, it was common to refer to azimuth from the south, as it was then zero at the same time that the hour angle of a star was zero. This assumes, however, that the star (upper) culminates in the south, which is only true if the star's declination is less than (i.e. further south than) the observer's latitude. If, instead of measuring from and along the horizon, the angles are measured from and along the celestial equator, the angles are called right ascension if referenced to the Vernal Equinox, or hour angle if referenced to the celestial meridian. In the horizontal coordinate system, used in celestial navigation and satellite dish installation, azimuth is one of the two coordinates. The other is altitude, sometimes called elevation above the horizon. See also: Sat finder. In mathematics, the azimuth angle of a point in cylindrical coordinates or spherical coordinates is the anticlockwise angle between the positive x-axis and the projection of the vector onto the xy-plane. The angle is the same as an angle in polar coordinates of the component of the vector in the xy-plane and is normally measured in radians rather than degrees. As well as measuring the angle differently, in mathematical applications theta, θ, is very often used to represent the azimuth rather than the representation of symbol phi φ. Other uses of the word For magnetic tape drives, azimuth refers to the angle between the tape head(s) and tape. 
In sound localization experiments and literature, the azimuth refers to the angle the sound source makes compared to the imaginary straight line that is drawn from within the head through the area between the eyes. Etymology of the word The word azimuth is in all European languages today. It originates from medieval Arabic al-sumūt, pronounced as-sumūt in Arabic, meaning "the directions" (plural of Arabic al-samt = "the direction"). The Arabic word entered late medieval Latin in an astronomy context and in particular in the use of the Arabic version of the astrolabe astronomy instrument. The word's first record in English is in the 1390s in Treatise on the Astrolabe by Geoffrey Chaucer. The first known record in any Western language is in Spanish in the 1270s in an astronomy book that was largely derived from Arabic sources, the Libros del saber de astronomía commissioned by King Alfonso X of Castile. - "Azimuth". Dictionary.com. - U.S. Army, Map Reading and Land Navigation, FM 21–26, Headquarters, Dept. of the Army, Washington, D.C. (7 May 1993), ch. 6, p. 2 - U.S. Army, Map Reading and Land Navigation, FM 21–26, Headquarters, Dept. of the Army, Washington, D.C. (28 March 1956), ch. 3, p. 63 - U.S. Army, ch. 6 p. 2 - U.S. Army, Advanced Map and Aerial Photograph Reading, Headquarters, War Department, Washington, D.C. (17 September 1941), pp. 24–25 - U.S. Army, Advanced Map and Aerial Photograph Reading, Headquarters, War Department, Washington, D.C. (23 December 1944), p. 15 - Rutstrum, Carl, The Wilderness Route Finder, University of Minnesota Press (2000), ISBN 0-8166-3661-3, p. 194 - "Azimuth" at New English Dictionary on Historical Principles; "azimut" at Centre National de Ressources Textuelles et Lexicales; "al-Samt" at Brill's Encyclopedia of Islam; "azimuth" at EnglishWordsOfArabicAncestry.wordpress.com Archived January 2, 2014, at the Wayback Machine. In Arabic the written al-sumūt is always pronounced as-sumūt (see pronunciation of "al-" in Arabic). - Rutstrum, Carl, The Wilderness Route Finder, University of Minnesota Press (2000), ISBN 0-8166-3661-3 - U.S. Army, Advanced Map and Aerial Photograph Reading, FM 21–26, Headquarters, War Department, Washington, D.C. (17 September 1941) - U.S. Army, Advanced Map and Aerial Photograph Reading, FM 21–26, Headquarters, War Department, Washington, D.C. (23 December 1944) - U.S. Army, Map Reading and Land Navigation, FM 21–26, Headquarters, Dept. of the Army, Washington, D.C. (7 May 1993) |Look up azimuth in Wiktionary, the free dictionary.|
Using Parents and Siblings during a Social Story Intervention for Two Children Diagnosed with PDD-NOS Very few experimental studies have examined the use of Social Stories to modify the social skills of children with autism spectrum disorders. The behaviors targeted for the present study include a problem social skill (i.e., excessive directions) and a prosocial skill (i.e., compliments). The study used both a multiple-baseline-design-across-behaviors and a multiple-baseline-design-across-participants with two children diagnosed with Pervasive Developmental Disorder-Not Otherwise Specified. The main dependent variables were frequencies of directions and compliments. Results demonstrated that Social Stories were effective at modifying these social skills, and child and parent evaluations of the intervention were positive. KeywordsAutism Pervasive developmental disorder Social stories Carol Gray first developed social stories because individuals with autism spectrum disorders (ASDs) often have difficulty understanding and responding during social situations (Gray and Garand 1993). As defined by Gray (2000), “…a Social Story is a short story—defined by specific characteristics - that describes a situation, concept, or social skill using a format that is meaningful for people with ASD” (p. 13–1). While detailed information is provided about how to write social stories (Gray 2004) few well-controlled studies have examined their effectiveness, with the first experimental study published within the last decade (Kuttler et al.1998). In a recent review, Nichols et al. (2005) found only ten experimental, peer-reviewed studies of social stories. Of these ten studies, only two actually focused on actual social skills (Barry and Burlew 2004; Thiemann and Goldstein 2001) while the other studies focused on daily living skills and decreasing disruptive behavior. Thiemann and Goldstein (2001) included five boys with autism ranging in age from 6 to 12years. Each participant was grouped with two typically-developing peers. Target skills were chosen from a group of four possible skills: increasing contingent responses, securing attention, initiating comments, and initiating requests. A multiple-baseline-design-across-behaviors was used for each participant. The baseline phase consisted of social interactions between the child with autism and the two peers. The intervention phase consisted of sessions during which time the child with autism read a social story and then interacted with peers. In general, results demonstrated that the four social skills improved after the introduction of social stories. At least two social skills for each participant increased. A study by Barry and Burlew (2004) investigated the effects of a social story on the choice-making behavior and appropriate play skills of two children with severe autism. “Holly,” a 7-year-old girl, had receptive language skills, but she did not initiate speech beyond yelling, “no.” “Aaron,” an 8-year-old boy, did not speak other than exhibiting echolalia, and he did not read. Aaron was able to respond to picture prompts. An ABCD multiple-baseline-design-across-participants was implemented. Target behaviors were making independent choices about where to play and exhibiting appropriate play behaviors at the play center (i.e., interacting with play materials or peers appropriately). 
The study included a baseline control phase (A), a social story intervention phase (B) that focused on choice-making and appropriate play with materials, and a social story intervention phase (C) that focused on appropriate play with peers. A fourth phase (D) consisted of reading the social story at the beginning of the school day, but the story was not read immediately before the play sessions. Results indicated that choice-making behavior increased steadily as demonstrated by reduced number of prompts necessary to have the child go to the play center. Aaron did not actually interact with any peers during the study, but he did engage in parallel play. As a result of Holly’s newly acquired play skills she was placed in a general education classroom where she immediately chose two girls in the class as friends. This study is unique because it focused on children with very limited language skills. Since the Nichols et al. (2005) review was published, two additional studies have been published that examine the use of social stories on the social skills for children with ASDs (Delano and Snell 2006; Sansosti and Powell-Smith 2006). Both of these single-case research studies were well-designed and demonstrated promising results for the intervention. However, these four published studies include just 13 children, thus far more research needs to be conducted involving social stories and social skills. Several other problems exist within the current literature. First, in research studies social stories are often combined with other interventions (e.g. schedules, timers, cues, corrective feedback, prompts) during the intervention phase, confounding the unique effects of the social story (Barry and Burlew 2004; Hagiwara and Myles 1999; Kuoch and Mirenda 2003; Kuttler et al.1998; Lorimer et al.2002; Thiemann and Goldstein 2001). Second, parents and siblings are rarely included in this research, thus overlooking a valuable and economical source of interventionists and peers. Third, the literature also lacks studies that use a placebo control to account for the extra adult attention received during a social story. Fourth, few studies have examined the maintenance of gained skills. Finally, the literature rarely discusses treatment acceptability of social stories as rated by parents or children. To some extent, the present study attempts to address all of these limitations. We hypothesized that the social story would be effective at decreasing a social skill excess (i.e., directions) for one participant. We also hypothesized the social story would increase a social skill deficit (i.e., compliments) for both participants. Materials and Methods Participants and Setting Two families were recruited for research using flyers that were posted at a local clinic. Both participants had previously been diagnosed with Pervasive Developmental Disorder-Not Otherwise Specified (PDD-NOS). Each of the participants met the criteria of a second grade reading level as indicated by their mothers. The study took place in participants’ homes and mothers implemented the study procedures. “Mark” was a Caucasian male who lived with his biological parents and two younger brothers. He was 9years-10months of age and in the fourth grade. He was diagnosed with PDD-NOS by a pediatric neurologist one month before the study took place. Mark was polite and could carry on a conversation, although his eye-contact was poor. Mark smiled often and he appeared to be a happy most of the time. 
His national grade percentile ranks on the Illinois Stanford Achievement Test were: Reading 43%; Math 39%; Language 54%; Science 81%; Social Science 91%; and Listening 62%. His mother reported that Mark had difficulty making friends at school, and he attended a weekly social skills group for children with autism spectrum disorders. Mark’s mother had attended a social stories presentation by Carol Gray, and she had previously written social stories for Mark. She indicated that these stories had helped Mark in situations such as school and family gatherings. Mark had not used social stories recently, however, and he did not read social stories other than those used in this study during the month the study was conducted. Mark’s younger brother (8years-10months of age) took part in data collection play sessions with Mark. Mark’s brother was diagnosed with high-functioning autism at a young age and he received intensive early intervention with applied behavior analysis. At the time of the study, Mark’s brother was functioning at a high level and his mother did not have any particular concerns about his behavior. The study focused on Mark, because his mother was more concerned about his social skills. “Logan” was a Caucasian male who lived with his biological parents and a younger brother. He was 12years-7months of age and in the seventh grade at the time of the study. Logan was diagnosed with PDD-NOS at age six by a pediatric neuropsychologist. He received special education services, but he was in regular education classrooms. Logan received resource support, and he had a personal aide in the classroom. He also could carry on a conversation, and his mother described him as very creative and imaginative. He tended to speak formally, often sounding like a “little professor.” He also tended to be a perfectionist, becoming frustrated when he did not perform as well as he thought he should. Logan’s mother reported that he had difficulty navigating the social environment at school, and he was often bullied. He attended a weekly social skills group for children with ASDs and was assessed by a school psychologist at 10years-10months of age. On the Wechsler Intelligence Scale for Children, 3rd Edition, Logan received a Verbal IQ of 105, a Performance IQ of 120, and a Full Scale IQ of 107. Logan’s verbal skills were within the average range and his nonverbal skills were within the superior range. His overall intellectual ability was within the average range. On the Wechsler Individual Achievement Test Logan received composite scores of 84 in Reading and 95 in Mathematics, which are in the low average and average ranges, respectively. Logan’s mother stated that he had not been exposed to social stories before. Logan’s younger brother (10years-6months of age) took part in the data collection play sessions with Logan. Measures and Data Collection The main dependent measures were frequencies of occurrence of target social skills. The goals for Mark were to decrease the excessive “directions” he gave to his brother and increase the “compliments” he gave for his brother’s ideas. The target behavior for Logan was also “compliments” and focused more on compliments within the context of being a good sport (see Appendix A for operational definitions). Data were collected over a period of 4weeks with the number of sessions per week varying between zero to three. Parents collected data by videotaping their childrens’ play sessions. Mark’s play sessions were 15min and Logan’s play sessions were 10min. 
The time differences were due to the parents’ preferences for length of play session. Three of Logan’s play sessions were longer than 10min so these sessions were divided into two sessions and coded as separate sessions. At the end of the 4-week data collection period, the children were given a three-item questionnaire to assess intervention acceptability (e.g., “How much did you like reading your social story?”). This form included smiley-face and frowning-face pictures to enhance the meaning of the 4-point Likert scale for the first two questions. The third question was an open-ended question. A general questionnaire was given to parents to obtain descriptive information. The questionnaire obtained qualitative information by inquiring about positive and negative consequences of the study and subjective impressions of improvement. In addition, parents were asked if they would continue using the social story created for this study and if they planned to create more social stories. Suggestions for improvement in the study were also gathered. Since “Mark” had two target behaviors, the study employed a multiple-baseline-across-behaviors design for him, and the “directions” behavior included a maintenance phase. For the behavior of “compliments,” the study employed a multiple-baseline-design-across-participants. After parents expressed interest in the study, they met with the principal investigator to further discuss participation in the study. During an initial meeting the study was explained, informed consent was signed, and possible target behaviors and play scenarios were discussed. Parental input was used to decide that Mark and his brother would play with cars, and Logan and his brother would play a popular children’s card game (“Yu-Gi-Oh!”). A second meeting was held with each parent to explain the procedures of the baseline phase. Each parent was given a tip sheet for completing the baseline activity and an experimental procedures checklist. The tip sheet encouraged parents to choose a certain time of the day that they believed they would be likely to observe problem behaviors. Parents were asked to find a quiet room and have their children read one page aloud to them from a favorite book (a “non-social” story). During baseline, a “non-social” story was used as a placebo in order to control for receiving adult attention and reading a story before the activity. Next, they were to turn the camera on and have the siblings play together for ten to 15min. The parents were asked not to be involved in their children’s interactions unless the children were in physical danger. At the end of the play session, the parents were asked to provide a small reward to each sibling for participating in the play session. The reward was not contingent upon performing the target behaviors. During the social story phase, the procedure remained the same as the baseline phase except for the substitution of social stories (see Appendices B and C) in place of the “non-social” story. For each story, three comprehension questions were added to verify the children’s understanding of the main themes. Parents were instructed to ask the child the questions and if the child was not providing an appropriate answer, the parent was to provide an answer. 
Mark gave excessive directions to his sibling, so the behavior of “directions” was chosen as a target behavior to decrease and a social story was written and titled, “Giving Just a Few Directions Makes Playing Fun.” The story met Gray’s (2004) guidelines and had a total of 17 sentences over 8 pages. Standard 8 1/2 × 11 cardstock was folded in half to create a booklet. The story included nine photographs of Mark and his brother playing with their cars. The photographs were included to increase visual interest and to include a special interest of Mark’s (i.e., the cars). The social story for Mark was discussed with his mother to assure that she was in agreement with the story and to elicit suggestions. A second social story was also created for Mark, which aimed to increase the prosocial behavior of “compliments.” The “compliments” social story titled, “Listening to Others’ Ideas” and was a total of 15 sentences over 7 pages. The story included six new photographs of Mark and his brother playing with their cars. Mark’s mother was asked to provide suggestions for the story. The “directions” social story was no longer read during the next phase and the “compliments” story was introduced. For Logan’s intervention phase, an individualized social story was written with the aim of increasing his use of compliments. His social story was titled, “Being a Good Sport.” Thus, the operational definition for compliments for Logan had a greater emphasis on sportsmanlike behavior during games. His social story was ten sentences over seven pages. Cardstock was used to construct the booklet and clip-art and “Yu-Gi-Oh!” images from the Internet were used to illustrate the story. Pictures and references to “Yu-Gi-Oh!” were included in order to incorporate Logan’s special interest. After all the sessions were completed, the children and mothers answered the acceptability questions. The participants and siblings were each given a university-logo water bottle and the parents were presented with a gift card for a local restaurant to thank them for their participation. As another token of appreciation, parents were given the Jenison Autism Journal, which included the most up-to-date social story criteria and suggestions for writing social stories (Gray 2004). Reliability checks were completed randomly for the baseline and intervention phases for each of the three behaviors. The first author was a graduate student studying clinical-child and school psychology, and she rated the frequency of behavior for all of the sessions. An undergraduate research assistant, who was blind to condition, rated 33% of the sessions for each behavior. The research assistant was a junior and a psychology major. She had previous experience with children with autism as an applied behavior analysis therapist. The reliability was calculated by dividing the smaller frequency by the larger frequency and multiplying by 100%. Parents completed a self-report checklist of their compliance with the experimental procedures after each session. The experimental procedures checklist was a calendar that included the steps for completing the study procedures. During the baseline phase the checklist had four steps: (1) Read story, (2) Camera on, (3) Activity, and (4) Reward. During the intervention phase the checklist had five steps: (1) Read story, (2) Comprehension questions, (3) Camera on, (4) Activity, and (5) Reward. Procedural integrity was calculated by dividing the number of correct steps by the total number of steps and multiplying by 100%. 
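As a small illustration of the two percentage calculations described above (interobserver agreement as the smaller frequency divided by the larger, and procedural integrity as correct steps divided by total steps), here is a minimal Python sketch; the values passed in are illustrative only, not the study's raw data, and the function names are mine.

```python
def interobserver_agreement(freq_a, freq_b):
    """Smaller frequency divided by the larger, multiplied by 100%."""
    if freq_a == freq_b == 0:
        return 100.0
    return min(freq_a, freq_b) / max(freq_a, freq_b) * 100

def procedural_integrity(correct_steps, total_steps):
    """Correct steps divided by total steps, multiplied by 100%."""
    return correct_steps / total_steps * 100

# Illustrative values only:
print(round(interobserver_agreement(13, 20), 1))  # 65.0
print(round(procedural_integrity(33, 34), 1))     # 97.1
```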
Interobserver Agreement and Procedural Integrity

Mean interobserver agreement for directions was 65.1%. Interobserver agreement for compliments for both children was 100%. Procedural integrity for Mark was 100%, and procedural integrity for Logan was 97.1%. The 97.1% figure was due to one missed step in the baseline phase: the parent stated that she and her son forgot to read a page from his favorite book before the play session. In addition, the videotapes showed that the mothers never interfered in their children's playtime during the sessions.

During the baseline phase, Logan exhibited only one compliment, and the trend was stable and flat (M = 0.11). After the social story was introduced, Logan's compliments increased dramatically to seven. Although compliments were higher in the Social Story phase, they declined over the course of the phase (M = 2.25). Overall, this represented an approximately 19.5-fold increase in compliments from baseline to intervention.

Mark and Logan both indicated that they "somewhat liked" reading the social story. In addition, they "somewhat liked" learning something new. Mark stated that his favorite part about the social story was looking at the pictures. He also liked getting to play with his cars after he read the social story. Logan stated that his favorite part about the social story was "the part on how to lose like a good sport."

Both mothers reported that their children willingly read the social stories. Mark's mother stated that, as a positive consequence of the study, she learned more about social stories. Logan's mother expressed that a positive consequence of the study was being able to have her son focus on a particular skill before an activity. She also liked being able to use an existing situation to work on social skills. Both parents were unsure whether their children's social skills had improved and felt that more time after the intervention would be needed to observe the effects. Both parents planned to continue using the social stories created for this study with their children. Mark's mother estimated that she would use the story three to four times a week, while Logan's mother believed she would use it approximately once a week before similar activities. Mark's mother stated that as issues arose, she would write more social stories. Logan's mother also expressed a desire to use the social story with her younger son, who did not have an autism spectrum disorder. Logan's mother also wanted to write a social story for Logan to address saying "no" to peers when pressured to engage in inappropriate activities.

Children with autism spectrum disorders face many social challenges, and numerous interventions have been developed to improve their social skills. Social stories are currently a popular intervention; however, with limited research it is important to investigate this intervention further. This study was designed to examine the effects of social stories on the behavior of two children with PDD-NOS. The hypothesis that directions would decrease with use of a social story was supported for Mark. After the social story was introduced, Mark's directions decreased and stabilized. The results are consistent with other findings that social stories can be effective in decreasing problem behaviors (Brownell 2002; Kuoch and Mirenda 2003; Kuttler et al. 1998; Lorimer et al. 2002; Scattone et al. 2002). In addition, the decreased level of directions was maintained, and continued to decrease, after the intervention was withdrawn.
This most likely indicates that the social story was no longer needed to decrease the behavior. The other hypothesis, that compliments would increase when using a social story, was supported for both children. Mark's compliments increased gradually after the social story was introduced, but there was no immediate change. Logan's compliments increased immediately; however, this increase was not maintained, and his compliments declined over the course of the Social Story phase. Thus, the results are somewhat mixed. Overall, the results indicate that prosocial skills were learned, but it appears that the short intervention did not allow enough time for the skills to stabilize. Adams et al. (2004) and Thiemann and Goldstein (2001) also demonstrated increases in prosocial skills but had longer intervention periods, which may explain why their gains were more pronounced.

Results of the acceptability questions showed that the children somewhat liked reading the social story, although the intervention did not receive the highest rating possible from the children. The parents reported anecdotally that the children willingly read the social stories. Incorporating colorful images and children's interests may have helped increase the children's willingness to read the stories. Both mothers stated that the social stories intervention was an acceptable method of intervention. In addition, they both indicated that the intervention was likely to be effective and to result in continued improvement. Overall, both children and parents had a positive reaction to social stories. These positive evaluations are important because they suggest that children enjoy reading social stories and that parents are likely to implement a social story intervention consistently. Our results corroborate the findings of the two other studies that addressed intervention acceptability (Adams et al. 2004; Scattone et al. 2002). Both mothers were unsure whether their children's social skills had improved; however, they indicated that they planned to continue using the social stories created for this study and to write more social stories. Logan's mother also stated that she would like to use the social story with her other son, who does not have an autism spectrum disorder. The parents' intent to use social stories in the future provides further support for the acceptability of the intervention.

The current study has some strengths in comparison to previous research. Specifically, this study included children diagnosed with PDD-NOS, who have rarely been included in the literature. This was the only study to involve siblings and one of the few to involve parents. In addition, the study controlled for adult attention by including the "non-social" story in the baseline condition; the vast majority of previous studies did not control for this variable. In the current study, children read the stories aloud, which encouraged them to be engaged in the story, whereas in most of the literature adults read the stories to the children. Intervention acceptability was assessed in the current study, while most studies do not address acceptability. One final strength was that no additional interventions were added during the intervention phase. This allowed the effects of the social story to be assessed independently of other variables.

The study also has certain limitations, which have implications for future research. One limitation was the length of the intervention: the stories were used for only three to four sessions.
Although effects were seen, there may not have been enough sessions to gauge the stability of the behaviors. In addition, the intensity of the intervention was low: the sessions were completed over 4 weeks, spaced at uneven intervals. Future research could focus on increasing the length or intensity of the intervention. The current study did not assess long-term effects after cessation of the intervention or generalization outside of the training environment; future research could examine both. Another major limitation was that the primary observer of this study was not blind to condition. Although a blind observer was used for reliability checks on 33% of the data, future studies would benefit from having a blind primary observer. In addition, interobserver reliability was low for directions. The current study did not compare the social stories intervention to another intervention; such a comparison would be an interesting direction for future research. Finally, a limitation of the general social story literature is that females and minorities are extremely underrepresented, and research with these populations is warranted. Research with other populations with disabilities or with typically developing children (Burke et al. 2004) may also prove fruitful.

Social stories are economical and straightforward, and can be created and implemented by a variety of individuals such as parents, teachers, and aides. Another benefit of social stories is the brevity of the intervention; it takes only a few minutes to read a story. Social stories also offer flexibility: they can be created for almost any topic or situation and can easily be modified as needed. With promising research results, social stories appear to be a viable intervention option.

This study helped fulfill the requirements for the first author's Master of Arts degree in Clinical Child and School Psychology. We would like to thank the families for their participation. We would also like to thank Alexandra Welch for her help with the study.

- Adams, L., Gouvousis, A., VanLue, M., & Waldron, C. (2004). Social story intervention: Improving communication skills in a child with an autism spectrum disorder. Focus on Autism and Other Developmental Disabilities, 19, 87–94.
- Barry, L. M., & Burlew, S. B. (2004). Using social stories to teach choice and play skills to children with autism. Focus on Autism and Other Developmental Disabilities, 19, 45–51.
- Gray, C. (2000). How to write a Social Story: The new social story handbook (illustrated edition). Arlington, TX: Future Horizons.
- Gray, C. A. (2004). Social stories 10.0: The new defining criteria & guidelines. Jenison Autism Journal, 15, 2–21.
- Gray, C. A., & Garand, J. D. (1993). Social stories: Improving responses of students with autism with accurate social information. Focus on Autistic Behavior, 8, 1–10.
- Hagiwara, T., & Myles, B. S. (1999). A multimedia social story intervention: Teaching skills to children with autism. Focus on Autism and Other Developmental Disabilities, 14, 82–95.
- Nichols, S. L., Hupp, S. D. A., Jewell, J., & Zeigler, C. S. (2005). Review of social story interventions for children diagnosed with autism spectrum disorders. Journal of Evidence-Based Practices for Schools, 6, 90–120.
A jury is a sworn body of people (the jurors) convened to render an impartial verdict (a finding of fact on a question) officially submitted to them by a court, or to set a penalty or judgment. Juries developed in England during the Middle Ages and are a hallmark of the Anglo-American common law legal system. They are still commonly used today in Great Britain, the United States, Canada, Australia, and other countries whose legal systems are descended from England's legal traditions.

Most trial juries are "petit juries" and usually consist of twelve people. A larger jury, known as a grand jury, was used to investigate potential crimes and render indictments against suspects, but all common law countries except the United States and Liberia have phased these out. The modern criminal court jury arrangement has evolved out of the medieval juries in England. Members were supposed to inform themselves of crimes and then of the details of the crimes. Their function was therefore closer to that of a grand jury than that of a jury in a trial. The word jury derives from Anglo-Norman juré ("sworn"). Juries are most common in common law adversarial-system jurisdictions. In the modern system, juries act as triers of fact, while judges act as triers of law (but see nullification). A trial without a jury, in which both questions of fact and questions of law are decided by a judge, is known as a bench trial.

The "petit jury" (or "trial jury", sometimes "petty jury") hears the evidence in a trial as presented by both the plaintiff (petitioner) and the defendant (respondent). After hearing the evidence and often jury instructions from the judge, the group retires for deliberation to consider a verdict. The majority required for a verdict varies: in some cases it must be unanimous, while in other jurisdictions it may be a majority or supermajority. A jury that is unable to come to a verdict is referred to as a hung jury. The size of the jury varies; in criminal cases involving serious felonies there are usually 12 jurors, while many civil trials require fewer than twelve jurors.

A grand jury, a type of jury now confined almost exclusively to federal courts and some state jurisdictions in the United States, determines whether there is enough evidence for a criminal trial to go forward. Grand juries carry out this duty by examining evidence presented to them by a prosecutor and issuing indictments, or by investigating alleged crimes and issuing presentments. A grand jury is traditionally larger than, and distinguishable from, the petit jury used during a trial, which usually has 12 jurors. It is not required that a suspect be notified of grand jury proceedings. Grand juries can also be used for filing charges in the form of a sealed indictment against unaware suspects who are arrested later by a surprise police visit. In addition to their primary role in screening criminal prosecutions and assisting in the investigation of crimes, grand juries in California, Florida, and some other U.S. states are sometimes used to perform an investigative and policy audit function similar to that filled by the Government Accountability Office in the United States federal government and legislative state auditors in many U.S. states.

A third kind of jury, known as a coroner's jury, can be convened in some common law jurisdictions in connection with an inquest by a coroner.
A coroner is a public official (often an elected local government official in the United States) who is charged with determining the circumstances leading to a death in ambiguous or suspicious cases. A coroner's jury is generally a body that a coroner can convene on an optional basis in order to increase public confidence in the coroner's finding where there might otherwise be a controversy. In practice, coroner's juries are most often convened when a governmental party such as a law enforcement officer is involved in the death and no charges are filed against the person causing the death, in order to avoid the appearance of impropriety by one governmental official in the criminal justice system toward another.

Serving on a jury is normally compulsory for individuals who are qualified for jury service. A jury is intended to be an impartial panel capable of reaching a verdict. Procedures and requirements may include a fluent understanding of the language and the opportunity to test jurors' neutrality or otherwise exclude jurors who are perceived as likely to be less than neutral or partial to one side. Juries are initially chosen randomly, usually from the eligible population of adult citizens residing in the court's jurisdictional area. Jury selection in the United States usually includes organized questioning of the prospective jurors (the jury pool) by the lawyers for the plaintiff and the defendant and by the judge (voir dire), as well as rejecting some jurors because of bias or inability to serve properly ("challenge for cause"), and the discretionary right of each side to reject a specified number of jurors without having to prove a proper cause for the rejection ("peremptory challenge"), before the jury is impaneled.

A head juror is called the "foreperson", "foreman" or "presiding juror". The foreperson may be chosen before the trial begins or at the beginning of the jury's deliberations, and may be selected by the judge or by vote of the jurors, depending on the jurisdiction. The foreperson's role may include asking questions (usually to the judge) on behalf of the jury, facilitating jury discussions, and announcing the verdict of the jury.

Since there is always the possibility of jurors not completing a trial for health or other reasons, often one or more alternate jurors are selected. Alternates are present for the entire trial but do not take part in deliberating the case and deciding the verdict unless one or more of the impaneled jurors are removed from the jury. In Connecticut, alternate jurors are dismissed before the panel of sworn jurors begins deliberation. Connecticut General Statutes 51–243(e) and 54-82h do not allow alternate jurors to be segregated from the regular sworn jurors. In civil cases in Connecticut, C.G.S. 51–243(e) provides that alternate jurors "shall be dismissed." This differs from the power given to the court in criminal trials under C.G.S. 54-82h, which permits the court not to dismiss the alternate jurors and to have the regular jury panel begin deliberations.

When an insufficient number of summoned jurors appear in court to handle a matter, the law in many jurisdictions empowers the jury commissioner or other official convening the jury to involuntarily impress bystanders in the vicinity of the place where the jury is to be convened to serve on the jury.

The modern jury evolved out of the ancient custom of many Germanic tribes whereby a group of men of good character was used to investigate crimes and judge the accused.
The same custom evolved into the vehmic court system in medieval Germany. In Anglo-Saxon England, juries investigated crimes. After the Norman Conquest, some parts of the country preserved juries as the means of investigating crimes. The use of ordinary members of the community to consider crimes was unusual in ancient cultures, but was nonetheless also found in ancient Greece.

The modern jury trial evolved out of this custom in the mid-12th century during the reign of Henry II. Juries, usually of 6 or 12 men, were an "ancient institution" even then in some parts of England. Members consisted of representatives of the basic units of local government: hundreds (an administrative sub-division of the shire, embracing several vills) and villages. Called juries of presentment, these men testified under oath to crimes committed in their neighbourhood. The Assize of Clarendon in 1166 caused these juries to be adopted systematically throughout the country. The jury in this period was "self-informing," meaning it heard very little evidence or testimony in court. Instead, jurors were recruited from the locality of the dispute and were expected to know the facts before coming to court. The sources of juror knowledge could include first-hand knowledge, investigation, and less reliable sources such as rumour and hearsay.

Between 1166 and 1179, new procedures, including a division of functions among the sheriff, the jury of local men, and the royal justices, ushered in the era of the English Common Law. Sheriffs prepared cases for trial and found jurors with relevant knowledge and testimony. Jurors "found" a verdict by witnessing as to fact, even assessing and applying information from their own and community memory; little was written at this time, and what was, such as deeds and writs, was subject to fraud. Royal justices supervised trials, answered questions as to law, and announced the court's decision, which was then subject to appeal. Sheriffs executed the decision of the court. These procedures enabled Henry II to delegate authority without endowing his subordinates with too much power. ("Henry II" 293)

In 1215 the Catholic Church removed its sanction from all forms of the ordeal, the procedures by which suspects up to that time were "tested" as to guilt (for example, in the ordeal of hot metal, molten metal was sometimes poured into a suspected thief's hand; if the wound healed rapidly and well, it was believed God found the suspect innocent, and if not, the suspect was found guilty). With trial by ordeal banned, establishing guilt would have been problematic had England not had forty years of judicial experience. Justices were by then accustomed to asking jurors of presentment about points of fact in assessing indictments; it was a short step to ask jurors if they concluded the accused was guilty as charged. ("Henry II" 358)

An early reference to a jury-type group in England is in a decree issued by Aethelred at Wantage (997), which provided that in every Hundred "the twelve leading thegns together with the reeve shall go out and swear on the relics which are given into their hands, that they will not accuse any innocent man nor shield a guilty one." The resulting Wantage Code formally recognized legal customs that were part of the Danelaw. The testimonial concept can also be traced to Normandy before 1066, when a jury of nobles was established to decide land disputes. In this manner, the Duke, being the largest landowner, could not act as a judge in his own case.
One of the earliest antecedents of modern jury systems is the jury in ancient Greece, including the city-state of Athens, where records of jury courts date back to 500 BCE. These juries voted by secret ballot and were eventually granted the power to annul unconstitutional laws, thus introducing the practice of judicial review. In modern justice systems, the law is considered "self-contained" and "distinct from other coercive forces, and perceived as separate from the political life of the community," but "all these barriers are absent in the context of classical Athens. In practice and in conception the law and its administration are in some important respects indistinguishable from the life of the community in general."

In 1730, the British Parliament passed the Bill for Better Regulation of Juries. The Act stipulated that the list of all those liable for jury service was to be posted in each parish and that jury panels would be selected by lot, also known as sortition, from these lists. Its aim was to prevent middle-class citizens from evading their responsibilities by financially influencing the under-sheriff, the official entrusted with impaneling juries, and thereby putting his neutrality into question. Prior to the Act, the main means of ensuring impartiality was by allowing legal challenges to the sheriff's choices. The new provisions did not specifically aim at establishing impartiality, but had the effect of reinforcing the authority of the jury by guaranteeing impartiality at the point of selection. The example of early 18th-century English legal reform shows how civic lotteries can be used to organize the duties and responsibilities of the citizen body in relation to the state. It established the impartiality and neutrality of juries as well as reiterating the dual nature of the citizen-state relationship. A leader or organizer within the jury is also important for reaching a proper, agreed verdict.

In 1825, the rules concerning juror selection were consolidated. Property qualifications and various other rules were standardised, although an exemption was left open for towns which "possessed" their own courts. This reflected a more general understanding that local officials retained a large amount of discretion regarding which people they actually summoned. King has found evidence of butchers being excluded from service in Essex in the late eighteenth century, while Crosby has found evidence of "peripatetic ice cream vendors" not being summoned in the summer time as late as 1923. After 1919, women were no longer excluded from jury service by virtue of their sex, although they still had to satisfy the ordinary property qualifications. The exemption which had been created by the 1825 Act for towns which "possessed" their own courts meant ten towns were free to ignore the property qualifications. In these towns this amplified the general understanding that local officials had a free hand in summoning from among those people who were qualified to be jurors. In 1920, three of these ten towns (Leicester, Lincoln, and Nottingham) consistently empanelled assize juries of six men and six women, while at the Bristol, Exeter, and Norwich assizes no women were empanelled at all. This quickly led to a tightening up of the rules and an abolition of these ten towns' discretion.
After 1922, trial juries throughout England had to satisfy the same qualifications, although it was not until the 1980s that a centralised system was designed for selecting jurors from among the people who were qualified to serve. Until then, a great amount of discretion remained in the hands of local officials.

The size of the jury is intended to provide a "cross-section" of the public. In Williams v. Florida, 399 U.S. 78 (1970), the Supreme Court of the United States ruled that a Florida state jury of six was sufficient, that "the 12-man panel is not a necessary ingredient of 'trial by jury,'" and that the respondent's refusal to impanel more than the six members provided for by Florida law "did not violate petitioner's Sixth Amendment rights as applied to the States through the Fourteenth." In Ballew v. Georgia, 435 U.S. 223 (1978), the Supreme Court ruled that the number of jurors could not be reduced below six. In Brownlee v The Queen (2001) 207 CLR 278, the High Court of Australia unanimously held that a jury of 12 members was not an essential feature of "trial by jury" in section 80 of the Australian Constitution. In Scotland, a jury in a criminal trial consists of 15 jurors, which is thought to be the largest in the world. In 2009, a review by the Scottish Government regarding a possible reduction led to the decision to retain 15 jurors, with the Cabinet Secretary for Justice stating that, after extensive consultation, he had decided that Scotland had got it "uniquely right". Trials in the Republic of Ireland which are scheduled to last over 2 months can, but do not have to, have 15 jurors. A study by the University of Glasgow suggested that a civil jury of 12 people was ineffective because a few jurors ended up dominating the discussion, and that seven was a better number because more people feel comfortable speaking and have an easier time reaching a unanimous decision.

For juries to fulfill their role of analyzing the facts of the case, there are strict rules about their use of information during the trial. Juries are often instructed to avoid learning about the case from any source other than the trial (for example, from the media or the Internet) and not to conduct their own investigations (such as independently visiting a crime scene). Parties to the case, lawyers, and witnesses are not allowed to speak with a member of the jury; doing these things may constitute reversible error. Rarely, such as in very high-profile cases, the court may order a jury sequestered for the deliberation phase or for the entire trial. Jurors are generally required to keep their deliberations in strict confidence during the trial and deliberations, and in some jurisdictions even after a verdict is rendered. In Canadian and English law, the jury's deliberations must never be disclosed outside the jury, even years after the case; to repeat parts of the trial or verdict is considered contempt of court, a criminal offense. In the United States, confidentiality is usually required only until a verdict has been reached, and jurors have sometimes made remarks that called into question whether a verdict was properly reached. In Australia, academics are permitted to scrutinize the jury process only after obtaining a certificate or approval from the Attorney-General. Because of the importance of preventing undue influence on a jury, jury tampering (like witness tampering) is a serious crime, whether attempted through bribery, threat of violence, or other means.
Jurors themselves can also be held liable if they deliberately compromise their impartiality. The role of the jury is described as that of a finder of fact, while the judge is seen as having the sole responsibility of interpreting the appropriate law and instructing the jury accordingly. The jury determines the truth or falsity of factual allegations and renders a verdict on whether a criminal defendant is guilty or a civil defendant is civilly liable. Sometimes a jury makes specific findings of fact in what is called a "special verdict." A verdict without specific findings of fact that includes only findings of guilt, or civil liability and an overall amount of civil damages if awarded, is called a "general verdict." Juries are often justified on the ground that they leaven the law with community norms.

A jury trial verdict in a case is binding only in that case and is not a legally binding precedent in other cases. For example, it would be possible for one jury to find that particular conduct is negligent and another jury to find that the same conduct is not negligent, on precisely the same factual evidence, without either verdict being legally invalid. Of course, no two witnesses are exactly the same, and even the same witness will not express testimony in exactly the same way twice, so this would be difficult to prove. It is the role of the judge, not the jury, to determine what law applies to a particular set of facts. However, occasionally jurors find the law to be invalid or unfair, and on that basis acquit the defendant regardless of the evidence presented that the defendant violated the law. This is commonly referred to as "jury nullification of law" or simply jury nullification. When there is no jury ("bench trial"), the judge makes rulings on both questions of law and of fact. In most continental European jurisdictions, judges have more power in a trial, and the role and powers of a jury are often restricted. Actual jury law and trial procedures differ significantly between countries.

The collective knowledge and deliberative nature of juries are also given as reasons in their favor: detailed interviews with jurors after they rendered verdicts in trials involving complex expert testimony have demonstrated careful and critical analysis. The interviewed jurors clearly recognized that the experts were selected within an adversary process. They employed sensible techniques to evaluate the experts' testimony, such as assessing the completeness and consistency of the testimony, comparing it with other evidence at the trial, and evaluating it against their own knowledge and life experience. Moreover, the research shows that in deliberations jurors combine their individual perspectives on the evidence and debate its relative merits before arriving at a verdict.

In the United States, juries are sometimes called on, when asked to do so by a judge in the jury instructions, to make factual findings on particular issues. This may include, for example, aggravating circumstances which will be used to elevate the defendant's sentence if the defendant is convicted. This practice was required in all death penalty cases in Blakely v. Washington, 542 U.S. 296 (2004), where the Supreme Court ruled that allowing judges to make such findings unilaterally violates the Sixth Amendment right to a jury trial. A similar Sixth Amendment argument in Apprendi v. New Jersey, 530 U.S.
466 (2000) resulted in the Supreme Court's expansion of the requirement to all criminal cases, holding that "any fact that increases the penalty for a crime beyond the prescribed statutory maximum must be submitted to a jury and proved beyond a reasonable doubt".

Many U.S. jurisdictions permit the seating of an advisory jury in a civil case in which there is no right to trial by jury, to provide non-binding advice to the trial judge, although this procedural tool is rarely used. For example, a judge might seat an advisory jury to guide the judge in awarding non-economic damages (such as "pain and suffering" damages) in a case where there is no right to a jury trial, such as (depending on state law) a case involving "equitable" rather than "legal" claims. In Canada, juries are also allowed to make suggestions for sentencing periods at the time of sentencing. The suggestions of the jury are presented before the judge by the Crown prosecutor(s) before the sentence is handed down. In a small number of U.S. jurisdictions, including the states of Tennessee and Texas, juries are charged both with finding guilt or innocence and with assessing and fixing sentences. However, this is not the practice in most other legal systems based on the English tradition, in which judges retain sole responsibility for deciding sentences according to law. The exception is the award of damages in English law libel cases, although a judge is now obliged to make a recommendation to the jury as to the appropriate amount.

In legal systems based on the English tradition, findings of fact by a jury, and jury conclusions that could be supported by jury findings of fact even if the specific factual basis for the verdict is not known, are entitled to great deference on appeal. In other legal systems, it is generally possible for an appellate court to reconsider both findings of fact and conclusions of law made in the trial court, and in those systems evidence may be presented to appellate courts in what amounts to a trial de novo (new trial) of appealed findings of fact. The finality of trial court findings of fact in legal systems based on the English tradition has a major impact on court procedure in these systems. This makes it imperative that lawyers be highly prepared for trial, because errors and misjudgments related to the presentation of evidence at trial to a jury cannot generally be corrected later on appeal, particularly in court systems based on the English tradition. The higher the stakes, the more this is true. Surprises at trial are much more consequential in court systems based on the English tradition than they are in other legal systems.

Jury nullification means deciding not to apply the law to the facts in a particular case by jury decision. In other words, it is "the process whereby a jury in a criminal case effectively nullifies a law by acquitting a defendant regardless of the weight of evidence against him or her." In the 17th and 18th centuries, there was a series of such cases, starting in 1670 with the trial of the Quaker William Penn, which asserted the (de facto) right, or at least power, of a jury to render a verdict contrary to the facts or law. A good example is the case of one Carnegie of Finhaven, who in 1728 accidentally killed the Scottish Earl of Strathmore. As the defendant had undoubtedly killed the Earl, the law (as it stood) required the jury to render the verdict that the case had been "proven" and cause Carnegie of Finhaven to die for an accidental killing.
Instead, the jury asserted what is believed to be its "ancient right" to judge the whole case and not just the facts, and brought in the verdict of "not guilty". This led to the development of the not proven verdict in Scots law. Today in the United States, juries are instructed by the judge to follow the judge's instructions concerning what the law is and to render a verdict solely on the evidence presented in court. Important past exercises of nullification include cases involving slavery (see Fugitive Slave Act of 1850), freedom of the press (see John Peter Zenger), and freedom of religion (see William Penn). In United States v. Moylan, 417 F.2d 1002 (4th Cir. 1969), the Fourth Circuit Court of Appeals unanimously ruled: "If the jury feels that the law under which the defendant is accused is unjust, or exigent circumstances justified the actions of the accused, or for any reason which appeals to their logic or passion, the jury has the right to acquit, and the courts must abide that decision." The Fully Informed Jury Association is a non-profit educational organization dedicated to informing jurors of their rights and seeking the passage of laws to require judges to inform jurors that they can and should judge the law. In Sparf v. United States, 156 U.S. 51 (1895), the Supreme Court, in a 5–4 decision, held that a trial judge has no responsibility to inform the jury of the right to nullify laws. Modern American jurisprudence is generally intolerant of the practice, and a juror can be removed from a case if the judge believes that the juror is aware of the power of nullification.

In the United Kingdom, a similar power exists, often called "jury equity". This enables a jury to reach a decision in direct contradiction with the law if they feel the law is unjust. This can create a persuasive precedent for future cases, or render prosecutors reluctant to bring a charge; thus a jury has the power to influence the law. The standard justification of jury equity is taken from the final few pages of Lord Devlin's book "Trial by Jury". Devlin explained jury equity through two now-famous metaphors: that the jury is "the lamp that shows that freedom lives" and that it is a "little parliament". The second metaphor emphasises that, just as members of parliament are generally dominated by government but can occasionally assert their independence, juries are usually dominated by judges but can, in extraordinary circumstances, throw off this control. Devlin thereby sought to emphasise that neither jury equity nor judicial control is set in stone.

Perhaps the best example of modern-day jury equity in England and Wales was the 1985 acquittal of Clive Ponting on a charge of revealing secret information under section 2 of the Official Secrets Act 1911. Mr. Ponting's defence was that the revelation was in the public interest. The trial judge directed the jury that "the public interest is what the government of the day says it is", effectively a direction to the jury to convict. Nevertheless, the jury returned a verdict of not guilty. Another example is the acquittal in 1989 of Michael Randle and Pat Pottle, who confessed in open court to charges of springing the Soviet spy George Blake from Wormwood Scrubs Prison and smuggling him to East Germany in 1966.
Pottle successfully appealed to the jury to disregard the judge's instruction that they consider only whether the defendants were guilty in law, and to assert a jury's ancient right to throw out a politically motivated prosecution, in this case compounded by its cynical untimeliness. In Scotland (which has a separate legal system from that of England and Wales), although technically the "not guilty" verdict was originally a form of jury nullification, over time the interpretation has changed so that the "not guilty" verdict has become the normal one when a jury is not persuaded of guilt, and the "not proven" verdict is used only when the jury is not certain of innocence or guilt. It is absolutely central to Scottish and English law that there is a presumption of innocence. This is not a trivial distinction, since any shift in the burden of proof is a significant change which undermines the safeguard for the citizen.

Besides petit juries for jury trials and grand juries for issuing indictments, juries are sometimes used in non-legal or quasi-legal contexts. Blue ribbon juries attend to civic matters as an ad-hoc body in the executive branch of a government. Outside government, a jury or panel of judges may make determinations in competitions, such as at a wine tasting, art exhibition, talent contest, or reality game show; these types of contests are juried competitions. Blue ribbon juries are juries selected from prominent, well-educated citizens, sometimes to investigate a particular problem such as civic corruption. Blue ribbon juries cannot be used in real trials, which require constitutional safeguards to produce a jury of one's peers. The blue-ribbon jury is intended to overcome the problems of ordinary juries in interpreting complex technical or commercial questions. In the United States, blue-ribbon juries were provided for by statutes, the terms varying by jurisdiction. Each state may determine the extent to which juries are used.

The use of a jury is optional for civil trials in any Australian state. Criminal trials are generally decided by a unanimous verdict of 12 lay members of the public. Some states provide exceptions, such as majority (11-to-1 or 10-to-2) verdicts, where a jury cannot otherwise reach a verdict. All states except Victoria allow a person accused of a criminal offence to elect to be tried by a judge alone rather than under the default jury provision. The Constitution of Australia provides in section 80 that "the trial on indictment of any offence against any law of the Commonwealth shall be by jury". The Commonwealth can determine which offences are "on indictment". It would be entirely consistent with the Constitution that a homicide offence could be tried not "on indictment", or conversely that a simple assault could be tried "on indictment". This interpretation has been criticized as making a "mockery" of the section, rendering it useless. Where a trial "on indictment" has been prescribed, it is an essential element that guilt be found by a unanimous verdict of 12 lay members of the public. This requirement stems from the historical meaning of "jury" at the time that the Constitution was written and is thus, in principle, an integral element of trial by jury. Unlike in the Australian states, an accused person cannot elect a judge-only trial in such Commonwealth matters, even where both the accused and the prosecutor seek such a trial.

The Belgian Constitution provides that all cases involving the most serious crimes be judged by juries.
As a safeguard against libel cases, press crimes can also be tried only by a jury; press crimes motivated by racism are excluded from this safeguard. Twelve jurors decide by a qualified majority of two-thirds whether the defendant is guilty or not. A tied vote results in "not guilty"; a "7 guilty – 5 not guilty" vote is transferred to the 3 professional judges, who can, by unanimity, reverse the majority to "not guilty". The sentence is decided by a majority of the 12 jurors and the 3 professional judges. As a result of the Taxquet ruling, juries nowadays give the most important reasons that led them to their verdict, and the procedural codification has been altered to meet the demands formulated by the European Court of Human Rights.

The Constitution of Brazil provides that only willful crimes against life, namely full or attempted murder, abortion, infanticide and suicide instigation, be judged by juries. Seven jurors vote in secret to decide whether the defendant is guilty or not, and decisions are taken by majority. Manslaughter and other crimes in which the killing was committed without intent, however, are judged by a professional judge instead.

In Canada, juries are used for some criminal trials but not others. For summary conviction offences or offences found under section 553 of the Criminal Code (theft and fraud up to the value of $5,000 and certain nuisance offences), the trial is before a judge alone. For most indictable offences, the accused person can elect to be tried by either a judge alone or a judge and jury. For the most serious offences, found in section 469 of the Criminal Code (such as murder or treason), a judge and a jury are always used, unless both the accused and the prosecutor agree that the trial should not be in front of a jury. The jury's verdict on the ultimate disposition of guilt or innocence must be unanimous, although jurors may disagree on the evidentiary route that leads to that disposition. Juries do not make a recommendation as to the length of sentence, except for parole ineligibility for second-degree murder (and even then the judge is not bound by the jury's recommendation, and the jury is not required to make one). Jury selection is in accordance with specific criteria: prospective jurors may only be asked certain questions, selected for direct pertinence to impartiality or other relevant matters, and any other questions must be approved by the judge. A jury in a criminal trial is initially composed of 12 jurors. The trial judge has the discretion to direct that one or two alternate jurors also be appointed. If a juror is discharged during the course of the trial, the trial will continue with an alternate juror, unless the number of jurors goes below 10. The Canadian Charter of Rights and Freedoms guarantees that anyone tried for an offence that has a maximum sentence of five or more years has the right to be tried by a jury (except for an offence under military law). Juries are infrequently used in civil trials in Canada; there are no civil juries in the courts of the Province of Quebec, nor in the Federal Court.

In France, three professional judges sit alongside six jurors in first-instance proceedings, or nine in appeal proceedings. Before 2012, there were nine or twelve jurors, but this was reduced to cut spending. A two-thirds majority is needed in order to convict the defendant. During these procedures, judges and jurors have equal positions on questions of fact, while judges decide on questions of procedure. Judges and jurors also have equal positions on sentencing.
Trial by jury was introduced in most German states after the revolutionary events of 1848. However, it remained controversial, and early in the 20th century there were moves to abolish it. The Emminger Reform of January 4, 1924, enacted during an Article 48 state of emergency, abolished the jury system and replaced it with a mixed system of bench trials and lay judges. In 1925, the Social Democrats called for the reinstitution of the jury, and a special meeting of the German Bar demanded revocation of the decrees, but "on the whole the abolition of the jury caused little commotion", as jury verdicts had been widely perceived as unjust and inconsistent.

Today, most misdemeanors are tried by a Strafrichter, a single judge at an Amtsgericht; felonies and more severe misdemeanors are tried by a Schöffengericht, also located at the Amtsgericht and composed of 1 judge and 2 lay judges; some felonies are heard by an Erweitertes Schöffengericht (extended Schöffengericht), composed of 2 judges and 2 lay judges; severe felonies and other "special" crimes are tried by the große Strafkammer, composed of 3 judges and 2 lay judges at the Landgericht, with specially assigned courts for some crimes called Sonderstrafkammer; felonies resulting in the death of a human being are tried by the Schwurgericht, composed of 3 judges and 2 lay judges, located at the Landgericht; and serious crimes against the state are tried by the Strafsenat, composed of 5 judges, at an Oberlandesgericht. In some civil cases, such as commercial law or patent law, there are also lay judges, who have to meet certain criteria (e.g., being a merchant).

Article 86 of the Hong Kong Basic Law assures the practice of jury trials. Criminal cases in the High Court and some civil cases are tried by a jury in Hong Kong. There is no jury in the District Court. In addition, from time to time, the Coroner's Court may summon a jury to decide the cause of death in an inquest. Criminal cases are normally tried by a 7-person jury and sometimes, at the discretion of the court, a 9-person jury. Nevertheless, the Jury Ordinance requires that a jury in any proceedings be composed of at least 5 jurors. Although article 86 of the Basic Law states that "the principle of trial by jury previously practiced in Hong Kong shall be maintained", it does not guarantee that every case is to be tried by a jury. In Chiang Lily v. Secretary for Justice (2010), the Court of Final Appeal agreed that "there is no right to trial by jury in Hong Kong."

Jury trials were abolished in most Indian courts by the 1973 Code of Criminal Procedure. The Nanavati case was not the last jury trial in India; West Bengal had jury trials as late as 1973. Juries were not mentioned in the 1950 Indian Constitution, and jury trial was ignored in many Indian states. The Law Commission recommended their abolition in 1958 in its 14th Report. Juries were retained in a discreet manner for Parsi divorce courts, wherein a panel of members called "delegates" is randomly selected from the community to decide the facts of the case. Parsi divorce law is governed by The Parsi Marriage and Divorce Act, 1936, as amended in 1988, and is a mixture of the Panchayat legal system and the jury process.

The law in Ireland is historically based on English common law, and Ireland had a similar jury system.
Article 38 of the 1937 Constitution of Ireland mandates trial by jury for criminal offences, with exceptions for minor offences, military tribunals, and cases where "the ordinary courts are inadequate to secure the effective administration of justice, and the preservation of public peace and order". DPP v McNally sets out that a jury has the right to reach a not guilty verdict even in direct contradiction of the evidence. The principal statute regulating the selection, obligations and conduct of juries is the Juries Act 1976, as amended by the Civil Law (Miscellaneous Provisions) Act 2008. There is a fine of €500 for failing to report for jury service, though this was poorly enforced until a change of policy at the Courts Service in 2016. Criminal jury trials are held in the Circuit Court or the Central Criminal Court. Juryless trials under the inadequacy exception, dealing with terrorism or organised crime, are held in the Special Criminal Court on application by the Director of Public Prosecutions (DPP). Juries are also used in some civil law trials, such as for defamation, and are sometimes used at coroner's inquests. Normally consisting of twelve persons, juries are selected from a jury panel which is picked at random by the county registrar from the electoral register. Juries only decide questions of fact and have no role in criminal sentencing. It is not necessary that a jury be unanimous in its verdict. In civil cases, a verdict may be reached by a majority of nine of the twelve members. In a criminal case, a verdict need not be unanimous where there are not fewer than eleven jurors if ten of them agree on a verdict after considering the case for a "reasonable time". Juries are not paid, nor do they receive travel expenses; however, they do receive lunch on the days that they are serving. The Law Reform Commission examined jury service, producing a consultation paper in 2010 and then a report in 2013. One of its recommendations, to permit extra jurors for long trials in case some are excused, was enacted in 2013. In November 2013, the DPP requested a 15-member jury at the trial of three Anglo Irish Bank executives. Where more than twelve jurors are present, twelve will be chosen by lot to retire and consider the verdict.

In Italy, a civil law jurisdiction, lay judges sit only in the Corte d'Assise, where two career magistrates are supported by six so-called lay judges, who are drawn by lot from the register of voters. Any Italian citizen between 30 and 65 years of age, with no distinction of sex or religion, can be appointed as a lay judge. There is, however, a minimum educational requirement: to be eligible for the Corte d'Assise, a lay judge must have completed his or her education at the Scuola Media (junior high school) level, and for the Corte d'Assise d'Appello (the appeal level of the Corte d'Assise) the requirement is raised to a Scuola Superiore (senior high school) degree. In the Corte d'Assise, decisions concerning both matters of fact and matters of law are taken by the stipendiary judges and lay judges together at a special meeting behind closed doors, named the Camera di Consiglio ("Counsel Chamber"), and the Court is subsequently required to publish written explanations of its decisions within 90 days of the verdict. Errors of law or inconsistencies in the explanation of a decision can, and usually will, lead to the annulment of the decision.
A Corte d'Assise and a Corte d'Assise d'Appello decide by a majority of votes, and therefore predominantly on the votes of the lay judges, who form a majority of six to two. In practice, however, the lay judges, who are not trained to write such explanations and must rely on one or the other stipendiary judge to do so, are effectively prevented from overruling both of them. The Corte d'Assise has jurisdiction to try crimes carrying a maximum penalty of 24 years in prison or life imprisonment, and other serious crimes; felonies that fall under its jurisdiction include terrorism, murder, manslaughter, and severe attempts against State personalities, as well as some matters requiring ethical and professional evaluations (e.g., assisted suicide), while it generally has no jurisdiction over cases whose evaluation requires knowledge of the law which the lay judges generally do not have. Penalties imposed by the court can include life sentences.

In New Zealand, juries are used in all trials involving Category 4 offences such as treason, murder and manslaughter, although in exceptional circumstances a judge-alone trial may be ordered. At the option of the defendant, juries may be used in trials involving Category 3 offences, that is, offences where the maximum penalty available is two years' imprisonment or greater. In civil cases, juries are used only in cases of defamation, false imprisonment and malicious prosecution. Juries must initially try to reach a unanimous verdict, but if one cannot be reached in a reasonable timeframe, the judge may accept a majority verdict of all-but-one (i.e., 11–1 or 10–1) in criminal cases and three-quarters (i.e., 9–3 or 9–2) in civil cases.

Juries existed in Norway as early as the year 800, and perhaps even earlier, and Norwegians brought the jury system to England and Scotland. Juries were phased out as late as the 17th century, when Norway's central government was in Copenhagen, Denmark. Though Norway and Denmark had different legal systems throughout their personal union (1387–1536), and later under the governmental union (1536–1814), there were attempts to harmonize the legal systems of the two countries. Even though juries were abolished, laymen continued to play an important role in the Norwegian legal system. The jury was reintroduced in 1887, and was then used solely in criminal cases on the second tier of the three-tier Norwegian court system (Lagmannsretten). The jury consisted of 10 people and had to reach a majority verdict of seven or more of the jurors. The jury never gave a reason for its verdict; rather, it simply returned a verdict of "guilty" or "not guilty". In a sense, the concept of being judged by one's peers existed on both the first and second tiers of the Norwegian court system: in Tingretten, one judge and two lay judges preside, and in Lagmannsretten, two judges and five lay judges preside. The lay judges do not hold any legal qualification and represent the peers of the person on trial, as members of the general public. As a guarantee against any abuse of power by the educated elite, the number of lay judges always exceeds the number of appointed judges. In the Supreme Court, only trained lawyers are seated.

The right to a jury trial is provided by the Constitution of the Russian Federation, but for criminal cases only and under the procedure defined by law.
Initially, the Criminal Procedure Code adopted in 2001 provided that the right to a jury trial could be exercised in criminal cases heard by regional courts and by military courts of military districts/fleets as courts of first instance; the jury was composed of 12 jurors. In 2008, anti-state criminal cases (treason, espionage, armed rebellion, sabotage, mass riot, creating an illegal paramilitary group, forcible seizure of power, terrorism) were removed from the jurisdiction of the jury trial. From 1 June 2018, defendants can claim a jury trial in criminal cases heard by district courts and garrison military courts as courts of first instance; from that moment on, the jury is composed of 8 jurors (in regional courts and military courts of military districts/fleets) or 6 jurors (in district courts and garrison military courts). A juror must be 25 years old, legally competent, and without a criminal record.

Spain has no strong tradition of using juries; however, juries are mentioned in the Bayonne Statute. Later, Article 307 of the Spanish Constitution of 1812 allowed the Cortes to pass legislation, if in time they felt it was needed, distinguishing between "judges of law" and "judges of fact". Such legislation, however, was never enacted. Article 2 of the Spanish Constitution of 1837, while proclaiming the freedom of the people to publish written content without prior censorship in accordance with the laws, also provided that "press crimes" could only be tried by juries. This meant that a grand jury would need to indict and a petit jury would need to convict. Juries were abolished in 1845, but were restored in 1869 for all "political crimes" and "those common crimes the law may deem appropriate to be so tried by a jury". A Law concerning the Jury entered into force on January 1, 1899 and lasted until 1936, when juries were again disbanded with the outbreak of the Spanish Civil War.

The current Constitution of 1978 permits the Cortes Generales to pass legislation allowing juries in criminal trials. The provision is arguably somewhat vague: "Article 125 – Citizens may engage in popular action and participate in the administration of justice through the institution of the Jury, in the manner and with respect to those criminal trials as may be determined by law, as well as in customary and traditional courts." Jury trials can take place only in the criminal jurisdiction, and it is not the defendant's choice whether to be tried by a jury or by a single judge or a panel of judges. Organic Law 5/1995, of May 22, regulates the categories of crimes in which a trial by jury is mandatory; for all other crimes, a single judge or a panel of judges decides both on the facts and the law. Spanish juries are composed of 9 citizens and a professional judge. Juries decide on the facts and on whether to convict or acquit the defendant. In case of conviction, they can also make recommendations, such as whether the defendant should be pardoned (if a pardon has been requested) or whether the defendant could be released on parole. One of the first jury trial cases was that of Mikel Otegi, who was tried in 1997 for the murder of two police officers. After a confused trial, five jury members out of a total of nine voted to acquit, and the judge ordered the accused set free. This verdict shocked the nation. Another alleged miscarriage of justice by jury trial was the Wanninkhof murder case.
In Sweden, in press libel cases and other cases concerning offenses against the freedom of the press, the question of whether or not the printed material falls outside permissible limits is submitted to a jury of 9 members, which provides a pre-screening before the case is ruled on by the ordinary courts. In these cases 6 out of 9 jurors must find against the defendant, and the jury may not be overruled in cases of acquittal. Sweden has no tradition of using juries in most types of criminal or civil trial. The sole exception, since 1815, is in cases involving freedom of the press, prosecuted under Chapter 7 of the Freedom of the Press Act, part of Sweden's constitution. The most frequently prosecuted offence under this act is defamation, although in total eighteen offences, including high treason and espionage, are covered. These cases are tried in district courts (first-tier courts) by a jury of nine laymen. The jury in press freedom cases rules only on the facts of the case and the question of guilt or innocence. The trial judge may overrule a jury's guilty verdict, but may not overrule an acquittal. A conviction requires a majority verdict of 6–3. Sentencing is the sole prerogative of judges. Jury members must be Swedish citizens and resident in the county in which the case is being heard. They must be of sound judgement and known for their independence and integrity. Combined, they should represent a range of social groups and opinions, as well as all parts of the county. It is the county councils that are responsible for appointing juries, for a tenure of four years during which jurors may serve in multiple cases. The appointed jurymen are divided into two groups, in most counties the first with sixteen members and the second with eight. From this pool of available jurymen the court hears and excludes those with conflicts of interest in the case, after which the defendants and plaintiffs have the right to exclude a number of members, varying by county and group. The final jury is then randomly selected by drawing of lots. Juries are not used in other criminal and civil cases. For most other cases in the first- and second-tier courts, lay judges sit alongside professional judges. Lay judges participate in deciding both the facts of the case and the sentence. Lay judges are appointed by local authorities, or in practice by the political parties represented on those authorities. Lay judges are therefore usually selected from among nominees of the ruling political parties. In England and Wales, jury trials are used for criminal cases, requiring 12 jurors (between the ages of 18 and 75), although the trial may continue with as few as 9. The right to a jury trial has been enshrined in English law since Magna Carta in 1215, and is most common in serious cases, although the defendant can insist on a jury trial for most criminal cases. Jury trials in complex fraud cases have been described by some members and appointees of the Labour Party as expensive and time-consuming. In contrast, the Bar Council, Liberty and other political parties have supported the idea that trial by jury is at the heart of the judicial system and placed the blame for a few complicated jury trials failing on inadequate preparation by the prosecution.
On 18 June 2009 the Lord Chief Justice, Lord Judge, sitting in the Court of Appeal, made English legal history by ruling that a criminal trial in the Crown Court could take place without a jury, under the provisions of the Criminal Justice Act 2003. Jury trials are also available for a few areas of civil law (for example defamation cases and those involving police conduct); these also require 12 jurors (9 in the County Court). However, fewer than 1% of civil trials involve juries. At the new Manchester Civil Justice Centre, constructed in 2008, fewer than 10 of the 48 courtrooms had jury facilities. During the Troubles in Northern Ireland, jury trials were suspended and trials took place before Diplock courts, essentially trials before judges only. This was to combat the intimidation of juries. Scottish trials are based on an adversarial approach. First the prosecution leads evidence from witnesses, and after each witness the defence has an opportunity to cross-examine. Following the prosecution case, the defence may make a motion of no case to answer if the prosecution evidence, taken at its highest, would be insufficient to convict of any crime. If there remains a case to answer, the defence leads evidence from witnesses in an attempt to refute the evidence previously led by the prosecution, with cross-examination permitted after each witness. Once both prosecution and defence have concluded leading evidence, the case goes to summing up, where first the prosecution and then the defence sum up their case based on the evidence that has been heard. The jury is given guidance on points of law and then sent out to consider its verdict. Scottish juries are composed of fifteen residents. In criminal law in federal courts and a minority of state court systems of the United States, a grand jury is convened to hear testimony and evidence only to determine whether there is a sufficient basis for indicting the defendant and proceeding toward trial. In each court district where a grand jury is required, a group of 16–23 citizens holds an inquiry on criminal complaints brought by the prosecutor to decide whether a trial is warranted (based on the standard that probable cause exists that a crime was committed), in which case an indictment is issued. In jurisdictions where the size of a jury varies, the size of juries in general tends to be larger if the crime alleged is more serious. If a grand jury rejects a proposed indictment, its action is known as a "no bill"; if it accepts a proposed indictment, its action is known as a "true bill." Grand jury proceedings are ex parte: only the prosecutor and the witnesses whom the prosecutor calls may present evidence to the grand jury; defendants are not allowed to present mitigating evidence or even to know the testimony that was presented to the grand jury, and hearsay evidence is permitted. This is so because a grand jury cannot convict a defendant; it can only decide to indict the defendant and proceed forward toward trial. Grand juries vote to indict in the overwhelming majority of cases, and prosecutors are not prohibited from presenting the same case to a new grand jury if a "no bill" was returned by a previous grand jury. A typical grand jury considers a new criminal case every fifteen minutes.
In some jurisdictions, in addition to indicting persons for crimes, a grand jury may also issue reports on matters that it investigates apart from the criminal indictments, particularly when the grand jury investigation involves a public scandal. Historically, grand juries were sometimes used in American law to serve a purpose similar to that of an investigatory commission. Both Article III of the U.S. Constitution and the Sixth Amendment require that criminal cases be tried by a jury. Originally this applied only to federal courts; however, the Fourteenth Amendment extended this mandate to the states. The Constitution as originally drafted did not require a jury for civil cases, which caused an uproar and was followed by the adoption of the Seventh Amendment, which requires a civil jury in cases where the value in dispute is greater than twenty dollars. However, the Seventh Amendment right to a civil jury trial does not apply in state courts, where the right to a jury is strictly a matter of state law. In practice, all states except Louisiana preserve the right to a jury trial in almost all civil cases where the sole remedy sought is money damages, to the same extent as jury trials are permitted by the Seventh Amendment. Under the law of many states, jury trials are not allowed in small claims cases. The civil jury in the United States is a defining element of the process by which personal injury trials are handled. In practice, even though the defendant in a criminal action is entitled to a trial by jury, most criminal actions in the U.S. are resolved by plea bargain. Only about 2% of civil cases go to trial, with only about half of those trials being conducted before juries. In 1898 the Supreme Court held that the jury must be composed of at least twelve persons, although this requirement was not necessarily extended to state civil jury trials. In 1970, however, the Supreme Court held that the twelve-person requirement was a "historical accident", and upheld six-person juries where provided for under state law in both criminal and civil state court cases. There is controversy over smaller juries, with proponents arguing that they are more efficient and opponents arguing that they lead to fluctuating verdicts. In a later case, however, the court rejected the use of five-person juries in criminal cases. Juries go through a selection process called voir dire, in which the lawyers question the jurors and then make "challenges for cause" and "peremptory challenges" to remove jurors. Traditionally the removal of jurors through a peremptory challenge required no justification or explanation, but the Supreme Court has changed that tradition where the reason for the peremptory challenge is the race of the potential juror. Since the 1970s "scientific jury selection" has become popular. Unanimous jury verdicts have been the standard in American law. This requirement was upheld by the Supreme Court in 1897, but the standard was relaxed in 1972 in two criminal cases. As of 1999 over thirty states had laws allowing less than unanimity in civil cases, but, until 2020, Oregon and Louisiana were the only states with laws allowing less-than-unanimous jury verdicts in criminal cases (these laws were overturned in Ramos v. Louisiana). When the required number of jurors cannot agree on a verdict (a situation sometimes referred to as a hung jury), a mistrial is declared, and the case may be retried with a newly constituted jury.
The practice generally has been that the jury rules only on questions of fact and guilt, while setting the penalty is reserved for the judge. This practice was confirmed by rulings of the U.S. Supreme Court such as Ring v. Arizona, which found unconstitutional Arizona's practice of having the judge decide whether aggravating factors exist that make a defendant eligible for the death penalty, and reserved the determination of whether the aggravating factors exist for the jury. However, in some states (such as Alabama and Florida), the ultimate decision on the punishment has been made by the judge, with the jury giving only a non-binding recommendation; the judge could impose the death penalty even if the jury recommended life without parole. There is no set format for jury deliberations, and the jury takes a period of time to settle into discussing the evidence and deciding on guilt and any other facts the judge instructs them to determine. Deliberation is done by the jury only, with none of the lawyers, the judge, or the defendant present. The first step will typically be to find out the initial feeling or reaction of the jurors to the case, which may be by a show of hands or via secret ballot. The jury will then attempt to arrive at a consensus verdict. The discussion usually helps to identify jurors' views, to see whether a consensus will emerge, as well as areas that bear further discussion. Points often arise that were not specifically discussed during the trial. The result of these discussions is that in most cases the jury comes to a unanimous decision and a verdict is thus achieved. In some states and under some circumstances, the decision need not be unanimous. In a few states, depending upon the law, the trial jury, or sometimes a separate jury, may determine whether the death penalty is appropriate in "capital" murder cases. Usually, sentencing is handled by the judge at a separate hearing. The judge may, but does not always, follow the recommendations of the jury when deciding on a sentence. When used alone, the term jury usually refers to a petit jury rather than a grand jury. Jury sentencing is the practice of having juries decide what penalties to give those who have been convicted of criminal offenses. The practice of jury sentencing began in Virginia in the 18th century and spread westward to other states that were influenced by Virginia-trained lawyers. As of 2018, Arkansas, Kentucky, Missouri, Oklahoma, Texas, and Virginia have sentencing by jury. Alabama, Georgia, Indiana, Illinois, Mississippi, Montana, Tennessee, and West Virginia had jury sentencing in times past, but later abandoned it. Canadian juries have long had the option to recommend mercy, leniency, or clemency, and the 1961 Criminal Code required judges to give a jury instruction, following a verdict convicting a defendant of capital murder, soliciting a recommendation as to whether he should be granted clemency. When capital punishment in Canada was abolished in 1976, as part of the same raft of reforms, the Criminal Code was also amended to grant juries the ability to recommend periods of parole ineligibility immediately following a guilty verdict in second-degree murder cases; however, these recommendations are usually ignored, based on the idea that judges are better informed about relevant facts and sentencing jurisprudence and, unlike the jury, are permitted to give reasons for their judgments.
Proponents of jury sentencing argue that since sentencing involves fact-finding (a task traditionally within the purview of juries), and since the original intent of the founders was to have juries check judges' power, it is the proper role of juries to participate in sentencing. Opponents argue that judges' training and experience with the use of presentence reports and sentencing guidelines, as well as the fact that jury control procedures typically deprive juries of the opportunity to hear information about the defendant's background during the trial, make it more practical to have judges sentence defendants. The impetus for introducing jury sentencing was that in the late 18th century, punishment options expanded beyond shaming sanctions and the mandatory death penalty to include various ranges and modes of imprisonment, creating more room for case-by-case decision-making to which juries were thought to be well-suited. Virginia was the first state to adopt jury sentencing. The state's first constitution was enacted in 1776, and shortly thereafter, in 1779, Thomas Jefferson proposed to the Virginia General Assembly a revised criminal code that would have eliminated pardons and benefit of clergy, abolished capital punishment for most offenses, and allowed juries to decide punishments when the penalty was discretionary. This bill failed, however, both in 1779 and in 1786, the latter time after James Madison had reintroduced it while Jefferson was in France. Sentencing by jury was, however, successfully enacted in Virginia's 1796 penal code, which, like the 1779 bill, replaced capital punishment with terms of imprisonment for most felony offenses. Kentucky adopted a penal reform bill introduced by John Breckenridge that implemented sentencing by jury in 1798. While in Virginia magistrates continued to have misdemeanor sentencing power (possibly because of the political influence of magistrates who served in the General Assembly), in Kentucky this power was given to juries. Kentucky juries tried and sentenced slaves and free blacks, and even decided cases involving prison discipline, imposing punishments such as flagellation or solitary confinement for infractions. Georgia and Tennessee adopted sentencing by jury in 1816 and 1829, respectively. In contrast, northern states such as Pennsylvania, Maryland, New Jersey, and New York allowed judges to determine penalties, with Pennsylvania also allowing judges to pardon prisoners who, in their view, had evidenced sincere reformation. One hypothesis is that Virginia opted for jury sentencing because Federalists like George Keith Taylor distrusted the Republican district court judges, while in Pennsylvania the Constitutionalists sought (over the objections of Republicans) to put sentencing power in the hands of judges because the bench was populated by Constitutionalists. North Carolina, South Carolina, and Florida, which did not establish penitentiaries until after the American Civil War, also left sentencing to judges' discretion. The adoption of jury sentencing happened at the same time that the movement for an elective judiciary gathered speed, with at least four states (Alabama, Mississippi, Montana, and North Dakota) switching to judicial elections around the same time that they adopted jury sentencing. Both reforms may have been due to a mistrust of unelected judges. During the ten years of the Republic of Texas, judges determined sentences.
The change to jury determination of the penalty was brought about by one of the first laws passed by the first legislature of the State of Texas in 1846, which empowered the jury to sentence the defendant in all criminal cases except capital cases and cases for which punishment was fixed by law. Indiana, Illinois, Arkansas, Oklahoma, and West Virginia adopted jury sentencing later in the 19th century. The 1895 U.S. Supreme Court ruling in Sparf v. United States reflected growing concern that letting juries decide whether or how the law should be applied in particular cases could be detrimental to the rule of law. By 1910, the role of juries in determining penalties was being eroded by the professionalization of sentencing, as many states passed laws that created parole and probation systems. These systems were based on a consequentialist philosophy that it would be more useful for society to focus on finding ways to prevent future crime than on fixing blame for crime that had occurred in the past. Criminal behavior was viewed as the result of such factors as heredity, social circumstances, random breeding, and Darwinian struggle, rather than an abuse of divinely granted free will. Psychology and sociology would determine the causes of crime and what social reforms and treatment programs would correct them. Probation officers gathered and analyzed information about the defendant's character and prepared a presentence report that served as the basis for the ultimate sentence. Probation provided opportunities for treatment in the community for juveniles and adults. In the prison system, parole commissioners, trained in penology and insulated from political pressures, determined when prisoners had been rehabilitated and could be reintegrated into society. The process of preparing a presentence report, which takes weeks, begins only after the defendant is convicted, since if he or she were to be acquitted, the effort that went into preparing the report would be wasted. It would therefore not be possible for juries to sentence the defendant at the time of conviction if the jury needed to rely on a presentence report in making its sentencing decision; rather, the jury would need to be broken up and reassembled later, which could be unworkable if the delay between verdict and sentencing is substantial. Furthermore, jury control procedures typically provide that during the trial, information about the defendant's background that is not relevant to the issue of guilt is not to be presented in the presence of the jury, lest it prejudice him. The assumptions that presentence reports would be more informative than presentence hearings, and that training and experience were required to intelligently consider the data and assess sanctions, militated in favor of having a judge rather than a jury do the sentencing. In McKeiver v. Pennsylvania, the U.S. Supreme Court held that alleged juvenile delinquents have no right to a jury trial, with Harry Blackmun and three other Justices opining that an adversarial system would put an end to the prospect of an intimate, informal, protective proceeding focused on rehabilitation. Georgia and Tennessee both had periods (1937–1939 and 1913–1923, respectively) in which they briefly abandoned jury sentencing while experimenting with indeterminate sentencing. By 1919, fourteen states gave juries sentencing powers in non-capital cases, although by 1960 that number had dropped to thirteen.
By the 1970s and 1980s, determinate sentencing, a new intellectual current that repudiated the rehabilitative model and instead focused on using mathematical models and grids to determine sentences, had made inroads, making jury sentencing seem like more of an anachronism. Georgia permanently abandoned jury sentencing in 1974 and Tennessee did the same in 1982. By the 1980s, Alabama, Illinois, Indiana, Montana, and North Dakota had also abandoned jury sentencing, and Mississippi was using jury sentencing only in rape and statutory rape cases. Oklahoma abolished jury sentencing but reinstated it in 1999. In Canada, a faint hope clause formerly allowed a jury to be empanelled to consider whether an offender's number of years of imprisonment without eligibility for parole ought to be reduced, but this was repealed in 2011. According to some commentators, the time is ripe for a revival of jury sentencing, because flaws in the determinate sentencing systems are becoming increasingly apparent. Lawmakers drafting legislation such as the Sentencing Reform Act have had difficulty mustering the political will to make clear choices among opposing moral and ideological viewpoints, instead delegating these decisions to agencies that lack the representativeness and democratic origin of legislatures. Prosecutors have routinely circumvented the sentencing guidelines through their charging and plea bargaining decisions, creating a new set of disparities, despite the intent of the guidelines to curtail disparities. Determinate sentencing has also failed to reduce racial disparity in sentencing. Also, some juries have been acquitting guilty defendants to save them from what they regard as overly harsh mandatory minimum sentences, such as those imposed by the Rockefeller Drug Laws and California's three-strikes law. There have been movements to abolish sentencing commissions and guideline systems and to inform jurors of their right to nullify. Decisions like Apprendi v. New Jersey (requiring a jury, rather than a judge, to find any facts that would increase a defendant's maximum sentence) and Ring v. Arizona (requiring a jury, rather than a judge, to find whether there are aggravating factors justifying capital punishment) have also signaled a willingness by the judiciary to expand the role of the jury in the legal process. Jury sentencing has been seen as a way to render moot, in many cases, the questions raised by Apprendi and related cases such as Blakely v. Washington and United States v. Booker about the differences between elements of an offense and sentencing factors, by letting the jury decide all the facts. Cases such as Miller v. Alabama and Graham v. Florida (banning mandatory life imprisonment without parole, and life imprisonment without parole in non-homicide cases, respectively, for juveniles, as contrary to the Eighth Amendment to the United States Constitution's prohibition of cruel and unusual punishment) also raise the question of whether the Supreme Court logically should allow only a jury, rather than a judge, to determine that a juvenile should receive such a sentence, given the parallels between adult capital punishment case law and juvenile life-without-parole case law. In Virginia, under the 1796 act, capital punishment remained mandatory for first-degree murder, but the penalty for second-degree murder was any term between five and eighteen years in the penitentiary.
The 1796 act gave the court in murder cases the authority to "determine the degree of the crime, and to give sentence accordingly" when a defendant was "convicted by confession." The judge's discretion to set sentences in cases of confession did not exist in Kentucky. In Missouri, informing juries of the sentences of defendants in similar cases, or of the sentences of co-participants in the crime on trial, is strictly prohibited under the rules of evidence. Similarly, the Kentucky truth-in-sentencing statute, which generally increases the information available to sentencing juries, does not provide for sentencing guidelines and statistics. Kentucky courts have also held parole eligibility statistics inadmissible. The military at one time provided jurors with sentencing statistics and guidelines, but this practice ended in the late 1950s as the military's judicial philosophy shifted its emphasis away from sentencing uniformity and towards individualized judgments. The United States Court of Military Appeals held that jurors were not to consider sentences in similar cases or to consult the sentencing manual. Under Virginia's current system, jurors are controversially not allowed access to the Commonwealth's sentencing guidelines or to information about whether sentences will run consecutively or concurrently, and until 2000 they were also not informed that parole had been abolished in Virginia. A judge must justify any departure from the jury's recommendation in writing to the Virginia Criminal Sentencing Commission. Less than one-quarter of jury-recommended sentences are modified by judges. Due to concerns about juries imposing higher sentences than the sentencing guidelines would suggest, many defendants opt either for bench trials or for plea bargains. States with jury sentencing have often allowed judges to intervene in the sentencing process, e.g. by reducing the sentence imposed by the jury, imposing hard labor or solitary confinement in addition to the jury's assessment of fines, or determining the place of confinement. In Alabama, judges were allowed to override juries' recommendations of life imprisonment and impose capital punishment instead, until a 2017 law took that power away. All jury sentencing states except Texas allow the judge to fix the punishment if the jury fails to agree on a sentence, making it impossible for there to be a mistrial due to a hung jury at sentencing. In 2020, the Virginia Senate approved SB 810, giving juries applicable discretionary sentencing guidelines worksheets, and SB 811, providing that the court ascertain the punishment unless the defendant requests jury sentencing. Proponent Joe Morrissey said, "Juries are unpredictable . . . You have much more stability with the judge doing the sentencing." An argument based on the Sixth and Seventh Amendments to the United States Constitution is that criminal and civil juries have similar societal functions, including checking the abuse of governmental power, injecting community values into legal decisions, and aiding public acceptance of legal determinations; therefore the criminal system should have juries decide sentences much as the civil system has juries decide judgments.
A counter-argument is that studies show, at least in second-degree murder cases where juries are allowed to recommend mercy, that more punitive sentences increase perceptions of legitimacy, and that judges' declining to follow juries' recommendations does not decrease public confidence and perceptions of fairness and legitimacy. Arguments that have been raised against sentencing by jury are that juries are not as accountable as judges; that putting them in charge of determining both guilt and the sentence concentrates too much power in one body; and that different juries may differ widely in the sentences they impose. Counterarguments are that the lack of accountability of jurors to a higher authority preserves their judicial independence, and that judges are also capable of differing from other judges in the sentences they impose. Judges may even deviate from their own usual sentencing practices if the case is high-profile or a judicial election is coming up. Also, disparities are not always a sign of arbitrariness; sometimes they may reflect geographical differences in public attitudes toward a given crime, or a jury's taking proper account of the individual circumstances of each offender. It is sometimes argued that an unreasonable juror may force the rest of the jury into an undesirable compromise to find the defendant guilty but impose an overly light sentence. A counter-argument is that whether this is bad or good is a matter of perception since "one juror's principled holdout is another juror's irrational nullification. One jury's 'compromise' is another jury's perfectly appropriate give-and-take deliberations." According to University of Chicago Law School lecturer Jenia Iontcheva, sentencing decisions are well-suited to being made through a process of deliberative democracy rather than by experts such as judges, since they involve deeply contested moral and political issues rather than scientific or technical issues. She argues that since sentencing requires individualized, case-by-case assessments, sentences should be decided through small-scale deliberation by juries, as opposed to having lawmakers codify general policies for mechanical application by judges. An advantage Iontcheva cites of having juries come together to deliberate on sentences is that the jurors may alter their preferences in the light of new perspectives. She argues that the hearing and consideration of diverse opinions will give the sentencing decisions greater legitimacy, and that engaging ordinary citizens in government through this process of deliberative democracy will give these citizens confidence about their ability to influence political decisions and thus increase their willingness to participate in politics even after the end of their jury service. Racial and other minorities may also benefit from having greater representation among jurors than among judges. In jurisdictions that do not have any statutory provisions formally allowing jury sentencing, judges have sometimes consulted with the jury on sentencing anyway. At the federal level, the practice of polling the jury and using their input in sentencing was upheld on appeal by the 6th U.S. Circuit Court of Appeals. Sentencing is said to be more time-consuming for jurors than the relatively easy task of ascertaining guilt or innocence, which means an increase in jury fees and in the amount of productivity lost to jury duty. 
In New South Wales, a 2007 proposal by Chief Justice Jim Spigelman to involve juries in sentencing was rejected after District Court Chief Judge Reg Blanch cited "an expected wide difference of views between jurors about questions relating to sentence". Concerns about jury tampering through intimidation by defendants were also raised. Germany and many other continental European countries have a system in which professional judges and lay judges deliberate together at both the trial and sentencing stages; such systems have been praised as a superior alternative because the mixed court dispenses with most of the time-consuming practices of jury control that characterize Anglo-American trial procedure, yet serves the purposes of a jury trial better than plea bargaining and bench trials, which have displaced the jury from routine American practice. Civil rights leader James Bevel was sentenced to 15 years in prison pursuant to the recommendation of a Virginia jury that found him guilty of having sex with his teenage daughter in the 1990s when they lived in Leesburg. The sentencing range had been 5 to 20 years. Jurors are selected from a jury pool formed for a specified period of time—usually from one day to two weeks—from lists of citizens living in the jurisdiction of the court. The lists may be electoral rolls (i.e., a list of registered voters in the locale), lists of people who have driver's licenses, or other relevant databases. Once selected, membership of a jury pool is, in principle, compulsory. Prospective jurors are sent a summons and are obligated to appear in a specified jury pool room on a specified date. However, jurors can be released from the pool for several reasons, including illness, prior commitments that cannot be abandoned without hardship, change of address to outside the court's jurisdiction, travel or employment outside the jurisdiction at the time of duty, and others. Jurisdictions often pay token amounts for jury duty, and many issue stipends to cover transportation expenses for jurors. Workplaces cannot penalize employees who serve jury duty. Payments to jurors vary by jurisdiction. In the United States, jurors for grand juries are selected from jury pools. Selection of jurors from a jury pool occurs when a trial is announced and juror names are randomly selected and called out by the jury pool clerk. Depending on the type of trial—whether a 6-person or 12-person jury is needed, in the United States—anywhere from 15 to 30 prospective jurors are sent to the courtroom to participate in voir dire, pronounced [vwaʁ diʁ] in French, and defined as the oath to speak the truth in the examination testing the competence of a juror or, in another application, a witness. Once the list of prospective jurors has assembled in the courtroom, the court clerk assigns them seats in the order their names were originally drawn. At this point the judge will often ask each prospective juror to answer a list of general questions such as name, occupation, education, family relationships, and time conflicts for the anticipated length of the trial. The list is usually written up and clearly visible, to assist nervous prospective jurors, and may include several questions uniquely pertinent to the particular trial. These questions are intended to familiarize the judge and attorneys with the jurors and to glean biases, experiences, or relationships that could jeopardize the proper course of the trial.
After each prospective juror has answered the general slate of questions, the attorneys may ask follow-up questions of some or all prospective jurors. Each side in the trial is allotted a certain number of challenges to remove prospective jurors from consideration. Some challenges are issued during voir dire, while others are presented to the judge at the end of voir dire. The judge calls out the names of the anonymously challenged prospective jurors, and those jurors return to the pool for consideration in other trials. A jury is then formed from the remaining prospective jurors in the order that their names were originally chosen. Any prospective jurors not thus impaneled return to the jury pool room. Scholarly research on jury behavior in American non-capital criminal felony trials reveals that jury outcomes appear to track the opinions of the median juror, rather than the opinions of the extreme juror on the panel, even though juries were required to render unanimous verdicts in the jurisdictions studied. Thus, although juries must render unanimous verdicts, in run-of-the-mill criminal trials they behave in practice as if they were operating under a majority-rules voting system. As much of the research on social conformity suggests, individuals tend to lose their sense of individuality when faced with powerful group forces (i.e., normative influence, informational influence, interpersonal influence). This raises the question of whether the effectiveness of jury decision-making is compromised by individuals' tendencies to conform to the normative transmissions of a group. Since a clear archetype for determining guilt does not exist, the criminal justice system must rely on rulings handed down by juries. Even after a decision has been made, it is virtually impossible to know whether a jury has been correct or incorrect in acquitting or convicting a defendant. Although establishing the effectiveness of juries is an arduous task, contemporary research has provided partial support for the proficiency of juries as decision makers. Evidence has shown that jurors typically take their roles very seriously. According to Simon (1980), jurors approach their responsibilities as decision makers much in the same way as a court judge – with great seriousness, a lawful mind, and an evidence-based concern for consistency. Research has indicated that, by actively processing evidence, making inferences, and using common sense and personal experience to inform their decision-making, jurors are effective decision makers who seek thorough understanding, rather than passive, apathetic participants unfit to serve on a jury. Evidence supporting jury effectiveness has also been illustrated in studies that investigate the parallels between judge and jury decision-making. According to Kalven and Zeisel (1966), it is not uncommon to find that the verdicts passed down by juries following a trial match the verdicts held by the appointed judges. A survey of judges and jurors in approximately 8,000 criminal and civil trials found that the verdicts reached by the two were in agreement 80% of the time. Jurors, like most individuals, are not free from holding social and cognitive biases. People may negatively judge individuals who do not adhere to established social norms (e.g., an individual's dress sense) or do not meet societal standards of success.
Although these biases tend to influence jurors' individual decisions during a trial, while working as part of a group (i.e., a jury) these biases are typically controlled. Groups tend to exert buffering effects that allow jurors to disregard their initial personal biases when forming a credible group decision. - "CURRENT GRAND JURY REPORTS – Miami Dade Office of the State Attorney". Miamisao.com. Retrieved 2014-01-05. - See, e.g., Section 1245.1 of Pennsylvania's codified laws regarding coroners. http://www.pacoroners.org/Laws.php - See, e.g., Inquest Schedule, Jury Findings and Verdicts (2013) of British Columbia. http://www.pssg.gov.bc.ca/coroners/schedule/index.htm (retrieved March 8, 2013) - See, e.g., Sections 13-71-112 and 30-10-607, Colorado Revised Statutes - W.L. Warren, "Henry II", University of California Press (1973) - Daniel Klerman, "Was the Jury Ever Self-Informing?" Archived 2011-07-19 at the Wayback Machine, Southern California Law Review 77 (2003), 123. - Oxford History of England, 2nd ed. 1955, vol. III, Domesday Book to Magna Carta, A. L. Poole, pp. 397–398. - Garnish, Lis (1995). "Wantage Church History" (PDF). Local History Series. Vale and Downland Museum. Archived from the original (PDF) on 2007-09-25. Retrieved 2009-09-24. - See, for example, discussions of the Brunner theory of testimonial, rather than judicial, participation as jury origin, explored in MacNair, Vicinage and the Antecedents of the Jury – I. Theories, in Law and History Review, Vol. 17 No. 3, 1999, pp. 6–18. - Carey, Christopher. "Legal Space in Classical Athens." Greece & Rome 41(2): October 1994, pp. 172–186. - Holdsworth, William Searle (1922). A History of English Law. 1 (3rd ed.). Little, Brown. pp. 268–269. OCLC 48555551. - Dowlen, Oliver. Sorted: Civic Lotteries and the Future of Public Participation. (MASS LBP: Toronto, 2008) p. 38 - https://www.legislation.gov.uk/ukpga/1825/50/pdfs/ukpga_18250050_en.pdf - King, PJR. "'Illiterate Plebeians, Easily Misled': jury composition, experience, and behaviour in Essex, 1735–1815". Cockburn and Green (Eds), Twelve Good Men and True: The Criminal Trial Jury in England, 1200–1800 (Princeton UP 1988). - Crosby, Kevin (2019). "Restricting the Juror Franchise in 1920s England and Wales". Law and History Review. 37 (1): 176. doi: 10.1017/S0738248018000639. - Crosby, Kevin (2019). "Restricting the Juror Franchise in 1920s England and Wales". Law and History Review. 37 (1): 195. doi: 10.1017/S0738248018000639. - Thomas, Cheryl; Lloyd-Bostock, Sally. "The Continuing Decline of the English Jury". N Vidmar (Ed), World Jury Systems (OUP 2000). - Williams, at 86 - Review could reduce jury numbers, BBC News, 26 April 2008 - Scotland's unique 15-strong juries will not be abolished, The Scotsman, 11 May 2009 - Is "The More the Merrier?", Mental Floss, November–December 2011, p. 74 - Verkaik, Robert (September 3, 2001). "Juries 'swayed by dominant speakers'". Independent. Retrieved May 22, 2018. - Uhlig, Robert (September 4, 2001). "Juries are 'too large for correct verdicts'". The Telegraph. Telegraph Media Group Limited. Retrieved May 22, 2018. - Sanders, Joseph (16 January 2008). "A Norms Approach to Jury "Nullification:" Interests, Values, and Scripts". Law & Policy. 30 (1): 12–45. doi: 10.1111/j.1467-9930.2008.00268.x. Archived from the original on 5 January 2013.
- Jury Trials: In Favor Archived 2010-11-28 at the Wayback Machine, eJournal USA, Anatomy of a Jury Trial, 1 July 2009 - Apprendi, at 490 - See, e.g., Federal Rule of Civil Procedure 52 (2011); Colorado Rule of Civil Procedure 52 (2011). - Tenn. Code Ann. §§ 40-20-104, 40-20-107 - Texas Code of Criminal Procedure Article 37.07 Sec. 1(b) - jury nullification definition – Dictionary – MSN Encarta. Archived from the original on 2010-12-07. - Nullifying the Jury: "The Judicial Oligarchy" Declares War on Jury Nullification, Washburn Law Journal, May 2, 2007 - Patrick Devlin, 'Trial by Jury' (Stevens & Sons 1956) - Kevin Crosby, 'Controlling Devlin's Jury: what the jury thinks, and what the jury sees online', Criminal Law Review 15 - New Statesman, 2000-10-09. - Luckhurst, Tim (March 20, 2005). "The case for keeping 'not proven' verdict". The Sunday Times, TimesOnline. Retrieved 2009-09-24. - Broadbridge, Sally (15 May 2009). "The "not proven" verdict in Scotland". Standard Note SN/HA/2710. U.K. Parliament, House of Commons, Home Affairs Section. Archived from the original on 17 January 2012. Retrieved 2009-09-24. - For example Uniform Civil Procedure Rules 2005 (NSW) r 29.2, Supreme Court (general civil procedure) rules 2015 (Vic) r 47.02. - Smith v The Queen HCA 27, (2015) 255 CLR 161 judgement summary (PDF), High Court (Australia) - Commonwealth of Australia Constitution (Cth) s 80 Trial by jury. - Cheng v The Queen HCA 53, (2000) 203 CLR 248, High Court (Australia). - R v Federal Court of Bankruptcy; Ex parte Lowenstein HCA 10, (1938) 59 CLR 556 at p 582 per Dixon and Evatt JJ dissenting, High Court (Australia). - Cheatle v The Queen HCA 44, (1993) 177 CLR 541, High Court (Australia). - Alqudsi v The Queen HCA 24, (2016) 258 CLR 203 judgement summary (PDF), High Court (Australia) - Taxquet v Belgium, 13-01-2009 Archived 2012-05-31 at the Wayback Machine - Criminal Code, RSC 1985, c C-46, s 785, "summary conviction court" - Criminal Code, RSC 1985, c C-46, s 536 - Criminal Code, RSC 1985, c C-46, ss 471–473. - Criminal Code, RSC 1985, Part XX: Jury Trials - R. v. Thatcher, 1 S.C.R. 652 - R. v. Robinson (2004), 189 C.C.C. (3d) 152 (Ont. C.A.) - Criminal Code, RSC 1985, c C-46, s 631(2.1). - Criminal Code, RSC 1985, c C-46, s 644. - Casper, Gerhard; Zeisel, Hans (January 1972). "Lay Judges in the German Criminal Courts". Journal of Legal Studies. 1 (1): 135–191. doi: 10.1086/467481. JSTOR 724014. - "Jury system in Parsi Matrimonial Disputes". RIGHT TO RECALL AGAINST CORRUPTION – Facebook. August 30, 2016. - Jean-Louis Halpérin (25 March 2011). "Lay Justice in India" (PDF). École Normale Supérieure. Archived from the original (PDF) on 2014-05-03. - "CONSTITUTION OF IRELAND: TRIAL OF OFFENCES". Irish Statute Book. August 2012. Retrieved 1 November 2013. - "Jury service". Citizens Information Board. 2 October 2012. Retrieved 1 November 2013. - "Juries Act, 1976". Irish Statute Book. Retrieved 1 November 2013. - "Civil Law (Miscellaneous Provisions) Act 2008; PART 6: Juries". Irish Statute Book. Retrieved 1 November 2013. - "Courts Service to notify gardaí of jury non-reporting". Irish Legal News. 16 February 2016. Retrieved 17 February 2016. - "Criminal trials".
Citizens Information Bureau. 29 August 2012. Retrieved 1 November 2013. - "Special Criminal Court". Citizens Information Board. 6 August 2009. Retrieved 1 November 2013. - "Role of the jury". Citizens Information Board. 5 September 2012. Retrieved 1 November 2013. - "Inquests". Citizens Information Bureau. 9 September 2010. Retrieved 1 November 2013. - "Consultation Paper on Jury Service". Irish Law Reform Commission. 29 March 2010. Retrieved 1 November 2013. - "JURY SERVICE" (PDF) (107–2013). Law Reform Commission. April 2013. 1393-3132. - Shatter, Alan (9 July 2013). "Courts and Civil Law (Miscellaneous Provisions) Bill 2013: Second Stage (Continued)". Dáil Éireann debates. Retrieved 1 November 2013. Part 5 of the Bill amends the Juries Act 1976 to provide for the appointment of up to three additional jurors to deal with lengthy trials. The provision follows a recommendation to this effect in the Law Reform Commission's recently published report on jury service - "Courts and Civil Law (Miscellaneous Provisions) Act 2013, Section 23". Irish Statute Book. 24 July 2013. Retrieved 1 November 2013. - McDonald, Dearbhail (1 November 2013). "Anglo criminal trial: Larger 15 strong jury panel appointed". Irish Independent. Retrieved 1 November 2013. - "NZ's first majority guilty verdict". Stuff. Retrieved 2009-06-03. - "Lov om rettergangsmåten i straffesaker (Straffeprosessloven)". Lovdata. Retrieved 2008-08-22. - "18.12.2008 Ссылки на недоказанность наличия. К профессиональному празднику чекисты получили два подарка, значительно облегчающие карьерный рост в органах госбезопасности". Новая газета. - "09.01.2019 Суды присяжных появились в 55 регионах России". Российская газета. - Terrill 2009, p. 439. - "Ley Orgánica 5/1995, de 22 de mayo, del Tribunal del Jurado" (in Spanish). 1995. Retrieved 2019-04-03. - ESPAÑA | Juicio a Mikel Otegi por asesinar a dos ertzainas. Un jurado popular absuelve al joven de Jarrai - Archived December 22, 2011, at the Wayback Machine - Tryckfrihetsförordning (1949:105-SFS 2010:1409) Riksdagen (in Swedish) - The Freedom of the Press Act/Sweden, The International Constitutional Law Project - "The advantages and disadvantages of lay judges from a Swedish perspective". Cairn.info. Retrieved 2014-01-05. - "Så blir du vald - Bli nämndeman". - Lloyd-Bostock S, Thomas C. (1999). DECLINE OF THE "LITTLE PARLIAMENT": JURIES AND JURY REFORM IN ENGLAND AND WALES Archived 2012-04-02 at the Wayback Machine. Law and Contemporary Problems. - Freeman, Simon (June 21, 2005). "Jury trials 'intolerable' in major fraud cases". The Sunday Times. - "First trial without jury approved". BBC News. 18 June 2009. - Glendon MA, Carozza PG, Picker CB. (2008) Comparative Legal Traditions, p. 251. Thomson-West. - juries. "a group of people who have been chosen to listen to all the facts in a trial in a law court and to decide if a person is guilty or not guilty, or if a claim has been proved: members of the jury The jury has/have been unable to return a verdict (= reach a decision). Police officers aren't usually allowed to be/sit/serve on a jury". Cambridge Dictionary. Retrieved 1 June 2020. - O'Day, Alan (1994). Dimensions of Irish terrorism. G.K. Hall. ISBN 0816173389. OCLC 29023375. - "Why Was I Picked For Jury Service?". Courtroom Advice. Retrieved 2010-09-21. - King NJ (1999). "The American Criminal Jury". Law and Contemporary Problems. 62 (2): 41–67. doi: 10.2307/1192252. JSTOR 1192252. Retrieved 2009-06-04. - Landsman S.
(1999). "The Civil Jury in America". Law and Contemporary Problems. 62 (2): 285–304. doi: 10.2307/1192260. JSTOR 1192260. Retrieved 2009-06-04. - Amar, A.R. (1998). The Bill of Rights. New Haven, CT: Yale University. pp. 81–118. - "Plea Bargains and the Role of Judges". 2008 National Convention Breakout Session. The American Constitution Society for Law and Policy (ACS). Archived from the original on 2009-10-07. Retrieved 2009-09-24. - Ring v. Arizona, 536 U.S. 284 (2002) - Unanimous Jury Votes for Life Sentence, but Alabama Judge Imposes Death Death Penalty Information Center - This power is often used in drug cases "to impose an enhanced sentence ... based on the sentencing judge's determination of a fact that was not found by the jury or admitted by the defendant". In April 2008, the U.S. District Court, in a 236 page opinion Archived 2008-05-18 at the Wayback Machine to address this ruled that juries should be told before they deliberate if a defendant is facing a mandatory minimum sentence and also called it "inappropriate" to ignore the juries power to refuse to convict (jury nullification). - King, Nancy J. (2003). "The Origins of Felony Jury Sentencing in the United States". Chi.-Kent L. Rev. 78 (937). - "59 ARK.CODE ANN. § 5-4-103". 2010. If a defendant is charged with a felony and is found guilty of an offense by a jury, the jury shall fix punishment . . . .Cite journal requires "60 KY.REV.STAT.ANN. § 532.055". 2010. Upon return of a verdict of guilty . . . the court shall conduct a sentencing hearing before the jury, if such case was tried before a jury. In the hearing the jury will determine the punishment to be imposed within the range provided elsewhere by law. "61 MO.REV.STAT. § 557.036(3)". 2013. If the jury at the first stage of a trial finds the defendant guilty of the submitted offense . . . The jury shall assess and declare the punishment as authorized by statute. - 62 OKLA.STAT.ANN. tit. 22, § 926.1 (West 2010) (“In all cases of a verdict of conviction for any offense against any of the laws of the State of Oklahoma, the jury may, and shall upon the request of the defendant assess and declare the punishment in their verdict within the limitations fixed by law . . . .”). "63 TEX.CODE CRIM.PROC. art. 37.07(b)". 2009. [I]n other cases where the defendant so elects in writing before the commencement of the voir dire examination of the jury panel, the punishment shall be assessed by the same jury . . . . If a finding of guilty is returned, the defendant may, with the consent of the attorney for the state, change his election of one who assesses the punishment. "VA.CODE ANN. § 19.2-295". 2011. [T]he term of confinement in the state correctional facility or in jail and the amount of fine, if any, of a person convicted of a criminal offense,shall be ascertained by the jury, or by the court in cases tried without a jury. - GA. CODE ANN. § 27-2502 (1953) - ILL, ANN. STAT. ch. 38, § 754a (Smith-Hurd Supp. 1959) - MONT. REV. CODES ANN. § 94-7411 (1947) - TENN. CODE ANN. §§ 40-2704 to −2707 (1955) - Rankin, Micah B. (2015). "The Origins, Evolution and Puzzling Irrelevance of Jury Recommendations in Second-Degree Murder Sentencing". Queen's Law Journal. 40 (2). - Kirgis, Paul F. (2005). "The Right to a Jury Decision on Sentencing Facts after Booker: What the Seventh Amendment Can Teach the Sixth". Ga. L. Rev. 39 (897). - "Statutory Structures for Sentencing Felons to Prison". Columbia Law Review. 60 (8): 1134–1172. 1 December 1960. doi: 10.2307/1120351. JSTOR 1120351. 
- Iontcheva, Jenia (April 2003). "Jury Sentencing as Democratic Practice". Virginia Law Review. 89 (2): 311–383. doi: 10.2307/3202435. JSTOR 3202435. - Lewis, O.F. (1922). The development of American prisons and prison customs, 1776–1845. Prison Association of New York. Any convict commencing a quarrel with another should "suffer such punishment (within the prison) as should be awarded by an impartial jury, but not over four lashes, or 10 hours of solitary confinement." - Webster, Charles W. (1960). "Jury Sentencing – Grab-Bag Justice". Sw L.J. 14 (221). - Alschuler, Albert (Winter 2003). "The changing purposes of criminal punishment: A retrospective on the past century and some thoughts about the next". The University of Chicago Law Review. 70 (1): 1–22. doi: 10.2307/1600541. JSTOR 1600541. - Hoffman, Morris B. "The Case for Jury Sentencing". Duke Law Journal. 52 (951). - Lanni, Adriaan (1 May 1999). "Jury Sentencing in Noncapital Cases: An Idea Whose Time Has Come (Again)?". The Yale Law Journal. 108 (7): 1775–1803. doi: 10.2307/797450. JSTOR 797450. - Bibas, Stephanos and Klein, Susan R. (2008). "The Sixth Amendment and Criminal Sentencing". Faculty Scholarship (921). - Carrington, Melissa (Fall 2011). "Applying Apprendi to jury sentencing: why state felony jury sentencing threatens the right to a jury trial" (PDF). University of Illinois Law Review. 2011 (4): 1359–1385. - Russell, Sarah F. (2015). "Jury Sentencing and Juveniles: Eighth Amendment Limits and Sixth Amendment Rights". B.C.L. Rev. 56 (553). - Kelly, Ashley and Dujardin, Peter (1 April 2012). "Virginia judges rarely question juries' sentencing recommendations". Daily Press. - Durkin, Alana (1 January 2016). "Virginia eyes new sentences after juries didn't get key fact". Fredericksburg Free-Lance Star. - Ress, David (21 January 2019). "House Courts subcommittee kills parole bill". Daily Press. - Stone, Caleb R. (2014). "Sentencing Roulette: How Virginia's Criminal Sentencing System is Imposing an Unconstitutional Trial Penalty That Suppresses the Rights of Criminal Defendants to a Jury Trial". Wm.& Mary Bill RTS. J. 23 (559). - Green, Frank (18 October 2009). "Number of juried trials slumps both in Va., nationwide". Daily Progress. - Remkus, Ashley (21 July 2017). "Did judicial override end in Alabama? Some say judges can still overrule jury over death penalty". AL.com. - Ribeiro, Gianni; Antrobus, Emma (November 2017). "Investigating the Impact of Jury Sentencing Recommendations Using Procedural Justice Theory". New Criminal Law Review. 20 (4): 535–568. doi: 10.1525/nclr.2017.20.4.535. - Heisig, Eric (29 June 2016). "Federal appeals court upholds judge's lowest possible sentence in child-porn case". Cleveland.com. - Pearlman, Jonathan (27 April 2007). "Keep juries away from sentencing, say judges". Sydney Morning Herald. - Langbein, John H. (January 1981). "Mixed Court and Jury Court: Could the Continental Alternative Fill the American Need?". American Bar Foundation Research Journal. 6: 195–219. doi: 10.1111/j.1747-4469.1981.tb00426.x. - Barakat, Matthew (10 April 2008). "Civil Rights Leader Convicted of Incest". Associated Press. - Romo, Vanessa (11 December 2018). "Charlottesville Jury Recommends 419 Years Plus Life For Neo-Nazi Who Killed Protester". NPR. - Patrick J.
Bayer, Randi Hjalmarsson, Shamena Anwar, "Jury Discrimination in Criminal Trials" (September 2010) Economic Research Initiatives at Duke (ERID) Working Papers Series No. 55 http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1673994 - Forsyth, D.R. 2010. Group Dynamics, 5th Edition. Belmont, CA: Thomson Wadsworth. ISBN 0-534-36822-0 - Simon, R. J. (1980). The jury: Its role in American society. Lexington, MA: Heath - Kalven, H. & Zeisel, H. (1966). The American Jury. Boston: Little, Brown. - Wrightsman, L., Nietzel, M. T., & Fortune, W. H. (1998). Psychology and the legal system (4th edition). Monterey, California: Brooks/Cole. - Kerr, N. L., & Huang, J. Y. (1986). How much difference does one juror make in jury deliberation? Personality and Social Psychology Bulletin, 12, 325–343.
Determine whether the curve is the graph of a function of x; if so, find the domain and range of the function. The curve is the graph of a function of x. The vertical line test states that a curve in the xy-coordinate plane is the graph of a function of x if and only if no vertical line intersects the curve at more than one point. Perform the vertical line test for the given graph: draw a vertical line so that it passes through the curve, as shown below in Figure 1. It is observed from Figure 1 that any vertical line intersects the curve at exactly one point. Therefore, the curve is the graph of a function of x. Since the domain of a function is the set of all possible x-values of the graph, the domain of the function is . Since the range of a function is the set of all possible y-values of the graph, the range of the function is .
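As a rough computational analogue of the vertical line test, one can check whether a set of sampled (x, y) points could come from a function of x: no x-value may be paired with two different y-values. The following is only an illustrative sketch under that assumption; the point data, function name, and tolerance are invented for the example and are not part of the original solution.

```python
# Illustrative sketch: a discrete analogue of the vertical line test.
# A relation given as (x, y) samples can be the graph of a function of x
# only if no x-value is paired with more than one distinct y-value.

def passes_vertical_line_test(points, tol=1e-9):
    """Return True if no x-value maps to two different y-values."""
    seen = {}  # x-value -> first y-value observed for that x
    for x, y in points:
        if x in seen and abs(seen[x] - y) > tol:
            return False  # a "vertical line" at this x would hit two points
        seen.setdefault(x, y)
    return True

# Example: samples of y = x^2 pass the test; samples of the sideways
# relation x = y^2 fail, because one x corresponds to two y-values.
parabola = [(x / 10, (x / 10) ** 2) for x in range(-20, 21)]
sideways = [((y / 10) ** 2, y / 10) for y in range(-20, 21)]

print(passes_vertical_line_test(parabola))  # True
print(passes_vertical_line_test(sideways))  # False
```

This mirrors the graphical test in the solution: sweeping a vertical line across the curve corresponds to asking, for each x, whether more than one y is present.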
The Weimar Republic: Germany's Unstable Democracy In Germany, nationalism had played a key role in people's identities and in national politics since the state's unification in 1871, and even well before. Germans increasingly found themselves making comparisons between Germans and French, Germans and Slavs, and between Germans and Jews, with the Germans, of course, always higher in the racial/ethnic hierarchy. With its loss in WWI, German nationalism took a radical turn. Germany, unlike the other countries we have been discussing, emerged from the Paris Peace negotiations a shrunken country. It had lost chunks of territory to Poland and France as well as additional slivers to Belgium and Denmark. It was cut into two pieces by the Polish Corridor. It lost all of its colonial possessions. Key resource-rich areas had been lopped off. Its army and navy were limited by the terms of the peace. While throughout Central Europe countries armed and attacked one another, Germany was systematically demilitarized and weakened. The victors in Paris refused to negotiate with the old imperial regime and demanded a democratic order in Germany. The result was perhaps predictable. While some Germans supported the new democratic order, many more rejected the new government and the terms of the peace. German nationalism increasingly became the rallying cry for the right wing. Standing for Germany meant standing against Paris, Poland, democracy, communism and Bolshevism, and Jews. Such nationalism would feed into notions that the German army during WWI had not been defeated at the front but had been "stabbed in the back" by defeatists at home. Worse, so the theory went, these defeatists (socialists, communists, democrats, Jews) were now in power in the newly elected government. The new Central Europe was composed of ethnically defined and territorially insecure states. These states' nationalism, reinforced by Wilson's Fourteen Points, justified both outward hostility toward neighbors and inward oppression of ethnic minorities. The growing field of ethnic and racial science and eugenics threw fuel on the fire. In the context of this heightened nationalism and competition between nations, democratic governing structures were difficult to maintain. Germany provides a good example of the radicalization of European politics in the 1920s. Germany lost the First World War on the battlefield, but never experienced foreign armies or fighting on its territory. As a result, the population was not psychologically prepared to accept unconditional surrender and the eventual terms of the peace imposed on it by the allies. These terms included Germany assuming full responsibility for starting the war (War Guilt Clause), huge reparation payments to the allies (primarily France) and countries like Belgium, major territorial losses, limitations to both its army and navy, and the demilitarization of borders. What's more, the responsibility for signing the Versailles Treaty fell to a new democratic German government, the Weimar Republic, not to the military establishment or the former imperial regime. Added to this were many hardships. The German currency declined in value, wiping out billions of dollars of wealth. Vast agricultural destruction throughout Europe caused massive food shortages, especially among the defeated countries. A great influenza pandemic (the Spanish Flu) spread through Europe, taking millions of lives, especially of people already malnourished or weakened in some way by the war.
Hundreds of thousands of German men returned home from war crippled, shell-shocked, unemployed, and/or still armed and loyal to their military units. It was in this highly unfavorable context that the Weimar Republic, Germany's new democratic government, tried to take root and grow. The Weimar Republic survived until 1933 for two principal reasons, both of which were also weaknesses. First, it was able to cut deals with conservative elites in the government administration and the military. Weimar politicians placated the right wing by giving the judiciary freedom to punish those on the left (communists and socialists) while allowing right-wing criminals to escape judgment. We see this quite clearly in the government's reaction to the paramilitary coup in 1920, the so-called Kapp Putsch. The putsch failed when it encountered massive labor opposition (general strikes), but most of the leaders and major participants in the coup against Weimar democracy were let off without (or with minor) punishment. By contrast, the state's reaction to leftist rebellions (the Spartacist Uprising, the Red Ruhr Uprising) was forceful and brutal, and depended on the participation of the conservative military establishment (or paramilitary Freikorps). During the Weimar Republic, the military was permitted to establish itself as a force beyond government control and in the hands of old military elites from the war. This freedom convinced the military not to move to topple the government, but it also meant that the government would not (and could not) limit the force of the military in domestic life. Full civilian control of the military, a fundamental principle of modern democracies, was absent in Weimar Germany. The second survival mechanism/weakness was the Weimar government's ability to establish a fluctuating coalition of center-left to center-right parties that were willing (at least for a time) to work within the structures of the constitutional framework. At first, these parties sought to distance themselves from the extreme positions on either side and to build coalitions that could govern. When times improved after the crushing inflation of the early 1920s, it seemed like this centrist politics might win out over the politics of the extremes, though it should be said that the spectrum, in general, was shifting to the right even as, by the mid-1920s, Germany was witnessing swift economic recovery. And yet, below the level of electoral politics, opposition was brewing on both the right and left. The communist party (on the left) and the nationalist parties (on the right) were growing increasingly hostile to the government. On the left, the main complaints were that the government was acting as a shield for right-wing interests, and that these interests were co-opting the state and using it to oppress the workers. On the right, the arguments were many: the state had become a pawn of the Jews; the state was in danger of undergoing a communist revolution; the state had sold out Germany to the allies; the state was doing nothing to prevent German humiliations in either the economic or the military sector. What's more, the right wing saw traditional values under threat: families were changing (women working); cities were growing and becoming the home of foreigners and Jews; crime was on the rise; traditional German family businesses were being pushed out by Americanized industries and an international marketplace stacked in favor of other countries.
German folk culture and values were seen as under siege by the French, Jews, Americans, Slavs, and Bolsheviks, in other words by the principal enemies of the German right. Throughout the 1920s, however, this movement remained relatively small. Hitler's Beer Hall Putsch in 1923, during which the Nazis tried to seize power in Munich, failed miserably. Hitler was arrested and jailed. The Nazi movement appeared weak. It is important to remember that there were two different types of right-wing movements at work in Germany in the 1920s and early 1930s. The first was the more traditional right wing, what we might call the imperial Wilhelmine right or conservative right. Here, we find old military, bureaucratic, and economic elites, those remaining from the Wilhelmine era and the war. In addition, we see the rise of what could be termed a "modern" right, a revolutionary nationalism that sought to mobilize the masses in new ways, through economic reforms, technological innovation, propaganda, racial fears, and visions of future nationalist glory. What caused the governing coalition in the Weimar Republic to crumble? Why did Germany, experiencing steady recovery by 1926, fall to (or embrace) Nazism seven years later? Why did Germans overwhelmingly support this new type of radical racial nationalism? These are questions historians and critics have tried to answer since 1933, key questions for understanding the complicated interaction between culture and politics and the inherent fragility of all democratic political orders, including those of today. To keep things concise, I will emphasize two aspects, though one will be discussed more thoroughly in a subsequent chapter. The first was the global economic depression that began in 1929 and crippled the world economy, hitting Germany especially hard. The second aspect is more complicated and has to do with German culture and politics in the 1920s. During the late 1920s, Weimar Germany witnessed an erosion of the political center by the left and right wing fringes, but especially by forces on the right. The Culture of Decline in Weimar Germany The culture of 1920s Germany was shaped by the experiences of WWI and its aftermath, including the peace negotiations, influenza and hyperinflation. These experiences, however, were processed very differently depending on one's political, social and cultural point of view. With millions of Germans killed and wounded, evidence of the war's destruction and its pointlessness seemed everywhere to people on the political left. Increasingly, socialists and communists saw the war as an enormous scam instituted by the elite class against common German citizens. The "people" suffered, socialists and communists argued, while industrialists made huge amounts of money and military and political leaders indulged their wildly idealized visions of world economic and political domination. This view caused a good part of the German population to reject the status quo. Those who had been against the war from the beginning – namely the communist party – all of a sudden seemed like the most trustworthy advocates for the rights of the common man. The German Communist Party rejected the new democratic order and sharply criticized socialists who worked within it. A potential left wing challenge to the Weimar government was always present. Such challenges had been made in the first days of the republic's life. In Berlin, communists organized around the Spartacus League attempted to establish an independent socialist polity.
The socialist-led Weimar government turned to units of disbanded military under the command of former generals to violently put down the communist threat. These Freikorps soldiers would later play a key role in the toppling of the republic. The threat to the governing "center" of the Weimar Republic would not have been cause for much concern had the German people – primarily the German middle and working classes – remained loyal to the government. That they didn't is one of the most discussed historical issues of the 20th century. Why didn't the German middle classes choose democracy over authoritarianism? For what reasons did they choose war over international cooperation and peace? Why did they eventually sacrifice everything to wage a brutal and total war against their neighbors and the world? The first part of the answer is economic. The hyperinflation of the early 1920s wiped out the savings of most of the German middle class. Economic security – normally a restraint against radicalism – vanished in the aftermath of the war. Second, the war and the resulting peace caused a deep national humiliation. The middle and working classes had most identified with the German war effort. Loss on the battlefield was a severe blow to the collective German identity. Third, the Weimar government itself was running into major problems, first and foremost living up to its social promises. Though the German economy was growing, it was not growing quickly enough to make good on government promises. Both the right and the left felt like the government was increasingly distant from its goals. Gradually, more people drifted from the political center. Finally, there was a sense in Germany that, in general, the world was in crisis, that things were not going well, that the long years of German progress were coming to an end, or had already ended. Modernity, according to this German view, with its assembly lines and cities and American culture, with its French notions of civilization and its Jewish world finance, was ruining what had made the "West" great. This view, that western civilization had reached its zenith and had now begun a long and slow (but basically irreversible) decline, was famously advanced by the historian Oswald Spengler. Spengler, a world historian, advanced this provocative and (for Germans) terrifying thesis of Western decline in two volumes, one published in 1918 and the next in 1922 – both during the heart of German travails. Spengler became an instant hit in Germany, the book selling over 100,000 copies by mid-decade. Modern civilization based on capitalism and democracy, he argued, was in decline. Democracy, according to Spengler, was a weak and corrupt form of government. Capitalism was the opposite of all that was vital about the people, especially since it divided a people into social classes and stoked class-based antagonism. Others took up Spengler's diagnosis of society and looked for a path toward rejuvenation. Young Adolf Hitler gravitated immediately to Spengler's pessimistic notions and posed a national-socialist-authoritarian answer to the dilemma of civilization's decline. This response to the issue of Western or German or civilizational decline became the core appeal of Nazism. On the other side of the political spectrum, communists looked east to the Bolsheviks for a model of how to pull Germany (and the rest of the industrial world) out of the mire. Germans on the right became increasingly nationalist, racist and militant.
Germans on the left became increasingly Bolshevized – meaning that they were ready to support a minority, vanguard party in an overthrow of the state. Increasingly, these became the two main ways for Germans to imagine a way out of the cultural, political and economic morass. Gradually, year after year, the room in the center of German politics shrank. Fringe parties garnered larger percentages of the vote and seats in parliament. The nature of coalition politics in the German Reichstag meant that the centrist parties could no longer govern without support from the fringe. By 1929, however, the fate of the Weimar Republic was not yet sealed. True, it had lost the support of large fractions of the population to the right and left. True, it did not have control over the German military, which was increasingly hostile to the republic by the late 1920s. True, a huge reservoir of popular anxiety existed in both the industrial towns and cities, and especially in the increasingly strained agricultural sector. True, Bolshevik politics in Russia were creating hope among German communists and incredible fear among the German elite and rural and middle classes. True, anti-Semitism was increasing dramatically. True, the German economy was very fragile. And yet, had it not been for the Great Crash of 1929 and the subsequent depression, it is unclear whether the Nazi Party would have succeeded in channeling these different fears and insecurities into a single movement. What we can say for sure is that the polarizing politics of the 1920s left the playing field virtually empty of a serious competitor from the center, one that could have offered a convincing and viable alternative in the atmosphere of the Great Depression. Only the communists were able to put up resistance. Hitler's first act was to purge them.
[Image: An X-ray image of a supernova remnant and its central neutron star. ROSAT satellite image courtesy of NASA.] Neutron stars are the end point of a massive star's life. When a really massive star runs out of nuclear fuel in its core, the core begins to collapse under gravity. When the core collapses, the entire star collapses. The surface of the star falls down until it hits the now incredibly dense core. It then bounces off the core and blows apart in a supernova. All that remains is the collapsed core, a neutron star, or sometimes a black hole if the star was really massive. A typical neutron star is the size of a small city, only 10 kilometers in diameter, but it may have the mass of as many as three suns. It is quite dense: one spoonful of neutron star material on Earth would weigh as much as all the cars on Earth put together. Some neutron stars spin very rapidly and have very strong magnetic fields. If the magnetic poles are not lined up with the star's rotation axis, then the magnetic field spins around very fast. Charged particles can get caught up in the magnetic field and beam away radiation like a lighthouse lamp. This type of neutron star is called a pulsar. Pulsars are detected by their rapidly repeating radio signals beamed at Earth from those charged particles trapped in the magnetic field. When they were first discovered, it was thought that they were radio signals from "Little Green Men" from outer space. Weird.
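To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python (not from the original page). It estimates the density implied by a 10 km diameter and a mass of three suns, and the mass of one teaspoon of such material. The solar mass and teaspoon volume are standard reference values assumed for the calculation, not taken from the text.

```python
import math

SOLAR_MASS_KG = 1.989e30       # standard value for the Sun's mass
TEASPOON_M3 = 4.93e-6          # ~4.93 mL, a common teaspoon volume

mass = 3 * SOLAR_MASS_KG       # "the mass of as many as three suns"
radius = 10e3 / 2              # 10 km diameter -> 5 km radius, in meters
volume = (4 / 3) * math.pi * radius**3

density = mass / volume                      # kg per cubic meter
teaspoon_mass = density * TEASPOON_M3        # kg in one teaspoon

print(f"density  ~ {density:.2e} kg/m^3")
print(f"teaspoon ~ {teaspoon_mass:.2e} kg")  # on the order of 10^13 kg
```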
So far we've looked at functions written as y = f(x) = (some function of the variable x) or x = f(y) = (some function of the variable y). Often, especially in physical science, it's convenient to look at functions of two or more variables (but we'll stick to two here) in a different way, as parametric functions. The easiest way of thinking about parametric functions is to introduce the concept of time. Think about drawing a function like a parabola, from left to right, as time unfolds. Then it's easy to start thinking about both the x-coordinate and the y-coordinate changing as a function of time: x = f(t), y = g(t). In fact, we call time the parameter in this sense. In this section we're going to explore functions for which a point on the graph, (x, y), is given by (f(t), g(t)) – parametric functions. Parametric functions are functions of a number of coordinates (2 for the 2-dimensional plane, 3 for 3-D space, and so on), where each coordinate (x, y, z, ...) is expressed as a function of some parameter, like time: x = f(t), y = g(t), z = h(t), and so on. Function: y = f(x) Parametric: x = g(t), y = h(t) Let's take a look at this graph by making a table of (x, y) coordinates, but we'll match them up with the t's and keep track of those, too. The x-coordinates of each point on the graph of this function are given by x = f(t), and the y's by y = g(t). The (x, y) pairs are plotted in the graph below. Clearly a sideways parabola like this isn't a function when written as y = f(x) because there are two y values for all but one of the x's in the domain. But in parametric form, there's one and only one (x, y) pair for each t. The function would actually have to loop back on itself and intersect to make a duplicate. In order to express a circle as a function, we'd need to solve for y as a function of x (in this case, the radius r would be a fixed number). In taking the required square root, we'd actually get two functions with the ± we'd have to attach to the radical, like this: y = ±√(r² − x²). Now we have a functional form for either the top half of a circle (the "+" function) or the bottom (the "−" function), but not both. If we think about expressing x and y in terms of another variable, however, we can find a nice parametric form for a circle. The obvious parameter is the angle of the circle, measured, as usual, from the positive side of the x-axis in a counterclockwise direction. Here's the diagram: So our parameterization is x = r·cos(θ) and y = r·sin(θ), which traces out a circle of radius r, from θ = 0 to θ = 2π. You might recognize these transformations between polar coordinates (r, θ) and Cartesian coordinates (x, y) if you've worked in polar coordinates before. If we calculate the first few points of that circle, say for angles of θ = 0, π/4 and π/2, we can see that the circle is traced in a counterclockwise direction as θ increases. Here are those values in a table: (r, 0) at θ = 0, (r√2/2, r√2/2) at θ = π/4, and (0, r) at θ = π/2; and here's how they look on a plot of the circle: That's a fancy way of saying "converting from parametric to function form." We'll take our first two examples as they were given in parametric form above, and see how we can get them back to functional form. Take the first example above. We can eliminate the parameter, t, by rearranging the expression for y and plugging y − 1 in for t in the expression for x.
Now we can complete the square on y in a few steps to solve for y: Identify the perfect square on the left and find a common denominator on the right to get: We notice that this function comes in two halves, the upper and the lower, as we would expect for a sideways parabola. Our circle example had the parameterization x = r·cos(θ), y = r·sin(θ). If we're lucky enough to recognize the possibilities in writing x² + y², we'll see that x² + y² = r²·cos²(θ) + r²·sin²(θ) = r², and this is just the equation of a circle of radius r. It turns out that parameterizations of the form x = a·cos(kt), y = a·sin(kt) are very common in the physical sciences, so it's worth getting used to manipulating them if that's where you're headed. Consider, for example, the parameterization x = 5·cos(t), y = 2·sin(t). First let's make a table to get an idea of the shape and direction of the graph of this curve, then we'll eliminate the parameter to shed some more light on it. Here's what the graph of that looks like. The direction of the curve is counterclockwise. It's an ellipse with a semi-major axis of 5 units and a semi-minor axis of 2, so that gives us an idea of what the non-parametric form should look like. Now let's convert to standard form by eliminating the parameter. Suspecting that this is an ellipse, which will involve the squares of x and y, we'll begin by squaring the parameterization: x² = 25·cos²(t) and y² = 4·sin²(t). Now divide by 25 on the left and 4 on the right: x²/25 = cos²(t) and y²/4 = sin²(t). Now we can add these two equations, using the Pythagorean identity to get one on the right side: x²/25 + y²/4 = 1. So the formula of our ellipse is x²/25 + y²/4 = 1, which indeed has a long axis along x of length 10 (= 2 × 5) and a short axis along y of 4 units (2 × 2). See the conic sections section if you need a refresher on ellipses. It's not necessarily worth memorizing, but we've also got the idea that the parameterization x = a·cos(t), y = b·sin(t) is an ellipse centered at the origin with axes of 2a and 2b. As in the previous examples, plot a few points to get an idea of the shape and direction of each parameterization, then convert each to non-parameterized form. Parametric functions allow us to calculate (using integration) both the length of a curve and the amount of surface area on a given 3-dimensional curve. As you study multi-variable calculus, you'll see that the idea of "surface area" can be extended to figures in higher dimensions, too. It's a tricky thing to wrap your head around, the surface area of a 5-dimensional object, but it's mathematically sound. Finding the slope of a parametric curve at a point is just finding the derivative at that point. The trouble is that we want dy/dx when what we have to work with is x = f(t) and y = g(t). But as long as we're working with the Leibniz form of derivative notation, the solution is pretty obvious: dy/dx = (dy/dt) / (dx/dt). So to find the slope of a parametric function at a point, we take the derivative of y with respect to the parameter and divide by the derivative of x with respect to the parameter. This method can be generalized for higher dimensions, too. Then take the ratio of dy/dt to dx/dt: That's the general derivative of the function. Notice that it's undefined at t = 0. The specific slope at t = 3 comes from substituting t = 3 into that expression. Now to determine the concavity, we need to calculate the second derivatives. The second derivative is a bit tricky. It looks like this: d²y/dx² = [d/dt(dy/dx)] / (dx/dt). Now we can take the derivative of dy/dx and plug in the rest: Then we just evaluate that at t = 3: Here is a plot of that parametric function. The graph may not look concave up at t = 3, but there is an inflection point at about t = 2.7, and it is. A tangent is horizontal to this parametric curve if dy/dt = 0 and dx/dt ≠ 0, and it is vertical if dx/dt = 0 and dy/dt ≠ 0.
First we'll calculate the derivatives: for x = 3·cos(t), y = 3·sin(t), we get dx/dt = −3·sin(t) and dy/dt = 3·cos(t). Now it's just a matter of plugging t = 0, π, π/2 and 3π/2 into our parametric equations to get vertical tangents at (3, 0) and (−3, 0), and horizontal tangents at (0, 3) and (0, −3). The graph of this parametric function is a circle: Let's go back to a circle to calculate arc length. It's handy because we already know that the length of a circle is its circumference, c = 2πr, so we should get that answer. If we take a chord, s, of the circle, we can calculate its length as shown below: Now we can blow the circle up a bit and take an infinitesimally small chord, ds. So we have an equation for the length of the infinitesimal arc, ds = √(dx² + dy²). Here is how we incorporate the parameter, t, into that expression: ds = √((dx/dt)² + (dy/dt)²) dt. Now we can use the parameterization x = r·cos(t) and y = r·sin(t) (I'm switching from θ to t here because t is more commonly used to denote a parameter) to find dx/dt = −r·sin(t) and dy/dt = r·cos(t). We solve for ds = √(r²·sin²(t) + r²·cos²(t)) dt = r dt. Finally, we can integrate those infinitesimal segments of arc around the circle, from 0 to 2π, to find the formula for the circumference of a circle, c = 2πr – pretty cool. In general, we can state the length of a smooth curve in the box below. If a smooth curve is defined parametrically by x = f(t), y = g(t) for a ≤ t ≤ b, then its length is given by L = ∫ from a to b of √([f′(t)]² + [g′(t)]²) dt. My students often have the mistaken impression that time is the "fourth dimension." It's not. In fact, any real problem in physics can have many true dimensions. For example, the position of an airplane in the sky is only fully specified by a set of x, y and z coordinates and three angles, pitch, roll and yaw, to describe the orientation of the plane with respect to the ground. It's a six-dimensional problem. Time, on the other hand, is never a dimension. Dimensions can have negative and positive values. Time is never negative. Time is always a parameter, not a dimension.
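The arc-length box above lends itself to a quick numerical check. Here is a minimal Python sketch (not part of the original page) that integrates √((dx/dt)² + (dy/dt)²) numerically for the radius-3 circle used in the tangent example and compares the result with 2πr; it also evaluates the slope rule dy/dx = (dy/dt)/(dx/dt) at one point. The midpoint-rule integrator and step sizes are arbitrary choices made for illustration.

```python
import math

def arc_length(fx, fy, t0, t1, n=100_000):
    """Numerically integrate sqrt((dx/dt)^2 + (dy/dt)^2) dt with the midpoint rule."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        dxdt = (fx(t + 1e-6) - fx(t - 1e-6)) / 2e-6   # central-difference derivatives
        dydt = (fy(t + 1e-6) - fy(t - 1e-6)) / 2e-6
        total += math.hypot(dxdt, dydt) * h
    return total

r = 3.0
fx = lambda t: r * math.cos(t)
fy = lambda t: r * math.sin(t)

print(arc_length(fx, fy, 0.0, 2 * math.pi))  # ~ 2*pi*r = 18.8495...
print(2 * math.pi * r)

# Slope dy/dx at t = pi/4 via (dy/dt)/(dx/dt): cos(t)/(-sin(t)) = -1 there.
t = math.pi / 4
dxdt = -r * math.sin(t)
dydt = r * math.cos(t)
print(dydt / dxdt)  # -1.0
```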
In monetary economics, the quantity theory of money (QTM) states that the general price level of goods and services is directly proportional to the amount of money in circulation, or money supply. The theory was originally formulated by Polish mathematician Nicolaus Copernicus in 1517, and was influentially restated by philosophers John Locke, David Hume, Jean Bodin, and by economists Milton Friedman and Anna Schwartz in A Monetary History of the United States published in 1963. The theory was challenged by Keynesian economics, but updated and reinvigorated by the monetarist school of economics. Critics of the theory argue that money velocity is not stable and, in the short-run, prices are sticky, so the direct relationship between money supply and price level does not hold. In mainstream macroeconomic theory, changes in the money supply play no role in determining the inflation rate. In such models, inflation is determined by the monetary policy reaction function. Alternative theories include the real bills doctrine and the more recent fiscal theory of the price level. Origins and development The quantity theory descends from Nicolaus Copernicus, followers of the School of Salamanca like Martín de Azpilicueta, Jean Bodin, Henry Thornton, and various others who noted the increase in prices following the import of gold and silver, used in the coinage of money, from the New World. The “equation of exchange” relating the supply of money to the value of money transactions was stated by John Stuart Mill who expanded on the ideas of David Hume. The quantity theory was developed by Simon Newcomb, Alfred de Foville, Irving Fisher, and Ludwig von Mises in the late 19th and early 20th century. Henry Thornton introduced the idea of a central bank after the financial panic of 1793, although, the concept of a modern central bank was not given much importance until Keynes published “A Tract on Monetary Reform” in 1923. In 1802, Thornton published An Enquiry into the Nature and Effects of the Paper Credit of Great Britain in which he gave an account of his theory regarding the central bank’s ability to control price level. According to his theory, the central bank could control the currency in circulation through book keeping. This control could allow the central bank to gain a command of the money supply of the country. This ultimately would lead to the central bank’s ability to control the price level. His introduction of the central bank’s ability to influence the price level was a major contribution to the development of the quantity theory of money. Karl Marx modified it by arguing that the labor theory of value requires that prices, under equilibrium conditions, are determined by socially necessary labor time needed to produce the commodity and that quantity of money was a function of the quantity of commodities, the prices of commodities, and the velocity. Marx did not reject the basic concept of the Quantity Theory of Money, but rejected the notion that each of the four elements were equal, and instead argued that the quantity of commodities and the price of commodities are the determinative elements and that the volume of money follows from them. 
He argued… The law, that the quantity of the circulating medium is determined by the sum of the prices of the commodities circulating, and the average velocity of currency may also be stated as follows: given the sum of the values of commodities, and the average rapidity of their metamorphoses, the quantity of precious metal current as money depends on the value of that precious metal. The erroneous opinion that it is, on the contrary, prices that are determined by the quantity of the circulating medium, and that the latter depends on the quantity of the precious metals in a country;this opinion was based by those who first held it, on the absurd hypothesis that commodities are without a price, and money without a value, when they first enter into circulation, and that, once in the circulation, an aliquot part of the medley of commodities is exchanged for an aliquot part of the heap of precious metals. John Maynard Keynes, like Marx, accepted the theory in general and wrote… This Theory is fundamental. Its correspondence with fact is not open to question. Also like Marx he believed that the theory was misrepresented. Where Marx argues that the amount of money in circulation is determined by the quantity of goods times the prices of goods Keynes argued the amount of money was determined by the purchasing power or aggregate demand. He wrote Thus the number of notes which the public ordinarily have on hand is determined by the purchasing power which it suits them to hold or to carry about, and by nothing else. In the Tract on Monetary Reform (1923), Keynes developed his own quantity equation: n = p(k + rk’),where n is the number of “currency notes or other forms of cash in circulation with the public”, p is “the index number of the cost of living”, and r is “the proportion of the bank’s potential liabilities (k’) held in the form of cash.” Keynes also assumes “…the public,(k’) including the business world, finds it convenient to keep the equivalent of k consumption in cash and of a further available k’ at their banks against cheques…” So long as k, k’, and r do not change, changes in n cause proportional changes in p. Keynes however notes… The error often made by careless adherents of the Quantity Theory, which may partly explain why it is not universally accepted is as follows. The Theory has often been expounded on the further assumption that a mere change in the quantity of the currency cannot affect k, r, and k’, – that is to say, in mathematical parlance, that n is an independent variable in relation to these quantities. It would follow from this that an arbitrary doubling of n, since this in itself is assumed not to affect k, r, and k’, must have the effect of raising p to double what it would have been otherwise. The Quantity Theory is often stated in this, or a similar, form. Now “in the long run” this is probably true. If, after the American Civil War, that American dollar had been stabilized and defined by law at 10 per cent below its present value, it would be safe to assume that n and p would now be just 10 per cent greater than they actually are and that the present values of k, r, and k’ would be entirely unaffected. But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean will be flat again. In actual experience, a change in n is liable to have a reaction both on k and k’ and on r. 
It will be enough to give a few typical instances. Before the war (and indeed since) there was a considerable element of what was conventional and arbitrary in the reserve policy of the banks, but especially in the policy of the State Banks towards their gold reserves. These reserves were kept for show rather than for use, and their amount was not the result of close reasoning. There was a decided tendency on the part of these banks between 1900 and 1914 to bottle up gold when it flowed towards them and to part with it reluctantly when the tide was flowing the other way. Consequently, when gold became relatively abundant they tended to hoard what came their way and to raise the proportion of the reserves, with the result that the increased output of South African gold was absorbed with less effect on the price level than would have been the case if an increase of n had been totally without reaction on the value of r. …Thus in these and other ways the terms of our equation tend in their movements to favor the stability of p, and there is a certain friction which prevents a moderate change in v from exercising its full proportionate effect on p. On the other hand, a large change in n, which rubs away the initial frictions, and especially a change in n due to causes which set up a general expectation of a further change in the same direction, may produce a more than proportionate effect on p. Keynes thus accepts the Quantity Theory as accurate over the long-term but not over the short term. Keynes remarks that contrary to contemporaneous thinking, velocity and output were not stable but highly variable and as such, the quantity of money was of little importance in driving prices. The theory was influentially restated by Milton Friedman in response to the work of John Maynard Keynes and Keynesianism. Friedman understood that Keynes was like Friedman, a “quantity theorist” and that Keynes Revolution “was from, as it were, within the governing body”, i.e. consistent with previous Quantity Theory. Friedman notes the similarities between his views and those of Keynes when he wrote… A counter-revolution, whether in politics or in science, never restores the initial situation. It always produces a situation that has some similarity to the initial one but is also strongly influenced by the intervening revolution. That is certainly true of monetarism which has benefited much from Keynes’s work. Indeed I may say, as have so many others since there is no way of contradicting it, that if Keynes were alive today he would no doubt be at the forefront of the counter-revolution. Friedman notes that Keynes shifted the focus away from the quantity of money (Fisher’s M and Keynes’ n) and put the focus on price and output. Friedman writes… What matters, said Keynes, is not the quantity of money. What matters is the part of total spending which is independent of current income, what has come to be called autonomous spending and to be identified in practice largely with investment by business and expenditures by government. The Monetarist counter-position was that contrary to Keynes, velocity was not a passive function of the quantity of money but it can be an independent variable. Friedman wrote: Perhaps the simplest way for me to suggest why this was relevant is to recall that an essential element of the Keynesian doctrine was the passivity of velocity. If money rose, velocity would decline. 
Empirically, however, it turns out that the movements of velocity tend to reinforce those of money instead of to offset them. When the quantity of money declined by a third from 1929 to 1933 in the United States, velocity declined also. When the quantity of money rises rapidly in almost any country, velocity also rises rapidly. Far from velocity offsetting the movements of the quantity of money, it reinforces them. Thus while Marx, Keynes, and Friedman all accepted the Quantity Theory, they each placed different emphasis as to which variable was the driver in changing prices. Marx emphasized production, Keynes income and demand, and Friedman the quantity of money. Academic discussion remains over the degree to which different figures developed the theory. For instance, Bieda argues that Copernicus's observation that "Money can lose its value through excessive abundance, if so much silver is coined as to heighten people's demand for silver bullion. For in this way, the coinage's estimation vanishes when it cannot buy as much silver as the money itself contains […]. The solution is to mint no more coinage until it recovers its par value." amounts to a statement of the theory, while other economic historians date the discovery later, to figures such as Jean Bodin, David Hume, and John Stuart Mill. The quantity theory of money preserved its importance even in the decades after the rise of Friedmanian monetarism. In new classical macroeconomics the quantity theory of money was still a doctrine of fundamental importance, but Robert E. Lucas and other leading new classical economists made serious efforts to specify and refine its theoretical meaning. For new classical economists, following David Hume's famous essay "Of Money", money was not neutral in the short run, so the quantity theory was assumed to hold only in the long run. These theoretical considerations involved serious changes as to the scope of countercyclical economic policy. Historically, the main rival of the quantity theory was the real bills doctrine, which says that the issue of money does not raise prices, as long as the new money is issued in exchange for assets of sufficient value. Knut Wicksell criticized the quantity theory of money, citing the notion of a "pure credit economy". John Maynard Keynes criticized the quantity theory of money in The General Theory of Employment, Interest and Money. Keynes had originally been a proponent of the theory, but he presented an alternative in the General Theory. Keynes argued that the price level was not strictly determined by the money supply. Changes in the money supply could have effects on real variables like output. Ludwig von Mises agreed that there was a core of truth in the quantity theory, but criticized its focus on the supply of money without adequately explaining the demand for money. He said the theory "fails to explain the mechanism of variations in the value of money".
- Volckart, Oliver (1997). "Early beginnings of the quantity theory of money and their context in Polish and Prussian monetary policies, c. 1520–1550". The Economic History Review. Wiley-Blackwell. 50 (3): 430–49. doi:10.1111/1468-0289.00063. ISSN 0013-0117. JSTOR 2599810.
- "Quantity theory of money". Encyclopædia Britannica. Encyclopædia Britannica, Inc.
- Hamilton, Earl J. (1965). American Treasure and the Price Revolution in Spain, 1501–1650. New York: Octagon.
- Minsky, Hyman P. (2008). John Maynard Keynes. McGraw-Hill. p. 2.
- Nicolaus Copernicus (1517), memorandum on monetary policy.
- Hutchinson, Marjorie (1952). The School of Salamanca; Readings in Spanish Monetary Theory, 1544–1605. Oxford: Clarendon.
- John Stuart Mill (1848), Principles of Political Economy.
- David Hume (1748), "Of Interest", in Essays Moral and Political.
- Simon Newcomb (1885), Principles of Political Economy.
- Alfred de Foville (1907), La Monnaie.
- Irving Fisher (1911), The Purchasing Power of Money.
- von Mises, Ludwig Heinrich; Theorie des Geldes und der Umlaufsmittel [The Theory of Money and Credit].
- Hetzel, Robert L. "Henry Thornton: Seminal Monetary Theorist and Father of the Modern Central Bank." July–Aug. 1987.
- Capital Vol. I, Chapter 3, B. The Currency of Money, as well as A Contribution to the Critique of Political Economy, Chapter II, 3, "Money".
- Tract on Monetary Reform, London, United Kingdom: Macmillan, 1924. Archived August 8, 2013, at the Wayback Machine.
- "Keynes' Theory of Money and His Attack on the Classical Model", L. E. Johnson, R. Ley, & T. Cate (International Advances in Economic Research, November 2001). Archived from the original (PDF) on July 17, 2013. Retrieved June 17, 2013.
- "The Counter-Revolution in Monetary Theory", Milton Friedman (IEA Occasional Paper, no. 33, Institute of Economic Affairs. First published by the Institute of Economic Affairs, London, 1970). Archived from the original (PDF) on 2014-03-22. Retrieved 2013-06-17.
- Milton Friedman (1956), "The Quantity Theory of Money: A Restatement", in Studies in the Quantity Theory of Money, edited by M. Friedman. Reprinted in M. Friedman, The Optimum Quantity of Money (2005), pp. 51–67.
- Volckart, Oliver (1997), "Early beginnings of the quantity theory of money and their context in Polish and Prussian monetary policies, c. 1520–1550", The Economic History Review, 50 (3): 430–49, doi:10.1111/1468-0289.00063.
- Bieda, K. (1973), "Copernicus as an economist", Economic Record, 49: 89–103, doi:10.1111/j.1475-4932.1973.tb02270.x.
- Wennerlind, Carl (2005), "David Hume's monetary theory revisited", Journal of Political Economy, 113 (1): 233–37, doi:10.1086/426037.
- Galbács, Peter (2015). The Theory of New Classical Macroeconomics. A Positive Critique. Contributions to Economics. Heidelberg/New York/Dordrecht/London: Springer. doi:10.1007/978-3-319-17578-2. ISBN 978-3-319-17578-2.
- Roy Green (1987), "real bills doctrine", in The New Palgrave: A Dictionary of Economics, v. 4, pp. 101–02.
- Milton Friedman & Anna J. Schwartz (1965), The Great Contraction 1929–1933, Princeton: Princeton University Press, ISBN 978-0-691-00350-4.
- Froyen, Richard T. Macroeconomics: Theories and Policies. 3rd Edition. Macmillan Publishing Company: New York, 1990. pp. 70–71.
- Milton Friedman (1987), "quantity theory of money", The New Palgrave: A Dictionary of Economics, v. 4, p. 15.
- Summarized in Friedman (1987), "quantity theory of money", pp. 15–17.
- Friedman (1987), "quantity theory of money", p. 19.
- Jahan, Sarwat. "Inflation Targeting: Holding the Line". International Monetary Fund, Finance & Development. Retrieved 28 December 2014.
- NA (2005), "How Does the Fed Determine Interest Rates to Control the Money Supply?", Federal Reserve Bank of San Francisco. February. Archived from the original on December 8, 2008. Retrieved November 1, 2007.
- W. Hafer and David C. Wheelock (2001), "The Rise and Fall of a Policy Rule: Monetarism at the St. Louis Fed, 1968–1986", Federal Reserve Bank of St. Louis, Review, January/February, p. 19.
- Wicksell, Knut (1898). Interest and Prices (PDF).
- Ludwig von Mises (1912), The Theory of Money and Credit, Chapter 8, Sec. 6.
Reinforcement learning is a very useful (and currently popular) subtype of machine learning and artificial intelligence. It is based on the principle that agents, when placed in an interactive environment, can learn from their actions via the rewards associated with those actions, and improve over time at achieving their goal. In this article, we'll explore the fundamental concepts of reinforcement learning and discuss its key components, types, and applications. Definition of Reinforcement Learning We can define reinforcement learning as a machine learning technique involving an agent that must decide which actions to take in order to perform an assigned task as effectively as possible. For this, rewards are assigned to the different actions that the agent can take in different situations or states of the environment. Initially, the agent has no idea about the best or correct actions. Using reinforcement learning, it explores its action choices via trial and error and figures out the best set of actions for completing its assigned task. The basic idea behind a reinforcement learning agent is to learn from experience. Just like humans learn lessons from their past successes and mistakes, reinforcement learning agents do the same – when they do something "good" they get a reward, but if they do something "bad", they get penalized. The reward reinforces the good actions while the penalty discourages the bad ones. Reinforcement learning requires several key components: - Agent – This is the "who" or the subject of the process, which performs actions to complete the task it has been assigned. - Environment – This is the "where" or the situation in which the agent is placed. - Actions – This is the "what" or the steps an agent needs to take to reach the goal. - Rewards – This is the feedback an agent receives after performing an action. Before we dig deep into the technicalities, let's warm up with a real-life example. Reinforcement isn't new, and we've used it for different purposes for centuries. One of the most basic examples is dog training. Let's say you're in a park, trying to teach your dog to fetch a ball. In this case, the dog is the agent, and the park is the environment. Once you throw the ball, the dog will run to catch it, and that's the action part. When he brings the ball back to you and releases it, he'll get a reward (a treat). Since he got a reward, the dog will understand that his actions were appropriate and will repeat them in the future. If the dog doesn't bring the ball back, he may get some "punishment" – you may ignore him or say "No!" After a few attempts (or more than a few, depending on how stubborn your dog is), the dog will fetch the ball with ease. We can say that the reinforcement learning process has three steps: the agent observes the current state of its environment, it takes an action, and it receives a reward or penalty that it uses to adjust its future choices. Types of Reinforcement Learning There are two types of reinforcement learning: model-based and model-free. Model-Based Reinforcement Learning With model-based reinforcement learning (RL), there's a model that an agent uses to create additional experiences. Think of this model as a mental image that the agent can analyze to assess whether particular strategies could work. Some of the advantages of this RL type are: - It doesn't need a lot of samples. - It can save time. - It offers a safe environment for testing and exploration. The potential drawbacks are: - Its performance relies on the model. If the model isn't good, the performance won't be good either. - It's quite complex.
Model-Free Reinforcement Learning In this case, an agent doesn't rely on a model. Instead, the basis for its actions lies in direct interactions with the environment. An agent tries different scenarios and tests whether they're successful. If yes, the agent will keep repeating them. If not, it will try another scenario until it finds the right one. What are the advantages of model-free reinforcement learning? - It doesn't depend on a model's accuracy. - It's not as computationally complex as model-based RL. - It's often better for real-life situations. Some of the drawbacks are: - It requires more exploration, so it can be more time-consuming. - It can be dangerous because it relies on real-life interactions. Model-Based vs. Model-Free Reinforcement Learning: Example Understanding model-based and model-free RL can be challenging because they often seem too complex and abstract. We'll try to make the concepts easier to understand through a real-life example. Let's say you have two soccer teams that have never played each other before. Therefore, neither of the teams knows what to expect. At the beginning of the match, Team A tries different strategies to see whether they can score a goal. When they find a strategy that works, they'll keep using it to score more goals. This is model-free reinforcement learning. On the other hand, Team B came prepared. They spent hours investigating strategies and examining the opponent. The players came up with tactics based on their interpretation of how Team A will play. This is model-based reinforcement learning. Who will be more successful? There's no way to tell. Team B may be more successful in the beginning because they have previous knowledge. But Team A can catch up quickly, especially if they use the right tactics from the start. Reinforcement Learning Algorithms A reinforcement learning algorithm specifies how an agent learns suitable actions from the rewards. RL algorithms are divided into two categories: value-based and policy gradient-based. Value-based algorithms learn the value at each state of the environment, where the value of a state is given by the expected rewards to complete the task while starting from that state. Q-Learning This model-free, off-policy RL algorithm focuses on providing guidelines to the agent on what actions to take and under what circumstances to win the reward. The algorithm uses Q-tables in which it calculates the potential rewards for different state-action pairs in the environment. The table contains Q-values that get updated after each action during the agent's training. During execution, the agent goes back to this table to see which actions have the best value. Deep Q-Networks (DQN) Deep Q-networks, or deep q-learning, operate similarly to q-learning. The main difference is that the algorithm in this case is based on neural networks. SARSA The acronym stands for state-action-reward-state-action. SARSA is an on-policy RL algorithm that uses the current action from the current policy to learn the value. Policy Gradient-Based Algorithms These algorithms directly update the policy to maximize the reward. There are different policy gradient-based algorithms: REINFORCE, proximal policy optimization, trust region policy optimization, actor-critic algorithms, advantage actor-critic, deep deterministic policy gradient (DDPG), and twin-delayed DDPG. Examples of Reinforcement Learning Applications The advantages of reinforcement learning have been recognized in many spheres. Here are several concrete applications of RL, following a brief code sketch of the Q-learning update described above.
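The following is a minimal, hedged sketch in Python of the tabular Q-learning update described above. The environment, its states, actions, and rewards are hypothetical placeholders (a tiny chain of states with a goal at the end) invented for illustration; the only part taken from the text is the idea of a Q-table whose entries are updated after every action during training.

```python
import random

# Hypothetical toy environment: states 0..4 in a chain, goal at state 4.
N_STATES, ACTIONS = 5, ["left", "right"]
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

# Q-table: estimated value of every state-action pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the chain; reward 1 only when the goal state is reached."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy choice: mostly exploit the table, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: nudge Q toward reward + discounted best future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
# After training, "right" should dominate at every non-goal state.
```

During execution the agent simply looks up the best action for its current state in the learned table, exactly as the article describes; DQN replaces this explicit table with a neural network approximation.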
Robotics and Automation With RL, robotic arms can be trained to perform human-like tasks. Robotic arms can give you a hand in warehouse management, packaging, quality testing, defect inspection, and many other aspects. Another notable role of RL lies in automation, and self-driving cars are an excellent example. They’re introduced to different situations through which they learn how to behave in specific circumstances and offer better performance. Gaming and Entertainment Gaming and entertainment industries certainly benefit from RL in many ways. From AlphaGo (the first program that has beaten a human in the board game Go) to video games AI, RL offers limitless possibilities. Finance and Trading RL can optimize and improve trading strategies, help with portfolio management, minimize risks that come with running a business, and maximize profit. Healthcare and Medicine RL can help healthcare workers customize the best treatment plan for their patients, focusing on personalization. It can also play a major role in drug discovery and testing, allowing the entire sector to get one step closer to curing patients quickly and efficiently. Basics for Implementing Reinforcement Learning The success of reinforcement learning in a specific area depends on many factors. First, you need to analyze a specific situation and see which RL algorithm suits it. Your job doesn’t end there; now you need to define the environment and the agent and figure out the right reward system. Without them, RL doesn’t exist. Next, allow the agent to put its detective cap on and explore new features, but ensure it uses the existing knowledge adequately (strike the right balance between exploration and exploitation). Since RL changes rapidly, you want to keep your model updated. Examine it every now and then to see what you can tweak to keep your model in top shape. Explore the World of Possibilities With Reinforcement Learning Reinforcement learning goes hand-in-hand with the development and modernization of many industries. We’ve been witnesses to the incredible things RL can achieve when used correctly, and the future looks even better. Hop in on the RL train and immerse yourself in this fascinating world. Soon, we will be launching four new Degrees for AY24-25 at OPIT – Open Institute of Technology I want to offer a behind-the-scenes look at the Product Definition process that has shaped these upcoming programs. 🚀 Phase 1: Discovery (Late May – End of July) Our journey began with intensive brainstorming sessions with OPIT’s Academic Board (Francesco Profumo, Lorenzo Livi, Alexiei Dingli, Andrea Pescino, Rosario Maccarrone) . We also conducted 50+ interviews with tech and digital entrepreneurs (both from startups and established firms), academics and students. Finally, we deep-dived into the “Future of Jobs 2023” report by the World Economic Forum and other valuable research. 🔍 Phase 2: Selection – Crafting Our Roadmap (July – August) Our focus? Introducing new degrees addressing critical workforce shortages and upskilling/reskilling needs for the next 5-10 years, promising significant societal impact and a broad market reach. Our decision? To channel our energies on full BScs and MScs, and steer away from shorter courses or corporate-focused offerings. This aligns perfectly with our core mission. 💡 Focus Areas Unveiled! We’re thrilled to concentrate on pivotal fields like: - Advanced AI - Digital Business - Metaverse & Gaming - Cloud Computing (less “glamorous”, but market demand is undeniable). 
🎓 Phase 3: Definition – Shaping the Degrees (August – November) With an expert in each of the above fields, and with the strong collaboration of our Academic Director, Prof. Lorenzo Livi , we embarked on a rigorous “drill-down process”. Our goal? To meld modern theoretical knowledge with cutting-edge competencies and skills. This phase included interviewing over 60+ top academics, industry professionals, and students and get valuable, program-specific, insights from our Marketing department. 🌟 Phase 4: Accreditation and Launch – The Final Stretch We’re currently in the accreditation process, gearing up for the launch. The focus is now shifting towards marketing, working closely with Greta Maiocchi and her Marketing and Admissions team. Together, we’re translating our new academic offering into a compelling value proposition for the market. Stay tuned for more updates! Far from being a temporary educational measure that came into its own during the pandemic, online education is providing students from all over the world with new ways to learn. That’s proven by statistics from Oxford Learning College, which point out that over 100 million students are now enrolled in some form of online course. The demand for these types of courses clearly exists. In fact, the same organization indicates that educational facilities that introduce online learning see a 42% increase in income – on average – suggesting that the demand is there. Enter the Open Institute of Technology (OPIT). Delivering three online courses – a Bachelor’s degree in computer science and two Master’s degrees – with more to come, OPIT is positioning itself as a leader in the online education space. But why is that? After all, many institutions are making the jump to e-learning, so what separates OPIT from the pack? Here, you’ll discover the answers as you delve into the five reasons why you should trust OPIT for your online education. Reason 1 – A Practical Approach OPIT focuses on computer science education – a field in which theory often dominates the educational landscape. The organization’s Rector, Professor Francesco Profumo, makes this clear in a press release from June 2023. He points to a misalignment between what educators are teaching computer science students and what the labor market actually needs from those students as a key problem. “The starting point is the awareness of the misalignment,” he says when talking about how OPIT structures its online courses. “That so-called mismatch is generated by too much theory and too little practical approach.” In other words, students in many classes spend far too much time learning the “hows” and “whys” behind computerized systems without actually getting their hands dirty with real work that gives them practical experience in using those systems. OPIT takes a different approach. It has developed a didactic approach that focuses far more on the practical element than other courses. That approach is delivered through a combination of classroom sessions – such as live lessons and masterclasses – and practical work offered through quizzes and exercises that mimic real-world situations. An OPIT student doesn’t simply learn how computers work. They put their skills into practice through direct programming and application, equipping them with skills that are extremely attractive to major employers in the tech field and beyond. Reason 2 – Flexibility Combined With Support Flexibility in how you study is one of the main benefits of any online course. 
You control when you learn and how you do it, creating an environment that’s beneficial to your education rather than being forced into a classroom setting with which you may not feel comfortable. This is hardly new ground. Any online educational platform can claim that it offers “flexibility” simply because it provides courses via the web. Where OPIT differs is that it combines that flexibility with unparalleled support bolstered by the experiences of teachers employed from all over the world. The founder and director of OPIT, Riccardo Ocleppo, sheds more light on this difference in approach when he says, “We believe that education, even if it takes place physically at a distance, must guarantee closeness on all other aspects.” That closeness starts with the support offered to students throughout their entire study period. Tutors are accessible to students at all times. Plus, every participant benefits from weekly professor interactions, ensuring they aren’t left feeling stuck on an educational “island” and have to rely solely on themselves for their education. OPIT further counters the potential isolation that comes with online learning with a Student Support team to guide students through any difficulties they may have with their courses. In this focus on support, OPIT showcases one of its main differences from other online platforms. You don’t simply receive course material before being told to “get on with it.” You have the flexibility to learn at your own pace while also having a support structure that serves as a foundation for that learning. Reason 3 – OPIT Can Adapt to Change Quickly The field of computer science is constantly evolving. In the 2020s alone, we’ve seen the rise of generative AI – spurred on by the explosive success of services like ChatGPT – and how those new technologies have changed the way that people use computers. Riccardo Ocleppo has seen the impact that these constant evolutions have had on students. Before founding OPIT, he was an entrepreneur who received first-hand experience of the fact that many traditional educational institutions struggle to adapt to change. “Traditional educational institutions are very slow to adapt to this wave of new technologies and trends within the educational sector,” he says. He points to computer science as a particular issue, highlighting the example of a board in Italy of which he is a member. That board – packed with some of the country’s most prestigious tech universities – spent three years eventually deciding to add just two modules on new and emerging technologies to their study programs. That left Ocleppo feeling frustrated. When he founded OPIT, he did so intending to make it an adaptable institution in which courses were informed by what the industry needs. Every member of its faculty is not only a superb teacher but also somebody with experience working in industry. Speaking of industry, OPIT collaborates with major companies in the tech field to ensure its courses deliver the skills that those organizations expect from new candidates. This confronts frustration on both sides. For companies, an OPIT graduate is one for which they don’t need to bridge a “skill gap” between what they’ve learned and what the company needs. For you, as a student, it means that you’re developing skills that make you a more desirable prospect once you have your degree. 
Reason 4 – OPIT Delivers Tier One Education Despite their popularity, online courses can still carry a stigma of not being “legitimate” in the face of more traditional degrees. Ocleppo is acutely aware of this fact, which is why he’s quick to point out that OPIT always aims to deliver a Tier One education in the computer science field. “That means putting together the best professors who create superb learning material, all brought together with a teaching methodology that leverages the advancements made in online teaching,” he says. OPIT’s degrees are all accredited by the European Union to support this approach, ensuring they carry as much weight as any other European degree. It’s accredited by both the European Qualification Framework (EQF) and the Malta Qualification Framework (MQF), with all of its courses having full legal value throughout Europe. It’s also here where we see OPIT’s approach to practicality come into play via its course structuring. Take its Bachelor’s degree in computer science as an example. Yes, that course starts with a focus on theoretical and foundational knowledge. Building a computer and understanding how the device processes instructions is vital information from a programming perspective. But once those foundations are in place, OPIT delivers on its promises of covering the most current topics in the field. Machine learning, cloud computing, data science, artificial intelligence, and cybersecurity – all valuable to employers – are taught at the undergraduate level. Students benefit from a broader approach to computer science than most institutions are capable of, rather than bogging them down in theory that serves little practical purpose. Reason 5 – The Learning Experience Let’s wrap up by honing in on what it’s actually like for students to learn with OPIT. After all, as Ocleppo points out, one of the main challenges with online education is that students rarely have defined checkpoints to follow. They can start feeling lost in the process, confronted with a metaphorical ocean of information they need to learn, all in service of one big exam at the end. Alternatively, some students may feel the temptation to not work through the materials thoroughly, focusing instead on passing a final exam. The result is that those students may pass, but they do so without a full grasp of what they’ve learned – a nightmare for employers who already have skill gaps to handle. OPIT confronts both challenges by focusing on a continuous learning methodology. Assessments – primarily practical – take place throughout the course, serving as much-needed checkpoints for evaluating progress. When combined with the previously mentioned support that OPIT offers, this approach has led to courses that are created from scratch in service of the student’s actual needs. Choose OPIT for Your Computer Science Education At OPIT, the focus lies as much on helping students to achieve their dream careers as it does on teaching them. All courses are built collaboratively. With a dedicated faculty combined with major industry players, such as Google and Microsoft, it delivers materials that bridge the skill gap seen in the computer science field today. There’s also more to come. Beyond the three degrees OPIT offers, the institution plans to add more. Game development, data science, and cloud computing, to name a few, will receive dedicated degrees in the coming months, accentuating OPIT’s dedication to adapting to the continuous evolution of the computer science industry. 
Discover OPIT today – your journey into computing starts with the best online education institution available.
The first CSL article in this series introduced you to X-Ray Fluorescence, the method of exciting an element’s atoms and measuring the Characteristic X-Rays given off when the atom de-excites back to a stable state. Any number of methods may be used to excite ordinary atoms to fluoresce in this manner. Just about any energetic charged particle will do it, as well as ionizing electromagnetic photons, such as Gamma Rays and X-Rays. The first article in this series used small radioactive beta sources to provide that excitation. The only requirement is that the energy imparted to the target atom exceeds the binding energy of the particular electron shell involved. This binding energy is unique for each and every element and each and every electron shell within that element. For an atom to be in a stable condition, quantum theory states that orbital electrons must exist in discrete energy levels (called shells) and that shells must be filled. XRF involves an inner electron first being ejected from the atom by the added energy, then being replaced in its shell by a donor electron. When we measure the resulting XRF, what we see is the energy difference between the kinetic energy state of the donor electron and the binding energy of its new orbital shell. Therefore each and every element has a set of “Characteristic X-Ray” signatures for each of its electron shells. When a pure beta source is used to excite XRF, the beta particles penetrate the target’s surface only slightly; therefore betas are used in such applications as coating thickness analysis and surface elemental analysis. If we were to use X-Rays, they of course penetrate deeper, causing elements beyond the surface of the target to become excited. Sometimes, if those atoms are not too deep in the target, their resulting fluorescent X-Rays will break through the surface, allowing them to be measured. A rule of thumb: the higher the target’s Atomic Number, the more self-shielding becomes an issue (element density also plays a part in this). Introducing APXS, a form of XRF using alpha particles to excite the target atoms. Traditionally, element analysis used heavy lab equipment, large X-Ray generators and liquid nitrogen cooled sensors. Today’s world requires portable, lightweight equipment designed to be used away from the Home Lab, out in the field. There is no better example of remote-site operation than Mars! Mars Rover Curiosity uses XRF with excitation provided by alpha particles, a relatively new technology. Curiosity uses some pretty hefty, military-grade radioactive sources, like Curium, which are undesirable and unobtainable for Citizen Scientists and school labs. NASA REFERENCES: http://mars.jpl.nasa.gov/msl/mission/instruments/spectrometers/apxs/ We do, however, have a pretty potent alpha particle source that is not only readily available but downright cheap! Ionization-type smoke detectors, presently available at Wal-Mart for less than $4 USD, will supply the only excitation needed. They contain less than one microCurie of Am-241, in a safe configuration that can be owned without a license and disposed of in any trash can when finished. Still, we caution young students to seek adult supervision. We should note that devices containing these radioactive elements are NRC controlled, and cannot be sold or resold without a specific license, so don’t try to make and market the apparatus described below; just make one for your personal use. 
IMPORTANT NOTE: Depending on the country you live in, these ionization smoke detectors may or may not be available, plus other restrictions might apply. Am-241 gives off alpha particles and some low-energy photons, both of which combine to make an excellent excitation source, exciting the surface with the alpha particles and deeper layers with the photons. This is a distinct advantage, as we will see in later experiments. From one to eight smoke detectors are required, but no more are ever needed. Before we go into the adaptor containing the exciters, let’s talk about the sensors and the one selected for use in this project. Sensors specifically designed to measure very low energy photons have to have some pretty unique qualities. First, they have to have a housing, or at least a “window,” made of very lightweight material. Anything too robust will simply block the desired photons by absorption. Next, they have to be sensitive to the energy range in question, and finally they must give a signal output that is proportional to the amount of energy detected. After all, we are judging our results by the energy given off by an atom, so we must be able to measure that energy. Geiger Counters are well known, and some will even respond to low energy photons, but they DO NOT give any energy information. All the “clicks” heard in the speaker are the same, no matter what sort of radiation caused them. Geiger Counters are ruled out for XRF. Another sensor, called a Gas Flow Proportional Detector, DOES give energy information, and has been used in commercially made XRF machines for a long time. They have some disadvantages, namely: 1) They are VERY expensive. 2) They require THOUSANDS of Volts to operate. 3) A large bottle of nuclear counting gas (usually argon/methane) is required. 4) Their output signals are VERY WEAK, requiring expensive and fragile preamplifiers to be used before the MCA. On the plus side, they do operate at room temperature, so they don’t need liquid nitrogen. For this reason alone, proportional detectors remained in use until superior (though not more affordable) technology recently became available. Next we will examine the scintillation detector. This type of sensor works by emitting a flash of light from a crystal when radiation strikes it. These small flashes are routed to a Photomultiplier Tube (PMT), which converts them to photoelectrons and then greatly amplifies those photoelectrons into an electrical pulse. Some have an amplification factor of one million or more. Once amplified, these electrical pulses are analyzed by our Multi Channel Analyzer (MCA), and their height is determined individually. The height of these pulses is what contains the originating energy information: the higher the pulse in milliVolts, the higher the originating radiation energy. This whole process is called PHA, or Pulse Height Analysis, and is the basis for all the scans that I show related to radiation energy. On the scan, the horizontal axis increases with energy, while the vertical axis increases with the number of pulses. Later on we will publish many tutorials that explain each and every aspect. Traditional scintillation detectors used a crystal of sodium iodide, activated with a small percentage of thallium. These are very efficient, but sodium iodide is hygroscopic, which means it absorbs moisture from the air like a sponge. Sodium iodide left in the open will turn to slush overnight. Recent developments in crystal technology have allowed a new crystal to become available, but at a cost. 
This one uses Cesium Iodide, also thallium activated. It does not share the same moisture issues as its sodium iodide cousins, and to add to its efficiency, it is also denser. Therefore a really good scan can be made with a 1 mm thick CsI(Tl) scintillator sensor, so long as the “window” allows the photons to enter the sensitive volume. S.E. International has created an example of this technology in the embodiment of their RAP-47 LEG Probe. The RAP-47 uses a select-grade PMT, a 1″ diameter x 1 mm thick CsI(Tl) crystal and a thin aluminum window. At less than $1000, these are a bargain, but check eBay for them at less than half that. Other suitable LEG probes are made by Ludlum Measurements and Technical Associates, but none of those use CsI(Tl) yet. THE RAPCAP APPARATUS The RAPCAP is a home-built XRF exciter made to clip on to the front of a RAP-47 or other LEG probe. Radioactive sources removed from up to eight $4 ionization-type smoke detectors provide the alpha particles and low-energy X-Rays. These are arranged so they point away from and perpendicular to the face of the sensor, and have a small layer of lead shielding immediately behind them to keep direct rays from the source from hitting the probe window. This first version shown uses 8 such sources, mounted on a brass washer with 1/8″ holes drilled around the periphery to accept the sources, to completely surround the target as it is placed immediately in front of the RAPCAP, with a space of 1/4″ allowed. This small space is required so the excitation rays and particles have enough room to fully illuminate the target. Subsequent XRF rays FROM the target are collimated through a hole in the center of the RAPCAP and back to the sensor window. This “flat” version is best for flat targets, which need to be no larger than 1″ x 1″. For tiny specimens, another version was tried and deemed successful; it has the excitation sources positioned at 22 1/2 degrees from perpendicular, thereby focusing the excitation onto a spot just in front of the collimation hole. This version can analyze sub-gram-sized samples, held in place by a fixture (tweezers). Over a period of a few weeks, I have used the RAPCAP to analyze all applicable elements on the Periodic Chart from Iron through Bismuth. Since a few elements within that range are naturally radioactive, these have been analyzed by the probe but without excitation (self-excited). RAPCAP STEP BY STEP CONSTRUCTION Step 1) Have your mentor purchase several ionization smoke detectors and remove the sources, using approved safety methods. Step 2) Prepare a washer-like base from brass, aluminum or other metal that is easy to work. If using a large brass washer, drill up to 8 evenly spaced 1/8″ holes surrounding the central hole. The sources have a protrusion on the back that will fit perfectly into these holes. Secure them with epoxy, then place an undrilled similar washer on the back to further shield the sensor. Step 3) Attach the now-modified probe to the class MCA* (set to read from 2 through 50 keV), and begin sampling elements! I will be available for questions from teachers and mentors, and over the next few months will go through the Periodic Table of the Elements with you, one element at a time. Some pretty cool elements can be found in everyday household items. HINT: Next time a battery goes dead in a device of yours, keep it, because we will dissect it later!! 
* If your classroom does not own an MCA, don’t worry, we will be showing in weeks to come how to use a freeware program on an ordinary computer with a soundcard to fulfill all the functions of an expensive MCA. Glossary of terms used: Beta Particle: an electron, whose origin is the nucleus of an atom. Can be positively or negatively charged. Alpha Particle: a mass bundle consisting of two protons and two neutrons, originating in and expelled from the nucleus of an atom. Carries a double positive charge. Gamma Ray: A photon created within the nucleus of an atom. X-Ray: A photon created within the electron shell area of an atom. XRF: X-Ray Fluorescence. LEG: Low Energy Gamma. MCA: Multi Channel Analyzer. PHA- Pulse Height Analysis, analyzing and displaying pulse heights (amplitude), a function of an MCA. keV: kilo electron Volt (a measurement of energy).
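To make the pulse height analysis idea above a little more concrete, here is a minimal Python sketch of what an MCA (or the soundcard freeware mentioned above) essentially does: sort pulse amplitudes into energy channels and count them. The pulse list, calibration factor, and channel width below are made-up placeholders for illustration only, not real RAPCAP data.

# Minimal sketch of the core MCA function: pulse height analysis (PHA).
# Assumed inputs: a list of pulse heights in millivolts and a linear
# calibration (keV per mV) -- both are illustrative, not real measurements.

pulse_heights_mv = [12.1, 12.3, 30.0, 29.8, 30.2, 12.0, 45.5, 30.1]  # hypothetical pulses
kev_per_mv = 0.5          # assumed linear energy calibration
bin_width_kev = 1.0       # channel width of our toy MCA
num_bins = 50             # read from 0 through 50 keV, as in the article

counts = [0] * num_bins
for mv in pulse_heights_mv:
    energy_kev = mv * kev_per_mv             # convert pulse height to energy
    channel = int(energy_kev // bin_width_kev)
    if 0 <= channel < num_bins:
        counts[channel] += 1                  # one more pulse in this energy bin

# Print a crude text spectrum: energy channel on the left, counts as a bar
for channel, count in enumerate(counts):
    if count:
        print(f"{channel * bin_width_kev:5.1f} keV | " + "#" * count)

Running it prints one line per occupied channel, with a bar whose length is the number of pulses counted there: energy increases down the listing, counts increase with bar length, exactly as on the scans described above.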
A bone dating from the Pleistocene Ice Age of an extinct species of elephant. A scanning electronic micrograph of bone at 10,000x magnification. A bone is a rigid organ that constitutes part of the vertebrate skeleton. Bones support and protect the various organs of the body, produce red and white blood cells, store minerals, provide structure and support for the body, and enable mobility. Bones come in a variety of shapes and sizes and have a complex internal and external structure. They are lightweight yet strong and hard, and serve multiple functions. Bone tissue (osseous tissue) is a hard tissue, a type of dense connective tissue. It has a honeycomb-like matrix internally, which helps to give the bone rigidity. Bone tissue is made up of different types of bone cells. Osteoblasts and osteocytes are involved in the formation and mineralization of bone; osteoclasts are involved in the resorption of bone tissue. Modified (flattened) osteoblasts become the lining cells that form a protective layer on the bone surface. The mineralised matrix of bone tissue has an organic component of mainly collagen called ossein and an inorganic component of bone mineral made up of various salts. Bone tissue is a mineralized tissue of two types, cortical bone and cancellous bone. Other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels and cartilage. In the human body at birth, there are over 270 bones, but many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. The largest bone in the body is the femur or thigh-bone, and the smallest is the stapes in the middle ear. Bone is not uniformly solid, but includes a tough matrix. This matrix makes up about 30% of the bone and the other 70% is of salts that give strength to it. The matrix is made up of between 90 and 95% collagen fibers, and the remainder is ground substance. The primary tissue of bone, bone tissue (osseous tissue), is relatively hard and lightweight. Its matrix is mostly made up of a composite material incorporating the inorganic mineral calcium phosphate in the chemical arrangement termed calcium hydroxylapatite (this is the bone mineral that gives bones their rigidity) and collagen, an elastic protein which improves fracture resistance. The collagen of bone is known as ossein. Bone is formed by the hardening of this matrix around entrapped cells. When these cells become entrapped from osteoblasts they become osteocytes. The hard outer layer of bones is composed of cortical bone also called compact bone being much denser than cancellous bone. It forms the hard exterior (cortex) of bones. The cortical bone gives bone its smooth, white, and solid appearance, and accounts for 80% of the total bone mass of an adult human skeleton. It facilitates bone's main functions: to support the whole body, protect organs, provide levers for movement, and store and release chemical elements, mainly calcium. It consists of multiple microscopic columns, each called an osteon. Each column is multiple layers of osteoblasts and osteocytes around a central canal called the haversian canal. Volkmann's canals at right angles connect the osteons together. The columns are metabolically active, and as bone is reabsorbed and created the nature and location of the cells within the osteon will change. Cortical bone is covered by a periosteum on its outer surface, and an endosteum on its inner surface. 
The endosteum is the boundary between the cortical bone and the cancellous bone. The primary anatomical and functional unit of cortical bone is the osteon. Cancellous bone also known as trabecular or spongy bone tissue is the internal tissue of the skeletal bone and is an open cell porous network. Cancellous bone has a higher surface-area-to-volume ratio than cortical bone because it is less dense. This makes it softer, and weaker but more flexible. The greater surface area also makes it suitable for metabolic activities such as the exchange of calcium ions. Cancellous bone is typically found at the ends of long bones, near to joints and within the interior of vertebrae. Cancellous bone is highly vascular and frequently contains red bone marrow where haematopoiesis, the production of blood cells, occurs. The primary anatomical and functional unit of cancellous bone is the trabecula. The trabeculae are aligned towards the mechanical load distribution that a bone experiences within long bones such as the femur. As far as short bones are concerned, trabecular alignment has been studied in the vertebral pedicle. Thin formations of osteoblasts covered in endosteum create an irregular network of spaces, known as trabeculae. Within these spaces are bone marrow and hematopoietic stem cells that give rise to platelets, red blood cells and white blood cells. Trabecular marrow is composed of a network of rod- and plate-like elements that make the overall organ lighter and allow room for blood vessels and marrow. Trabecular bone accounts for the remaining 20% of total bone mass but has nearly ten times the surface area of compact bone. Bone marrow, also known as myeloid tissue in red bone marrow, can be found in almost any bone that holds cancellous tissue. In newborns, all such bones are filled exclusively with red marrow or hematopoietic marrow, but as the child ages the hematopoietic fraction decreases in quantity and the fatty/ yellow fraction called marrow adipose tissue (MAT) increases in quantity. In adults, red marrow is mostly found in the bone marrow of the femur, the ribs, the vertebrae and pelvic bones. Bone is a metabolically active tissue composed of several types of cells. These cells include osteoblasts, which are involved in the creation and mineralization of bone tissue, osteocytes, and osteoclasts, which are involved in the reabsorption of bone tissue. Osteoblasts and osteocytes are derived from osteoprogenitor cells, but osteoclasts are derived from the same cells that differentiate to form macrophages and monocytes. Within the marrow of the bone there are also hematopoietic stem cells. These cells give rise to other cells, including white blood cells, red blood cells, and platelets. Osteocytes are mostly inactive osteoblasts. Osteocytes originate from osteoblasts that have migrated into and become trapped and surrounded by bone matrix that they themselves produced. The spaces they occupy are known as lacunae. Osteocytes have many processes that reach out to meet osteoblasts and other osteocytes probably for the purposes of communication. Osteocytes remain in contact with other cells in the bone through gap junctions--coupled cell processes--which pass through small channels in the bone matrix called the canaliculi. Bones consist of living cells embedded in a mineralized organic matrix. 
This matrix consists of organic components, mainly Type I collagen - "organic" referring to materials produced as a result of the human body - and inorganic components, primarily hydroxyapatite and other salts of calcium and phosphate. Above 30% of the acellular part of bone consists of the organic components, and 70% of salts. The collagen fibers give bone its tensile strength, and the interspersed crystals of hydroxyapatite give bone its compressive strength. These effects are synergistic. The inorganic composition of bone (bone mineral) is primarily formed from salts of calcium and phosphate, the major salt being hydroxyapatite (Ca10(PO4)6(OH)2). The exact composition of the matrix may change over time and with nutrition, with the ratio of calcium to phosphate varying between 1.3 and 2.0 (per weight), and trace minerals such as magnesium, sodium, potassium and carbonate also being found. Type I collagen composes 90-95% of the organic matrix, with remainder of the matrix being a homogenous liquid called ground substance consisting of proteoglycans such as hyaluronic acid and chondroitin sulfate , as well as non-collagenous proteins such as osteocalcin, osteopontin or bone sialoprotein . Collagen consists of strands of repeating units, which give bone tensile strength, and are arranged in an overlapping fashion that prevents shear stress. The function of ground substance is not fully known. Two types of bone can be identified microscopically according to the arrangement of collagen: woven and lamellar. Woven bone is produced when osteoblasts produce osteoid rapidly, which occurs initially in all fetal bones, but is later replaced by more resilient lamellar bone. In adults woven bone is created after fractures or in Paget's disease. Woven bone is weaker, with a smaller number of randomly oriented collagen fibers, but forms quickly; it is for this appearance of the fibrous matrix that the bone is termed woven. It is soon replaced by lamellar bone, which is highly organized in concentric sheets with a much lower proportion of osteocytes to surrounding tissue. Lamellar bone, which makes its first appearance in humans in the fetus during the third trimester, is stronger and filled with many collagen fibers parallel to other fibers in the same layer (these parallel columns are called osteons). In cross-section, the fibers run in opposite directions in alternating layers, much like in plywood, assisting in the bone's ability to resist torsion forces. After a fracture, woven bone forms initially and is gradually replaced by lamellar bone during a process known as "bony substitution." Compared to woven bone, lamellar bone formation takes place more slowly. The orderly deposition of collagen fibers restricts the formation of osteoid to about 1 to 2 µm per day. Lamellar bone also requires a relatively flat surface to lay the collagen fibers in parallel or concentric layers. The extracellular matrix of bone is laid down by osteoblasts, which secrete both collagen and ground substance. These synthesise collagen within the cell, and then secrete collagen fibrils. The collagen fibres rapidly polymerise to form collagen strands. At this stage they are not yet mineralised, and are called "osteoid". Around the strands calcium and phosphate precipitate on the surface of these strands, within a days to weeks becoming crystals of hydroxyapatite. In order to mineralise the bone, the osteoblasts secrete vesicles containing alkaline phosphatase. 
This cleaves the phosphate groups and acts as the foci for calcium and phosphate deposition. The vesicles then rupture and act as a centre for crystals to grow on. More particularly, bone mineral is formed from globular and plate structures. There are five types of bones in the human body: long, short, flat, irregular, and sesamoid. In the study of anatomy, anatomists use a number of anatomical terms to describe the appearance, shape and function of bones. Other anatomical terms are also used to describe the location of bones. Like other anatomical terms, many of these derive from Latin and Greek. Some anatomists still use Latin to refer to bones. The term "osseous", and the prefix "osteo-", referring to things related to bone, are still used commonly today. Some examples of terms used to describe bones include the term "foramen" to describe a hole through which something passes, and a "canal" or "meatus" to describe a tunnel-like structure. A protrusion from a bone can be called a number of terms, including a "condyle", "crest", "spine", "eminence", "tubercle" or "tuberosity", depending on the protrusion's shape and location. In general, long bones are said to have a "head", "neck", and "body". When two bones join together, they are said to "articulate". If the two bones have a fibrous connection and are relatively immobile, then the joint is called a "suture". The formation of bone is called ossification. During the fetal stage of development this occurs by two processes: intramembranous ossification and endochondral ossification. Intramembranous ossification involves the formation of bone from connective tissue whereas endochondral ossification involves the formation of bone from cartilage. Intramembranous ossification mainly occurs during formation of the flat bones of the skull but also the mandible, maxilla, and clavicles; the bone is formed from connective tissue such as mesenchyme tissue rather than from cartilage. The process includes: the development of the ossification center, calcification, trabeculae formation and the development of the periosteum. Endochondral ossification occurs in long bones and most other bones in the body; it involves the development of bone from cartilage. This process includes the development of a cartilage model, its growth and development, development of the primary and secondary ossification centers, and the formation of articular cartilage and the epiphyseal plates. Endochondral ossification begins with points in the cartilage called "primary ossification centers." They mostly appear during fetal development, though a few short bones begin their primary ossification after birth. They are responsible for the formation of the diaphyses of long bones, short bones and certain parts of irregular bones. Secondary ossification occurs after birth, and forms the epiphyses of long bones and the extremities of irregular and flat bones. The diaphysis and both epiphyses of a long bone are separated by a growing zone of cartilage (the epiphyseal plate). At skeletal maturity (18 to 25 years of age), all of the cartilage is replaced by bone, fusing the diaphysis and both epiphyses together (epiphyseal closure). In the upper limbs, only the diaphyses of the long bones and scapula are ossified. The epiphyses, carpal bones, coracoid process, medial border of the scapula, and acromion are still cartilaginous. 
The following steps are followed in the conversion of cartilage to bone: Functions of Bone Bones have a variety of functions: Bones serve a variety of mechanical functions. Together the bones in the body form the skeleton. They provide a frame to keep the body supported, and an attachment point for skeletal muscles, tendons, ligaments and joints, which function together to generate and transfer forces so that individual body parts or the whole body can be manipulated in three-dimensional space (the interaction between bone and muscle is studied in biomechanics). Bones protect internal organs, such as the skull protecting the brain or the ribs protecting the heart and lungs. Because of the way that bone is formed, bone has a high compressive strength of about 170 MPa (1800 kgf/cm²), poor tensile strength of 104-121 MPa, and a very low shear stress strength (51.6 MPa). This means that bone resists pushing (compressional) stress well, resists pulling (tensional) stress less well, and only poorly resists shear stress (such as due to torsional loads). While bone is essentially brittle, bone does have a significant degree of elasticity, contributed chiefly by collagen. The macroscopic yield strength of cancellous bone has been investigated using high-resolution computer models. The cancellous part of bones contains bone marrow. Bone marrow produces blood cells in a process called hematopoiesis. Blood cells that are created in bone marrow include red blood cells, platelets and white blood cells. Progenitor cells such as the hematopoietic stem cell divide in a process called mitosis to produce precursor cells. These include precursors which eventually give rise to white blood cells, and erythroblasts which give rise to red blood cells. Unlike red and white blood cells, created by mitosis, platelets are shed from very large cells called megakaryocytes. This process of progressive differentiation occurs within the bone marrow. After the cells are matured, they enter the circulation. Every day, over 2.5 billion red blood cells and platelets, and 50-100 billion granulocytes are produced in this way. As well as creating cells, bone marrow is also one of the major sites where defective or aged red blood cells are destroyed. Determined by the species, age, and the type of bone, bone cells make up to 15 percent of the bone. Growth factor storage: mineralized bone matrix stores important growth factors such as insulin-like growth factors, transforming growth factor, bone morphogenetic proteins and others. Bone is constantly being created and replaced in a process known as remodeling. This ongoing turnover of bone is a process of resorption followed by replacement of bone with little change in shape. This is accomplished through osteoblasts and osteoclasts. Cells are stimulated by a variety of signals, and together are referred to as a remodeling unit. Approximately 10% of the skeletal mass of an adult is remodelled each year. The purpose of remodeling is to regulate calcium homeostasis, repair microdamaged bones from everyday stress, and to shape the skeleton during growth. Repeated stress, such as weight-bearing exercise or bone healing, results in the bone thickening at the points of maximum stress (Wolff's law). It has been hypothesized that this is a result of bone's piezoelectric properties, which cause bone to generate small electrical potentials under stress. 
The action of osteoblasts and osteoclasts are controlled by a number of chemical enzymes that either promote or inhibit the activity of the bone remodeling cells, controlling the rate at which bone is made, destroyed, or changed in shape. The cells also use paracrine signalling to control the activity of each other. For example, the rate at which osteoclasts resorb bone is inhibited by calcitonin and osteoprotegerin. Calcitonin is produced by parafollicular cells in the thyroid gland, and can bind to receptors on osteoclasts to directly inhibit osteoclast activity. Osteoprotegerin is secreted by osteoblasts and is able to bind RANK-L, inhibiting osteoclast stimulation. Osteoblasts can also be stimulated to increase bone mass through increased secretion of osteoid and by inhibiting the ability of osteoclasts to break down osseous tissue. Increased secretion of osteoid is stimulated by the secretion of growth hormone by the pituitary, thyroid hormone and the sex hormones (estrogens and androgens). These hormones also promote increased secretion of osteoprotegerin. Osteoblasts can also be induced to secrete a number of cytokines that promote reabsorbtion of bone by stimulating osteoclast activity and differentiation from progenitor cells. Vitamin D, parathyroid hormone and stimulation from osteocytes induce osteoblasts to increase secretion of RANK-ligand and interleukin 6, which cytokines then stimulate increased reabsorption of bone by osteoclasts. These same compounds also increase secretion of macrophage colony-stimulating factor by osteoblasts, which promotes the differentiation of progenitor cells into osteoclasts, and decrease secretion of osteoprotegerin. Bone volume is determined by the rates of bone formation and bone resorption. Recent research has suggested that certain growth factors may work to locally alter bone formation by increasing osteoblast activity. Numerous bone-derived growth factors have been isolated and classified via bone cultures. These factors include insulin-like growth factors I and II, transforming growth factor-beta, fibroblast growth factor, platelet-derived growth factor, and bone morphogenetic proteins. Evidence suggests that bone cells produce growth factors for extracellular storage in the bone matrix. The release of these growth factors from the bone matrix could cause the proliferation of osteoblast precursors. Essentially, bone growth factors may act as potential determinants of local bone formation. Research has suggested that cancellous bone volume in postemenopausal osteoporosis may be determined by the relationship between the total bone forming surface and the percent of surface resorption. A number of diseases can affect bone, including arthritis, fractures, infections, osteoporosis and tumours. Conditions relating to bone can be managed by a variety of doctors, including rheumatologists for joints, and orthopedic surgeons, who may conduct surgery to fix broken bones. Other doctors, such as rehabilitation specialists may be involved in recovery, radiologists in interpreting the findings on imaging, and pathologists in investigating the cause of the disease, and family doctors may play a role in preventing complications of bone disease such as osteoporosis. When a doctor sees a patient, a history and exam will be taken. Bones are then often imaged, called radiography. This might include ultrasound X-ray, CT scan, MRI scan and other imaging such as a Bone scan, which may be used to investigate cancer. 
Other tests such as a blood test for autoimmune markers may be taken, or a synovial fluid aspirate may be taken. In normal bone, fractures occur when there is significant force applied, or repetitive trauma over a long time. Fractures can also occur when a bone is weakened, such as with osteoporosis, or when there is a structural problem, such as when the bone remodels excessively (such as Paget's disease) or is the site of the growth of cancer. Common fractures include wrist fractures and hip fractures, associated with osteoporosis, vertebral fractures associated with high-energy trauma and cancer, and fractures of long-bones. Not all fractures are painful. When serious, depending on the fractures type and location, complications may include flail chest, compartment syndromes or fat embolism. Compound fractures involve the bone's penetration through the skin. Some complex fractures can be treated by the use of bone grafting procedures that replace missing bone portions. Fractures and their underlying causes can be investigated by X-rays, CT scans and MRIs. Fractures are described by their location and shape, and several classification systems exist, depending on the location of the fracture. A common long bone fracture in children is a Salter-Harris fracture. When fractures are managed, pain relief is often given, and the fractured area is often immobilised. This is to promote bone healing. In addition, surgical measures such as internal fixation may be used. Because of the immobilisation, people with fractures are often advised to undergo rehabilitation. There are several types of tumour that can affect bone; examples of benign bone tumours include osteoma, osteoid osteoma, osteochondroma, osteoblastoma, enchondroma, giant cell tumor of bone, and aneurysmal bone cyst. Cancer can arise in bone tissue, and bones are also a common site for other cancers to spread (metastasise) to. Cancers that arise in bone are called "primary" cancers, although such cancers are rare. Metastases within bone are "secondary" cancers, with the most common being breast cancer, lung cancer, prostate cancer, thyroid cancer, and kidney cancer. Secondary cancers that affect bone can either destroy bone (called a "lytic" cancer) or create bone (a "sclerotic" cancer). Cancers of the bone marrow inside the bone can also affect bone tissue, examples including leukemia and multiple myeloma. Bone may also be affected by cancers in other parts of the body. Cancers in other parts of the body may release parathyroid hormone or parathyroid hormone-related peptide. This increases bone reabsorption, and can lead to bone fractures. Bone tissue that is destroyed or altered as a result of cancers is distorted, weakened, and more prone to fracture. This may lead to compression of the spinal cord, destruction of the marrow resulting in bruising, bleeding and immunosuppression, and is one cause of bone pain. If the cancer is metastatic, then there might be other symptoms depending on the site of the original cancer. Some bone cancers can also be felt. Cancers of the bone are managed according to their type, their stage, prognosis, and what symptoms they cause. Many primary cancers of bone are treated with radiotherapy. Cancers of bone marrow may be treated with chemotherapy, and other forms of targeted therapy such as immunotherapy may be used.Palliative care, which focuses on maximising a person's quality of life, may play a role in management, particularly if the likelihood of survival within five years is poor. 
Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined in women by the World Health Organization as a bone mineral density of 2.5 standard deviations below peak bone mass, relative to the age and sex-matched average. This density is measured using dual energy X-ray absorptiometry (DEXA), with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who have developed osteoporosis and be at risk of fracture. Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium supplements may also be advised, as may Vitamin D. When medication is used, it may include bisphosphonates, Strontium ranelate, and hormone replacement therapy. Osteopathic medicine is a school of medical thought originally developed based on the idea of the link between the musculoskeletal system and overall health, but now very similar to mainstream medicine. As of 2012 , over 77,000 physicians in the United States are trained in Osteopathic medicine colleges. The study of bones and teeth is referred to as osteology. It is frequently used in anthropology, archeology and forensic science for a variety of tasks. This can include determining the nutritional, health, age or injury status of the individual the bones were taken from. Preparing fleshed bones for these types of studies can involve the process of maceration. Typically anthropologists and archeologists study bone tools made by Homo sapiens and Homo neanderthalensis. Bones can serve a number of uses such as projectile points or artistic pigments, and can also be made from external bones such as antlers. Bird skeletons are very lightweight. Their bones are smaller and thinner, to aid flight. Among mammals, bats come closest to birds in terms of bone density, suggesting that small dense bones are a flight adaptation. Many bird bones have little marrow due to their being hollow. The extinct predatory fish Dunkleosteus had sharp edges of hard exposed bone along its jaws. The proportion of cortical bone that is 80% in the human skeleton may be much lower in other animals, especially in marine mammals and marine turtles, or in various Mesozoic marine reptiles, such as ichthyosaurs, among others. Many bone diseases that affect humans also affect other vertebrates - an example of one disorder is skeletal flurosis. Bones from slaughtered animals have a number of uses. In prehistoric times, they have been used for making bone tools. They have further been used in bone carving, already important in prehistoric art, and also in modern time as crafting materials for buttons, beads, handles, bobbins, calculation aids, head nuts, dice, poker chips, pick-up sticks, ornaments, etc. A special genre is scrimshaw. Bone glue can be made by prolonged boiling of ground or cracked bones, followed by filtering and evaporation to thicken the resulting fluid. 
Historically once important, bone glue and other animal glues today have only a few specialized uses, such as in antiques restoration. Essentially the same process, with further refinement, thickening and drying, is used to make gelatin. Broth is made by simmering several ingredients for a long time, traditionally including bones. Ground bones are used as an organic phosphorus-nitrogen fertilizer and as additive in animal feed. Bones, in particular after calcination to bone ash, are used as source of calcium phosphate for the production of bone china and previously also phosphorus chemicals. Various cultures throughout history have adopted the custom of shaping an infant's head by the practice of artificial cranial deformation. A widely practised custom in China was that of foot binding to limit the normal growth of the foot.
Math working mats to explore place value, operations, and fractions. Use this teaching resource in the classroom when working with number and operation concepts. The working mats are available for: Students complete activities including: - drawing arrays - completing fact families - demonstrating commutative property - finding equivalence between fractions - writing a story - drawing a picture. Print out and laminate the mats for students to use repeatedly with a dry erase marker. Alternatively, provide each student with a black and white version to paste into their math journal. Common Core Curriculum alignment Read and write numbers to 1000 using base-ten numerals, number names, and expanded form. Fluently add and subtract within 100 using strategies based on place value, properties of operations, and/or the relationship between addition and subtraction. Add up to four two-digit numbers using strategies based on place value and properties of operations. Explain why addition and subtraction strategies work, using place value and the properties of operations. Apply properties of operations as strategies to multiply and divide.2 Examples: If 6 × 4 = 24 is known, then 4 × 6 = 24 is also known. (Commutative property of multiplication.) 3 × 5 × 2 can be found by 3 × 5 = 15, then 15 × 2 = 30, or by 5 × ... Understand division as an unknown-factor problem. For example, find 32 ÷ 8 by finding the number that makes 32 when multiplied by 8. Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g., knowing that 8 × 5 = 40, one knows 40 ÷ 5 = 8) or properties of operations. By the end of Grade 3, know from memory all pr... Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction. Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons. Multiply a whole number of up to four digits by a one-digit whole number, and multiply two two-digit numbers, using strategies based on place value and the properties of operations. Illustrate and explain the calculation by using equations, rectangul... Find whole-number quotients and remainders with up to four-digit dividends and one-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explai... Explain why a fraction a/b is equivalent to a fraction (n × a)/(n × b) by using visual fraction models, with attention to how the number and size of the parts differ even though the two fractions themselves are the same size. Use this principle to ... Use decimal notation for fractions with denominators 10 or 100. For example, rewrite 0.62 as 62/100; describe a length as 0.62 meters; locate 0.62 on a number line diagram. Read and write decimals to thousandths using base-ten numerals, number names, and expanded form, e.g., 347.392 = 3 × 100 + 4 × 10 + 7 × 1 + 3 × (1/10) + 9 × (1/100) + 2 × (1/1000). Find whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and expl... 
Add, subtract, multiply, and divide decimals to hundredths, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written me... We create premium quality, downloadable teaching resources for primary/elementary school teachers that make classrooms buzz!
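For classrooms that also dabble in code, here is a minimal Python sketch of the fraction-equivalence idea referenced in the standards above (a/b is equivalent to (n × a)/(n × b)); the particular numbers are arbitrary examples, not part of the resource itself.

# Fraction equivalence with Python's standard fractions module.
from fractions import Fraction

a, b, n = 1, 2, 3
original = Fraction(a, b)            # 1/2
scaled = Fraction(n * a, n * b)      # 3/6 -- automatically reduced to 1/2
print(original == scaled)            # True: the two fractions are the same size

# The same idea for the decimal-notation standard: 62/100 written as 0.62
print(float(Fraction(62, 100)))      # 0.62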
In Python, as in any other programming language, the absolute value of a number means removing any negative sign in front of the number and thinking of all numeric values as positive (or zero). In Python, we can get the absolute value of any number using the built-in functions abs() and fabs(). In this article, you will learn about the abs() function. The main purpose of this function is to return the absolute value of the number (variable) passed as an argument to it. How Can We Find the Absolute Value Using the abs() Function? The abs() function in Python is used for obtaining the Python absolute value, or the positive value, of a number. We can get the absolute value of an integer, complex number or floating-point number using the abs() function. If the argument x is an integer or a float, then the resulting absolute value will be an integer or a float respectively. If the argument x is a complex number, the return value will be the magnitude, which is a floating-point number. What Does the abs() Function Work On? The abs() function works on the following numbers: 1. Integers, for example 6, -6, 1 etc. 2. Floating-point numbers, for example 5.34, -1.44 etc. 3. Complex numbers, for example 3+4j, 4+6j etc. Basic Syntax of the abs() Function in Python The syntax of the abs() function is: abs( x ) x can be a number, or an expression that evaluates to a number. This function takes only one argument, which is the number whose absolute value you want to find out. The data type of this argument can be any one of the following: - Integer - Floating-point number - Complex number The abs() function returns the absolute value of the given number. - For an integer value, it returns an integer. - For a float value, it returns a floating-point value. - For a complex number, it returns the magnitude, which is a floating-point number. abs() Function Compatibility The abs() function is available and compatible with both Python 2.x and Python 3.x. Absolute Value Examples Using the abs() Function in Python We can get the Python absolute value of three data types: integer, float, and complex. Note: In Python, the imaginary part of a complex number is denoted by j. Example 1: Python program to return the absolute value of an integer- int_num = -40 print("The absolute value of an integer number is:", abs(int_num)) The absolute value of an integer number is: 40 Example 2: Python program to return the absolute value of a floating-point number- float_num = -65.50 print("The absolute value of a float number is:", abs(float_num)) The absolute value of a float number is: 65.5 Example 3: Python program to return the absolute value of a complex number- This example shows the magnitude of a complex number. #A Complex Number complexNumber = 12 + 9j #Finding Magnitude of the Complex Number magnitude = abs(complexNumber) #Printing magnitude to console print("The Magnitude of " + str(complexNumber) + " is " + str(magnitude)) The Magnitude of (12+9j) is 15.0 Example 4: Python absolute value of a list inputList = [12, 11, -9, 36] mappedList = map(abs, inputList) print(list(mappedList)) [12, 11, 9, 36] Does the abs() Function Work for the String Data Type? If you provide an argument of any type other than a number, you will get a TypeError. str_abs = 'PythonPool' absVal = abs(str_abs) print("Absolute value of this string is", absVal) You will get an error like: TypeError: bad operand type for abs(): 'str' Is There Any Other Function to Get the Absolute Value in Python? Yes, we can get the absolute value of a number using the fabs() function. 
The fabs() method is also available in Python, but it is defined in the math module, so to use the fabs() method we need to import the math module first. Difference Between the abs() and fabs() Methods There are mainly two differences between the abs() and fabs() methods: - The abs() method is a standard built-in function, so there is no need to import a module. The fabs() method, on the other hand, is defined in the math module, so we need to import the math module first. - The abs() method returns either an integer value or a float value based on the type of the given number. The fabs() method returns only a float value, no matter whether the given number is an integer or a float. We learned to find the Python absolute value of a number or a numeric expression using the abs() function of Python. With absolute values, we convert negative numbers to positive ones. This helps when computing “distances” from positions. With abs, a built-in, we do this with no extra code. If you still have any doubts or suggestions, do comment down below.
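As a quick illustration of the two differences listed above, here is a minimal sketch comparing abs() and math.fabs(); the values are arbitrary examples.

# Comparing abs() with math.fabs()
import math

print(abs(-7))           # 7    -> abs() keeps the integer type
print(math.fabs(-7))     # 7.0  -> fabs() always returns a float
print(abs(-7.25))        # 7.25
print(math.fabs(-7.25))  # 7.25

print(abs(3 + 4j))       # 5.0  -> abs() also handles complex magnitudes
# math.fabs(3 + 4j) would raise a TypeError, since fabs() only accepts real numbers.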
Greenland Ice Is Discharging At Fast Rate: NASA Lee Rannals for redOrbit.com – Your Universe Online New data obtained by NASA’s Operation IceBridge program is shedding new light on how ice sheets in Greenland are changing. Scientists used satellite observations and ice thickness measurements made by Operation IceBridge to determine the rate at which ice flows through Greenland’s glaciers into the ocean. The findings have shown how glacier flow is affecting the Greenland Ice Sheet and that this process is being dominated by a small number of glaciers. Ice sheets grow as snow accumulates and gets compacted into ice, but they lose mass when ice and snow at the surface melts and runs off. When this warm up takes place, ice also discharges into the ocean. Scientists calculate the difference between yearly snowfall on an ice sheet and the sum of melting and discharge and label the total as “mass budget.” Ideally, this mass budget would balance out year over year, but for years the Greenland Ice Sheet has had a negative mass budget, meaning it has been losing its mass overall. Ice discharge is controlled by ice thickness, glacier valley shape and ice velocity. Researchers in the study used data from IceBridge’s Multichannel Coherent Radar Depth Sounder (MCoRDS) to determine ice thickness and sub-glacial terrain. They also used images from satellites like Landsat and Terra to calculate the ice velocity. “Glacier discharge may vary considerably between years,” Ellyn Enderlin, glaciologist at the University of Maine, Orono, Maine and the study’s lead author of the paper published in Geophysical Research Letters, said in a statement. “Annual changes in speed and thickness must be taken into account.” The researchers were able to calculate each glacier’s contribution to Greenland’s mass loss and the total volume of ice being discharged from the Greenland Ice Sheet. They found that of the 178 glaciers studied, 15 accounted for more than three-quarters of ice discharged since 2000, and four accounted for about half. The team also found that the size of these basins did not correlate with glacier discharge rate, which shuffled up the order of Greenland’s largest glaciers. NASA’s IceBridge study proved to be a valuable source of intel for the researchers studying the ice sheet. “IceBridge has collected so much data on elevation and thickness that we can now do analysis down to the individual glacier level and do it for the entire ice sheet,” Michael Studinger, IceBridge project scientist at NASA’s Goddard Space Flight Center, said in a statement. “We can now quantify contributions from the different processes that contribute to ice loss.”
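As a back-of-the-envelope illustration of the "mass budget" bookkeeping described above (yearly snowfall minus the sum of melting and discharge), here is a tiny Python sketch; the gigatonne figures are made-up placeholders, not measurements from the study.

# Toy mass-budget calculation: budget = snowfall - (melt + discharge)
snowfall_gt = 600.0    # hypothetical yearly accumulation, gigatonnes
melt_gt = 350.0        # hypothetical surface melt and runoff
discharge_gt = 450.0   # hypothetical ice discharged into the ocean

mass_budget_gt = snowfall_gt - (melt_gt + discharge_gt)
print(f"Mass budget: {mass_budget_gt:+.1f} Gt")  # a negative value means the ice sheet is losing mass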
Are you prepared to delve into the captivating realm of supercomputers? In a domain where intricacy is unraveled and predictions come to life, these formidable machines serve as the key to unveiling the secrets of our universe. From simulating weather patterns and predicting climate change to deciphering genetic codes and exploring space, supercomputers stand as the vanguards of scientific progress. Join us on an exhilarating journey as we explore their immense capabilities, understanding how they elevate simulations and predictions to unprecedented heights. Exploring the Evolution of Supercomputers Supercomputers, also known as high-performance computing (HPC) systems, have been driving technological advancements for decades. These exceptionally powerful machines are designed to conduct intricate calculations and process vast datasets at remarkable speeds, making them indispensable in scientific research and data analysis. The roots of supercomputing trace back to the 1940s, with the advent of electronic computers. However, it wasn’t until the 1960s that computer scientist Seymour Cray, often dubbed the “father of supercomputing,” coined the term “supercomputer.” Cray’s groundbreaking designs, including the CDC 6600 and CDC 8600, marked a pivotal moment in supercomputing history. The subsequent decades witnessed rapid advancements, with companies like IBM, Fujitsu, and Cray Inc. continually pushing the boundaries to create faster and more potent machines. In 1985, Cray’s Cray-2 set a milestone as the world’s fastest computer, achieving a peak performance of 1.9 gigaflops (billion floating-point operations per second). How Supercomputers Operate Supercomputers, the epitome of computing prowess, can execute trillions of calculations per second. They serve as critical tools in scientific research, weather forecasting, financial modeling, and beyond. But what exactly are supercomputers, and how do they operate? In essence, supercomputers are highly specialized machines tailored to handle complex and large-scale computational tasks. Their differentiating factors lie in processing power, memory capacity, and the ability to process data at extraordinary speeds. At the heart of every supercomputer lies its central processing unit (CPU). Supercomputers utilize multiple CPUs, sometimes numbering in the thousands, working in parallel to conduct simultaneous computations. This parallel processing capability enables the rapid and efficient processing of massive datasets. Coupled with powerful processors, supercomputers boast substantial random access memory (RAM), facilitating the storage and retrieval of large datasets without performance degradation. The synergy of robust processors and expansive memory renders supercomputers ideal for tackling intricate simulations and predictions. A pivotal component of supercomputers is their high-speed interconnect system, which links individual processing units seamlessly. This efficient communication network ensures smooth collaboration between CPUs during computation processes. Advantages of Harnessing Supercomputers for Simulations and Predictions Supercomputers, with their unparalleled computational might, have transformed the landscape of simulations and predictions. These cutting-edge systems process vast amounts of data at unprecedented speeds, proving invaluable to scientists, engineers, and researchers. Let’s delve into the key advantages of utilizing supercomputers for simulations and predictions: 1. 
Increased Speed and Efficiency: Supercomputers excel at processing billions of calculations per second, expediting complex simulations and predictions. This not only saves time but also allows for more iterations and refinements, enhancing the accuracy of the final outcomes. 2. Handling Large Datasets: Simulations and predictions often involve massive datasets with multiple variables, overwhelming traditional computers. Supercomputers, through parallel processing, efficiently tackle these large datasets, significantly reducing the time required for completion (a small illustrative sketch of this divide-and-combine pattern appears at the end of this article). 3. Accurate Results: Supercomputers leverage precise algorithms and sophisticated mathematical models to generate accurate simulations. Their computational power enables the incorporation of numerous parameters, resulting in more precise outcomes compared to traditional methods. 4. Complex Simulations: With technological advancements, real-world problems have become increasingly complex. Supercomputers rise to the challenge, handling intricate simulations involving multiple factors and interactions, such as weather patterns, fluid dynamics, and molecular interactions. 5. Advancements in Research: The integration of supercomputers has opened new frontiers in various fields, including astronomy, climate science, aerospace engineering, and drug discovery. Researchers can conduct extensive simulations and predictions, leading to groundbreaking discoveries and advancements. 6. Cost-Effective Solutions: While the initial investment in supercomputers is substantial, their increasing cost-effectiveness over the years is noteworthy. Enhanced processing power, reduced size, and energy consumption, coupled with shared resources in universities and research institutions, contribute to cost-effective accessibility. Limitations and Challenges of Supercomputers Despite their formidable capabilities, supercomputers grapple with limitations and challenges. Recognizing these hurdles is crucial for a comprehensive understanding. 1. Cost: The development, operation, and maintenance of supercomputers incur exorbitant expenses, rendering them inaccessible to many organizations. Additionally, their substantial energy consumption contributes to ongoing operational costs. 2. Physical Space: Supercomputers, often as expansive as a football field, present challenges in terms of physical space and specialized infrastructure. Smaller institutions or universities with limited resources find it challenging to accommodate such colossal machines. 3. Cooling Requirements: Intense processing generates significant heat, necessitating sophisticated cooling systems. Liquid cooling or chilled water systems add to operational costs, addressing the challenge of heat dissipation. 4. Programming Complexity: Supercomputers demand specialized programming techniques and algorithms for efficient utilization. Developing software applications for these complex machines poses a challenge, even for seasoned programmers. 5. Job Scheduling: Sharing resources among multiple users requires efficient job scheduling to manage priorities. Ensuring equitable access without causing delays for others is a significant challenge for supercomputer schedulers. 6. Maintenance and Upgrades: Constant maintenance and upgrades are imperative to keep supercomputers at peak performance. These activities involve significant costs and potential downtime, impacting ongoing research projects. 7. Energy Efficiency: The colossal energy consumption of supercomputers raises concerns about their environmental impact.
The demand for more energy-efficient alternatives grows, prompting innovation in sustainable supercomputing. Real-world applications of supercomputers Supercomputers, with their extraordinary processing power and advanced algorithms, find application across diverse industries. 1. Weather Forecasting: Supercomputers simulate complex atmospheric conditions, enabling more accurate weather predictions. This is critical for anticipating severe weather events and safeguarding lives and property. 2. Climate Studies: Vital for studying climate patterns, supercomputers analyze massive datasets to better understand and predict climate changes, informing decisions on climate change mitigation. 3. Drug Discovery: In the pharmaceutical industry, supercomputers expedite drug discovery by analyzing molecular structures and simulating their behavior in various environments. 4. Space Exploration: Space agencies utilize supercomputers to model space environments and simulate missions, preparing for potential challenges in space exploration. 5. Oil and Gas Exploration: Supercomputers play a pivotal role in the oil and gas industry by processing seismic data to locate new reserves accurately. 6. Financial Modeling: In finance, supercomputers analyze vast datasets in real time, aiding high-frequency trading firms in making rapid and informed trading decisions. In short, supercomputers stand as marvels of modern technology, capable of unraveling complexity and enhancing predictions through lightning-fast computation.
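As a desktop-scale illustration of the parallel-processing idea described above (split a large dataset into chunks, process the chunks on separate workers, then combine the partial results), here is a minimal Python sketch. It is only a toy analogy, not actual HPC code: real supercomputers coordinate thousands of nodes with frameworks such as MPI, and the dataset, worker count, and helper function below are invented for the example.

from multiprocessing import Pool

def partial_sum(chunk):
    # Worker: process one slice of the dataset (here, just sum its squares).
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))   # stand-in for a large dataset
    n_workers = 4                   # a supercomputer would use many thousands
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    chunks[-1].extend(data[n_workers * size:])   # any leftover goes to the last chunk

    with Pool(n_workers) as pool:   # run the chunks in parallel processes
        results = pool.map(partial_sum, chunks)

    print(sum(results))             # combine the partial results

The same divide, compute, and combine pattern is what allows the large datasets mentioned above to be processed in a fraction of the single-processor time.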
ESA Science & Technology - Hubble New results from the NASA/ESA Hubble Space Telescope suggest the formation of the first stars and galaxies in the early Universe took place sooner than previously thought. Hubble finds that "distance" from the brightest stars is key to preserving primordial discs [heic2009] The NASA/ESA Hubble Space Telescope was used to conduct a three-year study of the crowded, massive and young star cluster Westerlund 2. This is the first time that astronomers have analysed an extremely dense star cluster to study which environments are favourable to planet formation. The NASA/ESA Hubble Space Telescope has provided astronomers with the sharpest view yet of the breakup of Comet C/2019 Y4 (ATLAS). The telescope resolved roughly 30 fragments of the fragile comet on 20 April and 25 pieces on 23 April. The NASA/ESA Hubble Space Telescope’s iconic images and scientific breakthroughs have redefined our view of the Universe. To commemorate three decades of scientific discoveries, this image is one of the most photogenic examples of the many turbulent stellar nurseries the telescope has observed during its 30-year lifetime. What astronomers thought was a planet beyond our solar system, has now seemingly vanished from sight. Astronomers now suggest that a full-grown planet never existed in the first place. New data from the NASA/ESA Hubble Space Telescope have provided the strongest evidence yet for mid-sized black holes in the Universe. Hubble confirms that this "intermediate-mass" black hole dwells inside a dense star cluster.
Relativism is the idea that views are relative to differences in perception and consideration. There is no universal, objective truth according to relativism; rather each point of view has its own truth. The major categories of relativism vary in their degree of scope and controversy. Moral relativism encompasses the differences in moral judgments among people and cultures. Truth relativism is the doctrine that there are no absolute truths, i.e., that truth is always relative to some particular frame of reference, such as a language or a culture (cultural relativism). Descriptive relativism seeks to describe the differences among cultures and people without evaluation, while normative relativism evaluates the morality or truthfulness of views within a given framework. Forms of relativism Anthropological versus philosophical relativism Anthropological relativism refers to a methodological stance, in which the researcher suspends (or brackets) his or her own cultural biases while attempting to understand beliefs and behaviors in their local contexts. This has become known as methodological relativism, and concerns itself specifically with avoiding ethnocentrism or the application of one's own cultural standards to the assessment of other cultures. This is also the basis of the so-called "emic" and "etic" distinction, in which: - An emic or insider account of behavior is a description of a society in terms that are meaningful to the participant or actor's own culture; an emic account is therefore culture-specific, and typically refers to what is considered "common sense" within the culture under observation. - An etic or outsider account is a description of a society by an observer, in terms that can be applied to other cultures; that is, an etic account is culturally neutral, and typically refers to the conceptual framework of the social scientist. (This is complicated when it is scientific research itself that is under study, or when there is theoretical or terminological disagreement within the social sciences.) Philosophical relativism, in contrast, asserts that the truth of a proposition depends on the metaphysical, or theoretical frame, or the instrumental method, or the context in which the proposition is expressed, or on the person, groups, or culture who interpret the proposition. Descriptive versus normative relativism The concept of relativism also has importance both for philosophers and for anthropologists in another way. In general, anthropologists engage in descriptive relativism ("how things are" or "how things seem"), whereas philosophers engage in normative relativism ("how things ought to be"), although there is some overlap (for example, descriptive relativism can pertain to concepts, normative relativism to truth). Descriptive relativism assumes that certain cultural groups have different modes of thought, standards of reasoning, and so forth, and it is the anthropologist's task to describe, but not to evaluate the validity of these principles and practices of a cultural group. It is possible for an anthropologist in his or her fieldwork to be a descriptive relativist about some things that typically concern the philosopher (e.g., ethical principles) but not about others (e.g., logical principles). 
However, the descriptive relativist's empirical claims about epistemic principles, moral ideals and the like are often countered by anthropological arguments that such things are universal, and much of the recent literature on these matters is explicitly concerned with the extent of, and evidence for, cultural or moral or linguistic or human universals. The fact that the various species of descriptive relativism are empirical claims, may tempt the philosopher to conclude that they are of little philosophical interest, but there are several reasons why this isn't so. First, some philosophers, notably Kant, argue that certain sorts of cognitive differences between human beings (or even all rational beings) are impossible, so such differences could never be found to obtain in fact, an argument that places a priori limits on what empirical inquiry could discover and on what versions of descriptive relativism could be true. Second, claims about actual differences between groups play a central role in some arguments for normative relativism (for example, arguments for normative ethical relativism often begin with claims that different groups in fact have different moral codes or ideals). Finally, the anthropologist's descriptive account of relativism helps to separate the fixed aspects of human nature from those that can vary, and so a descriptive claim that some important aspect of experience or thought does (or does not) vary across groups of human beings tells us something important about human nature and the human condition. Normative relativism concerns normative or evaluative claims that modes of thought, standards of reasoning, or the like are only right or wrong relative to a framework. ‘Normative’ is meant in a general sense, applying to a wide range of views; in the case of beliefs, for example, normative correctness equals truth. This does not mean, of course, that framework-relative correctness or truth is always clear, the first challenge being to explain what it amounts to in any given case (e.g., with respect to concepts, truth, epistemic norms). Normative relativism (say, in regard to normative ethical relativism) therefore implies that things (say, ethical claims) are not simply true in themselves, but only have truth values relative to broader frameworks (say, moral codes). (Many normative ethical relativist arguments run from premises about ethics to conclusions that assert the relativity of truth values, bypassing general claims about the nature of truth, but it is often more illuminating to consider the type of relativism under question directly.) Postmodernism and relativism The term "relativism" often comes up in debates over postmodernism, poststructuralism and phenomenology. Critics of these perspectives often identify advocates with the label "relativism". For example, the Sapir–Whorf hypothesis is often considered a relativist view because it posits that linguistic categories and structures shape the way people view the world. Stanley Fish has defended postmodernism and relativism. These perspectives do not strictly count as relativist in the philosophical sense, because they express agnosticism on the nature of reality and make epistemological rather than ontological claims. Nevertheless, the term is useful to differentiate them from realists who believe that the purpose of philosophy, science, or literary critique is to locate externally true meanings. 
Important philosophers and theorists such as Michel Foucault, Max Stirner, political movements such as post-anarchism or post-Marxism can also be considered as relativist in this sense - though a better term might be social constructivist. The spread and popularity of this kind of "soft" relativism varies between academic disciplines. It has wide support in anthropology and has a majority following in cultural studies. It also has advocates in political theory and political science, sociology, and continental philosophy (as distinct from Anglo-American analytical philosophy). It has inspired empirical studies of the social construction of meaning such as those associated with labelling theory, which defenders can point to as evidence of the validity of their theories (albeit risking accusations of performative contradiction in the process). Advocates of this kind of relativism often also claim that recent developments in the natural sciences, such as Heisenberg's uncertainty principle, quantum mechanics, chaos theory and complexity theory show that science is now becoming relativistic. However, many scientists who use these methods continue to identify as realist or post-positivist, and some sharply criticize the association. Related and contrasting positions Relationism is the theory that there are only relations between individual entities, and no intrinsic properties. Despite the similarity in name, it is held by some to be a position distinct from relativism—for instance, because "statements about relational properties [...] assert an absolute truth about things in the world". On the other hand, others wish to equate relativism, relationism and even relativity, which is a precise theory of relationships between physical objects: Nevertheless, "This confluence of relativity theory with relativism became a strong contributing factor in the increasing prominence of relativism". Whereas previous investigations of science only sought sociological or psychological explanations of failed scientific theories or pathological science, the 'strong programme' is more relativistic, assessing scientific truth and falsehood equally in a historic and cultural context. Relativism is not skepticism, which superficially resembles relativism, because they both doubt absolute notions of truth. However, whereas skeptics go on to doubt all notions of truth, relativists replace absolute truth with a positive theory of many equally valid relative truths. For the relativist, there is no more to truth than the right context, or the right personal or cultural belief, so there is a lot of truth in the world. Catholic Church and relativism According to the Church and to some theologians, relativism, as a denial of absolute truth, leads to moral license and a denial of the possibility of sin and of God. Whether moral or epistemological, relativism constitutes a denial of the capacity of the human mind and reason to arrive at truth. Truth, according to Catholic theologians and philosophers (following Aristotle) consists of adequatio rei et intellectus, the correspondence of the mind and reality. Another way of putting it states that the mind has the same form as reality. This means when the form of the computer in front of someone (the type, color, shape, capacity, etc.) is also the form that is in their mind, then what they know is true because their mind corresponds to objective reality. 
The denial of an absolute reference, of an axis mundi, denies God, who equates to Absolute Truth, according to these Christian theologians. They link relativism to secularism, an obstruction of religion in human life. Pope Leo XIII (1810–1903) was the first known Pope to use the word relativism in the encyclical Humanum genus (1884). Leo XIII condemned Freemasonry and claimed that its philosophical and political system was largely based on relativism. John Paul II - As is immediately evident, the crisis of truth is not unconnected with this development. Once the idea of a universal truth about the good, knowable by human reason, is lost, inevitably the notion of conscience also changes. Conscience is no longer considered in its primordial reality as an act of a person's intelligence, the function of which is to apply the universal knowledge of the good in a specific situation and thus to express a judgment about the right conduct to be chosen here and now. Instead, there is a tendency to grant to the individual conscience the prerogative of independently determining the criteria of good and evil and then acting accordingly. Such an outlook is quite congenial to an individualist ethic, wherein each individual is faced with his own truth, different from the truth of others. Taken to its extreme consequences, this individualism leads to a denial of the very idea of human nature. In Evangelium Vitae (The Gospel of Life), he says: - Freedom negates and destroys itself, and becomes a factor leading to the destruction of others, when it no longer recognizes and respects its essential link with the truth. When freedom, out of a desire to emancipate itself from all forms of tradition and authority, shuts out even the most obvious evidence of an objective and universal truth, which is the foundation of personal and social life, then the person ends up by no longer taking as the sole and indisputable point of reference for his own choices the truth about good and evil, but only his subjective and changeable opinion or, indeed, his selfish interest and whim. - How many winds of doctrine we have known in recent decades, how many ideological currents, how many ways of thinking. The small boat of thought of many Christians has often been tossed about by these waves – thrown from one extreme to the other: from Marxism to liberalism, even to libertinism; from collectivism to radical individualism; from atheism to a vague religious mysticism; from agnosticism to syncretism, and so forth. Every day new sects are created and what Saint Paul says about human trickery comes true, with cunning which tries to draw those into error (cf Ephesians 4, 14). Having a clear Faith, based on the Creed of the Church, is often labeled today as a fundamentalism. Whereas, relativism, which is letting oneself be tossed and "swept along by every wind of teaching", looks like the only attitude acceptable to today's standards. We are moving towards a dictatorship of relativism which does not recognize anything as certain and which has as its highest goal one's own ego and one's own desires. However, we have a different goal: the Son of God, true man. He is the measure of true humanism. Being an "Adult" means having a faith which does not follow the waves of today's fashions or the latest novelties. A faith which is deeply rooted in friendship with Christ is adult and mature. It is this friendship which opens us up to all that is good and gives us the knowledge to judge true from false, and deceit from truth. 
On June 6, 2005, Pope Benedict XVI told educators: - Today, a particularly insidious obstacle to the task of education is the massive presence in our society and culture of that relativism which, recognizing nothing as definitive, leaves as the ultimate criterion only the self with its desires. And under the semblance of freedom it becomes a prison for each one, for it separates people from one another, locking each person into his or her own 'ego'. Then during the World Youth Day in August 2005, he also traced to relativism the problems produced by the communist and sexual revolutions, and provided a counter-counter argument. - In the last century we experienced revolutions with a common programme–expecting nothing more from God, they assumed total responsibility for the cause of the world in order to change it. And this, as we saw, meant that a human and partial point of view was always taken as an absolute guiding principle. Absolutizing what is not absolute but relative is called totalitarianism. It does not liberate man, but takes away his dignity and enslaves him. It is not ideologies that save the world, but only a return to the living God, our Creator, the Guarantor of our freedom, the Guarantor of what is really good and true. A common argument against relativism suggests that it inherently contradicts, refutes, or stultifies itself: the statement "all is relative" classes either as a relative statement or as an absolute one. If it is relative, then this statement does not rule out absolutes. If the statement is absolute, on the other hand, then it provides an example of an absolute statement, proving that not all truths are relative. However, this argument against relativism only applies to relativism that positions truth as relative–i.e. epistemological/truth-value relativism. More specifically, it is only extreme forms of epistemological relativism that can come in for this criticism as there are many epistemological relativists who posit that some aspects of what is regarded as factually "true" are not universal, yet still accept that other universal truths exist (e.g. gas laws or moral laws). Another argument against relativism posits a Natural Law. Simply put, the physical universe works under basic principles: the "Laws of Nature". Some contend that a natural Moral Law may also exist, for example as argued by Richard Dawkins in The God Delusion (2006) and addressed by C. S. Lewis in "Mere Christianity" (1952). Dawkins said "I think we face an equal but much more sinister challenge from the left, in the shape of cultural relativism - the view that scientific truth is only one kind of truth and it is not to be especially privileged". Philosopher Hilary Putnam, among others, states that some forms of relativism make it impossible to believe one is in error. If there is no truth beyond an individual's belief that something is true, then an individual cannot hold their own beliefs to be false or mistaken. A related criticism is that relativizing truth to individuals destroys the distinction between truth and belief. Indian religions tend to view the perceivable universe and cosmos as relativistic. Mahavira (599-527 BC), the 24th Tirthankara of Jainism, developed an early philosophy regarding relativism and subjectivism known as Anekantavada. Hindu religion has no theological difficulties in accepting degrees of truth in other religions. A Rig Vedic hymn states that "Truth is One, though the sages tell it variously." 
(Ékam sat vipra bahudā vadanti) Madhyamaka Buddhism, which forms the basis for many Mahayana Buddhist schools and was founded by Nagarjuna, discerns two levels of truth, absolute and relative. The two truths doctrine states that there is Relative or common-sense truth, which describes our daily experience of a concrete world, and Ultimate truth, which describes the ultimate reality as sunyata, empty of concrete and inherent characteristics. The conventional truth may be interpreted as "obscurative truth" or "that which obscures the true nature" as a result. It is constituted by the appearances of mistaken awareness. Conventional truth would be the appearance that includes a duality of apprehender and apprehended, and objects perceived within that. Ultimate truths, are phenomena free from the duality of apprehender and apprehended. In Sikhism the Gurus (spiritual teacher ) have propagated the message of "many paths" leading to the one God and ultimate salvation for all souls who tread on the path of righteousness. They have supported the view that proponents of all faiths can, by doing good and virtuous deeds and by remembering the Lord, certainly achieve salvation. The students of the Sikh faith are told to accept all leading faiths as possible vehicles for attaining spiritual enlightenment provided the faithful study, ponder and practice the teachings of their prophets and leaders. The holy book of the Sikhs called the Sri Guru Granth Sahib says: "Do not say that the Vedas, the Bible and the Koran are false. Those who do not contemplate them are false." Guru Granth Sahib page 1350; later stating "The seconds, minutes, and hours, days, weeks and months, and the various seasons originate from the one Sun; O nanak, in just the same way, the many forms originate from the Creator." Guru Granth Sahib page 12,13. Sophists are considered the founding fathers of relativism in the Western World. Elements of relativism emerged among the Sophists in the 5th century BC. Notably, it was Protagoras who coined the phrase, "Man is the measure of all things: of things which are, that they are, and of things which are not, that they are not." The thinking of the Sophists is mainly known through their opponents, Plato and Socrates. In a well known paraphrased dialogue with Socrates, Protagoras said: "What is true for you is true for you, and what is true for me is true for me." Another important advocate of relativism, Bernard Crick, a British political scientist, wrote the book In Defence of Politics (first published in 1962), suggesting the inevitability of moral conflict between people. Crick stated that only ethics could resolve such conflict, and when that occurred in public it resulted in politics. Accordingly, Crick saw the process of dispute resolution, harms reduction, mediation or peacemaking as central to all of moral philosophy. He became an important influence on the feminists and later on the Greens. The philosopher of science Paul Feyerabend is often considered to be a relativist, though he denied being one. Feyerabend argued that modern science suffers from being methodologically monistic (the belief that only a single methodology can produce scientific progress). Feyerabend summarises his case in his work Against Method with the phrase "anything goes". - In an aphorism [Feyerabend] often repeated, "potentially every culture is all cultures". 
This is intended to convey that world views are not hermetically closed, since their leading concepts have an "ambiguity" - better, an open-endedness - which enables people from other cultures to engage with them. [...] It follows that relativism, understood as the doctrine that truth is relative to closed systems, can get no purchase. [...] For Feyerabend, both hermetic relativism and its absolutist rival [realism] serve, in their different ways, to "devalue human existence". The former encourages that unsavoury brand of political correctness which takes the refusal to criticise "other cultures" to the extreme of condoning murderous dictatorship and barbaric practices. The latter, especially in its favoured contemporary form of "scientific realism", with the excessive prestige it affords to the abstractions of "the monster 'science'", is in bed with a politics which likewise disdains variety, richness and everyday individuality - a politics which likewise "hides" its norms behind allegedly neutral facts, "blunts choices and imposes laws". Thomas Kuhn's philosophy of science, as expressed in The Structure of Scientific Revolutions is often interpreted as relativistic. He claimed that as well as progressing steadily and incrementally ("normal science"), science undergoes periodic revolutions or "paradigm shifts", leaving scientists working in different paradigms with difficulty in even communicating. Thus the truth of a claim, or the existence of a posited entity is relative to the paradigm employed. However, it isn't necessary for him to embrace relativism because every paradigm presupposes the prior, building upon itself through history and so on. This leads to there being a fundamental, incremental, and referential structure of development which is not relative but again, fundamental. - From these remarks, one thing is however certain: Kuhn is not saying that incommensurable theories cannot be compared - what they can’t be is compared in terms of a system of common measure. He very plainly says that they can be compared, and he reiterates this repeatedly in later work, in a (mostly in vain) effort to avert the crude and sometimes catastrophic misinterpretations he suffered from mainstream philosophers and post-modern relativists alike. But Thomas Kuhn denied the accusation of being a relativist later in his postscript. - scientific development is ... a unidirectional and irreversible process. Latter scientific theories are better than earlier ones for solving puzzles ... That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress. Some have argued that one can also read Kuhn's work as essentially positivist in its ontology: the revolutions he posits are epistemological, lurching toward a presumably 'better' understanding of an objective reality through the lens presented by the new paradigm. However, a number of passages in Structures do indeed appear to be distinctly relativist, and to directly challenge the notion of an objective reality and the ability of science to progress towards an ever-greater grasp of it, particularly through the process of paradigm change. - In the sciences there need not be progress of another sort. We may, to be more precise, have to relinquish the notion, explicit or implicit, that changes of paradigm carry scientists and those who learn from them closer and closer to the truth. 
- We are all deeply accustomed to seeing science as the one enterprise that draws constantly nearer to some goal set by nature in advance. But need there be any such goal? Can we not account for both science’s existence and its success in terms of evolution from the community’s state of knowledge at any given time? Does it really help to imagine that there is some one full, objective, true account of nature and that the proper measure of scientific achievement is the extent to which it brings us closer to that ultimate goal? George Lakoff and Mark Johnson George Lakoff and Mark Johnson define relativism in their book Metaphors We Live By as the rejection of both subjectivism and metaphysical objectivism in order to focus on the relationship between them, i.e. the metaphor by which we relate our current experience to our previous experience. In particular, Lakoff and Johnson characterize "objectivism" as a "straw man", and, to a lesser degree, criticize the views of Karl Popper, Kant and Aristotle. In his book Invariances, Robert Nozick expresses a complex set of theories about the absolute and the relative. He thinks the absolute/relative distinction should be recast in terms of an invariant/variant distinction, where there are many things a proposition can be invariant with regard to or vary with. He thinks it is coherent for truth to be relative, and speculates that it might vary with time. He thinks necessity is an unobtainable notion, but can be approximated by robust invariance across a variety of conditions—although we can never identify a proposition that is invariant with regard to everything. Finally, he is not particularly warm to one of the most famous forms of relativism, moral relativism, preferring an evolutionary account. Joseph Margolis advocates a view he calls "robust relativism" and defends it in his books: Historied Thought, Constructed World, Chapter 4 (California, 1995) and The Truth about Relativism (Blackwells, 1991). He opens his account by stating that our logics should depend on what we take to be the nature of the sphere to which we wish to apply our logics. Holding that there can be no distinctions which are not "privileged" between the alethic, the ontic, and the epistemic, he maintains that a many valued logic just might be the most apt for aesthetics or history since, because in these practices, we are loath to hold to simple binary logic; and he also holds that many-valued logic is relativistic. (This is perhaps an unusual definition of "relativistic". Compare with his comments on "relationism"). "True" and "False" as mutually exclusive and exhaustive judgements on Hamlet, for instance, really does seem absurd. A many valued logic—"apt", "reasonable", "likely", and so on—seems intuitively more applicable to Hamlet interpretation. Where apparent contradictions arise between such interpretations, we might call the interpretations "incongruent", rather than dubbing either "false", because using many-valued logic implies that a measured value is a mixture of two extreme possibilities. Using the subset of many-valued logic, fuzzy logic, it can be said that various interpretations can be represented by membership in more than one possible truth sets simultaneously. Fuzzy logic is therefore probably the best mathematical structure for understanding "robust relativism" and has been interpreted by Bart Kosko as philosophically being related to Zen Buddhism. 
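To make the fuzzy-membership idea concrete, here is a small illustrative Python sketch. It is not Margolis's or Kosko's own formalism; the two readings of Hamlet and their membership degrees are invented purely to show how an interpretation can belong, to different degrees, to more than one graded "truth set" at once.

# Hypothetical membership degrees (0.0 to 1.0) of two readings of Hamlet in
# several graded "truth sets"; the readings and the numbers are invented.
interpretations = {
    "Hamlet feigns madness":   {"apt": 0.8, "reasonable": 0.9, "likely": 0.7},
    "Hamlet is genuinely mad": {"apt": 0.6, "reasonable": 0.7, "likely": 0.5},
}

# On a bivalent logic the two readings would simply contradict each other;
# with graded membership they are merely "incongruent" - both carry weight.
for reading, degrees in interpretations.items():
    overall = sum(degrees.values()) / len(degrees)   # crude aggregate plausibility
    print(f"{reading}: {degrees} -> overall {overall:.2f}")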
It was Aristotle who held that relativism implied we should, sticking with appearances only, end up contradicting ourselves somewhere if we could apply all attributes to all ousiai (beings). Aristotle, however, made non-contradiction dependent upon his essentialism. If his essentialism is false, then so too is his ground for disallowing relativism. (Subsequent philosophers have found other reasons for supporting the principle of non-contradiction). Beginning with Protagoras and invoking Charles Sanders Peirce, Margolis shows that the historic struggle to discredit relativism is an attempt to impose an unexamined belief in the world's essentially rigid rule-like nature. Plato and Aristotle merely attacked "relationalism"—the doctrine of true-for-l or true-for-k, and the like, where l and k are different speakers or different worlds, or something similar (most philosophers would call this position "relativism"). For Margolis, "true" means true; that is, the alethic use of "true" remains untouched. However, in real-world contexts, and context is ubiquitous in the real world, we must apply truth values. Here, in epistemic terms, we might retire "true" tout court as an evaluation and keep "false". The rest of our value-judgements could be graded from "extremely plausible" down to "false". Judgements which on a bivalent logic would be incompatible or contradictory are further seen as "incongruent", though one may well have more weight than the other. In short, relativistic logic is not, or need not be, the bugbear it is often presented to be. It may simply be the best type of logic to apply to certain very uncertain spheres of real experiences in the world (although some sort of logic needs to be applied to make that judgement). Those who swear by bivalent logic might simply be the ultimate keepers of the great fear of the flux. Philosopher Richard Rorty has a somewhat paradoxical role in the debate over relativism: he is criticized for his relativistic views by many commentators, but has always denied that relativism applies to much of anybody, being nothing more than a Platonic scarecrow. Rorty claims, rather, that he is a pragmatist, and that to construe pragmatism as relativism is to beg the question. - '"Relativism" is the view that every belief on a certain topic, or perhaps about any topic, is as good as every other. No one holds this view. Except for the occasional cooperative freshman, one cannot find anybody who says that two incompatible opinions on an important topic are equally good. The philosophers who get called 'relativists' are those who say that the grounds for choosing between such opinions are less algorithmic than had been thought.' - 'In short, my strategy for escaping the self-referential difficulties into which "the Relativist" keeps getting himself is to move everything over from epistemology and metaphysics into cultural politics, from claims to knowledge and appeals to self-evidence to suggestions about what we should try.' Rorty takes a deflationary attitude to truth, believing there is nothing of interest to be said about truth in general, including the contention that it is generally subjective. He also argues that the notion of warrant or justification can do most of the work traditionally assigned to the concept of truth, and that justification is relative; justification is justification to an audience, for Rorty.
In Contingency, Irony, and Solidarity he argues that the debate between so-called relativists and so-called objectivists is beside the point because they don't have enough premises in common for either side to prove anything to the other. - Bahá'í Faith and the unity of religion - Degree of truth - Factual relativism - False dilemma - Fuzzy logic - Graded absolutism - John Hick - Moral relativism - Multi-valued logic - Normative ethics - Philosophical realism - Pluralism (philosophy) - Principle of Bivalence - Propositional logic - Science Wars - Social constructionism - Subjective logic - Two truths doctrine - Stanford Encyclopedia of Philosophy, "Relativism, roughly put, is the view that truth and falsity, right and wrong, standards of reasoning, and procedures of justification are products of differing conventions and frameworks of assessment and that their authority is confined to the context giving rise to them." - Maria Baghramian identifies 16 (Relativism, 2004,Baghramian) - Swoyer, Chris (February 22, 2003). "Relativism". Retrieved May 10, 2010. - Baghramian, Maria and Carter, Adam, "Relativism", "The Stanford Encyclopedia of Philosophy (Fall 2015 Edition)", Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/fall2015/entries/relativism/#RelAboTruAleRel/ "Relativism about truth, or alethic relativism, at its simplest, is the claim that what is true for one individual or social group may not be true for another" - Collins, Harry (1998-04-01). "What's wrong with relativism?". Physics World. Bristol, UK: IOP Publishing. Retrieved 2008-04-16. ...methodological relativism - impartial assessment of how knowledge develops - is the key idea for sociology of scientific knowledge... - Locke, Shaftesbury, and Hutcheson: Contesting Diversity in the Enlightenment and Beyond by Dr. Daniel Carey - Methodological and Philosophical Relativism by Gananath Obeyesekere - Brown, Donald E. (1991). Human Universals. McGraw-Hill. ISBN 0-87722-841-8. - Stanford Encyclopedia of Philosophy - Don't Blame Relativism as "serious thought" - Sokal and the Science Wars - Quantum quackery - Baghramian, M. Relativism, 2004, p43 - Interview with Bruno LatourOn Relativism, Pragmatism, and Critical Theory - Baghramian, M. Relativism, 2004, p85 - Wood. A, Relativism - World Youth Day News August August 21, 2005 - Humanum genus - Mass «Pro Eligendo Romano Pontifice»: Homily of Card. Joseph Ratzinger - Inaugural Address at the Ecclesial Diocesan Convention of Rome - 20th World Youth Day - Cologne - Marienfeld, Youth Vigil - Craig Rusbult. Reality 101 - Keith Dixon. Is Cultural Relativism Self-Refuting? (British Journal of Sociology, vol 28, No. 1) - Cultural Relativism at All About Philosophy. - The Friesian School on relativism. - The God Delusion, Chapter 6 - Mere Christianity, Chapter 1 - Richard Dawkins quoted in Dawkins' Christmas card list; Dawkins at the Hay Festival, The Guardian, 28 May 2007 - Baghramian, M. Relativism, 2004 - Including Julien Beillard, who presents his case on the impossibility of moral relativism in the July 2013 issue of Philosophy Now magazine, accessible here - Levinson, Jules (August 2006) Lotsawa Times Volume II Archived 2008-07-24 at the Wayback Machine - Guru Granth Sahib page 1350 - Richard Austin Gudmundsen (2000). Scientific Inquiry: Applied to the Doctrine of Jesus Christ. Cedar Fort. p. 50. ISBN 978-1-55517-497-2. Retrieved 2011-01-24. Sahakian, William S.; Mabel Lewis Sahakian (1993). Ideas of the great philosophers. Barnes & Noble Publishing. p. 28. 
ISBN 978-1-56619-271-2. What is true for you is true for you. - Sahakian, W. S.; M. L. Sahakian (1965). Realms of philosophy. Schenkman Pub. Co. p. 40. Retrieved 2011-01-24. - Cooper, David E., "Voodoo and the monster of science," Times Higher Education, 17 March 2000 - Lloyd, Elisabeth. "Feyerabend, Mill, and Pluralism", Philosophy of Science 64, p. S397. - Feyerabend, Against Method, 3rd ed., p. vii - Cooper, David E., "Voodoo and the monster of science," Times Higher Education, 17 March 2000 - Sharrock. W., Read R. Kuhn: Philosopher of Scientific Revolutions - Kuhn, The Structure of Scientific Revolutions, p. 206. - Kuhn, The Structure of Scientific Revolutions, p. 170. - Kuhn, The Structure of Scientific Revolutions, p. 171.</ - Rorty, R. Consequences of Pragmatism - Richard Rorty, Pragmatism, Relativism, and Irrationalism - Rorty, R. Hilary Putnam and the Relativist Menace - Maria Baghramian, Relativism, London: Routledge, 2004, ISBN 0-415-16150-9 - Gad Barzilai, Communities and Law: Politics and Cultures of Legal Identities, Ann Arbor: University of Michigan Press, 2003, ISBN 0-472-11315-1 - Andrew Lionel Blais, On the Plurality of Actual Worlds, University of Massachusetts Press, 1997, ISBN 1-55849-072-8 - Benjamin Brown, Thoughts and Ways of Thinking: Source Theory and Its Applications. London: Ubiquity Press, 2017. . - Buchbinder, David; McGuire, Ann Elizabeth (2007). "The backlash against relativism: the new curricular fundamentalism". The International Journal of the Humanities: Annual Review. Common Ground Journals and Books. 5 (5): 51–59. doi:10.18848/1447-9508/CGP/v05i05/42109. - Ernest Gellner, Relativism and the Social Sciences, Cambridge: Cambridge University Press, 1985, ISBN 0-521-33798-4 - Rom Harré and Michael Krausz, Varieties of Relativism, Oxford, UK; New York, NY: Blackwell, 1996, ISBN 0-631-18409-0 - Knight, Robert H. The Age of Consent: the Rise of Relativism and the Corruption of Popular Culture. Dallas, Tex.: Spence Publishing Co., 1998. xxiv, 253, p. ISBN 1-890626-05-8 - Michael Krausz, ed., Relativism: A Contemporary Anthology, New York: Columbia University Press, 2010, ISBN 978-0-231-14410-0 - Martin Hollis, Steven Lukes, Rationality and Relativism, Oxford: Basil Blackwell, 1982, ISBN 0-631-12773-9 - Joseph Margolis, Michael Krausz, R. M. Burian, Eds., Rationality, Relativism, and the Human Sciences, Dordrecht: Boston, M. Nijhoff, 1986, ISBN 90-247-3271-9 - Jack W. Meiland, Michael Krausz, Eds. Relativism, Cognitive and Moral, Notre Dame: University of Notre Dame Press, 1982, ISBN 0-268-01611-9 - Markus Seidel, Epistemic Relativism: A Constructive Critique, Basingstoke: Palgrave Macmillan, 2014, ISBN 978-1-137-37788-3 - HeWillAdd FromTheBroadMeadow AHelperOfMan, "In Defense of Relativity.", CreateSpace Independent Publishing Platform, 2013, ISBN 1482608359 |Wikiquote has quotations related to: Relativism| |Wikimedia Commons has media related to Relativism.| - "Epistemology and Relativism". Internet Encyclopedia of Philosophy. - Westacott, E. Relativism, 2005, Internet Encyclopedia of Philosophy - Westacott, E. Cognitive Relativism, 2006, Internet Encyclopedia of Philosophy - Professor Ronald Jones on relativism - What 'Being Relative' Means, a passage from Pierre Lecomte du Nouy's "Human Destiny" (1947) - BBC Radio 4 series "In Our Time", on Relativism - the battle against transcendent knowledge, 19 January 2006 - Against Relativism, by Christopher Noriss - Zalta, Edward N. (ed.). "Relativism". Stanford Encyclopedia of Philosophy. 
- The Friesian School on Relativism - The Catholic Encyclopedia - Harvey Siegel reviews Paul Boghossian's Fear of Knowledge
The economy generally moves in cycles, from high to low, then to high again. When the economy is optimal, then the economic output is also optimal, with unemployment at a stable, low rate. When the economy overheats, then inflation increases, because aggregate demand grows faster than aggregate supply. Eventually, the economy reverses, with aggregate demand decreasing, thus forcing businesses to lay people off. When an economy is in a recession, tax revenues decline at the same time that the government needs to spend more money to help the unemployed and to stimulate demand. There are 2 major types of tools that a government can use to stimulate the economy: monetary policy and fiscal policy. Monetary policy is usually conducted by the country's monetary authority, which for most modern economies is the central bank, through the use of operations that influence interest rates or the quantity of money so that certain macroeconomic objectives, such as low inflation and optimized economic output, can be achieved. However, as Milton Friedman pointed out, monetary policy can only affect the economy in the short run. In the long run, the economy will reach a new equilibrium based on the change in money supply, but, with all else being equal, the economic output will be the same as it was before the change of monetary policy, assuming that the economy was at equilibrium then. Fiscal policy, on the other hand, consists of operations by the government to achieve macroeconomic goals through changes in taxation, public borrowing, and government expenditures. Monetary Policy Regimes There are different monetary tools available and different ways to influence the economy. These tools are implemented according to a set of rules, which can be grouped according to the specific macroeconomic objective that they are trying to achieve. Monetary policy rules are developed because it takes time to gather information about an economy and it takes time for a monetary policy to effect changes in that economy. Therefore, many believe it is more prudent to follow monetary policy rules that have worked well in the past and that are well understood. If central bankers had discretion, they would attempt to fine-tune the economy based on their intuition, which could have negative effects on the economy that would not be knowable until later. A monetary policy regime is a set of monetary policy rules used to achieve a specific objective. Some regimes target specific macroeconomic rates, such as the exchange rate, inflation rate, and the growth of the money supply. Generally, these regimes attempt to keep the exchange rate, inflation rate, or the money supply growth rate within narrow constraints. A common objective for the money supply growth rate is to equalize it with nominal income growth. A more nebulous regime attempts to manage economic risks by using more tools targeting different aspects of the economy, to try to preempt monetary or economic instability. Recognizing that inflation can be high while the economic output is less than optimal makes it difficult to stimulate the economy by only considering the inflation rate.
To solve the problem for the Federal Reserve, the economist John Taylor came up with a rule that relates the federal funds rate to both the inflation rate and the output gap, which became known as Taylor's rule: Federal Funds Rate = 1 + (1.5 × Inflation Rate) + (0.5 × Output Gap) Primary Monetary Policy Tool: Setting the Federal Funds Range In the United States, the Federal Reserve enacts monetary policy primarily by setting a target interest rate, specifically, the federal funds rate. In Europe, the equivalent rate is the Euro Overnight Index Average (EONIA). The federal funds rate varies with supply and demand continuously, so the target rate cannot be set exactly. Instead, the Federal Reserve sets a target range about 0.25 percentage points wide, which is easier to implement. For instance, as of March 12, 2018, the target range was 1.25% to 1.5%. By monitoring the effective federal funds rate (EFFR), which is the interest rate actually charged in the federal funds market, the Fed can take appropriate action to move the EFFR back toward the middle of the range, if the EFFR is near the lower or upper bound of the range. The Federal Reserve sets reserve requirements for banks, the minimum amount of money that they must hold in their vaults or in their accounts at the Federal Reserve. If they fall below this minimum, then they must get more capital from their owners, attract new deposits, or borrow money from other banks or from the Federal Reserve. On the other hand, banks do not want to keep much more than the minimum, because they incur an opportunity cost of not earning interest that could otherwise be earned by lending the money. Hence, banks often fall short of the minimum reserve requirements, requiring them to borrow in the interbank market, known as the federal funds market. The interest rate charged in this market is the federal funds rate, which the Fed keeps within its target range by buying or selling US Treasuries, thereby increasing or decreasing the money supply. When the Federal Reserve wants to lower the federal funds rate, it buys US Treasuries from its dealers, which are major banks. These purchases increase the amount of money that these banks have on hand, thereby decreasing their need to borrow the money in the federal funds market. As with other things, a lower demand lowers prices, and the price of money is the interest rate charged for its borrowing. Because holding cash incurs an opportunity cost of not earning a return for the money, the dealers seek out other bonds to buy, which increases the demand for those bonds, thereby increasing their prices and decreasing their yields, since bond prices and yields are inversely proportional. The Federal Reserve pays for these purchases by simply incrementing the accounts of the dealers who are selling by the amount of their purchase. When the Federal Reserve wants to raise the federal funds rate, then it does the reverse: it sells US Treasuries to its primary dealers, thus lowering the amount of money that they have on hand, which increases their demand for funds in the federal funds market, which increases the cost of credit, i.e. the interest rate. Banks may also sell other bonds to increase their reserves, thus increasing the supply of bonds, which decreases their prices and increases their interest rates. Because banks make money by charging a higher interest rate for their loans than what they pay on their debts or other sources of capital, the federal funds rate determines the interest rates on most other loans.
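Taylor's rule lends itself to a short worked example. Here is a minimal Python sketch of the simplified form quoted at the start of this section (the constant 1 and the 1.5/0.5 weights come straight from that formula, which reflects the common textbook assumption of a 2% equilibrium real rate and a 2% inflation target); the figures in the usage example are invented.

def taylor_rule_rate(inflation_pct, output_gap_pct):
    # Suggested federal funds rate (%) from the simplified Taylor rule above.
    return 1 + 1.5 * inflation_pct + 0.5 * output_gap_pct

# Invented example: 3% inflation and output 1% below potential.
print(taylor_rule_rate(3.0, -1.0))   # 1 + 4.5 - 0.5 = 5.0, i.e. a 5% target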
Because of its simplicity and effectiveness, setting a target interest rate range is a primary monetary tool used by most central banks. Lowering the interest rate stimulates the economy, especially if it is at less than its potential output. With lower interest rates, consumers are willing to buy more and businesses are willing to invest more, which increases consumption and investment, thereby increasing GDP. With higher interest rates, the opposite happens: people buy less and businesses cut back on investments and other purchases, thereby slowing the economy. There are several theoretical principles that underpin monetary policy. A primary principle is the Quantity Theory of Money: monetary policy can influence the prices of goods and services, but not the quantity; thus, changing the money supply cannot change economic activity in the long run. Although the quantity theory of money shows that economic activity cannot be increased over the long term by increasing the money supply, increasing the money supply can increase economic activity over the short run. A.W. Phillips noticed in 1958 that there was an inverse relationship between the inflation rate and the unemployment rate. Hence, the curve showing this relationship is called the Phillips curve. The short-term relationship showed that central banks could lower the unemployment rate by temporarily increasing the quantity of money, which also increased inflation. Unemployment arises naturally from causes other than a lack of jobs, such as when people (new graduates, recent immigrants, women returning to the job market, or those suddenly laid off or fired) look for jobs — so-called frictional unemployment — or when there is a mismatch between the skills offered by the labor force and the skills demanded by employers — structural unemployment. Because frictional and structural unemployment exist regardless of the number of jobs in the marketplace, there will always be what is called a natural rate of unemployment, or, as some economists call it, the nonaccelerating inflation rate of unemployment (NAIRU). If the central bank tries to lower the unemployment rate below its natural rate, then monetary policy will fail to achieve its aim: inflation will increase while unemployment eventually returns to its natural rate. Only cyclical unemployment, which is the unemployment that results when there are fewer jobs than there are people looking for work, can be influenced effectively through monetary policy. If the economy were already at the natural rate of unemployment, then any attempt by the central bank to increase economic activity by increasing the money supply would only increase inflation while decreasing unemployment for only a short time. Eventually, the unemployment rate would revert to the natural rate of unemployment, but inflation would be higher. Over time, NAIRU itself can shift because of changes in the economy and especially because of changes in technology. For instance, with the rise of the Internet, it became easier and faster to find a new job. However, since about 2010, the unemployment rate did not fall commensurately with the increase in job vacancies because many workers did not have the necessary skills or knowledge to fill those vacancies. Hence, the long-run Phillips curve can shift to a different rate of unemployment over time. Economic output is related to the unemployment rate. Since labor is a major factor in the production of goods and services, less than full employment reduces economic output.
On the other hand, if the economy is overheating, then output will be slightly higher than the potential output at full employment, since people work overtime to meet the higher aggregate demand. The gross domestic product (GDP) associated with full employment is the potential economic output. If actual output differs from this potential output, then there is an output gap:

Output Gap = Actual GDP − Potential GDP

The economist Arthur Okun noted in 1962 that economic output varies inversely with unemployment, a relationship called Okun's law. Thus, the output gap is something that central banks consider when deciding on monetary policy.

Historically, central banks relied on the observed relationships between unemployment, economic activity, interest rates, and other factors. However, it became clear that those relationships could also be affected by what people expect from changes in monetary policy. People anticipate the consequences of monetary policy changes, then alter their behavior accordingly, thereby reducing the effectiveness of the policy change. Therefore, any economic model that tries to forecast how the economy will change in response to a monetary policy change must also incorporate possible changes in people's behavior based on their expectations of future inflation. The idea that changes in people's behavior, based on their expectations, change the results of a monetary policy change forms the basis of the rational expectations hypothesis. Thus, the central bank must have a credible policy of controlling inflation; otherwise, if higher inflation is expected, then people will demand higher wages and businesses will raise prices, thus increasing inflation.

Based on the rational expectations hypothesis, the Lucas critique proposes that economic policy based only on historical aggregate information will be ineffective. To rectify this, economic models should consider economic agents, such as consumers and firms, at the microeconomic level to better anticipate what changes in monetary policy may do. Economic agents may counteract changes in monetary policy, thereby reducing their effectiveness, which has become known as the policy ineffectiveness proposition. This proposition has been used to explain the stagflation of the late 1970s and early 1980s, when the Federal Reserve attempted to mollify the effects of high oil prices at that time and to lower the unemployment rate by using an expansionary monetary policy. Instead, inflation increased while unemployment remained high.

There have been criticisms of the rational expectations hypothesis because people are often irrational, so models that assume rational behavior can be problematic. However, behavioral economists believe that much irrationality is predictable, especially since people tend to follow others in their economic activity, such as buying stocks in the stock market or creating other asset bubbles. Nonetheless, the rational expectations hypothesis still offers better predictions than those offered by simple historical relationships between the relevant macroeconomic factors.

Unconventional Monetary Policies

After the 2007–2009 Great Recession, central banks found it difficult to stimulate the economy using conventional methods.
Therefore, other methods were used to try to stimulate the economy:

- reducing interest rates to 0, or even charging banks to hold reserves, thus making the interest rate effectively negative,
- quantitative or credit easing,
- forward guidance, and
- foreign exchange intervention.

Central banks also extended liquidity to financial institutions and other key credit markets to stimulate lending as a means to increase economic activity. Another method by which central banks can increase lending is by offering banks cheaper financing if they lend a prescribed minimum to households and small and medium-sized enterprises (SMEs), such as the Funding for Lending Scheme that the Bank of England set up in 2012.

During the Great Recession, the first monetary policy change was to reduce interest rates. Because the economy did not respond, central banks lowered the interest rate to 0 or nearly 0. Some central banks even charged banks interest for keeping their funds at the central bank, so that they would be more likely to lend the money. However, people were out of work and already deeply in debt, so they could not borrow even at greatly reduced rates, and banks did not lend to them. Furthermore, people could not buy the goods and services offered by the economy, so businesses could not take advantage of the cheap credit because they had no way of paying it back.

The next tool that the United States Federal Reserve used to stimulate the economy was quantitative easing. Buying long-term securities lowers long-term borrowing costs, since interest rates on debt securities, especially government securities, are used as benchmarks for other lending rates. The security purchases lower the yield curve, especially for longer-term interest rates. There are two types of securities that central banks can buy: government securities and private sector securities. The purchase of longer-term government securities is called quantitative easing; the purchase of private sector securities is called credit easing. For instance, during the Great Recession, banks held significant investments in mortgage-backed securities (MBS) that were falling in value because of pervasive defaults by subprime borrowers. Therefore, the Fed purchased these MBSs to prop up their prices and to keep the banks holding the securities solvent. In 2015, the European Central Bank (ECB) attempted to pull Europe out of the recession that had lingered on by buying government bonds of distressed countries, such as Spain and Greece, to stimulate their economies by lowering their interest rates.

Nevertheless, quantitative easing was little more effective than reducing interest rates, because unemployment and consumer debt were still high. Buying securities increases their price, which increases the amount of money that bondholders get for their bonds. Since most bondholders are wealthy, the money went to the wealthy, who usually have no immediate need for it. Instead, anticipating inflation, they buy assets that will increase in price along with the increase in the money supply, thereby creating asset bubbles.

The Best Monetary Policy is a Fiscal Policy: Tax the Poor Less

In times of recession, the best means of achieving monetary policy objectives, in my opinion, is to use a fiscal policy: lower taxes on the poor. This immediately increases demand for all types of products and services, while avoiding the asset bubbles created by the rich when they receive higher prices for their bonds as the money supply is increased.
Moreover, the poor suffer the most in recessions. Although the wealthy also suffer, they quickly earn their money back by investing in the stock market and in other markets that tend to grow as the economy climbs out of the recession. The poor, on the other hand, simply suffer. With little or no money and mountains of debt, they cut back on their purchases, sinking the economy even more. Not only are the poor out of work and deep in debt during recessions, they are also burdened with high taxes. For instance, in the United States, employment taxes take about 15% out of each worker's pay. Although the employer pays half of the employment tax, lower-income workers bear most of the tax burden, in that employers simply pay the workers less because of their share of the tax. Additionally, states and their municipalities put most of their tax burden on work, further increasing the burden on labor. The United States attempted to lighten this burden by reducing the employment tax from 15.3% to 13.3% for two years. It helped. However, reducing taxes to zero for poor people, so that they could at least afford to live, would have been more effective. With their much higher marginal propensity to consume, they would immediately spend the extra money on the goods and services offered by businesses, thereby increasing business revenue, increasing employment, and then increasing tax revenue.

Of course, lowering taxes on the poor means that others, notably the wealthy, would be forced to pay more taxes. But since the wealthy have significant influence with governments, they generally pay much lower taxes, because much or most of their income comes from investments or inheritance, which are taxed at much lower rates than employment, if they are taxed at all. By taxing work less, not only would labor be cheaper for employers, but people would also be more motivated to work, since they would be getting a higher price for their labor. As any economist knows, lower prices for employers would increase their demand for labor, while higher prices for suppliers of labor would increase its supply; that is, people would be more willing to work. Furthermore, there is a large deadweight loss of taxation in taxing labor. On the other hand, there is no deadweight loss at all in taxing inheritance, since, as economists like to say, the supply of death is completely inelastic, while the demand for gifts and bequests is completely elastic because the beneficiaries do nothing to receive them.
A confidence interval for a population mean, when the population standard deviation is known, is based on the conclusion of the Central Limit Theorem that the sampling distribution of the sample means follows an approximately normal distribution. Suppose that our sample has a mean of x̄ = 10 and we have constructed the 90% confidence interval (5, 15), where EBM = 5.

Calculating the Confidence Interval

To construct a confidence interval for a single unknown population mean μ, where the population standard deviation is known, we need x̄ as an estimate for μ and we need the margin of error. Here, the margin of error is called the error bound for a population mean (abbreviated EBM). The sample mean x̄ is the point estimate of the unknown population mean μ.

The confidence interval estimate will have the form: (point estimate − error bound, point estimate + error bound) or, in symbols, (x̄ − EBM, x̄ + EBM).

The margin of error (EBM) depends on the confidence level (abbreviated CL). The confidence level is often considered the probability that the calculated confidence interval estimate will contain the true population parameter. However, it is more accurate to state that the confidence level is the percent of confidence intervals that contain the true population parameter when repeated samples are taken. Most often, it is the choice of the person constructing the confidence interval to choose a confidence level of 90% or higher, because that person wants to be reasonably certain of his or her conclusions.

There is another probability called alpha (α). α is related to the confidence level, CL. α is the probability that the interval does not contain the unknown population parameter. Mathematically, α + CL = 1.

- Suppose we have collected data from a sample. We know the sample mean, but we do not know the mean for the entire population.
- The sample mean is seven, and the error bound for the mean is 2.5: x̄ = 7 and EBM = 2.5.

The confidence interval is (7 − 2.5, 7 + 2.5), and calculating the values gives (4.5, 9.5). If the confidence level (CL) is 95%, then we say that, "We estimate with 95% confidence that the true value of the population mean is between 4.5 and 9.5."

Try It: Suppose we have data from a sample. The sample mean is 15, and the error bound for the mean is 3.2. What is the confidence interval estimate for the population mean?

A confidence interval for a population mean with a known standard deviation is based on the fact that the sample means follow an approximately normal distribution. Suppose that our sample has a mean of x̄ = 10, and we have constructed the 90% confidence interval (5, 15), where EBM = 5. To get a 90% confidence interval, we must include the central 90% of the probability of the normal distribution. If we include the central 90%, we leave out a total of α = 10% in the two tails, or 5% in each tail, of the normal distribution.

To capture the central 90%, we must go out 1.645 "standard deviations" on either side of the calculated sample mean. The value 1.645 is the z-score from a standard normal probability distribution that puts an area of 0.90 in the center, an area of 0.05 in the far left tail, and an area of 0.05 in the far right tail.

It is important that the "standard deviation" used is appropriate for the parameter we are estimating, so in this section we need to use the standard deviation that applies to sample means, which is σ/√n. The fraction σ/√n is commonly called the "standard error of the mean" in order to distinguish clearly the standard deviation for a mean from the population standard deviation σ.
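A minimal sketch of the interval construction just described, using the x̄ = 7, EBM = 2.5 example above; the function name is mine, chosen for illustration.

```python
# A confidence interval is (point estimate - error bound, point estimate + error bound).
def confidence_interval(sample_mean: float, ebm: float):
    return (sample_mean - ebm, sample_mean + ebm)

print(confidence_interval(7, 2.5))   # (4.5, 9.5), matching the example above
print(confidence_interval(15, 3.2))  # (11.8, 18.2), the Try It exercise
```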
- x̄ is normally distributed, that is, x̄ ~ N(μ, σ/√n).
- When the population standard deviation σ is known, we use a normal distribution to calculate the error bound.

Calculating the Confidence Interval

To construct a confidence interval estimate for an unknown population mean, we need data from a random sample. The steps to construct and interpret the confidence interval are:

- Calculate the sample mean x̄ from the sample data. Remember, in this section we already know the population standard deviation σ.
- Find the z-score that corresponds to the confidence level.
- Calculate the error bound EBM.
- Construct the confidence interval.
- Write a sentence that interprets the estimate in the context of the situation in the problem. (Explain what the confidence interval means, in the words of the problem.)

We will first examine each step in more detail, and then illustrate the process with some examples.

Finding the z-score for the Stated Confidence Level

When we know the population standard deviation σ, we use a standard normal distribution to calculate the error bound EBM and construct the confidence interval. We need to find the value of z that puts an area equal to the confidence level (in decimal form) in the middle of the standard normal distribution Z ~ N(0, 1). The confidence level, CL, is the area in the middle of the standard normal distribution. CL = 1 − α, so α is the area that is split equally between the two tails. Each of the tails contains an area equal to α/2. The z-score that has an area of α/2 to its right is denoted zα/2. For example, when CL = 0.95, α = 0.05 and α/2 = 0.025; we write zα/2 = z0.025. The area to the right of z0.025 is 0.025 and the area to the left of z0.025 is 1 − 0.025 = 0.975. So z0.025 = 1.96, using a calculator, computer, or a standard normal probability table: invNorm(0.975, 0, 1) = 1.96. Remember to use the area to the LEFT of zα/2; in this chapter the last two inputs in the invNorm command are 0, 1, because you are using a standard normal distribution Z ~ N(0, 1).

Calculating the Error Bound (EBM)

The error bound formula for an unknown population mean μ when the population standard deviation σ is known is

- EBM = (zα/2)(σ/√n)

Constructing the Confidence Interval

- The confidence interval estimate has the format (x̄ − EBM, x̄ + EBM).

The graph gives a picture of the entire situation: CL + α/2 + α/2 = CL + α = 1.

Writing the Interpretation

The interpretation should clearly state the confidence level (CL), explain what population parameter is being estimated (here, a population mean), and state the confidence interval (both endpoints). "We estimate with ___% confidence that the true population mean (include the context of the problem) is between ___ and ___ (include appropriate units)."

Example 8.2: Suppose scores on exams in statistics are normally distributed with an unknown population mean and a population standard deviation of three points. A random sample of 36 scores is taken and gives a sample mean (sample mean score) of 68. Find a confidence interval estimate for the population mean exam score (the mean score on all exams). Find a 90% confidence interval for the true (population) mean of statistics exam scores.

Try It: Suppose average pizza delivery times are normally distributed with an unknown population mean and a population standard deviation of six minutes. A random sample of 28 pizza delivery restaurants is taken and has a sample mean delivery time of 36 minutes. Find a 90% confidence interval estimate for the population mean delivery time.
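The steps above can be collected into a short Python sketch (the function names are mine; the z-value comes from the standard library's NormalDist rather than the calculator's invNorm). It is applied here to the exam-score example: σ = 3, n = 36, x̄ = 68, CL = 0.90.

```python
from math import sqrt
from statistics import NormalDist

def z_value(confidence_level: float) -> float:
    """z_(alpha/2) for the given confidence level."""
    alpha = 1 - confidence_level
    # the area to the LEFT of z_(alpha/2) is 1 - alpha/2, e.g. 0.95 when CL = 0.90
    return NormalDist().inv_cdf(1 - alpha / 2)

def mean_ci(xbar: float, sigma: float, n: int, cl: float):
    """Confidence interval for a mean with known population standard deviation."""
    ebm = z_value(cl) * sigma / sqrt(n)
    return (xbar - ebm, xbar + ebm)

print(round(z_value(0.90), 3))              # 1.645
lower, upper = mean_ci(68, 3, 36, 0.90)
print(round(lower, 2), round(upper, 2))     # 67.18 68.82
```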
The Specific Absorption Rate (SAR) for a cell phone measures the amount of radio frequency (RF) energy absorbed by the user’s body when using the handset. Every cell phone emits RF energy. Different phone models have different SAR measures. To receive certification from the Federal Communications Commission (FCC) for sale in the United States, the SAR level for a cell phone must be no more than 1.6 watts per kilogram. Table 8.1 shows the highest SAR level for a random selection of cell phone models as measured by the FCC. |Phone Model||SAR||Phone Model||SAR||Phone Model||SAR| |Apple iPhone 4S||1.11||LG Ally||1.36||Pantech Laser||0.74| |BlackBerry Pearl 8120||1.48||LG AX275||1.34||Samsung Character||0.5| |BlackBerry Tour 9630||1.43||LG Cosmos||1.18||Samsung Epic 4G Touch||0.4| |Cricket TXTM8||1.3||LG CU515||1.3||Samsung M240||0.867| |HP/Palm Centro||1.09||LG Trax CU575||1.26||Samsung Messager III SCH-R750||0.68| |HTC One V||0.455||Motorola Q9h||1.29||Samsung Nexus S||0.51| |HTC Touch Pro 2||1.41||Motorola Razr2 V8||0.36||Samsung SGH-A227||1.13| |Huawei M835 Ideos||0.82||Motorola Razr2 V9||0.52||SGH-a107 GoPhone||0.3| |Kyocera DuraPlus||0.78||Motorola V195s||1.6||Sony W350a||1.48| |Kyocera K127 Marbl||1.25||Nokia 1680||1.39||T-Mobile Concord||1.38| Find a 98% confidence interval for the true (population) mean of the Specific Absorption Rates (SARs) for cell phones. Assume that the population standard deviation is σ = 0.337. Table 8.2 shows a different random sampling of 20 cell phone models. Use this data to calculate a 93% confidence interval for the true mean SAR for cell phones certified for use in the United States. As previously, assume that the population standard deviation is σ = 0.337. |Phone Model||SAR||Phone Model||SAR| |Blackberry Pearl 8120||1.48||Nokia E71x||1.53| |HTC Evo Design 4G||0.8||Nokia N75||0.68| |HTC Freestyle||1.15||Nokia N79||1.4| |LG Ally||1.36||Sagem Puma||1.24| |LG Fathom||0.77||Samsung Fascinate||0.57| |LG Optimus Vu||0.462||Samsung Infuse 4G||0.2| |Motorola Cliq XT||1.36||Samsung Nexus S||0.51| |Motorola Droid Pro||1.39||Samsung Replenish||0.3| |Motorola Droid Razr M||1.3||Sony W518a Walkman||0.73| |Nokia 7705 Twist||0.7||ZTE C79||0.869| Notice the difference in the confidence intervals calculated in Example 8.3 and the following Try It exercise. These intervals are different for several reasons: they were calculated from different samples, the samples were different sizes, and the intervals were calculated for different levels of confidence. Even though the intervals are different, they do not yield conflicting information. The effects of these kinds of changes are the subject of the next section in this chapter. Changing the Confidence Level or Sample Size Suppose we change the original problem in Example 8.2 by using a 95% confidence level. Find a 95% confidence interval for the true (population) mean statistics exam score. We estimate with 95% confidence that the true population mean for all statistics exam scores is between 67.02 and 68.98. Explanation of 95% Confidence Level: Ninety-five percent of all confidence intervals constructed in this way contain the true value of the population mean statistics exam score. Comparing the results: The 90% confidence interval is (67.18, 68.82). The 95% confidence interval is (67.02, 68.98). The 95% confidence interval is wider. If you look at the graphs, because the area 0.95 is larger than the area 0.90, it makes sense that the 95% confidence interval is wider. 
To be more confident that the confidence interval actually does contain the true value of the population mean for all statistics exam scores, the confidence interval necessarily needs to be wider.

- Increasing the confidence level increases the error bound, making the confidence interval wider.
- Decreasing the confidence level decreases the error bound, making the confidence interval narrower.

Try It: Refer back to the pizza-delivery Try It exercise. The population standard deviation is six minutes and the sample mean delivery time is 36 minutes. Use a sample size of 20. Find a 95% confidence interval estimate for the true mean pizza delivery time.

Suppose we change the original problem in Example 8.2 to see what happens to the error bound if the sample size is changed. Leave everything the same except the sample size. Use the original 90% confidence level. What happens to the error bound and the confidence interval if we increase the sample size and use n = 100 instead of n = 36? What happens if we decrease the sample size to n = 25 instead of n = 36?

- x̄ = 68
- EBM = (zα/2)(σ/√n)
- σ = 3; the confidence level is 90% (CL = 0.90); zα/2 = z0.05 = 1.645.
- Increasing the sample size causes the error bound to decrease, making the confidence interval narrower.
- Decreasing the sample size causes the error bound to increase, making the confidence interval wider.

Try It: Refer back to the pizza-delivery Try It exercise. The mean delivery time is 36 minutes and the population standard deviation is six minutes. Assume the sample size is changed to 50 restaurants with the same sample mean. Find a 90% confidence interval estimate for the population mean delivery time.

Working Backwards to Find the Error Bound or Sample Mean

When we calculate a confidence interval, we find the sample mean, calculate the error bound, and use them to calculate the confidence interval. However, sometimes when we read statistical studies, the study may state the confidence interval only. If we know the confidence interval, we can work backwards to find both the error bound and the sample mean.

Finding the error bound:
- From the upper value for the interval, subtract the sample mean,
- OR, from the upper value for the interval, subtract the lower value. Then divide the difference by two.

Finding the sample mean:
- Subtract the error bound from the upper value of the confidence interval,
- OR, average the upper and lower endpoints of the confidence interval.

Notice that there are two methods to perform each calculation. You can choose the method that is easier to use with the information you know.

Suppose we know that a confidence interval is (67.18, 68.82) and we want to find the error bound. We may know that the sample mean is 68, or perhaps our source only gave the confidence interval and did not tell us the value of the sample mean.

- If we know that the sample mean is 68: EBM = 68.82 − 68 = 0.82.
- If we don't know the sample mean: EBM = (68.82 − 67.18)/2 = 0.82.
- If we know the error bound: x̄ = 68.82 − 0.82 = 68.
- If we don't know the error bound: x̄ = (67.18 + 68.82)/2 = 68.

Try It: Suppose we know that a confidence interval is (42.12, 47.88). Find the error bound and the sample mean.

Calculating the Sample Size n

If researchers desire a specific margin of error, then they can use the error bound formula to calculate the required sample size. The error bound formula for a population mean when the population standard deviation is known is EBM = (zα/2)(σ/√n). The formula for sample size is n = z²σ²/EBM², found by solving the error bound formula for n. In this formula, z is zα/2, corresponding to the desired confidence level.
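Before turning to sample-size planning, here is a small Python sketch of the "working backwards" arithmetic described above (the function names are mine):

```python
# Recover the error bound and sample mean from a published interval (lower, upper).
def error_bound(lower: float, upper: float) -> float:
    return (upper - lower) / 2

def sample_mean(lower: float, upper: float) -> float:
    return (lower + upper) / 2

print(round(error_bound(67.18, 68.82), 2), round(sample_mean(67.18, 68.82), 2))  # 0.82 68.0
print(round(error_bound(42.12, 47.88), 2), round(sample_mean(42.12, 47.88), 2))  # 2.88 45.0
```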
A researcher planning a study who wants a specified confidence level and error bound can use this formula to calculate the size of the sample needed for the study.

Example: The population standard deviation for the age of Foothill College students is 15 years. If we want to be 95% confident that the sample mean age is within two years of the true population mean age of Foothill College students, how many randomly selected Foothill College students must be surveyed?

- From the problem, we know that σ = 15 and EBM = 2.
- z = z0.025 = 1.96, because the confidence level is 95%.
- n = z²σ²/EBM² = (1.96)²(15)²/2² = 216.09 using the sample size equation.
- Use n = 217: always round the answer UP to the next higher integer to ensure that the sample size is large enough.

Therefore, 217 Foothill College students should be surveyed in order to be 95% confident that we are within two years of the true population mean age of Foothill College students.

Try It: The population standard deviation for the height of high school basketball players is three inches. If we want to be 95% confident that the sample mean height is within one inch of the true population mean height, how many randomly selected students must be surveyed?
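A short Python sketch of the sample-size formula, applied to the Foothill College example (σ = 15, EBM = 2, 95% confidence); math.ceil handles the "always round up" rule. The function name is mine.

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(sigma: float, ebm: float, cl: float) -> int:
    """Smallest n satisfying EBM = z * sigma / sqrt(n) at the given confidence level."""
    alpha = 1 - cl
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil((z * sigma / ebm) ** 2)

print(required_sample_size(15, 2, 0.95))  # 217, as in the example above
```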
Strings. Nearly every program uses strings. In them we find characters and textual data. The string type allows us to test and manipulate character data.

Literals are a way to specify string data. We use quote characters to enclose literal data. Strings are also dynamically created. (Literal)

Split. Split separates parts of strings. We isolate and extract substrings with a single method call. Programs, not just tutorials, need to split and join strings. (Split)

Search. These methods search string data for substrings. The most useful one is IndexOf. There are variants of IndexOf such as Contains. (IndexOf, IndexOfAny, LastIndexOf, LastIndexOfAny, Contains)

Concat. When we concat strings, we put them together into a larger string. When we append, we put a string at the end of another one. (Concat)

Insert, remove. We insert a string at any position into an existing one. We remove a series of characters starting at any position. (Insert, Remove)

Replace. String data may contain a series of characters we want to replace with another substring, in each place it is found. The Replace method is useful here. (Replace)

Length. We never need to manually count the number of characters. Instead, we use the Length property, a simple memory read, on a string. (Length)

Substring. We acquire a substring of any string with the Substring method. Also we can use Substring to truncate, or take the rightmost part of, a string. (Substring, Truncate, Right)

Equals, compare. We compare two strings for equality. The string.Equals method is commonly used. But Compare is also helpful. With it, we develop sorting routines. (Equals, Compare, StringComparer, StringComparison)

Starts, ends. It is often useful to test only the first few characters of a string for a certain value. StartsWith provides this ability. EndsWith does the opposite. (StartsWith, EndsWith)

Constructor. We typically do not need string constructors in programs. But they can be useful in specific situations. And with Copy, we duplicate string data. (String Constructor, Copy, CopyTo)

Format. Strings are used to format data types. There are many other formatting patterns and adjustments you can make to these substitution markers. (string.Format, DateTime Format, ToString)

Parse, TryParse. These transform strings into other types. Many parsing routines are built-in. Usually it is a bad idea to create your own if one already exists. (TryParse, DateTime.Parse, Enum.Parse, int.Parse, int.TryParse, Hex Format)

Lower, upper. Sometimes a string may be in ALL CAPS. With simple methods, we can transform the casing of strings. The ToLower method changes all uppercase letters to lowercase ones. (ToUpper, ToLower, ToLowerInvariant, Uppercase First Letters, ToTitleCase, TextInfo, IsUpper, IsLower)

Trim. This modifies space and newline characters in string data. Strings sometimes contain characters at their starts or ends that we do not want. (Trim, TrimEnd, TrimStart)

Pad strings. Trimming a string removes extra characters on either end. Padding a string instead adds extra characters. With padding, we create columns of text. We can justify text. (PadLeft, PadRight)

Newlines, whitespace. Strings often contain newline or whitespace characters. We often need to check for these values. We use methods like IsNullOrWhiteSpace. (Environment.NewLine, IsNullOrWhiteSpace, Whitespace, Line Count)

Empty. Does life have meaning? Or is it just emptiness? I have no idea.
It is easier (and more fun) to test strings for emptiness. (Empty Strings, Null Strings, string.Empty, IsNullOrEmpty)

Chars. A string contains data that is made up of individual characters. We deal with these chars in looping constructs. Accessing chars is often the fastest way to test strings. (Char, Change Characters, String Chars, String For-Loop)

Methods. Strings have many methods. They interact with many language features. We use modifiers on strings. We use strings as parameters and properties. (Intern, IsInterned, Normalize, IsNormalized, String ToString, String Property)

Performance. In typical usage strings are fast. But they are sometimes used in an inefficient way. Often reducing string allocations is helpful. (Memory Usage, Equals Performance, Replace Logic, Replace Chars, ToString Cache, int.Parse Optimization, ToString Formats, ToLower Optimization)

Explanations. I explain concepts of strings. How do we append strings when there is no append method? We also learn to increment strings. (String Append, Increment String Number, Explode)

StringBuilder. This is not a string, but it is used to build up or change strings. For appending strings in a loop, you almost certainly want to use StringBuilder. It is much faster here.

A string is immutable. It can be used in many methods, and none of them have to worry about data changes. It never becomes invalid. This reduces copies and makes programs more robust.
The principles of the mechanics of motion are introduced in the Physics curriculum at Key stage 4 in terms of the interchange between kinetic and potential energy and forces. This is enlarged upon at Key stage 5, where simple harmonic motion is discussed. A mathematical treatment is introduced in the study of mechanics at Key stage 5, with motion in a horizontal and then a vertical circle being covered in later mechanics modules.

There are many aspects of mechanics that initially appear counter-intuitive; an example is that two bodies of the same size and shape but different masses will, when released from the same height, hit the ground simultaneously. Similarly, the idea that the horizontal and vertical velocities of a projectile may be analysed independently is not one which comes without some thought. However, this concept is an excellent demonstration of the idea of velocity as a vector. The explanation of many of these principles may be assisted considerably by a practical demonstration using a simple marble run. It has the great advantage of being familiar, but able to exhibit physical principles on a wide variety of levels. By adding measurement tools, the mathematics underpinning the physics may be demonstrated in a very powerful way, linking back to the original insights of mathematical scientists such as Isaac Newton.

The Marble Run

The marble run comprises a flexible, plastic track built from two parallel rails, much like a railway. These can be joined easily and, with a wide variety of supports, structures resembling roller coasters can be built. The 'cars' are marbles, either steel or plastic, which is useful as they have different masses.

Acceleration Under Gravity

This can be demonstrated using a slope of constant angle and releasing marbles from different heights. It becomes apparent that marbles released from a greater height attain a higher speed at the bottom of the slope, but that the speed at the bottom is not linearly related to the release height. Using light gates at the bottom of the run, it will be possible to measure speed and relate it to release height. This will also provide an opportunity to discuss the effects of friction.

A roller coaster run (initially without vertical loops) is a good way to demonstrate the conversion between potential and kinetic energy. Students will rapidly grasp that the marble gains speed as it falls from its release point, and loses it again as it climbs back. This can lead to discussions about how to design the track so that the marble will always reach the end. This provides another opportunity to discuss loss mechanisms. This can also be demonstrated using a 'U'-shaped track and observing the marble dissipating its energy as it oscillates back and forth. This can be used as an example of damped simple harmonic motion for Key stage 5.

The marble run can be used as a launcher for projectiles. By building a ramp with a horizontal section at the bottom, and a sand tray to catch the marble safely, the range of the marble can be related to its launch height. By arranging the track so that the marble is launched at a small upward angle to the ground, the subsequent increase in range can be demonstrated. In later mechanics modules, the relationship between range and release angle has to be derived, but the principles can be explained to younger students. This can also lead to a discussion of siege engines. (A rough calculation of the quantities involved is sketched at the end of this article.)

Motion in a Circle

Motion in a horizontal circle can be investigated by building a circular section of the track.
This demonstrates Newton's First Law but also introduces the idea of centripetal force since, as the speed of the marble increases, it will leave the track at the point at which the centripetal force provided by the track becomes insufficient for the chosen speed and radius. Thus the influences of speed and bend radius can be investigated, and the use of a banked track to permit higher speeds may also be shown. The principles of motion in a vertical circle can be demonstrated using a ramp feeding into a loop. Having discussed kinetic and potential energy and speed, students can deduce the variables which dictate whether the marble will complete a full circle. This can lead to a discussion on the construction of roller coasters.

A marble run can be obtained from: 4children2enjoy Ltd, 87A Newton Road, Mumbles, Swansea SA3 4BN. Light gates for measuring the speed of the marble can be found from a variety of suppliers of school scientific equipment. One example is:
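The Python sketch below, assuming a frictionless track and ignoring the marble's rotational energy, gives rough figures for the demonstrations described above: the speed gained in a drop, the range of a horizontal launch, and the minimum release height needed to complete a vertical loop (2.5 times the loop radius for an idealised point mass). All function names and numbers are illustrative, so real marbles will fall somewhat short of these estimates.

```python
from math import sqrt

g = 9.81  # m/s^2

def speed_after_drop(drop_height: float) -> float:
    """Speed (m/s) after descending drop_height metres, from m*g*h = m*v^2/2."""
    return sqrt(2 * g * drop_height)

def horizontal_launch_range(drop_height: float, launch_height: float) -> float:
    """Range (m) of a marble launched horizontally at launch_height metres above
    the sand tray, having first descended drop_height metres on the track."""
    v = speed_after_drop(drop_height)
    time_of_flight = sqrt(2 * launch_height / g)
    return v * time_of_flight

def min_release_height_for_loop(loop_radius: float) -> float:
    """Frictionless minimum release height (m) above the bottom of a vertical
    loop for the marble to keep contact with the track all the way round."""
    return 2.5 * loop_radius

print(round(speed_after_drop(0.5), 2))              # ~3.13 m/s for a 0.5 m drop
print(round(horizontal_launch_range(0.5, 0.8), 2))  # ~1.26 m of range
print(min_release_height_for_loop(0.1))             # 0.25 m for a 10 cm loop radius
```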
By the end of this section, you will be able to:

- Describe the processes of a simple heat engine.
- Explain the differences among the simple thermodynamic processes: isobaric, isochoric, isothermal, and adiabatic.
- Calculate total work done in a cyclical thermodynamic process.

One of the most important things we can do with heat transfer is to use it to do work for us. Such a device is called a heat engine. Car engines and steam turbines that generate electricity are examples of heat engines. Figure 2 shows schematically how the first law of thermodynamics applies to the typical heat engine. It is impossible to devise a system where Qout = 0, that is, in which no heat transfer occurs to the environment.

The illustrations above show one of the ways in which heat transfer does work. Fuel combustion produces heat transfer to a gas in a cylinder, increasing the pressure of the gas and thereby the force it exerts on a movable piston. The gas does work on the outside world, as this force moves the piston through some distance. Heat transfer to the gas cylinder results in work being done. To repeat this process, the piston needs to be returned to its starting point. Heat transfer now occurs from the gas to the surroundings so that its pressure decreases, and a force is exerted by the surroundings to push the piston back through some distance. Variations of this process are employed daily in hundreds of millions of heat engines. We will examine heat engines in detail in the next section. In this section, we consider some of the simpler underlying processes on which heat engines are based.

PV Diagrams and their Relationship to Work Done on or by a Gas

A process by which a gas does work on a piston at constant pressure is called an isobaric process. Since the pressure is constant, the force exerted is constant and the work done is given as PΔV. To see why, start with W = Fd, using the symbols shown in Figure 4. Now F = PA, and so W = PAd. Because the volume of a cylinder is its cross-sectional area A times its length d, we see that Ad = ΔV, the change in volume; thus, W = PΔV (isobaric process). Note that if ΔV is positive, then W is positive, meaning that work is done by the gas on the outside world.

(Note that the pressure involved in this work that we've called P is the pressure of the gas inside the tank. If we call the pressure outside the tank Pext, an expanding gas would be working against the external pressure; the work done would therefore be W = −PextΔV (isobaric process). Many texts use this definition of work, and not the definition based on internal pressure, as the basis of the First Law of Thermodynamics. This definition reverses the sign conventions for work, and results in a statement of the first law that becomes ΔU = Q + W.)

It is not surprising that W = PΔV, since we have already noted in our treatment of fluids that pressure is a type of potential energy per unit volume and that pressure in fact has units of energy divided by volume. We also noted in our discussion of the ideal gas law that PV has units of energy. In this case, some of the energy associated with pressure becomes work.

Figure 5 shows a graph of pressure versus volume (that is, a PV diagram) for an isobaric process. You can see in the figure that the work done is the area under the graph. This property of PV diagrams is very useful and broadly applicable: the work done on or by a system in going from one state to another equals the area under the curve on a PV diagram.
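As a quick numeric check of W = PΔV, the following sketch uses illustrative values of my own choosing, not taken from the figures in the text:

```python
def isobaric_work(pressure_pa: float, delta_volume_m3: float) -> float:
    """Work in joules done BY the gas when its volume changes by delta_volume_m3
    at constant pressure pressure_pa (positive delta_V means positive W)."""
    return pressure_pa * delta_volume_m3

print(isobaric_work(1.0e5, 2.0e-3))   # 200.0 J for a 2 L expansion at about 1 atm
print(isobaric_work(1.0e5, -2.0e-3))  # -200.0 J: work is done ON the gas
```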
We can see where this leads by considering Figure 6a, which shows a more general process in which both pressure and volume change. The area under the curve is closely approximated by dividing it into strips, each having an average constant pressure Pi(ave). The work done is Wi = Pi(ave)ΔVi for each strip, and the total work done is the sum of the Wi. Thus the total work done is the total area under the curve. If the path is reversed, as in Figure 6b, then work is done on the system. The area under the curve in that case is negative, because ΔV is negative.

PV diagrams clearly illustrate that the work done depends on the path taken and not just the endpoints. This path dependence is seen in Figure 7a, where more work is done in going from A to C by the path via point B than by the path via point D. The vertical paths, where volume is constant, are called isochoric processes. Since volume is constant, ΔV = 0, and no work is done in an isochoric process. Now, if the system follows the cyclical path ABCDA, as in Figure 7b, then the total work done is the area inside the loop. The negative area below path CD subtracts, leaving only the area inside the rectangle. In fact, the work done in any cyclical process (one that returns to its starting point) is the area inside the loop it forms on a PV diagram, as Figure 7c illustrates for a general cyclical process. Note that the loop must be traversed in the clockwise direction for work to be positive, that is, for there to be a net work output.

Example 1. Total Work Done in a Cyclical Process Equals the Area Inside the Closed Loop on a PV Diagram

Calculate the total work done in the cyclical process ABCDA shown in Figure 7b by the following two methods to verify that work equals the area inside the closed loop on the PV diagram. (Take the data in the figure to be precise to three significant figures.)

- Calculate the work done along each segment of the path and add these values to get the total work.
- Calculate the area inside the rectangle ABCDA.

To find the work along any path on a PV diagram, you use the fact that work is pressure times change in volume, or W = PΔV. So in part 1, this value is calculated for each leg of the path around the closed loop.

Solution for Part 1

The work along path AB is WAB = PAB ΔVAB, with the pressure and volume change read from the figure. Since the path BC is isochoric, ΔVBC = 0, and so WBC = 0. The work along path CD is negative, since ΔVCD is negative (the volume decreases); it is WCD = PCD ΔVCD. Again, since the path DA is isochoric, ΔVDA = 0, and so WDA = 0. Now the total work is W = WAB + WBC + WCD + WDA = WAB + WCD.

Solution for Part 2

The area inside the rectangle is its height times its width, or area = (PAB − PCD)(VB − VA). Thus, area = 650 J = W.

The result, as anticipated, is that the area inside the closed loop equals the work done. The area is often easier to calculate than is the work done along each path. It is also convenient to visualize the area inside different curves on PV diagrams in order to see which processes might produce the most work. Recall that work can be done to the system, or by the system, depending on the sign of W. A positive W is work that is done by the system on the outside environment; a negative W represents work done by the environment on the system.

Figure 8a shows two other important processes on a PV diagram. For comparison, both are shown starting from the same point A. The upper curve ending at point B is an isothermal process, that is, one in which temperature is kept constant. If the gas behaves like an ideal gas, as is often the case, and if no phase change occurs, then PV = nRT.
Since T is constant, PV is a constant for an isothermal process. We ordinarily expect the temperature of a gas to decrease as it expands, and so we correctly suspect that heat transfer must occur from the surroundings to the gas to keep the temperature constant during an isothermal expansion. To show this more rigorously for the special case of a monatomic ideal gas, we note that the average kinetic energy of an atom in such a gas is given by (3/2)kT. The kinetic energy of the atoms in a monatomic ideal gas is its only form of internal energy, and so its total internal energy U is U = (3/2)NkT (monatomic ideal gas), where N is the number of atoms in the gas. This relationship means that the internal energy of an ideal monatomic gas is constant during an isothermal process, that is, ΔU = 0. If the internal energy does not change, then the net heat transfer into the gas must equal the net work done by the gas. That is, because ΔU = Q − W = 0 here, Q = W. We must have just enough heat transfer to replace the work done. An isothermal process is inherently slow, because heat transfer occurs continuously to keep the gas temperature constant at all times and must be allowed to spread through the gas so that there are no hot or cold regions.

Also shown in Figure 8a is a curve AC for an adiabatic process, defined to be one in which there is no heat transfer, that is, Q = 0. Processes that are nearly adiabatic can be achieved either by using very effective insulation or by performing the process so fast that there is little time for heat transfer. Temperature must decrease during an adiabatic process, since work is done at the expense of internal energy, which for a monatomic ideal gas is U = (3/2)NkT. (You might have noted that a gas released into atmospheric pressure from a pressurized cylinder is substantially colder than the gas in the cylinder.) In fact, because Q = 0, ΔU = −W for an adiabatic process. Lower temperature results in lower pressure along the way, so that curve AC is lower than curve AB, and less work is done. If the path ABCA could be followed by cooling the gas from B to C at constant volume (isochorically), as in Figure 8b, there would be a net work output.

Both isothermal and adiabatic processes such as shown in Figure 8 are reversible in principle. A reversible process is one in which both the system and its environment can return to exactly the states they were in by following the reverse path. The reverse isothermal and adiabatic paths are BA and CA, respectively. Real macroscopic processes are never exactly reversible. In the previous examples, our system is a gas (like that in Figure 4), and its environment is the piston, cylinder, and the rest of the universe. If there are any energy-dissipating mechanisms, such as friction or turbulence, then heat transfer to the environment occurs for either direction of the piston. So, for example, if the path BA is followed and there is friction, then the gas will be returned to its original state but the environment will not: it will have been heated in both directions. Reversibility requires the direction of heat transfer to reverse for the reverse path. Since dissipative mechanisms cannot be completely eliminated, real processes cannot be reversible.

There must be reasons that real macroscopic processes cannot be reversible. We can imagine them going in reverse. For example, heat transfer occurs spontaneously from hot to cold and never spontaneously in the reverse direction. Yet it would not violate the first law of thermodynamics for this to happen.
In fact, spontaneous processes, such as bubbles bursting, never go in reverse. There is a second thermodynamic law that forbids them from going in reverse. When we study this law, we will learn something about nature and also find that such a law limits the efficiency of heat engines. We will find that heat engines with the greatest possible theoretical efficiency would have to use reversible processes, and even they cannot convert all heat transfer into doing work.

Table 1 summarizes the simpler thermodynamic processes and their definitions.

|Table 1. Summary of Simple Thermodynamic Processes|
|Isobaric||Constant pressure||W = PΔV|
|Isochoric||Constant volume||W = 0|
|Isothermal||Constant temperature||Q = W|
|Adiabatic||No heat transfer||Q = 0|

PhET Explorations: States of Matter

Watch different types of molecules form a solid, liquid, or gas. Add or remove heat and watch the phase change. Change the temperature or volume of a container and see a pressure-temperature diagram respond in real time. Relate the interaction potential to the forces between molecules.

- One of the important implications of the first law of thermodynamics is that machines can be harnessed to do work that humans previously did by hand or by external energy supplies such as running water or the heat of the Sun. A machine that uses heat transfer to do work is known as a heat engine.
- There are several simple processes, used by heat engines, that flow from the first law of thermodynamics. Among them are the isobaric, isochoric, isothermal and adiabatic processes.
- These processes differ from one another based on how they affect pressure, volume, temperature, and heat transfer.
- If the work is performed on the outside environment, work (W) will be a positive value. If the work is done on the heat engine system, work (W) will be a negative value.
- Some thermodynamic processes, including isothermal and adiabatic processes, are reversible in theory; that is, both the thermodynamic system and the environment can be returned to their initial states. However, because of loss of energy owing to the second law of thermodynamics, complete reversibility does not work in practice.
- A great deal of effort, time, and money has been spent in the quest for the so-called perpetual-motion machine, which is defined as a hypothetical machine that operates or produces useful work indefinitely and/or a hypothetical machine that produces more work or energy than it consumes. Explain, in terms of heat engines and the first law of thermodynamics, why such a machine is or is not likely to be constructed.
- One method of converting heat transfer into doing work is for heat transfer into a gas to take place, which expands, doing work on a piston, as shown in the figure below. (a) Is the heat transfer converted directly to work in an isobaric process, or does it go through another form first? Explain your answer. (b) What about in an isothermal process? (c) What about in an adiabatic process (where heat transfer occurred prior to the adiabatic process)?
- Would the previous question make any sense for an isochoric process? Explain your answer.
- We ordinarily say that ΔU = 0 for an isothermal process. Does this assume no phase change takes place? Explain your answer.
- The temperature of a rapidly expanding gas decreases. Explain why in terms of the first law of thermodynamics. (Hint: Consider whether the gas does work and whether heat transfer occurs rapidly into the gas through conduction.)
- Which cyclical process represented by the two closed loops, ABCFA and ABDEA, on the PV diagram in the figure below produces the greatest net work? Is that process also the one with the smallest work input required to return it to point A? Explain your responses.
- A real process may be nearly adiabatic if it occurs over a very short time. How does the short time span help the process to be adiabatic?
- It is unlikely that a process can be isothermal unless it is a very slow process. Explain why. Is the same true for isobaric and isochoric processes? Explain your answer.

Problems & Exercises

- A car tire contains 0.0380 m³ of air at a pressure of 2.20 × 10⁵ N/m² (about 32 psi). How much more internal energy does this gas have than the same volume has at zero gauge pressure (which is equivalent to normal atmospheric pressure)?
- A helium-filled toy balloon has a gauge pressure of 0.200 atm and a volume of 10.0 L. How much greater is the internal energy of the helium in the balloon than it would be at zero gauge pressure?
- Steam to drive an old-fashioned steam locomotive is supplied at a constant gauge pressure of 1.75 × 10⁶ N/m² (about 250 psi) to a piston with a 0.200-m radius. (a) By calculating PΔV, find the work done by the steam when the piston moves 0.800 m. Note that this is the net work output, since gauge pressure is used. (b) Now find the amount of work by calculating the force exerted times the distance traveled. Is the answer the same as in part (a)?
- A hand-driven tire pump has a piston with a 2.50-cm diameter and a maximum stroke of 30.0 cm. (a) How much work do you do in one stroke if the average gauge pressure is 2.40 × 10⁵ N/m² (about 35 psi)? (b) What average force do you exert on the piston, neglecting friction and gravitational force?
- Calculate the net work output of a heat engine following path ABCDA in the figure below.
- What is the net work output of a heat engine that follows path ABDA in the figure above, with a straight line from B to D? Why is the work output less than for path ABCDA? Explicitly show how you follow the steps in the Problem-Solving Strategies for Thermodynamics.
- Unreasonable Results. What is wrong with the claim that a cyclical heat engine does 4.00 kJ of work on an input of 24.0 kJ of heat transfer while 16.0 kJ of heat transfers to the environment?
- (a) A cyclical heat engine, operating between temperatures of 450ºC and 150ºC, produces 4.00 MJ of work on a heat transfer of 5.00 MJ into the engine. How much heat transfer occurs to the environment? (b) What is unreasonable about the engine? (c) Which premise is unreasonable?
- Construct Your Own Problem. Consider a car's gasoline engine. Construct a problem in which you calculate the maximum efficiency this engine can have. Among the things to consider are the effective hot and cold reservoir temperatures. Compare your calculated efficiency with the actual efficiency of car engines.
- Construct Your Own Problem. Consider a car trip into the mountains. Construct a problem in which you calculate the overall efficiency of the car for the trip as a ratio of kinetic and potential energy gained to fuel consumed. Compare this efficiency to the thermodynamic efficiency quoted for gasoline engines and discuss why the thermodynamic efficiency is so much greater. Among the factors to be considered are the gain in altitude and speed, the mass of the car, the distance traveled, and typical fuel economy.
heat engine: a machine that uses heat transfer to do work

isobaric process: a constant-pressure process in which a gas does work

isochoric process: a constant-volume process

isothermal process: a constant-temperature process

adiabatic process: a process in which no heat transfer takes place

reversible process: a process in which both the heat engine system and the external environment theoretically can be returned to their original states

Selected Solutions to Problems & Exercises

1. 6.77 × 10³ J

3. (a) W = PΔV = 1.76 × 10⁵ J; (b) W = Fd = 1.76 × 10⁵ J. Yes, the answer is the same.

5. W = 4.5 × 10³ J

7. W is not equal to the difference between the heat input and the heat output.
What is Pseudo Force?

A pseudo force (also called a fictitious force, inertial force or d'Alembert force) is an apparent force that acts on all masses whose motion is described using a non-inertial frame of reference, such as a rotating reference frame. A pseudo force comes into effect when the frame of reference starts accelerating relative to a non-accelerating frame. The force F does not arise from any physical interaction between two objects, but rather from the acceleration 'a' of the non-inertial reference frame itself. As a frame can accelerate in any arbitrary way, pseudo forces can be just as arbitrary (but only in direct response to the acceleration of the frame). However, four pseudo forces are defined for frames accelerated in commonly occurring ways: one caused by a relative acceleration of the origin in a straight line (rectilinear acceleration); two involving rotation, the Coriolis force and the centrifugal force; and a fourth, called the Euler force, caused by a variable rate of rotation.

Examples of Pseudo Force:

For example, if you consider a person standing at a bus stop watching an accelerating car, he infers that a force is exerted on the car and it is accelerating. Here there is no problem, and the pseudo force concept is not required. But if the person inside the accelerating car is looking at the person standing at the bus stop, he finds that the person is accelerating with respect to the car, though no force is acting on him. Here, the concept of pseudo force is required to convert the non-inertial frame of reference to an equivalent inertial frame of reference.

For another example, consider a ball hung from the roof of a train by means of an inextensible string. If the train is at rest or is moving with a uniform speed in a straight line, the string will be vertical. A passenger will infer that the net force acting on the ball is zero. If the train begins to accelerate, then the string will make an angle with respect to the vertical. For the passenger, there are only two forces, gravity and the tension in the string, and they are not collinear. Yet the ball remains apparently in a state of equilibrium (as long as the acceleration of the train is constant). Here, the concept of pseudo force is required.

Consider an accelerating car of mass M with a passenger of mass m. The force from the axle is (m + M)a. In the inertial frame, this is the only external force on the car and passenger.

In an exploded view in the inertial frame, the passenger is subject to the accelerating force ma. The seat (assumed of negligible mass) is compressed between the reaction force −ma and the applied force from the car ma. The car is subject to the net acceleration force Ma, which is the difference between the applied force (m + M)a from the axle and the reaction from the seat −ma.

In an exploded view in the non-inertial frame, where the car is not accelerating, the force from the axle is balanced by a fictitious backward force −(m + M)a, a portion −Ma applied to the car, and −ma to the passenger. The car is subject to the fictitious force −Ma and the force (m + M)a from the axle. The difference between these forces, ma, is applied to the seat, which exerts a reaction −ma upon the car, so zero net force is applied to the car. The seat (assumed massless) transmits the force ma to the passenger, who is subject also to the fictitious force −ma, resulting in zero net force on the passenger. The passenger exerts a reaction force −ma upon the seat, which is therefore compressed.
In all frames the compression of the seat is the same, and the force delivered by the axle is the same.
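The bookkeeping in the car example can be checked numerically. The sketch below uses hypothetical masses and acceleration of my own choosing; it simply confirms that, once the fictitious forces −Ma and −ma are added in the car's frame, the net force on both the car and the passenger is zero.

```python
# Hypothetical numbers for the accelerating-car example above.
m, M, a = 70.0, 1000.0, 2.0   # passenger mass (kg), car mass (kg), acceleration (m/s^2)

# Inertial frame: the axle force accelerates both car and passenger.
axle_force = (m + M) * a             # 2140 N
seat_force_on_passenger = m * a      # 140 N, the force that actually accelerates the passenger

# Non-inertial frame (riding with the car): add the fictitious forces -M*a and -m*a.
net_on_car = axle_force - seat_force_on_passenger - M * a   # axle + seat reaction (-ma) + fictitious (-Ma)
net_on_passenger = seat_force_on_passenger - m * a          # seat force + fictitious (-ma)

print(axle_force, seat_force_on_passenger)   # 2140.0 140.0
print(net_on_car, net_on_passenger)          # 0.0 0.0 -- apparent equilibrium in the car's frame
```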
Whether you are preparing for your Physics exam or in search of ideas for a science project, the importance of Newton's Laws of Motion can never be ignored. This is because if you are preparing for an exam, these laws are easy to prepare; and if you are looking for ideas for a science project, then they can be easily demonstrated and so make a science project quite meaningful. If you choose to demonstrate Newton's Laws of Motion for your middle school science project, there are many ways of doing that. One good thing about demonstrating these laws is that, unlike many other science projects, the expense is almost negligible. You just need a few basic things which are quite easily available in your house.

The First Law of Motion

Newton's first law of motion states that "An object will remain at rest or will continue moving in a straight line with uniform velocity unless it is acted upon by an external force or constraint". This law is also known as the law of inertia and can be demonstrated by a simple experiment. You just need a bottle or jar, a playing card and a coin.

Experiment. Place the playing card on top of the bottle opening as shown. Place the coin over the card such that the coin is positioned exactly over the bottle's opening. Now knock the card away by striking it with your finger or with the flat end of a ruler. You will see that the coin will always fall inside the bottle, with the card falling elsewhere.

The Second Law of Motion

The second law states that "When a force 'F' is applied on a body it produces an acceleration 'a' which is parallel and directly proportional to the applied force 'F' and inversely proportional to the mass 'm' of the body". The law is commonly represented as F = ma and can be demonstrated by the experiment given below.

Experiment. Take a hard rectangular wooden board with a smooth surface and attach a pulley at the center of one of its shorter sides. Place a meter rule along the longer side of the board. Now take a piece of string of reasonable length and attach one end of it to a simple dynamics trolley and the other end to some light weights. Pass the string over the pulley so that the weights hang down. As you increase the weight on the string, you will notice that some acceleration is produced in the dynamics trolley, in the direction of the applied force or weight. Keep increasing the weights and the mass on the trolley. If you take readings, it will be observed that F = ma.

The Third Law of Motion

This law is the simplest of the three to understand and states: "To every action there is an equal and opposite reaction". This can be demonstrated by means of a simple balloon-powered car.

Experiment. In the balloon-powered car, it can be observed that the air moves out of the balloon in one direction while the car moves in the opposite direction. This demonstrates the concept of action and reaction.
Graphing Ferris Wheel Heights Lesson 3 of 11 Objective: SWBAT sketch the graph of a function of a Ferris wheel rider's height over time and to plot key points (maxima/minima) on that function's graph. The more deeply that students think about and understand the first questions about symmetries on the Ferris Wheel Heights Warm-Up, the better prepared they will be to sketch graphs of the trigonometric functions. The big idea is: if students understand how to get information about one quadrant from another quadrant, they will more easily be able to create a graph. I think that seeing the symmetries on the circle first is a great way to make this leap. There are a lot of different ways for students to think about the second problem. This is a good chance for students to think about multiple representations: they can use sketches of graphs, or start to set up data tables, or use a diagram of a Ferris wheel to identify the key information. At this point I hope that my students will start to look for generalizations—even though they don't currently have any way to find a formula to fit the data—they can make some generalizations about how to use the four pieces of given information to find the maximum and minimum points. If they attempt this work, MP7 and MP8 will surely come into play. Instructional Note: Some students may choose to spend their whole time trying to figure out this generalization—which is awesome, and it is great to note that they do not need to know anything about the trigonometric functions in order to make this generalization. I include the third problem to make sure that students remember key information about how special right triangles work. My expectation is that my students will use their knowledge of the Pythagorean Theorem to set up equations, rather than memorizing the information about special right triangles. I inform students that the last problem is important; they will eventually apply their knowledge of these problems to the Ferris wheel context. I think that a great question for them to think about during this lesson is, "How do these two types of problems connect with each other?" Today's closing is a great time to ask students to make the generalizations. My students are already expressing informal observations, which is great. I will prompt them to make these more formal by asking them: If you were going to teach somebody some shortcuts about graphing Ferris wheel heights, what tricks could you show them? How did you figure these tricks out? I expect that this question will start the process of creating productive generalizations. Anything that my students come up with today will be helpful once we start to write function rules more formally. I will also use the closing activity of today's lesson to start a conversation about function rules: - Does anybody know a function rule that could fit these kinds of graphs? What do you think about the ones you know already? Do any of the rules you have studied so far apply to these situations? Chances are, none of my students will have these ideas yet, because we really haven't talked about trigonometry at all, but hopefully with this question, more students will start to think about rules that may fit these situations. The more they learn from thinking and reasoning on their own, rather than waiting for us to tell them, the more belief they will have in their own reasoning and thinking abilities, which will empower them to do more reasoning and thinking!
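For readers curious what kind of function rule could eventually fit these graphs, here is a minimal sketch under assumed values: the center height, radius and period below are invented and are not figures from the lesson, and the cosine rule is just one possible model for a rider who starts at the bottom of the wheel.

```python
import math

# Hypothetical Ferris wheel: all parameter values are invented for illustration.
center_height = 30.0   # height of the wheel's center above the ground (feet)
radius = 25.0          # radius of the wheel (feet)
period = 40.0          # time for one full revolution (seconds)

def rider_height(t):
    """Height of a rider who starts at the bottom of the wheel at t = 0."""
    # One possible rule: a shifted cosine whose minimum occurs at t = 0.
    return center_height - radius * math.cos(2 * math.pi * t / period)

# The key points follow directly from the parameters:
print("minimum height:", center_height - radius)   # at t = 0, period, 2*period, ...
print("maximum height:", center_height + radius)   # at t = period/2, 3*period/2, ...
print("height after 10 s:", round(rider_height(10.0), 2))
```

The maxima and minima come straight from the center height and the radius, which is exactly the kind of generalization students can reach before any trigonometry is introduced.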
Become a fan of h2g2 Project Apollo: The Beginnings | Mission Planning | Landing Site Selection | Earthbound Support Systems Astronaut Selection and Training | The Saturn V | The Saturn 1B | The Apollo Spacecraft | Guidance and Navigation Command and Service Modules | The Lunar Module | Assembling and Launching | Pathfinders | The Early Missions Apollo 11, The First Landing | The Intermediate Missions | Apollo 15 Exploration | Apollo 16 Exploration Apollo 17 Exploration | Skylab and Apollo-Soyuz | Conclusion Apollo 11's landing site in the Sea of Tranquillity had been chosen primarily for its ease of access and because it provided a relatively broad and flat expanse to set down, where any landing within the designated zone would be acceptable without the need for pinpoint accuracy. With the completion of the Apollo 11 mission, which had landed some four miles past its designated site, President Kennedy's challenge had been met; but future missions would need to demonstrate a higher degree of accuracy to land in some of the more geologically productive sites. It would be of no use planning sophisticated missions to inaccessible spots if they were unable to put the spacecraft down accurately. Post flight analysis of Apollo 11 had shown a number of reasons for the 'long' landing. The main factor was perturbations in the descent orbit caused by fluctuations in the lunar gravity field. Concentrations of Mass (Mascons) in the younger mare plains caused localized increases in the gravity field, which affected the spacecraft's line of flight. This had first been noticed in the Lunar Orbiter missions where the satellite's orbits had been affected. NASA's flight guidance team came up with an answer and were confident that they could place future spacecraft on their landing sites with much greater accuracy. By measuring the Doppler shift of the spacecraft's returned radio signal - which was already used to pinpoint its position - and comparing it to a predicted value, the difference could be used to adjust the guidance computer to compensate for the gravity effects. However, this would have to be demonstrated with an accurate, pinpoint landing before committing future missions to the more hazardous mountainous regions. The missions following Apollo 11 were to be extended to allow a longer stay time on the surface with a rest period between two EVAs. Each moon walk would be for periods of up to three and a half hours, allowing the crew to extend their excursions away from the immediate vicinity of the LM; although, at this stage, they were still on foot. These extended missions were designated as 'H' class and four were planned to take place, Apollo's 12 to 15. Ultimately, only two would take place, one would not make it to the surface and another would be upgraded to 'J' class. On the 14 November, 1969, Apollo 12 crewed by Commander Charles 'Pete' Conrad, Lunar Module Pilot Alan L Bean and Command Module Pilot Richard F Gordon took off from launch pad 39a and was struck by lightning 36 seconds after lift off. A second strike occurring 16 seconds later. On board, circuit breakers to the service module's power cells tripped out, lighting up almost all the warning lights on the CM's control panel. Moments later, as the second strike occurred, the navigation system lost its data platform and the master alarm came on. However, the Saturn's launch guidance system remained functional and Apollo 12 held on its planned trajectory. 
Electrical systems flight controller John Aaron recognised that all three of the service module's power cells had tripped out and advised the crew to switch the command module's power supply to its internal batteries, which restored the instrument telemetry between the spacecraft and mission control. Still without instruments inside the spacecraft, the crew held off resetting the main power circuits from the service module until the Saturn staged. As the first stage booster fell away, Bean reset the power source and brought the crew's instrumentation back on line. Meanwhile the Saturn, which was controlled by the flight guidance system in the top of the third stage, continued unaffected and guided the craft into its earth parking orbit without mishap. The craft remained in orbit while the damage was assessed and the CSM's guidance platform data was reset. After checking out the spacecraft, some concern remained that the electrical circuit for the re-entry parachute pyrotechnics may have been affected and that the landing parachutes might fail to deploy after re-entry. As this could not be confirmed, it was decided to continue the mission; and the 'go' signal was given for the Trans Lunar Injection (TLI) burn. Post launch analysis found that the lightning discharge had been able to take place through the ionised gas trail left by the Saturn's exhaust, which conducted the strikes back to the launch tower. The target for this mission was to be the Oceanus Procellarum (Ocean of Storms), to the western end of the Apollo Landing Zone, near a pattern of craters known as 'Snowman', where a robot lander probe, Surveyor 3, had put down two years previously in the crater that formed the torso of the Snowman formation. One of the mission's tasks would be to retrieve parts of Surveyor to see how structural materials had been affected by long-term exposure to heat, cold and radiation from space. If successful, the landing would also demonstrate the ability to achieve a pinpoint touch-down at a selected site, which would open the possibility for landings in more restricted locations. Another feature of interest that clinched the site selection was an ejecta ray of material crossing the formation from the much later Copernicus Crater some 200 miles to the north and well outside the Apollo landing zone. Sampling the ray would provide the date of the Copernicus impact and give an insight into the ages of all the terrain that the ray crossed. Comparison of samples from another of the great maria with those taken at Tranquillity base by Apollo 11 would also establish whether all the maria had been formed at the same time and whether the lava differed in composition. Apollo 12's lunar module, named Intrepid, was crewed by Conrad and Bean, while the command module, Yankee Clipper, was piloted by Gordon. The CM's name reflected the fact that all of Apollo 12's crew were US Navy personnel. No further difficulties were experienced on the outbound journey, after the electrifying start, and Conrad began Intrepid's descent to the lunar surface four days later on 19 November, 1969. Passing through 7,000 feet, Intrepid pitched over allowing the crew to get their first good view of the landing site. At first Conrad could not make out Snowman's features; but as Bean began calling out the numbers, Conrad aligned the Landing Point Designator (LPD) and found the graticule cross hairs lined up directly on the centre of Surveyor's crater.
Blipping his hand controller, he moved the LPD's aim to the rim of the crater and overflew the edge looking for a landing spot between Surveyor and Head Craters. Approaching the surface, the engine's efflux began kicking up large quantities of dust that completely obscured the surface. Flying blind he brought Intrepid in to a pinpoint landing at 3.04 degrees south, 23.42 degrees west, within 530 feet of its target. The First EVA, ALSEP Deployment Conrad exited Intrepid and on stepping out onto the surface said: Whoopee... Man, that may have been a small one for Neil but that's a long one for me. He was referring to the difference in height between himself and Armstrong. As one of the shorter astronauts at 5'6'' tall, stepping off the LM's ladder, which ended about three and a half feet above the surface, was no mean feat for him in a bulky space suit. He was joined on the surface by Bean a few minutes later. Conrad's first task was to deploy a High Gain 'S' Band Antenna , an umbrella-like aerial measuring five feet in diameter, mounted on a small tripod that unfurled and was to be pointed directly at earth to improve communication signals. Bean deployed a colour TV camera for coverage of the mission back to earth. As he did so, he inadvertently pointed it directly at the sun which almost immediately burnt out the camera's image tube, ending any possibility of TV transmissions from the surface. While public and media interest in the lunar missions had reached an all time high with Apollo 11, interest was now declining as space flight was becoming regarded by the general public as almost routine. The landing of Apollo 12 was covered by the American television networks; but as soon as it was realised that no further pictures would be received from the moon, the television companies cancelled their remaining scheduled broadcasts. During the first EVA, the crew deployed the first Apollo Lunar Surface Experiment Package (ALSEP), which consisted of a number of experimental devices to remain on the surface and transmit data back to earth after the astronauts departure. The ALSEP had been omitted from the Apollo 11 mission, mainly due to weight considerations and that the missions priority had been to establish a landing. The inclusion of the ALSEP now gave a greater emphasis to the scientific and exploration elements of the missions. The ALSEP included another Passive Seismic Experiment, similar to that on Apollo 11, and (in addition), - Lunar Surface Magnetometer to measure the moons magnetic field - Cold Cathode Gauge Experiment to measure any residual atmosphere or gasses remaining on the surface - Solar Wind Spectrometer to record the direction and density of the solar wind - Suprathermal Ion Detector to detect the low energy ions of the solar wind All these experiments were tied into a central station, which provided power and communicated data back to earth. Power generation of the central station was from a Radioactive Thermal Generator (RTG), developed by the American Atomic Energy Commission for use in space and other remote locations. The generator, a SNAP-27 (Systems for Nuclear Auxiliary Power) used a core of plutonium as its power source. During the flight the plutonium element was housed separately in a protected flask on the exterior of the lunar module's descent stage, so that in the event of the craft being destroyed during take off or in the earth's atmosphere, the plutonium would remain intact. 
The element was to be removed from its flask and inserted into the RTG when the experiments were set up on the surface. However, Bean experienced difficulty removing the element from the flask when it jammed half way out. Conrad was called over to help, but the element remained stuck until the application of a couple of hard raps with the geological sample hammer released it. 'Never come to the moon without a hammer.' quipped Conrad. On completion of the ALSEP deployment the crew found they still had some minutes of the EVA left and began some random sampling of the surrounding area. They ventured out some 75 yards to the edge of Middle Crescent Crater, a shallow crater 350 yards wide to sample and photograph before returning to Intrepid for a rest period The Second EVA, Bench Crater and Surveyor Conrad and Bean carried out two EVA's. The second was a circular traverse of over a mile, taking in the Snowman's Head crater; then south to Bench, Sharp, and Halo craters; and back to the Surveyor crater, where the robot craft rested on its inner, eastern slope. The crew carried out sampling stops at each of the craters. Bench crater was of particular interest to the geologists due to its terraced layering, showing a distinct bench, or ledge, in its wall and a small mound in the centre, from which it derived its name. Conrad was invited by the Earth-bound geologists to go into the crater to obtain a sample, but declined due to the steepness of the inner slope. Due to the prevailing lighting conditions and the undulating surface, they had difficulty establishing the location of Sharp crater, the furthest point in the traverse from the LM. They returned past the southern side of Bench, where they again had difficulty locating Halo. Rather than fall behind on their schedule, they carried on to Surveyor. Observed from the LM on the first EVA, Surveyor crater had at first looked as if it would be too steep to enter; but on closer inspection, this proved to be an effect of the low angle of lighting; and the crew were able to walk around the craters rim and down the gently sloping inner face to where the lander was located. They observed that Surveyor had bounced slightly on landing, leaving imprints in the surface dust and that it's framework was covered in a reddish-brown discoloration. After photographing the site, they cut off a piece of the lander's frame and electrical wiring, and removed its TV camera and scoop arm for return to earth. They further removed samples, including one particular rock that could be identified from Surveyors earlier television transmissions. This would give the geologists an opportunity to see how accurate the conclusions they had drawn from Surveyor's TV evidence had been. The crew's lunar surface stay time had been extended considerably from that of Apollo 11's; and they remained on the surface for a total of 31.5 hours, including the two EVA's totalling 7.75 hours, before their take off and return to dock with the orbiting Yankee Clipper. On return to the LM, the crew found that more dust than had been expected had attached itself to their clothing, and despite vacuuming it off, quantities of it proved to be a problem as, after take-off, it became weightless and floated around inside the capsule making the return journey uncomfortable. One of the experiments deployed on their first EVA was a seismograph for further study of moonquakes. 
Intrepid had one more useful function to perform after docking and transfer of the crew to the CSM: It was de-orbited to impact on the moon's surface some 50 miles from the landing site at a speed of over 3,500 mph to provide a shock wave for the seismograph, enabling technicians to calibrate the instrument from a known impact source. A totally unexpected result was that it recorded shock waves from the impact for three quarters of an hour afterwards, as they echoed around the moon's interior. It was described as '... though one had struck a bell in a belfry of a church a single blow and found that the reverberations from it continued for thirty minutes'. The Trans Earth Injection (TEI) burn and return from the moon was largely routine for Apollo 12. The crew brought back 75 pounds of samples and after re-entry with the parachutes functioning without mishap, splashed down in the Pacific, east of Pago Pago on the 24 November, 1969, to be picked up by the recovery ship USS Hornet. After the previous successful Apollo flights and the two landings, space flight was beginning to seem routine to the general public. Having achieved the Kennedy challenge, the costs of the manned space flight program and further space exploration, were coming under close scrutiny and of the original 20 planned missions one, Apollo 15, had already been cancelled to save a Saturn V launch vehicle for the Skylab space station missions. A further two lunar missions were also postponed until Skylab's completion. After cancellation of the Apollo 15 mission, its target, the Fra Mauro formation, was re-assigned to Apollo 13 as it was considered to be potentially one of the most important geological areas on the lunar surface, and liable to yield samples from the earliest formation of the moon. What was to happen on the next lunar mission provided six days of high drama and almost the loss of a crew. Apollo 13 launched on 11 April, 1970, for a landing attempt in the Fra Mauro region of the Oceanus Procellarum (Ocean of Storms), with the crew of Commander James A Lovell, Lunar Module Pilot Fred W Haise and Command Module Pilot John L Swigert. The original CM Pilot Ken Mattingly had inadvertently been exposed to German measles a few days before lift off and had been replaced at short notice by Swigert from the back-up crew. During the launch, the only malfunction occurred when the second stage S-II centre engine shut down two minutes prematurely, which caused the outboard engines to have to run 34 seconds longer. The third stage S-IVB engine also had to burn for an additional nine seconds to make up the shortfall. The crew in the CSM Odyssey then carried out the TLI burn and completed the docking and withdrawal of the LM Aquarius. Nearly 31 hours into the flight, the crew carried out a mid-course correction, which took them out of a 'free return' course and into a hybrid trajectory, which would put them into an orbit around the moon suitable for a landing in the Fra Mauro area. They then settled down to the usual trans lunar journey routine 'Houston, We Have a Problem' On the 13 April, 1970, 55 hours into the flight and just after closing down a television broadcast, Swigert undertook a routine task to stir up the No 2 fuel cell's oxygen tank in the service module. Seconds later, an explosion blew out a side panel and damaged the lines to the No 1 oxygen tank, which resulted in the complete loss of oxygen for the two fuel cells. 
Consequently, water and electrical power supply to the command module was cut off; and, without electrical power, all control of the service module's main Service Propulsion System's (SPS) engine was also lost. The seeds of the accident had been sown some five years earlier when design engineers decided to modify the internal 28 volt power supplies of the spacecraft to allow it to accept a higher 65 volt supply from the ground services, during its time on the launch pad. Two thermostatic switches, which safeguarded the heating circuits inside the fuel cell's oxygen tank, were overlooked in the modification program. In itself this may not have caused a problem, as the switches would not operate unless overheating occurred in the tank; and, as the heater was not normally functioning for more than a few seconds at a time, this was not likely in normal use. Indeed, all flights from Apollo 7 onwards had used tanks with unmodified switches without a problem arising. The faulty tank in Apollo 13 had originally been due for use in Apollo 10 but had been dropped about two inches during installation, which had caused some external denting of its thin walled skin and possibly further damage to an internal filling line. Another tank was fitted to Apollo 10, and the damaged one sent for repair. Tests after the repair to the dented skin had showed no further faults; and it was not realised that the fill line inside the tank was still faulty. The tank was later installed in Apollo 13. During the countdown to Apollo 13's launch, difficulty was experienced when draining the tank, probably due to the faulty fill line. The problem was not thought to be serious; and, rather than delay the launch for up to a month while the tank was replaced, it was thought acceptable to boil off the residual oxygen. This involved keeping the tank's internal heater running for extended periods, causing the unmodified switches to operate when the internal temperature increased beyond the safe limit. As the switches tripped, they arced and welded themselves shut, under the influence of the full 65 volts across their unmodified terminals. Now, without the protection afforded by the switches the internal temperature of the tank continued to rise and, at some point, built up to over 1,000 degrees Fahrenheit, damaging the insulation to the internal fan motor, so that it became a potential hazard when the tank was later refilled with oxygen. On Apollo 13, operation of the fan switch caused a short circuit in the damaged fan wiring and the insulation to ignite. The fire spread along the electrical conduit in the tanks side wall, weakening it so that it ruptured under the internal pressure of 1,000 psi. The resulting explosion damaged the adjacent No 1 tank's feed lines, while blowing off the SM's bay cover panels. In normal operation the fuel cells combined oxygen and hydrogen to produce water and electricity supplies for the command module; but with both tanks out of action and the remaining oxygen bleeding away through the damaged plumbing, the fuel cells were unable to operate, starving the command module of power and water. The service module on which the crew depended, was for all practical purposes dead; and, as the flight was now committed to the outward journey to the moon, there was no doubt that the crew of Apollo 13 were in serious trouble. Swigert: 'Okay Houston, we've had a problem here...' Mission Control (MCC): 'This is Houston, say again please...' Lovell: 'Houston, we've had a problem.' 
Abort and Return Had the explosion occurred after the moon landing, the crew would have had no possibility of returning home. At GET 30:40, Apollo 13 had completed a mid-course correction, which had taken them out of the original safe 'free return' trajectory and into one which would put them into an orbit necessary for a landing in the western Fra Mauro area of the moon. They now needed a burn of the SM's engine to bring them back to the free return trajectory; but, without electrical power, they had no control of any of the service module's functions. They did, however, still have the fully functioning lunar module Aquarius attached to the CSM, which could be used as a lifeboat, and its descent engine, which could provide propulsion. The lunar module was only designed to function for two days with two crew members on board, whereas the return journey was going to take four days; and the trans earth injection burn would take a significant amount of the LM's power. It was essential to conserve as much power as possible, so all non-essential circuits, heaters, and electronics were powered down. Oxygen was not a significant problem, as sufficient reserves in the LM, the CM, and the lunar suits were available; but without the CM's environmental system working, a build up of carbon dioxide would occur inside the craft as the LM's system became overloaded. The LM's system would have to be used to scrub carbon dioxide from the internal atmosphere; but its lithium hydroxide filters had been designed for about 45 hours use by two men rather than 4 days with three. Although the CM carried its own filters, they were not interchangeable with those in the LM; and a way had to be found to bring them into use. Back at mission control, the back up crew had been called in to fly simulated missions to recalculate the flight parameters to work out the best options for a return, and to solve the various problems arising from the loss of the SM's power source. A team of engineers were given the task of solving the carbon dioxide problem with a brief to use only materials that could be found aboard Apollo. They succeeded using a combination of suit hoses, plastic bags, and duct tape to convert the CM's filters to the LM's environmental system; and the crew were able to cobble this fix together under instruction from mission control. Another major problem was to keep the CM's internal batteries charged with enough power to control the capsule during re-entry, after the LM was jettisoned, when its batteries would be its only power source. Working in the simulators and using wiring diagrams, the ground engineers and back up crew found a convoluted way to pass a trickle of current from the LM to the CM through a sensor circuit normally used to monitor power usage in the LM. By configuring switches and power breakers they were able to convey enough power from the LM to the CM's batteries to keep them charged. Also without a water supply from the SM the crew were limited to about 200 ml of water a day as the majority of the LM's supply was required to keep the essential cooling systems of the electronic equipment going. Two burns of Aquarius' descent engine were found to be necessary for the return journey. The first, carried out five hours after the explosion, was to put Apollo 13 back into a 'free return' trajectory, so that the craft would loop around the back of the moon and slingshot out on a heading towards earth. But this also produced some navigational problems. 
The LM was not equipped with the same navigational information as the CM, so the computer's data platform had to be transferred from the CM to the LM. They also had to confirm that the direction of the burn would be correct; but it was proving impossible to get a navigational star sighting to fix their position, due to debris from the explosion that was surrounding them twinkling in the sunlight and blanking out the stars on which they needed to take sightings. The debris field could be observed from earth extending more than twenty miles out from the spacecraft. Transfer of the data between the two computers was accomplished by Haise punching the corrected figures into the LM's computer manually after Lovell recalculated them from the CM's data and verified by mission control. Lovell also had to stabilize the craft using the LM's thrusters, as the craft was tumbling slowly from the initial explosion and venting oxygen from the ruptured plumbing, inducing an uneven roll. He finally brought the craft under control and aligned in the correct direction after two hours. Eventually, with the craft stabilized and aligned, they were ready to fire the descent engine; and, after a thirty one second burn, they were back on the free return course. The new free return course would put the craft into a re-entry corridor that would bring them to a splashdown in the Indian Ocean instead of the Pacific, where the recovery fleet was stationed. Haise had also calculated that they would barely have sufficient cooling water for the LM's electronic circuits; and the water supply would be exhausted about five hours before re-entry. He was aware, however, that when Apollo 11's LM, Eagle had been abandoned in lunar orbit its electronic systems had continued to function some eight hours after the water supply had been turned off. If Aquarius performed in the same way, they would have about a three hour margin. Calculations at mission control showed that a second burn two hours after rounding the moon would speed up the return by about nine hours and place the re-entry over the Pacific with its recovery ships. But to make the second burn they still had to establish the crafts position and register it in Aquarius' computer. Mission control had devised a way to do this without taking a star sighting; and Capcom Charlie Duke passed up the procedure. Lovell manoeuvred the craft into an attitude dictated by a set of co-ordinates relayed up by Duke; which, if the craft was in the correct position, would show the sun's disc in the LM's navigation telescope. Taking sightings, Haise found it to be correct to within one degree. Two hours after rounding the moon, at an altitude of 137 miles, the second burn trimmed nine hours off the return journey and placed the re-entry corridor over the Pacific Ocean. Again the burn went according to plan and inserted the spacecraft into its new trajectory Conserving power meant powering down Aquarius to only the basic functions, which included turning off the guidance computer and heating. With little else to do except wait it out, the crew remained in Aquarius for the majority of the return journey, each one taking turns to sleep in the cold of the unheated command module. The temperature inside the craft dropped to 38 degrees Fahrenheit and condensation formed on the cold inner surfaces of the walls, instruments, and controls, creating a potential hazard from circuits shorting out when power would be re-instated just prior to re-entry. 
The crew were unable to sleep adequately and a leaking water spigot had soaked Swigert's boots and feet, which took two days to dry out, making the return passage tiring and unpleasant. A course correction burn of Aquarius' engine was found to be necessary just after passing the midway point; and it would have to be done without the computers help. This time Mission control instructed Lovell to align the craft so that the sun could be seen through the LM's overhead docking window and rotate the craft to get the horns of the earth's crescent shape on the cross hairs of the LM's navigation sextant, which would put the craft in the correct position for the burn. To make the burn Haise kept the pitch aligned, Swigert timed the burn, and Lovell fired the engine while controlling the crafts roll. After 14 seconds, they were dead on course. Nine hours out from re-entry, the crew began to return power to the command module, Odyssey and make ready for a last course correction, which was made necessary by occasional venting of gasses from the damaged service module pushing the craft off course. This was a minor correction using the LM's thrusters for a short 20-second burn. With four hours to go, the crew jettisoned Odyssey's service module; and, as it drifted away from them, they could see and photograph the extent of the damage for the first time. A complete panel covering one of the six equipment bays was missing, exposing considerable internal damage to the oxygen tanks, plumbing, and surrounding equipment. Finally, with Odyssey powered up on its internal batteries, it was time to cast off Aquarius, which had served them so well, and as the pyrotechnics were fired to separate the two craft, Capcom Joe Kerwin said for all of them: 'Farewell Aquarius, and we thank you.' One last unknown factor remained, about which nothing could be done. The question of whether or not Odyssey's heat shield had been damaged by the explosion, and if it would fail during re-entry. They turned the craft's heat shield towards the atmosphere and prepared for re-entry. On the 17 April, 1970, half the world waited with bated breath to hear the re-acquisition of signal as Apollo 13 came safely through re-entry and splashed down near Samoa in the Pacific Ocean, with its crew intact four miles from the recovery ship, USS Iwo Jima. With the failure of Apollo 13 to complete its mission, Apollo 14, whose designated landing site had been the Littrow Crater region, a site on the mountain range bounding the Mare Serenitatis, was now rescheduled to take over the mission to Fra Mauro. The crew and engineers had time to reconfigure the flight, while the investigation into the accident and modifications were carried out to the craft. Modifications included re-siting a third oxygen tank to isolate it from the other two, in order to ensure a supply in the event of another failure of the primary tanks. Another innovation with Apollo 14 was the partial isolation for two weeks before the mission of the primary and back up crews from all but the most necessary human contact. This precaution was to prevent the contraction of communicable diseases, which had hampered the crews of Apollo's 7, 8, 9, and 13. The Apollo crews were limited to contact with family and necessary ground personnel. Even then, these primary contacts were monitored by the medical teams; and the crews were restricted to certain areas within the Cape. 
The Fra Mauro Formation, a highland area in the Oceanus Procellarum (Ocean of Storms) south of the Imbrium Basin, was considered to be of the highest scientific interest and a prime geological site. The highland formation was thought to have been formed from ejecta material displaced in the cataclysmic event that had formed the Imbrium Basin in the moon's early crust, and it was expected to supply samples and data from depths of up to 60 miles within the original crust of the moon, from a period just after it had begun to solidify from a molten state. Dating the material would establish the date of the Imbrium event and, from that, the dates of everything that the Imbrium ejecta blanket touched could be established as either pre or post Imbrium event. The landing target area was to be in a relatively flat spot near a later 350 yard diameter impact crater named Cone Crater, which had drilled its way through the Fra Mauro regolith and into the ejecta blanket, where, it was hoped, the displaced Imbrium material, and perhaps even the underlying early crustal material, may have been exposed. The key to Fra Mauro lay in obtaining samples from the rim of Cone crater, where rock displaced from its deepest levels would have been deposited. Landing at Fra Mauro On 31 January, 1971, at T minus eight minutes, the mission launch director halted the countdown of Apollo 14 due to bad weather crossing the Cape. After a 40 minute delay, the countdown resumed and Apollo 14 lifted off from pad 39a into an evening sky, carrying its crew, Commander Alan B Shepard Jr, Command Module Pilot Stuart A Roosa, and Lunar Module Pilot Edgar D Mitchell. After a successful launch and TLI burn, Roosa separated the command module Kitty Hawk from the S-IVB and prepared to dock with the lunar module Antares, to withdraw it from the top of the third stage. After five attempts Roosa had not been able to get the docking probe to latch into the LM's drogue. On the sixth attempt he latched on successfully, after firing the forward thrusters as the two craft came together to ram the docking probe home. After extraction of the LM from the S-IVB, examination of the docking probe and ring showed considerable scoring on the mating faces; but no other fault was apparent; and the probe was deemed serviceable to continue the mission. Apollo 14 braked into lunar orbit; and, after one orbit, Kitty Hawk's engine was fired again to place it into an elliptical orbit with a low point of 50,000 feet, from where Antares could begin its powered descent. This modification of the flight plan to use the CM's engine instead of that of the LM to achieve the lower descent orbit meant a saving of the LM's fuel, which would provide a longer hover time over the landing site. Shepard and Mitchell separated Antares from Kitty Hawk; and, while Roosa boosted the CSM back up to a higher circular orbit, they checked out the lander for the descent. Almost immediately a problem with the LM's guidance computer was recognised by mission control at Houston. The computer's display was showing the abort switch in the closed position when it was correctly selected open. Mission control requested Shepard to use an old fashioned remedy and give it a tap, which immediately cleared the fault, only for it to return a few minutes later. Normally the switch would be in the open position during the descent, which was controlled by the Primary Guidance and Navigation System's (pings) computer.
If the secondary Abort Guidance System (aggs) recognised the switch as closed at any time during the descent, it would accept this as an abort signal, initiate separation of the descent stage, and fire the ascent stage engine to return the craft to orbit. Later, post flight analysis established that the fault was probably due to a loose piece of solder floating under the weightless conditions inside the switch, making and breaking contact intermittently. It would be impossible to continue with the landing if any possibility of an inadvertent self-induced abort existed. Since it was only the switch that was suspect, the remedy was to modify the landing procedure programs in the guidance computer to by-pass the recognition of the abort switch procedure. This meant that Shepard and Mitchell had to manually feed in revised programs, devised by mission control immediately prior to the descent. The reprogramming was not completed as they disappeared behind the moon on the final orbit before initiating the LM's descent burn. Re-appearing on the other side, Mitchell only had minutes to copy the final programs and enter them in the correct order into the computer as the engine was started up for the powered descent by Shepard. Mitchell succeeded in installing the revised programs as they continued the powered descent. Another problem cropped up as the descent proceeded through 32,000 feet. The ground ranging radar should have begun supplying height information at about 35,000 feet. At 30,000 feet, the radar had still not come on line; and the crew became anxious, as mission rules called for a mandatory abort of the landing at 10,000 feet if they were still without landing radar. The radar operated in two modes, short, and long range. The long-range mode used at high altitude is switched to short range at 3500 feet. Resulting from the modifications to the landing program procedures for the abort switch problem, the logic circuit for the radar had been left in short range mode, unable to lock on above 3500 feet. At 20,000 feet, mission control instructed Mitchell to cycle the radar circuit breaker switch. He flipped the switch open and closed, which allowed the logic circuit to reset itself to long range, where it immediately began to supply the altitude and rate of descent information needed. Antares descended towards undulating terrain with heavy cratering. On pitchover Shepard was easily able to identify Cone Crater and his prime landing area adjacent to it. He put Antares down within 175 feet of its target point on a sloping stretch of moonscape that gave the craft an eight degree list on touchdown. Their position was 3.65 degrees south, 17.48 degree west, approximately 350 miles west-southwest of the moons visible centre. The First EVA, ALSEP Deployment As Shepard descended the LM's ladder and stood on its footpad, Capcom Bruce McCandless remarked, 'Not bad for an old man.' Stepping onto the surface Shepard replied, 'Okay.. you're right. Alan's on the surface and it's been a long way, but we're here.' He was referring to the almost ten years since he had been the first American in space in the Mercury-Redstone capsule Freedom 7. His step onto the lunar surface also made him the only one of the original batch of seven 'Right Stuff' astronauts to walk on the moon. Mitchell joined him on the surface and they took the contingency sample and set up a high gain, S-band antenna to improve communication links with earth. They also unloaded a new innovation, a Modular Equipment Transporter (MET). 
This was a two-wheeled, handcart-like transporter, dubbed the 'rickshaw', to be used to carry tools and samples during the EVAs. Shepard also set up a colour television camera, this time equipped with a lens cap, pointing to a site some 500 feet northwest of the LM where they intended to set up the ALSEP package experiments. Apollo 14's ALSEP contained a greater array of scientific experiments than on any previous mission. The experiments included: - Active Seismometer - Atmospheric Detector - Charged Particle Detector - Ionosphere Detector - Laser Reflector (similar to that left by Apollo 11) The active seismometer used a pair of geophones (laid out on the surface) and calibrated explosive charges (set off by the astronaut using a hand-held 'thumper'), enabling the instrument to pick up the generated shock waves and measure the depth of the surface material. Several of the charges failed to go off, but a sufficient number worked to get the required data. Sampling and photography of the site took up the remainder of the four and a half hour excursion. After close down of the first EVA, Shepard and Mitchell spent an uncomfortable night trying to sleep, which was made more difficult by the sloping angle of the LM's floor. Both astronauts slept fitfully and the rest period was terminated an hour earlier than planned. The Second EVA, Cone Crater The second EVA included most of the mission's geological sampling and the traverse to Cone Crater, which was nearly a mile away from the LM, to sample the material at its rim. They pulled the MET over the first half of the walk across relatively flat ground, stopping at intervals to take samples and measurements of the moon's magnetic field with a portable magnetometer. The inclination of the crater's outer rim began to increase, until they were climbing a 10 per cent slope studded with boulders, making it necessary to take more frequent stops as their heart and breathing rates increased. The lighting of the surface undulations, and the lack of recognisable objects to give perspective, was also making navigation difficult; and several times they had to stop to re-evaluate their position. The EVA's duration was extended by a half hour; and eventually it became obvious that they were not sure exactly how far they were from the crater's rim. The decision was taken by Shepard, who was concerned that they were not leaving themselves enough time to complete the last sampling stop, to take their samples and return. In fact, they had reached within thirty feet of the rim of Cone; but, due to the far side crater wall being lower than the one they were climbing, they had been unable to recognise just how close they were to the rim. From the high ground with the sun behind them, they could now see Antares almost a mile away and make out the features that had eluded them on the uphill trek. Their return downhill was easier, during which the remaining samples were taken. Towards the end of the return trek tiredness was beginning to show; and it was becoming obvious that astronauts on future missions would require assistance to get about the moonscape, if greater areas were to be explored. Golf on the Moon Before closing out the EVA, Shepard had one last, unofficial task. From his suit pocket he produced a couple of golf balls. To the camera he said: Houston... you might recognise what I have in my hand as the handle for the contingency sampler return; it just so happens to have a genuine six iron on the bottom of it.
In my left hand I have a little white pellet that's familiar to millions of Americans. I'll drop it down. Unfortunately the suit is so stiff , I can't do this with two hands, but I'm going to try a little sand trap shot here. With a one handed swing he moved the ball on the second try a couple of feet towards the camera. On his third swing he connected and the ball sailed out 'Miles and miles and miles.' Actually it landed in a nearby crater and the second ball joined it, followed by the golf club handle launched javelin style after the six iron head had been removed. The head returned to earth with Shepard and ended up on display in the US Golf Association, Hall of Fame in New Jersey. Returning to the LM, and closing up the EVA, Shepard called back to Capcom Fred Haise, 'Okay Houston, the crew of Antares is leaving Fra Mauro Base.' Haise replied, perhaps a little wistfully, 'Roger Al.. You and Ed did a great job... I don't think I could have done better myself.' After almost 36 hours on the lunar surface, Antares lifted off to rendezvous with Roosa in Kitty Hawk, using a new direct ascent trajectory to meet up at the highest point of its orbit, without having to make the major changes of orbit, as had previous missions. Transferring to Kitty Hawk, the crew sent Antares to crash back on the moon; and three hours later the service module's engine was fired up to send them back along the return path to earth. During the return flight, the crew demonstrated a number of experiments, including casting metals in zero gravity conditions. Some of these experiments were successful enough to warrant further investigation on later Skylab flights. Splashdown in the early morning light of the 9 February, 1971, was just a half mile from its target area; and the crew were picked out of the Pacific by Sea King helicopters from the USS New Orleans and returned to quarantine for the following 15 days. They would be the last crew to be quarantined. Apollo 14 concluded the intermediate 'H' type missions. The total sample collection of rocks weighed in at almost 95 pounds but the disadvantages of astronauts walking and navigating between sites was highlighted by the time expended in Shepard and Mitchell's climb to Cone's rim, which had curtailed the amount of useful work they were able to perform. The samples taken near the rim amounted to less than two pounds of individual rocks and one single rock of 20 pounds. Although they had reached their goal, they had not recognised it; and to the geology team it was a disappointing return from what was considered an important site. Nevertheless, the overall return of samples provided an insight into Fra Mauro's origins and indicated its age at 3.2 to 3.85 billion years. The portable magnetometer also found a surprisingly strong residual magnetic field in some of the surface rocks.
Today, we will learn how to write your own custom functions in Python. Once you master writing functions, you will be ready to build full applications using Python. So far, we have used built-in functions, such as max(). Now, we will learn how to write a custom function on our own. A function allows us to perform a specific task without worrying about the implementation details. Let's start with an illustrative example. Under the hood, the max function iterates over all the elements using a for loop and compares each element with the largest value found so far to find the maximum value. Thanks to the max function, we can do this without worrying so much about the implementation detail. This is the idea of encapsulation. A few things to take note of: - the rules for function names are the same as for variable names (e.g. you can't have spaces in the name but rather separate two words with an underscore). - the arguments and variables defined in the function can only be used within the function (i.e. local variables). - a function can return no value (i.e. None), for example a function that only prints text to the console. Which are valid ways to begin a function definition? Choose two correct answers. def my_function(arg1, arg2): def my function(arg1): def my_function(arg1 arg2): def my_function(arg1, arg2, arg3): More on Return Values - you can return multiple values from a function - instead of returning a value, some functions may just print text to the console You can specify default values for arguments. The default value is used for an argument if a user-defined value is not specified upon the function call. Exercises (one possible sketch of both is given after this section): - Calculate population density - Convert an integer number of days to weeks Given the population of a city (e.g. Medan) and its area in $km^2$, write a function to compute the corresponding population density. Hint: population density is given by density = population/area - name the function - the function should take two arguments Write a function that takes an integer number of days and returns a string with the number of weeks and days that is. For example, 8 days are equal to 1 week and 1 day. Hint: use integer division and the modulo operator - name the function - the function should take exactly one argument (number of days) A variable that is defined inside a function can only be used within that function. The variable cannot be accessed from outside the function. We say that some_variable is a local variable, i.e. the scope of some_variable is local to my_function. On the other hand, a variable that is defined outside of a function is said to have a global scope. When you program, you'll often find that similar ideas come up again and again. You'll use variables for things like counting, iterating and accumulating values to return. In order to write readable code, you'll find yourself wanting to use similar names for similar ideas. As soon as you put multiple pieces of code together (for instance, multiple functions or function calls in a single script) you might find that you want to use the same name for two separate concepts. Fortunately, you don't need to come up with new names endlessly. Reusing names for objects is OK as long as you keep them in separate scopes. Good practice: It is best to define variables in the smallest scope they will be needed in. While functions can refer to variables defined in a larger scope, this is very rarely a good idea since you may not know what variables you have defined if your program has a lot of variables.
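Here is one possible sketch of the two exercises; the function names population_density and days_to_weeks are my own choices, since the intended names were not given above, and the example population and area figures are made up:

```python
def population_density(population, area):
    """Return the population density (people per square km)."""
    return population / area

def days_to_weeks(days):
    """Convert an integer number of days to a string of weeks and days."""
    weeks = days // 7      # integer division gives the whole weeks
    remainder = days % 7   # the modulo operator gives the days left over
    return f"{weeks} week(s) and {remainder} day(s)"

# Example calls (the population and area figures are invented for illustration):
print(population_density(2_435_000, 265.1))
print(days_to_weeks(8))   # prints "1 week(s) and 1 day(s)"
```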
- The call to this function causes an UnboundLocalError because the variable balance has a global scope - we can access the value of a global variable inside a function, but we cannot assign to (modify) it unless we explicitly declare it global. If you want to change the global variable balance in a function, pass it as an argument instead.
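A minimal sketch of what this looks like in practice; the function names and the balance value are my own, made up to mirror the description above:

```python
balance = 100  # global variable

def withdraw_broken(amount):
    # Reading balance alone would be fine, but the assignment below makes Python
    # treat 'balance' as a local variable, so this line raises UnboundLocalError.
    balance = balance - amount
    return balance

def withdraw(balance, amount):
    # Passing balance in as an argument avoids the problem entirely.
    return balance - amount

print(withdraw(balance, 30))   # prints 70
# withdraw_broken(30)          # uncommenting this call raises UnboundLocalError
```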
INTRODUCTION TO VECTORS - PART 1 Copyright by Ingrid Stewart, Ph.D. Please Send Questions and Comments to email@example.com. Learning Objectives - This is what you must know after studying the lecture and doing the practice problems! 1. Use Vector Notation. 2. Write a vector in component form. 3. Find the magnitude of a vector. 4. Find the direction of a vector. 5. Add vectors. 6. Multiply a vector with a scalar. 7. Find the negative of a vector. 8. Subtract vectors. Many quantities in geometry and physics, such as area, time, and temperature, can be represented by a single real number. Other quantities, such as force and velocity, involve both magnitude and direction and cannot be completely characterized by a single real number. To represent such a quantity, we use a directed line segment (arrow) called a VECTOR. Below is the picture of a vector whose direction is indicated by the positive angle θ, and whose magnitude is the distance from point P to point Q. Point P is called the initial point and point Q is called the terminal point. There is always an arrow pointing to the terminal point to indicate that we are looking at a vector and not a line segment. Vectors are denoted by lowercase letters or by their initial and terminal points. In handwritten documents, a half arrow is placed over either notation. For example, in the picture above we could call the vector PQ. In printed documents, vector names are shown in bold print without the half arrow. Standard Position of a Vector A vector with its initial point at the origin of a Rectangular Coordinate System is said to be in standard position. Definition of the Component Form of a Vector Given a vector in standard position, the coordinates of its terminal point, say (q1, q2), determine its component form, written as v = < q1, q2 >, where q1 and q2 are called components. The component form of a vector requires angle brackets < >. Finding the Component Form of a Vector - see #1 and 2 in the "Examples" document If a vector is not in standard position, we can still find its component form. This also moves it to standard position. Given a vector with initial point P at (p1, p2) and terminal point Q at (q1, q2), its component form is found by v = < q1 - p1, q2 - p2 >. As a memory aid remember "terminal minus initial" for the x- and y-coordinate! Magnitude of a Vector - see #3 through 6 in the "Examples" document This is the "length" of the vector. It can also represent speed, weight, etc. The magnitude of a vector v = < v1, v2 > is denoted by ||v||. It is found by using the Pythagorean Theorem on a vector in standard position and is ||v|| = sqrt(v1^2 + v2^2). NOTE: YOU MUST MEMORIZE THIS FORMULA! Direction of a Vector - see #7 and 8 in the "Examples" document Place the initial point of a vector at the origin in a coordinate system. Then the direction of the vector is given by the positive angle θ between the positive x-axis and the vector. We use the fact that tan θ = v2/v1, which we derived in the lesson on polar coordinates using the following picture: Zero Vector A zero vector can be denoted with a boldfaced 0 or with the half-arrow notation. In component form it can be written as v = < 0, 0 >. Vector Addition - see #9 in the "Examples" document Let u = < u1, u2 > and v = < v1, v2 >; then u + v = < u1 + v1, u2 + v2 >. The vector u + v is called the resultant vector. Scalar Multiplication of Vectors - see #10 in the "Examples" document In vector algebra, any real number is called a scalar. Let u = < u1, u2 > and let c be a scalar (a real number); then cu = < cu1, cu2 >. The Negative of a Vector - see #10 in the "Examples" document Given vector v, its negative is -v. It has the same magnitude as vector v, but points exactly in the opposite direction.
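A small sketch tying the component form, magnitude and direction formulas together; the sample points P and Q below are invented values used only for illustration:

```python
import math

# Invented example: a vector with initial point P and terminal point Q.
P = (1.0, 2.0)
Q = (4.0, 6.0)

# Component form: "terminal minus initial".
v = (Q[0] - P[0], Q[1] - P[1])                  # < 3, 4 >

# Magnitude via the Pythagorean Theorem.
magnitude = math.sqrt(v[0] ** 2 + v[1] ** 2)    # 5.0

# Direction: positive angle from the positive x-axis (atan2 handles all quadrants).
direction = math.degrees(math.atan2(v[1], v[0]))

print("component form:", v)
print("magnitude:", magnitude)
print("direction (degrees):", round(direction, 2))
```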
Vector Subtraction - see #10 in the "Examples" document Vector subtraction is viewed as the addition of the negative of a vector: u - v = u + (-v) = u + (-1)v.
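Continuing the sketch above, addition, scalar multiplication, negation and subtraction can all be written componentwise; again the vectors and scalar below are invented values:

```python
# Invented component-form vectors for illustration.
u = (2.0, -1.0)
v = (3.0, 4.0)
c = 2.5  # a scalar

u_plus_v = (u[0] + v[0], u[1] + v[1])            # vector addition (resultant vector)
c_times_u = (c * u[0], c * u[1])                 # scalar multiplication
neg_v = (-v[0], -v[1])                           # negative of v
u_minus_v = (u[0] + neg_v[0], u[1] + neg_v[1])   # u - v = u + (-v)

print("u + v =", u_plus_v)
print("c * u =", c_times_u)
print("-v    =", neg_v)
print("u - v =", u_minus_v)
```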
The Nullification Crisis was a United States sectional political crisis in 1832–1837, during the presidency of Andrew Jackson, which involved a confrontation between South Carolina and the federal government. It ensued after South Carolina declared that the federal Tariffs of 1828 and 1832 were unconstitutional and therefore null and void within the sovereign boundaries of the state. The US suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. Many South Carolina politicians blamed the change in fortunes on the national tariff policy that developed after the War of 1812 to promote American manufacturing over its European competition. The controversial and highly protective Tariff of 1828 (known to its detractors as the "Tariff of Abominations") was enacted into law during the presidency of John Quincy Adams. The tariff was opposed in the South and parts of New England. By 1828, South Carolina state politics increasingly organized around the tariff issue. Its opponents expected that the election of Jackson as President would result in the tariff being significantly reduced. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state itself declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and Vice President John C. Calhoun, a native South Carolinian and the most effective proponent of the constitutional theory of state nullification. On July 14, 1832, before Calhoun had resigned the Vice Presidency in order to run for the Senate where he could more effectively defend nullification, Jackson signed into law the Tariff of 1832. This compromise tariff received the support of most northerners and half of the southerners in Congress. The reductions were too little for South Carolina, and on November 24, 1832, a state convention adopted the Ordinance of Nullification, which declared that the Tariffs of 1828 and 1832 were unconstitutional and unenforceable in South Carolina after February 1, 1833. Military preparations to resist anticipated federal enforcement were initiated by the state. On March 1, 1833, Congress passed both the Force Bill—authorizing the President to use military forces against South Carolina—and a new negotiated tariff, the Compromise Tariff of 1833, which was satisfactory to South Carolina. The South Carolina convention reconvened and repealed its Nullification Ordinance on March 15, 1833, but three days later nullified the Force Bill as a symbolic gesture to maintain its principles. The crisis was over, and both sides could find reasons to claim victory. The tariff rates were reduced and stayed low to the satisfaction of the South, but the states’ rights doctrine of nullification remained controversial. By the 1850s the issues of the expansion of slavery into the western territories and the threat of the Slave Power became the central issues in the nation. Since the Nullification Crisis, the doctrine of states' rights has been asserted again by opponents of the Fugitive Slave Act of 1850, proponents of California's Specific Contract Act of 1863 (which nullified the Legal Tender Act of 1862), opponents of Federal acts prohibiting the sale and possession of marijuana in the first decade of the 21st century, and opponents of implementation of laws and regulations pertaining to firearms from the late 1900s up to 2013. The historian Richard E. 
Ellis wrote: "By creating a national government with the authority to act directly upon individuals, by denying to the state many of the prerogatives that they formerly had, and by leaving open to the central government the possibility of claiming for itself many powers not explicitly assigned to it, the Constitution and Bill of Rights as finally ratified substantially increased the strength of the central government at the expense of the states." The extent of this change and the problem of the actual distribution of powers between the state and federal governments would be a matter of political and ideological discussion up to the Civil War and beyond. In the early 1790s the debate centered on Alexander Hamilton's nationalistic financial program versus Jefferson's democratic and agrarian program, a conflict that led to the formation of two opposing national political parties. Later in the decade the Alien and Sedition Acts led to the states' rights position being articulated in the Kentucky and Virginia Resolutions. The Kentucky Resolutions, written by Thomas Jefferson, contained the following passage, which has often been cited as a justification for both nullification and secession: "… that in cases of an abuse of the delegated powers, the members of the general government, being chosen by the people, a change by the people would be the constitutional remedy; but, where powers are assumed which have not been delegated, a nullification of the act is the rightful remedy: that every State has a natural right in cases not within the compact, (casus non fœderis) to nullify of their own authority all assumptions of power by others within their limits: that without this right, they would be under the dominion, absolute and unlimited, of whosoever might exercise this right of judgment for them: that nevertheless, this commonwealth, from motives of regard and respect for its co-States, has wished to communicate with them on the subject: that with them alone it is proper to communicate, they alone being parties to the compact, and solely authorized to judge in the last resort of the powers exercised under it…." The Virginia Resolutions, written by James Madison, make a similar argument: "The resolutions, having taken this view of the Federal compact, proceed to infer that, in cases of a deliberate, palpable, and dangerous exercise of other powers, not granted by the said compact, the States, who are parties thereto, have the right, and are in duty bound to interpose to arrest the evil, and for maintaining, within their respective limits, the authorities, rights, and liberties appertaining to them. ... The Constitution of the United States was formed by the sanction of the States, given by each in its sovereign capacity. It adds to the stability and dignity, as well as to the authority of the Constitution, that it rests on this solid foundation. The States, then, being parties to the constitutional compact, and in their sovereign capacity, it follows of necessity that there can be no tribunal above their authority to decide, in the last resort, whether the compact made by them be violated; and, consequently, as parties to it, they must themselves decide, in the last resort, such questions as may be of sufficient magnitude to require their interposition." Historians differ over the extent to which either resolution advocated the doctrine of nullification.
Historian Lance Banning wrote, "The legislators of Kentucky (or more likely, John Breckinridge, the Kentucky legislator who sponsored the resolution) deleted Jefferson's suggestion that the rightful remedy for federal usurpations was a 'nullification' of such acts by each state acting on its own to prevent their operation within its respective borders. Rather than suggesting individual, although concerted, measures of this sort, Kentucky was content to ask its sisters to unite in declarations that the acts were 'void and of no force', and in requesting their repeal at the succeeding session of the Congress." The key sentence, and the word "nullification" itself, appeared in supplementary Resolutions passed by Kentucky in 1799. Madison's judgment is clearer. He was chairman of a committee of the Virginia Legislature which issued a book-length Report on the Resolutions of 1798, published in 1800 after they had been decried by several states. This asserted that the state's declarations did not claim legal force. "The declarations in such cases are expressions of opinion, unaccompanied by other effect than what they may produce upon opinion, by exciting reflection. The opinions of the judiciary, on the other hand, are carried into immediate effect by force." If the states collectively agreed in their declarations, there were several methods by which their view might prevail, from persuading Congress to repeal the unconstitutional law, to calling a constitutional convention, as two-thirds of the states may. When, at the time of the Nullification Crisis, he was presented with the Kentucky resolutions of 1799, he argued that the resolutions themselves were not Jefferson's words, and that Jefferson meant this not as a constitutional but as a revolutionary right. Madison biographer Ralph Ketcham wrote: "Though Madison agreed entirely with the specific condemnation of the Alien and Sedition Acts, with the concept of the limited delegated power of the general government, and even with the proposition that laws contrary to the Constitution were illegal, he drew back from the declaration that each state legislature had the power to act within its borders against the authority of the general government to oppose laws the legislature deemed unconstitutional." Historian Sean Wilentz explains the widespread opposition to these resolutions: "Several states followed Maryland's House of Delegates in rejecting the idea that any state could, by legislative action, even claim that a federal law was unconstitutional, and suggested that any effort to do so was treasonous. A few northern states, including Massachusetts, denied the powers claimed by Kentucky and Virginia and insisted that the Sedition law was perfectly constitutional.... Ten state legislatures with heavy Federalist majorities from around the country censured Kentucky and Virginia for usurping powers that supposedly belonged to the federal judiciary. Northern Republicans supported the resolutions' objections to the alien and sedition acts, but opposed the idea of state review of federal laws. Southern Republicans outside Virginia and Kentucky were eloquently silent about the matter, and no southern legislature heeded the call to battle." The election of 1800 was a turning point in national politics as the Federalists were replaced by the Democratic-Republican Party led by Jefferson.
But, the four presidential terms spanning the period from 1800 to 1817 "did little to advance the cause of states’ rights and much to weaken it.” Over Jefferson’s opposition, the power of the federal judiciary, led by Federalist Chief Justice John Marshall, increased. Jefferson expanded federal powers with the acquisition of the Louisiana Territory and his use of a national embargo designed to prevent involvement in a European war. Madison in 1809 used national troops to enforce a Supreme Court decision in Pennsylvania, appointed an "extreme nationalist” in Joseph Story to the Supreme Court, signed the bill creating the Second Bank of the United States, and called for a constitutional amendment to promote internal improvements. Opposition to the War of 1812 was centered in New England. Delegates to a convention in Hartford, Connecticut met in December 1814 to consider a New England response to Madison’s war policy. The debate allowed many radicals to argue the cause of states’ rights and state sovereignty. In the end, moderate voices dominated and the final product was not secession or nullification, but a series of proposed constitutional amendments. Identifying the South’s domination of the government as the cause of much of their problems, the proposed amendments included "the repeal of the three-fifths clause, a requirement that two-thirds of both houses of Congress agree before any new state could be admitted to the Union, limits on the length of embargoes, and the outlawing of the election of a president from the same state to successive terms, clearly aimed at the Virginians.” The war was over before the proposals were submitted to President Madison. After the conclusion of the War of 1812 Sean Wilentz notes: |“||Madison’s speech [his 1815 annual message to Congress] affirmed that the war had reinforced the evolution of mainstream Republicanism, moving it further away from its original and localist assumptions. The war’s immense strain on the treasury led to new calls from nationalist Republicans for a national bank. The difficulties in moving and supplying troops exposed the wretchedness of the country’s transportation links, and the need for extensive new roads and canals. A boom in American manufacturing during the prolonged cessation of trade with Britain created an entirely new class of enterprisers, most of them tied politically to the Republicans, who might not survive without tariff protection. More broadly, the war reinforced feelings of national identity and connection.||”| This spirit of nationalism was linked to the tremendous growth and economic prosperity of this post war era. However in 1819 the nation suffered its first financial panic and the 1820s turned out to be a decade of political turmoil that again led to fierce debates over competing views of the exact nature of American federalism. The "extreme democratic and agrarian rhetoric” that had been so effective in 1798 led to renewed attacks on the "numerous market-oriented enterprises, particularly banks, corporations, creditors, and absentee landholders”. The Tariff of 1816 had some protective features, and it received support throughout the nation, including that of John C. Calhoun and fellow South Carolinian William Lowndes. The first explicitly protective tariff linked to a specific program of internal improvements was the Tariff of 1824. 
Sponsored by Henry Clay, this tariff provided a general level of protection at 35% ad valorem (compared to 25% under the 1816 act) and hiked duties on iron, woolens, cotton, hemp, and wool and cotton bagging. The bill barely passed the federal House of Representatives by a vote of 107 to 102. The Middle states and Northwest supported the bill, the South and Southwest opposed it, and New England split its vote, with a majority opposing it. In the Senate the bill, with the support of Tennessee Senator Andrew Jackson, passed by four votes, and President James Monroe, the Virginia heir to the Jefferson-Madison control of the White House, signed the bill on March 25, 1824. Daniel Webster of Massachusetts led the New England opposition to this tariff. Protest against the prospect and the constitutionality of higher tariffs began in 1826 and 1827 with William Branch Giles, who had the Virginia legislature pass resolutions denying the power of Congress to pass protective tariffs, citing the Virginia Resolutions of 1798 and James Madison's 1800 defense of them. Madison denied both the appeal to nullification and the charge of unconstitutionality; he had always held that the power to regulate commerce included protection. Jefferson had, at the end of his life, written against protective tariffs. The Tariff of 1828 was largely the work of Martin Van Buren (although Silas Wright Jr. of New York prepared the main provisions) and was partly a political ploy to elect Andrew Jackson president. Van Buren calculated that the South would vote for Jackson regardless of the issues, so he ignored their interests in drafting the bill. New England, he thought, was just as likely to support the incumbent John Quincy Adams, so the bill levied heavy taxes on raw materials consumed by New England such as hemp, flax, molasses, iron, and sail duck. With an additional tariff on iron to satisfy Pennsylvania interests, Van Buren expected the tariff to help deliver Pennsylvania, New York, Missouri, Ohio, and Kentucky to Jackson. Over opposition from the South and some from New England, the tariff was passed with the full support of many Jackson supporters in Congress and signed by President Adams in early 1828. As expected, Jackson and his running mate John Calhoun carried the entire South with overwhelming numbers in all the states but Louisiana, where Adams drew 47% of the vote in a losing effort. However, many Southerners became dissatisfied as Jackson, in his first two annual messages to Congress, failed to launch a strong attack on the tariff. Historian William J. Cooper Jr. writes: "The most doctrinaire ideologues of the Old Republican group [supporters of the Jefferson and Madison position in the late 1790s] first found Jackson wanting. These purists identified the tariff of 1828, the hated Tariff of Abominations, as the most heinous manifestation of the nationalist policy they abhorred. That protective tariff violated their constitutional theory, for, as they interpreted the document, it gave no permission for a protective tariff. Moreover, they saw protection as benefiting the North and hurting the South."

South Carolina Background (1819–1828)

South Carolina had been adversely affected by the national economic decline of the 1820s. During this decade, the population decreased by 56,000 whites and 30,000 slaves, out of a total free and slave population of 580,000. The whites left for better places; they took slaves with them or sold them to traders moving slaves to the Deep South for sale. Historian Richard E.
Ellis describes the situation: "Throughout the colonial and early national periods, South Carolina had sustained substantial economic growth and prosperity. This had created an extremely wealthy and extravagant low country aristocracy whose fortunes were based first on the cultivation of rice and indigo, and then on cotton. Then the state was devastated by the Panic of 1819. The depression that followed was more severe than in almost any other state of the Union. Moreover, competition from the newer cotton producing areas along the Gulf Coast, blessed with fertile lands that produced a higher crop-yield per acre, made recovery painfully slow. To make matters worse, in large areas of South Carolina slaves vastly outnumbered whites, and there existed both considerable fear of slave rebellion and a growing sensitivity to even the smallest criticism of 'the peculiar institution.'" State leaders, led by states' rights advocates like William Smith and Thomas Cooper, blamed most of the state's economic problems on the Tariff of 1816 and national internal improvement projects. Soil erosion and competition from the New Southwest were also very significant reasons for the state's declining fortunes. George McDuffie was a particularly effective speaker for the anti-tariff forces, and he popularized the Forty Bale theory. McDuffie argued that the 40% tariff on cotton finished goods meant that "the manufacturer actually invades your barns, and plunders you of 40 out of every 100 bales that you produce." Mathematically incorrect, this argument still struck a nerve with his constituency. Nationalists such as Calhoun were forced by the increasing power of such leaders to retreat from their previous positions and adopt, in the words of Ellis, "an even more extreme version of the states' rights doctrine" in order to maintain political significance within South Carolina. South Carolina's first effort at nullification occurred in 1822. Its planters believed that free black sailors had assisted Denmark Vesey in his planned slave rebellion. South Carolina passed a Negro Seamen Act, which required that all black foreign seamen be imprisoned while their ships were docked in Charleston. Britain strongly objected, especially as it was recruiting more Africans as sailors. What was worse, if the captains did not pay the fees to cover the cost of jailing, South Carolina would sell the sailors into slavery. Other southern states also passed laws against free black sailors. Supreme Court Justice William Johnson, in his capacity as a circuit judge, declared the South Carolina law unconstitutional because it violated United States treaties with Great Britain. The South Carolina Senate announced that the judge's ruling was invalid and that the Act would be enforced. The federal government did not attempt to carry out Johnson's decision.

Route to nullification in South Carolina (1828–1832)

Historian Avery Craven argues that, for the most part, the debate from 1828 to 1832 was a local South Carolina affair. The state's leaders were not united and the sides were roughly equal. The western part of the state and a faction in Charleston, led by Joel Poinsett, would remain loyal to the Union. Only in small part was the conflict between "a National North against a States'-right South". After the final vote on the Tariff of 1828, the South Carolina congressional delegation held two caucuses, the second at the home of Senator Robert Y. Hayne.
They were rebuffed in their efforts to coordinate a united Southern response and focused on how their state representatives would react. While many agreed with George McDuffie that tariff policy could lead to secession at some future date, they all agreed that as much as possible, the issue should be kept out of the upcoming presidential election. Calhoun, while not at this meeting, served as a moderating influence. He felt that the first step in reducing the tariff was to defeat Adams and his supporters in the upcoming election. William C. Preston, on behalf of the South Carolina legislature, asked Calhoun to prepare a report on the tariff situation. Calhoun readily accepted this challenge and in a few weeks time had a 35,000-word draft of what would become his "Exposition and Protest”. Calhoun’s "Exposition” was completed late in 1828. He argued that the tariff of 1828 was unconstitutional because it favored manufacturing over commerce and agriculture. He thought that the tariff power could only be used to generate revenue, not to provide protection from foreign competition for American industries. He believed that the people of a state or several states, acting in a democratically elected convention, had the retained power to veto any act of the federal government which violated the Constitution. This veto, the core of the doctrine of nullification, was explained by Calhoun in the Exposition: |“||If it be conceded, as it must be by every one who is the least conversant with our institutions, that the sovereign powers delegated are divided between the General and State Governments, and that the latter hold their portion by the same tenure as the former, it would seem impossible to deny to the States the right of deciding on the infractions of their powers, and the proper remedy to be applied for their correction. The right of judging, in such cases, is an essential attribute of sovereignty, of which the States cannot be divested without losing their sovereignty itself, and being reduced to a subordinate corporate condition. In fact, to divide power, and to give to one of the parties the exclusive right of judging of the portion allotted to each, is, in reality, not to divide it at all; and to reserve such exclusive right to the General Government (it matters not by what department to be exercised), is to convert it, in fact, into a great consolidated government, with unlimited powers, and to divest the States, in reality, of all their rights, It is impossible to understand the force of terms, and to deny so plain a conclusion.||”| The report also detailed the specific southern grievances over the tariff that led to the current dissatisfaction. Fearful that "hotheads” such as McDuffie might force the legislature into taking some drastic action against the federal government, historian John Niven describes Calhoun’s political purpose in the document: |“||All through that hot and humid summer, emotions among the vociferous planter population had been worked up to a near-frenzy of excitement. The whole tenor of the argument built up in the "Exposition” was aimed to present the case in a cool, considered manner that would dampen any drastic moves yet would set in motion the machinery for repeal of the tariff act. It would also warn other sections of the Union against any future legislation that an increasingly self-conscious South might consider punitive, especially on the subject of slavery.||”| The report was submitted to the state legislature which had 5,000 copies printed and distributed. 
Calhoun, who still had designs on succeeding Jackson as president, was not identified as the author but word on this soon leaked out. The legislature took no action on the report at that time. In the summer of 1828 Robert Barnwell Rhett, soon to be considered the most radical of the South Carolinians, entered the fray over the tariff. As a state representative, Rhett called for the governor to convene a special session of the legislature. An outstanding orator, Rhett appealed to his constituents to resist the majority in Congress. Rhett addressed the danger of doing nothing: |“||But if you are doubtful of yourselves – if you are not prepared to follow up your principles wherever they may lead, to their very last consequence – if you love life better than honor, -- prefer ease to perilous liberty and glory; awake not! Stir not! -- Impotent resistance will add vengeance to your ruin. Live in smiling peace with your insatiable Oppressors, and die with the noble consolation that your submissive patience will survive triumphant your beggary and despair.||”| Rhett’s rhetoric about revolution and war was too radical in the summer of 1828 but, with the election of Jackson assured, James Hamilton Jr. on October 28 in the Colleton County Courthouse in Walterborough "launched the formal nullification campaign.” Renouncing his former nationalism, Hamilton warned the people that, "Your task-master must soon become a tyrant, from the very abuses and corruption of the system, without the bowels of compassion, or a jot of human sympathy.” He called for implementation of Mr. Jefferson’s "rightful remedy” of nullification. Hamilton sent a copy of the speech directly to President-elect Jackson. But, despite a statewide campaign by Hamilton and McDuffie, a proposal to call a nullification convention in 1829 was defeated by the South Carolina legislature meeting at the end of 1828. State leaders such as Calhoun, Hayne, Smith, and William Drayton were all able to remain publicly non-committal or opposed to nullification for the next couple of years. The division in the state between radicals and conservatives continued throughout 1829 and 1830. After the failure of a state project to arrange financing of a railroad within the state to promote internal trade, the state petitioned Congress to invest $250,000 in the company trying to build the railroad. After Congress tabled the measure, the debate in South Carolina resumed between those who wanted state investment and those who wanted to work to get Congress' support. The debate demonstrated that a significant minority of the state did have an interest in Clay’s American System. The effect of the Webster–Hayne debate was to energize the radicals, and some moderates started to move in their direction. The state election campaign of 1830 focused on the tariff issue and the need for a state convention. On the defensive, radicals underplayed the intent of the convention as pro-nullification. When voters were presented with races where an unpledged convention was the issue, the radicals generally won. When conservatives effectively characterized the race as being about nullification, the radicals lost. The October election was narrowly carried by the radicals, although the blurring of the issues left them without any specific mandate. In South Carolina, the governor was selected by the legislature, which selected James Hamilton, the leader of the radical movement, as governor and fellow radical Henry L. Pinckney as speaker of the South Carolina House. 
For the open Senate seat, the legislature chose the more radical Stephen Miller over William Smith. With radicals in leading positions, in 1831, they began to capture momentum. State politics became sharply divided along Nullifier and Unionist lines. Still, the margin in the legislature fell short of the two-thirds majority needed for a convention. Many of the radicals felt that convincing Calhoun of the futility of his plans for the presidency would lead him into their ranks. Calhoun meanwhile had concluded that Martin Van Buren was clearly establishing himself as Jackson’s heir apparent. At Hamilton’s prompting, George McDuffie made a three-hour speech in Charleston demanding nullification of the tariff at any cost. In the state, the success of McDuffie’s speech seemed to open up the possibilities of both military confrontation with the federal government and civil war within the state. With silence no longer an acceptable alternative, Calhoun looked for the opportunity to take control of the anti-tariff faction in the state; by June he was preparing what would be known as his Fort Hill Address. Published on July 26, 1831, the address repeated and expanded the positions Calhoun had made in the "Exposition”. While the logic of much of the speech was consistent with the states’ rights position of most Jacksonians, and even Daniel Webster remarked that it "was the ablest and most plausible, and therefore the most dangerous vindication of that particular form of Revolution”, the speech still placed Calhoun clearly in the nullifier camp. Within South Carolina, his gestures at moderation in the speech were drowned out as planters received word of the Nat Turner insurrection in Virginia. Calhoun was not alone in finding a connection between the abolition movement and the sectional aspects of the tariff issue. It confirmed for Calhoun what he had written in a September 11, 1830 letter: I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness. From this point, the nullifiers accelerated their organization and rhetoric. In July 1831 the States Rights and Free Trade Association was formed in Charleston and expanded throughout the state. Unlike state political organizations in the past, which were led by the South Carolina planter aristocracy, this group appealed to all segments of the population, including non-slaveholder farmers, small slaveholders, and the Charleston non-agricultural class. Governor Hamilton was instrumental in seeing that the association, which was both a political and a social organization, expanded throughout the state. In the winter of 1831 and spring of 1832, the governor held conventions and rallies throughout the state to mobilize the nullification movement. The conservatives were unable to match the radicals in either organization or leadership. 
The state elections of 1832 were "charged with tension and bespattered with violence," and "polite debates often degenerated into frontier brawls." Unlike the previous year's election, the choice was clear between nullifiers and unionists. The nullifiers won, and on October 20, 1832, Governor Hamilton called the legislature into a special session to consider a convention. The legislative vote was 96-25 in the House and 31-13 in the Senate. In November 1832 the Nullification Convention met. The convention declared that the tariffs of 1828 and 1832 were unconstitutional and unenforceable within the state of South Carolina after February 1, 1833. They said that attempts to use force to collect the taxes would lead to the state's secession. Robert Hayne, who followed Hamilton as governor in 1833, established a 2,000-man group of mounted minutemen and a 25,000-man infantry force that would march to Charleston in the event of a military conflict. These troops were to be armed with $100,000 in arms purchased in the North. The enabling legislation passed by the legislature was carefully constructed to avoid clashes if at all possible and to create an aura of legality in the process. To avoid conflicts with Unionists, it allowed importers to pay the tariff if they so desired. Other merchants could pay the tariff by obtaining a paper tariff bond from the customs officer. They would then refuse to pay the bond when due, and if the customs official seized the goods, the merchant would file for a writ of replevin to recover the goods in state court. Customs officials who refused to return the goods (by placing them under the protection of federal troops) would be civilly liable for twice the value of the goods. To ensure that state officials and judges supported the law, a "test oath" would be required for all new state officials, binding them to support the ordinance of nullification. Governor Hayne in his inaugural address announced South Carolina's position: "If the sacred soil of Carolina should be polluted by the footsteps of an invader, or be stained with the blood of her citizens, shed in defense, I trust in Almighty God that no son of hers … who has been nourished at her bosom … will be found raising a parricidal arm against our common mother. And even should she stand ALONE in this great struggle for constitutional liberty … that there will not be found, in the wider limits of the state, one recreant son who will not fly to the rescue, and be ready to lay down his life in her defense."

Washington, D.C. (1828–1832)

When President Jackson took office in March 1829 he was well aware of the turmoil created by the "Tariff of Abominations". While he may have abandoned some of his earlier beliefs that had allowed him to vote for the Tariff of 1824, he still felt protectionism was justified for products essential to military preparedness and did not believe that the current tariff should be reduced until the national debt was fully paid off. He addressed the issue in his inaugural address and his first three messages to Congress, but offered no specific relief. In December 1831, with the proponents of nullification in South Carolina gaining momentum, Jackson was recommending "the exercise of that spirit of concession and conciliation which has distinguished the friends of our Union in all great emergencies." However, on the constitutional issue of nullification, despite his strong beliefs in states' rights, Jackson did not waver.
Calhoun’s "Exposition and Protest” did start a national debate over the doctrine of nullification. The leading proponents of the nationalistic view included Daniel Webster, Supreme Court Justice Joseph Story, Judge William Alexander Duer, John Quincy Adams, Nathaniel Chipman, and Nathan Dane. These people rejected the compact theory advanced by Calhoun, claiming that the Constitution was the product of the people, not the states. According to the nationalist position, the Supreme Court had the final say on the constitutionality of legislation, the national union was perpetual and had supreme authority over individual states. The nullifiers, on the other hand, asserted that the central government was not to be the ultimate arbiter of its own power, and that the states, as the contracting entities, could judge for themselves what was or was not constitutional. While Calhoun’s "Exposition” claimed that nullification was based on the reasoning behind the Kentucky and Virginia Resolutions, an aging James Madison in an August 28, 1830 letter to Edward Everett, intended for publication, disagreed. Madison wrote, denying that any individual state could alter the compact: |“||Can more be necessary to demonstrate the inadmissibility of such a doctrine than that it puts it in the power of the smallest fraction over 1/4 of the U. S. — that is, of 7 States out of 24 — to give the law and even the Constn. to 17 States, each of the 17 having as parties to the Constn. an equal right with each of the 7 to expound it & to insist on the exposition. That the 7 might, in particular instances be right and the 17 wrong, is more than possible. But to establish a positive & permanent rule giving such a power to such a minority over such a majority, would overturn the first principle of free Govt. and in practice necessarily overturn the Govt. itself.||”| Part of the South’s strategy to force repeal of the tariff was to arrange an alliance with the West. Under the plan, the South would support the West’s demand for free lands in the public domain if the West would support repeal of the tariff. With this purpose Robert Hayne took the floor on the Senate in early 1830, thus beginning "the most celebrated debate, in the Senate’s history.” Daniel Webster’s response shifted the debate, subsequently styled the Webster-Hayne debates, from the specific issue of western lands to a general debate on the very nature of the United States. Webster's position differed from Madison's: Webster asserted that the people of the United States acted as one aggregate body, Madison held that the people of the several states had acted collectively. John Rowan spoke against Webster on that issue, and Madison wrote, congratulating Webster, but explaining his own position. The debate presented the fullest articulation of the differences over nullification, and 40,000 copies of Webster’s response, which concluded with "liberty and Union, now and forever, one and inseparable”, were distributed nationwide. Many people expected the states’ rights Jackson to side with Hayne. However once the debate shifted to secession and nullification, Jackson sided with Webster. On April 13, 1830 at the traditional Democratic Party celebration honoring Thomas Jefferson’s birthday, Jackson chose to make his position clear. In a battle of toasts, Hayne proposed, "The Union of the States, and the Sovereignty of the States.” Jackson’s response, when his turn came, was, "Our Federal Union: It must be preserved.” To those attending, the effect was dramatic. 
Calhoun would respond with his own toast, in a play on Webster’s closing remarks in the earlier debate, "The Union. Next to our liberty, the most dear.” Finally Martin Van Buren would offer, "Mutual forbearance and reciprocal concession. Through their agency the Union was established. The patriotic spirit from which they emanated will forever sustain it.” Van Buren wrote in his autobiography of Jackson’s toast, "The veil was rent – the incantations of the night were exposed to the light of day.” Senator Thomas Hart Benton, in his memoirs, stated that the toast "electrified the country.” Jackson would have the final words a few days later when a visitor from South Carolina asked if Jackson had any message he wanted relayed to his friends back in the state. Jackson’s reply was: |“||Yes I have; please give my compliments to my friends in your State and say to them, that if a single drop of blood shall be shed there in opposition to the laws of the United States, I will hang the first man I can lay my hand on engaged in such treasonable conduct, upon the first tree I can reach.||”| Other issues than the tariff were still being decided. In May 1830 Jackson vetoed the Maysville Road Bill an important internal improvements program (especially to Kentucky and Henry Clay), and then followed this with additional vetoes of other such projects shortly before Congress adjourned at the end of May. Clay would use these vetoes to launch his presidential campaign. In 1831 the re-chartering of the Bank of the United States, with Clay and Jackson on opposite sides, reopened a long simmering problem. This issue was featured at the December 1831 National Republican convention in Baltimore which nominated Henry Clay for president, and the proposal to re-charter was formally introduced into Congress on January 6, 1832. The Calhoun-Jackson split entered the center stage when Calhoun, as vice-president presiding over the Senate, cast the tie-breaking vote to deny Martin Van Buren the post of minister to England. Van Buren was subsequently selected as Jackson’s running mate at the 1832 Democratic National Convention held in May. In February 1832 Henry Clay, back in the Senate after a two decades absence, made a three-day-long speech calling for a new tariff schedule and an expansion of his American System. In an effort to reach out to John Calhoun and other southerners, Clay’s proposal provided for a ten million dollar revenue reduction based on the amount of budget surplus he anticipated for the coming year. Significant protection was still part of the plan as the reduction primarily came on those imports not in competition with domestic producers. Jackson proposed an alternative that reduced overall tariffs to 28%. John Quincy Adams, now in the House of Representatives, used his Committee of Manufacturers to produce a compromise bill that, in its final form, reduced revenues by five million dollars, lowered duties on non-competitive products, and retained high tariffs on woolens, iron, and cotton products. In the course of the political maneuvering, George McDuffie’s Ways and Means Committee, the normal originator of such bills, prepared a bill with drastic reduction across the board. McDuffie’s bill went nowhere. Jackson signed the Tariff of 1832 on July 14, 1832, a few days after he vetoed the Bank of the United States re-charter bill. Congress adjourned after it failed to override Jackson’s veto. With Congress in adjournment, Jackson anxiously watched events in South Carolina. 
The nullifiers found no significant compromise in the Tariff of 1832 and acted accordingly (see the above section). Jackson heard rumors of efforts to subvert members of the army and navy in Charleston and he ordered the secretaries of the army and navy to begin rotating troops and officers based on their loyalty. He ordered General Winfield Scott to prepare for military operations and ordered a naval squadron in Norfolk to prepare to go to Charleston. Jackson kept lines of communication open with unionists like Joel Poinsett, William Drayton, and James L. Petigru and sent George Breathitt, brother of the Kentucky governor, to independently obtain political and military intelligence. After their defeat at the polls in October, Petigru advised Jackson that he should " Be prepared to hear very shortly of a State Convention and an act of Nullification.” On October 19, 1832 Jackson wrote to his Secretary of War: |“||The attempt will be made to surprise the Forts and garrisons by the militia, and must be guarded against with vestal vigilance and any attempt by force repelled with prompt and exemplary punishment.||”| By mid-November Jackson’s reelection was assured. On December 3, 1832 Jackson sent his fourth annual message to Congress. The message "was stridently states’ rights and agrarian in its tone and thrust” and he disavowed protection as anything other than a temporary expedient. His intent regarding nullification, as communicated to Van Buren, was "to pass it barely in review, as a mere buble [sic], view the existing laws as competent to check and put it down.” He hoped to create a "moral force” that would transcend political parties and sections. The paragraph in the message that addressed nullification was: |“||It is my painful duty to state that in one quarter of the United States opposition to the revenue laws has arisen to a height which threatens to thwart their execution, if not to endanger the integrity of the Union. What ever obstructions may be thrown in the way of the judicial authorities of the General Government, it is hoped they will be able peaceably to overcome them by the prudence of their own officers and the patriotism of the people. But should this reasonable reliance on the moderation and good sense of all portions of our fellow citizens be disappointed, it is believed that the laws themselves are fully adequate to the suppression of such attempts as may be immediately made. Should the exigency arise rendering the execution of the existing laws impracticable from any cause what ever, prompt notice of it will be given to Congress, with a suggestion of such views and measures as may be deemed necessary to meet it.||”| On December 10 Jackson issued the Proclamation to the People of South Carolina, in which he characterized the positions of the nullifiers as "impractical absurdity" and "a metaphysical subtlety, in pursuit of an impractical theory." He provided this concise statement of his belief: |“||I consider, then, the power to annul a law of the United States, assumed by one State, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, unauthorized by its spirit, inconsistent with every principle on which It was founded, and destructive of the great object for which it was formed.||”| The language used by Jackson, combined with the reports coming out of South Carolina, raised the spectre of military confrontation for many on both sides of the issue. 
A group of Democrats, led by Van Buren and Thomas Hart Benton among others, saw the only solution to the crisis in a substantial reduction of the tariff.

Negotiation and Confrontation (1833)

In apparent contradiction of his previous claim that the tariff could be enforced with existing laws, on January 16 Jackson sent his Force Bill Message to Congress. Custom houses in Beaufort and Georgetown would be closed and replaced by ships located at each port. In Charleston the custom house would be moved to either Castle Pinckney or Fort Moultrie in Charleston harbor. Direct payment rather than bonds would be required, federal jails would be established for violators whom the state refused to arrest, and all cases arising under the state's nullification act could be removed to the United States Circuit Court. In the most controversial part, the militia acts of 1795 and 1807 would be revised to permit the enforcement of the customs laws by both the militia and the regular United States military. Attempts were made in South Carolina to shift the debate away from nullification by focusing instead on the proposed enforcement. The Force Bill went to the Senate Judiciary Committee, chaired by Pennsylvania protectionist William Wilkins and supported by members Daniel Webster and Theodore Frelinghuysen of New Jersey; it gave Jackson everything he asked for. On January 28 the Senate defeated, by a vote of 30 to 15, a motion to postpone debate on the bill. All but two of the votes to delay were from the lower South, and only three from this section voted against the motion. This did not signal any increased support for nullification but did signify doubts about enforcement. In order to draw more votes, proposals were made to limit the duration of the coercive powers and to restrict the use of force to suppressing, rather than preventing, civil disorder. In the House the Judiciary Committee, in a 4-3 vote, rejected Jackson's request to use force. By the time Calhoun made a major speech on February 15 strongly opposing it, the Force Bill was temporarily stalled. On the tariff issue, the drafting of a compromise tariff was assigned in December to the House Ways and Means Committee, now headed by Gulian C. Verplanck. Debate on the committee's product on the House floor began in January 1833. The Verplanck tariff proposed reductions back to the 1816 levels over the course of the next two years while maintaining the basic principle of protectionism. The anti-Jackson protectionists saw this as an economic disaster that did not allow the Tariff of 1832 even to be tested, and as "an undignified truckling to the menaces and blustering of South Carolina." Northern Democrats did not oppose it in principle but still demanded protection for the varying interests of their own constituents. Those sympathetic to the nullifiers wanted a specific abandonment of the principle of protectionism and were willing to offer a longer transition period as a bargaining point. It was clear that the Verplanck tariff was not going to be implemented. In South Carolina, efforts were being made to avoid an unnecessary confrontation. Governor Hayne ordered the 25,000 troops he had created to train at home rather than gathering in Charleston. At a mass meeting in Charleston on January 21, it was decided to postpone the February 1 deadline for implementing nullification while Congress worked on a compromise tariff.
At the same time a commissioner from Virginia, Benjamin Watkins Leigh, arrived in Charleston bearing resolutions that criticized both Jackson and the nullifiers and offered his state as a mediator. Henry Clay had not taken his defeat in the presidential election well and was unsure of what position he could take in the tariff negotiations. His long-term concern was that Jackson was ultimately determined to kill protectionism along with the American System. In February, after consulting with manufacturers and sugar interests in Louisiana who favored protection for the sugar industry, Clay started to work on a specific compromise plan. As a starting point, he accepted the nullifiers' offer of a transition period but extended it from seven and a half years to nine years, with a final target of a 20% ad valorem rate. After first securing the support of his protectionist base, Clay, through an intermediary, broached the subject with Calhoun. Calhoun was receptive, and after a private meeting with Clay at Clay's boardinghouse, negotiations proceeded. Clay introduced the negotiated tariff bill on February 12, and it was immediately referred to a select committee consisting of Clay as chairman, Felix Grundy of Tennessee, George M. Dallas of Pennsylvania, William Cabell Rives of Virginia, Webster, John M. Clayton of Delaware, and Calhoun. On February 21 the committee reported a bill to the floor of the Senate which was largely the original bill proposed by Clay. The Tariff of 1832 would continue, except that all rates above 20% would be reduced by one tenth every two years, with the final reductions back to 20% coming in 1842. Protectionism as a principle was not abandoned, and provisions were made for raising the tariff if national interests demanded it. Although not specifically tied together by any negotiated agreement, it became clear that the Force Bill and the Compromise Tariff of 1833 were inexorably linked. In his February 25 speech ending the debate on the tariff, Clay captured the spirit of the voices for compromise by condemning Jackson's Proclamation to South Carolina as inflammatory, admitting the same problem with the Force Bill but indicating its necessity, and praising the Compromise Tariff as the final measure to restore balance, promote the rule of law, and avoid the "sacked cities," "desolated fields," and "smoking ruins" that he said would be the product of the failure to reach a final accord. The House passed the Compromise Tariff by 119-85 and the Force Bill by 149-48. In the Senate the tariff passed 29-16 and the Force Bill 32-1, with many opponents of it walking out rather than voting for it. Calhoun rushed to Charleston with the news of the final compromises. The Nullification Convention met again on March 11. It repealed the November Nullification Ordinance and also, "in a purely symbolic gesture", nullified the Force Bill. While the nullifiers claimed victory on the tariff issue, even though they had made concessions, the verdict was very different on nullification. The majority had, in the end, ruled, and this boded ill for the South, a minority section, and for its hold on slavery. Rhett summed this up at the convention on March 13. Warning that "A people, owning slaves, are mad, or worse than mad, who do not hold their destinies in their own hands," he continued: "Every stride of this Government, over your rights, brings it nearer and nearer to your peculiar policy. … The whole world are in arms against your institutions … Let Gentlemen not be deceived.
"It is not the Tariff – not Internal Improvement – nor yet the Force Bill, which constitutes the great evil against which we are contending. … These are but the forms in which the despotic nature of the government is evinced – but it is the despotism which constitutes the evil: and until this Government is made a limited Government … there is no liberty – no security for the South." People reflected on the meaning of the nullification crisis and its outcome for the country. On May 1, 1833, Jackson wrote, "the tariff was only a pretext, and disunion and southern confederacy the real object. The next pretext will be the negro, or slavery question." The final resolution of the crisis and Jackson's leadership had appeal throughout the North and South. Robert Remini, the historian and Jackson biographer, described the opposition that nullification drew from traditionally states' rights Southern states: The Alabama legislature, for example, pronounced the doctrine "unsound in theory and dangerous in practice." Georgia said it was "mischievous," "rash and revolutionary." Mississippi lawmakers chided the South Carolinians for acting with "reckless precipitancy." Forrest McDonald, describing the split over nullification among proponents of states' rights, wrote, "The doctrine of states' rights, as embraced by most Americans, was not concerned exclusively, or even primarily with state resistance to federal authority." But, by the end of the nullification crisis, many southerners had started to question whether the Jacksonian Democrats still represented Southern interests. The historian William J. Cooper notes that "Numerous southerners had begun to perceive it [the Jacksonian Democratic Party] as a spear aimed at the South rather than a shield defending the South." In the political vacuum created by this alienation, the southern wing of the Whig Party was formed. The party was a coalition of interests united by the common thread of opposition to Andrew Jackson and, more specifically, his "definition of federal and executive power." The party included former National Republicans with an "urban, commercial, and nationalist outlook" as well as former nullifiers. Emphasizing that "they were more southern than the Democrats," the party grew within the South by going "after the abolition issue with unabashed vigor and glee." With both parties arguing over who could best defend southern institutions, the nuances of the differences between free soil and abolitionism, which became an issue in the late 1840s with the Mexican War and territorial expansion, never became part of the political dialogue. This failure increased the volatility of the slavery issues. Richard Ellis argues that the end of the crisis signified the beginning of a new era. Within the states' rights movement, the traditional desire for simply "a weak, inactive, and frugal government" was challenged. Ellis states that "in the years leading up to the Civil War the nullifiers and their pro-slavery allies used the doctrine of states' rights and state sovereignty in such a way as to try to expand the powers of the federal government so that it could more effectively protect the peculiar institution." By the 1850s, states' rights had become a call for state equality under the Constitution. Madison reacted to this incipient tendency by writing two paragraphs of "Advice to My Country," found among his papers. It said that the Union "should be cherished and perpetuated.
Let the open enemy to it be regarded as a Pandora with her box opened; and the disguised one, as the Serpent creeping with his deadly wiles into paradise." Richard Rush published this "Advice" in 1850, by which time Southern spirit was so high that it was denounced as a forgery. The first test for the South over the slavery issue began during the final congressional session of 1835. In what became known as the Gag Rule Debates, abolitionists flooded the Congress with anti-slavery petitions to end slavery and the slave trade in Washington, D.C. The debate was reopened each session as Southerners, led by South Carolinians Henry Pinckney and John Hammond, prevented the petitions from even being officially received by Congress. Led by John Quincy Adams, the slavery debate remained on the national stage until late 1844 when Congress lifted all restrictions on processing the petitions. Describing the legacy of the crisis, Sean Wilentz writes: |“||The battle between Jacksonian democratic nationalists, northern and southern, and nullifier sectionalists would resound through the politics of slavery and antislavery for decades to come. Jackson's victory, ironically, would help accelerate the emergence of southern pro-slavery as a coherent and articulate political force, which would help solidify northern antislavery opinion, inside as well as outside Jackson's party. Those developments would accelerate the emergence of two fundamentally incompatible democracies, one in the slave South, the other in the free North.||”| For South Carolina, the legacy of the crisis involved both the divisions within the state during the crisis and the apparent isolation of the state as the crisis was resolved. By 1860, when South Carolina became the first state to secede, the state was more internally united than any other southern state. Historian Charles Edward Cauthen writes: |“||Probably to a greater extent than in any other Southern state South Carolina had been prepared by her leaders over a period of thirty years for the issues of 1860. Indoctrination in the principles of state sovereignty, education in the necessity of maintaining Southern institutions, warnings of the dangers of control of the federal government by a section hostile to its interests – in a word, the education of the masses in the principles and necessity of secession under certain circumstances – had been carried on with a skill and success hardly inferior to the masterly propaganda of the abolitionists themselves. It was this education, this propaganda, by South Carolina leaders which made secession the almost spontaneous movement that it was.||”| - Origins of the American Civil War - American System (economic plan) - American School (economics) - Alexander Hamilton - Friedrich List - Freehling, The Road to Disunion, pg. 255. Craven pg. 60. Ellis pg. 7. - Remini, Andrew Jackson, v2, pp. 136-137. Niven, pp. 135-137. Freehling, Prelude to Civil War, pg. 143. - Craven, pg. 65. Niven, pp. 135-137. Freehling, Prelude to Civil War, pg. 143. - Niven p. 192. Calhoun replaced Robert Y. Hayne as senator so that Hayne could follow James Hamilton as governor. Niven writes, "There is no doubt that these moves were part of a well-thought-out plan whereby Hayne would restrain the hotheads in the state legislature and Calhoun would defend his brainchild, nullification, in Washington against administration stalwarts and the likes of Daniel Webster, the new apostle of northern nationalism." - Howe p. 410. 
In the Senate only Virginia and South Carolina voted against the 1832 tariff. Howe writes, "Most southerners saw the measure as a significant amelioration of their grievance and were now content to back Jackson for reelection rather than pursue the more drastic remedy such as the one South Carolina was touting." - Freehling, Prelude to Civil War pp. 1-3. Freehling writes, "In Charleston Governor Robert Y. Hayne ... tried to form an army which could hope to challenge the forces of ‘Old Hickory.’ Hayne recruited a brigade of mounted minutemen, 2,000 strong, which could swoop down on Charleston the moment fighting broke out, and a volunteer army of 25,000 men which could march on foot to save the beleaguered city. In the North Governor Hayne’s agents bought over $100,000 worth of arms; in Charleston Hamilton readied his volunteers for an assault on the federal forts.” - Wilentz, pg. 388. - Woods, pg. 78. - Tuttle, California Digest 26, pg. 47. - "Linn sheriff says he won't enforce federal gun orders". January 16, 2013. - Ellis, pg. 4. - McDonald pg. vii. McDonald wrote, "Of all the problems that beset the United States during the century from the Declaration of Independence to the end of Reconstruction, the most pervasive concerned disagreements about the nature of the Union and the line to be drawn between the authority of the general government and that of the several states. At times the issue bubbled silently and unseen between the surface of public consciousness; at times it exploded: now and again the balance between general and local authority seemed to be settled in one direction or another, only to be upset anew and to move back toward the opposite position, but the contention never went away.” - Ellis pp. 1-2. - For full text of the resolutions, see Kentucky Resolutions of 1798 and Kentucky Resolutions of 1799. - James Madison, Virginia Resolutions of 1798 - Banning pg. 388. - Brant, pg. 297, 629. - Brant, pp. 298. - Brant, pg. 629. - Ketcham pg. 396. - Wilentz, pg. 80. - Ellis, pg. 5. Madison called for the constitutional amendment because he believed much of the American System was unconstitutional. Historian Richard Buel Jr. notes that in preparing for the worst from the Hartford Convention, the Madison administration made preparation to intervene militarily in case of New England secession. Troops from the Canada–US border were moved near Albany so that they could move into either Massachusetts or Connecticut if necessary. New England troops were also returned to their recruitment areas in order to serve as a focus for loyalists. Buel, pp. 220-221. - McDonald, pp. 69-70. - Wilentz pg. 166. - Wilentz, pg. 181. - Ellis, pg. 6. Wilentz, pg. 182. - Freehling, Prelude to Civil War, pp. 92-93. - Wilentz pg. 243. Economic historian Frank Taussig notes "The act of 1816, which is generally said to mark the beginning of a distinctly protective policy in this country, belongs rather to the earlier series of acts, beginning with that of 1789, than to the group of acts of 1824, 1828, and 1832. Its highest permanent rate of duty was twenty per cent., an increase over the previous rates which is chiefly accounted for by the heavy interest charge on the debt incurred during the war. But after the crash of 1819, a movement in favor of protection set in, which was backed by a strong popular feeling such as had been absent in the earlier years.” The Tariff History of the United States (Part I) Teaching American History - Remini, Henry Clay, pg. 232. Freehling, The Road to Disunion, pg. 257. 
- McDonald, pg. 95. - Brant, p. 622. - Remini, Andrew Jackson, v2, pp. 136-137. McDonald presents a slightly different rationale. He stated that the bill would "adversely affect New England woolen manufacturers, ship builders, and shipowners” and Van Buren calculated that New England and the South would unite to defeat the bill, allowing Jacksonians to have it both ways – in the North they could claim they tried but failed to pass a needed tariff and in the South they could claim that they had thwarted an effort to increase import duties. McDonald, pp. 94-95. - Cooper, pp. 11-12. - Freehling, The Road to Disunion, pg. 255. Historian Avery Craven wrote, "Historians have generally ignored the fact that the South Carolina statesmen, in the so-called Nullification controversy, were struggling against a practical situation. They have conjured up a great struggle between nationalism and States” rights and described these men as theorists reveling in constitutional refinements for the mere sake of logic. Yet here was a clear case of commercial and agricultural depression. Craven, pg. 60. - Ellis, pg. 7. Freehling notes that divisions over nullification in the state generally corresponded to the extent that the section suffered economically. The exception was the "Low country rice and luxury cotton planters” who supported nullification despite their ability to survive the economic depression. This section had the highest percentage of slave population. Freehling, Prelude to Civil War, pg. 25. - Cauthen pg. 1. - Ellis, pg. 7. Freehling, Road to Disunion, pg. 256. - Gerald Horne, Negro Comrades of the Crown: African Americans and the British Empire Fight the U.S. Before Emancipation'], New York University (NYU) Press, 2012, pp. 97-98 - Freehling, Road to Disunion, p. 254. - Craven, pg. 65. - Niven, pp. 135-137. Freehling, Prelude to Civil War, pg. 143. - South Carolina Exposition and Protest - Niven, pp. 158-162. - Niven, pg. 161. - Niven, pp. 163-164. - Walther, pg. 123. Craven, pp. 63-64. - Freehling, Prelude to Civil War, pg. 149. - Freehling, Prelude to Civil War, pp. 152-155, 173-175. A two-thirds vote of each house of the legislature was required to convene a state convention. - Freehling, Prelude to Civil War, pp. 177-186. - Freehling, Prelude to Civil War, pp. 205-213. - Freehling, Prelude to Civil War, pp. 213-218. - Peterson, pp. 189-192. Niven, pp. 174-181. Calhoun wrote of McDuffie’s speech, "I think it every way imprudent and have so written Hamilton … I see clearly it brings matters to a crisis, and that I must meet it promptly and manfully.” Freehling in his works frequently refers to the radicals as "Calhounites” even before 1831. This is because the radicals, rallying around Calhoun’s "Exposition,” were linked ideologically, if not yet practically, with Calhoun. - Niven, pp. 181-184. - Ellis pg. 193. Freehling, Prelude to Civil War, pg. 257. - Freehling, pp. 224-239. - Freehling, Prelude to Civil War, pp. 252-260. - Freehling, Prelude to Civil War, pp. 1-3. - Ellis, pp. 97-98. - Remini, Andrew Jackson, v. 3, pg. 14. - Ellis, pp. 41-43. - Ellis, pg. 9. - Ellis pg. 9. - Brant, pg. 627. - Ellis pg. 10. Ellis wrote, "But the nullifiers' attempt to legitimize their controversial doctrine by claiming it was a logical extension of the principles embodied in the Kentucky and Virginia Resolutions upset him. 
In a private letter he deliberately wrote for publication, Madison denied many of the assertions of the nullifiers and lashed out in particular at South Carolina's claim that if a state nullified an act of the federal government it could only be overruled by an amendment to the Constitution." Full text of the letter is available at http://www.constitution.org/jm/18300828_everett.htm. - Brant, pp. 626-7. Webster never asserted the consolidating position again. - McDonald, pp. 105-106. - Remini, Andrew Jackson, v. 2, pp. 233-235. - Remini, Andrew Jackson',' v. 2, pp. 233-237. - Remini, Andrew Jackson, v. 2, pp. 255-256. Peterson, pp. 196-197. - Remini, Andrew Jackson, v. 2, pp. 343-348. - Remini, Andrew Jackson, v. 2 pp. 347-355. - Remini, Andrew Jackson, v. 2 pp. 358-373. Peterson, pp. 203-212. - Remini, Andrew Jackson, v. 2, pp. 382-389. - Ellis pg. 82. - Remini, Andrew Jackson, v. 3 pp. 9-11. Full text of his message available at http://www.thisnation.com/library/sotu/1832aj.html - Ellis pg 83-84. Full document available at: http://www.yale.edu/lawweb/avalon/presiden/proclamations/jack01.htm - Ellis, po. 93-95. - Ellis, pp. 160-165. Peterson, pp. 222-224. Peterson differs with Ellis in arguing that passage of the Force Bill "was never in doubt.” - Ellis, pp. 99-100. Peterson, pg. 217. - Wilentz, pp. 384-385. - Peterson, pp. 217-226. - Peterson, pp. 226-228. - Peterson pp. 229-232. - Freehling, Prelude to Civil War, pp. 295-297. - Freehling, Prelude to Civil War, pg. 297. Willentz pg. 388. - Jon Meacham (2009), American Lion: Andrew Jackson in the White House, New York: Random House, p. 247; Correspondence of Andrew Jackson, Vol. V, p. 72. - Remini, Andrew Jackson, v3. pg. 42. - McDonald, pg. 110. - Cooper, pp. 53–65. - Ellis, pg. 198. - Brant p. 646; Rush produced a copy in Mrs. Madison's hand; the original also survives. The contemporary letter to Edward Coles (Brant, p. 639) makes plain that the enemy in question is the nullifier. - Freehling, Prelude to Civil War, pp. 346-356. McDonald (pp. 121–122) saw states' rights in the period from 1833–1847 as almost totally successful in creating a "virtually nonfunctional" federal government. This did not insure political harmony, as "the national political arena became the center of heated controversy concerning the newly raised issue of slavery, a controversy that reached the flash point during the debates about the annexation of the Republic of Texas." - Cauthen, pg. 32. - Brant, Irving: The Fourth President: A Life of James Madison Bobbs Merrill, 1970. - Buel, Richard Jr. America on the Brink: How the Political Struggle Over the War of 1812 Almost Destroyed the Young Republic (2005) ISBN 1-4039-6238-3 - Cauthen, Charles Edward. South Carolina Goes to War (1950) ISBN 1-57003-560-1 - Cooper, William J. Jr. The South and the Politics of Slavery 1828-1856 (1978) ISBN 0-8071-0385-3 - Craven, Avery. The Coming of the Civil War (1942) ISBN 0-226-11894-0 - Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights, and the Nullification Crisis (1987) - Freehling, William W. The Road to Disunion: Secessionists at Bay, 1776-1854 (1991), Vol. 1 - Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816-1836 (1965) ISBN 0-19-507681-8 - Howe, Daniel Walker. What Hath God Wrought: The Transformation of America, 1815-1848. (2007) ISBN 978-0-19-507894-7 - McDonald, Forrest. States’ Rights and the Union: Imperium in Imperio 1776-1876 (2000) ISBN 0-7006-1040-5 - Niven, John. John C. 
Calhoun and the Price of Union (1988) ISBN 0-8071-1451-0 - Peterson, Merrill D. The Great Triumvirate: Webster, Clay, and Calhoun (1987) ISBN 0-19-503877-0 - Remini, Robert V. Andrew Jackson and the Course of American Freedom, 1822-1832, v2 (1981) ISBN 0-06-014844-6 - Remini, Robert V. Andrew Jackson and the Course of American Democracy, 1833-1845, v3 (1984) ISBN 0-06-015279-6 - Remini, Robert V. Henry Clay: Statesman for the Union (1991) ISBN 0-393-31088-4 - Tuttle, Charles A. (Court Reporter) California Digest: A Digest of the Reports of the Supreme Court of California, Volume 26 (1906) - Walther, Eric C. The Fire-Eaters (1992) ISBN 0-8071-1731-5 - Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln (2005) ISBN 0-393-05820-4 - Woods, Thomas E. Jr. Nullification (2010) ISBN 978-1-59698-149-2 - Barnwell, John (1982). Love of Order: South Carolina's First Secession Crisis. Chapel Hill: University of North Carolina Press. ISBN 0-8078-1498-9. - Capers, Gerald M. (1960). John C. Calhoun, Opportunist: A Reappraisal. Gainesville: University of Florida Press. - Coit, Margaret L. (1950). John C. Calhoun: American Portrait. Boston: Houghton Mifflin Co. - Houston, David Franklin (1896). A Critical Study of Nullification in South Carolina. Longmans, Green, and Co. - Latner, Richard B. (1977). "The Nullification Crisis and Republican Subversion". Journal of Southern History. 43 (1): 18–38. JSTOR 2207553. - McCurry, Stephanie (1995). Masters of Small Worlds: Yeoman Households, Gender Relations and the Political Culture of the Antebellum South Carolina Low Country. New York: Oxford University Press. ISBN 0-19-507236-7. - Pease, Jane H.; Pease, William H. (1981). "The Economics and Politics of Charleston's Nullification Crisis". Journal of Southern History. 47 (3): 335–362. JSTOR 2207798. - Ratcliffe, Donald (2000). "The Nullification Crisis, Southern Discontents, and the American Political Process". American Nineteenth Century History. 1 (2): 1–30. doi:10.1080/14664650008567014. - Wiltse, Charles (1949). John C. Calhoun, Nullifier, 1829–1839. Indianapolis: Bobbs-Merrill. - South Carolina Exposition and Protest, by Calhoun, 1828. - The Fort Hill Address: On the Relations of the States and the Federal Government, by Calhoun, July 1831. - South Carolina Ordinance of Nullification, November 24, 1832. - President Jackson's Proclamation to South Carolina, December 10, 1832. - Primary Documents in American History: Nullification Proclamation (Library of Congress) - President Jackson's Message to the Senate and House Regarding South Carolina's Nullification Ordinance, January 16, 1833 - Nullification Revisited: An article examining the constitutionality of nullification (from a favorable aspect, and with regard to both recent and historical events). - Early Threat of Secession: Missouri Compromise of 1820 and Nullification Crisis
Lisa got 14 out of 30 questions correct on her quiz. What is this as a percent?

To write 14/30 as a percent you need an equivalent fraction with a denominator of 100, so multiply the numerator and denominator by 10/3, or just use a calculator to divide. Because this is a test result, it would probably be given as 47% by rounding to the nearest whole number.

Fractions, decimals and percentages are all different ways of expressing the same relationship between two numbers, and they are interchangeable. Just dividing gives 14 ÷ 30 = 0.466666..., and the first two decimal places represent hundredths, which indicate percent. So this value could also be written as 46.6666...%. However, recurring decimals in percentages (especially thirds and sixths) are better written in fraction form, so the best way of giving an exact answer without rounding off is 46 2/3 %. Equivalently, multiply by 100% (since 100% = 1, we are not changing the value): 14/30 × 100% = 46 2/3 %. However, as this is a test score, a whole-number answer would probably be given: 47%.

To construct a percentage we form a fraction and multiply it by 100. The required fraction is found as follows:

(number of correct questions) / (total number of questions)

To obtain this fraction as a percentage, multiply by 100%, which may be simplified by 'cancelling' the 30 and 100 (dividing both by 10):

14/30 × 100% = (14 × 10)/(3 × 1) % = 140/3 %

140/3 % = 46 2/3 % = 46.666...%

There are two ways of writing a percentage. Suppose we are talking about 25 percent: the most common way of seeing this is 25%, but for the purpose of calculations it is convenient to write it as 25/100. The word percent can be split into two parts: 'per' means "for each of", and 'cent' means 100 (think of centenary). Using the fraction format, let the unknown count be 14 out of 30 and write this as the ratio 14/30 = x/100. From here we have two options. Method 1 (shortcut approach): multiply both sides by 100 so that you find the value of x. Method 2: treat it as a ratio and scale the proportion up so that the denominator becomes 100. The shortcut method really is the same as Method 2 but it cuts out some steps; for multiplying or dividing, what we do to the bottom we do to the top. As a percentage we write the answer as 46 2/3 %, or about 47%.
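As a rough illustration of the same arithmetic, here is a minimal Python sketch (not part of the original answer) that converts a score such as 14 out of 30 into both the exact percentage and the rounded whole-number percentage; using the fractions module is simply one convenient way to keep the exact value 140/3.

from fractions import Fraction

def as_percent(correct, total):
    # Exact percentage as a fraction, e.g. 14/30 * 100 = 140/3 %
    exact = Fraction(correct, total) * 100
    # Decimal value rounded to the nearest whole percent, e.g. 47
    rounded = round(correct / total * 100)
    return exact, rounded

exact, rounded = as_percent(14, 30)
print(exact)    # 140/3, i.e. 46 2/3 %
print(rounded)  # 47

Run it with any score and total to reproduce either the exact-fraction answer or the rounded test-score answer described above.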
Ideal gas law

The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Benoît Paul Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. The ideal gas law is often written in the empirical form

pV = nRT

The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these quantities in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin. The most frequently introduced forms are

pV = nRT = N k_B T

where
- p is the absolute pressure of the gas,
- V is the volume of the gas,
- n is the amount of substance of gas (also known as the number of moles),
- R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant,
- k_B is the Boltzmann constant,
- N_A is the Avogadro constant,
- T is the absolute temperature of the gas,
- N is the number of particles (usually atoms or molecules) of the gas.

In SI units, p is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 K = −273.15 °C, the lowest possible temperature). R has the value 8.314 J/(mol·K) = 1.989 ≈ 2 cal/(mol·K), or 0.0821 L⋅atm/(mol⋅K).

How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount n (in moles) is equal to the total mass of the gas m (in kilograms) divided by the molar mass M (in kilograms per mole):

n = m/M

By replacing n with m/M and subsequently introducing the density ρ = m/V, we get

pV = (m/M)RT, and hence p = ρ(R/M)T

Defining the specific gas constant R_specific as the ratio R/M,

p = ρ R_specific T

This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as

p v = R_specific T

It is common, especially in engineering and meteorological applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol, such as R̄ or R*, to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to.

In statistical mechanics the following molecular equation is derived from first principles:

p = n k_B T

where p is the absolute pressure of the gas, n is the number density of the molecules (given by the ratio n = N/V, in contrast to the previous formulation in which n is the number of moles), T is the absolute temperature, and k_B is the Boltzmann constant relating temperature and energy, given by

k_B = R/N_A

where N_A is the Avogadro constant. Since ρ = m/V = n μ m_u (with μ the average particle mass in unified atomic mass units and m_u the atomic mass constant), the ideal gas law can be rewritten as

p = ρ k_B T / (μ m_u)

Combined gas law
Combining the laws of Charles, Boyle and Gay-Lussac gives the combined gas law, which takes the same functional form as the ideal gas law save that the number of moles is unspecified, and the ratio of pV to T is simply taken as a constant:

pV/T = k

where p is the pressure of the gas, V is the volume of the gas, T is the absolute temperature of the gas, and k is a constant.
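Before returning to the combined law, here is a small Python sketch that makes the different forms above concrete. The conditions (100 kPa, 300 K, a 0.0250 m³ vessel) and the molar mass of dry air (0.02897 kg/mol) are assumed round-number values chosen for illustration; none of them come from the article itself.

R = 8.314  # J/(mol·K), universal gas constant

# Amount of substance in a 0.0250 m^3 vessel at 100 kPa and 300 K, from pV = nRT
p, V, T = 100e3, 0.0250, 300.0
n = p * V / (R * T)          # about 1.00 mol

# Density form: p = rho * R_specific * T, with R_specific = R / M
M_air = 0.02897              # kg/mol, approximate molar mass of dry air (assumed value)
R_specific = R / M_air       # about 287 J/(kg·K)
rho = p / (R_specific * T)   # about 1.16 kg/m^3

print(round(n, 3), round(R_specific, 1), round(rho, 3))

The specific gas constant that falls out, roughly 287 J/(kg·K), is the value commonly quoted for dry air, which is a useful sanity check on the density form.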
When comparing the same substance under two different sets of conditions, the law can be written as Energy associated with a gas According to the assumptions of the kinetic theory of ideal gases, one can consider that there are no intermolecular attractions between the molecules, or atoms, of an ideal gas. In other words, its potential energy is zero. Hence, all the energy possessed by the gas is the kinetic energy of the molecules, or atoms, of the gas. |Energy of a monoatomic gas||Mathematical expression| |Energy associated with one mole| |Energy associated with one gram| |Energy associated with one atom| Applications to thermodynamic processes The table below essentially simplifies the ideal gas equation for a particular processes, thus making this equation easier to solve using numerical methods. A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, S, or H) is constant throughout the process. For a given thermodynamics process, in order to specify the extent of a particular process, one of the properties ratios (which are listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation). In the final three columns, the properties (p, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed. |Process||Constant||Known ratio or delta||p2||V2||T2| |Isobaric process||Pressure||V2/V1||p2 = p1||V2 = V1(V2/V1)||T2 = T1(V2/V1)| |T2/T1||p2 = p1||V2 = V1(T2/T1)||T2 = T1(T2/T1)| |Volume||p2/p1||p2 = p1(p2/p1)||V2 = V1||T2 = T1(p2/p1)| |T2/T1||p2 = p1(T2/T1)||V2 = V1||T2 = T1(T2/T1)| |Isothermal process||Temperature||p2/p1||p2 = p1(p2/p1)||V2 = V1/(p2/p1)||T2 = T1| |V2/V1||p2 = p1/(V2/V1)||V2 = V1(V2/V1)||T2 = T1| (Reversible adiabatic process) |p2/p1||p2 = p1(p2/p1)||V2 = V1(p2/p1)(−1/γ)||T2 = T1(p2/p1)(γ − 1)/γ| |V2/V1||p2 = p1(V2/V1)−γ||V2 = V1(V2/V1)||T2 = T1(V2/V1)(1 − γ)| |T2/T1||p2 = p1(T2/T1)γ/(γ − 1)||V2 = V1(T2/T1)1/(1 − γ)||T2 = T1(T2/T1)| |Polytropic process||P Vn||p2/p1||p2 = p1(p2/p1)||V2 = V1(p2/p1)(−1/n)||T2 = T1(p2/p1)(n − 1)/n| |V2/V1||p2 = p1(V2/V1)−n||V2 = V1(V2/V1)||T2 = T1(V2/V1)(1 − n)| |T2/T1||p2 = p1(T2/T1)n/(n − 1)||V2 = V1(T2/T1)1/(1 − n)||T2 = T1(T2/T1)| (Irreversible adiabatic process) |p2 − p1||p2 = p1 + (p2 − p1)||T2 = T1 + μJT(p2 − p1)| |T2 − T1||p2 = p1 + (T2 − T1)/μJT||T2 = T1 + (T2 − T1)| ^ a. In an isentropic process, system entropy (S) is constant. Under these conditions, p1V1γ = p2V2γ, where γ is defined as the heat capacity ratio, which is constant for a calorifically perfect gas. The value used for γ is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2), (and air, which is 99% diatomic). Also γ is typically 1.6 for mono atomic gases like the noble gases helium (He), and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on constitution gases and temperature. ^ b. In an isenthalpic process, system enthalpy (H) is constant. In the case of free expansion for an ideal gas, there are no molecular interactions, and the temperature remains constant. 
For real gasses, the molecules do interact via attraction or repulsion depending on temperature and pressure, and heating or cooling does occur. This is known as the Joule–Thomson effect. For reference, the Joule–Thomson coefficient μJT for air at room temperature and sea level is 0.22 °C/bar. Deviations from ideal behavior of real gases The equation of state given here (PV = nRT) applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and inter molecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces. The empirical laws that led to the derivation of the ideal gas law were discovered with experiments that changed only 2 state variables of the gas and kept every other one constant. All the possible gas laws that could have been discovered with this kind of setup are: (1) known as Boyle's law (2) known as Charles's law (3) known as Avogadro's law (4) known as Gay-Lussac's law where P stands for pressure, V for volume, N for number of particles in the gas and T for temperature; where are not actual constants but are in this context because of each equation requiring only the parameters explicitly noted in it changing. To derive the ideal gas law one does not need to know all 6 formulas, one can just know 3 and with those derive the rest or just one more to be able to get the ideal gas law, which needs 4. Since each formula only holds when only the state variables involved in said formula change while the others remain constant, we cannot simply use algebra and directly combine them all. I.e. Boyle did his experiments while keeping N and T constant and this must be taken into account. Keeping this in mind, to carry the derivation on correctly, one must imagine the gas being altered by one process at a time. The derivation using 4 formulas can look like this: at first the gas has parameters After this process, the gas has parameters After this process, the gas has parameters After this process, the gas has parameters After this process, the gas has parameters If three of the six equations are known, it may be possible to derive the remaining three using the same method. However, because each formula has two variables, this is possible only for certain groups of three. For example, if you were to have equations (1), (2) and (4) you would not be able to get any more because combining any two of them will only give you the third. However, if you had equations (1), (2) and (3) you would be able to get all six equations because combining (1) and (2) will yield (4), then (1) and (3) will yield (6), then (4) and (6) will yield (5), as well as would the combination of (2) and (3) as is explained in the following visual relation: where the numbers represent the gas laws numbered above. 
If you were to use the same method used above on 2 of the 3 laws on the vertices of one triangle that has a "O" inside it, you would get the third. Change only pressure and volume first: then only volume and temperature: then as we can choose any value for , if we set , equation (2') becomes: The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved. The fundamental assumptions of the kinetic theory of gases imply that Using the Maxwell–Boltzmann distribution, the fraction of molecules that have a speed in the range to is , where and denotes the Boltzmann constant. The root-mean-square speed can be calculated by Using the integration formula it follows that from which we get the ideal gas law: Let q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then the time-averaged kinetic energy of the particle is: By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is the divergence theorem implies that where dV is an infinitesimal volume within the container and V is the total volume of the container. Putting these equalities together yields which immediately implies the ideal gas law for N particles: For a d-dimensional system, the ideal gas pressure is: where is the volume of the d-dimensional domain in which the gas exists. Note that the dimensions of the pressure changes with dimensionality. - Boltzmann constant – Physical constant relating particle kinetic energy with temperature - Configuration integral – Function in thermodynamics and statistical physics - Dynamic pressure – Concept in fluid dynamics - Gas laws - Internal energy – Energy contained within a system - Van der Waals equation – Gas equation of state which accounts for non-ideal gas behavior - Clapeyron, E. (1835). "Mémoire sur la puissance motrice de la chaleur". Journal de l'École Polytechnique (in French). XIV: 153–90. Facsimile at the Bibliothèque nationale de France (pp. 153–90). - Krönig, A. (1856). "Grundzüge einer Theorie der Gase". Annalen der Physik und Chemie (in German). 99 (10): 315–22. Bibcode:1856AnP...175..315K. doi:10.1002/andp.18561751008. Facsimile at the Bibliothèque nationale de France (pp. 315–22). - Clausius, R. (1857). "Ueber die Art der Bewegung, welche wir Wärme nennen". Annalen der Physik und Chemie (in German). 176 (3): 353–79. Bibcode:1857AnP...176..353C. doi:10.1002/andp.18571760302. Facsimile at the Bibliothèque nationale de France (pp. 353–79). - "Equation of State". Archived from the original on 2014-08-23. Retrieved 2010-08-29. - Moran; Shapiro (2000). Fundamentals of Engineering Thermodynamics (4th ed.). Wiley. ISBN 0-471-31713-6. - Raymond, Kenneth W. (2010). General, organic, and biological chemistry : an integrated approach (3rd ed.). John Wiley & Sons. p. 186. ISBN 9780470504765. Retrieved 29 January 2019. - J. R. Roebuck (1926). 
"The Joule-Thomson Effect in Air". Proceedings of the National Academy of Sciences of the United States of America. 12 (1): 55–58. Bibcode:1926PNAS...12...55R. doi:10.1073/pnas.12.1.55. PMC 1084398. PMID 16576959. - Khotimah, Siti Nurul; Viridi, Sparisoma (2011-06-07). "Partition function of 1-, 2-, and 3-D monatomic ideal gas: A simple and comprehensive review". Jurnal Pengajaran Fisika Sekolah Menengah. 2 (2): 15–18. arXiv:1106.1273. Bibcode:2011arXiv1106.1273N. - Davis; Masten (2002). Principles of Environmental Engineering and Science. New York: McGraw-Hill. ISBN 0-07-235053-9. - "Website giving credit to Benoît Paul Émile Clapeyron, (1799–1864) in 1834". Archived from the original on July 5, 2007. - Configuration integral (statistical mechanics) where an alternative statistical mechanics derivation of the ideal-gas law, using the relationship between the Helmholtz free energy and the partition function, but without using the equipartition theorem, is provided. Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. this wiki site is down; see this article in the web archive on 2012 April 28. - Gas equations in detail
File ForksMany operating systems treat a file simply as a named, ordered sequence of bytes (possibly terminated by a byte having a special value that indicates the end-of-file). As illustrated in Figure 1-1, however, each Macintosh file has two forks, known as the data fork and the resource fork. A file's resource fork contains that file's resources. If the file is an application, the resource fork typically contains resources that describe the application's menus, dialog boxes, icons, and even the executable code of the application itself. A particularly important resource is the application's 'SIZE'resource, which contains information about the capabilities of the application and its run-time memory requirements. If the file is a document, its resource fork typically contains preference settings, window locations, and document-specific fonts, icons, and so forth. Figure 1-1 The two forks of a Macintosh file A file's data fork contains the file's data. It is simply a series of consecutive bytes of data. In a sense, the data fork of a Macintosh file corresponds to an entire file in operating systems that treat a file simply as a sequence of bytes. The bytes stored in a file's data fork do not have to exhibit any internal structure, unlike the bytes stored in the resource fork (which consists of a resource map followed by resources). Rather, your application is responsible for interpreting the bytes in the data fork in whatever manner is appropriate. The data fork of a document file might, for example, contain the text of a letter. Even though a Macintosh file always contains both a resource fork and a data fork, one or both of those forks can be empty. Document files sometimes contain only data (in which case the resource fork is empty). More often, document files contain both resources and data. Application files generally contain resources only (in which case, the data fork is empty). Application files can, however, contain data as well. Whether you store specific data in the data fork or in the resource fork of a file depends largely on whether that data can usefully be structured as a resource. For example, if you want to store a small number of names and telephone numbers, you can easily define a resource type that pairs each name with its telephone number. Then you can read names and corresponding numbers from the resource file by using Resource Manager routines. To retrieve the data stored in a resource, you simply specify the resource type and ID; you don't need to know, for instance, how many bytes of data are stored in that resource. In some cases, however, it is not possible or advisable to store your data in resources. The data might be too difficult to put into the structure required by the Resource Manager. For example, it is easiest to store a document's text, which is usually of variable length, in a file's data fork. Then you can use File Manager routines to access any byte or group of bytes individually. Even when it is easy to define a resource type for your data, limitations on the Resource Manager might compel you to store your data in the data fork instead. A resource fork can contain at most about 2700 resources. More importantly, the Resource Manager searches linearly through a file's resource types and resource IDs. If the number of types or IDs to be searched is large, accessing the resource data can become slow. As a rule of thumb, if you need to manage data that would occupy more than about 500 resources total, you should use the data fork instead. 
Because the Resource Manager is of limited use in storing large amounts of user-generated data, most of the techniques in "Using Files" (beginning on page 1-12) illustrate the use of File Manager routines to manage information stored in a file's data fork. See the section "Using a Preferences File" on page 1-36 for an example of the use of the Resource Manager to access data stored in a file's resource fork. - In general, you should store data created by the user in a file's data fork, unless the data is guaranteed to occupy a small number of resources. The Resource Manager was not designed to be a general-purpose data storage and retrieval system. Also, the Resource Manager does not support multiple access to a file's resource fork. If you want to store data that can be accessed by multiple users of a shared volume, use the
6.1 Rotation Angle and Angular Velocity

The arc of a bird's flight and Earth's path around the Sun are examples of curved motion. In the absence of a net external force, motion is along a straight line at constant speed; here we study the forces that cause motion along curves. This chapter is a continuation of Dynamics: Newton's Laws of Motion, applying the laws of motion to new situations, and it leads to many new topics grouped under the name rotation. When all points in an object move in circular paths, the motion is called pure rotational motion; pure translational motion is motion with no rotation. A hockey puck sliding and spinning across the ice shows both kinds of motion at once.

Earlier we studied motion along a straight line and introduced concepts such as displacement, velocity, and acceleration, and projectile motion treated an object projected into the air while subject to the force of gravity. In this chapter, we look at situations where the object does not land but keeps moving along a curve.

The study of uniform circular motion begins by defining two quantities: the rotation angle and the angular velocity. Imagine a line from the center of a CD to its edge. As the CD turns, this radius sweeps out a rotation angle Δθ, which plays the role that linear distance plays in straight-line motion. The distance travelled along the edge of the circle is the arc length Δs, and the rotation angle is the arc length divided by the radius, Δθ = Δs / r. The natural unit for this ratio is the radian; Table 6.1 shows a comparison of radians and degrees. Two points on the CD can sweep out the same angle even though one is at a greater distance from the center of rotation. If Δθ = 2π rad, the CD has made one complete revolution, and every point on the CD is back at its original position.

Angular velocity ω is the rate of change of the rotation angle, ω = Δθ / Δt; the greater the rotation angle in a given time, the greater the angular velocity. Its units are radians per second, and it is the rotational analogue of linear velocity. Following a pit on the rotating CD gives the precise relationship between the two: the linear speed of a point is proportional to its distance from the center of rotation, v = rω, so it is largest at the rim. The linear speed of a point on the rim is called the tangential speed.

The tire of a moving car illustrates the second form of this relationship, ω = v / r. The speed of a point on the rim of the tire is the same as the speed of the car, so the faster the car moves, the faster the tire spins, and a larger-radius tire produces a greater linear speed for the car at the same angular velocity. The tire rotates just as it would if the car were jacked up and the wheels spun, while the car moves forward at linear velocity v = rω.

Example: calculate the angular velocity of a car tire, given the car's speed and the tire's radius. We can use ω = v / r, and for the speed and tire radius used in the textbook example the units cancel to give 50.0 rad/s; angular velocity must carry units of rad/s.
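A short Python sketch of the tire calculation follows. The speed and radius are assumed values (15.0 m/s and 0.300 m), chosen only because they reproduce the 50.0 rad/s result quoted above, not because they appear in this summary.

# Angular velocity of a rolling tire: omega = v / r
v = 15.0       # linear (tangential) speed of the car, m/s (assumed)
r = 0.300      # tire radius, m (assumed)
omega = v / r  # angular velocity in rad/s
print(omega)   # 50.0 rad/s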
Think of a number, square it and subtract your starting number. Is the number you’re left with odd or even? How do the images help to explain this? Watch this animation. What do you see? Can you explain why this happens? Here are some arrangements of circles. How many circles would I need to make the next size up for each? Can you create your own arrangement and investigate the number of circles it needs? In each of the pictures the invitation is for you to: Count what you see. Identify how you think the pattern would continue. Can you find a way of counting the spheres in these arrangements? How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...? Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread? Can you find all the ways to get 15 at the top of this triangle of numbers? Many opportunities to work in different ways. How can you arrange these 10 matches in four piles so that when you move one match from three of the piles into the fourth, you end up with the same arrangement? Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be? Delight your friends with this cunning trick! Can you explain how it works? This challenge, written for the Young Mathematicians' Award, invites you to explore 'centred squares'. Polygonal numbers are those that are arranged in shapes as they enlarge. Explore the polygonal numbers drawn here. While we were sorting some papers we found 3 strange sheets which seemed to come from small books but there were page numbers at the foot of each page. Did the pages come from the same book? Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions. Can you continue this pattern of triangles and begin to predict how many sticks are used for each new "layer"? This challenge focuses on finding the sum and difference of pairs of two-digit numbers. Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens? Can you dissect an equilateral triangle into 6 smaller ones? What number of smaller equilateral triangles is it NOT possible to dissect a larger equilateral triangle into? Find out what a "fault-free" rectangle is and try to make some of your own. Can you make dice stairs using the rules stated? How do you know you have all the possible stairs? This task follows on from Build it Up and takes the ideas into three dimensions! In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square? Compare the numbers of particular tiles in one or all of these three designs, inspired by the floor tiles of a church in Cambridge. This challenge encourages you to explore dividing a three-digit number by a single-digit number. Here are two kinds of spirals for you to explore. What do you notice? These squares have been made from Cuisenaire rods. Can you describe the pattern? What would the next square look like? Use the interactivity to investigate what kinds of triangles can be drawn on peg boards with different numbers of pegs. Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need? 
Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this? If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable. In how many different ways can you break up a stick of 7 interlocking cubes? Now try with a stick of 8 cubes and a stick of 6 cubes. Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153? We can arrange dots in a similar way to the 5 on a dice and they usually sit quite well into a rectangular shape. How many altogether in this 3 by 5? What happens for other sizes? Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores. An investigation that gives you the opportunity to make and justify predictions. What happens when you round these three-digit numbers to the nearest 100? Watch this animation. What do you notice? What happens when you try more or fewer cubes in a bundle? Take a look at the video of this trick. Can you perform it yourself? Why is this maths and not magic? Find a route from the outside to the inside of this square, stepping on as many tiles as possible. Are these statements relating to odd and even numbers always true, sometimes true or never true? Watch this video to see how to roll the dice. Now it's your turn! What do you notice about the dice numbers you have recorded? Use two dice to generate two numbers with one decimal place. What happens when you round these numbers to the nearest whole number? What happens when you round these numbers to the nearest whole number? Use your addition and subtraction skills, combined with some strategic thinking, to beat your partner at this game. In this game for two players, the idea is to take it in turns to choose 1, 3, 5 or 7. The winner is the first to make the total 37. Can you describe this route to infinity? Where will the arrows take you next? This challenge asks you to imagine a snake coiling on itself. Try out this number trick. What happens with different starting numbers? What do you notice? What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters.
Today is Easter Sunday. Why? How is the date of Easter determined? This is a revision of a blog post from 2013.

Easter Day is one of the most important events in the Christian calendar. It is also one of the most mathematically elusive. In fact, regularization of the observance of Easter was one of the primary motivations for calendar reform centuries ago. Easter is linked to the Jewish Passover. The informal rule is that Easter Day is the first Sunday after the first full moon after the vernal equinox. But the ecclesiastical full moon and equinox involved in this rule are not always the same as the corresponding astronomical events, which, after all, depend upon the location of the observer on the earth.

easter_2018 = datestr(easter(2018))
easter_2018 = '01-Apr-2018'

My MATLAB® program is based on the algorithm presented in the first volume of the classic series by Donald Knuth, The Art of Computer Programming. Knuth has used it in several publications to illustrate different programming languages. The task has often been the topic of an exercise in computer programming courses. Knuth says that the algorithm is due to the Neapolitan astronomer Aloysius Lilius and the German Jesuit mathematician Christopher Clavius in the late 16th century and that it is used by most Western churches to determine the date of Easter Sunday for any year after 1582. The date varies between March 22 and April 25.

The earth's orbit around the sun and the moon's orbit around the earth are not in sync. It takes the earth about 365.2425 days to orbit the sun. This is known as a tropical year. The moon's orbit around the earth is complicated, but an average orbit takes about 29.53 days. This is known as a synodic month. The fraction

year = 365.2425;
month = 29.53;
format rat
ratio = year/month
ratio = 6444/521

is not the ratio of small integers. However, in the 5th century BC, an astronomer from Athens named Meton observed that the ratio is very close to 235/19.

format short
ratio
meton = 235/19
ratio = 12.3685
meton = 12.3684

In other words, 19 tropical years is close to 235 synodic months. This Metonic cycle was the basis for the Greek calendar and is the key to the algorithm for determining Easter.

Here is the revised MATLAB program. Try other years. Can you spot the change made to my old program in the blog post from 2013?

function dn = easter(y)
% EASTER  Date of Easter.
%   EASTER(y) is the datenum of Easter in year y.
%   Example:
%      datestr(easter(2020))
%
%   Ref: Donald Knuth, The Art of Computer Programming,
%   Fundamental Algorithms, pp. 155-156.

%   Copyright 2014-18 Cleve Moler
%   Copyright 2014-18 The MathWorks, Inc.

   % Golden number in 19-year Metonic cycle.
   g = mod(y,19) + 1;

   % Century number.
   c = floor(y/100) + 1;

   % Corrections for leap years and moon's orbit.
   x = floor(3*c/4) - 12;
   z = floor((8*c+5)/25) - 5;

   % Epact.
   e = mod(11*g+20+z-x,30);
   if (e==25 && g>11 || e==24), e = e + 1; end

   % Full moon.
   n = 44 - e;
   if n < 21, n = n + 30; end

   % Find a Sunday.
   d = floor(5*y/4) - x - 10;

   % Easter is a Sunday in March or April.
   d = n + 7 - mod(d+n,7);
   dn = datenum(y,3,32);

Donald E. Knuth, The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd edition), pp. 159-160, Addison-Wesley, 1997, ISBN 0-201-89683-4, PDF available.

Wikipedia, Primary article on Easter. <http://en.wikipedia.org/wiki/Easter>

Wikipedia, Computus, details on calculation of Easter. <http://en.wikipedia.org/wiki/Computus>

Wikipedia, Metonic cycle.
<http://en.wikipedia.org/wiki/Metonic_cycle>

Published with MATLAB® R2018a
Space Weather is a term used to describe the relationship between the sun, the Earth, and the technological systems upon which we rely. It is a complicated chain of physical processes (predominantly electromagnetic) that links the solar atmosphere, the Earth's magnetic field, the Earth's atmosphere, and the Earth's crust. Satellites, airlines, GPS signals, and the power grid are all vulnerable to the effects of space weather.

It Starts at the Sun
The Sun's surface and atmosphere are incredibly dynamic. These regions consist of plasma, which is an electrically charged gas. The Sun has a strong and complex magnetic field. Plasma organizes about magnetic field lines, creating the solar filaments and coronal loops shown early in the above video. This organization is caused by a fundamental property of plasma called the frozen-in theorem: the charged particles inside a plasma can move along magnetic field lines, but not across them. In the Sun's atmosphere, the pressure of the hot plasma gas is constantly fighting the forces of the Sun's magnetic field, creating a tenuous environment that is eager to release this tension.

Solar plasma is hot, so it blows off into space to create the solar wind. The solar wind fills our solar system with hydrogen, helium, and other trace elements. It pulls the Sun's magnetic field into space with it, where it is known as the Interplanetary Magnetic Field, or IMF for short. The solar wind moves through space at a brisk 400 km/s (just under a million miles per hour). It takes around 4 days for the "quiet" solar wind to reach Earth. The solar wind is highly variable, however, and can reach much faster speeds.

When certain regions on the Sun's surface become unstable, explosive releases of energy occur. These include solar flares, which are bright explosions of light and particle radiation (think of small particles moving at almost the speed of light!). These are sometimes associated with Coronal Mass Ejections, or CMEs. A CME occurs when a portion of the Sun's atmosphere explodes into space all at once. The amount of gas in a single CME can surpass the mass of an entire Earth mountain, were that mountain vaporized. These events create bursts of strong solar wind: it is denser (more plasma), faster (thousands of kilometers per second), and carries a stronger IMF.

The Earth as an Obstacle in the Solar Wind
The Earth has a strong dipole magnetic field. Because of the frozen-flux theorem described above, the solar wind plasma cannot pass through the Earth's magnetic field lines. Instead, just like a large rock in a shallow stream, the solar wind flow is re-routed around the Earth's field. This creates the magnetosphere, a cavity in the solar wind flow.

However, unlike the simple rock-in-a-stream analogy, the relationship between the solar wind and the Earth's magnetosphere is an electromagnetic one. The magnetic obstacle to the solar flow isn't solid; it compresses on the dayside and stretches into a long magnetotail on the nightside. The energy of the flowing solar wind is transferred to the magnetosphere by forming electric currents and electric fields around and through the magnetosphere. Plasma within the Earth's magnetosphere is accelerated, intensifying the radiation belts. Electric currents and particles flow along magnetic field lines and into the Earth's upper atmosphere, driving the beautiful aurora. The Earth's magnetic field is perturbed, twisted, and warped compared to its more regular dipole shape.
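As a quick sanity check on the "around 4 days" travel time quoted above, the following Python sketch divides the Sun–Earth distance by the quiet solar-wind speed; the astronomical unit is a standard value assumed here rather than taken from the article.

AU = 1.496e11                    # Sun-Earth distance in metres (standard value, assumed)
v_quiet = 400e3                  # "quiet" solar wind speed from the text, m/s
t_days = AU / v_quiet / 86400    # 86400 seconds per day
print(round(t_days, 1))          # about 4.3 days, consistent with the figure above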
The entire process is dynamic and complicated. While the speed and density of the solar wind are both important in controlling the energy transfer to the magnetosphere, the most important factor is the direction and strength of the interplanetary magnetic field (IMF). When the IMF is southward (that is, it is directed from north to south with respect to the Earth’s magnetic poles), it is oppositely aligned with the Earth’s magnetic field, which points northward at the dayside of the magnetosphere. Oppositely aligned magnetic fields can go through a process known as reconnection. This is where the IMF lines merge with those lines that are connected to the Earth. Because the new lines are flowing along with the solar wind, they are peeled off the day side of the magnetosphere and dragged to the night side. These eventually recirculate to the dayside in a process known as magnetic convection. The movie above shows magnetic convection in action. Magnetic convection is important because it is the main way through which plasma is energized to dangerous levels. It also allows electric currents to flow into the upper atmosphere. Therefore, if you want to know if a solar storm will become a space weather storm at Earth, you need to know the direction of the IMF! Southward IMF is what causes the strongest space weather events. Our Vulnerability to Space Weather An interesting event occurred on September 1st of 1859: Richard Carrington, an amateur solar astronomer, witnessed an incredibly strong solar flare. 18 hours after his observation, world-wide auroras filled the sky and the largest space weather storm ever observed by humans was in full swing. During this storm, telegraph wires ran without power; some stations sparked and caught fire. For the first time, we were experiencing the effects of space weather on technological systems. Today, we are more vulnerable to space weather than ever before. Here are some of the most critical space weather effects: - Spacecraft are naked to the particle and X-ray radiation associated with solar flares and solar storms. This can damage solar arrays, drive electric discharges across electronics, and deteriorate spacecraft materials. Spacecraft behavioral upsets due to space weather are common. Several spacecraft have famously been rendered temporarily inoperable and even destroyed by space weather storms. - Because much of the energy of space weather storms flows into the high-latitude regions of the upper atmosphere, it becomes very disturbed during space weather storms. This causes scattering and distortion, or scintillation, of signals between spacecraft and the ground. This is especially important for GPS signals, which cannot accurately report positions during active space weather times. Next time your GPS navigation system is failing, check the space weather conditions. - Along with electric currents, particle radiation flows along field lines and into the the upper atmosphere. While the atmosphere shields us from harm while we are on the ground, the passengers and crew of polar-flying aircraft are at increased radiation risk during space weather storms. - The electric currents flowing through the upper atmosphere can induce currents in any long, ground based conductor. While this usually means conducting minerals in the Earth’s crust, the currents can also flow through long pipelines and power lines. This can heat and corrode pipes, heat power transformers, and disrupt the power grid. 
In March of 1989, a strong space weather storm caused the collapse of the eastern Canadian power grid for 9 hours. There are many documented cases of large, high voltage transformers being completely destroyed during space weather events. If a space weather storm on the level of the famous 1859 “Carrington Event” were to happen today, the impact would be catastrophic. Experts predict that nation-wide power outages would last at least a month as companies struggle to replace transformers and other damaged infrastructure. Communication capabilities would be severely crippled, possible long term with the loss of multiple satellites. The impact on the economy is estimated to be in the trillions of dollars.
How Solar Flares Work
What risks are posed by solar flares?
By Anne Marie Helmenstine, Ph.D. Updated June 28, 2019

A sudden flash of brightness on the Sun's surface is called a solar flare. If the effect is seen on a star besides the Sun, the phenomenon is called a stellar flare. A stellar or solar flare releases a vast amount of energy, typically on the order of 1 × 10^25 joules, over a broad spectrum of wavelengths and particles. This amount of energy is comparable to the explosion of 1 billion megatons of TNT or ten million volcanic eruptions. In addition to light, a solar flare may eject atoms, electrons, and ions into space in what is called a coronal mass ejection. When particles are released by the Sun, they are able to reach Earth within a day or two. Fortunately, the mass may be ejected outward in any direction, so the Earth isn't always affected. Unfortunately, scientists aren't able to forecast flares, only give a warning when one has occurred.

The most powerful solar flare was the first one that was observed. The event occurred on September 1, 1859, and is called the Solar Storm of 1859 or the "Carrington Event". It was reported independently by astronomer Richard Carrington and Richard Hodgson. This flare was visible to the naked eye, set telegraph systems aflame, and produced auroras all the way down to Hawaii and Cuba. While scientists at the time didn't have the ability to measure the strength of the solar flare, modern scientists were able to reconstruct the event based on nitrate and the isotope beryllium-10 produced from the radiation. Essentially, evidence of the flare was preserved in ice in Greenland.

How a Solar Flare Works
Like planets, stars consist of multiple layers. In the case of a solar flare, all layers of the Sun's atmosphere are affected. In other words, energy is released from the photosphere, chromosphere, and corona. Flares tend to occur near sunspots, which are regions of intense magnetic fields. These fields link the atmosphere of the Sun to its interior. Flares are believed to result from a process called magnetic reconnection, when loops of magnetic force break apart, rejoin, and release energy. When magnetic energy is suddenly released by the corona (suddenly meaning over a matter of minutes), light and particles are accelerated into space. The source of the released matter appears to be material from the unconnected helical magnetic field; however, scientists haven't completely worked out how flares work and why there are sometimes more released particles than the amount within a coronal loop. Plasma in the affected area reaches temperatures on the order of tens of millions of kelvins, which is nearly as hot as the Sun's core. The electrons, protons, and ions are accelerated by the intense energy to nearly the speed of light.
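As a rough back-of-the-envelope check of the TNT comparison above, the sketch below converts 10^25 joules into megatons of TNT using the standard conversion factor of 4.184 × 10^15 joules per megaton (an assumed constant, not stated in the article); the result lands in the same order of magnitude as the billion-megaton figure.

flare_energy_j = 1e25           # typical flare energy quoted above, joules
j_per_megaton_tnt = 4.184e15    # standard conversion factor (assumed), joules per megaton of TNT
megatons = flare_energy_j / j_per_megaton_tnt
print(f"{megatons:.1e} megatons of TNT")   # roughly 2.4e+09, i.e. a few billion megatons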
Electromagnetic radiation covers the entire spectrum, from gamma rays to radio waves. The energy released in the visible part of the spectrum makes some solar flares observable to the naked eye, but most of the energy is outside the visible range, so flares are observed using scientific instrumentation. Whether or not a solar flare is accompanied by a coronal mass ejection is not readily predictable. Solar flares may also release a flare spray, which involves an ejection of material that is faster than a solar prominence. Particles released from a flare spray may attain a velocity of 20 to 200 kilometers per second (kps). To put this into perspective, the speed of light is about 300,000 kps!

How Often Do Solar Flares Occur?
Smaller solar flares occur more often than large ones. The frequency of any flare occurring depends on the activity of the Sun. Following the 11-year solar cycle, there may be several flares per day during an active part of the cycle, compared with fewer than one per week during a quiet phase. During peak activity, there may be 20 flares a day and over 100 per week.

How Solar Flares Are Classified
An earlier method of solar flare classification was based on the intensity of the Hα line of the solar spectrum. The modern classification system categorizes flares according to their peak flux of 100 to 800 picometer X-rays, as observed by the GOES spacecraft that orbit the Earth.

Classification | Peak Flux (watts per square meter)
A | < 10^-7
B | 10^-7 to 10^-6
C | 10^-6 to 10^-5
M | 10^-5 to 10^-4
X | > 10^-4

Each category is further ranked on a linear scale, such that an X2 flare is twice as potent as an X1 flare.

Ordinary Risks From Solar Flares
Solar flares produce what is called solar weather on Earth. The solar wind impacts the magnetosphere of the Earth, producing the aurora borealis and australis, and presenting a radiation risk to satellites, spacecraft, and astronauts. Most of the risk is to objects in low Earth orbit, but coronal mass ejections from solar flares can knock out power systems on Earth and completely disable satellites. If satellites went down, cell phones and GPS systems would be without service. The ultraviolet light and X-rays released by a flare disrupt long-range radio and likely increase the risk of sunburn and cancer.

Could a Solar Flare Destroy the Earth?
In a word: yes. While the planet itself would survive an encounter with a "superflare", the atmosphere could be bombarded with radiation and all life could be obliterated. Scientists have observed the release of superflares from other stars up to 10,000 times more powerful than a typical solar flare. While most of these flares occur in stars that have more powerful magnetic fields than our Sun, about 10% of the time the star is comparable to or weaker than the Sun. From studying tree rings, researchers believe Earth has experienced two small superflares, one in 773 C.E. and another in 993 C.E. It's possible we can expect a superflare about once a millennium. The chance of an extinction-level superflare is unknown.

Even normal flares can have devastating consequences. NASA revealed Earth narrowly missed a catastrophic solar flare on July 23, 2012. If the flare had occurred just a week earlier, when it was pointed directly at us, society would have been knocked back to the Dark Ages. The intense radiation would have disabled electrical grids, communication, and GPS on a global scale. How likely is such an event in the future? Physicist Pete Riley calculates the odds of a disruptive solar flare at 12% per decade.
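The letter-plus-number labels in the classification table can be generated mechanically from a peak flux value. The Python sketch below is one possible implementation of that scheme, not an official tool; the 10^-8 W/m² base used to number A-class flares follows common practice and is an assumption rather than something stated in the article.

def goes_class(peak_flux):
    # Map a GOES 0.1-0.8 nm peak X-ray flux in W/m^2 to a label such as "M5.0".
    for letter, base in (("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)):
        if peak_flux >= base:
            return "%s%.1f" % (letter, peak_flux / base)
    return "A%.1f" % (peak_flux / 1e-8)   # weaker than A1.0

print(goes_class(2.0e-4))   # X2.0 -- twice as potent as an X1 flare
print(goes_class(5.3e-6))   # C5.3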
How to Predict Solar Flares
At present, scientists cannot predict a solar flare with any degree of accuracy. However, high sunspot activity is associated with an increased chance of flare production. Observation of sunspots, particularly the type called delta spots, is used to calculate the probability of a flare occurring and how strong it will be. If a strong flare (M or X class) is predicted, the US National Oceanic and Atmospheric Administration (NOAA) issues a forecast/warning. Usually, the warning allows for 1-2 days of preparation. If a solar flare and coronal mass ejection occur, the severity of the flare's impact on Earth depends on the type of particles released and how directly the flare faces the Earth.

Sources
"Big Sunspot 1520 Releases X1.4 Class Flare With Earth-Directed CME". NASA, July 12, 2012.
"Description of a Singular Appearance seen in the Sun on September 1, 1859". Monthly Notices of the Royal Astronomical Society, vol. 20, pp. 13+, 1859.
Karoff, Christoffer, Mads Faurschou Knudsen, Peter De Cat, et al. "Observational evidence for enhanced magnetic activity of superflare stars". Nature Communications, vol. 7, article 11058, March 24, 2016.
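As a closing illustration of the risk figures in this article, the roughly 12%-per-decade estimate quoted earlier can be extrapolated to longer horizons under a simple hedged assumption: if each decade is treated as an independent trial, the chance of at least one disruptive flare in N decades is 1 − (1 − 0.12)^N. The short Python sketch below shows that arithmetic; it is an assumption-laden illustration, not a forecast.

```python
# Extrapolating the ~12%-per-decade estimate quoted earlier, under the
# simplifying (and debatable) assumption that decades are independent trials.

P_PER_DECADE = 0.12  # assumed probability of a disruptive flare in any one decade

for decades in (1, 3, 5, 10):
    p_at_least_one = 1 - (1 - P_PER_DECADE) ** decades
    print(f"{decades * 10:>3d} years: {p_at_least_one:.0%} chance of at least one event")
# Roughly 12% over 10 years, ~32% over 30 years, ~47% over 50 years, ~72% over a century.
```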
What exactly is soil conservation, and how can we become involved? Soil offers the firmament on which we live and develop. It gives nutrients to trees, plants, crops, animals, and a hundred million microorganisms, all of which are required for life to continue on Earth. If the soil becomes unsuitable or unstable, the entire process comes to a halt; nothing else can grow or break down. To avoid this, we must be aware of the beautiful ecosystem that exists beneath our feet. What is soil conservation? Soil contains nutrients that are necessary for plant growth, animal life, and millions of microorganisms. The life cycle, however, comes to a halt if the soil becomes unhealthy, unstable, or polluted. Definition: Soil conservation refers to the practices and strategies implemented to prevent soil erosion, maintain soil fertility, and ensure a healthy soil ecosystem. It’s about managing the soil to prevent its destruction or degradation, which could be caused by a variety of factors, including agricultural activities, industrialization, urbanization, deforestation, and natural events like floods or landslides. It is concerned with keeping soils healthy through a variety of methods and techniques. Individuals who are committed to conservation assist to keep it fertile and productive while also protecting it from erosion and degradation. Why are soil conservation practices important? Conservation cropping systems rely heavily it. There are numerous advantages for producers who opt to use soil conservation methods on their farms. - Yields are comparable to or higher than traditional tillage. - Cut down on the amount of fuel and labor used. - It requires less time. - Lowering the cost of machinery repair and maintenance. - Potential cost savings on fertilizer and herbicides. - Increased soil productivity and quality. - Less erosion. - Increased infiltration and storage of water. - Better air and water quality. - Offers food and shelter to wildlife. Soil Formation Factors - Parent material refers to the rocks and deposits that formed the soil. - The climate in which the soils formed. - Living organisms that altered soils. - The land’s topography or slope. - The geological time span during which the soils have evolved (age of the soil). Ten good reasons to adapt soil conservation practices The following are the top 10 reasons: - Soil is not a renewable natural resource. According to the Food and Agriculture Organization (FAO), forming a centimeter of soil might take hundreds to thousands of years. However, erosion can cause a single centimeter of soil to be lost in a single year. - To maintain a steady supply of food at economical rates. It has been shown to boost agricultural output quality and quantity over time by retaining topsoil and preserving the soil’s long-term productivity. - Soil serves as the basis for our structures, roads, homes, and schools. In truth, the soil has an impact on how structures are constructed. - Beneficial soil microbes live in soils; these creatures are nature’s unseen helpers. They develop synergistic interactions with plants, among other things, to protect them from stress and nourish them with nutrients. - Soils remove dust, chemicals, and other impurities from surface water. This is why underground water is one of the purest water sources. - Farmers benefit from healthier soils because they increase agricultural yields and protect plants from stress. - To enhance wildlife habitat. 
Techniques for conservation of soil such as establishing buffer strips and windbreaks, as well as restoring soil organic matter, considerably improve the quality of the environment for all types of animals. - For purely aesthetic grounds. To make the scenery more appealing and gorgeous. - To contribute to the creation of a pollution-free environment in which we can live safely. - For our children’s future, so that they will have adequate soil to support life. According to legend, the land was not so much given to us by our forefathers as it was borrowed from our children. Soil conservations methods and techniques There are a variety of useful measures and methods for conservation of soil available, some of which humans have used since the dawn of time. The following are some of the most common examples of such practices: 1. Conservation tillage Conservation tillage is an agro management method that seeks to reduce the intensity or frequency of tillage operations in order to realize both environmental and economic benefits. Conventional tillage refers to the traditional way of farming in which soil is prepared for planting by thoroughly inverting it with a tractor-pulled plow, followed by tilting further in order to level the surface of the soil for crop cultivation. It, on the other hand, is a tillage approach that reduces plowing intensity while keeping crop residue to conserve soil, water, and energy resources. Planting, growing, and harvesting crops with as little disturbance to the surface of the soil as feasible is what conserved tillage entails. Soil tillage promotes microbial decomposition of organic matter in the soil, resulting in CO2 emissions into the atmosphere. As a result, reducing tillage encourages carbon sequestration in the soil. Many crops can now be produced with minimal tillage thanks to advances in weed control technology and farm machinery over the previous few decades. There are several types of conservation tillage: It necessitates the management of crop remains on the soil surface. Crop residues, a renewable resource, are important in conservation tillage. When crop residues are managed properly, they protect soil resources, improve soil quality, restore degraded ecosystems, improve nutrient cycling, increase water and availability, enhance pest suppression, such as weed and nematode suppression, reduce runoff and off-site nutrient leaching, and sustain and improve crop productivity and profitability. It can be used in conjunction with other measures to maximize the soil benefits of reduced tillage and increased surface coverage. 2. Contour farming Contour plowing lowers runoff while also assisting crops and soil in maintaining a steady altitude. It is accomplished by furrowing the land with contour lines between the crops. This strategy was used by the ancient Phoenicians and has been shown to retain more soil and enhance crop yields by 10% to 50%. 3. Strip cropping Strip cropping is a farming technique used when a slope is too steep or too long, or when there is no other way to prevent soil erosion. It alternates strips of closely planted crops like hay, wheat, or other small grains with strips of row crops like maize, soybeans, cotton, or sugar beets. Strip cropping helps to prevent soil erosion by providing natural dams for water, thus preserving soil strength. Certain plant layers absorb minerals and water from the soil more efficiently than others. When water hits the weaker soil, which lacks the minerals required to strengthen it, it usually washes it away. 
When strips of soil are strong enough to restrict the flow of water through them, the weaker topsoil cannot wash away as easily as it would ordinarily. As a result, arable land remains fertile for much longer.

4. Windbreaks
Windbreaks are an excellent approach for conservation of soil and reducing soil erosion in flat farming settings. This is made easier by planting rows of dense trees between the crops (evergreens are a wonderful year-round solution for this) or by planting crops in an unconventional fashion. Deciduous trees may also function if they can stand vigil all year.

5. Crop rotation
Crop rotation is a fantastic strategy to combat soil infertility and has been used with great success for as long as there have been crops to grow. Crop rotation is regarded as excellent practice in organic farming by the Rodale Institute. Crop rotation is the technique of cultivating a variety of crops in the same location over a sequence of growing seasons. The nutritional requirements of various crops vary. Because the crops are rotated each season, the approach decreases reliance on a single source of nutrients.

6. Cover crops
Cover crops are an essential component of the stability of the conservation agriculture system, both for their direct and indirect effects on soil characteristics and for their ability to encourage enhanced biodiversity in the agro-ecosystem. While commercial crops have a market value, cover crops are mostly produced for soil fertility or as fodder for livestock. Cover crops are beneficial in areas where less biomass is produced, such as semi-arid (dry) areas and eroded soils, because they:
- Protect the soil during fallow periods.
- Mobilize and recycle nutrients.
- Enhance soil structure and break compacted layers as well as hardpans.
- Allow for rotation in a monoculture.
- Can be used to control pests, weeds, or break soil compactness.
To make use of the moisture that is residual in the soil, cover crops are frequently grown during periods of fallow, such as the period between crop harvest and the next planting. Their growth is stopped before or after the next crop is planted, but before competition between the two crops begins. Another excellent soil conservation practice that reduces erosion from runoff water is the use of cover crops.

7. Buffer strips
Buffer strips are permanently vegetated zones that safeguard water quality between a canal and a farm field. Buffer strips aid in soil retention by slowing and sifting storm flow. As a result, the amount of hazardous phosphorus that enters our lakes may be minimized. A buffer strip begins at the edge of the water and extends at least 30 feet inward towards the land, providing aesthetic surroundings and a habitat for wildlife. Buffers aid in the retention of soils and can also be used to grow plants that can be gathered and used as animal feed. Buffers exist in a variety of shapes and sizes, including:
- Harvestable buffer strips – These are crop buffers that can also be harvested later on for forage by farmers.
- Contour buffer strip – utilized in sloped agricultural areas to prevent erosion and limit downhill precipitation velocity.
- Shoreline gardens – a buffer between a manicured residential lawn and a lake.
Benefits of buffers
- Less soil erosion – They aid in the retention and conservation of soil.
- Wildlife habitat – provides food and cover for wildlife.
- Protect and extend stream health – prevents loose silt from filling drainage ditches and streams.
- Streambank integrity – more vegetation stabilizes the stream bank.
- Aesthetic appeal.

8. Grassed waterways
Grassed waterways are shallow, broad, saucer-shaped pathways that carry surface water over fields without causing any erosion to the soil. The waterway's plant cover tends to slow the flow of water and protects the channel surface from erosion forces induced by runoff water. If left alone, runoff and snowmelt water will drain into a field's natural draws or drainage pathways. Grassed waterways securely move water down natural draws through fields when appropriately scaled and created. Waterways also serve as outlets for terrace systems, contour cropping patterns, and diversion channels. When the watershed area generating the runoff water is quite big, grassed waterways are a good solution to soil erosion caused by concentrated water flows.
How it helps
- Grass cover protects the channel from gully erosion and captures sediment in runoff water.
- Vegetation can also filter and absorb some of the pollutants and nutrients in runoff water.
- Vegetation serves as a safe haven for small birds and animals.

9. Terracing
Terracing is an agricultural process that involves rearranging cropland or converting hillsides into farmland by building particular ridged platforms. Terraces are the name given to these platforms. Terrace farming is an efficient and, in many cases, the only solution for hilly farmlands. Terraces are a valuable water and soil conservation structure to use if you have sloping fields in your operation, decreasing erosion and conserving moisture on steep slopes. The types of terraces that can be employed (narrow-based, broad-based, or terrace channels) are adaptable to your demands and soil type, and they can be spaced based on erosion possibilities and equipment considerations. Terraces play a significant role in minimizing soil erosion by delaying and lowering the energy of runoff. Some terraces collect drainage water and redirect it underground rather than overland as runoff. If erosion is a major problem on sloping terrain, one option to explore is a terrace system to slow and manage surface runoff and prevent soil erosion. Once created, a terrace, like any conservation technique, demands hands-on monitoring and upkeep to ensure peak effectiveness.

10. Drop inlets and rock chutes
A drop inlet, also known as a shaft spillway, is made up of a vertical intake pipe and a horizontal underground conduit pipe. Water enters the vertical pipe at ground level and descends below, where it is safely channeled through a massive concrete, metal, or plastic pipe into a spillway such as a stream or ditch. A rock chute spillway is a construction that allows surface water to flow safely into an outlet. This type of spillway aids in bank stabilization by reducing retrogressive erosion of waterway bottoms (furrows and ditches) and the formation of erosional gullies in fields. This adaptable, low-cost, and effective construction is easily adapted to the location and has minimal disadvantages for agricultural techniques. However, unlike a structure with a sedimentation basin, it does not allow for water retention or the sedimentation of soil particles in runoff water. The rock chute spillway is used to alleviate erosion problems at the bottom of fields, at the outlet of a furrow, an interception channel, a grassed waterway, or anywhere water flows into a stream. Drop inlets and rock chutes are frequently used to "step" water down where there are abrupt elevation changes, thus protecting soil from erosion.
Livestock dung, mulch, municipal sewage, and legume plants such as alfalfa and clover are examples of natural fertilizers. Manure and sludge are put to the field by spreading it out and then kneading it into the soil. Timing applications must adhere to strict restrictions, as both sludge and manure can cause significant water contamination if managed improperly. Grown legumes like clover or alfalfa are subsequently tilled into the soil as “green fertilizer.” Natural fertilizers, like chemical fertilizers, replenish the soil with important elements such as nitrogen, phosphorus, and potassium. They do, however, have the added benefit of contributing organic matter to the soil. 11. Bank stabilization Bank stabilization is another method of soil conservation. It refers to any technique used to keep soil in place on a bank or in a river. Here, the soil can be eroded by waves, stream currents, ice, and surface runoff. Advantages of bank stabilization are decreased soil erosion, increased water quality, and a more aesthetically pleasing setting. Gabion baskets, re-vegetation, and rip rap are three typical methods for controlling erosion at a stream or riverbank. The first two options rely on loose rock to preserve the underlying loose soil surface by cushioning the impact of stream water on the bank. The term “rip-rap” refers to loose rock on a steeply sloping bank. Riprap, on the other hand, can survive the rigors of ice and frost, whereas concrete may fracture. Gabion baskets are usually wire baskets filled with rocks. The wire baskets hold the rock in place. They are frequently used on steeper slopes and in regions where water flows quicker. Planting along the shoreline might also help to stabilize stream banks. Shrubs, natural grasses, and trees slow the flow of water across the soil and trap silt, keeping it out of the water. 12. Organic or ecological growing Organic farming is a farming practice that includes ecologically based pest treatments and biological fertilizers obtained mostly from animal and plant wastes, as well as nitrogen-fixing cover crops. Modern organic farming evolved in response to the environmental damage caused by the use of chemical pesticides and synthetic fertilizers in conventional agriculture, and it offers significant ecological benefits. Organic farming, when compared to conventional agriculture, utilizes fewer pesticides, lowers soil erosion, reduces nitrate leaching into groundwater and surface water, and recycles animal feces back into the farm. 13. Sediment control Similar to how agricultural soil erosion affects yields and plant growth, urban soil erosion reduces the possibility of healthy landscape plantings. This is especially true during urbanization when mass grading alters the natural soil profile and results in a large loss of topsoil. When soil is subjected to the effects of rainfall, the volume, and velocity of runoff increase. This causes a chain reaction that results in sediment movement and deposition, lower stream capacity, and, eventually, increased stream scour and floods. Though temporary, erosion and sediment control methods safeguard water resources from sediment contamination and increases in flow caused by active land development and redevelopment activities. Sediment and related nutrients are kept from leaving disturbed regions and polluting waterways by keeping soil on-site. 
Erosion control measures are primarily aimed at minimizing soil particle detachment and transportation, whereas sediment control measures are designed to confine eroding soil on-site. This method of soil conservation is thought to be a more practical approach.

14. Integrated pest management
Pests are a huge nuisance for farmers and have been a major difficulty to deal with, while pesticides damage nature by leaching into water and the atmosphere. It is critical to replace synthetic pesticides with organic ones wherever possible, to build up populations of the pests' natural enemies whenever possible, to rotate crop types to avoid expanding insect populations in the same field for years, and to use alternative strategies in complex situations. Integrated pest management (IPM) employs a number of strategies aimed at reducing the usage of chemical pesticides and, as a result, environmental hazards. Crop rotation is the foundation of IPM. Pests are starved out and less likely to establish themselves in harmful numbers the next year when crops are rotated from year to year. Crop rotation has been shown to be an effective pest management approach. To control pest populations, IPM also employs pest-resistant crops and biological measures such as the release of pest predators or parasites. Although IPM takes more time, the benefits of soil conservation, a better environment, and lower pesticide expenditures are undeniable.

15. Soil health by region
Farmers can utilize a range of measures to maintain the health of their soils. Some of these techniques include avoiding tilling the land, planting cover crops in between growing seasons, and switching the crop variety grown on each field. According to a recent study, soil health information is commonly oversimplified. Farms don't all yield the same outcomes. While one technique may be advantageous to one person, it may be problematic for another depending on where they live. More specific trends in soil health are best observed and evaluated at the regional level, owing to the considerable diversity in landscape, inherent soil quality, and farming practices. Let's take a look at the soil specifics of Canadian provinces.

a. British Columbia
The need for soil protection varies substantially in British Columbia due to the wide range of cropping intensities. The greatest danger to soil conservation is posed by high-value specialty crops, as well as the heavy tillage and mechanical traffic that goes with them. The bulk of BC's agricultural land is under high to severe risk of water erosion when the soils are bare. In the Fraser Valley, this is due to heavy rainfall and some steep cultivated slopes; in the Peace River region, it is due to easily eroded silty soils and vast fields with lengthy slopes, at the foot of which meltwater runoff collects and washes soil away. Conservation efforts, however, have considerably reduced these dangers over the previous several decades.

b. Prairie Provinces
Many arable soils on the plains and grasslands are subject to wind erosion and salinization as a result of the strains of a dry climate. Vulnerable soils are also prone to water erosion, especially following summer storms or spring runoff. Severe wind erosion prompted the establishment of the Prairie Farm Rehabilitation Administration in 1935, which took quick and extreme measures to address the problem. When wind erosion again became more widespread, efforts to encourage the use of conservation practices were renewed from the mid-20th century onwards.
Improvements can be attributed to reduced use of summer fallow and increasing use of conservation tillage and other erosion controls, such as permanent grass cover and shelterbelts. The risk of soil salinity has decreased in some areas due to greater use of permanent vegetation cover and less frequent use of summer fallow. c. Ontario and Québec Crops such as corn and soybeans are abundantly cultivated in central Canada. These crops are planted early and harvested late because they require the longest growing season possible. The soil is frequently moist while these processes are carried out, resulting in the compaction of the soil. Moreover, these plants may lead to inadequate soil protection from rain and snowmelt erosion for prolonged periods of the year. Soil conservation methods like minimum and no-tillage retain crop residues on the surface of the soil and reduce heavily loaded mechanical activity. Crop rotation and the regular use of clover or alfalfa hay crops increase soil organic matter, culminating in a better soil structure and less stress. Manure and an adequate amount of fertilizer have a similar impact. Seeding places where runoff water collects to generate grassed streams also helps to reduce soil erosion. Wind erosion is rarely a problem, and it is usually restricted to locations where the soil is sandy or contains organic material (e.g., cultivated marshes). Windbreaks can be established in these sites by planting rows of trees or bushes, and agricultural leftovers can be retained on the surface of the ground to protect the soils from wind erosion. d. Atlantic Canada The soils in none of the four Atlantic Provinces are very productive. The soils are frequently depleted by nature and are often acidic. The intensive cultivation of vegetable crops and potatoes has further lowered organic matter levels, harmed soil structure, and resulted in severe soil erosion on sloping grounds. Farmers are combating these concerns by utilizing conservation techniques. Terraces, which are regular canals created across hills, are becoming more popular in the potato-growing areas of New Brunswick. By decreasing the length of the slopes, the terraces limit runoff water buildup. They transport the water to the field’s edge. They also encourage farmers to plant crop rows across the slope rather than up and down the hill, which ultimately reduces soil erosion caused by runoff. Crop rotation is another method of soil conservation in which potatoes are planted alternately with cereal crops (such as clover and barley). Grassed rivers are also employed in regions where water pools naturally, decreasing the danger of erosion and carving gullies through the soil. In this region, the usage of significant amounts of fertilizer for the potato crop frequently raises soil acidity. Farmers apply ground limestone to the soil and mix it using plowing tools to regulate soil acidity. To Sum Up Conserving soil is a major concern for individuals, farmers, and businesses because it is critical not only to use land productively and provide high yields but also to be able to do so in the future. Even though its impacts might not be visible in the short term, they will be beneficial to future generations. By integrating various methods of pest and weed control, different ways of conservation help to prevent erosion, maintain fertility, avoid deterioration, as well as reduce natural pollution caused by chemicals. 
Therefore, conservation initiatives provide a great contribution to the long-term viability of the environment and its resources.

Frequently Asked Questions
1. What are the 4 methods of soil conservation? There are four primary methods of soil conservation. The first is contour plowing, the second is terracing, the third is windbreaks, and the fourth is cover cropping.
2. Which of the following best explains why soil conservation is important to human agriculture? It is crucial for human agriculture because it helps maintain soil fertility, prevent erosion, and preserve the health of ecosystems. By implementing conservation practices, farmers can ensure that their land remains productive and sustainable in the long run. It also helps to protect water quality by preventing soil erosion and the runoff of harmful chemicals into water bodies.
3. Which farming strategy conserves soil? One farming strategy that helps conserve soil is the implementation of cover cropping. Cover crops, such as legumes or grasses, are planted during fallow periods or after harvest to cover the soil surface. They protect the soil from erosion, improve soil structure, and add organic matter when incorporated.
4. Why do we conserve soil from erosion? Conserving soil from erosion is crucial for several reasons. Firstly, soil erosion leads to the loss of valuable topsoil, which is rich in nutrients necessary for plant growth. Additionally, eroded soil can clog waterways, negatively impacting water quality and aquatic ecosystems. Moreover, erosion reduces soil's water-holding capacity and diminishes its ability to support plant roots.
5. Which agricultural practice involves planting crops after the cash crop is harvested to protect soil from runoff? The agricultural practice that involves planting crops after the cash crop is harvested to protect the soil from runoff is known as cover cropping. Cover crops are typically planted during the off-season or between cash crops to help prevent erosion and reduce nutrient runoff.
6. How is soil polluted, and how can soil be conserved? Soil pollution can occur through various human activities such as industrial waste disposal, improper use of pesticides and fertilizers, mining operations, and improper waste management. These activities introduce harmful substances and contaminants into the soil, negatively impacting its quality and fertility. Soil conservation, in turn, involves adopting practices to prevent such degradation and contamination.
7. What is the main mechanical method used by farmers to control soil erosion? The main mechanical method used by farmers to control soil erosion is the implementation of various types of soil conservation structures. One common method is the construction of terraces, which are horizontal platforms built on sloping land to slow down the flow of water and prevent erosion. Farmers also use contour plowing, where they plow parallel to the land's contours to minimize the length and speed of water runoff.
8. Which is the best way of conserving soil on steep slopes? The best way of conserving soil on steep slopes is through the implementation of terracing. Terracing involves creating level platforms or steps across the slope, which help to slow down water runoff, reduce erosion, and retain soil moisture.
9. Which of the following is an example of using technology to help conserve soil? One example of using technology to help conserve soil is the implementation of precision agriculture.
Precision agriculture involves the use of advanced technologies such as GPS, sensors, and remote sensing to gather data and make informed decisions regarding soil management. This allows farmers to apply fertilizers and irrigation more accurately, minimizing waste and reducing the potential for soil degradation.
10. How does no-till farming help conserve soil fertility? No-till farming helps conserve soil fertility by minimizing soil disturbance. Instead of plowing or tilling the soil, farmers leave the crop residues and organic matter on the surface, acting as a protective layer.
11. Which soil conservation technique involves plowing and planting crops in rows across the slope of the land rather than up and down? The conservation technique that involves plowing and planting crops in rows across the slope of the land is called contour farming. By following the contour lines, water runoff is slowed down, reducing the risk of soil erosion.
12. How can buffer strips have a positive impact on waterways? Buffer strips can have a positive impact on waterways by acting as a natural filter and reducing water pollution. These strips of vegetation, such as grass or trees, are planted alongside rivers, streams, or other water bodies. They help to trap sediment, nutrients, and pollutants that may otherwise enter the water, improving its quality.
13. Can plants stop soil erosion? Yes, plants can play a significant role in preventing soil erosion. The roots of plants help bind the soil particles together, creating a stable structure that is less prone to erosion. The above-ground parts of plants, such as leaves and stems, act as a barrier that slows down the force of wind and water, reducing their erosive power.
14. How to prevent soil salinization? To prevent soil salinization, several measures can be taken. Proper irrigation management is crucial, including the use of saline-tolerant crops and efficient watering techniques that minimize waterlogging. Implementing proper drainage systems helps to flush out excess salts from the soil. Applying organic matter and amendments can improve soil structure and reduce salt accumulation. Lastly, practicing crop rotation and maintaining proper soil pH levels can help prevent soil salinization.
15. What causes soil to be acidic? Soil acidity can be caused by several factors. One common cause is the presence of acidic parent materials, such as certain types of rock. Acidic rainfall, high levels of organic matter decomposition, and leaching of basic minerals can also contribute to soil acidity. Human activities, such as excessive use of acidic fertilizers or pollution from industrial emissions, can further acidify the soil. These factors can affect the pH balance of the soil, leading to increased acidity.
The Rus' people (Old East Slavic: Рѹсь; Modern Russian, Ukrainian, Belarusian: Русь (Rus'); Old Norse: Garðar; Greek: Ῥῶς (Rhos)) are generally understood in English-language scholarship as ethnically or ancestrally Scandinavian people trading and raiding on the river-routes between the Baltic and the Black Seas from around the eighth to eleventh centuries AD. Thus they are often referred to in English-language research as "Viking Rus'". The scholarly consensus is that Rus' people originated in what is currently coastal Middle Sweden around the eighth century and that their name has the same origin as Roslagen in Sweden (with the older name being Roden). Basing themselves among Slavic and Volga Finns in the upper Volga region, they formed a diaspora of traders and raiders exchanging furs and slaves for silk, silver and other commodities available to the east and south. Around the ninth century, on the river routes to the Black Sea, they had an unclear but significant role in forming the principality of Kievan Rus, gradually assimilating with local Slavic populations. They also extended their operations much further east and south, among the Bulgars and Khazars, on the routes to the Caspian Sea. By around the eleventh century, the word Rus' was increasingly associated with the principality of Kiev, and the term Varangian was becoming more common as a term for Scandinavians traveling the river-routes. Little, however, is certain about the Rus'. This is to a significant extent because, although Rus' people were active over a long period and vast distances, textual evidence for their activities is very sparse and almost never produced by contemporary Rus' people themselves. It is believed that writing was brought to the Rus by the Slavs for religious reasons, but this happened long after their early history. The word Rus' in the primary sources does not always mean the same thing as it does when used by today's scholars. Meanwhile, archaeological evidence and researchers' understanding of it is accumulating only gradually. As a trading diaspora, Rus' people intermingled extensively with Finnic, Slavic, and Turkic peoples and their customs and identity seem correspondingly to have varied considerably over time and space. The other key reason for dispute about the origins of Rus' people is the likelihood that they had a role in ninth- to tenth-century state formation in eastern Europe (ultimately giving their name to Russia and Belarus), making them relevant to what are today seen as the national histories of Russia, Ukraine, Sweden, Poland, Belarus, Finland and Baltic states. This has engendered fierce debate as different political interest groups promote their own stories as to who the Rus' originally were, in the belief that the politics of the ancient past legitimize policies in the present. The etymology and semantic history of the word Rus' has been a highly contentious topic, on which debate is ongoing. This is partly because of a widespread assumption that by identifying the linguistic origin of the name Rus', scholars can identify the origins of the people whom it described. This assumption has, however, been criticized in twenty-first-century scholarship.
According to the prevalent theory, the name Rus', like the Proto-Finnic name for Sweden (*Ruotsi), is derived from an Old Norse term for "the men who row" (rods-), as rowing was the main method of navigating the rivers of Eastern Europe, and it could be linked to the Swedish coastal area of Roslagen (Rus-law) or Roden, as it was known in earlier times. The name Rus' would then have the same origin as the Finnish and Estonian names for Sweden: Ruotsi and Rootsi.

The earliest Slavonic-language narrative account of Rus' history is the Primary Chronicle, compiled and adapted from a wide range of sources in Kiev at the start of the twelfth century. It has therefore been influential on modern history-writing, but it is also much later than the time it describes, and historians agree it primarily reflects the political and religious concerns of the time of Mstislav I of Kiev. However, the chronicle does include the texts of a series of Rus'–Byzantine Treaties from 911, 945, and 971. The Rus'–Byzantine Treaties give a valuable insight into the names of the Rus'. Of the fourteen Rus' signatories to the Rus'–Byzantine Treaty of 907, all had Norse names. By the treaty of 945, some signatories of the Rus' had Slavic names while the vast majority had Norse names. The Chronicle presents the following origin myth for the arrival of Rus' in the region of Novgorod: the Rus' were a group of Varangians 'who imposed tribute upon the Chuds, the Slavs, the Merians, the Ves', and the Krivichians' (a variety of Slavic and Finnic peoples). The tributaries of the Varangians drove them back beyond the sea and, refusing them further tribute, set out to govern themselves. There was no law among them, but tribe rose against tribe. Discord thus ensued among them, and they began to war one against the other. They said to themselves, "Let us seek a prince who may rule over us, and judge us according to the Law". They accordingly went overseas to the Varangian Russes: these particular Varangians were known as Russes, just as some are called Swedes, and others Normans, English, and Gotlanders, for they were thus named. The Chuds, the Slavs, the Krivichians and the Ves' then said to the people of Rus', "Our land is great and rich, but there is no order in it. Come to rule and reign over us". Thus they selected three brothers, with their kinsfolk, who took with them all the Russes and migrated. The oldest, Rurik, located himself in Novgorod; the second, Sineus, at Beloozero; and the third, Truvor, in Izborsk. On account of these Varangians, the district of Novgorod became known as the land of Rus'.

Arabic-language sources for Rus' people are relatively numerous, with over 30 relevant passages in roughly contemporaneous sources. It can be difficult to be sure that when Arabic sources talk about Rus' they mean the same thing as modern scholars. Sometimes it seems to be a general term for Scandinavians: when Al-Yaqūbi recorded Rūs attacking Seville in 844, he was almost certainly talking about Vikings based in Frankia. At other times, it might denote people other than or alongside Scandinavians: thus the Mujmal al-Tawarikh calls Khazars and Rus' "brothers"; later, Muhammad al-Idrisi, Al-Qazwini, and Ibn Khaldun all identified the Rus' as a sub-group of the Turks. These uncertainties have fed into debates about the origins of the Rus'. Arabic sources for the Rus' had been collected, edited and translated for Western scholars by the mid-twentieth century.
However, relatively little use was made of the Arabic sources in studies of the Rus' before the twenty-first century. This is partly because they mostly concern the region between the Black and the Caspian Seas, and from there north along the lower Volga and the Don. This made them less relevant than the Primary Chronicle to understanding European state formation further west. Moreover, imperialist ideologies in Russia and more widely discouraged research emphasising an ancient or distinctive history for Inner Eurasian peoples. Arabic sources portray Rus' people fairly clearly as a raiding and trading diaspora, or as mercenaries, under the Volga Bulghars or the Khazars, rather than taking a role in state formation. The most extensive Arabic account of the Rus' is by the Muslim diplomat and traveller Ahmad ibn Fadlan, who visited Volga Bulgaria in 922, described people under the label Rūs/Rūsiyyah at length, beginning thus: I have seen the Rus as they came on their merchant journeys and encamped by the Itil. I have never seen more perfect physical specimens, tall as date palms, blond and ruddy; they wear neither tunics nor caftans, but the men wear a garment which covers one side of the body and leaves a hand free. Each man has an axe, a sword, and a knife, and keeps each by him at all times. The swords are broad and grooved, of Frankish sort. Each woman wears on either breast a box of iron, silver, copper, or gold; the value of the box indicates the wealth of the husband. Each box has a ring from which depends a knife. The women wear neck-rings of gold and silver. Their most prized ornaments are green glass beads. They string them as necklaces for their women.— Gwyn Jones, A History of the Vikings Apart from Ibn Fadlan's account, Normanist theory draws heavily on the evidence of the Persian traveler Ibn Rustah who, it is postulated, visited Novgorod (or Tmutarakan, according to George Vernadsky) and described how the Rus' exploited the Slavs. As for the Rus, they live on an island ... that takes three days to walk round and is covered with thick undergrowth and forests; it is most unhealthy. ... They harry the Slavs, using ships to reach them; they carry them off as slaves and…sell them. They have no fields but simply live on what they get from the Slav's lands. ... When a son is born, the father will go up to the newborn baby, sword in hand; throwing it down, he says, "I shall not leave you with any property: You have only what you can provide with this weapon."— Ibn Rustah When the Varangians first appeared in Constantinople (the Paphlagonian expedition of the Rus' in the 820s and the Siege of Constantinople in 860), the Byzantines seem to have perceived the Rhos (Greek: Ῥώς) as a different people from the Slavs. At least no source says they are part of the Slavic race. Characteristically, pseudo-Symeon Magister and Theophanes Continuatus refer to the Rhos as Δρομῖται (dromitai), a word related to the Greek word meaning a run, suggesting the mobility of their movement by waterways. In his treatise De Administrando Imperio, Constantine VII describes the Rhos as the neighbours of Pechenegs who buy from the latter cows, horses, and sheep "because none of these animals may be found in Rhosia". His description represents the Rus' as a warlike northern tribe. Constantine also enumerates the names of the Dnieper cataracts in both ῥωσιστί ('rhosisti', the language of the Rus') and σκλαβιοτί ('sklavisti', the language of the Slavs). 
The Rus' names can most readily be etymologised as Old Norse, and have been argued to be older than the Slavic names. Constantine's forms, with their Latin transliteration, his interpretation of the Slavonic, and the proposed Old Norse etymons, are:
- Ἐσσονπῆ (Essoupi): "does not sleep"; Old Norse nes uppi, "upper promontory".
- Οὐλβορσί (Oulvorsi): "island of the waterfall"; Old Norse Úlfarsey, "Úlfar's island", or hólm-foss, "island rapid".
- Γελανδρί (Gelandri): "the sound of the fall"; Old Norse gjallandi/gellandi, "yelling, loudly ringing".
- Ἀειφόρ (Aeifor): pelicans' nesting place; Old Norse æ-fari/ey-færr, "never passable", or æ-for/ey-forr, "ever fierce".
- Βαρονφόρος (Varouforos): it forms a great maelstrom; Old Norse vara-foss, "stony shore rapid", or báru-foss, "wave rapid".
- Λεάντι (Leanti): "surge of water"; Old Norse hlæjandi, "laughing".
- Στρούκουν (Stroukoun): "the little fall"; Old Norse strjúkandi, "stroking, delicately touching", or strukum, "rapid current".

Western European sources
The first Western European source to mention the Rus' is the Annals of St. Bertin. These relate that Emperor Louis the Pious' court at Ingelheim, in 839, was visited by a delegation from the Byzantine emperor. In this delegation there were two men who called themselves Rhos (Rhos vocari dicebant). Louis enquired about their origins and learnt that they were Swedes (suoni). Fearing that they were spies for their allies, the Danes, he incarcerated them, before letting them proceed after receiving reassurances from Byzantium. Subsequently, in the 10th and 11th centuries, Latin sources routinely confused the Rus' with the extinct East Germanic tribe of Rugians. Olga of Kiev, for instance, was designated in one manuscript as a Rugian queen. Another source comes from Liutprand of Cremona, a 10th-century Lombard bishop who in a report from Constantinople to Holy Roman Emperor Otto I wrote that he had met the Rus whom we know by the other name of Norsemen.

The quantity of archaeological evidence for the regions where Rus people were active grew steadily through the twentieth century, and beyond, and the end of the Cold War made the full range of material increasingly accessible to researchers. Key excavations have included those at Staraja Ladoga, Novgorod, Rurikovo Gorodischche, Gnëzdovo, Chernigov, Shestovitsa, numerous settlements between the Upper Volga and the Oka rivers, and Kiev. Twenty-first century research, therefore, is giving the synthesis of archaeological evidence an increasingly prominent place in understanding the Rus'. The distribution of coinage, including the early ninth-century Peterhof Hoard, has provided important ways to trace the flow and quantity of trade in areas where Rus were active, and even, through graffiti on the coins, the languages spoken by traders. Having settled Aldeigja (Ladoga) in the 750s, Scandinavian colonists played an important role in the early ethnogenesis of the Rus' people and in the formation of the Rus' Khaganate. The Varangians (Varyags, in Old East Slavic) are first mentioned by the Primary Chronicle as having exacted tribute from the Slavic and Finnic tribes in 859. It was the time of rapid expansion of the Vikings in Northern Europe; England began to pay Danegeld in 859, and the Curonians of Grobin faced an invasion by the Swedes at about the same date. It has been argued that the word Varangian, in its many forms, does not appear in primary sources until the eleventh century (though it does appear frequently in later sources describing earlier periods).
This suggests that the term Rus' was used broadly to denote Scandinavians until it became too firmly associated with the now extensively Slavicised elite of Kievan Rus. At that point, the new term Varangian was increasingly preferred to name Scandinavians, probably mostly from what is currently Sweden, plying the river-routes between the Baltic and the Black/Caspian Seas. Due largely to geographic considerations, it is often argued that most of the Varangians who traveled and settled in the lands of eastern Baltic, modern Russian Federation and lands to the south came from the area of modern Sweden. The Varangians left a number of rune stones in their native Sweden that tell of their journeys to what is today Russia, Ukraine, Greece, and Belarus. Most of these rune stones can be seen today, and are a telling piece of historical evidence. The Varangian runestones tell of many notable Varangian expeditions, and even account for the fates of individual warriors and travelers. In Russian history, two cities are used to describe the beginnings of the country: Kiev and Novgorod. In the first part of the eleventh century the former was already a Slav metropolis, rich and powerful, a fast growing centre of civilization adopted from Byzantium. The latter town, Novgorod, was another centre of the same culture but founded in different surroundings, where some old local traditions moulded this commercial city into a mighty oligarchic republic of a kind otherwise unknown in this part of Europe. These towns have tended to overshadow other places of a significance that they had acquired long before Kiev and Novgorod. The two original centers of Rus were Staraja Ladoga and Rurikovo Gorodishche, two points on the ends the Volkhov, a river running for 200 km between Lake Ilmen in the south to Lake Ladoga in the north. This was the territory that most probably was originally called by the Norsemen Gardar, a name that long after Viking Age was given much wider content and become Gardariki, denomination for whole Old Russian State. The area between the lakes was the original Rus, and it was from here its name was transferred to the Slav territories on the middle Dnieper, which eventually became “Ruskaja zemlja”. The pre-history of the first territory of Rus has been sought in the developments around the mid eighth century, when Staraja Ladoga was founded as a trading place, serving the operations of Scandinavian hunters and dealers in furs obtained in the north-eastern forest zone of Eastern Europe. In the early period (the second part of the eighth and first part of the ninth century) Norse presence is only visible at Staraja Ladoga, and to a much lesser degree at a few other sites in the northern parts of Eastern Europe. The objects that represent Norse material culture of this period are rare outside Ladoga and mostly known as single finds. This rarity continues through the ninth century until the whole situation changes radically during the next century, when historians meet, at many places and in relatively large quantities, the material remains of a thriving Scandinavian culture. For a short period of time, some areas of Eastern Europe became as much part of the Norse world as were Danish and Norwegian territories in the West. The culture of the Rus contained Norse elements used as a manifestation of their Scandinavian background. 
These elements, which were current in tenth century Scandinavia, appear at various places in form of collections of many types of metal ornaments, mainly female but even male, such as weapons, decorated parts of horse bridle, and diverse objects embellished in current Norse art styles. Debate on the origins of the Rus' The historiography of the origins of the Rus' is infamously contentious, due to its perceived importance for the legitimation of nation-building, imperialism, and independence movements within the Slavonic-speaking world, and for legitimating different political relationships between eastern and western European countries. The Rus' feature prominently in the history of the Baltic states, Scandinavia, Poland, and the Byzantine Empire. They are particularly important in the historiography and cultural of Russia, Belarus and Ukraine but have also featured prominently for Poland. Added to these ideological forces is a scarcity of contemporary evidence for the emergence of a Rus' polity, and the great ethnic diversity and complexity of the wide area where Rus' people were active. Notwithstanding the existence of a diverse range of historical debates, contention has crystallized around whether the development of Kievan Rus' was influenced by non-Slavic, Viking migrants (this idea is characterized as the 'Normanist theory'), or whether Rus' emerged from autochthonous Slavic political development (known as the 'anti-Normanist theory'). Whereas the term Normans in English usually refers to the Scandinavian-descended ruling dynasty of Normandy in France from the tenth century onwards, and their scions elsewhere in Western Europe, in the context of the Rus', 'Normanism' refers to the idea that the Rus' had their origins in Scandinavia (i.e. among 'Northmen'). However, the term is used to cover a diverse range of opinions, not all of which are held by all Normanists. (Some, indeed, may mostly exist as accusations about the views of Normanists by polemical anti-Normanists.) As outlined by Leo Klejn, these are, in decreasing order of plausibility: - That Scandinavians migrated to the Ancient East-Slavic area. - That Kiev’s ruling dynasty was established by Scandinavians. - That the name Rus’ is etymologically Old Norse. - That Scandinavian migrants influenced the development of the East-Slavic state. - That Scandinavian migrants created the first East-Slavic state. - That the Scandinavians succeeded because of their racial superiority. - That the past shapes current politics: specifically, that descendants of Scandinavians are natural rulers, whereas Slavs are natural subordinates. The Normanist theory gained prominence in Russia (albeit not under that name) through the German historian Gerhardt Friedrich Müller (1705–1783), who was invited to work in the Russian Academy of Sciences in 1725. Müller built on arguments made by his predecessor Gottlieb-Siegfried Bayer in the papers De Varagis ('on the Varangians', 1729) and Origines russicae ('Russian origins', 1736), and on the Russian Primary Chronicle, written in the twelfth century, and covering the years 852 to 1110. At the beginning of an important speech in 1749, later published as Origines gentis et nominis Russorum ('The Origins of the People and the Name of the Russians'), Müller argued that Russia owed its name and early ruling dynasty to ethnically Scandinavian Varangians. This statement caused anger in his Russian audience, and earned him much animosity during his professional career in Russia. 
Scathing criticism from Lomonosov, Krasheninnikov, and other Russian historians led to Müller being forced to suspend his work on the issue until Lomonosov's death. It was even thought during the twentieth century that much of his research was destroyed, but recent research suggests that this is not the case: Müller managed to rework it and had it reprinted as Origines Rossicae in 1768. Despite the negative reception in the mid-eighteenth century, by the end of the century, Müller's views were the consensus in Russian historiography, and this remained largely the case through the nineteenth century and early twentieth centuries. Russian historians who accepted this historical account included Nikolai Karamzin (1766–1826) and his disciple Mikhail Pogodin (1800–75), who gave credit to the claims of the Primary Chronicle that the Varangians were invited by East Slavs to rule over them and bring order. The theory was not without political implications. For some, it fitted with embracing and celebrating the multiethnic character of the Russian Empire. However, it was also consistent with the racial theory widespread at the time that Normans (and their descendants) were naturally suited to government, whereas Slavs were not. According to Karamzin the Norse migration formed the basis and justification for Russian autocracy (as opposed to anarchy of the pre-Rurikid period), and Pogodin used the theory to advance his view that Russia was immune to social upheavals and revolutions, because the Russian state originated from a voluntary treaty between the people of Novgorod and Varangian rulers. Emergence of Western scholarly consensus During the historical debates of the twentieth century, the key evidence for the Normanist view that Scandinavian migrants had an important role in the formation of Kievan Rus' emerged as the following: - Notwithstanding other suggestions, the name Rus' can readily be interpreted as originating in Old Norse. - The personal names of the first few Rus' leaders are etymologically Old Norse, from Rurik (from Old Norse Hrærekr) down to Olga of Kiev (from Old Norse Helga). (From Olga's son Sviatoslav I of Kiev onwards, Slavonic names take over.) - The list of cataracts on the Dnieper listed by Constantine VII in his De Administrando Imperio as belonging to the language of the Rhos can most readily be etymologised as Old Norse. - The Annals of St. Bertin account of the Rhos for 839 has them identify themselves as suoni (Swedes). - Thirteenth-century Icelandic historiography portrays close connections between the eleventh-century rulers of Rus' and Scandinavian dynasties in England and Norway. In the twenty-first century, analyses of the rapidly growing range of archaeological evidence further noted that high-status ninth- to tenth-century burials of both men and women in the vicinity of the Upper Volga exhibit material culture largely consistent with that of Scandinavia (though this is less the case away from the river, or further downstream). This has been seen as further demonstrating the Scandinavian character of elites in "Old Rus'". It is also agreed, however, that ancestrally Scandinavian Rus' aristocrats, like Normans elsewhere, swiftly assimilated culturally to a Slavic identity: in the words of F. Donald Logan, "in 839, the Rus were Swedes; in 1043 the Rus were Slavs". 
This near absence of cultural traces (aside from several names, and perhaps the veche-system of Novgorod, comparable to the thing in Scandinavia) is noteworthy, and the processes of cultural assimilation in Rus' are an important area of research. There is uncertainty about how large the Scandinavian migration to Rus' was, but some recent archaeological work has argued for a substantial number of 'free peasants' settling in the upper Volga region. A number of Anglophone scholars, however, remain equivocal about whether the question of Rus' origins can really be solved, either because the evidence is not good enough or because the Rus' were never an ethnic group with a clear point of origin.

Use of Normanism in Western Europe

In the earlier twentieth century, Nazi Germany promoted the idea that Russia owed its statehood to a Germanic, racially superior, elite. During the Second World War, the German government promised the Fascist Quisling government of Norway territory on the historic Austrvegr, reflecting Quisling's ambition to reenact his Normanist view of Viking history. As Hitler put it in Mein Kampf:

For the organization of a Russian State structure was not the result of Russian Slavdom's State-political capacity, but rather a wonderful example of the State-building activity of the German element in an inferior race.

Later, Heinrich Himmler asserted that the Russians are a sub-race:

The Slav is never able to build anything himself. In the long run, he's not capable of it. I'll come back to this later. With the exception of a few phenomena produced by Asia every couple of centuries, through that mixture of two heredities which may be fortunate for Asia but is unfortunate for us Europeans — with the exception, therefore, of an Attila, a Genghis Khan, a Tamerlane, a Lenin, a Stalin — the mixed race of the Slavs is based on a sub-race with a few drops of our blood, blood of a leading race; the Slav is unable to control himself and create order. He is able to argue, able to debate, able to disintegrate, able to offer resistance against every authority and to revolt. But these human shoddy goods are just as incapable of maintaining order today as they were 700 or 800 years ago, when they called in the Varangians, when they called in the Ruriks.

A Scandinavian origin of the Rus' has been bitterly contested by Slavic nationalists. Starting with Lomonosov (1711–1765), East Slavic scholars have criticized the idea of Norse invaders. By the early 20th century, the traditional anti-Normanist doctrine (as articulated by Dmitry Ilovaisky) seemed to have lost currency, but in Stalinist Russia the anti-Normanist arguments were revived and adopted in official Soviet historiography, partly in response to Nazi propaganda, which posited that Russia owed its existence to a Germanic ruling elite. Mikhail Artamonov ranks among those who attempted to reconcile both theories by hypothesizing that the Kievan state united the southern Rus' (of Slavic stock) and the northern Rus' (of Germanic stock) into a single nation. The staunchest advocate of anti-Normanist views in the period following the Second World War was Boris Rybakov, who argued that the cultural level of the Varangians could not have warranted an invitation from the culturally advanced Slavs. This conclusion led him and like-minded Slavicists to reject the testimony of the Primary Chronicle, which states that the Varangian Rus' were invited by the native Slavs.
Rybakov assumed that Nestor, the putative author of the Chronicle, was biased against the pro-Greek party of Vladimir Monomakh and supported the pro-Scandinavian party of the ruling prince Svyatopolk. He portrayed Nestor as a pro-Scandinavian manipulator and compared his account of Rurik's invitation with numerous similar stories found in folklore around the world.

By the twenty-first century, most professional scholars, in both Anglophone and Slavonic-language scholarship, had reached a consensus that the origins of the Rus' people lay in Scandinavia and that this originally Scandinavian elite had a significant role in forming the polity of Kievan Rus'. Indeed, in 1995, the Russian archaeologist Leo Klejn "gave a paper entitled 'The End of the Discussion', in the belief that anti-Normanism 'was dead and buried'". However, Klejn soon had to revise this opinion as anti-Normanist ideas gained a new prominence in both public and academic discourse in Russia, Ukraine, and Belarus. Anglophone scholarship has identified the continued commitment to anti-Normanism in these countries since the collapse of the Soviet Union as being motivated by present-day ethno-nationalism and state-formation. One prominent Russian example occurred with an anti-Normanist conference in 2002, which was followed by publications on the same theme, and which appears to have been promoted by Russian government policy of the time. Accordingly, anti-Normanist accounts are prominent in some twenty-first-century Russian school textbooks. Meanwhile, in Ukraine and to a lesser extent Belarus, post-Soviet nation-building opposed to a history of Russian imperialism has promoted anti-Normanist views in academia and, to a greater extent, popular culture.

Other anti-Normanist interpretations

Quite a few alternative, non-Normanist origins for the word Rus' have been proposed, although none has been endorsed by the Western academic mainstream:
- Three early rulers of the Urartian Empire in the Caucasus, from the 8th to the 6th century BC, bore the names Russa I, Russa II and Russa III, documented in cuneiform monuments.
- The medieval legend of three brothers, one named Rus, also had a predecessor in a very similar legend of the ancient Armenians with almost the same classical name (studies by D.J. Marr). Furthermore, Kiev was founded centuries before the Rus' rule.
- The ancient Sarmatian tribe of the Roxolani (from the Ossetic ruhs 'light'; Russian русые волосы /rusyje volosy/ "light-brown hair"; cf. Dahl's dictionary definition of Русь /rus/: Русь ж. в знач. мир, белсвет. Rus, fig. world, universe [белсвет: lit. "white world", "white light"]).
- From the Old Slavic name that meant "river-people" (tribes of fishermen and ploughmen who settled near the rivers Dnieper, Don, Dniester and Western Dvina and were known to navigate them). The rus root is preserved in the modern Slavic and Russian words "ruslo" (river-bed), "rusalka" (water sprite), etc.
- From one of two rivers in Ukraine (near Kiev and Pereyaslav), Ros and Rusna, whose names are derived from a postulated Slavic term for water, akin to rosa (dew) (related to the above theory).
- A Slavic word rusy (referring only to hair colour, from dark ash-blond to light-brown), cognate with ryzhy (red-haired) and English red.
- A postulated proto-Slavic word for bear, cognate with Greek arctos and Latin ursus.

New research: Beyond the Normanist/Anti-Normanist Debate

Scholars such as Omeljan Pritsak and Horace G.
Lunt offer explanations that go beyond simplistic attempts to attribute 'ethnicity' on a prima facie interpretation of literary, philological, and archaeological evidence. They view the Rus' as disparate, and often mutually antagonistic, clans of charismatic warriors and traders who formed wide-ranging networks across the North and Baltic Seas. They were a "multi-ethnic, multilingual and non-territorial community of sea nomads and trading settlements" that contained numerous Norsemen—but equally Slavs, Balts, and Finns. Evidence provided by the Primary Chronicle, written some three centuries later, cannot be taken as an accurate ethnographic account: tales of 'migration' from distant lands were common literary topoi used by rulers to legitimise their contemporary rule whilst at the same time differentiating themselves from their "Baltic" and "Slavic" subject tribes. Tolochko argues that "the story of the royal clan's journey is a device with its own function within the narrative of the chronicle. ... Yet if we take it for what it actually is, if we accept that it is not a documentary ethnographic description of the 10th century, but a medieval origo gentis masterfully constructed by a Christian cleric of the early 12th century, then we have to reconsider the established scholarly narrative of the earliest phase of East European history, which owes so much to the Primary Chronicle."

Archaeological research, synthesizing a wide range of twentieth-century excavations, has begun to develop what Jonathan Shepard has called a 'bottom up' vision of the formation of the Rus' polity, in which, during the ninth and tenth centuries, increasingly intensive trade networks criss-crossed linguistically and ethnically diverse groups around rivers such as the Volga, the Don, and the Dnieper. This may have produced 'an essentially voluntary convergence of groupings in common pursuit of primary produce exchangeable for artifacts from afar'. This fits well with the image of Rus' that dominates the Arabic sources, which focus further south and east, around the Black and Caspian Seas, the Caucasus, and the Volga Bulghars. Yet this narrative, though plausible, contends with the 'top-down' image of state development implied by the Primary Chronicle, with archaeological assemblages indicating Scandinavian-style weapon-bearing elites on the Upper Volga, and with evidence for slave-trading and the violent destruction of fortified settlements.

Numerous artefacts of Scandinavian affinity have been found in northern Russia. However, exchange between the northern and southern shores of the Baltic had occurred since the Iron Age (albeit limited to immediately coastal areas). Northern Russia and the adjacent Finnic lands had become a profitable meeting ground for peoples of diverse origins, especially for the trade in furs, and were attractive because of the presence of oriental silver from the mid-8th century AD. There is an undeniable presence of goods and people of Scandinavian origin; however, the predominant people remained the local (Baltic and Finnic) peoples. The increasing volume of trade and internal competition necessitated higher forms of organization. The Rus' appear to have emulated aspects of Khazar political organization—hence the mention of a Rus' chaganus at the Carolingian court in 839 (in the Annals of St. Bertin). Legitimization was sought by way of adopting a Christian and linguistically Slavic high culture, which became the Kievan Rus'.
The burials ('chamber' or 'retainer' graves) attributed to the Kievan Rus' have only a superficial resemblance to supposed Scandinavian prototypes—only the grave construction was similar, whilst the range of accompanying artefacts (the inclusion of weapons, horses and slave girls) has no parallels in Scandinavia. Moreover, there is doubt as to whether the emerging Kievan Rus' were the same clan as the "Rus" who visited the Carolingians in 839 or who attacked Constantinople in 860 AD. The rise of Kiev itself is mysterious. Devoid of any silver dirham finds in the 8th century AD, it was situated west of the profitable fur and silver trade networks that spanned from the Baltic to the Muslim lands via the Volga-Kama basins. On the principal hill of Kiev, fortifications and other symbols of consolidation and power appear from the 9th century, thus preceding the literary appearance of 'Rus' in the middle Dnieper region. By the 10th century, the lowlands around Kiev had extensive 'Slavic'-styled settlements, and there is evidence of growing trade with the Byzantine lands. This might have attracted Rus' movements, and a shift in power, from the north to Kiev. Thus, Kiev does not appear to have evolved from the infrastructure of the Scandinavian trade networks; rather, it forcibly took them over, as evidenced by the destruction of numerous earlier trade settlements in the north, including the famous Staraja Ladoga.

- "The Vikings at home". History Extra.
- James E. Montgomery, 'Vikings and Rus in Arabic Sources', in Living Islamic History, ed. by Yasir Suleiman (Edinburgh: Edinburgh University Press, 2010), pp. 151–65 (pp. 152-54).
- Marika Mägi, In Austrvegr: The Role of the Eastern Baltic in Viking Age Communication Across the Baltic Sea, The Northern World, 84 (Leiden: Brill, 2018), pp. 141-216 (esp. p. 216).
- Blöndal, Sigfús (1978). The Varangians of Byzantium. Cambridge University Press. p. 1. ISBN 9780521035521. Retrieved 2 February 2014.
- Stefan Brink, 'Who were the Vikings?', in The Viking World, ed. by Stefan Brink and Neil Price (Abingdon: Routledge, 2008), pp. 4-10 (pp. 6-7).
- "Russ, adj. and n." OED Online, Oxford University Press, June 2018, www.oed.com/view/Entry/169069. Accessed 25 July 2018.
- Thorir Jonsson Hraundal, 'New Perspectives on Eastern Vikings/Rus in Arabic Sources', Viking and Medieval Scandinavia, 10 (2014), 65–97 doi:10.1484/J.VMS.5.1052 (pp. 66-67).
- Duczko 2004, p. 210
- The Russian Primary Chronicle: Laurentian Text, ed. and trans. by Samuel Hazzard Cross and Olgerd P. Sherbowitz-Wetzor (Cambridge, MA: The Medieval Academy of America, 1953), ISBN 0-910956-34-0, s.aa. 6368-6370 (860-862 CE) [pp. 59-60].
- Thorir Jonsson Hraundal, 'New Perspectives on Eastern Vikings/Rus in Arabic Sources', Viking and Medieval Scandinavia, 10 (2014), 65–97 doi:10.1484/J.VMS.5.1052, p. 68.
- P.B. Golden, "Rūs", in Encyclopaedia of Islam, Second Edition, Edited by: P. Bearman, Th. Bianquis, C.E. Bosworth, E. van Donzel, W.P. Heinrichs. Consulted online on 26 July 2018 doi:10.1163/1573-3912_islam_COM_0942.
- James E. Montgomery, 'Ibn Faḍlān and the Rūsiyyah', Journal of Arabic and Islamic Studies, 3 (2000), 1-25.
- Ann Christys, Vikings in the South (London: Bloomsbury, 2015), pp. 15-45 (esp. p. 31).
- Brink & Price 2008, p. 552
- Thorir Jonsson Hraundal, 'New Perspectives on Eastern Vikings/Rus in Arabic Sources', Viking and Medieval Scandinavia, 10 (2014), 65–97 doi:10.1484/J.VMS.5.1052 (p. 73).
- A. Seippel (ed.), Rerum normannicarum fontes arabici, 2 vols (Oslo: Brøgger, 1896).
This edition of Arabic sources for vikings was translated into Norwegian, and expanded, by H. Birkeland (ed. and trans.), Nordens historie: Middlealderen etter arabiske kilder (Oslo: Dyburad, 1954). It was translated into English by Alauddin I. Samarra’i (trans.), Arabic Sources on the Norse: English Translation and Notes Based on the Texts Edited by A. Seippel in ‘Rerum Normannicarum fontes Arabici’ (unpublished doctoral thesis, University of Wisconsin–Madison, 1959). - James E. Montgomery, ‘Ibn Rusta’s Lack of “Eloquence”, the Rus, and Samanid Cosmography’, Edebiyat, 12 (2001), 73–93. - James E. Montgomery, ‘Arabic Sources on the Vikings’, in The Viking World, ed. by Stefan Brink (London: Routledge, 2008), pp. 550–61. - James E. Montgomery, ‘Vikings and Rus in Arabic Sources’, in Living Islamic History, ed. by Yasir Suleiman (Edinburgh: Edinburgh University Press, 2010), pp. 151–65. - Thorir Jonsson Hraundal, 'New Perspectives on Eastern Vikings/Rus in Arabic Sources', Viking and Medieval Scandinavia, 10 (2014), 65–97 doi:10.1484/J.VMS.5.1052. - Thorir Jonsson Hraundal, 'New Perspectives on Eastern Vikings/Rus in Arabic Sources', Viking and Medieval Scandinavia, 10 (2014), 65–97 doi:10.1484/J.VMS.5.1052 (pp. 70-78). - Jones, Gwyn (2001). A History of the Vikings. Oxford University Press. p. 164. ISBN 0-19-280134-1. - Quoted from National Geographic, March 1985; Compare:Ferguson, Robert (2009). The Hammer and the Cross: A New History of the Vikings. Penguin UK. ISBN 9780141923871. Retrieved 25 July 2016. They have no fields but simply live on what they get from the Slavs' lands. - Volt, Ivo; Janika Päll (2005). Byzantino-Nordica 2004: Papers Presented at the International Symposium of Byzantine Studies Held on 7-11 May 2004 in Tartu, Estonia. Morgenstern Society. p. 16. ISBN 978-9949-11-266-1. Retrieved 28 September 2016. - H. R. Ellis Davidson, The Viking Road to Byzantium (London: Allen & Unwin, 1976), p. 83. - Sigfús Blöndal, ''The Varangians of Byzantium: An Aspect of Byzantine Military History'', rev. and trans. by Benedikt S. Benedikz (Cambridge: Cambridge University Press, 1978), pp. 9-12. - Wladyslaw Duczko, Viking Rus: Studies on the Presence of Scandinavians in Eastern Europe (Leiden: Brill, 2004), pp. 10-59. - Jonathan Shepard, 'The Viking Rus and Byzantium', ,in The Viking World, ed. by Stefan Brink and Neil Price (Abingdon: Routledge, 2008), pp. 496-516 (p. 497). - Janet Martin, 'The First East Slavic State', in A Companion to Russian History, ed. by Abbott Gleason (Oxford: Blackwell, 2009), pp. 34-50 (p. 36). - "The Varangian Guard 988-453". google.no. Archived from the original on 21 June 2014. Retrieved 16 May 2016. - "The Varangians of Byzantium". google.com. - Wladyslaw Duczko, Viking Rus: Studies on the Presence of Scandinavians in Eastern Europe (Leiden: Brill, 2004). - Jonathan Shepherd, 'Review Article: Back in Old Rus and the USSR: Archaeology, History and Politics', English Historical Review, vol. 131 (no. 549) (2016), 384-405 doi:10.1093/ehr/cew104. - Forte, Angelo; Oram, Richard; Pedersen, Frederik (2005). Viking Empires. Cambridge University Press. pp. 13–14. ISBN 0-521-82992-5. - Marika Mägi, In Austrvegr: The Role of the Eastern Baltic in Viking Age Communication Across the Baltic Sea, The Northern World, 84 (Leiden: Brill, 2018), p. 195, citing Alf Thulin, 'The Rus' of Nestor's Chronicle', Mediaeval Scandinavia, 13 (2000), 70-96. - Duczko, Wladyslaw. Viking Rus : Studies on the Presence of Scandinavians in Eastern Europe. The Northern World. 
Leiden: Brill, 2004. - Roman Zakharii, 'The Historiography of Normanist and Anti-Normanist theories on the origin of Rus’: A review of modern historiography and major sources on Varangian controversy and other Scandinavian concepts of the origins of Rus’' (unpublished M.Phil. thesis, University of Oslo, 2002). - Wladyslaw Duczko, Viking Rus: Studies on the Presence of Scandinavians in Eastern Europe (Leiden: Brill, 2004), pp. 3-9. - Serhii Plokhy, The Origins of the Slavic Nations Premodern Identities in Russia, Ukraine, and Belarus (Cambridge: Cambridge University Press, 2006), pp. 10-48. - Christian Raffensperger, 'The Place of Rus’ in Medieval Europe[permanent dead link]', History Compass, 12/11 (2014), 853–65 doi:10.1111/hic3.12201 (pp. 853-54). - Elena Melnikova, 'The "Varangian Problem": Science in the Grip of Ideology and Politics', in Russia's Identity in International Relations: Images, Perceptions, Misperceptions, ed. by Ray Taras (Abingdon: Routledge, 2013), pp. 42-52. - History Time (1 August 2017), Vikings Of The East: Igor & The Kievan Rus', retrieved 20 February 2019 - "Treaties Between the Rus and the Byzantine – Eastwards to Miklagard". onlineacademiccommunity.uvic.ca. Retrieved 20 February 2019. - "Rus | people". Encyclopedia Britannica. Retrieved 20 February 2019. - Janet Martin, 'The First East Slavic State', in A Companion to Russian History, ed. by Abbott Gleason (Oxford: Blackwell, 2009), pp. 34-50 (pp. 34-36). - "Normanist, n. and adj." OED Online, Oxford University Press, June 2018, www.oed.com/view/Entry/128286. Accessed 25 July 2018. - Dmitry Nikolayevich Verkhoturov, 'Normanism: What's in a Name?', Valla, 1.5 (2015), 57-65. - Wladyslaw Duczko, Viking Rus: Studies on the Presence of Scandinavians in Eastern Europe (Leiden: Brill, 2004), 4. - Dmitry Nikolayevich Verkhoturov, 'Normanism: What's in a Name?', Valla, 1.5 (2015), 57-65 (p. 57). - Serhii Plokhy, Lost Kingdom: The Quest for Empire and the Making of the Russian Nation from 1740 to the Present (London: Allen Lane, 2017). - Serhii Plokhy, Ukraine and Russia: Representations of the Past (Toronto: University of Toronto Press, 2008), chapter 1. - Elena Melnikova, 'The "Varangian Problem": Science in the Grip of Ideology and Politics', in Russia's Identity in International Relations: Images, Perceptions, Misperceptions, ed. by Ray Taras (Abingdon: Routledge, 2013), pp. 42-52 (p. 43). - Dmitry Nikolayevich Verkhoturov, 'Normanism: What's in a Name?', Valla, 1.5 (2015), 57-65 (pp. 58-59). - Elena Melnikova, 'The "Varangian Problem": Science in the Grip of Ideology and Politics', in Russia's Identity in International Relations: Images, Perceptions, Misperceptions, ed. by Ray Taras (Abingdon: Routledge, 2013), pp. 42-52 (pp. 44-45). - Christian Promitzer, 'Physical anthropology and ethnogenesis in Bulgaria, 1878–1944', Focaal—Journal of Global and Historical Anthropology, 58 (2010), 47–62 doi:10.3167/fcl.2010.580104 (pp. 49-50). - Cf. Richard Mcmahon, 'Anthropological Race Psychology 1820–1945: A Common European System of Ethnic Identity Narratives', Nations and Nationalism, 15 (2009), 575–96 (p. 579). - Cf. Matthew H. Hammond, 'Ethnicity and the Writing of Medieval Scottish History', The Scottish Historical Review, vol. 85 (no. 219) (April 2006), 1-27, doi:10.1353/shr.2006.0014. - Omeljan Pritsak, "Rus'", in Medieval Scandinavia: An Encyclopedia, ed. by Phillip Pulsiano (New York: Garland, 1993), pp. 555-56. - Jonathan Shepard, 'The Viking Rus and Byzantium', , in The Viking World, ed. 
by Stefan Brink and Neil Price (Abingdon: Routledge, 2008), pp. 496-516 (p. 497). - Logan 2005, p. 184 "The controversies over the nature of the Rus and the origins of the Russian state have bedevilled Viking studies, and indeed Russian history, for well over a century. It is historically certain that the Rus were Swedes. The evidence is incontrovertible, and that a debate still lingers at some levels of historical writing is clear evidence of the holding power of received notions. The debate over this issue - futile, embittered, tendentious, doctrinaire - served to obscure the most serious and genuine historical problem which remains: the assimilation of these Viking Rus into the Slavic people among whom they lived. The principal historical question is not whether the Rus were Scandinavians or Slavs, but, rather, how quickly these Scandinavian Rus became absorbed into Slavic life and culture." - I. Jansson, ‘Warfare, Trade or Colonisation? Some General Remarks on the Eastern Expansion of the Scandinavians in the Viking Period’, in The Rural Viking in Russia and Sweden, ed. by P. Hansson (Örebro, 1997), pp. 47–51. - Jonathan Shepherd, 'Review Article: Back in Old Rus and the USSR: Archaeology, History and Politics', English Historical Review, vol. 131 (no. 549) (2016), 384-405 (pp. 395-96) doi:10.1093/ehr/cew104. - Andrii Danylenko, 'The Name "Rus" in Search of a New Dimension', Jahrbücher für Geschichte Osteuropas, new series, 52 (2004), 1-32. - Marika Mägi, In Austrvegr: The Role of the Eastern Baltic in Viking Age Communication Across the Baltic Sea, The Northern World, 84 (Leiden: Brill, 2018), pp. 141-216. - Jonathan Shepherd, 'Review Article: Back in Old Rus and the USSR: Archaeology, History and Politics', English Historical Review, vol. 131 (no. 549) (2016), 384-405 doi:10.1093/ehr/cew104 (pp. 386-87). - Ole Kolsrud, “Kollaborasjon og imperialisme. Quisling-regjeringens 'Austrveg'-drøm 1941–1944”, Norsk historisk tidsskrift, 67 (1988), 241–270. - Adolf Hitler, Mein Kampf (HOUGHTON MIFFLIN COMPANY, 1941). - Heinrich Himmler, The Posen speech to SS officers (6 October 1943). - Bury & Gwatkin 1936, p. 327 "Though the point has been hotly contested by Slavonic patriots, there can be no doubt that these Rhos or Rus are really Swedish Vikings." - Janet Martin, 'The First East Slavic State', in A Companion to Russian History, ed. by Abbott Gleason (Oxford: Blackwell, 2009), pp. 34-50 (pp. 37-42). - Elena Melnikova, 'The "Varangian Problem": Science in the Grip of Ideology and Politics', in Russia's Identity in International Relations: Images, Perceptions, Misperceptions, ed. by Ray Taras (Abingdon: Routledge, 2013), pp. 42-52 (pp. 43-46. - Jonathan Shepherd, 'Review Article: Back in Old Rus and the USSR: Archaeology, History and Politics', English Historical Review, vol. 131 (no. 549) (2016), 384-405 doi:10.1093/ehr/cew104 (p. 387). - Waldman, & Mason 2005, p. 668 "In light of evidence, theories - most of them proposed by Soviet scholars with nationalistic agendas - of a Slav state in the Baltic region attacked by and ultimately absorbing Viking invaders are more likely the product of wishful thinking than of fact." - Wladyslaw Duczko, Viking Rus: Studies on the Presence of Scandinavians in Eastern Europe (Leiden: Brill, 2004), esp. pp. 3-9. - Abbott Gleason, 'Russian Historiography after the Fall', in A Companion to Russian History, ed. by Abbott Gleason (Oxford: Blackwell, 2009), pp. 1-14 (p. 5). 
- Elena Melnikova, 'The "Varangian Problem": Science in the Grip of Ideology and Politics', in Russia's Identity in International Relations: Images, Perceptions, Misperceptions, ed. by Ray Taras (Abingdon: Routledge, 2013), pp. 42-52 (p. 42).
- Jonathan Shepherd, 'Review Article: Back in Old Rus and the USSR: Archaeology, History and Politics', English Historical Review, vol. 131 (no. 549) (2016), 384-405 doi:10.1093/ehr/cew104 (p. 387), citing Leo S. Klejn, Soviet Archaeology: Trends, Schools, and History, trans. by Rosh Ireland and Kevin Windle (Oxford: Oxford University Press, 2012), p. 119.
- Christian Raffensperger, 'The Place of Rus' in Medieval Europe', History Compass, 12/11 (2014), 853–65 doi:10.1111/hic3.12201 (esp. pp. 853-54, 858).
- Dmitry Nikolayevich Verkhoturov, 'Normanism: What's in a Name?', Valla, 1.5 (2015), 57-65 (esp. 63).
- Elena Melnikova, 'The "Varangian Problem": Science in the Grip of Ideology and Politics', in Russia's Identity in International Relations: Images, Perceptions, Misperceptions, ed. by Ray Taras (Abingdon: Routledge, 2013), pp. 42-52, citing I. A. Nastenko (ed.), Sbornik Russkogo istoricheskogo obshchestva: Antinormanism, vol 8. (no. 156) (Moscow: Russkaja Panorama, 2003) and V. V. Fomin, Varjagi i varjazhskaja Rus': K itogam diskussii po varjazhskomu voprosu (Moscow: Russkaja Panorama, 2005).
- Artem Istranin and Alexander Drono, 'Competing historical Narratives in Russian Textbooks', in Mutual Images: Textbook Representations of Historical Neighbours in the East of Europe, ed. by János M. Bak and Robert Maier, Eckert. Dossiers, 10 ([Braunschweig]: Georg Eckert Institute for International Textbook Research, 2017), 31-43 (pp. 35-36).
- Serhii Plokhy, The Origins of the Slavic Nations Premodern Identities in Russia, Ukraine, and Belarus (Cambridge: Cambridge University Press, 2006), pp. 10-48 (esp. pp. 11-12).
- Pritsak (1981, p. 14)
- Lunt (1975, p. 271)
- Tolochko (2008, pp. 184 and 188, respectively)
- Jonathan Shepherd, 'Review Article: Back in Old Rus and the USSR: Archaeology, History and Politics', English Historical Review, vol. 131 (no. 549) (2016), 384-405 doi:10.1093/ehr/cew104 (pp. 389-402, quoting p. 397).
- Thorir Jonsson Hraundal, 'New Perspectives on Eastern Vikings/Rus in Arabic Sources', Viking and Medieval Scandinavia, 10 (2014), 65–97 doi:10.1484/J.VMS.5.1052 (pp. 70-71).
- Jonathan Shepherd, 'Review Article: Back in Old Rus and the USSR: Archaeology, History and Politics', English Historical Review, vol. 131 (no. 549) (2016), 384-405 doi:10.1093/ehr/cew104 (pp. 389-402).
- Thorir Jonsson Hraundal, 'New Perspectives on Eastern Vikings/Rus in Arabic Sources', Viking and Medieval Scandinavia, 10 (2014), 65–97 doi:10.1484/J.VMS.5.1052 (p. 71).
- Franklin (1996, p. 9)
- Franklin (1996, p. 12)
- Franklin (1996, pp. 22-25)
- Pritsak, p. 31
- Shepard, pp. 122–3
- Tolochko, p. 187
- Franklin (1996, pp. 90–122)
- Tolochko, p. 186
- The Annals of Saint-Bertin, transl. Janet L. Nelson, Ninth-Century Histories 1 (Manchester and New York, 1991).
- Davies, Norman. Europe: A History. New York: Oxford University Press, 1996.
- Bury, John Bagnell; Gwatkin, Henry Melvill (1936). The Cambridge Medieval History, Volume 3. University Press.
- Christian, David. A History of Russia, Mongolia, and Central Asia. Blackwell, 1999.
- Danylenko, Andrii. "The name Rus': In search of a new dimension." Jahrbücher für Geschichte Osteuropas 52 (2004), 1–32.
- Davidson, H.R. Ellis, The Viking Road to Byzantium. Allen & Unwin, 1976.
- Dolukhanov, Pavel M. The Early Slavs: Eastern Europe from the Initial Settlement to the Kievan Rus. New York: Longman, 1996.
- Duczko, Wladyslaw. Viking Rus: Studies on the Presence of Scandinavians in Eastern Europe (The Northern World; 12). Leiden: Brill Academic Publishers, 2004 (hardcover, ISBN 90-04-13874-9).
- Goehrke, C. Frühzeit des Ostslaven. Darmstadt: Wissenschaftliche Buchgesellschaft, 1992.
- Magocsi, Paul R. A History of Ukraine. Toronto: University of Toronto Press, 1996.
- Pritsak, Omeljan. The Origin of Rus'. Cambridge Mass.: Harvard University Press, 1991.
- Stang, Hakon. The Naming of Russia. Oslo: Middelelser, 1996.
- Gerard Miller as the author of the Normanist theory (Brockhaus and Efron)
- Logan, F. Donald (2005). The Vikings in History. Taylor & Francis. ISBN 0415327563.
- On the language of old Rus: some questions and suggestions. Horace Gray Lunt. Harvard University, Harvard Ukrainian Research Institute, 1975.
- The Emergence of Rus: 750–1200. Simon Franklin, Jonathan Shepard. Longman Publishing Group, 1996.
- The Origin of Rus'. Omeljan Pritsak. Harvard University Press, 1981.
- The Primary Chronicle's 'Ethnography' Revisited: Slavs and Varangians in the Middle Dnieper Region and the Origin of the Rus' State. Oleksiy P. Tolochko; in Franks, Northmen and Slavs: Identities and State Formation in Early Medieval Europe. Editors: Ildar H. Garipzanov, Patrick J. Geary, and Przemysław Urbańczyk. Brepols, 2008.
- Brink, Stefan; Price, Neil (2008). The Viking World. Routledge. ISBN 113431826X. Retrieved 2 August 2014.
- Duczko, Wladyslaw (2004). Viking Rus: Studies on the Presence of Scandinavians in Eastern Europe. Brill. ISBN 9004138749. Retrieved 5 May 2013.
- Waldman, Carl; Mason, Catherine (2005). Encyclopedia of European Peoples. Infobase Publishing. ISBN 1438129181.
- James E. Montgomery, 'Ibn Faḍlān and the Rūsiyyah', Journal of Arabic and Islamic Studies, 3 (2000), 1-25. Archive.org. Includes a translation of Ibn Fadlān's discussion of the Rūs/Rūsiyyah.
Historical Context And Foundational Ideas Of Keynesian Theory

British economist John Maynard Keynes developed the Keynesian theory of income and employment in the first half of the 20th century. It is also referred to as Keynesianism or Keynesian economics and bears Keynes' name. The historical roots of Keynesian theory lie in the Great Depression of the 1930s, a catastrophic economic slump that devastated many nations worldwide. Keynes thought that the problems that existed during the Great Depression could not be solved by conventional economic theory, which was founded on the ideas of classical economics. Keynes outlined his thesis in his 1936 book "The General Theory of Employment, Interest, and Money," which focused on the importance of government involvement in bringing the economy back into balance. The fundamental tenet of the Keynesian theory is that, in order to boost demand and employment during economic downturns, the government should increase expenditure and cut taxes. The importance of aggregate demand in setting an economy's level of output and employment is another point stressed by Keynesian economics. According to this theory, consumption, investment, government spending, and net exports together make up aggregate demand, commonly referred to as the total demand for goods and services in an economy. In general, the Keynesian theory of income and employment is a crucial economic theory that has had a significant impact on economic policy and on government involvement in the economy.

Aggregate Supply And Demand: The Cornerstones Of Keynesian Analysis

Aggregate supply and demand are two essential elements in the Keynesian theory of income and employment that are used to describe how the economy functions. The total quantity of products and services that consumers, companies, and the government are willing to purchase at a specific price level is referred to as aggregate demand. The amounts of consumer, investment, government, and net export expenditure all affect aggregate demand in Keynesian analysis. The main part of total demand is made up of consumer expenditure, which is influenced by variables including income, interest rates, and consumer confidence. The amount of money that businesses are willing to spend on new machinery, structures, and other capital goods is referred to as investment spending. All of the government's purchases of goods and services, as well as transfer payments like social security and unemployment benefits, are categorised as government spending. The difference between an economy's exports and imports is referred to as net exports. Aggregate supply, on the other hand, refers to the total amount of goods and services that businesses are willing to produce and sell at a specific price level. According to Keynesian analysis, the economy's availability of labour, capital, and technology determines aggregate supply. The equilibrium level of output and employment is defined as the point at which aggregate demand and supply intersect. According to Keynesian theory, there will be a surplus of goods and services if aggregate demand is lower than aggregate supply, which will result in a drop in output and employment. There will be a shortage of products and services if aggregate demand exceeds aggregate supply, which will raise output and employment. In general, knowing how aggregate demand and supply interact is crucial for assessing an economy's macroeconomic performance and creating successful economic policies.
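The relationship between the four spending components and the surplus/shortage logic described above can be expressed in a few lines of code. The sketch below is not part of the project; the function name and all numbers are illustrative assumptions chosen only to show the bookkeeping.

```python
# Minimal sketch (illustrative numbers): aggregate demand AD = C + I + G + (X - M),
# compared against aggregate supply to see whether output tends to rise or fall.

def aggregate_demand(consumption, investment, government, exports, imports):
    """Total planned spending at a given price level."""
    return consumption + investment + government + (exports - imports)

ad = aggregate_demand(consumption=600, investment=150, government=200,
                      exports=120, imports=100)
aggregate_supply = 1000  # planned output at the same price level (illustrative)

if ad < aggregate_supply:
    print(f"AD={ad} < AS={aggregate_supply}: surplus of goods, so output and employment tend to fall")
elif ad > aggregate_supply:
    print(f"AD={ad} > AS={aggregate_supply}: shortage of goods, so output and employment tend to rise")
else:
    print(f"AD={ad} = AS={aggregate_supply}: equilibrium level of output")
```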
The Government's Role In Economic Stabilization Through Fiscal And Monetary Policy

The notion that government intervention is necessary to stabilise the economy during periods of economic downturn is one of the main characteristics of the Keynesian theory of income and employment. Fiscal policy and monetary policy are the two fundamental instruments of governmental intervention in the Keynesian paradigm. Fiscal policy is the use of government spending and taxation to affect total demand and maintain economic stability. The government can raise spending or lower taxes during periods of economic slowdown or recession in order to boost aggregate demand, output, and employment. In contrast, the government can restrict expenditure or raise taxes during periods of inflation or overheating in order to lower aggregate demand and stop inflation. Monetary policy refers to the use of central bank actions, such as adjustments to the money supply or interest rates, to influence overall demand and maintain economic stability. The central bank may cut interest rates or expand the money supply during periods of economic slowdown or recession to stimulate borrowing and spending, which in turn boosts aggregate demand and output. In contrast, the central bank can raise interest rates or shrink the money supply during periods of inflation or overheating to discourage borrowing and spending, which lowers the overall level of demand and prevents inflation. A number of variables, including how responsive consumers and businesses are to changes in government policy and the size of the economic shocks, determine how effective fiscal and monetary policies are at stabilising the economy. In order to attain their goals of economic stabilisation, governments and central banks frequently combine fiscal and monetary measures. The Keynesian theory of income and employment, as a whole, places a strong emphasis on the necessity of government involvement in order to stabilise the economy during periods of economic downturn and offers a framework for understanding the role of fiscal and monetary policies in attaining this goal.

How Changes In Spending Affect Income And Employment: The Multiplier Effect

The multiplier effect is a fundamental idea in the Keynesian theory of income and employment. According to the multiplier effect, changes in spending may have a disproportionately large impact on the economy's levels of income and employment. Take the case where the government raises spending on infrastructure projects. Demand for goods and services from the companies that provide the materials and labour for the infrastructure projects will rise as a result. In order to satisfy the growing demand, these companies will expand their workforce and raise output. Because the workers these companies hire have more money to spend on goods and services, demand and employment in other areas of the economy will rise even higher. Continuing this cycle of higher expenditure and greater employment produces a multiplier effect on income and employment in the economy. The marginal propensity to consume (MPC) of consumers and the marginal propensity to invest (MPI) of firms are two variables that affect how large the multiplier effect will be. The MPC is the portion of each additional dollar of income that consumers spend on consumption goods and services, whereas the MPI is the portion of each additional dollar of income that firms invest in new capital goods.
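The project names the MPC and MPI but gives no formula, so the sketch below uses the standard closed-economy textbook multiplier 1 / (1 − MPC − MPI) as an assumption; the numbers are illustrative.

```python
# Minimal sketch of the simple Keynesian spending multiplier described above.

def spending_multiplier(mpc, mpi=0.0):
    """Each extra unit of autonomous spending raises income by this factor."""
    leakage = 1.0 - mpc - mpi          # fraction of each extra dollar NOT re-spent
    if leakage <= 0:
        raise ValueError("MPC + MPI must be less than 1 for the rounds of spending to converge")
    return 1.0 / leakage

delta_g = 100.0                        # extra government infrastructure spending
k = spending_multiplier(mpc=0.8, mpi=0.1)
print(f"multiplier = {k:.1f}, total rise in income = {k * delta_g:.0f}")
# With MPC = 0.8 and MPI = 0.1 the multiplier is 10, so $100 of extra spending
# raises total income by about $1,000 as the successive rounds of re-spending add up.
```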
The higher the MPC and the MPI, the larger the multiplier effect will be. For policymakers, an understanding of the multiplier effect is crucial because it can help them determine the appropriate levels of government taxation and spending. During periods of economic slowdown or recession, the government can use fiscal policy to boost demand and employment through the multiplier effect. However, how well fiscal policy performs in achieving its goals depends on the size of the multiplier effect, which in turn depends on the MPC and MPI. In the Keynesian theory of income and employment, the multiplier effect is a key idea that offers a framework for understanding how changes in expenditure can affect income and employment in the economy.

The Paradox Of Saving Too Much Money: Why It Can Hurt The Economy

In the Keynesian theory of income and employment, the paradox of thrift is a key idea. It refers to the notion that trying to save more during periods of economic slowdown or recession may actually result in a decline in aggregate demand, output, and employment. The reasoning behind the paradox of thrift is as follows. Spending on products and services decreases when individuals and households try to save more money. This decrease in spending causes the economy's aggregate demand to fall, which in turn causes output and employment to decline. Because falling incomes reduce households' ability to save, the decline in aggregate demand cancels out the attempted rise in individual and family saving, and everyone ends up losing out. The paradox of thrift thus emphasises the significance of aggregate demand in the Keynesian theory of income and employment. In the short run, output and employment are mostly determined by aggregate demand, and changes in aggregate demand can cause major changes in economic activity, according to Keynes. Therefore, government action is required to maintain economic stability and stop the paradox of thrift from causing a protracted economic downturn. In practice, governments can combat the implications of the paradox of thrift by implementing fiscal and monetary policies. For instance, during periods of economic slowdown or recession, the government may raise spending or cut taxes in order to boost aggregate demand and counteract the decline in spending by individuals and households. Similarly, the central bank has the power to cut interest rates or boost the money supply to promote borrowing and spending, which can raise aggregate demand and output. Overall, the paradox of thrift underscores the necessity of government involvement to stabilise the economy during periods of economic depression and is a key idea in the Keynesian theory of income and employment.

Keynesian Theory Criticism: Issues From The Classical And Monetarist Schools Of Thought

The Keynesian theory of income and employment has been heavily criticised by several schools of thought, including the classical and monetarist schools, despite its enormous popularity. The classical school of thought, which contends that the economy is self-regulating and will eventually tend towards full employment, offers one of the main criticisms of the Keynesian hypothesis. The classical theory holds that any short-term changes in output or employment are due to transient causes like technological advances or natural disasters, and that these changes will eventually be corrected by market forces such as flexible prices and wages. This perspective contrasts with the Keynesian idea that economic recessions or slowdowns require government involvement to stabilise the economy.
The monetarist school of thought, which emphasises the role of monetary policy in stabilising the economy, has also criticised the Keynesian theory. Monetarists contend that changes in the money supply can have a large impact on aggregate demand and output, so the government can use monetary policy to stabilise the economy without the need for fiscal policy. The Keynesian emphasis on government spending, according to monetarists, can result in inflation and other macroeconomic imbalances, and market forces are better suited to allocate resources properly. Another criticism of the Keynesian theory is that it has trouble explaining long-term economic growth and development. Although Keynesian theory offers a framework for understanding short-run fluctuations in output and employment, it does not offer a thorough theory of economic growth and development. Indeed, some claim that it may even be detrimental to long-term economic growth because it encourages excessive government intervention and discourages private investment. Despite these criticisms, the Keynesian theory of income and employment continues to have a considerable impact on macroeconomic policy and has made a significant contribution to our knowledge of how the government can stabilise the economy.

The Influence On Macroeconomic Policy And Current Debates In Keynesian Economics

Macroeconomic policy has benefited greatly from the Keynesian theory of income and employment, and current economic discussions continue to be greatly influenced by it. Governments throughout the world adopted Keynesian policies like deficit spending and demand management during the decades that followed the Great Depression and World War II, making Keynesian economics the preeminent framework for macroeconomic policy. These measures were designed to encourage full employment and economic growth while stabilising the economy during periods of slowdown or recession. The Keynesian consensus, however, started to disintegrate in the 1970s as growing inflation and economic stagnation called into question the effectiveness of Keynesian programmes. As a result, new schools of thought emerged, such as monetarism and supply-side economics, which emphasised the significance of market forces and more limited government intervention in fostering economic growth. Despite these difficulties, the Keynesian legacy continues to dominate discussions of macroeconomic policy today. Many economists still argue for the necessity of government intervention in order to maintain economic stability and advance full employment, and governments all over the world continue to implement Keynesian policies like fiscal stimulus and demand management in reaction to financial crises. In recent years the COVID-19 pandemic has once more thrust Keynesian economics to the fore of discussions about macroeconomic policy, as governments all over the world have adopted substantial fiscal stimulus programs to offset the pandemic's economic effects. Overall, the Keynesian theory of income and employment continues to shape current discussions on the role of government in fostering economic stability and growth and has a significant impact on macroeconomic policy.

Keynesian Economic Policy Case Studies: Successes And Failures In Action

The Keynesian theory of income and employment has been applied in different nations and contexts, with varying degrees of success.
Following are a few instances of Keynesian economic policies and their results:

The New Deal in the United States: In order to boost the economy and generate jobs during the Great Depression, President Franklin D. Roosevelt's New Deal programme used Keynesian policies such as deficit spending and public works projects. Although it did not put an end to the Great Depression, the New Deal is credited with helping to lessen some of the harshest effects of the economic slump and with laying the groundwork for future economic growth.

The post-World War II economic boom: To encourage economic growth and full employment, many nations adopted Keynesian policies such as deficit spending and demand management. Many nations experienced a period of consistent economic growth and rising living standards in the decades following the war, indicating that these policies were largely successful.

The stagflation of the 1970s: Keynesian policies were put to the test during this time, when many nations experienced a period of slow economic development and soaring prices. Numerous economists contend that the Keynesian emphasis on deficit spending and demand management was inappropriate for addressing the supply-side issues that were then fueling inflation and economic stagnation.

The Great Recession and its aftermath: To stabilise the economy and foster growth in the wake of the 2008 financial crisis, many governments pursued Keynesian policies including fiscal stimulus and quantitative easing. Although the effectiveness of these measures is still debated, many economists believe that they helped to prevent a more serious economic downturn and aided the recovery in the years after the crisis.

Overall, there has been a mixed record of success and failure when Keynesian economic principles have been put into practice. While Keynesian policies have frequently been credited with fostering economic growth and stability, they have also encountered significant obstacles and constraints when applied to complex economic issues such as inflation and supply-side factors.

Keynesian Theory's Applicability And Importance In Contemporary Economic Analysis

The Keynesian theory of income and employment has been challenged and criticised, but its relevance and significance in contemporary economic analysis remain strong. The continued usefulness of Keynesian policies in fostering economic growth and stability is a major factor in this. Although the Keynesian consensus may have crumbled in the 1970s, governments all over the world still use Keynesian policies, such as fiscal stimulus and demand management, to combat economic crises. The COVID-19 pandemic has also brought to light the value of Keynesian approaches in reducing the crisis's negative economic effects. Keynesian economics' emphasis on the role of government in maintaining economic stability and full employment is another factor contributing to its continued relevance. While the classical and monetarist schools of thought may emphasise the significance of market processes and limited government intervention, Keynesian economics contends that government has a critical role to play in generating full employment and stabilising the economy. The Keynesian theory of income and employment also continues to influence economic research and current discussions of macroeconomic policy.
Keynesian economics' fundamental tenets, such as the significance of aggregate demand and the role of government in stabilising the economy, remain relevant and significant in contemporary economic analysis, even though the specifics of Keynesian policies may vary depending on the context and the economic problem being addressed. In general, the Keynesian theory of income and employment continues to have a significant impact on modern economic theory and practice. Although it might encounter obstacles and constraints, its lasting legacy is proof of its ongoing relevance and importance in contemporary economic analysis.

Certificate of Completion

I, [Student's Full Name], hereby certify that I have successfully completed the economics project on "Keynesian Theory of Income and Employment" as part of my Class 12 curriculum at [School Name]. This project allowed me to delve into the historical context, foundational ideas, and key concepts of Keynesian economics, providing me with a deeper understanding of macroeconomic theory and government intervention in the economy.

Project Title: Keynesian Theory of Income and Employment – An Economics Project
Class: Class 12
Academic Year: [Year]

I am extremely grateful to my school for providing me with the opportunity to explore and analyze the Keynesian theory, which has had a profound impact on economic policies and government interventions in various economies. Special thanks to my economics teacher for guiding me throughout this project and providing valuable insights and feedback to enhance my understanding.

Studying the historical context and foundational ideas of Keynesian economics, especially the influence of the Great Depression on its development, was enlightening. Learning about the significance of aggregate demand and supply, fiscal and monetary policies, and the multiplier effect has broadened my knowledge of macroeconomics and the factors that drive economic growth and stability.

One of the most fascinating aspects of the project was understanding the paradox of thrift and its implications for economic policy. It opened my eyes to the delicate balance required between saving and spending in an economy and how government interventions play a crucial role in maintaining equilibrium.

I also enjoyed analyzing case studies and real-world examples of Keynesian economic policies, such as the New Deal in the United States and the response to the 2008 financial crisis. These case studies showcased the practical application of Keynesian principles and their varying degrees of success and challenges in different economic contexts.

Overall, the project has been an intellectually stimulating and rewarding experience. It has provided me with a comprehensive understanding of Keynesian economics and its significance in contemporary economic analysis and policymaking.

I extend my heartfelt thanks to my family and friends for their unwavering support and encouragement throughout this project. Their motivation has been a constant source of inspiration for me.

Once again, I want to express my gratitude to my school, my teacher, and everyone who has contributed to the successful completion of this economics project. It has been an invaluable learning journey, and I look forward to applying this knowledge in my future studies and endeavors.
This newly found threat comes from a supernova's blast wave striking dense gas surrounding the exploded star, as depicted in the upper right of our artist's impression. When this impact occurs, it can produce a large dose of X-rays that reaches an Earth-like planet (shown in the lower left, illuminated by its host star out of view to the right) months to years after the explosion, and the exposure may last for decades. Such intense exposure may trigger an extinction event on the planet.

A new study reporting this threat is based on X-ray observations of 31 supernovae and their aftermath, mostly from NASA's Chandra X-ray Observatory, Swift, and NuSTAR missions, and ESA's XMM-Newton. It shows that planets located as much as about 160 light-years away can be subjected to lethal doses of radiation. Prior to this, most research on the effects of supernova explosions had focused on the danger from two periods: the intense radiation produced by a supernova in the days and months after the explosion, and the energetic particles that arrive hundreds to thousands of years afterward.

If a torrent of X-rays sweeps over a nearby planet, the radiation could severely alter the planet's atmospheric chemistry. For an Earth-like planet, this process could wipe out a significant portion of ozone, which ultimately protects life from the dangerous ultraviolet radiation of its host star. It could also lead to the demise of a wide range of organisms, especially marine ones at the foundation of the food chain, leading to an extinction event. After years of lethal X-ray exposure from the supernova's interaction, and the impact of ultraviolet radiation from an Earth-like planet's host star, a large amount of nitrogen dioxide may be produced, causing a brown haze in the atmosphere, as shown in the illustration. A "de-greening" of land masses could also occur because of damage to plants.

Among the four supernovae in the study, SN 2010jl has produced the most X-rays. The authors estimate that it delivered a lethal dose of X-rays to Earth-like planets at distances of less than about 100 light-years.

There is strong evidence, including the detection in different locations around the globe of a radioactive type of iron, that supernovae occurred close to Earth between about 2 million and 8 million years ago. Researchers estimate these supernovae were between about 65 and 500 light-years away from Earth. Although the Earth and the Solar System are currently in a safe space in terms of potential supernova explosions, many other planets in the Milky Way are not. These high-energy events would effectively shrink the areas within the Milky Way galaxy, known as the Galactic Habitable Zone, where conditions would be conducive to life as we know it. Because X-ray observations of supernovae are sparse, particularly of the variety that strongly interact with their surroundings, the authors urge follow-up observations of interacting supernovae for months and years after the explosion.

The paper describing this result appears in the April 20, 2023 issue of The Astrophysical Journal. The other authors of the paper are Ian Brunton, Connor O'Mahoney, and Brian Fields (University of Illinois at Urbana-Champaign), Adrian Melott (University of Kansas), and Brian Thomas (Washburn University in Kansas). NASA's Marshall Space Flight Center manages the Chandra program.
The Smithsonian Astrophysical Observatory's Chandra X-ray Center controls science operations from Cambridge, Massachusetts, and flight operations from Burlington, Massachusetts.
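As a rough way to see how the "lethal distances" quoted above scale, the sketch below assumes simple geometric dilution of the X-ray fluence with the square of distance and no absorption; that scaling law and the function name are my own assumptions, not from the study, and the reference distance is the approximate figure quoted for SN 2010jl.

```python
# Rough illustrative sketch (not from the study): X-ray fluence falls off as 1/d^2,
# so a lethal distance quoted for one event can be rescaled to other distances.

def relative_dose(distance_ly, lethal_distance_ly):
    """Dose at distance_ly, as a multiple of the dose received at the lethal distance."""
    return (lethal_distance_ly / distance_ly) ** 2

LETHAL_DISTANCE_SN2010JL = 100  # light-years, order-of-magnitude figure from the article

for d in (50, 100, 160):
    print(f"at {d:>3} ly: {relative_dose(d, LETHAL_DISTANCE_SN2010JL):.2f}x the lethal-distance dose")
# A planet at 50 ly would receive roughly 4x the dose received at 100 ly,
# while one at 160 ly would receive well under half of it.
```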
Sine and cosine functions are used primarily in physics and engineering to model oscillatory behavior, such as the motion of a pendulum or the current in an $AC$ electrical circuit. But these functions also arise in the other sciences. In this project, we consider an application to biology—we use sine functions to model the population of a predator and its prey. An isolated island is inhabited by two species of mammals: lynx and hares. The lynx are predators who feed on the hares, their prey. The lynx and hare populations change cyclically, as graphed in Figure 1. In part $A$ of the graph, hares are abundant, so the lynx have plenty to eat and their population increases. By the time portrayed in part $B$, so many lynx are feeding on the hares that the hare population declines. In part $C$, the hare population has declined so much that there is not enough food for the lynx, so the lynx population starts to decrease. In part $D$, so many lynx have died that the hares have few enemies, and their population increases again. This takes us back to where we started, and the cycle repeats over and over again. The graphs in Figure 1 are sine curves that have been shifted upward, so they are graphs of functions of the form $$y = a \; \sin \; k(t - b) + c$$ Here $c$ is the amount by which the sine curve has been shifted vertically. Note that $c$ is the average value of the function, halfway between the highest and lowest values on the graph. The amplitude $|\; a \; |$ is the amount by which the graph varies above and below the average value (see Figure 2). $$y = a \; \sin \; k(t - b) + c$$ - Find functions of the form $y = a \; \sin \; k(t - b) + c$ that model the lynx and hare populations graphed in Figure 1. Graph both functions on your calculator and compare to Figure 1 to verify that your functions are the right ones. - Add the lynx and hare population functions to get a new function that models the total mammal population on this island. Graph this function on your calculator, and find its average value, amplitude, period, and phase shift. How are the average value and period of the mammal population function related to the average value and period of the lynx and hare population functions? - A small lake on the island contains two species of fish: hake and redfish. The hake are predators that eat the redfish. The fish population in the lake varies periodically with period $180$ days. The number of hake varies between $500$ and $1500$, and the number of redfish varies between $1000$ and $3000$. The hake reach their maximum population $30$ days after the redfish have reached their maximum population in the cycle. - Sketch a graph (like the one in Figure 1) that shows two complete periods of the population cycle for these species of fish. Assume that $t = 0$ corresponds to a time when the redfish population is at a maximum. - Find cosine functions of the form $y = a \; \cos \; k(t - b) + c$ that model the hake and redfish populations in the lake. - In real life, most predator/prey populations do not behave as simply as the examples we have described here. In most cases, the populations of predator and prey oscillate, but the amplitude of the oscillations gets smaller and smaller, so that eventually both populations stabilize near a constant value. Sketch a rough graph that illustrates how the populations of predator and prey might behave in this case.
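As a quick numerical check on the kind of functions this project asks for, the sketch below uses the hake and redfish figures given above: a 180-day period, hake varying between 500 and 1500, redfish between 1000 and 3000, the hake peak lagging the redfish peak by 30 days, and $t = 0$ at a redfish maximum. It is one possible parameterization consistent with those values, so treat it as a check on part 3(b) rather than the worked solution.

```python
# Shifted cosine models for the hake/redfish populations, built only from the
# figures stated in the project (period, min/max values, 30-day lag).
import math

PERIOD = 180.0
K = 2 * math.pi / PERIOD                      # k in y = a*cos(k*(t - b)) + c

def redfish(t):
    # average 2000, amplitude 1000, maximum at t = 0
    return 1000 * math.cos(K * t) + 2000

def hake(t):
    # average 1000, amplitude 500, maximum 30 days after the redfish maximum
    return 500 * math.cos(K * (t - 30)) + 1000

for t in (0, 30, 90, 120, 180):
    print(f"day {t:>3}: redfish = {redfish(t):7.1f}, hake = {hake(t):7.1f}")
# Redfish peak at 3000 on day 0 and bottom out at 1000 on day 90;
# hake peak at 1500 on day 30, thirty days after the redfish maximum.
```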
Present value

In economics and finance, present value (PV), also known as present discounted value, is the value of an expected income stream determined as of the date of valuation. The present value is always less than or equal to the future value because money has interest-earning potential, a characteristic referred to as the time value of money, except during times of negative interest rates, when the present value will be more than the future value. Time value can be described with the simplified phrase, "A dollar today is worth more than a dollar tomorrow". Here, 'worth more' means that its value is greater. A dollar today is worth more than a dollar tomorrow because the dollar can be invested and earn a day's worth of interest, making the total accumulate to a value more than a dollar by tomorrow. Interest can be compared to rent. Just as rent is paid to a landlord by a tenant without the ownership of the asset being transferred, interest is paid to a lender by a borrower who gains access to the money for a time before paying it back. By letting the borrower have access to the money, the lender has sacrificed the exchange value of this money, and is compensated for it in the form of interest. The initial amount of the borrowed funds (the present value) is less than the total amount of money paid to the lender.

Present value calculations, and similarly future value calculations, are used to value loans, mortgages, annuities, sinking funds, perpetuities, bonds, and more. These calculations are used to make comparisons between cash flows that don't occur at simultaneous times, since time dates must be consistent in order to make comparisons between values. When deciding between projects in which to invest, the choice can be made by comparing respective present values of such projects by means of discounting the expected income streams at the corresponding project interest rate, or rate of return. The project with the highest present value, i.e. that is most valuable today, should be chosen.

Years' purchase

The traditional method of valuing future income streams as a present capital sum is to multiply the average expected annual cash-flow by a multiple, known as "years' purchase". For example, in selling to a third party a property leased to a tenant under a 99-year lease at a rent of $10,000 per annum, a deal might be struck at "20 years' purchase", which would value the lease at 20 * $10,000, i.e. $200,000. This equates to a present value discounted in perpetuity at 5%. For a riskier investment the purchaser would demand to pay a lower number of years' purchase. This was the method used for example by the English crown in setting re-sale prices for manors seized at the Dissolution of the Monasteries in the early 16th century. The standard usage was 20 years' purchase.

Background

If offered a choice between $100 today or $100 in one year, and there is a positive real interest rate throughout the year, ceteris paribus, a rational person will choose $100 today. This is described by economists as time preference.
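A quick check of the years' purchase example above: valuing a level rent in perpetuity at 5% is the same as paying 20 years' purchase. A minimal Python sketch, with variable names chosen only for illustration:

```python
# Years' purchase as the reciprocal of the discount rate:
# valuing a perpetual rent of $10,000/yr at 5% gives 20 years' purchase.
rent = 10_000.0          # annual rent from the example above
rate = 0.05              # perpetuity discount rate

pv_perpetuity = rent / rate          # present value of a level perpetuity
years_purchase = pv_perpetuity / rent

print(pv_perpetuity)    # 200000.0  (matches 20 * $10,000)
print(years_purchase)   # 20.0
```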
Time preference can be measured by auctioning off a risk free security—like a US Treasury bill. If a $100 note with a zero coupon, payable in one year, sells for $80 now, then $80 is the present value of the note that will be worth $100 a year from now. This is because money can be put in a bank account or any other (safe) investment that will return interest in the future. An investor who has some money has two options: to spend it right now or to save it. But the financial compensation for saving it (and not spending it) is that the money value will accrue through the compound interest that he or she will receive from a borrower (the bank account in which he has the money deposited). Therefore, to evaluate the real value of an amount of money today after a given period of time, economic agents compound the amount of money at a given (interest) rate. Most actuarial calculations use the risk-free interest rate which corresponds to the minimum guaranteed rate provided by a bank's saving account for example, assuming no risk of default by the bank to return the money to the account holder on time. To compare the change in purchasing power, the real interest rate (nominal interest rate minus inflation rate) should be used. The operation of evaluating a present value into the future value is called a capitalization (how much will $100 today be worth in 5 years?). The reverse operation—evaluating the present value of a future amount of money—is called a discounting (how much will $100 received in 5 years—at a lottery for example—be worth today?). It follows that if one has to choose between receiving $100 today and $100 in one year, the rational decision is to choose the $100 today. If the money is to be received in one year and assuming the savings account interest rate is 5%, the person has to be offered at least $105 in one year so that the two options are equivalent (either receiving $100 today or receiving $105 in one year). This is because if $100 is deposited in a savings account, the value will be $105 after one year, again assuming no risk of losing the initial amount through bank default. Interest is the additional amount of money gained between the beginning and the end of a time period. Interest represents the time value of money, and can be thought of as rent that is required of a borrower in order to use money from a lender. For example, when an individual takes out a bank loan, the individual is charged interest. Alternatively, when an individual deposits money into a bank, the money earns interest. In this case, the bank is the borrower of the funds and is responsible for crediting interest to the account holder. Similarly, when an individual invests in a company (through corporate bonds, or through stock), the company is borrowing funds, and must pay interest to the individual (in the form of coupon payments, dividends, or stock price appreciation). The interest rate is the change, expressed as a percentage, in the amount of money during one compounding period. A compounding period is the length of time that must transpire before interest is credited, or added to the total. For example, interest that is compounded annually is credited once a year, and the compounding period is one year. Interest that is compounded quarterly is credited four times a year, and the compounding period is three months. A compounding period can be any length of time, but some common periods are annually, semiannually, quarterly, monthly, daily, and even continuously. 
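A small Python sketch of the two operations described above, capitalization and discounting, using the $100-at-5% and $80 Treasury-bill figures from the text; the helper names are mine:

```python
def future_value(pv, rate, periods):
    """Capitalization: what a present amount grows to at compound interest."""
    return pv * (1 + rate) ** periods

def present_value(fv, rate, periods):
    """Discounting: what a future amount is worth today."""
    return fv / (1 + rate) ** periods

# The example from the text: $100 today at 5% is equivalent to $105 in one year.
print(future_value(100, 0.05, 1))    # 105.0
print(present_value(105, 0.05, 1))   # 100.0

# The Treasury-bill example: a note worth $100 in one year selling for $80 now
# implies an annual return of 100/80 - 1 = 25% for whoever buys it today.
print(100 / 80 - 1)                  # 0.25
```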
There are several types and terms associated with interest rates:
- Compound interest, interest that increases exponentially over subsequent periods,
- Simple interest, additive interest that does not increase
- Effective interest rate, the effective equivalent compared to multiple compound interest periods
- Nominal annual interest, the simple annual interest rate of multiple interest periods
- Discount rate, an inverse interest rate when performing calculations in reverse
- Continuously compounded interest, the mathematical limit of an interest rate with a period of zero time.
- Real interest rate, which accounts for inflation.

The operation of evaluating a present sum of money some time in the future is called a capitalization (how much will 100 today be worth in five years?). The reverse operation—evaluating the present value of a future amount of money—is called discounting (how much will 100 received in five years be worth today?). Spreadsheets commonly offer functions to compute present value. In Microsoft Excel, the "=PV(...)" function handles a single future payment or a series of equal periodic payments, while "=NPV(...)" discounts a stream of unequal cash flows. Programs will calculate present value flexibly for any cash flow and interest rate, or for a schedule of different interest rates at different times.

Present value of a lump sum

The most commonly applied model of present valuation uses compound interest. The standard formula is

$$PV = \frac{C}{(1+i)^n}$$

where $C$ is the future amount of money that must be discounted, $n$ is the number of compounding periods between the present date and the date where the sum is worth $C$, and $i$ is the interest rate for one compounding period (the end of a compounding period is when interest is applied, for example, annually, semiannually, quarterly, monthly, daily). The interest rate $i$ is given as a percentage, but expressed as a decimal in this formula. Often, $(1+i)^{-n}$ is referred to as the Present Value Factor. This is also found from the formula for the future value with negative time. For example, if you are to receive $1000 in five years, and the effective annual interest rate during this period is 10% (or 0.10), then the present value of this amount is

$$PV = \frac{1000}{(1 + 0.10)^5} \approx 620.92$$

The interpretation is that for an effective annual interest rate of 10%, an individual would be indifferent to receiving $1000 in five years, or $620.92 today.

Net present value of a stream of cash flows

A cash flow is an amount of money that is either paid out or received, differentiated by a negative or positive sign, at the end of a period. Conventionally, cash flows that are received are denoted with a positive sign (total cash has increased) and cash flows that are paid out are denoted with a negative sign (total cash has decreased). The cash flow for a period represents the net change in money of that period. Calculating the net present value, $NPV$, of a stream of cash flows consists of discounting each cash flow to the present, using the present value factor and the appropriate number of compounding periods, and combining these values:

$$NPV = \sum_{t=1}^{N} \frac{CF_t}{(1+i)^t}$$

For example, if a stream of cash flows consists of +$100 at the end of period one, -$50 at the end of period two, and +$35 at the end of period three, and the interest rate per compounding period is 5% (0.05), then the present values of these three cash flows are

$$PV_1 = \frac{100}{1.05} \approx 95.24, \qquad PV_2 = \frac{-50}{1.05^2} \approx -45.35, \qquad PV_3 = \frac{35}{1.05^3} \approx 30.23$$

Thus the net present value would be

$$NPV = 95.24 - 45.35 + 30.23 = 80.12$$

There are a few considerations to be made.
- The periods might not be consecutive. If this is the case, the exponents will change to reflect the appropriate number of periods
- The interest rates per period might not be the same.
The cash flow must be discounted using the interest rate for the appropriate period: if the interest rate changes, the sum must be discounted to the period where the change occurs using the second interest rate, then discounted back to the present using the first interest rate. For example, if the cash flow for period one is $100, and $200 for period two, and the interest rate for the first period is 5%, and 10% for the second, then the net present value would be

$$NPV = \frac{100}{1.05} + \frac{200}{1.05 \times 1.10} \approx 95.24 + 173.16 = 268.40$$

- The interest rate must necessarily coincide with the payment period. If not, either the payment period or the interest rate must be modified. For example, if the interest rate given is the effective annual interest rate, but cash flows are received (and/or paid) quarterly, the interest rate per quarter must be computed. This can be done by converting the effective annual interest rate, $i$, to the nominal annual interest rate compounded quarterly, $i_4$:

$$\left(1 + \frac{i_4}{4}\right)^4 = 1 + i \quad\Longrightarrow\quad i_4 = 4\left[(1+i)^{1/4} - 1\right]$$

Here, $i_4$ is the nominal annual interest rate, compounded quarterly, and the interest rate per quarter is $i_4/4$.

Present value of an annuity

Many financial arrangements (including bonds, other loans, leases, salaries, membership dues, annuities including annuity-immediate and annuity-due, straight-line depreciation charges) stipulate structured payment schedules; payments of the same amount at regular time intervals. Such an arrangement is called an annuity. The expressions for the present value of such payments are summations of geometric series. There are two types of annuities: an annuity-immediate and an annuity-due. For an annuity immediate, payments are received (or paid) at the end of each period, at times 1 through $n$, while for an annuity due, payments are received (or paid) at the beginning of each period, at times 0 through $n-1$. This subtle difference must be accounted for when calculating the present value. An annuity due is an annuity immediate with one more interest-earning period. Thus, the two present values differ by a factor of $(1+i)$:

$$PV_{\text{annuity due}} = PV_{\text{annuity immediate}} \times (1+i)$$

The present value of an annuity immediate is the value at time 0 of the stream of cash flows:

$$PV = C \cdot \frac{1 - (1+i)^{-n}}{i} \qquad (1)$$

where
- $n$ = number of periods,
- $C$ = amount of each cash flow,
- $i$ = effective periodic interest rate or rate of return.

An approximation for annuity and loan calculations

The above formula (1) for annuity immediate calculations offers little insight for the average user and requires the use of some form of computing machinery. There is an approximation which is less intimidating, easier to compute and offers some insight for the non-specialist. It is given by

$$C \approx PV\left(\frac{1}{n} + \frac{2}{3}\,i\right)$$

where, as above, $C$ is the annuity payment, $PV$ is the principal, $n$ is the number of payments, starting at the end of the first period, and $i$ is the interest rate per period. Equivalently, $C$ is the periodic loan repayment for a loan of $PV$ extending over $n$ periods at interest rate $i$. The formula is valid (for positive $n$, $i$) for $ni \le 3$; a different approximation, not reproduced here, applies for $ni \ge 3$. The formula can, under some circumstances, reduce the calculation to one of mental arithmetic alone. For example, what are the (approximate) loan repayments for a loan of PV = $10,000 repaid annually for n = ten years at 15% interest (i = 0.15)? The applicable approximate formula is C ≈ 10,000*(1/10 + (2/3) 0.15) = 10,000*(0.1+0.1) = 10,000*0.2 = $2000 pa by mental arithmetic alone. The true answer is $1993, very close. The overall approximation is accurate to within ±6% (for all n≥1) for interest rates 0≤i≤0.20 and within ±10% for interest rates 0.20≤i≤0.40. It is, however, intended only for "rough" calculations.
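The following sketch compares the exact annuity formula (1) with the rule-of-thumb approximation on the $10,000, ten-year, 15% loan example above. The function names are illustrative; only the formulas come from the text.

```python
def annuity_payment_exact(pv, i, n):
    """Periodic payment for a loan of pv over n periods at rate i (annuity immediate)."""
    return pv * i / (1 - (1 + i) ** (-n))

def annuity_payment_approx(pv, i, n):
    """Rule-of-thumb approximation C ~= PV * (1/n + (2/3) i), valid for n*i <= 3."""
    return pv * (1.0 / n + (2.0 / 3.0) * i)

# The loan example from the text: $10,000 repaid annually over 10 years at 15%.
print(round(annuity_payment_exact(10_000, 0.15, 10), 2))   # ~1992.52 (the "true answer is $1993")
print(annuity_payment_approx(10_000, 0.15, 10))            # 2000.0
```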
Present value of a perpetuity

A perpetuity refers to periodic payments, receivable indefinitely, although few such instruments exist. The present value of a perpetuity can be calculated by taking the limit of the above formula as $n$ approaches infinity:

$$PV = \frac{C}{i} \qquad (2)$$

Formula (2) can also be found by subtracting from (1) the present value of a perpetuity delayed n periods, or directly by summing the present value of the payments, which form a geometric series. Again there is a distinction between a perpetuity immediate – when payments are received at the end of the period – and a perpetuity due – payment received at the beginning of a period. And similarly to annuity calculations, a perpetuity due and a perpetuity immediate differ by a factor of $(1+i)$:

$$PV_{\text{perpetuity due}} = \frac{C}{i}\,(1+i)$$

PV of a bond

A corporation issues a bond, an interest earning debt security, to an investor to raise funds. The bond has a face value, $F$, a coupon rate, $r$, and a maturity date which in turn yields the number of periods, $N$, until the debt matures and must be repaid. A bondholder will receive coupon payments semiannually (unless otherwise specified) in the amount of $Fr$, until the bond matures, at which point the bondholder will receive the final coupon payment and the face value of the bond, $F$. The present value of a bond is the purchase price. The purchase price is equal to the bond's face value if the coupon rate is equal to the current interest rate of the market, and in this case, the bond is said to be sold 'at par'. If the coupon rate is less than the market interest rate, the purchase price will be less than the bond's face value, and the bond is said to have been sold 'at a discount', or below par. Finally, if the coupon rate is greater than the market interest rate, the purchase price will be greater than the bond's face value, and the bond is said to have been sold 'at a premium', or above par. The purchase price is the present value of the coupon stream plus the present value of the redeemed face value, discounted at the market rate $i$ per period:

$$P = Fr \cdot \frac{1 - (1+i)^{-N}}{i} + \frac{F}{(1+i)^{N}}$$

In fact, the present value of a cashflow at a constant interest rate is mathematically one point in the Laplace transform of that cashflow, evaluated with the transform variable (usually denoted "s") equal to the interest rate. The full Laplace transform is the curve of all present values, plotted as a function of interest rate. For discrete time, where payments are separated by large time periods, the transform reduces to a sum, but when payments are ongoing on an almost continual basis, the mathematics of continuous functions can be used as an approximation.

These calculations must be applied carefully, as there are underlying assumptions:
- That it is not necessary to account for price inflation, or alternatively, that the cost of inflation is incorporated into the interest rate.
- That the likelihood of receiving the payments is high—or, alternatively, that the default risk is incorporated into the interest rate.

See time value of money for further discussion. There are mainly two flavors of Present Value. Whenever there will be uncertainties in both timing and amount of the cash flows, the expected present value approach will often be the appropriate technique.
- Traditional Present Value Approach – in this approach a single set of estimated cash flows and a single interest rate (commensurate with the risk, typically a weighted average of cost components) will be used to estimate the fair value.
- Expected Present Value Approach – in this approach multiple cash flows scenarios with different/expected probabilities and a credit-adjusted risk free rate are used to estimate the fair value.
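As one possible illustration of the bond pricing relation above, the sketch below discounts the coupon stream and the face value at a per-period market rate; when the coupon rate equals the market rate the bond prices at par, as the text states. Function and variable names are mine.

```python
def bond_price(face, coupon_rate, market_rate, periods):
    """Present value of a bond: discounted coupons plus discounted face value.

    face        -- face value F
    coupon_rate -- coupon rate r per period (coupon paid each period is F*r)
    market_rate -- market interest rate i per period
    periods     -- number of periods N until maturity
    """
    coupon = face * coupon_rate
    pv_coupons = coupon * (1 - (1 + market_rate) ** (-periods)) / market_rate
    pv_face = face / (1 + market_rate) ** periods
    return pv_coupons + pv_face

# When the coupon rate equals the market rate the bond prices at par:
print(round(bond_price(1000, 0.03, 0.03, 20), 2))   # 1000.0  ('at par')
# Coupon below the market rate -> priced below face value ('at a discount'):
print(round(bond_price(1000, 0.02, 0.03, 20), 2))   # ~851.23
```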
Choice of interest rate

The interest rate used is the risk-free interest rate if there are no risks involved in the project. The rate of return from the project must equal or exceed this rate of return or it would be better to invest the capital in these risk free assets. If there are risks involved in an investment this can be reflected through the use of a risk premium. The risk premium required can be found by comparing the project with the rate of return required from other projects with similar risks. Thus it is possible for investors to take account of any uncertainty involved in various investments.

Present value method of valuation

An investor, the lender of money, must decide the financial project in which to invest their money, and present value offers one method of deciding. A financial project requires an initial outlay of money, such as the price of stock or the price of a corporate bond. The project claims to return the initial outlay, as well as some surplus (for example, interest, or future cash flows). An investor can decide which project to invest in by calculating each project's present value (using the same interest rate for each calculation) and then comparing them. The project with the smallest present value – the least initial outlay – will be chosen because it offers the same return as the other projects for the least amount of money.
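A minimal sketch of the comparison described above: discount each candidate project's expected income stream at the same rate and compare the resulting present values. The two projects and the 8% rate are invented purely for illustration.

```python
def present_value_of_stream(cash_flows, rate):
    """Discount a list of end-of-period cash flows back to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Two hypothetical projects with the same total undiscounted income,
# compared at the same 8% rate (all figures are illustrative only).
project_a = [500, 500, 500]      # even income over three periods
project_b = [0, 0, 1500]         # same total, but received later

rate = 0.08
print(round(present_value_of_stream(project_a, rate), 2))   # ~1288.55
print(round(present_value_of_stream(project_b, rate), 2))   # ~1190.75
# Earlier cash flows are worth more today, so project A has the higher PV.
```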
Herd behavior describes how individuals in a group can act collectively without centralized direction. The term can refer to the behavior of animals in herds, packs, bird flocks, fish schools and so on, as well as the behavior of humans in demonstrations, riots and general strikes, sporting events, religious gatherings, episodes of mob violence and everyday decision-making, judgement and opinion-forming. Raafat, Chater and Frith proposed an integrated approach to herding, describing two key issues, the mechanisms of transmission of thoughts or behavior between individuals and the patterns of connections between them. They suggested that bringing together diverse theoretical approaches of herding behavior illuminates the applicability of the concept to many domains, ranging from cognitive neuroscience to economics. A group of animals fleeing from a predator shows the nature of herd behavior. In 1971, in the oft cited article "Geometry For The Selfish Herd," evolutionary biologist W. D. Hamilton asserted that each individual group member reduces the danger to itself by moving as close as possible to the center of the fleeing group. Thus the herd appears as a unit in moving together, but its function emerges from the uncoordinated behavior of self-serving individuals. Asymmetric aggregation of animals under panic conditions has been observed in many species, including humans, mice, and ants. Theoretical models have demonstrated symmetry-breaking similar to observations in empirical studies. For example, when panicked individuals are confined to a room with two equal and equidistant exits, a majority will favor one exit while the minority will favor the other. Characteristics of escape panic include: - Individuals attempt to move faster than normal. - Interactions between individuals become physical. - Exits become arched and clogged. - Escape is slowed by fallen individuals serving as obstacles. - Individuals display a tendency towards mass or copied behavior. - Alternative or less used exits are overlooked. In human societies The philosophers Søren Kierkegaard and Friedrich Nietzsche were among the first to criticize what they referred to as "the crowd" (Kierkegaard) and "herd morality" and the "herd instinct" (Nietzsche) in human society. Modern psychological and economic research has identified herd behavior in humans to explain the phenomena of large numbers of people acting in the same way at the same time. The British surgeon Wilfred Trotter popularized the "herd behavior" phrase in his book, Instincts of the Herd in Peace and War (1914). In The Theory of the Leisure Class, Thorstein Veblen explained economic behavior in terms of social influences such as "emulation," where some members of a group mimic other members of higher status. In "The Metropolis and Mental Life" (1903), early sociologist George Simmel referred to the "impulse to sociability in man", and sought to describe "the forms of association by which a mere sum of separate individuals are made into a 'society' ". Other social scientists explored behaviors related to herding, such as Freud (crowd psychology), Carl Jung (collective unconscious), and Gustave Le Bon (the popular mind). Swarm theory observed in non-human societies is a related concept and is being explored as it occurs in human society. Stock market bubbles Large stock market trends often begin and end with periods of frenzied buying (bubbles) or selling (crashes). 
Many observers cite these episodes as clear examples of herding behavior that is irrational and driven by emotion—greed in the bubbles, fear in the crashes. Individual investors join the crowd of others in a rush to get in or out of the market. Some followers of the technical analysis school of investing see the herding behavior of investors as an example of extreme market sentiment. The academic study of behavioral finance has identified herding in the collective irrationality of investors, particularly the work of Nobel laureates Vernon L. Smith, Amos Tversky, Daniel Kahneman, and Robert Shiller.[a] Hey and Morone (2004) analyzed a model of herd behavior in a market context. Their work is related to at least two important strands of literature. The first of these strands is that on herd behavior in a non-market context. The seminal references are Banerjee (1992) and Bikhchandani, Hirshleifer and Welch (1992), both of which showed that herd behavior may result from private information not publicly shared. More specifically, both of these papers showed that individuals, acting sequentially on the basis of private information and public knowledge about the behavior of others, may end up choosing the socially undesirable option. The second of the strands of literature motivating this paper is that of information aggregation in market contexts. A very early reference is the classic paper by Grossman and Stiglitz (1976) that showed that uninformed traders in a market context can become informed through the price in such a way that private information is aggregated correctly and efficiently. In this strand of the literature, the most commonly used empirical methodologies to test for herding toward the average, are the works of Christie and Huang (1995) and Chang, Cheng and Khorana (2000). Overall, it was shown that it is possible to observe herd-type behavior in a market context. The results refer to a market with a well-defined fundamental value. Even if herd behavior might only be observed rarely, this has important consequences for a whole range of real markets – most particularly foreign exchange markets. Crowds that gather on behalf of a grievance can involve herding behavior that turns violent, particularly when confronted by an opposing ethnic or racial group. The Los Angeles riots of 1992, New York Draft Riots and Tulsa Race Riot are notorious in U.S. history. The idea of a "group mind" or "mob behavior" was put forward by the French social psychologists Gabriel Tarde and Gustave Le Bon. "Benign" herding behaviors may occur frequently in everyday decisions based on learning from the information of others, as when a person on the street decides which of two restaurants to dine in. Suppose that both look appealing, but both are empty because it is early evening; so at random, this person chooses restaurant A. Soon a couple walks down the same street in search of a place to eat. They see that restaurant A has customers while B is empty, and choose A on the assumption that having customers makes it the better choice. Because other passersby do the same thing into the evening, restaurant A does more business that night than B. This phenomenon is also referred as an information cascade. Herd behavior is often a useful tool in marketing and, if used properly, can lead to increases in sales and changes to the structure of society. 
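The restaurant example above can be turned into a toy simulation: each passerby receives a weak private signal about which restaurant is better but also sees how many customers each already has, and defers to the crowd once the gap is large enough. This is only an illustrative model of an information cascade, not taken from any of the cited studies; all parameters are arbitrary.

```python
import random

def restaurant_cascade(n_passersby=20, signal_accuracy=0.6, seed=1):
    """Toy information cascade: diners follow the visible crowd once it is large."""
    random.seed(seed)
    better = "A"                       # ground truth, unknown to the passersby
    customers = {"A": 0, "B": 0}
    for _ in range(n_passersby):
        # A weak private signal that points to the better restaurant most of the time.
        signal = better if random.random() < signal_accuracy else ("B" if better == "A" else "A")
        gap = customers["A"] - customers["B"]
        if abs(gap) >= 2:              # observed behavior outweighs the private signal
            choice = "A" if gap > 0 else "B"
        else:
            choice = signal
        customers[choice] += 1
    return customers

# Most later arrivals pile onto whichever restaurant happened to get an early lead,
# whether or not it is actually the better one.
print(restaurant_cascade())
```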
Whilst it has been shown that financial incentives cause action in large numbers of people, herd mentality often wins out in a case of “Keeping up with the Joneses.” Herd Behavior in Brand and Product success Communications technologies have contributed to the proliferation to consumer choice and “the power of crowds,” Consumers increasingly have more access to opinions and information from both opinion leaders and formers on platforms that have largely user-generated content, and thus have more tools with which to complete any decision-making process. Popularity is seen as an indication of better quality, and consumers will use the opinions of others posted on these platforms as a powerful compass to guide them towards products and brands that align with their preconceptions and the decisions of others in their peer groups. Taking into account differences in needs and their position in the socialization process, Lessig & Park examined groups of students and housewives and the influence that these reference groups have on one another. By way of herd mentality, students tended to encourage each other towards beer, hamburger and cigarettes, whilst housewives tended to encourage each other towards furniture and detergent. Whilst this particular study was done in 1977, one cannot discount its findings in today’s society. A study done by Burke, Leykin, Li and Zhang in 2014 on the social influence on shopper behavior shows that shoppers are influenced by direct interactions with companions, and as a group size grows, herd behaviour becomes more apparent. Discussions that create excitement and interest have greater impact on touch frequency and purchase likelihood grows with greater involvement caused by a large group. Shoppers in this Midwestern American shopping outlet were monitored and their purchases noted, and it was found up to a point, potential customers preferred to be in stores which had moderate levels of traffic. The other people in the store not only served as company, but also provided an inference point on which potential customers could model their behavior and make purchase decisions, as with any reference group or community. Social media can also be a powerful tool in perpetuating herd behaviour. Its immeasurable amount of user-generated content serves as a platform for opinion leaders to take the stage and influence purchase decisions, and recommendations from peers and evidence of positive online experience all serve to help consumers make purchasing decisions. Gunawan and Huarng’s 2015 study concluded that social influence is essential in framing attitudes towards brands, which in turn leads to purchase intention. Influencers form norms which their peers are found to follow, and targeting extroverted personalities increases chances of purchase even further. This is because the stronger personalities tend to be more engaged on consumer platforms and thus spread word of mouth information more efficiently. Many brands have begun to realise the importance of brand ambassadors and influencers, and it is being shown more clearly that herd behaviour can be used to drive sales and profits exponentially in favour of any brand through examination of these instances. Herd Behavior in Social Marketing Marketing can easily transcend beyond commercial roots, in that it can be used to encourage action to do with health, environmentalism and general society. 
Herd mentality often takes a front seat when it comes to social marketing, paving the way for campaigns such as Earth Day, and the variety of anti-smoking and anti-obesity campaigns seen in every country. Within cultures and communities, marketers must aim to influence opinion leaders who in turn influence each other, as it is the herd mentality of any group of people that ensures a social campaign’s success. A campaign run by Som la Pera in Spain to combat teenage obesity found that campaigns run in schools are more effective due to influence of teachers and peers, and students’ high visibility, and their interaction with one another. Opinion leaders in schools created the logo and branding for the campaign, built content for social media and led in-school presentations to engage audience interaction. It was thus concluded that the success of the campaign was rooted in the fact that its means of communication was the audience itself, giving the target audience a sense of ownership and empowerment. As mentioned previously, students exert a high level of influence over one anothers, and by encouraging stronger personalities to lead opinions, the organizers of the campaign were able to secure the attention of other students who identified with the reference group. Herd behaviour not only applies to students in schools where they are highly visible, but also amongst communities where perceived action plays a strong role. Between 2003 and 2004, California State University carried out a study to measure household conservation of energy, and motivations for doing so. It was found that factors like saving the environment, saving money or social responsibility did not have as great an impact on each household as the perceived behaviour of their neighbours did. Although the financial incentives of saving money, closely followed by moral incentives of protecting the environment, are often thought of as being a community’s greatest guiding compass, more households responded to the encouragement to save energy when they were told that 77% of their neighbours were using fans instead of air conditioning, proving that communities are more likely to engage in a behaviour if they think that everyone else is already taking part. Herd behaviours shown in the two examples exemplify that it can be a powerful tool in social marketing, and if harnessed correctly, has the potential to achieve great change. It is clear that opinion leaders and their influence achieve huge reach amongst their reference groups and thusly can be used as the loudest voices to encourage others in any collective direction. - Bandwagon effect - Collective behavior - Collective consciousness - Collective effervescence - Collective intelligence - Crowd psychology - Group behavior - Herd mentality - Hive mind - Informational cascade - Mass hysteria - Mean world syndrome - Mob rule - Moral panic - Social proof - Spontaneous order - Swarm intelligence - Team player - The 2009 Birmingham, Millennium Point stampede - Symmetry breaking of escaping ants - Braha, D (2012). "Global Civil Unrest: Contagion, Self-Organization, and Prediction.". PLoS ONE. 7 (10): e48596. doi:10.1371/journal.pone.0048596. - Raafat, R. M.; Chater, N.; Frith, C. (2009). "Herding in humans". Trends in Cognitive Sciences. 13 (10): 420–428. doi:10.1016/j.tics.2009.08.002. - Burke, C. J.; Tobler, P. N.; Schultz, W.; Baddeley, M. (2010). "Striatal BOLD response reflects the impact of herd information on financial decisions". Frontiers in Human Neuroscience. 4: 48. 
doi:10.3389/fnhum.2010.00048. PMID 20589242. - Hamilton, W. D. (1971). "Geometry for the Selfish Herd". Journal of Theoretical Biology. 31 (2): 295–311. doi:10.1016/0022-5193(71)90189-5. PMID 5104951. - Altshuler, E.; Ramos, O.; Nuñez, Y.; Fernández, J. "Panic-induced symmetry breaking in escaping ants" (PDF). University of Havana, Havana, Cuba. Retrieved 2011-05-18. - Altshuler, E.; Ramos, O.; Núñez, Y.; Fernández, J.; Batista-Leyva, A. J.; Noda, C. (2005). "Symmetry Breaking in Escaping Ants". The American Naturalist. 166 (6): 643–649. doi:10.1086/498139. PMID 16475081. - Markus K. Brunnermeier, Asset Pricing under Asymmetric Information: Bubbles, Crashes, Technical Analysis, and Herding, Oxford University Press (2001). - Robert Prechter, The Wave Principle of Human Social Behavior, New Classics Library (1999), pp. 152–153. - Shiller, Robert J. (2000). Irrational Exuberance. Princeton University Press. pp. 149–153. Retrieved 4 March 2013. - In Focus article (8 June 2012), "WNFM: A Focus on Fundamentals One Year After Fukushima", Reproduced article from Nuclear Market Review, TradeTech, retrieved 4 March 2013 There are several reproduced In Focus articles on this page. The relevant one is near the bottom, under the title in this reference - UraniumSeek.com, Gold Seek LLC (2008-08-22). "Uranium Has Bottomed: Two Uranium Bulls to Jump on Now". UraniumSeek.com. Retrieved 2011-09-19. - "Uranium Bubble & Spec Market Outlook". News.goldseek.com. Retrieved 2011-09-19. - Banerjee, Abhijit V. (1992). "A Simple Model of Herd Behavior". Quarterly Journal of Economics. 107 (3): 797–817. doi:10.2307/2118364. - Bikhchandani, Sushil; Hirshleifer, David; Welch, Ivo (1992). "A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades". Journal of Political Economy. 100 (5): 992–1026. doi:10.1086/261849. - Froot, K; Schaferstein, DS; Jeremy Stein, J (1992). "Herd on the street: Informational inefficiencies in a market with short-term speculation" (PDF). Journal of Finance. 47: 1461–1484. doi:10.1111/j.1540-6261.1992.tb04665.x. - Hirshleifer, D; Teoh, SH (2003). "Herd behaviour and cascading in capital markets: A review and synthesis" (PDF). European Financial Management. 9 (1): 25–66. doi:10.1111/1468-036X.00207. - Chen, Yi-Fen (2008-09-01). "Herd behavior in purchasing books online". Computers in Human Behavior. Including the Special Issue: Internet Empowerment. 24 (5): 1977–1992. doi:10.1016/j.chb.2007.08.004. - Lessig, V (1977). "Students and Housewives: Differences in Susceptibility to Reference Group Influence". Journal of Consumer Research. - Zhang, Xiaoling; Li, Shibo; Burke, Raymond R.; Leykin, Alex (2014-05-13). "An Examination of Social Influence on Shopper Behavior Using Video Tracking Data". Journal of Marketing. 78 (5): 24–41. doi:10.1509/jm.12.0106. ISSN 0022-2429. - Dhar, Joydip; Jha, Abhishek Kumar (2014-10-03). "Analyzing Social Media Engagement and its Effect on Online Product Purchase Decision Behavior". Journal of Human Behavior in the Social Environment. 24 (7): 791–798. doi:10.1080/10911359.2013.876376. ISSN 1091-1359. - Gunawan, Dedy Darsono; Huarng, Kun-Huang (2015-11-01). "Viral effects of social network and media on consumers' purchase intention". Journal of Business Research. 68 (11): 2237–2241. doi:10.1016/j.jbusres.2015.06.004. - Cheung, Christy M. K.; Xiao, Bo Sophia; Liu, Ivy L. B. (2014-09-01). "Do actions speak louder than voices? The signaling role of social information cues in influencing consumer purchase decisions". Decision Support Systems. 
Crowdsourcing and Social Networks Analysis. 65: 50–58. doi:10.1016/j.dss.2014.05.002. - James M. Cronin; Mary B. McCarthy (2011-07-12). "Preventing game over: A study of the situated food choice influences within the videogames subculture". Journal of Social Marketing. 1 (2): 133–153. doi:10.1108/20426761111141887. ISSN 2042-6763. - Lozano, Natàlia; Prades, Jordi; Montagut, Marta (2015-10-01). "Som la Pera: How to develop a social marketing and public relations campaign to prevent obesity among teenagers in Catalonia". Catalan Journal of Communication & Cultural Studies. 7 (2): 251–259. doi:10.1386/cjcs.7.2.251_1. - Nolan, Jessica M.; Schultz, P. Wesley; Cialdini, Robert B.; Goldstein, Noah J.; Griskevicius, Vladas (2008-07-01). "Normative Social Influence is Underdetected". Personality and Social Psychology Bulletin. 34 (7): 913–923. doi:10.1177/0146167208316691. ISSN 0146-1672. PMID 18550863. - Bikhchandani, Sushil; Hirshleifer, David; Welch, Ivo (1992). "A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades". Journal of Political Economy. 100 (5): 992–1026. doi:10.1086/261849. JSTOR 2138632. - Trotter, Wilfred (1914). The Instincts of the Herd in Peace and War. - Brunnermeier, Markus Konrad (2001). Asset Pricing under Asymmetric Information: Bubbles, Crashes, Technical Analysis, and Herding. Oxford, UK ; New York: Oxford University Press. - Rook, Laurens (2006). "An Economic Psychological Approach to Herd Behavior". Journal of Economic Issues. 40 (1): 75–95. - Hamilton, W. D. (1970). Geometry for the Selfish Herd. Diss. Imperial College. - Stanford, Craig B. (2001). "Avoiding Predators: Expectations and Evidence in Primate Antipredator Behaviour". International Journal of Primatology. 23: 741–757. doi:10.1023/A:1015572814388. Ebsco. Fall. Keyword: Herd Behaviour. - Ottaviani, Marco; Sorenson, Peter (2000). "Herd Behavior and Investment: Comment". American Economic Review. 90 (3): 695–704. doi:10.1257/aer.90.3.695. JSTOR 117352. - Altshuler, E.; et al. (2005). "Symmetry Breaking in Escaping Ants". The American Naturalist. 166: 643–649. doi:10.1086/498139. PMID 16475081. - Hey, John D.; Morone, Andrea (2004). "Do Markets Drive out Lemmings—or Vice Versa?". Economica. 71 (284): 637–659. doi:10.1111/j.0013-0427.2004.00392.x. JSTOR 3548984.
Videos and solutions to help Grade 6 students explore and discover that Euclid's Algorithm is a more efficient means of finding the greatest common factor of larger numbers, and determine that Euclid's Algorithm is based on long division.

New York State Common Core Math Grade 6, Module 2, Lesson 19
NYS Math Module 2 Grade 6 Lesson 19 Classwork

Euclid's Algorithm is used to find the greatest common factor (GCF) of two whole numbers.
1. Divide the larger of the two numbers by the smaller one.
2. If there is a remainder, divide it into the divisor.
3. Continue dividing the last divisor by the last remainder until the remainder is zero.
4. The final divisor is the GCF of the original pair of numbers.
In application, the algorithm can be used to find the side length of the largest square that can be used to completely fill a rectangle so that there is no overlap or gaps.

Example 1: Euclid's Algorithm Conceptualized
What is the GCF of 60 and 100?

Example 2: Lesson 18 Classwork Revisited
a. Let's apply Euclid's Algorithm to some of the problems from our last lesson.
i. What is the GCF of 30 and 50?

Example 3: Larger Numbers

Example 4: Area Problems
The greatest common factor has many uses. Among them, the GCF lets us find out the maximum size of squares that will cover a rectangle. Whenever we solve problems like this, we cannot have any gaps or any overlapping squares. Of course, the maximum size squares will be the minimum number of squares needed. A rectangular computer table measures 30 inches by 50 inches. We need to cover it with square tiles. What is the side length of the largest square tile we can use to completely cover the table, so that there is no overlap or gaps?

1. Use Euclid's algorithm to find the greatest common factor of the following pairs of numbers:
a. GCF (12, 78)
b. GCF (18, 176)
2. Juanita and Samuel are planning a pizza party. They order a rectangular sheet pizza that measures 21 inches by 36 inches. They tell the pizza maker not to cut it because they want to cut it themselves.
a. All pieces of pizza must be square with none left over. What is the side length of the largest square pieces into which Juanita and Samuel can cut the pizza?
b. How many pieces of this size can be cut?
3. Shelly and Mickelle are making a quilt. They have a piece of fabric that measures 48 inches by 168 inches.
a. All pieces of fabric must be square with none left over. What is the side length of the largest square pieces into which Shelly and Mickelle can cut the fabric?
b. How many pieces of this size can Shelly and Mickelle cut?
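A minimal sketch of Euclid's Algorithm as described in the four steps above, applied to the 30-inch by 50-inch table from Example 4; the function name and output comments are mine.

```python
def gcf(a, b):
    """Euclid's Algorithm: keep dividing the last divisor by the last
    remainder until the remainder is zero; the final divisor is the GCF."""
    larger, smaller = max(a, b), min(a, b)
    while smaller != 0:
        larger, smaller = smaller, larger % smaller
    return larger

# Example 4: the 30-inch by 50-inch table.
side = gcf(30, 50)
print(side)                          # 10 -> largest square tile is 10 in. by 10 in.
print((30 // side) * (50 // side))   # 15 -> minimum number of tiles needed
```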
Examine the image of the Hubble Deep Field (HDF) below. (By deep, astronomers mean dim and distant!) The HDF takes us far out into space and far back in time to see some of the faintest objects ever detected, about 4 billion times fainter than the naked human eye can see. The image contains thousands of galaxies of many shapes and colors. To create it, the Hubble Space Telescope exposed its electronic detectors for about 100 hours over the course of 10 days, pointed at a tiny region of space near the Big Dipper. The amount of sky in the image is about the same as the size of a tennis ball at a distance a little over 100 yards, an area about 1/100 of the full Moon.

Use this file to print the image of distant galaxies. As you record each of the galaxies with a redshift, mark it off on the printed copy so you know which ones you have already recorded. After this image was obtained, the 10-meter Keck telescope in Hawaii observed the faint blue galaxies in the image to measure their redshifts. Next to many of the galaxies in the HDF is that galaxy's redshift, z (except for a few cases, the corresponding galaxy is usually the galaxy located to the upper left of the redshift number). Since the redshifts of objects are related to their distances, we can examine the actual distribution of galaxies in this direction in the sky using a histogram.

If you need a review of how to construct a histogram or to see some examples of histograms, take a look here: http://support.minitab.com/en-us/minitab/17/topic-library/basic-statistics-and-graphs/graphs/graphs-of-distributions/histograms/histogram/ or here: https://www.khanacademy.org/math/probability/data-distributions-a1/displays-of-distributions/v/histograms-intro

In our case, our bin size is 0.1 in redshift and there are too many galaxies to count visually how many have redshifts between, say, 2.2 and 2.3. Instead, for each galaxy with a measured redshift, put an "X" corresponding to its redshift in the histogram plot in the worksheet. The goal is to see how galaxies are distributed as a function of redshift.

Examine the distribution of galaxies as a function of redshift, or distance. Are the galaxies smoothly distributed at all redshifts (distances), or are they clumped together at a few specific distances? What do these data suggest about how matter is distributed in the universe? Is it smoothly distributed or clumpy? Do galaxies fill most of space, or are there vast, empty spaces between groups of galaxies? After you have recorded each of the redshifts on the histogram, answer the questions on the worksheet.
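If you prefer to build the histogram on a computer instead of marking X's by hand, the sketch below bins redshifts in steps of 0.1 as the worksheet specifies. The redshift values listed are placeholders; the real ones come from reading the numbers printed on the HDF image.

```python
import matplotlib.pyplot as plt

# Placeholder values -- the real list comes from the redshifts, z, printed
# next to the galaxies on the Hubble Deep Field image.
redshifts = [0.32, 0.45, 0.48, 0.51, 0.52, 0.56, 0.56, 0.58,
             0.75, 0.76, 0.84, 0.96, 1.01, 1.02, 1.15, 2.23,
             2.26, 2.42, 2.80, 2.93, 2.98, 3.16, 3.22, 3.36]

# Bin width of 0.1 in redshift, as in the worksheet histogram.
bin_width = 0.1
max_z = max(redshifts)
bins = [i * bin_width for i in range(int(max_z / bin_width) + 2)]

plt.hist(redshifts, bins=bins)
plt.xlabel("redshift z")
plt.ylabel("number of galaxies per 0.1 bin")
plt.title("Distribution of HDF galaxies with redshift")
plt.show()
```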
In the 1950s, NASA engineers picked the Mojave Desert to build a network of large radio antennas suited with parabolic dishes and receivers listening for faint radio signals. The quiet location, situated near the U.S. Army's Fort Irwin and the small town of Goldstone, has no interference from commercial radio and television transmitters or power lines.

The first antenna constructed at the Goldstone Deep Space Communications Complex was named Pioneer Station, and its dish measures 85 feet across. In 1958, it began receiving data from the Pioneer 3 mission, which was "the first attempts by the United States to send a spacecraft to the moon. Because of a malfunction with the spacecraft's booster rocket, it lacked the velocity to make it beyond Earth's orbit and fell back to Earth, burning up over Africa," according to NASA Jet Propulsion Laboratory (JPL). Before NASA moved on from Pioneer Station in 1981, it helped with several high-stakes missions, including Apollo, Mariner, Viking, and Voyager. With its impressive resume, the antenna was declared a National Historic Landmark in 1985.

The 230-foot Mars Station is the largest and most sensitive structure at the complex, standing 24 stories tall and weighing 16 million pounds. It currently communicates with the Voyager 1 spacecraft, which launched in 1977. According to NASA, "Voyager 1 is now more than 13 billion miles (21 billion kilometers) from Earth, making it the most distant human-made object in the universe."

The Mojave Desert is also home to Echo Station, which serves as the complex's current administrative center. The Echo 1 communications balloon, known as the world's first communications satellite, inspired the station's name. The balloon soared 1,000 miles above Earth, relaying radar and radio signals between Echo's antenna and a New Jersey receiving station.

The next stop on this Goldstone Complex tour is the Gemini Station, named after the twin star constellation. NASA JPL built the antennas for the Army and transferred the tech to NASA in 1994. Gemini first supported the Solar & Heliospheric Observatory (SOHO) project, which "was designed to study the internal structure of the Sun, its extensive outer atmosphere, and the origin of the solar wind—the stream of highly ionized gas that blows continuously outward through the solar system," according to NASA.

Together, these stations support NASA's Deep Space Network.
When two or more waves simultaneously pass through the same medium, each wave acts on every particle of the medium as if the other waves are not present. The resultant displacement of any particle is the vector addition of the displacements due to the individual waves.

Two sources are said to be coherent if they emit light waves of the same wavelength and start with the same phase or have a constant phase difference. Two independent monochromatic sources emit waves of the same wavelength, but the waves are not in phase. So they are not coherent. This is because atoms cannot emit light waves in the same phase, and such sources are said to be incoherent sources.

Two slits A and B illuminated by a single monochromatic source S act as coherent sources. The waves from these two coherent sources travel in the same medium and superpose at various points as shown in Fig. 5.15. The crests of the wavetrains are shown by thick continuous lines and troughs are shown by broken lines. At points where the crest of one wave meets the crest of the other wave or the trough of one wave meets the trough of the other wave, the waves are in phase, the displacement is maximum and these points appear bright. These points are marked by crosses (x). This type of interference is said to be constructive interference. At points where the crest of one wave meets the trough of the other wave, the waves are in opposite phase, the displacement is minimum and these points appear dark. These points are marked by circles (O). This type of interference is said to be destructive interference. Therefore, on a screen XY the intensity of light will be alternately maximum and minimum, i.e. bright and dark bands, which are referred to as interference fringes. The redistribution of intensity of light on account of the superposition of two waves is called interference. If a1 and a2 are the amplitudes of the two interfering waves, then the resultant intensity is maximum where the waves arrive in phase and minimum where they arrive in opposite phase:

Imax ∝ (a1 + a2)² and Imin ∝ (a1 − a2)²

The interference pattern in which the positions of maximum and minimum intensity of light remain fixed with time is called a sustained or permanent interference pattern. The conditions for the formation of sustained interference may be stated as:
1. The two sources should be coherent
2. The two sources should be very narrow
3. The sources should lie very close to each other to form distinct and broad fringes.

The phenomenon of interference was first observed and demonstrated by Thomas Young in 1801. The experimental set up is shown in Fig 5.16. Light from a narrow slit S, illuminated by a monochromatic source, is allowed to fall on two narrow slits A and B placed very close to each other. The width of each slit is about 0.03 mm and they are about 0.3 mm apart. Since A and B are equidistant from S, light waves from S reach A and B in phase. So A and B act as coherent sources. According to Huygens' principle, wavelets from A and B spread out and overlapping takes place to the right side of AB. When a screen XY is placed at a distance of about 1 metre from the slits, equally spaced alternate bright and dark fringes appear on the screen. These are called interference fringes or bands. Using an eyepiece the fringes can be seen directly. At P on the screen, waves from A and B travel equal distances and arrive in phase. These two waves constructively interfere and a bright fringe is observed at P. This is called the central bright fringe. When one of the slits is covered, the fringes disappear and there is uniform illumination on the screen. This shows clearly that the bands are due to interference.
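A small sketch of the intensity extremes just stated: for amplitudes a1 and a2, constructive interference gives an intensity proportional to (a1 + a2)² and destructive interference gives (a1 − a2)². The function name and example amplitudes are mine.

```python
def interference_extremes(a1, a2):
    """Resultant intensity extremes for two interfering waves of amplitudes a1, a2.

    Constructive interference: amplitudes add, I_max proportional to (a1 + a2)^2.
    Destructive interference: amplitudes subtract, I_min proportional to (a1 - a2)^2.
    """
    return (a1 + a2) ** 2, (a1 - a2) ** 2

# Equal amplitudes: bright fringes at 4x the single-wave intensity, dark fringes at zero.
print(interference_extremes(1.0, 1.0))   # (4.0, 0.0)
# Unequal amplitudes: the dark fringes are no longer completely dark.
print(interference_extremes(1.0, 0.5))   # (2.25, 0.25)
```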
Let d be the distance between the two coherent sources A and B, emitting light of wavelength λ. A screen XY is placed parallel to AB at a distance D from the coherent sources. C is the midpoint of AB. O is a point on the screen equidistant from A and B. P is a point at a distance x from O, as shown in Fig 5.17. Waves from A and B meet at P in phase or out of phase depending upon the path difference between the two waves. For x small compared with D, the path difference between the waves from A and B arriving at P is approximately xd/D.

By the principle of interference, the condition for constructive interference is: path difference = nλ, i.e. xd/D = nλ, so that

xn = nλD/d

This equation gives the distance of the nth bright fringe from the point O.

By the principle of interference, the condition for destructive interference is: path difference = (2n − 1)λ/2, i.e. xd/D = (2n − 1)λ/2, so that

xn = (2n − 1)λD/2d

This equation gives the distance of the nth dark fringe from the point O. Thus, on the screen alternate dark and bright bands are seen on either side of the central bright band.

The distance between any two consecutive bright or dark bands is called the bandwidth. The distance between the (n+1)th and nth order consecutive bright fringes from O is given by

β = x(n+1) − xn = (n + 1)λD/d − nλD/d = λD/d

Similarly, it can be proved that the distance between two consecutive dark bands is also equal to λD/d. Since bright and dark fringes are of the same width, they are equally spaced on either side of the central maximum. For the fringes to be clear and broad:
1. The screen should be as far away from the source as possible.
2. The wavelength of the light used must be larger.
3. The two coherent sources must be as close as possible.

Everyone is familiar with the brilliant colours exhibited by a thin oil film spread on the surface of water and also by a soap bubble. These colours are due to interference between light waves reflected from the top and the bottom surfaces of thin films. When white light is incident on a thin film, the film appears coloured and the colour depends upon the thickness of the film and also the angle of incidence of the light.

Consider a transparent thin film of uniform thickness t and refractive index µ bounded by two plane surfaces K and K′ (Fig 5.18). A ray of monochromatic light AB incident on the surface K of the film is partly reflected along BC and partly refracted into the film along BD. At the point D on the surface K′, the ray of light is partly reflected along DE and partly transmitted out of the film along DG. The reflected light then emerges into air along EF, which is parallel to BC. The ray EH, after refraction at H, finally emerges along HJ. BC and EF are reflected rays parallel to each other, and DG and HJ are transmitted rays parallel to each other. Rays BC and EF interfere, and similarly the rays DG and HJ interfere. EM is drawn normal to BC from E. Now the path difference between the waves BC and EF is δ = 2µt cos r, where r is the angle of refraction inside the film.

A ray of light travelling in air and getting reflected at the surface of a denser medium undergoes an automatic phase change of π (or) an additional path difference of λ/2. Since the reflection at B is at the surface of a denser medium, there is an additional path difference λ/2. The path difference between the transmitted rays DG and HJ is, in a similar way, δ = 2µt cos r. In this case there is no additional path difference introduced because both reflections at the points D and E take place backed by a rarer medium. Hence, the condition for brightness is 2µt cos r = nλ and the condition for darkness is 2µt cos r = (2n – 1)λ/2.

An important application of interference in thin films is the formation of Newton's rings. When a plano convex lens of long focal length is placed over an optically plane glass plate, a thin air film with varying thickness is enclosed between them.
The thickness of the air film is zero at the point of contact and gradually increases outwards from the point of contact. When the air film is illuminated by monochromatic light normally, alternate bright and dark concentric circular rings are formed with a dark spot at the centre. These rings are known as Newton's rings. When viewed with white light, the fringes are coloured (shown in the wrapper of the text book).

Fig 5.19 shows an experimental arrangement for producing and observing Newton's rings. A monochromatic source of light S is kept at the focus of a condensing lens L1. The parallel beam of light emerging from L1 falls on the glass plate G kept at 45°. The glass plate reflects a part of the incident light vertically downwards, normally on the thin air film enclosed by the plano convex lens L and plane glass plate P. The reflected beam from the air film is viewed with a microscope. Alternate bright and dark circular rings with a dark spot at the centre are seen.

The formation of Newton's rings can be explained on the basis of interference between waves which are partially reflected from the top and bottom surfaces of the air film. If t is the thickness of the air film at a point on the film, the refracted wavelet from the lens has to travel a distance t into the film and, after reflection from the top surface of the glass plate, has to travel the same distance back to reach the point again. Thus, it travels a total path 2t. One of the two reflections takes place at the surface of the denser medium and hence it introduces an additional phase change of π, or an equivalent path difference λ/2, between the two wavelets.

∴ The condition for brightness is: path difference δ = 2t + λ/2 = nλ, i.e. 2t = (2n − 1)λ/2, and the condition for darkness is 2t = nλ.

The thickness of the air film at the point of contact of lens L with glass plate P is zero. Hence, there is no path difference between the interfering waves. So, it should appear bright. But the wave reflected from the denser glass plate has suffered a phase change of π while the wave reflected at the spherical surface of the lens has not suffered any phase change. Hence the point O appears dark. Around the point of contact alternate bright and dark rings are formed.

Let us consider the vertical section SOP of the plano convex lens through its centre of curvature C, as shown in Fig 5.20. Let R be the radius of curvature of the plano convex lens and O be the point of contact of the lens with the plane surface. Let t be the thickness of the air film at S and P. Draw ST and PQ perpendiculars to the plane surface of the glass plate. Then ST = AO = PQ = t. Let rn be the radius of the nth dark ring which passes through the points S and P. Then SA = AP = rn. If ON is the vertical diameter of the circle, then by the law of segments,

SA . AP = OA . AN, i.e. rn² = t(2R − t) ≈ 2Rt ... (i)

since t is very small compared with R. Combining (i) with the condition for darkness, 2t = nλ, gives the radius of the nth dark ring:

rn² = nλR

Using the method of Newton's rings, the wavelength of a given monochromatic source of light can be determined. The radii of the nth dark ring and the (n+m)th dark ring are given by

rn² = nλR and r(n+m)² = (n + m)λR

Subtracting and rearranging, λ = [r(n+m)² − rn²] / (mR), so the wavelength can be found from measurements of the ring radii and the radius of curvature R of the lens.
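A minimal sketch of the wavelength determination just described, using the dark-ring relation λ = (r(n+m)² − rn²)/(mR). The ring radii and the lens radius of curvature below are hypothetical measurements, chosen only to illustrate the calculation.

```python
def wavelength_from_newtons_rings(r_n, r_n_plus_m, m, R):
    """Wavelength from the radii of the nth and (n+m)th dark rings:
    lambda = (r_{n+m}^2 - r_n^2) / (m * R), with all lengths in metres."""
    return (r_n_plus_m ** 2 - r_n ** 2) / (m * R)

# Hypothetical measurements: 5th and 15th dark rings with a lens of R = 1.0 m.
r_5 = 1.72e-3        # metres
r_15 = 2.97e-3       # metres
lam = wavelength_from_newtons_rings(r_5, r_15, m=10, R=1.0)
print(f"{lam:.3e} m")   # ~5.9e-07 m, i.e. roughly 590 nm
```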
An mRNA vaccine is a type of vaccine that uses a copy of a molecule called messenger RNA (mRNA) to produce an immune response. The vaccine delivers molecules of antigen-encoding mRNA into immune cells, which use the designed mRNA as a blueprint to build foreign protein that would normally be produced by a pathogen (such as a virus) or by a cancer cell. These protein molecules stimulate an adaptive immune response that teaches the body to identify and destroy the corresponding pathogen or cancer cells. The mRNA is delivered by a co-formulation of the RNA encapsulated in lipid nanoparticles that protect the RNA strands and help their absorption into the cells. Reactogenicity, the tendency of a vaccine to produce adverse reactions, is similar to that of conventional non-RNA vaccines. People susceptible to an autoimmune response may have an adverse reaction to messenger RNA vaccines. The advantages of mRNA vaccines over traditional vaccines are ease of design, speed and lower cost of production, the induction of both cellular and humoral immunity, and lack of interaction with the genomic DNA. While some messenger RNA vaccines, such as the Pfizer–BioNTech COVID-19 vaccine, have the disadvantage of requiring ultracold storage before distribution, other mRNA vaccines, such as the Moderna, CureVac, and Walvax COVID-19 vaccines, do not have such requirements. In RNA therapeutics, messenger RNA vaccines have attracted considerable interest as COVID-19 vaccines. In December 2020, Pfizer–BioNTech and Moderna obtained authorization for their mRNA-based COVID-19 vaccines. On 2 December, the UK Medicines and Healthcare products Regulatory Agency (MHRA) became the first medicines regulator to approve an mRNA vaccine, authorizing the Pfizer–BioNTech vaccine for widespread use. On 11 December, the US Food and Drug Administration (FDA) issued an emergency use authorization for the Pfizer–BioNTech vaccine and a week later similarly authorized the Moderna vaccine. The first successful transfection of designed mRNA packaged within a liposomal nanoparticle into a cell was published in 1989. "Naked" (or unprotected) lab-made mRNA was injected a year later into the muscle of mice. These studies were the first evidence that in vitro transcribed mRNA with a chosen gene was able to deliver the genetic information to produce a desired protein within living cell tissue and led to the concept proposal of messenger RNA vaccines. Liposome-encapsulated mRNA encoding a viral antigen was shown in 1993 to stimulate T cells in mice. The following year self-amplifying mRNA was developed by including both a viral antigen and replicase encoding gene. The method was used in mice to elicit both a humoral and cellular immune response against a viral pathogen. The next year mRNA encoding a tumor antigen was shown to elicit a similar immune response against cancer cells in mice. The first human clinical trial using ex vivo dendritic cells transfected with mRNA encoding tumor antigens (therapeutic cancer mRNA vaccine) was started in 2001. Four years later, the successful use of modified nucleosides as a method to transport mRNA inside cells without setting off the body's defense system was reported. Clinical trial results of an mRNA vaccine directly injected into the body against cancer cells were reported in 2008. BioNTech in 2008, and Moderna in 2010, were founded to develop mRNA biotechnologies.
At this time, the US research agency DARPA launched the biotechnology research program ADEPT to develop emerging technologies for the US military. The agency recognized the potential of nucleic acid technology for defense against pandemics and began to invest in the field. DARPA grants were seen as a vote of confidence that in turn encouraged other government agencies and private investors to invest in mRNA technology. At the time, DARPA awarded a $25 million grant to Moderna. The first human clinical trials using an mRNA vaccine against an infectious agent (rabies) began in 2013. Over the next few years, clinical trials of mRNA vaccines for a number of other viruses were started. mRNA vaccines for human use have been studied for infectious agents such as influenza, Zika virus, cytomegalovirus, and Chikungunya virus. In March 2022 Moderna announced the development of mRNA vaccines for 15 diseases: Chikungunya virus, COVID-19, Crimean-Congo haemorrhagic fever, Dengue, Ebola virus disease, HIV, Malaria, Marburg virus disease, Lassa fever, Middle East respiratory syndrome coronavirus (MERS-CoV), Nipah and henipaviral diseases, Rift Valley fever, Severe fever with thrombocytopenia syndrome, Tuberculosis and Zika. The COVID-19 pandemic, and sequencing of the causative virus SARS-CoV-2 at the beginning of 2020, led to the rapid development of the first approved mRNA vaccines. In December of the same year, BioNTech and Moderna obtained approval for their mRNA-based COVID-19 vaccines. On 2 December, seven days after its final eight-week trial, the UK Medicines and Healthcare products Regulatory Agency (MHRA) became the first global medicines regulator in history to approve an mRNA vaccine, granting emergency authorization for Pfizer–BioNTech's BNT162b2 COVID-19 vaccine for widespread use. On 11 December, the FDA gave emergency use authorization for the Pfizer–BioNTech COVID-19 vaccine and a week later similar approval for the Moderna COVID-19 vaccine. Other mRNA vaccines continued under development.

| Main Manufacturer | Country | Amplification | Clinical phase |
| Walvax Biotechnology | China | None | 3 (booster) |
| Gennova Bio* | India | Self | 2/3 (comparator) |
| Vinbiocare Biotechnology** | Vietnam | Self | 1/2/3 (comparator) |
| Daiichi Sankyo | Japan | None | 1/2/3 (booster) |
| Arcturus Therapeutics** | United States | Self | 2 |
| Elixirgen Therapeutics | United States | Self | 1/2 |
| AIM Vaccine Group | China | Unknown | 1/2 |
| HDT Bio* | United States | Self | 1 |
| GlaxoSmithKline (GSK) | United States | Self | 1 |
| Imperial College London | England | Self | 1 |
| Gritstone Bio | England | Self | 1 (booster) |
| University of Melbourne | Australia | None | 1 (booster) |

*/** denote shared technology

The goal of a vaccine is to stimulate the adaptive immune system to create antibodies that precisely target that particular pathogen. The markers on the pathogen that the antibodies target are called antigens. Traditional vaccines stimulate an antibody response by injecting either antigens, an attenuated (weakened) virus, an inactivated (dead) virus, or a recombinant antigen-encoding viral vector (harmless carrier virus with an antigen transgene) into the body. These antigens and viruses are prepared and grown outside the body. In contrast, mRNA vaccines introduce a short-lived synthetically created fragment of the RNA sequence of a virus into the individual being vaccinated. These mRNA fragments are taken up by dendritic cells through phagocytosis.
The dendritic cells use their internal machinery (ribosomes) to read the mRNA and produce the viral antigens that the mRNA encodes. The body degrades the mRNA fragments within a few days of introduction. Although non-immune cells can potentially also absorb vaccine mRNA, produce antigens, and display the antigens on their surfaces, dendritic cells absorb the mRNA globules much more readily. The mRNA fragments are translated in the cytoplasm and do not affect the body's genomic DNA, located separately in the cell nucleus. Once the viral antigens are produced by the host cell, the normal adaptive immune system processes are followed. Antigens are broken down by proteasomes. Class I and class II MHC molecules then attach to the antigen and transport it to the cellular membrane, "activating" the dendritic cell. Once activated, dendritic cells migrate to lymph nodes, where they present the antigen to T cells and B cells. This triggers the production of antibodies specifically targeted to the antigen, ultimately resulting in immunity. The central component of an mRNA vaccine is its mRNA construct. The in vitro transcribed mRNA is generated from an engineered plasmid DNA, which has an RNA polymerase promoter and a sequence corresponding to the mRNA construct. By combining T7 phage RNA polymerase and the plasmid DNA, the mRNA can be transcribed in the lab. Efficacy of the vaccine is dependent on the stability and structure of the designed mRNA. The in vitro transcribed mRNA has the same structural components as natural mRNA in eukaryotic cells. It has a 5' cap, a 5'-untranslated region (UTR) and a 3'-UTR, an open reading frame (ORF), which encodes the relevant antigen, and a 3'-poly(A) tail. By modifying these different components of the synthetic mRNA, the stability and translational ability of the mRNA can be enhanced, and in turn, the efficacy of the vaccine improved. The mRNA can be improved by using synthetic 5'-cap analogues which enhance the stability and increase protein translation. Similarly, regulatory elements in the 5'-untranslated region and the 3'-untranslated region can be altered, and the length of the poly(A) tail optimized, to stabilize the mRNA and increase protein production. The mRNA nucleotides can be modified to both decrease innate immune activation and increase the mRNA's half-life in the host cell. The nucleic acid sequence and codon usage impact protein translation. Enriching the sequence with guanine-cytosine content improves mRNA stability and half-life and, in turn, protein production. Replacing rare codons with synonymous codons frequently used by the host cell also enhances protein production.
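The two sequence-engineering ideas just mentioned, enriching guanine-cytosine content and substituting synonymous codons, can be made concrete with a minimal, illustrative Python sketch. The codon preferences and the example sequence below are invented for the illustration; real codon-optimization tools use organism-specific usage tables and many additional constraints.

# Illustrative sketch only: toy GC-content calculation and synonymous-codon
# substitution, loosely following the sequence-engineering ideas described above.
# The "preferred" codons below are invented examples, not a real usage table.

PREFERRED_CODON = {
    # amino acid -> a codon assumed (for this sketch) to be common in the host
    "F": "TTC", "L": "CTG", "S": "AGC", "P": "CCG", "R": "CGC",
}

CODON_TO_AA = {
    "TTT": "F", "TTC": "F",
    "CTT": "L", "CTG": "L",
    "TCA": "S", "AGC": "S",
    "CCA": "P", "CCG": "P",
    "CGA": "R", "CGC": "R",
}

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a sequence (written here in DNA letters)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def recode_orf(orf: str) -> str:
    """Replace each codon with a synonymous 'preferred' codon where one is listed."""
    assert len(orf) % 3 == 0, "ORF length must be a multiple of 3"
    out = []
    for i in range(0, len(orf), 3):
        codon = orf[i:i + 3].upper()
        aa = CODON_TO_AA.get(codon)
        out.append(PREFERRED_CODON.get(aa, codon))
    return "".join(out)

if __name__ == "__main__":
    toy_orf = "TTTCTTTCACCACGA"  # invented 5-codon example encoding F L S P R
    recoded = recode_orf(toy_orf)
    print(toy_orf, f"GC={gc_content(toy_orf):.2f}")   # lower GC content
    print(recoded, f"GC={gc_content(recoded):.2f}")   # higher GC, same protein

Running the sketch on the toy sequence raises the GC fraction from about 0.40 to about 0.73 while leaving the encoded amino acids unchanged, which is the intent of this kind of recoding.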
For a vaccine to be successful, sufficient mRNA must enter the host cell cytoplasm to stimulate production of the specific antigens. Entry of mRNA molecules, however, faces a number of difficulties. Not only are mRNA molecules too large to cross the cell membrane by simple diffusion, but they are also negatively charged like the cell membrane, which causes a mutual electrostatic repulsion. Additionally, mRNA is easily degraded by RNases in skin and blood. Various methods have been developed to overcome these delivery hurdles. The method of vaccine delivery can be broadly classified by whether mRNA transfer into cells occurs within (in vivo) or outside (ex vivo) the organism. Dendritic cells display antigens on their surfaces, leading to interactions with T cells to initiate an immune response. Dendritic cells can be collected from patients and programmed with the desired mRNA, then administered back into patients to create an immune response. The simplest way that ex vivo dendritic cells take up mRNA molecules is through endocytosis, a fairly inefficient pathway in the laboratory setting that can be significantly improved through electroporation. Since the discovery that the direct administration of in vitro transcribed mRNA leads to the expression of antigens in the body, in vivo approaches have been investigated. They offer some advantages over ex vivo methods, particularly by avoiding the cost of harvesting and adapting dendritic cells from patients and by imitating a regular infection. Different routes of injection, such as into the skin, blood, or muscles, result in varying levels of mRNA uptake, making the choice of administration route a critical aspect of in vivo delivery. One study showed, in comparing different routes, that lymph node injection leads to the largest T-cell response. Naked mRNA injection means that the delivery of the vaccine is only done in a buffer solution. This mode of mRNA uptake has been known since the 1990s. The first worldwide clinical studies used intradermal injections of naked mRNA for vaccination. A variety of methods have been used to deliver naked mRNA, such as subcutaneous, intravenous, and intratumoral injections. Although naked mRNA delivery causes an immune response, the effect is relatively weak, and after injection the mRNA is often rapidly degraded. Cationic polymers can be mixed with mRNA to generate protective coatings called polyplexes. These protect the recombinant mRNA from ribonucleases and assist its penetration into cells. Protamine is a natural cationic peptide and has been used to encapsulate mRNA for vaccination. The first time the FDA approved the use of lipid nanoparticles as a drug delivery system was in 2018, when the agency approved the first siRNA drug, Onpattro. Encapsulating the mRNA molecule in lipid nanoparticles was a critical breakthrough for producing viable mRNA vaccines, solving a number of key technical barriers in delivering the mRNA molecule into the host cell. Research into using lipids to deliver siRNA to cells became a foundation for similar research into using lipids to deliver mRNA. However, new lipids had to be invented to encapsulate mRNA strands, which are much longer than siRNA strands. Principally, the lipid provides a layer of protection against degradation, allowing more robust translational output. In addition, the customization of the lipid's outer layer allows the targeting of desired cell types through ligand interactions. However, many studies have also highlighted the difficulty of studying this type of delivery, demonstrating that there is an inconsistency between in vivo and in vitro applications of nanoparticles in terms of cellular intake. The nanoparticles can be administered to the body and transported via multiple routes, such as intravenously or through the lymphatic system. One issue with lipid nanoparticles is that several of the breakthroughs leading to the practical use of that technology involve the use of microfluidics. Microfluidic reaction chambers are difficult to scale up, since the entire point of microfluidics is to exploit the microscale behaviors of liquids. The only way around this obstacle is to run an extensive number of microfluidic reaction chambers in parallel, a novel task requiring custom-built equipment.
For COVID-19 mRNA vaccines, this was the main manufacturing bottleneck. Pfizer used such a parallel approach to solve the scaling problem. After verifying that impingement jet mixers could not be directly scaled up, Pfizer made about 100 of the little mixers (each about the size of a U.S. half-dollar coin), connected them together with pumps and filters with a "maze of piping," and set up a computer system to regulate flow and pressure through the mixers. Another issue with the large-scale use of this delivery method is the availability of the novel lipids used to create lipid nanoparticles, especially ionizable cationic lipids. Before 2020, such lipids were manufactured in small quantities measured in grams or kilograms, and they were used for medical research and a handful of drugs for rare conditions. As the safety and efficacy of mRNA vaccines became clear in 2020, the few companies able to manufacture the requisite lipids were confronted with the challenge of scaling up production to respond to orders for several tons of lipids. In addition to non-viral delivery methods, RNA viruses have been engineered to achieve similar immunological responses. Typical RNA viruses used as vectors include retroviruses, lentiviruses, alphaviruses and rhabdoviruses, each of which can differ in structure and function. Clinical studies have utilized such viruses on a range of diseases in model animals such as mice, chickens and primates. mRNA vaccines offer specific advantages over traditional vaccines. Because mRNA vaccines are not constructed from an active pathogen (or even an inactivated pathogen), they are non-infectious. In contrast, traditional vaccines require the production of pathogens, which, if done at high volumes, could increase the risks of localized outbreaks of the virus at the production facility. Another biological advantage of mRNA vaccines is that since the antigens are produced inside the cell, they stimulate cellular immunity, as well as humoral immunity. mRNA vaccines have the production advantage that they can be designed swiftly. Moderna designed their mRNA-1273 vaccine for COVID-19 in 2 days. They can also be manufactured faster, more cheaply, and in a more standardized fashion (with lower error rates in production), which can improve responsiveness to serious outbreaks. The Pfizer–BioNTech vaccine originally required 110 days to mass-produce (before Pfizer began to optimize the manufacturing process to only 60 days), which was substantially faster than traditional flu and polio vaccines. Within that larger timeframe, the actual production time is only about 22 days: two weeks for molecular cloning of DNA plasmids and purification of DNA, four days for DNA-to-RNA transcription and purification of mRNA, and four days to encapsulate mRNA in lipid nanoparticles followed by fill and finish. The majority of the days needed for each production run are allocated to rigorous quality control at each stage. In addition to sharing the advantages of theoretical DNA vaccines over established traditional vaccines, mRNA vaccines also have additional advantages over DNA vaccines. The mRNA is translated in the cytosol, so there is no need for the RNA to enter the cell nucleus, and the risk of being integrated into the host genome is averted. Modified nucleosides (for example, pseudouridines, 2'-O-methylated nucleosides) can be incorporated into mRNA to suppress immune response stimulation, to avoid immediate degradation, and to produce a more persistent effect through enhanced translation capacity.
The open reading frame (ORF) and untranslated regions (UTR) of mRNA can be optimized for different purposes (a process called sequence engineering of mRNA), for example through enriching the guanine-cytosine content or choosing specific UTRs known to increase translation. An additional ORF coding for a replication mechanism can be added to amplify antigen translation and therefore immune response, decreasing the amount of starting material needed. Because mRNA is fragile, some vaccines must be kept at very low temperatures to avoid degrading and thus giving little effective immunity to the recipient. Pfizer–BioNTech's BNT162b2 mRNA vaccine has to be kept between −80 and −60 °C (−112 and −76 °F). Moderna says their mRNA-1273 vaccine can be stored between −25 and −15 °C (−13 and 5 °F), which is comparable to a home freezer, and that it remains stable between 2 and 8 °C (36 and 46 °F) for up to 30 days. In November 2020, Nature reported, "While it's possible that differences in LNP formulations or mRNA secondary structures could account for the thermostability differences [between Moderna and BioNtech], many experts suspect both vaccine products will ultimately prove to have similar storage requirements and shelf lives under various temperature conditions." Several platforms are being studied that may allow storage at higher temperatures. Before 2020, no mRNA technology platform (drug or vaccine) had been authorized for use in humans, so there was a risk of unknown effects. The 2020 COVID-19 pandemic required faster production capability of mRNA vaccines, made them attractive to national health organisations, and led to debate about the type of initial authorization mRNA vaccines should get (including emergency use authorization or expanded access authorization) after the eight-week period of post-final human trials. Reactogenicity is similar to that of conventional, non-RNA vaccines. However, those susceptible to an autoimmune response may have an adverse reaction to mRNA vaccines. The mRNA strands in the vaccine may elicit an unintended immune reaction – this entails the body believing itself to be sick, and the person feeling as if they are as a result. To minimize this, mRNA sequences in mRNA vaccines are designed to mimic those produced by host cells. Strong but transient reactogenic effects were reported in trials of novel COVID-19 mRNA vaccines; most people will not experience severe side effects, which include fever and fatigue. Severe side effects are defined as those that prevent daily activity. The COVID-19 mRNA vaccines from Moderna and Pfizer–BioNTech have efficacy rates of 90 to 95 percent. Prior mRNA drug trials on pathogens other than COVID-19 were not effective and had to be abandoned in the early phases of trials. The reason for the efficacy of the new mRNA vaccines is not clear. Physician-scientist Margaret Liu stated that the efficacy of the new COVID-19 mRNA vaccines could be due to the "sheer volume of resources" that went into development, or that the vaccines might be "triggering a nonspecific inflammatory response to the mRNA that could be heightening its specific immune response, given that the modified nucleoside technique reduced inflammation but hasn't eliminated it completely", and that "this may also explain the intense reactions such as aches and fevers reported in some recipients of the mRNA SARS-CoV-2 vaccines". These reactions, though severe, were transient, and another view is that they were a reaction to the lipid drug delivery molecules.
There is misinformation implying that mRNA vaccines could alter DNA in the nucleus. mRNA in the cytosol is very rapidly degraded before it would have time to gain entry into the cell nucleus. In fact, mRNA vaccines must be stored at very low temperature to prevent mRNA degradation. A retrovirus can be single-stranded RNA (just as the SARS-CoV-2 vaccine is single-stranded RNA) which enters the cell nucleus and uses reverse transcriptase to make DNA from the RNA in the cell nucleus. A retrovirus has mechanisms to be imported into the nucleus, but other mRNAs lack these mechanisms. Once inside the nucleus, creation of DNA from RNA cannot occur without a primer, which accompanies a retrovirus, but which would not exist for other mRNA if placed in the nucleus. mRNA vaccines use either non-amplifying (conventional) mRNA or self-amplifying mRNA. Pfizer–BioNTech and Moderna vaccines use non-amplifying mRNA. Both mRNA types continue to be investigated as vaccine methods against other potential pathogens and cancer. The initial mRNA vaccines use a non-amplifying mRNA construct. Non-amplifying mRNA has only one open reading frame that codes for the antigen of interest. The total amount of mRNA available to the cell is equal to the amount delivered by the vaccine. Dosage strength is limited by the amount of mRNA that can be delivered by the vaccine. Non-amplifying vaccines replace uridine with N1-methylpseudouridine in an attempt to reduce toxicity. Self-amplifying mRNA (saRNA) vaccines replicate their mRNA after transfection. Self-amplifying mRNA has two open reading frames. The first frame, like conventional mRNA, codes for the antigen of interest. The second frame codes for an RNA-dependent RNA polymerase (and its helper proteins) which replicates the mRNA construct in the cell. This allows smaller vaccine doses. The mechanisms and consequently the evaluation of self-amplifying mRNA may be different, as self-amplifying mRNA is a much bigger molecule. saRNA vaccines being researched include a malaria vaccine. In 2021, Gritstone bio started a phase 1 trial of an saRNA COVID-19 vaccine, used as a booster vaccine. The vaccine is designed to target both the spike protein of the SARS‑CoV‑2 virus and viral proteins that may be less prone to genetic variation, to provide greater protection against SARS‑CoV‑2 variants. saRNA vaccines must use uridine, which is required for reproduction to occur.
Pasadena, CA – Scientists have used data from NASA’s Cassini mission to delve into the impact craters on the surface of Titan, revealing more detail than ever before about how the craters evolve and how weather drives changes on the surface of Saturn’s mammoth moon. Like Earth, Titan has a thick atmosphere that acts as a protective shield from meteoroids; meanwhile, erosion and other geologic processes efficiently erase craters made by meteoroids that do reach the surface. The result is far fewer impacts and craters than on other moons. Even so, because impacts stir up what lies beneath and expose it, Titan’s impact craters reveal a lot. The new examination showed that they can be split into two categories: those in the fields of dunes around Titan’s equator and those in the vast plains at midlatitudes (between the equatorial zone and the poles). Their location and their makeup are connected: The craters among the dunes at the equator consist completely of organic material, while craters in the midlatitude plains are a mix of organic materials, water ice, and a small amount of methane-like ice. From there, scientists took the connections a step further and found that craters actually evolve differently, depending on where they lie on Titan. Some of the new results reinforce what scientists knew about the craters – that the mixture of organic material and water ice is created by the heat of impact, and those surfaces are then washed by methane rain. But while researchers found that cleaning process happening in the midlatitude plains, they discovered that it’s not happening in the equatorial region; instead, those impact areas are quickly covered by a thin layer of sand sediment. That means Titan’s atmosphere and weather aren’t just shaping the surface of Titan; they’re also driving a physical process that affects which materials remain exposed at the surface, the authors found. “The most exciting part of our results is that we found evidence of Titan’s dynamic surface hidden in the craters, which has allowed us to infer one of the most complete stories of Titan’s surface evolution scenario to date,” said Anezina Solomonidou, a research fellow at ESA (European Space Agency) and the lead author of the new study. “Our analysis offers more evidence that Titan remains a dynamic world in the present day.” The new work, published recently in Astronomy & Astrophysics, used data from visible and infrared instruments aboard the Cassini spacecraft, which operated between 2004 and 2017 and conducted more than 120 flybys of the Mercury-size moon. “Locations and latitudes seem to unveil many of Titan’s secrets, showing us that the surface is actively connected with atmospheric processes and possibly with internal ones,” Solomonidou said. Scientists are eager to learn more about Titan’s potential for astrobiology, which is the study of the origins and evolution of life in the universe. Titan is an ocean world, with a sea of water and ammonia under its crust. And as scientists look for pathways for organic material to travel from the surface to the ocean underneath, impact craters offer a unique window into the subsurface. The new research also found that one impact site, called Selk Crater, is completely covered with organics and untouched by the rain process that cleans the surface of other craters. 
Selk is in fact a target of NASA’s Dragonfly mission, set to launch in 2027; the rotorcraft-lander will investigate key astrobiology questions as it searches for biologically important chemistry similar to early Earth before life emerged. NASA got its first close-up encounter with Titan some 40 years ago, on Nov. 12, 1980, when the agency’s Voyager 1 spacecraft flew by at a range of just 2,500 miles (4,000 kilometers). Voyager images showed a thick, opaque atmosphere, and data revealed that liquid might be present on the surface (it was – in the form of liquid methane and ethane), and indicated that prebiotic chemical reactions might be possible on Titan. Managed by NASA’s Jet Propulsion Laboratory in Southern California, Cassini was an orbiter that observed Saturn for more than 13 years before exhausting its fuel supply. The mission plunged it into the planet’s atmosphere in September 2017, in part to protect moons that have the potential of holding conditions suitable for life. The Cassini-Huygens mission is a cooperative project of NASA, ESA, and the Italian Space Agency. JPL, a division of Caltech in Pasadena, manages the mission for NASA’s Science Mission Directorate in Washington. JPL designed, developed, and assembled the Cassini orbiter.
A chemical element is a species of atoms that have a given number of protons in their nuclei, including the pure substance consisting only of that species. Unlike chemical compounds, chemical elements cannot be broken down into simpler substances by any chemical reaction. The number of protons in the nucleus is the defining property of an element, and is referred to as its atomic number (represented by the symbol Z) – all atoms with the same atomic number are atoms of the same element. Almost all of the baryonic matter of the universe is composed of chemical elements (among rare exceptions are neutron stars). When different elements undergo chemical reactions, atoms are rearranged into new compounds held together by chemical bonds. Only a minority of elements, such as silver and gold, are found uncombined as relatively pure native element minerals. Nearly all other naturally occurring elements occur in the Earth as compounds or mixtures. Air is primarily a mixture of the elements nitrogen, oxygen, and argon, though it does contain compounds including carbon dioxide and water. The history of the discovery and use of the elements began with primitive human societies that discovered native minerals like carbon, sulfur, copper and gold (though the concept of a chemical element was not yet understood). Attempts to classify materials such as these resulted in the concepts of classical elements, alchemy, and various similar theories throughout human history. Much of the modern understanding of elements developed from the work of Dmitri Mendeleev, a Russian chemist who published the first recognizable periodic table in 1869. This table organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The periodic table summarizes various properties of the elements, allowing chemists to derive relationships between them and to make predictions about compounds and potential new ones. By November 2016, the International Union of Pure and Applied Chemistry had recognized a total of 118 elements. The first 94 occur naturally on Earth, and the remaining 24 are synthetic elements produced in nuclear reactions. Save for unstable radioactive elements (radionuclides) which decay quickly, nearly all of the elements are available industrially in varying amounts. The discovery and synthesis of further new elements is an ongoing area of scientific study. The lightest chemical elements are hydrogen and helium, both created by Big Bang nucleosynthesis during the first 20 minutes of the universe in a ratio of around 3:1 by mass (or 12:1 by number of atoms), along with tiny traces of the next two elements, lithium and beryllium. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are naturally produced in nucleogenic reactions, or in cosmogenic processes, such as cosmic ray spallation. New atoms are also naturally produced on Earth as radiogenic daughter isotopes of ongoing radioactive decay processes such as alpha decay, beta decay, spontaneous fission, cluster decay, and other rarer modes of decay. Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43 and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. 
Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. Some of these elements, notably bismuth (atomic number 83), thorium (atomic number 90), and uranium (atomic number 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy metals before the formation of our Solar System. At over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 (atomic number 83) has the longest known alpha decay half-life of any naturally occurring element, and is almost always considered on par with the 80 stable elements. The very heaviest elements (those beyond plutonium, element 94) undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized. There are now 118 known elements. In this context, "known" means observed well enough, even from just a few decay products, to have been differentiated from other elements. Most recently, the synthesis of element 118 (since named oganesson) was reported in October 2006, and the synthesis of element 117 (tennessine) was reported in April 2010. Of these 118 elements, 94 occur naturally on Earth. Six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. These 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. The first 94 elements have been detected directly on Earth as primordial nuclides present from the formation of the Solar System, or as naturally occurring fission or transmutation products of uranium and thorium. The remaining 24 heavier elements, not found today either on Earth or in astronomical spectra, have been produced artificially: these are all radioactive, with very short half-lives; if any atoms of these elements were present at the formation of Earth, they are extremely likely, to the point of certainty, to have already decayed; if present in novae, they have been in quantities too small to have been noted. Technetium was the first purportedly non-naturally occurring element synthesized, in 1937, although trace amounts of technetium have since been found in nature (and also the element may have been discovered naturally in 1925). This pattern of artificial production and later natural discovery has been repeated with several other radioactive naturally occurring rare elements. Lists of the elements are available by name, atomic number, density, melting point, boiling point and by symbol, as well as ionization energies of the elements. The nuclides of stable and radioactive elements are also available as a list of nuclides, sorted by length of half-life for those that are unstable. One of the most convenient, and certainly the most traditional, presentations of the elements is in the form of the periodic table, which groups together elements with similar chemical properties (and usually also similar electronic structures). Main article: Atomic number The atomic number of an element is equal to the number of protons in each atom, and defines the element. For example, all carbon atoms contain 6 protons in their atomic nucleus; so the atomic number of carbon is 6.
Carbon atoms may have different numbers of neutrons; atoms of the same element having different numbers of neutrons are known as isotopes of the element. The number of protons in the atomic nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's various chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties (except in the case of hydrogen and deuterium). Thus, all carbon isotopes have nearly identical chemical properties because they all have six protons and six electrons, even though carbon atoms may, for example, have 6 or 8 neutrons. That is why the atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of a chemical element. The symbol for atomic number is Z. Isotopes are atoms of the same element (that is, with the same number of protons in their atomic nucleus), but having different numbers of neutrons. Thus, for example, there are three main isotopes of carbon. All carbon atoms have 6 protons in the nucleus, but they can have either 6, 7, or 8 neutrons. Since the mass numbers of these are 12, 13 and 14 respectively, the three isotopes of carbon are known as carbon-12, carbon-13, and carbon-14, often abbreviated to 12C, 13C, and 14C. Carbon in everyday life and in chemistry is a mixture of 12C (about 98.9%), 13C (about 1.1%) and about 1 atom per trillion of 14C. Most (66 of 94) naturally occurring elements have more than one stable isotope. Except for the isotopes of hydrogen (which differ greatly from each other in relative mass—enough to cause chemical effects), the isotopes of a given element are chemically nearly indistinguishable. All of the elements have some isotopes that are radioactive (radioisotopes), although not all of these radioisotopes occur naturally. The radioisotopes typically decay into other elements upon radiating an alpha or beta particle. If an element has isotopes that are not radioactive, these are termed "stable" isotopes. All of the known stable isotopes occur naturally (see primordial isotope). The many radioisotopes that are not found in nature have been characterized after being artificially made. Certain elements have no stable isotopes and are composed only of radioactive isotopes: specifically the elements without any stable isotopes are technetium (atomic number 43), promethium (atomic number 61), and all observed elements with atomic numbers greater than 82. Of the 80 elements with at least one stable isotope, 26 have only one single stable isotope. The mean number of stable isotopes for the 80 stable elements is 3.1 stable isotopes per element. The largest number of stable isotopes that occur for a single element is 10 (for tin, element 50). The mass number of an element, A, is the number of nucleons (protons and neutrons) in the atomic nucleus. Different isotopes of a given element are distinguished by their mass numbers, which are conventionally written as a superscript on the left hand side of the atomic symbol (e.g. 238U). The mass number is always a whole number and has units of "nucleons". For example, magnesium-24 (24 is the mass number) is an atom with 24 nucleons (12 protons and 12 neutrons). 
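The isotope bookkeeping described above (atomic number Z, mass number A, neutron count A − Z, and the mix of isotopes in ordinary carbon) can be made concrete with a small illustrative Python sketch. The abundances are the approximate figures quoted in the text, and mass numbers are used in place of exact isotopic masses, so the average computed here is only an approximation.

# Illustrative sketch: carbon isotopes as mass number A with approximate natural abundance.
# Neutron count follows from A - Z; the average mass uses mass numbers as a stand-in
# for exact isotopic masses, which is only an approximation.
Z_CARBON = 6
carbon_isotopes = {12: 0.989, 13: 0.011}  # 14C (about 1 atom per trillion) is negligible here

for mass_number, abundance in carbon_isotopes.items():
    neutrons = mass_number - Z_CARBON
    print(f"carbon-{mass_number}: {Z_CARBON} protons, {neutrons} neutrons, abundance {abundance:.1%}")

average_mass = sum(a * abundance for a, abundance in carbon_isotopes.items())
print(f"approximate average atomic mass: {average_mass:.3f} u")  # about 12.011 u

The resulting value of about 12.011 u is close to the tabulated standard atomic weight of carbon, anticipating the abundance-weighted averaging that the next paragraph describes more precisely.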
Whereas the mass number is the total number of neutrons and protons and is thus a natural (or whole) number, the atomic mass of a single atom is a real number giving the mass of a particular isotope (or "nuclide") of the element, expressed in atomic mass units (symbol: u). In general, the mass number of a given nuclide differs in value slightly from its atomic mass, since the mass of each proton and neutron is not exactly 1 u; since the electrons contribute a lesser share to the atomic mass as neutron number exceeds proton number; and (finally) because of the nuclear binding energy. For example, the atomic mass of chlorine-35 to five significant digits is 34.969 u and that of chlorine-37 is 36.966 u. However, the atomic mass in u of each isotope is quite close to its simple mass number (always within 1%). The only isotope whose atomic mass is exactly a natural number is 12C, which by definition has a mass of exactly 12 because u is defined as 1/12 of the mass of a free neutral carbon-12 atom in the ground state. The standard atomic weight (commonly called "atomic weight") of an element is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance, relative to the atomic mass unit. This number may be a fraction that is not close to a whole number. For example, the relative atomic mass of chlorine is 35.453 u, which differs greatly from a whole number as it is an average of about 76% chlorine-35 and 24% chlorine-37. Whenever a relative atomic mass value differs by more than 1% from a whole number, it is due to this averaging effect, as significant amounts of more than one isotope are naturally present in a sample of that element. Chemists and nuclear scientists have different definitions of a pure element. In chemistry, a pure element means a substance whose atoms all (or in practice almost all) have the same atomic number, or number of protons. Nuclear scientists, however, define a pure element as one that consists of only one stable isotope. For example, a copper wire is 99.99% chemically pure if 99.99% of its atoms are copper, with 29 protons each. However it is not isotopically pure since ordinary copper consists of two stable isotopes, 69% 63Cu and 31% 65Cu, with different numbers of neutrons. However, a pure gold ingot would be both chemically and isotopically pure, since ordinary gold consists only of one isotope, 197Au. Main article: Allotropy Atoms of chemically pure elements may bond to each other chemically in more than one way, allowing the pure element to exist in multiple chemical structures (spatial arrangements of atoms), known as allotropes, which differ in their properties. For example, carbon can be found as diamond, which has a tetrahedral structure around each carbon atom; graphite, which has layers of carbon atoms with a hexagonal structure stacked on top of each other; graphene, which is a single layer of graphite that is very strong; fullerenes, which have nearly spherical shapes; and carbon nanotubes, which are tubes with a hexagonal structure (even these may differ from each other in electrical properties). The ability of an element to exist in one of many structural forms is known as 'allotropy'. The reference state of an element is defined by convention, usually as the thermodynamically most stable allotrope and physical state at a pressure of 1 bar and a given temperature (typically at 298.15K). 
However, for phosphorus, the reference state is white phosphorus even though it is not the most stable allotrope. In thermochemistry, an element is defined to have an enthalpy of formation of zero in its reference state. For example, the reference state for carbon is graphite, because the structure of graphite is more stable than that of the other allotropes. Several kinds of descriptive categorizations can be applied broadly to the elements, including consideration of their general physical and chemical properties, their states of matter under familiar conditions, their melting and boiling points, their densities, their crystal structures as solids, and their origins. Several terms are commonly used to characterize the general physical and chemical properties of the chemical elements. A first distinction is between metals, which readily conduct electricity, nonmetals, which do not, and a small group (the metalloids) having intermediate properties and often behaving as semiconductors. A more refined classification is often shown in colored presentations of the periodic table. This system restricts the terms "metal" and "nonmetal" to only certain of the more broadly defined metals and nonmetals, adding additional terms for certain sets of the more broadly viewed metals and nonmetals. The version of this classification used in the periodic tables presented here includes: actinides, alkali metals, alkaline earth metals, halogens, lanthanides, transition metals, post-transition metals, metalloids, reactive nonmetals, and noble gases. In this system, the alkali metals, alkaline earth metals, and transition metals, as well as the lanthanides and the actinides, are special groups of the metals viewed in a broader sense. Similarly, the reactive nonmetals and the noble gases are nonmetals viewed in the broader sense. In some presentations, the halogens are not distinguished, with astatine identified as a metalloid and the others identified as nonmetals. Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at a selected standard temperature and pressure (STP). Most of the elements are solids at conventional temperatures and atmospheric pressure, while several are gases. Only bromine and mercury are liquids at 0 degrees Celsius (32 degrees Fahrenheit) and normal atmospheric pressure; caesium and gallium are solids at that temperature, but melt at 28.4 °C (83.2 °F) and 29.8 °C (85.6 °F), respectively. Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, either or both of these measurements is still undetermined for some of the radioactive elements available in only tiny quantities. Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations. Main article: Densities of the elements (data page) The density at selected standard temperature and pressure (STP) is frequently used in characterizing the elements. Density is often expressed in grams per cubic centimeter (g/cm³). Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements.
When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm³, respectively. Main article: Crystal structure The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures. Main article: Abundance of elements in Earth's crust Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially as the synthetic products of man-made nuclear reactions. Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The remaining 11 naturally occurring elements possess half lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, 5 (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining 6 transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements. No radioactive decay has been observed for elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium). Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with very long half-lives: for example, the half-lives predicted for the observationally stable lead isotopes range from 10^35 to 10^189 years. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can readily be detected. Three of these elements, bismuth (element 83), thorium (element 90), and uranium (element 92) have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any naturally occurring element. The very heaviest 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all. Main article: Periodic table The properties of the chemical elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The current standard table contains 118 confirmed elements as of 2021.
Although earlier precursors to this presentation exist, its invention is generally credited to the Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior. Use of the periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering. The various chemical elements are formally identified by their unique atomic numbers, by their accepted names, and by their symbols. The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as "through", "beyond", or "from ... through", as in "through iron", "beyond uranium", or "from lanthanum through lutetium". The terms "light" and "heavy" are sometimes also used informally to indicate relative atomic numbers (not densities), as in "lighter than carbon" or "heavier than lead", although technically the weight or mass of atoms of an element (their atomic weights or atomic masses) does not always increase monotonically with their atomic numbers. Main article: Naming of elements The naming of various substances now known as elements precedes the atomic theory of matter, as names were given locally by various cultures to various minerals, metals, compounds, alloys, mixtures, and other materials, although at the time it was not known which chemicals were elements and which compounds. As they were identified as elements, the existing names for anciently known elements (e.g., gold, mercury, iron) were kept in most countries. National differences emerged over the names of elements either for convenience, linguistic niceties, or nationalism. For a few illustrative examples: German speakers use "Wasserstoff" (water substance) for "hydrogen", "Sauerstoff" (acid substance) for "oxygen" and "Stickstoff" (smothering substance) for "nitrogen", while English and some Romance languages use "sodium" for "natrium" and "potassium" for "kalium", and the French, Italians, Greeks, Portuguese and Poles prefer "azote/azot/azoto" (from roots meaning "no life") for "nitrogen". For purposes of international communication and trade, the official names of the chemical elements both ancient and more recently recognized are decided by the International Union of Pure and Applied Chemistry (IUPAC), which has decided on a sort of international English language, drawing on traditional English names even when an element's chemical symbol is based on a Latin or other traditional word, for example adopting "gold" rather than "aurum" as the name for the 79th element (Au). IUPAC prefers the British spellings "aluminium" and "caesium" over the U.S. spellings "aluminum" and "cesium", and the U.S. "sulfur" over the British "sulphur".
However, elements that are practical to sell in bulk in many countries often still have locally used national names, and countries whose national language does not use the Latin alphabet are likely to use the IUPAC element names. According to IUPAC, chemical elements are not proper nouns in English; consequently, the full name of an element is not routinely capitalized in English, even if derived from a proper noun, as in californium and einsteinium. Isotope names of chemical elements are also uncapitalized if written out, e.g., carbon-12 or uranium-235. Chemical element symbols (such as Cf for californium and Es for einsteinium), are always capitalized (see below). In the second half of the twentieth century, physics laboratories became able to produce nuclei of chemical elements with half-lives too short for an appreciable amount of them to exist at any time. These are also named by IUPAC, which generally adopts the name chosen by the discoverer. This practice can lead to the controversial question of which research group actually discovered an element, a question that delayed the naming of elements with atomic number of 104 and higher for a considerable amount of time. (See element naming controversy). Precursors of such controversies involved the nationalistic namings of elements in the late 19th century. For example, lutetium was named in reference to Paris, France. The Germans were reluctant to relinquish naming rights to the French, often calling it cassiopeium. Similarly, the British discoverer of niobium originally named it columbium, in reference to the New World. It was used extensively as such by American publications before the international standardization (in 1950). For listings of current chemical symbols, symbols not currently used, and other symbols that may look like chemical symbols, see Chemical symbol. Before chemistry became a science, alchemists had designed arcane symbols for both metals and common compounds. These were however used as abbreviations in diagrams or procedures; there was no concept of atoms combining to form molecules. With his advances in the atomic theory of matter, John Dalton devised his own simpler symbols, based on circles, to depict molecules. The current system of chemical notation was invented by Berzelius. In this typographical system, chemical symbols are not mere abbreviations—though each consists of letters of the Latin alphabet. They are intended as universal symbols for people of all languages and alphabets. The first of these symbols were intended to be fully universal. Since Latin was the common language of science at that time, they were abbreviations based on the Latin names of metals. Cu comes from cuprum, Fe comes from ferrum, Ag from argentum. The symbols were not followed by a period (full stop) as with abbreviations. Later chemical elements were also assigned unique chemical symbols, based on the name of the element, but not necessarily in English. For example, sodium has the chemical symbol 'Na' after the Latin natrium. The same applies to "Fe" (ferrum) for iron, "Hg" (hydrargyrum) for mercury, "Sn" (stannum) for tin, "Au" (aurum) for gold, "Ag" (argentum) for silver, "Pb" (plumbum) for lead, "Cu" (cuprum) for copper, and "Sb" (stibium) for antimony. "W" (wolfram) for tungsten ultimately derives from German, "K" (kalium) for potassium ultimately from Arabic. Chemical symbols are understood internationally when element names might require translation. There have sometimes been differences in the past. 
For example, Germans in the past have used "J" (for the alternate name Jod) for iodine, but now use "I" and "Iod". The first letter of a chemical symbol is always capitalized, as in the preceding examples, and the subsequent letters, if any, are always lower case (small letters). Thus, the symbols for californium and einsteinium are Cf and Es. There are also symbols in chemical equations for groups of chemical elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, an "X" indicates a variable group (usually a halogen) in a class of compounds, while "R" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter "Q" is reserved for "heat" in a chemical reaction. "Y" is also often used as a general chemical symbol, although it is also the symbol of yttrium. "Z" is also frequently used as a general variable group. "E" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly "Nu" denotes a nucleophile. "L" is used to represent a general ligand in inorganic and organometallic chemistry. "M" is also often used in place of a general metal. At least two additional, two-letter generic chemical symbols are also in informal usage, "Ln" for any lanthanide element and "An" for any actinide element. "Rg" was formerly used for any rare gas element, but the group of rare gases has now been renamed noble gases and the symbol "Rg" has now been assigned to the element roentgenium. Isotopes are distinguished by the atomic mass number (total protons and neutrons) for a particular isotope of an element, with this number combined with the pertinent element's symbol. IUPAC prefers that isotope symbols be written in superscript notation when practical, for example 12C and 235U. However, other notations, such as carbon-12 and uranium-235, or C-12 and U-235, are also used. As a special case, the three naturally occurring isotopes of the element hydrogen are often specified as H for 1H (protium), D for 2H (deuterium), and T for 3H (tritium). This convention is easier to use in chemical equations, replacing the need to write out the mass number for each atom. For example, the formula for heavy water may be written D2O instead of 2H2O. Main article: Nucleosynthesis Only about 4% of the total mass of the universe is made of atoms or ions, and thus represented by chemical elements. This fraction is about 15% of the total matter, with the remainder of the matter (85%) being dark matter. The nature of dark matter is unknown, but it is not composed of atoms of chemical elements because it contains no protons, neutrons, or electrons. (The remaining non-matter part of the mass of the universe is composed of the even less well understood dark energy). The 94 naturally occurring chemical elements were produced by at least four classes of astrophysical process. Most of the hydrogen, helium and a very small quantity of lithium were produced in the first few minutes of the Big Bang. This Big Bang nucleosynthesis happened only once; the other processes are ongoing. Nuclear fusion inside stars produces elements through stellar nucleosynthesis, including all elements from carbon to iron in atomic number. Elements higher in atomic number than iron, including heavy elements like uranium and plutonium, are produced by various forms of explosive nucleosynthesis in supernovae and neutron star mergers. 
The light elements lithium, beryllium and boron are produced mostly through cosmic ray spallation (fragmentation induced by cosmic rays) of carbon, nitrogen, and oxygen. During the early phases of the Big Bang, nucleosynthesis of hydrogen nuclei resulted in the production of hydrogen-1 (protium, 1H) and helium-4 (4He), as well as a smaller amount of deuterium (2H) and very minuscule amounts (on the order of 10−10) of lithium and beryllium. Even smaller amounts of boron may have been produced in the Big Bang, since it has been observed in some very old stars, while carbon has not. No elements heavier than boron were produced in the Big Bang. As a result, the primordial abundance of atoms (or ions) consisted of roughly 75% 1H, 25% 4He, and 0.01% deuterium, with only tiny traces of lithium, beryllium, and perhaps boron. Subsequent enrichment of galactic halos occurred due to stellar nucleosynthesis and supernova nucleosynthesis. However, the element abundance in intergalactic space can still closely resemble primordial conditions, unless it has been enriched by some means. On Earth (and elsewhere), trace amounts of various elements continue to be produced from other elements as products of nuclear transmutation processes. These include some produced by cosmic rays or other nuclear reactions (see cosmogenic and nucleogenic nuclides), and others produced as decay products of long-lived primordial nuclides. For example, trace (but detectable) amounts of carbon-14 (14C) are continually produced in the atmosphere by cosmic rays impacting nitrogen atoms, and argon-40 (40Ar) is continually produced by the decay of primordially occurring but unstable potassium-40 (40K). Also, three primordially occurring but radioactive actinides, thorium, uranium, and plutonium, decay through a series of recurrently produced but unstable radioactive elements such as radium and radon, which are transiently present in any sample of these metals or their ores or compounds. Three other radioactive elements, technetium, promethium, and neptunium, occur only incidentally in natural materials, produced as individual atoms by nuclear fission of the nuclei of various heavy elements or in other rare nuclear processes. In addition to the 94 naturally occurring elements, several artificial elements have been produced by human nuclear physics technology. As of 2021, these experiments have produced all elements up to atomic number 118. Main article: Abundance of the chemical elements The following graph (note log scale) shows the abundance of elements in our Solar System. The table shows the twelve most common elements in our galaxy (estimated spectroscopically), as measured in parts per million, by mass. Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. As physical laws and processes appear common throughout the visible universe, however, scientists expect that these galaxies evolved elements in similar abundance. The abundance of elements in the Solar System is in keeping with their origin from nucleosynthesis in the Big Bang and a number of progenitor supernova stars. 
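The decay relationships mentioned above (carbon-14 continually formed by cosmic rays, argon-40 accumulating from the decay of potassium-40) follow the usual half-life rule, N(t) = N0 * 2^(-t / half-life). A minimal Python sketch of that relation follows; the half-life values are rounded, commonly quoted figures used only for illustration.

    def remaining_fraction(t: float, half_life: float) -> float:
        # Fraction of the original nuclide left after time t (same units as half_life).
        return 2.0 ** (-t / half_life)

    # Approximate half-lives in years (illustrative values, not authoritative data):
    C14_HALF_LIFE = 5_730        # carbon-14, produced in the atmosphere by cosmic rays
    K40_HALF_LIFE = 1.25e9       # potassium-40, whose decay yields argon-40

    print(f"C-14 left after 10,000 y: {remaining_fraction(10_000, C14_HALF_LIFE):.3f}")
    print(f"K-40 left after 4.5 Gy:  {remaining_fraction(4.5e9, K40_HALF_LIFE):.3f}")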
Very abundant hydrogen and helium are products of the Big Bang, but the next three elements are rare since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by the breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays). Beginning with carbon, elements are produced in stars by buildup from alpha particles (helium nuclei), resulting in an alternatingly larger abundance of elements with even atomic numbers (these are also more stable). In general, such elements up to iron are made in large stars in the process of becoming supernovas. Iron-56 is particularly common, since it is the most stable element that can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with their atomic number. The abundance of the chemical elements on Earth varies from air to crust to ocean, and in various types of life. The abundance of elements in Earth's crust differs from that in the Solar System (as seen in the Sun and heavy planets like Jupiter) mainly in selective loss of the very lightest elements (hydrogen and helium) and also volatile neon, carbon (as hydrocarbons), nitrogen and sulfur, as a result of solar heating in the early formation of the solar system. Oxygen, the most abundant Earth element by mass, is retained on Earth by combination with silicon. Aluminum at 8% by mass is more common in the Earth's crust than in the universe and solar system, but the composition of the far more bulky mantle, which has magnesium and iron in place of aluminum (which occurs there only at 2% of mass), more closely mirrors the elemental composition of the solar system, save for the noted loss of volatile elements to space, and loss of iron which has migrated to the Earth's core. The composition of the human body, by contrast, more closely follows the composition of seawater—save that the human body has additional stores of carbon and nitrogen necessary to form the proteins and nucleic acids, together with phosphorus in the nucleic acids and energy transfer molecule adenosine triphosphate (ATP) that occurs in the cells of all living organisms. Certain kinds of organisms require particular additional elements, for example the magnesium in chlorophyll in green plants, the calcium in mollusc shells, or the iron in the hemoglobin in vertebrate animals' red blood cells.
Table: Elements in our galaxy (parts per million by mass)
Nutritional elements in the periodic table (legend categories):
- Essential trace elements
- Deemed essential trace element by the U.S., but not by the European Union
- Suggested function from deprivation effects or active metabolic handling, but no clearly identified biochemical function in humans
- Limited circumstantial evidence for trace benefits or biological action in mammals
- No evidence for biological action in mammals, but essential in some lower organisms. (In the case of lanthanum, the definition of an essential nutrient as being indispensable and irreplaceable is not completely applicable due to the extreme similarity of the lanthanides. The stable early lanthanides up to Sm are known to stimulate the growth of various lanthanide-using organisms.)
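The buildup from alpha particles described above can be made concrete with a little arithmetic: adding helium-4 nuclei one at a time gives the mass-number ladder 4, 8, 12, ..., and fourteen of them account for the mass number 56 of nickel-56, whose decay product iron-56 the text singles out. A tiny illustrative Python sketch:

    ALPHA_MASS = 4  # mass number of a helium-4 nucleus (an alpha particle)

    # Mass numbers reachable by successively adding alpha particles, up to 14 of them.
    ladder = [n * ALPHA_MASS for n in range(1, 15)]
    print(ladder)        # [4, 8, 12, ..., 56]
    print(ladder[-1])    # 56 == 14 * 4, matching the nickel-56 / iron-56 figure in the text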
The concept of an "element" as an indivisible substance has developed through three major historical phases: Classical definitions (such as those of the ancient Greeks), chemical definitions, and atomic definitions. Ancient philosophy posited a set of classical elements to explain observed patterns in nature. These elements originally referred to earth, water, air and fire rather than the chemical elements of modern science. The term 'elements' (stoicheia) was first used by the Greek philosopher Plato in about 360 BCE in his dialogue Timaeus, which includes a discussion of the composition of inorganic and organic bodies and is a speculative treatise on chemistry. Plato believed the elements introduced a century earlier by Empedocles were composed of small polyhedral forms: tetrahedron (fire), octahedron (air), icosahedron (water), and cube (earth). Aristotle, c. 350 BCE, also used the term stoicheia and added a fifth element called aether, which formed the heavens. Aristotle defined an element as: Element – one of those bodies into which other bodies can decompose, and that itself is not capable of being divided into other. In 1661, Robert Boyle proposed his theory of corpuscularism, which favoured the analysis of matter as constituted by irreducible units of matter (atoms) and, choosing to side with neither Aristotle's view of the four elements nor Paracelsus' view of three fundamental elements, left open the question of the number of elements. The first modern list of chemical elements was given in Antoine Lavoisier's 1789 Elements of Chemistry, which contained thirty-three elements, including light and caloric. By 1818, Jöns Jakob Berzelius had determined atomic weights for forty-five of the forty-nine then-accepted elements. Dmitri Mendeleev had sixty-six elements in his periodic table of 1869. From Boyle until the early 20th century, an element was defined as a pure substance that could not be decomposed into any simpler substance. Put another way, a chemical element cannot be transformed into other chemical elements by chemical processes. Elements during this time were generally distinguished by their atomic weights, a property measurable with fair accuracy by available analytical techniques. The 1913 discovery by English physicist Henry Moseley that the nuclear charge is the physical basis for an atom's atomic number, further refined when the nature of protons and neutrons became appreciated, eventually led to the current definition of an element based on atomic number (number of protons per atomic nucleus). The use of atomic numbers, rather than atomic weights, to distinguish elements has greater predictive value (since these numbers are integers), and also resolves some ambiguities in the chemistry-based view due to varying properties of isotopes and allotropes within the same element. Currently, IUPAC defines an element to exist if it has isotopes with a lifetime longer than the 10−14 seconds it takes the nucleus to form an electronic cloud. By 1914, seventy-two elements were known, all naturally occurring. The remaining naturally occurring elements were discovered or isolated in subsequent decades, and various additional elements have also been produced synthetically, with much of that work pioneered by Glenn T. Seaborg. In 1955, element 101 was discovered and named mendelevium in honor of D.I. Mendeleev, the first to arrange the elements in a periodic manner. 
Ten materials familiar to various prehistoric cultures are now known to be chemical elements: carbon, copper, gold, iron, lead, mercury, silver, sulfur, tin, and zinc. Three additional materials now accepted as elements, arsenic, antimony, and bismuth, were recognized as distinct substances prior to 1500 AD. Phosphorus, cobalt, and platinum were isolated before 1750. Most of the remaining naturally occurring chemical elements were identified and characterized by 1900, including: Elements isolated or produced since 1900 include: The first transuranium element (element with atomic number greater than 92) discovered was neptunium in 1940. Since 1999 claims for the discovery of new elements have been considered by the IUPAC/IUPAP Joint Working Party. As of January 2016, all 118 elements have been confirmed as discovered by IUPAC. The discovery of element 112 was acknowledged in 2009, and the name copernicium and the atomic symbol Cn were suggested for it. The name and symbol were officially endorsed by IUPAC on 19 February 2010. The heaviest element that is believed to have been synthesized to date is element 118, oganesson, on 9 October 2006, by the Flerov Laboratory of Nuclear Reactions in Dubna, Russia. Tennessine, element 117, was the latest element claimed to be discovered, in 2009. On 28 November 2016, scientists at the IUPAC officially recognized the names for four of the newest chemical elements, with atomic numbers 113, 115, 117, and 118. Main article: List of chemical elements The following sortable table shows the 118 known chemical elements.
|Z||Symbol||Name||Origin of name||Group||Period||Block||Standard atomic weight (Da)||Density (g/cm3)[b][c]||Melting point (K)[d]||Boiling point (K)[e]||Specific heat capacity (J/g · K)||Electronegativity||Abundance in Earth's crust (mg/kg)||Origin[i]||Phase at r.t.[j]|
|1||H||Hydrogen||Greek elements hydro- and -gen, 'water-forming'||1||1||s-block||1.0080||0.00008988||14.01||20.28||14.304||2.20||1400||primordial||gas|
|2||He||Helium||Greek hḗlios, 'sun'||18||1||s-block||4.0026||0.0001785||–[k]||4.22||5.193||–||0.008||primordial||gas|
|3||Li||Lithium||Greek líthos, 'stone'||1||2||s-block||6.94||0.534||453.69||1560||3.582||0.98||20||primordial||solid|
|4||Be||Beryllium||Beryl, a mineral (ultimately from the name of Belur in southern India)||2||2||s-block||9.0122||1.85||1560||2742||1.825||1.57||2.8||primordial||solid|
|5||B||Boron||Borax, a mineral (from Arabic bawraq, Middle Persian *bōrag)||13||2||p-block||10.81||2.34||2349||4200||1.026||2.04||10||primordial||solid|
|6||C||Carbon||Latin carbo, 'coal'||14||2||p-block||12.011||2.267||>4000||4300||0.709||2.55||200||primordial||solid|
|7||N||Nitrogen||Greek nítron and -gen, 'niter-forming'||15||2||p-block||14.007||0.0012506||63.15||77.36||1.04||3.04||19||primordial||gas|
|8||O||Oxygen||Greek oxy- and -gen, 'acid-forming'||16||2||p-block||15.999||0.001429||54.36||90.20||0.918||3.44||461000||primordial||gas|
|9||F||Fluorine||Latin fluere, 'to flow'||17||2||p-block||18.998||0.001696||53.53||85.03||0.824||3.98||585||primordial||gas|
|10||Ne||Neon||Greek néon, 'new'||18||2||p-block||20.180||0.0009002||24.56||27.07||1.03||–||0.005||primordial||gas|
|11||Na||Sodium||English (from medieval Latin) soda · Symbol Na is derived from New Latin natrium, coined from German Natron, 'natron'
|12||Mg||Magnesium||Magnesia, a district of Eastern Thessaly in Greece||2||3||s-block||24.305||1.738||923||1363||1.023||1.31||23300||primordial||solid|
|13||Al||Aluminium||Alumina, from Latin alumen (gen. 
aluminis), 'bitter salt, alum'||13||3||p-block||26.982||2.70||933.47||2792||0.897||1.61||82300||primordial||solid| |14||Si||Silicon||Latin silex, 'flint' (originally silicium)||14||3||p-block||28.085||2.3290||1687||3538||0.705||1.9||282000||primordial||solid| |15||P||Phosphorus||Greek phōsphóros, 'light-bearing'||15||3||p-block||30.974||1.823||317.30||550||0.769||2.19||1050||primordial||solid| |16||S||Sulfur||Latin sulphur, 'brimstone'||16||3||p-block||32.06||2.07||388.36||717.87||0.71||2.58||350||primordial||solid| |17||Cl||Chlorine||Greek chlōrós, 'greenish yellow'||17||3||p-block||35.45||0.0032||171.6||239.11||0.479||3.16||145||primordial||gas| |18||Ar||Argon||Greek argós, 'idle' (because of its inertness)||18||3||p-block||39.95||0.001784||83.80||87.30||0.52||–||3.5||primordial||gas| |19||K||Potassium||New Latin potassa, 'potash', itself from pot and ash · Symbol K is derived from Latin kalium |20||Ca||Calcium||Latin calx, 'lime'||2||4||s-block||40.078||1.55||1115||1757||0.647||1.00||41500||primordial||solid| |21||Sc||Scandium||Latin Scandia, 'Scandinavia'||3||4||d-block||44.956||2.985||1814||3109||0.568||1.36||22||primordial||solid| |22||Ti||Titanium||Titans, the sons of the Earth goddess of Greek mythology||4||4||d-block||47.867||4.506||1941||3560||0.523||1.54||5650||primordial||solid| |23||V||Vanadium||Vanadis, an Old Norse name for the Scandinavian goddess Freyja||5||4||d-block||50.942||6.11||2183||3680||0.489||1.63||120||primordial||solid| |24||Cr||Chromium||Greek chróma, 'colour'||6||4||d-block||51.996||7.15||2180||2944||0.449||1.66||102||primordial||solid| |25||Mn||Manganese||Corrupted from magnesia negra; see § magnesium||7||4||d-block||54.938||7.21||1519||2334||0.479||1.55||950||primordial||solid| |26||Fe||Iron||English word, from Proto-Celtic *īsarnom ('iron'), from a root meaning 'blood' · Symbol Fe is derived from Latin ferrum |27||Co||Cobalt||German Kobold, 'goblin'||9||4||d-block||58.933||8.90||1768||3200||0.421||1.88||25||primordial||solid| |28||Ni||Nickel||Nickel, a mischievous sprite of German miner mythology||10||4||d-block||58.693||8.908||1728||3186||0.444||1.91||84||primordial||solid| |29||Cu||Copper||English word, from Latin cuprum, from Ancient Greek Kýpros 'Cyprus'||11||4||d-block||63.546||8.96||1357.77||2835||0.385||1.90||60||primordial||solid| |30||Zn||Zinc||Most likely from German Zinke, 'prong' or 'tooth', though some suggest Persian sang, 'stone'||12||4||d-block||65.38||7.14||692.88||1180||0.388||1.65||70||primordial||solid| |31||Ga||Gallium||Latin Gallia, 'France'||13||4||p-block||69.723||5.91||302.9146||2673||0.371||1.81||19||primordial||solid| |32||Ge||Germanium||Latin Germania, 'Germany'||14||4||p-block||72.630||5.323||1211.40||3106||0.32||2.01||1.5||primordial||solid| |33||As||Arsenic||French arsenic, from Greek arsenikón 'yellow arsenic' (influenced by arsenikós, 'masculine' or 'virile'), from a West Asian wanderword ultimately from Old Iranian *zarniya-ka, 'golden'||15||4||p-block||74.922||5.727||1090[l]||887||0.329||2.18||1.8||primordial||solid| |34||Se||Selenium||Greek selḗnē, 'moon'||16||4||p-block||78.971||4.81||453||958||0.321||2.55||0.05||primordial||solid| |35||Br||Bromine||Greek brômos, 'stench'||17||4||p-block||79.904||3.1028||265.8||332.0||0.474||2.96||2.4||primordial||liquid| |36||Kr||Krypton||Greek kryptós, 'hidden'||18||4||p-block||83.798||0.003749||115.79||119.93||0.248||3.00||1×10−4||primordial||gas| |37||Rb||Rubidium||Latin rubidus, 'deep red'||1||5||s-block||85.468||1.532||312.46||961||0.363||0.82||90||primordial||solid| 
|38||Sr||Strontium||Strontian, a village in Scotland, where it was found||2||5||s-block||87.62||2.64||1050||1655||0.301||0.95||370||primordial||solid| |39||Y||Yttrium||Ytterby, Sweden, where it was found; see also terbium, erbium, ytterbium||3||5||d-block||88.906||4.472||1799||3609||0.298||1.22||33||primordial||solid| |40||Zr||Zirconium||Zircon, a mineral, from Persian zargun, 'gold-hued'||4||5||d-block||91.224||6.52||2128||4682||0.278||1.33||165||primordial||solid| |41||Nb||Niobium||Niobe, daughter of king Tantalus from Greek mythology; see also tantalum||5||5||d-block||92.906||8.57||2750||5017||0.265||1.6||20||primordial||solid| |42||Mo||Molybdenum||Greek molýbdaina, 'piece of lead', from mólybdos, 'lead', due to confusion with lead ore galena (PbS)||6||5||d-block||95.95||10.28||2896||4912||0.251||2.16||1.2||primordial||solid| |43||Tc||Technetium||Greek tekhnētós, 'artificial'||7||5||d-block||[a]||11||2430||4538||–||1.9||~ 3×10−9||from decay||solid| |44||Ru||Ruthenium||New Latin Ruthenia, 'Russia'||8||5||d-block||101.07||12.45||2607||4423||0.238||2.2||0.001||primordial||solid| |45||Rh||Rhodium||Greek rhodóeis, 'rose-coloured', from rhódon, 'rose'||9||5||d-block||102.91||12.41||2237||3968||0.243||2.28||0.001||primordial||solid| |46||Pd||Palladium||Pallas, an asteroid, considered a planet at the time||10||5||d-block||106.42||12.023||1828.05||3236||0.244||2.20||0.015||primordial||solid| · Symbol Ag is derived from Latin argentum |48||Cd||Cadmium||New Latin cadmia, from King Kadmos||12||5||d-block||112.41||8.65||594.22||1040||0.232||1.69||0.159||primordial||solid| |49||In||Indium||Latin indicum, 'indigo', the blue colour found in its spectrum||13||5||p-block||114.82||7.31||429.75||2345||0.233||1.78||0.25||primordial||solid| · Symbol Sn is derived from Latin stannum |51||Sb||Antimony||Latin antimonium, the origin of which is uncertain: folk etymologies suggest it is derived from Greek antí ('against') + mónos ('alone'), or Old French anti-moine, 'Monk's bane', but it could plausibly be from or related to Arabic ʾiṯmid, 'antimony', reformatted as a Latin word · Symbol Sb is derived from Latin stibium 'stibnite' |52||Te||Tellurium||Latin tellus, 'the ground, earth'||16||5||p-block||127.60||6.24||722.66||1261||0.202||2.1||0.001||primordial||solid| |53||I||Iodine||French iode, from Greek ioeidḗs, 'violet'||17||5||p-block||126.90||4.933||386.85||457.4||0.214||2.66||0.45||primordial||solid| |54||Xe||Xenon||Greek xénon, neuter form of xénos 'strange'||18||5||p-block||131.29||0.005894||161.4||165.03||0.158||2.60||3×10−5||primordial||gas| |55||Cs||Caesium||Latin caesius, 'sky-blue'||1||6||s-block||132.91||1.93||301.59||944||0.242||0.79||3||primordial||solid| |56||Ba||Barium||Greek barýs, 'heavy'||2||6||s-block||137.33||3.51||1000||2170||0.204||0.89||425||primordial||solid| |57||La||Lanthanum||Greek lanthánein, 'to lie hidden'||n/a||6||f-block||138.91||6.162||1193||3737||0.195||1.1||39||primordial||solid| |58||Ce||Cerium||Ceres, a dwarf planet, considered a planet at the time||n/a||6||f-block||140.12||6.770||1068||3716||0.192||1.12||66.5||primordial||solid| |59||Pr||Praseodymium||Greek prásios dídymos, 'green twin'||n/a||6||f-block||140.91||6.77||1208||3793||0.193||1.13||9.2||primordial||solid| |60||Nd||Neodymium||Greek néos dídymos, 'new twin'||n/a||6||f-block||144.24||7.01||1297||3347||0.19||1.14||41.5||primordial||solid| |61||Pm||Promethium||Prometheus, a figure in Greek mythology||n/a||6||f-block||||7.26||1315||3273||–||1.13||2×10−19||from decay||solid| |62||Sm||Samarium||Samarskite, a mineral 
named after V. Samarsky-Bykhovets, Russian mine official||n/a||6||f-block||150.36||7.52||1345||2067||0.197||1.17||7.05||primordial||solid| |64||Gd||Gadolinium||Gadolinite, a mineral named after Johan Gadolin, Finnish chemist, physicist and mineralogist||n/a||6||f-block||157.25||7.90||1585||3546||0.236||1.2||6.2||primordial||solid| |65||Tb||Terbium||Ytterby, Sweden, where it was found; see also yttrium, erbium, ytterbium||n/a||6||f-block||158.93||8.23||1629||3503||0.182||1.2||1.2||primordial||solid| |66||Dy||Dysprosium||Greek dysprósitos, 'hard to get'||n/a||6||f-block||162.50||8.540||1680||2840||0.17||1.22||5.2||primordial||solid| |67||Ho||Holmium||New Latin Holmia, 'Stockholm'||n/a||6||f-block||164.93||8.79||1734||2993||0.165||1.23||1.3||primordial||solid| |68||Er||Erbium||Ytterby, Sweden, where it was found; see also yttrium, terbium, ytterbium||n/a||6||f-block||167.26||9.066||1802||3141||0.168||1.24||3.5||primordial||solid| |69||Tm||Thulium||Thule, the ancient name for an unclear northern location||n/a||6||f-block||168.93||9.32||1818||2223||0.16||1.25||0.52||primordial||solid| |70||Yb||Ytterbium||Ytterby, Sweden, where it was found; see also yttrium, terbium, erbium||n/a||6||f-block||173.05||6.90||1097||1469||0.155||1.1||3.2||primordial||solid| |71||Lu||Lutetium||Latin Lutetia, 'Paris'||3||6||d-block||174.97||9.841||1925||3675||0.154||1.27||0.8||primordial||solid| |72||Hf||Hafnium||New Latin Hafnia, 'Copenhagen' (from Danish havn, harbour)||4||6||d-block||178.49||13.31||2506||4876||0.144||1.3||3||primordial||solid| |73||Ta||Tantalum||King Tantalus, father of Niobe from Greek mythology; see also niobium||5||6||d-block||180.95||16.69||3290||5731||0.14||1.5||2||primordial||solid| |74||W||Tungsten||Swedish tung sten, 'heavy stone' · Symbol W is from Wolfram, originally from Middle High German wolf-rahm 'wolf's foam' describing the mineral wolframite |75||Re||Rhenium||Latin Rhenus, 'the Rhine'||7||6||d-block||186.21||21.02||3459||5869||0.137||1.9||7×10−4||primordial||solid| |76||Os||Osmium||Greek osmḗ, 'smell'||8||6||d-block||190.23||22.59||3306||5285||0.13||2.2||0.002||primordial||solid| |77||Ir||Iridium||Iris, the Greek goddess of the rainbow||9||6||d-block||192.22||22.56||2719||4701||0.131||2.20||0.001||primordial||solid| |78||Pt||Platinum||Spanish platina, 'little silver', from plata 'silver'||10||6||d-block||195.08||21.45||2041.4||4098||0.133||2.28||0.005||primordial||solid| |79||Au||Gold||English word, from the same root as 'yellow' · Symbol Au is derived from Latin aurum |80||Hg||Mercury||Mercury, Roman god of commerce, communication, and luck, known for his speed and mobility · Symbol Hg is derived from its Latin name hydrargyrum, from Greek hydrárgyros, 'water-silver' |81||Tl||Thallium||Greek thallós, 'green shoot or twig'||13||6||p-block||204.38||11.85||577||1746||0.129||1.62||0.85||primordial||solid| |82||Pb||Lead||English word, from Proto-Celtic *ɸloudom, from a root meaning 'flow' · Symbol Pb is derived from Latin plumbum |83||Bi||Bismuth||German Wismut, from weiß Masse 'white mass', unless from Arabic||15||6||p-block||208.98||9.78||544.7||1837||0.122||2.02||0.009||primordial||solid| |84||Po||Polonium||Latin Polonia, 'Poland', home country of Marie Curie||16||6||p-block||[a]||9.196||527||1235||–||2.0||2×10−10||from decay||solid| |85||At||Astatine||Greek ástatos, 'unstable'||17||6||p-block||||(8.91–8.95)||575||610||–||2.2||3×10−20||from decay||unknown phase| |86||Rn||Radon||Radium emanation, originally the name of the isotope 
Radon-222||18||6||p-block||||0.00973||202||211.3||0.094||2.2||4×10−13||from decay||gas| |87||Fr||Francium||France, home country of discoverer Marguerite Perey||1||7||s-block||||(2.48)||281||890||–||>0.79||~ 1×10−18||from decay||unknown phase| |88||Ra||Radium||French radium, from Latin radius, 'ray'||2||7||s-block||||5.5||973||2010||0.094||0.9||9×10−7||from decay||solid| |89||Ac||Actinium||Greek aktís, 'ray'||n/a||7||f-block||||10||1323||3471||0.12||1.1||5.5×10−10||from decay||solid| |90||Th||Thorium||Thor, the Scandinavian god of thunder||n/a||7||f-block||232.04||11.7||2115||5061||0.113||1.3||9.6||primordial||solid| |91||Pa||Protactinium||Proto- (from Greek prôtos, 'first, before') + actinium, since actinium is produced through the radioactive decay of protactinium||n/a||7||f-block||231.04||15.37||1841||4300||–||1.5||1.4×10−6||from decay||solid| |92||U||Uranium||Uranus, the seventh planet in the Solar System||n/a||7||f-block||238.03||19.1||1405.3||4404||0.116||1.38||2.7||primordial||solid| |93||Np||Neptunium||Neptune, the eighth planet in the Solar System||n/a||7||f-block||||20.45||917||4273||–||1.36||≤ 3×10−12||from decay||solid| |94||Pu||Plutonium||Pluto, a dwarf planet, considered a planet in the Solar System at the time||n/a||7||f-block||||19.85||912.5||3501||–||1.28||≤ 3×10−11||from decay||solid| |95||Am||Americium||The Americas, where the element was first synthesised, by analogy with its homologue § europium||n/a||7||f-block||||12||1449||2880||–||1.13||–||synthetic||solid| |96||Cm||Curium||Pierre Curie and Marie Curie, French physicists and chemists||n/a||7||f-block||||13.51||1613||3383||–||1.28||–||synthetic||solid| |97||Bk||Berkelium||Berkeley, California, where the element was first synthesised||n/a||7||f-block||||14.78||1259||2900||–||1.3||–||synthetic||solid| |98||Cf||Californium||California, where the element was first synthesised in the LBNL laboratory||n/a||7||f-block||||15.1||1173||(1743)[b]||–||1.3||–||synthetic||solid| |99||Es||Einsteinium||Albert Einstein, German physicist||n/a||7||f-block||||8.84||1133||(1269)||–||1.3||–||synthetic||solid| |100||Fm||Fermium||Enrico Fermi, Italian physicist||n/a||7||f-block||||(9.7)[b]||(1125) |101||Md||Mendelevium||Dmitri Mendeleev, Russian chemist who proposed the periodic table||n/a||7||f-block||||(10.3)||(1100)||–||–||1.3||–||synthetic||unknown phase| |102||No||Nobelium||Alfred Nobel, Swedish chemist and engineer||n/a||7||f-block||||(9.9)||(1100)||–||–||1.3||–||synthetic||unknown phase| |103||Lr||Lawrencium||Ernest Lawrence, American physicist||3||7||d-block||||(14.4)||(1900)||–||–||1.3||–||synthetic||unknown phase| |104||Rf||Rutherfordium||Ernest Rutherford, chemist and physicist from New Zealand||4||7||d-block||||(17)||(2400)||(5800)||–||–||–||synthetic||unknown phase| |105||Db||Dubnium||Dubna, Russia, where the element was discovered in the JINR laboratory||5||7||d-block||||(21.6)||–||–||–||–||–||synthetic||unknown phase| |106||Sg||Seaborgium||Glenn T. 
Seaborg, American chemist||6||7||d-block||||(23–24)||–||–||–||–||–||synthetic||unknown phase|
|107||Bh||Bohrium||Niels Bohr, Danish physicist||7||7||d-block||||(26–27)||–||–||–||–||–||synthetic||unknown phase|
|108||Hs||Hassium||New Latin Hassia, 'Hesse', a state in Germany||8||7||d-block||||(27–29)||–||–||–||–||–||synthetic||unknown phase|
|109||Mt||Meitnerium||Lise Meitner, Austrian physicist||9||7||d-block||||(27–28)||–||–||–||–||–||synthetic||unknown phase|
|110||Ds||Darmstadtium||Darmstadt, Germany, where the element was first synthesised in the GSI laboratories||10||7||d-block||||(26–27)||–||–||–||–||–||synthetic||unknown phase|
|111||Rg||Roentgenium||Wilhelm Conrad Röntgen, German physicist||11||7||d-block||||(22–24)||–||–||–||–||–||synthetic||unknown phase|
|112||Cn||Copernicium||Nicolaus Copernicus, Polish astronomer||12||7||d-block||||(14.0)||(283±11)||(340±10)[b]||–||–||–||synthetic||unknown phase|
|113||Nh||Nihonium||Japanese Nihon, 'Japan', where the element was first synthesised in the Riken laboratories||13||7||p-block||||(16)||(700)||(1400)||–||–||–||synthetic||unknown phase|
|114||Fl||Flerovium||Flerov Laboratory of Nuclear Reactions, part of JINR, where the element was synthesised; itself named after Georgy Flyorov, Russian physicist||14||7||p-block||||(11.4±0.3)||(284±50)[b]||–||–||–||–||synthetic||unknown phase|
|115||Mc||Moscovium||Moscow, Russia, where the element was first synthesised in the JINR laboratories||15||7||p-block||||(13.5)||(700)||(1400)||–||–||–||synthetic||unknown phase|
|116||Lv||Livermorium||Lawrence Livermore National Laboratory in Livermore, California||16||7||p-block||||(12.9)||(700)||(1100)||–||–||–||synthetic||unknown phase|
|117||Ts||Tennessine||Tennessee, United States, where Oak Ridge National Laboratory is located||17||7||p-block||||(7.1–7.3)||(700)||(883)||–||–||–||synthetic||unknown phase|
|118||Og||Oganesson||Yuri Oganessian, Russian physicist||18||7||p-block||||(7)||(325±15)||(450±10)||–||–||–||synthetic||unknown phase|
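For anyone who wants to work with the element table above programmatically, each row maps naturally onto a simple record keyed by the table's columns. The sketch below is only an illustration of that idea; the field names are invented for the example, and the few values shown are copied from the corresponding rows of the table.

    # Illustrative sample of rows from the element table, held as dictionaries.
    elements = [
        {"Z": 1,  "symbol": "H",  "name": "Hydrogen", "period": 1, "block": "s-block",
         "melting_K": 14.01,  "boiling_K": 20.28, "phase": "gas"},
        {"Z": 2,  "symbol": "He", "name": "Helium",   "period": 1, "block": "s-block",
         "melting_K": None,   "boiling_K": 4.22,  "phase": "gas"},
        {"Z": 3,  "symbol": "Li", "name": "Lithium",  "period": 2, "block": "s-block",
         "melting_K": 453.69, "boiling_K": 1560,  "phase": "solid"},
        {"Z": 35, "symbol": "Br", "name": "Bromine",  "period": 4, "block": "p-block",
         "melting_K": 265.8,  "boiling_K": 332.0, "phase": "liquid"},
    ]

    # Which of these sample elements are not solid at room temperature?
    print([e["name"] for e in elements if e["phase"] != "solid"])   # Hydrogen, Helium, Bromine

    # Sort the sample by boiling point.
    by_boiling = sorted(elements, key=lambda e: e["boiling_K"])
    print([e["symbol"] for e in by_boiling])                        # He, H, Br, Li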
NATIONAL MUSEUM OF NATURAL HISTORY What an Asteroid Could Tell Us About Ancient Earth As OSIRIS-REx approaches Asteroid Bennu, a new study suggests that massive boulders on its surface have moved a lot over the past few hundred thousand years. From telescopes on Earth, Bennu’s surface appears smooth. That’s one of the reasons why NASA picked the asteroid as a destination for its OSIRIS-REx spacecraft. But in 2018, when OSIRIS-REx approached the asteroid, scientists discovered that Bennu’s surface was covered with massive boulders. It turns out those boulders moved a lot over the last few hundred thousand years, according to recent research. “When you think of small asteroids, you’d think they aren't very dynamic because they have no atmosphere or volcanic activity. But Bennu is so small and its gravity is so weak that material can move around much more easily than on a planet,” said Dr. Erica Jawin, a postdoctoral fellow in the Department of Mineral Sciences at the Smithsonian's National Museum of Natural History and the study’s lead author. Bennu spun out of the asteroid belt millions of years ago and now circles the sun between Earth and Mars, much closer than its original location in the asteroid belt. Because the asteroid currently has an orbit near Earth’s, it is easier to sample it than any asteroid in the main belt. By modeling how Bennu’s boulders moved in the past, Jawin can predict where rocks in OSIRIS-REx's sample might have come from on the asteroid’s surface. Knowing those rocks’ origins will help scientists learn more about the composition of objects in the solar system and asteroid belt. “Asteroids are always gravitationally interacting and essentially sharing material. Earth gets meteorites from asteroids and asteroids also get meteorites from other asteroids,” said Dr. Tim McCoy, Curator of Meteorites at the museum and a co-author on the study. A moving history Bennu is shaped like a three-dimensional diamond. It is relatively small for an asteroid — only about a third of a mile wide at its equator. But its surface is geologically active. Rocks on Bennu’s surface move so easily because the asteroid’s gravity is very weak. Because of the weak gravity, rotational forces can move the rocks. This is what causes boulders and rocks to move about or potentially fly into space. “As Bennu rotates, its surface absorbs thermal energy from the Sun. It then radiates that heat back into space as the asteroid rotates. This provides a torque on the asteroid, which affects how quickly the asteroid rotates and over time can change the orbit of the asteroid. This effect also may have caused Bennu to leave the asteroid belt and come closer to Earth,” said Jawin. Studying Bennu’s pristine rocks could reveal what material exists in the outer solar system. And that material could yield information about the composition of primordial Earth. “On Earth, we’ve had life for potentially billions of years. Everything has been processed so much. In order to really understand how life started, you really need to go somewhere where there’s no life yet,” said Jawin. Since Earth has an atmosphere and active plate tectonics, its oldest rocks are weathered or have been pushed deep into the mantle. So, researchers often use meteorites to learn more about both ancient Earth’s and the solar system’s composition. “Meteorites have been described as the poor man’s space probe, because they are constantly coming to Earth. Just picking them up, we can learn about our solar system and its history,” said McCoy. 
“But at the same time, we’re trying to figure out what the entire asteroid belt and early solar system looked like from these bits and pieces.” Examining Bennu’s rocks will give McCoy and his colleagues more tools, helping them trace meteorites in the museum’s collection back to the asteroid belt. What happens next Once the rock sample from Bennu finally reaches Earth in three years, on September 24, 2023, part of it will be loaned to McCoy’s Smithsonian team. There, McCoy and Jawin will analyze it to see if any meteorites currently in the Smithsonian’s National Meteorite Collection have similar compositions. If there's a match, it could suggest that the object is related to Bennu or that it was part of another asteroid in the region where Bennu came from. “Most meteorites in our collection came from asteroids at some point, but we’ve only been able to link a very small fraction of the meteorites in our collection to their parent asteroids. If you just pick up a meteorite on the ground, you don’t know how long it’s been sitting there. So, it’s likely not in pristine condition,” said Jawin. “The OSIRIS-REx mission will give us pristine samples to compare to our collection and bridge that gap.” McCoy also suspects the Bennu sample could yield rocks unlike anything on Earth, complicating what scientists know about the geology of the solar system. “Every few years, we find a new kind of meteorite so it’s very possible that Bennu also has new kinds of rocks we don’t have in our collection. It’s possible we’ll get something entirely new,” said McCoy. These new rocks could help decode some of the collection’s more enigmatic meteorites. The meteorite collection exists not only for scientists currently seeking to understand the solar system, but also for future scientists conducting experiments yet to be invented. Part of the Bennu sample will immediately be sealed away and saved for future study as technology advances. “We will be able to use tools and equipment that haven’t been invented yet to ask questions we haven’t even thought of yet. But because we have those samples, we'll be able to answer those questions,” said McCoy. “Think of it as the gift that keeps on giving.”
- Languages: Akkadian, Eblaite, Elamite, Hattic, Hittite, Hurrian, Luwian, Sumerian, Urartian, Old Persian
- Created: around 3200 BC
- Period of use: c. 31st century BC to 2nd century AD
- Descendant scripts: none; influenced the shape of Ugaritic; apparently inspired Old Persian
Cuneiform, or Sumero-Akkadian cuneiform,[a] was one of the earliest systems of writing, invented by Sumerians in ancient Mesopotamia.[b] It is distinguished by its wedge-shaped marks on clay tablets, made by means of a blunt reed for a stylus. The term cuneiform comes from cuneus, Latin for "wedge". Emerging in Sumer in the late fourth millennium BC (the Uruk IV period) to convey the Sumerian language, which was a language isolate, cuneiform writing began as a system of pictograms, stemming from an earlier system of shaped tokens used for accounting. In the third millennium, the pictorial representations became simplified and more abstract as the number of characters in use grew smaller (Hittite cuneiform). The system consists of a combination of logophonetic, consonantal alphabetic, and syllabic signs. The original Sumerian script was adapted for the writing of the Semitic Akkadian (Assyrian/Babylonian), Eblaite and Amorite languages, the language isolates Elamite, Hattic, Hurrian and Urartian, as well as the Indo-European languages Hittite and Luwian; it inspired the later Semitic Ugaritic alphabet as well as Old Persian cuneiform. Cuneiform writing was gradually replaced by the Phoenician alphabet during the Neo-Assyrian Empire (911–612 BC). By the second century AD, the script had become extinct, its last traces being found in Assyria and Babylonia, and all knowledge of how to read it was lost until it began to be deciphered in the 19th century. Geoffrey Sampson stated that Egyptian hieroglyphs "came into existence a little after Sumerian script, and, probably, [were] invented under the influence of the latter", and that it is "probable that the general idea of expressing words of a language in writing was brought to Egypt from Sumerian Mesopotamia". There are many instances of Egypt-Mesopotamia relations at the time of the invention of writing, and standard reconstructions of the development of writing generally place the development of the Sumerian proto-cuneiform script before the development of Egyptian hieroglyphs, with the suggestion that the former influenced the latter. Between half a million and two million cuneiform tablets are estimated to have been excavated in modern times, of which only approximately 30,000–100,000 have been read or published. The British Museum holds the largest collection (c. 130,000), followed by the Vorderasiatisches Museum Berlin, the Louvre, the Istanbul Archaeology Museums, the National Museum of Iraq, the Yale Babylonian Collection (c. 40,000), and Penn Museum. Most of these have "lain in these collections for a century without being translated, studied or published", as there are only a few hundred qualified cuneiformists in the world. The origins of writing appear at the start of the pottery phase of the Neolithic, when clay tokens were used to record specific amounts of livestock or commodities. These tokens were initially impressed on the surface of round clay envelopes and then stored in them. The tokens were then progressively replaced by flat tablets, on which signs were recorded with a stylus. Actual writing is first recorded in Uruk, at the end of the 4th millennium BC, and soon after in various parts of the Near East. 
An ancient Mesopotamian poem gives the first known story of the invention of writing: Because the messenger's mouth was heavy and he couldn't repeat (the message), the Lord of Kulaba patted some clay and put words on it, like a tablet. Until then, there had been no putting words on clay. The cuneiform writing system was in use for more than three millennia, through several stages of development, from the 31st century BC down to the second century AD. Ultimately, it was completely replaced by alphabetic writing (in the general sense) in the course of the Roman era, and there are no cuneiform systems in current use. It had to be deciphered as a completely unknown writing system in 19th-century Assyriology. Successful completion of its deciphering is dated to 1857. The cuneiform script underwent considerable changes over a period of more than two millennia. The image below shows the development of the sign SAĜ "head" (Borger nr. 184, U+12295 𒊕). Pictographic and proto-cuneiform characters (circa 3500 BC) The cuneiform script was developed from pictographic proto-writing in the late 4th millennium BC, stemming from the Near Eastern token system used for accounting. These tokens were in use from the 9th millennium BC and remained in occasional use even late in the 2nd millennium BC. Early tokens with pictographic shapes of animals, associated with numbers, were discovered in Tell Brak, and date to the mid-4th millennium BC. It has been suggested that the token shapes were the original basis for some of the Sumerian pictographs. Mesopotamia's "proto-literate" period spans roughly the 35th to 32nd centuries BC. The first unequivocal written documents start with the Uruk IV period, from circa 3,300 BC, followed by tablets found in Uruk III, Jemdet Nasr and Susa (in Proto-Elamite) dating to the period until circa 2,900 BC. Originally, pictographs were either drawn on clay tablets in vertical columns with a sharpened reed stylus or incised in stone. This early style lacked the characteristic wedge shape of the strokes. Certain signs to indicate names of gods, countries, cities, vessels, birds, trees, etc., are known as determinatives and were the Sumerian signs of the terms in question, added as a guide for the reader. Proper names usually continued to be written in purely "logographic" fashion. Archaic cuneiform (circa 3000 BC) The first inscribed tablets were purely pictographic, which makes it technically impossible to know in which language they were written, but later tablets after circa 2,900 BC start to use syllabic elements, which clearly show a language structure typical of the non-Indo-European agglutinative Sumerian language. The first tablets using syllabic elements date to the Early Dynastic I-II, circa 2,800 BC, and they are clearly in Sumerian. This is the time when some pictographic elements started to be used for their phonetic value, permitting the recording of abstract ideas or personal names. Many pictographs began to lose their original function, and a given sign could have various meanings depending on context. The sign inventory was reduced from some 1,500 signs to some 600 signs, and writing became increasingly phonological. Determinative signs were re-introduced to avoid ambiguity. Cuneiform writing proper thus arises from the more primitive system of pictographs at about that time (Early Bronze Age II). The earliest known Sumerian king, whose name appears on contemporary cuneiform tablets, is Enmebaragesi of Kish (fl. c. 2600 BC). 
Surviving records only very gradually become less fragmentary and more complete for the following reigns, but by the end of the pre-Sargonic period, it had become standard practice for each major city-state to date documents by year-names commemorating the exploits of its lugal (king). Proto-cuneiform tablet, Jemdet Nasr period, c. 3100–2900 BC. The Blau Monuments combine proto-cuneiform characters and illustrations, 3100–2700 BC. British Museum. Early Dynastic cuneiform (circa 2500 BC) Early cuneiform inscriptions used simple linear strokes, made with a pointed stylus, and are sometimes called "linear cuneiform"; this was before the introduction of new wedge-type styluses with their typical wedge-shaped signs. Many of the early dynastic inscriptions, particularly those made on stone, continued to use the linear style as late as circa 2000 BCE. In the mid-3rd millennium BC, a new wedge-tipped stylus was introduced which was pushed into the clay, producing wedge-shaped ("cuneiform") signs; the development made writing quicker and easier, especially when writing on soft clay. By adjusting the relative position of the stylus to the tablet, the writer could use a single tool to make a variety of impressions. For numbers, a round-tipped stylus was initially used, until the wedge-tipped stylus was generalized. The direction of writing remained top-to-bottom and right-to-left until the mid-2nd millennium BC. Cuneiform clay tablets could be fired in kilns to bake them hard, and so provide a permanent record, or they could be left moist and recycled if permanence was not needed. Many of the clay tablets found by archaeologists have been preserved by chance, baked when attacking armies burned the buildings in which they were kept. The script was also widely used on commemorative stelae and carved reliefs to record the achievements of the ruler in whose honor the monument had been erected. The spoken language included many homophones and near-homophones, and in the beginning, similar-sounding words such as "life" [til] and "arrow" [ti] were written with the same symbol. After the Semites conquered Southern Mesopotamia, some signs gradually changed from being pictograms to syllabograms, most likely to make things clearer in writing. In that way, the sign for the word "arrow" would become the sign for the sound "ti". Words that sounded alike would have different signs; for instance, the syllable "gu" had fourteen different symbols. When words had a similar meaning but very different sounds, they were written with the same symbol. For instance, "tooth" [zu], "mouth" [ka] and "voice" [gu] were all written with the symbol for "voice". To be more accurate, scribes started adding to signs or combining two signs to define the meaning. They used either geometrical patterns or another cuneiform sign. As time went by, the cuneiform script became very complex, and the distinction between a pictogram and a syllabogram became vague. Several symbols had too many meanings to permit clarity. Therefore, symbols were put together to indicate both the sound and the meaning of a compound. The word "raven" [UGA] had the same logogram as the word "soap" [NAGA], the name of a city [EREŠ], and the patron goddess of Eresh [NISABA]. Two phonetic complements were used to define the word: [u] in front of the symbol and [gu] behind. 
Finally, the symbol for "bird" [MUŠEN] was added to ensure proper interpretation. For unknown reasons, cuneiform pictographs, until then written vertically, were rotated 90° to the left, in effect putting them on their side. This change first occurred slightly before the Akkadian period, at the time of the Uruk ruler Lugalzagesi (r. c. 2294–2270 BC). The vertical style remained for monumental purposes on stone stelae until the middle of the 2nd millennium. Written Sumerian was used as a scribal language until the first century AD. The spoken language died out around the 18th century BC. Akkadian cuneiform (circa 2200 BC) The archaic cuneiform script was adopted by the Akkadian Empire from the 23rd century BC (short chronology). The Akkadian language being Semitic, its structure was completely different from Sumerian. There was no way to use the Sumerian writing system as such, and the Akkadians found a practical solution in writing their language phonetically, using the corresponding Sumerian phonetic signs. Still, some of the Sumerian characters were retained for their pictorial value as well: for example the character for "sheep" was retained, but was now pronounced immerū, rather than the Sumerian "udu-meš". The Semitic languages employed equivalents for many signs that were distorted or abbreviated to represent new values because the syllabic nature of the script as refined by the Sumerians was not intuitive to Semitic speakers. From the beginning of the Middle Bronze Age (20th century BC), the script evolved to accommodate the various dialects of Akkadian: Old Akkadian, Babylonian and Assyrian. In particular, the Old Assyrian cuneiform employed many modifications to Sumerian orthography. At this stage, the former pictograms were reduced to a high level of abstraction, and were composed of only five basic wedge shapes: horizontal, vertical, two diagonals and the Winkelhaken impressed vertically by the tip of the stylus. The signs exemplary of these basic wedges are:
- AŠ (B001, U+12038) 𒀸: horizontal;
- DIŠ (B748, U+12079) 𒁹: vertical;
- GE23, DIŠ tenû (B575, U+12039) 𒀹: downward diagonal;
- GE22 (B647, U+1203A) 𒀺: upward diagonal;
- U (B661, U+1230B) 𒌋: the Winkelhaken.
Except for the Winkelhaken, which has no tail, the length of the wedges' tails could vary as required for sign composition. Signs tilted by about 45 degrees are called tenû in Akkadian, thus DIŠ is a vertical wedge and DIŠ tenû a diagonal one. If a sign is modified with additional wedges, this is called gunû or "gunification"; if signs are cross-hatched with additional Winkelhaken, they are called šešig; if signs are modified by the removal of a wedge or wedges, they are called nutillu. "Typical" signs have about five to ten wedges, while complex ligatures can consist of twenty or more (although it is not always clear if a ligature should be considered a single sign or two collated, but distinct signs); the ligature KAxGUR7 consists of 31 strokes. Most later adaptations of Sumerian cuneiform preserved at least some aspects of the Sumerian script. Written Akkadian included phonetic symbols from the Sumerian syllabary, together with logograms that were read as whole words. Many signs in the script were polyvalent, having both a syllabic and logographic meaning. The complexity of the system bears a resemblance to Old Japanese, written in a Chinese-derived script, where some of these Sinograms were used as logograms and others as phonetic characters. 
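Because the five basic wedge signs listed above are given with their Unicode codepoints, they can be reproduced directly from those numbers. A small illustrative Python sketch (the dictionary layout and the labels are my own choices):

    # The basic wedge signs and their Unicode codepoints, as listed in the text above.
    BASIC_WEDGES = {
        "AŠ (horizontal)":           0x12038,
        "DIŠ (vertical)":            0x12079,
        "DIŠ tenû (down diagonal)":  0x12039,
        "GE22 (up diagonal)":        0x1203A,
        "U (Winkelhaken)":           0x1230B,
    }

    for name, codepoint in BASIC_WEDGES.items():
        print(f"U+{codepoint:05X}  {chr(codepoint)}  {name}")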
(circa 650 BC) This "mixed" method of writing continued through the end of the Babylonian and Assyrian empires, although there were periods when "purism" was in fashion and there was a more marked tendency to spell out the words laboriously, in preference to using signs with a phonetic complement. Yet even in those days, the Babylonian syllabary remained a mixture of logographic and phonemic writing. Hittite cuneiform is an adaptation of the Old Assyrian cuneiform of c. 1800 BC to the Hittite language. When the cuneiform script was adapted to writing Hittite, a layer of Akkadian logographic spellings was added to the script, thus the pronunciations of many Hittite words which were conventionally written by logograms are now unknown. In the Iron Age (c. 10th to 6th centuries BC), Assyrian cuneiform was further simplified. From the 6th century, the Akkadian language was marginalized by Aramaic, written in the Aramaean alphabet, but Neo-Assyrian cuneiform remained in use in the literary tradition well into the times of the Parthian Empire (250 BC–226 AD). The last known cuneiform inscription, an astronomical text, was written in 75 AD. The ability to read cuneiform may have persisted until the third century AD. Old Persian cuneiform (circa 500 BC) The complexity of the system prompted the development of a number of simplified versions of the script. Old Persian was written in a subset of simplified cuneiform characters known today as Old Persian cuneiform, developed by Darius the Great in the 5th century BC. It formed a semi-alphabetic syllabary, using far fewer wedge strokes than Assyrian used, together with a handful of logograms for frequently occurring words like "god" (𐏎), "king" (𐏋) or "country" (𐏌). This almost purely alphabetical form of the cuneiform script (36 phonetic characters and 8 logograms) was specially designed and used by the early Achaemenid rulers from the 6th century BC. For centuries, travelers to Persepolis, located in Iran, had noticed carved cuneiform inscriptions and were intrigued. Attempts at deciphering these Old Persian writings date back to Arabo-Persian historians of the medieval Islamic world, though these early attempts at decipherment were largely unsuccessful. In the 15th century, the Venetian Giosafat Barbaro explored ancient ruins in the Middle East and came back with news of a very odd writing he had found carved on the stones in the temples of Shiraz and on many clay tablets. Antonio de Gouvea, a professor of theology, noted in 1602 the strange writing he had had occasion to observe during his travels a year earlier in Persia, which took in visits to ruins. In 1625, the Roman traveler Pietro Della Valle, who had sojourned in Mesopotamia between 1616 and 1621, brought to Europe copies of characters he had seen in Persepolis and inscribed bricks from Ur and the ruins of Babylon. The copies he made, the first to reach circulation within Europe, were not quite accurate; Della Valle understood that the writing had to be read from left to right, following the direction of the wedges, but he did not attempt to decipher the script. Englishman Sir Thomas Herbert, in the 1638 edition of his travel book Some Yeares Travels into Africa & Asia the Great, reported seeing at Persepolis carved on the wall "a dozen lines of strange characters...consisting of figures, obelisk, triangular, and pyramidal" and thought they resembled Greek. In the 1677 edition he reproduced some and thought they were 'legible and intelligible' and therefore decipherable. 
He also guessed, correctly, that they represented not letters or hieroglyphics but words and syllables, and were to be read from left to right. Herbert is rarely mentioned in standard histories of the decipherment of cuneiform. Carsten Niebuhr brought the first reasonably complete and accurate copies of the inscriptions at Persepolis to Europe in 1767. Bishop Friedrich Münter of Copenhagen discovered that the words in the Persian inscriptions were divided from one another by an oblique wedge and that the monuments must belong to the age of Cyrus and his successors. One word, which occurs without any variation towards the beginning of each inscription, he correctly inferred to signify "king". By 1802 Georg Friedrich Grotefend had determined that two kings' names mentioned were Darius and Xerxes (but in their native Old Persian forms, which were unknown at the time and therefore had to be conjectured), and had been able to assign correct alphabetic values to the cuneiform characters which composed the two names. Although Grotefend's Memoir was presented to the Göttingen Academy of Sciences and Humanities on September 4, 1802, the Academy refused to publish it; it was subsequently published in Heeren's work in 1815, but was overlooked by most researchers at the time. In 1836, the eminent French scholar Eugène Burnouf discovered that the first of the inscriptions published by Niebuhr contained a list of the satrapies of Darius. With this clue in his hand, he identified and published an alphabet of thirty letters, most of which he had correctly deciphered. A month earlier, a friend and pupil of Burnouf's, Professor Christian Lassen of Bonn, had also published his own work on The Old Persian Cuneiform Inscriptions of Persepolis. He and Burnouf had been in frequent correspondence, and his claim to have independently detected the names of the satrapies, and thereby to have fixed the values of the Persian characters, was consequently fiercely attacked. According to Sayce, whatever his obligations to Burnouf may have been, Lassen's "contributions to the decipherment of the inscriptions were numerous and important. He succeeded in fixing the true values of nearly all the letters in the Persian alphabet, in translating the texts, and in proving that the language of them was not Zend, but stood to both Zend and Sanskrit in the relation of a sister." Meanwhile, in 1835 Henry Rawlinson, a British East India Company army officer, visited the Behistun Inscriptions in Persia. Carved in the reign of King Darius of Persia (522–486 BC), they consisted of identical texts in the three official languages of the empire: Old Persian, Akkadian and Elamite. The Behistun inscription was to the decipherment of cuneiform what the Rosetta Stone was to the decipherment of Egyptian hieroglyphs. Rawlinson correctly deduced that the Old Persian was a phonetic script and he successfully deciphered it. In 1837, he finished his copy of the Behistun inscription, and sent a translation of its opening paragraphs to the Royal Asiatic Society. Before his article could be published, however, the works of Lassen and Burnouf reached him, necessitating a revision of his article and the postponement of its publication. Then came other causes of delay. 
In 1847, the first part of Rawlinson's Memoir was published; the second part did not appear until 1849.[c] The task of deciphering the Persian cuneiform texts was virtually accomplished. After translating the Persian, Rawlinson and, working independently of him, the Irish Assyriologist Edward Hincks, began to decipher the others. (The actual techniques used to decipher the Akkadian language have never been fully published; Hincks described how he sought the proper names already legible in the deciphered Persian while Rawlinson never said anything at all, leading some to speculate that he was secretly copying Hincks.) They were greatly helped by the excavations of the French naturalist Paul Émile Botta and English traveler and diplomat Austen Henry Layard of the city of Nineveh from 1842. Among the treasures uncovered by Layard and his successor Hormuzd Rassam were, in 1849 and 1851, the remains of two libraries, now mixed up, usually called the Library of Ashurbanipal, a royal archive containing tens of thousands of baked clay tablets covered with cuneiform inscriptions. By 1851, Hincks and Rawlinson could read 200 Babylonian signs. They were soon joined by two other decipherers: young German-born scholar Julius Oppert, and versatile British Orientalist William Henry Fox Talbot. In 1857, the four men met in London and took part in a famous experiment to test the accuracy of their decipherments. Edwin Norris, the secretary of the Royal Asiatic Society, gave each of them a copy of a recently discovered inscription from the reign of the Assyrian emperor Tiglath-Pileser I. A jury of experts was impaneled to examine the resulting translations and assess their accuracy. In all essential points, the translations produced by the four scholars were found to be in close agreement with one another. There were, of course, some slight discrepancies. The inexperienced Talbot had made a number of mistakes, and Oppert's translation contained a few doubtful passages which the jury politely ascribed to his unfamiliarity with the English language. But Hincks' and Rawlinson's versions corresponded remarkably closely in many respects. The jury declared itself satisfied, and the decipherment of Akkadian cuneiform was adjudged a fait accompli. In the early days of cuneiform decipherment, the reading of proper names presented the greatest difficulties. However, there is now a better understanding of the principles behind the formation and the pronunciation of the thousands of names found in historical records, business documents, votive inscriptions, literary productions, and legal documents. The primary challenge was posed by the characteristic use of old Sumerian non-phonetic logograms in other languages that had different pronunciations for the same symbols. Until the exact phonetic reading of many names was determined through parallel passages or explanatory lists, scholars remained in doubt or had recourse to conjectural or provisional readings. However, in many cases, there are variant readings, the same name being written phonetically (in whole or in part) in one instance and logographically in another. Cuneiform has a specific format for transliteration. Because of the script's polyvalence, transliteration requires certain choices of the transliterating scholar, who must decide in the case of each sign which of its several possible meanings is intended in the original document. 
For example, the sign DINGIR in a Hittite text may represent the Hittite syllable an; it may be part of an Akkadian phrase, representing the syllable il; it may be a Sumerogram, standing for the original Sumerian meaning, 'god'; or it may be the determinative for a deity. In transliteration, a different rendition of the same glyph is chosen depending on its role in the present context. Therefore, a text containing DINGIR and MU in succession could be construed to represent the words "ana", "ila", god + "a" (the accusative case ending), god + water, or a divine name "A" or Water. Someone transcribing the signs would decide how the signs should be read and assemble them as "ana", "ila", "Ila" ("god" + accusative case), and so on. A transliteration of these signs, however, would separate the signs with dashes: "il-a", "an-a", "DINGIR-a" or "Da". This is still easier to read than the original cuneiform, but now the reader is able to trace the sounds back to the original signs and determine whether the correct decision was made on how to read them. A transliterated document thus presents the reading preferred by the transliterating scholar as well as an opportunity to reconstruct the original text.

There are differing conventions for transliterating Sumerian, Akkadian (Babylonian), and Hittite (and Luwian) cuneiform texts. One convention that sees wide use across the different fields is the use of acute and grave accents as an abbreviation for homophone disambiguation. Thus, u is equivalent to u1, the first glyph expressing phonetic u. An acute accent, ú, is equivalent to the second, u2, and a grave accent, ù, to the third, u3, glyph in the series (the sequence of numbering is conventional but essentially arbitrary and subject to the history of decipherment). In Sumerian transliteration, a multiplication sign 'x' is used to indicate typographic ligatures.

As shown above, signs as such are represented in capital letters, while the specific reading selected in the transliteration is represented in small letters. Thus, capital letters can be used to indicate a so-called Diri compound – a sign sequence that has, in combination, a reading different from the sum of the individual constituent signs (for example, the compound IGI.A – "eye" + "water" – has the reading imhur, meaning "foam"). In a Diri compound, the individual signs are separated with dots in transliteration. Capital letters may also be used to indicate a Sumerogram (for example, KÙ.BABBAR – Sumerian for "silver" – being used with the intended Akkadian reading kaspum, "silver"), an Akkadogram, or simply a sign sequence of whose reading the editor is uncertain. Naturally, the "real" reading, if it is clear, will be presented in small letters in the transliteration: IGI.A will be rendered as imhur4.

Since the Sumerian language has only been widely known and studied by scholars for approximately a century, changes in the accepted reading of Sumerian names have occurred from time to time. Thus the name of a king of Ur, read Ur-Bau at one time, was later read as Ur-Engur, and is now read as Ur-Nammu or Ur-Namma; for Lugal-zage-si, a king of Uruk, some scholars continued to read Ungal-zaggisi; and so forth. Also, with some names of the older period, there was often uncertainty whether their bearers were Sumerians or Semites.
If the former, then their names could be assumed to be read as Sumerian, while, if they were Semites, the signs for writing their names were probably to be read according to their Semitic equivalents, though occasionally Semites might be encountered bearing genuine Sumerian names. There was also doubt whether the signs composing a Semite's name represented a phonetic reading or a logographic compound. Thus, for example, when inscriptions of a Semitic ruler of Kish, whose name was written Uru-mu-ush, were first deciphered, that name was taken to be logographic because uru mu-ush could be read as "he founded a city" in Sumerian, and scholars accordingly retranslated it back to the original Semitic as Alu-usharshid. It was later recognized that the URU sign can also be read as rí and that the name is that of the Akkadian king Rimush.

The tables below show signs used for simple syllables of the form CV or VC. As used for the Sumerian language, the cuneiform script was in principle capable of distinguishing at least 16 consonants, transliterated as b, d, g, g̃, ḫ, k, l, m, n, p, r, ř, s, š, t, z, as well as four vowel qualities, a, e, i, u. The Akkadian language had no use for g̃ or ř but needed to distinguish its emphatic series, q, ṣ, ṭ, adopting various "superfluous" Sumerian signs for the purpose (e.g. qe=KIN, qu=KUM, qi=KIN, ṣa=ZA, ṣe=ZÍ, ṭur=DUR, etc.). Hittite, as it adopted the Akkadian cuneiform, further introduced signs such as wi5=GEŠTIN.

Syllables of the form CV (consonant + vowel):

| | a 𒀀 | e 𒂊 | i 𒄿 | u |
| b- | ba 𒁀 | be=BAD 𒁁 | bi 𒁉 | bu 𒁍 |
| d- | da 𒁕 | de=DI 𒁲 | di 𒁲 | |
| g- | ga 𒂵 | ge=GI 𒄀 | gi 𒄀 | gu 𒄖 |
| ḫ- | ḫa 𒄩 | ḫe=ḪI 𒄭 | ḫi 𒄭 | |
| k- | ka 𒅗 | ke=KI 𒆠 | ki 𒆠 | |
| l- | la 𒆷 | le=LI 𒇷 | li 𒇷 | |
| m- | ma 𒈠 | me 𒈨 | mi 𒈪 | mu 𒈬 |
| n- | na 𒈾 | ne 𒉈 | ni 𒉌 | |
| p- | pa 𒉺 | pe=PI 𒉿 | pi 𒉿 | pu=BU 𒁍 |
| r- | ra 𒊏 | re=RI 𒊑 | ri 𒊑 | |
| s- | sa 𒊓 | se=SI 𒋛 | si 𒋛 | |
| š- | ša 𒊭 | še 𒊺 | ši=IGI 𒅆 | |
| t- | ta 𒋫 | te 𒋼 | ti 𒋾 | tu 𒌅 |
| z- | za 𒍝 | ze=ZI 𒍣 | zi 𒍣 | zu 𒍪 |
| g̃- | g̃á=GÁ 𒂷 | g̃e26=GÁ 𒂷 | g̃i6=MI 𒈪 | g̃u10=MU 𒈬 |
| ř- | řá=DU 𒁺 | ře6=DU 𒁺 | | |

Syllables of the form VC (vowel + consonant):

| | a 𒀀 | e 𒂊 | i 𒄿 | u |
| -b | ab 𒀊 | eb=IB 𒅁 | ib 𒅁 | |
| -d | ad 𒀜 | ed=Á 𒀉 | id=Á 𒀉 | |
| -g | ag 𒀝 | eg=IG 𒅅 | ig 𒅅 | |
| -ḫ | aḫ 𒄴 | eḫ=AḪ 𒄴 | iḫ=AḪ 𒄴 | uḫ=AḪ 𒄴 |
| -k | ak=AG 𒀝 | ek=IG 𒅅 | ik=IG 𒅅 | uk=UG 𒊌 |
| -l | al 𒀠 | el 𒂖 | il 𒅋 | |
| -m | am 𒄠/𒂔 | em=IM 𒅎 | im 𒅎 | |
| -n | an 𒀭 | en 𒂗 | in 𒅔 | un 𒌦 |
| -p | ap=AB 𒀊 | ep=IB | ip=IB 𒅁 | |
| -r | ar 𒅈 | er=IR 𒅕 | ir 𒅕 | |
| -s | as=AZ 𒊍 | es=GIŠ 𒄑 | is=GIŠ 𒄑 | |
| -š | aš 𒀸 | eš 𒌍/𒐁 | iš 𒅖 | |
| -t | at=AD 𒀜 (át=GÍR gunû 𒄉) | et=Á 𒀉 | it=Á 𒀉 | ut=UD 𒌓 |
| -z | az 𒊍 | ez=GIŠ 𒄑 | iz=GIŠ 𒄑 | |
| -g̃ | ág̃=ÁG 𒉘 | èg̃=ÁG 𒉘 | ìg̃=ÁG 𒉘 | ùg̃=UN 𒌦 |

The Sumerian cuneiform script had on the order of 1,000 distinct signs (or about 1,500 if variants are included). This number was reduced to about 600 by the 24th century BC and the beginning of Akkadian records. Not all Sumerian signs are used in Akkadian texts, and not all Akkadian signs are used in Hittite. A. Falkenstein (1936) lists 939 signs used in the earliest period (late Uruk, 34th to 31st centuries). (See the bibliography below for the works mentioned in this paragraph.) With an emphasis on Sumerian forms, Deimel (1922) lists 870 signs used in the Early Dynastic II period (28th century, Liste der archaischen Keilschriftzeichen or "LAK") and for the Early Dynastic IIIa period (26th century, Šumerisches Lexikon or "ŠL").
Rosengarten (1967) lists 468 signs used in Sumerian (pre-Sargonian) Lagash, and Mittermayer and Attinger (2006, Altbabylonische Zeichenliste der Sumerisch-Literarischen Texte or "aBZL") list 480 Sumerian forms, written in Isin-Larsa and Old Babylonian times. Regarding Akkadian forms, the standard handbook for many years was Borger (1981, Assyrisch-Babylonische Zeichenliste or "ABZ") with 598 signs used in Assyrian/Babylonian writing, recently superseded by Borger (2004, Mesopotamisches Zeichenlexikon or "MesZL") with an expansion to 907 signs, an extension of their Sumerian readings and a new numbering scheme. Signs used in Hittite cuneiform are listed by Forrer (1922), Friedrich (1960) and Rüster and Neu (1989, Hethitisches Zeichenlexikon or "HZL"). The HZL lists a total of 375 signs, many with variants (for example, 12 variants are given for number 123, EGIR).

The Sumerians used a numerical system based on 1, 10, and 60. A number such as 70 was written with the sign for 60 followed immediately by the sign for 10.

Cuneiform script was used in many ways in ancient Mesopotamia. It was used to record laws, like the Code of Hammurabi. It was also used for recording maps, compiling medical manuals, and documenting religious stories and beliefs, among other uses. Studies by Assyriologists like Claus Wilcke and Dominique Charpin suggest that cuneiform literacy was not reserved solely for the elite but was common for average citizens. According to the Oxford Handbook of Cuneiform Culture, cuneiform script was used at a variety of literacy levels: average citizens needed only a basic, functional knowledge of cuneiform script to write personal letters and business documents. More highly literate citizens put the script to more technical use, listing medicines and diagnoses and writing mathematical equations. Scholars held the highest literacy level of cuneiform and mostly focused on writing as a complex skill and an art form.

As of version 8.0 of the Unicode Standard, the following ranges are assigned to the Sumero-Akkadian Cuneiform script:
- U+12000–U+123FF (922 assigned characters) "Cuneiform"
- U+12400–U+1247F (116 assigned characters) "Cuneiform Numbers and Punctuation"
- U+12480–U+1254F (196 assigned characters) "Early Dynastic Cuneiform"

The final proposal for Unicode encoding of the script was submitted by two cuneiform scholars working with an experienced Unicode proposal writer in June 2004. The base character inventory is derived from the list of Ur III signs compiled by the Cuneiform Digital Library Initiative of UCLA based on the inventories of Miguel Civil, Rykle Borger (2003) and Robert Englund. Rather than opting for a direct ordering by glyph shape and complexity according to the numbering of an existing catalog, the Unicode order of glyphs was based on the Latin alphabetic order of their "last" Sumerian transliteration, as a practical approximation.
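To make the Unicode ranges above concrete, the following small C++ sketch (not part of the original article) encodes code points from the "Cuneiform" block as UTF-8 and prints them; the loop range is an arbitrary choice for illustration, and whether the signs actually display depends on the terminal and installed fonts. U+12000, the first code point of the block, encodes the sign transliterated a (𒀀) in the syllable tables above.

    #include <cstdio>
    #include <string>

    // Encode a single Unicode code point >= U+10000 as UTF-8.
    // All Sumero-Akkadian cuneiform code points lie above U+FFFF,
    // so the 4-byte form is sufficient here.
    static std::string utf8(char32_t cp) {
        std::string s;
        s += static_cast<char>(0xF0 | (cp >> 18));
        s += static_cast<char>(0x80 | ((cp >> 12) & 0x3F));
        s += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
        s += static_cast<char>(0x80 | (cp & 0x3F));
        return s;
    }

    int main() {
        // Print the first few code points of the "Cuneiform" block (U+12000-U+123FF).
        for (char32_t cp = 0x12000; cp < 0x12008; ++cp)
            std::printf("U+%05X %s\n", static_cast<unsigned>(cp), utf8(cp).c_str());
        return 0;
    }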
List of major cuneiform tablet discoveries

| Location | Number of tablets | Initial discovery | Language |
| Kuyunjik hill on the Tigris River, outside Mosul, now in Iraq | NA | 1840–1842 | |
| Khorsabad hill on the Tigris River, outside Mosul, now in Iraq | Significant | 1843 | |
| Library of Ashurbanipal | 20,000–24,000 | 1849 | Akkadian |
| Sippar | Tens of thousands | 1880 | Babylonian |
| Persepolis, Iran | | 1933 | Old Persian |
| Ebla tablets | c. 5,000 | 1974 | Sumerian and Eblaite |
| Tablet V of the Epic of Gilgamesh | 1 | 2011 | Old Babylonian |

- Pronounced kew-NEE-i-form, kew-NAY-i-form, or KEW-ni-form.
- Egyptian hieroglyphs date to about the same period, and it is unsettled which system began first.
- It seems that various parts of Rawlinson's paper formed Vol. X of this journal. The final part III comprised chapters IV (Analysis of the Persian Inscriptions of Behistun) and V (Copies and Translations of the Persian Cuneiform Inscriptions of Persepolis, Hamadan, and Van), pp. 187–349.
- Feldherr, Andrew; Hardy, Grant, eds. (February 17, 2011). The Oxford History of Historical Writing: Volume 1: Beginnings to AD 600. Oxford University Press. p. 5. doi:10.1093/acprof:osobl/9780199218158.001.0001. ISBN 9780199218158.
- "Definition of cuneiform in English". Oxford Dictionaries. Archived from the original on September 25, 2016. Retrieved July 30, 2017.
- Cuneiform: Irving Finkel & Jonathan Taylor bring ancient inscriptions to life. The British Museum. June 4, 2014. Archived from the original on October 17, 2015. Retrieved July 30, 2017.
- Visible Language: Inventions of Writing in the Ancient Middle East and Beyond. Woods, Christopher; Emberling, Geoff; Teeter, Emily; University of Chicago Oriental Institute. Chicago, Ill.: Oriental Institute of the University of Chicago. 2010. p. 13. ISBN 9781885923769. OCLC 664327312.
- Starr, Jerald Jack. "The invention and evolution of Sumerian writing". Sumerian Shakespeare. Retrieved April 30, 2019.
- Cammarosano, Michele (2017–2018). "Cuneiform Writing Techniques". cuneiform.neocities.org. Retrieved July 18, 2018.
- Cammarosano, Michele (2014). "The Cuneiform Stylus". Mesopotamia. XLIX: 53–90 – via https://osf.io/dfng4/.
- Bramanti, Armando (2015). "The Cuneiform Stylus. Some Addenda". Cuneiform Digital Library Notes. 2015 (12).
- Taylor, Jonathan. "Wedge Order in Cuneiform: a Preliminary Survey".
- From New Latin cuneiformis, composed of cuneus "wedge" and forma "shape" (17th century); applied to the script in the 19th century (Henry Creswicke Rawlinson, The Persian Cuneiform Inscription at Behistun, Decyphered and Tr.; with a Memoir on Persian Cuneiform Inscriptions in General, and on that of Behistun in Particular (1846)). Different shape-derived names occur in several other languages, such as Finnish nuolenpääkirjoitus "arrowhead script", Hebrew כתב יתדות "stake script", and Persian میخی and Dutch spijkerschrift, both meaning "nail script".
- The word "cuneiform" was coined in 1700 by the English orientalist Thomas Hyde (1663–1703):
- Thomas Hyde, Historia Religionis Veterum Persarum, ... [History of the religion of the ancient Persians ...] (Oxford, England: Sheldonian Theater, 1700), p. 526. [in Latin] On pages 526–527, Hyde discusses the cuneiform found at Persepolis. From p.
526: "Istiusmodi enim ductuli pyramidal seu Cuneiformes non veniunt in Gavrorum literis, nec in Telesmaticis, nec in Hieroglyphicis Ægypti; sed tales ductus (tam inter seinvicem juxta positi quam per seinvicem transmissi) sunt peculiares Persepoli ..." (Because such thin pyramidal or wedge forms do not occur in the letters of the Gavres [variously spelled Gabres, Guebers, Ghebers, or Chebers, was an old English name for Zoroastrians, an ancient cult of fire worshippers; the word Gavres was derived from the Persian word gaur for "infidel"], nor in talismans, nor in Egyptian hieroglyphs; but such drawings (so closely placed among each other as [intended to] be conveyed by means of each other) are peculiar to Persepolis, ... ) - (Meade, 1974), p. 5. Archived December 19, 2016, at the Wayback Machine - Kaempfer, Engelbert, Amoenitatum Exoticarum ... [Of Foreign Charms ... ] (Lippe (Lemgoviae), (Germany): Heinrich Wilhelm Meyer, 1712), p. 331. On p. 331 Kaempfer describes cuneiform as: " ... formam habentibus cuneolorum; ... " ( ... having the form of wedges; ... ). [Note: A sample of the cuneiform from Persepolis appears on the plate following p. 332. ] - From pp. 317–318: "Cl. Thomas Hyde, Anglus, Vir in linguis & rebus exoticis præclare doctus, in Hist. Relig. vet. Pers. & Med. ... " (The famous Thomas Hyde, an Englishman, a man well trained in languages and in exotic things, in [his] Historia Religionis Veterum Persarum ... ) - Maurray, Stuart A. P. (2009). The Library: An Illustrated History. Chicago: Skyhorse Publishing. p. 7. ISBN 1-61608-453-7. - Geoffrey Sampson (January 1, 1990). Writing Systems: A Linguistic Introduction. Stanford University Press. pp. 78–. ISBN 978-0-8047-1756-4. Retrieved October 31, 2011. - Geoffrey W. Bromiley (June 1995). The international standard Bible encyclopedia. Wm. B. Eerdmans Publishing. pp. 1150–. ISBN 978-0-8028-3784-4. Retrieved October 31, 2011. - Iorwerth Eiddon Stephen Edwards, et al., The Cambridge Ancient History (3d ed. 1970) pp. 43–44. - Barraclough, Geoffrey; Stone, Norman (1989). The Times Atlas of World History. Hammond Incorporated. p. 53. ISBN 9780723003045. - "Cuneiform Tablets: Who's Got What?", Biblical Archaeology Review, 31 (2), 2005, archived from the original on July 15, 2014 - Watkins, Lee; Snyder, Dean (2003), The Digital Hammurabi Project (PDF), The Johns Hopkins University, archived (PDF) from the original on July 14, 2014, Since the decipherment of Babylonian cuneiform some 150 years ago museums have accumulated perhaps 300,000 tablets written in most of the major languages of the Ancient Near East – Sumerian, Akkadian (Babylonian and Assyrian), Eblaite, Hittite, Persian, Hurrian, Elamite, and Ugaritic. These texts include genres as variegated as mythology and mathematics, law codes and beer recipes. In most cases these documents are the earliest exemplars of their genres, and cuneiformists have made unique and valuable contributions to the study of such moderns disciplines as history, law, religion, linguistics, mathematics, and science. In spite of continued great interest in mankind's earliest documents it has been estimated that only about 1/10 of the extant cuneiform texts have been read even once in modern times. 
There are various reasons for this: the complex Sumero/Akkadian script system is inherently difficult to learn; there is, as yet, no standard computer encoding for cuneiform; there are only a few hundred qualified cuneiformists in the world; the pedagogical tools are, in many cases, non-optimal; and access to the widely distributed tablets is expensive, time-consuming, and, due to the vagaries of politics, becoming increasingly difficult. - "Image gallery: tablet / cast". British Museum. - "Beginning in the pottery-phase of the Neolithic, clay tokens are widely attested as a system of counting and identifying specific amounts of specified livestock or commodities. The tokens, enclosed in clay envelopes after being impressed on their rounded surface, were gradually replaced by impressions on flat or plano-convex tablets, and these in turn by more or less conventionalized pictures of the tokens incised on the clay with a reed stylus. That final step completed the transition to full writing, and with it the consequent ability to record contemporary events for posterity" W. Hallo; W. Simpson (1971). The Ancient Near East. New York: Harcourt, Brace, Jovanovich. p. 25. - Daniels, Peter T. (1996). The World's Writing Systems. Oxford University Press. p. 45. ISBN 9780195079937. - Boudreau, Vincent (2004). The First Writing: Script Invention as History and Process. Cambridge University Press. p. 71. ISBN 9780521838610. - Adkins 2003, p. 47. - Cunningham, Lawrence S.; Reich, John J.; Fichner-Rathus, Lois (2014). Culture and Values: A Survey of the Western Humanities, Volume 1. Cengage Learning. p. 13. ISBN 978-1-285-45818-2. - Denise Schmandt-Besserat, "An Archaic Recording System and the Origin of Writing." Syro Mesopotamian Studies, vol. 1, no. 1, pp. 1–32, 1977 - Walker, C. (1987). Reading The Past Cuneiform. British Museum. pp. 7-6. - Denise Schmandt-Besserat, An Archaic Recording System in the Uruk-Jemdet Nasr Period, American Journal of Archaeology, vol. 83, no. 1, pp. 19–48, (Jan., 1979) - Walker, C. (1987). Reading The Past Cuneiform. British Museum. p. 9. - Walker, C. (1987). Reading The Past Cuneiform. British Museum. p. 7. - Walker, C. (1987). Reading The Past Cuneiform. British Museum. p. 14. - Walker, C. (1987). Reading The Past Cuneiform. British Museum. p. 12. - Walker, C. (1987). Reading The Past Cuneiform. British Museum. pp. 11-12. - Walker, C. (1987). Reading The Past Cuneiform. British Museum. p. 13. - "Proto-cuneiform tablet". www.metmuseum.org. - Daniels, Peter T.; Bright, William (1996). The World's Writing Systems. Oxford University Press. p. 38. ISBN 978-0-19-507993-7. - Walker, C. (1987). Reading The Past Cuneiform. British Museum. p. 14. - Krejci, Jaroslav (1990). Before the European Challenge: The Great Civilizations of Asia and the Middle East. SUNY Press. p. 34. ISBN 978-0-7914-0168-2. - Mémoires. Mission archéologique en Iran. 1900. p. 53. - Walker, C. (1987). Reading The Past Cuneiform. British Museum. p. 16. - Krejci, Jaroslav (1990). Before the European Challenge: The Great Civilizations of Asia and the Middle East. SUNY Press. p. 34. ISBN 978-0-7914-0168-2. - Geller, Marckham (1997). "The Last Wedge". Zeitschrift für Assyriologie und vorderasiatische Archäologie. 87 (1): 43–95. doi:10.1515/zava.19220.127.116.11. - Michałowski, Piotr (2003). "The Libraries of Babel: Text, Authority, and Tradition in Ancient Mesopotamia". In Dorleijn, Gillis J.; Vanstiphout, Herman L. J. (eds.). Cultural Repertoires: Structure, Function, and Dynamics. 
Leuven, Paris, Dudley: Peeters Publishers. p. 108. ISBN 978-90-429-1299-1. Retrieved August 20, 2019. - Anderson, Terence J.; Twining, William (2015). "Law and archaeology: Modified Wigmorean Analysis". In Chapman, Robert; Wylie, Alison (eds.). Material Evidence: Learning from Archaeological Practice. Abingdon, UK; New York, NY: Routledge. p. 290. ISBN 978-1-317-57622-8. Retrieved August 20, 2019. - Schmitt, R. (2008), "Old Persian", in Roger D. Woodard (ed.), The Ancient Languages of Asia and the Americas (illustrated ed.), Cambridge University Press, p. 77, ISBN 978-0521684941 - Sayce 1908. - El Daly, Okasha (2004). Egyptology: The Missing Millennium : Ancient Egypt in Medieval Arabic Writings. Routledge. pp. 39–40 & 65. ISBN 1-84472-063-2. - C. Wade Meade, Road to Babylon: Development of U.S. Assyriology, Archived December 19, 2016, at the Wayback Machine Brill Archive, 1974 p.5. - Gouvea, Antonio de, Relaçam em que se tratam as guerras e grandes vitórias que alcançou o grande Rey de Persia Xá Abbas, do grão Turco Mahometo, e seu Filho Amethe ... [An account in which are treated the wars and great victories that were attained by the great king of Persia Shah Abbas against the great Turk Mehmed and his son, Ahmed ... ] (Lisbon, Portugal: Pedro Crasbeeck, 1611), p. 32. Archived March 20, 2018, at the Wayback Machine [in Portuguese] - French translation: Gouvea, Antonio de, with Alexis de Meneses, trans., Relation des grandes guerres et victoires obtenues par le roy de Perse Cha Abbas contre les empereurs de Turquie Mahomet et Achmet son fils, ... (Rouen, France: Nicolas Loyselet, 1646), pp. 81–82. Archived March 20, 2018, at the Wayback Machine [in French] From pp. 81–82: "Peu esloigné de là estoit la sepulture de la Royne, qui estoit fort peu differente. L'escriture qui donnoit cognoissance par qui, pourquoy, & en quel temps cest grande masse avoit esté bastie est fort distincte en plusieurs endroits du bastiment: mais il n'y a personne qui y entende rien, parce que les carracteres ne sont Persiens, Arabes, Armeniens ny Hebreux, qui sont les langages aujourd'hui en usage en ces quartiers là, ... " (Not far from there [i.e., Persepolis or "Chelminira"] was the sepulchre of the queen, which wasn't much different. The writing that announced by whom, why, and at what time this great mass had been built, is very distinct in several locations in the building: but there wasn't anyone who understood it, because the characters were neither Persian, Arabic, Armenian, nor Hebrew, which are the languages in use today in those quarters ... ) - In 1619, Spain's ambassador to Persia, García de Silva Figueroa (1550–1624), sent a letter to the Marquesse of Bedmar, discussing various subjects regarding Persia, including his observations on the cuneiform inscriptions at Persepolis. This letter was originally printed in 1620: - Figueroa, Garcia Silva, Garciae Silva Figueroa ... de Rebus Persarum epistola v. Kal. an. M.DC.XIX Spahani exarata ad Marchionem Bedmari (Antwerp, (Belgium): 1620), 16 pages. [in Latin]. - "Letter from Don Garcia Silva Figueroa Embassador from Philip the Third King of Spain, to the Persian, Written at Spahan, or Hispahan Anno 1619 to the Marquese Bedmar Touching Matters of Persia," Archived March 20, 2018, at the Wayback Machine in: Purchas, Samuel, Purchas His Pilgrimes (London, England: William Stansby, 1625), vol. 2, book IX, Chap. XI, pp. 1533–1535. - Figueroa, Don Garcia Silva, "Chap. XI. 
Letter from Don Garcia Silva Figueroa Embassador from Philip the Third King of Spain, to the Persian, Written at Spahan, or Hispahan Anno 1619 to the Marquese Bedmar Touching Matters of Persia," in Purchas, Samuel, Hakluytus Posthumus or Purchas His Pilgrimes, ... (Glasgow, Scotland: James MacLehose and Sons, 1905), vol. 9, pp. 190–196. On pp. 192–193, Figueroa describes the cuneiform at Persepolis: "The Letters themselves are neither Chaldæan, nor Hebrew, nor Greek, nor Arabic, nor of any other Nation, which was ever found of old, or at this day, to be extant. They are all three-cornered, but somewhat long, of the forme of a Pyramide, or such a little Obeliske, as I have set in the margine: so that in nothing doe they differ one from another, but in their placing and situation, yet so conformed that they are wondrous plaine distinct and perspicuous." - Hilprecht, Hermann Vollrat (1904). The Excavations in Assyria and Babylonia. Cambridge University Press. p. 17. ISBN 9781108025645. - Pallis, Svend Aage (1954) "Early exploration in Mesopotamia, with a list of the Assyro-Babylonian cuneiform texts published before 1851," Det Kongelige Danske Videnskabernes Selskab: Historisk-filologiske Meddelelser (The Royal Danish Society of Science: Historical-philological Communications), 33 (6) : 1–58; see p. 10. Available at: Royal Danish Society of Science Archived October 6, 2017, at the Wayback Machine - Valle, Pietro della, Viaggi di Pietro della Valle, Il Pellegrino [The journeys of Pietro della Valle, the pilgrim] (Brighton, England: G. Gancia, 1843), vol. 2, pp. 252–253. From p. 253: "Mi da indizio che possa scriversi dalla sinistra alla destra al modo nostro, ... " (It indicates to me that it [i.e., cuneiform] might be written from left to right in our way, ... ) - Herbert, Thomas, Some Yeares Travels into Africa & Asia the Great. ... (London, England: R. Bishop, 1638), pp. 145–146. From pages 145–146: "In part of this great roome [i.e., in the palace at Persepolis] (not farre from the portall) in a mirrour of polisht marble, wee noted above a dozen lynes of strange characters, very faire and apparent to the eye, but so mysticall, so odly framed, as no Hierogliphick, no other deep conceit can be more difficultly fancied, more adverse to the intellect. These consisting of Figures, obelisk, triangular, and pyramidall, yet in such Simmetry and order as cannot well be called barbarous. Some resemblance, I thought some words had of the Antick Greek, shadowing out Ahashuerus Theos. And though it have small concordance with the Hebrew, Greek, or Latine letter, yet questionless to the Inventer it was well knowne; and peradventure may conceale some excellent matter, though to this day wrapt up in the dim leafes of envious obscuritie." - Herbert, Sir Thomas, Some Years Travels into Divers Parts of Africa and Asia the Great, 4th ed. (London, England: R. Everingham, 1677), pp. 141–142. From p. 141: " ... albeit I rather incline to the first [possibility], and that they comprehended words or syllables, as in Brachyography or Short-writing we familiarly practise: ... Nevertheless, by the posture and tendency of some of the Characters (which consist of several magnitudes) it may be supposed that this writing was rather from the left hand to the right, ... " Page 142 shows an illustration of some cuneiform. - Niebuhr, Carsten, Reisebeschreibung nach Arabien und andern umliegender Ländern (Account of travels to Arabia and other surrounding lands), vol. 2 (Kopenhagen, Denmark: Nicolaus Möller, 1778), p. 
150; see also the fold-out plate (Tabelle XXXI) after p. 152. From p. 150: "Ich will auf der Tabelle XXXI, noch eine, oder vielmehr vier Inschriften H, I, K, L beyfügen, die ich etwa in der Mitte an der Hauptmauer nach Süden, alle neben einander, angetroffen habe. Der Stein worauf sie stehen, ist 26 Fuß lang, und 6 Fuß hoch, und dieser ist ganz damit bedeckt. Man kann also daraus die Größe der Buchstaben beurtheilen. Auch hier sind drey verschiedene Alphabete." (I want to include in Plate XXXI another, or rather four inscriptions H, I, K, L, which I found approximately in the middle of the main wall to the south [in the ruined palace at Persepolis], all side by side. The stone on which they appear, is 26 feet long and 6 feet high, and it's completely covered with them. One can thus judge therefrom the size of the letters. Also here, [there] are three different alphabets.) - Münter, Frederik (1800a) "Undersögelser om de Persepolitanske Inscriptioner. Förste Afhandling." (Investigations of the inscriptions of Persepolis. First part.), Det Kongelige Danske Videnskabers-Selskabs Skrivter (Writings of the Royal Danish Society of Science), 3rd series, 1 (1) : 253–292. [in Danish] - Münter, Frederik (1800b) "Undersögelser om de Persepolitanske Inscriptioner. Anden Afhandling." (Investigations of the inscriptions of Persepolis. Second part.), Det Kongelige Danske Videnskabers-Selskabs Skrivter (Writings of the Royal Danish Society of Science), 3rd series, 1 (2) : 291–348. [in Danish] On p. 339, Münter presents the Old Persian word for "king" written in cuneiform. - Reprinted in German as: Münter, Friederich, Versuch über die keilförmigen Inschriften zu Persepolis [Attempt at the cuneiform inscription at Persepolis] (Kopenhagen, Denmark: C. G. Prost, 1802). - Heeren 1815. - Ceram, C.W., Gods, Graves and Scholars, 1954 - Grotefend, G. F., "Ueber die Erklärung der Keilschriften, und besonders der Inschriften von Persepolis" [On the explanation of cuneiform, and especially of the inscriptions of Persepolis] in: Heeren, Arnold Hermann Ludwig, Ideen über die Politik, den Verkehr und den Handel der vornehmsten Völker der alten Welt [Ideas about the politics, commerce, and trade of the most distinguished peoples of the ancient world], part 1, section 1, (Göttingen, (Germany): Bandelhoel und Ruprecht, 1815), 563–609. [in German] - English translation: Grotefend, G.F., "Appendix II: On the cuneiform character, and particularly the inscriptions at Persepolis" in: Heeren, Arnold Hermann Ludwig, with David Alphonso Talboys, trans., Historical Researches into the Politics, Intercourse, and Trade of the Principal Nations of Antiquity, vol. 2, (Oxford, England: D.A. Talboys, 1833), pp. 313–360. Grotefend's determinations of the values of several characters in cuneiform are also briefly mentioned in vol. 1, p. 196. - Senner, Wayne M. (1991). The Origins of Writing. University of Nebraska Press. p. 77. ISBN 9780803291676. - Burnouf 1836 - Prichard 1844, pp. 30–31 - Adkins 2003.[full citation needed] - Rawlinson 1847. - Daniels 1996. - Cathcart, Kevin J. (2011). "The Earliest Contributions to the Decipherment of Sumerian and Akkadian". Cuneiform Digital Library Journal (1). ISSN 1540-8779. - Finkel, Irving (July 24, 2019). Cracking Ancient Codes: Cuneiform Writing - with Irving Finkel. The Royal Institution. Event occurs at 32:10. Retrieved July 29, 2019. - Rawlinson, Henry; Fox Talbot, William Henry; Hincks, Edward; and Oppert, Julius, Inscription of Tiglath-Pileser I., King of Assyria, B.C. 1150, ... 
(London, England: J. W. Parker and Son, 1857). For a description of the "experiment" in the translation of cuneiform, see pp. 3–7. - "Site officiel du musée du Louvre". cartelfr.louvre.fr. - Foxvog, Daniel A. Introduction to Sumerian grammar (PDF). pp. 16–17, 20–21. Archived (PDF) from the original on January 3, 2017 (about phonemes g̃ and ř and their representation using cuneiform signs). - Jagersma, A. H. A descriptive grammar of Sumerian (PDF) (Thesis). pp. 43–45, 50–51. Archived (PDF) from the original on November 25, 2015 (about phonemes g̃ and ř and their representation using cuneiform signs). - "Nimintabba tablet". British Museum. - Enderwitz, Susanne; Sauer, Rebecca (2015). Communication and Materiality: Written and Unwritten Communication in Pre-Modern Societies. Walter de Gruyter GmbH & Co KG. p. 28. ISBN 978-3-11-041300-7. - "(For the goddess) Nimintabba, his lady, Shulgi, mighty man, king of Ur, king of Sumer and Akkad, her house, built." in Expedition. University Museum of the University of Pennsylvania. 1986. p. 30. - "The World's Oldest Writing". Archaeology. 69 (3). May 2016. Retrieved September 18, 2016 – via Virtual Library of Virginia.[permanent dead link] - Wilcke, Claus (2000). Wer las und schrieb in Babylonien und Assyrien. München: Verlag der Bayerischen Akademie der Wissenschaften. ISBN 978-3-7696-1612-5. - Charpin, Dominique. 2004. "Lire et écrire en Mésopotamie: Une affaire dé spécialistes?" Comptes rendus de l'Académie des Inscriptions et Belles Lettres: 481–501. - Veldhuis, Niek (2011). "Levels of Literacy". The Oxford Handbook of Cuneiform Culture. doi:10.1093/oxfordhb/9780199557301.001.0001. hdl:10261/126580. ISBN 9780199557301. - Everson, Michael; Feuerherm, Karljürgen; Tinney, Steve (June 8, 2004). "Final proposal to encode the Cuneiform script in the SMP of the UCS Archived October 17, 2016, at the Wayback Machine." - "Persepolis Fortification Archive | The Oriental Institute of the University of Chicago". oi.uchicago.edu. Archived from the original on September 29, 2016. Retrieved September 18, 2016. - Bertman, Stephen (2005). Handbook to Life in Ancient Mesopotamia. Oxford University Press. ISBN 978-0195183641. - Ellermeier, Friedrich., and Margret. Studt. Sumerisches Glossar. Bd. 3, T. 6, Handbuch Assur / Friedrich Ellmermeier; Margret Studt.Hardegsen bei Göttingen: Selbstverl. Ellermeier, 2003. Print. Theologische und orientalistische Arbeiten aus Göttingen, 4; Theologische und orientalistische Arbeiten aus Göttingen, 4. - "The Hittite cuneiform tablets from Bogazköy | United Nations Educational, Scientific and Cultural Organization". www.unesco.org. Archived from the original on September 19, 2016. Retrieved September 18, 2016. - Michel, Cecile, Old Assyrian Bibliography, 2001. - Tablets from the site surfaced on the market as early as 1880, when three tablets made their way to European museums. By the early 1920s, the number of tablets sold from the site exceeded 4,000. While the site of Kültepe was suspected as the source of the tablets, and the site was visited several times, it was not until 1925 when Bedrich Hrozny corroborated this identification by excavating tablets from the fields next to the tell that were related to tablets already purchased. - Lauinger, Jacob (January 1, 2007). Archival practices at Old Babylonian/Middle Bronze Age Alalakh (Level VII) (Thesis). THE UNIVERSITY OF CHICAGO. Archived from the original on July 14, 2014. - Moorey, P.R.S. (1992). A Century of Biblical Archaeology. Westminster Knox Press. ISBN 978-0664253929. 
- Amin, Osama S. M. (September 24, 2015). "The newly discovered tablet V of the Epic of Gilgamesh". Ancient History et cetera. Archived from the original on September 3, 2016. Retrieved September 18, 2016. - Adkins, Lesley, Empires of the Plain: Henry Rawlinson and the Lost Languages of Babylon, New York, St. Martin's Press (2003) ISBN 0-312-33002-2 - Bertman, Stephen (2005), Handbook to Life in Ancient Mesopotamia, Oxford University Press, ISBN 9780195183641 - R. Borger, Assyrisch-Babylonische Zeichenliste, 2nd ed., Neukirchen-Vluyn (1981) - Borger, Rykle (2004). Dietrich, M.; Loretz, O. (eds.). Mesopotamisches Zeichenlexikon. Alter Orient und Altes Testament. 305. Münster: Ugarit Verlag. ISBN 3-927120-82-0. - Burnouf, E. (1836). "Mémoire sur deux Inscriptions Cunéiformes trouvées près d'Hamadan et qui font partie des papiers du Dr Schulz", [Memoir on two cuneiform inscriptions [that were] found near Hamadan and that form part of the papers of Dr. Schulz], Imprimerie Royale, Paris. - Cammarosano, M. (2017–2018) "Cuneiform Writing Techniques", cuneiform.neocities.org (with further bibliography) - Daniels, Peter; Bright, William (1996). The World's Writing Systems. Oxford University Press. p. 146. ISBN 0-19-507993-0. - A. Deimel (1922), Liste der archaischen Keilschriftzeichen ("LAK"), WVDOG 40, Berlin. - A. Deimel (1925–1950), Šumerisches Lexikon, Pontificum Institutum Biblicum. - F. Ellermeier, M. Studt, Sumerisches Glossar - A. Falkenstein, Archaische Texte aus Uruk, Berlin-Leipzig (1936) - Charpin, Dominique. 2004. 'Lire et écrire en Mésopotamie: une affaire dé spécialistes?’ Comptes rendus de l’Académie des Inscriptions et Belles Lettres: 481–501. - E. Forrer, Die Keilschrift von Boghazköi, Leipzig (1922) - J. Friedrich, Hethitisches Keilschrift-Lesebuch, Heidelberg (1960) - Jean-Jacques Glassner, The Invention of Cuneiform, English translation, Johns Hopkins University Press (2003), ISBN 0-8018-7389-4. - Hayes, John L. (2000). A Manual of Sumerian Grammar and Texts. Aids and Research Tools in Ancient Near Eastern Studies. 5 (2d ed.). Malibu: Undena Publications. ISBN 0-89003-197-5. - Heeren (1815) "Ideen über die Politik, den Verkehr und den Handel der vornehmsten Volker der alten Welt", vol. i. pp. 563 seq., translated into English in 1833. - Kramer, Samuel Noah (1981). "Appendix B: The Origin of the Cuneiform Writing System". History Begins at Sumer: Thirty-Nine Firsts in Man's Recorded History (3d revised ed.). Philadelphia: University of Pennsylvania Press. pp. 381–383. ISBN 0-8122-7812-7. - René Labat, Manuel d'epigraphie Akkadienne, Geuthner, Paris (1959); 6th ed., extended by Florence Malbran-Labat (1999), ISBN 2-7053-3583-8. - Lassen, Christian (1836) Die Altpersischen Keil-Inschriften von Persepolis. Entzifferung des Alphabets und Erklärung des Inhalts. [The Old-Persian cuneiform inscriptions of Persepolis. Decipherment of the alphabet and explanation of its content.] Eduard Weber, Bonn, (Germany). - Mittermayer, Catherine; Attinger, Pascal (2006). Altbabylonische Zeichenliste der Sumerisch-Literarischen Texte. Orbis Biblicus et Orientalis. Special Edition. Academic Press Fribourg. ISBN 978-3-7278-1551-5. - Moorey, P.R.S. (1992). A Century of Biblical Archaeology. Westminster Knox Press. ISBN 978-0664253929. - O. Neugebauer, A. Sachs (eds.), Mathematical Cuneiform Texts, New Haven (1945). - Patri, Sylvain (2009). "La perception des consonnes hittites dans les langues étrangères au XIIIe siècle." Zeitschrift für Assyriologie und vorderasiatische Archäologie 99(1): 87–126. 
doi:10.1515/ZA.2009.003. - Prichard, James Cowles (1844). "Researches Into the Physical History of Mankind", 3rd ed., vol IV, Sherwood, Gilbert and Piper, London. - Rawlinson, Henry (1847) "The Persian Cuneiform Inscription at Behistun, decyphered and translated; with a Memoir on Persian Cuneiform Inscriptions in general, and on that of Behistun in Particular," The Journal of the Royal Asiatic Society of Great Britain and Ireland, vol. X. JSTOR 25581217. - Y. Rosengarten, Répertoire commenté des signes présargoniques sumériens de Lagash, Paris (1967) - Chr. Rüster, E. Neu, Hethitisches Zeichenlexikon (HZL), Wiesbaden (1989) - Sayce, Rev. A. H. (1908). "The Archaeology of the Cuneiform Inscriptions", Second Edition-revised, 1908, Society for Promoting Christian Knowledge, London, Brighton, New York; at pp 9–16 Not in copyright - Nikolaus Schneider, Die Keilschriftzeichen der Wirtschaftsurkunden von Ur III nebst ihren charakteristischsten Schreibvarianten, Keilschrift-Paläographie; Heft 2, Rom: Päpstliches Bibelinstitut (1935). - Wilcke, Claus. 2000. Wer las und schrieb in Babylonien und Assyrien. Sitzungsberichte der Bayerischen Akademie der Wissenschaften Philosophisch-historische Klasse. 2000/6. München: Verlag der Bayerischen Akademie der Wissenschaften. - Wolfgang Schramm, Akkadische Logogramme, Goettinger Arbeitshefte zur Altorientalischen Literatur (GAAL) Heft 4, Goettingen (2003), ISBN 3-936297-01-0. - F. Thureau-Dangin, Recherches sur l'origine de l'écriture cunéiforme, Paris (1898). - Ronald Herbert Sack, Cuneiform Documents from the Chaldean and Persian Periods, (1994) ISBN 0-945636-67-9
round() is used to round off a given value, which can be a float or a double. It returns the nearest integral value to the parameter provided, with halfway cases rounded away from zero. Instead of round(), std::round() can also be used.

Header files used: cmath, ctgmath

Parameters: x, the value to be rounded

    double round (double x);
    float round (float x);
    long double round (long double x);
    double round (T x);  // additional overloads for integral types

Returns: The value of x rounded to the nearest integral value (as a floating-point value).

Sample output (a sketch of a program producing output in this form is given at the end of this article):

    Nearest value of x :13
    Nearest value of y :13
    Nearest value of z :15

    lround(-0.0) = 0
    lround(2.3) = 2
    lround(2.5) = 3
    lround(2.7) = 3
    lround(-2.3) = -2
    lround(-2.5) = -3
    lround(-2.7) = -3

    llround(-0.01234) = 0
    llround(2.3563) = 2
    llround(2.555) = 3
    llround(2.7896) = 3
    llround(-2.323) = -2
    llround(-2.5258) = -3
    llround(-2.71236) = -3

Here, the program above simply calculates the nearest integral value of each given float or double value, which it does accurately.

- Handling the mismatch between fractions and decimals: One use of rounding is to shorten the endless run of threes to the right of the decimal point when converting 1/3 to decimal. Most of the time, we use the rounded numbers 0.33 or 0.333 when we need to work with 1/3 in decimal form. We usually work with just two or three digits to the right of the decimal point when there is no exact decimal equivalent to the fraction.
- Changing a multiplied result: Multiplying decimals increases the number of decimal places: 0.25 × 0.75 = 0.1875. We started with 2 digits to the right of the decimal point and ended up with 4. Many times we will simply round the result, here to 0.19. The program below contrasts rounding the product of two integers with rounding the product of two small decimals.

    // C++ code for the above explanation
    #include <iostream>
    #include <cmath>
    using namespace std;

    // Driver program
    int main()
    {
        // Initializing values for int type
        int a1 = 25, b1 = 30;

        // Initializing values for double type
        double a2 = .25, b2 = .30;

        double ans_1 = (a1 * b1);
        double ans_2 = (a2 * b2);

        // Rounded result for both
        cout << "From first multiplication :" << round(ans_1) << endl;
        cout << "From second multiplication :" << round(ans_2) << endl;

        return 0;
    }

Output:

    From first multiplication :750
    From second multiplication :0

- Fast calculation: When we need a quick calculation we can take an approximate value and then compute the nearest answer. For example, if a calculation gives 298.78, rounding off gives a round figure of 300.
- Getting an estimate: Sometimes you want to round integers instead of decimal numbers. Usually you are interested in rounding to the nearest multiple of 10, 100, 1,000 or a million. For example, in 2006 the census department determined that the population of New York City was 8,214,426. That number is hard to remember, and saying that the population of New York City is 8 million is a good estimate, because it doesn't make any real difference what the exact number is.

Reference: www.mathworksheetcenter.com, www.cplusplus.com

This article is contributed by Himanshu Ranjan.
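The program that produced the "Nearest value" and lround()/llround() output above was lost in extraction; the following is a minimal reconstruction of such a program. The input values x = 12.6, y = 13.4 and z = 14.8 are illustrative guesses, not the article's original numbers (any values that round to 13, 13 and 15 would fit), and only a few of the lround()/llround() cases are repeated.

    #include <iostream>
    #include <cmath>
    using namespace std;

    int main()
    {
        // Assumed inputs for illustration
        double x = 12.6, y = 13.4, z = 14.8;

        cout << "Nearest value of x :" << round(x) << endl;
        cout << "Nearest value of y :" << round(y) << endl;
        cout << "Nearest value of z :" << round(z) << endl;

        // lround() returns a long int; halfway cases are rounded away from zero.
        cout << "lround(2.5) = " << lround(2.5) << endl;
        cout << "lround(-2.5) = " << lround(-2.5) << endl;

        // llround() returns a long long int.
        cout << "llround(2.555) = " << llround(2.555) << endl;
        cout << "llround(-2.71236) = " << llround(-2.71236) << endl;

        return 0;
    }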
Using a sheet of paper and scissors, you can cut out two faces to form a cylinder in the following way:
- Cut the paper horizontally (parallel to the shorter side) to get two rectangular parts.
- From the first part, cut out a circle of maximum radius. The circle will form the bottom of the cylinder.
- Roll the second part up in such a way that it has a perimeter of equal length to the circle's circumference, and attach one end of the roll to the circle. Note that the roll may have some overlapping parts in order to get the required length of the perimeter.

Given the dimensions of the sheet of paper, can you calculate the biggest possible volume of a cylinder which can be constructed using the procedure described above?

The input consists of several test cases. Each test case consists of two numbers w and h (1 ≤ w ≤ h ≤ 100), which indicate the width and height of the sheet of paper. The last test case is followed by a line containing two zeros.

For each test case, print one line with the biggest possible volume of the cylinder. Round this number to 3 places after the decimal point.

Sample input:
10 10
10 50
10 30
0 0

Sample output:
54.247
785.398
412.095
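The problem statement does not prescribe a solution method; the following C++ sketch is one way to solve it, under the assumption (which reproduces the sample output above) that only two constructions need to be compared, and relying on the guarantee w ≤ h. In construction A the leftover strip wraps around along its length, so the cylinder's height is w; in construction B the width w wraps around, so the cut is placed at x = w/π and the height is h − w/π.

    #include <cstdio>
    #include <cmath>
    #include <algorithm>

    int main()
    {
        const double PI = acos(-1.0);
        double w, h;
        while (scanf("%lf %lf", &w, &h) == 2 && (w > 0 || h > 0)) {
            // Construction A: cylinder height = w.  The base circle of radius r is cut
            // from a w-by-x piece and the remaining w-by-(h-x) piece wraps around it,
            // so 2*pi*r <= h - x and 2*r <= min(w, x).  Balancing 2*r = x against
            // 2*pi*r = h - x gives r = h / (2 + 2*pi), capped by r <= w/2.
            double rA = std::min(w / 2.0, h / (2.0 + 2.0 * PI));
            double volA = PI * rA * rA * w;

            // Construction B: the width w wraps around, so 2*pi*r <= w, i.e.
            // r = w / (2*pi); cutting at x = w/pi leaves a cylinder of height h - w/pi.
            double rB = w / (2.0 * PI);
            double volB = PI * rB * rB * (h - w / PI);

            printf("%.3f\n", std::max(volA, volB));
        }
        return 0;
    }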
The standard error measures how far a sample statistic is likely to be from the true population value. In statistics, the standard deviation of a sampling distribution is known as the standard error. The standard error is used to measure the variability of sample statistics such as the mean or the median. If the standard error is small, it means that the sample gives a more reliable representation of the population. Since the standard error is inversely proportional to the square root of the sample size, a larger sample gives a smaller standard error.

The most common usage of the standard error is for the sample mean, using the formula stated below:

Standard error of the mean = standard deviation / square root of the sample size

In other words, it is found by dividing the standard deviation of the values used as data by the square root of the number of observations, i.e. the size of the random sample drawn.

The standard error tells you how close the population mean is likely to be to the sample mean, whereas the standard deviation measures the degree to which individuals within a sample differ from the sample mean. Since the standard error only gives an estimate of an unknown quantity, it is often preferable to use an approach that does not rely on it alone. In practice, the standard deviation of the error is usually unknown, and this must be taken into account when judging how close an estimate is to the true value. Student's t-distribution can be used to construct confidence intervals for means or their differences, instead of relying on the standard error directly, in order to obtain more realistic values. The standard error can still be used to estimate the size of the uncertainty in confidence intervals, but such use should only be relied on when the sample size is moderately large.

Since the standard error is itself a standard deviation, it is also used in regression analysis, where the standard error of the regression estimates the standard deviation of the underlying errors of the fitted model.
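As a concrete illustration of the formula above, here is a small C++ sketch (not from the original article) that computes the sample standard deviation and the standard error of the mean for a set of observations. The sample values are made up for the example, and the standard deviation is computed with the usual n − 1 denominator.

    #include <cstdio>
    #include <cmath>
    #include <vector>

    // Standard error of the mean: s / sqrt(n), where s is the sample
    // standard deviation (n - 1 denominator) and n is the sample size.
    double standard_error(const std::vector<double>& x) {
        const double n = static_cast<double>(x.size());

        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= n;

        double ss = 0.0;  // sum of squared deviations from the mean
        for (double v : x) ss += (v - mean) * (v - mean);
        double sd = std::sqrt(ss / (n - 1.0));

        return sd / std::sqrt(n);
    }

    int main() {
        std::vector<double> sample = {4.2, 5.1, 3.9, 4.8, 5.3, 4.4};  // made-up data
        std::printf("standard error of the mean = %.4f\n", standard_error(sample));
        return 0;
    }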
The previous article covered the basics of Probability Distributions and talked about the Uniform Probability Distribution. This article covers the Exponential Probability Distribution, which is also a continuous distribution, just like the Uniform Distribution.

Suppose we are posed with the question: how much time do we need to wait before a given event occurs? The answer to this question can be given in probabilistic terms if we model the problem using the Exponential Distribution. Since the time we need to wait is unknown, we can think of it as a Random Variable. If the probability of the event happening in a given interval is proportional to the length of the interval, then the Random Variable has an exponential distribution. The support (set of values the Random Variable can take) of an Exponential Random Variable is the set of all positive real numbers.

Probability Density Function – For a positive real number x, the probability density function of an exponentially distributed Random Variable is given by

f(x) = \lambda e^{-\lambda x}, \quad x \ge 0

Here \lambda > 0 is the rate parameter; larger values of \lambda concentrate the density near zero, while smaller values spread it out over longer waiting times. To check that the above function is a legitimate probability density function, we need to check that its integral over its support is 1:

\int_{0}^{\infty} \lambda e^{-\lambda x}\, dx = \left[ -e^{-\lambda x} \right]_{0}^{\infty} = 1

Cumulative Distribution Function – As we know, the cumulative distribution function is nothing but the sum of the probabilities of all events up to a certain value of x. For the Exponential distribution, the cumulative distribution function is given by

F(x) = P(X \le x) = 1 - e^{-\lambda x}, \quad x \ge 0

Expected Value – To find the expected value, we simply multiply the probability density function by x and integrate over all possible values (the support):

E[X] = \int_{0}^{\infty} x \, \lambda e^{-\lambda x}\, dx = \frac{1}{\lambda}

Variance and Standard Deviation – The variance of the Exponential distribution is given by

Var(X) = \frac{1}{\lambda^{2}}

and the standard deviation of the distribution is therefore

\sigma = \frac{1}{\lambda}

- Example – Let X denote the time between detections of a particle with a Geiger counter and assume that X has an exponential distribution with E(X) = 1.4 minutes. What is the probability that we detect a particle within 30 seconds of starting the counter?
- Solution – Since the Random Variable X denoting the time between successive detections of particles is exponentially distributed, the Expected Value gives E[X] = 1/\lambda = 1.4 minutes, so \lambda = 1/1.4 per minute. To find the probability of detecting the particle within 30 seconds of the start of the experiment, we use the cumulative distribution function discussed above, converting the given 30 seconds into minutes since our rate parameter is in terms of minutes:

P(X \le 0.5) = 1 - e^{-0.5/1.4} \approx 0.30

Lack of Memory Property – Now consider that in the above example, after detecting a particle at the 30 second mark, no particle is detected for three minutes. Because we have been waiting for the past 3 minutes, we feel that a detection is due, i.e. the probability of detection of a particle in the next 30 seconds should be higher than 0.3. However, this is not true for the exponential distribution. We can prove so by finding the probability of the above scenario, which can be expressed as a conditional probability:

P(X < 3.5 \mid X > 3) = \frac{P(3 < X < 3.5)}{P(X > 3)} = \frac{e^{-3\lambda} - e^{-3.5\lambda}}{e^{-3\lambda}} = 1 - e^{-0.5\lambda} = P(X < 0.5) \approx 0.30

The fact that we have waited three minutes without a detection does not change the probability of a detection in the next 30 seconds. Therefore, the probability only depends on the length of the interval being considered.
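A quick numerical check of the Geiger counter example, as a C++ sketch (the numbers mirror the worked example above; nothing here is new data):

    #include <cstdio>
    #include <cmath>

    int main() {
        const double mean   = 1.4;         // E[X] in minutes
        const double lambda = 1.0 / mean;  // rate parameter
        const double t      = 0.5;         // 30 seconds, expressed in minutes

        // P(X <= t) = 1 - exp(-lambda * t)
        double p = 1.0 - std::exp(-lambda * t);
        std::printf("P(detection within 30 s) = %.4f\n", p);  // ~0.30

        // Memoryless property: P(X < 3.5 | X > 3) equals P(X < 0.5)
        double p_cond = (std::exp(-lambda * 3.0) - std::exp(-lambda * 3.5))
                        / std::exp(-lambda * 3.0);
        std::printf("P(X < 3.5 | X > 3)       = %.4f\n", p_cond);
        return 0;
    }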
Distributive Property of Multiplication

Basics on the topic Distributive Property of Multiplication
- In This Video on 3rd Grade Distributive Property
- What is Distributive Property of Multiplication?
- Distributive Property Examples
- Distributive Property Multiplication Summary

In This Video on 3rd Grade Distributive Property

Mr. Squeaks needs to use the distributive property of multiplication to help Big 'Arry find out how many squares he has in all. After learning what the distributive property means and how to use it, Mr. Squeaks solves Big 'Arry's problem. At the bottom, you will find a distributive property worksheet!

What is Distributive Property of Multiplication?

What is the distributive property of multiplication? The distributive property of multiplication means breaking down one factor into smaller parts, multiplying each of those parts by the other, whole factor, and then adding the partial products together to get the product of the original factor pair. Below is a quick example of the distributive property. Now that you know what the distributive property is, why is it useful? It is useful because it can help break down more challenging multiplication problems into smaller ones.

Distributive Property Examples

Let's have a go at using the distributive property! Below, we have the factor pair four and six. First, we will break down the six into three and three. Then, we will distribute the four to both threes, to make four times three and four times three. Next, we find the product of both smaller factor pairs. Since they are both four times three, the product for both is twelve, so we write that down. Finally, add the two products together. Twelve plus twelve gives a sum of twenty-four. The product of four times six is twenty-four!

Distributive Property Multiplication Summary

To use the distributive property of multiplication, remember:
- Break down one of the factors
- Distribute the whole factor to the broken-down parts
- Multiply both smaller factor pairs
- Add the products of both to find the product of the original factor pair

Underneath, you will find more practice examples of the distributive property and a distributive property of multiplication worksheet.

Transcript: Distributive Property of Multiplication

Big 'Arry seems upset, it looks like Sheriff Squeaks is needed! "Big 'Arry! What seems to be the problem?" "Help! I don't know how many squares I have in all, but apparently, the Lil 'Arry's can help!" Let's help Sheriff Squeaks calculate how much Big 'Arry has in all by using the distributive property of multiplication. Distribute simply means to give out, so the distributive property of multiplication teaches us that breaking down one factor, or number, into smaller parts to multiply them separately by the other factor, and adding the products together gives the same product as multiplying the two factors together.
The distributive property of multiplication is very useful, because you can break down bigger multiplication problems into smaller ones! Let's practice with four times six. We can set up an array to model this problem. Break down one of the factors into smaller numbers that add up to the factor. For this problem, we will break down the six into three plus three. Next, distribute, or give out, the factor outside the parentheses to each number inside the parentheses, which is four times three for both. We can make two arrays with four rows of three to help. Now, find the products of the two smaller factor pairs. Four times three gives a product of twelve. We can check the answer by counting the arrays. Finally, add together the products. This means we will find the sum of twelve plus twelve... which is twenty-four. Let's check the answer by finding the product of the original problem! The product of the original problem is twenty-four, so the answer is correct!

Now that we have looked at the distributive property of multiplication and how it works, let's help Sheriff Squeaks calculate how many squares Big 'Arry has in all! Big 'Arry is seven rows by five, so set up an array with seven rows of five! What is the next step? Break down one of the factors! Since multiples of five are easier than multiples of seven, let's break down the seven. How can we break down the seven? We can break the seven into four plus three. What is the next step? Distribute the outside factor, five, to the two numbers inside the parentheses. What should we do next? Find the product of both! Four times five equals twenty, and three times five equals fifteen! Check your work by counting the arrays. What is the next step? Add both products together! Twenty plus fifteen equals thirty-five. What is the final step? Check your answer by finding the product of the original equation. Seven times five is thirty-five, so the answer is correct!

While Sheriff Squeaks informs Big 'Arry how many squares he has in all, let's review! Remember, to use the distributive property of multiplication, first identify a factor to break down into smaller parts. Next, distribute the other factor to the smaller factors, and solve. Then, add together the two products to get the final product. Finally, check your answer by finding the product of the original problem! "So Big 'Arry, you basically have as many squares as the Lil 'Arry's have put together!" "Woah, what's going on here!" "Imani, you will never believe what I just saw in that virtual reality world!"
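Written out symbolically, the two examples worked through in the video are:

    4 × 6 = 4 × (3 + 3) = (4 × 3) + (4 × 3) = 12 + 12 = 24
    7 × 5 = (4 + 3) × 5 = (4 × 5) + (3 × 5) = 20 + 15 = 35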