1651–1748: Early seeds

As early as 1651, the English government had sought to regulate trade in the American colonies, and on October 9 of that year Parliament passed the Navigation Acts to provide the plantation colonies of the south with a profitable export market. The Acts prohibited British producers from growing tobacco and also encouraged shipbuilding, particularly in the New England colonies. Some argue that the economic impact on the colonists was minimal, but the political friction which the acts triggered was more serious, as the merchants most directly affected were also the most politically active.

King Philip's War, which the New England colonies fought without any military assistance from England, ended in 1678, and this contributed to the development of a unique identity, separate from that of the British people. In the 1680s, however, King Charles II determined to bring the New England colonies under a more centralized administration in order to regulate trade more effectively. The New England colonists fiercely opposed his efforts, so the Crown nullified their colonial charters. Charles' successor James II finalized these efforts in 1686, establishing the Dominion of New England. Dominion rule triggered bitter resentment throughout New England; the enforcement of the unpopular Navigation Acts and the curtailing of local democracy angered the colonists. New Englanders were encouraged, however, by a change of government in England which saw James II effectively abdicate, and a populist uprising overthrew Dominion rule on April 18, 1689. Colonial governments reasserted their control in the wake of the revolt, and successive governments made no more attempts to restore the Dominion.

Subsequent English governments continued their efforts to tax certain goods, passing acts regulating the trade of wool, hats, and molasses. The Molasses Act of 1733 was particularly egregious to the colonists, as a significant part of colonial trade relied on molasses. The taxes severely damaged the New England economy and resulted in a surge of smuggling, bribery, and intimidation of customs officials. Colonial wars fought in America were also a source of considerable tension. The British captured the fortress of Louisbourg during King George's War but then ceded it back to France in 1748. New England colonists resented the loss of lives, as well as the effort and expenditure involved in subduing the fortress, only to have it returned to their erstwhile enemy.

Some writers begin their histories of the American Revolution with the British coalition victory in the Seven Years' War in 1763, viewing the French and Indian War as though it were the American theater of the Seven Years' War. Lawrence Henry Gipson writes: "It may be said as truly that the American Revolution was an aftermath of the Anglo-French conflict in the New World carried on between 1754 and 1763."

The Royal Proclamation of 1763 redrew the boundaries of the lands west of Quebec and west of a line running along the crest of the Allegheny Mountains, making them Indian territory and barring them to colonial settlement for two years. The colonists protested, and the boundary line was adjusted in a series of treaties with the Indians. In 1768, Indians agreed to the Treaty of Fort Stanwix and the Treaty of Hard Labour, followed in 1770 by the Treaty of Lochaber. The treaties opened most of Kentucky and West Virginia to colonial settlement.
1764–1766: Taxes imposed and withdrawn

Prime Minister George Grenville asserted in 1762 that the whole revenue of the custom houses in America amounted to one or two thousand pounds a year, and that the English exchequer was paying between seven and eight thousand pounds a year to collect it. Adam Smith wrote in The Wealth of Nations that Parliament "has never hitherto demanded of [the American colonies] anything which even approached to a just proportion to what was paid by their fellow subjects at home."

In 1764, continuing the mercantilist policy of the Navigation Acts, which was intended to ensure that trade enriched Great Britain while prohibiting trade with any other nations, Parliament passed the Sugar Act, decreasing the existing customs duties on sugar and molasses but providing stricter measures of enforcement and collection. That same year, Grenville proposed direct taxes on the colonies to raise revenue, but he delayed action to see whether the colonies would propose some way to raise the revenue themselves. Parliament finally passed the Stamp Act in March 1765, which imposed direct taxes on the colonies for the first time. All official documents, newspapers, almanacs, and pamphlets were required to have the stamps—even decks of playing cards. The colonists did not object that the taxes were high; they were actually low. They objected to the fact that they had no representation in Parliament, and thus no voice concerning legislation that affected them. Benjamin Franklin testified in Parliament in 1766 that Americans already contributed heavily to the defense of the Empire. He said that local governments had raised, outfitted, and paid 25,000 soldiers to fight France—as many as Britain itself sent—and spent many millions from American treasuries doing so in the French and Indian War alone.

London also had to deal with 1,500 politically well-connected British Army officers. The decision was made to keep them on active duty with full pay, but they had to be stationed somewhere. Stationing a standing army in Great Britain during peacetime was politically unacceptable, so the decision was made to station them in America and have the Americans pay for them. The soldiers had no military mission; they were not there to defend the colonies, because there was no threat to the colonies.

The Sons of Liberty formed in 1765, and they used public demonstrations, boycotts, and threats of violence to ensure that the British tax laws were unenforceable. In Boston, the Sons of Liberty burned the records of the vice admiralty court and looted the home of chief justice Thomas Hutchinson. Several legislatures called for united action, and nine colonies sent delegates to the Stamp Act Congress in New York City in October 1765. Moderates led by John Dickinson drew up a "Declaration of Rights and Grievances" stating that taxes passed without representation violated their rights as Englishmen, and colonists emphasized their determination by boycotting imports of British merchandise.

The Parliament at Westminster saw itself as the supreme lawmaking authority throughout all British possessions and thus entitled to levy any tax without colonial approval.
They argued that the colonies were legally British corporations completely subordinate to the British Parliament, and they pointed to numerous instances where Parliament had made laws in the past that were binding on the colonies. Parliament insisted that the colonies effectively enjoyed a "virtual representation" as most British people did, as only a small minority of the British population elected representatives to Parliament, but Americans such as James Otis maintained that they were not in fact virtually represented at all.

The Rockingham government came to power in July 1765, and Parliament debated whether to repeal the stamp tax or to send an army to enforce it. Benjamin Franklin made the case for repeal, explaining that the colonies had spent heavily in manpower, money, and blood in defense of the empire in a series of wars against the French and Indians, and that further taxes to pay for those wars were unjust and might bring about a rebellion. Parliament agreed and repealed the tax on February 21, 1766, but it insisted in the Declaratory Act of March 1766 that it retained full power to make laws for the colonies "in all cases whatsoever". The repeal nonetheless caused widespread celebrations in the colonies.

1767–1773: Townshend Acts and the Tea Act

In 1767, Parliament passed the Townshend Acts, which placed duties on a number of essential goods, including paper, glass, and tea, and established a Board of Customs in Boston to more rigorously execute trade regulations. The new taxes were enacted on the belief that Americans only objected to internal taxes and not to external taxes such as customs duties. The Americans, however, argued against the constitutionality of the acts because their purpose was to raise revenue and not to regulate trade. Colonists responded by organizing new boycotts of British goods. These boycotts were less effective, however, as the Townshend goods were widely used.

In February 1768, the Assembly of Massachusetts Bay issued a circular letter to the other colonies urging them to coordinate resistance. The governor dissolved the assembly when it refused to rescind the letter. Meanwhile, a riot broke out in Boston in June 1768 over the seizure of the sloop Liberty, owned by John Hancock, for alleged smuggling. Customs officials were forced to flee, prompting the British to deploy troops to Boston. A Boston town meeting declared that no obedience was due to parliamentary laws and called for the convening of a convention. A convention assembled but only issued a mild protest before dissolving itself. In January 1769, Parliament responded to the unrest by reactivating the Treason Act 1543, which called for subjects outside the realm to face trials for treason in England. The governor of Massachusetts was instructed to collect evidence of said treason, and the threat caused widespread outrage, though it was not carried out.

On March 5, 1770, a large crowd gathered around a group of British soldiers. The crowd grew threatening, throwing snowballs, rocks, and debris at them. One soldier was clubbed and fell. There was no order to fire, but the soldiers fired into the crowd anyway. They hit 11 people; three civilians died at the scene of the shooting, and two died after the incident. The event quickly came to be called the Boston Massacre. The soldiers were tried and acquitted (defended by John Adams), but the widespread descriptions soon began to turn colonial sentiment against the British.
This, in turn, began a downward spiral in the relationship between Britain and the Province of Massachusetts. A new ministry under Lord North came to power in 1770, and Parliament withdrew all taxes except the tax on tea, giving up its efforts to raise revenue while maintaining the right to tax. This temporarily resolved the crisis, and the boycott of British goods largely ceased, with only the more radical patriots such as Samuel Adams continuing to agitate.

In June 1772, American patriots, including John Brown, burned the British customs schooner Gaspee, which had been vigorously enforcing unpopular trade regulations, in what became known as the Gaspee Affair. The affair was investigated for possible treason, but no action was taken.

In 1772, it became known that the Crown intended to pay fixed salaries to the governors and judges in Massachusetts, which had previously been paid by local authorities. This would reduce the influence of colonial representatives over their government. Samuel Adams in Boston set about creating new Committees of Correspondence, which linked Patriots in all 13 colonies and eventually provided the framework for a rebel government. Virginia, the largest colony, set up its Committee of Correspondence in early 1773, on which Patrick Henry and Thomas Jefferson served. A total of about 7,000 to 8,000 Patriots served on Committees of Correspondence at the colonial and local levels, comprising most of the leadership in their communities. Loyalists were excluded. The committees became the leaders of the American resistance to British actions, and largely determined the war effort at the state and local level. When the First Continental Congress decided to boycott British products, the colonial and local committees took charge, examining merchant records and publishing the names of merchants who attempted to defy the boycott by importing British goods.

In 1773, private letters were published in which Massachusetts Governor Thomas Hutchinson claimed that the colonists could not enjoy all English liberties, and Lieutenant Governor Andrew Oliver called for the direct payment of colonial officials. The letters' contents were used as evidence of a systematic plot against American rights, and discredited Hutchinson in the eyes of the people; the Assembly petitioned for his recall. Benjamin Franklin, postmaster general for the colonies, acknowledged that he had leaked the letters, which led to him being berated by British officials and fired from his job.

Meanwhile, Parliament passed the Tea Act to lower the price of taxed tea exported to the colonies in order to help the East India Company undersell smuggled Dutch tea. Special consignees were appointed to sell the tea in order to bypass colonial merchants. The act was opposed by those who resisted the taxes and also by smugglers who stood to lose business. In most instances, the consignees were forced to resign and the tea was turned back, but Massachusetts governor Hutchinson refused to allow Boston merchants to give in to pressure. A town meeting in Boston determined that the tea would not be landed, and ignored a demand from the governor to disperse. On December 16, 1773, a group of men, led by Samuel Adams and dressed to evoke the appearance of American Indians, boarded the ships of the British East India Company and dumped £10,000 worth of tea from their holds (approximately £636,000 in 2008) into Boston Harbor. Decades later, this event became known as the Boston Tea Party, and it remains a significant part of American patriotic lore.
1774–1775: Intolerable Acts and the Quebec Act

The British government responded by passing several acts which came to be known as the Intolerable Acts, which further darkened colonial opinion towards the British. They consisted of four laws enacted by the British Parliament. The first was the Massachusetts Government Act, which altered the Massachusetts charter and restricted town meetings. The second was the Administration of Justice Act, which ordered that British soldiers facing trial be arraigned in Britain, not in the colonies. The third was the Boston Port Act, which closed the port of Boston until the British had been compensated for the tea lost in the Boston Tea Party. The fourth was the Quartering Act of 1774, which allowed royal governors to house British troops in the homes of citizens without requiring the permission of the owner.

In response, Massachusetts patriots issued the Suffolk Resolves and formed an alternative shadow government known as the "Provincial Congress", which began training militia outside British-occupied Boston. In September 1774, the First Continental Congress convened, consisting of representatives from each of the colonies, to serve as a vehicle for deliberation and collective action. During secret debates, conservative Joseph Galloway proposed the creation of a colonial Parliament that would be able to approve or disapprove of acts of the British Parliament, but his idea was not accepted. The Congress instead endorsed the proposal of John Adams that Americans would obey Parliament voluntarily but would resist all taxes in disguise. Congress called for a boycott of all British goods beginning on December 1, 1774; it was enforced by new committees authorized by the Congress.
Fossil fuel power station

A fossil fuel power station is a thermal power station which burns a fossil fuel, such as coal or natural gas, to produce electricity. Fossil fuel power stations have machinery to convert the heat energy of combustion into mechanical energy, which then operates an electrical generator. The prime mover may be a steam turbine, a gas turbine or, in small plants, a reciprocating gas engine. All plants use the energy extracted from expanding gas, either steam or combustion gases. Although different energy conversion methods exist, all thermal power station conversion methods are limited in efficiency by the Carnot efficiency and therefore produce waste heat.

Fossil fuel power stations provide most of the electrical energy used in the world. Some fossil-fired power stations are designed for continuous operation as baseload power plants, while others are used as peaker plants. However, starting from the 2010s, in many countries plants designed for baseload supply are being operated as dispatchable generation to balance increasing generation by variable renewable energy.

By-products of fossil fuel power plant operation must be considered in their design and operation. Flue gas from combustion of the fossil fuels contains carbon dioxide and water vapor, as well as pollutants such as nitrogen oxides (NOx), sulfur oxides (SOx), and, for coal-fired plants, mercury, traces of other metals, and fly ash. Usually all of the carbon dioxide and some of the other pollution is discharged to the air. Solid waste ash from coal-fired boilers must also be removed.

Fossil fueled power stations are major emitters of carbon dioxide (CO2), a greenhouse gas which is a major contributor to global warming. One recent study found that the greenhouse gas emissions liability related to natural disasters in the United States alone, attributed to a single coal-fired power plant, could significantly reduce the net income available to shareholders of large companies. However, as of 2015, no such cases have awarded damages in the United States. Per unit of electric energy, brown coal emits nearly twice as much CO2 as natural gas, and black coal emits somewhat less than brown. As of 2019, carbon capture and storage of emissions is not economically viable for fossil fuel power stations. As of 2019, keeping global warming below 1.5 °C is still possible, but only if no more fossil fuel power plants are built and some existing fossil fuel power plants are shut down early, together with other measures such as reforestation.

Basic concepts: heat into mechanical energy

In a fossil fuel power plant the chemical energy stored in fossil fuels such as coal, fuel oil, natural gas or oil shale, together with oxygen from the air, is converted successively into thermal energy, mechanical energy and, finally, electrical energy. Each fossil fuel power plant is a complex, custom-designed system. Multiple generating units may be built at a single site for more efficient use of land, natural resources and labor. Most thermal power stations in the world use fossil fuel, outnumbering nuclear, geothermal, biomass, or concentrated solar power plants. The second law of thermodynamics states that any closed-loop cycle can only convert a fraction of the heat produced during combustion into mechanical work.
The rest of the heat, called waste heat, must be released into a cooler environment during the return portion of the cycle. The fraction of heat released into a cooler medium must be equal to or larger than the ratio of the absolute temperatures of the cooling system (environment) and the heat source (combustion furnace). Raising the furnace temperature improves the efficiency but complicates the design, primarily through the selection of alloys used for construction, making the furnace more expensive. The waste heat cannot be converted into mechanical energy without an even cooler cooling system. However, it may be used in cogeneration plants to heat buildings, produce hot water, or to heat materials on an industrial scale, such as in some oil refineries and chemical synthesis plants.

Typical thermal efficiency for utility-scale electrical generators is around 37% for coal and oil-fired plants, and 56–60% (LHV) for combined-cycle gas-fired plants. Plants designed to achieve peak efficiency while operating at capacity will be less efficient when operating off-design (i.e., at temperatures that are too low). Practical fossil fuel stations operating as heat engines cannot exceed the Carnot cycle limit for conversion of heat energy into useful work. Fuel cells do not have the same thermodynamic limits as they are not heat engines. The efficiency of a fossil fuel plant may be expressed as its heat rate, expressed in BTU/kilowatthour or megajoules/kilowatthour (a short numerical sketch of these relationships follows at the end of this passage).

In a steam turbine power plant, fuel is burned in a furnace and the hot gases flow through a boiler. Water is converted to steam in the boiler; additional heating stages may be included to superheat the steam. The hot steam is sent through controlling valves to a turbine. As the steam expands and cools, its energy is transferred to the turbine blades which turn a generator. The spent steam has very low pressure and energy content; this water vapor is fed through a condenser, which removes heat from the steam. The condensed water is then pumped into the boiler to repeat the cycle. Emissions from the boiler include carbon dioxide, oxides of sulfur, and, in the case of coal, fly ash from non-combustible substances in the fuel. Waste heat from the condenser is transferred either to the air, or sometimes to a cooling pond, lake or river.

Gas turbine and combined gas/steam

One type of fossil fuel power plant uses a gas turbine in conjunction with a heat recovery steam generator (HRSG). It is referred to as a combined cycle power plant because it combines the Brayton cycle of the gas turbine with the Rankine cycle of the HRSG. The turbines are fueled either with natural gas or fuel oil.

Diesel engine generator sets are often used for prime power in communities not connected to a widespread power grid. Emergency (standby) power systems may use reciprocating internal combustion engines operated by fuel oil or natural gas. Standby generators may serve as emergency power for a factory or data center, or may also be operated in parallel with the local utility system to reduce peak power demand charges from the utility. Diesel engines can produce strong torque at relatively low rotational speeds, which is generally desirable when driving an alternator, but diesel fuel in long-term storage can be subject to problems resulting from water accumulation and chemical decomposition. Rarely used generator sets may correspondingly be fueled by natural gas or LPG to minimize fuel system maintenance requirements.
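As a hedged numerical illustration of the efficiency and heat-rate figures quoted above, the short Python sketch below converts a thermal efficiency into a heat rate (using the fixed conversion 1 kWh = 3,412 BTU = 3.6 MJ) and computes the Carnot ceiling for a pair of assumed source and sink temperatures. The specific efficiencies and temperatures used are illustrative assumptions, not data from this article.

```python
# Illustrative sketch only: the efficiencies and temperatures below are assumed
# example values, not figures taken from this article.

KWH_IN_BTU = 3412.14  # 1 kWh expressed in BTU
KWH_IN_MJ = 3.6       # 1 kWh expressed in megajoules

def heat_rate(thermal_efficiency: float) -> tuple[float, float]:
    """Return the heat rate (BTU/kWh, MJ/kWh) for a given thermal efficiency."""
    return KWH_IN_BTU / thermal_efficiency, KWH_IN_MJ / thermal_efficiency

def carnot_limit(t_hot_kelvin: float, t_cold_kelvin: float) -> float:
    """Maximum fraction of heat convertible to work between two temperatures."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

if __name__ == "__main__":
    for label, eta in [("coal/oil steam plant", 0.37), ("combined-cycle gas plant", 0.58)]:
        btu, mj = heat_rate(eta)
        print(f"{label}: {btu:,.0f} BTU/kWh  ({mj:.1f} MJ/kWh)")

    # Assumed steam temperature of roughly 600 degC (873 K) against a 300 K environment.
    print(f"Carnot ceiling for 873 K / 300 K: {carnot_limit(873, 300):.0%}")
```

Under these assumed numbers, a 37%-efficient coal unit corresponds to roughly 9,200 BTU/kWh, well below the roughly 66% Carnot ceiling implied by the assumed temperatures.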
Spark-ignition internal combustion engines operating on gasoline (petrol), propane, or LPG are commonly used as portable temporary power sources for construction work, emergency power, or recreational uses. Reciprocating external combustion engines such as the Stirling engine can be run on a variety of fossil fuels, as well as renewable fuels or industrial waste heat. Installations of Stirling engines for power production are relatively uncommon. Historically, the first central stations used reciprocating steam engines to drive generators. As the size of the electrical load to be served grew, reciprocating units became too large and cumbersome to install economically. The steam turbine rapidly displaced all reciprocating engines in central station service.

Coal is the most abundant fossil fuel on the planet, is widely used as the source of energy in thermal power stations, and is a relatively cheap fuel. Coal is an impure fuel and produces more greenhouse gas and pollution than an equivalent amount of petroleum or natural gas. For instance, the operation of a 1000-MWe coal-fired power plant results in a radiation dose of 490 person-rem/year, compared to 136 person-rem/year for an equivalent nuclear power plant, including uranium mining, reactor operation and waste disposal. Coal is delivered by highway truck, rail, barge, collier ship or coal slurry pipeline. Generating stations adjacent to a mine may receive coal by conveyor belt or massive diesel-electric-drive trucks. Coal is usually prepared for use by crushing the rough coal to pieces less than 2 inches (5 cm) in size.

Gas is a very common fuel and has mostly replaced coal in countries where gas was found in the late 20th century or early 21st century, such as the US and UK. Sometimes coal-fired steam plants are refitted to use natural gas to reduce net carbon dioxide emissions. Oil-fuelled plants may be converted to natural gas to lower operating cost.

Heavy fuel oil was once a significant source of energy for electric power generation. After the oil price increases of the 1970s, oil was displaced by coal and later natural gas. Distillate oil is still important as the fuel source for diesel engine power plants, used especially in isolated communities not interconnected to a grid. Liquid fuels may also be used by gas turbine power plants, especially for peaking or emergency service. Of the three fossil fuel sources, oil has the advantages of easier transportation and handling than solid coal, and easier on-site storage than natural gas.

Combined heat and power

Combined heat and power (CHP), also known as cogeneration, is the use of a thermal power station to provide both electric power and heat (the latter being used, for example, for district heating purposes). This technology is practiced not only for domestic heating (low temperature) but also for industrial process heat, which is often high-temperature heat. Calculations show that combined heat and power district heating (CHPDH) is the cheapest method of reducing (but not eliminating) carbon emissions, if conventional fossil fuels continue to be burned.

Thermal power plants are one of the main artificial sources of toxic gases and particulate matter. Fossil fuel power plants cause the emission of pollutants such as NOx, SOx, CO2, CO, PM, organic gases and polycyclic aromatic hydrocarbons. World organizations and international agencies, like the IEA, are concerned about the environmental impact of burning fossil fuels, and coal in particular.
The combustion of coal contributes the most to acid rain and air pollution, and has been connected with global warming. Due to the chemical composition of coal there are difficulties in removing impurities from the solid fuel prior to its combustion. Modern-day coal power plants pollute less than older designs due to new "scrubber" technologies that filter the exhaust air in smoke stacks. However, emission levels of various pollutants are still on average several times greater than those of natural gas power plants, and the scrubbers transfer the captured pollutants to wastewater, which still requires treatment in order to avoid pollution of receiving water bodies. In these modern designs, pollution from coal-fired power plants comes from the emission of gases such as carbon dioxide, nitrogen oxides, and sulfur dioxide into the air, as well as a significant volume of wastewater which may contain lead, mercury, cadmium and chromium, as well as arsenic, selenium and nitrogen compounds (nitrates and nitrites).

Acid rain is caused by the emission of nitrogen oxides and sulfur dioxide. These gases may be only mildly acidic themselves, yet when they react with the atmosphere, they create acidic compounds such as sulfurous acid, nitric acid and sulfuric acid which fall as rain, hence the term acid rain. In Europe and the US, stricter emission laws and the decline in heavy industries have reduced the environmental hazards associated with this problem, leading to lower emissions after their peak in the 1960s.

Pollutant                            | Hard coal | Brown coal | Fuel oil | Other oil | Gas
Non-methane organic compounds (g/GJ) | 4.92      | 7.78       | 3.70     | 3.24      | 1.58
Particulate matter (g/GJ)            | 1,203     | 3,254      | 16       | 1.91      | 0.1
Flue gas volume, total (m³/GJ)       | 360       | 444        | 279      | 276       | 272

Electricity generation using carbon-based fuels is responsible for a large fraction of carbon dioxide (CO2) emissions worldwide and for 34% of U.S. man-made carbon dioxide emissions in 2010. In the U.S., 70% of electricity is generated by combustion of fossil fuels. Coal contains more carbon than oil or natural gas, resulting in greater volumes of carbon dioxide emissions per unit of electricity generated. In 2010, coal contributed about 81% of CO2 emissions from generation and about 45% of the electricity generated in the United States. In 2000, the carbon intensity (CO2 emissions) of U.S. coal thermal combustion was 2,249 lb/MWh (1,029 kg/MWh), while the carbon intensity of U.S. oil thermal generation was 1,672 lb/MWh (758 kg/MWh or 211 kg/GJ) and the carbon intensity of U.S. natural gas thermal production was 1,135 lb/MWh (515 kg/MWh or 143 kg/GJ).

The Intergovernmental Panel on Climate Change (IPCC) reports that increased quantities of the greenhouse gas carbon dioxide within the atmosphere will "very likely" lead to higher average temperatures on a global scale (global warming). Concerns regarding the potential for such warming to change the global climate prompted IPCC recommendations calling for large cuts to CO2 emissions worldwide. Emissions can be reduced with higher combustion temperatures, yielding more efficient production of electricity within the cycle. As of 2019, the price of emitting CO2 to the atmosphere is much lower than the cost of adding carbon capture and storage (CCS) to fossil fuel power stations, so owners have not done so.
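As a rough, hedged illustration of how the carbon-intensity figures above translate into annual emissions, the Python sketch below multiplies each fuel's intensity (kg CO2 per MWh of electricity, as quoted for U.S. generation in 2000) by an assumed amount of generation. The generation figures are invented for the example and are not statistics from this article.

```python
# Carbon intensities (kg CO2 per MWh of electricity) as quoted above for U.S.
# generation in 2000; the annual generation mix below is an assumed example.
CARBON_INTENSITY_KG_PER_MWH = {
    "coal": 1029,
    "oil": 758,
    "natural_gas": 515,
}

# Hypothetical annual generation mix in MWh (illustrative only).
generation_mwh = {
    "coal": 2_000_000,
    "oil": 50_000,
    "natural_gas": 1_500_000,
}

total_tonnes = 0.0
for fuel, mwh in generation_mwh.items():
    tonnes = CARBON_INTENSITY_KG_PER_MWH[fuel] * mwh / 1000  # kg -> tonnes
    total_tonnes += tonnes
    print(f"{fuel:>12}: {tonnes:,.0f} t CO2")

print(f"{'total':>12}: {total_tonnes:,.0f} t CO2")
```

With these assumed generation figures, coal dominates the total even though it supplies only slightly more energy than gas, reflecting its roughly double carbon intensity.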
Estimation of carbon dioxide emissions

The CO2 emissions from a fossil fuel power station can be estimated with the following formula:

annual CO2 emissions = capacity × capacity factor × heat rate × emission intensity × one year

where "capacity" is the "nameplate capacity" or the maximum allowed output of the plant, "capacity factor" or "load factor" is a measure of the amount of power that a plant produces compared with the amount it would produce if operated at its rated capacity nonstop, heat rate is thermal energy in divided by electrical energy out, and emission intensity (also called emission factor) is the CO2 emitted per unit of heat generated for a particular fuel. As an example, a new 1500 MW supercritical lignite-fueled power station running on average at half its capacity might have annual CO2 emissions estimated as:

1500 MW × 0.5 × (100/40) × 101,000 kg/TJ × 1 year
= 1500 MJ/s × 0.5 × 2.5 × 0.101 kg/MJ × (365 × 24 × 60 × 60) s
= 1.5×10^3 × 5×10^−1 × 2.5 × 1.01×10^−1 × 3.1536×10^7 kg
= 59.7×10^8 kg ≈ 5.97×10^9 kg ≈ 5.97 Mt

Thus the example power station is estimated to emit about 6 megatonnes of carbon dioxide each year (a short computational sketch of this estimate follows at the end of this passage). The results of similar estimations are mapped by organisations such as Global Energy Monitor, Carbon Tracker and ElectricityMap. Alternatively, it may be possible to measure CO2 emissions (perhaps indirectly via another gas) from satellite observations.

Another problem related to coal combustion is the emission of particulates that have a serious impact on public health. Power plants remove particulate matter from the flue gas with the use of a baghouse or electrostatic precipitator. Several newer plants that burn coal use a different process, integrated gasification combined cycle (IGCC), in which synthesis gas is made from a reaction between coal and water. The synthesis gas is processed to remove most pollutants and then used initially to power gas turbines. The hot exhaust gases from the gas turbines are then used to generate steam to power a steam turbine. The pollution levels of such plants are drastically lower than those of "classic" coal power plants.

Particulate matter from coal-fired plants can be harmful and have negative health impacts. Studies have shown that exposure to particulate matter is related to an increase in respiratory and cardiac mortality. Particulate matter can irritate small airways in the lungs, which can lead to increased problems with asthma, chronic bronchitis, airway obstruction, and impaired gas exchange. There are different types of particulate matter, depending on the chemical composition and size. The dominant form of particulate matter from coal-fired plants is coal fly ash, but secondary sulfate and nitrate also comprise a major portion of the particulate matter from coal-fired plants. Coal fly ash is what remains after the coal has been combusted, so it consists of the incombustible materials that are found in the coal. The size and chemical composition of these particles affect their impacts on human health. Currently, coarse particles (diameter greater than 2.5 μm) and fine particles (diameter between 0.1 μm and 2.5 μm) are regulated, but ultrafine particles (diameter less than 0.1 μm) are currently unregulated, yet they pose many dangers. Much is still unknown as to which kinds of particulate matter pose the most harm, which makes it difficult to come up with adequate legislation for regulating particulate matter. There are several methods of helping to reduce particulate matter emissions from coal-fired plants. Roughly 80% of the ash falls into an ash hopper, but the rest of the ash is carried into the atmosphere to become coal fly ash.
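Returning to the emissions-estimation formula above, the sketch below is a minimal Python rendering of it that reproduces the lignite example (1500 MW, 50% capacity factor, 40% efficiency, 101,000 kg CO2 per TJ of fuel heat). The function name and structure are illustrative, not taken from any particular tool.

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def annual_co2_tonnes(capacity_mw: float,
                      capacity_factor: float,
                      efficiency: float,
                      emission_intensity_kg_per_tj: float) -> float:
    """Estimate annual CO2 emissions (tonnes) from nameplate capacity,
    capacity factor, plant efficiency, and fuel emission intensity."""
    heat_rate = 1.0 / efficiency  # thermal energy in / electrical energy out
    # MW = MJ/s, so fuel heat over a year in TJ = MJ/s * s / 1e6
    fuel_heat_tj = capacity_mw * capacity_factor * heat_rate * SECONDS_PER_YEAR / 1e6
    return fuel_heat_tj * emission_intensity_kg_per_tj / 1000.0  # kg -> tonnes

# Worked example from the text: 1500 MW lignite plant, 50% capacity factor,
# 40% efficiency, 101,000 kg CO2/TJ -> about 6 million tonnes per year.
print(f"{annual_co2_tonnes(1500, 0.5, 0.40, 101_000):,.0f} t CO2/year")
```

With these inputs the function returns roughly 5.97 million tonnes, matching the hand calculation above.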
Methods of reducing these emissions of particulate matter include: the baghouse, which has a fine filter that collects the ash particles; electrostatic precipitators, which use an electric field to trap ash particles on high-voltage plates; and cyclone collectors, which use centrifugal force to drive particles to the walls. A recent study indicates that sulfur emissions from fossil fueled power stations in China may have caused a 10-year lull in global warming (1998–2008).

Fossil-fuel power stations, particularly coal-fired plants, are a major source of industrial wastewater. Wastewater streams include flue-gas desulfurization, fly ash, bottom ash and flue gas mercury control. Plants with air pollution controls such as wet scrubbers typically transfer the captured pollutants to the wastewater stream. Ash ponds, a type of surface impoundment, are a widely used treatment technology at coal-fired plants. These ponds use gravity to settle out large particulates (measured as total suspended solids) from power plant wastewater. This technology does not treat dissolved pollutants. Power stations use additional technologies to control pollutants, depending on the particular wastestream in the plant. These include dry ash handling, closed-loop ash recycling, chemical precipitation, biological treatment (such as an activated sludge process), membrane systems, and evaporation-crystallization systems. In 2015, the EPA published a regulation pursuant to the Clean Water Act that requires US power plants to use one or more of these technologies. Technological advancements in ion exchange membranes and electrodialysis systems have enabled high-efficiency treatment of flue-gas desulfurization wastewater to meet the updated EPA discharge limits.

Radioactive trace elements

Coal is a sedimentary rock formed primarily from accumulated plant matter, and it includes many inorganic minerals and elements which were deposited along with organic material during its formation. Like the rest of the Earth's crust, coal contains low levels of uranium, thorium, and other naturally occurring radioactive isotopes whose release into the environment leads to radioactive contamination. While these substances are present as very small trace impurities, enough coal is burned that significant amounts of these substances are released. A 1,000 MW coal-burning power plant could have an uncontrolled release of as much as 5.2 metric tons per year of uranium (containing 74 pounds (34 kg) of uranium-235) and 12.8 metric tons per year of thorium. In comparison, a 1,000 MW nuclear plant will generate about 30 metric tons of high-level radioactive solid packed waste per year. It is estimated that during 1982, US coal burning released 155 times as much uncontrolled radioactivity into the atmosphere as the Three Mile Island incident. The collective radioactivity resulting from all coal burning worldwide between 1937 and 2040 is estimated to be 2,700,000 curies or 0.101 EBq. During normal operation, the effective dose equivalent from coal plants is 100 times that from nuclear plants. Normal operation, however, is a deceiving baseline for comparison: the Chernobyl nuclear disaster alone released, in iodine-131 alone, an estimated 1.76 EBq of radioactivity, a value one order of magnitude above the estimate for total emissions from all coal burned within a century, although iodine-131, the major radioactive substance released in accident situations, has a half-life of just 8 days.
Water and air contamination by coal ash

A study released in August 2010 that examined state pollution data in the United States, conducted by the organizations Environmental Integrity Project, the Sierra Club and Earthjustice, found that coal ash produced by coal-fired power plants and dumped at sites across 21 U.S. states has contaminated ground water with toxic elements. The contaminants include the poisons arsenic and lead. The study concluded that the problem of coal ash-caused water contamination is even more extensive in the United States than had been estimated. The study brought to 137 the number of ground water sites across the United States that are contaminated by power plant-produced coal ash. Arsenic has been shown to cause skin cancer, bladder cancer and lung cancer, and lead damages the nervous system. Coal ash contaminants are also linked to respiratory diseases and other health and developmental problems, and have disrupted local aquatic life. Coal ash also releases a variety of toxic contaminants into nearby air, posing a health threat to those who breathe in fugitive coal dust.

U.S. government scientists tested fish in 291 streams around the country for mercury contamination. They found mercury in every fish tested, according to the study by the U.S. Department of the Interior, even in fish from isolated rural waterways. Twenty-five percent of the fish tested had mercury levels above the safety levels determined by the U.S. Environmental Protection Agency (EPA) for people who eat the fish regularly. The largest source of mercury contamination in the United States is coal-fueled power plant emissions.

Conversion of fossil fuel power plants

Several methods exist to reduce pollution and reduce or eliminate the carbon emissions of fossil fuel power plants. A frequently used and cost-efficient method is to convert a plant to run on a different fuel. This includes conversions of coal power plants to energy crops/biomass or waste, and conversions of natural gas power plants to biogas or hydrogen. Conversions of coal-fired power plants to waste-fired power plants have the extra benefit that they can reduce landfilling. In addition, waste-fired power plants can be equipped with material recovery, which is also beneficial to the environment. In some instances, when energy crops or biomass will be the fuel of the converted plant, torrefaction of the biomass may benefit the power plant. Also, when using energy crops as the fuel, and if implementing biochar production, the thermal power plant can even become carbon negative rather than just carbon neutral. Improving the energy efficiency of a coal-fired power plant can also reduce emissions. Besides simply converting to run on a different fuel, some companies also offer the possibility of converting existing fossil-fuel power stations into grid energy storage systems which use electric thermal energy storage (ETES).

Coal pollution mitigation

Coal pollution mitigation is a process whereby coal is chemically washed of minerals and impurities, sometimes gasified, burned and the resulting flue gases treated with steam, with the purpose of removing sulfur dioxide, and reburned so as to make the carbon dioxide in the flue gas economically recoverable and storable underground (the latter of which is called "carbon capture and storage").
The coal industry uses the term "clean coal" to describe technologies designed to enhance both the efficiency and the environmental acceptability of coal extraction, preparation and use, but it has provided no specific quantitative limits on any emissions, particularly carbon dioxide. Whereas contaminants like sulfur or mercury can be removed from coal, carbon cannot be effectively removed while still leaving a usable fuel, and clean coal plants without carbon sequestration and storage do not significantly reduce carbon dioxide emissions. James Hansen, in an open letter to then U.S. President Barack Obama, advocated a "moratorium and phase-out of coal plants that do not capture and store CO2". Similarly, in his book Storms of My Grandchildren, Hansen discusses his Declaration of Stewardship, the first principle of which requires "a moratorium on coal-fired power plants that do not capture and sequester carbon dioxide".

Running the power station on hydrogen converted from natural gas

Gas-fired power plants can also be modified to run on hydrogen. Hydrogen can at first be created from natural gas through steam reforming, as a step towards a hydrogen economy, thus eventually reducing carbon emissions. Since 2013, the conversion process has been improved by scientists at the Karlsruhe Liquid-metal Laboratory (KALLA), using a process called methane pyrolysis. They succeeded in allowing the soot to be easily removed (soot is a byproduct of the process and in the past damaged the working parts, most notably the nickel-iron-cobalt catalyst). The soot (which contains the carbon) can then be stored underground and is not released into the atmosphere.

Phase out of fossil fuel power plants

As of 2019, there is still a chance of keeping global warming below 1.5 °C if no more fossil fuel power plants are built and some existing fossil fuel power plants are shut down early, together with other measures such as reforestation. Alternatives to fossil fuel power plants include nuclear power, solar power, geothermal power, wind power, hydropower, biomass power plants and other renewable energies (see non-carbon economy). Most of these are proven technologies on an industrial scale, but others are still in prototype form. Some countries only include the cost to produce the electrical energy, and do not take into account the social cost of carbon or the indirect costs associated with the many pollutants created by burning coal (e.g. increased hospital admissions due to respiratory diseases caused by fine smoke particles).

Relative cost by generation source

When comparing power plant costs, it is customary to start by calculating the cost of power at the generator terminals by considering several main factors. External costs, such as connection costs and the effect of each plant on the distribution grid, are considered separately as an additional cost to the calculated power cost at the terminals. Initial factors considered are (a simple numerical sketch combining these factors follows the list):
- Capital costs, including waste disposal and decommissioning costs for nuclear energy.
- Operating and maintenance costs.
- Fuel costs for fossil fuel and biomass sources, which may be negative for wastes.
- Likely annual hours run per year, or load factor, which may be as low as 30% for wind energy, or as high as 90% for nuclear energy.
- Offset sales of heat, for example in combined heat and power district heating (CHP/DH).
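As a hedged sketch of how these factors can be combined into a single cost-per-MWh figure, the Python snippet below computes a simplified levelized cost of electricity (LCOE) using a capital recovery factor. All input numbers (capital cost, discount rate, fuel price, efficiency, load factor, heat credit) are assumptions chosen for illustration, not figures from this article, and real cost comparisons involve many more terms.

```python
def capital_recovery_factor(discount_rate: float, lifetime_years: int) -> float:
    """Annualize an up-front capital cost over the plant lifetime."""
    r = discount_rate
    return r * (1 + r) ** lifetime_years / ((1 + r) ** lifetime_years - 1)

def simple_lcoe(capital_cost_per_kw: float,
                fixed_om_per_kw_yr: float,
                fuel_cost_per_mwh_thermal: float,
                efficiency: float,
                load_factor: float,
                discount_rate: float = 0.07,
                lifetime_years: int = 30,
                heat_credit_per_mwh: float = 0.0) -> float:
    """Very simplified levelized cost of electricity, in cost units per MWh."""
    mwh_per_kw_year = 8.76 * load_factor  # 8,760 h/yr -> MWh per kW of capacity
    annual_capital = capital_cost_per_kw * capital_recovery_factor(discount_rate, lifetime_years)
    fuel_per_mwh_electric = fuel_cost_per_mwh_thermal / efficiency
    return ((annual_capital + fixed_om_per_kw_yr) / mwh_per_kw_year
            + fuel_per_mwh_electric
            - heat_credit_per_mwh)  # offset sales of heat (CHP/DH)

# Assumed, purely illustrative inputs for a gas combined-cycle plant.
print(f"{simple_lcoe(1000, 30, 25, 0.58, 0.60):.1f} per MWh")
```

With these assumed inputs the fuel term dominates, which is one reason load factor and fuel price assumptions drive such comparisons so strongly.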
Language is a fundamental aspect of human communication, allowing us to convey thoughts, emotions, and ideas to others. But is there a critical period for learning language? Scientists have long been fascinated by this question, seeking to understand the factors that affect our ability to acquire and use language throughout our lives. Research has shown that children have an easier time learning languages than adults, leading some to suggest that there may be a critical period during childhood when language acquisition is most effective. However, recent studies have challenged this assumption, indicating that adults can still learn languages proficiently with the right approach and mindset. In this article, we’ll explore the science behind language acquisition, how age affects language learning, and whether or not there truly is a critical period for language learning. We’ll also share strategies for successful language learning and provide insights into how you can unlock your linguistic potential. If you’re interested in learning more about language acquisition and how to become a better language learner, keep reading!

The Science Behind Language Acquisition

Language acquisition is a complex process that involves multiple cognitive and social factors. From a young age, humans are wired to learn language through exposure and interaction with others. But what exactly happens in the brain when we learn a new language? Let’s explore the science behind language acquisition to gain a better understanding. One key factor in language acquisition is the critical period hypothesis, which suggests that there is a limited time frame during childhood when language learning is most effective. However, recent studies have challenged this hypothesis, indicating that adults can still learn languages proficiently with the right approach and mindset. Despite this, there are certain aspects of language acquisition that are more difficult for adults, such as developing native-like pronunciation.

Factors That Affect Language Acquisition

Several factors can impact language acquisition, including exposure to the language, motivation, cognitive abilities, and social factors. Here are a few examples:
- Exposure: The amount and quality of exposure to a language can affect how quickly and effectively someone learns it. Immersion in a language through activities such as watching TV shows, reading books, or speaking with native speakers can be beneficial.
- Motivation: Someone who is motivated to learn a language, such as for work or travel, may be more successful in language acquisition. Motivation can be enhanced by setting achievable goals and finding a supportive community.
- Cognitive abilities: Certain cognitive abilities, such as memory and attention, can impact language acquisition. For example, someone with a strong working memory may be able to better retain vocabulary words.

Strategies for Successful Language Learning

While language learning can be challenging, there are several strategies that can help make the process more effective. Here are a few tips:
- Focus on practical vocabulary: Learning words and phrases that are relevant to your daily life can help you see progress more quickly and feel more motivated to continue.
- Practice regularly: Consistent practice, even for just a few minutes a day, can be more effective than sporadic, intensive study sessions.
- Immerse yourself: Surrounding yourself with the language, such as through watching TV shows or listening to music, can help you become more comfortable with it.

Overall, the science behind language acquisition is complex and fascinating. By understanding the factors that affect language learning and utilizing effective strategies, anyone can become a successful language learner.

How Age Affects Language Learning

As we grow older, our ability to learn and acquire new skills changes. This is true for language learning as well. Young children have a remarkable ability to learn new languages quickly and easily, while adults often struggle with the process. So how does age affect our ability to learn a new language? Research has shown that there is a critical period for language acquisition, which typically ends around puberty. During this time, the brain is particularly sensitive to language input and is able to process and learn new linguistic structures more easily. As we get older, our brains become less flexible and it becomes more difficult to learn a new language.

Factors that Affect Language Learning in Children
- Exposure to Language: Children who are exposed to multiple languages at a young age are more likely to become proficient in those languages.
- Motivation: Children who are motivated to learn a new language, either through interest or necessity, tend to be more successful in their language acquisition.
- Learning Environment: Children who learn in a supportive and interactive environment, such as through play or with a language tutor, tend to acquire language more easily.

Factors that Affect Language Learning in Adults
- Motivation: Adults who are motivated to learn a new language, either through personal interest or professional necessity, tend to be more successful in their language acquisition.
- Learning Environment: Adults who learn in a supportive and interactive environment, such as through language immersion programs or with a language tutor, tend to acquire language more easily.
- Prior Language Knowledge: Adults who have prior knowledge of the language they are learning, or a similar language, tend to acquire language more easily than those starting from scratch.

While age does play a role in language acquisition, it is not the only factor. Motivation, learning environment, and exposure to language all play a significant role in our ability to learn a new language. Whether you are a young child or an adult, it is never too late to start learning a new language. By understanding the factors that affect language learning and taking steps to create a supportive learning environment, anyone can achieve proficiency in a new language.

Can You Learn a Language Past the Critical Period?

For years, it was believed that the critical period for language acquisition ended around puberty, making it difficult for adults to learn a new language. However, recent research has challenged this notion, suggesting that it is possible to learn a language past the critical period. One theory is that although the brain becomes less plastic with age, it is still capable of adapting and learning new skills, including language acquisition. Another theory is that individual motivation and learning strategies play a significant role in language learning, regardless of age.

The Role of Motivation

Motivation is one of the key factors in language learning success, regardless of age.
While young children may learn a language more easily due to their brain plasticity, they may lack the motivation and discipline required to learn a language independently. Adults, on the other hand, may have stronger motivation to learn a new language and may have developed effective learning strategies. The Importance of Practice Regardless of age, practice is essential to language learning success. Language learning requires consistent practice, including regular exposure to the language, engaging in conversation, and completing exercises and assignments. However, adults may have more time and resources to dedicate to language learning, allowing them to practice more consistently than children who may have other obligations, such as school. The Role of Prior Knowledge Prior knowledge of related languages can also play a significant role in language learning success. For example, if someone already speaks Spanish, they may find it easier to learn Italian due to similarities in grammar and vocabulary. Adults may have more prior knowledge of languages, making it easier for them to learn a new language, even past the critical period. In conclusion, while the critical period for language acquisition does exist, recent research has suggested that it is possible to learn a language past this period. Motivation, practice, and prior knowledge are all factors that can contribute to language learning success, regardless of age. Strategies for Successful Language Learning Learning a new language can be a challenging but rewarding experience. Here are some strategies that can help you achieve success in language learning: Immerse Yourself in the Language: Surround yourself with the language as much as possible. This could include watching movies or TV shows in the language, listening to music, or even finding a language exchange partner to practice with. Practice Consistently: Consistency is key in language learning. Try to practice a little bit every day instead of cramming for long periods of time. This will help you retain the information better and build good habits. - Use Flashcards: Flashcards can be a great way to learn new vocabulary. Write the word on one side and the definition on the other, and quiz yourself regularly. - Read in the Language: Reading is a great way to expand your vocabulary. Start with children’s books or other easy reading material and work your way up. Speaking and Listening: - Find a Conversation Partner: Speaking with a native speaker can help you improve your speaking and listening skills. Consider finding a language exchange partner or taking a conversation class. - Listen to Podcasts: Podcasts in the language can help you improve your listening skills. Look for ones with transcripts available so you can follow along. Grammar and Writing: - Use Online Resources: There are many online resources available for learning grammar and writing in a new language. Look for reputable sites and resources. - Practice Writing: Writing in the language can help you solidify your understanding of grammar and improve your writing skills. Start with short sentences and work your way up to longer pieces. Remember, language learning takes time and effort, but with the right strategies and consistent practice, you can achieve success and become fluent in your new language. Unlocking Your Linguistic Potential Learning a new language can be an enriching experience, but it can also be a daunting task. 
With the right approach, however, anyone can unlock their linguistic potential and become proficient in a new language. One of the key factors in successful language learning is motivation. Without a genuine desire to learn, it can be challenging to stay committed and make progress. Another crucial element is finding the right learning method that works for you. Fortunately, there are many effective strategies and tools available to help you achieve your language learning goals. Here are some tips to get you started: Immerse Yourself in the Language One of the most effective ways to learn a new language is to immerse yourself in it. This means exposing yourself to the language as much as possible, whether through reading, listening to music, or speaking with native speakers. Another way to immerse yourself in the language is to travel to a country where the language is spoken. Being surrounded by the language and culture on a daily basis can significantly accelerate the learning process. Consistency is key when it comes to language learning. It’s better to practice a little bit every day than to cram for hours once a week. Make language learning a part of your daily routine, whether it’s practicing vocabulary during your morning commute or listening to podcasts in the language during your workout. Another important aspect of practice is to actively use the language. Whether through conversation with native speakers, writing in a journal, or participating in a language exchange program, actively using the language helps reinforce what you’ve learned and build your confidence. Use Technology to Your Advantage Technology has made language learning more accessible than ever before. There are countless language learning apps, online courses, and interactive tools available to help you learn a new language. Additionally, social media platforms and language exchange websites can provide opportunities to connect with native speakers and practice your skills in a supportive environment. - Find language learning apps that cater to your learning style. - Take advantage of online courses and interactive tools. - Connect with native speakers through social media and language exchange websites. Frequently Asked Questions Is there a critical period for learning language? Yes, there is a critical period for learning language. According to research, the brain is more receptive to language acquisition during the first few years of life. After this period, it becomes more difficult to learn a language with the same ease as a young child. This is because the brain’s ability to create new neural connections decreases as we age, and existing connections become stronger. Can adults learn a new language? Yes, adults can learn a new language. Although it may be more challenging than for children, adults have certain advantages such as cognitive maturity and prior language experience. The key to successful language learning as an adult is finding the right method that works for you and dedicating enough time and effort. What is the best way to learn a new language? There is no one-size-fits-all answer to this question as different methods work for different people. However, some effective strategies for language learning include immersion, practicing with native speakers, using language learning apps, and setting achievable goals. How long does it take to learn a new language? 
The time it takes to learn a new language varies depending on factors such as the language being learned, the learner’s prior language experience, and the amount of time dedicated to learning. Some estimates suggest it takes around 600-750 hours of study to achieve a basic level of proficiency in a new language. Can learning a new language improve cognitive function? Yes, learning a new language can improve cognitive function. Research suggests that bilingualism can enhance cognitive flexibility, working memory, and attention control. It may also delay the onset of cognitive decline and reduce the risk of dementia in older adults. Is it possible to learn a language without studying grammar? Yes, it is possible to learn a language without studying grammar extensively. Many language learners opt for a more natural approach, where they learn grammar rules intuitively through exposure to the language. This is especially effective for languages with simpler grammar structures such as Spanish or French. However, studying grammar can still be beneficial in building a more solid foundation for language learning.
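To put the 600–750 hour estimate above into perspective, here is a small, purely illustrative Python calculation that converts a total study-hour figure into calendar time at a few daily practice levels; the daily-minute values are arbitrary examples, not recommendations.

```python
# Illustrative only: convert a total study-hour estimate into calendar time.
# 600-750 hours is the rough figure quoted above for basic proficiency.

TOTAL_HOURS = (600, 750)

for minutes_per_day in (30, 60, 120):       # example daily practice levels
    for total in TOTAL_HOURS:
        days = total * 60 / minutes_per_day
        print(f"{total} h at {minutes_per_day} min/day "
              f"≈ {days:.0f} days (≈ {days / 365:.1f} years)")
```

At 30 minutes a day, 600 hours works out to roughly three years of steady practice; at two hours a day, under a year — which is why consistency matters more than intensity.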
The study of the far reaches of the solar system is an important area of scientific activity.

Voyager and Voyager 2

The US research probes Voyager 1 and Voyager 2, launched a week apart in 1977, are today among the most distant human-made objects from Earth. The automatic interplanetary stations are now about 18 billion km from Earth — beyond the heliopause, but still inside the solar system. It is not entirely clear how long it will take the stations to leave our system, which is surrounded by the Oort cloud, a hypothetical giant cluster of comets held by the Sun's gravity. The existence of the cloud has not yet been confirmed in practice, but many mathematical models point to its presence. According to experts, the Voyagers may pass beyond the outer limits of the cloud in about 30,000 years.

The first Voyager mission, launched more than 40 years ago, is one of the fastest artificial objects ever flown. Although a similar probe, New Horizons, was launched much later and left Earth at a higher speed, Voyager performed successful gravitational maneuvers between the planets, which greatly accelerated it. The gravitational maneuvers that Voyager 2 performed at Jupiter, Saturn and Uranus, for example, allowed the station to reach Neptune about 20 years earlier than direct flight at its original speed would have allowed. The approximate speed of the stations is now about 17.5 km/s — roughly 0.006% of the speed of light. During a certain period of the year, the distance between Voyager 1 and the Earth actually decreases: the Earth's orbital speed around the Sun (about 30 km/s) is higher than the speed at which Voyager 1 recedes from it.

Both Voyager missions were originally launched to study the outer solar system — Jupiter, Saturn, Uranus (Voyager 2 is the only probe ever to have reached that planet, and astrophysicists still use its data to study it) and Neptune, as well as the key moons of these planets. After 2025, both probes are expected to lose contact with Earth: they will no longer have enough power to transmit data over such distances. According to calculations, it will take another 40,000 years for Voyager to approach its first star, Ross 248, a single star in the constellation Andromeda located 10.4 light-years from the Sun.

Between 1958 and 2019, humanity launched 224 research missions into space, along with several thousand commercial and applied satellites. The first successfully launched automatic interplanetary station was the Soviet Luna-1, which flew past the Moon because of an error in the calculations.

The automatic station New Horizons is another American mission to explore the far reaches of the solar system. It was launched in 2006, with an initially planned operating lifetime of 15–17 years. At launch it was expected that New Horizons would become the fastest artificial object ever: its speed relative to the Earth at launch was 16.2 km/s, and its heliocentric speed exceeded 45 km/s, which would have allowed the mission to leave the solar system even without a gravitational maneuver near Jupiter. Gradually, however, the speed of New Horizons declined, and today it is about 14.5 km/s. The main goals of New Horizons are to study the formation of the Pluto–Charon system, to study the Kuiper belt, and to investigate the processes that took place in the early stages of the solar system's evolution.
The mission's task was to study the surfaces and atmospheres of the objects of the Pluto system and their immediate environment — to map them, explore their geology and look for atmospheres. After collecting this information about Pluto and Charon, the mission team decided to send New Horizons on to the Kuiper belt — a vast region of small bodies on the outskirts of the explored zone of the solar system. It contains hundreds of thousands of objects more than 100 km in diameter, including Pluto, and is considered the source of short-period comets with orbits of less than 200 years, while long-period comets come from the Oort cloud. There, New Horizons is also studying one of the most distant objects yet visited in the solar system, Ultima Thule, which we described in detail here.

Recently, New Horizons recorded a huge mass of hydrogen at the edge of the solar system, where interstellar hydrogen collides with the solar wind. Scientists analyzed a 360-degree ultraviolet scan of the sky around the probe and found an unusual brightness, which could indicate a build-up of hydrogen. It is believed that in this region the solar wind slows down, so interstellar hydrogen and radiation coming from other stars can influence it there.

In addition to its scientific equipment, New Horizons carries the flag of the United States, a fragment of SpaceShipOne — the first private crewed spacecraft — a CD with photos of the probe and its developers, a US postage stamp, two coins, and a capsule containing some of the ashes of astronomer Clyde Tombaugh, the discoverer of Pluto.

Parker Solar Probe

NASA's Parker Solar Probe spacecraft was launched relatively recently, in the summer of 2018. Its main mission is to study the outer corona of the Sun from a distance of 6.1 million km; at that distance the surrounding temperature exceeds 2 million degrees Celsius, yet the probe, effectively touching the corona, does not melt. It survives because the corona through which the Parker Solar Probe flies, while extremely hot, has a very low density. Thanks to this property, the heat shield covering the Parker Solar Probe only heats up to about 1,644 °C. We have already described the Parker Solar Probe mission and the features of the solar corona in more detail in a separate article.

The Parker Solar Probe holds the record for the closest approach to the Sun among all spacecraft — previously, the closest approaches by space probes were still tens of millions of kilometres away. With the Parker Solar Probe, scientists will try to find out how the solar wind arises and what influence magnetic fields have on it, and to study the plasma particles around the Sun, their impact on the solar wind, and the formation of energetic particles.

So far, humanity knows very little about the solar corona. For decades, the only opportunities to study it were solar eclipses, when the Moon blocks the brightest part of the star and allows observation of the Sun's dim outer atmosphere. Only in recent years has NASA launched a mission dedicated to studying it. It is too early to talk about the mission's results — less than a year has passed since the launch of the Parker Solar Probe, and the first full close approach to the Sun will not take place until 2024.

The landing of the Martian mission InSight was watched live by practically the whole world: on November 26, 2018, NASA and hundreds of media outlets broadcast the event. InSight's primary mission is planned to last 720 days.
During this time, the probe will study the seismic activity of the planet and, most importantly, drive its drill to a depth of up to 5 m below the surface. This may make it possible to detect accumulations of liquid water or ice beneath the surface of Mars. So far, InSight has drilled to a depth of just 50 cm, at which point the drill hit an obstacle and the mission team decided to pause the process. Analysis has shown that the barrier is not a boulder but a layer of duricrust. Engineers believe the drill will be able to get through it; however, because the walls of the hole keep crumbling, the tool inevitably loses the support it needs and recoils. Scientists now plan to press lightly on the drill's support structure with InSight's IDA (Instrument Deployment Arm), thereby compensating for the recoil on impact. The lack of friction between the drill and the surrounding soil is probably due to the hole being filled with loose, detrital material. According to the plan, this operation will be carried out in several stages starting at the end of June 2019.

Whew — winding down after a long day, but I've done it: I've placed [my seismometer] on the surface of Mars! With SEIS, I'll be able to give you a heartbeat of #Mars. https://t.co/GYNO4txPPi pic.twitter.com/18eQHXOfiO — NASA InSight (@NASAInSight) December 20, 2018

In March, InSight's seismometer recorded the first marsquake, with a magnitude of about 2.5. This is not the first attempt to record quakes on the Red Planet: in 1975 the Viking 1 and Viking 2 landers were sent to Mars with similar instruments, but the seismometer on the first lander failed to deploy, and the one on the second lacked sufficient sensitivity because it was mounted on the probe itself rather than placed on the Martian soil.

In early January 2019, the Chinese probe Chang'e 4 made the first landing in history on the far side of the Moon, in the Von Kármán crater, which lies within one of the largest and least explored impact structures on the lunar surface — almost 2,000 km across and up to 10 km deep. Chang'e 4 is not planned to bring anything back to Earth from the far side of the Moon; that would be a very complicated and expensive mission. Instead, it will study the Moon's interior from the far side using a powerful ground-penetrating radar and a mobile laboratory. The lander also delivered to the Moon an aluminum container with cotton and mustard seeds, potatoes and silkworm eggs, and scientists reported that they managed to germinate one of the cotton seeds. However, with the onset of the first lunar night on January 12, a few days after landing, the spacecraft went into sleep mode and the experiment had to be cut short.

The rover will transmit all the information it gathers to a relay satellite when it passes over its location, and from the satellite the signal will travel on to the mission team. For this purpose China uses the satellite Queqiao, located near the Earth–Moon L2 Lagrange point, which also allows signals to be relayed to Earth more quickly. You can read more about how Lagrange points are arranged in a separate study by "High-tech". Besides its scientific tasks, the mission has allowed China to test its capabilities in long-distance space communications.

Chinese engineers now intend to build the Chang'e 5 mission — the first probe in the country's history designed to return from the Moon, which is expected to bring back more than 2 kg of lunar soil. The launch is scheduled for December 2019.
Hyperinflation is an extreme economic condition characterized by rapid and uncontrolled increases in the general price level of goods and services in an economy. It is a situation where the inflation rate reaches extraordinary levels — typically over 50% per month, and in the worst cases that much per day — causing the value of a currency to drop drastically. In such circumstances, money loses its purchasing power rapidly, leading to economic and social instability. Understanding the causes and effects of hyperinflation is crucial for individuals, businesses, and governments to mitigate its impact and prepare for such situations. In this article, we will explore the causes and effects of hyperinflation, provide historical examples, and offer tips on preparing for such economic crises.

Causes of hyperinflation

The following are the causes of hyperinflation:

- Excessive money supply. Hyperinflation can occur when there is an excessive increase in the money supply in an economy. When the central bank or the government prints too much money or injects too much liquidity into the economy, more money is available than the demand for goods and services warrants. Too much money then chases a limited quantity of goods, causing prices to skyrocket.

- Increase in money velocity. Another cause of hyperinflation is an increase in the velocity of money — the rate at which money changes hands in an economy. When people start to spend money faster, and the same money is used for multiple transactions, demand for goods and services rises and prices increase. If this trend continues, it can result in hyperinflation.

- Printing money to finance government spending. Governments may resort to printing money to finance their spending, especially during a financial crisis or war. If the increase in the money supply is not matched by an increase in production, however, it leads to inflation and, eventually, hyperinflation. Moreover, when a government keeps printing money to finance its spending, it erodes the currency's value and leads to a loss of confidence in the economy.

Effects of hyperinflation

The following are the effects of hyperinflation:

- Loss of purchasing power. Hyperinflation results in a loss of purchasing power for consumers and businesses. As prices increase rapidly, the value of money decreases, and people can afford to buy less with the same amount of money. This leads to a decrease in the standard of living and increased poverty.

- Unemployment and reduced economic activity. Hyperinflation can also lead to high levels of unemployment and reduced economic activity. Businesses may struggle to adjust to rapidly changing prices, leading to reduced production, layoffs, and closures. Additionally, high inflation rates can discourage investment, leading to reduced economic growth.

- Disruption of international trade. Hyperinflation can disrupt international trade as businesses and consumers lose confidence in the currency. When the value of the currency drops rapidly, foreign buyers may refuse to trade in that currency, making it difficult for the country to import goods and services. This can lead to shortages and further price increases.

- Poverty and inequality. Hyperinflation can increase poverty and inequality, as those on fixed incomes, such as pensioners, are particularly affected by the loss of purchasing power. The cost of basic necessities such as food and healthcare can become unaffordable for many people.
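To give a sense of how quickly the 50%-per-month threshold mentioned above compounds, here is a minimal, illustrative Python sketch; the 50% figure is the conventional textbook threshold, not data from any specific episode.

```python
import math

# Illustrative only: compounding a constant 50% month-over-month inflation rate,
# the usual threshold for calling an episode "hyperinflation".

monthly_rate = 0.50          # 50% per month
months = 12

annual_factor = (1 + monthly_rate) ** months
annual_inflation_pct = (annual_factor - 1) * 100

print(f"Price multiple after one year: {annual_factor:.1f}x")        # ~129.7x
print(f"Equivalent annual inflation:   {annual_inflation_pct:,.0f}%") # ~12,875%

# Doubling time of the price level at this rate, in months:
doubling_months = math.log(2) / math.log(1 + monthly_rate)
print(f"Prices double roughly every {doubling_months:.1f} months")    # ~1.7
```

Even at the "mild" end of hyperinflation, prices multiply roughly 130-fold in a year and double about every seven weeks, which is why purchasing power evaporates so quickly.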
Read also: Exploring inflation: what is it and how does it affect your wallet Examples of hyperinflation Here are some examples of hyperinflation: Weimar Republic (Germany) The Weimar Republic experienced hyperinflation in the early 1920s following World War I. The government printed money to pay off war debts, rapidly increasing the money supply. As a result, the prices of goods and services increased rapidly, doubling every few days. By November 1923, the monthly inflation rate had reached 29,500%, leading to widespread social and economic chaos. Zimbabwe experienced hyperinflation from the late 1990s to 2009, with the peak inflation rate reaching over 79.6 billion percent per month in November 2008. The government printed money to finance its spending, leading to a rapid increase in the money supply. This caused prices to increase rapidly, with prices doubling every day. It led to widespread poverty, unemployment, and social unrest in Zimbabwe. Venezuela has been experiencing hyperinflation since 2016, with the inflation rate reaching over 10 million percent in 2019. The government printed money to finance its spending, leading to a rapid increase in the money supply. This led to a sharp price increase, making it difficult for people to afford basic necessities such as food, medicine, and housing. How to prepare for it Hyperinflation can have a devastating effect on the economy and the financial well-being of individuals. Here are some ways to prepare for hyperinflation: - Diversify your assets Diversifying your assets can help reduce the risk of losing all your wealth. For example, investing in a mix of stocks, bonds, and real estate can help you hedge against inflation. - Invest in hard assets Investing in hard assets such as real estate, land, and commodities can help protect your wealth during hyperinflation. These assets have intrinsic value and can retain their value even when the currency loses its value. - Keep cash and other liquid assets During hyperinflation, cash can lose its value rapidly. However, keeping some cash and other liquid assets, such as precious metals and foreign currencies, can help you meet your immediate needs. - Consider moving money to a stable currency If you are worried about the stability of your country’s currency, you may consider moving your money to a stable foreign currency such as the US dollar, euro, or Swiss franc. - Invest in companies that are resistant to inflation Some companies are resistant to inflation, such as those that produce essential goods and services, such as food and utilities. Investing in these companies can help you protect your wealth during hyperinflation. - Consider gold and other precious metals Gold and other precious metals have been considered safe haven assets during economic uncertainty. They have a long history of retaining their value during hyperinflation and can be an effective hedge against inflation. Hyperinflation can lead to severe economic and social instability Hyperinflation is a phenomenon that can have severe consequences on individuals, businesses, and entire economies. The economic effects of hyperinflation can include a loss of purchasing power, unemployment, and disruption of international trade. Examples of hyperinflation include the Weimar Republic, Zimbabwe, Venezuela, and Hungary. By taking the measures discussed above, individuals can protect themselves from the devastating effects of hyperinflation. Read also: Financial security, 9 ways to achieve it
The density (more precisely, the volumetric mass density; also known as specific mass) of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: ρ = m/V, where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as weight per unit volume, although this is scientifically inaccurate – that quantity is more specifically called specific weight. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "relative density" or "specific gravity", i.e. the ratio of the density of the material to that of a standard material, usually water. Thus a relative density less than one means that the substance floats in water.

The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance (with a few exceptions) decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid. This causes it to rise relative to more dense unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass.

In a well-known but probably apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy. Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed from the rise of the water upon entering that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" (Εύρηκα! Greek for "I have found it"). As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time.

From the equation for density (ρ = m/V), mass density has units of mass divided by volume.
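As a quick illustration of the definition above, here is a minimal Python sketch that computes density and relative density (specific gravity) for a made-up object; the mass and volume values are arbitrary example numbers, and 1000 kg/m³ is the usual reference density of water.

```python
# Minimal illustration of density = mass / volume and relative density.
# The object below is hypothetical; 1000 kg/m^3 is the reference density of water.

mass_kg = 2.0          # example mass
volume_m3 = 0.0025     # example volume (2.5 litres)

density = mass_kg / volume_m3              # kg/m^3
relative_density = density / 1000.0        # dimensionless, relative to water

print(f"Density:          {density:.0f} kg/m^3")        # 800 kg/m^3
print(f"Relative density: {relative_density:.2f}")       # 0.80
print("Floats in water" if relative_density < 1 else "Sinks in water")
```

Because its relative density is below one, this hypothetical object would float, consistent with the rule stated above.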
As there are many units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m³) and the cgs unit of gram per cubic centimetre (g/cm³) are probably the most commonly used units for density. One g/cm³ is equal to one thousand kg/m³. One cubic centimetre (abbreviation cc) is equal to one millilitre. In industry, other larger or smaller units of mass and/or volume are often more practical, and US customary units may be used. See below for a list of some of the most common units of density.

Measurement of density

A number of techniques as well as standards exist for the measurement of density of materials. Such techniques include the use of a hydrometer (a buoyancy method for liquids), hydrostatic balance (a buoyancy method for liquids and solids), immersed body method (a buoyancy method for liquids), pycnometer (liquids and solids), air comparison pycnometer (solids), oscillating densitometer (liquids), as well as pour and tap (solids). However, each individual method or technique measures a different type of density (e.g. bulk density, skeletal density, etc.), and therefore it is necessary to have an understanding of the type of density being measured as well as the type of material in question.

The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance; the volume may be measured directly (from the geometry of the object) or by the displacement of a fluid. To determine the density of a liquid or a gas, a hydrometer, a dasymeter or a Coriolis flow meter may be used, respectively. Similarly, hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object.

If the body is not homogeneous, then its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume, the density of an inhomogeneous object at a point becomes ρ(r) = dm/dV, where dV is an elementary volume at position r. The mass of the body can then be expressed as m = ∫ ρ(r) dV.

In practice, bulk materials such as sugar, sand, or snow contain voids. Many materials exist in nature as flakes, pellets, or granules. Voids are regions which contain something other than the considered material. Commonly the void is air, but it could also be vacuum, liquid, solid, or a different gas or gaseous mixture. The bulk volume of a material—inclusive of the void fraction—is often obtained by a simple measurement (e.g. with a calibrated measuring cup) or geometrically from known dimensions. Mass divided by bulk volume determines bulk density. This is not the same thing as volumetric mass density. To determine volumetric mass density, one must first discount the volume of the void fraction. Sometimes this can be determined by geometrical reasoning. For the close-packing of equal spheres the non-void fraction can be at most about 74%. It can also be determined empirically. Some bulk materials, however, such as sand, have a variable void fraction which depends on how the material is agitated or poured. It might be loose or compact, with more or less air space depending on handling. In practice, the void fraction is not necessarily air, or even gaseous.
In the case of sand, it could be water, which can be advantageous for measurement, as the void fraction for sand saturated in water—once any air bubbles are thoroughly driven out—is potentially more consistent than dry sand measured with an air void. In the case of non-compact materials, one must also take care in determining the mass of the material sample. If the material is under pressure (commonly ambient air pressure at the earth's surface), the determination of mass from a measured sample weight might need to account for buoyancy effects due to the density of the void constituent, depending on how the measurement was conducted. In the case of dry sand, sand is so much denser than air that the buoyancy effect is commonly neglected (less than one part in one thousand). Mass change upon displacing one void material with another while maintaining constant volume can be used to estimate the void fraction, if the difference in density of the two void materials is reliably known.

Changes of density

In general, density can be changed by changing either the pressure or the temperature. Increasing the pressure always increases the density of a material. Increasing the temperature generally decreases the density, but there are notable exceptions to this generalization. For example, the density of water increases between its melting point at 0 °C and 4 °C; similar behavior is observed in silicon at low temperatures. The effect of pressure and temperature on the densities of liquids and solids is small. The compressibility for a typical liquid or solid is 10⁻⁶ bar⁻¹ (1 bar = 0.1 MPa) and a typical thermal expansivity is 10⁻⁵ K⁻¹. This roughly translates into needing around ten thousand times atmospheric pressure to reduce the volume of a substance by one percent. (Although the pressures needed may be around a thousand times smaller for sandy soil and some clays.) A one percent expansion of volume typically requires a temperature increase on the order of thousands of degrees Celsius.

In contrast, the density of gases is strongly affected by pressure. The density of an ideal gas is ρ = MP/(RT), where M is the molar mass, P is the pressure, R is the universal gas constant, and T is the absolute temperature. This means that the density of an ideal gas can be doubled by doubling the pressure, or by halving the absolute temperature. In the case of volumetric thermal expansion at constant pressure and over small temperature intervals, the temperature dependence of density is ρ = ρ(T₀)/(1 + α·(T − T₀)), where ρ(T₀) is the density at a reference temperature T₀ and α is the thermal expansion coefficient of the material at temperatures close to T₀.

Density of solutions

The mass (massic) concentrations ρᵢ of the components of a solution sum to the density of the solution: ρ = Σ ρᵢ, provided that there is no interaction between the components. Knowing the relation between excess volumes and activity coefficients of the components, one can determine the activity coefficients.
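As a quick check of the ideal-gas relation ρ = MP/(RT) given above, the short Python sketch below estimates the density of dry air near sea level; the molar mass and the chosen temperature are standard textbook reference values used purely for illustration.

```python
# Estimate the density of dry air from the ideal gas law: rho = M*P / (R*T).
# The values below are standard reference figures, not measurements.

M = 0.02896      # molar mass of dry air, kg/mol
P = 101_325      # sea-level pressure, Pa
R = 8.314        # universal gas constant, J/(mol*K)
T = 288.15       # 15 degrees Celsius, in kelvin

rho = M * P / (R * T)
print(f"Air density at sea level: {rho:.3f} kg/m^3")   # ~1.225 kg/m^3

# Doubling the pressure (or halving the absolute temperature) doubles rho:
print(f"At 2*P: {M * 2 * P / (R * T):.3f} kg/m^3")
```

The result, about 1.2 kg/m³, matches the sea-level air entry in the table of material densities that follows.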
Densities of various materials

Selected chemical elements are listed in the original article; for the densities of all chemical elements, see List of chemical elements.

|Material|ρ (kg/m³)|Notes|
|Interstellar medium|1×10⁻¹⁹|Assuming 90% H, 10% He; variable T|
|Metallic microlattice|0.9| |
|Air|1.2|At sea level|
|Tungsten hexafluoride|12.4|One of the heaviest known gases at standard conditions|
|Liquid hydrogen|70|At approx. −255 °C|
|Ice|916.7|At temperature < 0 °C|
|Water (fresh)|1,000|At 4 °C, the temperature of its maximum density|
|Liquid oxygen|1,141|At approx. −219 °C|
|Plastics|1,175|Approx.; for polypropylene and PETE/PVC|
|Diiodomethane|3,325|Liquid at room temperature|
|The Earth|5,515|Mean density|
|Earth's inner core|13,000|Approx., as listed in Earth|
|The core of the Sun|33,000–160,000|Approx.|
|Super-massive black hole|9×10⁵|Density of a 4.5-million-solar-mass black hole; event horizon radius is 13.5 million km|
|White dwarf star|2.1×10⁹|Approx.|
|Atomic nuclei|2.3×10¹⁷|Does not depend strongly on size of nucleus|
|Stellar-mass black hole|1×10¹⁸|Density of a 4-solar-mass black hole; event horizon radius is 12 km|

(The original article also tabulated the density of water as a function of temperature, in °C and kg/m³, and charted the molar volumes of the liquid and solid phases of the elements.)

The SI unit for density is the kilogram per cubic metre (kg/m³). The litre and the metric ton are not part of the SI, but are acceptable for use with it, leading to the following units. Densities expressed in these metric units all have exactly the same numerical value, one thousandth of the value in kg/m³. Liquid water has a density of about 1 kg/dm³, making any of these units numerically convenient to use, as most solids and liquids have densities between 0.1 and 20 kg/dm³.

- kilogram per cubic decimetre (kg/dm³)
- gram per cubic centimetre (g/cm³); 1 g/cm³ = 1000 kg/m³
- megagram (metric ton) per cubic metre (Mg/m³)

In US customary units density can be stated in:

- avoirdupois ounce per cubic inch (1 g/cm³ ≈ 0.578036672 oz/cu in)
- avoirdupois ounce per fluid ounce (1 g/cm³ ≈ 1.04317556 oz/US fl oz = 1.04317556 lb/US fl pint)
- avoirdupois pound per cubic inch (1 g/cm³ ≈ 0.036127292 lb/cu in)
- pound per cubic foot (1 g/cm³ ≈ 62.427961 lb/cu ft)
- pound per cubic yard (1 g/cm³ ≈ 1685.5549 lb/cu yd)
- pound per US liquid gallon (1 g/cm³ ≈ 8.34540445 lb/US gal)
- pound per US bushel (1 g/cm³ ≈ 77.6888513 lb/bu)
- slug per cubic foot

Imperial units differing from the above (as the Imperial gallon and bushel differ from the US units) are in practice rarely used, though found in older documents. The Imperial gallon was based on the concept that an Imperial fluid ounce of water would have a mass of one avoirdupois ounce, and indeed 1 g/cm³ ≈ 1.00224129 ounces per Imperial fluid ounce = 10.0224129 pounds per Imperial gallon. The density of precious metals could conceivably be based on Troy ounces and pounds, a possible cause of confusion.

See also: List of elements by density, Air density, Area density, Bulk density, Charge density, Density prediction by the Girolami method, Energy density, Lighter than air, Linear density, Number density, Orthobaric density, Paper density, Specific weight, Spice (oceanography), Standard temperature and pressure.
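To make the US customary equivalents listed above concrete, here is a small, illustrative Python sketch that derives a few of them for 1 g/cm³ from the exact definitions of the pound and the inch; it is a sanity check, not an authoritative conversion table.

```python
# Derive a few US customary density equivalents of 1 g/cm^3
# from exact definitions of the pound, the inch and the US gallon.

G_PER_CM3 = 1000.0                 # 1 g/cm^3 expressed in kg/m^3
LB_KG = 0.45359237                 # 1 avoirdupois pound in kg (exact)
INCH_M = 0.0254                    # 1 inch in metres (exact)
FT3_M3 = (12 * INCH_M) ** 3        # cubic foot in m^3
IN3_M3 = INCH_M ** 3               # cubic inch in m^3
US_GAL_M3 = 231 * IN3_M3           # US liquid gallon = 231 cubic inches

print(f"1 g/cm^3 = {G_PER_CM3 * FT3_M3 / LB_KG:.3f} lb/cu ft")     # ~62.428
print(f"1 g/cm^3 = {G_PER_CM3 * IN3_M3 / LB_KG:.6f} lb/cu in")     # ~0.036127
print(f"1 g/cm^3 = {G_PER_CM3 * US_GAL_M3 / LB_KG:.4f} lb/US gal") # ~8.3454
```

The computed values agree with the rounded factors quoted in the list above.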
- "OECD Test Guideline 109 on measurement of density". - New carbon nanotube struructure aerographite is lightest material champ Archived October 17, 2013, at the Wayback Machine. Phys.org (July 13, 2012). Retrieved on July 14, 2012. - Aerographit: Leichtestes Material der Welt entwickelt – SPIEGEL ONLINE Archived October 17, 2013, at the Wayback Machine. Spiegel.de (July 11, 2012). Retrieved on July 14, 2012. - "Re: which is more bouyant [sic] styrofoam or cork". Madsci.org. Archived from the original on February 14, 2011. Retrieved September 14, 2010. - Raymond Serway; John Jewett (2005), Principles of Physics: A Calculus-Based Text, Cengage Learning, p. 467, ISBN 0-534-49143-X, archived from the original on May 17, 2016 - "Wood Densities". www.engineeringtoolbox.com. Archived from the original on October 20, 2012. Retrieved October 15, 2012. - "Density of Wood". www.simetric.co.uk. Archived from the original on October 26, 2012. Retrieved October 15, 2012. - CRC Press Handbook of tables for Applied Engineering Science, 2nd Edition, 1976, Table 1-59 - glycerol composition at Archived February 28, 2013, at the Wayback Machine. Physics.nist.gov. Retrieved on July 14, 2012. - "Density of Concrete - The Physics Factbook". hypertextbook.com. - Hugh D. Young; Roger A. Freedman. University Physics with Modern Physics Archived April 30, 2016, at the Wayback Machine. Addison-Wesley; 2012. ISBN 978-0-321-69686-1. p. 374. - Density of the Earth, wolframalpha.com, archived from the original on October 17, 2013 - Density of Earth's core, wolframalpha.com, archived from the original on October 17, 2013 - Density of the Sun's core, wolframalpha.com, archived from the original on October 17, 2013 - Extreme Stars: White Dwarfs & Neutron Stars Archived September 25, 2007, at the Wayback Machine, Jennifer Johnson, lecture notes, Astronomy 162, Ohio State University. Accessed: May 3, 2007. - Nuclear Size and Density Archived July 6, 2009, at the Wayback Machine, HyperPhysics, Georgia State University. Accessed: June 26, 2009. - Encyclopædia Britannica. 8 (11th ed.). 1911. . - The New Student's Reference Work. 1914. . - Video: Density Experiment with Oil and Alcohol - Video: Density Experiment with Whiskey and Water - Glass Density Calculation – Calculation of the density of glass at room temperature and of glass melts at 1000 – 1400°C - List of Elements of the Periodic Table – Sorted by Density - Calculation of saturated liquid densities for some components - Field density test - On-line calculator for densities and partial molar volumes of aqueous solutions of some common electrolytes and their mixtures, at temperatures up to 323.15 K.[permanent dead link] - Water – Density and specific weight - Temperature dependence of the density of water – Conversions of density units - A delicious density experiment - Water density calculator Water density for a given salinity and temperature. - Liquid density calculator Select a liquid from the list and calculate density as a function of temperature. - Gas density calculator Calculate density of a gas for as a function of temperature and pressure. - Densities of various materials. - Determination of Density of Solid, instructions for performing classroom experiment. - density prediction - density prediction
Electronic computing was born in the form of massive machines in air-conditioned rooms, migrated to desktops and laptops, and lives today in tiny devices like watches and smartphones. But why stop there, asks an international team of Stanford-led engineers. Why not build an entire computer onto a single chip? It could have processing circuits, memory storage and a power supply to perform a given task, such as measuring moisture in a row of crops. Equipped with machine learning algorithms, the chip could make on-the-spot decisions such as when to water. And with wireless technology it could send and receive data over the internet. Engineers call this vision of ubiquitous computing the Internet of Everything. But to achieve it they'll need to develop a new class of chips to serve as its foundation. The researchers will unveil the prototype for such a computer-on-a-chip Feb. 19 at the International Solid-State Circuits Conference in San Francisco. The prototype's data processing and memory circuits use less than a tenth as much electricity as any comparable electronic device, yet despite its small size it is designed to perform many advanced computing feats. "This is what engineers do," said Subhasish Mitra, a professor of electrical engineering and of computer science who worked on the chip. "We create a whole that is greater than the sum of its parts."

New memory is the key

The prototype is built around a new data storage technology called RRAM (resistive random access memory), which has features essential for this new class of chips: storage density to pack more data into less space than other forms of memory; energy efficiency that won't overtax limited power supplies; and the ability to retain data when the chip hibernates, as it is designed to do as an energy-saving tactic. RRAM has another essential advantage: engineers can build RRAM directly atop a processing circuit to integrate data storage and computation into a single chip. Stanford researchers have pioneered this concept of uniting memory and processing into one chip because it's faster and more energy efficient than passing data back and forth between separate chips, as is the case today. The French team at CEA-LETI was responsible for grafting the RRAM onto a silicon processor.

In order to improve the storage capacity of RRAM, the Stanford group made a number of changes. One was to increase how much information each storage unit, called a cell, can hold. Memory devices typically consist of cells that can store either a zero or a one. The researchers devised a way to pack five values into each memory cell, rather than just the two standard options. A second enhancement improved the endurance of RRAM. Think about data storage from a chip's point of view: as data is continuously written to a chip's memory cells, they can become exhausted, scrambling data and causing errors. The researchers developed an algorithm to prevent such exhaustion. They tested the endurance of their prototype and found that it should have a 10-year lifespan. Mitra said the team's computer scientists and electrical engineers worked together to integrate many software and hardware technologies on the prototype, which is currently about the diameter of a pencil eraser. Although that is too large for futuristic Internet of Everything applications, even now the way that the prototype combines memory and processing could be incorporated into the chips found in smartphones and other mobile devices.
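The article does not describe how the five-level cells are actually programmed, so the following Python sketch is only a generic illustration of why multi-level cells increase storage density: it packs an integer into base-5 digits (one digit per hypothetical cell) and unpacks it again. The cell count and values are invented for the example and are not the Stanford team's scheme.

```python
import math

# Generic multi-level-cell illustration (not the actual RRAM coding scheme):
# a cell that distinguishes 5 resistance levels stores one base-5 digit,
# i.e. log2(5) ~= 2.32 bits, versus 1 bit for a binary cell.

LEVELS = 5
print(f"Bits per 5-level cell: {math.log2(LEVELS):.2f}")

def encode(value: int, n_cells: int) -> list[int]:
    """Split an integer into base-5 digits, least significant cell first."""
    digits = []
    for _ in range(n_cells):
        value, digit = divmod(value, LEVELS)
        digits.append(digit)
    if value:
        raise ValueError("value does not fit in the given number of cells")
    return digits

def decode(digits: list[int]) -> int:
    """Reassemble the integer from the base-5 cell values."""
    return sum(d * LEVELS ** i for i, d in enumerate(digits))

cells = encode(1234, n_cells=5)      # 5 five-level cells hold values 0..3124
print(cells, "->", decode(cells))    # [4, 1, 4, 4, 1] -> 1234
```

Five such cells hold 5⁵ = 3125 distinct values, where five binary cells would hold only 32 — which is the density advantage the researchers are exploiting.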
Chip manufacturers are already showing interest in this new architecture, which was one of the goals of the Stanford-led team. Mitra said experience gained manufacturing one generation of chips fuels efforts to make the next iteration smaller, faster, cheaper and more capable. “The SystemX Alliance has allowed a great collaboration between Stanford and CEA-LETI on edge AI application, covering circuit architecture, circuit design, down to advanced technologies,” said Emmanuel Sabonnadière, CEO of the French research institute.
Each tooth has a crown and a root portion. The crown is covered with enamel and the root portion with cementum. The crown and root join at the cementoenamel junction (CEJ); this junction is also called the cervical line. The main bulk of the tooth is composed of dentin, which is clearly seen in a cross section of the tooth. Such a cross section displays a pulp chamber and a pulp canal, which normally contain the pulp tissue. The pulp chamber lies mainly in the crown portion, and the pulp canal lies in the root. These spaces are continuous with each other and are spoken of collectively as the pulp cavity. The four tooth tissues are enamel, cementum, dentin, and pulp. The first three are known as hard tissues, the last as soft tissue. The pulp tissue furnishes the blood and nerve supply to the tooth.

1. Enamel: Makes up the protective outer surface of the crown of the tooth. Enamel is translucent and can vary in color from yellowish to grayish white. The different colors of enamel are attributed to variations in its thickness, its translucent properties, the quality of its crystal structure, and surface stains. Enamel is the calcified substance that covers the entire anatomic crown of the tooth and protects the dentin. It is the hardest tissue in the human body and consists of approximately 96% inorganic minerals, 1% organic materials, and 3% water. Calcium and phosphorus (as hydroxyapatite) are its main inorganic components. Enamel can endure crushing pressures of approximately 100,000 pounds per square inch; the cushioning effect of the tooth's other structures enables it to withstand the pressures of mastication (chewing). Structurally, enamel is composed of millions of enamel rods or prisms. Each rod begins at the dentinoenamel junction (the junction between the enamel and dentin) and extends to the outer surface of the crown. After formation, enamel has no capacity for further growth or repair.

2. Dentin: Makes up the majority of the inner structure of the tooth. It cannot normally be seen except on x-rays. Dentin is a light yellow substance. The pulp chamber is bounded by the internal surface of the dentin walls. Dentin is harder than bone but softer than enamel. Dentin consists of approximately 70% inorganic matter and 30% organic matter and water. Calcium and phosphorus are its chief inorganic components. Dentin is a living tissue and must be protected during operative or prosthetic procedures from dehydration (drying) and thermal shock. Dentin is perforated by tubules (similar to tiny straws) that run between the dentinoenamel junction and the pulp. Dentin transmits pain stimuli by way of dentinal fibers. Because dentin is a living tissue, it has the capacity for constant growth and repair and reacts to physiologic (functional) and pathologic (disease) stimuli.

3. Pulp: This is the area inside the tooth that holds the nerves and blood vessels of the tooth. It is in the center of the tooth and occupies both the crown and the root of the tooth. The dental pulp is the soft tissue inside the tooth, developed from the connective tissue of the dental papilla. Within the crown, the chamber containing the dental pulp is called the pulp chamber. The coronal pulp and pulp horns are within the crown, and the radicular pulp is within the root. The apical foramen is at the end, or apex, of the radicular pulp. Blood vessels, nerves, and connective tissue pass through this opening to reach the interior of the tooth. The chief function of the pulp is the formation of dentin.
It furnishes nourishment to the dentin; provides sensation to the tooth; and responds to irritation, either by forming reparative secondary dentin or by becoming inflamed.

4. Cementum: Makes up the outer surface of the root of the tooth. It is much softer than enamel. Cementum is the bonelike tissue that covers the roots of the teeth in a thin layer. It is light yellow in color, slightly lighter than dentin. Cementum is composed of approximately 55% organic material and 45% inorganic material; the inorganic components are mainly calcium salts. The cementum joins the enamel at the cervix of the tooth, forming the CEJ. In most teeth the cementum overlaps the enamel for a short distance. In some, the enamel meets the cementum in a sharp line. In a few, a gap may be present between the enamel and the cementum, exposing a narrow area of root dentin; such areas may be very sensitive to thermal, chemical, or mechanical stimuli. The main function of cementum is to anchor the teeth to the bony walls of the tooth sockets in the periodontium. This is accomplished by the fibers of the periodontal ligament, or membrane. Cementum is formed continuously throughout the life of the tooth to compensate for the loss of tooth substance caused by occlusal wear and to allow the attachment of new periodontal ligament fibers to the surface of the root. Cementum is the only tissue considered both a basic part of the tooth and a component of the periodontium. It is a thin, calcified layer of tissue that completely covers the dentin of the tooth root. Cementum is formed during the development of the root and throughout the life of the tooth, and it functions as an area of attachment for the periodontal ligament fibers.

NOTE: The tissues that surround and support the teeth are collectively called the periodontium. Their main functions are to support, protect, and provide nourishment to the teeth. The periodontium consists of the cementum, the alveolar processes of the maxillae and mandible, the periodontal ligament, and the gingiva.

PERIODONTAL LIGAMENT: The periodontal ligament is a thin, fibrous ligament that connects the tooth to the bony socket. Normally, teeth do not contact the bone directly; a tooth is suspended in its socket by the fibers of the ligament. This arrangement allows each tooth limited individual movement. The fibers act as shock absorbers to cushion the force of mastication.

ANATOMY OF THE CROWN: The crown of a tooth may have an incisal ridge or edge, as in the central and lateral incisors; a single cusp, as in the canines; or two or more cusps, as on premolars and molars. Incisal ridges and cusps form the cutting surfaces on tooth crowns. Therefore, the crowns of the incisors and canines have four surfaces and a ridge, and the crowns of the premolars and molars have five surfaces. The surfaces are named according to their positions and uses. The root portion of the tooth may be single, with one apex or terminal end, as usually found in anterior teeth and some of the premolars; or multiple, with a bifurcation or trifurcation dividing the root portion into two or more extensions or roots with their apices, or terminal ends, as found on all molars and in some premolars. The root portion of the tooth is firmly fixed in the bony process of the jaw, so that each tooth is held in its position relative to the others in the dental arch. The portion of the jaw serving as support for the tooth is called the alveolar process. The bone of the tooth socket is called the alveolus.
The crown portion is never covered by bone tissue after it is fully erupted, but in young adults it is partly covered at the cervical third by the soft tissue of the mouth known as the gingiva or gingival tissue, or "gums." In some persons, all of the enamel and frequently some cervical cementum may not be covered by the gingiva.

DESCRIPTION OF THE TEETH: To facilitate description, teeth are divided into thirds, line angles, and point angles. The crowns and roots of teeth are divided into thirds, and junctions of the crown surfaces are described as line angles and point angles. In reality, there are no angles or points or plane surfaces anywhere on the teeth except those that appear from wear (e.g., attrition, abrasion) or from accidental fracture. When the surfaces of the crown and root portions are divided into thirds, these thirds are named according to their location. Wear of the teeth develops line angles that are not found in the natural condition.

Looking at the tooth from the labial or buccal aspect, we see that the crown and root may be divided into thirds from the incisal or occlusal surface of the crown to the apex of the root. The crown is divided into an incisal or occlusal third, a middle third, and a cervical third. The root is divided into a cervical third, a middle third, and an apical third. The crown may be divided into thirds in three directions: inciso- or occlusocervically, mesiodistally, or labio- or buccolingually. Mesiodistally, the crown is divided into the mesial, middle, and distal thirds. Labio- or buccolingually, it is divided into labial or buccal, middle, and lingual thirds. Each of the five surfaces of a crown may be so divided. There will be one middle third and two other thirds, which are named according to their location, for example, cervical, occlusal, mesial, or lingual.

A line angle is formed by the junction of two surfaces and derives its name from the combination of the two surfaces that join. For instance, on an anterior tooth, the junction of the mesial and labial surfaces is called the mesiolabial line angle. The line angles of the anterior teeth are as follows: 1. mesiolabial line angle; 2. distolingual line angle; 3. distolabial line angle; 4. labioincisal line angle; 5. mesiolingual line angle; 6. linguoincisal line angle. Because the mesial and distal incisal angles of anterior teeth are rounded, mesioincisal line angles and distoincisal line angles are usually considered nonexistent; they are spoken of as mesial and distal incisal angles only. The line angles of the posterior teeth are as follows: 1. mesiobuccal line angle; 2. distolingual line angle; 3. bucco-occlusal line angle; 4. distobuccal line angle; 5. mesio-occlusal line angle; 6. linguo-occlusal line angle; 7. mesiolingual line angle; 8. disto-occlusal line angle.

A point angle is formed by the junction of three surfaces. The point angle also derives its name from the combination of the names of the surfaces forming it. For example, the junction of the mesial, buccal, and occlusal surfaces of a molar is called the mesiobucco-occlusal point angle.

The oral cavity is lined by specialized epithelial tissues that surround the teeth and serve as a lining. These tissues are called the oral mucosa and consist of three types: masticatory mucosa, lining mucosa, and specialized mucosa.

1. MASTICATORY MUCOSA: Masticatory mucosa comprises the tissue that covers the hard palate and the gingiva. It is light pink in color (this can vary with skin color) and is keratinized.
Keratinized tissue has a tough, protective outer layer. A. Hard palate (roof of the mouth): The hard palate is covered with masticatory mucosa and is firmly adhered to the palatine process (bone). Its color is pale pink. Important structures of the hard palate are: INCISIVE PAPILLA: Located at the midline, directly posterior to the maxillary central incisors (pear-shaped in appearance). PALATINE RAPHE: Extends from the incisive papilla posteriorly at the midline (may be ridge-shaped in appearance with a whitish streak at the midline). PALATINE RUGAE: Extend laterally from the incisive papilla and from the palatine raphe (wrinkled, irregular ridges in appearance). B. Gingiva: The gingiva is specialized masticatory mucosa covering the alveolar process. Gingiva is firm and resilient and encircles the necks of the teeth. It aids in the support of the teeth and protects the alveolar process and periodontal ligament from bacterial invasion. The color of healthy gingiva ranges from pale pink to darker shades (purple to black) depending on each individual’s pigmentation. Under normal flossing and brushing it does not bleed. 2. Lining Mucosa: Lining mucosa is found on the inside of the lips, the cheeks, the vestibule, the soft palate, and under the tongue. It consists of a thin, fragile tissue that is very vascular. Lining mucosa is brighter red in color than masticatory mucosa. Included in the lining mucosa is the alveolar mucosa, which is loosely attached and lies apical to the mucogingival junction (the line where the attached gingiva and alveolar mucosa meet). 3. Specialized Mucosa: Specialized mucosa is the mucous membrane on the tongue in the form of lingual papillae, which are structures associated with sensations of taste. Tooth Identification: The maxillary and mandibular arches contain similar teeth. There are four types of teeth in both arches: the incisors, the canines, the premolars, and the molars. Each type is located in a different area of the mouth and serves a different function. Incisors: The four front teeth in each arch are known as incisors. They are located in both the maxillary and mandibular arches. The two center teeth are known as central incisors and the teeth on either side of them are known as lateral incisors. All of these teeth are responsible for cutting or biting food; they act like scissors. Canines: The teeth located distal to the lateral incisors are known as canines. These teeth form the corners of the mouth. There are 2 canines in the maxillary arch and 2 canines in the mandibular arch. These teeth are responsible for tearing food when chewing. Premolars: The teeth located distal to the canines are known as premolars. There are 4 premolars in each arch, two located distal to each canine. These teeth are smaller than the molars and are responsible for crushing food in the chewing process. Premolars are present only in the permanent dentition; the primary dentition consists only of incisors, canines, and molars. Molars: There are normally 6 molars in each arch, three on the left and three on the right side. They are referred to as first, second, and third molars. Some people never develop third molars, and because these teeth are so far back in the mouth they often have difficulty erupting and may have to be extracted. The role of the molars in chewing is to grind the food. 
Tooth Numbering Systems: In order to refer to teeth effectively and efficiently, numbering or lettering systems are used. Several systems are used throughout the world, including the Palmer Notation System, the Universal Numbering System, and the International Numbering System. 1. The Palmer Notation System: divides the permanent teeth of each quadrant into 1 through 8 (central incisor, lateral incisor, canine, first premolar, second premolar, first molar, second molar, third molar), with an angle symbol that indicates the quadrant and therefore the position of the tooth. Deciduous teeth are lettered A through E. Thus, within each quadrant, the teeth are numbered 1-8 for the permanent dentition and A-E for the deciduous dentition. For example, the permanent upper left first molar may be written UL6. 2. The Universal Numbering System: The most widely used system in U.S. dental schools is the Universal Numbering System, also called the "American system." The uppercase letters A through T are used for primary teeth and the numbers 1-32 are used for permanent teeth. For permanent teeth, numbers 1 to 32 are assigned starting with the upper right third molar, continuing across to the upper left third molar, then down to the lower left third molar, and around to the lower right third molar. For example, the mandibular right canine would be tooth #27. In the Universal Numbering System the primary dentition is identified by letters. Beginning at the second molar on the upper right, the teeth in the maxillary arch are assigned letters A-J; then, continuing with the mandibular left second molar and around to the mandibular right second molar, the teeth are assigned letters K-T. 3. The International Numbering System (also called the FDI Numbering System): divides the mouth into 4 quadrants, starting from the upper right quadrant and moving in a clockwise direction. Each quadrant is numbered as follows: Upper right quadrant = 1. Upper left quadrant = 2. Lower left quadrant = 3. Lower right quadrant = 4. Within each quadrant the teeth are numbered 1 through 8 (central incisor, lateral incisor, canine, first premolar, second premolar, first molar, second molar, third molar). To record a tooth, the first digit refers to the quadrant and the second digit refers to the tooth, as follows: Tooth No. 11 = maxillary right central incisor. Tooth No. 12 = maxillary right lateral incisor. Tooth No. 28 = maxillary left third molar. Tooth No. 31 = mandibular left central incisor. Tooth No. 44 = mandibular right first premolar. A parallel scheme is used for deciduous teeth, where the quadrants are numbered: Upper right quadrant = 5. Upper left quadrant = 6. Lower left quadrant = 7. Lower right quadrant = 8. The deciduous teeth of each quadrant are numbered 1 to 5 (central incisor, lateral incisor, canine, first molar, second molar), so that, for example: Tooth No. 55 = maxillary right second deciduous molar. Tooth No. 85 = mandibular right second deciduous molar.
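Because the two-digit FDI codes and the Universal numbers described above follow simple positional rules, the mapping between them can be written out mechanically. The following Python sketch is illustrative only; the dictionaries and function names are invented for this example, but the numbering rules themselves are the ones given in the text.

# Illustrative sketch: interpreting FDI (international) two-digit codes for
# permanent teeth and converting them to the Universal Numbering System.
# Function and variable names are invented for this example.

QUADRANTS = {1: "maxillary right", 2: "maxillary left",
             3: "mandibular left", 4: "mandibular right"}

TOOTH_TYPES = {1: "central incisor", 2: "lateral incisor", 3: "canine",
               4: "first premolar", 5: "second premolar",
               6: "first molar", 7: "second molar", 8: "third molar"}

def describe_fdi(code: int) -> str:
    """Return a plain-language description of an FDI permanent-tooth code."""
    quadrant, tooth = divmod(code, 10)   # first digit = quadrant, second = tooth
    if quadrant not in QUADRANTS or tooth not in TOOTH_TYPES:
        raise ValueError(f"{code} is not a valid FDI permanent-tooth code")
    return f"{QUADRANTS[quadrant]} {TOOTH_TYPES[tooth]}"

def fdi_to_universal(code: int) -> int:
    """Convert an FDI permanent-tooth code to its Universal (1-32) number."""
    quadrant, tooth = divmod(code, 10)
    # Universal numbering starts at the upper right third molar (#1), runs
    # across the maxillary arch to #16, then continues from the lower left
    # third molar (#17) around to the lower right third molar (#32).
    if quadrant == 1:      # maxillary right: counted toward the midline
        return 9 - tooth
    if quadrant == 2:      # maxillary left: counted away from the midline
        return 8 + tooth
    if quadrant == 3:      # mandibular left
        return 25 - tooth
    if quadrant == 4:      # mandibular right
        return 24 + tooth
    raise ValueError(f"{code} is not a valid FDI permanent-tooth code")

# Examples taken from the text:
print(describe_fdi(44))        # mandibular right first premolar
print(fdi_to_universal(43))    # 27 -- the mandibular right canine

Running the example prints "mandibular right first premolar" and 27, matching the worked examples given above.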
Otitis media: Description:Otitis media is an infection or inflammation of the middle ear. This inflammation often begins when infections that cause sore throats, colds, or other respiratory or breathing problems spread to themiddle ear. These can be viral or bacterial infections. Seventy-five percent of children experience at least one episode of otitis media by their third birthday. Almost half of these children will have three or more ear infections during their first 3 years. It is estimated that medical costs and lost wages because of otitis media amount to $5 billion a year in the United States. Although otitis media is primarily a disease of infants and young children, it can also affect adults.The ear consists of three major parts: the outer ear, the middle ear, and the inner ear. The outer ear includes the pinna - the visible part of the ear - and the ear canal. The outer ear extends to the tympanic membrane or eardrum, which separates the outer ear from the middle ear. The middle ear is an air-filled space that is located behind the eardrum. The middle ear contains three tiny bones, the malleus, incus, and stapes, which transmit sound from the eardrum to the inner ear. The inner ear contains the hearing and balance organs. The cochlea contains the hearing organ which converts sound into electrical signals which are associated with the origin of impulses carried by nerves to the brain where their meanings are appreciated.There are many reasons why children are more likely to suffer from otitis media than adults. First, children have more trouble fighting infections. This is because their immune systems are still developing. Another reason has to do with the child's eustachian tube. The eustachian tube is a small passageway that connects the upper part of the throat to the middle ear. It is shorter and straighter in the child than in the adult. It can contribute to otitis media in several ways.The eustachian tube is usually closed but opens regularly to ventilate or replenish the air in the middle ear. This tube also equalizes middle ear air pressure in response to air pressure changes in the environment. However, a eustachian tube that is blocked by swelling of its lining or plugged with mucus from a cold or for some other reason cannot open to ventilate the middle ear. The lack of ventilation may allow fluid from the tissue that lines the middle ear to accumulate. If the eustachian tube remains plugged, the fluid cannot drain and begins to collect in the normally air-filled middle ear.One more factor that makes children more susceptible to otitis media is that adenoids in children are larger than they are in adults. Adenoids are composed largely of cells (lymphocytes) that help fight infections. They are positioned in the back of the upper part of the throat near the eustachian tubes. Enlarged adenoids can, because of their size, interfere with the eustachian tube opening. In addition, adenoids may themselves become infected, and the infection may spread into the eustachian tubes.Bacteria reach the middle ear through the lining or the passageway of the eustachian tube and can then produce infection, which causes swelling of the lining of the middle ear, blocking of the eustachian tube, and migration of white cells from the bloodstream to help fight the infection. In this process the white cells accumulate, often killing bacteria and dying themselves, leading to the formation of pus, a thick yellowish-white fluid in the middle ear. 
As the fluid increases, the child may have trouble hearing because the eardrum and middle ear bones are unable to move as freely as they should. As the infection worsens, many children also experience severe ear pain. Too much fluid in the ear can put pressure on the eardrum and eventually tear it. There are several types of otitis media: Otitis media without effusion is an inflammation of the eardrum without fluid in the middle ear. Acute otitis media occurs when there is fluid in the middle ear accompanied by the rapid onset of signs and symptoms of middle ear infection. Otitis media with effusion is the presence of fluid in the middle ear without signs or symptoms of ear infection; it is also sometimes called serous otitis media. Chronic otitis media occurs when infection persists, which can cause ongoing damage to the middle ear and eardrum. Several avenues of research are being explored to further improve the prevention, diagnosis, and treatment of otitis media. For example, research is better defining those children who are at high risk for developing otitis media and the conditions that predispose certain individuals to middle ear infections. Emphasis is being placed on discovering the reasons why some children have more ear infections than other children. The effects of otitis media on children's speech and language development are important areas of study, as is research to develop more accurate methods to help physicians detect middle ear infections. How the defense molecules and cells involved with immunity respond to the bacteria and viruses that often lead to otitis media is also under investigation. Scientists are evaluating the success of certain drugs currently being used for the treatment of otitis media and are examining new drugs that may be more effective, easier to administer, and better at preventing new infections. Most important, research is leading to the availability of vaccines that will prevent otitis media. Symptoms: Serous otitis media may not cause any symptoms; however, fluid remaining in the middle ear for a long period of time may result in hearing loss. Although this condition can develop on its own, it most commonly occurs after treatment for acute otitis media. Acute otitis media causes sudden, severe earache, deafness, tinnitus (ringing or buzzing in the ear), a sense of fullness in the ear, irritability, tugging or rubbing of the ear, an unwillingness to lie down, fever, headache, a change in appetite or sleeping patterns, fluid leaking from the ear, nausea, and difficulty in speaking and hearing. Occasionally, the eardrum can burst, which causes a discharge of pus and relief of pain. Complications of a single episode of otitis media are rare and include otitis externa (inflammation of the outer ear) and spread of infection inward from the ear to the skull, causing mastoiditis (inflammation of the mastoid bone cells), or into the brain, causing meningitis (inflammation of the membranes covering the brain and spinal cord) or a brain abscess. Complications of recurrent otitis media include damage to the bones in the middle ear (sometimes causing total deafness) or a cholesteatoma (a matted ball of skin debris which can erode bone and cause further damage to the ear). Causes and Risk Factors: Children are more commonly affected than adults because of the small size and horizontal position of their eustachian tube (the passage that connects the back of the nose to the middle ear). Otitis media affects about two thirds of youngsters at least once before they reach their second birthday. 
The four main causes of otitis media are allergy, infection, blockage of the eustachian tube, and nutritional deficiency. Allergy: Studies have shown that food and airborne allergies can cause otitis media. The most common offending foods are milk products (from cows), wheat, egg white, peanut products, soy, corn, oranges, tomatoes, and chicken. The most common airborne allergens are cigarette smoke, pollen, animal dander, house dust, mold, fungi, sulfur dioxide, bacteria, and volatile organic compounds such as formaldehyde, pesticides, and herbicides. Infection: Otitis media infections are caused by viruses or bacteria that infect the cells lining the eustachian tube, throat, and middle ear. When infected, these cells become swollen and secrete a thick mucus that may clog the eustachian tube and cause fluid and pressure to build behind the eardrum. Some of the most common bacteria to cause this infection are Streptococcus pneumoniae, Haemophilus influenzae, and Moraxella catarrhalis. Blockage of the eustachian tube: This obstruction can be a result of swollen tonsils or adenoids, or of problems involving the bones of the cranium, the temporomandibular joint (located at the jaw), or the cervical spine. Nutritional deficiency: Researchers have found that children with vitamin A, zinc, and iron deficiencies are more susceptible to upper respiratory and ear infections. Large amounts of prostaglandins (fatty acids found naturally in all people) and leukotrienes may also play a part. In infants, otitis media has also been associated with bottle feeding. Breast feeding provides two protective mechanisms: first, the suction created by sucking on the breast helps close the ear canal and prevents reflux of particles and bacteria into the middle ear; second, the mother's antibodies, crossing over to the baby in the mother's milk, provide general protection from infections. Possible Complications: Possible complications include a cyst of the middle ear (cholesteatoma), facial paralysis, infection of one of the skull bones (mastoiditis), inflammation around the brain (epidural abscess), and permanent damage to the ear with partial or complete deafness. Most children will have temporary and minor hearing loss during and right after an ear infection, because fluid can linger in the ear. Although this fluid can go unnoticed, it can cause significant hearing problems in children. Any fluid in the ear that lasts longer than 8-12 weeks is cause for concern. In children, hearing problems may cause speech to develop slowly. Permanent hearing loss is rare, but the risk increases with the number and length of infections. Diagnosis: The doctor should be sure to ask the parent if the child has had a recent cold, flu, or other respiratory infection. If the child complains of pain or has other symptoms of otitis media, such as redness and inflammation, the doctor should rule out any other causes. These may include, but are not limited to, the following: Otitis media with effusion. OME is commonly confused with acute otitis media. It must be ruled out because it does not respond to antibiotics. Dental problems (such as teething). Infection in the outer ear. Symptoms include pain, redness, itching, and discharge. Infection in the outer ear, however, can be confirmed by wiggling the ears, which will produce pain. (This movement will have no significant effect if the infection is in the middle ear.) Foreign objects in the ear. This can be dangerous. 
A doctor should always check for this first when a small child indicates pain or problems in the ear. Viral infection can produce redness and inflammation. Such infections, however, are not treatable with antibiotics and resolve on their own. A parent's or child's attempts to remove earwax. Intense crying can cause redness and inflammation in the ear. Physical Examination: Instruments Used for Examining the Ear. An ear examination should be part of any routine physical examination in children, particularly because the problem is so common and may not cause symptoms.The doctor first removes any ear wax (called cerumen) in order to get a clear view of the middle ear. The doctor uses a small flashlight-like instrument called an otoscope to view the ear directly. This is the most important diagnostic step. The otoscope can reveal signs of acute otitis media, bulging eardrum, and blisters. An otoscope is a tool that shines a beam of light to help visualize and examine the condition of the ear canal and eardrum. Examining the ear can reveal the cause of symptoms such as an earache, the ear feeling full, or hearing loss.To determine an ear infection, the doctor should always use a pneumatic otoscope. This device detects any reduction in eardrum motion. It has a rubber bulb attachment that the doctor presses to push air into the ear. Pressing the bulb and observing the action of the air against the eardrum allows the doctor to gauge the eardrum's movement. Some doctors may use tympanometry to evaluate the ear. In this case, a small probe is held to the entrance of the ear canal and forms an airtight seal. While the air pressure is varied, a sound with a fixed tone is directed at the eardrum and its energy is measured. This device can detect fluid in the middle air and also obstruction in the Eustachian tube. A procedure similar to tympanometry, called reflectometry, also measures reflected sound. It can be used to detect fluid and obstruction, but does not require an airtight seal at the canal. Neither tympanometry nor reflectometry are substitutes for the pneumatic otoscope, which allows a direct view of the middle ear.Findings Indicating AOM or OME. A diagnosis of AOM requires all three of the following criteria:History of recent sudden symptoms: Symptoms may include fever, pulling on the ear, pain, irritability, or discharge (otorrhea) from the ear. Presence of fluid in the middle ear. This may be indicated by fullness or bulging of the eardrum or limited mobility. Signs and symptoms of inflammation. These may include redness of the eardrum as well as assessment of the child's discomfort. Ear pain that is severe enough to interfere with sleep may indicate inflammation. AOM (fluid and infection) is often difficult to differentiate from OME (fluid without infection). It is important for a doctor to make this distinction as OME does not require antibiotic treatment. In patients with OME, an air bubble may be visible and the eardrum is often cloudy and very immobile. A scarred, thick, or opaque eardrum may make it difficult for the doctor to distinguish between acute otitis media and OME.Severity of AOM: Acute otitis media is characterized as severe or non-severe.Non-severe AOM: Mild-to-moderate pain, temperature less than 102.2° F (39° C). Severe AOM: Moderate-to-severe pain, temperature of 102.2° F (39° C) or higher.Home Diagnosis: Parents can also use a sonar-like device, such as the EarCheck Monitor, to determine if there is fluid in their child's middle ear. 
EarCheck employs acoustic reflectometry technology which bounces sound waves off the eardrum to assess mobility. When fluid is present behind the middle ear (a symptom of AOM and OME), the eardrum will not be as mobile. The device works like an ear thermometer and is painless. Results indicate the likelihood of the presence of fluid and may help patients decide whether they need to contact their child's doctor.Tympanocentesis: On rare occasions the doctor may need to draw fluid from the ear using a needle for identifying specific bacteria, a procedure called tympanocentesis. This procedure can also relieve severe ear pain. This is most often performed by an ear, nose, and throat (ENT) specialist, and usually only in severe or recurrent cases. In most cases, tympanocentesis is not necessary in order to obtain an accurate enough diagnosis for effective treatment.Determining Hearing Problems: Hearing tests performed by an audiologist are usually recommended for children with persistent otitis media with effusion. A hearing loss below 20 decibels usually indicates problems.Determining Impaired Hearing in Infants and Small Children. Unfortunately, it is very difficult to test children under 2 years old for hearing problems. One way to determine hearing problems in infants is to gauge the baby's language development:At 4 - 6 weeks most babies with normal hearing are making cooing sounds. By around 5 months the child should be laughing out loud and making one-syllable sounds with both a vowel and consonant. Between 6 - 8 months, the infants should be able to make word-like sounds with more than one syllable. Usually starting around 7 months the baby babbles (makes many word-like noises) and should be doing this by 10 months. Around 10 months, the baby is able to identify and use some term for the parent. The baby speaks his or her first word usually by the end of the first year. If a child's progress is significantly delayed beyond these times, a parent should suspect possible hearing problems.Determining Impaired Hearing in Older Children. Hearing loss in older children may be detected by the following behaviors:They may not respond to speech spoken beyond 3 feet away. They may have difficulty following directions. Their vocabulary may be limited. They may have social and behavioral problems.Treatment:Treatments for ear infections cost the U.S. between 3 - 4 billion dollars each year, and many of these treatments, particularly heavy antibiotic use and surgical procedures, are often unnecessary in many children.Experts continue to argue about the best approach for treating ear infections. The major debates rest on the use of antibiotics, surgery, and watchful waiting in both acute otitis media (AOM) and otitis media with effusion (OME).Treatment Guidelines for Acute Otis Media (AOM): In 2004, the American Academy of Pediatrics (AAP) and the American Academy of Family Physicians (AAFP) released updated guidelines for the management and diagnosis of acute otitis media.These guidelines include the following recommendations: Accurate diagnosis of AOM including differentiation from OME. Children less than 6 months of age should receive immediate antibiotic treatment. Children 6 months or older should be treated for pain within the first 24 hours with either acetaminophen or ibuprofen. An initial observation period of 48 - 72 hours is recommended for select children to determine if the infection will resolve on its own without antibiotic treatment. (Most children do improve within 72 hours.) 
For children aged 6 months - 2 years, the criteria for recommending an observation period are an uncertain diagnosis of AOM and a determination that the AOM is not severe. For children older than 2 years, the observation period criteria are non-severe symptoms or an uncertain diagnosis. Severe AOM symptoms include moderate to severe pain and a fever of at least 102.2° Fahrenheit (39° Celsius). (A 2006 Lancet study suggested that antibiotics are only useful in this age group when both ears are affected.) If antibiotics are needed, amoxicillin is recommended as first-line treatment (except in children who are allergic to penicillins). Treatment Guidelines for Otitis Media with Effusion (OME): The American Academy of Pediatrics (AAP), the American Academy of Family Physicians (AAFP), and the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) released updated clinical practice guidelines for OME in 2004. These guidelines include the following treatment recommendations: Watchful Waiting for OME: The child is typically monitored for the first 3 months. Antibiotics are not helpful for most patients with OME. For one, the condition resolves without treatment in nearly all children, especially those whose OME followed an acute ear infection. Approximately 75 - 90% of OME cases that result from AOM resolve within 3 months. If OME lasts longer than 3 months, a hearing test should be conducted. Even if OME lasts longer than 3 months, the condition may resolve on its own and intervention may not be necessary. The doctor will re-evaluate the child at periodic intervals to determine if there is a risk of hearing loss. Drug Treatment: Antibiotics and corticosteroids do not help and are not recommended for routine management of OME. These drugs are not effective for OME, either when used alone or in combination. A 2006 study suggested that antihistamines and decongestants may cause more harm than good by provoking side effects such as stomach upset and drowsiness. At present, there is no compelling evidence to indicate that allergy treatment can assist with OME management, nor has a causal relationship between allergies and OME been established. Surgery: Children may be considered candidates for surgery if they have OME lasting longer than 4 months that is accompanied by hearing loss; OME that is persistent or recurrent (even if there is no hearing loss) and may put the child at risk for developmental delays or structural damage to the ear; or OME accompanied by structural damage to the eardrum or middle ear. The decision to pursue surgery must be made on an individual basis. Tympanostomy tube insertion is the first choice for surgical intervention. Approximately 20 - 50% of children who undergo this procedure may have an OME relapse and require additional surgery. Adenoidectomy plus myringotomy, with or without tube insertion, is recommended as a repeat surgical procedure. Tube insertion may be advised for children younger than 4 years of age. Adenoidectomy is not recommended as an initial procedure unless some other condition (chronic sinusitis, nasal obstruction, adenoiditis) is present. Neither myringotomy alone nor tonsillectomy is recommended for OME treatment. Medicine and medications: Until recently, nearly every American child with an ear infection who visited a doctor received antibiotics. 
In one region of the U.S., more than 70% of children received antibiotics before they were 7 months old, and the most common reason for these medications was acute otitis media.Major studies now indicate that antibiotics are unnecessary in most cases of acute otitis media. Between 80 - 90% of all children with uncomplicated ear infections recover within a week without antibiotics. Antibiotics are rarely recommended for otitis media with effusion.Antibiotic Resistance. The intense and widespread use of antibiotics is leading to a serious global problem of bacterial resistance to common antibiotics. In the U.S., nearly a quarter of S. pneumoniae are currently resistant to at least three antibiotics. High rates of resistance strains are even being observed in infants. In general, regions and institutions with the highest rate of resistance are those in which antibiotics are the most heavily prescribed.Because of the high rate of antibiotic resistance, and the fact that non-severe AOM usually resolves without antibiotics, many pediatric guidelines recommend a “watchful waiting” period before antibiotics are prescribed. (See "Watchful Waiting" in the Treatment section of this report.) Current guidelines released by the American Academy of Pediatrics and the American Academy of Family Physicians recommend an initial observation period of 48 - 72 hours for select children. Pain relief can initially be given with acetaminophen (Tylenol), ibuprofen (Advil), or topical benzocaine drops.If there is no improvement or symptoms worsen, parents can schedule an appointment with the child's doctor to determine if antibiotics are needed. (Parents should contact the doctor within the first 24 hours if their child is 6 months or younger and has fever or other severe symptoms.) Another option is to ask the doctor for a Safety Net Antibiotic Prescription (SNAP) that can be filled if symptoms do not improve within 48 - 72 hours.Antibiotic Regimens for Acute Otitis Media (AOM): When antibiotics are needed, a number of different classes are available for treating acute ear infections. Amoxicillin is a penicillin antibiotic and the drug of first choice. Other antibiotics are available for children who are allergic to penicillin or who do not respond within 2 - 3 days.Duration. If a child needs antibiotics for acute otitis media, experts recommend they be taken for the following periods of time:A 10-day course of antibiotics is usually recommended for children younger than 6 years of age, and for those with severe AOM. Antibiotic therapy for 5 - 7 days is recommended for children 6 years of age or older with mild-to-moderate symptoms. Parents should be sure their child finishes the entire course of therapy. Failure to finish is a major factor in the growth of bacterial strains that are resistant to antibiotics.What to Expect. Earaches usually resolve within 8 - 24 hours after taking an antibiotic, although about 10% of children who are treated do not respond. This may occur when a virus is present or if the bacteria causing the ear infection is resistant to the prescribed antibiotic. A different antibiotic may be needed.In some children whose treatment is successful, fluid will still remain in the middle ear for weeks or months, even after the infection has resolved. During that period, children may have some hearing problems, but eventually the fluid almost always drains away. 
Antibiotics should not be used to treat residual fluid. Specific Antibiotics Used for Acute Otitis Media (AOM): The selection of an antibiotic is determined in part by the severity of the child's condition as well as by a history of response or non-response to antibiotic therapy. Treatment decisions take into account whether the child's condition is severe or non-severe. Amoxicillin is generally recommended for first-line treatment of AOM. The combination drug amoxicillin-clavulanate is prescribed for patients who have severe pain or a fever higher than 102.2° F (39° C). Other drug classes may be prescribed if a child is allergic to penicillins or does not respond to the initial therapy. The following treatment guidelines provide general recommendations based on the severity of a child's AOM. First-line treatment for non-severe AOM: Amoxicillin 80-90 mg/kg per day orally. Amoxicillin is a penicillin antibiotic. If the patient has an allergy or a history of non-response to penicillin drugs, one of the following antibiotics may be prescribed: Azithromycin or clarithromycin. These drugs are in the macrolide class and are administered orally. Cefdinir, cefuroxime, or cefpodoxime. These drugs are classified as cephalosporins and are taken by mouth. They may cause reactions in penicillin-allergic patients. If the patient does not respond to amoxicillin or alternative antibiotic drugs after 48 - 72 hours, one of the following drugs may be prescribed: Amoxicillin-clavulanate, clindamycin, or ceftriaxone. Ceftriaxone is injected intramuscularly; the other two drugs are administered orally. Each of these drugs is a different type of antibiotic. Amoxicillin-clavulanate (Augmentin) is classified as a penicillin; ceftriaxone (Rocephin) is a cephalosporin; clindamycin (Cleocin) is a lincosamide. First-line treatment for severe AOM: Amoxicillin-clavulanate (Augmentin). This antibiotic is known as an augmented penicillin. It works against a wide spectrum of bacteria and is administered orally. Second-line treatment for severe AOM: Ceftriaxone. Ceftriaxone (Rocephin) is an injectable cephalosporin that may be prescribed as an alternative to amoxicillin-clavulanate, especially for children who have vomiting or other conditions that hamper oral administration. Tympanocentesis or clindamycin. Patients with severe AOM who have failed to respond to amoxicillin-clavulanate after 48 - 72 hours may require the withdrawal of fluid from the ear (tympanocentesis) in order to identify the bacterial strain causing the infection. If tympanocentesis cannot be performed, clindamycin may be prescribed orally to treat penicillin-resistant pathogens that have not responded to prior drug therapy. Side Effects of Antibiotics: The most common side effects of nearly all antibiotics are gastrointestinal problems, including cramps, nausea, vomiting, and diarrhea. This can be a significant problem in infants and small children. One study reported that giving such children a soy-based formula that contained fiber (Isomil DF) was helpful in reducing these side effects. Amoxicillin use during infancy may lead to enamel defects and discolorations of permanent teeth. Allergic reactions can also occur with all antibiotics but are most common with medications derived from penicillin or sulfa. These reactions can range from mild skin rashes to rare but severe, even life-threatening anaphylactic shock. Some drugs, including certain over-the-counter medications, interact with antibiotics. 
Parents should tell the doctor about all medications their children are taking. DISCLAIMER: This information should not substitute for seeking responsible, professional medical care.
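The age, severity, and allergy criteria summarized in the treatment and medication sections above can be restated schematically. The Python sketch below is purely illustrative and is not clinical software: the thresholds and drug choices are copied from the guideline summary in this article, and all function and parameter names are invented for the example.

# Purely illustrative restatement of the guideline criteria summarized above.
# Not clinical software; all names are invented for this sketch.

def is_severe(temp_f: float, significant_pain: bool) -> bool:
    """Severe AOM: moderate-to-severe pain or fever of 102.2 F (39 C) or higher."""
    return significant_pain or temp_f >= 102.2

def initial_approach(age_months: int, temp_f: float, significant_pain: bool,
                     diagnosis_certain: bool) -> str:
    """Suggest observation vs. antibiotics per the 2004 AAP/AAFP summary above."""
    severe = is_severe(temp_f, significant_pain)
    if age_months < 6:
        return "immediate antibiotic treatment"
    if age_months < 24:
        # 6 months - 2 years: observe only when diagnosis is uncertain AND non-severe
        if not diagnosis_certain and not severe:
            return "observe 48-72 hours, treat pain, re-assess"
        return "antibiotic treatment"
    # Older than 2 years: observe when symptoms are non-severe OR diagnosis is uncertain
    if not severe or not diagnosis_certain:
        return "observe 48-72 hours, treat pain, re-assess"
    return "antibiotic treatment"

def first_line_antibiotic(severe: bool, penicillin_allergy: bool) -> str:
    """First-line choice when antibiotics are needed, as described above."""
    if penicillin_allergy:
        return "macrolide (azithromycin/clarithromycin) or an oral cephalosporin"
    return "amoxicillin-clavulanate" if severe else "amoxicillin 80-90 mg/kg per day"

def course_length(age_years: int, severe: bool) -> str:
    """Duration: 10 days if under 6 years old or severe AOM; otherwise 5-7 days."""
    return "10 days" if age_years < 6 or severe else "5-7 days"

# Example: a 3-year-old with a certain diagnosis, severe pain, and a 103 F fever
print(initial_approach(36, 103.0, True, True))                        # antibiotic treatment
print(first_line_antibiotic(severe=True, penicillin_allergy=False))   # amoxicillin-clavulanate

As the guidelines above stress, the observation option always assumes pain control and re-evaluation within 48 - 72 hours, and the final decision rests with the treating clinician.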
History of Germany The concept of Germany as a distinct region in central Europe can be traced to Roman commander Julius Caesar, who referred to the unconquered area east of the Rhine as Germania, thus distinguishing it from Gaul (France), which he had conquered. The victory of the Germanic tribes in the Battle of the Teutoburg Forest (AD 9) prevented annexation by the Roman Empire, although the Roman provinces of Germania Superior and Germania Inferior were established along the Rhine. Following the Fall of the Western Roman Empire, the Franks conquered the other West Germanic tribes. When the Frankish Empire was divided among Charlemagne's heirs in 843, the eastern part became East Francia. In 962, Otto I became the first emperor of the Holy Roman Empire, the medieval German state. In the High Middle Ages, the regional dukes, princes and bishops gained power at the expense of the emperors. Martin Luther led the Protestant Reformation against the Catholic Church after 1517, as the northern states became Protestant, while the southern states remained Catholic. The two parts of the Holy Roman Empire clashed in the Thirty Years' War (1618–1648), which was ruinous to the twenty million civilians living in both parts. The war brought tremendous destruction to Germany; more than a quarter of the population and half of the male population in the German states were killed. The war's end in 1648 marked the effective end of the Holy Roman Empire as a unified state and the beginning of the modern nation-state system, with Germany divided into numerous independent states, such as Prussia, Bavaria and Saxony. After the French Revolution and the Napoleonic Wars (1803–1815), feudalism fell away and liberalism and nationalism clashed with reaction. The 1848 March Revolution failed. The Industrial Revolution modernized the German economy, led to the rapid growth of cities and to the emergence of the Socialist movement in Germany. Prussia, with its capital Berlin, grew in power. German universities became world-class centers for science and the humanities, while music and the arts flourished. The unification of Germany was achieved under the leadership of Chancellor Otto von Bismarck with the formation of the German Empire in 1871. This settled the question of the Kleindeutsche Lösung, the "small Germany" solution (Germany without Austria), versus the Großdeutsche Lösung, the "greater Germany" solution (Germany with Austria), in favor of the former. The new Reichstag, an elected parliament, had only a limited role in the imperial government. Germany joined the other powers in colonial expansion in Africa and the Pacific. Germany was the dominant power on the continent; by 1900, its rapidly expanding industrial economy had surpassed Britain's, enabling a naval arms race and an aggressive foreign policy. Germany led the Central Powers in World War I (1914–1918) against France, Great Britain, Russia and (by 1917) the United States. Defeated and partly occupied, Germany was forced to pay war reparations by the Treaty of Versailles and was stripped of its colonies as well as of territory ceded to Poland and of Alsace-Lorraine. The German Revolution of 1918–19 deposed the emperor and the various kings and princes, leading to the establishment of the Weimar Republic, an unstable parliamentary democracy. In the early 1930s, the worldwide Great Depression hit Germany hard, as unemployment soared and people lost confidence in the government. 
In 1933, the Nazi party under Adolf Hitler came to power and quickly established a totalitarian regime. Political opponents were killed or imprisoned. Nazi Germany's foreign policy aimed to create a Greater Germany; this saw the remilitarization of the Rhineland in 1936, the annexation of Austria in the Anschluss and of parts of Czechoslovakia under the Munich Agreement in 1938 (with further Czechoslovak territory annexed in 1939), and the invasion of Poland on 1 September 1939, which initiated the Second World War. After forming a pact with the Soviet Union in 1939, Hitler and Stalin divided Eastern Europe. After the "Phoney War", the German blitzkrieg of spring 1940 swept through Scandinavia, the Low Countries and France, giving Germany control of nearly all of Western Europe. Only the British Commonwealth and Empire stood opposed. Hitler invaded the Soviet Union in June 1941. In Germany, but predominantly in the German-occupied areas, the systematic genocide program known as The Holocaust killed six million Jews, as well as some five million others, including Poles, Romani, and Soviet citizens. In 1942, the German invasion of the Soviet Union faltered, and after the United States had entered the war, Britain became the base for massive Anglo-American bombings of German cities. Germany fought the war on multiple fronts through 1942–1944; however, following the Allied invasion of Normandy (June 1944), the German army was pushed back on all fronts until its final collapse in May 1945. Under occupation by the Allies, German territories were split up, denazification took place, and the Cold War resulted in the division of the country into democratic West Germany and communist East Germany. Millions of ethnic Germans fled from Communist areas into West Germany, which experienced rapid economic expansion and became the dominant economy in Western Europe. West Germany was rearmed in the 1950s under the auspices of NATO, but without access to nuclear weapons. The Franco-German friendship became the basis for the political integration of Western Europe in the European Union. In 1989 the Berlin Wall was opened, the Eastern Bloc collapsed, and East Germany was reunited with West Germany in 1990. In 1998–1999, Germany was one of the founding countries of the eurozone. Germany remains one of the economic powerhouses of Europe, contributing about one quarter of the eurozone's annual gross domestic product. In the early 2010s, Germany played a critical role in trying to resolve the escalating euro crisis, especially with regard to Greece and other Southern European nations. In the middle of the decade, the country faced the European migrant crisis, as the main receiver of asylum seekers from Syria and other troubled regions. 
For more events, see Timeline of German history. Prehistory: The discovery of the Mauer 1 mandible in 1907 shows that ancient humans were present in Germany at least 600,000 years ago. The oldest complete hunting weapons ever found anywhere in the world were discovered in a coal mine in Schöningen, Germany, in 1995, where three 380,000-year-old wooden javelins 6-7.5 feet (1.8-2.3 meters) long were unearthed. The Neander valley in Germany was the location where the first ever non-modern human fossil was discovered and recognised in 1856; the new species of human was named Neanderthal man. The Neanderthal 1 fossils are now known to be 40,000 years old. Evidence of modern humans of a similar age has been found in caves in the Swabian Jura near Ulm. The finds include 42,000-year-old bird bone and mammoth ivory flutes, which are the oldest musical instruments ever found, the 40,000-year-old Ice Age Löwenmensch figurine, which is the oldest uncontested figurative art ever discovered, and the 35,000-year-old Venus of Hohle Fels, which is the oldest uncontested human figurative art ever discovered. Germanic tribes, 750 BC – 768 AD Migration and conquest The ethnogenesis of the Germanic tribes is assumed to have occurred during the Nordic Bronze Age, or at the latest during the Pre-Roman Iron Age. From their homes in southern Scandinavia and northern Germany the tribes began expanding south, east and west in the 1st century BC, coming into contact with the Celtic tribes of Gaul, as well as with Iranian, Baltic, and Slavic cultures in Central/Eastern Europe. In the first years of the 1st century AD, Roman legions conducted a long campaign in Germania, the area north of the Upper Danube and east of the Rhine, in an attempt to expand the Empire's frontiers and to shorten its frontier line. Rome subdued several Germanic tribes, such as the Cherusci. The tribes became familiar with Roman tactics of warfare while maintaining their tribal identity. In 9 AD a Cherusci chieftain known to the Romans as Arminius defeated a Roman army in the Battle of the Teutoburg Forest, a victory credited with stopping the Roman advance into Germanic territories and marking the beginning of recorded German history. That part of the territory of modern Germany that lay east of the Rhine remained outside the Roman Empire. By AD 100, the time of Tacitus's Germania, Germanic tribes had settled along the Roman frontier along the Rhine and the Danube (the Limes Germanicus), occupying most of the area of modern Germany; however, imperial Rome organised territory later included in the modern states of Austria, Baden-Württemberg, southern Bavaria, southern Hesse, Saarland and the Rhineland as Roman provinces (Noricum, Raetia, and Germania). 
The Roman provinces in western Germany, Germania Inferior (with the capital situated at Colonia Claudia Ara Agrippinensium, modern Cologne) and Germania Superior (with its capital at Mogontiacum, modern Mainz), were formally established in 85 AD, after a long period of military occupation beginning in the reign of the Roman emperor Augustus (27 BC - 14 AD). The 3rd century saw the emergence of a number of large West Germanic tribes: the Alamanni, Franks, Bavarii, Chatti, Saxons, Frisii, Sicambri, and Thuringii. Around 260 the Germanic peoples broke through the limes and the Danube frontier into Roman-controlled lands. Seven large German-speaking tribes – the Visigoths, Ostrogoths, Vandals, Burgundians, Lombards, Saxons and Franks – moved west and witnessed the decline of the Roman Empire and the transformation of the old Western Roman Empire. Christianity was spread to western Germany during the Roman era, and Christian religious structures such as the Aula Palatina of Trier were built during the reign of Constantine I (r. (306-337 AD). At the end of the 4th century the Huns invaded the unoccupied part of present-day Germany and the Migration Period began. Hunnic hegemony over Germania lasted until the death of Attila's son Dengizich in 469. Stem Duchies and Marches Stem duchies (tribal duchies) in Germany originated as the areas of the Germanic tribes of a given region. The concept of such duchies survived especially in the areas which in the mid-9th century would become part of East Francia (for example: Bavaria, Swabia, Saxony, Franconia, Thuringia) rather than further west in Middle Francia (for example: Burgundy, Lorraine ). In the 5th century, the Völkerwanderung (or Germanic migrations) brought a number of "barbarian" tribes into the failing Roman Empire. Tribes that became stem duchies were originally the Alamanni, the Thuringii, the Saxons, the Franks, the Burgundians, and the Rugii. In contrast to later duchies, these entities did not have strictly delineated administrative boundaries, but approximated the area of settlement of major Germanic tribes. Over the next few centuries, some tribes warred, migrated, and merged. Eventually the Franks subjugated all these tribes in Germania. However, remnants of several stem duchies survive today as states or regions in modern Western Europe countries: German states such as Bavaria and Saxony, German regions like Swabia, and French régions such as of Burgundy/Franche-Comté and Lorraine. In the east, successive rulers of the German lands founded a series of border counties or marches. To the north, these included Lusatia, the North March (which would become Brandenburg and the heart of the future Prussia), and the Billung March. In the south, the marches included Carniola, Styria, and the March of Austria that would become Austria. After the fall of the Western Roman Empire in the 5th century, the Franks, like other post-Roman Western Europe, emerged as a tribal confederacy in the Rhine-Weser region referred to as "Austrasia," now Franconia. They absorbed much former Roman territory as they spread west into Gaul beginning in 250, unlike the Alamanni to their south in Swabia. By 500, the Frankish king Clovis I, of the Merovingian dynasty, had united the Frankish tribes and ruled all of Gaul, and was proclaimed king some time from 509 to 511. 
Clovis also, contrary to the tradition of Germanic rulers of the time, was baptized directly into Roman Catholicism and not Arianism, and his successors would work closely with papal missionaries, among them Saint Boniface. The faith of the Franks, the vast size of Francia, and the Franks' control of the passes through the Alps led to an alliance between the Merovingian realm, which by 750 extended over Gaul and north-western Germany and included Swabia, Burgundy (and, by extension, western Switzerland), and the Pope in Rome against the Lombards, who now posed the greatest threat to the Holy See. A papal envoy was sent to Charles Martel, Mayor of the Palace, following his victory at the Battle of Tours (732), though this alliance would lapse with Charles' death and be renewed after the Frankish Civil War. The Merovingian kings of the Germanic Franks conquered northern Gaul in 486 AD. Swabia became a duchy under the Frankish Empire in 496, following the Battle of Tolbiac; in 530 the Saxons and the Franks destroyed the Kingdom of Thuringia. In the 5th and 6th centuries the Merovingian kings conquered several other Germanic tribes and kingdoms. King Chlothar I (reigned 558–561) ruled the greater part of what is now Germany and made expeditions into Saxony, while the southeast of modern Germany remained under the influence of the Ostrogoths. Saxons inhabited the area extending down to the Unstrut River. The Merovingians placed the various regions of their Frankish Empire under the control of semi-autonomous dukes, either Franks or local rulers. Frankish colonists were encouraged to move to the newly conquered territories. While allowed to preserve their own laws, the local Germanic tribes faced pressure to adopt non-Arian Christianity. The territories which would later become parts of modern Germany came under the region of Austrasia (meaning "eastern land"), the northeastern portion of the Kingdom of the Merovingian Franks. As a whole, Austrasia comprised parts of present-day France, Germany, Belgium, Luxembourg and the Netherlands. After the death of the Frankish king Clovis I in 511, his four sons partitioned his kingdom, including Austrasia. Authority over Austrasia passed back and forth from autonomy to royal subjugation, as successive Merovingian kings alternately united and subdivided the Frankish lands. In 718 Charles Martel, the Frankish Mayor of the Palace, made war against Saxony because of its help for the Neustrians. His son Carloman started a new war against Saxony in 743, because the Saxons gave aid to Duke Odilo of Bavaria. In 751 Pippin III, Mayor of the Palace under the Merovingian king, himself assumed the title of king and was anointed by the Church. The Frankish kings were now set up as protectors of the pope, and Charles the Great (Charlemagne, king of the Franks from 768 to 814) launched a decades-long military campaign against the Franks' heathen rivals, the Saxons and the Avars. The campaigns and insurrections of the Saxon Wars lasted from 772 to 804. The Franks eventually overwhelmed the Saxons and Avars, forcibly converted the people to Christianity, and annexed their lands to the Carolingian Empire. Foundation of the Holy Roman Empire After the death of Frankish king Pepin the Short in 768, his oldest son Charlemagne ("Charles the Great") consolidated his power over the kingdom and expanded it. 
In 773-74, Charlemagne ended two centuries of Lombard royal rule with the Siege of Pavia and installed himself as King of the Lombards; following a rebellion in 776, loyal Frankish nobles replaced the old Lombard elite. The next 30 years of his reign were spent ruthlessly strengthening his power in Francia and conquering the territories of all west Germanic peoples, including the Saxons and the Baiuvarii (Bavarians). On Christmas Day, 800 AD, Charlemagne was crowned Emperor in Rome by Pope Leo III. Fighting among Charlemagne's grandchildren caused the Carolingian empire to be partitioned into three parts in 843. The German region developed out of the East Frankish kingdom, East Francia. From 919 to 936, the Germanic peoples – Franks, Saxons, Swabians, and Bavarians – were united under Henry the Fowler, Duke of Saxony, who took the title of king. Imperial strongholds, called Kaiserpfalzen, became economic and cultural centers, of which Aachen was the most famous. Maps of the period show the Kingdom of Germany within the Holy Roman Empire and within Europe circa 1004, after the incorporation of the Duchy of Bohemia, and the Holy Roman Empire at its greatest territorial extent during the Hohenstaufen dynasty in the early and middle 13th century. Otto the Great In 936, Otto I was crowned as king at Aachen; his coronation as emperor by Pope John XII at Rome in 962 inaugurated what became later known as the Holy Roman Empire, which came to be identified with Germany. Otto strengthened the royal authority by re-asserting the old Carolingian rights over ecclesiastical appointments. Otto wrested from the nobles the powers of appointment of the bishops and abbots, who controlled large land holdings. Additionally, Otto revived the old Carolingian program of appointing missionaries in the border lands. Otto continued to support celibacy for the higher clergy, so ecclesiastical appointments never became hereditary. By granting land to the abbots and bishops he appointed, Otto actually made these bishops into "princes of the Empire" (Reichsfürsten); in this way, Otto was able to establish a national church. Outside threats to the kingdom were contained with the decisive defeat of the Hungarian Magyars at the Battle of Lechfeld in 955. The Slavs between the Elbe and the Oder rivers were also subjugated. Otto marched on Rome, drove John XII from the papal throne, and for years controlled the election of the pope, setting a firm precedent for imperial control of the papacy. During the reign of Conrad II's son, Henry III (1039 to 1056), the empire supported the Cluniac reforms of the Church, the Peace of God, prohibition of simony (the purchase of clerical offices), and required celibacy of priests. Imperial authority over the Pope reached its peak. In the Investiture Controversy, which began between Henry IV and Pope Gregory VII over appointments to ecclesiastical offices, the emperor was compelled to submit to the Pope at Canossa in 1077, after having been excommunicated. In 1122 a temporary reconciliation was reached between Henry V and the Pope with the Concordat of Worms. 
The consequences of the investiture dispute were a weakening of the Ottonian church (Reichskirche), and a strengthening of the Imperial secular princes. The time between 1096 and 1291 was the age of the crusades. Knightly religious orders were established, including the Knights Templar, the Knights of St John (Knights Hospitaller), and the Teutonic Order. The term sacrum imperium (Holy Empire) was first used under Friedrich I, documented first in 1157, but the words Sacrum Romanum Imperium, Holy Roman Empire, were only combined in July 1180 and would never consistently appear on official documents from 1254 onwards. Long-distance trade in the Baltic intensified, as the major trading towns became drawn together in the Hanseatic League, under the leadership of Lübeck. The Hanseatic League was a business alliance of trading cities and their guilds that dominated trade along the coast of Northern Europe. Each of the Hanseatic cities had its own legal system and a degree of political autonomy. The chief cities were Cologne on the Rhine River, Hamburg and Bremen on the North Sea, and Lübeck on the Baltic. The League flourished from 1200 to 1500, and continued with lesser importance after that. The German colonisation and the chartering of new towns and villages began into largely Slav-inhabited territories east of the Elbe, such as Bohemia, Silesia, Pomerania, and Livonia. Beginning in 1226, the Teutonic Knights began their conquest of Prussia. The native Baltic Prussians were conquered and Christianized by the Knights with much warfare, and numerous German towns were established along the eastern shore of the Baltic Sea. Church and state Henry V (1086–1125), great-grandson of Conrad II, became Holy Roman Emperor in 1106 in the midst of a civil war. Hoping to gain complete control over the church inside the Empire, Henry V appointed Adalbert of Saarbrücken as the powerful archbishop of Mainz in 1111. Adalbert began to assert the powers of the Church against secular authorities, that is, the Emperor. This precipitated the "Crisis of 1111", part of the long-term Investiture Controversy. In 1137 the magnates turned back to the Hohenstaufen family for a candidate, Conrad III. Conrad III tried to divest Henry the Proud of his two duchies – Bavaria and Saxony – leading to war in southern Germany as the Empire divided into two factions. The first faction called themselves the "Welfs" or "Guelphs" after Henry the Proud's family, which was the ruling dynasty in Bavaria; the other faction was known as the "Waiblings." In this early period, the Welfs generally represented ecclesiastical independence under the papacy plus "particularism" (a strengthening of the local duchies against the central imperial authority). The Waiblings, on the other hand, stood for control of the Church by a strong central Imperial government. Between 1152 and 1190, during the reign of Frederick I (Barbarossa), of the Hohenstaufen dynasty, an accommodation was reached with the rival Guelph party by the grant of the duchy of Bavaria to Henry the Lion, duke of Saxony. Austria became a separate duchy by virtue of the Privilegium Minus in 1156. Barbarossa tried to reassert his control over Italy. In 1177 a final reconciliation was reached between the emperor and the Pope in Venice. From 1184 to 1186, the Hohenstaufen empire under Frederick I Barbarossa reached its peak in the Reichsfest (imperial celebrations) held at Mainz and the marriage of his son Henry in Milan to the Norman princess Constance of Sicily. 
The power of the feudal lords was undermined by the appointment of "ministerials" (unfree servants of the Emperor) as officials. Chivalry and court life flowered, leading to a development of German culture and literature (see Wolfram von Eschenbach). Between 1212 and 1250, Frederick II established a modern, professionally administered state from his base in Sicily. He resumed the conquest of Italy, leading to further conflict with the Papacy. In the Empire, extensive sovereign powers were granted to ecclesiastical and secular princes, leading to the rise of independent territorial states. The struggle with the Pope sapped the Empire's strength, as Frederick II was excommunicated three times. After his death, the Hohenstaufen dynasty fell, followed by an interregnum during which there was no Emperor. The failure of negotiations between Emperor Louis IV and the papacy led in 1338 to the declaration at Rhense by six electors to the effect that election by all or the majority of the electors automatically conferred the royal title and rule over the empire, without papal confirmation. As a result, the monarch was no longer subject to papal approbation and became increasingly dependent on the favour of the electors. Between 1346 and 1378 Emperor Charles IV of Luxembourg, king of Bohemia, sought to restore the imperial authority. The Golden Bull of 1356 stipulated that in future the emperor was to be chosen by four secular electors and three spiritual electors. The secular electors were the King of Bohemia, the Count Palatine of the Rhine, the Duke of Saxony, and the Margrave of Brandenburg; the three spiritual electors were the Archbishops of Mainz, Trier, and Cologne. Around 1350, Germany and almost the whole of Europe were ravaged by the Black Death. Jews were persecuted on religious and economic grounds; many fled to Poland. The Black Death is estimated to have killed 30–60 percent of Europe's population in the 14th century. Change and reform After the disasters of the 14th century – war, plague, and schism – early-modern European society gradually came into being as a result of economic, religious, and political changes. A money economy arose which provoked social discontent among knights and peasants. Gradually, a proto-capitalistic system evolved out of feudalism. The Fugger family gained prominence through commercial and financial activities and became financiers to both ecclesiastical and secular rulers. The knightly classes found their monopoly on arms and military skill undermined by the introduction of mercenary armies and foot soldiers. Predatory activity by "robber knights" became common. From 1438 the Habsburgs, who controlled most of the southeast of the Empire (more or less modern-day Austria and Slovenia, and Bohemia and Moravia after the death of King Louis II in 1526), maintained a constant grip on the position of the Holy Roman Emperor until 1806 (with the exception of the years between 1742 and 1745). This situation, however, gave rise to increased disunity among the Holy Roman Empire's territorial rulers and prevented sections of the country from coming together to form nations in the manner of France and England. During his reign from 1493 to 1519, Maximilian I tried to reform the Empire. An Imperial supreme court (Reichskammergericht) was established, imperial taxes were levied, and the power of the Imperial Diet (Reichstag) was increased. The reforms, however, were frustrated by the continued territorial fragmentation of the Empire.
Towns and cities The German lands had a population of about 5 or 6 million. The great majority were farmers, typically in a state of serfdom under the control of nobles and monasteries. A few towns were starting to emerge. From 1100, new towns were founded around imperial strongholds, castles, bishops' palaces, and monasteries. The towns began to establish municipal rights and liberties (see German town law). Several cities such as Cologne became Imperial Free Cities, which did not depend on princes or bishops, but were immediately subject to the Emperor. The towns were ruled by patricians: merchants carrying on long-distance trade. Craftsmen formed guilds, governed by strict rules, which sought to obtain control of the towns; a few were open to women. Society was divided into sharply demarcated classes: the clergy, physicians, merchants, various guilds of artisans, and peasants; full citizenship was not available to paupers. Political tensions arose from issues of taxation, public spending, regulation of business, and market supervision, as well as the limits of corporate autonomy. Cologne's central location on the Rhine river placed it at the intersection of the major trade routes between east and west and was the basis of Cologne's growth. The economic structures of medieval and early modern Cologne were characterized by the city's status as a major harbor and transport hub upon the Rhine. It was the seat of the archbishops, who ruled the surrounding area and (from 1248 to 1880) built the great Cologne Cathedral, with sacred relics that made it a destination for many worshippers. By 1288 the city had secured its independence from the archbishop (who relocated to Bonn), and was ruled by its burghers. From the early Medieval period and continuing through to the 18th century, Germanic law assigned women to a subordinate and dependent position relative to men. Salic (Frankish) law, on which the laws of the German lands would be based, placed women at a disadvantage with regard to property and inheritance rights. Germanic widows required a male guardian to represent them in court. Unlike Anglo-Saxon law or the Visigothic Code, Salic law barred women from royal succession. Social status was based on military and biological roles, a reality demonstrated in rituals associated with newborns, when female infants were given a lesser value than male infants. The use of physical force against wives was condoned until the 18th century in Bavarian law. Some women of means asserted their influence during the Middle Ages, typically in royal court or convent settings. Hildegard of Bingen, Gertrude the Great, Elisabeth of Bavaria (1478–1504), and Argula von Grumbach are among the women who pursued independent accomplishments in fields as diverse as medicine, music composition, religious writing, and government and military politics. Science and culture Benedictine abbess Hildegard von Bingen (1098–1179) wrote several influential theological, botanical, and medicinal texts, as well as letters, liturgical songs, poems, and arguably the oldest surviving morality play, while supervising brilliant miniature illuminations. About 100 years later, Walther von der Vogelweide (c. 1170 – c. 1230) became the most celebrated of the Middle High German lyric poets. Around 1439, Johannes Gutenberg of Mainz used movable-type printing to issue the Gutenberg Bible. He is credited with inventing the printing press, thereby starting the Printing Revolution.
Cheap printed books and pamphlets played central roles in the spread of the Reformation and the Scientific Revolution. Around the transition from the 15th to the 16th century, Albrecht Dürer from Nuremberg established his reputation across Europe as a painter, printmaker, mathematician, engraver, and theorist while still in his twenties, and he became one of the most important figures of the Northern Renaissance. The addition Nationis Germanicæ (of German Nation) to the emperor's title appeared first in the 15th century: in a 1486 law decreed by Frederick III and in 1512 in reference to the Imperial Diet in Cologne by Maximilian I. By then, the emperors had lost their influence in Italy and Burgundy. In 1525, the Heilbronn reform plan – the most advanced document of the German Peasants' War (Deutscher Bauernkrieg) – referred to the Reich as von Teutscher Nation (of German nation). Early modern Germany - See List of states in the Holy Roman Empire for subdivisions and the political structure In the early 16th century there was much discontent occasioned by abuses such as indulgences in the Catholic Church, and a general desire for reform. In 1517 the Reformation began with the publication of Martin Luther's 95 Theses; he posted them in the town square and gave copies of them to German nobles, but it is debated whether he nailed them to the church door in Wittenberg as is commonly said. The list detailed 95 assertions Luther believed to show corruption and misguidance within the Catholic Church. One often cited example, though perhaps not Luther's chief concern, is a condemnation of the selling of indulgences; another prominent point within the 95 Theses is Luther's disagreement both with the way in which the higher clergy, especially the pope, used and abused power, and with the very idea of the pope. In 1521 Luther was outlawed at the Diet of Worms. But the Reformation spread rapidly, helped by the Emperor Charles V's wars with France and the Turks. Hiding in the Wartburg Castle, Luther translated the New Testament into German, laying a basis for the modern German language. Notably, Luther wrote in a dialect which had only minor importance in the German language of that time; after the publication of his Bible, his dialect displaced the others and evolved into modern standard German. In 1524 the German Peasants' War broke out in Swabia, Franconia and Thuringia against ruling princes and lords, following the preaching of Reformers. But the revolts, which were assisted by war-experienced noblemen like Götz von Berlichingen and Florian Geyer (in Franconia), and by the theologian Thomas Müntzer (in Thuringia), were soon repressed by the territorial princes. As many as 100,000 German peasants were massacred during the revolt. With the protestation of the Lutheran princes at the Imperial Diet of Speyer (1529) and rejection of the Lutheran "Augsburg Confession" at Augsburg (1530), a separate Lutheran church emerged. From 1545 the Counter-Reformation began in Germany. The main force was provided by the Jesuit order, founded by the Spaniard Ignatius of Loyola. Central and northeastern Germany were by this time almost wholly Protestant, whereas western and southern Germany remained predominantly Catholic. In 1547, Holy Roman Emperor Charles V defeated the Schmalkaldic League, an alliance of Protestant rulers.
The Peace of Augsburg in 1555 brought recognition of the Lutheran faith. But the treaty also stipulated that the religion of a state was to be that of its ruler (Cuius regio, eius religio). Thirty Years' War, 1618–1648 From 1618 to 1648 the Thirty Years' War raged in the Holy Roman Empire. Its causes were the conflicts between Catholics and Protestants, the efforts by the various states within the Empire to increase their power, and the Catholic Emperor's attempt to achieve the religious and political unity of the Empire. The immediate occasion for the war was the uprising of the Protestant nobility of Bohemia against the emperor, but the conflict was widened into a European war by the intervention of King Christian IV of Denmark (1625–29), Gustavus Adolphus of Sweden (1630–48) and France under Cardinal Richelieu. Germany became the main theatre of war and the scene of the final conflict between France and the Habsburgs for predominance in Europe. The fighting often was out of control, with marauding bands of hundreds or thousands of starving soldiers spreading plague, plunder, and murder. The armies that were under control moved back and forth across the countryside year after year, levying heavy taxes on cities, and seizing the animals and food stocks of the peasants without payment. The enormous social disruption over three decades caused a dramatic decline in population because of killings, disease, crop failures, declining birth rates and random destruction, and the out-migration of terrified people. One estimate shows a 38% drop from 16 million people in 1618 to 10 million by 1650, while another shows "only" a 20% drop from 20 million to 16 million. The Altmark and Württemberg regions were especially hard hit. It took generations for Germany to fully recover. The war ended in 1648 with the Peace of Westphalia. Alsace was permanently lost to France, Pomerania was temporarily lost to Sweden, and the Netherlands officially left the Empire. Imperial power declined further as the states' rights were increased. Culture and literacy The German population reached about twenty million people, the great majority of whom were peasant farmers. The Reformation was a triumph of literacy and the new printing press. Luther's translation of the Bible into German was a decisive moment in the spread of literacy, and stimulated as well the printing and distribution of religious books and pamphlets. From 1517 onward religious pamphlets flooded Germany and much of Europe. By 1530 over 10,000 publications are known, with a total of ten million copies. The Reformation was thus a media revolution. Luther strengthened his attacks on Rome by depicting a "good" church against a "bad" one. From there, it became clear that print could be used for propaganda in the Reformation for particular agendas. Reform writers used pre-Reformation styles, clichés, and stereotypes and changed items as needed for their own purposes. Especially effective were Luther's Small Catechism, for use by parents teaching their children, and his Large Catechism, for pastors. Using the German vernacular, they expressed the Apostles' Creed in simpler, more personal, Trinitarian language. Illustrations in the newly translated Bible and in many tracts popularized Luther's ideas. Lucas Cranach the Elder (1472–1553), the great painter patronized by the electors of Saxony at Wittenberg, was a close friend of Luther, and illustrated Luther's theology for a popular audience.
He dramatized Luther's views on the relationship between the Old and New Testaments, while remaining mindful of Luther's careful distinctions about proper and improper uses of visual imagery. Luther's German translation of the Bible was also decisive for the German language and its evolution from Early New High German to Modern Standard German. His Bible promoted the development of non-local forms of language and exposed all speakers to forms of German from outside their own area. Decisive scientific developments took place during the 16th and 17th centuries, especially in the fields of astronomy, mathematics and physics. The German astronomical community played a dominant role in Europe at this time, as its scientists kept in close touch with one another. They included Copernicus, who was Polish but lived in Royal Prussia, and Tycho Brahe, who worked in Denmark. Copernicus's work, for example, was at first little known outside this community. Astronomer Johannes Kepler (born near Stuttgart) was a leader in the 17th-century scientific revolution. He is best known for his laws of planetary motion. His ideas influenced the contemporary Italian scientist Galileo Galilei and provided one of the foundations for Englishman Isaac Newton's theory of universal gravitation. From 1640, Brandenburg-Prussia had started to rise under the "Great Elector," Frederick William. The Peace of Westphalia in 1648 strengthened it even further, through the acquisition of East Pomerania. From 1713 to 1740, King Frederick William I, also known as the "Soldier King", established a highly centralized, militarized state with a heavily rural population of about three million (compared to the nine million in Austria). In terms of the boundaries of 1914, Germany in 1700 had a population of 16 million, increasing slightly to 17 million by 1750, and growing more rapidly to 24 million by 1800. Wars continued, but they were no longer so devastating to the civilian population; famines and major epidemics did not occur, but increased agricultural productivity led to a higher birth rate, and a lower death rate. Louis XIV of France conquered parts of Alsace and Lorraine (1678–1681), and invaded and devastated the Electorate of the Palatinate (1688–1697) in the War of the Palatine Succession. Louis XIV benefited from the Empire's problems with the Turks, which were menacing Austria. Louis XIV ultimately had to relinquish the Electorate of the Palatinate. Afterwards Hungary was reconquered from the Turks; Austria, under the Habsburgs, developed into a great power. Frederick II "the Great" is best known for his military genius, his reorganization of Prussian armies, his battlefield successes, his enlightened rule, and especially his making Prussia one of the great powers, as well as escaping from almost certain national disaster at the last minute. He became a role model for an aggressively expanding Germany down to 1945, and even today retains his heroic image in Germany. In the War of Austrian Succession (1740–1748) Maria Theresa fought successfully for recognition of her succession to the throne. But in the Silesian Wars and in the Seven Years' War she had to cede 95% of Silesia to Frederick the Great. After the Peace of Hubertusburg in 1763 between Austria, Prussia and Saxony, Prussia won recognition as a great power, thus launching a century-long rivalry with Austria for the leadership of the German peoples.
From 1763, against resistance from the nobility and citizenry, an "enlightened absolutism" was established in Prussia and Austria, according to which the ruler governed according to the best precepts of the philosophers. The economies developed and legal reforms were undertaken, including the abolition of torture and the improvement in the status of Jews. Emancipation of the peasants slowly began. Compulsory education was instituted. In 1772–1795 Prussia took the lead in the partitions of Poland, with Austria and Russia splitting the rest. Prussia occupied the western territories of the former Polish–Lithuanian Commonwealth that surrounded existing Prussian holdings. Poland did not become independent again until 1918. Completely overshadowed by Prussia and Austria, according to historian Hajo Holborn, the smaller German states were generally characterized by political lethargy and administrative inefficiency, often compounded by rulers who were more concerned with their mistresses and their hunting dogs than with the affairs of state. Bavaria was especially unfortunate in this regard; it was a rural land with very heavy debts and few growth centers. Saxony was in economically good shape, although its government was seriously mismanaged, and numerous wars had taken their toll. During the time when Prussia rose rapidly within Germany, Saxony was distracted by foreign affairs. The house of Wettin concentrated on acquiring and then holding on to the Polish throne, an effort which was ultimately unsuccessful. In Württemberg the duke lavished funds on palaces, mistresses, great celebrations, and hunting expeditions. Many of the smaller German states were ecclesiastical territories run by bishops, who in reality were from powerful noble families and showed scant interest in religion. None developed a significant reputation for good government. In Hesse-Kassel, the Landgrave Frederick II ruled from 1760 to 1785 as an enlightened despot, and raised money by renting soldiers (called "Hessians") to Great Britain to help fight the American Revolutionary War. He combined Enlightenment ideas with Christian values, cameralist plans for central control of the economy, and a militaristic approach toward diplomacy. Hanover did not have to support a lavish court—its rulers were also kings of England and resided in London. George III, elector (ruler) from 1760 to 1820, never once visited Hanover. The local nobility who ran the country opened the University of Göttingen in 1737; it soon became a world-class intellectual center. The smaller states failed to form coalitions with each other, and were eventually overwhelmed by Prussia. Between 1807 and 1871, Prussia swallowed up many of the smaller states, with minimal protest, then went on to found the German Empire. In the process, Prussia became too heterogeneous, lost its identity, and by the 1930s had become an administrative shell of little importance. In a heavily agrarian society, land ownership played a central role. Germany's nobles, especially those in the East – called Junkers – dominated not only the localities, but also the Prussian court, and especially the Prussian army. Increasingly after 1815, a centralized Prussian government based in Berlin took over the powers of the nobles, which in terms of control over the peasantry had been almost absolute. To help the nobility avoid indebtedness, Berlin set up a credit institution to provide capital loans in 1809, and extended the loan network to peasants in 1849.
When the German Empire was established in 1871, the Junker nobility controlled the army and the navy, the bureaucracy, and the royal court; they generally set governmental policies. Peasants and rural life Peasants continued to center their lives in the village, where they were members of a corporate body and helped manage the community resources and monitor community life. In the East, they were serfs who were bound permanently to parcels of land. In most of Germany, farming was handled by tenant farmers who paid rents and obligatory services to the landlord, who was typically a nobleman. Peasant leaders supervised the fields and ditches and grazing rights, maintained public order and morals, and supported a village court which handled minor offenses. Inside the family the patriarch made all the decisions, and tried to arrange advantageous marriages for his children. Much of the villages' communal life centered around church services and holy days. In Prussia, the peasants drew lots to choose conscripts required by the army. The noblemen handled external relationships and politics for the villages under their control, and were not typically involved in daily activities or decisions. The emancipation of the serfs came in 1770–1830, beginning with Schleswig in 1780. The peasants were now ex-serfs and could own their land, buy and sell it, and move about freely. The nobles approved, for they could now buy land owned by the peasants. The chief reformer was Baron vom Stein (1757–1831), who was influenced by the Enlightenment, especially the free market ideas of Adam Smith. The end of serfdom raised the personal legal status of the peasantry. A bank was set up so that landowners could borrow government money to buy land from peasants (the peasants were not allowed to use it to borrow money to buy land until 1850). The result was that the large landowners obtained larger estates, and many peasants became landless tenants, or moved to the cities or to America. The other German states imitated Prussia after 1815. In sharp contrast to the violence that characterized land reform in the French Revolution, Germany handled it peacefully. In Schleswig the peasants, who had been influenced by the Enlightenment, played an active role; elsewhere they were largely passive. Indeed, for most peasants, customs and traditions continued largely unchanged, including the old habits of deference to the nobles, whose legal authority remained quite strong over the villagers. Although the peasants were no longer tied to the same land as serfs had been, the old paternalistic relationship in East Prussia lasted into the 20th century. The agrarian reforms in northwestern Germany in the era 1770–1870 were driven by progressive governments and local elites. They abolished feudal obligations and divided collectively owned common land into private parcels, creating a more efficient, market-oriented rural economy. This increased productivity and population growth, and it strengthened the traditional social order because wealthy peasants obtained most of the former common land, while the rural proletariat was left without land; many left for the cities or America. Meanwhile, the division of the common land served as a buffer preserving social peace between nobles and peasants. In the east the serfs were emancipated, but the Junker class maintained its large estates and monopolized political power. Around 1800 the Catholic monasteries, which had large land holdings, were nationalized and sold off by the government.
In Bavaria they had controlled 56% of the land. Bourgeois values spread to rural Germany A major social change occurring between 1750 and 1850, depending on region, was the end of the traditional "whole house" ("ganzes Haus") system, in which the owner's family lived together in one large building with the servants and craftsmen he employed. They reorganized into separate living arrangements. No longer did the owner's wife take charge of all the females in the different families in the whole house. In the new system, farm owners became more professionalized and profit-oriented. They managed the fields and the household exterior according to the dictates of technology, science, and economics. Farm wives supervised family care and the household interior, to which strict standards of cleanliness, order, and thrift applied. The result was the spread of formerly urban bourgeois values into rural Germany. The lesser families were now living separately on wages. They had to provide for their own supervision, health, schooling, and old age. At the same time, because of the demographic transition, there were far fewer children, allowing for much greater attention to each child. Increasingly the middle-class family valued its privacy and its inward direction, shedding too-close links with the world of work. Furthermore, the working classes, the middle classes and the upper classes became physically, psychologically and politically more separate. This allowed for the emergence of working-class organizations. It also allowed for declining religiosity among the working class, who were no longer monitored on a daily basis. Before 1750 the German upper classes looked to France for intellectual, cultural and architectural leadership; French was the language of high society. By the mid-18th century the "Aufklärung" (German for "The Enlightenment") had transformed German high culture in music, philosophy, science and literature. Christian Wolff (1679–1754) was the pioneering writer who expounded the Enlightenment to German readers; he legitimized German as a philosophic language. Prussia took the lead among the German states in sponsoring the political reforms that Enlightenment thinkers urged absolute rulers to adopt. However, there were important movements as well in the smaller states of Bavaria, Saxony, Hanover, and the Palatinate. In each case Enlightenment values became accepted and led to significant political and administrative reforms that laid the groundwork for the creation of modern states. The princes of Saxony, for example, carried out an impressive series of fundamental fiscal, administrative, judicial, educational, cultural, and general economic reforms. The reforms were aided by the country's strong urban structure and influential commercial groups, and modernized pre-1789 Saxony along the lines of classic Enlightenment principles. Johann Gottfried von Herder (1744–1803) broke new ground in philosophy and poetry, as a leader of the Sturm und Drang movement of proto-Romanticism. Weimar Classicism ("Weimarer Klassik") was a cultural and literary movement based in Weimar that sought to establish a new humanism by synthesizing Romantic, classical, and Enlightenment ideas. The movement, from 1772 until 1805, involved Herder as well as polymath Johann Wolfgang von Goethe (1749–1832) and Friedrich Schiller (1759–1805), a poet and historian. Herder argued that every folk had its own particular identity, which was expressed in its language and culture.
This legitimized the promotion of German language and culture and helped shape the development of German nationalism. Schiller's plays expressed the restless spirit of his generation, depicting the hero's struggle against social pressures and the force of destiny. In remote Königsberg, philosopher Immanuel Kant (1724–1804) tried to reconcile rationalism and religious belief, individual freedom, and political authority. Kant's work contained basic tensions that would continue to shape German thought – and indeed all of European philosophy – well into the 20th century. The German Enlightenment won the support of princes, aristocrats, and the middle classes, and it permanently reshaped the culture. Before the 19th century, young women lived under the economic and disciplinary authority of their fathers until they married and passed under the control of their husbands. In order to secure a satisfactory marriage, a woman needed to bring a substantial dowry. In the wealthier families, daughters received their dowry from their families, whereas the poorer women needed to work in order to save their wages so as to improve their chances to wed. Under the German laws, women had property rights over their dowries and inheritances, a valuable benefit as high mortality rates resulted in successive marriages. Before 1789, the majority of women lived confined to society's private sphere, the home. The Age of Reason did not bring much change for women: men, including Enlightenment aficionados, believed that women were naturally destined to be principally wives and mothers. Within the educated classes, there was the belief that women needed to be sufficiently educated to be intelligent and agreeable interlocutors to their husbands. However, the lower-class women were expected to be economically productive in order to help their husbands make ends meet. French Revolution, 1789–1815 German reaction to the French Revolution was mixed at first. German intellectuals celebrated the outbreak, hoping to see the triumph of Reason and the Enlightenment. The royal courts in Vienna and Berlin denounced the overthrow of the king and the threatened spread of notions of liberty, equality, and fraternity. By 1793, the execution of the French king and the onset of the Terror disillusioned the Bildungsbürgertum (educated middle classes). Reformers said the solution was to have faith in the ability of Germans to reform their laws and institutions in peaceful fashion. Europe was racked by two decades of war revolving around France's efforts to spread its revolutionary ideals, and the opposition of reactionary royalty. War broke out in 1792 as Austria and Prussia invaded France, but they were defeated at the Battle of Valmy. The German lands saw armies marching back and forth, bringing devastation (albeit on a far lower scale than the Thirty Years' War, almost two centuries before), but also bringing new ideas of liberty and civil rights for the people. Prussia and Austria ended their failed wars with France but (with Russia) partitioned Poland among themselves in 1793 and 1795. The French took control of the Rhineland, imposed French-style reforms, abolished feudalism, established constitutions, promoted freedom of religion, emancipated Jews, opened the bureaucracy to ordinary citizens of talent, and forced the nobility to share power with the rising middle class. Napoleon created the Kingdom of Westphalia (1807–1813) as a model state. These reforms proved largely permanent and modernized the western parts of Germany.
When the French tried to impose the French language, German opposition grew in intensity. A Second Coalition of Britain, Russia, and Austria then attacked France but failed. Napoleon established direct or indirect control over most of western Europe, including the German states apart from Prussia and Austria. The old Holy Roman Empire was little more than a farce; Napoleon simply abolished it in 1806 while forming new countries under his control. In Germany Napoleon set up the "Confederation of the Rhine," comprising most of the German states except Prussia and Austria. Prussia tried to remain neutral while imposing tight controls on dissent, but with German nationalism sharply on the rise, the kingdom blundered by going to war with Napoleon in 1806. Its economy was weak, its leadership poor, and the once mighty Prussian army was a hollow shell. Napoleon easily crushed it at the Battle of Jena (1806). Napoleon occupied Berlin, and Prussia paid dearly. Prussia lost its recently acquired territories in western Germany, its army was reduced to 42,000 men, no trade with Britain was allowed, and Berlin had to pay Paris heavy reparations and fund the French army of occupation. Saxony changed sides to support Napoleon and joined his Confederation of the Rhine; its elector was rewarded with the title of king and given a slice of Poland taken from Prussia. After Napoleon's fiasco in Russia in 1812, including the deaths of many Germans in his invasion army, Prussia joined with Russia. Major battles followed in quick order, and when Austria switched sides to oppose Napoleon, his situation grew tenuous. He was defeated at the great Battle of Leipzig in late 1813, and Napoleon's empire started to collapse. One after another the German states switched to oppose Napoleon, but he rejected peace terms. Allied armies invaded France in early 1814, Paris fell, and in April Napoleon surrendered. He returned for the Hundred Days in 1815, but was finally defeated by the British and German armies at Waterloo. Prussia was the big winner at the Vienna peace conference, gaining extensive territory. Europe in 1815 was a continent in a state of complete exhaustion following the French Revolutionary and Napoleonic Wars, and it began to turn away from the liberal ideas of the Enlightenment and Revolutionary era toward Romanticism, under such writers as Edmund Burke, Joseph de Maistre, and Novalis. Politically, the victorious allies set out to build a new balance of powers in order to keep the peace, and decided that a stable German region would be able to keep French imperialism at bay. To make this a possibility, the idea of reforming the defunct Holy Roman Empire was discarded; Napoleon's reorganization of the German states was kept, and the remaining princes were allowed to keep their titles. In 1813, in return for guarantees from the Allies that the sovereignty and integrity of the Southern German states (Baden, Württemberg, and Bavaria) would be preserved, they had broken with the French. The German Confederation (German: Deutscher Bund) was the loose association of 39 states created in 1815 to coordinate the economies of separate German-speaking countries. It acted as a buffer between the powerful states of Austria and Prussia. Britain approved of it because London felt that there was need for a stable, peaceful power in central Europe that could discourage aggressive moves by France or Russia.
According to Lee (1985), most historians have judged the Confederation to be weak and ineffective, as well as an obstacle to German nationalist aspirations. It collapsed because of the rivalry between Prussia and Austria (known as German dualism), warfare, the 1848 revolution, and the inability of the multiple members to compromise. It was replaced by the North German Confederation in 1866. Society and economy The population of the German Confederation (excluding Austria) grew 60% from 1815 to 1865, from 21,000,000 to 34,000,000. The era saw the Demographic Transition take place in Germany. It was a transition from high birth rates and high death rates to low birth and death rates as the country moved from a pre-industrial economy to modernized agriculture and a fast-growing industrialized urban economic system. In previous centuries, the shortage of land meant that not everyone could marry, and marriages took place after age 25. After 1815, increased agricultural productivity meant a larger food supply, and a decline in famines, epidemics, and malnutrition. This allowed couples to marry earlier, and have more children. Arranged marriages became uncommon as young people were now allowed to choose their own marriage partners, subject to a veto by the parents. The high birthrate was offset by a very high rate of infant mortality and emigration, especially after about 1840, mostly to the German settlements in the United States, plus periodic epidemics and harvest failures. The upper and middle classes began to practice birth control, and a little later so too did the peasants. Before 1850 Germany lagged far behind the leaders in industrial development – Britain, France, and Belgium. In 1800, Germany's social structure was poorly suited to entrepreneurship or economic development. Domination by France during the era of the French Revolution (1790s to 1815), however, produced important institutional reforms. Reforms included the abolition of feudal restrictions on the sale of large landed estates, the reduction of the power of the guilds in the cities, and the introduction of a new, more efficient commercial law. Nevertheless, traditionalism remained strong in most of Germany. Until mid-century, the guilds, the landed aristocracy, the churches, and the government bureaucracies had so many rules and restrictions that entrepreneurship was held in low esteem, and given little opportunity to develop. From the 1830s and 1840s, Prussia, Saxony, and other states reorganized agriculture. The introduction of sugar beets, turnips, and potatoes yielded a higher level of food production, which enabled a surplus rural population to move to industrial areas. The beginnings of the industrial revolution in Germany came in the textile industry, and were facilitated by the elimination of tariff barriers through the Zollverein, starting in 1834. By mid-century, the German states were catching up. By 1900 Germany was a world leader in industrialization, along with Britain and the United States. Historian Thomas Nipperdey sums it up: - On the whole, industrialisation in Germany must be considered to have been positive in its effects. Not only did it change society and the countryside, and finally the world...it created the modern world we live in. It solved the problems of population growth, under-employment and pauperism in a stagnating economy, and abolished dependency on the natural conditions of agriculture, and finally hunger.
It created huge improvements in production and both short- and long-term improvements in living standards. However, in terms of social inequality, it can be assumed that it did not change the relative levels of income. Between 1815 and 1873 the statistical distribution of wealth was on the order of 77% to 23% for entrepreneurs and workers respectively. On the other hand, new problems arose, in the form of interrupted growth and new crises, such as urbanisation, 'alienation', new underclasses, proletariat and proletarian misery, new injustices and new masters and, eventually, class warfare. Industrialization brought rural Germans to the factories, mines and railways. The population in 1800 was heavily rural, with only 10% of the people living in communities of 5000 or more people, and only 2% living in cities of more than 100,000. After 1815, the urban population grew rapidly, due primarily to the influx of young people from the rural areas. Berlin grew from 172,000 in 1800, to 826,000 in 1870; Hamburg grew from 130,000 to 290,000; Munich from 40,000 to 269,000; and Dresden from 60,000 to 177,000. Offsetting this growth, there was extensive emigration, especially to the United States. Emigration totaled 480,000 in the 1840s, 1,200,000 in the 1850s, and 780,000 in the 1860s. The takeoff stage of economic development came with the railroad revolution in the 1840s, which opened up new markets for local products, created a pool of middle managers, increased the demand for engineers, architects and skilled machinists and stimulated investments in coal and iron. Political disunity of three dozen states and a pervasive conservatism made it difficult to build railways in the 1830s. However, by the 1840s, trunk lines did link the major cities; each German state was responsible for the lines within its own borders. Economist Friedrich List summed up the advantages to be derived from the development of the railway system in 1841: - As a means of national defence, it facilitates the concentration, distribution and direction of the army. - It is a means to the improvement of the culture of the nation. It brings talent, knowledge and skill of every kind readily to market. - It secures the community against dearth and famine, and against excessive fluctuation in the prices of the necessaries of life. - It promotes the spirit of the nation, as it has a tendency to destroy the Philistine spirit arising from isolation and provincial prejudice and vanity. It binds nations by ligaments, and promotes an interchange of food and of commodities, thus making it feel to be a unit. The iron rails become a nerve system, which, on the one hand, strengthens public opinion, and, on the other hand, strengthens the power of the state for police and governmental purposes. Lacking a technological base at first, the Germans imported their engineering and hardware from Britain, but quickly learned the skills needed to operate and expand the railways. In many cities, the new railway shops were the centres of technological awareness and training, so that by 1850, Germany was self-sufficient in meeting the demands of railroad construction, and the railways were a major impetus for the growth of the new steel industry. Observers found that even as late as 1890, their engineering was inferior to Britain’s. However, German unification in 1870 stimulated consolidation, nationalisation into state-owned companies, and further rapid growth. 
Unlike the situation in France, the goal was support of industrialisation, and so heavy lines crisscrossed the Ruhr and other industrial districts, and provided good connections to the major ports of Hamburg and Bremen. By 1880, Germany had 9,400 locomotives pulling 43,000 passengers and 30,000 tons of freight a day, and forged ahead of France. Newspapers and magazines A large number of newspapers and magazines flourished; a typical small city had one or two newspapers, while Berlin and Leipzig had dozens. The audience was limited to perhaps five percent of the adult men, chiefly from the aristocratic and middle classes, who followed politics. Liberal papers outnumbered conservative ones by a wide margin. Foreign governments bribed editors to guarantee a favorable image. Censorship was strict, and the government issued the political news that editors were supposed to report. After 1871, strict press laws were used by Bismarck to shut down the Socialist press and to threaten hostile editors. There were no national newspapers. Editors focused on political commentary, but also included a nonpolitical cultural page, focused on the arts and high culture. Especially popular was the serialized novel, with a new chapter every week. Magazines were politically more influential, and attracted the leading intellectuals as authors. Science and culture German artists and intellectuals, heavily influenced by the French Revolution and by the great German poet and writer Johann Wolfgang von Goethe (1749–1832), turned to Romanticism after a period of Enlightenment. Philosophical thought was decisively shaped by Immanuel Kant (1724–1804). Ludwig van Beethoven (1770–1827) was the leading composer of Romantic music. His use of tonal architecture in such a way as to allow significant expansion of musical forms and structures was immediately recognized as bringing a new dimension to music. His later piano music and string quartets, especially, showed the way to a completely unexplored musical universe, and influenced Franz Schubert (1797–1828) and Robert Schumann (1810–1856). In opera, a new Romantic atmosphere combining supernatural terror and melodramatic plot in a folkloric context was first successfully achieved by Carl Maria von Weber (1786–1826) and perfected by Richard Wagner (1813–1883) in his Ring Cycle. The Brothers Grimm (1785–1863 & 1786–1859) not only collected folk stories into the popular Grimm's Fairy Tales, but were also linguists, now counted among the founding fathers of German studies. They were commissioned to begin the Deutsches Wörterbuch ("The German Dictionary"), which remains the most comprehensive work on the German language. At the universities high-powered professors developed international reputations, especially in the humanities led by history and philology, which brought a new historical perspective to the study of political history, theology, philosophy, language, and literature. With Georg Wilhelm Friedrich Hegel (1770–1831) in philosophy, Friedrich Schleiermacher (1768–1834) in theology and Leopold von Ranke (1795–1886) in history, the University of Berlin, founded in 1810, became the world's leading university. Von Ranke, for example, professionalized history and set the world standard for historiography. By the 1830s mathematics, physics, chemistry, and biology had emerged with world class science, led by Alexander von Humboldt (1769–1859) in natural science and Carl Friedrich Gauss (1777–1855) in mathematics.
Young intellectuals often turned to politics, but their support for the failed Revolution of 1848 forced many into exile. Two main developments reshaped religion in Germany. Across the land, there was a movement to unite the larger Lutheran and the smaller Reformed Protestant churches. The churches themselves brought this about in Baden, Nassau, and Bavaria. However, in Prussia King Frederick William III was determined to handle unification entirely on his own terms, without consultation. His goal was to unify the Protestant churches, and to impose a single standardized liturgy, organization and even architecture. The long-term goal was to have fully centralized royal control of all the Protestant churches. In a series of proclamations over several decades the Church of the Prussian Union was formed, bringing together the more numerous Lutherans and the less numerous Reformed Protestants. The government of Prussia now had full control over church affairs, with the king himself recognized as the leading bishop. Opposition to unification came from the "Old Lutherans" in Silesia who clung tightly to the theological and liturgical forms they had followed since the days of Luther. The government attempted to crack down on them, so they went underground. Tens of thousands migrated to South Australia, and especially to the United States, where they formed the Missouri Synod, which is still in operation as a conservative denomination. Finally, in 1845, the new king Frederick William IV offered a general amnesty and allowed the Old Lutherans to form a separate church association with only nominal government control. From the religious point of view of the typical Catholic or Protestant, major changes were underway in terms of a much more personalized religiosity that focused on the individual more than the church or the ceremony. The rationalism of the late 18th century faded away, and there was a new emphasis on the psychology and feeling of the individual, especially in terms of contemplating sinfulness, redemption, and the mysteries and the revelations of Christianity. Pietistic revivals were common among Protestants. Among Catholics there was a sharp increase in popular pilgrimages. In 1844 alone, half a million pilgrims made a pilgrimage to the city of Trier in the Rhineland to view the Seamless Robe of Jesus, said to be the robe that Jesus wore on the way to his crucifixion. Catholic bishops in Germany had historically been largely independent of Rome, but now the Vatican exerted increasing control, a new "ultramontanism" of Catholics highly loyal to Rome. A sharp controversy broke out in 1837–38 in the largely Catholic Rhineland over the religious education of children of mixed marriages, where the mother was Catholic and the father Protestant. The government passed laws to require that these children always be raised as Protestants, contrary to Napoleonic law that had previously prevailed and allowed the parents to make the decision. The government put the Catholic Archbishop under house arrest. In 1840, the new King Frederick William IV sought reconciliation and ended the controversy by agreeing to most of the Catholic demands. However, Catholic memories remained deep and led to a sense that Catholics always needed to stick together in the face of an untrustworthy government.
Politics of restoration and revolution After the fall of Napoleon, Europe's statesmen convened in Vienna in 1815 for the reorganisation of European affairs, under the leadership of the Austrian Prince Metternich. The political principles agreed upon at this Congress of Vienna included the restoration, legitimacy and solidarity of rulers for the repression of revolutionary and nationalist ideas. The German Confederation (German: Deutscher Bund) was founded, a loose union of 39 states (35 ruling princes and 4 free cities) under Austrian leadership, with a Federal Diet (German: Bundestag) meeting in Frankfurt am Main. It was a loose coalition that failed to satisfy most nationalists. The member states largely went their own way, and Austria had its own interests. In 1819 a student radical assassinated the reactionary playwright August von Kotzebue, who had scoffed at liberal student organisations. In one of the few major actions of the German Confederation, Prince Metternich called a conference that issued the repressive Carlsbad Decrees, designed to suppress liberal agitation against the conservative governments of the German states. The Decrees terminated the fast-fading nationalist fraternities (German: Burschenschaften), removed liberal university professors, and expanded the censorship of the press. The decrees began the "persecution of the demagogues", which was directed against individuals who were accused of spreading revolutionary and nationalist ideas. Among the persecuted were the poet Ernst Moritz Arndt, the publisher Johann Joseph Görres and the "Father of Gymnastics" Ludwig Jahn. In 1834 the Zollverein was established, a customs union between Prussia and most other German states, but excluding Austria. As industrialisation developed, the need for a unified German state with a uniform currency, legal system, and government became more and more obvious. Growing discontent with the political and social order imposed by the Congress of Vienna led to the outbreak, in 1848, of the March Revolution in the German states. In May the German National Assembly (the Frankfurt Parliament) met in Frankfurt to draw up a national German constitution. But the 1848 revolution turned out to be unsuccessful: King Frederick William IV of Prussia refused the imperial crown, the Frankfurt parliament was dissolved, the ruling princes repressed the risings by military force, and the German Confederation was re-established by 1850. Many leaders went into exile, including a number who went to the United States and became a political force there. The 1850s were a period of extreme political reaction. Dissent was vigorously suppressed, and many Germans emigrated to America following the collapse of the 1848 uprisings. Frederick William IV became extremely depressed and melancholy during this period, and was surrounded by men who advocated clericalism and absolute monarchy by divine right. The Prussian people once again lost interest in politics. Prussia not only expanded its territory but began to industrialize rapidly, while maintaining a strong agricultural base. Bismarck takes charge, 1862–1866 In 1857, the king had a stroke and his brother William became regent, then became King William I in 1861. Although conservative, William I was far more pragmatic. His most significant accomplishment was naming Otto von Bismarck as Prussian minister president in 1862.
The combination of Bismarck, War Minister Albrecht von Roon, and Field Marshal Helmuth von Moltke set the stage for victories over Denmark, Austria, and France, and led to the unification of Germany. The obstacle to German unification was Austria, and Bismarck solved the problem with a series of wars that united the German states north of Austria. In 1863–64, disputes between Prussia and Denmark grew over Schleswig, which was not part of the German Confederation, and which Danish nationalists wanted to incorporate into the Danish kingdom. The dispute led to the short Second War of Schleswig in 1864. Prussia, joined by Austria, easily defeated Denmark and occupied Jutland. The Danes were forced to cede both the duchy of Schleswig and the duchy of Holstein to Austria and Prussia. In the aftermath, the management of the two duchies caused escalating tensions between Austria and Prussia. The former wanted the duchies to become an independent entity within the German Confederation, while the latter wanted to annex them. The Seven Weeks' War between Austria and Prussia broke out in June 1866. In July, the two armies clashed at Sadowa-Königgrätz (Bohemia) in an enormous battle involving half a million men. The Prussian breech-loading needle guns carried the day over the slow muzzle-loading rifles of the Austrians, who lost a quarter of their army in the battle. Austria ceded Venetia to Italy, but Bismarck was deliberately lenient with the loser to keep alive a long-term alliance with Austria in a subordinate role. Now the French faced an increasingly strong Prussia. North German Confederation, 1866–1871 In 1866, the German Confederation was dissolved. In its place the North German Confederation (German: Norddeutscher Bund) was established, under the leadership of Prussia. Austria was excluded, and the Austrian influence in Germany that had begun in the 15th century finally came to an end. The North German Confederation was a transitional organisation that existed from 1867 to 1871, between the dissolution of the German Confederation and the founding of the German Empire. German Empire, 1871–1918 After Germany was united by Otto von Bismarck into the "German Reich", he dominated German politics until 1890. Bismarck tried to foster alliances in Europe, on one hand to contain France, and on the other hand to consolidate Germany's influence in Europe. On the domestic front Bismarck tried to stem the rise of socialism by anti-socialist laws, combined with an introduction of health care and social security. At the same time Bismarck tried to reduce the political influence of the emancipated Catholic minority in the Kulturkampf, literally "culture struggle". The Catholics only grew stronger, forming the Center (Zentrum) Party. Germany grew rapidly in industrial and economic power, matching Britain by 1900. Its highly professional army was the best in the world, but the navy could never catch up with Britain's Royal Navy. In 1888, the young and ambitious Kaiser Wilhelm II became emperor. He could not abide advice, least of all from the most experienced politician and diplomat in Europe, so he fired Bismarck. The Kaiser opposed Bismarck's careful foreign policy and wanted Germany to pursue colonialist policies, as Britain and France had been doing for decades, as well as build a navy that could match the British. The Kaiser promoted active colonization of Africa and Asia for those areas that were not already colonies of other European powers; his record was notoriously brutal and set the stage for genocide.
The Kaiser took a mostly unilateral approach in Europe, with the Austro-Hungarian Empire as his main ally, and pursued an arms race with Britain, which eventually led to the situation in which the assassination of the Austro-Hungarian heir to the throne could spark off World War I. Age of Bismarck The new empire Disputes between France and Prussia increased. In 1868, the Spanish queen Isabella II was expelled by a revolution, leaving that country's throne vacant. When Prussia tried to put a Hohenzollern candidate, Prince Leopold, on the Spanish throne, the French angrily protested. In July 1870, France declared war on Prussia (the Franco-Prussian War). The debacle was swift. A succession of German victories in northeastern France followed, and one French army was besieged at Metz. After a few weeks, the main army was finally forced to capitulate in the fortress of Sedan. French Emperor Napoleon III was taken prisoner and a republic hastily proclaimed in Paris. The new government, realising that a victorious Germany would demand territorial acquisitions, resolved to fight on. They began to muster new armies, and the Germans settled down to a grim siege of Paris. The starving city surrendered in January 1871, and the Prussian army staged a victory parade through it. France was forced to pay indemnities of 5 billion francs and cede Alsace-Lorraine. It was a bitter peace that would leave the French thirsting for revenge. During the Siege of Paris, the German princes assembled in the Hall of Mirrors of the Palace of Versailles and proclaimed the Prussian King Wilhelm I as the "German Emperor" on 18 January 1871. The German Empire was thus founded, with the German states unified into a single economic, political, and administrative unit. The empire comprised 25 states, three of which were Hanseatic free cities. It was dubbed the "Little German" solution, since it excluded the Austrian territories and the Habsburgs. Bismarck was again appointed to serve as Chancellor. The new empire was characterised by a great enthusiasm and vigor. There was a rash of heroic artwork in imitation of Greek and Roman styles, and the nation possessed a vigorous, growing industrial economy, whereas it had been rather poor in the past. The change from the slower, more tranquil order of the old Germany was very sudden, and many, especially the nobility, resented being displaced by the new rich. And yet the nobles clung stubbornly to power, and they, not the bourgeoisie, continued to be the model that everyone wanted to imitate. In imperial Germany, possessing a collection of medals or wearing a uniform was valued more than the size of one's bank account, and Berlin never became a great cultural center as London, Paris, or Vienna were. The empire was distinctly authoritarian in tone, as the 1871 constitution gave the emperor exclusive power to appoint or dismiss the chancellor. He also was supreme commander-in-chief of the armed forces and final arbiter of foreign policy. But freedom of speech, association, and religion were nonetheless guaranteed by the constitution. Bismarck's domestic policies as Chancellor of Germany were characterised by his fight against perceived enemies of the Protestant Prussian state. In the Kulturkampf (1871–1878), he tried to minimize the influence of the Roman Catholic Church and of its political arm, the Catholic Centre Party, through various measures—like the introduction of civil marriage—but without much success.
The Kulturkampf antagonised many Protestants as well as Catholics and was eventually abandoned. Millions of non-German subjects in the German Empire, such as the Polish, Danish and French minorities, were discriminated against, and a policy of Germanisation was implemented. The new Empire provided the nobility of Prussia and the other states with rich new opportunities at the top. They dominated the diplomatic service, the Army, and the civil service. Through their control of the civil service, the aristocracy had a dominant voice in decisions affecting the universities and the churches. In 1914, Germany's diplomatic corps consisted of eight princes, 29 counts, 20 barons, 54 other nobles, and a mere 11 commoners. The commoners were chiefly the sons of leading industrialists or bankers. Almost all the diplomats had been socialized into the feudal student corps at the universities. The consular corps comprised commoners, but they had little decision-making ability. Since the days of Frederick the Great, it had been difficult for commoners to achieve high rank in the Army, which was considered a suitable role for young aristocrats. The new Constitution put military affairs under the direct control of the Emperor, and largely out of reach of the Reichstag. With its large corps of reserve officers across Germany, the military strengthened its role as "the estate which upheld the nation." Historian Hans-Ulrich Wehler says, "it became an almost separate, self-perpetuating caste." Power was increasingly centralized in the national capital of Berlin (including neighboring Potsdam), where 7,000 aristocrats drew a sharp line between themselves and everyone else. Berlin's rapidly growing, wealthy middle class imitated the aristocracy and tried to marry into it. The closed system stood in contrast to Britain, where the top levels of the elite were far more open, with routes available through a public school education, Oxford and Cambridge, the Inns of Court, appointment to high office, or leadership in the House of Commons. A peerage could permanently boost a rich industrial family into the upper reaches of the establishment. In Germany, the process worked in the other direction, as the nobility became industrialists. For example, 221 of the 243 mines in Silesia were owned by nobles or by the King of Prussia himself. Germany's middle class, based in the cities, grew exponentially, although it never gained the political power it had in France, Britain or the United States. The Bund Deutscher Frauenvereine (Association of German Women's Organizations or BDF) was established in 1894 to encompass the proliferating women's organizations that had sprung up since the 1860s. From the beginning the BDF was a bourgeois organization, its members working toward equality with men in such areas as education, financial opportunities, and political life. Working-class women were not welcome; they were organized by the Socialists. The rising Socialist Workers' Party (later known as the Social Democratic Party of Germany, SPD) declared its aim to establish peacefully a new socialist order through the transformation of existing political and social conditions. From 1878, Bismarck tried to repress the social democratic movement by outlawing the party's organisation, its assemblies and most of its newspapers. When the anti-socialist laws lapsed in 1890, the Social Democrats emerged stronger than ever. Bismarck built on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s.
In the 1880s he introduced old age pensions, accident insurance, and medical care, which formed the basis of the modern European welfare state. His paternalistic programs won the support of German industry because their goals were to win the support of the working classes for the Empire and to reduce the outflow of emigrants to America, where wages were higher but welfare did not exist. Bismarck further won the support of both industry and skilled workers by his high tariff policies, which protected profits and wages from American competition, although they alienated the liberal intellectuals who wanted free trade. Bismarck would not tolerate any power outside Germany, such as Rome, having a say in German affairs. He launched the Kulturkampf ("culture war") against the power of the pope and the Catholic Church in 1873, but only in Prussia. This gained strong support from German liberals, who saw the Catholic Church as the bastion of reaction and their greatest enemy. The Catholic element, in turn, saw the National Liberals as its worst enemy and formed the Center Party. Catholics, although nearly a third of the national population, were seldom allowed to hold major positions in the Imperial government or the Prussian government. After 1871, there was a systematic purge of the remaining Catholics; in the powerful interior ministry, which handled all police affairs, the only Catholic was a messenger boy. Jews were likewise heavily discriminated against. Most of the Kulturkampf was fought out in Prussia, but Imperial Germany passed the Pulpit Law, which made it a crime for any cleric to discuss public issues in a way that displeased the government. Nearly all Catholic bishops, clergy, and laymen rejected the legality of the new laws and defiantly faced the increasingly heavy penalties and imprisonments imposed by Bismarck's government. Historian Anthony Steinhoff reports the casualty totals:

- As of 1878, only three of eight Prussian dioceses still had bishops, some 1,125 of 4,600 parishes were vacant, and nearly 1,800 priests ended up in jail or in exile....Finally, between 1872 and 1878, numerous Catholic newspapers were confiscated, Catholic associations and assemblies were dissolved, and Catholic civil servants were dismissed merely on the pretence of having Ultramontane sympathies.

Bismarck underestimated the resolve of the Catholic Church and did not foresee the extremes that this struggle would attain. The Catholic Church denounced the harsh new laws as anti-Catholic and mustered the support of its rank-and-file voters across Germany. In the following elections, the Center Party won a quarter of the seats in the Imperial Diet. The conflict ended after 1879 because Pope Pius IX died in 1878 and Bismarck broke with the Liberals to put his main emphasis on tariffs, foreign policy, and attacking socialists. Bismarck negotiated with the conciliatory new pope Leo XIII. Peace was restored, the bishops returned and the jailed clerics were released. Laws were toned down or taken back (Mitigation Laws 1880–1883 and Peace Laws 1886/87), but the laws concerning education, civil registry of marriages and religious disaffiliation remained in place. The Center Party gained strength and became an ally of Bismarck, especially when he attacked socialism. Bismarck's post-1871 foreign policy was conservative and basically aimed at security and preventing the dreaded scenario of a Franco-Russian alliance, which would trap Germany between the two in a war.
The League of Three Emperors (Dreikaiserbund) was signed in 1872 by Russia, Austria, and Germany. It stated that republicanism and socialism were common enemies and that the three powers would discuss any matters concerning foreign policy. Bismarck needed good relations with Russia in order to keep France isolated. In 1877–1878, Russia fought a victorious war with the Ottoman Empire and attempted to impose the Treaty of San Stefano on it. This upset the British in particular, as they were long concerned with preserving the Ottoman Empire and preventing a Russian takeover of the Bosphorus Strait. Germany hosted the Congress of Berlin (1878), at which a more moderate peace settlement was agreed to. Germany had no direct interest in the Balkans, however, which was largely an Austrian and Russian sphere of influence, although King Carol of Romania was a German prince. In 1879, Bismarck formed a Dual Alliance of Germany and Austria-Hungary, with the aim of mutual military assistance in the case of an attack from Russia, which was not satisfied with the agreement reached at the Congress of Berlin. The establishment of the Dual Alliance led Russia to take a more conciliatory stance, and in 1887, the so-called Reinsurance Treaty was signed between Germany and Russia: in it, the two powers agreed to observe benevolent neutrality in the case that France attacked Germany, or in the case of an Austrian attack on Russia. Russia turned its attention eastward to Asia and remained largely inactive in European politics for the next 25 years. In 1882, Italy joined the Dual Alliance to form a Triple Alliance. Italy wanted to defend its interests in North Africa against France's colonial policy. In return for German and Austrian support, Italy committed itself to assisting Germany in the case of a French military attack. For a long time, Bismarck had refused to give in to widespread public demands to give Germany "a place in the sun" through the acquisition of overseas colonies. In the 1880s Bismarck gave way, and a number of colonies were established overseas. In Africa, these were Togo, the Cameroons, German South-West Africa, and German East Africa; in Oceania, they were German New Guinea, the Bismarck Archipelago, and the Marshall Islands. In fact, it was Bismarck himself who helped initiate the Berlin Conference of 1884–85. He did it to "establish international guidelines for the acquisition of African territory" (see Colonisation of Africa). This conference was an impetus for the "Scramble for Africa" and "New Imperialism". In 1888, Emperor William I died at the age of 90. His son Frederick III, the hope of German liberals, was already stricken with throat cancer and died three months later. Frederick's son Wilhelm II then became emperor at the age of 29. Having had a problematic relationship with his liberal parents, Wilhelm had early on decided to renew the top level of the state. The two years that Bismarck remained in office were marked by a feigned continuity, but a difference of opinion on social policy served as an excuse for the young Kaiser to force the chancellor into retirement in March 1890. Following a principle known as "personal regiment" (German: persönliches Regiment), Wilhelm aimed to exercise influence on every government decision.

Alliances and diplomacy

The young Kaiser Wilhelm sought aggressively to increase Germany's influence in the world (Weltpolitik).
After the removal of Bismarck, foreign policy was in the hands of the erratic Kaiser, who played an increasingly reckless hand, and of the powerful foreign office under the leadership of Friedrich von Holstein. The foreign office argued that, first, a long-term coalition between France and Russia had to fall apart; secondly, Russia and Britain would never get together; and, finally, Britain would eventually seek an alliance with Germany. Germany refused to renew its treaties with Russia. But Russia did form a closer relationship with France in the Dual Alliance of 1894, since both were worried about the possibilities of German aggression. Furthermore, Anglo-German relations cooled as Germany aggressively tried to build a new empire and engaged in a naval race with Britain; London refused to agree to the formal alliance that Germany sought. Berlin's analysis proved mistaken on every point, leading to Germany's increasing isolation and its dependence on the Triple Alliance, which brought together Germany, Austria-Hungary, and Italy. The Triple Alliance was undermined by differences between Austria and Italy, and in 1915 Italy switched sides. Meanwhile, the German Navy under Admiral Alfred von Tirpitz had ambitions to rival the great British Navy, and dramatically expanded its fleet in the early 20th century to protect the colonies and exert power worldwide. Tirpitz started a programme of warship construction in 1898. In 1890, Germany had gained the island of Heligoland in the North Sea from Britain in exchange for recognizing British control over the East African island of Zanzibar, and proceeded to construct a great naval base there. This posed a direct threat to British hegemony on the seas, with the result that negotiations for an alliance between Germany and Britain broke down. The British, however, kept well ahead in the naval race with the introduction of the highly advanced new Dreadnought battleship in 1906. In the First Moroccan Crisis of 1905, Germany nearly came to blows with Britain and France when the latter attempted to establish a protectorate over Morocco. The Germans were upset at having not been informed about French intentions, and declared their support for Moroccan independence. William II made a highly provocative speech regarding this. The following year, a conference was held in which all of the European powers except Austria-Hungary (by now little more than a German satellite) sided with France. A compromise was brokered by the United States whereby the French relinquished some, but not all, control over Morocco. The Second Moroccan Crisis of 1911 saw another dispute over Morocco erupt when France tried to suppress a revolt there. Germany, still smarting from the previous quarrel, agreed to a settlement whereby the French ceded some territory in central Africa in exchange for Germany's renouncing any right to intervene in Moroccan affairs. This confirmed French control over Morocco, which became a full protectorate of France in 1912. The economy continued to industrialize and urbanize, with heavy industry – especially coal and steel – becoming important in the Ruhr, and manufacturing growing in the cities, the Ruhr, and Silesia. Perkins (1981) argues that more important than Bismarck's new tariff on imported grain was the introduction of the sugar beet as a main crop. Farmers quickly abandoned traditional, inefficient practices in favor of modern methods, including the use of new fertilizers and new tools.
The knowledge and tools gained from the intensive farming of sugar and other root crops made Germany the most efficient agricultural producer in Europe by 1914. Even so, farms were small in size, and women did much of the field work. An unintended consequence was the increased dependence on migratory, especially foreign, labor. Based on its leadership in chemical research in the universities and industrial laboratories, Germany became dominant in the world's chemical industry in the late 19th century. At first, the production of dyes was critical. Germany became Europe's leading steel-producing nation in the 1890s, thanks in large part to the protection from American and British competition afforded by tariffs and cartels. The leading firm was Friedrich Krupp AG, run by the Krupp family. The merger of several major firms into the Vereinigte Stahlwerke (United Steel Works) in 1926 was modeled on the U.S. Steel corporation in the United States. The new company emphasized rationalization of management structures and modernization of the technology; it employed a multi-divisional structure and used return on investment as its measure of success. By 1913, American and German exports dominated the world steel market, as Britain slipped to third place. In machinery, iron and steel, and other industries, German firms avoided cut-throat competition and instead relied on trade associations. Germany was a world leader because of its prevailing "corporatist mentality", its strong bureaucratic tradition, and the encouragement of the government. These associations regulated competition and allowed small firms to function in the shadow of much larger companies. Germany's unification process after 1871 was heavily dominated by men and gave priority to the "Fatherland" theme and related male issues, such as military prowess. Nevertheless, middle-class women enrolled in the Bund Deutscher Frauenvereine, the Union of German Feminist Organizations (BDF). Founded in 1894, it grew to include 137 separate women's rights groups by 1907 and gave national direction to the women's movement until 1933, when the Nazi regime disbanded the organization. Formal organizations for promoting women's rights grew in numbers during the Wilhelmine period. German feminists began to network with feminists from other countries, and participated in the growth of international organizations. By the 1890s, German colonial expansion in Asia and the Pacific (Kiautschou in China, the Marianas, the Caroline Islands, Samoa) led to frictions with Britain, Russia, Japan and the United States. The construction of the Baghdad Railway, financed by German banks, was designed to eventually connect Germany with the Turkish Empire and the Persian Gulf, but it also collided with British and Russian geopolitical interests. The largest colonial enterprises were in Africa. The harsh treatment of the Nama and Herero in what is now Namibia in 1904–07 led to charges of genocide against the Germans. Historians are examining the links and precedents between the Herero and Namaqua Genocide and the Holocaust of the 1940s.
World War I

Ethnic demands for nation states upset the balance between the empires that dominated Europe, leading to World War I, which started in August 1914. Germany stood behind its ally Austria in a confrontation with Serbia, but Serbia was under the protection of Russia, which was allied to France. Germany was the leader of the Central Powers, which included Austria-Hungary, the Ottoman Empire, and later Bulgaria; arrayed against them were the Allies, consisting chiefly of Russia, France, Britain, and from 1915 Italy. In explaining why neutral Britain went to war with Germany, Kennedy (1980) recognized that it was critical for war that Germany had become economically more powerful than Britain, but he downplayed the disputes over economic trade imperialism, the Baghdad Railway, confrontations in Central and Eastern Europe, highly charged political rhetoric, and domestic pressure groups. Germany's reliance time and again on sheer power, while Britain increasingly appealed to moral sensibilities, also played a role, especially in how the invasion of Belgium was seen as either a necessary military tactic or a profound moral crime. The German invasion of Belgium was not in itself decisive, because the British decision had already been made and the British were more concerned with the fate of France (pp. 457–62). Kennedy argues that by far the main reason was London's fear that a repeat of 1870 – when Prussia and the German states smashed France – would mean that Germany, with a powerful army and navy, would control the English Channel and northwest France. British policy makers insisted that would be a catastrophe for British security. In the west, Germany sought a quick victory by encircling Paris using the Schlieffen Plan. But it failed due to Belgian resistance, Berlin's diversion of troops, and very stiff French resistance on the Marne, north of Paris. The Western Front became an extremely bloody battleground of trench warfare. The stalemate lasted from 1914 until early 1918, with ferocious battles that moved forces a few hundred yards at best along a line that stretched from the North Sea to the Swiss border. The British imposed a tight naval blockade in the North Sea which lasted until 1919, sharply reducing Germany's overseas access to raw materials and foodstuffs. Food scarcity became a serious problem by 1917. The United States joined the Allies in April 1917. The entry of the United States into the war – following Germany's declaration of unrestricted submarine warfare – marked a decisive turning point against Germany. The fighting on the Eastern Front was more open. In the east, there were decisive victories against the Russian army, the trapping and defeat of large parts of the Russian contingent at the Battle of Tannenberg, followed by huge Austrian and German successes. The breakdown of Russian forces – exacerbated by internal turmoil caused by the 1917 Russian Revolution – led to the Treaty of Brest-Litovsk, which the Bolsheviks were forced to sign on 3 March 1918 as Russia withdrew from the war. It gave Germany control of Eastern Europe. Spencer Tucker says, "The German General Staff had formulated extraordinarily harsh terms that shocked even the German negotiator." When Germany later complained that the Treaty of Versailles of 1919 was too harsh on it, the Allies responded that it was more benign than Brest-Litovsk. By defeating Russia in 1917, Germany was able to bring hundreds of thousands of combat troops from the east to the Western Front, giving it a numerical advantage over the Allies.
By retraining its soldiers in new storm-trooper tactics, the German command expected to unfreeze the battlefield and win a decisive victory before the American army arrived in strength. However, the spring offensives of 1918 all failed, as the Allies fell back and regrouped, and the Germans lacked the reserves necessary to consolidate their gains. In the summer, with the Americans arriving at a rate of 10,000 a day and the German reserves exhausted, it was only a matter of time before multiple Allied offensives destroyed the German army. Germany had plunged into World War I unexpectedly in 1914 and rapidly mobilized its civilian economy for the war effort, but the economy was handicapped by the British blockade that cut off food supplies. Meanwhile, conditions deteriorated rapidly on the home front, with severe food shortages reported in all urban areas. Causes included the transfer of many farmers and food workers into the military, an overburdened railroad system, shortages of coal, and the British blockade that cut off imports from abroad. The winter of 1916–1917 was known as the "turnip winter," because that vegetable, usually fed to livestock, was used by people as a substitute for potatoes and meat, which were increasingly scarce. Thousands of soup kitchens were opened to feed the hungry people, who grumbled that the farmers were keeping the food for themselves. Even the army had to cut the rations for soldiers. Morale of both civilians and soldiers continued to sink. 1918 was also the year of the deadly Spanish flu pandemic, which struck hard at a population weakened by years of malnutrition. The end of October 1918, in Wilhelmshaven in northern Germany, saw the beginning of the German Revolution of 1918–19. Units of the German Navy refused to set sail for a last, large-scale operation in a war which they saw as good as lost, initiating the uprising. On 3 November, the revolt spread to other cities and states of the country, in many of which workers' and soldiers' councils were established. Meanwhile, Hindenburg and the senior commanders had lost confidence in the Kaiser and his government. The Kaiser and all German ruling princes abdicated. On 9 November 1918, the Social Democrat Philipp Scheidemann proclaimed a republic. On 11 November, the Compiègne armistice was signed, ending the war. The Treaty of Versailles was signed on 28 June 1919. Germany was to cede Alsace-Lorraine to France. Eupen-Malmédy would temporarily be ceded to Belgium, with a plebiscite to be held to allow the people the choice of the territory either remaining with Belgium or being returned to German control. Following a plebiscite, the territory was allotted to Belgium on 20 September 1920. The future of North Schleswig was to be decided by plebiscite. In the Schleswig plebiscites, the Danish-speaking population in the northern zone voted for Denmark and the German-speaking population in the southern zone voted for Germany, so Schleswig was partitioned. Holstein remained German without a referendum. Memel was ceded to the Allied and Associated Powers, which were to decide the future of the area. On 9 January 1923, Lithuanian forces invaded the territory. Following negotiations, on 8 May 1924, the League of Nations ratified the annexation on the grounds that Lithuania accepted the Memel Statute, a power-sharing arrangement to protect non-Lithuanians in the territory and preserve its autonomous status. Until 1929, German-Lithuanian co-operation increased and this power-sharing arrangement worked.
Poland was restored, and most of the provinces of Posen and West Prussia, as well as some areas of Upper Silesia, were incorporated into the restored country after plebiscites and independence uprisings. All German colonies were to be handed over to the League of Nations, which then assigned them as mandates to Australia, France, Japan, New Zealand, Portugal, and the United Kingdom. The new administrators were required to act as disinterested trustees over the regions, promoting the welfare of their inhabitants in a variety of ways until they were able to govern themselves. The left and right banks of the Rhine were to be permanently demilitarised. The industrially important Saarland was to be governed by the League of Nations for 15 years and its coalfields administered by France. At the end of that time a plebiscite was to determine the Saar's future status. To ensure execution of the treaty's terms, Allied troops would occupy the left (German) bank of the Rhine for a period of 5–15 years. The German army was to be limited to 100,000 officers and men; the general staff was to be dissolved; vast quantities of war materiel were to be handed over and the manufacture of munitions rigidly curtailed. The navy was to be similarly reduced, and no military aircraft were allowed. Germany was also required to pay reparations for all civilian damage caused during the war.

Weimar Republic, 1919–1933

The humiliating peace terms in the Treaty of Versailles provoked bitter indignation throughout Germany and seriously weakened the new democratic regime. The forces that would become democracy's greatest enemies were already taking shape. In December 1918, the Communist Party of Germany (KPD) was founded, and in 1919 it tried and failed to overthrow the new republic. Adolf Hitler took control of the new National Socialist German Workers' Party (NSDAP) in 1921; the party failed in a coup attempt in Munich in 1923. Both parties, as well as parties supporting the republic, built militant auxiliaries that engaged in increasingly violent street battles. Electoral support for both parties increased after 1929 as the Great Depression hit the economy hard, producing many unemployed men who became available for the paramilitary units. The Nazis, with a mostly rural and lower-middle-class base, overthrew the Weimar regime and ruled Germany in 1933–1945; the KPD, with a mostly urban and working-class base, held power in East Germany in 1945–1989.

The early years

On 30 December 1918, the Communist Party of Germany was founded by the Spartacus League, which had split from the Social Democratic Party during the war. It was headed by Rosa Luxemburg and Karl Liebknecht, and rejected the parliamentary system. In 1920, about 300,000 members of the Independent Social Democratic Party of Germany joined the party, transforming it into a mass organization. The Communist Party had a following of about 10% of the electorate. In the first months of 1920, the Reichswehr was to be reduced to 100,000 men, in accordance with the Treaty of Versailles. This included the dissolution of many Freikorps – units made up of volunteers. In an attempted coup d'état in March 1920, the Kapp Putsch, the extreme right-wing politician Wolfgang Kapp had Freikorps soldiers march on Berlin and proclaimed himself Chancellor of the Reich. After four days the coup collapsed, due to popular opposition and a lack of support from the civil servants and the officers. Other cities were shaken by strikes and rebellions, which were bloodily suppressed.
Germany was the first state to establish diplomatic relations with the new Soviet Union. Under the Treaty of Rapallo, Germany accorded the Soviet Union de jure recognition, and the two signatories mutually cancelled all pre-war debts and renounced war claims. When Germany defaulted on its reparation payments, French and Belgian troops occupied the heavily industrialised Ruhr district (January 1923). The German government encouraged the population of the Ruhr to passive resistance: shops would not sell goods to the foreign soldiers, coal miners would not dig for the foreign troops, and trams in which members of the occupation army had taken seats would be left abandoned in the middle of the street. The passive resistance proved effective, insofar as the occupation became a loss-making deal for the French government. But the Ruhr struggle also led to hyperinflation, and many who lost their entire fortunes would become bitter enemies of the Weimar Republic, and voters for the anti-democratic right. See 1920s German inflation. In September 1923, the deteriorating economic conditions led Chancellor Gustav Stresemann to call an end to the passive resistance in the Ruhr. In November, his government introduced a new currency, the Rentenmark (later: Reichsmark), together with other measures to stop the hyperinflation. In the following six years the economic situation improved. In 1928, Germany's industrial production even regained the pre-war levels of 1913. In October 1925 the Treaty of Locarno was signed by Germany, France, Belgium, Britain and Italy; it recognised Germany's borders with France and Belgium. Moreover, Britain, Italy and Belgium undertook to assist France in the case that German troops marched into the demilitarised Rhineland. Locarno paved the way for Germany's admission to the League of Nations in 1926. The actual amount of reparations that Germany was obliged to pay out was not the 132 billion marks decided in the London Schedule of 1921 but rather the 50 billion marks stipulated in the A and B Bonds. Historian Sally Marks says the remaining 82 billion marks in "C bonds" were entirely chimerical, a device to fool the public into thinking Germany would pay much more. The actual total payout from 1920 to 1931 (when payments were suspended indefinitely) was 20 billion German gold marks, worth about $5 billion US dollars or £1 billion British pounds. 12.5 billion of this was cash that came mostly from loans from New York bankers. The rest was goods like coal and chemicals, or assets like railway equipment. The reparations bill was fixed in 1921 on the basis of a German capacity to pay, not on the basis of Allied claims. The highly publicized rhetoric of 1919 about paying for all the damages and all the veterans' benefits was irrelevant for the total, but it did determine how the recipients spent their share. Germany owed reparations chiefly to France, Britain, Italy and Belgium; the US received $100 million.

Economic collapse and political problems, 1929–1933

The Wall Street Crash of 1929 marked the beginning of the worldwide Great Depression, which hit Germany as hard as any nation. In July 1931, the Darmstädter und Nationalbank – one of the biggest German banks – failed. In early 1932, the number of unemployed had soared to more than 6,000,000. On top of the collapsing economy came a political crisis: the political parties represented in the Reichstag were unable to build a governing majority in the face of escalating extremism from the far right (the Nazis, NSDAP) and the far left (the Communists, KPD).
In March 1930, President Hindenburg appointed Heinrich Brüning Chancellor. Brüning governed by invoking Article 48 of the Weimar constitution, which allowed the president to override Parliament by emergency decree. To push through his package of austerity measures against a majority of Social Democrats, Communists and the NSDAP (Nazis), Brüning made use of emergency decrees and dissolved Parliament. In March and April 1932, Hindenburg was re-elected in the German presidential election. The Nazi Party was the largest party in the national elections of 1932. On 31 July 1932 it received 37.3% of the votes, and in the election of 6 November 1932 it received less, but still the largest share, 33.1%, making it the biggest party in the Reichstag. The Communist KPD came third, with 15%. Together, the anti-democratic parties of the far right and far left were now able to hold the majority of seats in Parliament, but they were at sword's point with each other, fighting it out in the streets. The Nazis were particularly successful among Protestants, among unemployed young voters, among the lower middle class in the cities, and among the rural population. They were weakest in Catholic areas and in large cities. On 30 January 1933, pressured by former Chancellor Franz von Papen and other conservatives, President Hindenburg appointed Hitler as Chancellor.

Science and culture

The Weimar years saw a flowering of German science and high culture, before the Nazi regime caused a decline in the scientific and cultural life of Germany and forced many renowned scientists and writers to flee. German recipients dominated the Nobel prizes in science. Germany dominated the world of physics before 1933, led by Hermann von Helmholtz, Joseph von Fraunhofer, Daniel Gabriel Fahrenheit, Wilhelm Conrad Röntgen, Albert Einstein, Max Planck and Werner Heisenberg. Chemistry likewise was dominated by German professors and researchers at the great chemical companies such as BASF and Bayer and persons like Fritz Haber. Theoretical mathematicians included Carl Friedrich Gauss in the 19th century and David Hilbert in the 20th century. Karl Benz, the inventor of the automobile, was one of the pivotal figures of engineering. Among the most important German writers were Thomas Mann (1875–1955), Hermann Hesse (1877–1962) and Bertolt Brecht (1898–1956). The pessimistic historian Oswald Spengler wrote The Decline of the West (1918–23) on the inevitable decay of Western civilization, and influenced intellectuals in Germany such as Martin Heidegger, Max Scheler, and the Frankfurt School, as well as intellectuals around the world. After 1933, Nazi proponents of "Aryan physics," led by the Nobel Prize-winners Johannes Stark and Philipp Lenard, attacked Einstein's theory of relativity as a degenerate example of Jewish materialism in the realm of science. Many scientists and humanists emigrated; Einstein moved permanently to the U.S., but some of the others returned after 1945.

Nazi Germany, 1933–1945

The Nazi regime restored economic prosperity and ended mass unemployment using heavy spending on the military, while suppressing labor unions and strikes. The return of prosperity gave the Nazi Party enormous popularity, with only minor, isolated and subsequently unsuccessful cases of resistance among the German population over the 12 years of rule.
The Gestapo (secret police) under Heinrich Himmler destroyed the political opposition and persecuted the Jews, trying to force them into exile while seizing their property. The Party took control of the courts, local government, and all civic organizations except the Protestant and Catholic churches. All expressions of public opinion were controlled by Hitler's propaganda minister, Joseph Goebbels, who made effective use of film, mass rallies, and Hitler's hypnotic speaking. The Nazi state idolized Hitler as its Führer (leader), putting all powers in his hands. Nazi propaganda centered on Hitler and was quite effective in creating what historians called the "Hitler Myth": that Hitler was all-wise and that any mistakes or failures by others would be corrected when brought to his attention. In fact Hitler had a narrow range of interests, and decision making was diffused among overlapping, feuding power centers; on some issues he was passive, simply assenting to pressures from whoever had his ear. All top officials reported to Hitler and followed his basic policies, but they had considerable autonomy on a daily basis.

Establishment of the Nazi regime

In order to secure a majority for his Nazi Party in the Reichstag, Hitler called for new elections. On the evening of 27 February 1933, a fire was set in the Reichstag building. Hitler swiftly blamed an alleged Communist uprising and convinced President Hindenburg to sign the Reichstag Fire Decree. This decree, which would remain in force until 1945, suspended important political and human rights of the Weimar constitution. Communist agitation was banned, but at this time not the Communist Party itself. Eleven thousand Communists and Socialists were arrested and brought into hastily prepared Nazi concentration camps such as Kemna concentration camp, where they were at the mercy of the Gestapo, the newly established secret police force (9,000 were found guilty and most executed). Communist Reichstag deputies were taken into protective custody (despite their constitutional privileges). Despite the terror and unprecedented propaganda, the last free general elections of 5 March 1933, while giving the NSDAP 43.9% of the vote, failed to bring the majority that Hitler had hoped for. Together with the German National People's Party (DNVP), however, he was able to form a slim majority government. With accommodations to the Catholic Centre Party, Hitler succeeded in convincing the required two-thirds of a rigged Parliament to pass the Enabling Act of 1933, which gave his government full legislative power. Only the Social Democrats voted against the Act. The Enabling Act formed the basis for the dictatorship and the dissolution of the Länder; the trade unions and all political parties other than the Nazi Party were suppressed. A centralised totalitarian state was established, no longer based on the liberal Weimar constitution. Germany left the League of Nations. The coalition parliament was rigged on this fateful 23 March 1933 by defining the absence of arrested and murdered deputies as voluntary and therefore cause for their exclusion as wilful absentees. Subsequently, in July the Centre Party voluntarily dissolved itself in a quid pro quo with the anti-communist Pope Pius XI for the Reichskonkordat; by these manoeuvres Hitler drew Catholic voters into the Nazi Party and achieved a long-awaited international diplomatic acceptance of his regime.
According to Professor Dick Geary, however, the Nazis gained a larger share of their vote in Protestant areas than in Catholic areas in the elections held between 1928 and November 1932. The Communist Party was proscribed in April 1933. Many leaders of the Nazi SA, meanwhile, were disappointed. The Chief of Staff of the SA, Ernst Röhm, was pressing for the SA to be incorporated into the army. Hitler had long been at odds with Röhm and felt increasingly threatened by these plans. On the weekend of 30 June 1934, in what became known as the Night of the Long Knives, Hitler ordered the SS to seize Röhm and the top SA leaders and to execute them without trial, using their notorious homosexuality as an excuse. Upon Hindenburg's death on 2 August 1934, Hitler's cabinet passed a law proclaiming the presidency to be vacant and transferred the role and powers of the head of state to Hitler as Führer (Leader) and Chancellor. The SS became an independent organisation under the command of the Reichsführer-SS Heinrich Himmler. He would become the supervisor of the Gestapo and of the concentration camps, and soon also of the ordinary police. Hitler also established the Waffen-SS as a separate troop.

Antisemitism and the Holocaust

The Nazi regime was particularly hostile towards Jews, who became the target of unending antisemitic propaganda attacks. The Nazis attempted to convince the German people to view and treat Jews as "subhumans", and immediately after winning almost 44% of parliamentary seats in the March 1933 elections the Nazis imposed a nationwide boycott of Jewish businesses. On March 20, 1933 the first Nazi concentration camp was established at Dachau in Bavaria. From 1933 to 1935 the Nazi regime consolidated its power and imposed the Nuremberg Laws of 1935, which banned Jews from civil service and academic positions. Jews lost their German citizenship, and a ban on sexual relations between people classified as "Aryans" and "non-Aryans" was imposed. Jews continued to suffer persecution under the Nazi regime, exemplified by the Kristallnacht pogrom of 1938, and about half of Germany's 500,000 Jews fled the country before 1939, after which escape became almost impossible. In 1941, the Nazi leadership decided to implement a plan that they called the "Final Solution", which came to be known as the Holocaust. Under the plan, Jews and other "lesser races", along with political opponents from Germany as well as occupied countries, were systematically murdered at killing sites, Nazi concentration camps, and, starting in 1942, at extermination camps. Between 1941 and 1945 Jews, Gypsies, Slavs, communists, homosexuals, the mentally and physically disabled and members of other groups were targeted and methodically murdered; these crimes were the origin of the word "genocide". In total approximately 11 million people were killed during the Holocaust, including 1.1 million children. Hitler re-established the Luftwaffe (air force) and reintroduced universal military service. This was in breach of the Treaty of Versailles; Britain, France and Italy issued notes of protest. Hitler had the officers swear their personal allegiance to him. In 1936 German troops marched into the demilitarised Rhineland. Britain and France did not intervene. The move strengthened Hitler's standing in Germany.
His reputation swelled further with the 1936 Summer Olympics, held that year in Berlin, which proved another great propaganda success for the regime, as orchestrated by master propagandist Joseph Goebbels. Historians have paid special attention to the efforts by Nazi Germany to reverse the gains women had made before 1933, especially in the relatively liberal Weimar Republic. It appears the role of women in Nazi Germany changed according to circumstances. Theoretically the Nazis believed that women must be subservient to men, avoid careers, devote themselves to childbearing and child-rearing, and be a helpmate of the traditional dominant father in the traditional family. However, before 1933, women played important roles in the Nazi organization and were allowed some autonomy to mobilize other women. After Hitler came to power in 1933, the activist women were replaced by bureaucratic women who emphasized feminine virtues, marriage, and childbirth. As Germany prepared for war, large numbers of women were incorporated into the public sector, and with the need for full mobilization of factories by 1943, all women were required to register with the employment office. Women's wages remained unequal and women were denied positions of leadership or control. In 1944–45 more than 500,000 women were volunteer uniformed auxiliaries in the German armed forces (Wehrmacht). About the same number served in civil aerial defense, 400,000 volunteered as nurses, and many more replaced drafted men in the wartime economy. In the Luftwaffe they served in combat roles helping to operate the anti-aircraft systems that shot down Allied bombers. Hitler's diplomatic strategy in the 1930s was to make seemingly reasonable demands, threatening war if they were not met. When opponents tried to appease him, he accepted the gains that were offered, then went on to the next target. That aggressive strategy worked as Germany pulled out of the League of Nations (1933), rejected the Versailles Treaty and began to re-arm (1935), won back the Saar (1935), remilitarized the Rhineland (1936), formed an alliance ("axis") with Mussolini's Italy (1936), sent massive military aid to Franco in the Spanish Civil War (1936–39), annexed Austria (1938), took over Czechoslovakia after the British and French appeasement of the Munich Agreement of 1938, formed a peace pact with Joseph Stalin's Soviet Union in August 1939, and finally invaded Poland in September 1939. Britain and France declared war and World War II began – somewhat sooner than the Nazis expected or were ready for. After establishing the "Rome-Berlin axis" with Benito Mussolini, and signing the Anti-Comintern Pact with Japan – which was joined by Italy a year later in 1937 – Hitler felt able to take the offensive in foreign policy. On 12 March 1938, German troops marched into Austria, where an attempted Nazi coup had been unsuccessful in 1934. When Austrian-born Hitler entered Vienna, he was greeted by loud cheers. Four weeks later, 99% of Austrians voted in favour of the annexation (Anschluss) of their country to the German Reich. After Austria, Hitler turned to Czechoslovakia, where the 3.5 million-strong Sudeten German minority was demanding equal rights and self-government. At the Munich Conference of September 1938, Hitler, the Italian leader Benito Mussolini, British Prime Minister Neville Chamberlain and French Prime Minister Édouard Daladier agreed upon the cession of Sudeten territory to the German Reich by Czechoslovakia.
Hitler thereupon declared that all of the German Reich's territorial claims had been fulfilled. However, barely six months after the Munich Agreement, in March 1939, Hitler used the smoldering quarrel between Slovaks and Czechs as a pretext for taking over the rest of Czechoslovakia as the Protectorate of Bohemia and Moravia. In the same month, he secured the return of Memel from Lithuania to Germany. Chamberlain was forced to acknowledge that his policy of appeasement towards Hitler had failed.

World War II

At first Germany's military moves were brilliantly successful, as in the "blitzkrieg" invasions of Poland (1939), Norway (1940), the Low Countries (1940), and above all the stunningly successful invasion and quick conquest of France in 1940. Hitler probably wanted peace with Britain in late 1940, but Prime Minister Winston Churchill, standing alone, was dogged in his defiance. Churchill had major financial, military, and diplomatic help from President Franklin D. Roosevelt in the U.S., another implacable foe of Hitler. Hitler's emphasis on maintaining high living standards postponed the full mobilization of the national economy until 1942, years after the great rivals Britain, Russia, and the U.S. had fully mobilized. Germany invaded the Soviet Union in June 1941 – weeks behind schedule – but swept forward until it reached the gates of Moscow. The tide turned in December 1941, when the invasion of Russia stalled in cold weather and the United States joined the war. After the surrender in North Africa and the loss of the Battle of Stalingrad in 1942–43, the Germans were on the defensive. By late 1944, the United States, Canada, France, and Great Britain were closing in on Germany in the West, while the Soviets were closing in from the East. Overy estimated in 2014 that in all about 353,000 civilians were killed by British and American strategic bombing of German cities, and nine million were left homeless. Nazi Germany collapsed as Berlin was taken by the Red Army in a fight to the death on the city streets. Hitler committed suicide on 30 April 1945. The final German surrender was signed on 8 May 1945. By September 1945, the Third Reich (which lasted only 12 years) and its Axis partners (Italy and Japan) had been defeated, chiefly by the forces of the Soviet Union, the United States, and Great Britain. Much of Europe lay in ruins, and over 60 million people had been killed (most of them civilians), including approximately 6 million Jews and 5 million non-Jews in what became known as the Holocaust. World War II resulted in the destruction of Germany's political and economic infrastructure and led directly to its partition, considerable loss of territory (especially in the east), and a historical legacy of guilt and shame.

Germany during the Cold War, 1945–1990

As a consequence of the defeat of Nazi Germany in 1945 and the onset of the Cold War in 1947, the country was split between the two global blocs in the East and West, a period known as the division of Germany. Millions of refugees from Central and Eastern Europe moved west, most of them to West Germany. Two states emerged: West Germany was a parliamentary democracy, a NATO member, a founding member of what has since become the European Union, and one of the world's largest economies, while East Germany was a totalitarian Communist dictatorship and a satellite of Moscow. With the collapse of Communism in 1989, reunification on West Germany's terms followed in 1990.
No one doubted Germany's economic and engineering prowess; the question was how long bitter memories of the war would cause Europeans to distrust Germany, and whether Germany could demonstrate that it had rejected totalitarianism and militarism and embraced democracy and human rights. The total of German war dead was 8% to 10% of a prewar population of 69,000,000, or between 5.5 million and 7 million people. This included 4.5 million in the military, and between 1 and 2 million civilians. There was chaos as 11 million foreign workers and POWs left, while 14 million displaced refugees from the east and returning soldiers came home. During the Cold War, the West German government estimated a death toll of 2.2 million civilians due to the flight and expulsion of Germans and through forced labour in the Soviet Union. This figure remained unchallenged until the 1990s, when some historians put the death toll at 500,000–600,000 confirmed deaths. In 2006 the German government reaffirmed its position that 2.0–2.5 million deaths occurred. At the Potsdam Conference, Germany was divided into four military occupation zones by the Allies and did not regain independence until 1949. The provinces east of the Oder and Neisse rivers (the Oder-Neisse line) were transferred to Poland, Lithuania, and Russia (Kaliningrad oblast); the 6.7 million Germans living in Poland and the 2.5 million in Czechoslovakia were forced to move west, although most had already left when the war ended. Denazification removed, imprisoned, or executed most top officials of the old regime, but most middle and lower ranks of civilian officialdom were not seriously affected. In accordance with the Allied agreement made at the Yalta Conference, millions of POWs were used as forced labor by the Soviet Union and other European countries. In the East, the Soviets crushed dissent and imposed another police state, often employing ex-Nazis in the dreaded Stasi. The Soviets extracted about 23% of the East German GNP for reparations, while in the West reparations were a minor factor. In 1945–46 housing and food conditions were bad, as the disruption of transport, markets, and finances slowed a return to normal. In the West, bombing had destroyed a quarter of the housing stock, and over 10 million refugees from the east had crowded in, most living in camps. Food production in 1946–48 was only two-thirds of the prewar level, while grain and meat shipments – which usually supplied 25% of the food – no longer arrived from the East. Furthermore, the end of the war brought the end of large shipments of food seized from occupied nations that had sustained Germany during the war. Coal production was down 60%, which had cascading negative effects on railroads, heavy industry, and heating. Industrial production fell by more than half and reached prewar levels only at the end of 1949. Allied economic policy originally was one of industrial disarmament plus building up the agricultural sector. In the western sectors, most of the industrial plants had minimal bomb damage, and the Allies dismantled 5% of the industrial plants for reparations. However, deindustrialization became impractical and the U.S. instead called for a strong industrial base in Germany so it could stimulate European economic recovery. The U.S. shipped in food in 1945–47 and made a $600 million loan in 1947 to rebuild German industry. By May 1946 the removal of machinery had ended, thanks to lobbying by the U.S. Army.
The Truman administration finally realised that economic recovery in Europe could not go forward without the reconstruction of the German industrial base on which it had previously been dependent. Washington decided that an "orderly, prosperous Europe requires the economic contributions of a stable and productive Germany." In 1945 the occupying powers took over all newspapers in Germany and purged them of Nazi influence. The American occupation headquarters, the Office of Military Government, United States (OMGUS), began its own newspaper based in Munich, Die Neue Zeitung. It was edited by German and Jewish émigrés who had fled to the United States before the war. Its mission was to encourage democracy by exposing Germans to how American culture operated. The paper was filled with details on American sports, politics, business, Hollywood, and fashions, as well as international affairs. In 1949 the Soviet zone became the "Deutsche Demokratische Republik" – "DDR" ("German Democratic Republic" – "GDR", often simply "East Germany"), under the control of the Socialist Unity Party. Neither country had a significant army until the 1950s, but East Germany built the Stasi into a powerful secret police that infiltrated every aspect of society. East Germany was an Eastern bloc state under the political and military control of the Soviet Union through its occupation forces and the Warsaw Treaty. Political power was exercised solely by leading members (the Politburo) of the communist-controlled Socialist Unity Party (SED). A Soviet-style command economy was set up; later the GDR became the most advanced Comecon state. While East German propaganda was based on the benefits of the GDR's social programs and the alleged constant threat of a West German invasion, many of its citizens looked to the West for political freedoms and economic prosperity. Walter Ulbricht (1893–1973) was the party boss from 1950 to 1971. In 1933, Ulbricht had fled to Moscow, where he served as a Comintern agent loyal to Stalin. As World War II was ending, Stalin assigned him the job of designing the postwar German system that would centralize all power in the Communist Party. Ulbricht became deputy prime minister in 1949 and secretary (chief executive) of the Socialist Unity (Communist) Party in 1950. Some 2.6 million people had fled East Germany by 1961, when Ulbricht built the Berlin Wall to stop them; those who attempted to cross were shot. What the GDR called the "Anti-Fascist Protective Wall" was a major propaganda embarrassment during the Cold War, but it did stabilize East Germany and postpone its collapse. Ulbricht lost power in 1971, but was kept on as a nominal head of state. He was replaced because he failed to solve growing national crises, such as the worsening economy in 1969–70, the fear of another popular uprising such as had occurred in 1953, and the disgruntlement between Moscow and Berlin caused by Ulbricht's détente policies toward the West. The transition to Erich Honecker (General Secretary from 1971 to 1989) led to a change in the direction of national policy and to efforts by the Politburo to pay closer attention to the grievances of the proletariat. Honecker's plans were not successful, however, with dissent growing among East Germany's population. In 1989, the socialist regime collapsed after 40 years, despite its omnipresent secret police, the Stasi. Main reasons for the collapse include severe economic problems and growing emigration towards the West. East Germany's culture was shaped by Communism and particularly Stalinism.
It was characterized by East German psychoanalyst Hans-Joachim Maaz in 1990 as having produced a "Congested Feeling" among Germans in the East, the result of Communist policies that criminalized personal expression deviating from government-approved ideals and of the enforcement of Communist principles by physical force and intellectual repression by government agencies, particularly the Stasi. Critics of the East German state have claimed that the state's commitment to communism was a hollow and cynical tool of a ruling elite. This argument has been challenged by some scholars who claim that the Party was committed to the advance of scientific knowledge, economic development, and social progress. However, the vast majority regarded the state's Communist ideals as nothing more than a deceptive method for government control. According to German historian Jürgen Kocka (2010):

- Conceptualizing the GDR as a dictatorship has become widely accepted, while the meaning of the concept dictatorship varies. Massive evidence has been collected that proves the repressive, undemocratic, illiberal, nonpluralistic character of the GDR regime and its ruling party.

West Germany (Bonn Republic)

In 1949, the three western occupation zones (American, British, and French) were combined into the Federal Republic of Germany (FRG, West Germany). The government was formed under Chancellor Konrad Adenauer and his conservative CDU/CSU coalition. The CDU/CSU was in power during most of the period since 1949. The capital was Bonn until it was moved to Berlin in 1990. In 1990 the FRG absorbed East Germany and gained full sovereignty over Berlin. At all points West Germany was much larger and richer than East Germany, which became a dictatorship under the control of the Communist Party and was closely monitored by Moscow. Germany, especially Berlin, was a cockpit of the Cold War, with NATO and the Warsaw Pact assembling major military forces in west and east. However, there was never any combat. West Germany enjoyed prolonged economic growth beginning in the early 1950s (Wirtschaftswunder or "Economic Miracle"). Industrial production doubled from 1950 to 1957, and gross national product grew at a rate of 9 or 10% per year, providing the engine for the economic growth of all of Western Europe. Labor unions supported the new policies with postponed wage increases, minimized strikes, support for technological modernization, and a policy of co-determination (Mitbestimmung), which involved a satisfactory grievance-resolution system as well as requiring representation of workers on the boards of large corporations. The recovery was accelerated by the currency reform of June 1948, U.S. gifts of $1.4 billion as part of the Marshall Plan, the breaking down of old trade barriers and traditional practices, and the opening of the global market. West Germany gained legitimacy and respect, as it shed the horrible reputation Germany had gained under the Nazis.

1948 currency reform

The most dramatic and successful policy event was the currency reform of 1948. Since the 1930s, prices and wages had been controlled, but money had been plentiful. That meant that people had accumulated large paper assets, and that official prices and wages did not reflect reality, as the black market dominated the economy and more than half of all transactions were taking place unofficially. On 21 June 1948, the Western Allies withdrew the old currency and replaced it with the new Deutsche Mark at the rate of 1 new per 10 old.
This wiped out 90% of government and private debt, as well as private savings. Prices were decontrolled, and labor unions agreed to accept a 15% wage increase despite the 25% rise in prices. The result was that the prices of German export products held steady, while profits and earnings from exports soared and were poured back into the economy. The currency reforms were simultaneous with the $1.4 billion in Marshall Plan money coming in from the United States, which was used primarily for investment. In addition, the Marshall Plan forced German companies, as well as those in all of Western Europe, to modernize their business practices and take account of the international market. Marshall Plan funding helped overcome bottlenecks in the surging economy caused by remaining controls (which were removed in 1949), and Marshall Plan business reforms opened up a greatly expanded market for German exports. Overnight, consumer goods appeared in the stores, because they could be sold for realistic prices, emphasizing to Germans that their economy had turned a corner.

The success of the currency reform angered the Soviets, who cut off all road, rail, and canal links between the western zones and West Berlin. This was the Berlin Blockade, which lasted from 24 June 1948 to 12 May 1949. In response, the U.S. and Britain launched an airlift of food and coal and distributed the new currency in West Berlin as well. The city thereby became economically integrated into West Germany.

Konrad Adenauer (1876–1967) was the dominant leader in West Germany. He was the first chancellor (top official) of the FRG, 1949–63, and the founder and longtime leader of the Christian Democratic Union (CDU), a coalition of conservatives, ordoliberals, and adherents of Protestant and Catholic social teaching that dominated West German politics for most of its history. During his chancellorship, the West German economy grew quickly, and West Germany established friendly relations with France, participated in the emerging European Union, established the country's armed forces (the Bundeswehr), and became a pillar of NATO as well as a firm ally of the United States. Adenauer's government also commenced the long process of reconciliation with the Jews and Israel after the Holocaust.

Ludwig Erhard (1897–1977) was in charge of economic policy as economics director for the British and American occupation zones and was Adenauer's long-time economics minister. Erhard's decision to lift many price controls in 1948 (despite opposition from both the Social Democrats and the Allied authorities), plus his advocacy of free markets, helped set the Federal Republic on its strong growth from wartime devastation. Norbert Walter, a former chief economist at Deutsche Bank, argues that "Germany owes its rapid economic advance after World War II to the system of the Social Market Economy, established by Ludwig Erhard." Erhard was politically less successful when he served as the CDU Chancellor from 1963 until 1966. Erhard followed the concept of a social market economy and was in close touch with professional economists. He viewed the market itself as social and supported only a minimum of welfare legislation. However, in 1957 Erhard suffered a series of decisive defeats in his effort to create a free, competitive economy; he had to compromise on such key issues as the anti-cartel legislation. Thereafter, the West German economy evolved into a conventional West European welfare state.
Meanwhile, in adopting the Godesberg Program in 1959, the Social Democratic Party of Germany (SPD) largely abandoned Marxist ideas and embraced the concept of the market economy and the welfare state. It now sought to move beyond its old working-class base and appeal to the full spectrum of potential voters, including the middle class and professionals. Labor unions cooperated increasingly with industry, achieving labor representation on corporate boards and increases in wages and benefits. In 1966 Erhard lost support, and Kurt Kiesinger (1904–1988) was elected as Chancellor by a new CDU/CSU-SPD alliance combining the two largest parties. Socialist (SPD) leader Willy Brandt was Deputy Federal Chancellor and Foreign Minister. The Grand Coalition lasted from 1966 to 1969 and is best known for reducing tensions with the Soviet bloc nations and establishing diplomatic relations with Czechoslovakia, Romania, and Yugoslavia.

With a booming economy short of unskilled workers, especially after the Berlin Wall cut off the steady flow of East Germans, the FRG negotiated migration agreements with Italy (1955), Spain (1960), Greece (1960), and Turkey (1961) that brought in hundreds of thousands of temporary guest workers, called Gastarbeiter. In 1968 the FRG signed a guest worker agreement with Yugoslavia that brought in additional guest workers. Gastarbeiter were young men who were paid full-scale wages and benefits but were expected to return home in a few years. The agreement with Turkey ended in 1973, but few workers returned because there were few good jobs in Turkey. By 2010 there were about 4 million people of Turkish descent in Germany. The generation born in Germany attended German schools, but many had a poor command of both German and Turkish and held low-skilled jobs or were unemployed.

Brandt and Ostpolitik

Willy Brandt (1913–1992) was the leader of the Social Democratic Party from 1964 to 1987 and West German Chancellor from 1969 to 1974. Under his leadership, the German government sought to reduce tensions with the Soviet Union and improve relations with the German Democratic Republic, a policy known as Ostpolitik. Relations between the two German states had been icy at best, with propaganda barrages in each direction. The heavy outflow of talent from East Germany prompted the building of the Berlin Wall in 1961, which worsened Cold War tensions and prevented East Germans from traveling. Although Brandt was anxious to relieve serious hardships for divided families and to reduce friction, his Ostpolitik held to the concept of "two German states in one German nation." Ostpolitik was opposed by conservative elements in Germany, but it won Brandt an international reputation and the Nobel Peace Prize in 1971. In September 1973, both West and East Germany were admitted to the United Nations. The two countries exchanged permanent representatives in 1974, and, in 1987, East Germany's leader Erich Honecker paid an official state visit to West Germany.

Economic crisis of the 1970s

After 1973, Germany was hard hit by a worldwide economic crisis, soaring oil prices, and stubbornly high unemployment, which jumped from 300,000 in 1973 to 1.1 million in 1975. The Ruhr region was hardest hit, as its easy-to-reach coal mines petered out and expensive German coal was no longer competitive. Likewise, the Ruhr steel industry went into sharp decline, as its prices were undercut by lower-cost suppliers such as Japan.
The welfare system provided a safety net for the large number of unemployed workers, and many factories reduced their labor forces and began to concentrate on high-profit specialty items. After 1990 the Ruhr moved into service industries and high technology. Cleaning up the heavy air and water pollution became a major industry in its own right. Meanwhile, formerly rural Bavaria became a high-tech center of industry.

A spy scandal forced Brandt to step down as Chancellor while remaining as party leader. He was replaced by Helmut Schmidt (1918–2015) of the SPD, who served as Chancellor from 1974 to 1982. Schmidt continued the Ostpolitik with less enthusiasm. He had a PhD in economics and was more interested in domestic issues, such as reducing inflation. The debt grew rapidly as he borrowed to cover the cost of the ever more expensive welfare state. After 1979, foreign policy issues grew central as the Cold War intensified again. The German peace movement mobilized hundreds of thousands of demonstrators to protest against the American deployment in Europe of new medium-range ballistic missiles. Schmidt supported the deployment but was opposed by the left wing of the SPD and by Brandt.

The pro-business Free Democratic Party (FDP) had been in coalition with the SPD, but now it changed direction. Led by Economics Minister Otto Graf Lambsdorff (1926–2009), the FDP adopted the market-oriented "Kiel Theses" in 1977; it rejected the Keynesian emphasis on consumer demand and proposed to reduce social welfare spending and introduce policies to stimulate production and create jobs. Lambsdorff argued that the result would be economic growth, which would itself solve both the social and the financial problems. As a consequence, the FDP switched allegiance to the CDU, and Schmidt lost his parliamentary majority in 1982. For the only time in West Germany's history, the government fell on a vote of no confidence.

Helmut Kohl (1930–2017) brought the conservatives back to power with a CDU/CSU-FDP coalition in 1982 and served as Chancellor until 1998. After repeated victories in 1983, 1987, 1990, and 1994, he was finally defeated in the 1998 federal elections in the biggest landslide for the left on record and was succeeded as Chancellor by Gerhard Schröder of the SPD. Kohl is best known for orchestrating reunification with the approval of all the Four Powers from World War II, who still had a voice in German affairs.

During the summer of 1989, rapid changes known as the peaceful revolution, or Die Wende, took place in East Germany, which quickly led to German reunification. Growing numbers of East Germans emigrated to West Germany, many via Hungary after Hungary's reformist government opened its borders. Thousands of East Germans also tried to reach the West by staging sit-ins at West German diplomatic facilities in other East European capitals, most notably in Prague. The exodus generated demands within East Germany for political change, and mass demonstrations in several cities continued to grow. Unable to stop the growing civil unrest, Erich Honecker was forced to resign in October, and on 9 November, East German authorities unexpectedly allowed East German citizens to enter West Berlin and West Germany. Hundreds of thousands of people took advantage of the opportunity; new crossing points were opened in the Berlin Wall and along the border with West Germany.
This accelerated the process of reform in East Germany, which ended with German reunification coming into force on 3 October 1990.

Federal Republic of Germany, 1990–present

The SPD in coalition with the Greens won the elections of 1998. SPD leader Gerhard Schröder positioned himself as a centrist "Third Way" candidate in the mold of Britain's Tony Blair and America's Bill Clinton. In March 2003, Schröder reversed his position and proposed a significant downsizing of the welfare state, known as Agenda 2010. He had enough support to overcome opposition from the trade unions and the SPD's left wing. Agenda 2010 had five goals: tax cuts; labor market deregulation, especially relaxing rules protecting workers from dismissal and setting up Hartz concept job training; modernizing the welfare state by reducing entitlements; decreasing bureaucratic obstacles for small businesses; and providing new low-interest loans to local governments.

From 2005 to 2009, Germany was ruled by a grand coalition led by the CDU's Angela Merkel as chancellor. Since the 2009 elections, Merkel has headed a center-right government of the CDU/CSU and FDP. Together with France and other EU states, Germany has played the leading role in the European Union. Germany (especially under Chancellor Helmut Kohl) was one of the main supporters of admitting many East European countries to the EU. Germany is at the forefront of European states seeking to exploit the momentum of monetary union to advance the creation of a more unified and capable European political, defense, and security apparatus. German Chancellor Schröder expressed an interest in a permanent seat for Germany in the UN Security Council, identifying France, Russia, and Japan as countries that explicitly backed Germany's bid. Germany formally adopted the euro on 1 January 1999 after permanently fixing the Deutsche Mark rate on 21 December 1998.

Since 1990, the German Bundeswehr has participated in a number of peacekeeping and disaster relief operations abroad. Since 2002, German troops have formed part of the International Security Assistance Force in the war in Afghanistan, resulting in the first German casualties in combat missions since World War II.

In the worldwide economic recession that began in 2008, Germany did relatively well. However, the economic instability of Greece and several other EU nations in 2010–11 forced Germany to reluctantly sponsor a massive financial rescue. In the wake of the nuclear disaster in Japan following the 2011 earthquake and tsunami, German public opinion turned sharply against nuclear power in Germany, which then produced about a fourth of the electricity supply. In response, Merkel announced plans to close down the nuclear power plants over the following decade and to rely even more heavily on wind and other alternative energy sources, in addition to coal and natural gas. For further information, see Germany in 2011.

Germany was affected by the European migrant crisis in 2015 as it became the final destination of choice for many asylum seekers from Africa and the Middle East entering the EU. The country took in over a million refugees and migrants and developed a quota system which redistributed migrants around its federal states based on their tax income and existing population density. The decision by Merkel to authorize unrestricted entry led to heavy criticism in Germany as well as within Europe.
A major historiographical debate about German history concerns the Sonderweg, the alleged "special path" that separated German history from the normal course of historical development, and whether or not Nazi Germany was the inevitable result of the Sonderweg. Proponents of the Sonderweg theory, such as Fritz Fischer, point to such events as the Revolution of 1848, the authoritarianism of the Second Empire, and the continuation of the Imperial elite into the Weimar and Nazi periods. Opponents of the Sonderweg theory, such as Gerhard Ritter, argue that its proponents are guilty of seeking selective examples and that there was much contingency and chance in German history. In addition, there was much debate among supporters of the Sonderweg concept as to the reasons for the Sonderweg and whether or not it ended in 1945. Was there a Sonderweg? Winkler says:

- "For a long time, educated Germans answered it in the positive, initially by laying claim to a special German mission, then, after the collapse of 1945, by criticizing Germany's deviation from the West. Today, the negative view is predominant. Germany did not, according to the now prevailing opinion, differ from the great European nations to an extent that would justify speaking of a 'unique German path.' And, in any case, no country on earth ever took what can be described as the 'normal path.'"

- Conservatism in Germany - Economic history of Germany - Feminism in Germany - German monarchs Family tree - History of Austria - History of Berlin - History of German foreign policy - History of German journalism - History of German women - History of the Jews in Germany - Liberalism in Germany - List of Chancellors of Germany - List of German monarchs - Medieval East Colonisation by German noblemen and farmers - Military history of Germany - Names of Germany for terminology applied to Germany - Politics of Germany - Territorial evolution of Germany - Wagner 2010, pp. 19726–19730. - "World's Oldest Spears - Archaeology Magazine Archive". archaeology.org. - "Earliest music instruments found". BBC News. - "Ice Age Lion Man is world's earliest figurative sculpture - The Art Newspaper". The Art Newspaper. - "The Venus of Hohle Fels". donsmaps.com. - Kristinsson 2010, p. 147: "In the 1st century BC it was the Suebic tribes who were expanding most conspicuously. [...] Originating from central Germania, they moved to the south and southwest. [...] As Rome was conquering the Gauls, Germans were expanding to meet them, and this was the threat from which Caesar claimed to be saving the Gauls. [...] For the next half century the expansion concentrated on southern Germany and Bohemia, assimilating or driving out the previous Gallic or Celtic inhabitants. The oppida in this area fell and were abandoned one after another as simple, egalitarian Germanic societies replaced the complex, stratified Celtic ones." - Green & Heather 2003, p. 29: "Greek may have followed the Persians in devising its own terms for their military formations, but the Goths were dependent [...] on Iranians of the Pontic region for terms which followed the Iranian model more closely in using the cognate Gothic term for the second element of its compounds. (Gothic dependence on Iranian may have gone even further, affecting the numeral itself, if we recall that the two Iranian loanwords in Crimean Gothic are words for 'hundred' and 'thousand')." - Fortson 2011, p.
433: "Baltic territory began to shrink shortly before the dawn of the Christian era due to the Gothic migrations into their southwestern territories [...]." - Green 2000, pp. 172-73: "Jordanes [...] mentions the Slavs (Getica 119) and associates them more closely than the Balts with the centre of Gothic power. [...] This location of the early Slavs partly at least in the region covered by the Cernjahov culture, together with their contacts (warlike or not) with the Goths under Ermanric and almost certainly before, explains their openness to Gothic loanword influence. That this may have begun early, before the expansion of the Slavs from their primeval habitat, is implied by the presence of individual loan-words in a wide range of Slavonic languages." - Claster 1982, p. 35. - Smithsonian (September 2005). - Ozment 2005, pp. 2-21. - Fichtner 2009, p. xlviii: "When the Romans began to appear in the region, shortly before the beginning of the Christian era, they turned Noricum into an administrative province, which encompassed much of what today is Austria." - The Journal of the Anthropological Society of Bombay. 10: 647. 1917 https://books.google.com/books?id=2hg7AQAAMAAJ. [...] Raetia (modern Bavaria and the adjoining country) [...].Missing or empty - Ramirez-Faria 2007, p. 267: "Provinces of Germany[:] Germania was the name of two Roman provinces on the left bank of the Rhine, but also the general Roman designation for the lands east of the Rhine." - Rüger 2004, pp. 527-28. - Bowman 2005, p. 442. - Heather 2010. - Heather 2006, p. 349: "By 469, just sixteen years after [Attila's] death, the last of the Huns were seeking asylum inside the eastern Roman Empire." - Bradbury 2004, p. 154: "East Francia consisted of four main principalities, the stem duchies – Saxony, Bavaria, Swabia and Franconia." - Rodes 1964, p. 3: "It was plagued by the existence of immensely strong tribal duchies, such as Bavaria, Swabia, Thuringia, and Saxony — often referred to as stem duchies, from the German word Stamm, meaning tribe [...]." - Wiesflecker 1991, p. 292: "Er mußte bekanntlich den demütigenden Vertrag von Arras (1482) hinnehmen und seine Tochter Margarethe mit dem Stammherzogtum Burgund-Bourgogne und vielen anderen Herrschaften an Frankreich ausliefern. [One has to recognise that [Maxiimilian I] had to accept the humiliating Treaty of Arras (1482) and to deliver to France his daughter Margaret along with the stem-duchy of Burgundy-Bourgogne and many other lordships.]" - Historicus 1935, p. 50: "Franz von Lothringen muß sein Stammherzogtum an Stanislaus Leszinski, den französischen Kandidaten für Polen, ueberlassen [...]. [Francis of Lorraine had to bequeath his stem-duchy to Stanislaus Leszinski, the French candidate for the Polish crown [...].]" - Compare: Langer, William Leonard, ed. (1968). An encyclopedia of world history: ancient, medieval and modern, chronologically arranged (4 ed.). Harrap. p. 174. Retrieved 2015-11-23. These stem duchies were: Franconia [...]; Lorraine (not strictly a stem duchy but with a tradition of unity); Swabia [...] . - "Germany". Encyclopædia Britannica Online. Encyclopædia Britannica Inc. 2012. Retrieved 12 September 2012. - Goffart 1988. - "Germany, the Stem Duchies & Marches". Friesian.com. 1945-02-13. Retrieved 2012-10-18. - Wilson 2016, p. 24. - Wilson 2016, p. 25. - Van Dam & Fouracre 1995, p. 222: "Surrounding the core of Frankish kingdoms were other regions more or less subservient to the Merovingian kings. 
In some regions the Merovingians appointed, or perhaps simply acknowledged, various dukes, such as the duke of the Alamans, the duke of the Vascones in the western Pyrenees, and the duke of the Bavarians. [...] Since these dukes, unlike those who served at the court of the Merovingians or administered particular regions in the Merovingian kingdoms, ruled over distinct ethnic groups, they had much local support and tended to act independently of the Merovingians, and even to make war on them occasionally." - Damminger 2003, p. 74: "The area of Merovingian settlement in southwest Germany was pretty much confined to the so called 'Altsiedelland', those fertile regions which had been under the plough since neolithic times [...]." - Drew 2011, pp. 8-9: "Some of the success of the Merovingian Frankish rulers may be their acceptance of the personality of law policy. Not only did Roman law remain in use among Gallo-Romans and churchmen, Burgundian law among the Burgundians, and Visigothic law among the Visigoths, but the more purely Germanic peoples of the eastern frontier were allowed to retain their own 'national' law." - Hen 1995, p. 17: "Missionaries, mainly from the British Isles, continued to operate in the Merovingian kingdoms throughout the sixth to the eighth centuries. Yet, their efforts were directed at the fringes of the Merovingian territory, that is, at Frisia, north-east Austrasia and Thuringia. These areas were hardly Romanised, if at all, and therefore lacked any social, cultural or physical basis for the expansion of Christianity. These areas stayed pagan long after Merovingian society completed its conversion, and thus attracted the missionaries' attention. [...] Moreover, there is evidence of missionary and evangelising activity from Merovingian Gual, out of places like Metz, Strasbourg or Worms, into the 'pagan regions' [...]." - Kibler 1995, p. 1159: "From time to time, Austrasia received a son of the Merovingian king as an autonomous ruler." - Wilson 2016, p. 26. - Wilson 2016, pp. 26-27. - Nelson, Janet L. (1998), Charlemagne's church at Aachen, Volume 48 (1), History Today, pp. 62–64 - Schulman 2002, pp. 325-27. - Barraclough 1984, p. 59. - Wilson 2016, p. 19. - Day 1914, p. 252. - Thompson 1931, pp. 146-79. - Istvan Szepesi, "Reflecting the Nation: The Historiography of Hanseatic Institutions." Waterloo Historical Review 7 (2015). online - Carsten 1958, pp. 52-68. - Blumenthal, Uta-Renate (1991). The Investiture Controversy: Church and Monarchy from the Ninth to the Twelfth Century. pp. 159–73. - Fuhrmann, Horst (1986). Germany in the High Middle Ages, c. 1050–1200. Cambridge University Press. - Kahn, Robert A. (1974). A History of the Habsburg Empire 1526–1918. p. 5. - Kantorowicz, Ernst (1957). Frederick the Second, 1194–1250. - Austin Alchon, Suzanne (2003). A pest in the land: new world epidemics in a global perspective. University of New Mexico Press. p. 21. ISBN 0-8263-2871-7. - Haverkamp, Alfred (1988). Medieval Germany, 1056–1273. Oxford University Press. - Nicholas, David (1997). The Growth of the Medieval City: From Late Antiquity to the Early Fourteenth Century. Longman. pp. 69–72, 133–42, 202–20, 244–45, 300–307. - Strait, Paul (1974). Cologne in the Twelfth Century. - Huffman, Joseph P. (1998). Family, Commerce, and Religion in London and Cologne. – covers from 1000 to 1300. - Sagarra, Eda (1977). A Social History of Germany: 1648 - 1914. p. 405. - Judith M. Bennett and Ruth Mazo Karras, eds. The Oxford Handbook of Women and Gender in Medieval Europe (2013). 
- Michael G. Baylor, The German Reformation and the Peasants' War: A Brief History with Documents (2012) - John Lotherington, The German Reformation (2014) - John Lotherington, The Counter-Reformation (2015) - Wilson, Peter H. (2009). The Thirty Years War: Europe's Tragedy. - Geoffrey Parker, The Thirty Years' War (1997) p. 178 has 15–20% decline; Tryntje Helfferich, The Thirty Years War: A Documentary History (2009) p. xix, estimates a 25% decline. Wilson (2009) pp. 780–95 reviews the estimates. - Holborn, Hajo (1959). A History of Germany: The Reformation. p. 37. - Edwards, Jr., Mark U. (1994). Printing, Propaganda, and Martin Luther. - See texts at Project Wittenberg: "Selected Hymns of Martin Luther" - Weimer, Christoph (2004). "Luther and Cranach on Justification in Word and Image". Lutheran Quarterly. 18 (4): 387–405. - R. Taton; C. Wilson; Michael Hoskin (2003). Planetary Astronomy from the Renaissance to the Rise of Astrophysics, Part A, Tycho Brahe to Newton. p. 20. - Sheehan 1989, pp. 75, 207-291, 291-323, 324-371, 802-820. - Dennis Showalter, Frederick the Great: A Military History (2012) - Ritter, Gerhard (1974) . Peter Peret, ed. Frederick the Great: A Historical Profile. Berkeley: University of California Press. ISBN 0-520-02775-2.; called by Russell Weigley "The best introduction to Frederick the Great and indeed to European warfare in his time." Russell Frank Weigley (2004). The Age of Battles: The Quest for Decisive Warfare from Breitenfeld to Waterloo. Indiana U.P. p. 550. - Lucjan R. Lewitter, "The Partitions of Poland" in A. Goodwyn, ed. The New Cambridge Modern History: vol 8 1763-93 (1965) pp 333-59 - Holborn, Hajo (1964). A History of Modern Germany: 1648–1840. pp. 291–302. - Ingrao, Charles W. (2003). The Hessian Mercenary State: Ideas, Institutions, and Reform under Frederick II, 1760–1785. - Liebel, Helen P. (1965). "Enlightened bureaucracy versus enlightened despotism in Baden, 1750-1792". Transactions of the American Philosophical Society. 55 (5): 1–132. doi:10.2307/1005911. - Segarra, Eda (1977). A Social History of Germany: 1648–1914. pp. 37–55, 183–202. - Sagarra, Eda (1977). A Social History of Germany: 1648–1914. pp. 140–154, 341–45. - For details on the life of a representative peasant farmer, who migrated in 1710 to Pennsylvania, see Bernd Kratz, "Jans Stauffer: A Farmer in Germany before his Emigration to Pennsylvania," Genealogist, Fall 2008, Vol. 22 Issue 2, pp 131–169 - Ford, Guy Stanton (1922). Stein and the era of reform in Prussia, 1807–1815. pp. 199–220. - Brakensiek, Stefan (April 1994), "Agrarian Individualism in North-Western Germany, 1770–1870", German History, 12 (2), pp. 137–179 - Perkins, J. A. (April 1986), "Dualism in German Agrarian Historiography", Comparative Studies in Society and History, 28 (2), pp. 287–330 - Thomas Nipperdey, Germany from Napoleon to Bismarck: 1800–1866 (1996) p. 59 - Marion W. Gray, Productive men, reproductive women: the agrarian household and the emergence of separate spheres during the German Enlightenment (2000). - Marion W. Gray and June K. Burton, "Bourgeois Values in the Rural Household, 1810–1840: The New Domesticity in Germany," The Consortium on Revolutionary Europe, 1750-1850 23 (1994): 449–56. - Nipperdey, ch 2. - Eda Sagarra, An introduction to Nineteenth century Germany (1980) pp 231-33. - Gagliardo, John G. (1991). Germany under the Old Regime, 1600–1790. pp. 217–34, 375–95. - Charles W. Ingrao, "A Pre-Revolutionary Sonderweg." German History 20#3 (2002), pp 279-286. 
- Katrin Keller, "Saxony: Rétablissement and Enlightened Absolutism." German History 20.3 (2002): 309-331. - Richter, Simon J., ed. (2005), The Literature of Weimar Classicism - Owens, Samantha; Reul, Barbara M.; Stockigt, Janice B., eds. (2011). Music at German Courts, 1715–1760: Changing Artistic Priorities. - Kuehn, Manfred (2001). Kant: A Biography. - Van Dulmen, Richard; Williams, Anthony, eds. (1992). The Society of the Enlightenment: The Rise of the Middle Class and Enlightenment Culture in Germany. - Ruth-Ellen B. Joeres and Mary Jo Maynes, German women in the eighteenth and nineteenth centuries: a social and literary history (1986). - Eda Sagarra, A Social History of Germany: 1648 - 1914 (1977). - James J. Sheehan, German History, 1770-1866 (1993) pp 207-88 - Connelly, Owen (1966). "6". Napoleon's satellite kingdoms. - Raff, Diethher (1988), History of Germany from the Medieval Empire to the Present, pp. 34–55, 202–206 - Carr 1991, pp. 1-2. - Lee 1985, pp. 332-46. - Nipperdey 1996, p. 86. - Nipperdey 1996, pp. 87-92, 99. - Tilly, Richard (1967), "Germany: 1815–1870", in Cameron, Rondo, Banking in the Early Stages of Industrialization: A Study in Comparative Economic History, Oxford University Press, pp. 151–182 - Thomas Nipperdey, Germany from Napoleon to Bismarck: 1800-1866 (1996) p 178 - Nipperdey, Germany from Napoleon to Bismarck: 1800–1866 (1996) pp. 96-97 - Nipperdey, Germany from Napoleon to Bismarck: 1800–1866 (1996) p. 165 - Mitchell, Allan (2000). Great Train Race: Railways and the Franco-German Rivalry, 1815–1914. - Theodore S. Hamerow, The Social Foundations of German Unification, 1858-1871: Ideas and Institutions (1969) pp 284-91 - Kenneth E. Olson, The history makers: The press of Europe from its beginnings through 1965 (LSU Press, 1966) pp 99-134 - Elmer H. Antonsen, James W. Marchand, and Ladislav Zgusta, eds. The Grimm brothers and the Germanic past (John Benjamins Publishing, 1990). - Sheehan, James J. (1989). German History: 1770–1866. pp. 75, 207–291, 291–323, 324–371, 802–820. - Christopher Clark, Iron Kingdom (2006) pp 412-19 - Christopher Clark, "Confessional policy and the limits of state action: Frederick William III and the Prussian Church Union 1817–40." Historical Journal 39.04 (1996) pp: 985-1004. in JSTOR - Hajo Holborn, A History of Modern Germany 1648-1840 (1964) pp 485-91 - Christopher Clark, Iron Kingdom (2006) pp 419-21 - Holborn, A History of Modern Germany 1648-1840 (1964) pp 498-509 - Taylor, A.J.P. (2001). The Course of German History. p. 52. - Williamson, George S. (Dec 2000). "What Killed August von Kotzebue? The Temptations of Virtue and the Political Theology of German Nationalism, 1789–1819". Journal of Modern History. 72 (4): 890–943. doi:10.1086/318549. - Holborn, A History of Modern Germany: 1840–1945 pp 131-67 - Edgar Feuchtwanger, Bismarck: A Political History (2nd ed., Routledge, 2014) pp 83-98 - Holborn, A History of Modern Germany: 1840–1945 pp 167-88 - Feuchtwanger, Bismarck: A Political History (2014) pp 99-147 - Gordon A. Craig, Germany, 1866–1945 (1978) pp 11-22 online edition - "A German Voice of Opposition to Germanization (1914)". German History in Documents and Images. German Historical Institute (www.ghi-dc.org). - "Germanization Policy: Speech by Ludwik Jazdzewski in a Session of the Prussian House of Representatives (January 15, 1901)". German History in Documents and Images. German Historical Institute (www.ghi-dc.org). - John C.G. Röhl, "Higher Civil Servants in Germany, 1890–1900." 
Journal of Contemporary History 2#3 (1967): 101-121. in JSTOR - Clark, Iron kingdom: the rise and downfall of Prussia, 1600-1947 (2006) p 158-59, 603-23. - Hans-Ulrich Wehler,The German Empire, 1871-1918 (1985): 146-57, quote p 157. - Alexandra Richie, Faust’s Metropolis. A History of Berlin (1998) p 207. - David Blackbourn, The Long Nineteenth Century: A History of Germany, 1780-1918 (1998) p 32. - Mazón, Patricia M. (2003). Gender and the Modern Research University: The Admission of Women to German Higher Education, 1865-1914. Stanford U.P. p. 53. - Moses, John Anthony (1982). Trade Unionism in Germany from Bismarck to Hitler, 1869-1933. Rowman & Littlefield. p. 149. - Hennock, E. P. (2007), The Origin of the Welfare State in England and Germany, 1850–1914: Social Policies Compared - Beck, Hermann (1995), Origins of the Authoritarian Welfare State in Prussia, 1815–1870 - Spencer, Elaine Glovka (Spring 1979), "Rules of the Ruhr: Leadership and Authority in German Big Business Before 1914", Business History Review, 53 (1), pp. 40–64, JSTOR 3114686 - Lambi, Ivo N. (March 1962), "The Protectionist Interests of the German Iron and Steel Industry, 1873–1879", Journal of Economic History, 22 (1), pp. 59–70, JSTOR 2114256 - Douglas W. Hatfield, "Kulturkampf: The Relationship of Church and State and the Failure of German Political Reform," Journal of Church and State (1981) 23#3 pp. 465-484 in JSTOR(1998) - John C.G. Roehl, "Higher civil servants in Germany, 1890-1900" in James J. Sheehan, ed., Imperial Germany (1976) pp 128-151 - Margaret Lavinia Anderson, and Kenneth Barkin. "The myth of the Puttkamer purge and the reality of the Kulturkampf: Some reflections on the historiography of Imperial Germany." Journal of Modern History (1982): 647-686. esp. pp 657-62 in JSTOR - Anthony J. Steinhoff, "Christianity and the creation of Germany," in Sheridan Gilley and Brian Stanley, eds., Cambridge History of Christianity: Volume 8: 1814-1914 (2008) p 295 - John K. Zeender in The Catholic Historical Review, Vol. 43, No. 3 (Oct., 1957), pp. 328-330. - Rebecca Ayako Bennette, Fighting for the Soul of Germany: The Catholic Struggle for Inclusion after Unification (Harvard U.P. 2012) - Blackbourn, David (Dec 1975). "The Political Alignment of the Centre Party in Wilhelmine Germany: A Study of the Party's Emergence in Nineteenth-Century Württemberg". Historical Journal. 18 (4): 821–850. doi:10.1017/s0018246x00008906. JSTOR 2638516. - Clark, Christopher (2006). Iron Kingdom: The Rise and Downfall of Prussia, 1600–1947. pp. 568–576. - Ronald J. Ross, The failure of Bismarck's Kulturkampf: Catholicism and state power in imperial Germany, 1871-1887 (1998). - Weitsman, Patricia A. (2004), Dangerous alliances: proponents of peace, weapons of war, p. 79 - Belgum, Kirsten (1998). Popularizing the Nation: Audience, Representation, and the Production of Identity in "Die Gartenlaube," 1853–1900. p. 149. - Neugebauer, Wolfgang (2003). Die Hohenzollern. Band 2 - Dynastie im säkularen Wandel (in German). Stuttgart: W. Kohlhammer. pp. 174–175. ISBN 3-17-012097-2. - Kroll, Franz-Lothar (2000), "Wilhelm II. (1888 - 1918)", in Kroll, Franz-Lothar, Preussens Herrscher. Von den ersten Hohenzollern bis Wilhelm II. (in German), Munich: C.H. Beck, p. 290 - Christopher Clark, Kaiser Wilhelm II (2000) pp 35-47 - John C.G. Wilhelm II: the Kaiser's personal monarchy, 1888-1900 (2004). - On the Kaiser's "histrionic personality disorder", see Tipton (2003), Pp. 243–45 - Röhl, J.C.G. (Sep 1966). "Friedrich von Holstein". 
Historical Journal. 9 (3): 379–388. doi:10.1017/s0018246x00026716. - Woodward, David (July 1963). "Admiral Tirpitz, Secretary of State for the Navy, 1897–1916". History Today. 13 (8): 548–555. - Herwig, Holger (1980). Luxury Fleet: The Imperial German Navy 1888–1918. - Esthus, Raymond A. (1970). Theodore Roosevelt and the International Rivalries. pp. 66–111. - Perkins, J.A. (Spring 1981). "The Agricultural Revolution in Germany 1850–1914". Journal of European Economic History. 10 (1): 71–119. - Haber, Ludwig Fritz (1958), The chemical industry during the nineteenth century - Webb, Steven B. (June 1980). "Tariffs, Cartels, Technology, and Growth in the German Steel Industry, 1879 to 1914". Journal of Economic History. 40 (2): 309–330. doi:10.1017/s0022050700108228. JSTOR 2120181. - James, Harold (2012). Krupp: A History of the Legendary German Firm. Princeton University Press. - Allen, Robert C. (Dec 1979). "International Competition in Iron and Steel, 1850–1913". Journal of Economic History. 39 (4): 911–37. doi:10.1017/s0022050700098673. JSTOR 2120336. - Feldman, Gerald D.; Nocken, Ulrich (Winter 1975). "Trade Associations and Economic Power: Interest Group Development in the German Iron and Steel and Machine Building Industries, 1900–1933". Business History Review. 49 (4): 413–45. JSTOR 3113169. - Brigitte Young, Triumph of the fatherland: German unification and the marginalization of women (1999). - Guido, Diane J. (2010). The German League for the Prevention of Women's Emancipation: Anti-Feminism in Germany, 1912-1920. p. 3. - Mazón, Patricia M. (2003). Gender and the Modern Research University: The Admission of Women to German Higher Education, 1865-1914. Stanford U.P. p. 53. - John Anthony Moses and Paul M. Kennedy, Germany in the Pacific and Far East, 1870-1914 (1977). - sean McMeekin, The Berlin-Baghdad express: the Ottoman Empire and Germany's bid for world power, 1898-1918 (Penguin, 2011) - Gann, L., and Peter Duignan, The Rulers of German Africa, 1884–1914 (1977) focuses on political and economic history; Perraudin, Michael, and Jürgen Zimmerer, eds. German Colonialism and National Identity (2010) focuses on cultural impact in Africa and Germany. - Tilman Dedering, "The German‐Herero war of 1904: revisionism of genocide or imaginary historiography?." Journal of Southern African Studies (1993) 19#1 pp: 80-88. - Jeremy Sarkin, Germany's Genocide of the Herero: Kaiser Wilhelm II, His General, His Settlers, His Soldier (2011) - Kirsten Dyck, "Situating the Herero Genocide and the Holocaust among European Colonial Genocides." Przegląd Zachodni (2014) #1 pp: 153-172. abstract - Kennedy, Paul M. (1980). The Rise of the Anglo-German Antagonism, 1860–1914. pp. 464–470. - Winter, J.M. (1999). Capital Cities at War: Paris, London, Berlin, 1914–1919. - Strachan, Hew (2004). The First World War. - Spencer C. Tucker (2005). World War One. ABC-CLIO. p. 225. - Zara S. Steiner (2005). The Lights that Failed: European International History, 1919-1933. Oxford U.P. p. 68. - Herwig, Holger H. (1996). The First World War: Germany and Austria–Hungary 1914–1918. - Paschall, Rod (1994). The defeat of imperial Germany, 1917–1918. - Feldman, Gerald D. "The Political and Social Foundations of Germany's Economic Mobilization, 1914-1916," Armed Forces & Society (1976) 3#1 pp 121-145. online - Chickering, Roger (2004). Imperial Germany and the Great War, 1914–1918. pp. 141–42. - For a comparison see Timothy S. 
Brown, Weimar radicals: Nazis and communists between authenticity and performance (2009) pp 149–53 - "The political parties in the Weimar Republic" (PDF). Deutscher Bundestag. March 2006. Retrieved 18 September 2011. - Marks, Sally (1978). "The Myths of Reparations". Central European History. 11 (3): 231–55. doi:10.1017/s0008938900018707. JSTOR 4545835. - Richard J. Evans, Coming of the Third Reich (2004) pp 247-83 - Richard F. Hamilton, Who Voted for Hitler? (1982) - Evans, Coming of the Third Reich (2004) pp 283-308 - "Nobel Prize". Nobelprize.org. Retrieved 19 November 2009. - Joll, James (April 1985). "Two Prophets of the Twentieth Century: Spengler and Toynbee". Review of International Studies. 11 (2): 91–104. doi:10.1017/s026021050011424x. - Stackelberg, Roderick (2007). The Routledge companion to Nazi Germany. p. 135. - Ash, Mitchell G.; Söllner, Alfons, eds. (1996). Forced Migration and Scientific Change: Emigré German-Speaking Scientists and Scholars after 1933. - Kershaw, Ian (2001). The "Hitler Myth": Image and Reality in the Third Reich. - Williamson, David (2002). "Was Hitler a Weak Dictator?". History Review: 9+. - Geary, Dick (October 1998). "Who voted for the Nazis? (electoral history of the National Socialist German Workers Party)". History Today. 48 (10): 8–14. - Jablonsky, David (July 1988). "Rohm and Hitler: The Continuity of Political-Military Discord". Journal of Contemporary History. 23 (3): 367–386. doi:10.1177/002200948802300303. JSTOR 260688. - M. Patricia Marchak (2003). Reigns of Terror. McGill-Queen's Press — MQUP. p. 195. ISBN 978-0-7735-2642-6. - Friedlander, Saul (1998). Nazi Germany and the Jews. 1: The Years of Persecution 1933–1939. - Interpreting the 20th Century: The Struggle Over Democracy, The Holocaust, Pamela Radcliff, p. 104-107 - Jennifer Rosenberg. "Holocaust Facts". About.com Education. - Bullock, Alan (1991). Hitler: a study in tyranny. p. 170. - Thacker, Toby (2009). Joseph Goebbels: Life and Death. pp. 182–184. - Bridenthal, Renate; Grossmann, Atina; Kaplan, Marion (1984). When Biology Became Destiny: Women in Weimar and Nazi Germany. - Stephenson, Jill (2001). Women in Nazi Germany. - Koonz, Claudia (1988). Mothers in the Fatherland: Women, the Family and Nazi Politics. - Hagemann, Karen (2011). "Mobilizing Women for War: The History, Historiography, and Memory of German Women's War Service in the Two World Wars". Journal of Military History. 75 (4): 1055–1094. - Campbell, D'Ann (April 1993). "Women in Combat: The World War Two Experience in the United States, Great Britain, Germany, and the Soviet Union". Journal of Military History. 57: 301–323. doi:10.2307/2944060. - Richard Overy, The Bombers and the Bombed: Allied Air War Over Europe 1940-1945 (2014) pp 306-7 - David Clay Large (2001). Berlin. Basic Books. p. 482. - Peter Stearns (2013). Demilitarization in the Contemporary World. University of Illinois Press. p. 176. - Bessel, Richard (2009). Germany 1945: From War to Peace. Harper Collins Publishers. ISBN 978-0-06-054036-4. - Robert Bard, Historical Memory and the expulsion of ethnic Germans in Europe, 1944 (PhD. Diss. University of Hertfordshire, 2009) online - "The Potsdam Declaration". Carlisle Barracks, Pa.: Book Department, Army Information School. May 1946. - Schechtman, Joseph B. (April 1953). "Postwar Population Transfers in Europe: A Survey". Review of Politics. 15 (2): 151–178. doi:10.1017/s0034670500008081.. "Most had left" is p. 158 in JSTOR - Davidson, Eugene. 
The death and life of Germany: an account of the American occupation. p. 121. - Liberman, Peter (1996). Does Conquest Pay? The Exploitation of Occupied Industrial Societies. p. 147. - 2.3 million units out of 9.5 million were destroyed. - Tipton, Frank B. (2003). A History of Modern Germany since 1815. pp. 508–513, 596–599. - Hoover, Calvin B. (May 1946). "The Future of the German Economy". American Economic Review. 36 (2): 642–649. JSTOR 1818235. - Milward, Alan S. (1984). The Reconstruction of Western Europe: 1945–51. pp. 356, 436. - Ardagh, John (1987). Germany and the Germans. pp. 74–82, 84. - Gareau, Frederick H. (Jun 1961). "Morgenthau's Plan for Industrial Disarmament in Germany". Western Political Quarterly. 14 (2): 517–534. doi:10.2307/443604. - "Conferences: Pas de Pagaille!". Time. 28 July 1947. - For US and Allied official policy statements see U.S. Dept. of State Germany, 1947–1949: The Story in Documents (1950) - available online; these are primary sources. - Gienow-Hecht, Jessica C.E. (1999). "Art is democracy and democracy is art: Culture, propaganda, and the Neue Zeitung in Germany". Diplomatic History. 23 (1): 21–43. doi:10.1111/0145-2096.00150. - Bruce, Gary (2010), The Firm: The Inside Story of the Stasi - Fulbrook, Mary (2008). The People's State: East German Society from Hitler to Honecker. - Granville, Johanna (Sep 2006). "East Germany in 1956: Walter Ulbricht's Tenacity in the Face of Opposition". Australian Journal of Politics and History. 52 (3): 417–438. doi:10.1111/j.1467-8497.2006.00427.x. - Biesinger, Joseph A. (2006), Germany: a reference guide from the Renaissance to the present, p. 270 - Taylor, Frederick (2008), The Berlin Wall: A World Divided, 1961–1989 - Pence, Katherine; Betts, Paul (2011). Socialist modern: East German everyday culture and politics (4 ed.). University of Michigan Press. pp. 37, 59. - Jürgen Kocka (2010). Civil Society and Dictatorship in Modern German History. UPNE. p. 37. - The Christian Social Union or CSU is the Bavaria branch of the CDU. It has always operated in close collaboration with the CDU, and the CDU/CSU is usually treated as a single party in national affairs. - Jürgen Weber, Germany, 1945-1990: A Parallel History (Budapest, Central European University Press, 2004) in Questia - Weber, Jurgen (2004). Germany, 1945–1990. Central European University Press. pp. 37–60, 103–18, 167–88, 221–264. - Fürstenberg, Friedrich (May 1977). "West German Experience with Industrial Democracy". Annals of the American Academy of Political and Social Science. 431: 44–53. doi:10.1177/000271627743100106. JSTOR 1042033. - Junker, Detlef, ed. (2004). The United States and Germany in the Era of the Cold War, 1945–1968. 1. Cambridge University Press. pp. 291–309. - Sauermann, Heinz (1950). "The Consequences of the Currency Reform in Western Germany". Review of Politics. 12 (2): 175–196. doi:10.1017/s0034670500045009. JSTOR 1405052. - Giangreco, D. M.; Griffin, Robert E. (1988). Airbridge to Berlin: The Berlin Crisis of 1948, Its Origins and Aftermath. Presidio Press. - Williams, Charles (2000). Konrad Adenauer: The Father of the New Germany. - Hiscocks, Richard (1975). The Adenauer era. p. 290. - Granieri, Ronald J. (2005). "Review". Journal of Interdisciplinary History. 36 (2): 262, 263. doi:10.1162/0022195054741190. - Walter, Norbert. "The Evolving German Economy: Unification, the Social Market, European and Global Integration". SAIS Review (15 (Special Issue 1995)): 55–81.. Quote from p. 64 - Mierzejewski, Alfred C. (2004). 
Ludwig Erhard: a biography. - Mierzejewski, Alfred C. (2004), "1957: Ludwig Erhard's Annus Terribilis", Essays in Economic and Business History, 22: 17–27, ISSN 0896-226X - Turner, Henry Ashby (1987). The two Germanies since 1945. pp. 80–82. - Shonick, Kaja (Oct 2009). "Politics, Culture, and Economics: Reassessing the West German Guest Worker Agreement with Yugoslavia". Journal of Contemporary History. 44 (4): 719–736. doi:10.1177/0022009409340648. - Castles, Stephen. "The Guests Who Stayed – The Debate on 'Foreigners Policy' in the German Federal Republic". International Migration Review. 19 (3): 517–534. JSTOR 2545854. - Ewing, Katherine Pratt (Spring–Summer 2003). "Living Islam in the Diaspora: Between Turkey and Germany". South Atlantic Quarterly. 102 (2/3): 405–431. doi:10.1215/00382876-102-2-3-405.. In Project MUSE - Mandel, Ruth (2008). Cosmopolitan Anxieties: Turkish Challenges to Citizenship and Belonging in Germany. Duke University Press. - Fink, Carole; Schaefer, Bernd, eds. (2009). Ostpolitik, 1969–1974: European and Global Responses. - Fulbrook, Mary (2002). History of Germany, 1918–2000: the divided nation. p. 170. - Sinn, Hans-Werner (2007). Can Germany be saved?: the malaise of the world's first welfare state. MIT Press. p. 183. - Cerny, Karl H. (1990). Germany at the polls: the Bundestag elections of the 1980s. p. 113. - For a primary source see Helmut Schmidt, Men and Power: A Political Retrospective (1990) - Pruys, Karl (1996). Kohl: Genius of the Present: A Biography of Helmut Kohl. - For primary sources in English translation and a brief survey see Konrad H. Jarausch, and Volker Gransow, eds. Uniting Germany: Documents and Debates, 1944–1993 (1994) - Hockenos, Paul (2008). Joschka Fischer and the making of the Berlin Republic. pp. 313–14. - Bolgherini, Silvia; Grotz, Florian, eds. (2010). Germany After the Grand Coalition: Governance and Politics in a Turbulent Environment. Palgrave Macmillan. - Mufson, Steven (30 May 2011). "Germany to close all of its nuclear plants by 2022". Washington Post. - "Migrant crisis: Migration to Europe explained in seven charts". 28 January 2016. Retrieved 31 January 2016. - "Chancellor Running Out of Time on Refugee Issue". 19 January 2016. Retrieved 7 June 2017. - "Merkel Critic Says Chancellor's Refugee Policy Is a 'Time Bomb'". 9 August 2016. Retrieved 7 June 2017. - Heinrich August Winkler, Germany: The Long Road West (2006), vol 1 p 1 - Barraclough, Geoffrey (1984). The Origins of Modern Germany?. - Bradbury, Jim (2004). The Routledge Companion to Medieval Warfare. Routledge Companions to History. Routledge. ISBN 9781134598472. Retrieved 2015-11-20. - Bowman, Alan K.; Garnsey, Peter; Cameron, Averil (2005). The Crisis of Empire, A.D. 193–337. The Cambridge Ancient History. 12. Cambridge University Press. ISBN 0-521-30199-8. - Carr, William (1991). A History of Germany: 1815-1990 (4 ed.). Routledge. ISBN 0-340-55930-6. - Carsten, Francis (1958). The Origins of Prussia. - Claster, Jill N. (1982). Medieval Experience: 300–1400. New York University Press. ISBN 0-8147-1381-5. - Damminger, Folke (2003). "Dwellings, Settlements and Settlement Patterns in Merovingian Southwest Germany and adjacent areas". In Wood, Ian. Franks and Alamanni in the Merovingian Period: An Ethnographic Perspective. Studies in Historical Archaeoethnology. Volume 3 (Revised ed.). Boydell & Brewer. ISBN 9781843830351. ISSN 1560-3687. Retrieved 2015-11-23. - Day, Clive (1914). A History of Commerce. - Drew, Katherine Fischer (2011). The Laws of the Salian Franks. 
The Middle Ages Series. University of Pennsylvania Press. ISBN 9780812200508. Retrieved 2015-11-24. - Fichtner, Paula S. (2009). Historical Dictionary of Austria. Volume 70 (2nd ed.). Scarecrow Press. ISBN 9780810863101. - Fortson, Benjamin W. (2011). Indo-European Language and Culture: An Introduction. Blackwell Textbooks in Linguistics. Volume 30 (2nd ed.). John Wiley & Sons. ISBN 9781444359688. - Green, Dennis H. (2000). Language and History in the Early Germanic World (Revised ed.). Cambridge University Press. ISBN 9780521794237. - Green, Dennis H. (2003). "Linguistic evidence for the early migrations of the Goths". In Heather, Peter. The Visigoths from the Migration Period to the Seventh Century: An Ethnographic Perspective. Volume 4 (Revised ed.). Boydell & Brewer. ISBN 9781843830337. - Goffart, Walter A. (1988). The Narrators of Barbarian History (A.D. 550–800): Jordanes, Gregory of Tours, Bede, and Paul the Deacon. Princeton University Press. - Heather, Peter J. (2006). The Fall of the Roman Empire: A New History of Rome and the Barbarians (Reprint ed.). Oxford University Press. ISBN 9780195159547. - Historicus (1935). Frankreichs 33 Eroberungskriege [France's 33 wars of conquest] (in German). Translated from the French. Foreword by Alcide Ebray (3rd ed.). Internationaler Verlag. Retrieved 2015-11-21. - Heather, Peter (2010). Empires and Barbarians: The Fall of Rome and the Birth of Europe. Oxford University Press. - Hen, Yitzhak (1995). Culture and Religion in Merovingian Gaul: A.D. 481-751. Cultures, Beliefs and Traditions: Medieval and Early Modern Peoples Series. Volume 1. Brill. ISBN 9789004103474. Retrieved 2015-11-26. - Kibler, William W., ed. (1995). Medieval France: An Encyclopedia. Garland Encyclopedias of the Middle Ages. Volume 2. Psychology Press. ISBN 9780824044442. Retrieved 2015-11-26. - Kristinsson, Axel (2010). "Germanic expansion and the fall of Rome". Expansions: Competition and Conquest in Europe Since the Bronze Age. ReykjavíkurAkademían. ISBN 9789979992219. - Nipperdey, Thomas (1996). Germany from Napoleon to Bismarck: 1800-1866. Princeton University Press. ISBN 0691607559. - Ozment, Steven (2004). A Mighty Fortress: A New History of the German People. Harper Perennial. ISBN 978-0060934835. - Rodes, John E. (1964). Germany: A History. Holt, Rinehart and Winston. ASIN B0000CM7NW. - Rüger, C. (2004) . "Germany". In Bowman, Alan K.; Champlin, Edward; Lintott, Andrew. The Cambridge Ancient History: X, The Augustan Empire, 43 B.C. - A.D. 69. Volume 10 (2nd ed.). Cambridge University Press. ISBN 0-521-26430-8. - Schulman, Jana K. (2002). The Rise of the Medieval World, 500–1300: A Biographical Dictionary. Greenwood Press. - Sheehan, James J. (1989). German History: 1770–1866. - Thompson, James Westfall (1931). Economic and Social History of Europe in the Later Middle Ages (1300–1530). - Van Dam, Raymond (1995). "8: Merovingian Gaul and the Frankish conquests". In Fouracre, Paul. The New Cambridge Medieval History. 1, C.500-c.700. Cambridge University Press. ISBN 9780521853606. Retrieved 2015-11-23. - Wiesflecker, Hermann (1991). Maximilian I (in German). Verlag für Geschichte und Politik. ISBN 9783702803087. Retrieved 2015-11-21. - Wilson, Peter H. (2016). Heart of Europe: A History of the Holy Roman Empire. Belknap Press. ISBN 978-0-674-05809-5. - Wanger, Günther A. "Radiometric dating of the type-site for Homo heidelbergensis at Mauer, Germany". Proceedings of the National Academy of Sciences. 107. doi:10.1073/pnas.1012722107. Retrieved 6 October 2010. 
HTTP stands for Hypertext Transfer Protocol. It is the set of rules and standards that govern how information is transmitted on the World Wide Web, and it is the protocol that computers on the web use to talk to each other. HTTP is a stateless, application-level network protocol for communication between distributed systems and for interacting with network-based hypertext information systems. It is used to deliver virtually all files and data on the World Wide Web, whether HTML files, image files, or anything else, and it is the foundation of data communication for the web. HTTP is called a stateless protocol because each command is executed independently: the server does not remember previous requests, even when they arrive from the same address. A stateless protocol is one that does not save session state between connections. Communication takes place over TCP/IP; the default port is 80, but other ports can also be used. Features of HTTP Following are some of the features of HTTP: - HTTP is an application layer protocol, used above all for retrieving web pages and transferring files. - Consider the example http://www.google.com: the first part of the address specifies the protocol used to fetch the resource, which is typically a document written in Hypertext Markup Language (HTML). - HTTP is a client–server protocol in which two machines communicate over a reliable transport service such as TCP. - HTTP supports server authentication and client authentication; data encryption requires layering it over SSL/TLS, as in HTTPS. - A browser is an HTTP client: it provides the request/response mechanism in which the client sends a request to the server and the server generates a response. - HTTP supports resource identification: each HTTP request includes a URI (Uniform Resource Identifier). - It is used to transmit resources, where a resource is a chunk of information that can be identified by a URL (the "R" in URL). - Any type of data can be sent using HTTP, provided the client and server can handle the content; the content type is specified using a MIME type. - The standard and default port for an HTTP server is 80, though any port can be used. - HTTP presumes a reliable transport such as TCP; it can be implemented on top of any protocol that offers such guarantees, on the internet or on other networks. Advantages of HTTP - It is platform independent, which allows cross-platform porting. - No special runtime support is required to use it. - It can be used for global applications. - It is connectionless and stateless, which keeps the protocol simple and reduces server overhead; session state and information, when needed, are maintained by other means such as cookies. Disadvantages of HTTP - Anyone who can observe the traffic can see the content, so privacy problems may arise. - Since no encryption method is used, someone may alter the content in transit. - Authentication is sent in the clear: anyone who intercepts the request can determine the username and password. HTTP Conversation The following exchange (Figure 1: HTTP conversation) shows how the protocol works. HTTP is a request/response protocol based on a client/server architecture. The client opens a connection and sends a request to the server, identifying the resource by a URI. The server takes the resource location or web address sent by the client, processes the request, sends the response back to the client, and closes the connection. HTTPS stands for Hypertext Transfer Protocol over Secure Socket Layer, or HTTP over SSL. Secure Socket Layer (SSL) acts as a sublayer under the HTTP application layer. HTTPS encrypts a message before transmission and decrypts it upon arrival.
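To make the request/response conversation concrete, here is a minimal sketch using Java's built-in java.net.http client (available since Java 11); the URL is only an example, and the client negotiates SSL/TLS automatically when the scheme is https:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpsGetExample {
    public static void main(String[] args) throws Exception {
        // Build a GET request for an example HTTPS URL (the host is illustrative).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.example.com/"))
                .GET()
                .build();

        // The client opens the connection, performs the TLS handshake for https,
        // sends the request, and waits for the server's response.
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The response carries a status code, headers, and (optionally) a body.
        System.out.println("Status: " + response.statusCode());
        System.out.println("Content-Type: "
                + response.headers().firstValue("Content-Type").orElse("(none)"));
        System.out.println("Body length: " + response.body().length());
    }
}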
HTTPS uses port 443 by default, as opposed to the standard HTTP port 80. A URL beginning with https indicates that the connection between the client and the server is encrypted using SSL. SSL is needed chiefly if you run an online store or accept online orders and credit cards, when users log in to your site, or if you need to comply with privacy and security requirements. Connecting to a server via HTTP secure involves the following steps: - Generating a key. - Generating a certificate signing request. - Obtaining a certificate signed by a Certificate Authority. - Configuring the web server to use the certificate. Below we look at the HTTP parameters. HTTP uses a "<major>.<minor>" numbering scheme to indicate versions of the protocol. The minor number is incremented when changes made to the protocol do not alter the general message-parsing algorithm but may add to the message semantics and imply additional capabilities of the sender. The major number is incremented when the format of a message is changed. The syntax of the HTTP-Version field can be written as follows: HTTP-Version = "HTTP" "/" 1*DIGIT "." 1*DIGIT For example: HTTP/1.0 or HTTP/1.1. Uniform Resource Identifier (URI) URIs are known by many names, such as WWW addresses, universal document identifiers, and uniform resource locators. The http scheme is used to locate network resources via the HTTP protocol. It can be written as follows: http_URL = "http:" "//" host [":" port] [abs_path ["?" query]] If the port is empty or not given, port 80 is assumed, and the Request-URI for the resource is abs_path. If abs_path is not present in the URL, it must be given as "/" when used as a Request-URI for the resource. HTTP allows three different formats for the representation of date/time stamps: - Mon, 10 Dec 1998 09:55:30 GMT ; RFC 822, updated by RFC 1123. - Monday, 10-Dec-1998 09:55:30 GMT ; RFC 850, obsoleted by RFC 1036. - Mon Dec 10 09:55:30 1998 ; ANSI C's asctime() format. An HTTP message consists of a header and an optional body. The message header of an HTTP request consists of a request line and header fields; the message header of a response consists of a status line and header fields. HTTP request message A request message is sent from the client to the server. It includes the method to apply to the resource, the identifier of the resource, and the version of the protocol, for example (using the Apache HttpCore classes): HttpRequest request = new BasicHttpRequest("GET", "/", HttpVersion.HTTP_1_1); HTTP response message A response message is sent by the server back to the client after interpreting the request message. It includes the protocol version followed by the HTTP status code and a textual reason phrase: HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1, HttpStatus.SC_OK, "OK"); Some general headers are shared by both request and response messages: - Cache-Control: specifies caching directives. - Connection: indicates whether the connection should be kept open or closed. - Date: gives the date at which the message was generated. - MIME-Version: indicates the MIME version used. - Upgrade: specifies a preferred communication protocol to switch to. The request and response messages can also include entity headers, as follows: - Allow: lists the valid methods that can be used with a resource. - Content-Encoding: specifies the encoding scheme applied to the body. - Content-Length: gives the length of the document. - Content-Language: specifies the language of the document. - Content-Location: gives the location of the created or moved document. - Content-Range: specifies the range of the document being returned. - Content-Type: specifies the media type of the body. - Expires: gives the date and time when the contents may change, i.e. the expiry date and time. - Last-Modified: gives the date and time of the last change.
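A short sketch of how such message objects can be used, assuming the Apache HttpCore 4.x classes that the two snippets above come from; the host and header values are only illustrative:

import org.apache.http.HttpRequest;
import org.apache.http.HttpResponse;
import org.apache.http.HttpStatus;
import org.apache.http.HttpVersion;
import org.apache.http.message.BasicHttpRequest;
import org.apache.http.message.BasicHttpResponse;

public class MessageSketch {
    public static void main(String[] args) {
        // Request message: method, Request-URI, protocol version, plus headers.
        HttpRequest request = new BasicHttpRequest("GET", "/", HttpVersion.HTTP_1_1);
        request.addHeader("Host", "www.example.com");      // request header
        request.addHeader("Accept", "text/html");          // request header

        // Response message: protocol version, status code, reason phrase, plus headers.
        HttpResponse response =
                new BasicHttpResponse(HttpVersion.HTTP_1_1, HttpStatus.SC_OK, "OK");
        response.addHeader("Content-Type", "text/plain");  // entity header
        response.addHeader("Content-Length", "30");        // entity header

        // The request line and status line are the first line of each message.
        System.out.println(request.getRequestLine());   // GET / HTTP/1.1
        System.out.println(response.getStatusLine());   // HTTP/1.1 200 OK
    }
}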
The request headers include the following: - Accept: the media formats the client can accept. - Accept-Charset: the character sets the client can handle. - Accept-Encoding: the encoding schemes the client can handle. - Accept-Language: the languages the client can accept. - Authorization: the client's authorization credentials. - From: the email address of the user. - Host: the host and port number of the server. - If-Match: send the document only if it matches the given entity tag. - If-Modified-Since: send the document only if it has changed since the specified date. - If-Unmodified-Since: send the document only if it has not changed since the specified date. - If-None-Match: send the document only if it does not match the given entity tag. - If-Range: send only the portion of the document that is missing, provided it is unchanged; otherwise send the entire document. - Referer: the URL of the linked (referring) document. - User-Agent: identifies the client program. The response headers include the following: - Accept-Ranges: whether the server accepts range requests from the client. - Age: the age of the document. - Location: the location of the document, used for redirection. - Proxy-Authenticate: the authentication scheme required for connecting through a proxy. - Retry-After: the date after which the server will be available again. - Server: the server name and version number. - WWW-Authenticate: the authentication scheme that should be used to access the requested entity. Following are some examples of the various fields above: Host: www.google.com Date: Sun, 15 May 2008 10:30:45 GMT Server: Apache Last-Modified: Tue, 10 May 2008 Content-Length: 30 Content-Type: text/plain Expires: Fri, 01 Jul 2008 15:00:00 GMT Retry-After: Thu, 31 May 2008 20:00:00 GMT Referer: http://www.w3c.org/http/http_messages.htm Content-Encoding: gzip HTTP Request Methods - GET: retrieves information from the specified resource. - POST: submits data to the server. - HEAD: the same as GET, but returns only the HTTP headers and no document body. - PUT: uploads a representation of the specified URI. - DELETE: deletes the target resource identified by the URI. - CONNECT: establishes a TCP/IP tunnel to the server identified by the URI. - OPTIONS: returns the HTTP methods that the server supports. - TRACE: invokes a remote, application-layer loop-back of the request message. HTTP Status Codes HTTP status codes are the response codes returned by servers on the internet. The term is commonly used for the whole HTTP status line, which includes both the HTTP status code and the HTTP reason phrase. The codes are grouped as follows: 1xx: Informational - 100 Continue: the server has received the request headers and has not yet rejected the request. - 101 Switching Protocols: the requester has asked the server to switch protocols. - 102 Processing: the server is processing the request. 2xx: Success - 200 OK: the request has succeeded. - 201 Created: the request has been fulfilled and a new resource has been created. - 202 Accepted: the request has been accepted, but processing has not been completed. - 203 Non-Authoritative Information: the request has been processed, but the information may come from another source. - 204 No Content: the request has been processed, but no content is returned. - 205 Reset Content: the request has been processed, no content is returned, and the user agent should reset the document view. - 206 Partial Content: the server is returning partial content, as requested by a range request header.
3xx: Redirection - 300 Multiple Choices: provides a list of options for the resource from which the client can select a location. - 301 Moved Permanently: the requested resource has been moved permanently to a new URI. - 302 Found: the requested resource has been moved temporarily to a new URI. - 303 See Other: the requested resource can be found under a different URI. - 304 Not Modified: the resource has not been modified since it was last requested. - 305 Use Proxy: the requested resource must be accessed through the proxy given by the Location field. - 306 Unused: this code was used in a previous version of the protocol; it is no longer used and is reserved. - 307 Temporary Redirect: the requested resource has moved temporarily to a new URI. 4xx: Client Error - 400 Bad Request: the request cannot be fulfilled because it is malformed. - 401 Unauthorized: authentication is required and has failed or has not yet been provided. - 402 Payment Required: reserved for future use. - 403 Forbidden: the request was valid, but the server is refusing to respond to it. - 404 Not Found: the requested resource could not be found, but it may become available in the future. - 405 Method Not Allowed: the method specified in the request is not allowed for the resource. - 406 Not Acceptable: the content is not acceptable according to the Accept headers sent in the request. - 407 Proxy Authentication Required: the client must first authenticate itself with the proxy before the request can be served. - 408 Request Timeout: the server timed out waiting for the request. - 409 Conflict: the request could not be completed because of a conflict in the current state of the resource. - 410 Gone: the requested resource is no longer available at the server. - 411 Length Required: the request did not specify the length of its content. - 412 Precondition Failed: a precondition given in one or more request header fields evaluated to false when it was tested on the server. - 413 Request Entity Too Large: the request is larger than the server is willing or able to process, so the server will not accept it. - 414 Request-URI Too Long: the URI is too long, so the server will not accept the request. - 415 Unsupported Media Type: the server is refusing the request because the format of the request entity is not supported. - 416 Requested Range Not Satisfiable: the client has asked for a portion of the file, but the requested byte range is not available. - 417 Expectation Failed: the server cannot meet the requirements of the Expect request header field. 5xx: Server Error - 500 Internal Server Error: a generic error message given when an unexpected condition occurs. - 501 Not Implemented: the server does not support the functionality required to fulfill the request. - 502 Bad Gateway: the server, acting as a gateway or proxy, received an invalid response from the upstream server. - 503 Service Unavailable: the service is temporarily unavailable, for example because the server is overloaded or down for maintenance, but may be available again in the future. - 504 Gateway Timeout: the server, acting as a gateway or proxy, did not receive a timely response from the upstream server. - 505 HTTP Version Not Supported: the server does not support the HTTP protocol version used in the request. (A small helper that maps a numeric code to one of these classes is sketched at the end of this section.) Security considerations The HTTP/1.1 specification informs application developers, information providers, and users of the security limitations of HTTP/1.1, described as follows: - Personal Information: HTTP clients often hold large amounts of personal information such as the user's name, location, mail address, and passwords, and should be careful to prevent unintentional leakage of this information via the HTTP protocol to other parties. - Abuse of Server Log Information: a server is in a position to save personal data about users which might identify their reading patterns or subjects of interest. This information is clearly confidential in nature.
- Transfer of Sensitive Information: HTTP cannot regulate the content of the data that is transferred, nor is there any a priori method of determining the sensitivity of any particular piece of information within the context of a given request. Therefore, applications should give as much control over this information as possible to the provider of that information. - Attacks Based on File and Path Names: implementations of HTTP servers should be careful to restrict the documents returned by HTTP requests to those that were intended by the server administrators. - DNS Spoofing: clients using HTTP rely heavily on the Domain Name Service, and are thus generally prone to security attacks based on the deliberate mis-association of IP addresses and DNS names. Clients should be cautious in assuming the continuing validity of an IP number/DNS name association, and should rely on their name resolver rather than on cached results of previous host name lookups. - Location Headers and Spoofing: if a single server supports multiple organizations that do not trust one another, it must check the values of Location headers in responses that are generated under the control of those organizations, since they have no authority over one another's resources. - Authentication Credentials: HTTP does not provide a method for a server to direct clients to discard cached credentials. - Proxies and Caching: proxies have access to security-related information, personal information about users, and proprietary information belonging to organizations. Log information gathered at proxies therefore contains sensitive details and must be protected. Caching adds a further vulnerability, because cached contents persist after the HTTP request is complete, even though the user may believe the information has been removed from the network; cache contents should therefore be protected as sensitive information.
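As mentioned at the end of the status-code list, here is a minimal sketch of a helper that maps a numeric code to the class implied by its first digit (plain Java; the method name is illustrative):

public class StatusCodes {
    // Returns the class of an HTTP status code, based on its first digit.
    static String statusClass(int code) {
        switch (code / 100) {
            case 1: return "Informational";
            case 2: return "Success";
            case 3: return "Redirection";
            case 4: return "Client Error";
            case 5: return "Server Error";
            default: return "Unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(statusClass(200)); // Success
        System.out.println(statusClass(404)); // Client Error
        System.out.println(statusClass(503)); // Server Error
    }
}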
Multiplicative inverses. That's a mouthful! Really, this term just refers to numbers that, when multiplied together, equal 1. These numbers are also called reciprocals of each other! Learn about multiplicative inverses by watching this tutorial. Working with fractions can be intimidating, but if you arm yourself with the right tools, you'll find that working with fractions is no harder than working with basic numbers. In this tutorial you'll see the process for multiplying 3 very simple fractions. Enjoy! Multiplying a whole number and a fraction can be confusing, but this tutorial helps to sort things out. Check it out! Reciprocals are important when it comes to dividing fractions, finding perpendicular lines, dealing with inverse proportions, and so much more! In this tutorial you can review the basics about reciprocals. Working with word problems AND fractions? This tutorial shows you how to take a word problem and translate it into a mathematical equation involving fractions. Then, you'll see how to solve it and get the answer. Check it out! Solving an equation with multiple fractions in different forms isn't so bad. This tutorial shows you how to convert a mixed fraction to an improper fraction in order to solve the equation. Then, you'll see how to convert the answer back to a mixed fraction to make sense of it. Follow along with this tutorial to see how it's done! This tutorial gives an in-depth look at dividing fractions by showing you what it really means to divide them. To multiply mixed fractions together, you could first convert each to an improper fraction. Then, multiply the fractions together, simplify, and convert your answer back to a mixed fraction. This tutorial will show you how! Dividing fractions? Change that division to a multiplication by multiplying the dividend by the reciprocal of the divisor. Learn all about it by watching this tutorial! To divide mixed fractions, you could first convert each to an improper fraction. Then, switch to a multiplication problem by multiplying by the reciprocal of the divisor. Simplify and convert your answer back to a mixed fraction to get your final answer! This tutorial will show you how!
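As a quick worked example of that last recipe (the particular numbers are just an illustration):

$$
2\tfrac{1}{3} \div 1\tfrac{1}{2}
= \frac{7}{3} \div \frac{3}{2}
= \frac{7}{3} \times \frac{2}{3}
= \frac{14}{9}
= 1\tfrac{5}{9}
$$

The divisor 1 1/2 becomes the improper fraction 3/2, its reciprocal is 2/3, and the improper-fraction answer 14/9 converts back to the mixed fraction 1 5/9.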
The ecliptic is the apparent path of the Sun on the celestial sphere, and is the basis for the ecliptic coordinate system. It also refers to the plane of this path, which is coplanar with the orbit of Earth around the Sun (and hence the apparent orbit of the Sun around Earth). The path of the Sun is not normally noticeable from Earth's surface because Earth rotates, carrying the observer through the cycles of sunrise and sunset, obscuring the apparent motion of the Sun with respect to the stars. Sun's apparent motion The motions as described above are simplifications. Due to the movement of Earth around the Earth–Moon center of mass, the apparent path of the Sun wobbles slightly, with a period of about one month. Due to further perturbations by the other planets of the Solar System, the Earth–Moon barycenter wobbles slightly around a mean position in a complex fashion. The ecliptic is actually the apparent path of the Sun throughout the course of a year. Because Earth takes one year to orbit the Sun, the apparent position of the Sun also takes the same length of time to make a complete circuit of the ecliptic. With slightly more than 365 days in one year, the Sun moves a little less than 1° eastward every day. This small difference in the Sun's position against the stars causes any particular spot on Earth's surface to catch up with (and stand directly north or south of) the Sun about four minutes later each day than it would if Earth would not orbit; a day on Earth is therefore 24 hours long rather than the approximately 23-hour 56-minute sidereal day. Again, this is a simplification, based on a hypothetical Earth that orbits at uniform speed around the Sun. The actual speed with which Earth orbits the Sun varies slightly during the year, so the speed with which the Sun seems to move along the ecliptic also varies. For example, the Sun is north of the celestial equator for about 185 days of each year, and south of it for about 180 days. The variation of orbital speed accounts for part of the equation of time. Relationship to the celestial equator Because Earth's rotational axis is not perpendicular to its orbital plane, Earth's equatorial plane is not coplanar with the ecliptic plane, but is inclined to it by an angle of about 23.4°, which is known as the obliquity of the ecliptic. If the equator is projected outward to the celestial sphere, forming the celestial equator, it crosses the ecliptic at two points known as the equinoxes. The Sun, in its apparent motion along the ecliptic, crosses the celestial equator at these points, one from south to north, the other from north to south. The crossing from south to north is known as the vernal equinox, also known as the first point of Aries and the ascending node of the ecliptic on the celestial equator. The crossing from north to south is the autumnal equinox or descending node. The orientation of Earth's axis and equator are not fixed in space, but rotate about the poles of the ecliptic with a period of about 26,000 years, a process known as lunisolar precession, as it is due mostly to the gravitational effect of the Moon and Sun on Earth's equatorial bulge. Likewise, the ecliptic itself is not fixed. The gravitational perturbations of the other bodies of the Solar System cause a much smaller motion of the plane of Earth's orbit, and hence of the ecliptic, known as planetary precession. 
The combined action of these two motions is called general precession, and changes the position of the equinoxes by about 50 arc seconds (about 0°.014) per year. Once again, this is a simplification. Periodic motions of the Moon and apparent periodic motions of the Sun (actually of Earth in its orbit) cause short-term small-amplitude periodic oscillations of Earth's axis, and hence the celestial equator, known as nutation. This adds a periodic component to the position of the equinoxes; the positions of the celestial equator and (vernal) equinox with fully updated precession and nutation are called the true equator and equinox; the positions without nutation are the mean equator and equinox. Obliquity of the ecliptic Obliquity of the ecliptic is the term used by astronomers for the inclination of Earth's equator with respect to the ecliptic, or of Earth's rotation axis to a perpendicular to the ecliptic. It is about 23.4° and is currently decreasing 0.013 degrees (47 arcseconds) per hundred years due to planetary perturbations. The angular value of the obliquity is found by observation of the motions of Earth and other planets over many years. Astronomers produce new fundamental ephemerides as the accuracy of observation improves and as the understanding of the dynamics increases, and from these ephemerides various astronomical values, including the obliquity, are derived. Until 1983 the obliquity for any date was calculated from work of Newcomb, who analyzed positions of the planets until about 1895: ε = 23° 27′ 08″.26 − 46″.845 T − 0″.0059 T² + 0″.00181 T³ From 1984, the Jet Propulsion Laboratory's DE series of computer-generated ephemerides took over as the fundamental ephemeris of the Astronomical Almanac. Obliquity based on DE200, which analyzed observations from 1911 to 1979, was calculated: ε = 23° 26′ 21″.45 − 46″.815 T − 0″.0006 T² + 0″.00181 T³ JPL's fundamental ephemerides have been continually updated. The Astronomical Almanac for 2010 specifies: ε = 23° 26′ 21″.406 − 46″.836769 T − 0″.0001831 T² + 0″.00200340 T³ − 0″.576×10⁻⁶ T⁴ − 4″.34×10⁻⁸ T⁵ These expressions for the obliquity are intended for high precision over a relatively short time span, perhaps ± several centuries. J. Laskar computed an expression to order T¹⁰ good to 0″.04/1000 years over 10,000 years. Plane of the Solar System (Figure: Top and side views of the plane of the ecliptic, showing the planets Mercury, Venus, Earth, and Mars; most of the planets orbit the Sun very nearly in the same plane in which Earth orbits, the ecliptic.) (Figure: Four planets lined up along the ecliptic in July 2010, illustrating how the planets orbit the Sun in nearly the same plane; photo taken at sunset, looking west over Surakarta, Java, Indonesia.) Most of the major bodies of the Solar System orbit the Sun in nearly the same plane. This is likely due to the way in which the Solar System formed from a protoplanetary disk. Probably the closest current representation of the disk is known as the invariable plane of the Solar System. Earth's orbit, and hence, the ecliptic, is inclined a little more than 1° to the invariable plane, and the other major planets are also within about 6° of it. Because of this, most Solar System bodies appear very close to the ecliptic in the sky. The ecliptic is well defined by the motion of the Sun.
The invariable plane is defined by the angular momentum of the entire Solar System, essentially the summation of all of the orbital motions and rotations of all the bodies of the system, a somewhat uncertain value that requires precise knowledge of every object in the system. For these reasons, the ecliptic is used as the reference plane of the Solar System out of convenience. Celestial reference plane The ecliptic forms one of the two fundamental planes used as reference for positions on the celestial sphere, the other being the celestial equator. Perpendicular to the ecliptic are the ecliptic poles, the north ecliptic pole being the pole north of the equator. Of the two fundamental planes, the ecliptic is closer to unmoving against the background stars, its motion due to planetary precession being roughly 1/100 that of the celestial equator. Spherical coordinates, known as ecliptic longitude and latitude or celestial longitude and latitude, are used to specify positions of bodies on the celestial sphere with respect to the ecliptic. Longitude is measured positively eastward 0° to 360° along the ecliptic from the vernal equinox, the same direction in which the Sun appears to move. Latitude is measured perpendicular to the ecliptic, to +90° northward or -90° southward to the poles of the ecliptic, the ecliptic itself being 0° latitude. For a complete spherical position, a distance parameter is also necessary. Different distance units are used for different objects. Within the Solar System, astronomical units are used, and for objects near Earth, Earth radii or kilometers are used. A corresponding right-handed rectangular coordinate system is also used occasionally; the x-axis is directed toward the vernal equinox, the y-axis 90° to the east, and the z-axis toward the north ecliptic pole; the astronomical unit is the unit of measure. Symbols for ecliptic coordinates are somewhat standardized; heliocentric ecliptic longitude, latitude, and distance, for example, are commonly written l, b, and r, with x, y, z used for the corresponding rectangular coordinates. Ecliptic coordinates are convenient for specifying positions of Solar System objects, as most of the planets' orbits have small inclinations to the ecliptic, and therefore always appear relatively close to it on the sky. Because Earth's orbit, and hence the ecliptic, moves very little, it is a relatively fixed reference with respect to the stars. Because of the precessional motion of the equinox, the ecliptic coordinates of objects on the celestial sphere are continuously changing. Specifying a position in ecliptic coordinates requires specifying a particular equinox, that is, the equinox of a particular date, known as an epoch; the coordinates are referred to the direction of the equinox at that date. For instance, the Astronomical Almanac lists the heliocentric position of Mars at 0h Terrestrial Time, 4 Jan 2010 as: longitude 118° 09' 15".8, latitude +1° 43' 16".7, true heliocentric distance 1.6302454 AU, mean equinox and ecliptic of date. This specifies the mean equinox of 4 Jan 2010 0h TT as above, without the addition of nutation. (A small numerical sketch converting this position to rectangular coordinates appears at the end of this section.) Because the orbit of the Moon is inclined only about 5.145° to the ecliptic and the Sun is always very near the ecliptic, eclipses always occur on or near it. Because of the inclination of the Moon's orbit, eclipses do not occur at every conjunction and opposition of the Sun and Moon, but only when the Moon is near an ascending or descending node at the same time it is at conjunction or opposition. The ecliptic is so named because the ancients noted that eclipses only occurred when the Moon crossed it.
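As promised above, a minimal sketch converting the quoted Mars position from spherical ecliptic coordinates (longitude l, latitude b, distance r) to the right-handed rectangular system described earlier; it assumes the usual conversion x = r cos b cos l, y = r cos b sin l, z = r sin b, and is an illustration rather than an ephemeris-grade computation:

public class EclipticToRectangular {
    public static void main(String[] args) {
        // Heliocentric ecliptic coordinates of Mars, 4 Jan 2010 0h TT (from the text):
        double l = Math.toRadians(118.0 + 9.0 / 60.0 + 15.8 / 3600.0); // longitude
        double b = Math.toRadians(1.0 + 43.0 / 60.0 + 16.7 / 3600.0);  // latitude
        double r = 1.6302454;                                          // distance, AU

        // Right-handed rectangular ecliptic coordinates:
        // x toward the vernal equinox, y 90 degrees to the east, z toward the north ecliptic pole.
        double x = r * Math.cos(b) * Math.cos(l);
        double y = r * Math.cos(b) * Math.sin(l);
        double z = r * Math.sin(b);

        System.out.printf("x = %+.7f AU%n", x);
        System.out.printf("y = %+.7f AU%n", y);
        System.out.printf("z = %+.7f AU%n", z);
    }
}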
Equinoxes and solstices The exact instants of equinoxes or solstices are the times when the apparent ecliptic longitude (including the effects of aberration and nutation) of the Sun is 0°, 90°, 180°, or 270°. Because of perturbations of Earth's orbit and peculiarities of the calendar, the dates of these are not fixed. In the constellations The ecliptic currently passes through the following constellations: The ecliptic forms the center of a band about 20° wide called the zodiac, on which the Sun, Moon, and planets are seen always to move. Traditionally, this region is divided into 12 signs of 30° longitude, each of which approximates the Sun's motion through one month. In ancient times the signs corresponded roughly to 12 of the constellations that straddle the ecliptic. These signs give us some of the terminology used today. The first point of Aries was named when the vernal equinox was actually in the constellation Aries; it has since moved into Pisces. - Formation and evolution of the Solar System - Invariable plane - Protoplanetary disk - Celestial coordinate system Notes and references - U.S. Naval Observatory Nautical Almanac Office, Nautical Almanac Office; U.K. Hydrographic Office, H.M. Nautical Almanac Office (2008). The Astronomical Almanac for the Year 2010. U.S. Govt. Printing Office. p. M5. ISBN 978-0-7077-4082-9. - U.S. Naval Observatory Nautical Almanac Office (1992). P. Kenneth Seidelmann, ed. Explanatory Supplement to the Astronomical Almanac. University Science Books, Mill Valley, CA. ISBN 0-935702-68-7., p. 11 - The directions north and south on the celestial sphere are in the sense toward the north celestial pole and toward the south celestial pole. East is the direction toward which Earth rotates, west is opposite that. - Astronomical Almanac 2010, sec. C - Explanatory Supplement (1992), sec. 1.233 - Explanatory Supplement (1992), p. 733 - Astronomical Almanac 2010, p. M2 and M6 - Explanatory Supplement (1992), sec. 1.322 and 3.21 - U.S. Naval Observatory Nautical Almanac Office; H.M. Nautical Almanac Office (1961). Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac. H.M. Stationery Office, London. , sec. 2C - Explanatory Supplement (1992), p. 731 and 737 - Chauvenet, William (1906). A Manual of Spherical and Practical Astronomy. I. J.B. Lippincott Co., Philadelphia. , art. 365-367, p. 694-695, at Google books - Laskar, J. (1986). "Secular Terms of Classical Planetary Theories Using the Results of General Relativity". , table 8, at SAO/NASA ADS - Explanatory Supplement (1961), sec. 2B - U.S. Naval Observatory, Nautical Almanac Office; H.M. Nautical Almanac Office (1989). The Astronomical Almanac for the Year 1990. U.S. Govt. Printing Office. ISBN 0-11-886934-5. , p. B18 - Astronomical Almanac 2010, p. B52 - Newcomb, Simon (1906). A Compendium of Spherical Astronomy. MacMillan Co., New York. , p. 226-227, at Google books - Meeus, Jean (1991). Astronomical Algorithms. Willmann-Bell, Inc., Richmond, VA. ISBN 0-943396-35-2. , chap. 21 - Danby, J.M.A. (1988). Fundamentals of Celestial Mechanics. Willmann-Bell, Inc., Richmond, VA. ISBN 0-943396-20-4. , sec. 9.1 - Roy, A.E. (1988). Orbital Motion (third ed.). Institute of Physics Publishing. ISBN 0-85274-229-0. , sec. 5.3 - Montenbruck, Oliver (1989). Practical Ephemeris Calculations. Springer-Verlag. ISBN 0-387-50704-3. , sec 1.4 - Explanatory Supplement (1961), sec. 2A - Explanatory Supplement (1961), sec. 1G - Dziobek, Otto (1892). 
Mathematical Theories of Planetary Motions. Register Publishing Co., Ann Arbor, Michigan., p. 294, at Google books - Astronomical Almanac 2010, p. E14 - Ball, Robert S. (1908). A Treatise on Spherical Astronomy. Cambridge University Press. p. 83., at Google books - Meeus (1991), chap. 26 - Serviss, Garrett P. (1908). Astronomy With the Naked Eye. Harper & Brothers, New York and London. pp. 105, 106. at Google books - Bryant, Walter W. (1907). A History of Astronomy. p. 3., at Google books - Bryant (1907), p. 4 - see, for instance, Leo, Alan (1899). Astrology for All. , p. 8, at Google books - Vallado, David A. (2001). Fundamentals of Astrodynamics and Applications (second ed.). Microcosm Press, El Segundo, CA. ISBN 1-881883-12-4. , p. 153 |Look up ecliptic in Wiktionary, the free dictionary.| |Wikiversity has learning materials about Ecliptic at| - The Ecliptic: the Sun's Annual Path on the Celestial Sphere Durham University Department of Physics - Seasons and Ecliptic Simulator University of Nebraska-Lincoln - MEASURING THE SKY A Quick Guide to the Celestial Sphere James B. Kaler, University of Illinois - Earth's Seasons U.S. Naval Observatory - The Basics - the Ecliptic, the Equator, and Coordinate Systems AstrologyClub.Org - Kinoshita, H.; Aoki, S. (1983). "The definition of the ecliptic". Celestial Mechanics. 31: 329–338. Bibcode:1983CeMec..31..329K. doi:10.1007/BF01230290.; comparison of the definitions of LeVerrier, Newcomb, and Standish.
Black-body radiation is the thermal electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, or emitted by a black body (an opaque and non-reflective body). It has a specific spectrum and intensity that depends only on the body's temperature, which is assumed for the sake of calculations and theory to be uniform and constant. The thermal radiation spontaneously emitted by many ordinary objects can be approximated as black-body radiation. A perfectly insulated enclosure that is in thermal equilibrium internally contains black-body radiation and will emit it through a hole made in its wall, provided the hole is small enough to have negligible effect upon the equilibrium. A black-body at room temperature appears black, as most of the energy it radiates is infra-red and cannot be perceived by the human eye. Because the human eye cannot perceive light waves at lower frequencies, a black body, viewed in the dark at the lowest just faintly visible temperature, subjectively appears grey, even though its objective physical spectrum peak is in the infrared range. When it becomes a little hotter, it appears dull red. As its temperature increases further it becomes yellow, white, and ultimately blue-white. Although planets and stars are neither in thermal equilibrium with their surroundings nor perfect black bodies, black-body radiation is used as a first approximation for the energy they emit. Black holes are near-perfect black bodies, in the sense that they absorb all the radiation that falls on them. It has been proposed that they emit black-body radiation (called Hawking radiation), with a temperature that depends on the mass of the black hole. - 1 Spectrum - 2 Black Body - 3 Explanation of black-body radiation - 4 Equations - 5 Human-body emission - 6 Temperature relation between a planet and its star - 7 Cosmology - 8 Doppler effect for a moving black body - 9 History - 10 See also - 11 References - 12 Further reading - 13 External links Black-body radiation has a characteristic, continuous frequency spectrum that depends only on the body's temperature, called the Planck spectrum or Planck's law. The spectrum is peaked at a characteristic frequency that shifts to higher frequencies with increasing temperature, and at room temperature most of the emission is in the infrared region of the electromagnetic spectrum. As the temperature increases past about 500 degrees Celsius, black bodies start to emit significant amounts of visible light. Viewed in the dark by the human eye, the first faint glow appears as a "ghostly" grey (the visible light is actually red, but low intensity light activates only the eye's grey-level sensors). With rising temperature, the glow becomes visible even when there is some background surrounding light: first as a dull red, then yellow, and eventually a "dazzling bluish-white" as the temperature rises. When the body appears white, it is emitting a substantial fraction of its energy as ultraviolet radiation. The Sun, with an effective temperature of approximately 5800 K, is an approximate black body with an emission spectrum peaked in the central, yellow-green part of the visible spectrum, but with significant power in the ultraviolet as well. Black-body radiation provides insight into the thermodynamic equilibrium state of cavity radiation. All normal (baryonic) matter emits electromagnetic radiation when it has a temperature above absolute zero. 
The radiation represents a conversion of a body's internal energy into electromagnetic energy, and is therefore called thermal radiation. It is a spontaneous process of radiative distribution of entropy. Conversely all normal matter absorbs electromagnetic radiation to some degree. An object that absorbs all radiation falling on it, at all wavelengths, is called a black body. When a black body is at a uniform temperature, its emission has a characteristic frequency distribution that depends on the temperature. Its emission is called black-body radiation. The concept of the black body is an idealization, as perfect black bodies do not exist in nature. Graphite and lamp black, with emissivities greater than 0.95, however, are good approximations to a black material. Experimentally, black-body radiation may be established best as the ultimately stable steady state equilibrium radiation in a cavity in a rigid body, at a uniform temperature, that is entirely opaque and is only partly reflective. A closed box of graphite walls at a constant temperature with a small hole on one side produces a good approximation to ideal black-body radiation emanating from the opening. Black-body radiation has the unique absolutely stable distribution of radiative intensity that can persist in thermodynamic equilibrium in a cavity. In equilibrium, for each frequency the total intensity of radiation that is emitted and reflected from a body (that is, the net amount of radiation leaving its surface, called the spectral radiance) is determined solely by the equilibrium temperature, and does not depend upon the shape, material or structure of the body. For a black body (a perfect absorber) there is no reflected radiation, and so the spectral radiance is entirely due to emission. In addition, a black body is a diffuse emitter (its emission is independent of direction). Consequently, black-body radiation may be viewed as the radiation from a black body at thermal equilibrium. Black-body radiation becomes a visible glow of light if the temperature of the object is high enough. The Draper point is the temperature at which all solids glow a dim red, about 798 K. At 1000 K, a small opening in the wall of a large uniformly heated opaque-walled cavity (let us call it an oven), viewed from outside, looks red; at 6000 K, it looks white. No matter how the oven is constructed, or of what material, as long as it is built so that almost all light entering is absorbed by its walls, it will contain a good approximation to black-body radiation. The spectrum, and therefore color, of the light that comes out will be a function of the cavity temperature alone. A graph of the amount of energy inside the oven per unit volume and per unit frequency interval plotted versus frequency, is called the black-body curve. Different curves are obtained by varying the temperature. Two bodies that are at the same temperature stay in mutual thermal equilibrium, so a body at temperature T surrounded by a cloud of light at temperature T on average will emit as much light into the cloud as it absorbs, following Prevost's exchange principle, which refers to radiative equilibrium. The principle of detailed balance says that in thermodynamic equilibrium every elementary process works equally in its forward and backward sense. Prevost also showed that the emission from a body is logically determined solely by its own internal state.
The causal effect of thermodynamic absorption on thermodynamic (spontaneous) emission is not direct, but is only indirect as it affects the internal state of the body. This means that at thermodynamic equilibrium the amount of every wavelength in every direction of thermal radiation emitted by a body at temperature T, black or not, is equal to the corresponding amount that the body absorbs because it is surrounded by light at temperature T. When the body is black, the absorption is obvious: the amount of light absorbed is all the light that hits the surface. For a black body much bigger than the wavelength, the light energy absorbed at any wavelength λ per unit time is strictly proportional to the black-body curve. This means that the black-body curve is the amount of light energy emitted by a black body, which justifies the name. This is the condition for the applicability of Kirchhoff's law of thermal radiation: the black-body curve is characteristic of thermal light, which depends only on the temperature of the walls of the cavity, provided that the walls of the cavity are completely opaque and are not very reflective, and that the cavity is in thermodynamic equilibrium. When the black body is small, so that its size is comparable to the wavelength of light, the absorption is modified, because a small object is not an efficient absorber of light of long wavelength, but the principle of strict equality of emission and absorption is always upheld in a condition of thermodynamic equilibrium. In the laboratory, black-body radiation is approximated by the radiation from a small hole in a large cavity, a hohlraum, in an entirely opaque body that is only partly reflective, that is maintained at a constant temperature. (This technique leads to the alternative term cavity radiation.) Any light entering the hole would have to reflect off the walls of the cavity multiple times before it escaped, in which process it is nearly certain to be absorbed. Absorption occurs regardless of the wavelength of the radiation entering (as long as it is small compared to the hole). The hole, then, is a close approximation of a theoretical black body and, if the cavity is heated, the spectrum of the hole's radiation (i.e., the amount of light emitted from the hole at each wavelength) will be continuous, and will depend only on the temperature and the fact that the walls are opaque and at least partly absorptive, but not on the particular material of which they are built nor on the material in the cavity (compare with emission spectrum). Real objects never behave as full-ideal black bodies, and instead the emitted radiation at a given frequency is a fraction of what the ideal emission would be. The emissivity of a material specifies how well a real body radiates energy as compared with a black body. This emissivity depends on factors such as temperature, emission angle, and wavelength. However, it is typical in engineering to assume that a surface's spectral emissivity and absorptivity do not depend on wavelength, so that the emissivity is a constant. This is known as the gray body assumption. With non-black surfaces, the deviations from ideal black-body behavior are determined by both the surface structure, such as roughness or granularity, and the chemical composition. 
On a "per wavelength" basis, real objects in states of local thermodynamic equilibrium still follow Kirchhoff's Law: emissivity equals absorptivity, so that an object that does not absorb all incident light will also emit less radiation than an ideal black body; the incomplete absorption can be due to some of the incident light being transmitted through the body or to some of it being reflected at the surface of the body. In astronomy, objects such as stars are frequently regarded as black bodies, though this is often a poor approximation. An almost perfect black-body spectrum is exhibited by the cosmic microwave background radiation. Hawking radiation is the hypothetical black-body radiation emitted by black holes, at a temperature that depends on the mass, charge, and spin of the hole. If this prediction is correct, black holes will very gradually shrink and evaporate over time as they lose mass by the emission of photons and other particles. A black body radiates energy at all frequencies, but its intensity rapidly tends to zero at high frequencies (short wavelengths). For example, a black body at room temperature (300 K) with one square meter of surface area will emit a photon in the visible range (390–750 nm) at an average rate of about one photon every 41 seconds, meaning that for most practical purposes, such a black body does not emit in the visible range. Explanation of black-body radiation According to the classical theory of radiation, if each Fourier mode of the equilibrium radiation in an otherwise empty cavity with perfectly reflective walls is considered as a degree of freedom capable of exchanging energy, then, according to the equipartition theorem of classical physics, there would be an equal amount of energy in each mode. Since there are an infinite number of modes this implies infinite heat capacity (infinite energy at any non-zero temperature), as well as an unphysical spectrum of emitted radiation that grows without bound with increasing frequency, a problem known as the ultraviolet catastrophe. At longer wavelengths this discrepancy is not so noticeable, because there hν is very small and the energy steps nhν are almost infinitesimally fine, so that a very large number of vibrational modes can be populated. At shorter wavelengths, however, the classical theory predicted that the emitted energy tends to infinity (in the ultraviolet range, hence the name ultraviolet catastrophe), because every possible vibrational mode, however high its frequency, was counted as carrying the same share of energy, and the total added up to infinity. The classical theory even predicted that all bodies would emit maximum energy in the ultraviolet range, clearly against the experimental data, which show a different peak wavelength at different temperatures. Instead, in quantum theory the energy of each mode is quantized in units of hν, cutting off the spectrum at high frequency in agreement with experimental observation and resolving the catastrophe. A mode cannot hold more energy than the thermal energy of the substance itself, and by quantization it cannot be excited by an arbitrarily small amount either. Thus at shorter wavelengths very few modes are excited, matching the data: the emitted energy falls off for wavelengths shorter than the wavelength of the observed peak of emission. Notice that there are two factors responsible for the shape of the curve. Firstly, longer wavelengths have a larger number of modes associated with them. Secondly, shorter wavelengths have more energy associated with each mode's quantum.
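In equation form, the contrast can be summarized by comparing the classical Rayleigh–Jeans spectral radiance with Planck's quantized result (standard formulas, quoted here for illustration):

$$
B_\nu^{\mathrm{RJ}}(T) = \frac{2\nu^{2} k T}{c^{2}} \xrightarrow{\;\nu\to\infty\;} \infty,
\qquad
B_\nu(T) = \frac{2 h \nu^{3}}{c^{2}}\,\frac{1}{e^{h\nu/kT}-1} \xrightarrow{\;\nu\to\infty\;} 0 .
$$

For hν ≪ kT the Planck expression reduces to the Rayleigh–Jeans form, which is why the two agree at long wavelengths and diverge only toward the ultraviolet.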
The study of the laws of black bodies and the failure of classical physics to describe them helped establish the foundations of quantum mechanics. Calculating the black-body curve was a major challenge in theoretical physics during the late nineteenth century. The problem was solved in 1901 by Max Planck in the formalism now known as Planck's law of black-body radiation. By making changes to Wien's radiation law (not to be confused with Wien's displacement law) consistent with thermodynamics and electromagnetism, he found a mathematical expression fitting the experimental data satisfactorily. Planck had to assume that the energy of the oscillators in the cavity was quantized, i.e., it existed in integer multiples of some quantity. Einstein built on this idea and proposed the quantization of electromagnetic radiation itself in 1905 to explain the photoelectric effect. These theoretical advances eventually resulted in the superseding of classical electromagnetism by quantum electrodynamics. These quanta were called photons and the black-body cavity was thought of as containing a gas of photons. In addition, it led to the development of quantum probability distributions, called Fermi–Dirac statistics and Bose–Einstein statistics, each applicable to a different class of particles, fermions and bosons. The wavelength at which the radiation is strongest is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law. So, as temperature increases, the glow color changes from red to yellow to white to blue. Even as the peak wavelength moves into the ultra-violet, enough radiation continues to be emitted in the blue wavelengths that the body will continue to appear blue. It will never become invisible—indeed, the radiation of visible light increases monotonically with temperature. The Stefan–Boltzmann law also says that the total radiant heat energy emitted from a surface is proportional to the fourth power of its absolute temperature. The law was formulated by Josef Stefan in 1879 and later derived by Ludwig Boltzmann. The formula E = σT⁴ is given, where E is the radiant heat emitted from a unit of area per unit time, T is the absolute temperature, and σ = 5.670367×10⁻⁸ W·m⁻²·K⁻⁴ is the Stefan–Boltzmann constant. Planck's law of black-body radiation Planck's law states that Bν(T) = (2hν³ / c²) · 1 / (e^(hν/kT) − 1) where: - Bν(T) is the spectral radiance (the power per unit solid angle and per unit of area normal to the propagation) density of frequency ν radiation per unit frequency at thermal equilibrium at temperature T; - h is the Planck constant; - c is the speed of light in a vacuum; - k is the Boltzmann constant; - ν is the frequency of the electromagnetic radiation; - T is the absolute temperature of the body. For a black body surface the spectral radiance density (defined per unit of area normal to the propagation) is independent of the angle of emission with respect to the normal. However, this means that, following Lambert's cosine law, Bν(T) cos θ is the radiance density per unit area of emitting surface, where θ is the angle of emission with respect to the normal, as the surface area involved in generating the radiance is increased by a factor 1/cos θ with respect to an area normal to the propagation direction. At oblique angles, the solid angle spans involved do get smaller, resulting in lower aggregate intensities. Wien's displacement law Wien's displacement law shows how the spectrum of black-body radiation at any temperature is related to the spectrum at any other temperature.
Wien's displacement law

Wien's displacement law shows how the spectrum of black-body radiation at any temperature is related to the spectrum at any other temperature. If we know the shape of the spectrum at one temperature, we can calculate the shape at any other temperature. Spectral intensity can be expressed as a function of wavelength or of frequency. A consequence of Wien's displacement law is that the wavelength at which the intensity per unit wavelength of the radiation produced by a black body is at a maximum, λmax, is a function only of the temperature:

λmax = b/T

where the constant b, known as Wien's displacement constant, is equal to 2.897 7729(17)×10⁻³ K·m. Planck's law was also stated above as a function of frequency. The intensity maximum for this is given by νmax = (2.821 k/h) T ≈ T × 5.879×10¹⁰ Hz/K. By integrating Planck's law over frequency, the integrated radiance is L = σT⁴/π, obtained by using the integral ∫₀^∞ x³/(eˣ − 1) dx = π⁴/15 with x = hν/kT, and with σ = 2π⁵k⁴/(15h³c²) being the Stefan–Boltzmann constant. The radiance is then σT⁴/π per unit of emitting surface. On a side note, at a distance d, the intensity received per unit area of radiating surface is σT⁴/(πd²), a useful expression when the receiving surface is perpendicular to the radiation. By subsequently integrating over the solid angle of the emitting hemisphere (θ ≤ π/2), the Stefan–Boltzmann law is obtained, stating that the power j* emitted per unit area of the surface of a black body is directly proportional to the fourth power of its absolute temperature: j* = σT⁴.

[Image caption: Much of a person's energy is radiated away in the form of infrared light. Some materials are transparent in the infrared, but opaque to visible light, as is the plastic bag in this infrared image (bottom). Other materials are transparent to visible light, but opaque or reflective in the infrared, noticeable by the darkness of the man's glasses.]

The human body radiates energy as infrared light. The net power radiated is the difference between the power emitted and the power absorbed, Pnet = Pemit − Pabsorb. Applying the Stefan–Boltzmann law,

Pnet = Aεσ(T⁴ − T0⁴)

where A and T are the body surface area and temperature, ε is the emissivity, and T0 is the ambient temperature. The total surface area of an adult is about 2 m², and the mid- and far-infrared emissivity of skin and most clothing is near unity, as it is for most nonmetallic surfaces. Skin temperature is about 33 °C, but clothing reduces the surface temperature to about 28 °C when the ambient temperature is 20 °C. Hence, the net radiative heat loss is about 100 W. The total energy radiated in one day is about 8 MJ, or 2000 kcal (food calories). Basal metabolic rate for a 40-year-old male is about 35 kcal/(m²·h), which is equivalent to 1700 kcal per day, assuming the same 2 m² area. However, the mean metabolic rate of sedentary adults is about 50% to 70% greater than their basal rate. There are other important thermal loss mechanisms, including convection and evaporation. Conduction is negligible – the Nusselt number is much greater than unity. Evaporation by perspiration is only required if radiation and convection are insufficient to maintain a steady-state temperature (but evaporation from the lungs occurs regardless). Free-convection rates are comparable to, albeit somewhat lower than, radiative rates. Thus, radiation accounts for about two-thirds of thermal energy loss in cool, still air. Given the approximate nature of many of the assumptions, this can only be taken as a crude estimate. Ambient air motion, causing forced convection, or evaporation reduces the relative importance of radiation as a thermal-loss mechanism. Application of Wien's law to human-body emission, with a skin temperature of about 305 K, results in a peak wavelength of about 9.5 μm. For this reason, thermal imaging devices for human subjects are most sensitive in the 7–14 micrometer range.
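The estimate above can be reproduced directly. The following added sketch uses the same round figures as the text (2 m² of surface, a clothed surface near 28 °C, a 20 °C ambient, emissivity close to unity, skin near 33 °C); it recovers a net radiative loss of roughly 100 W, about 8 MJ per day, and a peak emission wavelength near 9.5 μm.

```python
# An added sketch of the estimate in the text, using the same round figures:
# the Stefan-Boltzmann law for the net radiative loss of a human body, and
# Wien's displacement law for the peak wavelength of human-body emission.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
B_WIEN = 2.898e-3   # Wien's displacement constant, m K

area = 2.0                 # body surface area, m^2
eps = 0.98                 # mid/far-IR emissivity of skin and clothing (near 1)
T_surface = 28 + 273.15    # clothed surface temperature, K
T_ambient = 20 + 273.15    # ambient temperature, K
T_skin = 33 + 273.15       # skin temperature, K

p_net = area * eps * SIGMA * (T_surface**4 - T_ambient**4)
print(f"net radiative loss : {p_net:.0f} W  (~{p_net * 86400 / 1e6:.1f} MJ/day)")
print(f"peak wavelength    : {B_WIEN / T_skin * 1e6:.1f} um")
```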
Temperature relation between a planet and its star

The black-body law may be used to estimate the temperature of a planet orbiting the Sun. The temperature of a planet depends on several factors:
- Incident radiation from its star
- Emitted radiation of the planet, e.g., Earth's infrared glow
- The albedo effect causing a fraction of light to be reflected by the planet
- The greenhouse effect for planets with an atmosphere
- Energy generated internally by a planet itself due to radioactive decay, tidal heating, and adiabatic contraction caused by cooling.

The analysis considers only the Sun's heat for a planet in the Solar System. The Stefan–Boltzmann law gives the total power (energy per second) that the Sun emits:

P_S = 4πR_S² σT_S⁴    (1)

where
- σ is the Stefan–Boltzmann constant,
- T_S is the effective temperature of the Sun, and
- R_S is the radius of the Sun.

The Sun emits that power equally in all directions. Because of this, the planet is hit with only a tiny fraction of it. The power from the Sun that strikes the planet (at the top of the atmosphere) is:

P_SP = P_S × πR_P²/(4πD²)    (2)

where
- R_P is the radius of the planet and
- D is the distance between the Sun and the planet.

Because of its high temperature, the Sun emits to a large extent in the ultraviolet and visible (UV-Vis) frequency range. In this frequency range, the planet reflects a fraction α of this energy, where α is the albedo or reflectance of the planet in the UV-Vis range. In other words, the planet absorbs a fraction 1 − α of the Sun's light, and reflects the rest. The power absorbed by the planet and its atmosphere is then:

P_abs = (1 − α) P_SP    (3)

Even though the planet only absorbs as a circular area πR_P², it emits equally in all directions as a sphere. If the planet were a perfect black body, it would emit according to the Stefan–Boltzmann law

P_emt,bb = 4πR_P² σT_P⁴    (4)

where T_P is the temperature of the planet. This temperature, calculated for the case of the planet acting as a black body by setting P_abs = P_emt,bb, is known as the effective temperature. The actual temperature of the planet will likely be different, depending on its surface and atmospheric properties. Ignoring the atmosphere and greenhouse effect, the planet, since it is at a much lower temperature than the Sun, emits mostly in the infrared (IR) portion of the spectrum. In this frequency range, it emits a fraction ε of the radiation that a black body would emit, where ε is the average emissivity in the IR range. The power emitted by the planet is then:

P_emt = ε 4πR_P² σT_P⁴    (5)

For a planet in radiative equilibrium, the power absorbed equals the power emitted:

P_abs = P_emt    (6)

Substituting the expressions for solar and planet power in equations 1–6 and simplifying yields the estimated temperature of the planet, ignoring greenhouse effect, T_P:

T_P = T_S √(R_S/(2D)) × ((1 − α)/ε)^(1/4)

In other words, given the assumptions made, the temperature of a planet depends only on the surface temperature of the Sun, the radius of the Sun, the distance between the planet and the Sun, the albedo and the IR emissivity of the planet. Notice that a gray (flat spectrum) ball, for which α = ε, comes to the same temperature as a black body, no matter how dark or light gray it is.

Effective temperature of Earth

Substituting the measured values for the Sun and Earth, with the average emissivity ε set to unity, gives an effective temperature of the Earth of about 254 K, or −18.8 °C. This is the temperature of the Earth if it radiated as a perfect black body in the infrared, assuming an unchanging albedo and ignoring greenhouse effects (which can raise the surface temperature of a body above what it would be if it were a perfect black body in all spectra). The Earth in fact radiates not quite as a perfect black body in the infrared, which will raise the estimated temperature a few degrees above the effective temperature. If we wish to estimate what the temperature of the Earth would be if it had no atmosphere, then we could take the albedo and emissivity of the Moon as a good estimate. The albedo and emissivity of the Moon are about 0.1054 and 0.95 respectively, yielding an estimated temperature of about 1.36 °C. Estimates of the Earth's average albedo vary in the range 0.3–0.4, resulting in different estimated effective temperatures. Estimates are often based on the solar constant (total insolation power density) rather than the temperature, size, and distance of the Sun. For example, using 0.4 for the albedo and an insolation of 1400 W/m², one obtains an effective temperature of about 245 K. Similarly, using an albedo of 0.3 and a solar constant of 1372 W/m², one obtains an effective temperature of 255 K.
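Both forms of the estimate are easy to check numerically. The sketch below is an added example; the solar temperature, radius, and distance and the Earth albedo used are commonly quoted approximate values rather than figures taken from the text, and the results agree with the temperatures quoted above to within a couple of kelvin.

```python
# An added sketch of the two equivalent estimates described above. The solar
# and Earth values used here are commonly quoted approximations (assumptions,
# not figures taken from the text).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff_from_sun(T_sun, R_sun, D, albedo, emissivity=1.0):
    """T_P = T_S * sqrt(R_S / (2 D)) * ((1 - albedo) / emissivity) ** 0.25"""
    return T_sun * (R_sun / (2.0 * D)) ** 0.5 * ((1.0 - albedo) / emissivity) ** 0.25

def t_eff_from_solar_constant(S, albedo):
    """T = (S * (1 - albedo) / (4 * sigma)) ** 0.25"""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

T_SUN, R_SUN, D_EARTH = 5778.0, 6.957e8, 1.496e11   # K, m, m (approximate)

t1 = t_eff_from_sun(T_SUN, R_SUN, D_EARTH, albedo=0.306)
print(f"from solar temperature/size/distance : {t1:.0f} K ({t1 - 273.15:.1f} degC)")
print(f"solar constant 1372 W/m^2, albedo 0.3: {t_eff_from_solar_constant(1372.0, 0.3):.0f} K")
print(f"insolation 1400 W/m^2, albedo 0.4    : {t_eff_from_solar_constant(1400.0, 0.4):.0f} K")
```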
The cosmic microwave background radiation observed today is the most perfect black-body radiation ever observed in nature, with a temperature of about 2.7 K. It is a "snapshot" of the radiation at the time of decoupling between matter and radiation in the early universe. Prior to this time, most matter in the universe was in the form of an ionized plasma in thermal, though not full thermodynamic, equilibrium with radiation. According to Kondepudi and Prigogine, at very high temperatures (above 10¹⁰ K; such temperatures existed in the very early universe), where the thermal motion separates protons and neutrons in spite of the strong nuclear forces, electron-positron pairs appear and disappear spontaneously and are in thermal equilibrium with electromagnetic radiation. These particles form a part of the black body spectrum, in addition to the electromagnetic radiation.

Doppler effect for a moving black body

The relativistic Doppler effect causes a shift in the frequency f of light originating from a source that is moving in relation to the observer, so that the wave is observed to have a frequency f' that depends on the velocity v of the source in the observer's rest frame, on the angle θ between the velocity vector and the observer-source direction measured in the reference frame of the source, and on the speed of light c. This can be simplified for the special cases of objects moving directly towards (θ = π) or away (θ = 0) from the observer, and for speeds much less than c. Through Planck's law the temperature spectrum of a black body is proportionally related to the frequency of light, and one may substitute the temperature (T) for the frequency in this equation. For the case of a source moving directly towards or away from the observer, this reduces to

T' = T √((c − v)/(c + v))

Here v > 0 indicates a receding source, and v < 0 indicates an approaching source. This is an important effect in astronomy, where the velocities of stars and galaxies can reach significant fractions of c. An example is found in the cosmic microwave background radiation, which exhibits a dipole anisotropy from the Earth's motion relative to this black-body radiation field.

In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote "Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power."
More an experimenter than a logician, Stewart failed to point out that his statement presupposed an abstract general principle, that there exist either ideally in theory or really in nature bodies or surfaces that respectively have one and the same unique universal greatest possible absorbing power, likewise for radiating power, for every wavelength and equilibrium temperature. Stewart measured radiated power with a thermo-pile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Stokes-Helmholtz reciprocity principle (though he did not use an eponym for it). He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room temperature environment, and quickly so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water. His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium. Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva. He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He did not postulate unrealizable perfectly black surfaces. He concluded that his experiments showed that in a cavity in thermal equilibrium, the heat radiated from any part of the interior bounding surface, no matter of what material it might be composed, was the same as would have been emitted from a surface of the same shape and position that would have been composed of lamp-black. He did not state explicitly that the lamp-black-coated bodies that he used as reference must have had a unique common spectral emittance function that depended on temperature in a unique way. In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber. Kirchhoff then went on to consider some bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature T. Here is used a notation different from Kirchhoff's. Here, the emitting power E(T, i) denotes a dimensioned quantity, the total radiation emitted by a body labeled by index i at temperature T. The total absorption ratio a(T, i) of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature T . 
(In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio E(T, i) / a(T, i) of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power, because a(T, i) is dimensionless. Also here the wavelength-specific emitting power of the body at temperature T is denoted by E(λ, T, i) and the wavelength-specific absorption ratio by a(λ, T, i) . Again, the ratio E(λ, T, i) / a(λ, T, i) of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power. In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorption ratio has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio E(λ, T, i) / a(λ, T, i) has one and the same value for all bodies, that is for all values of index i . In this report there was no mention of black bodies. In 1860, still not knowing of Stewart's measurements for selected qualities of radiation, Kirchhoff pointed out that it was long established experimentally that for total heat radiation, of unselected quality, emitted and absorbed by a body in equilibrium, the dimensioned total radiation ratio E(T, i) / a(T, i), has one and the same value common to all bodies, that is, for every value of the material index i. Again without measurements of radiative powers or other new experimental data, Kirchhoff then offered a fresh theoretical proof of his new principle of the universality of the value of the wavelength-specific ratio E(λ, T, i) / a(λ, T, i) at thermal equilibrium. His fresh theoretical proof was and still is considered by some writers to be invalid. But more importantly, it relied on a new theoretical postulate of "perfectly black bodies," which is the reason why one speaks of Kirchhoff's law. Such black bodies showed complete absorption in their infinitely thin most superficial surface. They correspond to Balfour Stewart's reference bodies, with internal radiation, coated with lamp-black. They were not the more realistic perfectly black bodies later considered by Planck. Planck's black bodies radiated and absorbed only by the material in their interiors; their interfaces with contiguous media were only mathematical surfaces, capable neither of absorption nor emission, but only of reflecting and transmitting with refraction. Kirchhoff's proof considered an arbitrary non-ideal body labeled i as well as various perfect black bodies labeled BB . It required that the bodies be kept in a cavity in thermal equilibrium at temperature T . His proof intended to show that the ratio E(λ, T, i) / a(λ, T, i) was independent of the nature i of the non-ideal body, however partly transparent or partly reflective it was. His proof first argued that for wavelength λ and at temperature T, at thermal equilibrium, all perfectly black bodies of the same size and shape have the one and the same common value of emissive power E(λ, T, BB), with the dimensions of power. 
His proof noted that the dimensionless wavelength-specific absorption ratio a(λ, T, BB) of a perfectly black body is by definition exactly 1. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorption ratio E(λ, T, BB) / a(λ, T, BB) is again just E(λ, T, BB), with the dimensions of power. Kirchhoff considered, successively, thermal equilibrium with the arbitrary non-ideal body, and with a perfectly black body of the same size and shape, in place in his cavity in equilibrium at temperature T . He argued that the flows of heat radiation must be the same in each case. Thus he argued that at thermal equilibrium the ratio E(λ, T, i) / a(λ, T, i) was equal to E(λ, T, BB), which may now be denoted Bλ (λ, T), a continuous function, dependent only on λ at fixed temperature T, and an increasing function of T at fixed wavelength λ, at low temperatures vanishing for visible but not for longer wavelengths, with positive values for visible wavelengths at higher temperatures, which does not depend on the nature i of the arbitrary non-ideal body. (Geometrical factors, taken into detailed account by Kirchhoff, have been ignored in the foregoing.) Thus Kirchhoff's law of thermal radiation can be stated: For any material at all, radiating and absorbing in thermodynamic equilibrium at any given temperature T, for every wavelength λ, the ratio of emissive power to absorptive ratio has one universal value, which is characteristic of a perfect black body, and is an emissive power which we here represent by Bλ (λ, T) . (For our notation Bλ (λ, T), Kirchhoff's original notation was simply e.) Kirchhoff announced that the determination of the function Bλ (λ, T) was a problem of the highest importance, though he recognized that there would be experimental difficulties to be overcome. He supposed that like other functions that do not depend on the properties of individual bodies, it would be a simple function. Occasionally by historians that function Bλ (λ, T) has been called "Kirchhoff's (emission, universal) function," though its precise mathematical form would not be known for another forty years, till it was discovered by Planck in 1900. The theoretical proof for Kirchhoff's universality principle was worked on and debated by various physicists over the same time, and later. Kirchhoff stated later in 1860 that his theoretical proof was better than Balfour Stewart's, and in some respects it was so. Kirchhoff's 1860 paper did not mention the second law of thermodynamics, and of course did not mention the concept of entropy which had not at that time been established. In a more considered account in a book in 1862, Kirchhoff mentioned the connection of his law with Carnot's principle, which is a form of the second law. According to Helge Kragh, "Quantum theory owes its origin to the study of thermal radiation, in particular to the "black-body" radiation that Robert Kirchhoff had first defined in 1859–1860." - Loudon 2000, Chapter 1. - Mandel & Wolf 1995, Chapter 13. - Kondepudi & Prigogine 1998, Chapter 11. - Landsberg 1990, Chapter 13. - Partington, J.R. (1949), p. 466. - Ian Morison (2008). Introduction to Astronomy and Cosmology. J Wiley & Sons. p. 48. ISBN 0-470-03333-9. - Alessandro Fabbri; José Navarro-Salas (2005). "Chapter 1: Introduction". Modeling black hole evaporation. Imperial College Press. ISBN 1-86094-527-9. - From (Kirchhoff, 1860) (Annalen der Physik und Chemie), p. 
277: "Der Beweis, welcher für die ausgesprochene Behauptung hier gegeben werden soll, … vollkommen schwarze, oder kürzer schwarze, nennen." (The proof, which shall be given here for the proposition stated [above], rests on the assumption that bodies are conceivable which in the case of infinitely small thicknesses, completely absorb all rays that fall on them, thus [they] neither reflect nor transmit rays. I will call such bodies "completely black [bodies]" or more briefly "black [bodies]".) See also (Kirchhoff, 1860) (Philosophical Magazine), p. 2. - Tomokazu Kogure; Kam-Ching Leung (2007). "§2.3: Thermodynamic equilibrium and black-body radiation". The astrophysics of emission-line stars. Springer. p. 41. ISBN 0-387-34500-0. - Wien, W. (1893). Eine neue Beziehung der Strahlung schwarzer Körper zum zweiten Hauptsatz der Wärmetheorie, Sitzungberichte der Königlich-Preußischen Akademie der Wissenschaften (Berlin), 1893, 1: 55–62. - Lummer, O., Pringsheim, E. (1899). Die Vertheilung der Energie im Spectrum des schwarzen Körpers, Verhandlungen der Deutschen Physikalischen Gessellschaft (Leipzig), 1899, 1: 23–41. - Planck 1914 - Draper, J.W. (1847). On the production of light by heat, London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, series 3, 30: 345–360. - Partington 1949, pp. 466–467, 478. - Goody & Yung 1989, pp. 482, 484 - Planck 1914, p. 42 - Wien 1894 - Planck 1914, p. 43 - Joseph Caniou (1999). "§4.2.2: Calculation of Planck's law". Passive infrared detection: theory and applications. Springer. p. 107. ISBN 0-7923-8532-2. - J. R. Mahan (2002). Radiation heat transfer: a statistical approach (3rd ed.). Wiley-IEEE. p. 58. ISBN 978-0-471-21270-6. - de Groot, SR., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. - Kondepudi & Prigogine 1998, Section 9.4. - Stewart 1858 - Huang, Kerson (1967). Statistical Mechanics. New York: John Wiley & Sons. ISBN 0-471-81518-7. - Gannon, Megan (December 21, 2012). "New 'Baby Picture' of Universe Unveiled". Space.com. Retrieved December 21, 2012. - Bennett, C.L.; Larson, L.; Weiland, J.L.; Jarosk, N.; Hinshaw, N.; Odegard, N.; Smith, K.M.; Hill, R.S.; Gold, B.; Halpern, M.; Komatsu, E.; Nolta, M.R.; Page, L.; Spergel, D.N.; Wollack, E.; Dunkley, J.; Kogut, A.; Limon, M.; Meyer, S.S.; Tucker, G.S.; Wright, E.L. (December 20, 2012). "Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Final Maps and Results". 1212: 5225. arXiv:1212.5225. Bibcode:2013ApJS..208...20B. doi:10.1088/0067-0049/208/2/20. - Planck, Max (1901). "Ueber das Gesetz der Energieverteilung im Normalspectrum" [On the law of the distribution of energy in the normal spectrum]. Annalen der Physik. 4th series (in German). 4 (3): 553–563. Bibcode:1901AnP...309..553P. doi:10.1002/andp.19013090310. - Landau, L. D.; E. M. Lifshitz (1996). Statistical Physics (3rd Edition Part 1 ed.). Oxford: Butterworth–Heinemann. ISBN 0-521-65314-2. - "Stefan-Boltzmann law". Encyclopædia Britannica. 2019. - Rybicki & Lightman 1979, p. 22 - "Wien wavelength displacement law constant". The NIST Reference on Constants, Units, and Uncertainty. NIST. Retrieved February 8, 2019. - Nave, Dr. Rod. "Wien's Displacement Law and Other Ways to Characterize the Peak of Blackbody Radiation". HyperPhysics. Provides 5 variations of Wien's displacement law - Infrared Services. "Emissivity Values for Common Materials". Retrieved 2007-06-24. - Omega Engineering. "Emissivity of Common Materials". Retrieved 2007-06-24. - Farzana, Abanty (2001). 
"Temperature of a Healthy Human (Skin Temperature)". The Physics Factbook. Retrieved 2007-06-24. - Lee, B. "Theoretical Prediction and Measurement of the Fabric Surface Apparent Temperature in a Simulated Man/Fabric/Environment System" (PDF). Archived from the original (PDF) on 2006-09-02. Retrieved 2007-06-24. - Harris J, Benedict F; Benedict (1918). "A Biometric Study of Human Basal Metabolism". Proc Natl Acad Sci USA. 4 (12): 370–3. Bibcode:1918PNAS....4..370H. doi:10.1073/pnas.4.12.370. PMC 1091498. PMID 16576330. - Levine, J (2004). "Nonexercise activity thermogenesis (NEAT): environment and biology". Am J Physiol Endocrinol Metab. 286 (5): E675–E685. doi:10.1152/ajpendo.00562.2003. PMID 15102614. - DrPhysics.com. "Heat Transfer and the Human Body". Retrieved 2007-06-24. - Prevost, P. (1791). Mémoire sur l'equilibre du feu. Journal de Physique (Paris), vol 38 pp. 314-322. - Iribarne, J.V., Godson, W.L. (1981). Atmospheric Thermodynamics, second edition, D. Reidel Publishing, Dordrecht, ISBN 90-277-1296-4, page 227. - NASA Sun Fact Sheet - Cole, George H. A.; Woolfson, Michael M. (2002). Planetary Science: The Science of Planets Around Stars (1st ed.). Institute of Physics Publishing. pp. 36–37, 380–382. ISBN 0-7503-0815-X. - Principles of Planetary Climate by Raymond T. Peirrehumbert, Cambridge University Press (2011), p. 146. From Chapter 3 which is available online here Archived March 28, 2012, at the Wayback Machine, p. 12 mentions that Venus' black-body temperature would be 330 K "in the zero albedo case", but that due to atmospheric warming, its actual surface temperature is 740 K. - Saari, J. M.; Shorthill, R. W. (1972). "The Sunlit Lunar Surface. I. Albedo Studies and Full Moon". The Moon. 5 (1–2): 161–178. Bibcode:1972Moon....5..161S. doi:10.1007/BF00562111. - Lunar and Planetary Science XXXVII (2006) 2406 - Michael D. Papagiannis (1972). Space physics and space astronomy. Taylor & Francis. pp. 10–11. ISBN 978-0-677-04000-4. - Willem Jozef Meine Martens & Jan Rotmans (1999). Climate Change an Integrated Perspective. Springer. pp. 52–55. ISBN 978-0-7923-5996-8. - F. Selsis (2004). "The Prebiotic Atmosphere of the Earth". In Pascale Ehrenfreund; et al. Astrobiology: Future Perspectives. Springer. pp. 279–280. ISBN 978-1-4020-2587-7. - Wallace, J.M., Hobbs, P.V. (2006). Atmospheric Science. An Introductory Survey, second edition, Elsevier, Amsterdam, ISBN 978-0-12-732951-2, exercise 4.6, pages 119-120. - White, M. (1999). "Anisotropies in the CMB". arXiv:astro-ph/9903232. Bibcode:1999dpf..conf.....W. - Kondepudi & Prigogine 1998, pp. 227–228; also Section 11.6, pages 294–296. - The Doppler Effect, T. P. Gill, Logos Press, 1965 - Siegel 1976 - Kirchhoff 1860a - Kirchhoff 1860b - Schirrmacher 2001 - Kirchhoff 1860c - Planck 1914, p. 11 - Chandrasekhar 1950, p. 8 - Milne 1930, p. 80 - Rybicki & Lightman 1979, pp. 16–17 - Mihalas & Weibel-Mihalas 1984, p. 328 - Goody & Yung 1989, pp. 27–28 - Paschen, F. (1896), personal letter cited by Hermann 1971, p. 6 - Hermann 1971, p. 7 - Kuhn 1978, pp. 8, 29 - Mehra and Rechenberg 1982, pp. 26, 28, 31, 39 - Kirchhoff & 1862/1882, p. 573 - Kragh 1999, p. 58 - Chandrasekhar, S. (1950). Radiative Transfer. Oxford University Press. - Goody, R. M.; Yung, Y. L. (1989). Atmospheric Radiation: Theoretical Basis (2nd ed.). Oxford University Press. ISBN 978-0-19-510291-8. - Hermann, A. (1971). The Genesis of Quantum Theory. Nash, C.W. (transl.). MIT Press. ISBN 0-262-08047-8. 
a translation of Frühgeschichte der Quantentheorie (1899–1913), Physik Verlag, Mosbach/Baden. - Kirchhoff, G.; [27 October 1859] (1860a). "Über die Fraunhofer'schen Linien" [On Fraunhofer's lines]. Monatsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin: 662–665. - Kirchhoff, G.; [11 December 1859] (1860b). "Über den Zusammenhang zwischen Emission und Absorption von Licht und Wärme" [On the relation between emission and absorption of light and heat]. Monatsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin: 783–787. - Kirchhoff, G. (1860c). "Ueber das Verhältniss zwischen dem Emissionsvermögen und dem Absorptionsvermögen der Körper für Wärme and Licht" [On the relation between bodies' emission capacity and absorption capacity for heat and light]. Annalen der Physik und Chemie. 109 (2): 275–301. Bibcode:1860AnP...185..275K. doi:10.1002/andp.18601850205. Translated by Guthrie, F. as Kirchhoff, G. (1860). "On the relation between the radiating and absorbing powers of different bodies for light and heat". Philosophical Magazine. Series 4, volume 20: 1–21. - Kirchhoff, G. (1882) , "Ueber das Verhältniss zwischen dem Emissionsvermögen und dem Absorptionsvermögen der Körper für Wärme und Licht", Gessamelte Abhandlungen, Leipzig: Johann Ambrosius Barth, pp. 571–598 - Kondepudi, D.; Prigogine, I. (1998). Modern Thermodynamics. From Heat Engines to Dissipative Structures. John Wiley & Sons. ISBN 0-471-97393-9. - Kragh, H. (1999). Quantum Generations: a History of Physics in the Twentieth Century. Princeton University Press. ISBN 0-691-01206-7. - Kuhn, T. S. (1978). Black–Body Theory and the Quantum Discontinuity. Oxford University Press. ISBN 0-19-502383-8. - Landsberg, P. T. (1990). Thermodynamics and statistical mechanics (Reprint ed.). Courier Dover Publications. ISBN 0-486-66493-7. - Lavenda, Bernard Howard (1991). Statistical Physics: A Probabilistic Approach. John Wiley & Sons. pp. 41–42. ISBN 978-0-471-54607-8. - Loudon, R. (2000) . The Quantum Theory of Light (third ed.). Cambridge University Press. ISBN 0-19-850177-3. - Mandel, L.; Wolf, E. (1995). Optical Coherence and Quantum Optics. Cambridge University Press. ISBN 0-521-41711-2. - Mehra, J.; Rechenberg, H. (1982). The Historical Development of Quantum Theory. volume 1, part 1. Springer-Verlag. ISBN 0-387-90642-8. - Mihalas, D.; Weibel-Mihalas, B. (1984). Foundations of Radiation Hydrodynamics. Oxford University Press. ISBN 0-19-503437-6. - Milne, E.A. (1930). "Thermodynamics of the Stars". Handbuch der Astrophysik. 3, part 1: 63–255. - Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry. Volume 1. Fundamental Principles. The Properties of Gases. Longmans, Green and Co. - Planck, M. (1914) . The Theory of Heat Radiation. translated by Masius, M. P. Blakiston's Sons & Co. - Rybicki, G. B.; Lightman, A. P. (1979). Radiative Processes in Astrophysics. John Wiley & Sons. ISBN 0-471-82759-2. - Schirrmacher, A. (2001). Experimenting theory: the proofs of Kirchhoff's radiation law before and after Planck. Münchner Zentrum für Wissenschafts und Technikgeschichte. - Siegel, D.M. (1976). "Balfour Stewart and Gustav Robert Kirchhoff: two independent approaches to "Kirchhoff's radiation law"". Isis. 67 (4): 565–600. doi:10.1086/351669. - Stewart, B. (1858). "An account of some experiments on radiant heat". Transactions of the Royal Society of Edinburgh. 22: 1–20. - Wien, W. (1894). "Temperatur und Entropie der Strahlung" [Temperature and entropy of radiation]. Annalen der Physik. 
288 (5): 132–165. Bibcode:1894AnP...288..132W. doi:10.1002/andp.18942880511. - Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-7167-1088-9. - Tipler, Paul; Llewellyn, Ralph (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0.
A muscle cell is also known as a myocyte when referring to either a cardiac muscle cell (cardiomyocyte) or a smooth muscle cell, as these are both small cells. A skeletal muscle cell is long and threadlike with many nuclei and is called a muscle fiber. Muscle cells (including myocytes and muscle fibers) develop from embryonic precursor cells called myoblasts. Myoblasts fuse to form multinucleated skeletal muscle cells known as syncytia in a process known as myogenesis. Skeletal muscle cells and cardiac muscle cells both contain myofibrils and sarcomeres and form a striated muscle tissue. Cardiac muscle cells form the cardiac muscle in the walls of the heart chambers, and have a single central nucleus. Cardiac muscle cells are joined to neighboring cells by intercalated discs, and when joined in a visible unit they are described as a cardiac muscle fiber. Smooth muscle cells control involuntary movements such as the peristalsis contractions in the esophagus and stomach. Smooth muscle has no myofibrils or sarcomeres and is therefore non-striated. Smooth muscle cells have a single nucleus. The unusual microscopic anatomy of a muscle cell gave rise to its terminology. The cytoplasm in a muscle cell is termed the sarcoplasm; the smooth endoplasmic reticulum of a muscle cell is termed the sarcoplasmic reticulum; and the cell membrane in a muscle cell is termed the sarcolemma. The sarcolemma receives and conducts stimuli.

Skeletal muscle cells

Skeletal muscle cells are the individual contractile cells within a muscle and are more usually known as muscle fibers because of their longer threadlike appearance. A single muscle such as the biceps brachii in a young adult human male contains around 253,000 muscle fibers. Skeletal muscle fibers are the only muscle cells that are multinucleated, with the nuclei usually referred to as myonuclei. This occurs during myogenesis with the fusion of myoblasts, each contributing a nucleus to the newly formed muscle cell or myotube. Fusion depends on muscle-specific fusion proteins, known as fusogens, called myomaker and myomerger. A striated muscle fiber contains myofibrils consisting of long protein chains of myofilaments. There are three types of myofilaments (thin, thick, and elastic) that work together to produce a muscle contraction. The thin myofilaments are filaments of mostly actin and the thick filaments are of mostly myosin, and they slide over each other to shorten the fiber length in a muscle contraction. The third type of myofilament is an elastic filament composed of titin, a very large protein. In striations of muscle bands, myosin forms the dark filaments that make up the A band. Thin filaments of actin are the light filaments that make up the I band. The smallest contractile unit in the fiber is called the sarcomere, which is a repeating unit within two Z bands. The sarcoplasm also contains glycogen, which provides energy to the cell during heightened exercise, and myoglobin, the red pigment that stores oxygen until needed for muscular activity. The sarcoplasmic reticulum, a specialized type of smooth endoplasmic reticulum, forms a network around each myofibril of the muscle fiber.
This network is composed of groupings of two dilated end-sacs called terminal cisternae, and a single T-tubule (transverse tubule), which bores through the cell and emerges on the other side; together these three components form the triads that exist within the network of the sarcoplasmic reticulum, in which each T-tubule has a terminal cisterna on each side of it. The sarcoplasmic reticulum serves as a reservoir for calcium ions, so when an action potential spreads over the T-tubule, it signals the sarcoplasmic reticulum to release calcium ions from the gated membrane channels to stimulate muscle contraction. In skeletal muscle, at the end of each muscle fiber, the outer layer of the sarcolemma combines with tendon fibers at the myotendinous junction. Within the muscle fiber, pressed against the sarcolemma, are multiple flattened nuclei; embryologically, this multinucleate condition results from multiple myoblasts fusing to produce each muscle fiber, where each myoblast contributes one nucleus.

Cardiac muscle cells

The cell membrane of a cardiac muscle cell has several specialized regions, which may include the intercalated disc and transverse tubules. The cell membrane is covered by a lamina coat which is approximately 50 nm wide. The lamina coat is separable into two layers: the lamina densa and lamina lucida. In between these two layers can be several different types of ions, including calcium. Cardiac muscle, like skeletal muscle, is also striated, and its cells contain myofibrils, myofilaments, and sarcomeres just as the skeletal muscle cell does. The cell membrane is anchored to the cell's cytoskeleton by anchor fibers that are approximately 10 nm wide. These are generally located at the Z lines, where they form grooves from which transverse tubules emanate. In cardiac myocytes, this forms a scalloped surface. The cytoskeleton is the framework on which the rest of the cell is built and has two primary purposes: the first is to stabilize the topography of the intracellular components, and the second is to help control the size and shape of the cell. While the first function is important for biochemical processes, the latter is crucial in defining the surface-to-volume ratio of the cell. This heavily influences the potential electrical properties of excitable cells. Additionally, deviation from the standard shape and size of the cell can have a negative prognostic impact.

Smooth muscle cells

Smooth muscle cells are so called because they have neither myofibrils nor sarcomeres and therefore no striations. They are found in the walls of hollow organs, including the stomach, intestines, bladder and uterus, in the walls of blood vessels, and in the tracts of the respiratory, urinary, and reproductive systems. In the eye, smooth muscle in the iris constricts and dilates the pupil, and the ciliary muscle alters the shape of the lens. In the skin, smooth muscle cells such as those of the arrector pili cause hair to stand erect in response to cold temperature or fear. Smooth muscle cells are spindle-shaped with wide middles and tapering ends. They have a single nucleus and range from 30 to 200 micrometers in length. This is thousands of times shorter than skeletal muscle fibers. The diameter of their cells is also much smaller, which removes the need for the T-tubules found in striated muscle cells. Although smooth muscle cells lack sarcomeres and myofibrils, they do contain large amounts of the contractile proteins actin and myosin. Actin filaments are anchored by dense bodies (similar to the Z discs in sarcomeres) to the sarcolemma.
A myoblast is an embryonic precursor cell that differentiates to give rise to the different muscle cell types. Differentiation is regulated by myogenic regulatory factors, including MyoD, Myf5, myogenin, and MRF4. GATA4 and GATA6 also play a role in myocyte differentiation. Skeletal muscle fibers are made when myoblasts fuse together; muscle fibers therefore are cells with multiple nuclei, known as myonuclei, with each cell nucleus originating from a single myoblast. The fusion of myoblasts is specific to skeletal muscle, and not cardiac muscle or smooth muscle. Myoblasts in skeletal muscle that do not form muscle fibers dedifferentiate back into myosatellite cells. These satellite cells remain adjacent to a skeletal muscle fiber, situated between the sarcolemma and the basement membrane of the endomysium (the connective tissue investment that divides the muscle fascicles into individual fibers). To re-activate myogenesis, the satellite cells must be stimulated to differentiate into new fibers.

Muscle contraction in striated muscle

Skeletal muscle contraction

When contracting, thin and thick filaments slide past each other using adenosine triphosphate. This pulls the Z discs closer together in a process called the sliding filament mechanism. The contraction of all the sarcomeres results in the contraction of the whole muscle fiber. This contraction of the myocyte is triggered by the action potential over the cell membrane of the myocyte. The action potential uses transverse tubules to get from the surface to the interior of the myocyte; the tubules are continuous with the cell membrane. Sarcoplasmic reticula are membranous bags that transverse tubules touch but remain separate from. These wrap themselves around each sarcomere and are filled with Ca2+. Excitation of a myocyte causes depolarization at its synapses, the neuromuscular junctions, which triggers an action potential. With a single neuromuscular junction, each muscle fiber receives input from just one somatic efferent neuron. An action potential in a somatic efferent neuron causes the release of the neurotransmitter acetylcholine. When the acetylcholine is released it diffuses across the synapse and binds to a receptor on the sarcolemma, a term unique to muscle cells that refers to the cell membrane. This initiates an impulse that travels across the sarcolemma. When the action potential reaches the sarcoplasmic reticulum it triggers the release of Ca2+ from the Ca2+ channels. The Ca2+ flows from the sarcoplasmic reticulum into the sarcomere, where it allows the thin and thick filaments to interact. This causes the filaments to start sliding and the sarcomeres to become shorter. This requires a large amount of ATP, as it is used in both the attachment and release of every myosin head. Very quickly Ca2+ is actively transported back into the sarcoplasmic reticulum, which blocks the interaction between the thin and thick filaments. This in turn causes the muscle cell to relax. There are four main types of muscle contraction: twitch, treppe, tetanus, and isometric/isotonic. Twitch contraction is the process in which a single stimulus signals a single contraction. In twitch contraction, the length of the contraction may vary depending on the size of the muscle cell. During treppe (or summation) contraction muscles do not start at maximum efficiency; instead, they achieve increased strength of contraction due to repeated stimuli.
Tetanus involves a sustained contraction of muscles due to a series of rapid stimuli, which can continue until the muscles fatigue. Isometric contractions are skeletal muscle contractions that do not cause movement of the muscle. However, isotonic contractions are skeletal muscle contractions that do cause movement.

Cardiac muscle contraction

Specialized cardiomyocytes in the sinoatrial node generate electrical impulses that control the heart rate. These electrical impulses coordinate contraction throughout the remaining heart muscle via the electrical conduction system of the heart. Sinoatrial node activity is modulated, in turn, by nerve fibers of both the sympathetic and parasympathetic nervous systems. These systems act to increase and decrease, respectively, the rate of production of electrical impulses by the sinoatrial node.

The evolutionary origin of muscle cells in animals is highly debated. One view is that muscle cells evolved once, and thus all muscle cells have a single common ancestor. Another view is that muscle cells evolved more than once, and any morphological or structural similarities are due to convergent evolution and the development of shared genes that predate the evolution of muscle – even the mesoderm (the mesoderm is the germ layer that gives rise to muscle cells in vertebrates). Schmid & Seipel (2005) argue that the origin of muscle cells is a monophyletic trait that occurred concurrently with the development of the digestive and nervous systems of all animals, and that this origin can be traced to a single metazoan ancestor in which muscle cells are present. They argue that the molecular and morphological features of the muscle cells in cnidarians and ctenophores are similar enough to those of bilaterians that there would be one ancestor in metazoans from which muscle cells derive. In this case, Schmid & Seipel argue that the last common ancestor of Bilateria, Ctenophora, and Cnidaria was a triploblast, or an organism with three germ layers, and that diploblasty, meaning an organism with two germ layers, evolved secondarily, a conclusion based on their observation of the lack of mesoderm or muscle in most cnidarians and ctenophores. By comparing the morphology of cnidarians and ctenophores to bilaterians, Schmid & Seipel were able to conclude that there were myoblast-like structures in the tentacles and gut of some species of cnidarians and the tentacles of ctenophores. Since this is a structure unique to muscle cells, these scientists determined, based on the data collected by their peers, that this is a marker for striated muscles similar to that observed in bilaterians. The authors also remark that the muscle cells found in cnidarians and ctenophores are often contested because these muscle cells originate from the ectoderm rather than the mesoderm or mesendoderm. The origin of true muscle cells is argued by other authors to be the endoderm portion of the mesoderm and the endoderm. However, Schmid & Seipel (2005) counter skepticism – about whether the muscle cells found in ctenophores and cnidarians are "true" muscle cells – by considering that cnidarians develop through a medusa stage and polyp stage. They note that in the hydrozoans' medusa stage, there is a layer of cells that separates from the distal side of the ectoderm and forms the striated muscle cells in a way similar to that of the mesoderm; they call this third separated layer of cells the entocodon.
Schmid & Seipel argue that even in bilaterians, not all muscle cells are derived from the mesendoderm: Their key examples are that in both the eye muscles of vertebrates, and the muscles of spiralians, these cells derive from the ectodermal mesoderm, rather than the endodermal mesoderm. Furthermore, they argue that since myogenesis does occur in cnidarians with the help of the same molecular regulatory elements found in the specification of muscle cells in bilaterians, that there is evidence for a single origin for striated muscle. In contrast to this argument for a single origin of muscle cells, Steinmetz, Kraus, et al. (2012) argue that molecular markers such as the myosin II protein used to determine this single origin of striated muscle predate the formation of muscle cells. They use an example of the contractile elements present in the Porifera, or sponges, that do truly lack this striated muscle containing this protein. Furthermore, Steinmetz, Kraus, et al. present evidence for a polyphyletic origin of striated muscle cell development through their analysis of morphological and molecular markers that are present in bilaterians and absent in cnidarians, ctenophores, and bilaterians. Steinmetz, Kraus, et al. showed that the traditional morphological and regulatory markers such as actin, the ability to couple myosin side chains phosphorylation to higher concentrations of the positive concentrations of calcium, and other MyHC elements are present in all metazoans not just the organisms that have been shown to have muscle cells. Thus, the usage of any of these structural or regulatory elements in determining whether or not the muscle cells of the cnidarians and ctenophores are similar enough to the muscle cells of the bilaterians to confirm a single lineage is questionable according to Steinmetz, Kraus, et al. Furthermore, they explain that the orthologues of the Myc genes that have been used to hypothesize the origin of striated muscle occurred through a gene duplication event that predates the first true muscle cells (meaning striated muscle), and they show that the Myc genes are present in the sponges that have contractile elements but no true muscle cells. Steinmetz, Kraus, et al. also showed that the localization of this duplicated set of genes that serve both the function of facilitating the formation of striated muscle genes, and cell regulation and movement genes, were already separated into striated much and non-muscle MHC. This separation of the duplicated set of genes is shown through the localization of the striated much to the contractile vacuole in sponges, while the non-muscle much was more diffusely expressed during developmental cell shape and change. Steinmetz, Kraus, et al. found a similar pattern of localization in cnidarians except with the cnidarian N. vectensis having this striated muscle marker present in the smooth muscle of the digestive tract. Thus, they argue that the pleisiomorphic trait of the separated orthologues of much cannot be used to determine the monophylogeny of muscle, and additionally argue that the presence of a striated muscle marker in the smooth muscle of this cnidarian shows a fundamental different mechanism of muscle cell development and structure in cnidarians. Steinmetz, Kraus, et al. 
(2012) further argue for multiple origins of striated muscle in the metazoans by explaining that a key set of genes used to form the troponin complex for muscle regulation and formation in bilaterians is missing from the cnidarians and ctenophores, and that, of 47 structural and regulatory proteins observed, Steinmetz, Kraus, et al. were not able to find even one unique striated muscle cell protein that was expressed in both cnidarians and bilaterians. Furthermore, the Z-disc seemed to have evolved differently even within bilaterians, and there is a great deal of diversity of proteins developed even within this clade, showing a large degree of radiation for muscle cells. Through this divergence of the Z-disc, Steinmetz, Kraus, et al. argue that there are only four common protein components that were present in all bilaterian muscle ancestors, and that, of these four necessary Z-disc components, only an actin protein (which they have already argued is an uninformative marker because of its plesiomorphic state) is present in cnidarians. Through further molecular marker testing, Steinmetz et al. observe that non-bilaterians lack many regulatory and structural components necessary for bilaterian muscle formation, and they do not find any set of proteins unique to both bilaterians and cnidarians and ctenophores that is not also present in earlier, more primitive animals such as the sponges and amoebozoans. Through this analysis, the authors conclude that, because of the lack of elements that bilaterian muscles are dependent on for structure and usage, nonbilaterian muscles must be of a different origin with a different set of regulatory and structural proteins. In another take on the argument, Andrikou & Arnone (2015) use the newly available data on gene regulatory networks to look at how the hierarchy of genes and morphogens and other mechanisms of tissue specification diverge and are similar among early deuterostomes and protostomes. By understanding not only what genes are present in all bilaterians but also the time and place of deployment of these genes, Andrikou & Arnone develop a deeper understanding of the evolution of myogenesis. In their paper, Andrikou & Arnone (2015) argue that to truly understand the evolution of muscle cells the function of transcriptional regulators must be understood in the context of other external and internal interactions. Through their analysis, Andrikou & Arnone found that there were conserved orthologues of the gene regulatory network in both invertebrate bilaterians and cnidarians. They argue that having this common, general regulatory circuit allowed for a high degree of divergence from a single well-functioning network. Andrikou & Arnone found that the orthologues of genes found in vertebrates had been changed through different types of structural mutations in the invertebrate deuterostomes and protostomes, and they argue that these structural changes in the genes allowed for a large divergence of muscle function and muscle formation in these species. Andrikou & Arnone were able to recognize not only differences due to mutation in the genes found in vertebrates and invertebrates but also the integration of species-specific genes that could also cause divergence from the original gene regulatory network function.
Thus, although a common muscle patterning system has been determined, they argue that this could be due to a more ancestral gene regulatory network being co-opted several times across lineages, with additional genes and mutations causing very divergent development of muscles. Thus it seems that the myogenic patterning framework may be an ancestral trait. However, Andrikou & Arnone explain that the basic muscle patterning structure must also be considered in combination with the cis-regulatory elements present at different times during development. In contrast with the high level of conservation of the gene family apparatus, Andrikou and Arnone found that the cis-regulatory elements were not well conserved in either time or place in the network, which could indicate a large degree of divergence in the formation of muscle cells. Through this analysis, it seems that the myogenic GRN is an ancestral GRN, with actual changes in myogenic function and structure possibly being linked to later co-option of genes at different times and places. Evolutionarily, specialized forms of skeletal and cardiac muscles predated the divergence of the vertebrate/arthropod evolutionary line. This indicates that these types of muscle developed in a common ancestor sometime before 700 million years ago (mya). Vertebrate smooth muscle was found to have evolved independently from the skeletal and cardiac muscle types.

Invertebrate muscle cell types

The properties used for distinguishing fast, intermediate, and slow muscle fibers can be different for invertebrate flight and jump muscle. To further complicate this classification scheme, the mitochondrial content and other morphological properties within a muscle fiber can change in a tsetse fly with exercise and age.

- Saladin, Kenneth S. (2011). Human anatomy (3rd ed.). New York: McGraw-Hill. pp. 72–73. ISBN 9780071222075. - Myocytes at the U.S. National Library of Medicine Medical Subject Headings (MeSH) - Scott, W; Stevens, J; Binder-Macleod, SA (2001). "Human skeletal muscle fiber type classifications". Physical Therapy. 81 (11): 1810–1816. doi:10.1093/ptj/81.11.1810. PMID 11694174. Archived from the original on 13 February 2015. - "Does anyone know why skeletal muscle fibers have peripheral nuclei, but the cardiomyocytes not? What are the functional advantages?". Archived from the original on 19 September 2017. - Betts, J. Gordon; Young, Kelly A.; Wise, James A.; Johnson, Eddie; Poe, Brandon; Kruse, Dean H.; Korol, Oksana; Johnson, Jody E.; Womble, Mark; Desaix, Peter (6 March 2013). "Cardiac muscle tissue". Retrieved 3 May 2021. - "Muscle tissues". Archived from the original on 13 October 2015. Retrieved 29 September 2015. - "Atrial structure, fibers, and conduction" (PDF). Retrieved 5 June 2021. - Saladin, Kenneth S. (2011). Human anatomy (3rd ed.). New York: McGraw-Hill. pp. 244–246. ISBN 9780071222075. - "Structure of Skeletal Muscle | SEER Training". training.seer.cancer.gov. - Klein, CS; Marsh, GD; Petrella, RJ; Rice, CL (July 2003). "Muscle fiber number in the biceps brachii muscle of young and old men". Muscle & Nerve. 28 (1): 62–8. doi:10.1002/mus.10386. PMID 12811774. S2CID 20508198. - Cho, CH; Lee, KJ; Lee, EH (August 2018). "With the greatest care, stromal interaction molecule (STIM) proteins verify what skeletal muscle is doing". BMB Reports. 51 (8): 378–387. doi:10.5483/bmbrep.2018.51.8.128. PMC 6130827. PMID 29898810. - Prasad, V; Millay, DP (8 May 2021). "Skeletal muscle fibers count on nuclear numbers for growth".
Seminars in Cell & Developmental Biology. 119: 3–10. doi:10.1016/j.semcdb.2021.04.015. PMC 9070318. PMID 33972174. S2CID 234362466. - Saladin, K (2012). Anatomy & Physiology: The Unity of Form and Function (6th ed.). New York: McGraw-Hill. pp. 403–405. ISBN 978-0-07-337825-1. - Sugi, Haruo; Abe, T; Kobayashi, T; Chaen, S; Ohnuki, Y; Saeki, Y; Sugiura, S; Guerrero-Hernandez, Agustin (2013). "Enhancement of force generated by individual myosin heads in skinned rabbit psoas muscle fibers at low ionic strength". PLOS ONE. 8 (5): e63658. Bibcode:2013PLoSO...863658S. doi:10.1371/journal.pone.0063658. PMC 3655179. PMID 23691080. - Charvet, B; Ruggiero, F; Le Guellec, D (April 2012). "The development of the myotendinous junction. A review". Muscles, Ligaments and Tendons Journal. 2 (2): 53–63. PMC 3666507. PMID 23738275. - Bentzinger, CF; Wang, YX; Rudnicki, MA (1 February 2012). "Building muscle: molecular regulation of myogenesis". Cold Spring Harbor Perspectives in Biology. 4 (2): a008342. doi:10.1101/cshperspect.a008342. PMC 3281568. PMID 22300977. - Ferrari, Roberto. "Healthy versus sick myocytes: metabolism, structure and function" (PDF). oxfordjournals.org/en. Oxford University Press. Archived from the original (PDF) on 19 February 2015. Retrieved 12 February 2015. - Betts, J. Gordon; Young, Kelly A.; Wise, James A.; Johnson, Eddie; Poe, Brandon; Kruse, Dean H.; Korol, Oksana; Johnson, Jody E.; Womble, Mark; Desaix, Peter (6 March 2013). "Smooth muscle". Retrieved 10 June 2021. - page 395, Biology, Fifth Edition, Campbell, 1999 - Perry R, Rudnick M (2000). "Molecular mechanisms regulating myogenic determination and differentiation". Front Biosci. 5: D750–67. doi:10.2741/Perry. PMID 10966875. - Zhao R, Watt AJ, Battle MA, Li J, Bandow BJ, Duncan SA (May 2008). "Loss of both GATA4 and GATA6 blocks cardiac myocyte differentiation and results in acardia in mice". Dev. Biol. 317 (2): 614–9. doi:10.1016/j.ydbio.2008.03.013. PMC 2423416. PMID 18400219. - Zammit, PS; Partridge, TA; Yablonka-Reuveni, Z (November 2006). "The skeletal muscle satellite cell: the stem cell that came in from the cold". Journal of Histochemistry and Cytochemistry. 54 (11): 1177–91. doi:10.1369/jhc.6r6995.2006. PMID 16899758. - Chal J, Oginuma M, Al Tanoury Z, Gobert B, Sumara O, Hick A, Bousson F, Zidouni Y, Mursch C, Moncuquet P, Tassy O, Vincent S, Miyazaki A, Bera A, Garnier JM, Guevara G, Heston M, Kennedy L, Hayashi S, Drayton B, Cherrier T, Gayraud-Morel B, Gussoni E, Relaix F, Tajbakhsh S, Pourquié O (August 2015). "Differentiation of pluripotent stem cells to muscle fiber to model Duchenne muscular dystrophy". Nature Biotechnology. 33 (9): 962–9. doi:10.1038/nbt.3297. PMID 26237517. S2CID 21241434. - Dowling JJ, Vreede AP, Kim S, Golden J, Feldman EL (2008). "Kindlin-2 is required for myocyte elongation and is essential for myogenesis". BMC Cell Biol. 9: 36. doi:10.1186/1471-2121-9-36. PMC 2478659. PMID 18611274. - "Structure, and Function of Skeletal Muscles". courses.washington.edu. Archived from the original on 15 February 2015. Retrieved 13 February 2015. - "Muscle Fiber Excitation". courses.washington.edu. University of Washington. Archived from the original on 27 February 2015. Retrieved 11 February 2015. - Ziser, Stephen. "Muscle Cell Anatomy & Function" (PDF). www.austincc.edu. Archived (PDF) from the original on 23 September 2015. Retrieved 12 February 2015. - Seipel, Katja; Schmid, Volker (1 June 2005). "Evolution of striated muscle: Jellyfish and the origin of triploblasty". Developmental Biology. 
282 (1): 14–26. doi:10.1016/j.ydbio.2005.03.032. PMID 15936326. - Steinmetz, Patrick R.H.; Kraus, Johanna E.M.; Larroux, Claire; Hammel, Jörg U.; Amon-Hassenzahl, Annette; Houliston, Evelyn; et al. (2012). "Independent evolution of striated muscles in cnidarians and bilaterians". Nature. 487 (7406): 231–234. Bibcode:2012Natur.487..231S. doi:10.1038/nature11180. PMC 3398149. PMID 22763458. - Andrikou, Carmen; Arnone, Maria Ina (1 May 2015). "Too many ways to make a muscle: Evolution of GRNs governing myogenesis". Zoologischer Anzeiger. Special Issue: Proceedings of the 3rd International Congress on Invertebrate Morphology. 256: 2–13. doi:10.1016/j.jcz.2015.03.005. - OOta, S.; Saitou, N. (1999). "Phylogenetic relationship of muscle tissues deduced from the superimposition of gene trees". Molecular Biology and Evolution. 16 (6): 856–867. doi:10.1093/oxfordjournals.molbev.a026170. ISSN 0737-4038. PMID 10368962. - Hoyle, Graham (1983). "8. Muscle cell diversity". Muscles and Their Neural Control. New York, NY: John Wiley & Sons. pp. 293–299. ISBN 9780471877097. - Anderson, M.; Finlayson, L.H. (1976). "The effect of exercise on the growth of mitochondria and myofibrils in the flight muscles of the Tsetse fly, Glossina morsitans". J. Morphol. 150 (2): 321–326. doi:10.1002/jmor.1051500205. S2CID 85719905.
In the last chapter we saw how to call a function. We mentioned a special memory called the stack but we did not delve into it. Let's see in this chapter how we can use the stack and why it is important in function calls. When we call a function, that same function can call other functions and so on. It is a bit as if we suspended the execution of a function in order to execute another one. The constraint here is that we always resume the execution when the called function ends. That is, we always return in the inverse order of the function calls. As a consequence, we can see the set of function calls active at some point as a stack (like a stack of plates). This property is important because a function, say F, may need to keep some temporary memory. When F calls another function G, we want this memory to be preserved when we return from G. Given that the activations of the functions follow a stack-like pattern, it seems natural that the temporary memory used by functions also follows this schema. This schema is so common that most architectures provide a specialized mechanism for this kind of «temporary memory associated to the activation of functions». That memory is called the stack basically because it follows a stack discipline: an element that is in the stack can only be removed when all the elements added after it have been removed.
The stack and the calling convention
Since the stack memory behaves like a stack, the only interesting thing we care about is its top element, as in practice the stack is never empty. Since the stack is memory and memory is accessed using addresses, the top of the stack is an address. This address is stored in a special register called sp, for stack pointer. Changing the value of sp changes the size of the stack. The whole stack memory ranges from the stack base, which is not kept anywhere and is usually conventional, to the stack pointer. The stack has, then, two basic operations: it grows and it shrinks. Growing is done when we need to add new elements to the stack. Shrinking is done when we want to remove such elements. A function will typically grow the stack to keep temporary memory and it will shrink the stack, by the same amount, before the function returns. This way, when returning from a call, the caller will see the stack as it was right before the call. We have not specified how the grow and shrink operations are actually implemented. An architecture can decide to make the stack grow towards higher addresses (and shrink towards lower addresses) or to make the stack grow towards lower addresses (and, hence, shrink towards higher addresses). AArch64 chooses the latter, so to grow the stack we just reduce the value in sp and to shrink it we increase it. The calling convention of AArch64 also dictates an additional constraint on the values that sp can take. Without going into too much detail, at any point where we use sp (except for strictly growing or shrinking it) its value must be a multiple of 16. This means that the addresses kept in sp will always be aligned to 16 bytes.
Operating the stack
The operation of adding an element to the stack is commonly known as a push. A push does two things: it first grows the stack by as many bytes as the size of the element and then does a store to the top of the stack. The inverse operation, removing something from the stack, is known as a pop.
In this case, first a load from the top of the stack is done to retrieve the value, and then the stack is shrunk by as many bytes as the size of the removed element. Recall that growing means subtracting the number of bytes from sp, and shrinking means adding the number of bytes to sp. Sometimes we do not want to retrieve the value; in that case we simply shrink the stack. We could implement push and pop using a combination of two instructions: a sub or add on sp plus a str or ldr. It would work, but these operations happen very frequently, so a combination of two instructions for each looks inefficient. Luckily we can cleverly use addressing modes to grow and shrink the stack at the same time that we perform a store or a load, respectively. If you recall chapter 5, we saw two addressing modes called pre-indexing and post-indexing. In these modes there is a base register plus an offset. In pre-index mode the computed address is the offset added to the value of the base register. In post-index mode the computed address is just the value of the base register. Both modes update the base register with the value of the offset added to the value of the base register. These modes are useful when accessing contiguous memory, and the elements of the stack are a kind of contiguous memory. With this insight, we can now implement a push using a store with pre-indexed mode and sp as the base register. This works because we want the store to use the address of the newly grown top of the stack, and we also want sp to be updated so it points to that new memory. For instance, we can preserve the value of x8 in the stack with str x8, [sp, #-8]!. Note that we use an offset of 8 because x8 is a 64-bit register. The offset is negative because in AArch64 the convention tells us to use a downwards-growing stack. A pop, conversely, is implemented using a post-indexed addressing mode. For instance, we can restore the preserved value of x8 with ldr x8, [sp], #8. Note that the offset in this case is positive, because now we are shrinking the stack. Well, this could work, but we're not quite there yet. The reason is that the convention tells us to keep the stack aligned to a multiple of 16. Assuming it was originally aligned, just doing a single str like the one above will break this property very easily. One option is making sure the stack stays aligned by using additional sub instructions. Another option is making sure we push and pop pairs of 64-bit registers. As each register takes 8 bytes, two of them obviously take 16 bytes, so if we push in pairs the stack remains aligned using a single instruction. To do this, AArch64 provides special store pair and load pair instructions, called stp and ldp. These instructions receive two registers and a single addressing mode. The registers are stored (or loaded) as consecutive elements starting at the address computed by the addressing mode. For instance, a sequence of two single-register pushes can be rewritten as a single stp, as in the sketch below. As you can see, the first register in the instruction will be the one at the top of the stack. The second register is stored contiguously after it (towards the bottom of the stack). A similar thing happens with the corresponding ldp implementing a pop of these two registers. As long as we use the same order of registers in stp and ldp we are fine. Note that this is a stack, so the first elements we put in must be the last to leave. This means that if we want to keep and restore, say, x11 in addition to another pair, the pushes and pops must mirror each other, as shown in the sketch below. So basically, what is pushed first is popped last. Or, similarly: the order of pops must be the opposite of the order of pushes.
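The exact snippets referred to above are not reproduced in this extract, so here is a minimal sketch of what they would look like under the conventions just described (the pairing of x11 with x12 at the end is only an illustrative choice):

    // Push and pop of a single register (on its own this breaks 16-byte alignment)
    str x8, [sp, #-8]!        // sp ← sp - 8, then x8 stored at [sp]   (push x8)
    ldr x8, [sp], #8          // x8 ← [sp], then sp ← sp + 8           (pop x8)

    // Two single pushes ...
    str x9, [sp, #-8]!        // push x9
    str x8, [sp, #-8]!        // push x8, now at the top of the stack

    // ... can be rewritten as one store pair, which also keeps sp 16-byte aligned.
    stp x8, x9, [sp, #-16]!   // sp ← sp - 16; x8 stored at [sp], x9 at [sp, #8]
    ldp x8, x9, [sp], #16     // x8 ← [sp], x9 ← [sp, #8]; then sp ← sp + 16

    // Pushes and pops must mirror each other: what is pushed first is popped last.
    stp x8, x9, [sp, #-16]!   // pushed first
    stp x11, x12, [sp, #-16]! // pushed second
    // ... body of the function ...
    ldp x11, x12, [sp], #16   // popped first
    ldp x8, x9, [sp], #16     // popped last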
The archetypical example when explaining recursion is computing the Fibonacci numbers by doing a direct translation of the formula. This is not an efficient way to compute such numbers (in the rare case you actually need them) but it makes for a simple example involving recursion. Recursion usually requires a stack to work (except for a subset of special recursive functions that don't), so this will showcase how to manipulate it. We will write a program that will ask for a number n and will compute the n-th Fibonacci number. Let's write the main function first.

1  .data
2
3  msg_input: .asciz "Please type a number: "
4  scanf_fmt : .asciz "%d"
5  msg_output: .asciz "Fibonacci number %d is %ld\n"
6
7  .text
8
9  .global main
10 main:
11     stp x19, x30, [sp, #-16]!  // Keep x19 and x30 (link register)
12     sub sp, sp, #16            // Grow the stack to make room for a local
13                                // variable used by scanf.
14     /* Our stack at this point will look like this
15          Contents   Address
16          | var |    [sp]       We will use the first 4 bytes for scanf
17          |     |    [sp, #8]
18          | x19 |    [sp, #16]
19          | x30 |    [sp, #24]
20     */
21
22     // Set up first call to printf
23     // printf("Please type a number: ");
24
25     ldr x0, addr_msg_input     // x0 ← &msg_input [64-bit]
26     bl printf                  // call printf
27
28     // Set up call to scanf
29     // scanf("%d", &var);
30     mov x1, sp                 // x1 ← sp
31                                // the first 4 bytes pointed by sp will be 'var'
32     ldr x0, addr_scanf_fmt     // x0 ← &scanf_fmt [64-bit]
33     bl scanf                   // call scanf
34
35     // Set up call to fibonacci
36     // res = fibonacci(var);
37     ldr w0, [sp]               // w0 ← *sp [32-bit]
38                                // this is var in the stack
39     bl fibonacci               // call fibonacci
40
41     // Setup call to printf
42     // printf("Fibonacci number %d is %ld\n", var, res);
43     mov x2, x0                 // x2 ← x0
44                                // this is 'res' in the call to fibonacci
45     ldr w1, [sp]               // w1 ← *sp [32-bit]
46     ldr x0, addr_msg_output    // x0 ← &msg_output [64-bit]
47     bl printf                  // call printf
48
49     add sp, sp, #16            // Shrink the stack.
50     ldp x19, x30, [sp], #16    // Restore x19 and x30 (link register)
51     mov w0, #0                 // w0 ← 0
52     ret                        // Leave the function
53
54 addr_msg_input: .dword msg_input
55 addr_msg_output: .dword msg_output
56 addr_scanf_fmt: .dword scanf_fmt

The main program first asks the user for a number using printf, then reads a 32-bit integer from the input using scanf. With that number we call fibonacci (we will see its code later) and then just print the result for the given number, again using printf. In lines 3 to 5 we define a few strings that we will need for the printf and scanf calls. The directive .asciz means "emit these characters as ASCII bytes and add a zero byte at the end". The zero byte is required by C routines that use it to tell where the string ends. The string scanf_fmt means "read an integer and store its value as a 32-bit signed number in the given address" (you will see this in the call to scanf below). The string msg_output means "print an integer as a 32-bit decimal number and another one as a 64-bit decimal number". As this function calls other functions it has to keep the value of x30, line 11. Recall that executing a bl instruction changes x30 to be the address of the instruction after the bl, and that a ret instruction uses x30 to know where to return. So if our function calls another function it must keep x30. As we have to keep the stack aligned to 16 bytes, storing x30 alone is not enough, so we will also keep another register even if we do not use it. Conventionally we will use x19 as it is the first callee-saved register. As mentioned above, scanf reads an integer from the input and stores it in some memory.
We could use a global variable for that, but we can also use the stack; we just need to make room in it first. So we grow it by 16 bytes, line 12. Actually we only need 4 bytes but, recall, the stack must be kept 16-byte aligned, so yes, we will waste 12 bytes in this case. Then we do the first call to printf, lines 25 to 26. This call only receives one parameter, which is the address of the string msg_input. So we simply load the address in x0 and then call printf. Now we do the call to scanf. This call receives first the format of the input to read (the string %d that we have in scanf_fmt) and then the address of the memory where scanf will store the result. The first parameter is the address of scanf_fmt, so we load it in x0, line 32. The second parameter is the address where we want scanf to store the read integer. In this case we will use the first 4 bytes at the address pointed to by sp. So we simply copy the value of sp into x1 using mov, line 30. Now with everything in place we can do the call to scanf, line 33. Note: the order in which we set up the registers for parameters is usually not very relevant (except for those cases where it might make things easier, of course). Once scanf has returned, the top of the stack, pointed to by sp, contains an integer that we can pass to fibonacci. So we load it, line 37, in w0. And we're now ready for the call to fibonacci, line 39. The fibonacci function receives a 32-bit integer as a parameter and returns a 64-bit integer as the result. The calling convention of AArch64 says that the parameter is passed in w0 and the result is returned in x0. So to prepare the call to the final printf, we have to make sure our 64-bit result is in the right register, so we copy x0 (set by the fibonacci call) to x2, line 43, and then we load again from the stack the value we passed to fibonacci, but this time we load it into w1, line 45. Lastly we load into x0 the address of msg_output. Now we can call printf to show the results of our computation. Now the only thing that remains to do in this function is the clean-up. We shrink the stack that we grew for the local variable, line 49, and then we restore the values of x19 and x30 (the link register, which was modified by the bl instructions), line 50. Now we can return, line 52, but since this is main and it returns an integer, we just make sure w0 is set to zero right before returning, line 51. Now we can see the code of the fibonacci function.

59 fibonacci:
60     // fibonacci(n) -> result
61     // n is 32-bit and will be passed in w0
62     // result is 64-bit and will be returned in x0
63
64     stp x19, x30, [sp, #-16]!   // Keep x19 and x30 (link register)
65     stp x20, x21, [sp, #-16]!   // Keep x20 and x21
66
67     /* Our stack at this point will look like this
68          Contents   Address
69          | x20 |    [sp]
70          | x21 |    [sp, #8]
71          | x19 |    [sp, #16]
72          | x30 |    [sp, #24]
73     */
74
75     cmp w0, #1              // Compare w0 with 1 and update the flags
76     ble simple_case         // if w0 <= 1 branch to simple_case
77                             // (otherwise continue to recursive_case)
78
79 recursive_case:             // recursive case
80                             // (this label is not used, added for clarity)
81     mov w19, w0             // w19 ← w0
82     // Set up call to fibonacci
83     // fibonacci(n-1);
84     sub w0, w0, #1          // w0 ← w0 - 1
85     bl fibonacci            // call fibonacci
86     mov x20, x0             // x20 ← x0
87
88     sub w0, w19, #2         // w0 ← w19 - 2
89     bl fibonacci            // call fibonacci
90     mov x21, x0             // x21 ← x0
91
92     add x0, x20, x21        // x0 ← x20 + x21
93     b end                   // (unconditional) branch to end
94
95 simple_case:
96     sxtw x0, w0             // x0 ← ExtendSigned32To64(w0)
97
98 end:
99     ldp x20, x21, [sp], #16    // Restore x20 and x21
100    ldp x19, x30, [sp], #16    // Restore x19 and x30 (link register)
101    ret

Similar to the main function, we first start by keeping all the registers that must be restored upon leaving the function. So we keep x19, x30, x20 and x21, lines 64 to 65. In contrast to the main function, where x19 was not used but we kept and restored it to keep the stack 16-byte aligned, this time we will use all the preserved registers. Fibonacci is actually a sequence of numbers Fi defined by the following recurrence:
- F0 = 0
- F1 = 1
- Fn = Fn-1 + Fn-2, where n > 1
This means that fibonacci(0) and fibonacci(1) just return the parameter, 0 and 1 respectively; this is the simple case. Otherwise fibonacci(n) has to add fibonacci(n-1) and fibonacci(n-2) to compute the result. In order to tell whether this is the simple case or not, we just compare w0 with 1, line 75. If w0 ≤ 1 then this is the simple case and we branch to it, line 76. In the simple case we simply extend w0 from 32 to 64 bits, line 96. The instruction sxtw is in practice like a move that also sign-extends (and the name of the instruction is deliberately the same as the extending operators we saw in chapter 3). For the recursive case, we first need to make sure we will not lose the value of w0, because this is a caller-saved register, meaning that its content will be lost after a function call. So we keep it in register w19, line 81, which is a callee-saved register and which we already preserved at the beginning of the function. That said, we have not lost the value of w0 yet, so we can still use it to compute the parameter for fibonacci(n-1): we subtract 1 from w0, line 84. Then we do the call, line 85. After the call we want to keep the result, so we copy it to a callee-saved register, this time x20, line 86. For the second call to fibonacci, the original value of w0, from which we could compute n-2, has been lost already. However, we kept it in w19, so we can compute n-2 from it and store the result in w0, line 88. Now we call fibonacci, line 89, and similarly we keep the result in another callee-saved register, this time x21, line 90. Finally we compute the result of this fibonacci as the sum of x20, which contains the value of fibonacci(n-1), and x21, which contains the value of fibonacci(n-2), line 92. Note that we could have coalesced lines 90 and 92 into a single add x0, x0, x20, but I did it in two steps for clarity. Since we do not want to run the simple case now, we just branch to the end of the function, line 93. Like in any other function, clean-up must be in order, so we restore the registers we kept at the beginning, lines 99 and 100. OK, let's try our program. Yay! Note that this algorithm is very inefficient, so the Fibonacci number 40 will already be really slow to compute. A big enough number will also overflow 64-bit numbers. That's all for today.
In the previous blog, we learnt how to perform forward propagation. In this blog, we will continue the same example and correct the errors in the prediction using the back-propagation technique.

What is Back Propagation?
Recall that we created a 3-layer (2 input, 2 hidden, and 2 output) network. Once we added the bias terms to our network, our network took the following shape. After completing forward propagation, we saw that our model was incorrect, in that it assigned a greater probability to Class 0 than Class 1. Now, we will correct this using backpropagation.

Why Backpropagation?
During forward propagation, we initialized the weights randomly. Therein lies the issue with our model. Given that we randomly initialized our weights, the probabilities we get as output are also random. Thus, we must have some means of making our weights more accurate so that our output will be more accurate. We adjust these random weights using backpropagation.

Loss Function
While performing back-propagation we need to compute how good our predictions are. To do this, we use the concept of a Loss (or Cost) function. The Loss function measures the difference between our predicted and actual values. We create a Loss function and then look for the minimum of that function in order to optimize our model and improve our prediction's accuracy. In this document, we will discuss one such technique, called Gradient Descent, which is used to reduce this Loss. Depending on the problem we choose a certain type of loss function. In this example, we will use the Mean Squared Error or MSE method to calculate the Loss. In the MSE method, the Loss is calculated as the sum of the squares of the differences between actual and predicted values.

Loss = Sum (Predicted - Actual)²

Let us say that our Loss or error in prediction looks like this: We aim to reduce the loss by changing the weights such that the loss converges to the lowest possible value. We try to reduce the loss in a controlled way, by taking small steps towards the minimum loss. This process is called Gradient Descent (GD). While performing GD, we need to know the direction in which the weights should move. In other words, we need to decide whether to increase or decrease each weight. To know this direction, we take the derivative of our Loss function; this gives us the direction of change of the function. The weights are then updated using the Gradient Descent rule, which in its usual form is w_new = w_old − α × ∂J/∂w. Here the alpha term, α, is known as the learning rate and is multiplied by the derivative of our Loss function J (please recollect that we discussed how to calculate the derivatives of a function in the chain rule of derivatives document). We subtract this product from our initial weight to update it. It is also to be noted that this form of the derivative is known as a partial derivative. While finding the partial derivative, the remaining terms are treated as constants. If you consider the curve in the above figure as our loss function with respect to a feature, then we can say that the derivative is the slope of our loss function and represents the instantaneous rate of change of y with respect to x. While performing back-propagation we are to find the derivative of our Loss function with respect to our weights. In other words, we are asking "How does our Loss function change when we change our weights by one unit?". We then multiply this by the learning rate, alpha. The learning rate controls the step-size of the movement towards the minima.
Intuitively, if we have a large learning rate, we are going to take big steps; in contrast, if we have a small learning rate, we are going to take small steps. Thus, the learning rate multiplied by the derivative can be thought of as the step taken over the domain of our Loss function. Once we make this step, we update our weights. And this process is repeated for each weight. In the example below, we will demonstrate the process of backpropagation in a stepwise manner.

Backpropagation Stepwise
Let's break the process of backpropagation down into actionable steps.
- Calculate the Loss function (i.e. the total error of the neural network)
- Calculate the partial derivatives of the total error / Loss function w.r.t. each weight
- Perform Gradient Descent and update our weights

Error = (Target - Output)²

This is the error for a single class. If we want to compute the error in the predicted probabilities for both classes of an example, then we combine the errors as follows.

Total Error = Error₁ + Error₂

Where Error₁ and Error₂ represent the errors in the predictions for the two classes. Recall that our output was a3, which was computed to be: denoting a lesser prediction for Class 1 than Class 0. In our example, we stated that Class 1 should have had a greater probability and thus been our predicted class label. To further illustrate this, we create some hypothetical target probability values for Class 0 and Class 1 for ease of understanding. Let us assign the following target values (t) for the output layer probabilities: Now, let's compute the errors. So, the Total Error in the prediction is 0.009895. Each error contains the predicted value, and each predicted value is a function of the weights and inputs from the previous layer. Extending this logic, one can say that our total error is a function of the different weights, or in other words is multivariate. And because we have multiple weights, we must use partial derivatives, that is, find out how a change in one specific weight changes our total error equation. This means that we must use the chain rule to decompose the errors. Once we have computed the partial derivative of our error function with respect to a weight, we can then apply the Gradient Descent equation to update that weight. We repeat this for each of the weights and for all the examples in the training data. This process is repeated many times, and every such pass over all the examples is called an Epoch. We perform these passes until the loss converges, or stops improving. Now that we have understood the process of backpropagation, let's implement it. To perform the backpropagation, we need to find the partial derivatives of our error function w.r.t. each of our weights. Recall that we have a total of eight weights (i.e. before adding bias terms). We have two weights from our first input to our hidden layer and two weights from our second input to our hidden layer. We also have four weights from our hidden layer to our output layer. Let's label these weights as follows. We call the weights from our first input neuron w1 and w3, and the weights from the second input neuron w2 and w4. The weights from our hidden layer's first neuron are w5 and w7, and the weights from the second neuron in the hidden layer are w6 and w8. In this example, we will demonstrate the backpropagation for the weight w5 (a short numeric sketch of this computation follows below). Note that we can use the same process to update all the other weights in the network.
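The target values, hidden-layer outputs and initial weights used in this example appear in figures that are not reproduced here, so the following is only a rough sketch of the three-term chain rule and the gradient-descent update for one output-layer weight. The target, hidden output and initial w5 below are placeholders (only the 0.733 output value and the 0.1 learning rate are quoted in the text), so the result will not match the blog's 0.3995 exactly:

    # Hypothetical numbers for illustration only.
    target_o1 = 0.0      # assumed target probability for output neuron 1
    out_o1    = 0.733    # sigmoid output of output neuron 1 (value quoted in the text)
    out_h1    = 0.6      # assumed output of the hidden neuron feeding w5
    w5        = 0.4      # assumed initial value of w5
    alpha     = 0.1      # learning rate used in the blog

    # Chain rule: dE/dw5 = dE/d(out) * d(out)/d(net) * d(net)/dw5
    dE_dout   = 2 * (out_o1 - target_o1)   # derivative of (target - output)^2 w.r.t. the output
    dout_dnet = out_o1 * (1 - out_o1)      # derivative of the sigmoid, about 0.1958 here
    dnet_dw5  = out_h1                     # the hidden output that w5 multiplies
    dE_dw5    = dE_dout * dout_dnet * dnet_dw5

    w5_new = w5 - alpha * dE_dw5           # gradient-descent update for w5
    print(round(w5_new, 4))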
Let us see how to represent the partial derivative of the loss with respect to the weight w5, using the chain rule. Where 'i' in the subscript denotes the first neuron in the output layer. To compute the first derivative of the chain, we express our total error equation as: Here j in the subscript denotes the second neuron in the output layer. The partial derivative of our error equation with respect to the output is: Substituting the corresponding values, we will get: Next, we find the second term in our equation. Recall that in the forward propagation step, we used the sigmoid or logistic function as our activation function. So, for calculating the second element in the chain we must take the partial derivative of the sigmoid with respect to its input. Now, recollect that the sigmoid function is as follows: The derivative of this activation function can also be written as follows: The derivative can be applied for the second term in the chain rule as follows: Substituting the output value in the equation above, we get: 0.7333 (1 - 0.733) = 0.1958. Next, we compute the final term in the chain equation. Our third term encompasses the inputs that we passed into our sigmoid activation function. Recall that during forward propagation, the outputs of the hidden layer are multiplied by the weights. These linear combinations are then passed into the activation function and the final output layer. Recollect that these weights are given by Theta2. And let us say that the outputs from our Hidden Layer are given as follows. To visualize the matrix multiplication that follows, please see the diagram below: Here, H1 and H2 denote the hidden layer neurons. Our equation for the third term is concerned with the partial derivative of the input into the node with respect to our fifth weight. Our fifth weight is associated with the second neuron in our hidden layer as shown above. So, when we perform the partial differentiation with respect to w5, all the other weights are treated as constants and their derivatives are taken as zeros. So, when the input, which is the value we received from the combination of Theta2 and the outputs of our Hidden Layer, is differentiated, the result looks like this: Where output is the hidden neuron H1's output. Now that we have found the value of the last term in our equation, we can compute the product of all three terms to derive the partial derivative of our error function w.r.t. w5. We can now use this partial derivative in our Gradient Descent equation as shown, to adjust the weight w5. So, the updated weight w5 is 0.3995. As you can see, the value of w5 has changed little, as our learning rate (0.1) is very small. This small change in the value of w5 may not affect the final probability much. But if the same process is performed multiple times for both the examples, and the weights are adjusted on every run (epoch), then we will get a final neural network that has the expected prediction.

Updating our Model
After completing backpropagation and updating both the weight matrices across all the layers multiple times, we arrive at the following weight matrices corresponding to the minima. We can now use these weights and complete the forward propagation to arrive at the best possible outputs. Recall that the first step in this process is to multiply the weights with the inputs as shown below. Recall that we take the transpose of our X matrix to ensure that our weights line up.
Here we are using our new updated weights for Theta1, and our matrix multiplication will now look like the following: This is our new z² matrix, or the output of the first layer. Recall that our next step in forward propagation was to apply the sigmoid function element-wise to our matrix. This will yield the following: Here, a² is the output of the hidden layer. Again, this is our activation layer and will serve as the new input into our final layer. We again add back our bias term and thus our new a² looks like the following: Now we will use our new values for our Theta2 weight matrix to create the input for our output layer. We now perform the following computation to arrive at the new value of our z3, or the output matrix. After this matrix multiplication, we apply our sigmoid function element-wise and arrive at the following for our final output matrix. We can see here that after performing backpropagation and using Gradient Descent to update our weights at each layer we have a prediction of Class 1 which is consistent with our initial assumptions. If you want to learn how to apply Neural Networks in trading, then please check our new course on Neural Networks In Trading.
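To tie the recap together, here is a minimal end-to-end sketch of the updated forward pass. The blog's updated Theta1, Theta2 and input matrices are shown only as figures, so the values below are placeholders; this sketch also transposes the weight matrices rather than X, which is an equivalent way of making the dimensions line up:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Placeholder values: 2 inputs (plus bias) -> 2 hidden -> 2 outputs.
    X      = np.array([[0.1, 0.5]])            # one training example (1 x 2)
    theta1 = np.array([[0.1, 0.3, 0.5],        # hidden-layer weights incl. bias column (2 x 3)
                       [0.2, 0.4, 0.6]])
    theta2 = np.array([[0.4, 0.5, 0.6],        # output-layer weights incl. bias column (2 x 3)
                       [0.7, 0.8, 0.9]])

    a1 = np.hstack([np.ones((1, 1)), X])       # add the bias term to the input
    z2 = a1 @ theta1.T                         # linear combination for the hidden layer
    a2 = sigmoid(z2)                           # element-wise sigmoid: hidden-layer output
    a2 = np.hstack([np.ones((1, 1)), a2])      # add the bias term back
    z3 = a2 @ theta2.T                         # linear combination for the output layer
    a3 = sigmoid(z3)                           # final class probabilities
    print(a3)                                  # predicted probabilities for Class 0 and Class 1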
On Aug. 14, 2021, a small near-Earth asteroid (NEA) designated 2021 PJ1 passed our planet at a distance of over 1 million miles (about 1.7 million kilometers). Between 65 and 100 feet (20 and 30 meters) wide, the recently discovered asteroid wasn’t a threat to Earth. But this asteroid’s approach was historic, marking the 1,000th NEA to be observed by planetary radar in just over 50 years. And only seven days later, planetary radar observed the 1,001st such object, but this one was much larger. Since the first radar observation of the asteroid 1566 Icarus in 1968, this powerful technique has been used to observe passing NEAs and comets (collectively known as near-Earth objects, or NEOs). These radar detections improve our knowledge of NEO orbits, providing the data that can extend calculations of future motion by decades to centuries and help definitively predict if an asteroid is going to hit Earth, or if it’s just going to pass close by. For example, recent radar measurements of the potentially hazardous asteroid Apophis helped eliminate any possibility of it impacting Earth for the next 100 years. In addition, they can provide scientists with detailed information on physical properties that could be matched only by sending a spacecraft and observing these objects up close. Depending on an asteroid’s size and distance, radar can be used to image its surface in intricate detail while also determining its size, shape, spin rate, and whether or not it is accompanied by one or more small moons. In the case of 2021 PJ1, the asteroid was too small and the observing time too short to acquire images. But as the 1,000th NEA detected by planetary radar, the milestone highlights the efforts to study the NEAs that have passed close to Earth. “2021 PJ1 is a small asteroid, so when it passed us at a distance of over a million miles, we couldn’t obtain detailed radar imagery,” said Lance Benner, who leads NASA’s asteroid radar research program at NASA’s Jet Propulsion Laboratory in Southern California. “Yet even at that distance, planetary radar is powerful enough to detect it and measure its velocity to a very high precision, which improved our knowledge of its future motion substantially.” Benner and his team led this effort using the 70-meter (230-foot) Deep Space Station 14 (DSS-14) antenna at the Deep Space Network’s Goldstone Deep Space Complex near Barstow, California, to transmit radio waves to the asteroid and receive the radar reflections, or “echoes.” Catching (Radio) Waves Of all the asteroids observed by planetary radar, well over half were observed by the large 305-meter (1,000-foot) telescope at Arecibo Observatory in Puerto Rico before it was damaged and decommissioned in 2020. The antenna collapsed soon after. Goldstone’s DSS-14 and 34-meter (112-foot) DSS-13 antennas have observed 374 near-Earth asteroids to date. Fourteen NEAs have also been observed in Australia using antennas at the Deep Space Network’s Canberra Deep Space Communication Complex to transmit radio waves to the asteroids and the CSIRO’s Australian Telescope Compact Array and Parkes Observatory in New South Wales to receive the radar reflections. Nearly three-quarters of all NEA radar observations have been made since NASA’s NEO Observations Program, now a part of its Planetary Defense Program, increased funding for this work 10 years ago. The most recent asteroid to be observed by radar made its approach by Earth only a week after 2021 PJ1. Between Aug. 
20 and 24, Goldstone imaged 2016 AJ193 as it passed our planet at a distance of 2.1 million miles (about 3.4 million kilometers). Although this asteroid was farther away than 2021 PJ1, its radar echoes were stronger because 2016 AJ193 is about 40 times larger, with a diameter of about three-quarters of a mile (1.3 kilometers). The radar images revealed considerable detail on the object’s surface, including ridges, small hills, flat areas, concavities, and possible boulders. “The 2016 AJ193 approach provided an important opportunity to study the object’s properties and improve our understanding of its future motion around the Sun,” said Shantanu Naidu, a scientist at JPL who led the Aug. 22 observations of 2016 AJ193. “It has a cometary orbit, which suggests that it may be an inactive comet. But we knew little about it before this pass, other than its size and how much sunlight its surface reflects, so we planned this observing campaign years ago.” NASA’s NEOWISE mission had previously measured 2016 AJ193’s size, but the Goldstone observations revealed more detail: It turns out to be a highly complex and interesting object that rotates with a period of 3.5 hours. Scientists will use these new observations of 2016 AJ193 – the 1,001st NEA observed by planetary radar – to better understand its size, shape, and composition. As with 2021 PJ1, measurements of its distance and speed during this approach also provided data that will reduce uncertainties in computing its orbit. “In addition to the surveys that use ground- and space-based optical telescopes to detect and track nearly 27,000 NEOs throughout our solar system, planetary radar is an important tool for monitoring asteroids that come close to Earth,” said Kelly Fast, NEO Observations Program Manager of the Planetary Defense Coordination Office at NASA Headquarters in Washington. “Reaching this milestone of now just over 1,000 radar detections of NEAs emphasizes the important contribution that has been made in characterizing this hazardous population, which is fundamental for our planetary defense efforts.” For more information about NASA's Planetary Defense Coordination Office, visit: For asteroid and comet news and updates, follow @AsteroidWatch on Twitter:
EVIDENCE FOR EVOLUTION =Evidence for Evolution= Evolution is the change in heritable traits of populations over successive generations. Over many generations new species can develop through a process called speciation. There is a wide range of evidence that support the idea that each of the species we see today evolved from a common ancestor. This evidence includes: *Fossil Evidence *Biogeography (species distribution) *Comparative anatomy *Comparative embryology *Genetic Evidence *Biochemical Evidence ==Fossil Evidence== Fossils are preserved remains or traces of animals, plants, and other organisms [image:http://i.imgur.com/TCtQXEi.png?1] Most fossils are found within layers of sedimentary rocks called strata. Deeper strata are usually older and therefore fossils from different time period can be compared. Analysis of fossils from different strata suggests that more complex, modern organisms evolved from simpler, more ancient organisms. The hominin (human) fossil record shows trends such as an increased tendency towards bipedalism (walking on two legs), smaller teeth / jaws and the development of a larger brain. Although people sometimes talk of a "missing link", in actual fact, the fossil record is full of intermediate species that no longer inhabit the Earth. [image:http://i.imgur.com/2UdzKB0.png?1] '''Transitional Fossils''' Major changes in lifestyle and anatomy would be subject to intense selection and so transitional (intermediate) forms would not be present for long periods of time. However, although less common, ''transitional'' fossils have been documented. For instance the acquisition of feathered wings by reptiles that would later evolve into birds (e.g. ''Archaeopteryx lithographica'' pictured left). ==Biogeography== [image:http://i.imgur.com/PJ7PG60.png?1] Biogeography is the study of species distributions. It examines how species have been distributed across different places at different times. The distribution of species shows a very clear pattern. More similar species tend to be found closer to one another geographically. The distribution of many animals and plants across different continents can be explained by continental drift (the movement tectonic plates). The continents were once all joined together in one giant super-continent. About 200-180 million years ago the southern half called Gondwanaland broke away. This would later split into what we now know as Antarctica, Africa, Australia, South America and India. These continents have some related species of plants and animals supporting the idea that a common ancestor once inhabited Gondwanaland. As regions separated, oceans became barriers to gene flow (inter-breeding) and different climates have caused each population to evolve into distinct species. However, they still share many features of their now extinct ancestors. ==Comparative Anatomy== Comparing the body structures (anatomy) of different species also supports the notion of a common ancestor. Closely related species have more anatomical (structural) similarities. Even less closely related species show evidence of underlying anatomical similarities, with common structural features that have been modified for a different function / purpose. [image:http://i.imgur.com/qGcQtih.png?1] Anatomical features that are derived from a common ancestor but have been adapted to a different purpose are called '''homologous structures'''. 
For instance the pentadactyl (5 digit) limb found in most vertebrates (animals with a spine) has the same general bone structure / pattern. However, the size and shape of each bone has been modified to serve a slightly different function. These "homologies" indicate that all of these species diverged from a common ancestor (see adaptive radiation) and that the basic limb plan has been adapted to meet the needs of different niches. [image:http://i.imgur.com/ovxSnDa.png?1] '''Vestigial organs''' Some animals possess inherited features that they no longer need. For instance whales still have the remains of a hip bone. It is significantly reduced (smaller), but serves no known function. This is evidence that whales have evolved from a once four-legged ancestor. The hind legs and hips which were no longer required have steadily become smaller and may one day be eliminated entirely. For now, whales are stuck with this "evolutionary baggage". '''''Analogous Structures''''' are features that have a very similar function but completely different anatomy. They normally occur when distantly related species occupy a similar environment. ==Comparative Embryology== [image:http://i.imgur.com/Jfc9AiB.png?2] All species start out as single celled organisms. Many species develop into much larger, more complex organisms after conception. If we compare the embryos of animals as they develop, we often find they are much more similar than their fully developed counterparts. Many of the anatomical differences between species only arise during our embryonic development. Different species often start with the same basic tissues or structures but they develop differently and are re-purposed into different structures as the organism develops. The more closely two species are related the later in development these differences usually emerge. This too supports the idea that we are descendants with modified structures that were inherited form a common ancestor. If you were to compare the embryos of these animals at what point do you think you could pick which one is human? ==Genetic Evidence== [image:http://i.imgur.com/jGfVsPP.png?1] The fact that the genetic code is universal to all living things suggests that we once had a common ancestor. Comparing the DNA sequence of two organisms can give us an idea of how closely related they are. For instance, your DNA sequence will be more similar to a direct relative than a stranger. Your DNA is more similar to other members of the same species than it is to other species. The more closely two DNA sequences match, the more recently they would have shared a common ancestor. By analysing the DNA from different species Scientists can start to generate family trees called '''''phylogenetic trees'''''. Scientists have devised a number of different ways to compare the DNA of different organisms such as: [https://www.pathwayz.org/Tree/Filter/SubTree/BIOTECHNOLOGY#!o999 DNA HYBRIDISATION], [https://www.pathwayz.org/Tree/Filter/SubTree/BIOTECHNOLOGY#!o1009 DNA PROFILING] and [https://www.pathwayz.org/Tree/Filter/SubTree/BIOTECHNOLOGY#!o1000 DNA SEQUENCING] ==Biochemical Evidence== Certain parts of our DNA sequence called genes each code for a unique sequence of amino acids called a polypeptide chain. These polypeptides fold into proteins that ultimately regulate our cellular functions thereby determining our characteristics. Evolution relies on mutations that alter the DNA sequence producing a new protein with an altered function. 
If the new function conveys some adaptive advantage it will be selected for (see [https://www.pathwayz.org/Tree/Filter/SubTree/PATHWAYZ/tag/41#!o289 natural selection]). However, not all mutations actually alter the amino acid sequence or structure of a protein. Therefore not every difference in the DNA sequence of two species represents an evolutionary change. Comparing the amino acid sequence or protein structures of two organisms gives a more accurate idea of their evolutionary relatedness.
NASA’s New Supercomputer Simulation Reveals Spiraling Supermassive Black Holes NASA’s most recent simulation is helping researchers understand more about how supermassive black holes function. These black holes often weigh millions to billions of times more than our Sun, but little is known about what happens when they collide. Using a supercomputer, researchers from the Goddard Space Flight Center finally have an idea as to what two black holes of this size would do if they interacted. The team used the physical effects established by Albert Einstein’s general theory of relativity. Gas in those systems would glow mostly in ultraviolet and X-ray light, according to NASA. “We know galaxies with central supermassive black holes combine all the time in the universe, yet we only see a small fraction of galaxies with two of them near their centers,” said Scott Noble, an astrophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “The pairs we do see aren’t emitting strong gravitational-wave signals because they’re too far away from each other. Our goal is to identify — with light alone — even closer pairs from which gravitational-wave signals may be detected in the future.” The NASA team also reminded us of this rather unsettling bit of information: nearly every galaxy the size of the Milky Way or larger has a supermassive black hole at its heart. Noble and the rest of the Goddard team published their analysis of the simulation in a recent edition of The Astrophysical Journal. Via: NASA Goddard
The bears are back - and it's mostly thanks to DJ Inkers for doodling such cute bears! :) This activity pack includes so many FUN and hands-on centers to reinforce beginning number skills. From number identification, place value, counting, addition and more! This pack includes so many fun activities to help get your students excited about MATH! Here is what's included:
1. Number Order: Students match the number bears with their corresponding number word cards (numbers to 53). They also practice their number order skills from 0-5, 0-10, 0-20 and 0-50. Recording sheets included.
2. Honey Jars – What Comes After? Number sequencing activity strips. Kids read the number sequence on the strip and then find the honey jar card with the number that comes next.
3. Honey Jars – What Comes Before? Number sequencing activity strips. Kids read the number sequence on the strip and then find the honey jar card with the number that comes before or after.
4. Dancing Bears. Number values practice. Students sort the cards into the correct honey jars (< than ten, > than ten, = to ten).
5. Bears Counting Dry Erase Cards: Kids count the number of objects on each card and write the sum on the card with a dry erase marker.
6. Bears Addition Dry Erase Cards: Kids count the number of objects on each side, record the number and then work out the sum.
7. Porridge Count Ten Frames: Kids will choose a ten frame card and fill in the frame on their recording sheets.
8. Roll & Color (3 sets of mats). Students roll the dice, add their numbers and then color in their numbers on their worksheets. The person who colors in all of them first wins. Two easier versions of this game involve identifying numbers on a die.
9. Printables. These were designed for ESL/ELL students or for whole class assessment / further practice on beginning number skills. The pages marked with ESL can be used easily with intervention students.
The standards this unit covers are:
Australian ACARA Standards: Number and place value: Establish understanding of the language and processes of counting by naming numbers in sequences, initially to and from 20, moving from any starting point. (ACMNA001) Number and place value: Connect number names, numerals and quantities, including zero, initially up to 10 and then beyond. (ACMNA002) Number and Algebra: Represent practical situations to model addition and sharing. (ACMNA004)
Common Core Standards: K.CC.A.3: Know number names and the count sequence. K.CC.A.2: Counting & cardinality. Know the number names & the count sequence. K.CC.C.6: Counting & cardinality. Compare numbers. K.OA.A.1: Operations & algebraic thinking. Understand addition as putting things together and adding to, and understand subtraction as taking apart and taking from.
Thank you so much for dropping by! **Sea of Knowledge**
Mathematics Grade 7 (1) Students extend their understanding of ratios and develop understanding of proportionality to solve single- and multi-step problems. Students use their understanding of ratios and proportionality to solve a wide variety of percent problems, including those involving discounts, interest, taxes, tips, and percent increase or decrease. Students solve problems about scale drawings by relating corresponding lengths between the objects or by using the fact that relationships of lengths within an object are preserved in similar objects. Students graph proportional relationships and understand the unit rate informally as a measure of the steepness of the related line, called the slope. They distinguish proportional relationships from other relationships. (2) Students develop a unified understanding of number, recognizing fractions, decimals (that have a finite or a repeating decimal representation), and percents as different representations of rational numbers. Students extend addition, subtraction, multiplication, and division to all rational numbers, maintaining the properties of operations and the relationships between addition and subtraction, and multiplication and division. By applying these properties, and by viewing negative numbers in terms of everyday contexts (e.g., amounts owed or temperatures below zero), students explain and interpret the rules for adding, subtracting, multiplying, and dividing with negative numbers. They use the arithmetic of rational numbers as they formulate expressions and equations in one variable and use these equations to solve problems. (3) Students continue their work with area from Grade 6, solving problems involving the area and circumference of a circle and surface area of three-dimensional objects. In preparation for work on congruence and similarity in Grade 8 they reason about relationships among two-dimensional figures using scale drawings and informal geometric constructions, and they gain familiarity with the relationships between angles formed by intersecting lines. Students work with three-dimensional figures, relating them to two-dimensional figures by examining cross-sections. They solve real-world and mathematical problems involving area, surface area, and volume of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes and right prisms. (4) Students build on their previous work with single data distributions to compare two data distributions and address questions about differences between populations. They begin informal work with random sampling to generate data sets and learn about the importance of representative samples for drawing inferences. Core Standards of the Course Strand: MATHEMATICAL PRACTICES (7.MP) The Standards for Mathematical Practice in Seventh Grade describe mathematical habits of mind that teachers should seek to develop in their students. Students become mathematically proficient in engaging with mathematical content and concepts as they learn, experience, and apply these skills and attitudes (Standards 7.MP.1–8). Make sense of problems and persevere in solving them. Explain the meaning of a problem and look for entry points to its solution. Analyze givens, constraints, relationships, and goals. Make conjectures about the form and meaning of the solution, plan a solution pathway, and continually monitor progress asking, "Does this make sense?" 
Consider analogous problems, make connections between multiple representations, identify the correspondence between different approaches, look for trends, and transform algebraic expressions to highlight meaningful mathematics. Check answers to problems using a different method. Reason abstractly and quantitatively. Make sense of the quantities and their relationships in problem situations. Translate between context and algebraic representations by contextualizing and decontextualizing quantitative relationships. This includes the ability to decontextualize a given situation, representing it algebraically and manipulating symbols fluently as well as the ability to contextualize algebraic representations to make sense of the problem. Construct viable arguments and critique the reasoning of others. Understand and use stated assumptions, definitions, and previously established results in constructing arguments. Make conjectures and build a logical progression of statements to explore the truth of their conjectures. Justify conclusions and communicate them to others. Respond to the arguments of others by listening, asking clarifying questions, and critiquing the reasoning of others. Model with mathematics. Apply mathematics to solve problems arising in everyday life, society, and the workplace. Make assumptions and approximations, identifying important quantities to construct a mathematical model. Routinely interpret mathematical results in the context of the situation and reflect on whether the results make sense, possibly improving the model if it has not served its purpose. Use appropriate tools strategically. Consider the available tools and be sufficiently familiar with them to make sound decisions about when each tool might be helpful, recognizing both the insight to be gained as well as the limitations. Identify relevant external mathematical resources and use them to pose or solve problems. Use tools to explore and deepen their understanding of concepts. Attend to precision. Communicate precisely to others. Use explicit definitions in discussion with others and in their own reasoning. They state the meaning of the symbols they choose. Specify units of measure and label axes to clarify the correspondence with quantities in a problem. Calculate accurately and efficiently, express numerical answers with a degree of precision appropriate for the problem context. Look for and make use of structure. Look closely at mathematical relationships to identify the underlying structure by recognizing a simple structure within a more complicated structure. See complicated things, such as some algebraic expressions, as single objects or as being composed of several objects. For example, see 5 – 3(x – y)2 as 5 minus a positive number times a square and use that to realize that its value cannot be more than 5 for any real numbers x and y. Look for and express regularity in repeated reasoning. Notice if reasoning is repeated, and look for both generalizations and shortcuts. Evaluate the reasonableness of intermediate results by maintaining oversight of the process while attending to the details. Compute unit rates associated with ratios of fractions, including ratios of lengths, areas and other quantities measured in like or different units. For example, if a person walks 1/2 mile in each 1/4 hour, compute the unit rate as the complex fraction 1/2/1/4 miles per hour, equivalently 2 miles per hour. 
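As a purely illustrative aside (not part of the standard's text), the complex-fraction unit rate in the example above can be checked directly:

    from fractions import Fraction

    distance  = Fraction(1, 2)    # miles walked
    time      = Fraction(1, 4)    # hours taken
    unit_rate = distance / time   # the complex fraction (1/2)/(1/4)
    print(unit_rate)              # prints 2, i.e. 2 miles per hour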
- Decide whether two quantities are in a proportional relationship, e.g., by testing for equivalent ratios in a table or graphing on a coordinate plane and observing whether the graph is a straight line through the origin. - Identify the constant of proportionality (unit rate) in tables, graphs, equations, diagrams, and verbal descriptions of proportional relationships. - Represent proportional relationships by equations. For example, if total cost t is proportional to the number n of items purchased at a constant price p, the relationship between the total cost and the number of items can be expressed as t = pn. - Explain what a point (x, y) on the graph of a proportional relationship means in terms of the situation, with special attention to the points (0, 0) and (1, r) where r is the unit rate. Use proportional relationships to solve multistep ratio and percent problems. Examples: simple interest, tax, markups and markdowns, gratuities and commissions, fees, percent increase and decrease, percent error. Apply and extend previous understandings of addition and subtraction to add and subtract rational numbers; represent addition and subtraction on a horizontal or vertical number line diagram. - Describe situations in which opposite quantities combine to make 0. For example, a hydrogen atom has 0 charge because its two constituents are oppositely charged. - Understand p + q as the number located a distance |q | from p, in the positive or negative direction depending on whether q is positive or negative. Show that a number and its opposite have a sum of 0 (are additive inverses). Interpret sums of rational numbers by describing real-world contexts. - Understand subtraction of rational numbers as adding the additive inverse, p – q = p + (–q). Show that the distance between two rational numbers on the number line is the absolute value of their difference, and apply this principle in real-world contexts. - Apply properties of operations as strategies to add and subtract rational numbers. - Understand that multiplication is extended from fractions to rational numbers by requiring that operations continue to satisfy the properties of operations, particularly the distributive property, leading to products such as (–1)(–1) = 1 and the rules for multiplying signed numbers. Interpret products of rational numbers by describing real-world contexts. - Understand that integers can be divided, provided that the divisor is not zero, and every quotient of integers (with non-zero divisor) is a rational number. If p and q are integers, then –(p/q) = (–p)/q = p/(–q). Interpret quotients of rational numbers by describing real-world contexts. - Apply properties of operations as strategies to multiply and divide rational numbers. - Convert a rational number to a decimal using long division; know that the decimal form of a rational number terminates in 0s or eventually repeats. Solve real-world and mathematical problems involving the four operations with rational numbers. Computations with rational numbers extend the rules for manipulating fractions to complex fractions. Strand: EXPRESSIONS AND EQUATIONS (7.EE) Use properties of operations to generate equivalent expressions (Standards 7.EE.1–2). Solve real-life and mathematical problems using numerical and algebraic expressions and equations (Standards 7.EE.3–4). Understand that rewriting an expression in different forms in a problem context can shed light on the problem and how the quantities in it are related. 
For example, a + 0.05a = 1.05a means that “increase by 5%” is the same as “multiply by 1.05.” Solve multi-step real-life and mathematical problems posed with positive and negative rational numbers in any form (whole numbers, fractions, and decimals), using tools strategically. Apply properties of operations to calculate with numbers in any form; convert between forms as appropriate; and assess the reasonableness of answers using mental computation and estimation strategies. For example: If a woman making $25 an hour gets a 10% raise, she will make an additional 1/10 of her salary an hour, or $2.50, for a new salary of $27.50. If you want to place a towel bar 9 3/4 inches long in the center of a door that is 27 1/2 inches wide, you will need to place the bar about 9 inches from each edge; this estimate can be used as a check on the exact computation. - Solve word problems leading to equations of the form px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers. Solve equations of these forms fluently. Compare an algebraic solution to an arithmetic solution, identifying the sequence of the operations used in each approach. For example, the perimeter of a rectangle is 54 cm. Its length is 6 cm. What is its width? - Solve word problems leading to inequalities of the form px + q > r or px + q < r, where p, q, and r are specific rational numbers. Graph the solution set of the inequality and interpret it in the context of the problem. For example: As a salesperson, you are paid $50 per week plus $3 per sale. This week you want your pay to be at least $100. Write an inequality for the number of sales you need to make, and describe the solutions. Strand: GEOMETRY (7.G) Draw, construct, and describe geometrical figures, and describe the relationships between them (Standards 7.G.1–3). Solve real-life and mathematical problems involving angle measure, area, surface area, and volume (Standards 7.G.4–6). Draw (freehand, with ruler and protractor, and with technology) geometric shapes with given conditions. Focus on constructing triangles from three measures of angles or sides, noticing when the conditions determine a unique triangle, more than one triangle, or no triangle. Know the formulas for the area and circumference of a circle and use them to solve problems; give an informal derivation of the relationship between the circumference and area of a circle. Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms. Strand: STATISTICS AND PROBABILITY (7.SP) Use random sampling to draw inferences about a population (Standards 7.SP.1–2). Draw informal comparative inferences about two populations (Standards 7.SP.3–4). Investigate chance processes and develop, use, and evaluate probability models (Standards 7.SP.5–8). Understand that statistics can be used to gain information about a population by examining a sample of the population; generalizations about a population from a sample are valid only if the sample is representative of that population. Understand that random sampling is more likely to produce representative samples and support valid inferences. Use data from a random sample to draw inferences about a population with an unknown characteristic of interest. Generate multiple samples (or simulated samples) of the same size to gauge the variation in estimates or predictions. 
For example, estimate the mean word length in a book by randomly sampling words from the book; predict the winner of a school election based on randomly sampled survey data. Gauge how far off the estimate or prediction might be. Informally assess the degree of visual overlap of two numerical data distributions with similar variabilities, measuring the difference between the centers by expressing it as a multiple of a measure of variability. For example, the mean height of players on the basketball team is 10 cm greater than the mean height of players on the soccer team, approximately twice the variability (mean absolute deviation) on either team; on a dot plot, the separation between the two distributions of heights is noticeable. Use measures of center and measures of variability for numerical data from random samples to draw informal comparative inferences about two populations. For example, decide whether the words in a chapter of a seventh-grade science book are generally longer than the words in a chapter of a fourth-grade science book. Understand that the probability of a chance event is a number between 0 and 1 that expresses the likelihood of the event occurring. Larger numbers indicate greater likelihood. A probability near 0 indicates an unlikely event, a probability around 1/2 indicates an event that is neither unlikely nor likely, and a probability near 1 indicates a likely event. Approximate the probability of a chance event by collecting data on the chance process that produces it and observing its long-run relative frequency, and predict the approximate relative frequency given the probability. For example, when rolling a number cube 600 times, predict that a 3 or 6 would be rolled roughly 200 times, but probably not exactly 200 times. Develop a probability model and use it to find probabilities of events. Compare probabilities from a model to observed frequencies; if the agreement is not good, explain possible sources of the discrepancy. - Develop a uniform probability model by assigning equal probability to all outcomes, and use the model to determine probabilities of events. For example, if a student is selected at random from a class, find the probability that Jane will be selected and the probability that a girl will be selected. - Develop a probability model (which may not be uniform) by observing frequencies in data generated from a chance process. For example, find the approximate probability that a spinning penny will land heads up or that a tossed paper cup will land open-end down. Do the outcomes for the spinning penny appear to be equally likely based on the observed frequencies? - Understand that, just as with simple events, the probability of a compound event is the fraction of outcomes in the sample space for which the compound event occurs. - Represent sample spaces for compound events using methods such as organized lists, tables and tree diagrams. For an event described in everyday language (e.g., “rolling double sixes”), identify the outcomes in the sample space which compose the event. - Design and use a simulation to generate frequencies for compound events. For example, use random digits as a simulation tool to approximate the answer to the question: If 40% of donors have type A blood, what is the probability that it will take at least 4 donors to find one with type A blood? http://www.uen.org - in partnership with Utah State Board of Education (USBE) and Utah System of Higher Education (USHE). 
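As an illustration of the simulation standard above (the type A blood question), here is a hedged Python sketch. It is an aid for readers, not part of the standards text; the 40% figure comes from the example itself.

```python
import random

def donors_until_type_a(p=0.40):
    """Count how many donors are checked until the first type A donor appears."""
    count = 0
    while True:
        count += 1
        if random.random() < p:
            return count

trials = 100_000
at_least_four = sum(donors_until_type_a() >= 4 for _ in range(trials))
print(at_least_four / trials)  # should be near (1 - 0.4)**3 = 0.216
```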
Send questions or comments to USBE Specialist - Joleigh Honey and see the Mathematics - Secondary website. For general questions about Utah's Core Standards contact the Director - Jennifer Throndsen . These materials have been produced by and for the teachers of the State of Utah. Copies of these materials may be freely reproduced for teacher and classroom use. When distributing these materials, credit should be given to Utah State Board of Education. These materials may not be published, in whole or part, or in any other format, without the written permission of the Utah State Board of Education, 250 East 500 South, PO Box 144200, Salt Lake City, Utah 84114-4200.
At a young age, we learn to count on our fingers - starting out with 1-5, then 1-10, and maybe, if you're particularly enterprising as a toddler, you will learn to count to 20, 30, and beyond. No one ever attempts to enlighten us that we are actually making some more complex mathematical assumptions; to be precise, we are all using Base10. In this article, we'll start by gaining a more rounded understanding of Base10 and its structure, then we will discuss binary (Base2, the building block of computing). Finally, we'll finish things up by talking about Base32 and Base64. At each stage we will discuss the advantages and uses of each type. We have 10 fingers. So, why did we choose Base10? It's not because the letterforms 0-9 exist; that was actually a result of the choice to use Base10. In fact, it is most likely because of the learning process we described above - we have 10 fingers, which makes the system much easier to understand. So, let's talk a bit about how Base10 is actually structured. This will be the foundation of understanding that we'll use in the subsequent discussion. Starting at 0, we count up to 9, filling the "1's" column. Once the ones column holds 9, it has reached its maximum. So we move to the next column (to the left), start it at 1, and begin filling the ones column again. For all intents and purposes, we can postulate that there are an infinite number of leading zeros before our first significant column. In other words, "000008" is the same as "8". So as each column fills up, the next column to the left is increased by one, and we start back at the previous column to fill it up again in the same manner as before. Specifically, the 1s column increases from 0 to 9, and then 1 is added to the tens column. This continues, and if the tens column is at 9 and the 1s column is at 9, 1 is added to the 100's column, and so forth. We all know this piece of the puzzle. Consider the number 1020. Starting from the right, we can understand this as "0*1 + 2*10 + 0*100 + 1*1000". Now, consider the number 5,378. We can understand this as "8*1 + 7*10 + 3*100 + 5*1000". A generalized formula for understanding Base10, then, is as follows: (10 raised to the power of the column position from the right, minus 1) * (the digit found in the column). Therefore, if there is a 6 in the 5th column from the right, 10^4 * 6 = 60,000. This is a generalizable formula for understanding all base systems, which is why these systems are referred to as Base(N). The next system we will talk about is Base2, or binary. Binary consists of two digits, 0 and 1. This lends itself well to computing for many reasons, most fundamentally because computers rely on switches that have two states: on or off. Binary is the most basic system needed for all logical operations (think "true" and "false"). So, how does binary work? Take the formula from above and, instead of using ten, use two: (2 raised to the power of the column position from the right, minus 1) * (the digit found in the column). So, let's take the arbitrary number 1001101 in binary and apply this formula: (1 * 1) + (0 * 2) + (1 * 4) + (1 * 8) + (0 * 16) + (0 * 32) + (1 * 64) = 77. "Wait!", you're thinking. "If binary is all that computers are made of, how would you write letters in binary?" Good question - answering it will bring us to Base16 shortly.
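To make the positional formula concrete, here is a small Python sketch (my own illustration, not from the original article) that evaluates a digit string in any base using exactly the column-by-column rule described above. Python's built-in int(s, base) does the same job in one call.

```python
def to_decimal(digits: str, base: int) -> int:
    """Interpret `digits` (least significant digit on the right) in the given base."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += int(digit) * base ** position  # (base ** (column - 1)) * digit
    return total

print(to_decimal("1020", 10))     # 1020
print(to_decimal("1001101", 2))   # 77, matching the expansion above
print(int("1001101", 2))          # 77, the built-in equivalent
```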
First, though, let's imagine for a moment that we had 11 fingers. We would naturally be using a system of Base11. Besides seeming uncomfortably hard to imagine, what other implications would this have? Perhaps the most important implication is that we would have another increment beyond 9 in the 1s column. But it wouldn't be a "10", because 10 isn't confined to the 1s column. It would instead be a single-digit representation of 10. And, in fact, that is exactly how letters function in base systems beyond Base10 up to Base62, with some caveats (which we'll get to later when we talk about Base32). Let's imagine using Base11, but substitute a capital A for the single-digit "10" we discussed above. How would we write the number 54? Since the next column to the left of the 1s column is the "11's" column, we would begin by dividing 54 by eleven, which gives us 4 with a remainder of 10. If "A" represents 10, in Base11 the number 54 would be represented as 4A. Let's do that in reverse, with the formula we used previously: (11 raised to the power of the column position from the right, minus 1) * (the digit found in the column). In this case, that would mean: (1 * A) + (4 * 11). Now, substitute 10 for A: (1 * 10) + (4 * 11) = 54. How is this useful, you're wondering? Base11 may not necessarily be useful (unless you have some kind of data structure that would benefit from a Base11 system). However, Base16 is used throughout computer systems for multiple purposes. Also known as hexadecimal, Base16 uses the numbers 0-9 followed by the letters a-f (not case-sensitive). In particular, you will see hexadecimals used to define RGB colors in CSS (and in most color-picker widgets in desktop software), with two digits for each of the channels red, green, and blue. So, for instance, #A79104 would produce r = A7, g = 91, b = 04. In decimal, this is equivalent to r = 167, g = 145, b = 4; the resulting color would be a golden yellow. Two hexadecimal digits put together can represent 256 different numbers, and thus there are 256^3 (16,777,216) possible combinations in the RGB hexadecimal system, represented by only 6 characters (or 3 if you use the shortcut method, where each of the three digits is implicitly doubled; e.g. #37d == #3377dd). Base16 is also often used in assembly languages, the lowest-level programming languages that are still human-readable. Because hexadecimal digits are easy to convert to binary, they are an easier way to write assembly code instructions. Note: much the same explains the popularity of Base32 and Base64; these encodings are used because their alphabet sizes are powers of 2, which maps naturally onto binary data, and because almost every computer has at least 64 characters that are safe to transmit as text (but not 128). For a hexadecimal example, take the number 1100 in hexadecimal, which is equivalent to 4352 in decimal. The same number in binary is 0001 0001 0000 0000. Converting from hexadecimal to binary is a simple operation of using a conversion table, where 0 in hexadecimal is 0000 in binary and F in hexadecimal is 1111 in binary. Note that the 0's at the far left are simply empty columns; writing them out shows that the binary number is grouped into fixed-size blocks of bits. Fundamentally, these leading 0's are not needed; however, you will encounter binary written this way almost exclusively. This practice is called padding, and it is used because data whose length is not fixed in advance can cause problems when multiple data transmissions occur; by padding the string, the data size is guaranteed to be a whole number of fixed-length blocks - for instance, 4 bits per hexadecimal digit.
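As a quick check (again my own sketch, not the article's), Python's built-in conversions reproduce the hex-to-decimal and hex-to-binary relationships described above, including the leading-zero padding.

```python
# Hex color channels to decimal
for name, channel in [("r", "A7"), ("g", "91"), ("b", "04")]:
    print(name, int(channel, 16))    # r 167, g 145, b 4

# Hex 0x1100 to decimal and to padded binary (four bits per hex digit)
value = int("1100", 16)
print(value)                          # 4352
print(format(value, "016b"))          # 0001000100000000
```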
Padding also occurs in other commonly used and specification-based encoding schemes; in particular, Base32 and Base64 both use the equals sign ("=") for padding. One might assume that Base32 is the numbers 0-9 and then the first 22 letters of the alphabet (up to V). Remember when we mentioned the caveat above? This is the caveat: the most commonly accepted Base32 definition is actually an encoding that starts with the first 26 letters of the alphabet and ends with the numbers 2-7. This is defined in the Internet Engineering Task Force's Request for Comments (RFC) 4648, which also defines Base16 and Base64. Note, the difference is that the encoding for 0 is A, not 0. To encode a string in Base32, the following steps are performed. First, the string to be encoded is split into 5-byte blocks (40 bits in binary). Letters are represented by 8-bit blocks in ASCII (the standard for computers), so for every 5 letters, there are 40 bits. (An 8-bit value for each letter allows for a total of 256 characters in extended ASCII.) Next, divide these 40 bits into 8 five-bit blocks; so, for every 5 letters, there are 8 blocks to encode in Base32. Map each of these blocks to a character in the Base32 alphabet. For instance, if the five-bit block is 00010 (or decimal 2), the mapped character is the letter C. If the five-bit block is 01010 (decimal 10), the mapped character is the letter K. Let's apply these steps to the string "yessir".
|Character||ASCII Decimal||8-bit ASCII Binary|
|y||121||01111001|
|e||101||01100101|
|s||115||01110011|
|s||115||01110011|
|i||105||01101001|
|r||114||01110010|
Let's take the binary representations and concatenate them now, splitting them into 5-bit groups:
01111 00101 10010 10111 00110 11100 11011 01001 01110 010(00), followed by six empty (null) positions.
A note on the above: because the specification defines that the encoding must be done in chunks of 8 five-bit pieces, we have to pad with 0 if the number of bits isn't divisible by 5 (hence the 010(00) at the end) and with = if the number of chunks isn't divisible by 8. The null values will be replaced by the padding character, "=". Each of these 5-bit binary numbers maps to a character in the 32-character alphabet; specifically, the output for "yessir" would be PFSXG43JOI======. A similar process is followed for Base64. There are a few fundamental differences between Base32 and Base64. Base64 includes the letters A-Z, a-z, numbers 0-9, and the symbols + and /. As mentioned previously, the "=" symbol is used for padding. The differences are mainly that all letters are case-sensitive, and all digits are used (instead of the subset 2-7). The symbols + and / are also added. The Base64 encoding process takes 24-bit strings (3 letters) and breaks them into four 6-bit chunks, mapping the resulting binary number to the Base64 alphabet. So, let's take a look at our previous example, the string "yessir".
8-bit binary: 01111001 01100101 01110011 01110011 01101001 01110010
6-bit chunks: 011110 010110 010101 110011 011100 110110 100101 110010
There are a few important things to note. First, Base64 is case-sensitive. Second, because the number of bits (48) was divisible by 6, no bit-padding was necessary. The number of 6-bit chunks was divisible by four as well (which also means that the number of input characters was divisible by 3), so no null ("=") padding was necessary either.
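The Python standard library implements the RFC 4648 encodings directly, so the worked example can be checked in a couple of lines (my own sketch, not part of the original article).

```python
import base64

data = b"yessir"
print(base64.b32encode(data))   # b'PFSXG43JOI======'
print(base64.b64encode(data))   # b'eWVzc2ly'
```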
A Summary of Base16, Base32, and Base64
These binary-friendly bases are leveraged throughout programming structures. Binary data is encoded in these bases to ensure the fidelity of the transfer and to guard against errors that might arise out of accidental transfer of un-encoded binary data. They rely on standards-based tables of characters, and are only guaranteed to work if both the encoder and decoder use the same table; for instance, there are widely accepted modified versions of Base32, including one by Douglas Crockford that changes some of the acceptable characters, excluding the letter "u" so as to avoid unintentional obscenity.
Encoding in Practice
In addition to using hexadecimal numbers on a regular basis for CSS colors, Base32 and Base64 are used on the web consistently. Though the official encoding process for Base32 and Base64 bloats the size of the string, encoding numbers in Base64 or Base32 can be very beneficial for things like URL shortening, where a URL might point to /foo/id. Consider the following decimal numbers and their Base32 and Base64 equivalents. As you can see, there are significant advantages to using Base64 or Base32 for number shortening. When every character counts, using these base encodings allows you to save characters. In many cases, the encoded number is about half the length of the non-encoded number.
A Note On Base62 and URL-Modified Base64
If you Base64 encode the number 959, the result is O/. Of course, this isn't a URL-safe value because of the "/", so a URL pointing to O/ would not be decoded as O/, but as O (which is the decimal value 14). It would also defeat the purpose to percent-encode the "/" (as %2F), as that lengthens the URL. Two main solutions have arisen to combat this issue. One is a URL-safe variant of Base64 that replaces the + and / with - and _, respectively. It also removes the requirement of adding = characters for padding. The other option is to use a Base62 encoding, which retains almost all of the benefits of Base64 and removes the + and /. However, Base62 encoding is not as easily applicable as a binary transmission substitute, and therefore is far less popular. That wraps it up! Now you have a fundamental knowledge of base systems, particularly as they apply to the encoding of binary data. What other types of web applications would you find uses for these encodings?
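As a closing illustration of the URL-shortening idea above, here is a hypothetical Python helper of my own (not the article's code) that writes an integer as a shorter Base62 numeral using a URL-safe alphabet; the particular alphabet ordering is my assumption, not something the article specifies.

```python
import string

# 0-9, A-Z, a-z : a 62-character, URL-safe alphabet (an illustrative choice)
ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase

def encode_base62(n: int) -> str:
    """Write a non-negative integer as a Base62 numeral, which is shorter than decimal."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, remainder = divmod(n, 62)
        digits.append(ALPHABET[remainder])
    return "".join(reversed(digits))

print(encode_base62(959))        # 'FT'    (959 = 15*62 + 29)
print(encode_base62(123456789))  # '8M0kX' (5 characters instead of 9)
```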
Franco’s Spain, 1939–75 Throughout Franco’s rule, his authoritarian regime was based on the emergency war powers granted him as head of state and of the government by his fellow generals in 1936. The first decade of his government saw harsh repression by military tribunals, political purges, and economic hardship. Economic recovery was made difficult by the destruction during the Civil War (especially of railway rolling stock and communications in general), a loss of skilled labour, a series of bad droughts, and a shortage of foreign exchange and the restriction on imports of capital goods imposed by World War II and its aftermath. These difficulties were increased by Franco’s misguided policies of autarky, which aimed at economic self-sufficiency through the state control of prices and industrial development within a protected national economy cut off from the international market. The national income fell back to the levels of 1900, as industrial production and agricultural output stagnated and real wages dramatically fell. The near-famine years of the 1940s witnessed the rise of the black market and misery in rural areas that caused migration to the shantytowns of the cities. Given brutal repression and a controlled and censored press, sullen discontent could take no organized form. The regime maintained a division between the victors and the vanquished of the Civil War, with the vanquished excluded from public life. Franco’s sympathies in World War II lay with Germany and Italy, to whom he gave moral and material support. Nevertheless, Franco demanded France’s North African colonies in compensation for military cooperation against the Western Allies, on whom Spain was dependent for food and oil imports. Hitler refused. When in 1943 it appeared that the Allies would win the war, Franco reaffirmed Spain’s nominal neutrality without gaining their benevolence. The declared hostility of the great powers after 1945 and the diplomatic sanctions imposed by the United Nations (UN), from which Spain was excluded, gave Franco’s opposition in Spain and in exile new life. Juan Carlos Teresa Silverio Alfonso de Borbón y Battenberg, conde de Barcelona (popularly known as Don Juan), heir of Alfonso XIII, presented the monarchy as something acceptable to the democratic powers and offered himself as king of all Spaniards, victors and vanquished alike. Because many of Franco’s fellow generals were monarchists hostile to the Falange, demands for a restoration were parried only with difficulty. Valiant but futile guerrilla activities, inspired largely by the Communist Party (1944–48), were brutally suppressed. Franco met these serious difficulties with success, shifting the balance of power among his supporters from the Falange to Catholics. The Fuero de los Españoles (1945), guaranteeing personal freedoms (provided no attack was made on the regime), was a cosmetic device that failed to establish Franco’s democratic credentials with the Allies. More important for Franco was the support of the church, which was given control over education. The diplomatic ostracism imposed by the UN was skillfully turned into a means of rallying support for the regime in the name of national unity. Franco’s confidence came from his sense that, with the onset of the Cold War, the United States would come to consider Spain a valuable ally against the Soviet Union and that France and Britain, though declaring support for the democratic opposition, would not intervene directly to overthrow him at the cost of renewed civil war. 
Hence, the hopes of the opposition came to nothing. In 1953 an agreement with the United States gave Franco considerable financial aid in return for the establishment of four U.S. military bases in Spain; in the same year a concordat with the Vatican gave Spain added diplomatic respectability. By 1955, when Spain was admitted to the UN, Franco’s regime appeared secure. Internal political command remained in Franco’s hands, ensured by his control of the armed forces and by his ability to play off the groups that supported him, in particular the Falange, the monarchists, and the church. Ultimately, the Falange lost power in the National Movement, the sole legal political organization; its attempts to create a Falangist one-party state were defeated in 1956, though tensions between the Falange and the conservative elements persisted. Opposition to the regime took the form of student unrest, strikes, and the unsuccessful efforts of the Communist Party to forge a united front and challenge the regime (1958, 1959). The moderate opposition’s attempt in 1962 to force a democratic opening in order to enter the European Economic Community (EEC) was dismissed by the regime as treason. More serious was the bankruptcy of autarky, evident in inflation, a growing deficit in the balance of payments, and strikes. This crisis was remedied by the technocrats of Opus Dei (a conservative Roman Catholic lay organization), a number of whose members were appointed to the cabinet in February 1957. The devaluation of the European currencies forced Franco to implement a stabilization plan in 1959, which provided a fierce dose of orthodox finance. Economic nationalism, protectionism, and the state intervention characteristic of autarky were abandoned in favour of a market economy and the opening of Spain to international trade and much-needed foreign investment. The stabilization plan was followed by a development plan in 1963, which was based on French indicative planning—i.e., the setting of targets for the public sector and encouragement of the private sector. The new policies produced growth rates of more than 7 percent between 1962 and 1966, aided by a rapid increase in tourism, foreign investment, and the remittances of emigrants who, hard-hit by the immediate results of the 1959 stabilization policies, had sought employment in other European countries. There was a rural exodus from the impoverished countryside and a dramatic fall of the active population engaged in agriculture, from about two-fifths in 1960 to about one-fifth by 1976. Spain was rapidly becoming a modern industrialized country. However, the government’s policies were fiercely resisted by the Falange, who claimed that the policies were a surrender to neocapitalism. All hopes of a limited liberalization of the regime by its reformist wing were blocked by conservative elements, with the exception of Manuel Fraga’s Press Law of 1966, which gave the press greater freedom and influence. Although the new prosperity brought a novel degree of social mobility and satisfied the enlarged middle class, the workers’ movement revived. Workers, disillusioned with the “official” syndicates run by the Falange, set up Workers’ Commissions (Confederación Sindical de Comisiones Obreras; CC.OO.) to negotiate wage claims outside the official framework and called serious strikes.
Sections of the church were sympathetic to claims for greater social justice and responsive to the recommendations of the Second Vatican Council. Indeed, many younger priests were sympathetic to the Workers’ Commissions. Although the bishops generally felt that the church should support the regime, they were increasingly aware of the long-term dangers of such an alliance. Peripheral nationalism constituted an intractable problem. In the Basque provinces the nationalists could count on the support of the clergy, and Basque nationalism developed a terrorist wing, ETA (Euskadi Ta Askatasuna; Basque: “Basque Homeland and Liberty”). The Burgos trials of Basque terrorists in 1970 discredited the regime abroad, and the following year the Assembly of Catalonia united the opposition with a demand for democratic institutions and the restoration of the Autonomy Statute of 1932. In the 1960s elements in the regime were increasingly troubled by its lack of “institutionalization” and the problem of the succession, as Franco was in failing health and there was no designated successor. The Organic Law of 1969 gave the regime a cosmetic constitution, and in 1969 Franco finally recognized Juan Carlos, grandson of Alfonso XIII, as his successor as king and head of state; Juan Carlos’s designation was rejected by the democratic opposition as a continuation of the regime. To secure continuity, in June 1973 Franco abandoned the premiership to Admiral Luis Carrero Blanco. However, in December Carrero Blanco was assassinated by ETA. Carlos Arias Navarro, the former minister of the interior, was selected as the new premier. His government saw a fierce struggle between reformists, led by Manuel Fraga and the new foreign minister, José Maria de Areilza, who wished to “open” the regime by limited democratization from above, and the “bunker” mentality of nostalgic Francoists. Although Arias Navarro promised liberalization in a February 1974 speech, he eventually sided with the hard-line Francoists, and his Law of Associations proved to be completely unacceptable to the opposition and a defeat for the reformists. The government severely repressed ETA’s terrorist activity in the Basque provinces, executing five terrorists in September 1975 despite international protests. Spain since 1975 Transition to democracy After Franco’s death on November 20, 1975, the accession of Juan Carlos as king opened a new era, which culminated in the peaceful transition to democracy by means of the legal instruments of Francoism. This strategy made it possible to avoid the perils of the “democratic rupture” advocated by the opposition, which had united, uneasily, on a common platform in July 1974. Arias Navarro, incapable of making the democratic transition supported by the king, was replaced in July 1976 by Adolfo Suárez González, a former Francoist minister. Suárez persuaded the Francoist right in the Cortes to pass the Law for Political Reform (November 1976), which paved the way for democratic elections. Suárez then convinced the opposition of his willingness to negotiate and his democratic intentions; in April 1977 he legalized the PCE against the wishes of the armed forces. In the elections of June 1977, Suárez’s party, a coalition of centrist groups called the Union of the Democratic Centre (UCD), emerged as the strongest party, winning 165 seats in the Cortes, closely followed by the Spanish Socialist Workers’ Party (PSOE), who captured 118 seats. It was a triumph for political moderation and the consensus politics of Suárez. 
The PCE gained 20 seats and the right-wing Popular Alliance 16. Suárez formed a minority government, and the political consensus held to pass the constitution of 1978. The new constitution, overwhelmingly ratified in a public referendum in December 1978, established Spain as a constitutional monarchy. Church and state were separated, and provisions were made for the creation of 17 autonomous communities throughout Spain, which extended regional autonomy beyond Euskadi (the Basque Country, encompassing the provinces of Viscaya, Guipúzcoa, and Álava) and Catalonia, both of which had already been given limited autonomy. Confronted by terrorism and economic recession, the UCD disintegrated into the factions of its “barons.” After heavy defeats in local elections and fearing a possible military coup, Suárez resigned in January 1981. The inauguration of Leopoldo Calvo Sotelo, also a member of the UCD, as prime minister was interrupted by the attempted military coup of Lieutenant Colonel Antonio Tejero, who occupied the Cortes (February 23, 1981) and held the government and the deputies captive for 18 hours. The coup attempt failed, however, because of King Juan Carlos’s resolute support of the democratic constitution. Calvo Sotelo, who was left with the task of restoring confidence in democracy, successfully engineered Spain’s entry into the North Atlantic Treaty Organization (NATO) in 1982. The administration of Felipe González, 1982–96 The election of October 1982 marked the final break with the Francoist legacy, returning the PSOE under its leader, Felipe González, whose government was the first in which none of the members had served under Francoism. The PSOE won a solid majority (202 seats), while the UCD was annihilated, winning only 12 seats. The conservative Democratic Coalition led by Manuel Fraga gained 106 seats and formed the official opposition. A radical party in 1975 committed to the replacement of capitalism, the PSOE subsequently abandoned Marxism and accepted a market economy. The new government made its main concern the battle against inflation and the modernization of industry. González’s policies were resisted by the unions (the socialist UGT and the CC.OO. controlled by the PCE), which staged violent strikes against the closing of uneconomic steel plants and shipyards. The left was further alienated by the government’s decision to continue NATO membership, despite the party’s official opposition to membership during the 1982 election. To justify this radical departure from the PSOE’s traditional neutralism, membership in NATO was submitted to a referendum and made dependent on a partial withdrawal of U.S. forces stationed in Spain under the 1953 agreements. Spain also was to make its contribution to collective defense outside the integrated military command of NATO. The government won the referendum of March 12, 1986—a triumph for González rather than evidence of understanding of or enthusiasm for NATO. González also secured Spain’s entry into the EEC in January 1986 after prolonged and difficult negotiations. The government lost some support on the left with the creation of the United Left (Izquierda Unida; IU), the core of which was remnants of the PCE, and the right capitalized on law-and-order issues, focusing on the fight against terrorism, disorder on the streets, the rise in crime, and the development of a serious drug problem. 
The government was accused of using its large majority to force through a major reform of university and secondary education and of abandoning socialist policies in the battle against inflation and in its support of a capitalist market economy. However, the government’s control of the PSOE was ensured by its manipulation of political patronage. It was furthermore troubled by frictions created by the demands of Euskadi and Catalonia for greater autonomy. But the success of the government’s economic policies (inflation fell and growth was resumed) and the popularity of González enabled the socialists in the election of June 1986 to retain their majority (184 seats), whereas Fraga’s conservative Popular Coalition (105 seats) failed to make any gains and fell apart. In its second term, the government’s economic policies continued to provoke the hostility of the trade unions—unemployment ran at nearly 20 percent—and on December 14, 1988, the CC.OO. and the socialist UGT staged a general strike. In foreign policy, all the major parties, with the exception of the United Left, supported the government’s decision to offer logistical support to the United States and its allies in 1991 in the Persian Gulf War; however, massive demonstrations against the war revealed widespread neutralist sentiments. Tensions between the central government and the autonomous governments of Euskadi and Catalonia continued. Although ETA terrorists lost political support, the rise of nationalism in the disintegrating Soviet Union sparked outbursts of separatism in Spain. The Spanish government favoured greater political union with the EEC, the country’s major trading partner. Following Spain’s success in hosting football’s (soccer’s) World Cup a decade earlier, the country again achieved international prominence in 1992, when it hosted the Expo ’92 world’s fair in Sevilla and the Olympic Games in Barcelona. Even before the glamour of those international events had faded, Spain entered a difficult period. The economy experienced a downturn, the government was rocked by a series of corruption scandals, and infighting within the PSOE reached intolerable levels. In these highly unfavourable circumstances, Felipe González called new elections for 1993. Surprisingly, the Socialists remained the largest party in the Cortes, though without an absolute majority; they were forced to rely upon the support of Catalan and Basque nationalists. González’s fourth term got off to a rocky start. Investigations led by judge Baltasar Garzón into the “dirty war” against ETA during the mid-1980s led to accusations that senior government officials had lent support to the Antiterrorist Liberation Groups (Grupos Antiteroristas de Liberación), whose activities included the kidnapping and murder of suspected ETA militants. Another scandal, involving missing security documents, led to the resignation of two ministers, including the deputy prime minister, Narcís Serra. When Catalan leader Jordi Pujol withdrew his party’s support for the government, González called new elections for March 1996, which were won by the conservative Popular Party (Partido Popular; PP), although by a much narrower margin than had been expected and without a parliamentary majority. Overall, the PP captured 156 of the Cortes’ 350 seats, while the PSOE was reduced to 141 seats.
NASA Mission Helps Solve a Mystery: Why Are Some Asteroid Surfaces Rocky?
by Mikayla Mace Kelley, The University of Arizona
Scientists thought Bennu’s surface was like a sandy beach, abundant in fine sand and pebbles, which would have been perfect for collecting samples. Past telescope observations from Earth had suggested the presence of large swaths of fine-grained material smaller than a few centimeters called fine regolith. But when NASA’s OSIRIS-REx mission arrived at Bennu in late 2018, the mission saw a surface covered in boulders. The mysterious lack of fine regolith became even more surprising when mission scientists observed evidence of processes potentially capable of grinding boulders into fine regolith. New research, published in Nature and led by Saverio Cambioni of the University of Arizona, used machine learning and surface temperature data to solve the mystery. Cambioni conducted the research at the university’s Lunar and Planetary Laboratory. He and his colleagues ultimately found that Bennu’s highly porous rocks are responsible for the surface’s surprising lack of fine regolith. “The ‘REx’ in OSIRIS-REx stands for Regolith Explorer, so mapping and characterizing the surface of the asteroid was a main goal,” said study co-author and OSIRIS-REx Principal Investigator Dante Lauretta, a Regents Professor of Planetary Sciences at the University of Arizona. “The spacecraft collected very high-resolution data for Bennu’s entire surface, which was down to 3 millimeters per pixel at some locations. Beyond scientific interest, the lack of fine regolith became a challenge for the mission itself, because the spacecraft was designed to collect such material.”
A Rocky Start and Solid Answers
“When the first images of Bennu came in, we noted some areas where the resolution was not high enough to see whether there were small rocks or fine regolith. We started using our machine learning approach to distinguish fine regolith from rocks using thermal emission (infrared) data,” Cambioni said. The thermal emission from fine regolith is different from that of larger rocks, because the size of its particles controls the former, while the latter is controlled by rock porosity. The team first built a library of thermal emissions associated with fine regolith mixed in different proportions with rocks of various porosity. Next, they used machine-learning techniques to teach a computer how to “connect the dots” between the examples, Cambioni said. They analyzed 122 areas on the surface of Bennu that were observed both during the day and the night. “Only machine learning could efficiently explore a dataset this large,” Cambioni said. Cambioni and his collaborators found something surprising when the data analysis was completed: the fine regolith was not randomly distributed on Bennu. Instead, it was up to several tens of percent in those very few areas where rocks are non-porous, and systematically lower where rocks have higher porosity, which is most of the surface. The team concluded that very little fine regolith is produced from Bennu’s highly porous rocks because these are compressed rather than fragmented by meteoroid impacts. Like a sponge, the voids within rocks cushion the blow from incoming meteoroids. These findings are also in agreement with laboratory experiments from other research groups.
“Basically, a big part of the energy of the impact goes into crushing the pores, restricting the fragmentation of the rocks and the production of new fine regolith,” said study co-author Chrysa Avdellidou, a postdoctoral researcher at the French National Centre for Scientific Research (CNRS) – Lagrange Laboratory of the Côte d’Azur Observatory and University in France. Additionally, Cambioni and colleagues showed that cracking caused by the heating and cooling of Bennu’s rocks as the asteroid rotates through day and night proceeds more slowly in porous rocks than in denser rocks, further frustrating the production of fine regolith. “When OSIRIS-REx delivers its sample of Bennu (to Earth) in September 2023, scientists will be able to study the samples in detail,” said Jason Dworkin, OSIRIS-REx project scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “This includes testing the physical properties of the rocks to verify this study.” Other missions have evidence to support the team’s findings. The Japan Aerospace Exploration Agency (JAXA) Hayabusa2 mission to Ryugu, a carbonaceous asteroid like Bennu, found that Ryugu also lacks fine regolith and has high-porosity rocks. Conversely, JAXA’s Hayabusa mission in 2005 revealed abundant fine regolith on the surface of asteroid Itokawa, an S-type asteroid with rocks of a different composition than Bennu and Ryugu. A previous study, also from Cambioni and colleagues, used observations from Earth to provide evidence that Itokawa’s rocks are less porous than Bennu’s and Ryugu’s. “For decades, astronomers disputed that small, near-Earth asteroids could have bare-rock surfaces,” said study co-author Marco Delbo, research director with CNRS, also at the Lagrange Laboratory. “The most indisputable evidence that these small asteroids could have substantial fine regolith emerged when spacecraft visited S-type asteroids Eros and Itokawa in the 2000s and found fine regolith on their surfaces.” The team predicts that large swaths of fine regolith should be uncommon on carbonaceous asteroids, the most common of all asteroid types observed, which the team expects to have high-porosity rocks like Bennu. By contrast, they predict terrains rich in fine regolith to be common on S-type asteroids, the second-most populous type of asteroids observed in the solar system, which they expect to have denser, less porous rocks than carbonaceous asteroids. “This is an important piece in the puzzle of what drives the diversity of asteroids’ surfaces,” Cambioni said. “Asteroids are thought to be relics of the early solar system, so understanding the evolution they have undergone in time is crucial to comprehend how the solar system formed and evolved. Now that we know this fundamental difference between carbonaceous and S-type asteroids, future teams can better prepare sample collection missions depending on the nature of the target asteroid.” Cambioni is continuing his research on planetary diversity as a distinguished postdoctoral fellow in the Department of Earth, Atmospheric and Planetary Sciences at the Massachusetts Institute of Technology. The University of Arizona leads the OSIRIS-REx science team and the mission’s science observation planning and data processing. NASA’s Goddard Space Flight Center in Greenbelt, Maryland, provides overall mission management, systems engineering, and the safety and mission assurance for OSIRIS-REx. Lockheed Martin Space in Littleton, Colorado, built the spacecraft and provides flight operations.
Goddard and KinetX Aerospace are responsible for navigating the OSIRIS-REx spacecraft. OSIRIS-REx is the third mission in NASA’s New Frontiers Program, managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama, for the agency’s Science Mission Directorate at NASA Headquarters in Washington, D.C.
Refraction is the change in direction of wave propagation due to a change in its transmission medium. The phenomenon is explained by the conservation of energy and the conservation of momentum. Owing to the change of medium, the phase velocity of the wave is changed but its frequency remains constant. This is most commonly observed when a wave passes from one medium to another at any angle other than 0° from the normal. Refraction of light is the most commonly observed phenomenon, but any type of wave can refract when it interacts with a medium, for example when sound waves pass from one medium into another or when water waves move into water of a different depth. Refraction follows Snell's law, which states that, for a given pair of media and a wave with a single frequency, the ratio of the sines of the angle of incidence θ1 and angle of refraction θ2 is equivalent to the ratio of phase velocities (v1 / v2) in the two media, or equivalently, to the relative indices of refraction (n2 / n1) of the two media: sin θ1 / sin θ2 = v1 / v2 = n2 / n1. Here ε and μ denote the dielectric constant (relative permittivity) and the relative magnetic permeability of each medium, and the index of refraction of a medium can be written as n = √(εμ). In optics, refraction is a phenomenon that often occurs when waves travel from a medium with a given refractive index to a medium with another at an oblique angle. At the boundary between the media, the wave's phase velocity is altered, usually causing a change in direction. Its wavelength increases or decreases, but its frequency remains constant. For example, a light ray will refract as it enters and leaves glass, as there is a change in refractive index. A ray traveling along the normal (perpendicular to the boundary) will suffer a change in speed, but not direction. Refraction still occurs in this case (Snell's law is still satisfied, since the angle of incidence is 0°). Understanding of this concept led to the invention of lenses and the refracting telescope. Refraction can be seen when looking into a bowl of water. Air has a refractive index of about 1.0003, and water has a refractive index of about 1.3333. If a person looks at a straight object, such as a pencil or straw, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight (shown as dashed lines) intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is. The depth that the water appears to be when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface because it will make the target fish appear to be in a different place, and the fisher must aim lower to catch the fish. Conversely, an object above the water has a higher apparent height when viewed from below the water. The opposite correction must be made by an archer fish. For small angles of incidence (measured from the normal, when sin θ is approximately the same as tan θ), the ratio of apparent to real depth is the ratio of the refractive index of air to that of water. But, as the angle of incidence approaches 90°, the apparent depth approaches zero, although reflection increases, which limits observation at high angles of incidence.
Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases, but it does so even earlier, as the angle of total internal reflection is approached, although the image also fades from view as this limit is approached. The diagram on the right shows an example of refraction in water waves. Ripples travel from the left and pass over a shallower region inclined at an angle to the wavefront. The waves travel slower in the shallower water, so the wavelength decreases and the wave bends at the boundary. The dotted line represents the normal to the boundary. The dashed line represents the original direction of the waves. This phenomenon explains why waves on a shoreline tend to strike the shore close to a perpendicular angle. As the waves travel from deep water into shallower water near the shore, they are refracted from their original direction of travel to an angle more normal to the shoreline. Refraction is also responsible for rainbows and for the splitting of white light into a rainbow-spectrum as it passes through a glass prism. Glass has a higher refractive index than air. When a beam of white light passes from air into a material having an index of refraction that varies with frequency, a phenomenon known as dispersion occurs, in which different coloured components of the white light are refracted at different angles, i.e., they bend by different amounts at the interface, so that they become separated. The different colors correspond to different frequencies. While refraction allows for phenomena such as rainbows, it may also produce peculiar optical phenomena, such as mirages and Fata Morgana. These are caused by the change of the refractive index of air with temperature. The refractive index of materials can also be nonlinear, as occurs with the Kerr effect when high intensity light leads to a refractive index proportional to the intensity of the incident light. Recently, some metamaterials have been created that have a negative refractive index. With metamaterials, we can also obtain total refraction phenomena when the wave impedances of the two media are matched. There is then no reflected wave. Also, since refraction can make objects appear closer than they are, it is responsible for allowing water to magnify objects. First, as light is entering a drop of water, it slows down. If the water's surface is not flat, then the light will be bent into a new path. This round shape will bend the light outwards and as it spreads out, the image you see gets larger. An analogy that is often put forward to explain the refraction of light is as follows: "Imagine a marching band as it marches at an oblique angle from a pavement (a fast medium) into mud (a slower medium). The marchers on the side that runs into the mud first will slow down first. This causes the whole band to pivot slightly toward the normal (make a smaller angle from the normal)." Why refraction occurs when light travels from a medium with a given refractive index to a medium with another can be explained by the path integral formulation of quantum mechanics (the complete method was developed in 1948 by Richard Feynman). Feynman humorously explained it himself in the recording "QED: Fits of Reflection and Transmission - Quantum Behaviour - Richard Feynman (The Sir Douglas Robb Lectures, University of Auckland, 1979)". The effects of refraction between materials can be minimised through index matching, the close matching of their respective indices of refraction.
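Returning to the numbers in the air-water example above, a small numerical sketch (mine, not from the article) applies Snell's law and the small-angle apparent-depth ratio.

```python
import math

n_air, n_water = 1.0003, 1.3333

def refraction_angle(theta_incidence_deg, n1, n2):
    """Angle of the refracted ray, from Snell's law: n1*sin(theta1) = n2*sin(theta2)."""
    sin_theta2 = n1 * math.sin(math.radians(theta_incidence_deg)) / n2
    return math.degrees(math.asin(sin_theta2))

print(refraction_angle(45, n_air, n_water))  # about 32 degrees: the ray bends toward the normal
print(n_air / n_water)                        # about 0.75: apparent depth / real depth at small angles
```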
In medicine, particularly optometry, ophthalmology and orthoptics, refraction (also known as refractometry) is a clinical test in which a phoropter may be used by the appropriate eye care professional to determine the eye's refractive error and the best corrective lenses to be prescribed. A series of test lenses in graded optical powers or focal lengths are presented to determine which provides the sharpest, clearest vision. In underwater acoustics, refraction is the bending or curving of a sound ray that results when the ray passes through a sound speed gradient from a region of one sound speed to a region of a different speed. The amount of ray bending is dependent on the amount of difference between sound speeds, that is, the variation in temperature, salinity, and pressure of the water. Similar acoustic effects are also found in the Earth's atmosphere. The phenomenon of refraction of sound in the atmosphere has been known for centuries; however, beginning in the early 1970s, widespread analysis of this effect came into vogue through the designing of urban highways and noise barriers to address the meteorological effects of bending of sound rays in the lower atmosphere.
Related topics:
- Birefringence (double refraction)
- Diffraction, which occurs when a wave encounters an obstacle and propagates around it
- Huygens–Fresnel principle
- List of indices of refraction
- Negative refraction
- Parallax, a visually similar principle caused by angle of perspective
- Refractive index
- Snell's law
- Total internal reflection
Recall that at the end of the last lecture we had started to discuss joint probability functions of two (or more) random variables. With two random variables X and Y, we define joint probability functions as follows: For discrete variables, we let p(i,j) be the probability that X=i and Y=j. This gives a function p, called the joint probability function of X and Y, that is defined on (some subset of) the set of pairs of integers and such that p(i,j) >= 0 for all i and j and the sum of p(i,j) over all pairs (i,j) is 1. When we find it convenient to do so, we will set p(i,j)=0 for all i and j outside the domain we are considering. For continuous variables, we define the joint probability density function p(x,y) on (some subset of) the plane of pairs of real numbers. We interpret the function as follows: p(x,y)dxdy is (approximately) the probability that X is between x and x+dx and Y is between y and y+dy (with error that goes to zero faster than dx and dy as they both go to zero). Thus, p(x,y) must be a non-negative valued function with the property that its integral over the whole plane is 1. As with discrete variables, if our random variables always lie in some subset of the plane, we will define p(x,y) to be 0 for all (x,y) outside that subset. We take one simple example of each kind of random variable. For the discrete random variable, we consider the roll of a pair of dice. We assume that we can tell the dice apart, so there are thirty-six possible outcomes and each is equally likely. Thus our joint probability function will be p(i,j) = 1/36 if i and j are each integers between 1 and 6, and p(i,j)=0 otherwise. For our continuous example, we take the example mentioned at the end of the last lecture: p(x,y) = 1/(2x) for (x,y) in the triangle with vertices (0,0), (2,0) and (2,2), and p(x,y)=0 otherwise. We checked last time that this is a probability density function (its integral is 1). Often when confronted with the joint probability of two random variables, we wish to restrict our attention to the value of just one or the other. We can calculate the probability distribution of each variable separately in a straightforward way, if we simply remember how to interpret probability functions. These separated probability distributions are called the marginal distributions of the respective individual random variables. Given the joint probability function p(i,j) of the discrete variables X and Y, we will show how to calculate the marginal distributions of X and of Y. To calculate the marginal probability p_X(i), we recall that p_X(i) is the probability that X=i. It is certainly equal to the probability that X=i and Y=0, or X=i and Y=1, or .... In other words the event X=i is the union of the events "X=i and Y=j" as j runs over all possible values. Since these events are disjoint, the probability of their union is the sum of the probabilities of the events (namely, the sum of p(i,j)). Thus: p_X(i) is the sum of p(i,j) over all values of j, and likewise p_Y(j) is the sum of p(i,j) over all values of i. Make sure you understand the reasoning behind these two formulas! An example of the use of this formula is provided by the roll of two dice discussed above. Each of the 36 possible rolls has probability 1/36 of occurring, so we have probability function p(i,j) as indicated in the following table: every entry p(i,j), for i and j between 1 and 6, is 1/36. The marginal probability distributions are given in the last column and last row of the table. They are the probabilities for the outcomes of the first (resp. second) of the dice, and are obtained either by common sense or by adding across the rows (resp. down the columns): each marginal probability is 6 * (1/36) = 1/6.
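Before moving on, a short Python check (my own illustration, not from the lecture) computes the marginal distribution of the first die directly from the joint probability function by summing over the other variable.

```python
from fractions import Fraction

# Joint probability function for two fair, distinguishable dice
p = {(i, j): Fraction(1, 36) for i in range(1, 7) for j in range(1, 7)}

# Marginal distribution of X: p_X(i) = sum over j of p(i, j)
p_X = {i: sum(p[(i, j)] for j in range(1, 7)) for i in range(1, 7)}
print(p_X)  # every value is 1/6, as expected
```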
Given the joint probability density function p(x,y) of a bivariate distribution of the two random variables X and Y (where p(x,y) is positive on the actual sample space subset of the plane, and zero outside it), we wish to calculate the marginal probability density functions of X and Y. To do this, recall that p_X(x)dx is (approximately) the probability that X is between x and x+dx. So to calculate this probability, we should sum all the probabilities that both X is in [x,x+dx] and Y is in [y,y+dy] over all possible values of Y. In the limit as dy approaches zero, this becomes an integral. In other words,

p_X(x) = ∫_{-∞}^{∞} p(x,y) dy   and   p_Y(y) = ∫_{-∞}^{∞} p(x,y) dx.

Again, you should make sure you understand the intuition and the reasoning behind these important formulas.

We return to our example, p(x,y) = 1/(2x) for (x,y) in the triangle with vertices (0,0), (2,0) and (2,2), and p(x,y)=0 otherwise, and compute its marginal density functions. The easy one is p_X(x), so we do that one first. Note that for a given value of x between 0 and 2, y ranges from 0 to x inside the triangle:

p_X(x) = ∫_0^x 1/(2x) dy = 1/2 if 0 < x < 2, and p_X(x) = 0 otherwise.

This indicates that the values of X are uniformly distributed over the interval from 0 to 2 (this agrees with the intuition that the random points occur with greater density toward the left side of the triangle but there is more area on the right side to balance this out).

To calculate p_Y(y), we begin with the observation that for each value of y between 0 and 2, x ranges from y to 2 inside the triangle:

p_Y(y) = ∫_y^2 1/(2x) dx = (1/2) ln(2/y) if 0 < y < 2, and p_Y(y) = 0 otherwise.

Note that p_Y(y) approaches infinity as y approaches 0 from above, and approaches 0 as y approaches 2. You should check that this function is actually a probability density function on the interval [0,2], i.e., that its integral is 1.

Frequently, it is necessary to calculate the probability (density) function of a function of two random variables, given the joint probability (density) function. By far, the most common such function is the sum of two random variables, but the idea of the calculation applies in principle to any function of two (or more!) random variables.

The principle we will follow for discrete random variables is as follows: to calculate the probability function for F(X,Y), we consider the events F(X,Y)=f for each value of f that can result from evaluating F at points of the sample space of (X,Y). Since there are only countably many points in the sample space, the random variable F that results is discrete. Then the probability function is

p_F(f) = Σ p(i,j), where the sum runs over all pairs (i,j) with F(i,j)=f.

This seems like a pretty weak principle, but it is surprisingly useful when combined with a little insight (and cleverness).

As an example, we calculate the distribution of the sum of the two dice. Since the outcome of each of the dice is a number between 1 and 6, the outcome of the sum must be a number between 2 and 12. So for each f between 2 and 12:

p_F(f) = Σ_i p(i, f−i) = (number of pairs (i,j) with i+j=f)/36,

where the sum is over those i from 1 to 6 with 1 ≤ f−i ≤ 6. A table of the probabilities of various sums is as follows:

 f        2     3     4     5     6     7     8     9     10    11    12
 p_F(f)  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

The "tent-shaped" distribution that results is typical of the sum of (independent) uniformly distributed random variables.

For continuous distributions, our principle will be a little more complicated, but more powerful as well. To enunciate it, we recall that to calculate the probability of the event F<f, we integrate the pdf of F from −∞ to f:

P(F < f) = ∫_{-∞}^{f} p_F(t) dt.

Conversely, to recover the pdf of F, we can differentiate the resulting function:

p_F(f) = d/df P(F < f)

(this is simply the first fundamental theorem of calculus).
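The continuous marginals can be checked numerically in the same spirit. The following sketch (an illustrative check, not part of the lecture) puts the triangle density p(x,y) = 1/(2x) on a fine grid and integrates out one variable at a time; the results approximate the constant 1/2 for the marginal of X and (1/2)ln(2/y) for the marginal of Y.

```python
import numpy as np

# Joint density p(x, y) = 1/(2x) on the triangle 0 < y < x < 2, zero elsewhere.
def p(x, y):
    return np.where((y > 0) & (y < x) & (x < 2), 1.0 / (2.0 * x), 0.0)

dx = dy = 1e-3
x = np.arange(dx / 2, 2, dx)              # grid midpoints in (0, 2)
y = np.arange(dy / 2, 2, dy)
X, Y = np.meshgrid(x, y, indexing="ij")
P = p(X, Y)

# Marginal of X: integrate out y.  Analytically this is the constant 1/2.
pX = P.sum(axis=1) * dy
print(pX[200], pX[1000], pX[1800])        # each approximately 0.5

# Marginal of Y: integrate out x.  Analytically this is (1/2) ln(2/y).
pY = P.sum(axis=0) * dx
print(np.allclose(pY[100:], 0.5 * np.log(2 / y[100:]), atol=1e-2))  # True

print(P.sum() * dx * dy)                  # total mass, approximately 1
```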
Our principle for calculating the pdf of a function of two random variables F(X,Y) will be to calculate the probabilities of the events F(X,Y) < f (by integrating the joint pdf over the region of the plane defined by this inequality), and then to differentiate with respect to f to get the pdf.

We apply this principle to calculate the pdf of the sum of the random variables X and Y in our example: p(x,y) = 1/(2x) for (x,y) in the triangle T with vertices (0,0), (2,0) and (2,2), and p(x,y)=0 otherwise. Let Z=X+Y. To calculate the pdf p_Z(z), we first note that for any fixed number z, the region of the plane where Z<z is the half plane below and to the left of the line y=z−x. To calculate the probability P(Z<z), we must integrate the joint pdf p(x,y) over this region. Of course, for z ≤ 0 we get zero, since then the half plane x+y<z has no points in common with the triangle where the pdf is supported. Likewise, since both X and Y are always between 0 and 2, the biggest the sum can be is 4. Therefore P(Z<z) = 1 for all z ≥ 4. For z between 0 and 4, we need to integrate 1/(2x) over the intersection of the half-plane x+y<z and the triangle T. The shape of this intersection is different, depending upon whether z is greater than or less than 2.

If 0 < z ≤ 2, the intersection is a triangle with vertices at the points (0,0), (z/2,z/2) and (z,0). In this case, it is easier to integrate first with respect to x and then with respect to y, and we can calculate:

P(Z < z) = ∫_0^{z/2} ∫_y^{z−y} 1/(2x) dx dy = ∫_0^{z/2} (1/2) ln((z−y)/y) dy = (z/2) ln 2.

And since the (cumulative) probability that Z<z is (z/2) ln 2 for 0<z<2, the pdf over this range is p_Z(z) = (ln 2)/2.

The calculation of the pdf for 2 < z < 4 is somewhat trickier because the intersection of the half-plane x+y<z and the triangle T is more complicated. The intersection in this case is a quadrilateral with vertices at the points (0,0), (z/2,z/2), (2,z−2) and (2,0). We could calculate P(Z<z) by integrating p(x,y) over this quadrilateral. But we will be a little more clever: Note that the quadrilateral is the "difference" of two sets. It consists of points inside the triangle with vertices (0,0), (z/2,z/2), (z,0) that are to the left of the line x=2. In other words it is points inside this large triangle (and note that we already have computed the integral of 1/(2x) over this large triangle to be (z/2) ln 2) that are not inside the triangle with vertices (2,0), (2,z−2) and (z,0). The integral of 1/(2x) over this small triangle is

∫_2^z ∫_0^{z−x} 1/(2x) dy dx = ∫_2^z (z−x)/(2x) dx = (z/2) ln(z/2) − z/2 + 1.

Thus, for 2 < z < 4, we can calculate P(Z < z) as

P(Z < z) = (z/2) ln 2 − ((z/2) ln(z/2) − z/2 + 1) = (z/2) ln(4/z) + z/2 − 1.

To get the pdf for 2<z<4, we need only differentiate this quantity, to get

p_Z(z) = (1/2) ln(4/z).

Now we have the pdf of Z = X+Y for all values of z. It is (ln 2)/2 for 0<z<2, it is (1/2) ln(4/z) for 2<z<4 and it is 0 otherwise. It would be good practice to check that the integral of p_Z is 1.

In our study of stochastic processes, we will often be presented with situations where we have some knowledge that will affect the probability of whether some event will occur. For example, in the roll of two dice, suppose we already know that the sum will be greater than 7. This changes the probabilities from those that we computed above. The event F > 7 has probability (5+4+3+2+1)/36 = 15/36. So we are restricted to less than half of the original sample space. We might wish to calculate the probability of getting a 9 under these conditions. The quantity we wish to calculate is denoted P(F=9 | F>7), read "the probability that F=9 given that F>7". In general, to calculate P(A | B) for two events A and B (it is not necessary that A is a subset of B), we need only compute the fraction of the time that, when the event B is true, the event A is also true.
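A Monte Carlo simulation (illustrative only, not part of the notes) gives an independent check on this piecewise pdf. Drawing X uniformly on (0,2) and, given X = x, drawing Y uniformly on (0,x) reproduces the joint density 1/(2x); a histogram of Z = X + Y should then track the formula derived above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sample from p(x, y) = 1/(2x) on the triangle: X uniform on (0, 2),
# then Y uniform on (0, X) given X.
X = rng.uniform(0, 2, n)
Y = rng.uniform(0, X)
Z = X + Y

def f_Z(z):
    """Analytic pdf of Z = X + Y derived above."""
    z = np.asarray(z, dtype=float)
    return np.where(z < 2, np.log(2) / 2, 0.5 * np.log(4 / z))

# Compare an empirical histogram of Z against the analytic density.
hist, edges = np.histogram(Z, bins=40, range=(0, 4), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - f_Z(mids))))   # small, on the order of 1e-2 or below
```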
In symbols, we have

P(A | B) = P(A and B) / P(B).

For our dice example (noting that the event F=9 is a subset of the event F>7), we get

P(F=9 | F>7) = P(F=9) / P(F>7) = (4/36) / (15/36) = 4/15.

As another example (with continuous probability this time), we calculate for our 1/(2x) on the triangle example the conditional probabilities P(X>1 | Y>1) as well as P(Y>1 | X>1) (just to show that the probabilities of A given B and B given A are usually different).

First P(X>1 | Y>1). This one is easy! Note that in the triangle with vertices (0,0), (2,0) and (2,2) it is true that Y>1 implies that X>1. Therefore the events "X>1 and Y>1" and "Y>1" are the same, so the fraction we need to compute will have the same numerator and denominator. Thus P(X>1 | Y>1) = 1.

For P(Y>1 | X>1) we actually need to compute something. But note that Y>1 is a subset of the event X>1 in the triangle, so we get:

P(Y>1 | X>1) = P(Y>1) / P(X>1) = [∫_1^2 (1/2) ln(2/y) dy] / [∫_1^2 (1/2) dx] = ((1 − ln 2)/2) / (1/2) = 1 − ln 2 ≈ 0.31.

Two events A and B are called independent if the probability of A given B is the same as the probability of A (with no knowledge of B) and vice versa. The assumption of independence of certain events is essential to many probabilistic arguments. Independence of two random variables is expressed by the equations:

p(i,j) = p_X(i) p_Y(j)   (discrete case),
p(x,y) = p_X(x) p_Y(y)   (continuous case).

Two random variables X and Y are independent if the probability that a<X<b remains unaffected by knowledge of the value of Y and vice versa. This reduces to the fact that the joint probability (or probability density) function of X and Y "splits" as a product, p(x,y) = p_X(x) p_Y(y), of the marginal probabilities (or probability densities). This formula is a straightforward consequence of the definition of independence and is left as an exercise.

1. Let p(x,y) be the uniform joint probability density on the unit disk, i.e., p(x,y) = 1/π for x² + y² < 1 and p(x,y)=0 otherwise. Calculate the pdf of X+Y. Also find the expected value and variance of X+Y.

2. Suppose X and Y are independent random variables, each distributed according to the exponential distribution with parameter λ. Find the joint pdf of X and Y (easy). Find the pdf of X+Y. Also find the mean and variance of X+Y.

3. Prove that two random variables are independent if and only if their joint probability (density) function is the product of the marginal probability (density) functions.
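The two conditional probabilities can also be estimated by simulation, reusing the sampling scheme from the previous sketch (again purely illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.uniform(0, 2, n)       # X uniform on (0, 2)
Y = rng.uniform(0, X)          # given X = x, Y uniform on (0, x)

# P(X > 1 | Y > 1): inside the triangle, Y > 1 forces X > 1, so this is exactly 1.
print(np.mean(X[Y > 1] > 1))                    # 1.0

# P(Y > 1 | X > 1): fraction of Y > 1 among the samples with X > 1.
print(np.mean(Y[X > 1] > 1), 1 - np.log(2))     # both approximately 0.307
```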
Percentage Increase and Decrease Example

Our "Percentage Increase and Decrease Example" teaching resource is a helpful introduction to the basic ideas of percentages for students in KS3 and KS4. Designed for educators and parents, it aims to strengthen students' understanding of percentage calculations in real-life contexts. This printable PDF resource simplifies the process of applying percentage multipliers, a skill that underpins quick and efficient mathematical thinking.

What Is a Percentage Increase and Decrease Example

The "Percentage Increase and Decrease Example" is a printable PDF resource. It features an image depicting the resource, a concise description, and a note of its relevance to the KS3 and KS4 year groups. The tool is aimed at reinforcing students' skills in applying percentage increases and decreases in a variety of scenarios.

Importance of the Topic in Real Life

Understanding percentage increases and decreases is crucial in everyday life. It applies to financial literacy, shopping discounts, interest rates, and statistical data analysis. Mastery of this topic enables students to make informed decisions and builds numerical confidence in daily situations.

Why Is This Resource Helpful for Learning

This teaching resource is beneficial for several reasons. First, it offers a clear, structured approach to tackling percentage problems, making complex concepts accessible. Second, it aligns with the school curriculum, ensuring relevance for students across different year groups. Lastly, the free PDF format allows for easy distribution and accessibility, supporting diverse learning environments.

In summary, the "Percentage Increase and Decrease Example" is an essential tool for teaching key maths skills. It promotes practical understanding and application of percentages, preparing students for real-world challenges. By integrating this resource into the classroom, educators and parents can enhance maths education for school students, contributing to their overall academic success. Check out our percentage worksheets, easily downloadable in PDF format, to help your students build their basic fraction, decimal, and percentage skills.
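For teachers who want to show the arithmetic behind the worksheet, the multiplier method can be demonstrated in a few lines of Python (a hypothetical helper written for illustration; it is not part of the resource itself).

```python
def percentage_change(amount, percent, increase=True):
    """Apply a percentage increase or decrease using a single multiplier."""
    multiplier = 1 + percent / 100 if increase else 1 - percent / 100
    return amount * multiplier

print(percentage_change(80, 15))                   # 92.0 (a 15% increase on 80)
print(percentage_change(80, 15, increase=False))   # 68.0 (a 15% decrease on 80)
```

A 15% increase corresponds to multiplying by 1.15, and a 15% decrease to multiplying by 0.85.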
The Abell 901/902 supercluster is located a little over two billion light-years from Earth. The existence of superclusters indicates that the galaxies in our Universe are not uniformly distributed; most of them are drawn together in groups and clusters, with groups containing up to some dozens of galaxies and clusters containing up to several thousand galaxies. Those groups and clusters, and additional isolated galaxies, in turn form even larger structures called superclusters. Superclusters form large structures of galaxies, called "filaments", "supercluster complexes", "walls" or "sheets", that may span between several hundred million light-years and 10 billion light-years, covering more than 5% of the observable universe. Observations of superclusters likely tell us something about the initial conditions of the universe when these superclusters were created. The directions of the rotational axes of galaxies within superclusters may also give us insight into the formation process of galaxies early in the history of the Universe. Interspersed among superclusters are large voids of space in which few galaxies exist. Superclusters are frequently subdivided into groups of clusters called galaxy clouds.

The Local Supercluster, also known as the Virgo Supercluster, contains the Local Group with our galaxy, the Milky Way. It also contains the Virgo Cluster near its center. It is thought to contain over 47,000 galaxies.

Another supercluster, discovered in 1999 (as ClG J0848+4453, a name now used to describe the western cluster, with ClG J0849+4452 being the eastern one), contains at least two clusters, RXJ 0848.9+4452 (z=1.26) and RXJ 0848.6+4453 (z=1.27). At the time of discovery, it became the most distant known supercluster. Additionally, seven smaller groups of galaxies are associated with the supercluster.

A rich supercluster with several galaxy clusters was discovered around an unusual concentration of 23 QSOs at z=1.1 in 2001. The size of the complex of clusters may indicate that a wall of galaxies exists there, instead of a single supercluster. The size discovered approaches the size of the CfA2 Great Wall filament. At the time of the discovery, it was the largest and most distant supercluster beyond z=0.5.

Another supercluster, discovered in 2000, was at the time the largest supercluster found so deep into space. It consisted of two known rich clusters and one cluster newly discovered by the study that identified the supercluster. The previously known clusters were Cl 1604+4304 (z=0.897) and Cl 1604+4321 (z=0.924), which were then known to have 21 and 42 member galaxies, respectively. The newly discovered cluster was located at 16h 04m 25.7s, +43° 14′ 44.7″.

Tanaka, I. (2004). "Subaru Observation of a Supercluster of Galaxies and QSOs at z = 1.1". Studies of Galaxies in the Young Universe with New Generation Telescope, Proceedings of Japan-German Seminar, held in Sendai, Japan, July 24–28, 2001. pp. 61–64. Bibcode:2004sgyu.conf...61T.
KY.HS.A.9 Know and apply the Binomial Theorem for the expansion of (x + y)^n in powers of x and y for a positive integer n, where x and y are any numbers, with coefficients determined for example by Pascal's Triangle.

KY.HS.A.14 Create a system of equations or inequalities to represent constraints within a modeling context. Interpret the solution(s) to the corresponding system as viable or nonviable options within the context.

Understand solving equations as a process of reasoning and explain the reasoning.

KY.HS.A.16 Understand each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation has a solution. Construct a viable argument to justify a solution method.

KY.HS.A.19 Solve quadratic equations in one variable.

KY.HS.A.19.a Solve quadratic equations by taking square roots, the quadratic formula and factoring, as appropriate to the initial form of the equation. Recognize when the quadratic formula gives complex solutions and write them as a ± bi for real numbers a and b.

KY.HS.A.19.b Use the method of completing the square to transform any quadratic equation in x into an equation of the form (x – p)² = q that has the same solutions. Derive the quadratic formula from this form.

KY.HS.A.24 Justify that the solutions of the equations f(x) = g(x) are the x-coordinates of the points where the graphs of y = f(x) and y = g(x) intersect. Find the approximate solutions graphically, using technology or tables.
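As an illustration of the computations these standards describe (not part of the standards document itself), the short Python sketch below lists the Pascal's Triangle coefficients of (x + y)^n (KY.HS.A.9) and applies the quadratic formula, including the complex-solution case written as a ± bi (KY.HS.A.19.a).

```python
import cmath
import math

def solve_quadratic(a, b, c):
    """Roots of ax^2 + bx + c = 0 via the quadratic formula.
    Complex solutions come back as a conjugate pair a +/- bi."""
    disc = b * b - 4 * a * c
    root = cmath.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

def binomial_coefficients(n):
    """Coefficients of (x + y)^n: row n of Pascal's Triangle."""
    return [math.comb(n, k) for k in range(n + 1)]

print(solve_quadratic(1, -2, 5))     # (1+2j) and (1-2j): a complex conjugate pair
print(binomial_coefficients(4))      # [1, 4, 6, 4, 1]
```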
A new theory of the origin of the terrestrial planets—that Jupiter’s gravity pulled them inward from the outer solar system—solves longstanding scientific riddles and offers a rich agenda for further investigation. The origin and distribution of water on the terrestrial planets make a good place to start investigating this theory. Radiation pressure and the solar wind pushed water molecules out beyond the “snow line” around 4.5 AU, so how did Earth come to have a relatively significant amount of water? A common explanation for Earth’s oceans—that Earth was bombarded by water-bearing comets—has never been substantiated. The ratio of deuterium to hydrogen in comets is roughly double the ratio in the water on Earth, except for those that formed close to the orbit of Jupiter; and they were short-lived and thus poor candidates. Also, the flux of comets required would be several orders of magnitude larger than appears realistic. Similar considerations hold for asteroids, few of which carry large amounts of water. As one researcher states, “the bulk of Earth’s water must have been supplied during its formation, rather than steadily throughout geologic time.”1 New findings regarding the origin of Venus provide a better explanation of why Earth has so much water. An array of new evidence supports a commonsensical explanation of ancient myths that Venus emerged from Zeus/Jupiter—that Venus was actually pulled from the outer solar system by the gravity of Jupiter and passed near the gas giant, thereby heating up from the tidal force caused by Jupiter’s immense gravitational field, losing its ice, gaining a comet tail, and being steered into the inner solar system. All this seems to have happened shortly before 2500 B.C. when Venus first began to be depicted as a comet by the ancients.2 In the early years of the solar system, Jupiter’s gravitational field is generally thought to have directed many planetesimals into orbits on the fringes of the solar system but also some into the inner solar system. So we can posit that all the terrestrial planets were orbiting outside of Jupiter and then (with the exception of Venus) were pulled by Jupiter into the inner solar system billions of years ago.3 While Jupiter’s gravitational pull ensures that no large object can remain for more than perhaps 30 million years in the Jupiter-Saturn slot,4 rapid accretion in the early solar system would have permitted Earth quickly to attain a high mass by impacts with planetesimals. Meanwhile, we know that the regions around the outer planets are exceptionally clear of debris, suggesting that it was all swept up long ago by Saturn, Uranus, and Neptune. But there is one exception. The slot between Saturn and Uranus appears to contain zones where planetesimals could have orbited without being vacuumed up into these large planets. However, this area is also very clear of debris. One explanation would be that this was the slot of Venus, which cleaned up these objects until, shortly before 2500 B.C., Saturn’s gravitational pull or some other cause steered it in the direction of Jupiter, which directed it into the inner solar system. 
Various features of Venus—that its surface is so hot, that it appears old yet has a new surface, that it contains 150 times as much deuterium relative to hydrogen compared to Earth (a sign of a large amount of water in the past), that it seems to have a residual tail (the famous but dwindling Black Drop), and that it rotates very slowly in a retrograde direction as if after tidal locking to Jupiter—match the explanation that it passed near Jupiter into the inner solar system. In ancient iconography, Venus was depicted as ovoid, as in this portrait of the Egyptian Venus goddess Sekhmet, consistent with being stretched by Jupiter's gravity. The ancient Greek myth of pregnant Metis has Zeus (Jupiter) turn her into a fly that zips into his mouth and gives birth to Athena inside him. Then another myth has Athena spring from the head of Zeus (her name was originally A Fena, meaning The Phoenician Lady, aka Venus, the brilliant rising Morning Star/Comet). New evidence from China, Egypt, Armenia, and Mesoamerica shows how ancient peoples worldwide responded to Comet Venus' terrifying, catastrophic approaches to Earth. The Venus theory also neatly explains the chronology and intricate pattern of stones of Stonehenge.

Circularization of Orbits

All four terrestrial planets, and the Earth's Moon as well, presumably had highly eccentric orbits when they first entered the inner solar system. Curiously, Mercury (21% eccentricity) and Mars (9%; varies from 0 to 14%) still possess the most eccentric orbits of the eight planets, but Earth (<2%) and Venus (<1%) have very circular ones. The requirement for a speedy circularization of the orbit of Venus has been a major target of critics of ancient accounts and evidence that suggest that it entered the inner solar system shortly before 2500 B.C. What properties do Earth and Venus share that would have led their orbits to become circular? First, the Earth has oceans, and both planets have thick atmospheres, which would have created plasticity that reduced the eccentricity of their orbits.5 Second, both were very hot in their early years in the inner solar system, and this heat would have increased their plasticity and hence made them more pliant to the gravitational pull of the Sun, which would have tended to render their orbits more circular. Third, the giant cometary tail of Venus and the Earth's Moon would have in parallel fashion tended to lessen the eccentricity. Fourth, Earth and Venus both interacted with other planets in ways that would have made their orbits more circular. Specifically, each of more than 30 passages of Venus near Earth every 52 years between ~2525 B.C. and ~700 B.C. would have, via gravitational tugging, bent Venus' orbit and made it ever more circular. Fifth, in ancient times Venus had a markedly ovoid shape.6 Comet Venus appears to have moved at times in the direction of its major axis, and this would have added to its length and malleability under gravitational forces and hence its tendency to circularize its orbit. Having lost Mars (see below), Earth, too, would initially have had a less-than-uniformly spherical shape and thus may have been less resistant to circularization of orbit. Sixth, the electromagnetic force may have played a role, though in what manner and to what extent remains to be modeled, including whether the tails formed dusty plasmas.
In fact, scientists generally recognize that rapid circularization must occur in the presumably originally highly elliptical orbits of short-term comets that end up with circular orbits, otherwise they would lose their material when interacting closely with the Sun on hundreds or thousands of highly elliptical passes. Comet Venus evidently resembled such comets.7 A discussion of the orbits of planetesimals concludes: “Most close encounters between planetesimals did not lead to a collision, but bodies often pass close enough for their mutual gravitational tug to change their orbits. Statistical studies show that after many such close encounters, high-mass bodies tend to acquire circular, coplanar orbits.”8 In keeping with OSSO, one can see a solution to the puzzle of the origin of the Earth-Moon system that makes sense of the capture theory, often thought implausible because of tight parameters of velocity and starting position that the Moon would need to fulfill. Of course, the various satellites in retrograde orbit around the outer planets suggest that capture is not so improbable after all. Initially, a smallish protoplanet (“Merculuna”), which had come from an orbit close to the original orbit of Earth outside of Jupiter (to make sense of the close match between the oxygen ratios of Earth and Moon), would have heated up tremendously on its passage past the gas giant. The gravitational force exerted by Jupiter would have pulled a small molten part out of the larger part. The larger part, containing the main iron core, would have continued on into the inner solar system as Mercury. Its magnetic field was shifted roughly 20% of its radius to the north of its equator, suggesting that Mercury had lost a large component (the Moon) from its north pole (this NASA image of the northern hemisphere shows a depressed polar region in blue). A large, highly reflective area in a several-kilometer deep depression at the pole strongly suggests water ice—and this water ice is in all likelihood aboriginal, too much to be conveyed by comets. Mercury’s south pole correspondingly has a small depressed area at its center surrounded by what appear to be faint concentric rings, consistent with an area of antipodal disruption that had been sucked inward as the Moon was pulling free from the north pole region. The other piece of Merculuna, composed of a small amount of iron but mainly of silicate rock, with most of the volatiles in its crust and upper mantle, including water, burned away by the heat, and with a long comet tail of rock and dust shed from its surface, would also have escaped Jupiter and proceeded into the inner solar system. This partly molten Comet Moon would have been malleable and prone to becoming entangled with the gravitational field of Earth. Its separation from Mercury would have left it skewed both in shape and in elemental distribution. The giant Oceanus Procellarum on the near side on this NASA image is the scar left by the separation event. The rectangular shape of the gravity anomaly surrounding it (here marked in red) sharply distinguishes it from impact craters. Its size roughly approximates the size of the depressed area around Mercury’s north pole. Consistent with this, the evidence of higher heating found on the surface and upper mantle of the near side can be interpreted as the consequence of the near side having been the highest energy intensity location as the Moon was torn from Mercury during the separation event. 
The tight parameters of the old capture theory would give way to generous parameters within which Comet Moon—exceedingly responsive to tidal forces—could have easily fit; and it would end up orbiting the Earth in an initially highly eccentric but gradually circularizing orbit, slowly cooling and losing its comet tail yet retaining the memory of its heated state in a rather small molten outer core and an extremely hot, dense, solid inner core.9 (Mercury possesses a molten core.10) This scenario of a Moon with a highly eccentric orbit upon capture provides a nice match with data regarding the Moon’s three principal moments of inertia, which are not consistent with the Giant Impact theory.11 The finding that lunar melt inclusions protected by crystals contain fairly high levels of water as well as other volatiles 12 seems consistent with this scenario of high, steady heat that caused the outgassing of almost all volatiles down to 500 km depth except those in the crystals, whereas it seems very inconsistent with the Giant Impact hypothesis of the origin of the Moon, which entails an ultra-high energy event that would presumably have melted the crystals. As the authors of the lunar melt inclusion article note, their evidence rules out the arrival of water after the heating that the crystals withstood; the water was preexisting—a good match with an origin in an icy Merculuna in the outer solar system. In effect, Comet Moon was hot enough to lose surface matter that then formed a tail, to outgas almost all volatiles down to 500 km, and to be sufficiently malleable that the Earth’s gravitational field could capture it; but not so hot as to destroy the crystals that encapsulated volatiles or to round out the indented surface where the near side had been torn apart from Mercury. Moon’s top 500 km thus were tidally heated to a high degree during the peripheral passage of Jupiter, then cooled, while the core remained exceptionally hot as a memento of the high-energy separation from Mercury. The theory also provides explanations of Mercury’s molten core; relatively high orbital eccentricity; very high iron content; and skewed distribution of magnetic field so that the northern hemisphere has a higher magnetic field,13 consistent with a separation event in which Jupiter’s gravitational field pulled material from its north pole region. Instead of two ad hoc giant impacts with complicated post-impact scenarios, a single separation event integral to OSSO during the peripheral passage of Jupiter would account for the distinctive features of the Moon and Mercury. Because Mercury has 4.5 times the mass of the Moon, it would have been more resistant to heating up during the peripheral passage of Jupiter than the Moon was. Mercury would also have been more resistant to Jupiter’s gravitational pull after separation and so might have followed a slightly more distant trajectory from Jupiter. The consequently relatively lower (but still high) temperature could explain why Mercury has much higher levels of potassium and sulfur, which presumably would have been lost by the hot Comet Moon. 
Data from the MESSENGER orbiter are very consistent with a Mercury-Moon separation event whereas they undermine competing hypotheses such as a giant impact that caused Mercury to lose a putative original thick coating of silicate rock.14 The Moon possesses remanent high-intensity paleomagnetism seemingly derived from a dynamo; and its intensity surpasses the capacity of the small lunar core to generate a field.15 These characteristics are very plausibly the consequences of a Merculuna dynamo that the Moon lost upon separation about 3.92 billion years ago (it is not clear, however, how the Moon's magnetic field remained at a high level for many millions of years thereafter, suggesting that an unsuspected mechanism generated the field). Until about 3.85 billion years ago, the Moon and Mercury, on highly elliptical orbits, passed repeatedly through the asteroid belt, accounting for their similarly heavily cratered surfaces and obviating the need for the Late Heavy Bombardment hypothesis. Then, before the Moon cooled and lost plasticity as well as its comet tail, it was captured by Earth. This dating contradicts the Giant Impact hypothesis of the Moon's origin, which places that putative event many hundreds of millions of years earlier. Supporting this interpretation is evidence from zircons that Earth's surface temperature from 4.4 billion years ago was under 200ºC, which the authors take to mean that any Giant Impact must have happened before then; but they question whether there was such a putative Giant Impact and note that a capture of the Moon would not have affected Earth's surface temperature.16

Comets Earth and Mars

According to OSSO, Jupiter's gravitational field would also have pulled the Earth into the inner solar system. Tidal heating caused by passing Jupiter would account for evidence that the Earth's surface once had a magma ocean, and shedding of surface materials and loss of a primitive atmosphere would have created a cometary tail. As with Moon and Mercury, it appears that Mars and the Earth originally formed a single protoplanet ("Terramars"), and the immense gravitational field of Jupiter pulled Mars out of the Earth (forming the Pacific Basin) as the protoplanet passed by. Here are reasons to think that this is in fact correct: 1) Mars resembled Earth in originally having a great deal of water; 2) the higher density of the Earth would be consistent with a larger body from which the smaller, less dense Mars was extracted in a separation event, on the analogy with Mercury and the Moon; 3) the 9.5:1 ratio of mass between Earth and Mars is likewise consistent with such an extraction; 4) the diameter of Mars, 6792 km, is roughly consistent with the distance across the Pacific between San Francisco and Tokyo, 8266 km; 5) Mars' north pole is surrounded by circular scarring, suggesting that it was the last part of Mars attached to Earth's Pacific Basin as Terramars was torn apart by Jupiter's gravity; 6) circular scarring also surrounds the south pole, suggestive of antipodal disruption, as in Mercury's south pole; and 7) the sharp difference between the northern and southern hemispheres of Mars would have arisen from a separation event that left the northern hemisphere crust thin and vulnerable to subsequent remodeling by flood basalts provoked by other causes, though a later giant impact17 also played a major role in shaping the northern hemisphere.
The extreme extent of the Borealis planitia, its irregular, non-elliptical shape, and the 2-3 km scarp that surrounds it are signs of such a pre-impact birth scar from a separation event, fittingly all on the opposite side of the planet from the tidal bulge of the southern highlands, which is not accounted for by the giant impact alone. The remanent magnetization in banded stripes of alternating polarity in Mars’ southern hemisphere is reminiscent of the magnetization of the spreading zones beneath the Earth’s oceans and indicative of a powerful, dynamo-driven alternating dipole magnetic field. It represents an outstanding anomaly in a Mars that lacks a dynamo and has only a tiny, non-dipole magnetic field. In the context of OSSO, however, it can be interpreted as having been formed by the original magnetic field of Terramars. The catastrophic separation while passing Jupiter and the interaction with the giant planet’s immense gravitational and magnetic fields diminished the dynamo in Earth while ending any dynamo in Mars. In turn, this suggests that Earth’s plate tectonics and geomagnetic field go back to Terramars. Since weathering and plate tectonics have destroyed any evidence of the original surface of the Earth, Mars’ southern hemisphere, though heavily bombarded, contains the only remaining original surface of Terramars. In contrast to Mercury and the Moon, the bombardment of the southern hemisphere of Mars began at the time it was separated from Earth approximately 4.5 billion years ago. Mars appears to have maintained an elliptical orbit that carried it repeatedly through the asteroid belt until 3.8-3.7 billion years ago, accounting for the heavy cratering in its southern hemisphere and the late formation of the Hellas Basin. As with Mercury and the Moon, this renders unnecessary the Late Heavy Bombardment hypothesis. Then Mars’ orbit became less elliptical. In an early version of the old, generally discredited fission theory of the origin of the Earth-Moon system, the Pacific Basin was a scar left over from the separation of the Moon from a rapidly rotating Earth. But according to OSSO, the center of the geomagnetic field, roughly modeled as if a bar magnet dipole were buried inside the Earth, is displaced 498 km off Earth’s center of figure in the direction of the Pacific Basin at 25º N, 153º E because Mars was separated from Earth there as Terramars passed Jupiter. Not only the skew of the geomagnetic field but also the Pacific Basin itself and the Hawaiian and South Pacific hotspots are physical leftovers from the separation of Mars from Earth. Much evidence supports this, including the Ring of Fire of seismic and volcanic activity approximately surrounding the crudely circular, appropriately sized Pacific and the thinner (by 2 km) crust of the Pacific compared to the Atlantic crust. Another leftover of the separation at the Pacific Basin is the South Atlantic Magnetic Anomaly on the opposite side of the world whereby the Van Allen radiation belt comes close to Earth as a consequence of the skew in the geomagnetic field. Arguably, the antipodal disruption caused by the emergence of Mars from the North Pacific also explains Africa’s rich deposits of exotic minerals, above all its kimberlite pipes with their diamonds. Given the long, tangled history of plate tectonics, continental drift, and other intervening phenomena, the present-day Pacific Basin has changed considerably since its origin in the primeval Panthalassic Ocean, itself a descendant of the original Mirovia Ocean. 
Still, seismic anisotropy reveals a unique pancake-like pattern at 160 km depth, approximately centered on the island of Hawaii,18 though Mars’ emergence was not necessarily centered on the Hawaiian hotspot, and it seems to have left an oval wound extending into the South Pacific with its hotspots and anomalies. The emergence of Mars was also the origin of the distinction between the oceanic and continental hemispheres discussed by Peter Warlow19 and divided by a secondary equator that served as an alternative to the standard equator in some episodes of True Polar Wander and inversions. The immense heat generated by the Earth’s peripheral passage of Jupiter was stored throughout its mass; this could explain why the Earth’s surface remained warm enough for water during its early years even though the Sun shone at only about 70% of its present output (the Faint Young Sun Paradox). When did Terramars separate into Earth and Mars? Proponents of the Giant Impact theory of the formation of the Moon have found evidence that various asteroids were all struck by fragments that appear to have come from a great cataclysm around 105 million years after the beginning of the solar system 4.6 billion years ago.20 They consider this cataclysm to be their Giant Impact, but according to OSSO that never occurred. Instead, it seems very possible that these fragments came from the separation of Earth from Mars, and so they provide a candidate date for this event. A separation of Earth and Mars would reduce the number of Peripheral Passages of Jupiter, thus in a sense simplifying the entire OSSO theory. In two cases (Merculuna and Terramars), a separation event occurred. In the third one (Venus), the pull of Jupiter’s gravity caused an elongation of the planet into an ovoid shape, as depicted in ancient iconography, suggesting that Venus, too, had been very close to experiencing its own separation event while passing Jupiter—i.e., that stretching to the point of separation was a normal process during a Peripheral Passage of Jupiter. Just three instances in 4.5 billion years help overcome the objection that it was very unlikely that Jupiter would throw the planets into exactly the right direction to enter the inner solar system (without hitting the Sun) instead of dispatching them to the far reaches of the solar system. These were appropriately rare events. While each of the terrestrial planets underwent unique experiences following the Peripheral Passage of Jupiter, in terms of axial tilt Mercury (0.01°) is closest to the Moon (1.54°) and the axial tilt of Earth (23.4°) is close to that of Mars (25.19°). In terms of orbital inclination to the ecliptic, again Mercury (7.01°) and the Moon (5.145°) form a fairly close match, as do Earth (0°) and Mars (1.85°). In general, OSSO provides a simple explanation of the obliquities and orbital inclinations of the terrestrial planets, in contrast to the theory of in situ formation, which requires ad hoc collisions with large objects as accretion came to a close. Thus we can explain the remarkable, anomalous lopsidedness of the terrestrial planets as a consequence of Jupiter’s gravitational pull during Peripheral Passages. The southern hemisphere of Mars and the far side of the Moon are both tidal bulges that were pulled out by Jupiter’s gravitational field. On Mars the northern hemisphere crust is 35 km thick while the southern highlands crust is 80 km. 
On the Moon, the near side crust is 60 km thick while the far side crust is on average up to 100 km thick if the thin crust, resulting from an impact, under the South Pole-Aitken Basin is excluded. The center of mass of Mars is displaced to the north by 3.5 km from the center of figure, while the center of mass of the Moon is displaced 1.68 km +/-50 m toward the nearside from the center of figure. Meanwhile, Mercury has a considerably higher amount of iron in its northern hemisphere than in the south, while both the Earth’s inner core and its center of mass (2.1 km from the center of figure) are asymmetrical. In both cases of separation, the lighter molten rock would have more readily been pulled by Jupiter’s gravity into the tidal bulges. We can expect that the larger partner planet emerging from the separation would have a higher density, being more resistant to the tidal force from Jupiter than the smaller one; and this is indeed the case: the uncompressed density of Mercury is 5.3 grams per cm³ while that of the Moon is 3.3, and the density of the Earth is 4.4 while that of Mars is 3.7. Venus has an uncompressed density of 4.3, which is in line with that of the Earth; and the density of Venus is situated between the densities of Mercury and the Moon.21 In other words, crustal thickness, displacement of center of mass, and uncompressed density of the terrestrial planets all are consistent with being the consequences of a Peripheral Passage of Jupiter. One can predict that a similar distribution of elements, from light to heavy across the planet, will be found on Venus. Tidal locking, as noted above, caused the anomalous very slow, retrograde rotation of Venus as well as the discrepancy between its center of figure and its center of mass. At 0.28 km, this distance is smaller than those of the other terrestrial planets, but it is much larger than expected error. It seems logical that the stretching of Venus during partial tidal locking would leave a distinguishable yet less prominent lopsidedness than in the planets that were torn in two. Meanwhile, Venus’ perfectly spheroidal shape, evidently the result of remodeling since its pronounced ovoid appearance in ancient iconography, also makes it an outlier: there is no sign of oblateness. In contrast to the other two Peripheral Passages of Jupiter in OSSO, the Peripheral Passage of Venus just before 2500 B.C. was observed by at least one Greek eyewitness and was recorded in the easy-to-interpret myths of Metis and the birth of Athena. Although Immanuel Velikovsky incorrectly assumed that Venus had emerged from Jupiter itself and that it had approached Earth around 1500 B.C., his account of the subsequent interactions of Comet Venus with Earth contains a great deal of evidence regarding the later stages of an OSSO event as observed by human eyewitnesses. OSSO provides grounds for various predictions. For instance, Mercury and Venus have abundant deep water; oxygen isotopes on Mercury closely approximate those on the Moon; Mercury’s ancient magnetic field was much greater than its current one, and it was nearly identical to the original magnetic field of the Moon; and Oceanus Procellarum on the Moon and the north pole depression on Mercury share the date of ca 3.92 billion years ago. OSSO must also lead us to revise current views on: 1. the importance of impacts. They clearly played a significant role but not so dominant a one as has been supposed. Close encounters have shaped the inner solar system in fundamental ways; 2. 
the presence of dust and gas in the early inner solar system. It appears that, while larger dust particles spiraled into the Sun from Poynting-Robertson drag and other kinds of drag, radiation pressure and the solar wind rapidly pushed tiny particles and gas out to or beyond the asteroid belt or beyond the “snow line” (4.5 AU). There was no need for very rapid accretion of the terrestrial planets before the inner solar system was cleared of dust and gas22 because they formed in the outer solar system. Nor, for the same reason, are the low mass of Mars and the main asteroid belt anomalous compared to the mass of Venus and Earth.23 Meanwhile, evidence for the recent finding that asteroids of the CL chondrite class were the source of Earth’s volatiles can be reinterpreted to mean that CL chondrites and Earth originated in the same region of the solar system.24 In effect, the area roughly between the present-day orbits of Jupiter and Saturn was the birthplace of the planets, from which some of them wandered outward from the Sun while others were pulled inward; 3. the origin of water on the terrestrial planets. OSSO provides a simple explanation; 4. the hypothetical Late Heavy Bombardment, which never happened; 5. the Giant Impact hypothesis of the formation of the Earth-Moon system, which is incorrect. OSSO provides superior explanations of the Moon’s heavily cratered surface, the near side/far side dichotomy (including surface features and crustal depths), the complementary low iron of the Moon and high iron of Mercury, the lunar magnetic field, Oceanus Procellarum, lunar melt inclusions, the displacement of the center of mass of the Moon, and thermal layering. All of these are integral features of OSSO, so the evidence and arguments on other topics in OSSO support them, unlike with the ad hoc evidence and arguments in the models of the Giant Impact hypothesis; 6. the origin of the Pacific Basin and the Ring of Fire that surrounds it; 7. key features of Mercury; 8. Mars, once part of our planet; and 9. Venus. OSSO offers telling evidence and explanations that make more believable the mythical accounts of the ancients25 and, following them and much other evidence, the interpretation of Immanuel Velikovsky that Venus emerged from (in fact, passed close to) Jupiter and entered the inner solar system during the Bronze Age. It provides simple, parsimonious, highly appropriate solutions to otherwise poorly explained anomalies of this remarkable planet. Kenneth J. Dillon is an historian who writes on science, medicine, and history. See the biosketch at About Us.
Explore this collection of circle facts. Learn how to find the circumference, diameter, radius, and area of a circle and get definitions of circle terms used in geometry.

- A circle is a two-dimensional shape formed by all the points that are the same distance from a center point.
- Technically, only the points equidistant from the center form the circle. The area enclosed within a circle is called a disc.
- The word circle comes from the Greek word κρίκος (krikos), meaning "hoop" or "ring".
- A circle is the only one-sided shape containing an area. A straight line can be regarded as a circle of infinite radius.
- Humans have recognized circles since ancient times. Natural circles include the shapes of the Sun and Moon, the human eye, tree cross-sections, some flowers, some shells, etc.
- The distance around a circle is its circumference.
- The distance from the center to the circle is its radius.
- The longest distance between two points on a circle is the diameter, which is a line segment running through the center.
- A circle is the shape with the shortest perimeter enclosing a given area.
- The circle is the most symmetric shape because every line through the center is a line of reflection symmetry. It has rotation symmetry for every angle around its center.
- Pi (π) is an irrational number that is the ratio of a circle's circumference to its diameter. It is approximately equal to 3.14159.
- Archimedes proved the area enclosed within a circle is the same as the area of a triangle with a base the length of the circle's circumference and height equal to the circle's radius.
- The full arc of a circle measures 360 degrees.
- A circle is a special type of ellipse where the two foci are in the same location and the eccentricity is 0.
- Written in 1700 BCE, the Rhind papyrus describes a method of finding the area of a circle. The result comes out as 256/81, which is about 3.16 (close to pi).
- You can draw a special circle inside every triangle, called the incircle, where each of the three triangle sides is tangent to the circle.

How to Find the Circumference of a Circle

The circumference (C) is the distance around a circle. There are a few ways to find the circumference. You can calculate it from either the radius (r) or diameter (d), or you can measure it.
- C = 2πr
- C = πd
- It's easiest to measure a circle's circumference using a string. Shape the string around the circle, mark the length, and then use a ruler or meter stick to measure the length of the string.

How to Find the Diameter of a Circle

The diameter (d) is the length of the line segment with end points on the circle that passes through its center. It is the longest distance across a circle. The diameter is twice the length of the radius.
- d = 2r
- d = C/π
- Measure the diameter by finding the longest line segment across a circle.

How to Find the Radius of a Circle

The radius (r) is the distance from the center of a circle to its border. It is half the length of the diameter.
- r = d/2
- r = C/(2π)
- If you draw a circle using a compass, the radius is the distance between its two points. Measuring the radius of a circle is a bit tricky unless you know its center. Sometimes it's easier to measure the circumference or diameter and calculate the radius.

How to Find the Area of a Circle

The area (A) of a circle is the region enclosed by a circle or the area of its disc.
- A = πr²
- A = π(d/2)²
- A = Cr/2 – You can use Archimedes' proof to find the circle area using its circumference and radius.
Set the base of the triangle equal to the circumference C and the height equal to the radius r. The triangle area formula 1/2 bh then becomes A = Cr/2.

Circle Vocabulary Terms

Here are key circle vocabulary terms to know:
- Annulus: An annulus is a ring-shape formed between two concentric circles.
- Arc: An arc is any segment of a circle formed by connected points.
- Center (Centre): The center is the point that is equidistant from all points on a circle. It is also called the origin.
- Chord: A chord is a line segment with endpoints on the circle. The diameter is the longest chord.
- Circumference: The circumference is the distance around a circle.
- Closed: A region that includes its boundaries.
- Diameter: The diameter is the line segment with endpoints on the circle and midpoint at its center. It is the largest distance between any two points on a circle.
- Disc: A disc is the area inside a circle.
- Lens: A lens is a region shared by two overlapping discs.
- Open: Any region, excluding its boundaries.
- Passant: A passant is a coplanar line that has no points in common with a circle.
- Radius: A radius is a line segment running from the center to the circle.
- Sector: A sector is an area within a circle bounded by two radii.
- Segment: A segment is an area bounded by an arc and a chord.
- Secant: A secant is a line that extends a chord beyond the circle. In other words, it is a coplanar line that intersects a circle at two points.
- Semicircle: A semicircle is an arc which has the diameter as endpoints and the center as midpoint. The interior of a semicircle is a half-disc.
- Tangent: A tangent is a coplanar line sharing one single point in common with a circle.

Practice finding the circumference and area of circles with these math worksheets.
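The circumference, diameter, and area formulas above translate directly into code. Here is a small Python sketch (illustrative only) that computes them from a radius and confirms Archimedes' relation A = Cr/2.

```python
import math

def circle_properties(radius):
    """Circumference, diameter, and area computed from the radius."""
    return {
        "circumference": 2 * math.pi * radius,   # C = 2πr
        "diameter": 2 * radius,                  # d = 2r
        "area": math.pi * radius ** 2,           # A = πr²
    }

props = circle_properties(3)
print(props)

# Archimedes' relation A = C·r/2 gives the same area:
print(props["circumference"] * 3 / 2, props["area"])   # both approximately 28.27
```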
As a result of these discoveries, our questions about the universe have become progressively more sophisticated. What is the origin of structure in the universe? From the cosmic soup of the Big Bang, how did clusters, galaxies, stars, and planets arise? How did the chemical evolutionary history of the universe result in the genesis of life on Earth? And how common is life in the universe? The keys to answering these questions lie in deciphering the past: ancient events observable in the distant universe, the historical record preserved in the stellar populations of the Milky Way and nearby galaxies, and present-day analogues of the birth of the Sun and solar system. Although current 8-m and 10-m telescopes will greatly advance our understanding of these questions, we already know that many answers lie beyond their reach. As we describe below, answering them will require a novel combination of greater sensitivity, higher angular resolution, and larger fields of view (FOVs) than are currently available. Moreover, the discoveries that will be made by planned space-based or multi-wavelength ground-based facilities will only be fully realized with a new large-aperture optical-infrared telescope. For example, the stars that first illuminated the universe may be detected by NASA's James Webb Space Telescope (JWST), but investigating their astrophysical properties (e.g., age, metallicity) is well beyond the capabilities of either JWST or any extant ground-based telescope. Similarly, the Atacama Large Millimeter Array (ALMA) will probe the current birthplaces of stars in the Galaxy, but understanding the formation of planets in our solar system will require a joint effort, relying heavily on a next generation large optical-infrared telescope. Indeed, without investment in such a telescope with sensitivity and angular resolution matched to the power of ALMA, JWST, and SKA (Square Kilometer Array), the US community will lack access to capabilities critical to exploring the astrophysical frontiers. Advances in astronomy in the next decade will be enabled by a diverse set of observing tools covering a broad range of wavelengths, on the ground and in space. By 2015, JWST will have been launched, and perhaps will have already completed its nominal five-year mission to explore the early universe and the creation of the first stars and galaxies. It will undoubtedly have revolutionized our view of the near- to mid-infrared universe. ALMA will be operational (target completion date 2011) and examining an equally broad range of astronomical problems, from the study of dusty galaxies at high redshift to the initial conditions and physical processes that produce stars and planets. For the first time, a facility operating at mm wavelengths will provide sensitive observations of the cold, molecular universe at angular resolutions ~ 10 mas—nearly 100x that achievable today and 10x that of the Keck telescope. Before the end of the next decade, SKA will bring analogous sensitivity and angular resolution to bear on the problem of mapping the evolution of primordial hydrogen gas from the epoch of recombination to the formation of the first gaseous structures in the universe. Chandra will have long completed its inventory of the X-ray background, while Constellation-X will be on the verge of probing the formation of galaxy clusters and the distribution of hot, ionized gas in the universe. By 2015, 8-10-m class ground-based OIR (optical-infrared) telescopes will have been operating for nearly two decades. 
Experience with adaptive optics (AO) will have produced huge gains in sensitivity and perhaps brought us to the point of replicating at near- and mid-IR wavelengths the excellent imaging quality available in space. We will by then have glimpsed the high-redshift universe, perhaps to z > 8, as well as the large scale structure of galaxies at z ~ 4 based on studies of the brightest (> L*) galaxies. However, the nature of the first collapsed objects (z > 8) and the clustering properties of sub-L* (i.e., typical) galaxies, which provide the critical test of hierarchical formation models, will lie beyond the sensitivity and angular resolution provided by these facilities. Closer to home, we will have studied planet forming systems within ~ 200 pc using 8-10-m ground-based telescopes and thereby obtained, from samples of a few tens of systems, tantalizing hints to the questions of when and how planets form. However, definitive answers to these questions and the central question of how frequently planets form, as well as an understanding of the role of dynamical evolution and its impact on the habitability of planetary systems, will await a more powerful facility capable of providing a census of planetary architectures for thousands of forming stars and their planet forming accretion disks. In this climate of intense activity, in which numerous discoveries will be made and fundamental astrophysical problems solved, the availability of forefront ground-based optical and infrared telescopes and instrumentation will remain central to expanding the frontiers of astrophysical knowledge. These facilities will not only drive unique, ground-breaking science on their own, but will also be essential to the scientific success of capabilities at other wavelengths and in space. The reasons for this are several. First, most of the abundant atomic species have important transitions at UV-optical wavelengths. These include the characteristic suites of transitions that act as a "fingerprint" for the identification of individual atomic species (in particular, the strong resonance transitions, which appear in the optical in the spectra of redshifted objects, and important forbidden line transitions). In addition, many abundant molecular species have their rovibrational transitions in the near- to mid-IR. Thus, the observational diagnostics on which our understanding of stars and galaxies is based (chemical composition, gravity, stellar mass, age, etc.) lie in the optical and infrared. Our ability to observe these well-understood and modeled diagnostics provides the foundation for both future progress and for the interpretation of observations made at other wavelengths. The availability of ground-based telescopes with sensitivity matched to next generation space-based and radio facilities is essential to astrophysical progress. For example, while the SKA will probe the formation of the first gaseous structures, and JWST will determine the morphologies of the first stars and galaxies, spectroscopy with a 30-m class optical-IR telescope will be needed to determine the physical properties (age, stellar content, mass, etc.) that are needed to understand the formation and evolution of these objects. A 30-m ground-based telescope is the natural spectroscopic complement to JWST in much the same way that the Keck telescopes are for the Hubble Space Telescope (see Figure 1).
The scientific legacy of the Hubble Deep Field is a compelling example of how space-based imaging plus ground-based spectroscopy is a powerful combination for astrophysical discovery. Similarly, while space missions such as MAP will probe the primordial density fluctuation spectrum, and Constellation-X will probe the structure of hot gas in the universe, ground-based optical-infrared spectroscopy will be needed to obtain the critical complementary information on the large scale structure of galaxies. As another example, while ALMA will determine the range of initial conditions and physical processes that give rise to stars, optical-infrared spectroscopy will be needed to determine the properties of the ultimate products of the star formation process: the stars themselves.
Second, compared with space-based missions, ground-based facilities can deploy much more complex and renewable instrumentation that is not subject to the mass and energy limitations of space-based platforms. For example, large FOVs can be used to maximize scientific gain in situations where observations of large samples of objects are critical. The instruments needed to accept large FOVs are typically prohibitively large for space-based facilities. As a result, JWST will have a relatively small FOV (5'), which largely precludes studies of, for example, the large scale structure of galaxies, the structure of the Galactic halo, and the dynamical structure and merger history of nearby clusters. As another example, high spectral resolution also typically requires larger instruments. As a result, JWST will not explore spectral resolutions R > 10,000, which consequently precludes, for example, the possibility of detailed studies of planet formation environments, the identification of merger remnants in nearby galaxies, and the chemical enrichment histories of those galaxies.
Although it is clear that the next generation ground-based OIR telescope will be central to our ability to expand the frontiers of astrophysical knowledge, an important unanswered question is what kind of OIR telescope would represent the optimal combination of scientific productivity, cost effectiveness, and technological readiness for deployment by 2015. To help address this question, we have identified several potential "discovery spaces" for a next generation telescope, i.e., potential opportunities for significant scientific discovery that would be accessible if the telescope were designed to allow certain combinations of sensitivity, FOV, wavelength coverage, and spatial and spectral resolution. A large aperture, ground-based telescope can provide the critical combination of sensitivity and FOV that is needed (e.g., for definitive spectroscopic studies of the evolution of large scale structure). The same combination of sensitivity and FOV would be critical to other studies, such as stellar population studies of the Milky Way halo that would address the formation and evolution of our own Galaxy. This capability would also enable the next generation telescope to exploit fully those periods (perhaps 30% of the time at the very best sites) when atmospheric conditions (e.g., light cirrus) preclude full adaptive correction, using that time to carry out observations essential to linking observed fluctuations in the cosmic background to the initial appearance of large scale structure in the gaseous and stellar components of the universe.
Such a high throughput, large aperture, ground-based telescope will have the sensitivity to categorize the physical properties (e.g., star formation rates, metallicities, interstellar media, and internal dynamics) of galaxies over a range of redshifts and, potentially, of the first luminous objects in the universe from observations of their integrated spectra. A large aperture, ground-based telescope, when equipped with a moderate Strehl (~ 30%) AO system over moderate FOVs (~ 1'), will have the critical combination of sensitivity and angular resolution needed to resolve and analyze the components of forming galaxies (HII regions, nascent bulges and disks), as well as individual stars in crowded stellar fields. With this capability, we will be able to trace the properties of just-formed galaxies and putative pre-galactic fragments at redshifts 3 and greater, and, at intermediate redshifts, follow the merger and star formation history of nearby galaxies. Similarly, the same combination of sensitivity and angular resolution can be used to measure the masses of galaxies from the morphological (i.e., gravitationally lensed) distortions that they induce in the appearance of background galaxies (see Galaxy Formation and Resolved Stellar Populations).
A large aperture, ground-based telescope designed to minimize thermal emissivity will have the potential for high resolution (R ~ 100,000) spectroscopy at unprecedented sensitivity at IR wavelengths. With this capability, we will be able to carry out detailed studies of planet formation environments in samples large enough to address fundamental questions such as when, where, and how frequently planets form. This capability will provide an essential complement to ALMA, whose sensitivity and angular resolution are best matched to probing cooler environments inaccessible to a next generation OIR telescope. A large aperture, ground-based telescope, when equipped with high-Strehl AO and a coronagraph, can also enable detailed studies of high-contrast situations (contrast ratios > 10^6 between a bright central star and objects or regions located within 0.1") at high angular resolution. A 30-m next generation OIR telescope should have the angular resolution and sensitivity needed to, for example, directly detect the light from extra-solar planets and thereby characterize their physical properties (masses, radii) and atmospheres for comparison with planets in our own solar system.
In Galaxy Formation, we illustrate how these discovery spaces might translate into specific science opportunities, i.e., we illustrate the kind of science that would be possible if a next generation telescope were designed to allow those specific combinations of sensitivity, FOV, wavelength coverage, and spatial and spectral resolution. We wish to stress at the outset that the science opportunities discussed are not meant to be comprehensive. Rather, our goal is to describe in some detail the magnitude of the scientific gain that is possible for a selected subset of science in several areas if a next generation telescope were designed to enable a given discovery space.
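The angular-resolution figures quoted above follow from the standard diffraction limit, theta ~ 1.22 lambda/D. The short sketch below is purely illustrative and is not drawn from the report; the choice of apertures (a Keck-class 10 m and a 30-m aperture) and near-IR wavelengths is an assumption made for comparison.

```python
# A minimal sketch putting the apertures discussed above in context via the
# standard diffraction limit, theta ~ 1.22 * lambda / D. The apertures and
# wavelengths below are illustrative and are not taken from the report.

import math

MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0  # milliarcseconds per radian

def diffraction_limit_mas(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution in milliarcseconds."""
    return 1.22 * wavelength_m / aperture_m * MAS_PER_RAD

for aperture in (10.0, 30.0):                # Keck-class vs. 30-m class
    for wavelength_um in (1.2, 2.2):         # near-IR J and K bands
        theta = diffraction_limit_mas(wavelength_um * 1e-6, aperture)
        print(f"D = {aperture:4.0f} m, lambda = {wavelength_um} um: "
              f"theta ~ {theta:5.1f} mas")
```

For example, a 30-m aperture at 1.2 um gives a diffraction limit of roughly 10 mas, comparable to the ALMA resolution scale mentioned above, while a 10-m aperture at the same wavelength is limited to several tens of milliarcseconds.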
When estimating a specific parameter or characteristic of a population, several possible estimators exist. Example 1: Suppose that the underlying population distribution is symmetric. In this case the population expectation equals the population's median. Thus the unknown expectation can be estimated using either the sample mean or the sample median. In general, the two estimators will provide different estimates. Which estimator should be used? To estimate the variance we may use either of the following: $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$ or $\tilde{S}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2$. Which estimator should be used? Suppose that the underlying population distribution is Poisson. For the Poisson distribution $E(X) = \mathrm{Var}(X) = \lambda$. Therefore the unknown parameter $\lambda$ could be estimated using the sample mean or the sample variance. Again, in this case the two estimators will in general yield different estimates. In order to obtain an objective comparison, we need to examine the properties of the estimators.

Mean Squared Error

A general measure of the accuracy of an estimator is the Mean Squared Deviation, or Mean Squared Error (MSE). The MSE measures the average squared distance between the estimator $\hat{\theta}$ and the true parameter $\theta$: $\mathrm{MSE}(\hat{\theta}) = E\big[(\hat{\theta} - \theta)^2\big]$. It is straightforward to show that the MSE can be separated into two components. The first term is the variance of $\hat{\theta}$: $\mathrm{Var}(\hat{\theta}) = E\big[(\hat{\theta} - E(\hat{\theta}))^2\big]$. The second term is the square of the bias $\mathrm{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta$. Hence the MSE is the sum of the variance and the squared bias of the estimator: $\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \big[\mathrm{Bias}(\hat{\theta})\big]^2$. If several estimators are available for an unknown parameter of the population, one would thus select the one with the smallest MSE. Starting with the MSE, three important properties of estimators are described, which should facilitate the search for the "best" estimator.

An estimator $\hat{\theta}$ of the unknown parameter $\theta$ is unbiased if the expectation of the estimator matches the true parameter value: $E(\hat{\theta}) = \theta$. That is, the mean of the sampling distribution of $\hat{\theta}$ equals the true parameter value $\theta$. For an unbiased estimator the MSE equals the variance of the estimator: $\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta})$. Thus the variance of the estimator provides a good measure of the precision of the estimator. If the estimator is biased, then the expectation of the estimator is different from the true parameter value, that is, $E(\hat{\theta}) \neq \theta$. An estimator is called asymptotically unbiased if $\lim_{n \to \infty} \mathrm{Bias}(\hat{\theta}_n) = 0$, i.e. the bias converges to zero with increasing sample size $n$.

Often there are several unbiased estimators available for the same parameter. In this case, one would like to select the one with the smallest variance (which in this case is equal to the MSE). Let $\hat{\theta}_1$ and $\hat{\theta}_2$ be two unbiased estimators of $\theta$ using a sample of size $n$. The estimator $\hat{\theta}_1$ is called relatively efficient in comparison to $\hat{\theta}_2$ if the variance of $\hat{\theta}_1$ is smaller than the variance of $\hat{\theta}_2$, i.e., $\mathrm{Var}(\hat{\theta}_1) < \mathrm{Var}(\hat{\theta}_2)$. The estimator is called efficient if its variance is smaller than that of any other unbiased estimator.

The consistency of an estimator is a property which focuses on the behavior of the estimator in large samples. In particular, consistency requires that the estimator be close to the true parameter value with high probability in large samples. It is sufficient if the bias and variance of the estimator converge to zero. Formally, suppose $\lim_{n \to \infty} \mathrm{Bias}(\hat{\theta}_n) = 0$ and $\lim_{n \to \infty} \mathrm{Var}(\hat{\theta}_n) = 0$. Then the estimator is consistent. Equivalently, the two conditions may be summarized using $\lim_{n \to \infty} \mathrm{MSE}(\hat{\theta}_n) = 0$. This notion of consistency is also referred to as 'mean squared consistency'. An alternative version, known as weak consistency, is defined by the following: $\lim_{n \to \infty} P\big(|\hat{\theta}_n - \theta| < \epsilon\big) = 1$ for every $\epsilon > 0$. That is, the probability that the estimator yields values within an arbitrarily small interval around the true parameter value $\theta$ converges to one with increasing sample size $n$.
The probability that the estimator differs from the true parameter value by more than $\epsilon$ converges to zero with increasing sample size $n$. That is, $\lim_{n \to \infty} P\big(|\hat{\theta}_n - \theta| > \epsilon\big) = 0$.

Example: The unknown mean $\mu$ and variance $\sigma^2$ will be estimated. A random sample of size $n = 12$ was drawn from a population, yielding the following data: 1; 5; 3; 8; 7; 2; 1; 4; 3; 5; 3; 6. The sample mean $\bar{X}$ is an unbiased and efficient estimator of $\mu$. Substituting the sample values yields $\bar{x} = 48/12 = 4$. This result constitutes a point estimate of $\mu$. The estimator of the variance is given by $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$. Substituting the sample values yields the point estimate $s^2 = 56/11 \approx 5.09$.

Example: Assume a population with mean $\mu$ and variance $\sigma^2$. Let $X_1, \dots, X_n$ be a random sample drawn from the population. Each random variable has $E(X_i) = \mu$ and $\mathrm{Var}(X_i) = \sigma^2$. Consider three estimators of the population mean: the sample mean of all $n$ observations, an equally weighted average of two of the observations, and an unequally weighted average of two of the observations. - Which estimators are unbiased? - Which estimator is most efficient? All of them are unbiased, since the expectation of each equals $\mu$. The variance of each estimator follows from $\mathrm{Var}\big(\sum_i a_i X_i\big) = \sigma^2 \sum_i a_i^2$ for weights $a_i$ that sum to one. The first estimator, because it uses all the data, is the most efficient. This estimator is of course the sample mean. Note that even though the second and third estimators each use two observations, the third is less efficient than the second because it does not weight the observations equally.

Mean Squared Error (MSE): Recall the MSE is defined as $\mathrm{MSE}(\hat{\theta}) = E\big[(\hat{\theta} - \theta)^2\big]$. Expanding the expression one obtains $E\big[(\hat{\theta} - E(\hat{\theta}) + E(\hat{\theta}) - \theta)^2\big] = \mathrm{Var}(\hat{\theta}) + 2\,\mathrm{Bias}(\hat{\theta})\,E\big[\hat{\theta} - E(\hat{\theta})\big] + \big[\mathrm{Bias}(\hat{\theta})\big]^2$. For the middle term we have $E\big[\hat{\theta} - E(\hat{\theta})\big] = 0$, and consequently we have $\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \big[\mathrm{Bias}(\hat{\theta})\big]^2$. The MSE does not measure the actual estimation error that has occurred in a particular sample. It measures the average squared error that would occur in repeated samples.

The following figure displays three estimators of a parameter $\theta$. The estimators $\hat{\theta}_1$ and $\hat{\theta}_2$ are unbiased, since their expectation coincides with the true parameter (denoted by the vertical dashed line). In contrast, the estimator $\hat{\theta}_3$ is biased. For both unbiased estimators $\mathrm{MSE} = \mathrm{Var}$ holds, as the bias equals zero. However, $\hat{\theta}_1$ has lower variance and is therefore preferred to $\hat{\theta}_2$. It is also preferred to $\hat{\theta}_3$, which has the same variance but exhibits substantial positive bias.

Each of the following widely used estimators is unbiased. The sample mean is an unbiased estimator of the unknown expectation $\mu$ since $E(\bar{X}) = \mu$; see Section Distribution of the Sample Mean. The sample proportion is an unbiased estimator for the population proportion $\pi$ since $E(\hat{p}) = \pi$; see Section Distribution of the Sample Fraction. Assume a random sample of size $n$. If the expectation of the population is unknown and estimated using the sample mean, the estimator $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$ is an unbiased estimator of $\sigma^2$, since $E(S^2) = \sigma^2$; see Section Distribution of the Sample Variance. The standard deviation $S$, which is the square root of the sample variance, is not an unbiased estimator of $\sigma$, as it tends to underestimate the population standard deviation. The estimator $\tilde{S}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2$ is not unbiased, since $E(\tilde{S}^2) = \frac{n-1}{n}\sigma^2$; see Section Distribution of the Sample Variance. The bias is given by $\mathrm{Bias}(\tilde{S}^2) = -\sigma^2/n$. Using the estimator $\tilde{S}^2$ one will tend to underestimate the unknown variance. The estimator, however, is asymptotically unbiased, as with increasing sample size the bias converges to zero. Division by $n-1$ (as in $S^2$) rather than by $n$ (as in $\tilde{S}^2$) assures unbiasedness.

- The sample mean is an efficient estimator of the unknown population expectation $\mu$. This is true for any distribution. - Suppose data are drawn from a normal distribution. The sample mean is an efficient estimator of $\mu$. It can be shown that no unbiased estimator of $\mu$ exists which has a smaller variance. - The sample mean is an efficient estimator for the unknown parameter $\lambda$ of a Poisson distribution. - The sample proportion is an efficient estimator of the unknown population proportion for a dichotomous population, i.e.
the underlying random variables have a common Bernoulli distribution.

- For a normally distributed population the sample mean $\bar{X}$ and the sample median $\tilde{X}$ are unbiased estimators of the unknown expectation $\mu$. For random samples (with replacement) we have $\mathrm{Var}(\bar{X}) = \sigma^2/n$. Furthermore one can show that $\mathrm{Var}(\tilde{X}) \approx \frac{\pi}{2}\cdot\frac{\sigma^2}{n}$ and hence $\mathrm{Var}(\bar{X}) < \mathrm{Var}(\tilde{X})$. The sample mean is relatively efficient in contrast to the sample median.
- The relative efficiency of various estimators of the same parameter in general depends on the distribution from which one is drawing observations.

Consistency is usually considered to be a minimum requirement of an estimator. Of course, consistency does not preclude the estimator having large bias and variance in small or moderately sized samples. Consistency only guarantees that bias and variance go to zero for sufficiently large samples. On the other hand, since sample size cannot usually be increased at will, consistency may provide a poor guide to the finite sample properties of the estimator.

For random samples, the sample mean is a consistent estimator of the population expectation $\mu$ since $E(\bar{X}) = \mu$ and the variance converges to zero, i.e., $\mathrm{Var}(\bar{X}) = \sigma^2/n \to 0$ as $n \to \infty$. For random samples the sample proportion is a consistent estimator for the population proportion $\pi$, as the estimator is unbiased and the variance converges to zero, i.e., $\mathrm{Var}(\hat{p}) = \pi(1-\pi)/n \to 0$. For a Gaussian distributed population the sample median is a consistent estimator for the unknown parameter $\mu$. For a Gaussian distribution, the estimator $S^2$ is consistent for the unknown variance $\sigma^2$, since the estimator is unbiased and the variance converges to zero: $\mathrm{Var}(S^2) = \frac{2\sigma^4}{n-1} \to 0$. The sample variance is also a consistent estimator of the population variance for arbitrary distributions which have a finite mean and variance.
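The properties above can be illustrated with a small Monte Carlo simulation. The sketch below is illustrative only: the population parameters, sample size and number of replications are assumptions chosen for the demonstration, not values from the text.

```python
# A minimal Monte Carlo sketch of the estimator properties described above:
# (i) the sample mean vs. the sample median for a normal population, and
# (ii) the bias of the variance estimator that divides by n instead of n - 1.
# Population parameters, sample size and replications are illustrative.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 4.0, 2.0, 25, 20_000

samples = rng.normal(mu, sigma, size=(reps, n))

means = samples.mean(axis=1)
medians = np.median(samples, axis=1)
var_unbiased = samples.var(axis=1, ddof=1)   # divides by n - 1
var_biased = samples.var(axis=1, ddof=0)     # divides by n

def mse(estimates, truth):
    """Average squared distance between the estimates and the true value."""
    return np.mean((estimates - truth) ** 2)

print(f"MSE of sample mean:   {mse(means, mu):.4f}  (theory sigma^2/n = {sigma**2 / n:.4f})")
print(f"MSE of sample median: {mse(medians, mu):.4f}  (~ pi/2 * sigma^2/n = {np.pi / 2 * sigma**2 / n:.4f})")
print(f"Mean of S^2  (n-1 denominator): {var_unbiased.mean():.4f}  (true sigma^2 = {sigma**2})")
print(f"Mean of S~^2 (n denominator):   {var_biased.mean():.4f}  (theory (n-1)/n*sigma^2 = {(n - 1) / n * sigma**2:.4f})")
```

Running the sketch shows the sample mean achieving a smaller MSE than the sample median, and the n-denominator variance estimator systematically underestimating the true variance, in line with the bias formula above.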
NCERT Solutions for Class 6 Maths Chapter 10 – Mensuration

NCERT Solutions for Mensuration covers all the important topics of the chapter, such as calculation of the perimeter of different shapes, calculation of the area of different shapes, and visualization of the area. This chapter of NCERT Solutions has 3 exercises and 30 questions in total. They are based on the latest CBSE Class 6 Maths Syllabus. Our subject matter experts have prepared all the NCERT solutions for the chapter exercise-wise for your reference. NCERT Solutions for Class 6 Chapter 10 cover all the important formulae of the chapter, such as the perimeter of a rectangle, the perimeter of regular shapes, and the area of a square and a rectangle. Practising these questions will help you form a good foundation for mensuration topics in higher classes.

The NCERT Solutions that we have prepared for Class 6 Chapter 10 are easy to understand and 100% accurate. We have also included some shortcut tricks and study tips that will help you solve problems quickly and score more marks in the CBSE Maths exam. Our expert teachers use the best methods of solving mensuration problem sums to ensure efficient learning. Our NCERT CBSE Class 6 Maths Chapter 10 solutions are prepared on the basis of CBSE guidelines. They will not only help you become thorough with Class 6 Maths Mensuration topics but will also help you perform well in your exams. With our CBSE NCERT Class 6 Maths Chapter 10 solutions at your disposal, you can easily acquaint yourself with the different question patterns that are significant from the exam standpoint. They are devised to clear all your doubts.

NCERT CBSE Class 6 Maths Chapter 10

NCERT Solutions for Class 6 Maths Chapter 10 – Mensuration include various vital concepts from the chapter. They provide you with a detailed understanding of the fundamental concepts of mensuration. Mensuration is considered one of the most crucial chapters of the CBSE Class 6 Maths Syllabus. Following are the topics covered in CBSE NCERT Class 6 Maths Chapter 10.

- Introduction to Mensuration

This section of the NCERT Solutions for Class 6 Chapter 10 – Mensuration will introduce you to the fundamental concepts and terms essential for solving problem sums on mensuration. It teaches you the definition of mensuration and the meaning of shapes. It also elaborates in detail on the measurement of the length, area, and volume of different two-dimensional and three-dimensional shapes.

This chapter of the NCERT Solutions for Class 6 Maths Chapter 10 – Mensuration describes the methods and formulas for calculating the perimeter of two-dimensional shapes like the triangle, rectangle, square, polygon, and irregular shapes. For example, if a triangle has three sides a, b and c, then
The perimeter of the triangle = a + b + c
Another example: if a square has 4 equal sides of length d, then
The perimeter of the square = 4 × d

In this chapter of NCERT Solutions for Class 6 Maths Chapter 10 – Mensuration, you will learn the different procedures for calculating the area of a square, a rectangle, different types of triangles, and irregular shapes. From this chapter of NCERT CBSE Class 6 Maths Chapter 10, you will also learn methods of area visualization.

CBSE NCERT Class 6 Maths Chapter 10 Exercises

NCERT CBSE Class 6 Maths Chapter 10 includes a total of three exercises. The following section will give you a brief description of each exercise in this chapter.
These exercises have been solved by our maths teachers covering all the essential concepts laid out in CBSE NCERT Class 6 Maths Chapter 10 of the CBSE NCERT Class 6 Maths Book.
- Exercise 10.1 of NCERT Class 6 Maths Chapter 10 has 17 questions in total that cover different subtopics of perimeter, such as the perimeter of a square, the perimeter of a triangle, the perimeter of an irregular shape, the perimeter of a rectangle, and the perimeter of a polygon.
- In this exercise, some questions are direct questions, some are indirect questions, and some are scenario-based questions.
- Exercise 10.1 of NCERT Solutions for Class 6 Maths Chapter 10 – Mensuration will give you enough practice to get comfortable with the covered topic and master it. It has been solved in a step-by-step format to induce easy understanding.
- In this portion of NCERT Solutions for Class 6, Chapter 10 – Mensuration, you will find only one question, which has 14 subparts.
- For solving this question, you will need to find out the area of various two-dimensional figures given in the question.
- This exercise of NCERT Solutions for Class 6 Maths Chapter 10 will help you clearly understand the covered concepts and the methodologies employed for calculating the area of complex and irregular two-dimensional shapes.
- Exercise 10.3 of NCERT Solutions for Class 6, Chapter 10 – Mensuration comprises 12 questions.
- You will find that question 1 has four subparts, each of which asks you to find the area of a rectangle with the given length and breadth.
- Question 3 also deals with determining the area of a rectangle from the given lengths and breadths.
- In Question 2, you are required to find the area of a square from each given value of the sides.
- Questions 4 to 9 and question 12 consist of scenario-based problem sums where you have to find the area of different shapes based on the given data.
- Questions 10 and 11 require you to split the given figures into rectangles to determine the area of each given figure.
- This exercise covers various types of questions from Class 6 Maths Mensuration, gives you enough practice to clear all your doubts and ensures you do well in your Class 6 Maths exams.
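To tie together the perimeter and area formulas mentioned above, here is a minimal sketch. The function names and example values are purely illustrative and are not part of the NCERT exercises themselves.

```python
# A minimal sketch of the perimeter and area formulas covered in this chapter.
# Function names and example values are illustrative, not from the NCERT text.

def perimeter_triangle(a, b, c):
    """Perimeter of a triangle with sides a, b, c."""
    return a + b + c

def perimeter_square(d):
    """Perimeter of a square with side d."""
    return 4 * d

def perimeter_rectangle(length, breadth):
    """Perimeter of a rectangle = 2 x (length + breadth)."""
    return 2 * (length + breadth)

def area_square(d):
    """Area of a square = side x side."""
    return d * d

def area_rectangle(length, breadth):
    """Area of a rectangle = length x breadth."""
    return length * breadth

if __name__ == "__main__":
    print(perimeter_triangle(3, 4, 5))   # 12
    print(perimeter_square(6))           # 24
    print(perimeter_rectangle(8, 3))     # 22
    print(area_square(6))                # 36
    print(area_rectangle(8, 3))          # 24
```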
In mainstream economics, economic surplus, also known as total welfare or total social welfare or Marshallian surplus (after Alfred Marshall), is either of two related quantities:
- Consumer surplus, or consumers' surplus, is the monetary gain obtained by consumers because they are able to purchase a product for a price that is less than the highest price that they would be willing to pay.
- Producer surplus, or producers' surplus, is the amount that producers benefit by selling at a market price that is higher than the least that they would be willing to sell for; this is roughly equal to profit (since producers are not normally willing to sell at a loss and are normally indifferent to selling at a break-even price).

In the mid-19th century, engineer Jules Dupuit first propounded the concept of economic surplus, but it was the economist Alfred Marshall who gave the concept its fame in the field of economics. On a standard supply and demand diagram, consumer surplus is the area (triangular if the supply and demand curves are linear) above the equilibrium price of the good and below the demand curve. This reflects the fact that consumers would have been willing to buy a single unit of the good at a price higher than the equilibrium price, a second unit at a price below that but still above the equilibrium price, etc., yet they in fact pay just the equilibrium price for each unit they buy. Likewise, in the supply-demand diagram, producer surplus is the area below the equilibrium price but above the supply curve. This reflects the fact that producers would have been willing to supply the first unit at a price lower than the equilibrium price, the second unit at a price above that but still below the equilibrium price, etc., yet they in fact receive the equilibrium price for all the units they sell.

Early writers on economic issues used surplus as a means to draw conclusions about the relationship between production and necessities. In the agricultural sector surplus was an important concept because this sector has the responsibility to feed everyone, including itself. Food is notable because people only need a specific amount of food and can only consume a limited amount. This means that excess food production must overflow to other people, and will not be rationally hoarded. The size of the non-agricultural sector is therefore limited by the agricultural surplus, that is, the output of food minus the amount consumed by the agricultural sector itself. William Petty used a broad definition of necessities, leading him to focus on employment issues surrounding surplus. Petty explains a hypothetical example in which there is a territory of 1000 men and 100 of those men are capable of producing enough food for all 1000 men. The question becomes: what will the rest of the men do if only 100 are needed to provide necessities? He thereby suggests a variety of employments, with some men remaining unemployed. David Hume approached the agricultural surplus concept from another direction. Hume recognized that agriculture may feed more than those who cultivate it, but questioned why farmers would work to produce more than they need. In his opinion, forced production, such as might occur under a feudal system, would be unlikely to generate a notable surplus. Yet, if farmers could purchase luxuries and other goods beyond their necessities, they would become incentivized to produce and sell a surplus.
Hume did not see this concept as abstract theory; he stated it as a fact when discussing how England developed after the introduction of foreign luxuries in his History of England. Adam Smith's thoughts on surplus drew on Hume. Smith noted that the desire for luxuries is infinite compared to the finite capacity of hunger. Smith saw the development in Europe as originating from landlords placing more importance on luxury spending than on political power.

Consumer surplus is the difference between the maximum price a consumer is willing to pay and the actual price they do pay. If a consumer is willing to pay more for a unit of a good than the current asking price, they are getting more benefit from the purchased product than they would if the price were their maximum willingness to pay. They are receiving the same benefit, the obtainment of the good, at a lower cost. An example of a good with generally high consumer surplus is drinking water. People would pay very high prices for drinking water, as they need it to survive. The difference between the price that they would pay, if they had to, and the amount that they pay now is their consumer surplus. The utility of the first few liters of drinking water is very high (as it prevents death), so the first few liters would likely have more consumer surplus than subsequent quantities.

The maximum amount a consumer would be willing to pay for a given quantity of a good is the sum of the maximum price they would pay for the first unit, the (lower) maximum price they would be willing to pay for the second unit, etc. Typically these prices are decreasing; they are given by the individual demand curve, which must be generated by a rational consumer who maximizes utility subject to a budget constraint. Because the demand curve is downward sloping, there is diminishing marginal utility. Diminishing marginal utility means a person receives less additional utility from an additional unit. However, the price of a product is constant for every unit at the equilibrium price. For each unit up to the equilibrium quantity, the extra money someone would have been willing to pay above the equilibrium price is the benefit they receive from purchasing that unit. For a given price the consumer buys the amount for which the consumer surplus is highest. The consumer's surplus is highest at the largest number of units for which, even for the last unit, the maximum willingness to pay is not below the market price.

Consumer surplus can be used as a measurement of social welfare, as shown by Robert Willig. For a single price change, consumer surplus can provide an approximation of changes in welfare. With multiple price and/or income changes, however, consumer surplus cannot be used to approximate economic welfare because it is no longer single-valued. More modern methods have since been developed to estimate the welfare effect of price changes using consumer surplus. The aggregate consumers' surplus is the sum of the consumer's surplus for all individual consumers. This aggregation can be represented graphically, as shown in the above graph of the market demand and supply curves. The aggregate consumers' surplus can also be described as the maximum satisfaction a consumer derives from particular goods and services.
Calculation from supply and demand

The consumer surplus (individual or aggregated) is the area under the (individual or aggregated) demand curve and above a horizontal line at the actual price (in the aggregated case, the equilibrium price). If the demand curve is a straight line, the consumer surplus is the area of a triangle:
CS = ½ × Qmkt × (Pmax − Pmkt)
where Pmkt is the equilibrium price (where supply equals demand), Qmkt is the total quantity purchased at the equilibrium price, and Pmax is the price at which the quantity purchased would fall to 0 (that is, where the demand curve intercepts the price axis). For more general demand and supply functions, these areas are not triangles but can still be found using integral calculus. Consumer surplus is thus the definite integral of the demand function with respect to price, from the market price to the maximum reservation price (i.e., the price-intercept of the demand function):
CS = ∫ D(P) dP, integrated from P = Pmkt to P = Pmax,
where D(P) is the demand function, i.e. the quantity demanded at price P. This shows that if we see a rise in the equilibrium price and a fall in the equilibrium quantity, then consumer surplus falls.

Calculation of a change in consumer surplus

The change in consumer surplus is used to measure the welfare effects of changes in prices and income. The demand function used to represent an individual's demand for a certain product is essential in determining the effects of a price change. An individual's demand function is a function of the individual's income, the demographic characteristics of the individual, and the vector of commodity prices. When the price of a product changes, the change in consumer surplus is measured as the negative of the integral of the individual's demand for the product between the original actual price (P0) and the new actual price (P1). If the change in consumer surplus is positive, the price change is said to have increased the individual's welfare. If the change in consumer surplus is negative, the price change is said to have decreased the individual's welfare.

Distribution of benefits when price falls

When the supply of a good expands, the price falls (assuming the demand curve is downward sloping) and consumer surplus increases. This benefits two groups of people: consumers who were already willing to buy at the initial price benefit from the price reduction, and they may buy more and receive even more consumer surplus; and additional consumers who were unwilling to buy at the initial price will buy at the new price and also receive some consumer surplus.

Consider an example of linear supply and demand curves. For an initial supply curve S0, consumer surplus is the triangle above the line formed by price P0 up to the demand line (bounded on the left by the price axis and on the top by the demand line). If supply expands from S0 to S1, the consumers' surplus expands to the triangle above P1 and below the demand line (still bounded by the price axis). The change in consumer's surplus is the difference in area between the two triangles, and that is the consumer welfare associated with the expansion of supply. Some people were willing to pay the higher price P0. When the price is reduced, their benefit is the area in the rectangle formed on the top by P0, on the bottom by P1, on the left by the price axis and on the right by the line extending vertically upwards from Q0. The second set of beneficiaries are consumers who buy more, and new consumers, those who will pay the new lower price (P1) but not the higher price (P0). Their additional consumption makes up the difference between Q1 and Q0.
Their consumer surplus is the triangle bounded on the left by the line extending vertically upwards from Q0, on the right and top by the demand line, and on the bottom by the line extending horizontally to the right from P1.

Rule of one-half

The rule of one-half estimates the change in consumer surplus for small changes in supply with a constant demand curve. Note that in the special case where the consumer demand curve is linear, consumer surplus is the area of the triangle bounded by the vertical line Q = 0, the horizontal price line and the linear demand curve. Hence, the change in consumer surplus is the area of the trapezoid with i) height equal to the change in price and ii) mid-segment length equal to the average of the ex-post and ex-ante equilibrium quantities. Following the figure above,
ΔCS = ½ × (Q0 + Q1) × (P0 − P1)
where
- CS = consumers' surplus;
- Q0 and Q1 are, respectively, the quantity demanded before and after a change in supply;
- P0 and P1 are, respectively, the prices before and after a change in supply.

Producer surplus is the additional benefit that accrues to the owners of production factors and to product suppliers from the difference between the price at which they are willing to supply the product and the current market price. It is the difference between the amount actually obtained in a market transaction and the minimum amount the producer would be willing to accept for the production factors or products provided.

Calculation of producer surplus

Producer surplus is usually expressed as the area below the market price line and above the supply curve. In Figure 1, the shaded area below the price line and above the supply curve, between zero production and the maximum output Q1, indicates producer surplus. The area OP1EQ1 below the price line is the total revenue actually received by the manufacturer, while the area OPMEQ1 below the S curve is the minimum total revenue that the manufacturer is willing to accept. In Figure 1, the area enclosed by the market price line, the manufacturer's supply line, and the coordinate axis is the producer surplus: because the rectangle OP1EQ1 is the total revenue actually obtained by the manufacturer, that is, A + B, and the trapezoid OPMEQ1 is the minimum total revenue that the manufacturer is willing to accept, that is, B, the area A is the producer surplus.

The manufacturer produces and sells the quantity Q1 of goods at the market price P1. To produce the quantity Q1, the manufacturer uses up production factors, i.e. incurs production costs, equivalent to the amount AVC·Q1. However, at the same time, the manufacturer actually obtains a total income equivalent to the total market value P1·Q1. Since AVC is always smaller than P1, from the production and sale of the quantity Q1 the manufacturer not only recovers sales revenue equivalent to variable costs, but also gains additional revenue. This part of the excess income reflects the increase in the benefits obtained by the manufacturer through market exchange. Therefore, in economics, producer surplus is usually used to measure producer welfare and is an important part of social welfare. Producer surplus is usually used to measure the economic welfare obtained by the manufacturer in the market supply. When the supply price is constant, producer welfare depends on the market price. If the manufacturer can sell the product at the highest price, the welfare is the greatest. As part of social welfare, the size of the producer surplus depends on many factors.
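Before turning to the determinants of producer surplus, the consumer-surplus formulas and the rule of one-half above can be checked numerically. The sketch below is a minimal illustration only, assuming a hypothetical linear demand curve Q(P) = a − bP with made-up numbers; it is not taken from the article.

```python
# A minimal numerical sketch of the consumer-surplus calculations and the
# rule of one-half described above, assuming a hypothetical linear demand
# curve Q(P) = a - b*P. All numbers are illustrative.

from scipy.integrate import quad  # numerical integration

a, b = 100.0, 2.0                  # assumed demand intercept and slope

def demand(P):
    """Quantity demanded at price P (clipped at zero)."""
    return max(a - b * P, 0.0)

P_mkt = 20.0                       # assumed equilibrium price
P_max = a / b                      # price at which demand falls to zero
Q_mkt = demand(P_mkt)

# Triangle formula for a linear demand curve.
cs_triangle = 0.5 * Q_mkt * (P_max - P_mkt)

# General definition: integral of demand from the market price to P_max.
cs_integral, _ = quad(demand, P_mkt, P_max)

# Change in consumer surplus when supply expands and price falls from P0 to P1.
P0, P1 = 20.0, 15.0
Q0, Q1 = demand(P0), demand(P1)
delta_cs_integral, _ = quad(demand, P1, P0)          # -integral from P0 to P1
delta_cs_rule_of_half = 0.5 * (Q0 + Q1) * (P0 - P1)  # rule of one-half

print(f"CS (triangle): {cs_triangle:.1f},  CS (integral): {cs_integral:.1f}")
print(f"Change in CS:  integral = {delta_cs_integral:.1f}, "
      f"rule of one-half = {delta_cs_rule_of_half:.1f}")
```

For a linear demand curve the two approaches agree exactly; for more general demand functions the rule of one-half is only an approximation for small price changes.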
Generally speaking, when other factors remain constant, an increase in the market price will increase producer surplus, and a decrease in the supply price or marginal cost will also increase producer surplus. If there is a surplus of goods, that is, if producers can only sell part of the goods at the market price, producer surplus will decrease. Obviously, the sum of the producer surplus of all manufacturers in the market constitutes the producer surplus of the entire market. Graphically, it is expressed as the area enclosed by the market supply curve, the market price line and the coordinate axis.

- Bade, R., & Parkin, M. (2017). Essential Foundations of Economics (8th ed.). Pearson.
- Corporate Finance Institute. (2021, March 26). Consumer Surplus and Producer Surplus.
- Boulding, Kenneth E. (1945). "The Concept of Economic Surplus". The American Economic Review. 35 (5): 851–869. JSTOR 1812599.
- "Consumer and producer surplus | Microeconomics | Khan Academy". Khan Academy.
- Brewer, A. (2008). "Surplus". In: The New Palgrave Dictionary of Economics. Palgrave Macmillan, London. https://doi.org/10.1057/978-1-349-95121-5_2208-1
- Petty, William (1899). "The Economic Writings of Sir William Petty". Cambridge: Cambridge University Press. 2 (98): 30. doi:10.1093/nq/s9-iv.98.409b. hdl:2027/hvd.li4qq6. ISSN 1471-6941.
- Slesnick, Daniel T. (2008). "Consumer Surplus". The New Palgrave Dictionary of Economics. pp. 1–7. doi:10.1057/978-1-349-95121-5_626-2. ISBN 978-1-349-95121-5.
- "What a Consumer Surplus Tells Us".
- Willig, Robert D. (1976). "Consumer's Surplus Without Apology". The American Economic Review. 66 (4): 589–597. ISSN 0002-8282. JSTOR 1806699.
- Henry George, Progress and Poverty
- A. Koutsoyiannis, Modern Microeconomics
- James M. Henderson and Richard E. Quandt, Microeconomic Theory: A Mathematical Approach
Your Training Diet should be a Healthy Diet

A healthy diet is one that helps maintain or improve general health. It is important for lowering many chronic health risks, such as obesity, heart disease, diabetes, hypertension and cancer. A healthy diet needs to have a balance of macronutrients (fats, proteins, and carbohydrates), enough calories to support energy needs without causing excessive weight gain, micronutrients to meet the needs of human nutrition without inducing toxicity, and adequate water.

World Health Organisation Guidelines

The World Health Organisation (WHO) makes the following 5 recommendations with respect to both populations and individuals:
- Achieve an energy balance and a healthy weight
- Limit energy intake from total fats and shift fat consumption away from saturated fats to unsaturated fats and towards the elimination of trans-fatty acids
- Increase consumption of fruits and vegetables, legumes, whole grains and nuts
- Limit the intake of simple sugar. A 2003 report recommends less than 10% simple sugars.
- Limit salt/sodium consumption from all sources and ensure that salt is iodised.

Other recommendations include:
- Sufficient essential amino acids to provide cellular replenishment and transport proteins. All essential amino acids are present in animal foods. Many plants such as quinoa, soy, and hemp also provide all the essential amino acids (known as a complete protein). Avocado and pumpkin seeds also contain all the essential amino acids.
- Include all essential micronutrients such as vitamins and minerals.
- Avoid directly poisonous (e.g. heavy metals) and carcinogenic (e.g. benzene) substances.

1. Monosaccharides and Disaccharides: The Simple Sugars
2. The Polysaccharides: Complex Carbohydrates, Fibre and Resistant Starch

Monosaccharides and Disaccharides - The Simple Sugars

Excess leads to an increased risk of obesity and diabetes. Avoid sugar and sugary foods and drinks, including sports drinks. Read below for hydration ideas.

The Polysaccharides: Fibre and Complex Carbohydrates

Complex carbohydrates are better for your health as they are low GI and provide a slow release of glucose into the bloodstream. Therefore, people with blood sugar problems such as hypoglycemia, insulin resistance or diabetes can benefit from eating whole foods and avoiding processed foods. Fibre promotes healthy digestion and waste excretion. High-fibre foods include vegetables, grains, and legumes. Processed foods have the fibre removed. Fruit skins are also high in fibre.

Fibre is commonly classified into two categories:
- Those that don't dissolve in water (insoluble fibre)
- Those that do (soluble fibre)

Insoluble fibre promotes the movement of material through your digestive system and increases stool bulk, so it can be of benefit to those who struggle with constipation or irregular stools. Whole-wheat flour, wheat bran, nuts and many vegetables are good sources of insoluble fibre. Soluble fibre dissolves in water to form a gel-like material. It can help lower blood cholesterol and glucose levels. Soluble fibre is found in oats, peas, beans, apples, citrus fruits, carrots, barley and psyllium.

Benefits of Fibre
- Normalises bowel movements - Dietary fibre increases the weight and size of the stool and softens it.
- Helps maintain bowel integrity and health - A high-fibre diet lowers the risk of developing haemorrhoids and of developing small pouches in your colon (diverticular disease).
Fibre also undergoes healthy fermentation in the colon to promote healthy bowel flora.
- Lowers blood cholesterol levels - Soluble fibre lowers total blood cholesterol levels by lowering low-density lipoprotein, or "bad", cholesterol levels.
- Helps control blood sugar levels - Soluble fibre can slow the absorption of sugar, which for people with diabetes can help improve blood sugar levels. A diet that includes insoluble fibre has been associated with a reduced risk of developing type 2 diabetes.
- Aids in weight loss - High-fibre foods generally require more chewing time, which gives your body time to register when you're no longer hungry.

Recommended Food Sources of Fibre

| Fruit | Vegetables | Grains, nuts and seeds |
| Fruit with skins | Potatoes with skin, pumpkin with skin, carrots with skins | Wholemeal bread only |

Proteins make up the majority of the structural tissue in your body, such as bone and the connective tissues that provide the shape and form to which your cells attach. The body is constantly making new proteins to replenish those lost from tissue damage, to fight invaders and to provide for growth. For example, the antibodies of the immune system, some hormones of your endocrine system, the enzymes in the digestive system and the blood coagulating factors of your circulatory system are all made of proteins. A healthy adult is estimated to need around 40 to 65 grams of amino acids per day. If this is not supplied in the diet, the body begins to break down its own muscle to support its need for amino acids. Inadequate intake of amino acids from protein can lead to stunting, poor muscle formation, thin and fragile hair, skin lesions, a poorly functioning immune system and many other symptoms.

Recommended Protein Sources
- Lean red meat
- Chicken/turkey
- Fish
- Soy (tofu and tempeh)
- Whey protein

Fats including saturated fats, trans-fatty acids, and cholesterol are associated with cardiovascular disease and obesity. Not all fats are bad, however; some fats have been shown to be health-promoting and some fats are essential for health. Minimising the consumption of saturated fats is a good idea, but minimising the consumption of all fats is not, as the brain is approximately 70 percent fat.

Healthy Fats: Monounsaturated and Polyunsaturated Fats

Research scientists first noticed monounsaturated fats after discovering that people eating a traditional Mediterranean diet have a lower risk of developing cardiovascular disease, certain types of cancer and rheumatoid arthritis. Traditional Mediterranean diets contain high amounts of olive oil, which is high in oleic acid, a monounsaturated fatty acid. Other monounsaturated fats include myristoleic and palmitoleic acids.

Food sources of Monounsaturated Fats
- Olive oil
- Canola oil or Donegal rapeseed oil

The polyunsaturated fats (PUFAs) are molecules that contain many unsaturated bonds. This chemical structure is the reason these fats are liquid even when cold. The Essential Fatty Acids are PUFAs. The omega-6 PUFAs, such as arachidonic acid, one of the major fats in your cell membranes, are made from linoleic acid. The omega-3 fats, such as docosahexaenoic acid, the main fat in your brain, are made from alpha-linolenic acid.

The Essential PUFA Fats
1. Linoleic acid (an omega-6 fatty acid)
2. Alpha-linolenic acid (an omega-3 fatty acid)

Omega-6 Fats

Few people are deficient in the omega-6 essential fat linoleic acid, as arachidonic acid, which is made from linoleic acid, is found at high levels in animal tissue such as beef and poultry.
Since the average Western diet contains a lot of meat, most people get high quantities of arachidonic acid.

Food sources of Omega-6 fats / linoleic acid: oils from grains, nuts and legumes, including:
- Peanut oil
- Beef and poultry
- Evening primrose oil and borage oil

Omega-3 Fats

The omega-3 fats, which are produced from alpha-linolenic acid, are associated with a decreased incidence of chronic inflammatory diseases such as rheumatoid arthritis, inflammatory bowel disease, and cardiovascular disease, and behavioural syndromes like ADHD (attention deficit hyperactivity disorder). Alpha-linolenic acid is found in high quantities in fish and flax. Some of the most important omega-3 fats, which are synthesised from alpha-linolenic acid, are docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA), and these can be obtained directly from the diet as well.

Food sources of Omega-3 fats / alpha-linolenic acid
- Fish and seaweed
- Flax oil
- Green leafy vegetables
- Cold-water fish like salmon, mackerel, herring, sardines and tuna

The ratio of omega-3 to omega-6 and inflammation

Although omega-6 fats, like arachidonic acid, play important roles in your body, consuming too many of these in comparison to the amount of omega-3 fats you consume can cause problems. The ratio of omega-6 to omega-3 fats is important.
- Omega-6 fats promote inflammation
- Omega-3 fats are anti-inflammatory

Current research continues to support the view that diseases such as atherosclerosis, arthritis, inflammatory bowel disease, and asthma benefit from a diet low in omega-6 and high in omega-3. The ideal ratio of omega-3 to omega-6 is estimated to be around 1:2. This can be accomplished by reducing your consumption of meats, dairy products, and refined foods, while increasing consumption of omega-3 rich foods.

Nutritional therapy to enhance performance

Carbohydrate loading is a strategy involving changes to training and nutrition that can maximise muscle glycogen (carbohydrate) stores prior to endurance competition.

Who can benefit from carbohydrate loading?

Anyone exercising continuously at a moderate to high intensity for 90 minutes or longer is likely to benefit from carbohydrate loading. Typically, sports such as cycling, marathon running, longer distance triathlon, cross-country skiing and endurance swimming benefit from carbohydrate loading. Carbohydrate loading is generally not practical to achieve in team sports where games are played every 3-4 days. Although it might be argued that players in football and AFL place heavy demands on their muscle fuel stores, it may not be possible to achieve a full carbohydrate loading protocol within the weekly schedule of training and games.

Carbohydrate loading and performance enhancement

Carbohydrate loading enables muscle glycogen levels to be increased from 100-120 mmol/kg ww to around 150-200 mmol/kg ww. This extra supply of carbohydrate has been demonstrated to improve endurance exercise by allowing athletes to exercise at their optimal pace for a longer time. It is estimated that carbohydrate loading can improve performance over a set distance by 2-3%.

Original carbohydrate loading

Originally, carbohydrate loading involved a depletion phase. This required 3 or 4 hard training days plus a low carbohydrate diet. The depletion phase was thought to be necessary to stimulate the enzyme glycogen synthase. The depletion phase was followed by a loading phase that involved 3-4 days of rest combined with a high carbohydrate diet.
Modern carbohydrate loading

Today's athletes use a modified carbohydrate loading method. Ongoing research has shown that the depletion phase is no longer necessary, and that the disruption it causes to preparation/training is both unnecessary and damaging. Today, 1-4 days of exercise taper while following a high carbohydrate diet is sufficient to elevate muscle glycogen levels. The biggest change in your schedule during the week before your event should be in your training, not in your food. Don't be tempted to do any last-minute long sessions. You need to taper your training so that your muscles have adequate time to become fully fuelled (and healed). Allow at least two easy or rest days pre-event. Do not eat hundreds more calories coming up to a match; you simply need to exercise less. This way the calories you generally expend during training can be used to fuel your muscles. Maintain your tried-and-tested high-carbohydrate training diet. Drastic changes can easily lead to upset stomachs, diarrhoea, or constipation. Be sure that you carb-load, not fat-load. A bigger lunch may be preferable to a big dinner. An earlier meal allows plenty of time for the food to move through your system. You can also carbo-load two days before if you will be too nervous to eat much the day before the event. (The glycogen stays in your muscles until you exercise.) Then graze on crackers, homemade soup, and other easily tolerated foods the day before your competition. You are better off eating a little bit too much than too little the day before the event.

Athletes who have properly carbo-loaded should gain about one to three pounds (0.45-1.36 kg). This weight gain reflects water weight and indicates the muscles are fuelled. Three ounces of water is stored for every ounce of carbohydrate in the body. Be sure to drink extra water and juices, if desired. Abstain from alcohol as it is dehydrating. Drink enough fluid to produce a significant volume of urine every two to four hours. The urine should be pale yellow.

Protein and carb loading

Do not avoid protein before an event. Protein is required on a daily basis. Eat a small serving of low-fat proteins such as poached eggs, yoghurt, turkey, or chicken, or plant proteins such as beans and lentils (as tolerated).

Carbohydrate Loading Diet example

The following diet is suitable for a 70 kg athlete aiming to carbohydrate load:
- 3 cups of date porridge with almonds, natural yoghurt and honey; 1 medium banana; 250 ml beetroot and carrot juice
- Buckwheat pancake with honey and walnuts; 250 ml cloudy apple juice
- 2 sandwiches (4 slices of heavy bread) with tuna, sweetcorn and low-fat mayonnaise; 200 g tub of low-fat fruit yoghurt; 250 ml grape juice
- Banana smoothie made with organic milk, banana, honey, a pinch of nutmeg and wheatgerm; cereal bar / flapjack / nut bar
- 1 cup of pasta sauce with 2 cups of wholewheat pasta; 2 slices garlic toast; 2 glasses of cordial
- 1 cup broccoli; 1 1/2 cups brown rice; 2 slices garlic toast; 2 glasses of cordial
- 1 cup chilli sauce with 2 cups brown rice; 2 slices garlic toast; 2 glasses of cordial
- Toasted bagel and cream cheese; 250 ml beetroot juice

Vitamins and minerals to improve performance and reduce recovery time

A sports drink beverage is designed to help athletes rehydrate when fluids are depleted after training or competition. Electrolyte replacement promotes proper rehydration, which is important in delaying the onset of fatigue during exercise. As the primary fuel utilised by exercising muscle, carbohydrates are important in maintaining exercise and sport performance.
Categories of sports drinks

Sports drinks can be split into three major types:
1. Isotonic sports drinks contain similar concentrations of salt and sugar as the human body.
2. Hypertonic sports drinks contain a higher concentration of salt and sugar than the human body.
3. Hypotonic sports drinks contain a lower concentration of salt and sugar than the human body.

Most sports drinks are moderately isotonic, having between 4 and 5 heaped teaspoons of sugar per five-ounce serving (13 to 19 grams per 250 ml). Their pH, nevertheless, is comparable to that of carbonated soft drinks.

Why sports drinks are not a healthy choice

Sports drinks are up to 30 times more erosive to your teeth than water. The citric acid in sports drinks softens tooth enamel. The leading brands of sports drinks on the market typically contain as much as two-thirds the sugar of soft drinks and more sodium. They also often contain high-fructose corn syrup (HFCS), artificial flavours and food colouring. Sports drinks are high in calories. One study from the University of California at Berkeley's Robert C. and Veronica Atkins Centre for Weight and Health even found that students who drink one 20-ounce sports drink every day for a year could gain 13 pounds (6 kg)! Although these drinks are often referred to as "energy" drinks, in the long run the sugar they contain does just the opposite, resulting in a quick burst of energy followed by a sudden and severe drop in both blood sugar and energy. Low-calorie and sugar-free sports drinks contain artificial sweeteners, which are worse for you than high-fructose corn syrup or sugar. Most also contain loads of processed salt, which is there to replenish the electrolytes you lose while sweating. However, unless you're sweating profusely and for a prolonged period, that extra salt is harmful.

Coconut water - The better option

Coconut water is the clear liquid inside young coconuts. It is also being marketed as a natural sports drink because of its high potassium and mineral content. Fresh coconut water is one of the highest sources of electrolytes known to man, and can be used to prevent dehydration from strenuous exercise or even diarrhoea. There have been cases where coconut water has been used as an intravenous hydration fluid in some developing countries where medical saline was unavailable.

Electrolytes are minerals in the body, and the compounds that bind with them create salts, such as sodium, potassium, magnesium, chloride, calcium, bicarbonate, phosphate, and sulphate. Electrolyte molecules are positively or negatively charged, which allows them to carry electrical impulses that transmit nerve signals and contract muscles. A normal diet provides more than enough electrolytes to meet the body's needs for most people. But there are times when electrolytes from food alone may not be enough.

Indications for use in sport
- Hot climates

Oral Rehydration Solutions from the pharmacy are manufactured according to World Health Organisation guidelines for the treatment and prevention of dehydration during diarrhoea and gastro-enteritis. Oral Rehydration Solutions are available in a number of pharmaceutical brands, typically as individual sachets of powder to be mixed with 200-250 ml of water. Following exercise (or "weigh in"), the athlete with a moderate to large fluid deficit should follow a rehydration plan tailored to meet their estimated fluid loss. Typically, over the next hour(s) the athlete should consume a volume of fluid equal to 1.5 times their estimated fluid deficit.
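The arithmetic behind that rehydration guideline is simple; the sketch below is a minimal illustration, assuming (as is common practice but not stated in the text) that a 1 kg drop in body mass corresponds to roughly 1 litre of fluid lost. The weigh-in values and function names are hypothetical.

```python
# A minimal sketch of the post-exercise rehydration guideline described above:
# drink roughly 1.5 times the estimated fluid deficit over the following hours.
# Body-mass figures are illustrative, not from the source text.

def fluid_deficit_litres(pre_exercise_kg: float, post_exercise_kg: float) -> float:
    """Estimate fluid loss in litres from the change in body mass (1 kg ~ 1 L assumed)."""
    return max(pre_exercise_kg - post_exercise_kg, 0.0)

def rehydration_target_litres(deficit_litres: float, factor: float = 1.5) -> float:
    """Volume to drink over the next hour(s): ~1.5 x the estimated deficit."""
    return factor * deficit_litres

if __name__ == "__main__":
    pre, post = 70.0, 68.8          # hypothetical weigh-in values (kg)
    deficit = fluid_deficit_litres(pre, post)
    print(f"Estimated fluid deficit: {deficit:.1f} L")
    print(f"Suggested rehydration volume: {rehydration_target_litres(deficit):.1f} L")
```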
Risks associated with supplement use

In some situations, excessive salt supplementation during exercise may lead to gastrointestinal problems or cause further impairment of fluid balance. Increasing the sodium content of a drink generally reduces the drink's palatability and may interfere with the voluntary consumption of fluid. Hypertension risk (see the DASH diet).

Whey protein is a rich source of branched chain amino acids (BCAAs), containing the highest known levels of any natural food source. Branched-chain amino acids (BCAAs) are used to fuel working muscles and stimulate protein synthesis, which may speed recovery and adaptation to stress (exercise). Preclinical studies have suggested that whey protein may possess anti-inflammatory or anti-cancer properties. Although whey proteins are responsible for some milk allergies, the major allergens in milk are the caseins. Whey protein contains the amino acid cysteine, which can be used to make glutathione. However, this amino acid is not essential for the synthesis of glutathione. Glutathione is an antioxidant that defends the body against free radical damage and some toxins, and studies in animals have suggested that milk proteins might reduce the risk of cancer.

Risks associated with supplement use

Some bodybuilding protein shakes purchased online, and in the New York metro area, exceeded USP standards for exposure to heavy metals when three servings a day were consumed. In addition, to avoid artificial sweeteners, flavours and bulking agents, consider purchasing unsweetened and unflavoured supplements.

Solgar Whey To Go Protein Powder is formulated with a blend of three uniquely processed whey protein concentrates. It also includes free-form L-Glutamine and free-form Branched Chain Amino Acids (BCAAs). L-Glutamine plays a significant role in supporting muscle mass. BCAAs help assist in decreasing the breakdown of muscles under stressful conditions. Both L-Glutamine and BCAAs are used by muscle tissue as a source of energy. It is free of gluten and fat.

Calcium is important for muscle contraction, nerve function and bone growth.

Indications for use in sport
- Athletes with an inadequate energy intake, or an inadequate intake of dairy and fortified soy products.
- Inadequate calcium intake during adolescence and early adulthood may lead to sub-optimal bone status.
- Calcium requirements are elevated by growth in childhood and adolescence.

Reduced iron status is a potential problem in some athletes when dietary intake fails to meet iron requirements. There is now evidence that supplementation of female athletes who are not anaemic but who have serum ferritin levels less than 16 or 20 ng/ml may cause improvements in some performance-related parameters. Iron is best taken with 500 mg of vitamin C for 2-3 months or until review with a sports doctor. The supplement should be taken with food.

Indications for use in sport
- Low serum ferritin: poorly balanced vegetarian diets, chronic low-energy diets, and other dietary patterns which see infrequent intake of red meat.
- Increased iron requirements: female athletes (menses), adolescent athletes undergoing growth spurts, pregnant athletes, athletes adapting to altitude or heat training.
- Increased iron losses due to gastrointestinal bleeding (e.g. ulcers, some non-steroidal anti-inflammatory drugs (NSAIDs)), excessive haemolysis due to increased training stress (e.g. footstrike haemolysis in runners), and other blood losses (e.g. surgery, nosebleeds, contact sports).
Risks associated with supplement use

Excessive iron intake in some athletes may lead to haemochromatosis, and some iron preparations cause gastrointestinal upsets and constipation. Solgar Gentle Iron.

Multivitamins and minerals

Athletes who restrict their total energy intake or dietary variety are at risk of an inadequate intake of vitamins and minerals.

Indications for use in sport
- Athletes undertaking a prolonged period of energy restriction (e.g. 8 MJ/1900 kcal for females or 10 MJ/2300 kcal for males) for weight loss or weight maintenance.
- Athletes with a restricted dietary intake who are unable or unwilling to increase their food range.
- Athletes with a heavy competition schedule involving disruption to normal eating patterns.

Solgar Male Multiple: a phytonutrient multiple vitamin, mineral and herbal formula for men.

Vitamin D

Unlike other nutrients, Vitamin D can be obtained by exposure to ultraviolet radiation from sunlight, as well as through foods or supplements.
- It allows body cells to utilise calcium (which is essential for cell metabolism).
- It allows muscle fibres to develop and grow normally.
- The immune system needs Vitamin D to function properly.
- Every cell in the body has receptors for Vitamin D.

Scientists have discovered that elite female gymnasts and many of a group of distance runners had poor Vitamin D status. Forty percent of the runners, who trained outdoors in sunny Baton Rouge, Louisiana, had insufficient Vitamin D. In one study, four Russian sprinters were exposed to artificial ultraviolet light while another group was not; both trained identically for the 100-meter dash. The control group lowered their sprint times by 1.7 percent, while the irradiated sprinters improved by an impressive 7.4 percent. More recently, when researchers tested the vertical jumping ability of a small group of adolescent athletes, Larson-Meyer says, "they found that those who had the lowest levels of Vitamin D tended not to jump as high," intimating that too little of the nutrient may impair muscle power. A number of recent studies have also shown that, among athletes who train outside year-round, maximal oxygen intake tends to be highest in late summer.

Indications for use in sport
- High-latitude countries such as Ireland.

Solgar Norwegian Cod Liver Oil. Cod liver oil has traditionally been one of the most popular natural sources of Vitamins A and D. Vitamins A and D help maintain bones, as well as a healthy immune system. Vitamin A assists in many other functions such as eyesight and skin maintenance.

Caffeine

Caffeine has numerous actions on different body tissues, including:
- The mobilisation of fats from adipose tissue and the muscle cell
- Changes to muscle contractility
- Alterations to the central nervous system to change perceptions of effort or fatigue
- Stimulation of the release and activity of adrenaline
- Effects on cardiac muscle

Caffeine enhances endurance and provides a small but worthwhile enhancement of performance over a range of exercise protocols, including:
- Short-duration high-intensity events (1-5 min)
- Prolonged high-intensity events (20-60 min)
- Endurance events (90 min+ continuous exercise)
- Ultra-endurance events (4 hours+)
- Prolonged intermittent high-intensity protocols (team and racquet sports).

Traditional protocols for the use of caffeine involve intake one hour prior to the event, in doses equivalent to ~6 mg/kg (e.g. 300-500 mg for a typical athlete) - the caffeine content of several cups of strong coffee (see the dose sketch below).
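To make the ~6 mg/kg guideline just quoted concrete, here is a minimal Python sketch (the function name and the 60-85 kg examples are illustrative, not from the source) that converts body mass into an absolute caffeine dose:

def caffeine_dose_mg(body_mass_kg, mg_per_kg=6):
    # Absolute caffeine dose for a per-kilogram protocol.
    return body_mass_kg * mg_per_kg

for mass in (60, 70, 85):
    print(mass, "kg ->", caffeine_dose_mg(mass), "mg")
# 60 kg -> 360 mg, 70 kg -> 420 mg, 85 kg -> 510 mg (roughly the 300-500 mg quoted above)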
There is new evidence, at least from studies involving prolonged exercise lasting longer than 60 minutes, that a variety of protocols of caffeine use can enhance performance. In particular, benefits have been seen with small-to-moderate levels of caffeine (1-3 mg/kg BM, or 70-200 mg caffeine) taken before and/or throughout exercise, or towards the end of exercise when the athlete is becoming fatigued. Furthermore, these studies show that performance benefits do not increase with increases in the caffeine dose.

Risks associated with supplement use

The use of larger doses of caffeine increases the risk of side-effects.

Creatine and Creatine Loading

Creatine is a naturally occurring compound found in large amounts in skeletal muscle. Phosphorylated creatine is a source of phosphate to regenerate ATP, which is fuel for muscles. The creatine phosphate system is the most important fuel source for sprints or bouts of high-intensity exercise lasting up to 10 seconds. Recent studies have shown that prior creatine loading enhances glycogen storage and carbohydrate loading in a trained muscle. An acute weight gain of 600-1000 g is typically associated with acute loading and may represent water gain. This associated weight gain may be counterproductive to athletes competing in sports where power-to-weight is a key factor in successful performance, or in sports involving weight divisions.

Indications for use in sport
- A developed athlete undertaking resistance training to increase lean body mass.
- Interval and sprint training sessions where the athlete is required to repeat short explosive maximal efforts with brief recovery intervals.
- Sports with intermittent work patterns (e.g. soccer, basketball, football, racquet sports).

Concerns associated with supplement use

Creatine loading promotes weight gain due to fluid retention. Creatine monohydrate is the most practical form for supplementation with creatine.

Rapid Loading Protocol
- 20 g daily, divided into 4 doses, for 5 days.
- These doses should be taken with a meal or snack supplying a substantial amount of carbohydrate (50-100 g).
- Weight gain of ~0.6-1.0 kg should be expected when using this protocol.
- Maintenance dose: 3 g/day.

Slow Loading Protocol
- 3 g/day consumed with a substantial carbohydrate meal or snack.
- Maintenance dose: 3 g/day.

A small worked sketch of these loading amounts appears below, after the bicarbonate discussion.

Bicarbonate and Citrate Loading

Bicarbonate is the body's most important extracellular buffer. Among the types of acid produced, lactic acid generated during exercise is buffered by bicarbonate. Bicarbonate loading increases the muscle's extracellular buffering capacity and ability to dispose of excess hydrogen ions produced through anaerobic glycolysis. Studies show that bicarbonate loading has a moderate effect size in enhancing the performance of anaerobic exercise and events (Matson and Tran 1993), and that a "chronic" supplementation protocol with repeated doses of bicarbonate over several days increases buffering capacity, with effects lasting for at least 24 h following the last dose (McNaughton et al. 1999; McNaughton and Thompson 2001). Citrate loading has also been used to increase extracellular buffering capacity, although some research suggests bicarbonate may be more effective (Van Montfoort et al. 2004). This area warrants further investigation given that citrate seems to be less likely to cause gut disturbances. Traditional protocols of bicarbonate and citrate supplementation have involved "acute" ingestion in the one to two hours before an exercise bout.
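As promised above, a small worked sketch of the creatine loading arithmetic (the function name is illustrative, not from the source): the rapid protocol delivers 20 g/day split into four 5 g doses for five days, the slow protocol 3 g/day.

def creatine_daily_doses(total_g_per_day, doses_per_day):
    # Split a daily creatine amount into equal doses.
    return [total_g_per_day / doses_per_day] * doses_per_day

rapid_day = creatine_daily_doses(20, 4)   # [5.0, 5.0, 5.0, 5.0] grams per dose
rapid_total = 20 * 5                      # 100 g over the 5-day rapid loading phase
slow_day = creatine_daily_doses(3, 1)     # [3.0] - slow loading / maintenance
print(rapid_day, rapid_total, slow_day)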
Situations for use in sport
- There is strong evidence for use by athletes competing in high-intensity competition events lasting 1-7 minutes.
- High-intensity events of up to an hour.
- Intermittent high-intensity team sports.
- The acute bicarbonate loading protocol typically involves a 300 mg/kg (0.3 g per kg) dose taken 1-2 hours prior to the session; a 70 kg athlete would take about 21 g of sodium bicarbonate, supplying roughly 15.25 g of bicarbonate.
- The chronic bicarbonate loading protocol typically involves five days of 500 mg/kg/day bicarbonate, split into four doses over the day.

Risks associated with supplement use

Gastrointestinal distress often occurs with bicarbonate loading. This may be reduced by ingesting the capsules or dissolvable powder with sufficient fluid to decrease the osmotic 'loading' on the gut. Changes in the pH of urine are expected following bicarbonate supplementation. If an athlete is selected for a drug test, they may need to wait several hours before urinary pH returns to the levels that are acceptable to drug testing authorities. This may cause some disruption to the athlete's daily routine or post-event activities. Interaction with other supplements should be considered (e.g. caffeine, creatine).

Carnosine and β-alanine

Muscle carnosine is an intracellular buffer. Carnosine has an antioxidant role and accounts for about 10% of the muscle's ability to buffer the acidity (H+ ions) produced by high-intensity exercise. Increasing muscle carnosine levels may offer an alternative to bicarbonate/citrate loading for high-intensity exercise, or an additional strategy. Recent studies have shown that supplementation with 5-6 g/day of β-alanine can increase muscle carnosine content by about 60% after 4 weeks and about 80% after 10 weeks of supplementation (Harris et al. 2006). Dietary sources of carnosine and β-alanine include meats, especially "white" (fast-twitch) meat such as the breast meat of poultry and other birds, and fish. Vegetarians have lower resting muscle carnosine concentrations than meat-eaters (Harris et al. 2007). A daily β-alanine dose of ~65 mg/kg appears to balance its effectiveness in raising muscle carnosine levels against the occurrence of side effects; this equates to 4.5-5.5 g/day for a 70-85 kg athlete. β-alanine should be taken in split doses over the day and consumed with carbohydrate-rich foods. It is not yet known how long supplementation needs to continue to maximise muscle carnosine concentrations, or how long muscle carnosine remains elevated if supplementation is stopped. However, it appears that the rise and fall of muscle carnosine may take several months to occur.

Indications for use in sport
- High-intensity exercise.
- Competitive events lasting 1-7 minutes.
- Repeated bouts of high-intensity work (sprints, lifts) which cause an exercise-limiting increase in H+ ions over time.

Risks associated with supplement use

Studies on β-alanine are too new to be certain about the side-effects associated with supplement use. To date, the major side-effect that has been described is paraesthesia - a prickling or "pins and needles" sensation - occurring for roughly 60 minutes, beginning about 15-20 minutes after a dose of β-alanine.

Beetroot juice

Beetroot juice has previously been shown to reduce blood pressure. Now drinking beetroot juice has been shown to boost stamina and could help people exercise for up to 16% longer, a UK study suggests. A University of Exeter team found that nitrate contained in the vegetable leads to a reduction in oxygen uptake, making exercise less tiring.
The small Journal of Applied Physiology study suggests the effect is greater than that which can be achieved by regular training. The researchers believe their findings could help people with cardiovascular, respiratory or metabolic diseases, as well as endurance athletes. Eight men aged 19-38 were given 500 ml per day of organic beetroot juice for six consecutive days before completing a series of tests involving cycling on an exercise bike. After drinking beetroot juice the group was able to cycle for an average of 11.25 minutes - 92 seconds longer than when they were given the placebo - translating into an approximate 2% reduction in the time taken to cover a set distance. The group that had consumed the beetroot juice also had lower resting blood pressure. The nitrate in the beetroot juice is converted into nitric oxide in the body, reducing how much oxygen is burned up by exercise and therefore boosting stamina.

Antioxidants: Vitamins A, C and E, zinc and selenium

Sudden increases in training stress lead to temporary increases in the production of free oxygen radicals. Free radicals are unstable molecules which form during normal metabolism and via exposure to external factors such as pollution. Free radicals have been linked to cell membrane damage, cancer and immune weakness. Supplementation with antioxidant vitamins reduces oxidative damage, but there is no consistent evidence of performance enhancement following antioxidant supplementation. Regular training promotes an increase in the body's own antioxidant defence system against free-radical damage. Antioxidants are mainly found in plant-based foods including dark-coloured vegetables, citrus fruit, legumes, nuts, grains, seeds and oils. Tea (black and green) is a rich source of flavonoids.

Carnitine

Carnitine is a nitrogenous compound found mainly in meats and is synthesised in the kidney and liver from lysine and methionine. Carnitine enhances aerobic endurance by increasing the oxidation of glucose, decreasing the accumulation of lactic acid, and enhancing fatty acid metabolism by the cellular mitochondria.

Fish oil

The Journal of Science and Medicine in Sport published a study by researchers from the University of South Australia showing that fish oil supplementation reduced heart rate in elite rugby players undertaking a high-intensity workout. More recently, the European Journal of Applied Physiology published a study by researchers from the University of California-Davis showing that fish oil supplementation increased heart stroke volume and cardiac output during low- to moderate-intensity exercise.

Quercetin

Quercetin is a flavonoid found naturally in the skins of many red fruits and vegetables, including red onions, tomatoes, blueberries and apples, with reputed health-boosting antioxidant and anti-inflammatory properties. Recent studies have suggested that quercetin can boost endurance, increase VO2 max (i.e. aerobic capacity), fight fatigue, support the immune system and attenuate exercise-induced damage in the body. Solgar Quercetin Complex.

Probiotics

Probiotics have beneficial effects on health and in particular on intestinal microbial balance. The two main commercially used species are Lactobacillus acidophilus and Bifidobacterium bifidum.
There is evidence of the following beneficial effects of probiotics:
- Improving intestinal tract health
- Enhancing the immune system
- Enhancing the bioavailability of nutrients
- Reducing lactose intolerance
- Decreasing the prevalence of allergy in susceptible individuals
- Reducing the risk of certain types of cancers

The AIS conducted a study on Lactobacillus fermentum in highly trained distance runners in 2003. A highly significant, favourable reduction in the number of symptom days was observed in the probiotic group compared with the placebo treatment, although the underlying immunological control mechanisms were not clearly established. A collaborative study between the AIS and the University of Newcastle published in 2006 (British Journal of Sports Medicine 40(4):351-354) indicated that fatigued athletes with lowered immune responses may benefit from probiotic supplementation.

Herbal medicine and sport

Herbal medicine is used in Traditional Chinese Medicine (TCM) for performance enhancement. Siberian Ginseng (Eleutherococcus senticosus) is a gentle herb appropriate for long-term use without side effects. Siberian Ginseng is considered to be adaptogenic, that is, it helps the body find balance and adapt to stresses. It does this primarily by nourishing the adrenal glands. Effects of Siberian Ginseng include immune support, blood sugar regulation and improvement in energy levels. It has been shown in studies to enhance athletic performance.

Ginseng (Panax ginseng) is a fundamental herb for improving energy levels in general and for sports performance. It has been shown to have many positive effects for the athlete: it shortens the latency period of, and strengthens, conditioned reflexes; speeds the transmission of nerve impulses; promotes relaxation while restoring alertness; dilates the coronary arteries and sustains proper cardiac rhythm; increases the synthesis of proteins and nucleic acids; helps maintain adequate blood sugar levels; and supports adrenal, spleen, thyroid and thymus function.

Cordyceps (C. sinensis) is a very safe and gentle tonic herb. In TCM it tonifies kidney yang and strengthens the immune system and lungs. It is a very unusual herb, as it is a moth larva which has been infected with a fungus and then dried. Cordyceps has been shown to enhance the immune system, relax spasms of the heart, bronchi and intestines, improve sexual function, and invigorate energy levels while keeping one relaxed.

Tribulus / Bai Ji Li / 白蒺藜 (Tribulus terrestris) is a well-known aphrodisiac and male tonic. From a TCM perspective, Bai Ji Li is warming, pungent and bitter; it calms floating Liver Yang, clears Wind-Heat (particularly from the eyes), moves Liver Qi, and relieves itching (specific for desquamation of the palms and soles). Men with low testosterone levels may benefit from the use of Tribulus. Studies show that it can produce statistically significant increases in levels of testosterone, dihydrotestosterone and dehydroepiandrosterone. Bai Ji Li is a specific remedy where low sperm counts are due to stress (Qi stagnation) and low testosterone levels.

Damiana (Turnera diffusa / T. aphrodisiaca) is antidepressant, aphrodisiac, euphoric, mildly diuretic, mildly laxative, mildly purgative, nervine, stimulant, stomachic, testosteromimetic, thymoleptic and urinary antiseptic. It is indicated for depression, nervous dyspepsia, atonic constipation, coital inadequacy, debility and lethargy, and is specifically indicated in anxiety neurosis with a predominant sexual factor.
Damiana is a valuable strengthening remedy for the nervous system. In particular, it has a stimulating and enhancing action on those functions related to the male reproductive system, especially where there is sexual inadequacy with a strong psychological or emotional element. The alkaloids are thought to have a testosterone-like effect. It is of benefit in any debilitated condition of the central nervous system, from anxiety and depression to neuralgia, and is used to contain genital herpes. Although considered to be a 'male' herb, it is not contraindicated for women with debilitated conditions.

- Fertility tonic and aphrodisiac
- Enhances performance
- Sports nutrition
- Protects the prostate
- Prevents male pattern hair loss
- Tonifies the adrenal glands

This mix contains a combination of aphrodisiac and tonic herbs that tonify the kidney energy, which, according to Traditional Chinese Medicine, is responsible for fertility and reproduction. The herbs tonify the adrenal glands and improve strength and resistance. The herbs also increase testosterone levels, improve sperm production and erectile function, and increase libido. The mix is indicated to increase fertility and libido and is a general male tonic designed to increase stamina and promote resistance to stress. Although it is promoted as a fertility tonic, it can be used by any male, from a businessman driving 4 hours a day to get to and from work to a world-class athlete looking to increase muscle mass and stamina. It is specifically indicated for men who have sweaty or clammy palms, a condition known as palmar hyperhidrosis, which is a symptom of adrenal exhaustion.

- Tonifies the adrenals to prevent adrenal exhaustion
- Treats depression/SAD
- Chemotherapy support
- Enhances memory
- Enhances performance
- Tonifies the nervous system to prevent and treat neurological disease (MS, Parkinson's, epilepsy, etc.)

An adaptogen produces an increase in the power of resistance against stress, whether it is physical, chemical, biological or emotional in origin. Adaptogens restore and normalise physiological functions in the event of stress. Adaptogens specifically help our bodies adapt to changes in what is known as our circadian rhythm, or body clock, due to seasonal changes, shift work and crossing time zones, and they are used specifically to prevent the effects of SAD. When a stressful situation occurs, consuming adaptogens generates a degree of generalised adaptation that allows our physiology to handle the stressful situation in a more resourceful manner. Carahealth Adapt helps to:
- Lower cortisol levels during times of stress
- Boost immunity
- Increase energy levels
- Improve resistance to stress
- Improve concentration
- Improve the symptoms of SAD

Acupuncture for Sports Performance Enhancement

At the 1993 Chinese National Games, nine Chinese women runners broke nine world records. In the 10,000 metre race, the previous record was broken by 42 seconds, an unbelievable time. The new 1500 metre record holder had been 73rd at the same distance the year before. Journalists and other athletes around the world took notice and accused the team of using steroids, even though the runners all passed steroid tests and there were no other indications of steroid use, such as acne or highly defined muscles. A press conference was held where Ma Jun Ren, the team coach, enraged by these accusations, held up a box of Chinese herbs he credited with his team's performance.
It was derived from cordyceps, a traditional Chinese herb used for generations as a lung Qi tonic. Acupuncture for Athletes Acupuncture treats musculoskeletal injuries and constitutional imbalances, relieves muscle pain and spasm and improves circulation to tense or injured tissues. Acupuncture is especially effective for tendon and ligament sprain/strains and chronic injuries which have been poorly responsive to other types of treatment. Cupping for Athletes Cupping employs the localised use of negative pressure (vacuum) to reverse the centripetal pull of gravity. Simply stated, it uses gentle, controlled suction to open up muscle tissue and vastly increase local circulation of blood and fluids. This negative pressure improves blood and fluid circulation, mobilizes muscle and sinew flexibility, irons out crumpled and contracted fascia, gives breathing room to adhesions, helps to vanish scars, dredges the lymphatic system, improves skin tone, breaks up cellulite and promotes relaxation. Carina is available to lecture for your group or institution on this subject. Carina Harkin BHSc.Nat.BHSc.Hom.BHSc.Acu. is a practitioner of 11 years, complementary medicine lecturer of 4 years and mother of six in Galway, Ireland who practices what she teaches. For an appointment call Carina directly on 083 34 66 333. All products are available through www.carahealth.ie. Remember, we are here for a good time not a long time, enjoy your food life! Carahealth Galway Ireland. Acupuncture, Naturopathy, Homeopathy, Herbal Medicine, Nutrition, Nutritional Therapy, Flower Essences, Iridology, Short Courses, Cosmetic Acupuncture
In the process of deduction, you begin with some statements, called "premises," that are assumed to be true; you then determine what else would have to be true if the premises are true. For example, you can begin by assuming that God exists and is good, and then determine what would logically follow from such an assumption. You can begin by assuming that if you think, then you must exist, and work from there. With deduction you can provide absolute proof of your conclusions, given that your premises are correct. The premises themselves, however, remain unproven and unprovable. Examples of deductive logic:
- All men are mortal. Joe is a man. Therefore Joe is mortal. If the first two statements are true, then the conclusion must be true.
- Bachelors are unmarried men. Bill is unmarried. Therefore, Bill is a bachelor.
- To get a Bachelor's degree at Utah State University, a student must have 120 credits. Sally has more than 130 credits. Therefore, Sally has a bachelor's degree.

In the process of induction, you begin with some data, and then determine what general conclusion(s) can logically be derived from those data. In other words, you determine what theory or theories could explain the data. For example, you note that the probability of becoming schizophrenic is greatly increased if at least one parent is schizophrenic, and from that you conclude that schizophrenia may be inherited. That is certainly a reasonable hypothesis given the data. However, induction does not prove that the theory is correct. There are often alternative theories that are also supported by the data. For example, the behavior of the schizophrenic parent may cause the child to be schizophrenic, not the genes. What is important in induction is that the theory does indeed offer a logical explanation of the data. To conclude that the parents have no effect on the schizophrenia of the children is not supportable given the data, and would not be a logical conclusion. Examples of inductive logic:
- This cat is black. That cat is black. A third cat is black. Therefore all cats are black.
- This marble from the bag is black. That marble from the bag is black. A third marble from the bag is black. Therefore all the marbles in the bag are black.
- Two-thirds of my Latino neighbors are illegal immigrants. Therefore, two-thirds of Latino immigrants come illegally.
- Most universities and colleges in Utah ban alcohol from campus. Therefore, most universities and colleges in the U.S. ban alcohol from campus.

Deduction and induction by themselves are inadequate to make a compelling argument. While deduction gives absolute proof, it never makes contact with the real world; there is no place for observation or experimentation, and no way to test the validity of the premises. And, while induction is driven by observation, it never approaches actual proof of a theory. Therefore an effective paper will include both types of logic.
(Figure caption) Flywheels have large moments of inertia to smooth out mechanical motion; this example is in a Russian museum.

The moment of inertia, otherwise known as the angular mass or rotational inertia, of a rigid body is a tensor that determines the torque needed for a desired angular acceleration about a rotational axis, similar to how mass determines the force needed for a desired acceleration. It depends on the body's mass distribution and the axis chosen, with larger moments requiring more torque to change the body's rotation. It is an extensive (additive) property: for a point mass the moment of inertia is just the mass times the square of the perpendicular distance to the rotation axis. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). One of its definitions is the second moment of mass with respect to distance from an axis \( r \),
\[ I = \int r^2 \, dm , \]
integrating over the entire mass \( m \). For bodies constrained to rotate in a plane, it is sufficient to consider their moment of inertia about an axis perpendicular to the plane (a scalar value). For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3 × 3 matrix; each body has a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other. When a body is rotating, or free to rotate, around an axis, a torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moment of inertia may be expressed in units of kilogram metre squared (kg·m²) in SI units and pound-foot-second squared (lb·ft·s²) in imperial or US units. Moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics - both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by \( I = m r^2 \), where \( r \) is the distance of the point from the axis and \( m \) is the mass. For an extended rigid body, the moment of inertia is just the sum of all the small pieces of mass multiplied by the square of their distances from the axis in question. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object. In 1673 Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law. The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body. Moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass.
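As a quick numerical illustration of the point-mass formula and the additivity just described (the numbers are illustrative, not from the source): a 2 kg mass held 0.5 m from the axis has
\[ I = m r^2 = 2\,\mathrm{kg} \times (0.5\,\mathrm{m})^2 = 0.5\,\mathrm{kg\,m^2}, \]
and two such masses rigidly attached about the same axis simply add, giving \( 1.0\,\mathrm{kg\,m^2} \).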
There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor. The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia of an airplane about its longitudinal, horizontal and vertical axes determines how steering forces on the control surfaces of its wings, elevators and tail affect the plane in roll, pitch and yaw.

(Video caption) Rotating chair experiment illustrating moment of inertia: when the spinning professor pulls in his arms, his moment of inertia decreases; to conserve angular momentum, his angular velocity increases.

Moment of inertia \( I \) is defined as the ratio of the net angular momentum \( L \) of a system to its angular velocity \( \omega \) around a principal axis, that is
\[ I = \frac{L}{\omega} . \]
If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms, or divers curl their bodies into a tuck position during a dive, to spin faster. If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque \( \tau \) on a body to the angular acceleration \( \alpha \) around a principal axis, that is
\[ I = \frac{\tau}{\alpha} . \]
For a simple pendulum, this definition yields a formula for the moment of inertia \( I \) in terms of the mass \( m \) of the pendulum and its distance \( r \) from the pivot point as
\[ I = m r^2 . \]
Thus, moment of inertia depends on both the mass \( m \) of a body and its geometry, or shape, as defined by the distance \( r \) to the axis of rotation. This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses \( dm \) each multiplied by the square of its perpendicular distance \( r \) to an axis \( \hat{\mathbf{k}} \):
\[ I = \int r^2 \, dm . \]
In general, given an object of mass \( m \), an effective radius \( k \) can be defined for an axis through its center of mass, with such a value that its moment of inertia is
\[ I = m k^2 , \]
where \( k \) is known as the radius of gyration. Moment of inertia can be measured using a simple pendulum, because it is the resistance to the rotation caused by gravity. Mathematically, the moment of inertia of the pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum this is found to be the product of the mass of the particle \( m \) with the square of its distance \( r \) to the pivot, that is
\[ I = m r^2 . \]
This can be shown as follows: The force of gravity on the mass of a simple pendulum generates a torque \( \boldsymbol{\tau} = \mathbf{r} \times \mathbf{F} \) around the axis perpendicular to the plane of the pendulum movement. Here \( \mathbf{r} \) is the distance vector perpendicular to and from the force to the torque axis, and \( \mathbf{F} \) is the tangential component of the net force on the mass. Associated with this torque is an angular acceleration, \( \boldsymbol{\alpha} \), of the string and mass around this axis. Since the mass is constrained to a circle, the tangential acceleration of the mass is \( \mathbf{a} = \boldsymbol{\alpha} \times \mathbf{r} \). Since \( \mathbf{F} = m\mathbf{a} \), the torque equation becomes:
\[ \boldsymbol{\tau} = \mathbf{r} \times \mathbf{F} = \mathbf{r} \times (m\, \boldsymbol{\alpha} \times \mathbf{r}) = m\left[ (\mathbf{r} \cdot \mathbf{r})\, \boldsymbol{\alpha} - (\mathbf{r} \cdot \boldsymbol{\alpha})\, \mathbf{r} \right] = m r^2 \alpha\, \hat{\mathbf{k}} = I \alpha\, \hat{\mathbf{k}} , \]
where \( \hat{\mathbf{k}} \) is a unit vector perpendicular to the plane of the pendulum. (The second-to-last step uses the vector triple product expansion with the perpendicularity of \( \boldsymbol{\alpha} \) and \( \mathbf{r} \).) The quantity \( I = m r^2 \) is the moment of inertia of this single mass around the pivot point.
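As a brief worked example of the skater and diver effect described above (numbers illustrative): with no external torque the angular momentum \( L = I\omega \) is constant, so
\[ I_1 \omega_1 = I_2 \omega_2 \quad\Rightarrow\quad \omega_2 = \frac{I_1}{I_2}\,\omega_1 ; \]
halving the moment of inertia by pulling the arms in (\( I_2 = \tfrac{1}{2} I_1 \)) therefore doubles the spin rate.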
The quantity I = mr2 also appears in the angular momentum of a simple pendulum, which is calculated from the velocity v = ω × r of the pendulum mass around the pivot, where ω is the angular velocity of the mass about the pivot point. This angular momentum is given by using a similar derivation the previous equation. Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield This shows that the quantity I = mr2 is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values mr2 for all of the elements of mass in the body. Pendulums used in Mendenhall gravimeter apparatus, from 1897 scientific journal. The portable gravimeter developed in 1890 by Thomas C. Mendenhall provided the most accurate relative measurements of the local gravitational field of the Earth. A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of.:395–396:51–53 The natural frequency () of a compound pendulum depends on its moment of inertia, , where is the mass of the object, is local acceleration of gravity, and is the distance from the pivot point to the centre of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring moment of inertia of a body.:516–517 Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation (), to obtain where is the period (duration) of oscillation (usually averaged over multiple periods). The moment of inertia of the body about its centre of mass, , is then calculated using the parallel axis theorem to be where is the mass of the body and is the distance from the pivot point to the centre of mass . Moment of inertia of a body is often defined in terms of its radius of gyration, which is the radius of a ring of equal mass around the centre of mass of a body that has the same moment of inertia. The radius of gyration is calculated from the body's moment of inertia and mass as the length:1296–1297 Centre of oscillation A simple pendulum that has the same natural frequency as a compound pendulum defines the length from the pivot to a point called the centre of oscillation of the compound pendulum. This point also corresponds to the centre of percussion. The length is determined from the formula, The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side-to-side. This is a period of two seconds, or a natural frequency of π radians/second for the pendulum. In this case, the distance to the center of oscillation, , can be computed to be Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter. Measuring moment of inertia The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. 
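The display equations referred to in the passage above appear to have been dropped during extraction. The standard forms implied by the surrounding text - with \( m \) the mass, \( r \) the distance from the pivot to the centre of mass, \( \omega_n \) the natural frequency, \( T \) the measured period, \( I_P \) the moment of inertia about the pivot and \( I_C \) that about the centre of mass - are:
\[ L = m r^2 \omega = I\omega, \qquad E_K = \tfrac{1}{2} m r^2 \omega^2 = \tfrac{1}{2} I \omega^2, \]
\[ \omega_n = \sqrt{\frac{m g r}{I_P}}, \qquad I_P = \frac{m g r}{\omega_n^2} = \frac{m g r\, T^2}{4\pi^2}, \qquad I_C = I_P - m r^2, \qquad k = \sqrt{\frac{I_C}{m}}, \]
and the centre of oscillation lies at \( \ell = \frac{I_P}{m r} = \frac{g}{\omega_n^2} \approx 0.99\ \mathrm{m} \) for the seconds pendulum (\( \omega_n = \pi\ \mathrm{rad/s} \)).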
A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system. Motion in a fixed plane Four objects with identical masses and radii racing down a plane while rolling without slipping. From back to front: - spherical shell, - solid sphere, - cylindrical ring, and - solid cylinder. The time for each object to reach the finishing line depends on their moment of inertia. (OGV version The moment of inertia about an axis of a body is calculated by summing mr2 for every particle in the body, where r is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes provided that it is understood that this does not fully describe the moment of inertia.) Consider the kinetic energy of an assembly of N masses mi that lie at the distances ri from the pivot point P, which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses,:516–517:1084–1085 :1296–1300 This shows that the moment of inertia of the body is the sum of each of the mr2 terms, that is Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yield different moments of inertia. The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed, and the sum is written as follows: Another expression replaces the summation with an integral, Here, the function ρ gives the mass density at each point (x, y, z), r is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point (x, y, z) in the solid, and the integration is evaluated over the volume V of the body Q. The moment of inertia of a flat surface is similar with the mass density being replaced by its areal mass density with the integral evaluated over its area. Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the z-axis perpendicular to the cross-section, weighted by its density. This is also called the polar moment of the area, and is the sum of the second moments about the x- and y-axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the x-axis or y-axis depending on the load. The moment of inertia of a compound pendulum constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the calculation of the moment of inertia of the thin rod and thin disc about their respective centres of mass. - The moment of inertia of a thin rod with constant cross-section s and density ρ and with length ℓ about a perpendicular axis through its centre of mass is determined by integration.:1301 Align the x-axis with the rod and locate the origin its centre of mass at the centre of the rod, then where m = ρsℓ is the mass of the rod. 
- The moment of inertia of a thin disc of constant thickness s, radius R, and density ρ about an axis through its centre and perpendicular to its face (parallel to its axis of rotational symmetry) is determined by integration.:1301 Align the z-axis with the axis of the disc and define a volume element as dV = sr drdθ, then where m = πR2ρs is its mass. - The moment of inertia of the compound pendulum is now obtained by adding the moment of inertia of the rod and the disc around the pivot point P as, where L is the length of the pendulum. Notice that the parallel axis theorem is used to shift the moment of inertia from the centre of mass to the pivot point of the pendulum. A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly. As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its centre of mass. This is determined by summing the moments of inertia of the thin discs that form the sphere. If the surface of the ball is defined by the equation:1301 then the radius r of the disc at the cross-section z along the z-axis is Therefore, the moment of inertia of the ball is the sum of the moments of inertia of the discs along the z-axis, where m = 4/3πR3ρ is the mass of the sphere. If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis k̂ perpendicular to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the polar moment of inertia. The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles. If a system of n particles, Pi, i = 1, …, n, are assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point R, and absolute velocities vi where ω is the angular velocity of the system and V is the velocity of R. For planar movement the angular velocity vector is directed along the unit vector k which is perpendicular to the plane of movement. Introduce the unit vectors ei from the reference point R to a point ri , and the unit vector t̂i = k̂ × êi so This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane. Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. 
For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement. The angular momentum vector for the planar movement of a rigid system of particles is given by Use the centre of mass C as the reference point so and define the moment of inertia relative to the centre of mass IC as then the equation for angular momentum simplifies to:1028 The moment of inertia IC about an axis perpendicular to the movement of the rigid system and through the centre of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole). For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms. Thus, the angular velocity achieved by a skater with outstretched arms results in a greater angular velocity when the arms are pulled in, because of the reduced moment of inertia. A figure skater is not, however, a rigid body. This 1906 rotary shear uses the moment of inertia of two flywheels to store kinetic energy which when released is used to cut metal stock (International Library of Technology, 1906). The kinetic energy of a rigid system of particles moving in the plane is given by Let the reference point be the centre of mass C of the system so the second term becomes zero, and introduce the moment of inertia IC so the kinetic energy is given by:1084 The moment of inertia IC is the polar moment of inertia of the body. A 1920s John Deere tractor with the spoked flywheel on the engine. The large moment of inertia of the flywheel smooths the operation of the tractor Newton's laws for a rigid system of N particles, Pi, i = 1, … N, can be written in terms of a resultant force and torque at a reference point R, to yield where ri denotes the trajectory of each particle. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration A of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as, For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k̂ perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors êi from the reference point R to a point ri and the unit vectors t̂i = k̂ × êi , so This yields the resultant torque on the system as where êi × êi = 0, and êi × t̂i = k̂ is the unit vector perpendicular to the plane for all of the particles Pi. Use the centre of mass C as the reference point and define the moment of inertia relative to the centre of mass IC, then the equation for the resultant torque simplifies to:1029 Motion in space of a rigid body, and the inertia matrix The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles. 
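The planar-movement results referred to above rely on equations that appear to have been lost in extraction; their standard forms, consistent with the surrounding definitions (\( \Delta r_i \) the distance of particle \( i \) from the centre of mass \( C \), \( M \) the total mass, \( \mathbf{V}_C \) the velocity of \( C \)), are:
\[ \mathbf{L} = \Big(\sum_{i=1}^{n} m_i\, \Delta r_i^2\Big)\, \omega\, \hat{\mathbf{k}} = I_C\, \omega\, \hat{\mathbf{k}}, \qquad I_C = \sum_{i=1}^{n} m_i\, \Delta r_i^2, \]
\[ E_K = \tfrac{1}{2} M\, \mathbf{V}_C \cdot \mathbf{V}_C + \tfrac{1}{2} I_C\, \omega^2, \qquad \boldsymbol{\tau} = I_C\, \alpha\, \hat{\mathbf{k}} . \]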
Let the system of particles Pi, i = 1, …, n be located at the coordinates ri with velocities vi relative to a fixed reference frame. For a (possibly moving) reference point R, the relative positions are and the (absolute) velocities are where ω is the angular velocity of the system, and VR is the velocity of R. Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a, skew-symmetric, matrix, [b], constructed from the components of b = (bx, by, bz): The inertia matrix is constructed by considering the angular momentum, with the reference point R of the body chosen to be the centre of mass C: where the terms containing VR (= C) sum to zero by the definition of centre of mass. Then, the skew-symmetric matrix [Δri] obtained from the relative position vector Δri = ri − C, can be used to define, where IC defined by is the symmetric inertia matrix of the rigid system of particles measured relative to the centre of mass C. The kinetic energy of a rigid system of particles can be formulated in terms of the centre of mass and a matrix of mass moments of inertia of the system. Let the system of particles Pi, i = 1, …,n be located at the coordinates ri with velocities vi, then the kinetic energy is where Δri = ri − C is the position vector of a particle relative to the centre of mass. This equation expands to yield three terms The second term in this equation is zero because C is the centre of mass. Introduce the skew-symmetric matrix [Δri] so the kinetic energy becomes Thus, the kinetic energy of the rigid system of particles is given by where IC is the inertia matrix relative to the centre of mass and M is the total mass. The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. The resultant torque on this system is, where ai is the acceleration of the particle Pi. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration Ar of the reference point, as well as the angular velocity vector ω and angular acceleration vector α of the rigid system as, Use the centre of mass C as the reference point, and introduce the skew-symmetric matrix [Δri] = [ri − C] to represent the cross product (ri − C) ×, to obtain The calculation uses the identity obtained from the Jacobi identity for the triple cross product as shown in the proof below: Then, the following Jacobi identity is used on the last term: The result of applying Jacobi identity can then be continued as follows: The final result can then be substituted to the main proof as follows: Notice that for any vector , the following holds: Finally, the result is used to complete the main proof as follows: Thus, the resultant torque on the rigid system of particles is given by where IC is the inertia matrix relative to the centre of mass. Parallel axis theorem The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the centre of mass C and the inertia matrix relative to another point R. This relationship is called the parallel axis theorem. Consider the inertia matrix IR obtained for a rigid system of particles measured relative to a reference point R, given by Let C be the centre of mass of the rigid system, then where d is the vector from the centre of mass C to the reference point R. 
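A reconstruction of the matrix relations this passage relies on, in their standard forms (\( [\mathbf{b}] \) is the skew-symmetric matrix such that \( [\mathbf{b}]\mathbf{y} = \mathbf{b} \times \mathbf{y} \), and \( \Delta\mathbf{r}_i = \mathbf{r}_i - \mathbf{C} \)):
\[ [\mathbf{b}] = \begin{bmatrix} 0 & -b_z & b_y \\ b_z & 0 & -b_x \\ -b_y & b_x & 0 \end{bmatrix}, \qquad \mathbf{I}_C = -\sum_{i=1}^{n} m_i\, [\Delta\mathbf{r}_i]^2, \]
\[ \mathbf{L} = \mathbf{I}_C\, \boldsymbol{\omega}, \qquad E_K = \tfrac{1}{2}\, \boldsymbol{\omega} \cdot \mathbf{I}_C\, \boldsymbol{\omega} + \tfrac{1}{2} M\, \mathbf{V}_C \cdot \mathbf{V}_C, \qquad \boldsymbol{\tau} = \mathbf{I}_C\, \boldsymbol{\alpha} + \boldsymbol{\omega} \times \mathbf{I}_C\, \boldsymbol{\omega} , \]
and, for the parallel axis theorem set-up that follows, \( \mathbf{R} = \mathbf{C} + \mathbf{d} \), so that \( \mathbf{r}_i - \mathbf{R} = \Delta\mathbf{r}_i - \mathbf{d} \).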
Use this equation to compute the inertia matrix, Distribute over the cross product to obtain The first term is the inertia matrix IC relative to the centre of mass. The second and third terms are zero by definition of the centre of mass C. And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix [d] constructed from d. The result is the parallel axis theorem, where d is the vector from the centre of mass C to the reference point R. Note on the minus sign: By using the skew symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form −m[r]2, which is similar to the mr2 that appears in planar movement. However, to make this to work out correctly a minus sign is needed. This minus sign can be absorbed into the term m[r]T[r], if desired, by using the skew-symmetry property of [r]. Scalar moment of inertia in a plane The scalar moment of inertia, IL, of a body about a specified axis whose direction is specified by the unit vector k̂ and passes through the body at a point R is as follows: where IR is the moment of inertia matrix of the system relative to the reference point R, and [Δri] is the skew symmetric matrix obtained from the vector Δri = ri − R. This is derived as follows. Let a rigid assembly of N particles, Pi, i = 1, …, N, have coordinates ri. Choose R as a reference point and compute the moment of inertia around a line L defined by the unit vector k̂ through the reference point R, L(t) = R + tk̂. The perpendicular vector from this line to the particle Pi is obtained from Δri by removing the component that projects onto k̂. where E is the identity matrix, so as to avoid confusion with the inertia matrix, and k̂ k̂T is the outer product matrix formed from the unit vector k̂ along the line L. To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix [k̂] such that [k̂]y = k̂ × y, then we have the identity noting that k̂ is a unit vector. The magnitude squared of the perpendicular vector is The simplification of this equation uses the triple scalar product identity where the dot and the cross products have been interchanged. Exchanging products, and simplifying by noting that Δri and k̂ are orthogonal: Thus, the moment of inertia around the line L through R in the direction k̂ is obtained from the calculation where IR is the moment of inertia matrix of the system relative to the reference point R. This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body. The inertia matrix is often described as the inertia tensor, which consists of the same moments of inertia and products of inertia about the three coordinate axes. The inertia tensor is constructed from the nine component tensors, (the symbol is the tensor product) where ei, i = 1, 2, 3 are the three orthogonal unit vectors defining the inertial frame in which the body moves. Using this basis the inertia tensor is given by This tensor is of degree two because the component tensors are each constructed from two basis vectors. In this form the inertia tensor is also called the inertia binor. 
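The two key results of the passage above, with their equations restored in standard form (\( \mathbf{d} \) the vector from the centre of mass \( C \) to the reference point \( R \), \( \hat{\mathbf{k}} \) the unit vector along the chosen axis):
\[ \mathbf{I}_R = \mathbf{I}_C - M\, [\mathbf{d}]^2 \qquad \text{(parallel axis theorem)}, \]
\[ I_L = \hat{\mathbf{k}} \cdot \mathbf{I}_R\, \hat{\mathbf{k}} = \sum_{i=1}^{N} m_i\, \lvert \hat{\mathbf{k}} \times \Delta\mathbf{r}_i \rvert^2 \qquad \text{(scalar moment of inertia about the line } L \text{ through } R\text{)} . \]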
For a rigid system of particles Pk, k = 1, …, N each of mass mk with position coordinates rk = (xk, yk, zk), the inertia tensor is given by where E is the identity tensor In this case, the components of the inertia tensor are given by The inertia tensor for a continuous body is given by where r defines the coordinates of a point in the body and ρ(r) is the mass density at that point. The integral is taken over the volume V of the body. The inertia tensor is symmetric because Iij = Iji. Alternatively it can also be written in terms of the angular momentum operator : The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction n, where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as I12 is obtained by the computation and can be interpreted as the moment of inertia around the x-axis when the object rotates around the y-axis. The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by, It is common in rigid body mechanics to use notation that explicitly identifies the x, y, and z axes, such as Ixx and Ixy, for the components of the inertia tensor. Inertia matrix in different reference frames The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant. Let the body frame inertia matrix relative to the centre of mass be denoted IB C, and define the orientation of the body frame relative to the inertial frame by the rotation matrix A, such that, where vectors y in the body fixed coordinate frame have coordinates x in the inertial frame. Then, the inertia matrix of the body measured in the inertial frame is given by Notice that A changes as the body moves, while IB C remains constant. Measured in the body frame the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has the eigendecomposition into the product of a rotation matrix Q and a diagonal matrix Λ, given by The columns of the rotation matrix Q define the directions of the principal axes of the body, and the constants I1, I2, and I3 are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. For bodies with constant density an axis of rotational symmetry is a principal axis. Inertia of an ellipsoid An ellipsoid with the semi-principal diameters labelled a, b, and c. The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let Λ be the inertia matrix relative to the centre of mass aligned with the principal axes, then the surface defines an ellipsoid in the body frame. Write this equation in the form, to see that the semi-principal diameters of this ellipsoid are given by Let a point x on this ellipsoid be defined in terms of its magnitude and direction, x = |x|n, where n is a unit vector. Then the relationship presented above, between the inertia matrix and the scalar moment of inertia In around an axis in the direction n, yields Thus, the magnitude of a point x in the direction n on the inertia ellipsoid is - ^ a b Mach, Ernst (1919). 
The Science of Mechanics. pp. 173–187. Retrieved November 21, 2014. - ^ Euler, Leonhard (1765). Theoria motus corporum solidorum seu rigidorum: Ex primis nostrae cognitionis principiis stabilita et ad omnes motus, qui in huiusmodi corpora cadere possunt, accommodata [The theory of motion of solid or rigid bodies: established from first principles of our knowledge and appropriate for all motions which can occur in such bodies] (in Latin). Rostock and Greifswald (Germany): A. F. Röse. p. 166. ISBN 978-1-4297-4281-8. From page 166: "Definitio 7. 422. Momentum inertiae corporis respectu eujuspiam axis est summa omnium productorum, quae oriuntur, si singula corporis elementa per quadrata distantiarum suarum ab axe multiplicentur." (Definition 7. 422. A body's moment of inertia with respect to any axis is the sum of all of the products, which arise, if the individual elements of the body are multiplied by the square of their distances from the axis.) - ^ a b c d e f Marion, JB; Thornton, ST (1995). Classical dynamics of particles & systems (4th ed.). Thomson. ISBN 0-03-097302-3. - ^ a b Symon, KR (1971). Mechanics (3rd ed.). Addison-Wesley. ISBN 0-201-07392-7. - ^ a b Tenenbaum, RA (2004). Fundamentals of Applied Dynamics. Springer. ISBN 0-387-00887-X. - ^ a b c d e f g h i Kane, T. R.; Levinson, D. A. (1985). Dynamics, Theory and Applications. New York: McGraw-Hill. - ^ a b Winn, Will (2010). Introduction to Understandable Physics: Volume I - Mechanics. AuthorHouse. p. 10.10. ISBN 1449063330. - ^ a b Fullerton, Dan (2011). Honors Physics Essentials. Silly Beagle Productions. pp. 142–143. ISBN 0983563330. - ^ Wolfram, Stephen (2014). "Spinning Ice Skater". Wolfram Demonstrations Project. Mathematica, Inc. Retrieved September 30, 2014. - ^ Hokin, Samuel (2014). "Figure Skating Spins". The Physics of Everyday Stuff. Retrieved September 30, 2014. - ^ Breithaupt, Jim (2000). New Understanding Physics for Advanced Level. Nelson Thomas. p. 64. ISBN 0748743146. - ^ Crowell, Benjamin (2003). Conservation Laws. Light and Matter. p. 107. ISBN 0970467028. - ^ Tipler, Paul A. (1999). Physics for Scientists and Engineers, Vol. 1: Mechanics, Oscillations and Waves, Thermodynamics. Macmillan. p. 304. ISBN 1572594918. - ^ a b c d e Paul, Burton (June 1979). Kinematics and Dynamics of Planar Machinery. Prentice Hall. ISBN 978-0135160626. - ^ Halliday, David; Resnick, Robert; Walker, Jearl (2005). Fundamentals of physics (7th ed.). Hoboken, NJ: Wiley. ISBN 9780471216438. - ^ French, A.P. (1971). Vibrations and waves. Boca Raton, FL: CRC Press. ISBN 9780748744473. - ^ a b c d e f Uicker, John J.; Pennock, Gordon R.; Shigley, Joseph E. (2010). Theory of Machines and Mechanisms (4th ed.). Oxford University Press. ISBN 978-0195371239. - ^ a b c d e f g h i j Ferdinand P. Beer; E. Russell Johnston; Jr., Phillip J. Cornwell (2010). Vector mechanics for engineers: Dynamics (9th ed.). Boston: McGraw-Hill. ISBN 978-0077295493. - ^ H. Williams, Measuring the inertia tensor, presented at the IMA Mathematics 2007 Conference. - ^ Gracey, William, The experimental determination of the moments of inertia of airplanes by a simplified compound-pendulum method, NACA Technical Note No. 1629, 1948 - ^ In that situation this moment of inertia only describes how a torque applied along that axis causes a rotation about that axis. But, torques not aligned along a principal axis will also cause rotations about other axes. - ^ Walter D. Pilkey, Analysis and Design of Elastic Beams: Computational Methods, John Wiley, 2002. 
- ^ a b c Goldstein, H. (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 0-201-02918-9. - ^ L. D. Landau and E. M. Lifshitz, Mechanics, Vol 1. 2nd Ed., Pergamon Press, 1969. - ^ L. W. Tsai, Robot Analysis: The mechanics of serial and parallel manipulators, John-Wiley, NY, 1999. - ^ Sylvester, J J (1852). "A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by real orthogonal substitutions to the form of a sum of positive and negative squares" (PDF). Philosophical Magazine. 4th Series. 4 (23): 138–142. doi:10.1080/14786445208647087. Retrieved June 27, 2008. - ^ Norman, C.W. (1986). Undergraduate algebra. Oxford University Press. pp. 360–361. ISBN 0-19-853248-2. - ^ Mason, Matthew T. (2001). Mechanics of Robotics Manipulation. MIT Press. ISBN 978-0-262-13396-8. Retrieved November 21, 2014.
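As a quick numerical illustration of the inertia-tensor definitions above (this sketch is not part of the original article; the masses, positions and axis are invented values), the construction, the scalar moment of inertia about an axis n, and the principal moments from the eigendecomposition can be written in a few lines of Python with NumPy:

import numpy as np

def inertia_tensor(masses, positions):
    # I = sum_k m_k ( |r_k|^2 E - r_k r_k^T ), taken about the origin
    I = np.zeros((3, 3))
    for m, r in zip(masses, np.asarray(positions, dtype=float)):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

masses = [1.0, 2.0, 1.5]
positions = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0], [0.5, -0.5, 2.0]]
I = inertia_tensor(masses, positions)

# Scalar moment of inertia about an axis in the direction n: I_n = n . I . n
n = np.array([0.0, 0.0, 1.0])
I_n = n @ I @ n

# Principal moments and principal axes, mirroring I = Q Lambda Q^T
principal_moments, principal_axes = np.linalg.eigh(I)
print(I, I_n, principal_moments)

Because the matrix is real and symmetric, numpy.linalg.eigh returns real principal moments and an orthonormal set of principal axes (as columns), exactly as the decomposition described above requires.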
Project management uses what-if scenarios to predict the outcomes of different situations, both positive and negative, and project managers can prepare contingency plans to deal with unexpected circumstances. What Are Examples Of What-If Scenarios? Suppose I charged more for each loaf of bread and asked: what would happen to my revenue? If the analysis assumes that the volume of bread sold does not depend on its price, the answer is straightforward: if the price per loaf increases by X%, then revenue increases by X%. What Is A Scenario In A Project? A project scenario describes, in your own words, how the project might look in practice. Writing one helps you anticipate problems and work through them with confidence, and it lets you show others what the project might look like once implemented. What Does What-If Scenario Mean? What-if scenarios are used in scenario planning to consider the effects of variability in key factors on a plan, project, or timeline. What Is The Use Of What-If Scenarios? Scenarios can be used to consider many different variables. In Excel, a scenario is a set of values that Excel saves and can substitute automatically into cells on a worksheet; you can then switch between saved scenarios to see the different results you have created. What Is A What-If Scenario In The PMP? The PMBOK Guide states, "What-if scenario analysis is the process of evaluating scenarios in order to predict their effects, positive or negative, on a project's objectives." A scenario describes one of several possible events that could affect your project's objectives. What Do You Mean By What-If Scenarios? Give An Example. A what-if scenario is an informal speculation about how a given situation might be handled. An interviewer might ask a prospective employee how he would handle a particular situation, for example. How Do You Use What-If Scenarios? Select the cells whose values you want to vary, open Tools > Scenarios, enter a name for the new scenario in the Create Scenario dialog box, add any notes to the Comment box, and select or deselect options in the Settings section. How Can We Create Scenarios? The What-If Analysis button can be found on the Ribbon's Data tab. Click it to open the Scenario Manager, click the Add button, and name the scenario. Then specify the changing cells, for example cell B1 on the worksheet, and Ctrl-click to add cells B3:B4, pressing Tab to move between fields. How Do You Write A Project Scenario? Your instructor must approve your project proposal before it can be submitted; the proposal explains the scenario's goals. Your scenario is a vision of what your project will look like when it is implemented, and project teams organise their scenarios in different ways as they begin to write. What Is Scenario Planning In Project Management? In essence, scenario planning is the process of identifying and analysing potential future scenarios. You can think of it as preparing multiple contingency plans for the same event; project managers can use it to prepare for unexpected budget cuts or for the loss of a key team member. What Is A Scenario Planning Example? Farmers, for instance, use scenarios to predict whether the harvest will be good or bad, depending on the weather.
Scenarios make farmers' sales forecasts and investment plans easier to draw up. What Is A Project Scenario Analysis? The PMBOK Guide states, "What-if scenario analysis is the process of evaluating scenarios in order to predict their effects, positive or negative, on a project's objectives." In this analysis we examine the question, "What if the situation represented by scenario X happens?" What Is A What-If Scenario? A what-if scenario is an informal speculation about how a given situation might be handled. The more such questions are asked, answered, and reviewed throughout the project lifecycle, the better informed the project manager is and the more predictable the project becomes. What Does What-If Analysis Mean? What-if analysis is a technique for determining how projected performance is affected by changes in the assumptions on which the projections are based. Its purpose is to compare different scenarios and their potential outcomes as the situation changes. What Are The Advantages Of What-If Analysis? Excel's what-if analysis lets you see how changing one or more input values affects the outcome of a formula. For example, a user can determine how much cash would be available in a good scenario and how much in a bad scenario.
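As a rough illustration of the kind of what-if analysis described above (this sketch is not from the PMBOK Guide or from Excel; the model, prices, and volumes are invented), the bread-pricing question can be written as a tiny Python scenario table:

# Each scenario is a set of assumptions substituted into the same revenue model.
def revenue(price_per_loaf, loaves_sold):
    return price_per_loaf * loaves_sold

baseline = {"price_per_loaf": 2.00, "loaves_sold": 500}
scenarios = {
    "baseline":       dict(baseline),
    "price_up_10pct": {**baseline, "price_per_loaf": baseline["price_per_loaf"] * 1.10},
    "demand_down":    {**baseline, "loaves_sold": 400},
}

for name, assumptions in scenarios.items():
    print(f"{name:15s} revenue = {revenue(**assumptions):8.2f}")

Each scenario substitutes a different set of assumptions into the same model, which is essentially what Excel's Scenario Manager automates for worksheet cells.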
The Link Between DNA and Protein Production: Exploring the Role of Quizlet Subsection 1: DNA and Protein Production DNA (Deoxyribonucleic Acid) is a double-stranded molecule that contains genetic information which is necessary for the development and functioning of all living organisms. Proteins, on the other hand, are the building blocks of life and are involved in various biological processes, such as cell structure and function, metabolism, and the immune system. DNA is essential for the production of proteins as it contains the instructions that cells need to make them. The genetic code is the set of instructions encoded within the DNA molecules, which tells the cells how to manufacture proteins. This genetic code is made up of four nucleotide bases – adenine (A), guanine (G), cytosine (C), and thymine (T). Each nucleotide base is arranged in a specific sequence, which forms the genetic code. The genetic code is read in sets of three nucleotides, known as codons. Each codon codes for a specific amino acid, which makes up a protein. The process of protein production starts in the nucleus of a eukaryotic cell, where the DNA is stored. DNA is first transcribed into a complementary messenger RNA (mRNA) molecule. The mRNA molecule then leaves the nucleus and enters the cytoplasm, where it is translated by ribosomes into a protein. During the process of transcription, the DNA double helix is unwound, and one of the strands of DNA is copied into a single-stranded RNA molecule. This RNA molecule is complementary to the DNA strand and is used as a template for translation in the cytoplasm. The mRNA molecule is then transported out of the nucleus, where ribosomes bind to it and begin its translation. During the process of translation, the ribosome reads the mRNA molecule in a sequence of three nucleotides at a time. Each set of three nucleotides codes for a particular amino acid, which is added to the growing protein chain. This process continues until the ribosome reaches a stop codon, which signals the end of protein synthesis. In summary, DNA is linked to the production of proteins through the genetic code, which contains the instructions for making proteins. This information is transcribed from DNA to mRNA and translated into a protein chain by ribosomes. The DNA sequence determines the sequence of amino acids in the protein, which, in turn, determines the protein’s shape and function. Without the information stored within DNA, it would be impossible to produce the complex and diverse array of proteins necessary for life. What is DNA? Deoxyribonucleic acid, commonly known as DNA, is a complex molecule that carries genetic instructions for the development, function, and reproduction of all living things. It is located in the nucleus of cells and is comprised of long chains of nucleotides. These nucleotides consist of a sugar molecule, a phosphate group, and a nitrogenous base. The structure of DNA is a double helix, which means that it consists of two strands that are twisted around each other. The nucleotides in each strand are held together by hydrogen bonds between the nitrogenous bases. The nitrogenous bases that make up DNA are adenine, thymine, cytosine, and guanine. These bases pair up in a specific way: adenine with thymine and guanine with cytosine. The sequence of these nitrogenous bases in DNA is what determines the genetic code. The genetic code is the set of instructions that tells a cell how to make a specific protein. 
Proteins are the building blocks of life and are necessary for the growth, development, and functioning of all living things. How is DNA linked to the production of proteins? The production of proteins is a complex process that involves several steps, including transcription and translation. In transcription, the DNA sequence of a gene is copied into a molecule of messenger RNA (mRNA). The mRNA carries this genetic information from the nucleus to the cytoplasm, where it is used as a template for protein synthesis. During translation, the mRNA sequence is read by a ribosome, which is a complex molecular machine made up of RNA and proteins. The ribosome uses the information in the mRNA to assemble a chain of amino acids in a specific sequence. This chain of amino acids then folds into a three-dimensional structure, which determines the function of the protein. The sequence of nucleotides in DNA determines the sequence of bases in the mRNA, which in turn determines the sequence of amino acids in the protein. Because of this, the genetic code is said to be universal, meaning that the same triplet code is used by all living things to encode the same amino acids. Errors in the genetic code can lead to mutations, which can have serious consequences for an organism. Some mutations can result in non-functional proteins, while others can lead to the production of abnormal proteins that can interfere with cellular function. However, some mutations can also be beneficial, leading to new traits or adaptations that can help an organism survive and thrive in its environment. In conclusion, DNA is the basis of genetic information and is intricately linked to the production of proteins. Understanding the structure and function of DNA is fundamental to understanding the processes of life and can have far-reaching implications for fields such as medicine and biotechnology. What are Proteins and What is their Importance? Proteins are large, complex molecules made up of smaller units called amino acids. They play an essential role in living organisms, serving functions such as catalyzing chemical reactions, transporting molecules across cell membranes, and providing structural support for cells and tissues. There are many different types of proteins, each with a unique structure and function. Some proteins are enzymes, meaning they catalyze biochemical reactions in cells. Other proteins are involved in the transport of molecules across cell membranes, such as hemoglobin, the protein responsible for carrying oxygen in red blood cells. Still, other proteins serve as hormones in the body, regulating various physiological processes such as metabolism and growth. The importance of proteins to living organisms cannot be overstated. Without proteins, life as we know it would not exist. Proteins are involved in nearly every aspect of cellular function and are essential for maintaining the health and vitality of all living organisms, from bacteria to plants to animals. How is DNA Linked to the Production of Proteins? Proteins are made in cells through a process called protein synthesis. This process involves two primary steps: transcription and translation. Transcription is the process by which the information encoded in DNA is copied onto another molecule called messenger RNA (mRNA). This process occurs in the nucleus of the cell, where the DNA is located. The DNA molecule serves as a template, allowing the enzyme RNA polymerase to synthesize a complementary strand of mRNA. 
Once the mRNA molecule is synthesized, it moves out of the nucleus and into the cytoplasm of the cell, where it serves as a blueprint for the second step of protein synthesis, translation. Translation is the process by which the mRNA sequence is used to synthesize a specific protein. This process takes place on ribosomes, which are large complexes made up of RNA and protein molecules. The ribosome reads the mRNA sequence and uses it to assemble a sequence of amino acids, thereby creating a protein molecule. The sequence of amino acids in a protein is determined by the sequence of nucleotides in the DNA molecule. Each three-nucleotide sequence, or codon, in the DNA molecule codes for a specific amino acid in the protein. Thus, the sequence of codons in DNA determines the sequence of amino acids in the protein that is ultimately produced. Overall, the link between DNA and the production of proteins is fundamental to the functioning of all living organisms. Without this connection, cells would be unable to produce the specific proteins required for their survival and function. Gene expression is the process by which a gene’s DNA sequence is transformed into a functional protein. Gene expression is a complex process that involves several stages, including transcription and translation. In this article, we will explore how DNA is linked to the production of proteins through these key stages and the role of various molecules involved in this process. Transcription is the first stage of gene expression, where the DNA sequence on the gene is copied into an mRNA (messenger RNA) molecule in the cell nucleus. This process is facilitated by an enzyme called RNA polymerase, which binds to a specific region on the DNA molecule called the promoter region. RNA polymerase then unwinds the DNA double helix and moves along the gene, forming a complementary strand of RNA nucleotides that match the base sequence of the DNA. Once the RNA polymerase reaches the end of the gene, it stops and releases the newly formed mRNA strand. It is essential to note that before the mRNA molecule exits the nucleus into the cytoplasm, where protein synthesis occurs, it undergoes an additional step called RNA processing. During this process, any non-coding regions (introns) are removed, and the remaining coding segments (exons) are spliced together to form a mature mRNA strand. Translation is the second stage of gene expression, where the mRNA formed during transcription is translated into a protein. This process occurs on ribosomes, which are small organelles located in the cytoplasm of cells. During translation, the ribosome reads the mRNA sequence in groups of three nucleotides called codons. Each codon corresponds to a specific amino acid, which is the building block of proteins. As the ribosome reads each codon, it attracts the appropriate amino acid to the growing protein chain, forming a long chain of amino acids until it reaches a stop codon, signaling the end of the protein. The sequence of amino acids in the protein is determined by the sequence of codons in the mRNA molecule, which in turn, is dictated by the base sequence of the DNA gene. Therefore, the specific genes that are transcribed and the resulting mRNA transcripts directly regulate the type and number of proteins that a cell produces. Regulation of Gene Expression While transcription and translation are the primary processes by which gene expression occurs, they are not the only factors that influence the production of proteins. 
Several additional stages and regulatory mechanisms influence gene expression, including epigenetic modifications, post-transcriptional modifications, and post-translational modifications. Epigenetics refers to the modifications of the DNA structure that affect gene expression without altering the DNA sequence. These modifications, such as methylation or histone modification, can activate or deactivate genes, depending on the location and extent of the change. Post-transcriptional modifications refer to changes made to the mRNA molecule after it has been synthesized, including splicing and alternative splicing, adding a cap and tail, or degradation of the molecule. Post-translational modifications refer to changes made to the protein after it has been synthesized, including folding, phosphorylation, or adding a signal sequence. All of these regulatory mechanisms make gene expression a highly complex process that is under precise control. The dysregulation of gene expression can lead to many diseases, including cancer, genetic disorders, and other conditions. In conclusion, gene expression is a complex process that involves several stages and regulatory mechanisms. Transcription and translation are the primary processes by which gene expression occurs, where the DNA sequence on the gene is copied into an mRNA molecule and then translated into a protein. However, gene expression is not a linear process and is influenced by multiple regulatory processes, including epigenetic modifications, post-transcriptional modifications, and post-translational modifications. Understanding the mechanisms of gene expression is fundamental to understanding cellular processes and the development of new treatments for diseases. Transcription is a critical process that helps in the production of proteins. This process involves copying DNA�s genetic code onto an RNA molecule, which is then used as an intermediate molecule to create proteins. The process of transcription is controlled by various factors and is divided into three main stages: initiation, elongation, and termination. The first stage of transcription is initiation, which involves identifying the location of the starting point for RNA synthesis. This starting point is usually a specific DNA sequence called a promoter, which is recognized by RNA polymerase. Once RNA polymerase has located the promoter, it binds to the DNA and starts to unwind the double helix structure, creating a short stretch of single-stranded DNA that will serve as the template for RNA synthesis. The second stage of transcription is elongation, during which RNA polymerase synthesizes RNA by adding nucleotides to the growing RNA chain. The nucleotides are added in a precise order that is dictated by the DNA template. As the nucleotides are added, they form hydrogen bonds with complementary nucleotides on the DNA strand, creating a complementary RNA strand. The last stage of transcription is termination, which occurs when RNA polymerase reaches a specific DNA sequence called a terminator. This sequence signals the end of RNA synthesis and causes RNA polymerase to detach from the DNA template, releasing the newly synthesized RNA molecule into the cytoplasm of the cell. Transcription is a highly regulated process that requires the coordinated function of many different proteins and regulatory molecules. These molecules ensure that transcription occurs only when it is required and that the correct genes are transcribed at the appropriate time in development. 
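To make the transcription step concrete, here is a minimal, hypothetical Python sketch of base pairing between a DNA template strand and its mRNA copy. The sequence is invented and the function is purely illustrative; real transcription also involves promoters, RNA polymerase binding, and RNA processing, as described above.

# Transcribe a DNA template strand into mRNA by complementary base pairing
# (A-U, T-A, G-C, C-G); in a cell the template is read 3'->5' so the mRNA grows 5'->3'.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand):
    return "".join(COMPLEMENT[base] for base in template_strand)

template = "TACCACGTGGACTGA"   # hypothetical template strand
mrna = transcribe(template)
print(mrna)                    # AUGGUGCACCUGACU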
The process of transcription plays a critical role in the production of proteins, which are the building blocks of all living organisms. Proteins are responsible for carrying out a wide range of functions within the cell, including catalyzing chemical reactions, transporting molecules across cellular membranes, and providing structural support to cells and tissues. The transcription and translation of genetic information into proteins is ultimately responsible for the diversity of life on Earth. The ability of cells to precisely control the transcription and translation of genetic information is what enables organisms to adapt to changing environments and to evolve over time. Translation is the key process that links DNA to the production of proteins. It is the process of converting the RNA sequence into a specific amino acid sequence, which forms a protein. Basically, it is the second and final stage of protein synthesis and occurs after transcription. The code in the DNA is first transcribed into messenger RNA (mRNA). The mRNA then travels from the nucleus to the ribosome, the site of protein synthesis in the cell. It is the job of the ribosome to read the code of the mRNA and translate it into a protein. The ribosome processes the mRNA sequence in sets of three letters or nucleotides, known as codons. Each codon corresponds to a specific amino acid that the ribosome will add to the growing protein chain. There are 20 different amino acids that the ribosome can choose from to add to the protein chain. There are three types of codons, which include start codons, stop codons, and amino acid codons. The start codon (AUG) signals the beginning of the protein sequence, while the stop codon signals the end of protein synthesis. There are three stop codons, which include UAA, UAG, and UGA. The process of protein synthesis is highly regulated by the cell and requires a great deal of energy. During translation, several other factors participate in the process, including transfer RNA (tRNA), ribosomal RNA (rRNA), and initiation factors. These factors help to ensure that the proper amino acids are added to the growing protein chain at the correct time and in the right order. Translation is an essential process in both prokaryotic and eukaryotic cells. However, the process differs slightly between the two types of cells. In prokaryotes, translation can begin before transcription has even finished, meaning that there is no separation between transcription and translation. In contrast, eukaryotes have a more complex system due to the separation of the nucleus from the cytoplasm, which requires more steps in protein synthesis. In conclusion, translation is the process that links DNA and protein synthesis, where RNA is used to convert the genetic code into a specific amino acid sequence. It is a highly complex and regulated process that requires the participation of multiple factors to ensure that the protein is synthesized accurately and efficiently. The Role of mRNA, tRNA, and Ribosomes in Protein Production Proteins are essential for the structure, function, and regulation of cells. The process of protein synthesis, or protein production, is fundamental for living organisms. It involves the translation of genetic information from DNA into usable proteins. This process is achieved through the cooperation of mRNA, tRNA, and ribosomes. What is mRNA? Messenger RNA (mRNA) is a type of RNA molecule that carries genetic information from the DNA in the nucleus of a cell to the ribosomes in the cytoplasm. 
mRNA serves as a template for protein synthesis. It is created during transcription when RNA polymerase binds to a gene’s DNA and makes an mRNA copy of the original DNA code. What is tRNA? Transfer RNA (tRNA) is a type of RNA molecule that carries amino acids to the ribosome during protein synthesis. tRNA has a specific sequence of three bases called the anticodon, which matches a complementary sequence within the mRNA. The correct amino acid is attached to the tRNA molecule based on its anticodon sequence. What are Ribosomes? Ribosomes are complex structures composed of RNA and proteins that serve as the site for protein synthesis. Ribosomes bind to mRNA and read the code provided by the nucleotides. They then use the information to assemble a chain of amino acids, using tRNA molecules to carry each amino acid to the ribosome in the correct sequence. The Process of Protein Synthesis The process of protein synthesis begins with transcription, where the DNA code is transcribed into a strand of mRNA. The mRNA then leaves the nucleus and binds to a ribosome in the cytoplasm. The ribosome reads the mRNA code and recruits the appropriate tRNA molecule with a matching anticodon sequence. The ribosome links the amino acids from the tRNA molecules together to form a polypeptide chain. This chain will fold to form the final protein structure. The Role of Codons The mRNA code is written in sets of three nucleotides called codons. Each codon specifies a specific amino acid. For example, the mRNA codon “AUG” always codes for the amino acid methionine. There are 64 possible codons, but only 20 amino acids, so some amino acids are specified by multiple codons. The Central Dogma of Molecular Biology The processes of transcription and translation are central dogmas of molecular biology. In the central dogma, DNA is transcribed into RNA, which is then translated into protein. This process is essential for the fundamental operation of cells, as proteins are the molecular machines that carry out most of the work in a cell. The process of protein synthesis is complex but essential for living organisms. mRNA, tRNA, and ribosomes are the key players in protein production. mRNA carries genetic information, tRNA delivers amino acids, and ribosomes serve as the site for protein synthesis. Together, these molecules form a critical system for the production of proteins that are essential for life. DNA and Protein Production: A Complex Relationship DNA and proteins are two of the most fundamental components of life. DNA — the genetic material found in every living cell — contains instructions for building all the proteins needed to sustain life. Proteins, in turn, are the building blocks of cells, tissues, and organs. Understanding the relationship between DNA and protein production is therefore essential not only to our knowledge of basic biology but also to the diagnosis and treatment of countless diseases. The process of protein production is a complex one, involving a number of different steps. It begins with the transcription of DNA — the process by which the sequence of nucleotides in a given gene is ‘read’ by an enzyme called RNA polymerase and used to produce a molecule of RNA that is complementary to the original DNA sequence. This RNA molecule then undergoes a process called translation, in which it serves as a template for the assembly of a specific protein molecule. It’s worth noting that not all the genes in an organism’s DNA are used to produce proteins. 
In fact, most of the DNA in any given cell is ‘noncoding,’ meaning that it doesn’t contain the instructions for building proteins. Nevertheless, even the noncoding DNA plays an important role in regulating gene expression and determining the type, quantity, and timing of protein production. The Central Dogma of Molecular Biology The relationship between DNA and protein production is often described using the ‘central dogma’ of molecular biology. According to this model, genetic information flows from DNA to RNA to protein, with very little feedback from protein back to DNA. In other words, DNA serves as the ‘master template’ for all protein production, with RNA acting as an intermediary to ensure that the correct sequence of amino acids is used to build each protein molecule. While the central dogma is generally considered to be an accurate description of the relationship between DNA and protein production, it’s important to note that there are many exceptions and variations to this model. For example, some viruses use RNA as their genetic material instead of DNA, and there are many cases where RNA molecules themselves can act as enzymes, performing functions typically associated with proteins. Nevertheless, the central dogma remains a useful framework for understanding the fundamental processes of molecular biology. The Role of Genetic Mutations in Protein Production Genetic mutations — changes in the sequence of nucleotides in DNA — can have profound effects on protein production and overall cellular function. Some mutations are benign or even beneficial, leading to the production of new proteins with improved or altered functions. Other mutations, however, can disrupt normal protein production and lead to the development of disease. One of the most well-known examples of genetic mutations affecting protein production is sickle cell anemia, a disease caused by a single nucleotide substitution in the DNA sequence for hemoglobin, the protein that carries oxygen in red blood cells. The mutation causes the hemoglobin molecules to clump together and form stiff, sickle-shaped cells that can block small blood vessels and cause pain, fatigue, and other symptoms. Understanding the relationship between DNA and protein production is therefore crucial to the study of genetics and the diagnosis and treatment of genetic diseases. By identifying mutations in DNA that affect protein production, researchers can develop new therapies and medications that target the underlying molecular mechanisms of disease. The Future of Protein Production Research The study of DNA and protein production is an active and rapidly evolving field, with new discoveries and breakthroughs being made all the time. One area of particular interest is the development of new technologies for synthesizing and manipulating DNA and RNA molecules, which could potentially unlock new approaches to protein production and lead to advances in biotechnology, medicine, and other fields. Another exciting area of research is the study of noncoding DNA and its role in gene expression and regulation. Despite being largely ignored for many years, noncoding DNA is now recognized as a critical element of the genome and an important factor in many diseases, including cancer. Researchers are developing new tools and techniques for studying noncoding DNA, and the resulting insights could have important implications for our understanding of basic biology and the development of new therapies. 
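The effect of a single-nucleotide change of the kind described above can be sketched with a toy translation function. The codon table below is deliberately partial and the sequence is invented; it only mimics a sickle-cell-style GAG to GUG substitution, in which one codon switches from glutamic acid to valine.

# Translate an mRNA coding sequence codon by codon, then show how a single
# nucleotide substitution changes exactly one amino acid in the protein.
CODON_TABLE = {
    "AUG": "Met", "GUG": "Val", "CAC": "His", "CUG": "Leu",
    "ACU": "Thr", "CCU": "Pro", "GAG": "Glu", "UAA": "Stop",
}  # partial table, just enough for this example

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

normal = "AUGGUGCACCUGACUCCUGAGGAGUAA"
mutant = normal[:19] + "U" + normal[20:]   # A -> U in one codon: GAG -> GUG
print(translate(normal))   # ... Glu, Glu
print(translate(mutant))   # ... Val, Glu (a single amino acid change)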
The Importance of Understanding DNA and Protein Production Understanding the relationship between DNA and protein production is essential to our understanding of biology, genetics, and medicine. It provides the foundation for everything from basic research on the mechanisms of disease to the development of new therapies and treatments for genetic disorders. By continuing to study and explore this complex and fascinating relationship, scientists and researchers can unlock a wealth of knowledge and potential for improving human health and well-being. The relationship between DNA and protein production is one of the most fundamental and important areas of study in the fields of biology and genetics. By understanding how DNA serves as the template for all protein production and how genetic mutations can affect this process, researchers can gain valuable insights into the mechanisms of disease and develop new treatments and therapies. As the study of DNA and protein production continues to evolve and advance, we can look forward to new discoveries and breakthroughs in fields ranging from biotechnology to medicine.
- Supermassive black holes have up to billions of times more mass than the Sun - How they became this big has been a long-standing mystery - Australia research shows big galaxies breed even bigger black holes ASTRONOMERS FROM SWINBURNE UNIVERSITY of Technology in Australia have discovered how supermassive black holes grow – and it’s not what was expected. For years, scientists had believed that supermassive black holes – millions or billions of times the mass of our Sun – located at the centres of galaxies, increased their mass in step with the growth of their host galaxy. However, new observations have revealed a dramatically different behaviour. “Black holes have been growing much faster than we thought,” Professor Alister Graham from Swinburne’s Centre for Astrophysics and Supercomputing said. Within galaxies, there is a competition of sorts for the available gas; for either the formation of new stars or feeding the central black hole. For more than a decade the leading models and theories have assigned a fixed fraction of the gas to each process, effectively preserving the ratio of black hole mass to galaxy mass. New research to be published in The Astrophysical Journal reveals that this approach needs to be changed. “We now know that each ten-fold increase of a galaxy’s stellar mass is associated with a much larger 100-fold increase in its black hole mass,” Professor Graham said. “This has widespread implications for our understanding of galaxy and black hole co-evolution.” The following animation depicts a star being devoured by a black hole. The researchers have also found the opposite behaviour to exist among the tightly packed clusters of stars that are observed at the centres of smaller galaxies and in disc galaxies like our Milky Way. “The smaller the galaxy, the greater the fraction of stars in these dense, compact clusters,” Swinburne researcher Dr Nicholas Scott said. “In the lower mass galaxies the star clusters, which can contain up to millions of stars, really dominate over the black holes.” Previously it was thought that the star clusters contained a constant 0.2 per cent of the galaxy mass. Black holes = gravitational prisons The research also appears to have solved a long-standing mystery in astronomy. ‘Intermediate mass’ black holes with masses between that of a single star and one million stars have been remarkably elusive. The new research predicts that numerous galaxies already known to harbour a black hole – albeit of a currently unknown mass – should contain these missing `intermediate mass’ black holes. “These may be big enough to be seen by the new generation of extremely large telescopes,” Dr Scott said. Professor Graham said these black holes were still capable of readily devouring any stars and their potential planets if they ventured too close. “Black holes are effectively gravitational prisons and compactors, and this may have been the fate of many past solar systems,” Professor Graham said. “Indeed, such a cosmic dance will contribute at some level to the transformation of nuclear star clusters into massive black holes.” The researchers combined observations from the Hubble Space Telescope, the European Very Large Telescope in Chile and the Keck Telescope in Hawaii to create the largest sample to date of galaxies with reliable star cluster and supermassive black hole mass measurements. Adapted from information issued by Swinburne University of Technology. Images by Gabriel Perez Diaz. Get SpaceInfo.com.au daily updates by RSS or email! 
The NASA/ESA Hubble Space Telescope discovered the brightest quasar ever seen in the early universe 12.8 billion light years from Earth. The quasar’s light began its journey when the universe was only about a billion years old, so the discovery also provides an insight into the formation of galaxies. A quasar is an extremely bright core of an active galaxy whose glow is generated by a supermassive black hole surrounded by an accretion disk. Gas sucked in by the black hole releases huge amounts of energy that can be observed over all wavelengths. The discovery of the quasar, cataloged as J043947.08+163415.7, was no accident. An international team of astronomers has been searching the sky piece by piece for 20 years. Now the scientists were able to identify the quasar using data from the NASA/ESA Hubble Space Telescope and a strong gravitational lens effect. “In fact, the entire sky was searched using Sky Surveys at different wavelengths,” confirms Dr Fabian Walter of the Max Planck Institute for Astronomy in Heidelberg. “It was not a random discovery but the result of many years of searching.” The brightness of the newly discovered quasar corresponds to about 600 trillion suns and the supermassive black hole is several hundred million times more massive than our sun. “That’s something we’ve been looking for for a long time,” says Dr. Xiaohui Fan of the University of Arizona. “We don’t expect there to be many quasars brighter than this in the whole observable universe!” Despite its brightness, Hubble could only recognize the quasar because of its gravitational lensing effect, since there is a faint galaxy exactly between the quasar and Earth that deflects the quasar’s light and makes it appear three times larger and 50 times brighter than it would be without the gravitational lensing effect. Conclusions about the role of black holes in the formation of stars Based on the data collected, the scientists were able to see that the supermassive black hole not only accumulates matter extremely quickly, but also that the quasar can produce up to 10,000 stars per year. Due to the amplifying effect of the gravitational lens, however, the actual rate of star formation could also be considerably lower. By the way, the Milky Way produces about one new star every year. “Its properties and its distance make it a prime candidate for studying the development of distant quasars, as well as the role that supermassive black holes played in their centers in the formation of stars,” Walter explains, illustrating why this discovery is so important. “By making the source appear brighter than it is (through gravitational lensing), we can make more detailed statements about the properties of the quasars than would otherwise be possible, or we could only draw conclusions if we had more powerful telescopes. Quasars like J043947.08+163415.7 existed during the period of Re-ionization, the period after the Big Bang, when the radiation of young galaxies and quasars ionized the neutral hydrogen clouds in the cosmos, which had cooled 400,000 years after the Big Bang. What was responsible for the formation of quasars and to what extent they were involved in the formation of stars in the early cosmos, however, is still largely unknown. But objects like this newly discovered quasar could help to solve this mystery. Innovation Origins is an independent news platform, which has an unconventional revenue model. We are sponsored by companies that support our mission: spreading the story of innovation. Read more here. 
AS Physics 2014-15 INDUCTION WORK
Student                Class 12 A / B / C / D                Form                MCC 2014

1. Physical Quantities
There is an important distinction between Maths and Physics that students often overlook. Numbers in Physics have meaning: they are the sizes of physical quantities which exist. To give numbers meaning we suffix them with units. There are two types of units:
Base units. These are the units of the seven fundamental quantities defined by the Système international d'Unités (SI units). Once defined, we can make measurements using the correct unit and make comparisons between values.
Derived units. These are obtained by multiplying or dividing base units. Some derived units are complicated and are given simpler names, such as the unit of power, the watt (W), which in SI base units would be m^2 kg s^-3.
Notice that at A-Level we use the equivalent notation m s^-1 rather than m/s. Do not become confused between the symbol we give to the quantity itself and the symbol we give to the unit. For some examples, see the tables below.

Prefix   Symbol   Name            Multiplier
femto    f        quadrillionth   10^-15
pico     p        trillionth      10^-12
nano     n        billionth       10^-9
micro    µ        millionth       10^-6
milli    m        thousandth      10^-3
centi    c        hundredth       10^-2
kilo     k        thousand        10^3
mega     M        million         10^6
giga     G        billion         10^9
tera     T        trillion        10^12
peta     P        quadrillion     10^15

Basic quantity          Unit name   Symbol
Mass                    kilogram    kg
Length                  metre       m
Time                    second      s
Current                 ampere      A
Temperature             kelvin      K
Amount of substance     mole        mol
Luminous intensity      candela     cd

Derived quantity   Unit name                    Symbol
Volume             cubic metre                  m^3
Velocity           metre per second             m s^-1
Density            kilogram per cubic metre     kg m^-3

Quantity      Quantity symbol          Unit name                        Unit symbol
Length        L or l or h or d or s    metre                            m
Wavelength    λ                        metre                            m
Mass          m or M                   kilogram                         kg
Time          t                        second                           s
Temperature   T                        kelvin                           K
Charge        Q                        coulomb                          C
Momentum      p                        kilogram metres per second       kg m s^-1

Often the value of the quantity we are interested in is very big or small. To save space and simplify these numbers, we prefix the units with a set of symbols. Knowledge of standard form and how to input it into your calculator is essential. For example:
245 × 10^-12 m = 245 pm
2.45 × 10^3 m = 2.45 km
We may need to convert units to make comparisons. For example: which is bigger, 0.167 GW or 1500 MW?
0.167 GW = 0.167 × 10^9 W = 167 × 10^6 W = 167 MW < 1500 MW

Physical Quantities - Questions
1) The unit of energy is the joule. Find out what this unit is expressed in terms of the base SI units.
2) Convert these numbers into normal form: a) 5.239 × 10^3 b) 4.543 × 10^4 c) 9.382 × 10^2 d) 6.665 × 10^-6 e) 1.951 × 10^-2 f) 1.905 × 10^5 g) 6.005 × 10^3
3) Convert these quantities into standard form: a) 65345 N b) 765 s c) 486856 W d) 0.987 cm^2 e) 0.000567 F f) 0.0000605 C g) 0.03000045 J
4) Write down the solutions to these problems, giving your answer in standard form: a) (3.45 × 10^-5 + 9.5 × 10^-6) ÷ 0.0024 b) 2.31 × 10^5 × 3.98 × 10^-3 + 0.0013
5) Calculate the following: a) 20 mm in metres b) 3.5 kg in grams c) 589000 μm in metres d) 1 m^2 in cm^2 (careful) e) 38 cm^2 in m^2
6) Find the following: a) 365 days in seconds, written in standard form b) 3.0 × 10^4 g written in kg c) 2.1 × 10^6 Ω written in MΩ d) 5.9 × 10^-7 m written in μm e) Which is bigger, 1452 pF or 0.234 nF?
Mark = /27

2. Significant Figures
Numbers in Physics also show us how certain we are of a value. How sure are you that the width of this page is 210.30145 mm across? Using a ruler you could not be this precise. You would be more correct to state it as being 210 mm across, since a ruler can measure to the nearest millimetre.
To show the precision of a value we will quote it to the correct number of significant figures. But how can you tell which figures are significant? The Rules 1. All non-zero digits are significant. 2. In a number with a decimal point, all zeros to the right of the right-most non-zero digit are significant. 3. In a number without a decimal point, trailing zeros may or may not be significant, you can only tell from the context. Examples Value # of S.F. Hints 23 123.654 123.000 0.000654 100.32 5400 2 6 6 3 5 2, 3 or 4 There are two digits and both are non-zero, so are both significant All digits are significant – this number has high precision Trailing zeros after decimal are significant and claim the same high precision Leading zeros are only placeholders Middle zeros are always significant Are the zeros placeholders? You would have to check how the number was obtained When taking many measurements with the same piece of measuring apparatus, all your data should have the same number of significant figures. For example, measuring the width of my thumb in three different places with a micrometer: 20.91 x 10-3 m 21.22 x 10-3 m 21.00 x 10-3m all to 4 s.f Significant Figures in Calculations We must also show that calculated values recognise the precision of the values we put into a formula. We do this by giving our answer to the same number of significant figures as the least precise piece of data we use. For example: A man runs 110 m in 13 s. Calculate his average speed. There is no way we can state the runners speed this precisely. Speed = Distance / Time = 110 m / 13 s = 8.461538461538461538461538461538 m/s This is the same number of sig figs as the time, which is less precise than the distance. = 8.5 m/s to 2 s.f. Significant Figures - Questions 1) Write the following lengths to the stated number of significant figures: a) 5.0319 m to 3 s.f. b) 500.00 m to 2 s.f. c) 0.9567892159 m to 2 s.f. d) 0.000568 m to 1 s.f. 2) How many significant figures are the following numbers quoted to? a) 224.4343 b) 0.000000000003244654 c) 344012.34 d) 456 e) 4315.0002 f) 200000 stars in a small galaxy g) 4.0 3) For the numbers above that are quoted to more than 3 s.f, convert the number to standard form and quote to 3 s.f. ↑ 4) Calculate the following and write your answer to the correct number of significant figures: a) 2.65 m x 3.015 m b) 22.37 cm x 3.10 cm c) 0.16 m x 0.02 m d) 5 0 m m Mark = /19 3. Using Equations You are expected to be able to manipulate formulae correctly and confidently. You must practise rearranging and substituting equations until it becomes second nature. We shall be using quantity symbols, and not words, to make the process easier. Key points Whatever mathematical operation you apply to one side of an equation must be applied to the other. Don’t try and tackle too many steps at once. Simple formulae The most straightforward formulae are of the form (or more correctly ). Rearrange to set b as the subject: Divide both sides through by c therefore Rearrange to set c as the subject: Divide both sides through by b therefore Alternatively you can use the formula triangle method. From the formula you know put the quantities into the triangle and then cover up the quantity you need to reveal the relationship between the other two quantities. 
This method only works for simple formulae, it doesn’t work for some of the more complex relationships, so you must learn to rearrange a b×c More complex formulae Formulae with more than 3 terms Formulae with additions or subtractions Formulae with squares or square roots Find ρ Find h Find Divide by l Add Φ Cancel l Multiply by A Cancel A √ Square Cancel Φ Multiply by Divide by Divide by T2 Cancel Symbols on quantities Sometimes the symbol for a quantity may be combined with some other identifying symbol to give more detail about that quantity. Here are some examples. Symbol Δx Δx/Δt <x> or ̅ ⃗ x1 x2 Meaning A change in x (difference between two values of x) A rate of change of x Mean value of x Quantity x is a vector Subscripts distinguish between same types of quantity Using Equations - Questions 1) Make t the subject of each of the following equations: 2) Solve each of the following equations to find the value of t: a) V = u +at a) 30 = 3t - 3 b) S = ½ at2 b) 4(t +5) = 28 c) Y = k (t - t0) c) d) F = e) Y = m 5 t = 10 t d) 3t2 = 36 k e) t -1/2 = 6 t f) Y = 2t1/2 f) t1/3 = 3 Δs g) v = Δt Mark = /13 4. Straight Line Graphs Value along y-axis If a graph is a straight line, then there is a formula that will describe it. Value along x-axis y=mx+c gradient y-intercept Here are some examples: y=x A positive line through the origin Gradient, m = 1 y-intercept, c = 0 y=x–5 Parallel to y = x but transposed by -5. Gradient, m = 1 y-intercept, c = -5 y = 2x A positive line through the origin Gradient, m = 2 y-intercept, c = 0 y = 2x + 4 Parallel to y = 2x ,transposed by 4. Gradient, m = 2 y-intercept, c = 4 y = -x + 1 A negative line, parallel to y = -x Gradient, m = -1 y-intercept, c = 1 DIRECTLY PROPORTIONAL describes any straight line through the origin. Both y α x and Δy α Δx Using Straight Line Graphs in Physics LINEAR describes any other straight line. Only Δy α Δx. If asked to plot a graph of experimental data at GCSE, you would plot the independent variable along the x-axis and the dependent variable up the y-axis. Then you might be able to say something about how the two variables are related. At A-Level, we need to be cleverer about our choice of axes. Often we will need to find a value which is not easy to measure. We take a relationship and manipulate it into the form y = mx + c to make this possible. Example: is the relationship between the resistance R of a conductor, the resistivity ρ of the material which it is made of, its length l, and its area A. We do an experiment to find R, l and A, which are all easy to measure. We want to find the resistivity ρ, which is harder. This example doesn’t need rearranging, just rewriting into the shape y = mx + c: So it is found that by plotting R on the y-axis and l/A on the x-axis, the resitivity 𝜌 will be the gradient of the graph. Straight Line Graphs - Questions 1) For each of the following equations that represent straight line graphs, write down the gradient and the y intercept: a) y = 5x + 6 b) y = -8x + 2 c) y = 7 - x d) 2y = 8x - 3 e) y + 4x = 10 f) 3x = 5(1-y) g) 5x - 3 = 8y Mark = /14 5. Trigonometry When dealing with vector quantities or systems involving circles, it will be necessary to use simple trigonometric relationships. Angles and Arcs There are two measurements of angles used in Physics. Degrees There are 360o in a circle Radians There are π radians in a circle Whichever you use, make sure your calculator is in the correct mode! 
To swap from one to the other you need to find what fraction of a circle you are interested in, and then multiply it by the number of degrees or radians in a circle. degrees radians or degrees radians For example: To convert 90o into radians: (We tend to lea e answers in radians as fractions of π) radians To find the length of an arc, use . The angle must be in radians. What would the relationship be if you wanted the entire circumference? Compare to this formula. Sine, Cosine, Tangent Recall from your GCSE studies the relationships between the lengths of the sides and the angles of rightangled triangles. Using SOHCHATOA: Vector Rules A vector is a quantity which has two parts: SIZE and DIRECTION (e.g. force, velocity, acceleration) A scalar is a quantity which just has SIZE (e.g. temperature, length, time, speed) We represent vectors on diagrams with arrows. To simplify problems in mechanics we will separate a vector into horizontal and vertical components. This is done using the trigonometry rules. Trigonometry - Questions 1) Calculate: a) The circumference of a circle of radius 0.450 m b) the length of the arc of a circle of radius 0.450m for the following angles between the arc and the centre of the circle: i. 340o ii. 170o iii. 30o 2) For the triangle ABC shown, calculate: a) Angle θ if AB = 0cm and BC = 0cm b) Angle θ if AC = 80cm and AB = 5cm c) AB if θ = 36° and BC = 50 mm d) BC if θ = 65° and AC = 15 km 3) Calculate the horizontal component A and the vertical component B of a 65 N force at 40o above the horizontal. Mark = /10 6. Exam Technique It is vital that you are able to communicate a numerical answer appropriately to an examiner. Students will often make these mistakes in questions that involve calculations: Copying values or equations incorrectly from the question or the data sheet. Mistakes when rearranging formulae. Ignoring prefixes to units. Inputting into calculator wrong, especially standard form and accurate use of brackets. Having the calculator in the wrong mode (radians/degrees) If asked for, not writing final answer to the correct number of significant figures or writing the unit. Writing down a value which would be silly in the context of the question. Messy working that is difficult to decipher. A method for numerical questions Example question: Calculate the wavelength of a quantum of electromagnetic radiation with energy of 1.99 J. Data sheet: Speed of electromagnetic radiation in free space, c = 3.00 x 106 m s-1 Planck’s constant, h = 6.63 x 10-34 J s (1) Write down the values of everything you are given. (2) Convert all the values into SI units (e.g. put time into seconds, distance in meters...) and replace unit prefixes with their equivalent values in standard form. c = 3.00 x 108 ms-1 h = 6.63 x 10-34 Js E = 0.199 pJ = 0.199 x 10-12 J λ = ? (3) Pick the equation you need. If you need to find it on the data sheet, look for one that contains the quantities you know and the quantity you are trying to work out. E = (4) Rearrange the formula so the quantity you want is the subject of the equation. λ = (5) Insert the values into your equation, taking care to lay out your working clearly = Use your calculator to accurately input the numbers to find the solution. = 9.9949 x 1011 m (6) - - = 9.99 x 10-13 m to 3 s.f. (7) Write down the answer to more decimal places than you need at first, in case you need the value for later calculations. Check the answer seems sensible. 
In this example I got a massive wavelength the first time because I mistyped the energy as 0.199 × 10^12 J. (8) Write your final answer and underline it. All the input values were to 3 s.f., so the answer should be written to the same precision.
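The worked example from section 6 can be checked with a few lines of Python, following the same steps: convert the prefixed value to SI units, rearrange E = hc/λ, and quote the answer to 3 significant figures. The constants below are the rounded values used in the worked example.

h = 6.63e-34    # Planck's constant, J s
c = 3.00e8      # speed of electromagnetic radiation in free space, m s^-1
E = 0.199e-12   # 0.199 pJ converted to joules

wavelength = h * c / E         # lambda = h c / E, rearranged from E = h c / lambda
print(f"{wavelength:.3g} m")   # 9.99e-13 m to 3 s.f.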
Use data analysis skills to match frequency tables with a corresponding bar graph or pictograph. 📊 Practice Matching Bar Graphs and Pictographs Bar graphs, pictographs, frequency tables … OH MY! Are your students practicing how to analyze different types of graphs? As we all know, graphs are an excellent way to display data or information visually. Once your students learn the basics of how to read each type of graph, answering questions and analyzing the data will be a breeze! Why not practice matching data to its corresponding chart while having fun? With this resource, students will have experience matching frequency tables with a single-unit bar graph or pictograph. - Place the frequency tables in one pile on the table. - Shuffle the bar graph, pictograph, and title cards. Spread them out face-up in the middle of the table. - Choose a frequency table and find the matching title and graph. - Set the match of 3 cards to the side. - Continue until all cards have been matched. Through this activity, students will show they can interpret bar graphs and pictographs by matching each with a corresponding frequency table. Tips for Differentiation + Scaffolding A team of dedicated, experienced educators created this resource to support your math lessons. If you have a mixture of above and below-level learners, check out these suggestions for keeping students on track with the concepts: 🆘 Support Struggling Students Help students who need help understanding the concepts by limiting the number of cards they are required to match. Additionally, students can complete this activity in a 1-on-1 setting or with a small group. ➕ Challenge Fast Finishers For students who may need a bit of a challenge, encourage them to create a dot plot to display the same set of data. Students can draw the charts on either a separate piece of paper or on a whiteboard. Plan lessons for all ability levels with our 10 Best Scaffolding Strategies! Easily Prepare This Resource for Your Students Use the dropdown icon on the Download button to choose between the PDF or editable Google Slides version of this resource. Print on cardstock for added durability and longevity. Place all pieces in a folder or large envelope for easy access. This resource was created by Allie Kleijnjans, a teacher in Pennsylvania and Teach Starter Collaborator. Don’t stop there! We’ve got more activities and resources that cut down on lesson planning time: Draw a single-unit picture graph and bar graph to represent data with this worksheet. Use data analysis skills to analyze bar graphs and pictographs with this set of task cards. Use data analysis skills to match tally charts and frequency tables with their corresponding bar graph, pictograph, or dot plot.
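For teachers who want to generate extra graphs to match against the frequency tables, a short optional Python sketch with matplotlib can draw a single-unit bar graph from a frequency table; the category names, counts, and title below are placeholders, not part of the printed resource.

import matplotlib.pyplot as plt

# A hypothetical frequency table: favourite fruit chosen by a class.
frequency_table = {"Apple": 5, "Banana": 3, "Grapes": 7, "Orange": 4}

fig, ax = plt.subplots()
ax.bar(list(frequency_table.keys()), list(frequency_table.values()))
ax.set_title("Favourite Fruit")        # the matching title card
ax.set_xlabel("Fruit")
ax.set_ylabel("Number of students")    # single-unit scale: one unit = one student
ax.set_yticks(range(0, max(frequency_table.values()) + 1))
plt.show()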
Operators (C# Programming Guide)
In C#, an operator is a program element that is applied to one or more operands in an expression or statement. Operators that take one operand, such as the increment operator (++) or new, are referred to as unary operators. Operators that take two operands, such as the arithmetic operators (+, -, *, /), are referred to as binary operators. One operator, the conditional operator (?:), takes three operands and is the sole ternary operator in C#.
The following C# statement contains a single unary operator and a single operand. The increment operator, ++, modifies the value of the operand y.
y++;
The following C# statement contains two binary operators, each with two operands. The assignment operator, =, has the integer variable y and the expression 2 + 3 as operands. The expression 2 + 3 itself consists of the addition operator and two operands, 2 and 3.
y = 2 + 3;
An operand can be a valid expression that is composed of any length of code, and it can comprise any number of sub-expressions. In an expression that contains multiple operators, the order in which the operators are applied is determined by operator precedence, associativity, and parentheses. Each operator has a defined precedence. In an expression that contains multiple operators that have different precedence levels, the precedence of the operators determines the order in which the operators are evaluated. For example, the following statement assigns 3 to n1.
n1 = 11 - 2 * 4;
The multiplication is executed first because multiplication takes precedence over subtraction.
The following table separates the operators into categories based on the type of operation they perform. The categories are listed in order of precedence.
Primary Operators
x?.y: Conditional member access
f(x): Method and delegate invocation
a[x]: Array and indexer access
a?[x]: Conditional array and indexer access
new: Object and delegate creation
new T(...) { ... }: Object creation with initializer. See Object and Collection Initializers (C# Programming Guide).
new { ... }: Anonymous object initializer. See Anonymous Types (C# Programming Guide).
new T[...]: Array creation. See Arrays (C# Programming Guide).
typeof(T): Obtain System.Type object for T
checked(x): Evaluate expression in checked context
unchecked(x): Evaluate expression in unchecked context
default(T): Obtain default value of type T
delegate { ... }: Anonymous function (anonymous method)
Unary Operators
(T)x: Explicitly convert x to type T
Additive Operators
x + y: Addition, string concatenation, delegate combination
x - y: Subtraction, delegate removal
Shift Operators
x << y: Shift left
x >> y: Shift right
Relational and Type Operators
x < y: Less than
x > y: Greater than
x <= y: Less than or equal
x >= y: Greater than or equal
x is T: Return true if x is a T, false otherwise
x as T: Return x typed as T, or null if x is not a T
Equality Operators
x == y: Equal
x != y: Not equal
Logical, Conditional, and Null Operators
x & y: Integer bitwise AND, Boolean logical AND
x ^ y: Integer bitwise XOR, Boolean logical XOR
x | y: Integer bitwise OR, Boolean logical OR
x && y: Evaluates y only if x is true
x || y: Evaluates y only if x is false
x ?? y: Evaluates to y if x is null, to x otherwise
x ? y : z: Evaluates to y if x is true, z if x is false
Assignment and Anonymous Operators
x op= y: Compound assignment
(T x) => y: Anonymous function (lambda expression)
When two or more operators that have the same precedence are present in an expression, they are evaluated based on associativity. Left-associative operators are evaluated in order from left to right. For example, x * y / z is evaluated as (x * y) / z. Right-associative operators are evaluated in order from right to left. For example, the assignment operator is right associative. If it were not, the following code would result in an error.
int a, b, c;
c = 1;
// The following two lines are equivalent.
a = b = c;
a = (b = c);
// The following line, which forces left associativity, causes an error.
//(a = b) = c;

As another example, the ternary operator (?:) is right associative. Most binary operators are left associative. Whether the operators in an expression are left associative or right associative, the operands of each expression are evaluated first, from left to right. The following examples illustrate the order of evaluation of operators and operands; each statement is followed by the order in which its operands and operators are evaluated.
- a = b: a, b, =
- a = b + c: a, b, c, +, =
- a = b + c * d: a, b, c, d, *, +, =
- a = b * c + d: a, b, c, *, d, +, =
- a = b - c + d: a, b, c, -, d, +, =
- a += b -= c: a, b, c, -=, +=
You can change the order imposed by operator precedence and associativity by using parentheses. For example, 2 + 3 * 2 ordinarily evaluates to 8, because multiplicative operators take precedence over additive operators. However, if you write the expression as (2 + 3) * 2, the addition is evaluated before the multiplication, and the result is 10. The following examples illustrate the order of evaluation in parenthesized expressions. As in the previous examples, the operands are evaluated before the operator is applied.
- a = (b + c) * d: a, b, c, +, d, *, =
- a = b - (c + d): a, b, c, d, +, -, =
- a = (b + c) * (d - e): a, b, c, +, d, e, -, *, =
You can change the behavior of operators for custom classes and structs. This process is referred to as operator overloading. For more information, see Overloadable Operators (C# Programming Guide).
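Operator overloading of this kind is not specific to C#; the concept carries over to other languages. As a quick, language-neutral sketch (using a made-up Vector type, not an example from the guide above), here is the same idea in Python, where special methods customize how + and * behave for a user-defined type, and the usual precedence rules still apply:

```python
class Vector:
    """Tiny 2-D vector used only to illustrate operator overloading."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):      # customizes the + operator
        return Vector(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):     # customizes the * operator
        return Vector(self.x * scalar, self.y * scalar)

    def __repr__(self):
        return f"Vector({self.x}, {self.y})"

# Multiplicative operators still take precedence over additive ones,
# so the scaling happens before the addition: Vector(7, 10).
print(Vector(1, 2) + Vector(3, 4) * 2)
```

In C# the equivalent customization is written with operator declarations on the type, as covered in the Overloadable Operators article referenced above.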
From Latin ex- + -periri (akin to periculum attempt). In the scientific method, an experiment is a set of actions and observations, performed to verify or falsify a hypothesis or research a causal relationship between phenomena. The experiment is a cornerstone in empirical approach to knowledge. See the list of famous experiments for historically important scientific experiments. An experiment in baking As a simple example, consider that many bakers have noticed that the amount of "fluffiness" in a loaf of bread seems to be related to how much humidity there is in the air when the dough is being made. This can be formalized as the hypothesis: "all other things being considered equal, the greater the humidity, the fluffier the bread". While this hypothesis might arise naturally from baking many loaves over time, an experiment to determine whether or not this is really true would be to carefully prepare bread dough, as identically as possible, on two types of days: days when the humidity is high, and days when the humidity is low. If the hypothesis is true, then the bread prepared on the high humidity days should be fluffier. Several features of this experiment hold in general for all experiments: - We must try to make all other conditions of the process to be as similar as possible between the trials. For example, the amounts of flour and water added, the temperature of the butter, and the amount of kneading all may have an effect on the fluffiness; so the experiment should explicitly attempt to control the other variables which could have an effect on the outcome. This gives us some confidence in the statement "all other things being equal,...". - Although "fluffiness" may seem to be an easily understood idea, one baker's idea of "fluffy bread" may be different than another baker's. The experiment must be based on objective quantities - for example "fluffiness is measured as the total volume of the loaf of bread from one pound of flour". This idea, coupled with the exactness of the description of how the experiment is to be performed, is sometimes called the operational aspect of the experiment; the idea that all actions, quantities, and observations can be agreed upon by reasonable people. - Noting that once, on a humid day, one baked a fluffy loaf is not enough. The experiment should be repeatable; given that one performs the experiment exactly as described, one should expect to see the same results, no matter who performs the experiment or how many times it is performed. Repeatability of an experiment helps to eliminate various types of experimental errors - one may think that one has accurately described all of the relevant techniques and measurements in an experiment, but certain other effects (such as the brand of the flour, trace impurities in the water used in the dough, etc.) may actually be contributing to the observed effects. In the scientific method, someone may claim that they have performed an experiment with a particular result, and thereby supported a particular hypothesis. However, until other scientists have performed the same experiment in the same way and gotten the same results, the experiment is usually not considered as a "proven" result (see cold fusion for a recent example). - Finally, even though one has baked bread a hundred times, occasionally a loaf will completely fail "because the kitchen gods are unhappy". 
It is important to realize that some hypotheses cannot be tested experimentally - since we cannot make a measurement which will tell us whether or not the "kitchen gods" are "happy", we cannot perform an experiment which either proves or disproves the hypothesis "the best bread happens when the kitchen gods are happy". Experimental design attempts to balance the requirements and limitations of the field of science in which one works so that the experiment can provide the best conclusion about the hypothesis being tested. In some sciences, such as physics and chemistry, it is relatively easy to meet the requirements that all measurements be made objectively and that all conditions can be kept controlled across experimental trials. On the other hand, in other fields such as biology and medicine, it is often hard to ensure that the conditions of an experiment are kept consistent; and in the social sciences, it may even be difficult to determine a method for measuring the outcomes of an experiment in an objective manner. For this reason, sciences such as physics are often referred to as "hard sciences", while others such as sociology are referred to as "soft sciences", in an attempt to capture the idea that objective measurements are often far easier in the former and far more difficult in the latter. In addition, in the soft sciences, the requirement for a "controlled situation" may actually work against the utility of the hypothesis in a more general situation. When the desire is to test a hypothesis that works "in general", an experiment may have a great deal of internal validity, in the sense that it is valid in a highly controlled situation, while at the same time lacking external validity when the results of the experiment are applied to a real-world situation. One of the reasons this may happen is the Hawthorne effect. As a result of these considerations, experimental design in the "hard" sciences tends to focus on the elimination of extraneous effects (type of flour, impurities in the water), while experimental design in the "soft" sciences focuses more on the problems of external validity, often through the use of statistical methods. Occasionally, events occur naturally from which scientific evidence can be drawn; these are called natural experiments. In such cases the problem of the scientist is to evaluate the natural "design". - Main article: Control experiment Many hypotheses in sciences such as physics can establish causality by noting that, until some phenomenon occurs, nothing happens; then, when the phenomenon occurs, a second phenomenon is observed. But often in science, this situation is difficult to obtain. For example, in the old joke, someone claims that they are snapping their fingers "to keep the tigers away" and justifies this behaviour by saying "see - it's working!". While this "experiment" does not falsify the hypothesis "snapping fingers keeps the tigers away", it does not really support it either, since not snapping your fingers keeps the tigers away just as well. To demonstrate a cause-and-effect hypothesis, an experiment must often show that, for example, a phenomenon occurs after a certain treatment is given to a subject, and that the phenomenon does not occur in the absence of the treatment. (See Baconian method.) A controlled experiment generally compares the results obtained from an experimental sample against a control sample, which is practically identical to the experimental sample except for the one aspect whose effect is being tested.
In many laboratory experiments it is good practice to have several replicate samples for the test being performed and to have both a positive control and a negative control. The results from replicate samples can often be averaged, or if one of the replicates is obviously inconsistent with the results from the other samples, it can be discarded as being the result of some experimental error (some step of the test procedure may have been mistakenly omitted for that sample). Most often, tests are done in duplicate or triplicate. A positive control is a procedure that is very similar to the actual experimental test but which is known from previous experience to give a positive result. A negative control is known to give a negative result. The positive control confirms that the basic conditions of the experiment were able to produce a positive result, even if none of the actual experimental samples produce a positive result. The negative control demonstrates the base-line result obtained when a test does not produce a measurable positive result; often the value of the negative control is treated as a "background" value to be subtracted from the test sample results. Sometimes the positive control takes the form of a standard curve. An example that is often used in teaching laboratories is a controlled protein assay. Students might be given a fluid sample containing an unknown (to the student) amount of protein. It is their job to correctly perform a controlled experiment in which they determine the concentration of protein in the fluid sample (usually called the "unknown sample"). The teaching lab would be equipped with a protein standard solution with a known protein concentration. Students could make several positive control samples containing various dilutions of the protein standard. Negative control samples would contain all of the reagents for the protein assay but no protein. In this example, all samples are performed in duplicate. The assay is a colorimetric assay in which a spectrophotometer can measure the amount of protein in samples by detecting a colored complex formed by the interaction of protein molecules and molecules of an added dye. The results for the diluted test samples can be compared to the standard curve in order to determine an estimate of the amount of protein in the unknown sample. Controlled experiments can be particularly useful when it is difficult to exactly control all the conditions in an experiment. The experiment begins by creating two or more sample groups that are probabilistically equivalent, which means that measurements of traits should be similar among the groups and that the groups should respond in the same manner if given the same treatment. This equivalency is determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry, where there is very little variation between individuals and the group size is easily in the millions, these statistical methods are often bypassed and simply splitting a solution into equal parts is assumed to produce identical sample groups. Once equivalent groups have been formed, the experimenter tries to treat them identically except for the one variable that he or she wishes to isolate. Human experimentation requires special safeguards against outside variables such as the placebo effect.
Such experiments are generally double blind, meaning that neither the volunteer nor the researcher knows which individuals are in the control group or the experimental group until after all of the data has been collected. This ensures that any effects on the volunteer are due to the treatment itself and are not a response to the knowledge that he is being treated. Sometimes controlled experiments are prohibitively difficult, so researchers resort to natural experiments. Natural experiments take advantage of predictable natural changes in simple systems to measure the effect of that change on some phenomenon. Much of astronomy relies on experiments of this type. It is clearly impractical, when trying to prove the hypothesis "suns are collapsed clouds of hydrogen", to start out with a giant cloud of hydrogen, and then perform the experiment of waiting a few billion years for it to form a sun. However, by observing various clouds of hydrogen in various states of collapse, and other implications of the hypothesis (for example, the presence of various spectral emissions from the light of stars), we can collect the experimental data we require to support the hypothesis. An early example of this type of experiment was the first verification in the 1600s that light does not travel from place to place instantaneously, but instead has a measurable speed. Observation of the appearance of the moons of Jupiter were slightly delayed when Jupiter was far from Earth, as opposed to when Jupiter was closer to Earth; and this phenomenon was used to demonstrate that the time delays were consistent with a measurable speed of light. Quasi-experiments are very much like controlled experiments except that they lack probabilistic equivalency between groups. These types of experiments often arise in the area of medicine where, for ethical reasons, it is not possible to create a truly controlled group. For example, one would not want to deny all forms of treatment for a life-threatening disease from one group of patients to evaluate the effectiveness of another treatment on a different group of patients. Researchers compensate for this with complicated statistical methods. See also quasi-empirical methods. "We have to learn again that science without contact with experiments is an enterprise which is likely to go completely astray into imaginary conjecture." — Hannes Alfven "Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality." — Nikola Tesla The Character of Physical Law, by Richard P. Feynman
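The standard-curve idea from the controlled protein assay described earlier lends itself to a small numerical illustration. The sketch below uses entirely hypothetical concentrations and absorbance readings (not from any real assay): it averages duplicate standards, subtracts the negative-control background, fits a straight line as the standard curve, and reads the unknown sample off that line.

```python
import numpy as np

# Hypothetical standard curve for a colorimetric protein assay:
# known concentrations (mg/mL) measured in duplicate, plus a reagent
# blank (negative control) whose absorbance is treated as background.
standards = np.array([0.125, 0.25, 0.5, 1.0, 2.0])            # mg/mL
absorbance = np.array([[0.11, 0.12], [0.21, 0.20], [0.40, 0.42],
                       [0.79, 0.81], [1.58, 1.61]])            # duplicates
blank = 0.02                                                    # negative control

# Average the duplicates and subtract the background signal.
signal = absorbance.mean(axis=1) - blank

# Fit a straight line by least squares to act as the standard curve.
slope, intercept = np.polyfit(standards, signal, deg=1)

# Estimate the unknown sample from its background-corrected reading.
unknown_reading = 0.55 - blank
estimated_conc = (unknown_reading - intercept) / slope
print(f"Estimated protein concentration: {estimated_conc:.2f} mg/mL")
```

In a real assay the response may not be perfectly linear over the whole range, which is one reason the text notes that the positive control sometimes takes the form of a full standard curve rather than a single point.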
Students begin to investigate the process of composition through improvisation tasks exploring the elements of dance, space, time and dynamics. Students will work through each task in the lesson sequence, developing knowledge, understanding and skill in generating diverse movement quality. - 4.2.1 identifies and explores aspects of the elements of dance in response to a range of stimuli. - 4.2.2 composes dance movement, using the elements of dance, that communicates ideas. - 5.2.1 explores the elements of dance as the basis of the communication of ideas. - 5.2.2 composes and structures dance movement that communicates an idea. How does improvisation aid in developing an intent through various movement qualities? Students will explore a range of tasks based around improvisation, enabling them to explore their own bodies capabilities in movement qualities. Students will work through the attached Exploring Improvisation presentation (PPTX 5.292 KB) exploring space, time and dynamics to create original movement. The connection of reflection and refinement within the practical work will further engage students, creating deeper understanding within the compositional process. Students are to: - document the process through composition classes in a process diary. This should be a journal, exploring reflections of each activity and lesson investigating the various modes of improvisation. This can be their class workbooks, a dance process diary, or an online blog through sites such as Class Notebook or Google classroom. All activities require students to demonstrate their learning and are all formative assessment activities. Teaching and learning activities Students are to work both individually and in small groups through improvisation activities throughout this unit, exploring how to generate movement from instructional based tasks. Students will be required to reflect on this practice through literacy and numeracy tasks with regular process diary entries. Suggested student learning activities include: - discussion and reflection activities around each of the improvisation tasks - creating a vocabulary list throughout the lesson sequence - exploring the process of improvisation through questions such as - what is the purpose of improvisation? - when could you use improvisation? - the elements of dance - exploring space, time and dynamics and discussing the qualities within each - analysing how each element could be used within improvisation - completion of the activities below. - discuss the purpose of stage space and personal space within composition - complete the instructions on slide 4 of the PowerPoint presentation, exploring the stage space around them - write a reflection on the success of the exercise exploring how the tempo altered throughout the activity, and how the use of personal space changed throughout. - repeat this above activity with only half of the class, concentrating on how they manipulate the space, time and dynamics. - discuss the effectiveness of this task in utilising stage space. Use the Action words template (PDF 4.3 MB) for this activity. Print as many copies as per the students in your class. - complete slide five and six of the PowerPoint presentation, exploring how different instructional words could hold different meanings for movement. 
The following activities will break it down: - phrase one - students are to perform the actions in the original order, creating unique and diverse movement for each action - phrase two - students are to shuffle the cards and re-select the order of actions, performing these in the new order - students are to perform phrase one and two in order in small groups for the class - discuss and write an entry in your process diary on how the deconstructed version changed the feeling and transitions within the phrase - phrase three - students are to pair up and combine cards, choosing a new random order from one to ten - students are to perform this phrase as a duet, altering the use of level and directions to one another - discuss the probability and percentage of each selected card - draw a bar graph showing the amount of times each action was selected. - phrase one - follow the instructions on slide seven of the PowerPoint presentation, exploring how the use of different directions in space can create intrigue and diversity within choreography - use directions such as: turn your head to each number, point your right index finger to each number, point with your elbow, then knee, then your whole body. Take this further and step, walk, run or turn towards the number, making each movement natural and unforced - discuss and write an entry on the effectiveness of utilising different directions within a work. - complete the instructions on slide eight of the PowerPoint presentation, exploring the use of abstraction with different body parts and varied levels within choreography - discuss and write a process diary entry on the effectiveness and exploration of locomotor movement on different levels. - complete the instructions on slide nine of the PowerPoint presentation exploring temporal variations - learn and manipulate a sequence taught by the classroom teacher, adding stillness and dynamics to the movement - retrograde the movement, exploring how the timing and transitions would alter - reflect on the task discussing how the temporal variations make the phrase more interesting. - complete the instructions on slide ten of the PowerPoint presentation exploring dynamic qualities in soft and forceful movement - discuss and write a process diary entry on the effectiveness of adapting to these dynamic qualities. - read and complete the improvisation task on slide 11 of the PowerPoint presentation, exploring space, time and dynamics through movement - improvisation task - choose a beginning shape on any level - melt to the floor and hold a finishing shape - reach out in any way or form from this position - rewind/return to your original starting shape - take two to ten steps the choice is yours - create a balance or hold on one leg in any shape - walk in any direction for as long as you like - run in any direction as quickly or slowly as you like - travel as though you are in quicksand - change this level: laying down, front, back, side on, etc, walking through quicksand - find a finishing shape on any level and hold this - wait for the rest of the dancers to finish and hold their position. - students are to remember this phrase and manipulate it through the following ways - change the timing and tempo of your movements - add one to two stillness - change the direction and planes of the movement whenever you would like - add dynamic qualities through the movement, e.g.: percussive (hard) versus sustained (soft) movement qualities. 
- complete the instructions on slide 12 of the PowerPoint presentation exploring dynamic qualities through locomotor movement - discuss how feelings and emotions are connected to the dynamic qualities and how this enhanced the movement explored. Use the Dynamic cards template (PDF 4.27 MB) for this activity. - discuss the task on slide 13 of the PowerPoint presentation exploring the generation of a short work, including three sections of different dynamic movement - perform for the class, enabling discussion around the selected dynamic qualities - reflect on the use of each dynamic and what it is like to only use one quality for an entire section. - discuss the task on slide 14 of the PowerPoint presentation exploring four prior tasks - points in space - body parts take control - temporal variations - isolating an impulse - create an original impulse work exploring each of these activities to develop movement from focal points, body-part leads, varied timing and action and reaction to force - perform for the class, enabling discussion and feedback around the success of using these different tasks to create a short work - reflect on the benefits of exploring these tasks to generate movement and how this could be adapted when it comes to creating a composition work. - discuss the previous ten tasks and how each explored and manipulated space, time and dynamics. When structuring the lessons aim to work through two tasks per week. Some tasks will run over more than one lesson. Allow students to explore the tasks to their full extent to build their knowledge, understanding and skill within improvisation. The appreciation and practical composition lessons are to be explored in partnership. Attempt to engage in reflection tasks at the end of each activity, building students competencies in writing an effective process diary. Students are to: - demonstrate a willingness to engage in improvisation tasks - explore performance opportunities presenting group works and phrases to the class around each improvisation task - demonstrate understanding and skill in manipulating the elements of dance to create engagement within their work . - create a short work, linking four of their chosen tasks together developing sections of movement - explore the use of variation and contrast to create unity within the work - write an analysis of the unity formed through these tasks - LS 2.1 - LS 2.2 - explore improvisation tasks to create movement - perform these routines to their class. Feedback is formative for the duration of the unit. This sequence and accompanying worksheets are available as word documents below: Syllabus outcomes and content descriptors from Dance 7–10 Syllabus (2003) © NSW Education Standards Authority (NESA) for and on behalf of the Crown in right of the State of New South Wales, 2017.
Welcome to our course on parallel programming. In the following weeks, we will see some of the cutting-edge technologies for parallel programming, see how to write efficient parallel programs, and learn the basic principles and algorithms of this fast-moving and exciting field of computing. In this first lecture, we give a general introduction to parallel computing and study various forms of parallelism. We will also give a summary of what to expect in the rest of this course. The first big question that you need to answer is, what is parallel computing? So, let's define it. We'll say that parallel computing is a type of computation in which many calculations execute at the same time. Parallel computing is based on the following principle: a computational problem can be divided into smaller subproblems, which can then be solved simultaneously. Parallel computing assumes the existence of some sort of parallel hardware, which is capable of undertaking these computations simultaneously. That is, in parallel. Having said that, we will take a short glance into the history of parallel computing to better understand its origin. Despite its recent rise in popularity, parallel execution has been present ever since the early days of computing, when computers were still mechanical devices. It is difficult to identify the exact conception of the first parallel computing machine, but we can agree on the following. In the 19th century, Charles Babbage invented the concept of a programmable computer, which he called an analytical engine. The analytical engine was able to execute various user-written programs. For this, Babbage is credited as a pioneer in the field of programmable computers. After Babbage presented his ideas about the analytical engine, an Italian general and mathematician, Luigi Menabrea, wrote down a description of Babbage's computer. Menabrea envisioned an extension in which independent computations occur simultaneously. His idea was to speed up Babbage's analytical engine. Modern computers, based on transistor technology, appeared much later, in the 20th century. Researchers from IBM, Benny John and Gene Amdahl to mention a few, were among the early evangelists of parallel processing. Amdahl's famous quote comes from this era. He said, "For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that the truly significant advances can be made only by interconnection of a multiplicity of computers." Despite Amdahl's visionary views, at the time parallel computing did not become mainstream, but it found a niche within the high-performance computing community. In high-performance computing, the main focus is on parallel algorithms for physics and biology simulations, weather forecasting, cryptography, and applications that require a lot of processing power. The performance requirements of regular users were satisfied by increasing the clock frequency of the CPU, which meant that the CPU could execute a higher number of instructions per second. At the beginning of the 21st century, it became obvious that scaling the processor frequency was no longer feasible beyond a certain point, because the power required by the processor starts to grow nonlinearly with frequency. So instead of increasing CPU clock frequency, processor vendors turned to providing multiple CPU cores on the same processor chip, and named these new devices multi-core processors.
It is interesting to note that these three stories share a common theme. Whether the need for faster computation arose from the sluggishness of a mechanical computer, limitations of the available technology, or the physical boundaries imposed by the power wall. Additional computing power was provided through parallelization. This brings us to the next big question. Why do we need parallel computing? As you will learn in this course, parallel programming is much harder than sequential programming. There are several reasons for this. First, separating one sequential computation into parallel subcomputation is a challenging mental task for the programmer, and is sometimes not even possible. Usually this means identifying independent parts of the program and executing them at the same time. However, some problems cannot be divided as shown in this figure. The second source of difficulty in ensuring correctness. We will see that many new types of errors can arise in parallel programming, making the programmer's life harder. With these increased complexities, why then would we bother writing parallel programs at all? As shown in earlier anecdotes, the main benefit of parallelization is increased program performance. When we write a parallel program, we expect it to be faster than its sequential counterpart. Thus, speedup is the main reason why we want to write parallel programs. In fact, achieving speedup is the third source of difficulty in parallel computing. The speedup discussion brings us to the next big topic, the relationship of parallel programming to a closely related discipline called concurrent programming. In spite of their connection, parallel and concurrent programming are not the same things. There are many contrasting definitions of these programming disciplines, and people sometimes disagree on what these disciplines are. We will decide on one particular definition, and stick to it. A parallel program is a program that uses the provided parallel hardware to execute a computation more quickly. As such, parallel programming is concerned mainly with efficiency. Parallel programming answers questions such as, how to divide a computational problem into subproblems that can be executed in parallel. Or, how to make best use of underlying parallel hardware to obtain the optimal speedup. A concurrent program is a program in which multiple executions may or may not execute at the same time. Concurrent programming serves to structure programs better in order to improve modularity, responsiveness to input and output events, or understandability of the program. As such, concurrent programming is concerned with questions like, when can a specific execution start? When and how can two concurrent executions exchange information? And how do computations manage access to shared resources, such as files or database handles? So parallel programming is mainly concerned with algorithmic problems, numerical computations, or big data applications. Examples of such applications include matrix multiplication, data processing, computer graphics rendering, or simulation of fluid movement. Concurrent programming is targeted at writing asynchronous applications such as web servers, user interfaces, or databases. While parallel programming is concerned mainly with speedup, concurrent programming is concerned with convenience, better responsiveness and maintainability. The two disciplines often overlap. Parallel programming may rely on insights from concurrent programming and vice versa. 
Concurrent programming may be used to solve parallel programming problems. However, neither discipline is a superset of the other. Having more clearly established what parallel programming is, let's take a look at various forms of parallelism. It turns out that parallel computing manifests itself at various granularity levels. For example, some microprocessors have historically had a four-bit word size. Number ranges that did not fit into four bits had to be represented with multiple words. This meant that variables in the eight-bit range, such as x and y, had to be represented with eight bits, that is, two words. Adding these two variables together required two separate add instructions, each operating on four-bit words. Word size subsequently increased to 8 bits, then 16 bits, and eventually 32 bits. Each time this happened, several instructions could be replaced by just a single one. This meant processing more data in the same time interval. In effect, performance was improved by parallelizing operations on the bit level, so we call this form of parallelization bit-level parallelism. A decade ago, we witnessed a switch from 32-bit to 64-bit architectures in commercial processors. More recently, a similar effect was achieved through the introduction of vector instructions in Intel and ARM processors. At a coarser granularity, parallelism can be achieved by executing multiple instructions at the same time. Consider the following example. Here, the processor has no reason not to compute b and c in parallel, because neither b nor c depends on the other to be computed. This form of parallelism is thus called instruction-level parallelism. Note that the computation of d, however, can only be started after both b and c have been computed. So, in summary, instruction-level parallelism executes different instructions from the same instruction stream in parallel whenever this is possible. Finally, task-level parallelism deals with the parallel execution of entirely separate sequences of instructions. These instructions can execute on the same or on entirely different data. Bit-level and instruction-level parallelism are exploited by the underlying parallel hardware; in other words, they are in most cases implemented inside the processor itself. This course will mainly focus on task-level parallelism, which is usually achieved through software support. This level of granularity will allow us to express general parallel algorithms. The different forms of parallelism that we just saw manifest themselves on some concrete parallel hardware. It turns out that there are many classes of parallel computers out there, and we will take a look at some of them. A multi-core processor is a processor that contains multiple processing units, called cores, on the same chip. Processor vendors, such as Intel, AMD, ARM, Oracle, IBM, and Qualcomm, produce different models of multi-core processors. A symmetric multiprocessor, or SMP, is a computer system with multiple identical processors that share memory and connect to it by a bus. Here, multiple execution units are not on the same chip. An SMP itself can contain multi-core processors, as is the case in this illustration. A general-purpose graphics processing unit, or GPGPU, is a form of co-processor originally intended for graphics processing. As a co-processor, it does not execute all user programs by default, but can execute a program when this is explicitly requested by the host processor.
A field-programmable gate array, or FPGA, is another form of co-processor, one that can rewire itself for a given task. Its advantage is improved performance when optimized for a particular application. Usually, FPGAs are programmed in hardware description languages, such as Verilog and VHDL. The final class of parallel computers that we will mention is computer clusters: groups of computers connected via a network that do not have a common shared memory. Sometimes computers in the cluster also have co-processors, such as GPUs, and then we call them heterogeneous clusters. In this course, we focus mainly on multi-core processors and symmetric multiprocessors, which are closely related. All our examples and code will be optimized for multi-cores and SMPs. However, the algorithms that we will learn about generalize, and can be implemented on most forms of parallel hardware. In this lecture, we learned about some basics of parallel computing. The rest of this week will focus on simple parallel programming examples and on the performance analysis of parallel programs. The second week deals with task parallelism and some basic parallel algorithms. The third week focuses on data parallelism and data-parallel computations, with a preview of Scala parallel collections. Finally, the fourth week will focus on data structures that are tailored for parallel computations. In the next lecture, we're going to learn how parallel computations are implemented on the JVM. We will see the basic primitives used for parallelism and learn about some concrete foundations for parallel programming.
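Since bit-level and instruction-level parallelism are handled inside the processor, task-level parallelism is the form a programmer typically expresses directly in software. The course itself uses Scala on the JVM, but as a language-neutral illustration of the idea, here is a minimal sketch in Python (the function and inputs are made up for the example) that runs independent tasks simultaneously on the cores of a multi-core processor or SMP:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Count primes below `limit` by trial division (deliberately naive)."""
    count = 0
    for n in range(2, limit):
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Four independent tasks: no task depends on another's result,
    # so they can execute at the same time on separate CPU cores.
    limits = [50_000, 60_000, 70_000, 80_000]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, limits))
    print(dict(zip(limits, results)))
```

How much speedup this gives over running the four calls one after another depends on how evenly the work divides and on the overhead of distributing it, which is exactly the efficiency question that parallel programming, as defined in this lecture, is concerned with.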
In mathematics and computing, a root-finding algorithm is an algorithm for finding the roots of an equation f(x) = 0. A real number x is called a solution, or a root, if it satisfies the equation, meaning f(x) = 0. For a quadratic function f(x) = ax^2 + bx + c, the roots are exactly the x-intercepts of the function, that is, the points where the graph of the quadratic intersects the x-axis; the parabola opens upward when a > 0 and downward when a < 0. Finding the roots of nonlinear algebraic equations is a problem that arises frequently in engineering and the sciences. One approach is to approximate the roots of a one-variable function using the Monte Carlo method, which is based on generating random numbers in the target range where the root is expected to lie; examples with acceptable error can be used to demonstrate the efficiency of this method. For quadratic equations there is no need to approximate: the roots of the quadratic equation ax^2 + bx + c = 0, with a ≠ 0, are given by the quadratic formula x = (-b ± √(b^2 - 4ac)) / (2a). In this formula, the term b^2 - 4ac is called the discriminant. If b^2 - 4ac = 0, the equation has a single (repeated) root; if b^2 - 4ac > 0, the equation has two distinct real roots; and if b^2 - 4ac < 0, it has no real roots. A related approximation that is sometimes useful is the binomial expansion (1 + x)^(-n) ≈ 1 - nx, where n is a rational number; this approximation is applicable only when |x| << 1.
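To make the Monte Carlo idea above concrete, here is a minimal sketch (a generic illustration, not the specific method from the article referred to) that samples random points in a bracketing interval and keeps the point where |f(x)| is smallest; the example function and interval are chosen only for demonstration.

```python
import random

def monte_carlo_root(f, lo, hi, samples=100_000, seed=0):
    """Approximate a root of f in [lo, hi] by uniform random sampling.

    Keeps the sampled point where |f(x)| is smallest. Much cruder than
    bisection or Newton's method, but it shows the random-sampling idea.
    """
    rng = random.Random(seed)
    best_x, best_val = lo, abs(f(lo))
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        val = abs(f(x))
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# Example: f(x) = x^2 - 2 has a root at sqrt(2) ≈ 1.41421 in [1, 2].
print(monte_carlo_root(lambda x: x * x - 2, 1.0, 2.0))
```

For a quadratic like this one, the closed-form quadratic formula above is exact and far cheaper; random sampling is only attractive when no closed form or good bracketing method is available.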
Is Earth special? A large fraction of the scientific community doesn’t think so. In fact, most have adopted the Copernican principle, believing that Earth’s capacity to support life is commonplace. However, a number of factors indicate that Earth may be rare (or possibly unique) in its capacity to support life—even among the 100 sextillion terrestrial planets in the observable universe, according to a recent paper. In two decades, the exoplanet catalog has grown to over 2,000 known exoplanets. Using data from those planets and host stars, astronomers have developed models to determine information about planets not yet discovered. Based on those models, astronomers have estimated that the observable universe contains around 10^20 terrestrial planets! [1] For comparison, somewhere between 10^22 and 10^24 stars exist in the observable universe, so, taking the lower stellar estimate, roughly one in a hundred stars has a rocky planet. These models also allow astronomers to compare terrestrial exoplanets to Earth. Amidst the comparisons, Earth stands out in at least three ways. 1. Age: Earth is younger. While most terrestrial exoplanets are between 8 and 8.4 billion years old, Earth is much younger—only 4.5 billion years old. Why is such a young planet habitable? This is probably because older planets (that formed earlier in the history of the universe) are subject to dynamical and radiation effects that diminish the possibility of hosting life. 2. Galaxy type: Most planets reside in the wrong galaxies. The number of planets per star remains largely constant with galaxy size, so most terrestrial planets reside in galaxies about twice the size of the Milky Way. However, the vast majority of galaxies this large are not spiral but elliptical. Consequently, most of the terrestrial planets in the universe reside in ellipticals, but research suggests that truly habitable planets must orbit stars in a spiral galaxy—such as the Milky Way. 3. “Dangerous” neighbors: Earth has none. Most planets that orbit otherwise life-friendly stars might have any hypothetical life exterminated due to radiation from nearby supernovae, gamma-ray bursts, active galactic nuclei, or dark matter annihilation regions. Dynamical encounters with interstellar gas clouds or dark matter clumps could also disrupt the stability of potentially habitable planets. One theological point warrants discussion. The Bible gives much information about God’s activity to bring about human life here on Earth, but it says nothing about whether He performed similar work somewhere else in the universe. Except for angelic beings (they have no physical body), the Bible leaves open the question of whether life exists elsewhere in the universe. However, it emphatically states that all things exist because of His divine action (see John 1:1–3). It seems likely scientific discoveries will continue to provide a growing body of evidence that Earth’s habitability is the exception instead of the rule. Astronomers have much work to do before they have the capacity to determine whether life exists beyond Earth, but the search is interesting from both a theological and scientific perspective. Food for Thought Would finding life on a planet outside our solar system diminish the case for God? Visit TNRTB on WordPress to comment with your response. [1] Erik Zackrisson et al., “Terrestrial Planets across Space and Time,” Astrophysical Journal, preprint, submitted February 1, 2016, https://arxiv.org/abs/1602.00690.
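For reference, the ratio quoted above works out as follows, using the figures as given in the post (the one-in-a-hundred figure corresponds to the lower stellar estimate):

```latex
\frac{N_{\text{terrestrial planets}}}{N_{\text{stars}}}
  \approx \frac{10^{20}}{10^{22}} = 10^{-2} \;\;(\text{about 1 star in } 100),
\qquad
\frac{10^{20}}{10^{24}} = 10^{-4} \;\;(\text{about 1 star in } 10{,}000).
```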
Know-How COVID-19 Spreads There is currently no vaccine to prevent coronavirus disease 2019 (COVID-19). The best way to prevent illness is to avoid being exposed to this virus. The virus is thought to spread mainly from person-to-person. Between people who are in close contact with one another (within about 6 feet). Through respiratory droplets produced when an infected person coughs or sneezes. These droplets can land in the mouths or noses of people who are nearby or possibly be inhaled into the lungs. Take steps to protect yourself Clean your hands often Wash your hands often with soap and water for at least 20 seconds especially after you have been in a public place, or after blowing your nose, coughing, or sneezing. If soap and water are not readily available, use a hand sanitizer that contains at least 60% alcohol. Cover all surfaces of your hands and rub them together until they feel dry. Avoid touching your eyes, nose, and mouth with unwashed hands. Avoid close contact Avoid close contact with people who are sick Put distance between yourself and other people if COVID-19 is spreading in your community. This is especially important for people who are at higher risk of getting very sick. Take steps to protect others Stay home if you’re sick Stay home if you are sick, except to get medical care. Learn what to do if you are sick. Cover coughs and sneezes Cover your mouth and nose with a tissue when you cough or sneeze or use the inside of your elbow. Throw used tissues in the trash. Immediately wash your hands with soap and water for at least 20 seconds. If soap and water are not readily available, clean your hands with a hand sanitizer that contains at least 60% alcohol. Wear a facemask if you are sick If you are sick: You should wear a facemask when you are around other people (e.g., sharing a room or vehicle) and before you enter a healthcare provider’s office. If you are not able to wear a facemask (for example, because it causes trouble breathing), then you should do your best to cover your coughs and sneezes, and people who are caring for you should wear a facemask if they enter your room. Learn what to do if you are sick. If you are NOT sick: You do not need to wear a facemask unless you are caring for someone who is sick (and they are not able to wear a facemask). Facemasks may be in short supply and they should be saved for caregivers. Clean and disinfect Clean AND disinfect frequently touched surfaces daily. This includes tables, doorknobs, light switches, countertops, handles, desks, phones, keyboards, toilets, faucets, and sinks. If surfaces are dirty, clean them: Use detergent or soap and water prior to disinfection. Watch for symptoms Reported illnesses have ranged from mild symptoms to severe illness and death for confirmed coronavirus disease 2019 (COVID-19) cases. The following symptoms may appear 2-14 days after exposure. Shortness of breath *This is based on what has been seen previously as the incubation period of MERS-CoV viruses. If you develop emergency warning signs for COVID-19 get medical attention immediately. Emergency warning signs include*: Difficulty breathing or shortness of breath Persistent pain or pressure in the chest New confusion or inability to arouse Bluish lips or face This list is not all-inclusive. Please consult your medical provider for any other symptoms that are severe or concerning. Workplace Health and Safety For information on protecting workers from COVID-19, refer to the Cal/OSHA Guidance on Coronavirus. 
Businesses and employers can visit the Centers for Disease Control and Prevention website for help with planning and responding to COVID-19. Labor and Workforce Development Agency – Resources for employers and workers including workers’ compensation and paid sick leave. Labor Commissioner’s Office FAQs – Employee leave options, compensation, and salary. Department of Fair Employment and Housing – Job protection and employment discrimination. EDD & Unemployment Insurance Programs Sick or Quarantined If you’re unable to work due to having or being exposed to COVID-19 (certified by a medical professional), you can file a Disability Insurance (DI) claim. DI provides short-term benefit payments to eligible workers who have a full or partial loss of wages due to a non-work-related illness, injury, or pregnancy. Benefit amounts are approximately 60-70 percent of wages (depending on income) and range from $50-$1,300 a week. The Governor’s Executive Order waives the one-week unpaid waiting period, so you can collect DI benefits for the first week you are out of work. If you are eligible, the EDD processes and issues payments within a few weeks of receiving a claim. For guidance on the disease, visit the California Department of Public Health website. If you’re unable to work because you are caring for an ill or quarantined family member with COVID-19 (certified by a medical professional), you can file a Paid Family Leave (PFL) claim. PFL provides up to six weeks of benefit payments to eligible workers who have a full or partial loss of wages because they need time off work to care for a seriously ill family member or to bond with a new child. Benefit amounts are approximately 60-70 percent of wages (depending on income) and range from $50-$1,300 a week. If you are eligible, the EDD processes and issues payments within a few weeks of receiving a claim. If your child’s school is closed, and you have to miss work to be there for them, you may be eligible for Unemployment Insurance benefits. Eligibility considerations include if you have no other care options and if you are unable to continue working your normal hours remotely. File an Unemployment Insurance claim and our EDD representatives will decide if you are eligible. Reduced Work Hours If your employer has reduced your hours or shut down operations due to COVID-19, you can file an Unemployment Insurance (UI) claim. UI provides partial wage replacement benefit payments to workers who lose their job or have their hours reduced, through no fault of their own. Workers who are temporarily unemployed due to COVID-19 and expected to return to work with their employer within a few weeks are not required to actively seek work each week. However, they must remain able and available and ready to work during their unemployment for each week of benefits claimed and meet all other eligibility criteria. Eligible individuals can receive benefits that range from $40-$450 per week. The Governor’s Executive Order waives the one-week unpaid waiting period, so you can collect UI benefits for the first week you are out of work. If you are eligible, the EDD processes and issues payments within a few weeks of receiving a claim. 
To apply over the phone, call: 1-800-300-5616. The state has enacted several programs to assist California businesses in mitigating layoffs or reduced work hours caused by COVID-19: the Work Share Program, Paid Family Leave, and Guidance for Employers from the EDD, DIR, and LWDA. Potential Closure or Layoffs Employers planning a closure or major layoffs as a result of the coronavirus can get help through the Rapid Response program. Rapid Response teams will meet with you to discuss your needs, help avert potential layoffs, and provide immediate on-site services to assist workers facing job losses. For more information, refer to the Rapid Response Services for Businesses Fact Sheet (DE 87144RRB) (PDF) or contact your local America’s Job Center of California. Employers experiencing hardship as a result of COVID-19 may request up to a 60-day extension of time from the EDD to file their state payroll reports and/or deposit state payroll taxes without penalty or interest. A written request for an extension must be received within 60 days from the original delinquent date of the payment or return. For questions, employers may call the EDD Taxpayer Assistance Center. Toll-free from the U.S. or Canada: 1-888-745-3886 Hearing-impaired (TTY): 1-800-547-9565 Outside the U.S. or Canada: 1-916-464-3502 US Small Business Administration The SBA has support and resources available to assist businesses facing capital shortfalls, supply chain issues, and other hardships as a result of COVID-19. Guidance for businesses or employers navigating temporary or long-term hardship Local Business Resources: (CalWORKs, CalFresh, Medi-Cal) Both Yuba and Sutter County Health and Human Services are closed until further notice. Yuba County Health and Human Services 530-749-6311 Sutter County Health and Human Services 877-652-0735 Coordinated Entry Programs provide showers, laundry, case management, rapid rehousing and shelter. Yuba County: Ric Teagarden Life Building Center Mon-Fri 9am - 3:30pm 131 F St, Marysville, CA 530-749-6811 Homeless Hotline Yuba 530-749-6811 Sutter County: Hands of Hope Mon-Thurs 11am - 5pm 909 Spiva Ave. Yuba City, CA 530-755-3491 Homeless Hotline Sutter 530-822-5999 Covered California – Resources to get information and apply for low- or no-cost health insurance. Schools are closed until further notice. Marysville Joint Unified School District MJUSD Meal Service for children 18 and under, beginning Tuesday, March 17, 2020. Nutrition Services will provide an opportunity for families to pick up lunch and breakfast for the following day in a mobile walk-up or drive-thru meal service for children 18 and under. Children may receive meals at any one of the locations below regardless of enrollment. Meals must be consumed off-site. Questions? Contact Amber Watson 530-749-6178 USDA is an equal opportunity provider.
Non–School Locations North Beale Rd Area Arboga Community Center 11:40 AM - 11:50 AM Moon Ave @ Jewett Ave 12:00 PM - 12:10 PM Country Club Ct @ Woodland Dr 12:15 PM - 12:30 PM Lowe Ave @ N Beale Rd 12:35 PM - 12:50 PM Non-School Locations Feather River Blvd Area Rio Inn, Marysville 11:45 AM - 11:55 AM Cloverleaf Market 12:00 PM - 12:15 PM Church on Feather River Blvd 12:20 PM - 12:30 PM 1140 Grand Ave 12:35 PM - 12:50 PM Non-School Locations Hallwood/District 10/ Marysville Area Hallwood Blvd @ Hooper Rd 11:50 AM - 12:00 PM Laurellen Rd @ Doc Adams Rd 12:10 PM - 12:20 PM 1720 Ellis Lake Drive 12:25 PM - 12:35 PM The Veterans Park 12:40 PM - 12:50 PM Non-School Locations Foothill Area Brownsville Fire Station 11:30 AM - 11:40 AM Willow Glen 11:50 AM - 12:00 PM Dobbins Oregon House Fire Station 12:15 PM - 12:25 PM Serving Time 12:00 PM - 12:45 PM Ella Elementary, Foothill Intermediate, Kynoch Elementary, Johnson Park Elementary and Linda Elementary Yuba CIty United School District During our school closures, YCUSD Nutrition Services will continue to serve breakfast and lunch to all children ages 18 and under. Both meals will be distributed together in front of the school sites in a mobile walk-up or drive-thru meal service. Children must be present to receive meals. Meal service will begin Thursday, March 19th. Site and Serving Times Andros Karperos 8:30AM-10:00AM April Lane Elementary 8:30AM-10:00AM Bridge Street Elementary 8:30AM-10:00AM Riverbend Elementary 8:30AM-10:00AM River Valley High School 8:30AM-10:00AM Yuba City High School 8:30AM-10:00AM Bernard Children’s Center 8:30AM-10:00AM This institution is an equal opportunity provider and employer. More information: http://www.ycusd.org/
What better way to revise the area under a curve for A Level Maths than with Beyond Revision? You. Us. Integration 🤜🎇🤛 Great, then let’s get going! - Example Question 1 - Example Question 2 - Example Question 3 - Example Question 4 - Practice Questions Area Under a Curve: Integration There are two ways of thinking about integration. Firstly, as the inverse of differentiation (integration in this context is covered in this revision blog). Secondly, as a way of summing the area’s increasingly small slices under a function in order to find the total area (you can compare this to how differentiation from first principles allows you to find the gradient of a function). This means that you can use definite integration to find the area under a curve, between two limits (if you’re not sure about definite integration, check out our revision blog on that topic before moving onto this one). - To see the content of this blog in PDF or PowerPoint formats, click here. - To practice some of the prior knowledge required, try these multiple-choice questions. - If you think you understand Area Under a Curve, try these matching cards. - If you think you’ve fully mastered integration, try these exam-style questions. Definite integration allows you to find the area bounded by a curve and the 𝑥-axis. If the area is above the 𝑥-axis the area will be positive; if the area is below the 𝑥-axis the area will be negative. In the image above, the shaded area can be found by integrating 𝑦 = f(𝑥) with respect to 𝑥 with an upper limit of 𝑏 and a lower limit of 𝑎: Example Question 1 Find the area of the region that is bounded between the curve with equation 𝑦 = 24 – 2𝑥2 – 2𝑥 and the 𝑥-axis. Begin by sketching the curve with equation 𝑦 = 24 – 2𝑥2 – 2𝑥 so you can visualise the area you are trying to find. The curve meets the 𝑥-axis when 𝑦 = 0: This means that, between 𝑥 = -4 and 𝑥 = 3, the region is bounded by the curve and the 𝑥-axis. You can now find the area under the curve: You have not been given any units, so you do not add any units to your answer. This answer can be quickly checked or worked out on your calculator using the integration button: You will still need to make sure that you show full working in exams for method marks. Example Question 2 Find the total area of the region bounded by the curve with equation 𝑦 = 𝑥3 – 16𝑥2 + 63𝑥, the 𝑥-axis, and the lines with equations 𝑥 = 2 and 𝑥 = 9. Again, start by sketching the function. The curve meets the 𝑥-axis when 𝑦 = 0: You can see that the region is bounded by the curve between the lines with equations 𝑥 = 2 and 𝑥 = 9, but that the area between the lines with equations 𝑥 = 7 and 𝑥 = 9 will be negative. You want to find the total area so we will need to add the magnitudes of the two areas. (area A and area B on the diagram). The second integration gives a negative answer, but as you are considering area take the absolute (positive) value. The total area bounded by the curve will be: Example Question 3 Find the area of the finite region bounded between the curve with equation f(𝑥) = 𝑥3 – 20𝑥2 + 100𝑥 – 225 and the straight-line with equation g(𝑥) = -12𝑥 – 33. Again, start by sketching the curve and the line. You are interested in where the functions intercept and whether they cross over the 𝑥-axis in that range. Start by finding the intercepts: The region you need to calculate the area for is between 𝑥 = 4 and 𝑥 = 12. Values where f(𝑥) = 0: 𝑥 = 14.0 (3s.f.). This is not within our region. Values where g(𝑥) = 0 𝑥 = -2.75. This is not within our region. 
You can find the area between the two curves by splitting the area into two separate parts: This shows . This shows the area above g(𝑥). You can calculate this using the formula for the area of a trapezium. To find the area enclosed between the two curves, subtract the area of the trapezium away from the area between the curve with equation 𝑦 = f(𝑥) and the 𝑥-axis. Remember, because this region is below the 𝑥-axis it will have a negative area; use the absolute value of its area to calculate the area of the region between the graphs with equations 𝑦 = f(𝑥) and 𝑦 = g(𝑥). As you have seen in these examples, it is very important to sketch the functions in order to identify where the functions change sign or intersect with other functions. Depending on the question, you may also need to use areas of associated rectangles, triangles, and trapeziums, as well as integrating to find the area required. Example Question 4 Find the area between the curve with equation 𝑦 = 2𝑥2 + 3, the 𝑦-axis and the lines with equations 𝑦 = 4 and 𝑦 = 7. As usual, start by sketching the curve with equation 𝑦 = 2𝑥2 + 3 to highlight the area the question is asking you to find. You will need to find the coordinates where the curve meets the lines with equations 𝑦 = 4 and 𝑦 = 7. As you are looking at the region bounded by the curve, the 𝑦-axis and the lines with equations 𝑦 = 4 and 𝑦 = 7, we need the positive values of these 𝑥-coordinates: To find the area of the shaded region, treat it like a compound shape. Here is one way to break down the area usefully: You want to find Area A. Area A + Area B is a rectangle, 3 by . Area B + Area C is the area under the curve with equation y = 2x2 + 3 between the lines 𝑥 = and 𝑥 =. Area C is a rectangle, 4 by (). Use these facts to find the area of each region: Problems like this can be more difficult if the region of interest lies on both sides of the 𝑥-axis. In situations like this, it can simplify the problem to shift both functions up or down by the same amount – this will not affect the size of the area of interest. For each question, sketch the curve then find the area specified. 1) The area under the curve with equation between the lines with equations 𝑥 = 2 and 𝑥 = 5. 2) The area bounded by the curve with equation 𝑦 = (3 – 2𝑥)(𝑥 + 2) and the 𝑥-axis. 3) The total area enclosed between the curve with the equation 𝑦 = 6𝑥3 – 11𝑥2 – 35𝑥 and the 𝑥-axis. 4) The total area enclosed between the curve with equation f(𝑥) = 2𝑥3 – 33𝑥2 + 133𝑥, the 𝑥-axis and the lines with equations 𝑥 = 2 and 𝑥 = 12. 5) The area between the curves with equations f(𝑥) = 𝑥2 + 4𝑥 + 5 and g(𝑥) = + 7. 6) The area of the region bounded by the curve with equation 𝑦 = 𝑥2 + 𝑥 + 1 and the lines with equations 𝑦 = 3 and 𝑦 = 7. This post on the area under a curve for A Level Maths students is not where the fun ends by a long stretch of the imagination! You can read even more of our blogs here! You can also subscribe to Beyond for access to thousands of secondary teaching resources. You can sign up for a free account here and take a look around at our free KS5 Maths resources before you subscribe too.
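Returning to Example Question 1 above: the curve with equation 𝑦 = 24 – 2𝑥² – 2𝑥 meets the 𝑥-axis at 𝑥 = -4 and 𝑥 = 3, so the area of the bounded region is the definite integral between those limits. As a worked check (a sketch of the arithmetic you can compare against the calculator method mentioned in the example):

```latex
\int_{-4}^{3} \left(24 - 2x - 2x^{2}\right)\,\mathrm{d}x
  = \left[\, 24x - x^{2} - \tfrac{2}{3}x^{3} \,\right]_{-4}^{3}
  = \left(72 - 9 - 18\right) - \left(-96 - 16 + \tfrac{128}{3}\right)
  = 45 + \tfrac{208}{3}
  = \tfrac{343}{3} \approx 114.3
```

As noted in the example, no units were given in the question, so none are attached to the answer.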
HOUSTON – (Jan. 5, 2021) – Before the solar system had planets, the sun had rings — bands of dust and gas similar to Saturn’s rings — that likely played a role in Earth’s formation, according to a new study. “In the solar system, something happened to prevent the Earth from growing to become a much larger type of terrestrial planet called a super-Earth,” said Rice University astrophysicist André Izidoro, referring to the massive rocky planets seen around at least 30% of sun-like stars in our galaxy. Izidoro and colleagues used a supercomputer to simulate the solar system’s formation hundreds of times. Their model, which is described in a study published online in Nature Astronomy, produced rings like those seen around many distant, young stars. It also faithfully reproduced several features of the solar system missed by many previous models, including:
● An asteroid belt between Mars and Jupiter containing objects from both the inner and outer solar system.
● The locations and stable, almost circular orbits of Earth, Mars, Venus and Mercury.
● The masses of the inner planets, including Mars, which many solar system models overestimate.
● The dichotomy between the chemical makeup of objects in the inner and outer solar system.
● A Kuiper belt region of comets, asteroids and small bodies beyond the orbit of Neptune.
The study by astronomers, astrophysicists and planetary scientists from Rice, the University of Bordeaux, Southwest Research Institute in Boulder, Colorado, and the Max Planck Institute for Astronomy in Heidelberg, Germany, draws on the latest astronomical research on infant star systems. Their model assumes three bands of high pressure arose within the young sun’s disk of gas and dust. Such “pressure bumps” have been observed in ringed stellar disks around distant stars, and the study explains how pressure bumps and rings could account for the solar system’s architecture, said lead author Izidoro, a Rice postdoctoral researcher who received his Ph.D. training at Sao Paulo State University in Brazil. “If super-Earths are super-common, why don't we have one in the solar system?” Izidoro said. “We propose that pressure bumps produced disconnected reservoirs of disk material in the inner and outer solar system and regulated how much material was available to grow planets in the inner solar system.” For decades, scientists believed gas and dust in protoplanetary disks gradually became less dense, dropping smoothly as a function of distance from the star. But computer simulations show planets are unlikely to form in smooth-disk scenarios. “In a smooth disk, all solid particles — dust grains or boulders — should be drawn inward very quickly and lost in the star,” said astronomer and study co-author Andrea Isella, an associate professor of physics and astronomy at Rice. “One needs something to stop them in order to give them time to grow into planets.” When particles move faster than the gas around them, they “feel a headwind and drift very quickly toward the star,” Izidoro explained. At pressure bumps, gas pressure increases, gas molecules move faster and solid particles stop feeling the headwind. “That's what allows dust particles to accumulate at pressure bumps,” he said. Isella said astronomers have observed pressure bumps and protoplanetary disk rings with the Atacama Large Millimeter/submillimeter Array, or ALMA, an enormous 66-dish radio telescope that came online in Chile in 2013.
“ALMA is capable of taking very sharp images of young planetary systems that are still forming, and we have discovered that a lot of the protoplanetary disks in these systems are characterized by rings,” Isella said. “The effect of the pressure bump is that it collects dust particles, and that's why we see rings. These rings are regions where you have more dust particles than in the gaps between rings.” The model by Izidoro and colleagues assumed pressure bumps formed in the early solar system at three places where sunward-falling particles would have released large amounts of vaporized gas. “It's just a function of distance from the star, because temperature is going up as you get closer to the star,” said geochemist and study co-author Rajdeep Dasgupta, the Maurice Ewing Professor of Earth Systems Science at Rice. “The point where the temperature is high enough for ice to be vaporized, for example, is a sublimation line we call the snow line.” In the Rice simulations, pressure bumps at the sublimation lines of silicate, water and carbon monoxide produced three distinct rings. At the silicate line, silicon dioxide, the basic ingredient of sand and glass, became vapor. This produced the sun’s nearest ring, where Mercury, Venus, Earth and Mars would later form. The middle ring appeared at the snow line and the farthest ring at the carbon monoxide line.
Rings birth planetesimals and planets
Protoplanetary disks cool with age, so sublimation lines would have migrated toward the sun. The study showed this process could allow dust to accumulate into asteroid-sized objects called planetesimals, which could then come together to form planets. Izidoro said previous studies assumed planetesimals could form if dust were sufficiently concentrated, but no model offered a convincing theoretical explanation of how dust might accumulate. “Our model shows pressure bumps can concentrate dust, and moving pressure bumps can act as planetesimal factories,” Izidoro said. “We simulate planet formation starting with grains of dust and covering many different stages, from small millimeter-sized grains to planetesimals and then planets.”
Accounting for cosmochemical signatures, Mars’ mass and the asteroid belt
Many previous solar system simulations produced versions of Mars as much as 10 times more massive than the real planet. The model correctly predicts Mars having about 10% of Earth’s mass because “Mars was born in a low-mass region of the disk,” Izidoro said. Dasgupta said the model also provides a compelling explanation for two of the solar system’s cosmochemical mysteries: the marked difference between the chemical compositions of inner- and outer-solar system objects, and the presence of each of those objects in the asteroid belt between Mars and Jupiter. Izidoro’s simulations showed the middle ring could account for the chemical dichotomy by preventing outer-system material from entering the inner system. The simulations also produced the asteroid belt in its correct location, and showed it was fed objects from both the inner and outer regions. “The most common type of meteorites we get from the asteroid belt are isotopically similar to Mars,” Dasgupta said. “Andre explains why Mars and these ordinary meteorites should have a similar composition. He’s provided a nuanced answer to this question.”
Pressure-bump timing and super-Earths
Izidoro said the delayed appearance of the sun’s middle ring in some simulations led to the formation of super-Earths, which points to the importance of pressure-bump timing.
“By the time the pressure bump formed in those cases, a lot of mass had already invaded the inner system and was available to make super-Earths,” he said. “So the time when this middle pressure bump formed might be a key aspect of the solar system.” Izidoro is a postdoctoral research associate in Rice’s Department of Earth, Environment and Planetary Sciences. Additional co-authors include Sean Raymond of the University of Bordeaux, Rogerio Deienno of Southwest Research Institute and Bertram Bitsch of the Max Planck Institute for Astronomy. The research was supported by NASA (80NSSC18K0828, 80NSSC21K0387), the European Research Council (757448-PAMDORA), the Brazilian Federal Agency for Support and Evaluation of Graduate Education (88887.310463/2018-00), the Welch Foundation (C-2035) and the French National Centre for Scientific Research’s National Planetology Program. High-resolution IMAGES are available for download at: CAPTION: The addition of false color to an image captured by the Atacama Large Millimeter/submillimeter Array, or ALMA, reveals a series of rings around a young star named HD163296. (Image courtesy of Andrea Isella/Rice University) CAPTION: An illustration of three distinct, planetesimal-forming rings that could have produced the planets and other features of the solar system, according to a computational model from Rice University. The vaporization of solid silicates, water and carbon monoxide at “sublimation lines” (top) caused “pressure bumps” in the sun’s protoplanetary disk, trapping dust in three distinct rings. As the sun cooled, pressure bumps migrated sunward allowing trapped dust to accumulate into asteroid-sized planetesimals. The chemical composition of objects from the inner ring (NC) differs from the composition of middle- and outer-ring objects (CC). Inner-ring planetesimals produced the inner solar system’s planets (bottom), and planetesimals from the middle and outer rings produced the outer solar system planets and Kuiper Belt (not shown). The asteroid belt formed (top middle) from NC objects contributed by the inner ring (red arrows) and CC objects from the middle ring (white arrows). (Image courtesy of Rajdeep Dasgupta) CAPTION: Rajdeep Dasgupta (left) and André Izidoro. (Photo by Jeff Fitlow/Rice University) CAPTION: Andrea Isella (Photo by Jeff Fitlow/Rice University) This release can be found online at news.rice.edu. Follow Rice News and Media Relations via Twitter @RiceUNews. Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 4,052 undergraduates and 3,484 graduate students, Rice’s undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 1 for quality of life by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger’s Personal Finance.
Their results suggest that such magnetic fields play a key role in channeling matter to form denser clouds, and thus in setting the stage for the birth of new stars. The work will be published in the November 24 edition of the journal Nature (online version: November 16).
Image of the Triangulum Galaxy M33, which presents astronomers with a bird’s eye view of its disk. The pink blobs are regions containing newly formed stars. Credit & Copyright: Thomas V. Davis (http://tvdavisastropics.com)
Stars and their planets are born when giant clouds of interstellar gas and dust collapse. You've probably seen the resulting stellar nurseries in beautiful astronomical images: Colorful nebulae, lit by the bright young stars they have brought forth. Astronomers know quite a bit about these so-called molecular clouds: They consist mainly of hydrogen molecules – unusual in a cosmos where conditions are rarely right for hydrogen atoms to bond together into molecules. And if one traces the distribution of clouds in a spiral galaxy like our own Milky Way galaxy, one finds that they are lined up along the spiral arms. But how do those clouds come into being? What makes matter congregate in regions a hundred or even a thousand times more dense than the surrounding interstellar gas? One candidate mechanism involves the galaxy's magnetic fields. Everyone who has seen a magnet act on iron filings in the classic classroom experiment knows that magnetic fields can be used to impose order. Some researchers have argued that something similar goes on in the case of molecular clouds: that galaxies' magnetic fields guide and direct the condensation of interstellar matter to form denser clouds and facilitate their further collapse. Some astronomers see this as the key mechanism enabling star formation. Others contend that the cloud matter's gravitational attraction and turbulent motion of gas within the cloud are so strong as to cancel any influence of an outside magnetic field. If we were to restrict attention to our own galaxy, it would be difficult to find out who is right. We would need to see our galaxy's disk from above to make the appropriate measurements; in reality, our Solar System sits within the galactic disk. That is why Hua-bai Li and Thomas Henning from the Max Planck Institute for Astronomy chose a different target: the Triangulum galaxy, 3 million light-years from Earth and also known as M 33, which is oriented in just the right way (cf. image). Using a telescope known as the Submillimeter Array (SMA), which is located on Mauna Kea in Hawai'i, Li and Henning measured specific properties of radiation received from different regions of the galaxy which are correlated with the orientation of these regions' magnetic fields. They found that the magnetic fields associated with the galaxy's six most massive giant molecular clouds were orderly, and well aligned with the galaxy's spiral arms. If turbulence played a more important role in these clouds than the ordering influence of the galaxy's magnetic field, the magnetic field associated with the cloud would be random and disordered. Thus, Li and Henning's observations are a strong indication that magnetic fields indeed play an important role when it comes to the formation of dense molecular clouds – and to setting the stage for the birth of stars and planetary systems like our own.
Contact information: Hua-bai Li (first author). The research is supported by the Max Planck Institute for Astronomy and the Harvard-Smithsonian Center for Astrophysics. The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica. Dr. Markus Pössel | Max-Planck-Institut
A tornado is a violently rotating column of air that is in contact with both the surface of the earth and a cumulonimbus cloud or, in rare cases, the base of a cumulus cloud. They are often referred to as twisters or cyclones, although the word cyclone is used in meteorology, in a wider sense, to name any closed low pressure circulation. Tornadoes come in many shapes and sizes, but they are typically in the form of a visible condensation funnel, whose narrow end touches the earth and is often encircled by a cloud of debris and dust. Most tornadoes have wind speeds less than 110 miles per hour (177 km/h), are about 250 feet (76 m) across, and travel a few miles (several kilometers) before dissipating. The most extreme tornadoes can attain wind speeds of more than 300 miles per hour (483 km/h), stretch more than two miles (3.2 km) across, and stay on the ground for dozens of miles (more than 100 km). Various types of tornadoes include the landspout, multiple vortex tornado, and waterspout. Waterspouts are characterized by a spiraling funnel-shaped wind current, connecting to a large cumulus or cumulonimbus cloud. They are generally classified as non-supercellular tornadoes that develop over bodies of water, but there is disagreement over whether to classify them as true tornadoes. These spiraling columns of air frequently develop in tropical areas close to the equator, and are less common at high latitudes. Other tornado-like phenomena that exist in nature include the gustnado, dust devil, fire whirls, and steam devil; downbursts are frequently confused with tornadoes, though their action is dissimilar. Tornadoes have been observed on every continent except Antarctica. However, the vast majority of tornadoes occur in the Tornado Alley region of the United States, although they can occur nearly anywhere in North America. They also occasionally occur in south-central and eastern Asia, northern and east-central South America, Southern Africa, northwestern and southeast Europe, western and southeastern Australia, and New Zealand. Tornadoes can be detected before or as they occur through the use of Pulse-Doppler radar by recognizing patterns in velocity and reflectivity data, such as hook echoes or debris balls, as well as through the efforts of storm spotters. There are several scales for rating the strength of tornadoes. The Fujita scale rates tornadoes by damage caused and has been replaced in some countries by the updated Enhanced Fujita Scale. An F0 or EF0 tornado, the weakest category, damages trees, but not substantial structures. An F5 or EF5 tornado, the strongest category, rips buildings off their foundations and can deform large skyscrapers. The similar TORRO scale ranges from a T0 for extremely weak tornadoes to T11 for the most powerful known tornadoes. Doppler radar data, photogrammetry, and ground swirl patterns (cycloidal marks) may also be analyzed to determine intensity and assign a rating. The word tornado is an altered form of the Spanish word tronada, which means "thunderstorm". This in turn was taken from the Latin tonare, meaning "to thunder".
It most likely reached its present form through a combination of the Spanish tronada and tornar ("to turn"); however, this may be a folk etymology. A tornado is also commonly referred to as a "twister", and is also sometimes referred to by the old-fashioned colloquial term cyclone. The term "cyclone" is used as a synonym for "tornado" in the often-aired 1939 film The Wizard of Oz. The term "twister" is also used in that film, along with being the title of the 1996 tornado-related film Twister. A tornado is "a violently rotating column of air, in contact with the ground, either pendant from a cumuliform cloud or underneath a cumuliform cloud, and often (but not always) visible as a funnel cloud". For a vortex to be classified as a tornado, it must be in contact with both the ground and the cloud base. Scientists have not yet created a complete definition of the word; for example, there is disagreement as to whether separate touchdowns of the same funnel constitute separate tornadoes. Tornado refers to the vortex of wind, not the condensation cloud. A tornado is not necessarily visible; however, the intense low pressure caused by the high wind speeds (as described by Bernoulli's principle) and rapid rotation (due to cyclostrophic balance) usually causes water vapor in the air to condense into cloud droplets due to adiabatic cooling. This results in the formation of a visible funnel cloud or condensation funnel. There is some disagreement over the definition of funnel cloud and condensation funnel. According to the Glossary of Meteorology, a funnel cloud is any rotating cloud pendant from a cumulus or cumulonimbus, and thus most tornadoes are included under this definition. Among many meteorologists, the funnel cloud term is strictly defined as a rotating cloud which is not associated with strong winds at the surface, and condensation funnel is a broad term for any rotating cloud below a cumuliform cloud. Tornadoes often begin as funnel clouds with no associated strong winds at the surface, and not all funnel clouds evolve into tornadoes. Most tornadoes produce strong winds at the surface while the visible funnel is still above the ground, so it is difficult to discern the difference between a funnel cloud and a tornado from a distance. Outbreaks and families Occasionally, a single storm will produce more than one tornado, either simultaneously or in succession. Multiple tornadoes produced by the same storm cell are referred to as a "tornado family". Several tornadoes are sometimes spawned from the same large-scale storm system. If there is no break in activity, this is considered a tornado outbreak (although the term "tornado outbreak" has various definitions). A period of several successive days with tornado outbreaks in the same general area (spawned by multiple weather systems) is a tornado outbreak sequence, occasionally called an extended tornado outbreak. Size and shape Most tornadoes take on the appearance of a narrow funnel, a few hundred yards (meters) across, with a small cloud of debris near the ground. Tornadoes may be obscured completely by rain or dust. These tornadoes are especially dangerous, as even experienced meteorologists might not see them. Tornadoes can appear in many shapes and sizes. Small, relatively weak landspouts may be visible only as a small swirl of dust on the ground. Although the condensation funnel may not extend all the way to the ground, if associated surface winds are greater than 40 mph (64 km/h), the circulation is considered a tornado. 
A tornado with a nearly cylindrical profile and relatively low height is sometimes referred to as a "stovepipe" tornado. Large single-vortex tornadoes can look like large wedges stuck into the ground, and so are known as "wedge tornadoes" or "wedges". The "stovepipe" classification is also used for this type of tornado, if it otherwise fits that profile. A wedge can be so wide that it appears to be a block of dark clouds, wider than the distance from the cloud base to the ground. Even experienced storm observers may not be able to tell the difference between a low-hanging cloud and a wedge tornado from a distance. Many, but not all, major tornadoes are wedges. Tornadoes in the dissipating stage can resemble narrow tubes or ropes, and often curl or twist into complex shapes. These tornadoes are said to be "roping out", or becoming a "rope tornado". When they rope out, the length of their funnel increases, which forces the winds within the funnel to weaken due to conservation of angular momentum. Multiple-vortex tornadoes can appear as a family of swirls circling a common center, or they may be completely obscured by condensation, dust, and debris, appearing to be a single funnel. In the United States, tornadoes are around 500 feet (150 m) across on average and travel on the ground for 5 miles (8.0 km). However, there is a wide range of tornado sizes. Weak tornadoes, or strong yet dissipating tornadoes, can be exceedingly narrow, sometimes only a few feet or a couple of meters across. One tornado was reported to have a damage path only 7 feet (2 m) long. On the other end of the spectrum, wedge tornadoes can have a damage path a mile (1.6 km) wide or more. A tornado that affected Hallam, Nebraska on May 22, 2004, was up to 2.5 miles (4.0 km) wide at the ground. In terms of path length, the Tri-State Tornado, which affected parts of Missouri, Illinois, and Indiana on March 18, 1925, was on the ground continuously for 219 miles (352 km). Many tornadoes which appear to have path lengths of 100 miles (160 km) or longer are composed of a family of tornadoes which have formed in quick succession; however, there is no substantial evidence that this occurred in the case of the Tri-State Tornado. In fact, modern reanalysis of the path suggests that the tornado may have begun 15 miles (24 km) further west than previously thought. Tornadoes can have a wide range of colors, depending on the environment in which they form. Those that form in dry environments can be nearly invisible, marked only by swirling debris at the base of the funnel. Condensation funnels that pick up little or no debris can be gray to white. While traveling over a body of water (as a waterspout), tornadoes can turn very white or even blue. Slow-moving funnels, which ingest a considerable amount of debris and dirt, are usually darker, taking on the color of debris. Tornadoes in the Great Plains can turn red because of the reddish tint of the soil, and tornadoes in mountainous areas can travel over snow-covered ground, turning white. Lighting conditions are a major factor in the appearance of a tornado. A tornado which is "back-lit" (viewed with the sun behind it) appears very dark. The same tornado, viewed with the sun at the observer's back, may appear gray or brilliant white. Tornadoes which occur near the time of sunset can be many different colors, appearing in hues of yellow, orange, and pink.
Dust kicked up by the winds of the parent thunderstorm, heavy rain and hail, and the darkness of night are all factors which can reduce the visibility of tornadoes. Tornadoes occurring in these conditions are especially dangerous, since only weather radar observations, or possibly the sound of an approaching tornado, serve as any warning to those in the storm's path. Most significant tornadoes form under the storm's updraft base, which is rain-free, making them visible. Also, most tornadoes occur in the late afternoon, when the bright sun can penetrate even the thickest clouds. Night-time tornadoes are often illuminated by frequent lightning. There is mounting evidence, including Doppler On Wheels mobile radar images and eyewitness accounts, that most tornadoes have a clear, calm center with extremely low pressure, akin to the eye of tropical cyclones. Lightning is said to be the source of illumination for those who claim to have seen the interior of a tornado. Tornadoes normally rotate cyclonically (when viewed from above, this is counterclockwise in the northern hemisphere and clockwise in the southern). While large-scale storms always rotate cyclonically due to the Coriolis effect, thunderstorms and tornadoes are so small that the direct influence of the Coriolis effect is unimportant, as indicated by their large Rossby numbers. Supercells and tornadoes rotate cyclonically in numerical simulations even when the Coriolis effect is neglected. Low-level mesocyclones and tornadoes owe their rotation to complex processes within the supercell and ambient environment. Approximately 1 percent of tornadoes rotate in an anticyclonic direction in the northern hemisphere. Typically, systems as weak as landspouts and gustnadoes can rotate anticyclonically, and usually only those which form on the anticyclonic shear side of the descending rear flank downdraft (RFD) in a cyclonic supercell. On rare occasions, anticyclonic tornadoes form in association with the mesoanticyclone of an anticyclonic supercell, in the same manner as the typical cyclonic tornado, or as a companion tornado either as a satellite tornado or associated with anticyclonic eddies within a supercell.
Sound and seismology
Tornadoes emit widely on the acoustic spectrum and the sounds are caused by multiple mechanisms. Various sounds of tornadoes have been reported, mostly related to familiar sounds for the witness and generally some variation of a whooshing roar. Popularly reported sounds include a freight train, rushing rapids or waterfall, a nearby jet engine, or combinations of these. Many tornadoes are not audible from much distance; the nature and propagation distance of the audible sound depends on atmospheric conditions and topography. The winds of the tornado vortex and of constituent turbulent eddies, as well as airflow interaction with the surface and debris, contribute to the sounds. Funnel clouds also produce sounds. Funnel clouds and small tornadoes are reported as whistling, whining, humming, or the buzzing of innumerable bees or electricity, or more or less harmonic, whereas many tornadoes are reported as a continuous, deep rumbling, or an irregular sound of "noise". Since many tornadoes are audible only when very near, sound is not a reliable warning of a tornado. Tornadoes are also not the only source of such sounds in severe thunderstorms; any strong, damaging wind, a severe hail volley, or continuous thunder in a thunderstorm may produce a roaring sound.
Tornadoes also emit inaudible infrasound. Unlike audible signatures, these low-frequency tornadic signatures have been isolated, and because low-frequency sound propagates over long distances, efforts are ongoing to develop tornado prediction and detection devices with additional value in understanding tornado morphology, dynamics, and creation. Tornadoes also produce a detectable seismic signature, and research continues on isolating it and understanding the process.
Electromagnetic, lightning, and other effects
Tornadoes emit on the electromagnetic spectrum, with sferics and E-field effects detected. There are observed correlations between tornadoes and patterns of lightning. Tornadic storms do not contain more lightning than other storms and some tornadic cells never produce lightning. More often than not, overall cloud-to-ground (CG) lightning activity decreases as a tornado reaches the surface and returns to the baseline level when the tornado lifts. In many cases, intense tornadoes and thunderstorms exhibit an increased and anomalous dominance of positive polarity CG discharges. Electromagnetics and lightning have little or nothing to do directly with what drives tornadoes (tornadoes are basically a thermodynamic phenomenon), although there are likely connections with the storm and environment affecting both phenomena. Luminosity has been reported in the past and is probably due to misidentification of external light sources such as lightning, city lights, and power flashes from broken lines, as internal sources are now uncommonly reported and are not known to ever have been recorded. In addition to winds, tornadoes also exhibit changes in atmospheric variables such as temperature, moisture, and pressure. For example, on June 24, 2003, near Manchester, South Dakota, a probe measured a 100 mbar (hPa) (2.95 inHg) pressure decrease. The pressure dropped gradually as the vortex approached, then dropped extremely rapidly to 850 mbar (hPa) (25.10 inHg) in the core of the violent tornado before rising rapidly as the vortex moved away, resulting in a V-shape pressure trace. Temperature tends to decrease and moisture content to increase in the immediate vicinity of a tornado. Tornadoes often develop from a class of thunderstorms known as supercells. Supercells contain a mesocyclone, an area of organized rotation a few miles up in the atmosphere, usually 1–6 miles (2–10 km) across. Most intense tornadoes (EF3 to EF5 on the Enhanced Fujita Scale) develop from supercells. In addition to tornadoes, very heavy rain, frequent lightning, strong wind gusts, and hail are common in such storms. Most tornadoes from supercells follow a recognizable life cycle. That begins when increasing rainfall drags with it an area of quickly descending air known as the rear flank downdraft (RFD). This downdraft accelerates as it approaches the ground, and drags the supercell's rotating mesocyclone towards the ground with it. As the mesocyclone lowers below the cloud base, it begins to take in cool, moist air from the downdraft region of the storm. The convergence of warm air in the updraft and this cool, moist air causes a rotating wall cloud to form. The RFD also focuses the mesocyclone's base, causing it to siphon air from a smaller and smaller area on the ground. As the updraft intensifies, it creates an area of low pressure at the surface. This pulls the focused mesocyclone down, in the form of a visible condensation funnel. As the funnel descends, the RFD also reaches the ground, creating a gust front that can cause severe damage a good distance from the tornado.
Usually, the funnel cloud begins causing damage on the ground (becoming a tornado) within a few minutes of the RFD reaching the ground. Initially, the tornado has a good source of warm, moist inflow to power it, so it grows until it reaches the "mature stage". This can last anywhere from a few minutes to more than an hour, and during that time a tornado often causes the most damage, and in rare cases can be more than one mile (1.6 km) across. Meanwhile, the RFD, now an area of cool surface winds, begins to wrap around the tornado, cutting off the inflow of warm air which feeds the tornado. As the RFD completely wraps around and chokes off the tornado's air supply, the vortex begins to weaken, and become thin and rope-like. This is the "dissipating stage", often lasting no more than a few minutes, after which the tornado fizzles. During this stage the shape of the tornado becomes highly influenced by the winds of the parent storm, and can be blown into fantastic patterns. Even though the tornado is dissipating, it is still capable of causing damage. The storm is contracting into a rope-like tube and, like the ice skater who pulls her arms in to spin faster, winds can increase at this point. As the tornado enters the dissipating stage, its associated mesocyclone often weakens as well, as the rear flank downdraft cuts off the inflow powering it. Sometimes, in intense supercells, tornadoes can develop cyclically. As the first mesocyclone and associated tornado dissipate, the storm's inflow may be concentrated into a new area closer to the center of the storm. If a new mesocyclone develops, the cycle may start again, producing one or more new tornadoes. Occasionally, the old (occluded) mesocyclone and the new mesocyclone produce a tornado at the same time. Although this is a widely accepted theory for how most tornadoes form, live, and die, it does not explain the formation of smaller tornadoes, such as landspouts, long-lived tornadoes, or tornadoes with multiple vortices. These each have different mechanisms which influence their development—however, most tornadoes follow a pattern similar to this one. A multiple-vortex tornado is a type of tornado in which two or more columns of spinning air rotate around a common center. A multi-vortex structure can occur in almost any circulation, but is very often observed in intense tornadoes. These vortices often create small areas of heavier damage along the main tornado path. This is a distinct phenomenon from a satellite tornado, which is a smaller tornado which forms very near a large, strong tornado contained within the same mesocyclone. The satellite tornado may appear to "orbit" the larger tornado (hence the name), giving the appearance of one, large multi-vortex tornado. However, a satellite tornado is a distinct circulation, and is much smaller than the main funnel. A waterspout is defined by the National Weather Service as a tornado over water. However, researchers typically distinguish "fair weather" waterspouts from tornadic waterspouts. Fair weather waterspouts are less severe but far more common, and are similar to dust devils and landspouts. They form at the bases of cumulus congestus clouds over tropical and subtropical waters. They have relatively weak winds, smooth laminar walls, and typically travel very slowly. They occur most commonly in the Florida Keys and in the northern Adriatic Sea. In contrast, tornadic waterspouts are stronger tornadoes over water. 
They form over water similarly to mesocyclonic tornadoes, or are stronger tornadoes which cross over water. Since they form from severe thunderstorms and can be far more intense, faster, and longer-lived than fair weather waterspouts, they are more dangerous. In official tornado statistics, waterspouts are generally not counted unless they affect land, though some European weather agencies count waterspouts and tornadoes together. A landspout, or dust-tube tornado, is a tornado not associated with a mesocyclone. The name stems from their characterization as a "fair weather waterspout on land". Waterspouts and landspouts share many defining characteristics, including relative weakness, short lifespan, and a small, smooth condensation funnel which often does not reach the surface. Landspouts also create a distinctively laminar cloud of dust when they make contact with the ground, due to their differing mechanics from true mesoform tornadoes. Though usually weaker than classic tornadoes, they can produce strong winds which could cause serious damage. A gustnado, or gust front tornado, is a small, vertical swirl associated with a gust front or downburst. Because they are not connected with a cloud base, there is some debate as to whether or not gustnadoes are tornadoes. They are formed when fast moving cold, dry outflow air from a thunderstorm is blown through a mass of stationary, warm, moist air near the outflow boundary, resulting in a "rolling" effect (often exemplified through a roll cloud). If low level wind shear is strong enough, the rotation can be turned vertically or diagonally and make contact with the ground. The result is a gustnado. They usually cause small areas of heavier rotational wind damage among areas of straight-line wind damage. A dust devil resembles a tornado in that it is a vertical swirling column of air. However, they form under clear skies and are no stronger than the weakest tornadoes. They form when a strong convective updraft is formed near the ground on a hot day. If there is enough low level wind shear, the column of hot, rising air can develop a small cyclonic motion that can be seen near the ground. They are not considered tornadoes because they form during fair weather and are not associated with any clouds. However, they can, on occasion, result in major damage in arid areas. Fire whirls and steam devils Small-scale, tornado-like circulations can occur near any intense surface heat source. Those that occur near intense wildfires are called fire whirls. They are not considered tornadoes, except in the rare case where they connect to a pyrocumulus or other cumuliform cloud above. Fire whirls usually are not as strong as tornadoes associated with thunderstorms. They can, however, produce significant damage. A steam devil is a rotating updraft that involves steam or smoke. Steam devils are very rare. They most often form from smoke issuing from a power plant's smokestack. Hot springs and deserts may also be suitable locations for a steam devil to form. The phenomenon can occur over water, when cold arctic air passes over relatively warm water. Intensity and damage The Fujita scale and the Enhanced Fujita Scale rate tornadoes by damage caused. The Enhanced Fujita (EF) Scale was an update to the older Fujita scale, by expert elicitation, using engineered wind estimates and better damage descriptions. 
The EF Scale was designed so that a tornado rated on the Fujita scale would receive the same numerical rating, and was implemented in the United States starting in 2007. An EF0 tornado will probably damage trees but not substantial structures, whereas an EF5 tornado can rip buildings off their foundations leaving them bare and even deform large skyscrapers. The similar TORRO scale ranges from a T0 for extremely weak tornadoes to T11 for the most powerful known tornadoes. Doppler weather radar data, photogrammetry, and ground swirl patterns (cycloidal marks) may also be analyzed to determine intensity and assign a rating. Tornadoes vary in intensity regardless of shape, size, and location, though strong tornadoes are typically larger than weak tornadoes. The association with track length and duration also varies, although longer track tornadoes tend to be stronger. In the case of violent tornadoes, only a small portion of the path is of violent intensity, with most of the higher intensity coming from subvortices. In the United States, 80% of tornadoes are EF0 and EF1 (T0 through T3) tornadoes. The rate of occurrence drops off quickly with increasing strength—less than 1% are violent tornadoes (EF4, T8 or stronger). Outside Tornado Alley, and North America in general, violent tornadoes are extremely rare. This is apparently mostly due to the lesser number of tornadoes overall, as research shows that tornado intensity distributions are fairly similar worldwide. A few significant tornadoes occur annually in Europe, Asia, southern Africa, and southeastern South America. The United States has the most tornadoes of any country, nearly four times more than estimated in all of Europe, excluding waterspouts. This is mostly due to the unique geography of the continent. North America is a large continent that extends from the tropics north into arctic areas, and has no major east-west mountain range to block air flow between these two areas. In the middle latitudes, where most tornadoes of the world occur, the Rocky Mountains block moisture and buckle the atmospheric flow, forcing drier air at mid-levels of the troposphere due to downsloped winds, and causing the formation of a low pressure area downwind to the east of the mountains. Increased westerly flow off the Rockies forces the formation of a dry line when the flow aloft is strong, while the Gulf of Mexico fuels abundant low-level moisture in the southerly flow to its east. This unique topography allows for frequent collisions of warm and cold air, the conditions that breed strong, long-lived storms throughout the year. A large portion of these tornadoes form in an area of the central United States known as Tornado Alley. This area extends into Canada, particularly Ontario and the Prairie Provinces, although southeast Quebec, the interior of British Columbia, and western New Brunswick are also tornado-prone. Tornadoes also occur across northeastern Mexico. The United States averages about 1,200 tornadoes per year. The Netherlands has the highest average number of recorded tornadoes per area of any country (more than 20, or 0.0013 per sq mi (0.00048 per km²), annually), followed by the UK (around 33, or 0.00035 per sq mi (0.00013 per km²), per year), but most are small and cause minor damage. In absolute number of events, ignoring area, the UK experiences more tornadoes than any other European country, excluding waterspouts. Tornadoes kill an average of 179 people per year in Bangladesh, the most in the world.
This is due to high population density, poor quality of construction and lack of tornado safety knowledge, as well as other factors. Other areas of the world that have frequent tornadoes include South Africa, parts of Argentina, Paraguay, and southern Brazil, as well as portions of Europe, Australia and New Zealand, and far eastern Asia. Tornadoes are most common in spring and least common in winter, but tornadoes can occur any time of year that favorable conditions occur. Spring and fall experience peaks of activity as those are the seasons when stronger winds, wind shear, and atmospheric instability are present. Tornadoes are focused in the right front quadrant of landfalling tropical cyclones, which tend to occur in the late summer and autumn. Tornadoes can also be spawned as a result of eyewall mesovortices, which persist until landfall. Tornado occurrence is highly dependent on the time of day, because of solar heating. Worldwide, most tornadoes occur in the late afternoon, between 3 pm and 7 pm local time, with a peak near 5 pm. Destructive tornadoes can occur at any time of day. The Gainesville Tornado of 1936, one of the deadliest tornadoes in history, occurred at 8:30 am local time. Associations with climate and climate change Associations with various climate and environmental trends exist. For example, an increase in the sea surface temperature of a source region (e.g. Gulf of Mexico and Mediterranean Sea) increases atmospheric moisture content. Increased moisture can fuel an increase in severe weather and tornado activity, particularly in the cool season. Some evidence does suggest that the Southern Oscillation is weakly correlated with changes in tornado activity, which vary by season and region, as well as whether the ENSO phase is that of El Niño or La Niña. Climatic shifts may affect tornadoes via teleconnections in shifting the jet stream and the larger weather patterns. The climate-tornado link is confounded by the forces affecting larger patterns and by the local, nuanced nature of tornadoes. Although it is reasonable to suspect that global warming may affect trends in tornado activity, any such effect is not yet identifiable due to the complexity, local nature of the storms, and database quality issues. Any effect would vary by region. Rigorous attempts to warn of tornadoes began in the United States in the mid-20th century. Before the 1950s, the only method of detecting a tornado was by someone seeing it on the ground. Often, news of a tornado would reach a local weather office after the storm. However, with the advent of weather radar, areas near a local office could get advance warning of severe weather. The first public tornado warnings were issued in 1950 and the first tornado watches and convective outlooks in 1952. In 1953 it was confirmed that hook echoes are associated with tornadoes. By recognizing these radar signatures, meteorologists could detect thunderstorms probably producing tornadoes from dozens of miles away. Today, most developed countries have a network of weather radars, which remains the main method of detecting signatures probably associated with tornadoes. In the United States and a few other countries, Doppler weather radar stations are used. These devices measure the velocity and radial direction (towards or away from the radar) of the winds in a storm, and so can spot evidence of rotation in storms from more than a hundred miles (160 km) away. 
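To make the idea of "evidence of rotation" concrete, here is a toy sketch in Python with made-up numbers; it illustrates the concept only and is not an operational radar algorithm such as those used with NEXRAD. A tornado-scale vortex shows up in Doppler velocity data as a "couplet": strong outbound velocities immediately beside strong inbound velocities at adjacent azimuths, i.e. large gate-to-gate shear.

```python
import numpy as np

# Toy illustration only: radial velocities (m/s) at one range from the radar,
# sampled at successive azimuths. Positive = motion away from the radar,
# negative = motion toward it. All values here are invented for the example.
radial_velocity = np.array([3.0, 5.0, 9.0, 24.0, 31.0, -29.0, -21.0, -6.0, 2.0])
gate_spacing_m = 250.0  # assumed distance between neighbouring sample volumes

# Gate-to-gate (azimuthal) shear: velocity difference between adjacent gates.
shear = np.diff(radial_velocity) / gate_spacing_m  # units: s^-1

# The strongest adjacent outbound/inbound pair marks the candidate rotation signature.
i = int(np.argmax(np.abs(shear)))
print(f"Strongest couplet: {radial_velocity[i]:+.0f} m/s beside {radial_velocity[i+1]:+.0f} m/s")
print(f"Gate-to-gate shear there: {shear[i]:.3f} per second")
```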
When storms are distant from a radar, only areas high within the storm are observed and the important areas below are not sampled. Data resolution also decreases with distance from the radar. Some meteorological situations leading to tornadogenesis are not readily detectable by radar and on occasion tornado development may occur more quickly than radar can complete a scan and send the batch of data. Also, most populated areas on Earth are now visible from the Geostationary Operational Environmental Satellites (GOES), which aid in the nowcasting of tornadic storms. In the mid-1970s, the U.S. National Weather Service (NWS) increased its efforts to train storm spotters to spot key features of storms which indicate severe hail, damaging winds, and tornadoes, as well as damage itself and flash flooding. The program was called Skywarn, and the spotters were local sheriff's deputies, state troopers, firefighters, ambulance drivers, amateur radio operators, civil defense (now emergency management) spotters, storm chasers, and ordinary citizens. When severe weather is anticipated, local weather service offices request that these spotters look out for severe weather, and report any tornadoes immediately, so that the office can warn of the hazard. Usually spotters are trained by the NWS on behalf of their respective organizations, and report to them. The organizations activate public warning systems such as sirens and the Emergency Alert System (EAS), and forward the report to the NWS. There are more than 230,000 trained Skywarn weather spotters across the United States. In Canada, a similar network of volunteer weather watchers, called Canwarn, helps spot severe weather, with more than 1,000 volunteers. In Europe, several nations are organizing spotter networks under the auspices of Skywarn Europe and the Tornado and Storm Research Organisation (TORRO) has maintained a network of spotters in the United Kingdom since 1974. Storm spotters are needed because radar systems such as NEXRAD do not detect a tornado; merely signatures which hint at the presence of tornadoes. Radar may give a warning before there is any visual evidence of a tornado or imminent tornado, but ground truth from an observer can either verify the threat or determine that a tornado is not imminent. The spotter's ability to see what radar cannot is especially important as distance from the radar site increases, because the radar beam becomes progressively higher in altitude further away from the radar, chiefly due to curvature of Earth, and the beam also spreads out. Storm spotters are trained to discern whether a storm seen from a distance is a supercell. They typically look to its rear, the main region of updraft and inflow. Under the updraft is a rain-free base, and the next step of tornadogenesis is the formation of a rotating wall cloud. The vast majority of intense tornadoes occur with a wall cloud on the backside of a supercell. Evidence of a supercell comes from the storm's shape and structure, and cloud tower features such as a hard and vigorous updraft tower, a persistent, large overshooting top, a hard anvil (especially when backsheared against strong upper level winds), and a corkscrew look or striations. 
Under the storm and closer to where most tornadoes are found, evidence of a supercell and likelihood of a tornado includes inflow bands (particularly when curved) such as a "beaver tail", and other clues such as strength of inflow, warmth and moistness of inflow air, how outflow- or inflow-dominant a storm appears, and how far the front flank precipitation core is from the wall cloud. Tornadogenesis is most likely at the interface of the updraft and rear flank downdraft, and requires a balance between the outflow and inflow. Only wall clouds that rotate spawn tornadoes, and usually precede the tornado by five to thirty minutes. Rotating wall clouds may be a visual manifestation of a low-level mesocyclone. Barring a low-level boundary, tornadogenesis is highly unlikely unless a rear flank downdraft occurs, which is usually visibly evidenced by evaporation of cloud adjacent to a corner of a wall cloud. A tornado often occurs as this happens or shortly after; first, a funnel cloud dips and in nearly all cases by the time it reaches halfway down, a surface swirl has already developed, signifying a tornado is on the ground before condensation connects the surface circulation to the storm. Tornadoes may also occur without wall clouds, under flanking lines, and on the leading edge. Spotters watch all areas of a storm, and the cloud base and surface. The most record-breaking tornado in recorded history was the Tri-State Tornado, which roared through parts of Missouri, Illinois, and Indiana on March 18, 1925. It was likely an F5, though tornadoes were not ranked on any scale in that era. It holds records for longest path length (219 miles, 352 km), longest duration (about 3.5 hours), and fastest forward speed for a significant tornado (73 mph, 117 km/h) anywhere on Earth. In addition, it is the deadliest single tornado in United States history (695 dead). The tornado was also the costliest tornado in history at the time (unadjusted for inflation), but in the years since has been surpassed by several others if population changes over time are not considered. When costs are normalized for wealth and inflation, it ranks third today. The deadliest tornado in world history was the Daultipur-Salturia Tornado in Bangladesh on April 26, 1989, which killed approximately 1,300 people. Bangladesh has had at least 19 tornadoes in its history that each killed more than 100 people, almost half of the total in the rest of the world. The most extensive tornado outbreak on record was the April 25–28, 2011 tornado outbreak, which spawned 355 confirmed tornadoes over the southeastern United States, 211 of them within a single 24-hour period. The previous record was the Super Outbreak of 1974, which spawned 148 tornadoes. While direct measurement of the most violent tornado wind speeds is nearly impossible, since conventional anemometers would be destroyed by the intense winds and flying debris, some tornadoes have been scanned by mobile Doppler radar units, which can provide a good estimate of the tornado's winds. The highest wind speed ever measured in a tornado, which is also the highest wind speed ever recorded on the planet, is 301 ± 20 mph (484 ± 32 km/h) in the F5 Bridge Creek-Moore, Oklahoma, tornado which killed 36 people. Though the reading was taken about 100 feet (30 m) above the ground, this is a testament to the power of the strongest tornadoes. Storms that produce tornadoes can feature intense updrafts, sometimes exceeding 150 mph (240 km/h).
Debris from a tornado can be lofted into the parent storm and carried a very long distance. A tornado which affected Great Bend, Kansas, in November 1915, was an extreme case, where a "rain of debris" occurred 80 miles (130 km) from the town, a sack of flour was found 110 miles (180 km) away, and a cancelled check from the Great Bend bank was found in a field outside of Palmyra, Nebraska, 305 miles (491 km) to the northeast. Waterspouts and tornadoes have been advanced as an explanation for instances of raining fish and other animals. Though tornadoes can strike in an instant, there are precautions and preventative measures that people can take to increase the chances of surviving a tornado. Authorities such as the Storm Prediction Center advise having a pre-determined plan should a tornado warning be issued. When a warning is issued, going to a basement or an interior first-floor room of a sturdy building greatly increases chances of survival. In tornado-prone areas, many buildings have storm cellars on the property. These underground refuges have saved thousands of lives. Some countries have meteorological agencies which distribute tornado forecasts and increase levels of alert of a possible tornado (such as tornado watches and warnings in the United States and Canada). Weather radios provide an alarm when a severe weather advisory is issued for the local area, though these are mainly available only in the United States. Unless the tornado is far away and highly visible, meteorologists advise that drivers park their vehicles far to the side of the road (so as not to block emergency traffic), and find a sturdy shelter. If no sturdy shelter is nearby, getting low in a ditch is the next best option. Highway overpasses are one of the worst places to take shelter during tornadoes, as the constricted space can be subject to increased wind speed and funneling of debris underneath the overpass.
Myths and misconceptions
Folklore often identifies a green sky with tornadoes, and though the phenomenon may be associated with severe weather, there is no evidence linking it specifically with tornadoes. It is often thought that opening windows will lessen the damage caused by the tornado. While there is a large drop in atmospheric pressure inside a strong tornado, it is unlikely that the pressure drop would be enough to cause the house to explode. Some research indicates that opening windows may actually increase the severity of the tornado's damage. A violent tornado can destroy a house whether its windows are open or closed. Another commonly held misconception is that highway overpasses provide adequate shelter from tornadoes. This belief is partly inspired by widely circulated video captured during the 1991 tornado outbreak near Andover, Kansas, where a news crew and several other people take shelter under an overpass on the Kansas Turnpike and safely ride out a tornado as it passes by. However, a highway overpass is a dangerous place during a tornado; the subjects of the video remained safe due to an unlikely combination of events: the storm in question was a weak tornado, it did not directly strike the overpass, and the overpass itself was of a unique design. Due to the Venturi effect, tornadic winds are accelerated in the confined space of an overpass. Indeed, in the Oklahoma tornado outbreak of May 3, 1999, three highway overpasses were directly struck by tornadoes, and at all three locations there was a fatality, along with many life-threatening injuries.
By comparison, during the same tornado outbreak, more than 2,000 homes were completely destroyed, with another 7,000 damaged, and yet only a few dozen people died in their homes.

An old belief is that the southwest corner of a basement provides the most protection during a tornado. The safest place is actually the side or corner of an underground room opposite the tornado's direction of approach (usually the northeast corner), or the central-most room on the lowest floor. Taking shelter in a basement, under a staircase, or under a sturdy piece of furniture such as a workbench further increases the chances of survival. Finally, there are areas which people believe to be protected from tornadoes, whether by being in a city, near a major river, hill, or mountain, or even by supernatural forces. Tornadoes have been known to cross major rivers, climb mountains, affect valleys, and damage several city centers. As a general rule, no area is safe from tornadoes, though some areas are more susceptible than others.

Meteorology is a relatively young science, and the study of tornadoes is newer still. Although researched for about 140 years, and intensively for around 60 years, there are still aspects of tornadoes which remain a mystery. Scientists have a fairly good understanding of the development of thunderstorms and mesocyclones and of the meteorological conditions conducive to their formation. However, the step from supercell (or other formative processes) to tornadogenesis, and the prediction of tornadic versus non-tornadic mesocyclones, is not yet well understood and is the focus of much research. Also under study are the low-level mesocyclone and the stretching of low-level vorticity which tightens into a tornado, namely, what processes are involved and how the environment and the convective storm interact. Intense tornadoes have been observed forming simultaneously with a mesocyclone aloft (rather than after mesocyclogenesis), and some intense tornadoes have occurred without a mid-level mesocyclone. Reliably predicting tornado intensity and longevity remains a problem, as do the details affecting the characteristics of a tornado during its life cycle and tornadolysis. Other rich areas of research are tornadoes associated with mesovortices within linear thunderstorm structures and within tropical cyclones.

Scientists still do not know the exact mechanisms by which most tornadoes form, and occasional tornadoes still strike without a tornado warning being issued. Analysis of observations, including both stationary and mobile (surface and aerial) in-situ and remote-sensing (passive and active) instruments, generates new ideas and refines existing notions. Numerical modeling also provides new insights as observations and new discoveries are integrated into our physical understanding and then tested in computer simulations, which validate new notions as well as produce entirely new theoretical findings, many of which are otherwise unattainable. Importantly, the development of new observation technologies and the installation of observation networks with finer spatial and temporal resolution have aided increased understanding and better predictions. Research programs, including field projects such as the VORTEX projects (Verification of the Origins of Rotation in Tornadoes Experiment), the deployment of TOTO (the TOtable Tornado Observatory), Doppler On Wheels (DOW), and dozens of other programs, hope to answer many questions that still plague meteorologists.
Universities, government agencies such as the National Severe Storms Laboratory, private-sector meteorologists, and the National Center for Atmospheric Research are some of the organizations most active in tornado research, with funding from a variety of private and public sources, chief among them the National Science Foundation. The pace of research is partly constrained by the number of observations that can be taken; by gaps in information about the wind, pressure, and moisture content throughout the local atmosphere; and by the computing power available for simulation. Solar storms similar to tornadoes have been recorded, but it is unknown how closely related they are to their terrestrial counterparts.

[Image gallery: a tornado at Seymour, Texas, in April 1979; an F4 tornado in Roanoke, Illinois, on July 13, 2004; the mature stage of a tornado at Union City, Oklahoma, on May 24, 1973; a radar image of a violently tornadic classic supercell near Oklahoma City, Oklahoma, on May 3, 1999; an F5 tornado approaching Elie, Manitoba, on June 22, 2007.]

See also:
- Cultural significance of tornadoes
- History of tropical cyclone-spawned tornadoes
- List of tornadoes and tornado outbreaks
- Secondary flow
- Skipping tornado
- Tornado drill
- Tornadoes of 2014
Spanish explorers claimed lands from Florida to California as they looked for gold. Spain set up missions to bring the Catholic religion to Native Americans, and forts to protect their claims. English explorers mapped and claimed parts of the Atlantic coast from Georgia to Canada. French explorers claimed areas near the Great Lakes and along the Mississippi River. They were followed by fur traders and missionaries.

JAMESTOWN - 1607
In 1607, King James I granted the Virginia Company of London permission to establish the Jamestown colony on Chesapeake Bay (on the coast of Virginia). John Smith led the colony.
First permanent English settlement in the Americas.
Hardships: low, swampy land → mosquitoes, dirty water → disease. Pocahontas helped through early hard times.
Survived because they learned how to grow tobacco. Brought in African slaves.
House of Burgesses — first colonial legislature in the Americas.

PLYMOUTH - 1620
Plymouth colony, founded by the Pilgrims, was the second English colony in America, founded in Massachusetts in 1620.
Hardships: freezing winters, many died. Squanto taught the Pilgrims how to grow food to survive.
Mayflower Compact — an agreement for self-government.

English kings gave permission for colonists to create 13 English colonies along the Atlantic Coast. The Appalachian Mountains were the western border. Colonial cities grew up on the coast where good harbors allowed transportation. The port cities of Boston, New York, Philadelphia, Baltimore, and Charlestown were centers of trade, population, and government. Each colony had a royal governor appointed by the king and a legislature with elected representatives from the colony. Colonists in each region, or area, adapted to the climate, soil, and geography they found. They sold their products to England.

New England colonies — rocky soil and cold winters. Resources: sea, forest. English Puritans came to New England seeking freedom from religious persecution. MASSACHUSETTS, NEW HAMPSHIRE, CONNECTICUT, RHODE ISLAND

Middle colonies — rich soil, long growing seasons, cold winters, deep rivers. Called the Breadbasket — grew grain and raised livestock; fur trapping, shipping. Known for diversity (many groups living together peacefully) and tolerance (acceptance of others). PENNSYLVANIA, NEW YORK, DELAWARE, NEW JERSEY

Southern colonies — rich soil, warm weather, flat land good for growing cash crops. Sold tobacco, indigo, rice, sugar, and cotton to England. Labor shortage → indentured servants and slaves. Plantation — a large farm where enslaved people were forced to grow cash crops. VIRGINIA, MARYLAND, THE CAROLINAS, GEORGIA

GOVERNING THE COLONIES
History of representation in England:
1215 Magna Carta — This document limited the power of the King and gave rights to some citizens.
1689 English Bill of Rights — guaranteed English citizens certain rights and set up a process for electing representatives in Parliament (the British Congress).
How representation grew in the English colonies:
1619 Virginia House of Burgesses — the first representative government assembly in the colonies.
1620 Mayflower Compact — Pilgrims signed a contract agreeing to the rules for self-government for the colony. They agreed to follow the laws made by their representatives.
Mercantilism — American colonies sent raw materials to English factories, then the colonies bought manufactured goods from England. (Colonists began to resent mercantilism controlled by England.)
Triangle trade — The slave trade route between Africa and North America completed the triangle that ships traveled.
AMERICAN REVOLUTION (1763 - 1783)

French and Indian War
Ben Franklin published a political cartoon calling on American colonists to join together to fight the French.
Cause: The French built forts in the Ohio River Valley, west of the Appalachian Mountains. English colonists wanted the land.
The war: England and France fought in the American colonies (1754-1763). American colonists sided with England, while many Native American tribes fought beside the French. England won, forcing the French out of the land between the Appalachians and the Mississippi River. American settlers poured over the Appalachian Mountains, taking Indian land.
Proclamation of 1763 — King George III ordered colonists not to cross the Appalachians, to keep peace with Native Americans.
Quartering Act — Colonists had to feed and house the British soldiers who were sent to keep the peace.
The British Parliament passed new tax laws to pay for the war debt.

Colonial protests against British laws
Boycott — refusing to buy certain products as a form of protest.
1765 Stamp Act (tax on paper goods) → boycott of paper goods → Stamp Act Congress → repeal of Stamp Act
1767 Townshend Acts (tax on imports, new courts to try colonists who ignored taxes) → boycott → British soldiers stationed in Boston to enforce tax laws → 1770 Boston Massacre (5 colonists died) → American colonists outraged → repeal of Townshend Acts
1773 Tea Act → boycott → 1773 Boston Tea Party → Intolerable Acts (took over the Massachusetts government, closed the port of Boston) → boycotts, First Continental Congress meets

Patriots v. Loyalists: Americans chose sides
Patriots — supported independence from Great Britain.
Loyalists — were loyal to King George III as the ruler of the English colonies in America.

DECLARATION OF INDEPENDENCE — 1776
The Declaration of Independence was signed July 4, 1776, in Philadelphia by delegates to the 2nd Continental Congress. The Declaration stated: All men are created equal; they are endowed by their Creator with certain unalienable rights: life, liberty, and the pursuit of happiness. When a government violates those rights, the citizens have the right to abolish (get rid of) that government and create a new one. King George III has violated the rights of the American colonists. Then the Declaration listed grievances, or complaints, against King George III and Parliament (like shutting down legislatures).

Key Events of the Revolution
1775 Lexington/Concord — the first battles of the Revolution. "The shot heard round the world." Paul Revere rode to warn the colonial militia (Minutemen) about the arrival of British troops to capture their arsenal. The British retreated to Boston.
1776 Trenton, NJ — Gen. Washington led troops across the Delaware River to capture Trenton in a surprise attack, after Thomas Paine's Crisis inspired the troops.
1777 Saratoga — American troops won in the Hudson River Valley and forced part of the British army to surrender. A turning point in the war. France began to help with troops and money.
1777/78 Valley Forge — General Washington and the American army lost Philadelphia and spent a horrible winter training in their winter camp. Troops suffered from starvation, disease, and freezing cold.
1781 Yorktown — Gen. Washington forced the surrender of British Gen. Cornwallis in this port town on Chesapeake Bay, with the help of the French navy and army. This battle ended the Revolution.
American advantages in the war: Patriot troops knew the territory. The U.S. got help from Spain and France.
1783 Treaty of Paris — The treaty between the U.S. and Great Britain gave the Americans the land from the Appalachian Mountains west to the Mississippi River and recognized American independence.

Samuel Adams — leader of the Sons of Liberty in Boston, a secret protest group that began many protests, including the Boston Tea Party.
Thomas Paine — Englishman who wrote Common Sense, a pamphlet that encouraged American colonists to declare independence from England. Later, Paine wrote Crisis, which encouraged Washington's soldiers before the Battle of Trenton. "These are the times that try men's souls ..."
Patrick Henry — Virginia Patriot who called for independence once Boston was under siege. "Give me liberty or give me death!"
Benjamin Franklin — colonial leader in Philadelphia, representative in France during the war, inventor, publisher.
Thomas Jefferson — Virginia delegate to the Continental Congress who wrote the Declaration of Independence in 1776.
George Washington — leader of the Continental Army during the Revolution, President of the Constitutional Convention.
King George III — King of England during the American Revolution; Patriots accused him of being a tyrant.
John Adams — Massachusetts Patriot who helped write the Declaration of Independence.
Abigail Adams — wife of John Adams; wrote a letter encouraging him to "remember the ladies" when forming the new government.

CREATING THE CONSTITUTION (1783 - 1791)

Articles of Confederation
The 2nd Continental Congress wrote the first plan of government for the colonies after it declared independence from Britain at the beginning of the Revolution. They called it the Articles of Confederation. The Articles set up a loose alliance of the states to defend themselves against Britain. The states governed themselves, printed their own money, and had their own navies, but they agreed to help protect each other.
Weaknesses of the Articles of Confederation — Congress was too weak: it could not tax, enforce laws, regulate trade, or control money. Congress could not pay soldiers, and it was hard to pass bills because 9 of 13 states had to agree. There was no president (chief executive) or Supreme Court.
Results of the weak new government:
1783 Congress was chased out of Philadelphia by Continental Army soldiers who were never paid.
1786 Shays' Rebellion — former Continental Army soldier Daniel Shays led Massachusetts farmers in armed protest after they lost their farms because of high state taxes. The weak U.S. government could not help end the conflict.
Delegates went to Philadelphia to revise (change) the Articles of Confederation. Instead, they decided to write a new plan for a stronger national government. James Madison introduced the Virginia Plan — he proposed three branches of government and two houses of Congress. After five months, delegates completed the Constitution. The Constitution was ratified, or approved, in 1788 and went into effect in 1789; the Bill of Rights was added in 1791 to satisfy those who demanded protections for individual rights.

COMPROMISES at the Constitutional Convention
1) The Great Compromise ended an argument between large states (Virginia) and small states (New Jersey) by creating a House of Representatives with representation based on population and a Senate with equal representation (2 senators from each state).
2) The Three-Fifths Compromise settled the argument between Northern free states and Southern slave states about how to count slaves when figuring out how many representatives each state got.
Preamble - Introduction
"We the People of the United States ..." The purposes of our national government: to keep the nation together (form a more perfect Union); make things fair (establish Justice); keep peace at home (insure domestic Tranquility); defend the country (provide for the common defense); take care of citizens (promote the general Welfare); and keep the country free (secure the Blessings of Liberty).

PRINCIPLES OF THE CONSTITUTION
The Framers of the Constitution wanted our government to be strong enough to hold the states together, but they wanted our Constitution to limit the power of the government. "a government of laws and not of men" - John Adams
Federalism — Government power is divided between the federal (national) and state governments. The Constitution is the supreme law of the land. The federal government only handles jobs that affect the whole nation (like income tax, treaties, and national laws).
Separation of Powers — The powers of government are separated into three branches of government:
Legislative Branch — lawmakers. Congress makes the laws for the nation.
Executive Branch — enforcers of the law. The President heads the Executive Branch.
Judicial Branch — judges (who interpret the laws). The highest court is the Supreme Court.
Checks and Balances — Each branch can check, or limit, the power of the other two branches, so that no one branch becomes too powerful (for example, the President can veto laws, the Supreme Court can rule a law unconstitutional).
Representative government ("reps of the public") — Government is controlled by the people, who give their elected representatives the power to make and enforce the laws.
Popular sovereignty ("the people rule") — The power of government rests with the people, who express their ideas through voting (consent of the governed).
Individual rights — The unalienable rights mentioned in the Declaration and guaranteed by the Bill of Rights and other amendments to the Constitution.

Federalists (Alexander Hamilton, James Madison) argued in the Federalist Papers that we needed a strong central government. Antifederalists (Patrick Henry) argued that a strong national government would take away people's and states' rights. They insisted that a Bill of Rights be added to the Constitution to protect individual rights.

Bill of Rights — 1791
Bill of Rights — the first ten amendments to the Constitution:
1 freedom of speech, press, religion, peaceable assembly, petition
2 right to bear arms (militia)
3 no quartering of soldiers in peace time
4 no unreasonable search or seizure, warrant
5-8 due process for people accused of a crime (jury trial, attorney, no cruel and unusual punishment)
9-10 rights not listed in the Constitution belong to the states or citizens

Amending the Constitution
amend — change. The Constitution can be amended (changed) to keep up with changes in society. Amendments can be proposed by Congress or state legislatures. Amendments must be ratified by ¾ of the states (by their legislatures or by state conventions). The Constitution has only been amended 27 times.

EARLY YEARS OF THE NEW NATION (1791 – 1817)

Northwest Ordinance of 1787
This law established a procedure for adding new territories and states to the United States. New states were equal to the original states. The law also provided for public education and banned slavery in the new territories.

George Washington's Presidency
"I walk on untrodden ground" — Washington knew he would be setting a precedent (example) for presidents to follow. Washington asked for advice from his "Cabinet," including Alexander Hamilton, his Secretary of the Treasury, and Thomas Jefferson, his Secretary of State.
Farewell Address: Washington encouraged the U.S. to stay neutral and to form "no entangling alliance" with other countries. He also warned against political parties, which could divide the nation.

Washington's cabinet members disagreed about how much power the national government should have. They led different political parties. Alexander Hamilton and other Federalists believed in a strong national government (they supported a national bank and import tariffs to protect new American factories). They represented Northerners and urban manufacturers. Thomas Jefferson and other Democratic-Republicans supported small government, the rights of the states, and low taxes. They represented the agricultural, rural South.

Washington, D.C. — 1800
George Washington asked Benjamin Banneker, an African-American mathematician and surveyor, to help design the new capital.

Marbury v. Madison — 1803
This court case established the idea of judicial review. The Supreme Court can overturn a law as unconstitutional if the court decides that the law is against the U.S. Constitution.

War of 1812
Causes: the U.S. wanted to annex Canada from the British and Florida from the Spanish. British warships were seizing American ships and impressing American sailors. The U.S. was angry with Britain for encouraging Native American attacks against American settlers on the frontier.
British ships blockaded American ports, blocking American imports. This encouraged American manufacturing. British troops fought in America from 1812-1814. The British burned much of Washington, D.C. Francis Scott Key wrote "The Star-Spangled Banner" after witnessing the American victory that defended Fort McHenry from British attack in Baltimore Harbor. Andrew Jackson won at the Battle of New Orleans after the peace treaty was signed.
Result: the Era of Good Feelings, a time when Americans felt greater nationalism and patriotism and political parties stopped fighting.

Improvements in Manufacturing
In England, improvements in technology created the Industrial Revolution, a change in the way goods were made. Now work was done more efficiently in factories, rather than in homes by hand.
textile industry — the mass production of woven cloth by machines
1790 Samuel Slater built the first spinning mill in America.
1798 interchangeable parts — Eli Whitney invented machines to manufacture each part of a gun exactly alike. This sped up production and made repairs easier. assembly line → mass production of goods
1813 Lowell mills hired farm girls to weave cloth on power looms in factories (12 ½ hour days, low wages).
urbanization (people leaving their farms and moving to cities); 5 million immigrants from Europe (Irish, German, Italian); overcrowding, poverty, and poor working conditions in Northern cities

Improvements in Agriculture
1793 Eli Whitney invented the cotton gin. Textile mills demanded more cotton, but the short-fibered cotton that could be grown away from the coast was hard for slaves to clean by hand. With the cotton gin, a worker could clean 50 pounds of cotton a day. Cotton profits made slaves more valuable → increased slave trade. Many farmers moved west to grow cotton and brought slaves. Settlers moving west grew food and cotton to supply the North and created a market for Northern manufactured goods.
1834 McCormick reaper allowed farmers to cut grain crops with a horse-drawn machine rather than by hand.
1837 John Deere's steel plow made it possible to farm the tough, muddy midwestern soil.
Improvements in Transportation
1807 Fulton invented the steamboat (the Clermont), which increased river transportation and made transporting goods more efficient. New Orleans became an important port on the Mississippi.
1825 The new Erie Canal let steamboats travel from the Atlantic Ocean to the Great Lakes. It made shipping between the East coast and the Midwest much faster and cheaper.
The expanding network of railroads connected the regions, as people and goods were transported faster than ever before.

Improvements in Communication
1837 Samuel Morse patented the telegraph, an innovation that sped up communication between east and west.

WESTWARD EXPANSION (1803 - 1853)
The first trans-Appalachian road sped up transportation west. Result: thousands of settlers moved into Kentucky and Tennessee.
1803 Louisiana Purchase — Thomas Jefferson bought the Louisiana Purchase from France in 1803. The purchase of this huge territory doubled the size of the U.S. and began America's westward expansion beyond the Mississippi River.
1804-1806 Lewis and Clark Expedition — Lewis and Clark explored the Louisiana Purchase for Jefferson, mapped territory, gathered information, and established contact with Native American tribes. Sacajawea guided the expedition.
1819 Spain cedes Florida — After Andrew Jackson captured Pensacola, Florida, Spain gave up Florida to the United States in the Adams-Onís Treaty.
1823 Monroe Doctrine — Latin American countries won independence from Spain in the 1820s. President Monroe said that the U.S. would not allow European countries to make any new colonies in North or South America.

Andrew Jackson's Presidency
The first Western president, founder of the Democratic party. Jacksonian democracy — involving "common people" in government.
Nullification Crisis — Congress passed high tariffs (import taxes) to protect new Northern factories by making foreign goods more expensive. The South protested the 1828 "Tariff of Abominations" because it made their imported goods more expensive. Vice President John C. Calhoun argued that his state of South Carolina had the right to nullify (declare illegal) the tariff law. Jackson sent federal troops to enforce the federal law.
Jackson destroyed the national bank and removed its funds → Panic of 1837
Indian Removal Act of 1830 — Jackson asked Congress to authorize the use of force to remove southeastern tribes from prized farmland.
1838 Trail of Tears — Jackson ignored the Supreme Court and ordered troops to remove the Cherokee and other Native Americans from "settled" areas east of the Mississippi River to "Indian Territory" west of the Mississippi River. Many died during the forced march west.

Manifest Destiny — The belief that the United States had the God-given right to own and control all land between the Atlantic and Pacific Oceans. This belief drove westward expansion, the annexation of Texas and Oregon, and the Mexican War. (Image: John Gast's American Progress.)
Many Americans moved west:
Oregon Trail — farmers traveled in Conestoga wagons for farm land.
Mormon Trail — Mormons headed to Salt Lake City for religious reasons.
Santa Fe Trail — major transportation and trade route to the Southwest.
The Rocky Mountains were a major barrier to settlers traveling west.

Mexican War (1846-1848), Mexican Cession
The Republic of Texas was annexed into the United States as a slave state in 1845. The U.S. and Mexico argued about which river formed Texas' southern border. Result: war between Mexico and the U.S. Henry David Thoreau wrote "Civil Disobedience" to protest the use of taxes to support the war.
Results: Treaty of Guadalupe Hidalgo — U.S. victory and the addition of the Mexican Cession (land from Texas to California) in 1848.
1846 Oregon Territory — Great Britain and the U.S. both claimed "Oregon Country." For years, the northern border of the U.S. west of the Rockies was not set. Many American farmers moved to Oregon Territory. Some in Congress wanted to fight for the territory. The two countries signed a treaty in 1846.
In 1848, gold was discovered at Sutter's Mill, California → population boom in California. California gained statehood in 1850 as a free state.
1853 Gadsden Purchase — The U.S. bought the last piece of its southern border to provide land for a railroad.

Temperance — a movement to ban the sale of alcohol and encourage people not to drink → 18th Amendment (prohibition)
Education Reform — Horace Mann fought for high-quality public schools for all children. "Education ... is the great equalizer of the conditions of men ..."
Women's Rights Movement — Women who were banned from speaking at abolition meetings started the movement for women's rights — suffrage (the right to vote), the right to control property. 1848 Seneca Falls Convention — Elizabeth Cady Stanton presented the Declaration of Sentiments. Other leaders: Susan B. Anthony, Lucretia Mott.
Abolition Movement — Frederick Douglass published the North Star. William Lloyd Garrison published The Liberator. Sojourner Truth spoke against slavery and for the rights of black women. Harriet Tubman guided fugitive slaves to the North and Canada on the Underground Railroad.

CIVIL WAR (1861 - 1865)
1861 - 1865 — The North (Union, Yankees) and the South (Confederacy, Rebels) fought the Civil War over the issues of slavery, states' rights, and economic and sectional differences between the North and the South. The North and South had been different since colonial times:
The North — textile mills, factories; manufactured cloth and other goods; tariffs helped factory owners by making their goods competitive; supported the Union and abolition.
The South — plantations, few factories; exported cash crops; tariffs hurt southern farmers by raising prices for imported goods; supported states' rights and slavery.

Voices of each region — Sectional leaders were loyal to the interests of their region:
John C. Calhoun — South Carolina senator who promoted states' rights, "nullification," and secession.
Henry Clay (Kentucky) was called the Great Compromiser. He tried to "keep peace" between Northern and Southern interests.
Daniel Webster (Massachusetts) represented the views of many Northerners in support of strong central government.

EVENTS LEADING TO CIVIL WAR
1820 Missouri Compromise — As new western states applied for statehood, the split between North and South widened. Henry Clay of Kentucky negotiated a compromise in Congress. When the Missouri Territory wanted to join the Union as a slave state, Maine was admitted as a free state. This kept the number of free and slave states equal.
The Compromise of 1850 included the Fugitive Slave Law, which enraged Northerners who didn't want to help slave owners.
1852 Uncle Tom's Cabin — Harriet Beecher Stowe published this book about the horrors of slavery. Northerners were moved by the touching story of slaves suffering. Southerners were outraged.
1854 Bloody Kansas — Senator Stephen Douglas proposed opening the Kansas and Nebraska territories to slavery. Thousands of northern and southern settlers poured into the territories to fight for their side.
1857 Dred Scott v. Sandford — A slave named Dred Scott sued his owner for his freedom in the Supreme Court.
Chief Justice Taney wrote the opinion that slaves were not citizens and did not have the right to sue in court. He stated that slaves were property, not citizens. Northerners feared this could extend slavery into the territories.
1859 Harpers Ferry — Abolitionist John Brown led an armed raid at Harpers Ferry, VA, intending to spark a slave revolt. Brown was hanged. He became a hero among Northern abolitionists.
Formation of the Republican Party — Northern abolitionists formed a political party to end slavery: the Republican Party. Abraham Lincoln of Illinois was their candidate. "A house divided against itself cannot stand. I believe this government cannot endure permanently half slave and half free."
1860 Presidential Election — Lincoln won the presidency because the Southern Democrats split their votes among three candidates. The South panicked, believing Lincoln would abolish slavery. South Carolina seceded from (left) the Union. More Southern states followed. They formed the Confederate States of America. Soon, the Union and the Confederacy were at war.
States' rights — The idea that states had the right to control all the issues in their state except for those listed in the Constitution. Southern states used the argument to nullify (ignore) laws they didn't agree with.

THE CIVIL WAR
April 12, 1861 — Fort Sumter, SC. The Civil War began when Southern troops fired on Union troops who were trying to re-supply a U.S. fort.
Vicksburg, MS — a Northern victory that took control of the Mississippi River from the Confederacy. A turning point in the war.
Gettysburg, PA — a Northern victory in which over 35,000 Confederate and Union soldiers were killed or wounded in three days of fighting. A turning point in the war. Lincoln later delivered the Gettysburg Address there.
1863 Lincoln signed the Emancipation Proclamation, which declared all slaves in the rebellious Confederate states free.
Appomattox Courthouse, VA — Gen. Lee surrendered to Gen. Grant to end the Civil War. Grant showed mercy to Lee and his troops.
1865 Lincoln was assassinated while he attended a play in Washington, D.C.

Civil War Leaders
Abraham Lincoln — president of the U.S. during the Civil War. Believed in preserving the Union above all else.
Ulysses S. Grant — commander of the Union Army.
Robert E. Lee — commander of the Confederate Army. Surrendered to Grant at Appomattox Courthouse.
Jefferson Davis — President of the Confederate States of America.

RECONSTRUCTION — a time of rebuilding after the Civil War. Federal troops went to the South to ensure that Southerners followed the new laws against slavery.
13th Amendment made slavery illegal in the U.S.
14th Amendment gave citizenship rights to all people born or naturalized in the U.S., including former slaves. It stated that citizens cannot be "deprived of life, liberty, or property without due process of the law." All citizens will have equal protection under the law.
15th Amendment gave African-American men the right to vote.
Closing the Gap
Author: Cheryl McGaughey. Posted: October 27, 2005. Updated: October 25, 2007.

The students learn what GDP is. They will learn different measures of GDP as well as how GDP per capita can be used to compare countries. They will also calculate GDP per capita and learn how poorer countries can converge, or close the gap, with richer countries.
- Define GDP and GDP per capita and understand how both measures are used to compare countries.
- Calculate GDP per capita and how many years it takes for an economy's output to double.
- Understand how convergence, or closing the gap in GDP between countries, can occur, and the benefits of convergence.

Gross Domestic Product, or GDP, is defined as the total market value of all goods and services produced within an economy in a given year. Nominal GDP is GDP based on prevailing prices. GDP is also used as a measure of a country's standard of living. A standard of living is indicated by the necessities, comforts, and luxuries enjoyed by an individual or group. Real GDP adjusts for inflation: it controls for any year-to-year growth in GDP that is due solely to changes in the average price level. (Purchasing Power Parity, or PPP-adjusted, GDP additionally adjusts for differences in price levels between countries.) GDP per capita, which adjusts both for inflation and for the population of the country, is a commonly used figure that allows for comparisons of countries based on their standards of living. It is a good idea to have the students do Focus on Economic Data: Consumer Price Index and Inflation before completing this lesson. *You can read more about GDP that is PPP-adjusted in the article .

Consumer Price Index and Inflation: This Council for Economic Education lesson plan discusses the inflation rate.
A Beginner's Guide to Purchasing Power Parity Theory: This site provides you with a Q and A about the Purchasing Power Parity Theory.
CIA's World Fact Book: Students can find and view all the world's countries at this site. Listed below are the countries necessary for completing the exercises in this lesson. The United States
Closing the Gap: This drag-and-drop interactive has students match countries with their corresponding GDP per capita.
The Rule of 72: a mathematical rule for determining the number of years it will take for an investment to double in value.

The data in this lesson will be taken from the CIA World Fact Book, which adjusts for exchange rate differences but not inflation differences. The students will need to know what Real GDP per capita is and how to calculate it.
- Real GDP per capita is the measure most often used to compare countries based on economic performance, since it adjusts for the populations of the countries.
- Real GDP per capita is simply the Real GDP of any country divided by its population.
- Real GDP / Population = Real GDP per capita
$27,270,000,000 / 468,571 = $58,198 per person
Have the students compare Luxembourg's Real GDP per capita to that of the United States: Luxembourg = $58,198; United States = $40,100. Proceed by having the class discuss why Luxembourg has a higher Real GDP per capita. Answers should refer to Luxembourg's smaller population. The students should understand that GDP per capita will vary widely and that developing countries typically have much lower GDP per capita figures than developed countries.

Real GDP per capita is the measure most used to compare countries based on their economic performance. It is calculated by dividing Real GDP by the population of the country.
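As a quick check of the definition above, the division can be scripted. The sketch below is not part of the original lesson; it simply restates the 2004 figures quoted in the lesson, and the function name is illustrative only.

```python
# A minimal sketch of the lesson's Real GDP per capita calculation
# (function name is illustrative; figures are the 2004 values quoted in the lesson).

def real_gdp_per_capita(real_gdp, population):
    """Real GDP / Population = Real GDP per capita."""
    return real_gdp / population

luxembourg = real_gdp_per_capita(27_270_000_000, 468_571)  # about $58,198 per person
united_states = 40_100  # the lesson quotes the U.S. per capita figure directly

print(f"Luxembourg:    ${luxembourg:,.0f}")
print(f"United States: ${united_states:,}")
```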
Luxembourg had a population of 468,571 and a GDP of $27.27 billion in 2004. Its GDP per capita was $27,270,000,000 / 468,571, or $58,198. This compares to the U.S. Real GDP per capita of $40,100 in 2004. Have the students complete the following drag and drop activity. Note: Qatar is a small country in the Middle East with large oil reserves, which account for a substantial part of its GDP. The countries with the lower GDP per capita numbers would normally be classified as developing countries. A country such as Pakistan has a large population and does not have the capital resources of developed countries such as Belgium. The Czech Republic is still emerging from its days in the Soviet bloc and has not fully developed as a free market economy.

The next exercise is to compare Canada and Mexico. Ask the students which country has the higher GDP per capita. (Canada had a GDP per capita of $31,500 while Mexico's GDP per capita was $9,600 in 2004; most students should be able to figure out that Canada will have the higher GDP per capita due to its developed-country status, from the preceding discussion about developing and developed countries.)

The students will use the growth rate in GDP and the Rule of 72 to calculate the expected GDP of Mexico and Canada. They will also see how a higher growth rate could cause convergence between the two countries. By using the Rule of 72, the students see how long it will take for Canada and Mexico to double their GDPs:
72 / Real GDP growth rate = the number of years for a sum to double
For Canada, with a rounded growth rate of 2 percent, it will take 36 years to double its GDP (72/2 = 36). For Mexico, with a rounded growth rate of 4 percent, it will only take 18 years (72/4 = 18), which means that Mexico will double its GDP twice in 36 years while Canada will only double once. In 36 years, Canada's GDP per capita would be $63,000 (31,500 x 2), while Mexico's GDP per capita would be $38,400 ($9,600 x 2 = 19,200 x 2 = 38,400), so there is still a significant gap between the two countries. If we assume a 6 percent growth rate for Mexico while Canada stays at 2 percent over a 36-year period, Mexico would see its GDP double three times (72/6 = 12 years, and 36 years / 12 years = 3) while Canada's GDP would still only double once. Mexico's GDP per capita after 36 years would be $76,800 (9,600 x 2 = 19,200 x 2 = 38,400 x 2 = 76,800) while Canada's would still be at $63,000. This would more than close the gap between the two countries, and convergence would occur. (The arithmetic in this exercise is summarized in the short script at the end of this lesson.) Growth in GDP can be achieved by greater productivity through more capital resources or more skilled workers.

In light of the previous exercise and discussion about developing and developed countries, most students will not be surprised that Canada has a much higher GDP per capita than Mexico. Ask the students why the big difference in GDP per capita exists between Canada and Mexico. (Explanations might include the greater capital resources of Canada, higher education rates and higher skill levels of workers, and greater infrastructure.) Ask the students if the GDP figures support their impression of the standard of living in Canada and Mexico. (Most students would have had an impression that Canadian citizens were, on average, better off than Mexican citizens, and the GDP figures reinforce that impression.) Ask the students to brainstorm how the United States would benefit from Mexico having a GDP per capita closer to the GDP per capita of Canada.
Answers might include increased trade with Mexico. Even though Canada's population (approximately 32 million) is much smaller than Mexico's (106 million), Canada is the United States' largest trading partner. We could expect Mexico to greatly increase its purchases of U.S. goods if its standard of living were to rise. Another possible answer would be a decrease in the number of illegal immigrants entering the United States from Mexico. Since many Mexican citizens come to the United States for greater economic opportunity, we could assume that there would be fewer illegal immigrants if Mexico had a higher GDP per capita; that, too, would be a benefit for the United States. Students could look up trade data and immigration figures to support their answers.
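The Rule of 72 projections in the Canada/Mexico exercise can also be scripted. Here is a minimal illustrative sketch (not part of the original lesson; the class and method names are invented) that reproduces the doubling arithmetic used above:

// Rule of 72: years to double ≈ 72 / growth rate (in percent).
// Reproduces the lesson's Canada (2%) vs. Mexico (4% and 6%) projections over 36 years.
public class RuleOf72 {
    static double yearsToDouble(double growthRatePercent) {
        return 72.0 / growthRatePercent;
    }

    static double project(double gdpPerCapita, double growthRatePercent, double years) {
        double doublings = years / yearsToDouble(growthRatePercent);
        return gdpPerCapita * Math.pow(2.0, doublings);
    }

    public static void main(String[] args) {
        System.out.printf("Canada after 36 years at 2%%: $%,.0f%n", project(31_500, 2, 36)); // ~$63,000
        System.out.printf("Mexico after 36 years at 4%%: $%,.0f%n", project(9_600, 4, 36));  // ~$38,400
        System.out.printf("Mexico after 36 years at 6%%: $%,.0f%n", project(9_600, 6, 36));  // ~$76,800
    }
}

Note that the Rule of 72 is itself an approximation; compounding $9,600 at an exact 4 percent for 36 years gives about $39,400 rather than $38,400, which is close enough for the purposes of the lesson.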
Part A: Instructions and examples

Part A contains only the instructions for each exercise. Read the instructions and do the exercise while you listen to the recording. When you hear a tone (shown by a bullet in the script), give your response. Then listen to the recorded model response.

A time to remember

Unit 1, Exercise 1: Listen to a question, like Did your sister go to school in Korea? Answer with Yes, like Yes, she did. Were you born in Thailand? Yes, I was.

Unit 1, Exercise 2: Listen to a question, like Was your brother born here? Answer with No, like No, he wasn't. Did your parents grow up in Colombia? No, they didn't.

Unit 1, Exercise 3: Listen to a statement about a person's life, like My sister wasn't born here. Ask if the event happened in Brazil, like Was she born in Brazil? Then you hear an answer. My sister wasn't born here. Was she born in Brazil? Yes, she was. I studied English in school. Did you study English in Brazil? Yes, for six years.

Unit 1, Exercise 4: Listen to a question with two choices, like Did you grow up here or in South America? Answer with the second choice, like I grew up in South America.

Unit 1, Exercise 5: Listen to a statement, like I grew up in Toronto. Then ask for clarification, like Where did you grow up? Then you hear an answer. I grew up in Toronto. Where did you grow up? In Toronto. I played chess in high school. What did you play in high school? Chess.

Unit 1, Exercise 6: Listen and repeat.

Unit 1, Exercise 7: Listen to a statement, like I collected stamps as a child. Reply like Hey, I used to collect stamps, too. I played soccer in high school. Hey, I used to play soccer, too.

Unit 1, Exercise 8: Listen to people talk about things they did in the past, like WOMAN: I don't dance, but I did when I was younger. Reply like She used to dance when she was younger. MAN: I don't play sports anymore, but I did when I was in college. He used to play sports when he was in college.

Unit 1, Exercise 9: [Note: This conversation is on page 2 of the textbook.] Now repeat each sentence.

Caught in the rush

Unit 2, Exercise 1: Listen to a word. Then make a sentence with There are too many... or There's too much..., like people There are too many people. rain There's too much rain.

Unit 2, Exercise 2: Listen to a phrase, like teachers in the schools Make a sentence with There aren't enough... or There isn't enough..., like There aren't enough teachers in the schools. money in the bank There isn't enough money in the bank.

Unit 2, Exercise 3: Listen to a phrase, like motorists on the streets Make a statement with There should be fewer... or There should be less..., like There should be fewer motorists on the streets. traffic on my street There should be less traffic on my street.

Unit 2, Exercise 4: Listen to two words or phrases, like public transportation... pollution Combine the words or phrases in a sentence with more and less, or with more and fewer, like We need more public transportation and less pollution. traffic lights... cars We need more traffic lights and fewer cars.

Unit 2, Exercise 5: Listen and repeat.

Unit 2, Exercise 6: [Note: This conversation is on page 11 of the textbook.] Now repeat each sentence.

Unit 2, Exercise 7: Listen to the name of a place, like the bank Ask an indirect question with Can you tell me...? like Can you tell me where the bank is? Then you hear a reply. the bank Can you tell me where the bank is? On Second Street.

Unit 2, Exercise 8: Listen to a question, like Where is the bank?
Change it to an indirect question, like Could you tell me where the bank is? Then you hear an answer. Where is the bank? Could you tell me where the bank is? On Second Street.

Unit 2, Exercise 9: Listen to a question, like When does the train come? Respond like I'm not sure. Do you know when the train comes, Mary? Then you hear Mary's answer. When does the train come? I'm not sure. Do you know when the train comes, Mary? In an hour.

Time for a change!

Unit 3, Exercise 1: Listen and repeat.

Unit 3, Exercise 2: [Note: This conversation is on page 19 of the textbook.] Now repeat each sentence.

Unit 3, Exercise 3: Listen to someone talk about an apartment, like The kitchen has a lot of cupboards. Disagree. Make a sentence with as many or as much, like It doesn't have as many cupboards as the last apartment.

Unit 3, Exercise 4: Compare two apartments. Listen to a sentence, like The last apartment was cheaper. Reply with a sentence using almost as, like But this apartment is almost as cheap as that one. The last apartment was cheaper. But this apartment is almost as cheap as that one.

Unit 3, Exercise 5: Listen to a question, like Is this neighborhood quiet? If you hear No, answer like No. It's not quiet enough for me. If you hear Yes, answer like Yes. In fact, it's too quiet. Is this neighborhood quiet? No. No. It's not quiet enough for me. Is this neighborhood quiet? Yes. Yes. In fact, it's too quiet.

Unit 3, Exercise 6: Listen to a statement, like I live with my parents. Make a statement with I wish I didn't..., like I wish I didn't live with my parents. I live with my parents. I wish I didn't live with my parents.

Unit 3, Exercise 7: Listen to a statement, like I don't have my own apartment. Make a statement with I wish..., like I wish I had my own apartment. I can't take a vacation this year. I wish I could take a vacation this year.

Unit 3, Exercise 8: Listen to a statement, like This neighborhood is very noisy. Make another statement with I wish and weren't, like I wish it weren't so noisy. This neighborhood is very noisy. I wish it weren't so noisy.

Unit 3, Exercise 9: Listen to a statement about a house, like There isn't a modern kitchen. Make a statement with I wish there were..., like I wish there were a modern kitchen. There isn't a modern kitchen. I wish there were a modern kitchen. There aren't many windows. I wish there were more windows.

I've never heard of that!

Unit 4, Exercise 1: Listen and repeat.

Unit 4, Exercise 2: Listen to two words or phrases, like ceviche... Colombian Combine the words or phrases in a question, like Did you have ceviche at the Colombian restaurant last night? Then you hear an answer, like No, I didn't. Ask a second question with Have you ever...? like Have you ever had ceviche? Then you hear another response. ceviche... Colombian Did you have ceviche at the Colombian restaurant last night? No, I didn't. Have you ever had ceviche? Yes. It was delicious.

Unit 4, Exercise 3: Listen to a phrase, like eat snails Ask a question with Have you ever...? like Have you ever eaten snails? Then you hear a response. eat snails Have you ever eaten snails? No, I haven't.

Unit 4, Exercise 4: Listen to a question, like Have you ever eaten oysters? Answer like No, I've never eaten oysters. Have you ever eaten oysters? No, I've never eaten oysters.

Unit 4, Exercise 5: Listen to a question, and answer with No, I didn't or No, I haven't, like Did you see the cooking show on TV last night? No, I didn't. Have you ever eaten a kebab? No, I haven't.
Unit 4, Exercise 6: [Note: This conversation is on page 22 of the textbook.] Now repeat each sentence.

Unit 4, Exercise 7: Listen to instructions for making a cream cheese and cucumber sandwich. You will hear two sentences. Repeat the correct sentence, like First, you take two slices of bread. Then you take two slices of bread. First, you take two slices of bread. Before that, eat the sandwich! Finally, eat the sandwich! Finally, eat the sandwich!

Going places

Unit 5, Exercise 1: Listen to a question and a phrase, like What are you going to do next summer? go to Europe Answer like I think I'll go to Europe. What are you going to do next summer? go to Europe I think I'll go to Europe.

Unit 5, Exercise 2: Listen to a question and two phrases, like What are you going to do for your vacation? stay home... go to Europe Answer like Maybe I'll stay home. I probably won't go to Europe. What are you going to do for your vacation? stay home... go to Europe Maybe I'll stay home. I probably won't go to Europe. What are you going to do for your vacation? take a class... get a job Maybe I'll take a class. I probably won't get a job.

Unit 5, Exercise 3: Listen to two words or phrases. Ask a question with What kind of...? like luggage... take What kind of luggage are you going to take? Then you hear an answer. luggage... take What kind of luggage are you going to take? Just a backpack.

Unit 5, Exercise 4: Listen and repeat.

Unit 5, Exercise 5: Listen to a question about a trip, like Do I need to take warm clothes? Answer with Yes, you'd better..., like Yes, you'd better take warm clothes. Do I need to check the weather? Yes, you'd better check the weather.

Unit 5, Exercise 6: Listen to a question about travel arrangements, like Should I take a sleeping bag? Answer with No, you don't have to..., like No, you don't have to take a sleeping bag. Should I pack a first-aid kit? No, you don't have to pack a first-aid kit.

Unit 5, Exercise 7: Listen to someone talk about travel plans, like I'm going to travel alone. Respond like Oh, you shouldn't travel alone. I'm going to take my jewelry. Oh, you shouldn't take your jewelry.

Unit 5, Exercise 8: [Note: This conversation is on page 30 of the textbook.] Now repeat each sentence.

OK. No problem!

Unit 6, Exercise 1: Listen and repeat.

Unit 6, Exercise 2: [Note: This conversation is on page 36 of the textbook.] Now repeat each sentence.

Unit 6, Exercise 3: Listen to a request, like Would somebody please hold the door open? Answer with Sure and the correct pronoun it or them, like Sure. I'll hold it open. Would somebody please hold the door open? Sure. I'll hold it open. Could somebody put the magazines away? Sure. I'll put them away.

Unit 6, Exercise 4: Listen to a conversation, like WOMAN: Where have you been? The party started an hour ago! MAN: I'm sorry. My car broke down. Have I missed much? Now give the person's excuse, like His car broke down. MAN: Hey, Sue. Could you lend me five dollars? WOMAN: I wish I could, but I'm broke. She's broke.

Unit 6, Exercise 5: Listen to a complaint, like It's too quiet in here. Who turned off the radio? Reply with I'm sorry, and offer to help, like I'm sorry. I did. I'll turn it on. It's cold in here. Who opened the window? I'm sorry. I did. I'll close it.

Unit 6, Exercise 6: Listen to someone complain, like You're late. Did you miss the bus? Reply with Yes and I'm sorry, like Yes, I missed the bus. I'm sorry. You missed our appointment. Did you forget? Yes, I forgot. I'm sorry.

Unit 6, Exercise 7: Listen to a command, like Clean off the table.
Change it to a request, like Could you please clean off the table? Pick up your things. Could you please pick up your things?

Unit 6, Exercise 8: Listen to a command, like Clean up this mess. Change it to a request, like Would you mind cleaning up this mess, please? Put away the dishes. Would you mind putting away the dishes, please?

What's this for?

Unit 7, Exercise 1: Listen and repeat.

Unit 7, Exercise 2: Listen to a word or phrase, like a modem Ask a question using the word or phrase, like What's a modem used for? Then you hear an answer. a modem What's a modem used for? It's used to connect a computer to a telephone line.

Unit 7, Exercise 3: Listen to a phrase, like write reports Use the phrase in a sentence about computers, like People can use computers for writing reports. pay their bills People can use computers for paying their bills.

Unit 7, Exercise 4: Listen to a question followed by a word or phrase, like What is used to transmit television programs? satellites Answer the question, like Satellites are used to transmit television programs. What is used to transmit television programs? satellites Satellites are used to transmit television programs.

Unit 7, Exercise 5: [Note: This conversation is on page 47 of the textbook.] Now repeat each sentence.

Unit 7, Exercise 6: Listen to a phrase, like spill drinks on it Use the phrase in a sentence about a computer with Be sure to... or Be sure not to..., like Be sure not to spill drinks on it. spill drinks on it Be sure not to spill drinks on it. keep the keyboard clean Be sure to keep the keyboard clean.
CS101 Study Guide Unit 8: Java I/O and Exception Handling

8a. Describe the Java I/O package
- What package does Java use to move data into and out of a computer program?
- What are the three fundamental data streams?
- How is it possible to read data as one type and store it as another type?
- How does a programmer communicate with the console?
We have spent a lot of time talking about organizing and manipulating data within the computer, but we have spent little time on how to get data into and out of the computer. Java's utility package (java.util, home of the Scanner class) and its I/O package (java.io) contain all we need for that. For this learning outcome we concentrate on the three fundamental data streams: System.in (data input), System.out (data output), and System.err (error reporting). The computer has no value unless it has a positive impact on human needs; input/output allows the computer to connect to the real world. What is "data" within the physical reality of the computer? Data exists as units of eight bits, each bit a switch that is either on or off; these eight-bit units are called bytes. Data is composed of groupings and interpretations of bytes. If one takes a group of bytes and interprets that group differently, it becomes a different data value, or different data values. Yes, a single data value can be turned into multiple values. Turning one fundamental value into another is called casting: double thisDouble = (double)3; turns the integer value 3 into the double value 3.0. Although not the same as casting, it is also possible to turn a character string into a sequence of fundamental data values. Data input and output is a vast subject within any language, depending on the type of data and where it comes from. Let's begin our exploration with the keyboard and console, two very common I/O devices.
- Input and Output: Most are necessary. Overall review: 19. These are examples: 6, 7, 16, 17, 18.
- Fill in the Blanks Chapter 12: You would be well served to complete this review successfully.
- String Formation: Very important but short reading with lots of detail. This detail is important for programs to work properly.
- How to Write Data to Console in Java: Very important but short reading with lots of detail. This detail is important for programs to work properly.

8b. Read and write data from/to an external file
- Name various sources of data that can be input to a computer program.
- Name various targets of data output from a computer program.
- What are the various ways to open a file?
- What is good practice regarding the closing of files?
We have input data from the keyboard and output data to the console, but there are many other sources and targets, including files of different kinds and also network addresses. "Data" itself has to be understood in the most general terms. A huge consideration is how the data is to be interpreted; much depends on the source and target, and on any intervening applications along the way. In this particular learning exercise we concentrate on file input/output, but the lessons learned here are applicable to other sources and targets, as you will see as you learn more.
- Input and Output Streams: Each is very important. If you can do nothing else, go through 14.
- Writing Text Files: 4, 5, 11, 15, 16, 17 are examples. The rest should be reviewed.
- Reading Data From a File: 4, 7, 8, 9, 13 are examples. The rest should be reviewed.

8c. Use the Java I/O package to retrieve data for populating method parameters
- How is the Scanner class used for both keyboard and file input?
- Why is it important to be able to read data from permanent storage devices?
- What is the difference between a text and a binary file?
- Which other classes can be used for file input?
There are two general ways of dealing with data: as it arrives, and historically. For instance, an application in precision irrigation will be interested in the immediate ambient temperature. However, that same application will also be interested in temperature trends, so the immediate temperature has to be combined with past temperature readings; the immediate temperature therefore has to be stored somehow for later access. How to read stored data values is what we consider in this component of the course. Java offers several ways to read stored data; here we consider the Scanner class combined with the File class. Storing and reading data is a vast subject within Java, and the Scanner and File classes offer basic approaches that fit well with numerous requirements. Let's begin our exploration of Java files with those.
Review java.io.File and File Input: There are two short readings here. Both should be taken in detail.

8d. Explain error-handling via exceptions
- Under which types of situations are exceptions useful?
- What do exception handlers do? What do they prevent?
- What is the difference between error-checking and exception-handling?
- Is it possible for a programmer to define their own exceptions? If so, how?
It is not a matter of if but when things will go wrong in a computer program. I began my consulting career by approaching a potential customer to tell them that I had seen the source code to their main multi-user system and that they were sure to have multiple crashes: periods of time when their system would go down and have to be restarted. They replied that such a thing was not possible, that their users did not know enough to accomplish that. After asserting that it was precisely because their users did not know enough that the system would crash, given the software at hand, I waited a week. At the end of that week I visited the customer again, and they begged for help, since their system was spending more time crashing and restarting than getting anything useful accomplished. The fix was to insert easily accomplished exception-handling into the code; the job was done in one morning of work. Such is the importance of exception-handling. Taken in the most general sense, "exception" refers to an event that is unacceptable to the situational domain covered by the program. Anything outside that domain is an "exception". For instance, if the user is asked to input a value between 1 and 10 inclusive, steps must be taken to account for values outside that range. Similarly, if a file is supposed to exist, it is an "exception" when that file does not exist, or when the file contains unexpected data. Do not think this means you have to deal with an infinity of specific error situations. Rather, think in terms of, "Is this an expected situation?" If it is not, then an exceptional situation exists and must be declared and dealt with in some way that allows the program to keep operating. Given the relatively narrow range of the "acceptable", it is easy to identify the "unacceptable". Philosophically, the question is, "Is this situation within the domain of acceptability?" If it is not, then an exception exists.
Review Exceptions: When Things Go Wrong: 10.1, 10.2, 10.3

8e. Apply exception handling techniques
- What are the two constructs for identifying and handling exceptional (error) situations?
- How can programs best be designed for smoothly identifying and dealing with exceptional situations?
- Can a programmer create their own Java exceptions? If so, how?
- Explain exceptions thrown by the JVM.
if/else and try/catch are the two approaches we can use to identify and deal with situations that are outside a program's domain. try/catch is important since it allows the program to cope with errors generated by the JVM. For instance, if the program attempts to open a file that does not exist, the JVM will throw an exception; if the program does not check for JVM-thrown exceptions during the file-open attempt, it will crash when the exception occurs. try/catch is like saying, "if an exception occurs, then an error preventing continuation has occurred". If the exception is not caught and properly handled, the program crashes. Continuing with that thought: if the file is not found, what should happen? Should the program store all internal data, close all open files, and then exit? Should the program go into a quiescent mode and try again later? Should the program go off and do something else for a while? Should it ask for a different filename? That depends on what the program is supposed to accomplish and the system environment in which it exists. How you design your software is an important undertaking that makes all the difference in maintenance, expansion, and reliability.
Review Exceptions: When Things Go Wrong: 10.4, 10.5, 10.6, 10.7

Unit 8 Vocabulary
This vocabulary list includes the terms listed above that you will need to know to successfully complete the final exam.
- data streams
- File class
- Scanner class
- utility package
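To tie outcomes 8b, 8c, and 8e together, here is a minimal, self-contained sketch of reading a text file with the Scanner and File classes inside a try/catch block. It illustrates the general pattern only and is not taken from the course readings; the file name and class name are invented.

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

// Sketch: read numbers from a text file using Scanner + File,
// handling the missing-file case with try/catch instead of crashing.
public class FileSumSketch {
    public static void main(String[] args) {
        File input = new File("readings.txt");   // hypothetical file name
        try (Scanner in = new Scanner(input)) {  // try-with-resources closes the file for us
            double sum = 0.0;
            int count = 0;
            while (in.hasNextDouble()) {         // keep reading while another double is available
                sum += in.nextDouble();
                count++;
            }
            System.out.println("Read " + count + " values, sum = " + sum);
        } catch (FileNotFoundException e) {
            // The "exceptional" situation: the file we expected is not there.
            // Report it and stay in control rather than crashing.
            System.err.println("Could not open " + input.getName() + ": " + e.getMessage());
        }
    }
}

The same Scanner API works for keyboard input if the object is constructed with System.in instead of a File, which is one reason the unit pairs console I/O with file I/O.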
The relationship between inflation and unemployment can vary greatly depending on the underlying causes of inflation. Both high economic growth and large government budget deficits can lead to inflationary pressures on the economy. But while rampant economic growth results in low unemployment rates, a big deficit means fewer jobs and high rates of unemployment. Inflation is defined as a broad rise in price levels. It is measured by tracking the change in prices for a broad range of goods and calculating an average change. Two types of inflation rates are calculated and made public. In addition to the consumer price index, or CPI, the producer price index, or PPI, is also widely followed by economists. While the former measures how much more expensive the goods purchased by consumers have become, the latter tracks prices paid by manufacturers for such things as steel, cement and wood. It is generally assumed that an increase in the PPI results in an eventual increase in the CPI, because sooner or later manufacturers must pass on higher input prices to consumers. Excess growth in an economy can lead to inflationary pressures over the long run. As consumers demand more goods and services, producers will inevitably begin charging higher prices. Not only is it easy to ask for higher prices when demand is strong, but manufacturing larger and larger quantities raises production costs as well. Labor costs, in particular, will increase, as manufacturers must pay overtime rates, which tend to be more costly, to increase output. These higher costs will naturally be passed on to consumers and result in higher consumer price inflation. Therefore, if inflation is due to excess consumer demand, it will tend to come with plenty of jobs and low unemployment rates. High rates of inflation can also result from excessive government spending and extreme deficits. When the government spends more than it can collect in tax revenue, it must borrow to finance the shortfall. This borrowing is almost entirely done through the sale of government bonds. When the government needs to borrow excessive amounts, it must offer high rates of interest to entice sufficient numbers of investors. Such high interest rates lead to increasing prices as well, because the borrowing costs of manufacturers also increase. When investors can lend to the government at high rates of interest, they demand even higher rates of return when lending to private corporations, which carry far higher repayment risk. Such circumstances lead to both higher inflation and high rates of unemployment, as manufacturers must often lay off workers in response to increased borrowing costs. The causes and effects of inflation, unemployment, economic growth and government deficits can be highly complex. Every macroeconomic picture must therefore be analyzed individually, with such things as the savings rate, the qualifications of the workforce and other factors also considered. A high deficit and the resulting need for heavy borrowing may not result in high inflation if, for instance, most households in an economy save a great deal of their income and buy government bonds with their savings. Due to such demand, the government may not need to offer very high interest rates to elicit sufficient demand for bonds. Similarly, certain countries rely heavily on robots and other machine-intensive manufacturing techniques and may not need to pay a great deal in overtime wages to ramp up production levels.
"The optical and ultraviolet light from stars continues to travel throughout the universe even after the stars cease to shine, and this creates a fossil radiation field we can explore using gamma rays from distant sources," said lead scientist Marco Ajello, a postdoctoral researcher at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University in California and the Space Sciences Laboratory at the University of California at Berkeley. Gamma rays are the most energetic form of light. Since Fermi's launch in 2008, its Large Area Telescope (LAT) observes the entire sky in high-energy gamma rays every three hours, creating the most detailed map of the universe ever known at these energies. The total sum of starlight in the cosmos is known to astronomers as the extragalactic background light (EBL). To gamma rays, the EBL functions as a kind of cosmic fog. Ajello and his team investigated the EBL by studying gamma rays from 150 blazars, or galaxies powered by black holes, that were strongly detected at energies greater than 3 billion electron volts (GeV), or more than a billion times the energy of visible light. "With more than a thousand detected so far, blazars are the most common sources detected by Fermi, but gamma rays at these energies are few and far between, which is why it took four years of data to make this analysis," said team member Justin Finke, an astrophysicist at the Naval Research Laboratory in Washington. As matter falls toward a galaxy's supermassive black hole, some of it is accelerated outward at almost the speed of light in jets pointed in opposite directions. When one of the jets happens to be aimed in the direction of Earth, the galaxy appears especially bright and is classified as a blazar. Gamma rays produced in blazar jets travel across billions of light-years to Earth. During their journey, the gamma rays pass through an increasing fog of visible and ultraviolet light emitted by stars that formed throughout the history of the universe. Occasionally, a gamma ray collides with starlight and transforms into a pair of particles -- an electron and its antimatter counterpart, a positron. Once this occurs, the gamma ray light is lost. In effect, the process dampens the gamma ray signal in much the same way as fog dims a distant lighthouse. From studies of nearby blazars, scientists have determined how many gamma rays should be emitted at different energies. More distant blazars show fewer gamma rays at higher energies -- especially above 25 GeV -- thanks to absorption by the cosmic fog. The farthest blazars are missing most of their higher-energy gamma rays. The researchers then determined the average gamma-ray attenuation across three distance ranges between 9.6 billion years ago and today. From this measurement, the scientists were able to estimate the fog's thickness. To account for the observations, the average stellar density in the cosmos is about 1.4 stars per 100 billion cubic light-years, which means the average distance between stars in the universe is about 4,150 light-years. A paper describing the findings was published Thursday on Science Express. "The Fermi result opens up the exciting possibility of constraining the earliest period of cosmic star formation, thus setting the stage for NASA's James Webb Space Telescope," said Volker Bromm, an astronomer at the University of Texas, Austin, who commented on the findings. "In simple terms, Fermi is providing us with a shadow image of the first stars, whereas Webb will directly detect them." 
Measuring the extragalactic background light was one of the primary mission goals for Fermi. "We're very excited about the prospect of extending this measurement even farther," said Julie McEnery, the mission's project scientist at NASA's Goddard Space Flight Center in Greenbelt, Md. Goddard manages the Fermi astrophysics and particle physics research partnership. Fermi was developed in collaboration with the U.S. Department of Energy with contributions from academic institutions and partners in France, Germany, Italy, Japan, Sweden and the United States. Francis Reddy and J. D. Harrington | EurekAlert!
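As an illustrative back-of-the-envelope check (not part of the release), the stellar density and the average star separation quoted above are consistent: for a number density n, the typical spacing is roughly n to the power -1/3.

% Rough consistency check, using the figures quoted in the article:
% n = 1.4 stars per 10^{11} cubic light-years
\[
  d \approx n^{-1/3}
    = \left(\frac{10^{11}}{1.4}\right)^{1/3}\ \mathrm{ly}
    \approx \left(7.1\times10^{10}\right)^{1/3}\ \mathrm{ly}
    \approx 4.2\times10^{3}\ \mathrm{ly},
\]

in agreement with the roughly 4,150 light-years stated in the article.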
A solar cell, or photovoltaic cell, is an electrical device that converts the energy of light directly into electricity by the photovoltaic effect, which is a physical and chemical phenomenon. It is a form of photoelectric cell, defined as a device whose electrical characteristics, such as current, voltage, or resistance, vary when exposed to light. Individual solar cell devices can be combined to form modules, otherwise known as solar panels. In basic terms, a single-junction silicon solar cell can produce a maximum open-circuit voltage of approximately 0.5 to 0.6 volts. Solar cells are described as being photovoltaic irrespective of whether the source is sunlight or an artificial light. They can also be used as photodetectors (for example, infrared detectors), detecting light or other electromagnetic radiation near the visible range, or measuring light intensity. The operation of a photovoltaic (PV) cell requires three basic attributes:
- The absorption of light, generating either electron-hole pairs or excitons.
- The separation of charge carriers of opposite types.
- The separate extraction of those carriers to an external circuit.
In contrast, a solar thermal collector supplies heat by absorbing sunlight, for the purpose of either direct heating or indirect electrical power generation from heat. A "photoelectrolytic cell" (photoelectrochemical cell), on the other hand, refers either to a type of photovoltaic cell (like that developed by Edmond Becquerel and modern dye-sensitized solar cells), or to a device that splits water directly into hydrogen and oxygen using only solar illumination. Assemblies of solar cells are used to make solar modules that generate electrical power from sunlight, as distinguished from a "solar thermal module" or "solar hot water panel". A solar array generates usable solar power from sunlight.

Cells, modules, panels and systems
Multiple solar cells in an integrated group, all oriented in one plane, constitute a solar photovoltaic panel or module. Photovoltaic modules often have a sheet of glass on the sun-facing side, allowing light to pass while protecting the semiconductor wafers. Solar cells are usually connected in series within modules (and sometimes in series-parallel circuits), creating an additive voltage. Connecting cells in parallel yields a higher current; however, problems such as shadow effects can shut down the weaker (less illuminated) parallel string (a number of series-connected cells), causing substantial power loss and possible damage because of the reverse bias applied to the shadowed cells by their illuminated partners. Strings of series cells are usually handled independently and not connected in parallel, though as of 2014, individual power boxes are often supplied for each module and are connected in parallel. Although modules can be interconnected to create an array with the desired peak DC voltage and loading current capacity, using independent MPPTs (maximum power point trackers) is preferable. Otherwise, shunt diodes can reduce shadowing power loss in arrays with series/parallel connected cells.
[Table: installed PV system prices in USD/W for Australia, China, France, Germany, Italy, Japan, the United Kingdom and the United States; data rows missing in this copy. Source: IEA, Technology Roadmap: Solar Photovoltaic Energy, 2014 edition, p. 15. Note: the DOE Photovoltaic System Pricing Trends report gives lower prices for the U.S.]
The photovoltaic effect was experimentally demonstrated first by French physicist Edmond Becquerel. In 1839, at age 19, he built the world's first photovoltaic cell in his father's laboratory.
Willoughby Smith first described the "Effect of Light on Selenium during the passage of an Electric Current" in a 20 February 1873 issue of Nature. In 1883 Charles Fritts built the first solid-state photovoltaic cell by coating the semiconductor selenium with a thin layer of gold to form the junctions; the device was only around 1% efficient. Other milestones include:
- 1888 – Russian physicist Aleksandr Stoletov built the first cell based on the outer photoelectric effect discovered by Heinrich Hertz in 1887.
- 1905 – Albert Einstein proposed a new quantum theory of light and explained the photoelectric effect in a landmark paper, for which he received the Nobel Prize in Physics in 1921.
- 1941 – Vadim Lashkaryov discovered p-n junctions in Cu2O and Ag2S protocells.
- 1946 – Russell Ohl patented the modern junction semiconductor solar cell, while working on the series of advances that would lead to the transistor.
- 1954 – The first practical photovoltaic cell was publicly demonstrated at Bell Laboratories. The inventors were Calvin Souther Fuller, Daryl Chapin and Gerald Pearson.
- 1958 – Solar cells gained prominence with their incorporation onto the Vanguard I satellite.
Solar cells were first used in a prominent application when they were proposed and flown on the Vanguard satellite in 1958, as an alternative to the primary battery power source. By adding cells to the outside of the body, the mission time could be extended with no major changes to the spacecraft or its power systems. In 1959 the United States launched Explorer 6, featuring large wing-shaped solar arrays, which became a common feature in satellites. These arrays consisted of 9600 Hoffman solar cells. By the 1960s, solar cells were (and still are) the main power source for most Earth-orbiting satellites and a number of probes into the solar system, since they offered the best power-to-weight ratio. This success was possible because, in the space application, power system costs could be high: space users had few other power options and were willing to pay for the best possible cells. The space power market drove the development of higher efficiencies in solar cells up until the National Science Foundation "Research Applied to National Needs" program began to push development of solar cells for terrestrial applications. In the early 1990s the technology used for space solar cells diverged from the silicon technology used for terrestrial panels, with the spacecraft application shifting to gallium arsenide-based III-V semiconductor materials, which then evolved into the modern III-V multijunction photovoltaic cell used on spacecraft. Improvements were gradual over the 1960s. This was also the reason that costs remained high: space users were willing to pay for the best possible cells, leaving no reason to invest in lower-cost, less-efficient solutions. The price was determined largely by the semiconductor industry; its move to integrated circuits in the 1960s led to the availability of larger boules at lower relative prices, and as their price fell, the price of the resulting cells did as well. These effects lowered 1971 cell costs to some $100 per watt. In late 1969 Elliot Berman joined an Exxon task force that was looking for projects 30 years in the future, and in April 1973 he founded Solar Power Corporation (SPC), at that time a wholly owned subsidiary of Exxon.
The group had concluded that electrical power would be much more expensive by 2000, and felt that this increase in price would make alternative energy sources more attractive. He conducted a market study and concluded that a price of about $20 per watt would create significant demand. The team eliminated the steps of polishing the wafers and coating them with an anti-reflective layer, relying on the rough-sawn wafer surface. The team also replaced the expensive materials and hand wiring used in space applications with a printed circuit board on the back, acrylic plastic on the front, and silicone glue between the two, "potting" the cells. Solar cells could be made using cast-off material from the electronics market. By 1973 they announced a product, and SPC convinced Tideland Signal to use its panels to power navigational buoys, initially for the U.S. Coast Guard.

Research and industrial production
Research into solar power for terrestrial applications became prominent with the U.S. National Science Foundation's Advanced Solar Energy Research and Development Division within the "Research Applied to National Needs" program, which ran from 1969 to 1977 and funded research on developing solar power for ground electrical power systems. A 1973 conference, the "Cherry Hill Conference", set forth the technology goals required to achieve this goal and outlined an ambitious project for achieving them, kicking off an applied research program that would be ongoing for several decades. The program was eventually taken over by the Energy Research and Development Administration (ERDA), which was later merged into the U.S. Department of Energy. Following the 1973 oil crisis, oil companies used their higher profits to start (or buy) solar firms, and were for decades the largest producers. Exxon, ARCO, Shell, Amoco (later purchased by BP) and Mobil all had major solar divisions during the 1970s and 1980s. Technology companies also participated, including General Electric, Motorola, IBM, Tyco and RCA.

Declining costs and exponential growth
Adjusting for inflation, a solar module cost $96 per watt in the mid-1970s. Process improvements and a very large boost in production have brought that figure down 99%, to 68¢ per watt in 2016, according to data from Bloomberg New Energy Finance. Swanson's law is an observation, similar to Moore's law, that solar cell prices fall 20% for every doubling of industry capacity; it was featured in an article in the British weekly newspaper The Economist in late 2012. Further improvements reduced production cost to under $1 per watt, with wholesale costs well under $2. Balance-of-system costs were then higher than those of the panels. Large commercial arrays could be built, as of 2010, at below $3.40 a watt, fully commissioned. As the semiconductor industry moved to ever-larger boules, older equipment became inexpensive. Cell sizes grew as equipment became available on the surplus market; ARCO Solar's original panels used cells 2 to 4 inches (50 to 100 mm) in diameter. Panels in the 1990s and early 2000s generally used 125 mm wafers; since 2008, almost all new panels use 156 mm cells. The widespread introduction of flat-screen televisions in the late 1990s and early 2000s led to the wide availability of large, high-quality glass sheets to cover the panels. During the 1990s, polysilicon ("poly") cells became increasingly popular.
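As an illustrative check (not part of the article), Swanson's law can be written as a relation between module price and cumulative industry capacity; combining it with the price figures quoted above ($96 per watt in the mid-1970s, $0.68 per watt in 2016) gives the implied number of capacity doublings:

% Swanson's law: price falls 20% per doubling of cumulative capacity
\[
  p_n = p_0 \times 0.8^{\,n}
  \quad\Rightarrow\quad
  n = \frac{\ln(p_n / p_0)}{\ln 0.8}
    = \frac{\ln(0.68 / 96)}{\ln 0.8}
    \approx 22 \text{ doublings},
\]

which treats the whole price decline as if it came from the learning rate alone; in reality other factors (materials, scale, and glass and silicon prices) also contributed.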
These cells offer less efficiency than their monosilicon ("mono") counterparts, but they are grown in large vats that reduce cost. By the mid-2000s, poly was dominant in the low-cost panel market, but more recently monosilicon has returned to widespread use. Manufacturers of wafer-based cells responded to high silicon prices in 2004–2008 with rapid reductions in silicon consumption. In 2008, according to Jef Poortmans, director of IMEC's organic and solar department, cells used 8–9 grams (0.28–0.32 oz) of silicon per watt of power generation, with wafer thicknesses in the neighborhood of 200 microns. Crystalline silicon panels dominate worldwide markets and are mostly manufactured in China and Taiwan. By late 2011, a drop in European demand had pushed prices for crystalline solar modules down to about $1.09 per watt, sharply lower than in 2010. Prices continued to fall in 2012, reaching $0.62/watt by 4Q2012. Solar PV is growing fastest in Asia, with China and Japan currently accounting for half of worldwide deployment. Global installed PV capacity reached at least 301 gigawatts in 2016 and supplied 1.3% of global power that year. In fact, since 2004 the energy harnessed per dollar spent on silicon solar cells has surpassed the energy obtained per dollar spent on oil. It was anticipated that by 2020 electricity from PV would be competitive with wholesale electricity costs all across Europe and that the energy payback time of crystalline silicon modules could be reduced to below 0.5 years.
Subsidies and grid parity
Solar-specific feed-in tariffs vary by country and within countries. Such tariffs encourage the development of solar power projects. Widespread grid parity, the point at which photovoltaic electricity is equal to or cheaper than grid power without subsidies, likely requires further advances in cost and performance. Proponents of solar hope to achieve grid parity first in areas with abundant sun and high electricity costs, such as California and Japan. In 2007 BP claimed grid parity for Hawaii and other islands that otherwise use diesel fuel to produce electricity. George W. Bush set 2015 as the date for grid parity in the US. The Photovoltaic Association reported in 2012 that Australia had reached grid parity (ignoring feed-in tariffs). The price of solar panels fell steadily for 40 years, interrupted in 2004 when high subsidies in Germany drastically increased demand there and greatly increased the price of purified silicon (which is used in computer chips as well as solar panels). The recession of 2008 and the onset of Chinese manufacturing caused prices to resume their decline. In the four years after January 2008 prices for solar modules in Germany dropped from €3 to €1 per peak watt. During that same time production capacity surged with an annual growth of more than 50%. China increased market share from 8% in 2008 to over 55% in the last quarter of 2010. In December 2012 the price of Chinese solar panels had dropped to $0.60/Wp (crystalline modules). (The abbreviation Wp stands for watt peak capacity, or the maximum capacity under optimal conditions.) As of the end of 2016, it was reported that spot prices for assembled solar panels (not cells) had fallen to a record low of US$0.36/Wp. The second-largest supplier, Canadian Solar Inc., had reported costs of US$0.37/Wp in the third quarter of 2016, having dropped $0.02 from the previous quarter, and hence was probably still at least breaking even. Many producers expected costs would drop to the vicinity of $0.30 by the end of 2017.
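Swanson's law, cited above, lends itself to a simple back-of-the-envelope model: if prices fall roughly 20% with every doubling of cumulative industry capacity, the module price after n doublings is the starting price multiplied by 0.8^n. The Python sketch below is illustrative only; the reference capacity is an arbitrary assumed unit, and the $96-per-watt starting point is the inflation-adjusted mid-1970s figure quoted earlier.

```python
import math

def swanson_price(p0, c0, capacity, learning_rate=0.20):
    """Module price under Swanson's law: price falls by `learning_rate`
    (about 20%) for every doubling of cumulative industry capacity."""
    doublings = math.log2(capacity / c0)
    return p0 * (1.0 - learning_rate) ** doublings

p0, c0 = 96.0, 1.0   # $/W in the mid-1970s; reference capacity in arbitrary units
for n in (0, 10, 20, 22):
    price = swanson_price(p0, c0, c0 * 2 ** n)
    print(f"{n:2d} doublings -> ${price:6.2f} per watt")
```

Roughly 22 doublings take the price from $96 to about $0.71 per watt, close to the 68¢ figure reported for 2016; the point of the exercise is the exponential character of the decline rather than the exact numbers.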
It was also reported that new solar installations had become cheaper than coal-based thermal power plants in some regions of the world, and this was expected to be the case in most of the world within a decade. The solar cell works in several steps:
- Photons in sunlight hit the solar panel and are absorbed by semiconducting materials, such as silicon.
- Electrons are excited from their current molecular/atomic orbital. Once excited, an electron can either dissipate the energy as heat and return to its orbital or travel through the cell until it reaches an electrode. Current flows through the material to cancel the potential and this electricity is captured. The chemical bonds of the material are vital for this process to work, and usually silicon is used in two layers, one layer being doped with boron, the other with phosphorus. These layers have different chemical electric charges and subsequently both drive and direct the current of electrons.
- An array of solar cells converts solar energy into a usable amount of direct current (DC) electricity.
- An inverter can convert the power to alternating current (AC).
The most commonly known solar cell is configured as a large-area p–n junction made from silicon. Other possible solar cell types are organic solar cells, dye-sensitized solar cells, perovskite solar cells, quantum dot solar cells, etc. The illuminated side of a solar cell generally has a transparent conducting film to allow light to enter the active material and to collect the generated charge carriers. Typically, films with high transmittance and high electrical conductance, such as indium tin oxide, conducting polymers or conducting nanowire networks, are used for this purpose. Solar cell efficiency may be broken down into reflectance efficiency, thermodynamic efficiency, charge carrier separation efficiency and conductive efficiency. The overall efficiency is the product of these individual metrics. A solar cell has a voltage-dependent efficiency curve, temperature coefficients, and allowable shadow angles. Due to the difficulty in measuring these parameters directly, other parameters are substituted: thermodynamic efficiency, quantum efficiency, integrated quantum efficiency, VOC ratio, and fill factor. Reflectance losses are a portion of quantum efficiency under "external quantum efficiency". Recombination losses make up another portion of quantum efficiency, VOC ratio, and fill factor. Resistive losses are predominantly categorized under fill factor, but also make up minor portions of quantum efficiency and VOC ratio. The fill factor is the ratio of the actual maximum obtainable power to the product of the open-circuit voltage and short-circuit current; it is a key parameter in evaluating performance (a numerical sketch is given below). In 2009, typical commercial solar cells had a fill factor > 0.70. Grade B cells were usually between 0.4 and 0.7. Cells with a high fill factor have a low equivalent series resistance and a high equivalent shunt resistance, so less of the current produced by the cell is dissipated in internal losses. Single p–n junction crystalline silicon devices are now approaching the theoretical limiting power efficiency of 33.16%, noted as the Shockley–Queisser limit in 1961. In the extreme, with an infinite number of layers, the corresponding limit is 86% using concentrated sunlight. In 2014, three companies broke the record of 25.6% for a silicon solar cell. Panasonic's was the most efficient. The company moved the front contacts to the rear of the panel, eliminating shaded areas.
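The operating principle and the fill-factor definition above can be illustrated with the textbook single-diode model of a cell, in which the terminal current is the photogenerated current minus the diode dark current. The sketch below is a minimal illustration, not a model of any particular device: the photocurrent and saturation current are assumed example values, and series and shunt resistance are ignored.

```python
import math

V_T = 0.02585          # thermal voltage kT/q at ~300 K, in volts
I_L = 8.0              # assumed photogenerated current, amperes
I_0 = 1e-9             # assumed diode saturation current, amperes

def cell_current(v):
    """Ideal single-diode model: I(V) = I_L - I_0 * (exp(V / V_T) - 1)."""
    return I_L - I_0 * (math.exp(v / V_T) - 1.0)

# Sweep the voltage to find the open-circuit voltage and the maximum power point.
v, v_mpp, p_max = 0.0, 0.0, 0.0
while cell_current(v) > 0.0:
    p = v * cell_current(v)
    if p > p_max:
        v_mpp, p_max = v, p
    v += 0.001
v_oc, i_sc = v, cell_current(0.0)

fill_factor = p_max / (v_oc * i_sc)   # FF = Pmax / (Voc * Isc)
print(f"Voc ~ {v_oc:.3f} V, Isc ~ {i_sc:.2f} A")
print(f"Pmax ~ {p_max:.2f} W at {v_mpp:.3f} V, fill factor ~ {fill_factor:.2f}")
```

Because this idealized cell has no series or shunt resistance, its fill factor comes out around 0.83, noticeably higher than the >0.70 typical of commercial cells mentioned above; real resistive and recombination losses pull the value down.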
In addition, Panasonic applied thin silicon films to the (high-quality silicon) wafer's front and back to eliminate defects at or near the wafer surface. In 2015, a 4-junction GaInP/GaAs//GaInAsP/GaInAs solar cell achieved a new laboratory record efficiency of 46.1 percent (concentration ratio of sunlight = 312) in a French-German collaboration between the Fraunhofer Institute for Solar Energy Systems (Fraunhofer ISE), CEA-LETI and SOITEC. In September 2015, Fraunhofer ISE announced the achievement of an efficiency above 20% for epitaxial wafer cells. The work on optimizing the atmospheric-pressure chemical vapor deposition (APCVD) in-line production chain was done in collaboration with NexWafe GmbH, a company spun off from Fraunhofer ISE to commercialize production. For triple-junction thin-film solar cells, the world record is 13.6%, set in June 2015. In 2017, a team of researchers at the National Renewable Energy Laboratory (NREL), EPFL and CSEM (Switzerland) reported record one-sun efficiencies of 32.8% for dual-junction GaInP/GaAs solar cell devices. In addition, the dual-junction device was mechanically stacked with a Si solar cell to achieve a record one-sun efficiency of 35.9% for triple-junction solar cells. Solar cells are typically named after the semiconducting material they are made of. These materials must have certain characteristics in order to absorb sunlight. Some cells are designed to handle sunlight that reaches the Earth's surface, while others are optimized for use in space. Solar cells can be made of only one single layer of light-absorbing material (single-junction) or use multiple physical configurations (multi-junctions) to take advantage of various absorption and charge separation mechanisms. Solar cells can be classified into first, second and third generation cells. The first generation cells—also called conventional, traditional or wafer-based cells—are made of crystalline silicon, the commercially predominant PV technology, which includes materials such as polysilicon and monocrystalline silicon. Second generation cells are thin-film solar cells, which include amorphous silicon, CdTe and CIGS cells; they are commercially significant in utility-scale photovoltaic power stations, building-integrated photovoltaics and small stand-alone power systems. The third generation of solar cells includes a number of thin-film technologies often described as emerging photovoltaics—most of them have not yet been commercially applied and are still in the research or development phase. Many use organic materials, often organometallic compounds as well as inorganic substances. Although their efficiencies have been low and the stability of the absorber material has often been too poor for commercial applications, a great deal of research is invested in these technologies because they promise to achieve the goal of producing low-cost, high-efficiency solar cells. By far the most prevalent bulk material for solar cells is crystalline silicon (c-Si), also known as "solar grade silicon". Bulk silicon is separated into multiple categories according to crystallinity and crystal size in the resulting ingot, ribbon or wafer. These cells are entirely based around the concept of a p–n junction. Solar cells made of c-Si are made from wafers between 160 and 240 micrometers thick. Monocrystalline silicon (mono-Si) solar cells are more efficient and more expensive than most other types of cells.
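A quick calculation puts those wafer dimensions in perspective. The sketch below uses the 156 mm cell edge mentioned earlier and an assumed 200 µm thickness (within the 160–240 µm range above), together with the density of crystalline silicon and an assumed 20% cell efficiency; all of these are illustrative values, not measurements.

```python
# Rough silicon content of one wafer-based cell (illustrative values only).
edge_cm = 15.6         # 156 mm cell edge, as used in most panels since 2008
thickness_cm = 0.020   # 200 um, within the 160-240 um range quoted above
density = 2.33         # g/cm^3, density of crystalline silicon
efficiency = 0.20      # assumed cell efficiency
irradiance = 0.1       # W/cm^2, i.e. the standard 1000 W/m^2 test condition

area = edge_cm ** 2                      # ~243 cm^2, ignoring the clipped corners
mass = area * thickness_cm * density     # ~11 g of silicon in the finished cell
power = area * irradiance * efficiency   # ~4.9 W per cell under test conditions
print(f"mass ~ {mass:.1f} g, power ~ {power:.1f} W, ~{mass / power:.1f} g of silicon per watt")
```

This counts only the silicon that ends up in the finished cell; industry consumption figures such as the 8–9 g per watt quoted above for 2008 are higher, in part because they also include material lost during ingot and wafer processing.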
The corners of mono-Si cells look clipped, like an octagon, because the wafer material is cut from cylindrical ingots that are typically grown by the Czochralski process. Solar panels using mono-Si cells display a distinctive pattern of small white diamonds. Epitaxial wafers of crystalline silicon can be grown on a monocrystalline silicon "seed" wafer by chemical vapor deposition (CVD), and then detached as self-supporting wafers of some standard thickness (e.g., 250 µm) that can be manipulated by hand and directly substituted for wafer cells cut from monocrystalline silicon ingots. Solar cells made with this "kerfless" technique can have efficiencies approaching those of wafer-cut cells, but at appreciably lower cost if the CVD can be done at atmospheric pressure in a high-throughput inline process. The surface of epitaxial wafers may be textured to enhance light absorption. Polycrystalline silicon, or multicrystalline silicon (multi-Si), cells are made from cast square ingots—large blocks of molten silicon carefully cooled and solidified. They consist of small crystals giving the material its typical metal flake effect. Polysilicon cells are the most common type used in photovoltaics and are less expensive, but also less efficient, than those made from monocrystalline silicon. Ribbon silicon is a type of polycrystalline silicon: it is formed by drawing flat thin films from molten silicon and results in a polycrystalline structure. These cells are cheaper to make than multi-Si, due to a great reduction in silicon waste, as this approach does not require sawing from ingots. However, they are also less efficient. A newer form, often called cast-mono, was developed in the 2000s and introduced commercially around 2009. This design uses polycrystalline casting chambers with small "seeds" of mono material. The result is a bulk mono-like material that is polycrystalline around the outsides. When sliced for processing, the inner sections are high-efficiency mono-like cells (but square instead of "clipped"), while the outer edges are sold as conventional poly. This production method results in mono-like cells at poly-like prices. Thin-film technologies reduce the amount of active material in a cell. Most designs sandwich the active material between two panes of glass. Since silicon solar panels only use one pane of glass, thin-film panels are approximately twice as heavy as crystalline silicon panels (a rough estimate is sketched below), although they have a smaller ecological impact (determined from life cycle analysis). Cadmium telluride is the only thin-film material so far to rival crystalline silicon in cost/watt. However, cadmium is highly toxic and tellurium (anion: "telluride") supplies are limited. The cadmium present in the cells would be toxic if released. However, release is impossible during normal operation of the cells and is unlikely during fires in residential roofs. A square meter of CdTe contains approximately the same amount of Cd as a single C cell nickel-cadmium battery, in a more stable and less soluble form. Copper indium gallium selenide (CIGS) is a direct band gap material. It has the highest efficiency (~20%) among all commercially significant thin-film materials (see CIGS solar cell). Traditional methods of fabrication involve vacuum processes including co-evaporation and sputtering. Recent developments at IBM and Nanosolar attempt to lower the cost by using non-vacuum solution processes.
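The weight comparison made above between glass–glass thin-film panels and single-glass crystalline panels can be sanity-checked with a little arithmetic. The pane thickness below (3.2 mm) and the assumption that the glass dominates the module's weight are both assumptions made for the sake of illustration; the density of soda-lime glass (about 2.5 g/cm³) is a standard value.

```python
# Rough module weight comparison: glass-glass (thin film) vs. glass-backsheet (c-Si).
glass_density = 2.5      # g/cm^3, typical soda-lime glass
pane_thickness = 0.32    # cm, an assumed 3.2 mm pane

glass_per_pane = glass_density * pane_thickness * 1e4 / 1e3   # kg of glass per m^2 of pane
print(f"one pane:  ~{glass_per_pane:.0f} kg/m^2 of glass (typical crystalline panel)")
print(f"two panes: ~{2 * glass_per_pane:.0f} kg/m^2 of glass (typical thin-film panel)")
```

If glass accounts for most of the module mass, doubling the glass roughly doubles the weight, consistent with the comparison above.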
Silicon thin-film cells are mainly deposited by chemical vapor deposition (typically plasma-enhanced, PE-CVD) from silane gas and hydrogen gas. Depending on the deposition parameters, this can yield amorphous silicon (a-Si or a-Si:H), protocrystalline silicon or nanocrystalline silicon (nc-Si or nc-Si:H), also called microcrystalline silicon. Amorphous silicon is the most well-developed thin-film technology to date. An amorphous silicon (a-Si) solar cell is made of non-crystalline or microcrystalline silicon. Amorphous silicon has a higher bandgap (1.7 eV) than crystalline silicon (c-Si) (1.1 eV), which means it absorbs the visible part of the solar spectrum more strongly than the higher-power-density infrared portion of the spectrum. The production of a-Si thin-film solar cells uses glass as a substrate and deposits a very thin layer of silicon by plasma-enhanced chemical vapor deposition (PECVD). Protocrystalline silicon with a low volume fraction of nanocrystalline silicon is optimal for high open-circuit voltage. Nc-Si has about the same bandgap as c-Si, and nc-Si and a-Si can advantageously be combined in thin layers, creating a layered cell called a tandem cell. The top cell in a-Si absorbs the visible light and leaves the infrared part of the spectrum for the bottom cell in nc-Si. The semiconductor material gallium arsenide (GaAs) is also used for single-crystalline thin-film solar cells. Although GaAs cells are very expensive, they hold the world record in efficiency for a single-junction solar cell at 28.8%. GaAs is more commonly used in multijunction photovoltaic cells for concentrated photovoltaics (CPV, HCPV) and for solar panels on spacecraft, as the industry favours efficiency over cost for space-based solar power. Based on the previous literature and some theoretical analysis, there are several reasons why GaAs has such a high power conversion efficiency. First, the GaAs bandgap is 1.43 eV, which is almost ideal for solar cells. Second, GaAs cells are relatively insensitive to heat and can keep a high efficiency when the temperature is quite high. Third, GaAs offers a wide range of design options: using GaAs as the active layer, engineers can choose among many other layers that better generate electrons and holes in GaAs. Multi-junction cells consist of multiple thin films, each essentially a solar cell grown on top of another, typically using metalorganic vapour phase epitaxy. Each layer has a different band gap energy to allow it to absorb electromagnetic radiation over a different portion of the spectrum. Multi-junction cells were originally developed for special applications such as satellites and space exploration, but are now used increasingly in terrestrial concentrator photovoltaics (CPV), an emerging technology that uses lenses and curved mirrors to concentrate sunlight onto small, highly efficient multi-junction solar cells. By concentrating sunlight up to a thousand times, high-concentration photovoltaics (HCPV) has the potential to outcompete conventional solar PV in the future. Tandem solar cells based on monolithic, series-connected gallium indium phosphide (GaInP), gallium arsenide (GaAs), and germanium (Ge) p–n junctions are increasing in sales, despite cost pressures. Between December 2006 and December 2007, the cost of 4N gallium metal (gallium is itself a by-product of the smelting of other metals) rose from about $350 per kg to $680 per kg. Additionally, germanium metal prices rose substantially to $1000–1200 per kg over the same period.
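The bandgaps quoted above (about 1.7 eV for a-Si, 1.1 eV for c-Si and nc-Si, and 1.43 eV for GaAs) directly determine the longest wavelength each material can absorb, since a photon must carry at least the bandgap energy. The conversion λ ≈ 1240 nm·eV / E_g is standard physics; the short sketch below simply applies it to the values already given in the text to show why stacking a wide-gap cell on a narrow-gap one splits the spectrum.

```python
# Absorption cutoff wavelength from bandgap: lambda_cutoff = h*c / E_g ~ 1240 nm.eV / E_g
bandgaps_eV = {
    "a-Si (wide-gap top cell)": 1.7,
    "GaAs": 1.43,
    "c-Si / nc-Si (narrow-gap bottom cell)": 1.1,
}
for material, e_g in bandgaps_eV.items():
    cutoff_nm = 1240.0 / e_g   # photons with longer wavelengths are not absorbed
    print(f"{material:38s} Eg = {e_g:.2f} eV -> cutoff ~ {cutoff_nm:.0f} nm")
```

Photons with wavelengths beyond roughly 730 nm pass through the a-Si top cell and can still be absorbed by the nc-Si bottom cell, which absorbs out to about 1130 nm; that is the spectrum-splitting behaviour of the tandem cell described above.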
The key input materials include gallium (4N, 6N and 7N Ga), arsenic (4N, 6N and 7N) and germanium, along with pyrolytic boron nitride (pBN) crucibles for growing crystals and boron oxide; these products are critical to the entire substrate manufacturing industry. A triple-junction cell, for example, may consist of the semiconductors GaAs, Ge and GaInP2. Triple-junction GaAs solar cells were used as the power source of the Dutch four-time World Solar Challenge winners Nuna in 2003, 2005 and 2007, and by the Dutch solar cars Solutra (2005), Twente One (2007) and 21Revolution (2009). GaAs-based multi-junction devices are the most efficient solar cells to date. On 15 October 2012, triple-junction metamorphic cells reached a record high of 44%. In 2016, a new approach was described for producing hybrid photovoltaic wafers combining the high efficiency of III-V multi-junction solar cells with the economies and wealth of experience associated with silicon. The technical complications involved in growing the III-V material on silicon at the required high temperatures, a subject of study for some 30 years, are avoided by epitaxial growth of silicon on GaAs at low temperature by plasma-enhanced chemical vapor deposition (PECVD). Si single-junction solar cells have been widely studied for decades and are reaching their practical efficiency of ~26% under 1-sun conditions. Increasing this efficiency may require adding more cells with bandgap energy larger than 1.1 eV to the Si cell, allowing short-wavelength photons to be converted for the generation of additional voltage. A dual-junction solar cell with a band gap of 1.6–1.8 eV as a top cell can reduce thermalization loss, produce a high external radiative efficiency and achieve theoretical efficiencies over 45%. A tandem cell can be fabricated by growing the GaInP and Si cells separately; this overcomes the 4% lattice-constant mismatch between Si and the most common III–V layers, which otherwise prevents direct integration into one cell. The two cells are therefore separated by a transparent glass slide so that the lattice mismatch does not strain the system. This creates a cell with four electrical contacts and two junctions that demonstrated an efficiency of 18.1%. With a fill factor (FF) of 76.2%, the Si bottom cell reaches an efficiency of 11.7% (± 0.4) in the tandem device, resulting in a cumulative tandem cell efficiency of 29.8% (18.1% from the III–V junctions plus 11.7% from the silicon bottom cell). This efficiency exceeds the theoretical limit of 29.4% and the record experimental efficiency value of a Si 1-sun solar cell, and is also higher than the record-efficiency 1-sun GaAs device. However, using a GaAs substrate is expensive and not practical. Hence researchers try to make a cell with two electrical contact points and one junction, which does not need a GaAs substrate. This means there will be direct integration of GaInP and Si.
Research in solar cells
Perovskite solar cells
Perovskite solar cells are solar cells that include a perovskite-structured material as the active layer. Most commonly, this is a solution-processed hybrid organic-inorganic tin or lead halide based material. Efficiencies have increased from below 5% at their first usage in 2009 to over 20% in 2014, making them a very rapidly advancing technology and a hot topic in the solar cell field. Perovskite solar cells are also forecast to be extremely cheap to scale up, making them a very attractive option for commercialisation.
So far most types of perovskite solar cells have not reached sufficient operational stability to be commercialised, although many research groups are investigating ways to solve this.
Bifacial solar cells
With a transparent rear side, bifacial solar cells can absorb light from both the front and rear sides. Hence, they can produce more electricity than conventional monofacial solar cells. The first patent for bifacial solar cells was filed by Japanese researcher Hiroshi Mori in 1966. Later, Russia is said to have been the first to deploy bifacial solar cells, in its space program in the 1970s. In 1976, the Institute for Solar Energy of the Technical University of Madrid began a research program for the development of bifacial solar cells, led by Prof. Antonio Luque. Based on 1977 US and Spanish patents by Luque, a practical bifacial cell was proposed with a front face as anode and a rear face as cathode; in previously reported proposals and attempts both faces were anodic and interconnection between cells was complicated and expensive. In 1980, Andrés Cuevas, a PhD student in Luque's team, demonstrated experimentally a 50% increase in the output power of bifacial solar cells, relative to identically oriented and tilted monofacial ones, when a white background was provided. In 1981 the company Isofoton was founded in Málaga to produce the developed bifacial cells, thus becoming the first industrialization of this PV cell technology. With an initial production capacity of 300 kW/yr of bifacial solar cells, early landmarks of Isofoton's production were the 20 kWp power plant in San Agustín de Guadalix, built in 1986 for Iberdrola, and a 20 kWp off-grid installation completed by 1988 in the village of Noto Gouye Diama (Senegal), funded by Spanish international aid and cooperation programs. Thanks to reduced manufacturing costs, companies have again been producing commercial bifacial modules since 2010. By 2017, there were at least eight certified PV manufacturers providing bifacial modules in North America. The International Technology Roadmap for Photovoltaics (ITRPV) has predicted that the global market share of bifacial technology will expand from less than 5% in 2016 to 30% in 2027. Due to the significant interest in bifacial technology, a recent study has investigated the performance and optimization of bifacial solar modules worldwide. The results indicate that, across the globe, ground-mounted bifacial modules can only offer ~10% gain in annual electricity yield compared to their monofacial counterparts for a ground albedo coefficient of 25% (typical for concrete and vegetation groundcovers). However, the gain can be increased to ~30% by elevating the module 1 m above the ground and enhancing the ground albedo coefficient to 50%. Sun et al. also derived a set of empirical equations that can optimize bifacial solar modules analytically. An online simulation tool is available to model the performance of bifacial modules in any arbitrary location across the world. It can also optimize bifacial modules as a function of tilt angle, azimuth angle, and elevation above the ground.
Intermediate band photovoltaics in solar cell research provides methods for exceeding the Shockley–Queisser limit on the efficiency of a cell. It introduces an intermediate band (IB) energy level in between the valence and conduction bands. Theoretically, introducing an IB allows two photons with energy less than the bandgap to excite an electron from the valence band to the conduction band.
This increases the induced photocurrent and thereby the efficiency. Luque and Martí first derived a theoretical limit for an IB device with one midgap energy level using detailed balance. They assumed no carriers were collected at the IB and that the device was under full concentration. They found the maximum efficiency to be 63.2%, for a bandgap of 1.95 eV with the IB 0.71 eV from either the valence or conduction band. Under one-sun illumination the limiting efficiency is 47%.
Upconversion and downconversion
Photon upconversion is the process of using two low-energy (e.g., infrared) photons to produce one higher-energy photon; downconversion is the process of using one high-energy photon (e.g., ultraviolet) to produce two lower-energy photons. Either of these techniques could be used to produce higher-efficiency solar cells by allowing solar photons to be used more efficiently. The difficulty, however, is that the conversion efficiency of existing phosphors exhibiting up- or down-conversion is low, and is typically narrow-band. One upconversion technique is to incorporate lanthanide-doped materials (such as Er3+, alone or in combination with other lanthanide ions), taking advantage of their luminescence to convert infrared radiation to visible light. The upconversion process occurs when two infrared photons are absorbed by rare-earth ions to generate a (high-energy) absorbable photon. For example, the energy transfer upconversion process (ETU) consists of successive transfer processes between excited ions in the near infrared. The upconverter material could be placed below the solar cell to absorb the infrared light that passes through the silicon. Useful ions are most commonly found in the trivalent state; Er3+ ions have been the most used. Er3+ ions absorb solar radiation around 1.54 µm. Two Er3+ ions that have absorbed this radiation can interact with each other through an upconversion process. The excited ion emits light above the Si bandgap that is absorbed by the solar cell and creates an additional electron–hole pair that can generate current. However, the increase in efficiency has been small. In addition, fluoroindate glasses have low phonon energy and have been proposed as a suitable matrix doped with Ho3+ ions.
Dye-sensitized solar cells (DSSCs) are made of low-cost materials and do not need elaborate manufacturing equipment, so they can be made in a DIY fashion. In bulk they should be significantly less expensive than older solid-state cell designs. DSSCs can be engineered into flexible sheets, and although their conversion efficiency is less than that of the best thin-film cells, their price/performance ratio may be high enough to allow them to compete with fossil fuel electrical generation. Typically a ruthenium metalorganic dye (Ru-centered) is used as a monolayer of light-absorbing material. The dye-sensitized solar cell depends on a mesoporous layer of nanoparticulate titanium dioxide to greatly amplify the surface area (200–300 m2/g of TiO2, as compared to approximately 10 m2/g of a flat single crystal). The photogenerated electrons from the light-absorbing dye are passed on to the n-type TiO2 and the holes are absorbed by an electrolyte on the other side of the dye. The circuit is completed by a redox couple in the electrolyte, which can be liquid or solid. This type of cell allows more flexible use of materials and is typically manufactured by screen printing or ultrasonic nozzles, with the potential for lower processing costs than those used for bulk solar cells.
However, the dyes in these cells also suffer from degradation under heat and UV light, and the cell casing is difficult to seal because of the solvents used in assembly. The first commercial shipment of DSSC solar modules occurred in July 2009 from G24i Innovations. Quantum dot solar cells (QDSCs) are based on the Grätzel cell, or dye-sensitized solar cell, architecture, but employ low-band-gap semiconductor nanoparticles, fabricated with crystallite sizes small enough to form quantum dots (such as CdS, CdSe, Sb2S3, PbS, etc.), instead of organic or organometallic dyes as light absorbers. Due to the toxicity associated with Cd- and Pb-based compounds, a series of "green" QD sensitizing materials is also in development (such as CuInS2, CuInSe2 and CuInSeS). The size quantization of QDs allows the band gap to be tuned by simply changing the particle size. They also have high extinction coefficients and have shown the possibility of multiple exciton generation. In a QDSC, a mesoporous layer of titanium dioxide nanoparticles forms the backbone of the cell, much like in a DSSC. This TiO2 layer can then be made photoactive by coating with semiconductor quantum dots using chemical bath deposition, electrophoretic deposition or successive ionic layer adsorption and reaction. The electrical circuit is then completed through the use of a liquid or solid redox couple. The efficiency of QDSCs has increased to over 5% for both liquid-junction and solid-state cells, with a reported peak efficiency of 11.91%. In an effort to decrease production costs, the Prashant Kamat research group demonstrated a solar paint made with TiO2 and CdSe that can be applied using a one-step method to any conductive surface, with efficiencies over 1%. However, the absorption of quantum dots (QDs) in QDSCs is weak at room temperature. Plasmonic nanoparticles (e.g., nanostars) can be utilized to address the weak absorption of QDs. Adding an external infrared pumping source to excite intraband and interband transitions of QDs is another solution.
Organic/polymer solar cells
Organic solar cells and polymer solar cells are built from thin films (typically 100 nm) of organic semiconductors, including polymers such as polyphenylene vinylene and small-molecule compounds like copper phthalocyanine (a blue or green organic pigment) and carbon fullerenes and fullerene derivatives such as PCBM. They can be processed from liquid solution, offering the possibility of a simple roll-to-roll printing process, potentially leading to inexpensive, large-scale production. In addition, these cells could be beneficial for some applications where mechanical flexibility and disposability are important. Current cell efficiencies are, however, very low, and practical devices are essentially non-existent. Energy conversion efficiencies achieved to date using conductive polymers are very low compared to inorganic materials. However, Konarka Power Plastic reached an efficiency of 8.3%, and organic tandem cells reached 11.1% in 2012. The active region of an organic device consists of two materials, one electron donor and one electron acceptor. When a photon is converted into an electron–hole pair, typically in the donor material, the charges tend to remain bound in the form of an exciton, separating when the exciton diffuses to the donor–acceptor interface, unlike in most other solar cell types. The short exciton diffusion lengths of most polymer systems tend to limit the efficiency of such devices.
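The limitation imposed by short exciton diffusion lengths can be made concrete with a very crude one-dimensional model: an exciton created at depth x from the single collecting donor–acceptor interface is taken to reach it with probability roughly exp(−x/L_D), and this is averaged over the film. The 100 nm film thickness matches the typical value quoted above; the ~10 nm diffusion length is an assumed, order-of-magnitude figure rather than a number from this article.

```python
import math

def fraction_harvested(thickness_nm, diffusion_length_nm, samples=1000):
    """Crude 1-D estimate: an exciton generated at depth x survives the walk to
    the single collecting interface with probability ~exp(-x / L_D); average
    this over uniform generation through a film of the given thickness."""
    total = 0.0
    for i in range(samples):
        x = (i + 0.5) * thickness_nm / samples   # depth at which the exciton is created
        total += math.exp(-x / diffusion_length_nm)
    return total / samples

L_D = 10.0   # assumed exciton diffusion length in nm
for d in (10, 50, 100):   # 100 nm is the typical film thickness quoted above
    print(f"{d:3d} nm film: ~{fraction_harvested(d, L_D):.0%} of excitons reach the interface")
```

Even in this oversimplified picture a planar 100 nm film harvests only a small fraction of its excitons, which is why nanostructured donor–acceptor interfaces that place an interface within a diffusion length of most absorption sites help so much, as discussed next.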
Nanostructured interfaces, sometimes in the form of bulk heterojunctions, can improve performance. In 2011, MIT and Michigan State researchers developed solar cells with a power efficiency close to 2% and a transparency to the human eye greater than 65%, achieved by selectively absorbing the ultraviolet and near-infrared parts of the spectrum with small-molecule compounds. Researchers at UCLA more recently developed an analogous polymer solar cell, following the same approach, that is 70% transparent and has a 4% power conversion efficiency. These lightweight, flexible cells can be produced in bulk at a low cost and could be used to create power-generating windows. In 2013, researchers announced polymer cells with some 3% efficiency. They used block copolymers, self-assembling organic materials that arrange themselves into distinct layers. The research focused on P3HT-b-PFTBT, which separates into bands some 16 nanometers wide.
Adaptive cells change their absorption/reflection characteristics in response to environmental conditions. An adaptive material responds to the intensity and angle of incident light. At the part of the cell where the light is most intense, the cell surface changes from reflective to adaptive, allowing the light to penetrate the cell. The other parts of the cell remain reflective, increasing the retention of the absorbed light within the cell. In 2014, a system was developed that combined an adaptive surface with a glass substrate that redirects the absorbed light to an absorber on the edges of the sheet. The system also includes an array of fixed lenses/mirrors to concentrate light onto the adaptive surface. As the day continues, the concentrated light moves along the surface of the cell. That surface switches from reflective to adaptive when the light is most concentrated and back to reflective after the light moves along.
In recent years, researchers have been trying to reduce the price of solar cells while maximizing efficiency. Thin-film solar cells are a cost-effective second-generation technology with much-reduced thickness, at the expense of light absorption efficiency. Efforts have been made to maximize light absorption efficiency at reduced thickness. Surface texturing is one of the techniques used to reduce optical losses and maximize the light absorbed. Currently, surface texturing techniques on silicon photovoltaics are drawing much attention. Surface texturing can be done in multiple ways. Etching a single-crystalline silicon substrate with anisotropic etchants can produce randomly distributed square-based pyramids on the surface. Recent studies show that c-Si wafers can be etched down to form nano-scale inverted pyramids. Multicrystalline silicon solar cells, due to their poorer crystallographic quality, are less effective than single-crystal solar cells, but mc-Si solar cells are still widely used because they present fewer manufacturing difficulties. It is reported that multicrystalline solar cells can be surface-textured to yield solar energy conversion efficiency comparable to that of monocrystalline silicon cells, through isotropic etching or photolithography techniques. Unlike rays striking a flat surface, light incident on a textured surface does not simply reflect back out into the air; instead, some rays are bounced onto another facet of the surface, giving them a second chance to be absorbed. This significantly improves the light-to-electricity conversion efficiency, due to increased light absorption.
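One way to see why texturing (and anti-reflection coatings) matter so much is the normal-incidence Fresnel reflectance R = ((n1 − n2)/(n1 + n2))², a standard optics formula. The refractive index of silicon used below (about 3.9 in the visible) is an assumed typical value, not a figure from this article; the double-bounce estimate simply assumes that a ray reflected off one pyramid facet strikes a neighbouring facet, as described above.

```python
def reflectance(n1, n2):
    """Normal-incidence Fresnel reflectance at an interface between media n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_si = 1.0, 3.9   # assumed refractive index of silicon in the visible
single_bounce = reflectance(n_air, n_si)
print(f"flat, uncoated silicon reflects ~{single_bounce:.0%} of normally incident light")
print(f"after a second bounce on a textured surface, only ~{single_bounce**2:.0%} is lost")
```

Anti-reflection coatings, discussed below, reduce this loss further.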
This texturing effect, together with the interaction with other interfaces in the PV module, is a challenging optical simulation task. A particularly efficient method for modeling and optimization is the OPTOS formalism. In 2012, researchers at MIT reported that c-Si films textured with nanoscale inverted pyramids could achieve light absorption comparable to that of planar c-Si 30 times thicker. In combination with an anti-reflective coating, the surface texturing technique can effectively trap light rays within a thin-film silicon solar cell. Consequently, the thickness required for solar cells decreases with the increased absorption of light rays. Solar cells are commonly encapsulated in a transparent polymeric resin to protect the delicate solar cell regions from coming into contact with moisture, dirt, ice, and other conditions expected either during operation or when used outdoors. The encapsulants are commonly made from polyvinyl acetate or glass. Most encapsulants are uniform in structure and composition, which increases light collection owing to light trapping from total internal reflection of light within the resin. Research has been conducted into structuring the encapsulant to provide further collection of light. Such encapsulants have included roughened glass surfaces, diffractive elements, prism arrays, air prisms, v-grooves, diffuse elements, as well as multi-directional waveguide arrays. Prism arrays show an overall 5% increase in total solar energy conversion. Arrays of vertically aligned broadband waveguides provide a 10% increase at normal incidence, as well as wide-angle collection enhancement of up to 4%, with optimized structures yielding up to a 20% increase in short-circuit current. Active coatings that convert infrared light into visible light have shown a 30% increase. Nanoparticle coatings inducing plasmonic light scattering increase wide-angle conversion efficiency by up to 3%. Optical structures have also been created in encapsulation materials to effectively "cloak" the metallic front contacts. Solar cells share some of the same processing and manufacturing techniques as other semiconductor devices. However, the stringent requirements for cleanliness and quality control of semiconductor fabrication are more relaxed for solar cells, lowering costs. Polycrystalline silicon wafers are made by wire-sawing block-cast silicon ingots into wafers 180 to 350 micrometers thick. The wafers are usually lightly p-type-doped. A surface diffusion of n-type dopants is performed on the front side of the wafer. This forms a p–n junction a few hundred nanometers below the surface. Anti-reflection coatings are then typically applied to increase the amount of light coupled into the solar cell. Silicon nitride has gradually replaced titanium dioxide as the preferred material because of its excellent surface passivation qualities; it prevents carrier recombination at the cell surface. A layer several hundred nanometers thick is applied using PECVD. Some solar cells have textured front surfaces that, like anti-reflection coatings, increase the amount of light reaching the wafer. Such surfaces were first applied to single-crystal silicon, followed by multicrystalline silicon somewhat later. A full-area metal contact is made on the back surface, and a grid-like metal contact made up of fine "fingers" and larger "bus bars" is screen-printed onto the front surface using a silver paste. This is an evolution of the so-called "wet" process for applying electrodes, first described in a US patent filed in 1981 by Bayer AG.
The rear contact is formed by screen-printing a metal paste, typically aluminium. Usually this contact covers the entire rear, though some designs employ a grid pattern. The paste is then fired at several hundred degrees Celsius to form metal electrodes in ohmic contact with the silicon. Some companies use an additional electro-plating step to increase efficiency. After the metal contacts are made, the solar cells are interconnected by flat wires or metal ribbons, and assembled into modules or "solar panels". Solar panels have a sheet of tempered glass on the front, and a polymer encapsulation on the back.
Manufacturers and certification
Solar cells are manufactured in volume in Japan, Germany, China, Taiwan, Malaysia and the United States, whereas Europe, China, the U.S., and Japan have dominated (94% or more as of 2013) in installed systems. Other nations are acquiring significant solar cell production capacity. Global PV cell/module production increased by 10% in 2012 despite a 9% decline in solar energy investments, according to the annual "PV Status Report" released by the European Commission's Joint Research Centre. Between 2009 and 2013, cell production quadrupled. Due to heavy government investment, China has become the dominant force in solar cell manufacturing. Chinese companies produced solar cells/modules with a capacity of ~23 GW in 2013 (60% of global production). Solar cells degrade over time and lose efficiency. Solar cells in extreme climates, such as desert or polar climates, are more prone to degradation due to exposure to harsh UV light and snow loads, respectively. Solar panels are usually given a lifespan of 25–30 years before decommissioning. The International Renewable Energy Agency estimated that the amount of solar panel waste generated in 2016 was 43,500–250,000 metric tons. This figure is expected to increase substantially by 2030 and to reach an estimated waste volume of 60–78 million metric tons by 2050. In 2018, most decommissioned solar panels were sent to landfills. Recycling is limited because it is too expensive to process the currently low volume of solar panel waste. However, solar cells contain toxic materials such as lead and cadmium which, when panels are broken, could leach into the soil and contaminate the environment. With the volume of solar panel waste set to increase, the safety of disposing of solar panels in landfills is becoming a major concern. Many manufacturers are turning to recycling solar panels instead. The first solar panel recycling plant opened in Rousset, France in 2018. It was set to recycle 1300 tonnes of solar panel waste a year, and its capacity can be increased to 4000 tonnes.
- Anomalous photovoltaic effect
- Autonomous building
- Black silicon
- Energy development
- Electromotive force (Solar cell)
- Flexible substrate
- Green technology
- Inkjet solar cell
- List of photovoltaics companies
- List of types of solar cells
- Maximum power point tracking
- Metallurgical grade silicon
- P–n junction
- Plasmonic solar cell
- Printed electronics
- Quantum efficiency
- Renewable energy
- Roll-to-roll processing
- Shockley-Queisser limit
- Solar Energy Materials and Solar Cells (journal)
- Solar module quality assurance
- Solar roof
- Solar shingles
- Solar tracker
- Solar panel
- Theory of solar cells
- Solar Cells. chemistryexplained.com
- "Solar cells – performance and use". solarbotics.net.
- "Technology Roadmap: Solar Photovoltaic Energy" (PDF). IEA. 2014. Archived (PDF) from the original on 7 October 2014. Retrieved 7 October 2014.
- "Photovoltaic System Pricing Trends – Historical, Recent, and Near-Term Projections, 2014 Edition" (PDF). NREL. 22 September 2014. p. 4. Archived (PDF) from the original on 29 March 2015. - Gevorkian, Peter (2007). Sustainable energy systems engineering: the complete green building design resource. McGraw Hill Professional. ISBN 978-0-07-147359-0. - "The Nobel Prize in Physics 1921: Albert Einstein", Nobel Prize official page - Lashkaryov, V. E. (1941) Investigation of a barrier layer by the thermoprobe method Archived 28 September 2015 at the Wayback Machine, Izv. Akad. Nauk SSSR, Ser. Fiz. 5, 442–446, English translation: Ukr. J. Phys. 53, 53–56 (2008) - "Light sensitive device" U.S. Patent 2,402,662 Issue date: June 1946 - "April 25, 1954: Bell Labs Demonstrates the First Practical Silicon Solar Cell". APS News. American Physical Society. 18 (4). April 2009. - Tsokos, K. A. (28 January 2010). Physics for the IB Diploma Full Colour. Cambridge University Press. ISBN 978-0-521-13821-5. - Perlin 1999, p. 50. - Perlin 1999, p. 53. - Williams, Neville (2005). Chasing the Sun: Solar Adventures Around the World. New Society Publishers. p. 84. ISBN 9781550923124. - Jones, Geoffrey; Bouamane, Loubna (2012). "Power from Sunshine": A Business History of Solar Energy (PDF). Harvard Business School. pp. 22–23. - Perlin 1999, p. 54. - The National Science Foundation: A Brief History, Chapter IV, NSF 88-16, 15 July 1994 (retrieved 20 June 2015) - Herwig, Lloyd O. (1999). "Cherry Hill revisited: Background events and photovoltaic technology status". AIP Conference Proceedings. National Center for Photovoltaics (NCPV) 15th Program Review Meeting. AIP Conference Proceedings. 462. p. 785. Bibcode:1999AIPC..462..785H. doi:10.1063/1.58015. - Deyo, J. N., Brandhorst, H. W., Jr., and Forestieri, A. F., Status of the ERDA/NASA photovoltaic tests and applications project, 12th IEEE Photovoltaic Specialists Conf., 15–18 Nov. 1976 - Reed Business Information (18 October 1979). The multinational connections-who does what where. Reed Business Information. ISSN 0262-4079. - Buhayar, Noah (28 January 2016) Warren Buffett controls Nevada’s legacy utility. Elon Musk is behind the solar company that’s upending the market. Let the fun begin. Bloomberg Businessweek - "Sunny Uplands: Alternative energy will no longer be alternative". The Economist. 21 November 2012. Retrieved 28 December 2012. - $1/W Photovoltaic Systems DOE whitepaper August 2010 - Solar Stocks: Does the Punishment Fit the Crime?. 24/7 Wall St. (6 October 2011). Retrieved 3 January 2012. - Parkinson, Giles. "Plunging Cost Of Solar PV (Graphs)". Clean Technica. Retrieved 18 May 2013. - "Snapshot of Global PV 1992–2014" (PDF). International Energy Agency — Photovoltaic Power Systems Programme. 30 March 2015. Archived from the original on 30 March 2015. - "Solar energy – Renewable energy – Statistical Review of World Energy – Energy economics – BP". bp.com. - Yu, Peng; Wu, Jiang; Liu, Shenting; Xiong, Jie; Jagadish, Chennupati; Wang, Zhiming M. (2016-12-01). "Design and fabrication of silicon nanowires towards efficient solar cells". Nano Today. 11 (6): 704–737. doi:10.1016/j.nantod.2016.10.001. - Mann, Sander A.; de Wild-Scholten, Mariska J.; Fthenakis, Vasilis M.; van Sark, Wilfried G.J.H.M.; Sinke, Wim C. (2014-11-01). "The energy payback time of advanced crystalline silicon PV modules in 2020: a prospective study". Progress in Photovoltaics: Research and Applications. 22 (11): 1180–1194. doi:10.1002/pip.2363. ISSN 1099-159X. 
- "BP Global – Reports and publications – Going for grid parity". Archived from the original on 8 June 2011. Retrieved 4 August 2012. . Bp.com. Retrieved 19 January 2011. - BP Global – Reports and publications – Gaining on the grid. Bp.com. August 2007. - The Path to Grid Parity. bp.com - Peacock, Matt (20 June 2012) Solar industry celebrates grid parity, ABC News. - Baldwin, Sam (20 April 2011) Energy Efficiency & Renewable Energy: Challenges and Opportunities. Clean Energy SuperCluster Expo Colorado State University. U.S. Department of Energy. - ENF Ltd. (8 January 2013). "Small Chinese Solar Manufacturers Decimated in 2012 | Solar PV Business News | ENF Company Directory". Enfsolar.com. Retrieved 1 June 2013. - "What is a solar panel and how does it work?". Energuide.be. Sibelga. Retrieved 3 January 2017. - Martin, Chris (30 December 2016). "Solar Panels Now So Cheap Manufacturers Probably Selling at Loss". Bloomberg View. Bloomberg LP. Retrieved 3 January 2017. - Shankleman, Jessica; Martin, Chris (3 January 2017). "Solar Could Beat Coal to Become the Cheapest Power on Earth". Bloomberg View. Bloomberg LP. Retrieved 3 January 2017. - Kumar, Ankush (2017-01-03). "Predicting efficiency of solar cells based on transparent conducting electrodes". Journal of Applied Physics. 121 (1): 014502. Bibcode:2017JAP...121a4502K. doi:10.1063/1.4973117. ISSN 0021-8979. - "Solar Cell Efficiency | PVEducation". www.pveducation.org. Retrieved 2018-01-31. - "T.Bazouni: What is the Fill Factor of a Solar Panel". Archived from the original on 15 April 2009. Retrieved 17 February 2009. - Rühle, Sven (2016-02-08). "Tabulated Values of the Shockley-Queisser Limit for Single Junction Solar Cells". Solar Energy. 130: 139–147. Bibcode:2016SoEn..130..139R. doi:10.1016/j.solener.2016.02.015. - Vos, A. D. (1980). "Detailed balance limit of the efficiency of tandem solar cells". Journal of Physics D: Applied Physics. 13 (5): 839. Bibcode:1980JPhD...13..839D. doi:10.1088/0022-3727/13/5/018. - Bullis, Kevin (13 June 2014) Record-Breaking Solar Cell Points the Way to Cheaper Power. MIT Technology Review - Dimroth, Frank; Tibbits, Thomas N.D.; Niemeyer, Markus; Predan, Felix; Beutel, Paul; Karcher, Christian; Oliva, Eduard; Siefer, Gerald; Lackner, David; et al. (2016). "Four-Junction Wafer Bonded Concentrator Solar Cells". IEEE Journal of Photovoltaics. 6 (1): 343–349. doi:10.1109/jphotov.2015.2501729. - Janz, Stefan; Reber, Stefan (14 September 2015). "20% Efficient Solar Cell on EpiWafer". Fraunhofer ISE. Retrieved 15 October 2015. - Drießen, Marion; Amiri, Diana; Milenkovic, Nena; Steinhauser, Bernd; Lindekugel, Stefan; Benick, Jan; Reber, Stefan; Janz, Stefan (2016). "Solar Cells with 20% Efficiency and Lifetime Evaluation of Epitaxial Wafers". Energy Procedia. 92: 785–790. doi:10.1016/j.egypro.2016.07.069. ISSN 1876-6102. - Zyg, Lisa (4 June 2015). "Solar cell sets world record with a stabilized efficiency of 13.6%". Phys.org. - 30.2 Percent Efficiency – New Record for Silicon-based Multi-junction Solar Cell — Fraunhofer ISE. Ise.fraunhofer.de (2016-11-09). Retrieved 2016-11-15. - Essig, Stephanie; Allebé, Christophe; Remo, Timothy; Geisz, John F.; Steiner, Myles A.; Horowitz, Kelsey; Barraud, Loris; Ward, J. Scott; Schnabel, Manuel (September 2017). "Raising the one-sun conversion efficiency of III–V/Si solar cells to 32.8% for two junctions and 35.9% for three junctions". Nature Energy. 2 (9): 17144. Bibcode:2017NatEn...217144E. doi:10.1038/nenergy.2017.144. ISSN 2058-7546. 
- Gaucher, Alexandre; Cattoni, Andrea; Dupuis, Christophe; Chen, Wanghua; Cariou, Romain; Foldyna, Martin; Lalouat, Loı̈c; Drouard, Emmanuel; Seassal, Christian; Roca i Cabarrocas, Pere; Collin, Stéphane (2016). "Ultrathin Epitaxial Silicon Solar Cells with Inverted Nanopyramid Arrays for Efficient Light Trapping". Nano Letters. 16 (9): 5358–64. Bibcode:2016NanoL..16.5358G. doi:10.1021/acs.nanolett.6b01240. PMID 27525513. - Chen, Wanghua; Cariou, Romain; Foldyna, Martin; Depauw, Valerie; Trompoukis, Christos; Drouard, Emmanuel; Lalouat, Loic; Harouri, Abdelmounaim; Liu, Jia; Fave, Alain; Orobtchouk, Régis; Mandorlo, Fabien; Seassal, Christian; Massiot, Inès; Dmitriev, Alexandre; Lee, Ki-Dong; Cabarrocas, Pere Roca i (2016). "Nanophotonics-based low-temperature PECVD epitaxial crystalline silicon solar cells". Journal of Physics D: Applied Physics. 49 (12): 125603. Bibcode:2016JPhD...49l5603C. doi:10.1088/0022-3727/49/12/125603. ISSN 0022-3727. - Kobayashi, Eiji; Watabe, Yoshimi; Hao, Ruiying; Ravi, T. S. (2015). "High efficiency heterojunction solar cells on n-type kerfless mono crystalline silicon wafers by epitaxial growth". Applied Physics Letters. 106 (22): 223504. Bibcode:2015ApPhL.106v3504K. doi:10.1063/1.4922196. ISSN 0003-6951. - Kim, D.S.; et al. (18 May 2003). String ribbon silicon solar cells with 17.8% efficiency (PDF). Proceedings of 3rd World Conference on Photovoltaic Energy Conversion, 2003. 2. pp. 1293–1296. ISBN 978-4-9901816-0-4. - Wayne McMillan, "The Cast Mono Dilemma" Archived 5 November 2013 at the Wayback Machine, BT Imaging - Pearce, J.; Lau, A. (2002). "Net Energy Analysis for Sustainable Energy Production from Silicon Based Solar Cells" (PDF). Solar Energy. p. 181. doi:10.1115/SED2002-1051. ISBN 978-0-7918-1689-9. Archived from the original (PDF) on 2010-06-22. - Edoff, Marika (March 2012). "Thin Film Solar Cells: Research in an Industrial Perspective". AMBIO. 41 (2): 112–118. doi:10.1007/s13280-012-0265-6. ISSN 0044-7447. PMC 3357764. PMID 22434436. - Fthenakis, Vasilis M. (2004). "Life cycle impact analysis of cadmium in CdTe PV production" (PDF). Renewable and Sustainable Energy Reviews. 8 (4): 303–334. doi:10.1016/j.rser.2003.12.001. - "IBM and Tokyo Ohka Kogyo Turn Up Watts on Solar Energy Production", IBM - Collins, R. W.; Ferlauto, A. S.; Ferreira, G. M.; Chen, C.; Koh, J.; Koval, R. J.; Lee, Y.; Pearce, J. M.; Wronski, C. R. (2003). "Evolution of microstructure and phase in amorphous, protocrystalline, and microcrystalline silicon studied by real time spectroscopic ellipsometry". Solar Energy Materials and Solar Cells. 78 (1–4): 143. doi:10.1016/S0927-0248(02)00436-1. - Pearce, J. M.; Podraza, N.; Collins, R. W.; Al-Jassim, M. M.; Jones, K. M.; Deng, J.; Wronski, C. R. (2007). "Optimization of open circuit voltage in amorphous silicon solar cells with mixed-phase (amorphous+nanocrystalline) p-type contacts of low nanocrystalline content" (PDF). Journal of Applied Physics. 101 (11): 114301–114301–7. Bibcode:2007JAP...101k4301P. doi:10.1063/1.2714507. Archived from the original (PDF) on 13 June 2009. - Yablonovitch, Eli; Miller, Owen D.; Kurtz, S. R. (2012). "The opto-electronic physics that broke the efficiency limit in solar cells". 2012 38th IEEE Photovoltaic Specialists Conference. p. 001556. doi:10.1109/PVSC.2012.6317891. ISBN 978-1-4673-0066-7. - "Photovoltaics Report" (PDF). Fraunhofer ISE. 28 July 2014. Archived (PDF) from the original on 31 August 2014. Retrieved 31 August 2014. 
What is a Segment Bisector?

If you are taking a geometry class, chances are you have heard of segment bisectors. But what exactly is a segment bisector, and how can it be used in geometry? In this blog post, we will explain the concept of segment bisectors in an easy-to-understand way and discuss how they can be used to solve problems.

A segment bisector is a line or a ray that divides a line segment into two congruent segments, meaning that each part of the line segment has the same length. A segment bisector always passes through the midpoint of the line segment; therefore, it must divide the line segment into two equal halves. A helpful way to remember this definition is to think of "bisector" as meaning "half divider". (A short worked example with coordinates appears at the end of this post.)

One common use for a segment bisector is in constructing triangles. To construct an equilateral triangle (a triangle with all three sides equal), first draw a segment AB and construct its perpendicular bisector. Then use a compass to mark the point C on the bisector whose distance from A equals the length of AB. Because C lies on the perpendicular bisector, CA = CB, and since CA was chosen equal to AB, triangle ABC has all three sides equal.

Segment bisectors also come in handy when solving geometric proofs. For example, if you have two intersecting lines and need to prove that their vertical angles are congruent (equal), drawing perpendicular bisectors of well-chosen segments and connecting their endpoints gives you intersecting segments to reason about, so you can use properties like the angle addition postulate or the vertical angles theorem to prove your statement without relying on outside information or formulas.

In conclusion, understanding what a segment bisector is and how it works can make solving geometry questions much easier! Not only do bisectors come in handy when constructing triangles or proving statements in geometric proofs, they are also useful when finding missing lengths or angles in various shapes. So next time you are stuck on a geometry problem involving lines or angles, remember that knowing about segment bisectors can help get you out of a tricky situation!
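Here is a small worked example, added for illustration and not part of the original post; the endpoint coordinates are made up. It simply checks that the midpoint splits a segment into two pieces of equal length, which is exactly what any bisector of that segment must do.

```latex
% Illustrative (hypothetical) endpoints: A = (1, 2), B = (7, 10).
% Every bisector of AB passes through the midpoint M.
M = \left(\frac{x_1 + x_2}{2},\ \frac{y_1 + y_2}{2}\right)
  = \left(\frac{1 + 7}{2},\ \frac{2 + 10}{2}\right) = (4, 6),
\qquad
AM = MB = \sqrt{3^2 + 4^2} = 5.
```

Any line through (4, 6) other than the line AB itself bisects the segment; the perpendicular bisector is the particular one that also crosses AB at a right angle.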
The Russian Revolution of 1905,[a] also known as the First Russian Revolution,[b] began on 22 January 1905 and was a wave of mass political and social unrest that spread through vast areas of the Russian Empire. The mass unrest was directed against the Tsar, the nobility, and the ruling class. It included worker strikes, peasant unrest, and military mutinies. In response to the public pressure, Tsar Nicholas II enacted some constitutional reform (namely the October Manifesto). This took the form of the establishment of the State Duma, the multi-party system, and the Russian Constitution of 1906. Despite popular participation in the Duma, the parliament was unable to issue laws of its own and frequently came into conflict with Nicholas. Its power was limited and Nicholas continued to hold the ruling authority. Furthermore, he could dissolve the Duma, which he often did.

The 1905 revolution was primarily spurred by the international humiliation resulting from the Russian defeat in the Russo-Japanese War, which ended in the same year. Calls for revolution were intensified by the growing realisation by a variety of sectors of society of the need for reform. Politicians such as Sergei Witte had succeeded in partially industrializing Russia but had failed to reform and modernize Russia socially. Tsar Nicholas II and the monarchy survived the Revolution of 1905, but its events foreshadowed the 1917 Russian Revolution just twelve years later. Many historians contend that the 1905 revolution set the stage for the 1917 Russian Revolutions, which saw the monarchy abolished and the Tsar executed. Calls for radicalism were present in the 1905 Revolution, but many of the revolutionaries who were in a position to lead were either in exile or in prison while it took place. The events of 1905 demonstrated the precarious position in which the Tsar found himself. As a result, Tsarist Russia did not undergo sufficient reform, which had a direct impact on the radical politics brewing in the Russian Empire. Although the radicals were still a minority of the populace, their momentum was growing. Vladimir Lenin, a revolutionary himself, would later say that the Revolution of 1905 was "The Great Dress Rehearsal", without which the "victory of the October Revolution in 1917 would have been impossible".

According to Sidney Harcave, four problems in Russian society contributed to the revolution. Newly emancipated peasants earned too little and were not allowed to sell or mortgage their allotted land. Ethnic and national minorities resented the government because of its "Russification" of the Empire: it practised discrimination and repression against national minorities, such as banning them from voting or from serving in the Imperial Guard or Navy, and limiting their attendance in schools. A nascent industrial working class resented the government for doing too little to protect it, as it banned strikes and organizing into labor unions. Finally, university students developed a new consciousness after discipline was relaxed in the institutions, and they were fascinated by increasingly radical ideas, which spread among them. Also, disaffected soldiers returning from a bloody and disgraceful defeat at the hands of Japan, who found inadequate factory pay, shortages, and general disarray, organized in protest.
Taken individually, these issues might not have affected the course of Russian history, but together they created the conditions for a potential revolution. At the turn of the century, discontent with the Tsar’s dictatorship was manifested not only through the growth of political parties dedicated to the overthrow of the monarchy but also through industrial strikes for better wages and working conditions, protests and riots among peasants, university demonstrations, and the assassination of government officials, often done by Socialist Revolutionaries. Because the Russian economy was tied to European finances, the contraction of Western money markets in 1899–1900 plunged Russian industry into a deep and prolonged crisis; it outlasted the dip in European industrial production. This setback aggravated social unrest during the five years preceding the revolution of 1905. The government finally recognized these problems, albeit in a shortsighted and narrow-minded way. The Minister of the Interior Vyacheslav von Plehve said in 1903 that, after the agrarian problem, the most serious issues plaguing the country were those of the Jews, the schools, and the workers, in that order. One of the major contributing factors that changed Russia from a country in unrest to a country in revolt was "Bloody Sunday". Though significant minorities had fomented revolution up to this point, they had been primarily confined to the social elite, while the lower classes had remained aloof from the conflict. However, loyalty of the masses to Tsar Nicholas II was lost on 22 January 1905, when his soldiers fired upon a crowd of protesting workers, led by Georgy Gapon, who were marching to present a petition at the Winter Palace. Every year, thousands of nobles in debt mortgaged their estates to the noble land bank or sold them to municipalities, merchants, or peasants. By the time of the revolution, the nobility had sold off one-third of its land and mortgaged another third. The peasants had been freed by the emancipation reform of 1861, but their lives were generally quite limited. The government hoped to develop the peasants as a politically conservative, land-holding class by enacting laws to enable them to buy land from nobility by paying small installments over many decades. Such land, known as "allotment land", would not be owned by individual peasants but by the community of peasants; individual peasants would have rights to strips of land to be assigned to them under the open field system. A peasant could not sell or mortgage this land, so in practice he could not renounce his rights to his land, and he would be required to pay his share of redemption dues to the village commune. This plan was intended to prevent peasants from becoming part of the proletariat. However, the peasants were not given enough land to provide for their needs. Their earnings were often so small that they could neither buy the food they needed nor keep up the payment of taxes and redemption dues they owed the government for their land allotments. By 1903 their total arrears in payments of taxes and dues was 118 million rubles. The situation worsened as masses of hungry peasants roamed the countryside looking for work and sometimes walked hundreds of kilometers to find it. Desperate peasants proved capable of violence. 
"In the provinces of Kharkov and Poltava in 1902, thousands of them, ignoring restraints and authority, burst out in a rebellious fury that led to extensive destruction of property and looting of noble homes before troops could be brought to subdue and punish them." These violent outbreaks caught the attention of the government, so it created many committees to investigate the causes. The committees concluded that no part of the countryside was prosperous; some parts, especially the fertile areas known as the "black-soil region", were in decline. Although cultivated acreage had increased in the last half century, the increase had not been proportionate to the growth of the peasant population, which had doubled. "There was general agreement at the turn of the century that Russia faced a grave and intensifying agrarian crisis due mainly to rural overpopulation with an annual excess of fifteen to eighteen live births over deaths per 1,000 inhabitants." The investigations revealed many difficulties but the committees could not find solutions that were both sensible and "acceptable" to the government. Russia was a multi-ethnic empire. Nineteenth-century Russians saw cultures and religions in a clear hierarchy. Non-Russian cultures were tolerated in the empire but were not necessarily respected. Culturally, Europe was favored over Asia, as was Orthodox Christianity over other religions. For generations, Russian Jews had been considered a special problem. Jews constituted only about 4% of the population, but were concentrated in the western borderlands. Like other minorities in Russia, the Jews lived "miserable and circumscribed lives, forbidden to settle or acquire land outside the cities and towns, legally limited in attendance at secondary school and higher schools, virtually barred from legal professions, denied the right to vote for municipal councilors, and excluded from services in the Navy or the Guards". The government's treatment of Jews, although considered a separate issue, was similar to its policies in dealing with all national and religious minorities. Historian Theodore Weeks notes: "Russian administrators, who never succeeded in coming up with a legal definition of 'Pole', despite the decades of restrictions on that ethnic group, regularly spoke of individuals 'of Polish descent' or, alternatively, 'of Russian descent', making identity a function of birth." This policy only succeeded in producing or aggravating feelings of disloyalty. There was growing impatience with their inferior status and resentment against "Russification". Russification is cultural assimilation definable as "a process culminating in the disappearance of a given group as a recognizably distinct element within a larger society". Besides the imposition of a uniform Russian culture throughout the empire, the government's pursuit of Russification, especially during the second half of the nineteenth century, had political motives. After the emancipation of the serfs in 1861, the Russian state was compelled to take into account public opinion, but the government failed to gain the public's support. Another motive for Russification policies was the Polish uprising of 1863. Unlike other minority nationalities, the Poles, in the eyes of the Tsar, were a direct threat to the empire's stability. After the rebellion was crushed, the government implemented policies to reduce Polish cultural influences. In the 1870s the government began to distrust German elements on the western border. 
The Russian government felt that the unification of Germany would upset the power balance among the great powers of Europe and that Germany would use its strength against Russia. The government thought that the borders would be defended better if the borderland were more "Russian" in character. The culmination of cultural diversity created a cumbersome nationality problem that plagued the Russian government in the years leading up to the revolution. The economic situation in Russia before the revolution presented a grim picture. The government had experimented with laissez-faire capitalist policies, but this strategy largely failed to gain traction within the Russian economy until the 1890s. Meanwhile, "agricultural productivity stagnated, while international prices for grain dropped, and Russia’s foreign debt and need for imports grew. War and military preparations continued to consume government revenues. At the same time, the peasant taxpayers' ability to pay was strained to the utmost, leading to widespread famine in 1891." In the 1890s, under Finance Minister Sergei Witte, a crash governmental program was proposed to promote industrialization. His policies included heavy government expenditures for railroad building and operations, subsidies and supporting services for private industrialists, high protective tariffs for Russian industries (especially heavy industry), an increase in exports, currency stabilization, and encouragement of foreign investments. His plan was successful and during the 1890s "Russian industrial growth averaged 8 percent per year. Railroad mileage grew from a very substantial base by 40 percent between 1892 and 1902." Ironically, Witte's success in implementing this program helped spur the 1905 revolution and eventually the 1917 revolution because it exacerbated social tensions. "Besides dangerously concentrating a proletariat, a professional and a rebellious student body in centers of political power, industrialization infuriated both these new forces and the traditional rural classes." The government policy of financing industrialization through taxing peasants forced millions of peasants to work in towns. The "peasant worker" saw his labor in the factory as the means to consolidate his family's economic position in the village and played a role in determining the social consciousness of the urban proletariat. The new concentrations and flows of peasants spread urban ideas to the countryside, breaking down isolation of peasants on communes. Industrial workers began to feel dissatisfaction with the Tsarist government despite the protective labour laws the government decreed. Some of those laws included the prohibition of children under 12 from working, with the exception of night work in glass factories. Employment of children aged 12 to 15 was prohibited on Sundays and holidays. Workers had to be paid in cash at least once a month, and limits were placed on the size and bases of fines for workers who were tardy. Employers were prohibited from charging workers for the cost of lighting of the shops and plants. Despite these labour protections, the workers believed that the laws were not enough to free them from unfair and inhumane practices. At the start of the 20th century, Russian industrial workers worked on average an 11-hour day (10 hours on Saturday), factory conditions were perceived as grueling and often unsafe, and attempts at independent unions were often not accepted. Many workers were forced to work beyond the maximum of 11 and a half hours per day. 
Others were still subject to arbitrary and excessive fines for tardiness, mistakes in their work, or absence. Russian industrial workers were also the lowest-wage workers in Europe. Although the cost of living in Russia was low, "the average worker's 16 rubles per month could not buy the equal of what the French worker's 110 francs would buy for him." Furthermore, the same labour laws prohibited the organisation of trade unions and strikes. Dissatisfaction turned into despair for many impoverished workers, which made them more sympathetic to radical ideas. These discontented, radicalized workers became key to the revolution by participating in illegal strikes and revolutionary protests. The government responded by arresting labour agitators and enacting more "paternalistic" legislation.

Introduced in 1900 by Sergei Zubatov, head of the Moscow security department, "police socialism" planned to have workers form workers' societies with police approval to "provide healthful, fraternal activities and opportunities for cooperative self-help together with 'protection' against influences that might have inimical effect on loyalty to job or country". Some of these groups organised in Moscow, Odesa, Kyiv, Mykolaiv, and Kharkiv, but both the groups and the idea of police socialism failed. In 1900–1903, a period of industrial depression, many firms went bankrupt and employment fell. Employees were restive: they would join legal organisations but turn them toward ends that their sponsors did not intend. Workers used legitimate means to organise strikes or to draw support for striking workers outside these groups. A strike begun in 1902 by workers in the railroad shops in Vladikavkaz and Rostov-on-Don created such a response that by the next summer 225,000 workers in various industries in southern Russia and Transcaucasia were on strike. These were not the first illegal strikes in the country's history, but their aims, and the political awareness and support among workers and non-workers, made them more troubling to the government than earlier strikes. The government responded by closing all legal organisations by the end of 1903.

Educated class as a problem

The Minister of the Interior, Plehve, designated schools as a pressing problem for the government, but he did not realize that this was only a symptom of antigovernment feelings among the educated class. Students of universities, other schools of higher learning, and occasionally of secondary schools and theological seminaries were part of this group. Student radicalism began around the time Tsar Alexander II came to power. Alexander abolished serfdom and enacted fundamental reforms in the legal and administrative structure of the Russian empire, which were revolutionary for their time. He lifted many restrictions on universities and abolished obligatory uniforms and military discipline. This ushered in a new freedom in the content and reading lists of academic courses. In turn, that created student subcultures, as youth were willing to live in poverty in order to receive an education. As universities expanded, there was a rapid growth of newspapers, journals, public lectures, and professional societies. The 1860s saw the emergence of a new public sphere in social life and professional groups, which fostered the idea that these groups had a right to independent opinions.
The government was alarmed by these communities, and in 1861 it tightened restrictions on admission and prohibited student organisations; these restrictions resulted in the first ever student demonstration, held in St. Petersburg, which led to a two-year closure of the university. The consequent conflict with the state was an important factor in the chronic student protests over subsequent decades. The atmosphere of the early 1860s gave rise to political engagement by students outside universities that became a tenet of student radicalism by the 1870s. Student radicals described "the special duty and mission of the student as such to spread the new word of liberty. Students were called upon to extend their freedoms into society, to repay the privilege of learning by serving the people, and to become in Nikolai Ogarev's phrase 'apostles of knowledge'." During the next two decades, universities produced a significant share of Russia's revolutionaries. Prosecution records from the 1860s and 1870s show that more than half of all political offences were committed by students, even though students were a minute proportion of the population. "The tactics of the left-wing students proved to be remarkably effective, far beyond anyone's dreams. Sensing that neither the university administrations nor the government any longer possessed the will or authority to enforce regulations, radicals simply went ahead with their plans to turn the schools into centres of political activity for students and non-students alike." They took up problems that were unrelated to their "proper employment", and displayed defiance and radicalism by boycotting examinations, rioting, arranging marches in sympathy with strikers and political prisoners, circulating petitions, and writing anti-government propaganda. This disturbed the government, but it believed the cause was a lack of training in patriotism and religion. Therefore, the curriculum was "toughened up" to emphasize classical languages and mathematics in secondary schools, but defiance continued. Expulsion, exile, and forced military service also did not stop students. "In fact, when the official decision to overhaul the whole educational system was finally made, in 1904, and to that end Vladimir Glazov, head of General Staff Academy, was selected as Minister of Education, the students had grown bolder and more resistant than ever."

Rise of the opposition

The events of 1905 came after progressive and academic agitation for more political democracy and limits to Tsarist rule in Russia, and an increase in strikes by workers against employers for radical economic demands and union recognition (especially in southern Russia). Many socialists view this as a period when the rising revolutionary movement was met with rising reactionary movements. As Rosa Luxemburg stated in 1906 in The Mass Strike, when collective strike activity was met with what was perceived as repression from an autocratic state, economic and political demands grew into and reinforced each other. Russian progressives formed the Union of Zemstvo Constitutionalists in 1903 and the Union of Liberation in 1904, which called for a constitutional monarchy. Russian socialists formed two major groups: the Socialist Revolutionary Party (founded in 1902), which followed the Russian populist tradition, and the Marxist Russian Social Democratic Labour Party (founded in 1898).
In late 1904 liberals started a series of banquets (modeled on the campagne des banquets leading up to the French Revolution of 1848), nominally celebrating the 40th anniversary of the liberal court statutes, but actually an attempt to circumvent laws against political gatherings. The banquets resulted in calls for political reforms and a constitution. In November 1904 a Zemsky Congress (Russian: Земский съезд), a gathering of zemstvo delegates representing all levels of Russian society, called for a constitution, civil liberties and a parliament. On 13 December [O.S. 30 November] 1904, the Moscow City Duma passed a resolution demanding the establishment of an elected national legislature, full freedom of the press, and freedom of religion. Similar resolutions and appeals from other city dumas and zemstvo councils followed. Emperor Nicholas II made a move to meet many of these demands, appointing the liberal Pyotr Dmitrievich Sviatopolk-Mirsky as Minister of the Interior after the July 1904 assassination of Vyacheslav von Plehve. On 25 December [O.S. 12 December] 1904, the Emperor issued a manifesto promising the broadening of the zemstvo system, more authority for local municipal councils, insurance for industrial workers, the emancipation of the Inorodtsy, and the abolition of censorship. The crucial demand, that for a representative national legislature, was missing from the manifesto.

Worker strikes in the Caucasus broke out in March 1902. Strikes on the railways, originating from pay disputes, took on other issues and drew in other industries, culminating in a general strike at Rostov-on-Don in November 1902. Daily meetings of 15,000 to 20,000 people heard openly revolutionary appeals for the first time, before a massacre defeated the strikes. But reaction to the massacre added political demands to the purely economic ones. Luxemburg described the situation in 1903 by saying that "the whole of South Russia in May, June and July was aflame", including Baku (where separate wage struggles culminated in a citywide general strike) and Tiflis, where commercial workers gained a reduction in the working day and were joined by factory workers. In 1904, massive strike waves broke out in Odessa in the spring, in Kyiv in July, and in Baku in December. This all set the stage for the strikes in St. Petersburg in December 1904 to January 1905, seen as the first step in the 1905 revolution.

Another contributing factor behind the revolution was the Bloody Sunday massacre of protesters that took place in January 1905 in St. Petersburg, which sparked a spate of civil unrest across the Russian Empire. Lenin urged the Bolsheviks to take a greater role in the events, encouraging violent insurrection. In doing so, he adopted SR slogans regarding "armed insurrection", "mass terror", and "the expropriation of gentry land", resulting in Menshevik accusations that he had deviated from orthodox Marxism. In turn, he insisted that the Bolsheviks split completely with the Mensheviks; many Bolsheviks refused, and both groups attended the Third RSDLP Congress, held in London in April 1905 at the Brotherhood Church. Lenin presented many of his ideas in the pamphlet Two Tactics of Social Democracy in the Democratic Revolution, published in August 1905.
Here, he predicted that Russia's liberal bourgeoisie would be sated by a transition to constitutional monarchy and would thus betray the revolution; instead, he argued that the proletariat would have to build an alliance with the peasantry to overthrow the Tsarist regime and establish the "provisional revolutionary democratic dictatorship of the proletariat and the peasantry."

Start of the revolution

In December 1904, a strike occurred at the Putilov plant (a railway and artillery supplier) in St. Petersburg. Sympathy strikes in other parts of the city raised the number of strikers to 150,000 workers in 382 factories. By 21 January [O.S. 8 January] 1905, the city had no electricity and newspaper distribution was halted. All public areas were declared closed. The controversial Orthodox priest Georgy Gapon, who headed a police-sponsored workers' association, led a huge workers' procession to the Winter Palace to deliver a petition to the Tsar on Sunday, 22 January [O.S. 9 January] 1905. According to Sergei Witte, the troops guarding the Palace were ordered to tell the demonstrators not to pass a certain point, and at some point troops opened fire on the demonstrators, causing between 200 (according to Witte) and 1,000 deaths. The event became known as Bloody Sunday and is considered by many scholars to be the start of the active phase of the revolution.

The events in St. Petersburg provoked public indignation and a series of massive strikes that spread quickly throughout the industrial centers of the Russian Empire. Polish socialists, both the PPS and the SDKPiL, called for a general strike. By the end of January 1905, over 400,000 workers in Russian Poland were on strike (see Revolution in the Kingdom of Poland (1905–1907)). Half of European Russia's industrial workers went on strike in 1905, and 93.2% in Poland. There were also strikes in Finland and on the Baltic coast. In Riga, 130 protesters were killed on 26 January [O.S. 13 January] 1905, and in Warsaw a few days later over 100 strikers were shot on the streets. By February, there were strikes in the Caucasus, and by April, in the Urals and beyond. In March, all higher academic institutions were forcibly closed for the remainder of the year, adding radical students to the striking workers. A strike by railway workers on 21 October [O.S. 8 October] 1905 quickly developed into a general strike in Saint Petersburg and Moscow. This prompted the setting up of the short-lived Saint Petersburg Soviet of Workers' Delegates, an admixture of Bolsheviks and Mensheviks headed by Khrustalev-Nossar, which, despite the Iskra split, would see the likes of Julius Martov and Georgi Plekhanov spar with Lenin. Leon Trotsky, who felt a strong connection to the Bolsheviki, had not given up on a compromise but spearheaded strike action in over 200 factories. By 26 October [O.S. 13 October] 1905, over 2 million workers were on strike and there were almost no active railways in all of Russia. Growing inter-ethnic confrontation throughout the Caucasus resulted in the Armenian–Tatar massacres, heavily damaging the cities and the Baku oilfields.

With the unsuccessful and bloody Russo-Japanese War (1904–1905) there was unrest in army reserve units. On 2 January 1905, Port Arthur was lost; in February 1905, the Russian army was defeated at Mukden, losing almost 80,000 men. On 27–28 May 1905, the Russian Baltic Fleet was defeated at Tsushima. Witte was dispatched to make peace, negotiating the Treaty of Portsmouth (signed 5 September [O.S. 23 August] 1905).
In 1905, there were naval mutinies at Sevastopol (see Sevastopol Uprising), Vladivostok, and Kronstadt, peaking in June with the mutiny aboard the battleship Potemkin. The mutineers eventually surrendered the battleship to Romanian authorities on 8 July in exchange for asylum; the Romanians then returned her to Imperial Russian authorities on the following day. Some sources claim over 2,000 sailors died in the suppression. The mutinies were disorganised and quickly crushed. Despite these mutinies, the armed forces were largely apolitical and remained mostly loyal, if dissatisfied, and were widely used by the government to control the 1905 unrest.

Nationalist groups had been angered by the Russification undertaken since Alexander II. The Poles, Finns, and the Baltic provinces all sought autonomy, as well as freedom to use their national languages and promote their own cultures. Muslim groups were also active, founding the Union of the Muslims of Russia in August 1905. Certain groups took the opportunity to settle differences with each other rather than with the government. Some nationalists undertook anti-Jewish pogroms, possibly with government aid, and in total over 3,000 Jews were killed.

The number of prisoners throughout the Russian Empire, which had peaked at 116,376 in 1893, fell by over a third to a record low of 75,009 in January 1905, chiefly because of several mass amnesties granted by the Tsar; the historian S. G. Wheatcroft has wondered what role these released criminals played in the 1905–06 social unrest.

On 12 January 1905, the Tsar appointed Dmitri Feodorovich Trepov as governor in St Petersburg and dismissed the Minister of the Interior, Pyotr Sviatopolk-Mirskii, on 18 February [O.S. 5 February] 1905. He appointed a government commission "to enquire without delay into the causes of discontent among the workers in the city of St Petersburg and its suburbs" in view of the strike movement. The commission was headed by Senator N. V. Shidlovsky, a member of the State Council, and included officials, chiefs of government factories, and private factory owners. It was also meant to have included workers' delegates elected according to a two-stage system. Elections of the workers' delegates were, however, blocked by the socialists, who wanted to divert the workers from the elections to the armed struggle. On 5 March [O.S. 20 February] 1905, the commission was dissolved without having started work.

Following the assassination of his uncle, the Grand Duke Sergei Aleksandrovich, on 17 February [O.S. 4 February] 1905, the Tsar made new concessions. On 2 March [O.S. 18 February] 1905 he published the Bulygin Rescript, which promised the formation of a consultative assembly, religious tolerance, freedom of speech (in the form of language rights for the Polish minority) and a reduction in the peasants' redemption payments. On 24 and 25 May [O.S. 11 and 12 May] 1905, about 300 Zemstvo and municipal representatives held three meetings in Moscow, which passed a resolution asking for popular representation at the national level. On 6 June [O.S. 24 May] 1905, Nicholas II received a Zemstvo deputation. Responding to speeches by Prince Sergei Nikolaevich Trubetskoy and Mr Fyodrov, the Tsar confirmed his promise to convene an assembly of people's representatives.

Height of the Revolution

Tsar Nicholas II agreed on 2 March [O.S. 18 February] to the creation of a State Duma of the Russian Empire, but with consultative powers only.
When its slight powers and the limits on the electorate were revealed, unrest redoubled. The Saint Petersburg Soviet was formed and called for a general strike in October, refusal to pay taxes, and the withdrawal of bank deposits en masse. In June and July 1905, there were many peasant uprisings in which peasants seized land and tools. Disturbances in Russian-controlled Congress Poland culminated in June 1905 in the Łódź insurrection. Surprisingly, only one landlord was recorded as killed. Far more violence was inflicted on peasants outside the commune: 50 deaths were recorded. Anti-tsarist protests were displaced onto Jewish communities in the October 1905 Kishinev pogrom.

The October Manifesto, written by Sergei Witte and Alexis Obolenskii, was presented to the Tsar on 14 October [O.S. 1 October]. It closely followed the demands of the Zemstvo Congress in September, granting basic civil rights, allowing the formation of political parties, extending the franchise towards universal suffrage, and establishing the Duma as the central legislative body. The Tsar waited and argued for three days, but finally signed the manifesto on 30 October [O.S. 17 October] 1905, citing his desire to avoid a massacre and his realisation that there was insufficient military force available to pursue alternative options. He regretted signing the document, saying that he felt "sick with shame at this betrayal of the dynasty ... the betrayal was complete". When the manifesto was proclaimed, there were spontaneous demonstrations of support in all the major cities. The strikes in Saint Petersburg and elsewhere officially ended or quickly collapsed. A political amnesty was also offered. The concessions came hand-in-hand with renewed, and brutal, action against the unrest. There was also a backlash from the conservative elements of society, with right-wing attacks on strikers, left-wingers, and Jews.

While the Russian liberals were satisfied by the October Manifesto and prepared for the upcoming Duma elections, radical socialists and revolutionaries denounced the elections and called for an armed uprising to destroy the Empire. Parts of the November uprising of 1905 in Sevastopol, headed by retired naval Lieutenant Pyotr Schmidt, were directed against the government, while others were undirected. The uprising included terrorism, worker strikes, peasant unrest and military mutinies, and was only suppressed after a fierce battle. The Trans-Baikal railroad fell into the hands of striker committees and demobilised soldiers returning from Manchuria after the Russo–Japanese War. The Tsar had to send a special detachment of loyal troops along the Trans-Siberian Railway to restore order.

Between 5 and 7 December [O.S. 22 and 24 November], there was a general strike by Russian workers. The government sent troops on 7 December, and a bitter street-by-street fight began. A week later, the Semyonovsky Regiment was deployed and used artillery to break up demonstrations and to shell workers' districts. On 18 December [O.S. 5 December], with around a thousand people dead and parts of the city in ruins, the workers surrendered. After a final spasm in Moscow, the uprisings ended in December 1905. According to figures presented in the Duma by Professor Maksim Kovalevsky, by April 1906 more than 14,000 people had been executed and 75,000 imprisoned. The historian Brian Taylor states that the number of deaths in the 1905 Revolution was in the "thousands", and notes one source that puts the figure at over 13,000 deaths.
Following the Revolution of 1905, the Tsar made last attempts to save his regime, offering reforms as most rulers do when pressured by a revolutionary movement. The military remained loyal throughout the Revolution of 1905, as shown by its shooting of revolutionaries when ordered by the Tsar, making overthrow difficult. These reforms were outlined in a precursor to the Constitution of 1906 known as the October Manifesto, which created the Imperial Duma. The Russian Constitution of 1906, also known as the Fundamental Laws, set up a multiparty system and a limited constitutional monarchy. The revolutionaries were quelled and satisfied with the reforms, but these were not enough to prevent the 1917 revolution that would later topple the Tsar's regime.

Creation of the Duma and appointment of Stolypin

There had been earlier attempts at establishing a Russian Duma before the October Manifesto, but these attempts faced dogged resistance. One attempt in July 1905, called the Bulygin Duma, tried to reduce the assembly to a consultative body. It also proposed limiting voting rights to those with a higher property qualification, excluding industrial workers. Both sides, the opposition and the conservatives, were displeased with the results. Another attempt in August 1905 was almost successful, but it too died when Nicholas insisted that the Duma's functions be relegated to an advisory position. The October Manifesto, aside from granting the population freedom of speech and assembly, proclaimed that no law would be passed without examination and approval by the Imperial Duma. The Manifesto also extended the suffrage towards universal proportions, allowing for greater participation in the Duma, though the electoral law of 11 December still excluded women. Nevertheless, the tsar retained the power of veto.

Proposals for restrictions on the Duma's legislative powers remained persistent. A decree of 20 February 1906 transformed the State Council, the advisory body, into a second chamber with legislative powers "equal to those of the Duma". Not only did this transformation violate the Manifesto, but the Council became a buffer zone between the tsar and the Duma, slowing whatever progress the latter could achieve. Even three days before the Duma's first session, on 24 April 1906, the Fundamental Laws further limited the assembly's room for manoeuvre by giving the tsar the sole power to appoint and dismiss ministers. Adding insult to injury, the Tsar alone retained control over many of the reins of power, all without the Duma's express permission. The trap seemed perfectly set for the unsuspecting Duma: by the time the assembly convened on 27 April, it quickly found itself unable to do much without violating the Fundamental Laws. Defeated and frustrated, the majority of the assembly voted no confidence and handed in their resignations after a few weeks, on 13 May.

The attacks on the Duma were not confined to its legislative powers. By the time the Duma opened, it was missing crucial support from the populace, thanks in no small part to the government's return to pre-Manifesto levels of suppression. The Soviets were forced to lie low for a long time, while the zemstvos turned against the Duma when the issue of land appropriation came up. The issue of land appropriation was the most contentious of the Duma's appeals. The Duma proposed that the government distribute its treasury, "monastic and imperial lands", and seize private estates as well.
The Duma, in fact, was preparing to alienate some of its more affluent supporters, a decision that left the assembly without the political power necessary to be effective. Nicholas II remained wary of having to share power with reform-minded bureaucrats. When the pendulum of the 1906 elections swung to the left, Nicholas immediately ordered the Duma's dissolution after just 73 days. Hoping to further squeeze the life out of the assembly, he appointed a tougher prime minister, Petr Stolypin, as the liberal Witte's replacement. Much to Nicholas's chagrin, Stolypin attempted to bring about reforms (notably land reform) while retaining measures favorable to the regime (stepping up the number of executions of revolutionaries). After the revolution subsided, he was able to bring economic growth back to Russia's industries, a recovery which lasted until 1914. But Stolypin's efforts did nothing to prevent the collapse of the monarchy, nor did they seem to satisfy the conservatives. Stolypin died from a bullet wound inflicted by a revolutionary, Dmitry Bogrov, on 5 September 1911.

Even after Bloody Sunday and defeat in the Russo-Japanese War, Nicholas II had been slow to offer a meaningful solution to the social and political crisis. At this point, he became more concerned with personal affairs such as the illness of his son, whose struggle with haemophilia was overseen by Rasputin. Nicholas also refused to believe that the population was demanding changes in the autocratic regime, seeing "public opinion" as mainly the "intelligentsia" and believing himself to be the paternal 'father figure' to the Russian people. Sergei Witte, the Tsar's leading minister, argued in frustration with the Tsar that an immediate implementation of reforms was needed to retain order in the country. It was only after the Revolution started picking up steam that Nicholas was forced to make concessions by writing the October Manifesto.

Issued on 17 October 1905, the Manifesto stated that the government would grant the population reforms such as the right to vote and to convene in assemblies. Its main provisions were:
- The granting to the population of "inviolable personal rights", including freedom of conscience, speech, and assembly
- Participation in the newly formed Duma for those parts of the population previously cut off from it
- The assurance that no law would be passed without the consent of the Imperial Duma

Despite what seemed to be a moment of celebration for Russia's population and the reformists, the Manifesto was rife with problems. Aside from the absence of the word "constitution", one issue with the manifesto was its timing: by October 1905, Nicholas was already dealing with a revolution. Another problem surfaced in the conscience of Nicholas himself: Witte said in 1911 that the manifesto was written only to get the pressure off the monarch's back, and that it was not a "voluntary act". In fact, the writers hoped that the Manifesto would sow discord in "the camp of the autocracy's enemies" and bring order back to Russia. One immediate effect it did have, for a while, was the start of the Days of Freedom, a six-week period from 17 October to early December. This period witnessed an unprecedented level of freedom for all publications (revolutionary papers, brochures, and so on), even though the tsar officially retained the power to censor provocative material. This opportunity allowed the press to address the tsar and government officials in a harsh, critical tone previously unheard of.
The freedom of speech also opened the floodgates for meetings and organised political parties. In Moscow alone, over 400 meetings took place in the first four weeks. Some of the political parties that came out of these meetings were the Constitutional Democrats (Kadets), the Social Democrats, the Socialist Revolutionaries, the Octobrists, and the far-rightist Union of the Russian People. Among the groups that benefited most from the Days of Freedom were the labour unions; indeed, the Days of Freedom saw unionisation in the Russian Empire reach its apex. At least 67 unions were established in Moscow and 58 in St. Petersburg; the majority of both combined were formed in November 1905 alone. For the Soviets, it was a watershed period: nearly 50 of the unions in St. Petersburg came under Soviet control, while in Moscow the Soviets had around 80,000 members. This large base of power gave the Soviets enough clout to form their own militias. In St. Petersburg alone, the Soviets claimed around 6,000 armed members tasked with protecting the meetings. Perhaps emboldened by their newfound window of opportunity, the St. Petersburg Soviets, along with other socialist parties, called for armed struggle against the Tsarist government, a call to war that no doubt alarmed the government.

Not only were the workers motivated; the Days of Freedom also had an earthquake-like effect on the peasantry. Seeing an opening in the autocracy's waning authority thanks to the Manifesto, the peasants, now with a degree of political organisation, took to the streets in revolt. In response, the government exerted its forces in campaigns to subdue and repress both the peasants and the workers. The backlash was now in full force: with a pretext in hand, the government spent the month of December 1905 regaining the level of authority it had lost since Bloody Sunday. Ironically, the writers of the October Manifesto were caught off guard by the surge in revolts. One of the main reasons for writing the October Manifesto had been the government's "fear of the revolutionary movement"; in fact, many officials believed this fear was practically the sole reason for the Manifesto's creation in the first place. Among the most alarmed was Dmitri Feodorovich Trepov, governor general of St. Petersburg and deputy minister of the interior. Trepov urged Nicholas II to stick to the principles in the Manifesto, for "every retreat ... would be hazardous to the dynasty".

Russian Constitution of 1906

The Russian Constitution of 1906 was published on the eve of the convocation of the First Duma. The new Fundamental Laws were enacted to institute the promises of the October Manifesto as well as to add new reforms. The Tsar was confirmed as absolute leader, with complete control of the executive, foreign policy, the church, and the armed forces. The structure of the Duma was changed, becoming a lower chamber below the Council of Ministers, and was half-elected, half-appointed by the Tsar. Legislation had to be approved by the Duma, the Council, and the Tsar to become law. The Fundamental State Laws were the "culmination of the whole sequence of events set in motion in October 1905 and which consolidated the new status quo". The Russian Constitution of 1906 was not simply an implementation of the October Manifesto. Its introduction states (and thus emphasizes) the following:
- The Russian State is one and indivisible.
- The Grand Duchy of Finland, while comprising an inseparable part of the Russian State, is governed in its internal affairs by special decrees based on special legislation.
- The Russian language is the common language of the state, and its use is compulsory in the army, the navy and all state and public institutions. The use of local (regional) languages and dialects in state and public institutions is determined by special legislation.

The introduction did not mention any of the provisions of the October Manifesto. While the Constitution did enact the provisions laid out previously, its framing seems again to have served chiefly as propaganda for the monarchy, asserting the regime's authority without appearing to retreat from prior promises. The provisions and the new constitutional monarchy satisfied neither the broader population nor Lenin. The Constitution lasted until the fall of the empire in 1917.

Rise of political violence

The years between 1904 and 1907 saw a decline of mass movements, strikes and protests, and a rise of overt political violence. Combat groups, such as the SR Combat Organization, carried out numerous assassinations of civil servants and police, as well as robberies. Between 1906 and 1909, revolutionaries killed 7,293 people, of whom 2,640 were officials, and wounded 8,061. Notable victims included:
- Nikolai Bobrikov – Governor-General of Finland. Killed 30 June [O.S. 17 June] 1904 in Helsinki.
- Vyacheslav von Plehve – Minister of Interior. Killed 10 August [O.S. 28 July] 1904 in Saint Petersburg.
- Grand Duke Sergei Alexandrovich of Russia – Killed 17 February [O.S. 4 February] 1905 in Moscow.
- Eliel Soisalon-Soininen – Procurator of Justice of Finland. Killed 19 February [O.S. 6 February] 1905 in Helsinki.
- Viktor Sakharov – former war minister. Killed 5 December [O.S. 22 November] 1905.
- Admiral Chukhnin – the commander of the Black Sea Fleet. Killed 24 July [O.S. 11 July] 1906.
- Aleksey Ignatyev – Killed 22 December [O.S. 9 December] 1906.

The years of revolution were marked by a dramatic rise in the numbers of death sentences and executions. Differing figures on the number of executions were compared by Senator Nikolai Tagantsev. The totals given by the various accounts (civilian executions only) were:
- Report by the Ministry of Internal Affairs Police Department to the State Duma on 19 February [O.S. 6 February] 1909: 1,435 + 683 = 2,118
- Report by the Ministry of War Military Justice department: 2,212
- Figures compiled by Oscar Gruzenberg: 2,235
- Report by Mikhail Borovitinov, assistant head of the Ministry of Justice Chief Prison Administration, at the International Prison Congress in Washington, 1910: 2,628

These numbers reflect only executions of civilians, and do not include a large number of summary executions by punitive army detachments and executions of military mutineers. Peter Kropotkin, an anarchist, noted that official statistics excluded executions conducted during punitive expeditions, especially in Siberia, the Caucasus and the Baltic provinces. By 1906 some 4,509 political prisoners were incarcerated in Russian Poland, 20 percent of the empire's total.

Ivanovo-Voznesensk was known as the 'Russian Manchester' for its textile mills. In 1905, its local revolutionaries were overwhelmingly Bolshevik; it was the first Bolshevik branch in which workers outnumbered intellectuals.
- 11 May 1905: The 'Group', the revolutionary leadership, called for the workers at all the textile mills to strike.
- 12 May: The strike begins. Strike leaders meet in the local woods.
- 13 May: 40,000 workers assemble before the Administration Building to give Svirskii, the regional factory inspector, a list of demands.
- 14 May: Workers' delegates are elected. Svirskii had suggested they do so, as he wanted people to negotiate with. A mass meeting is held in Administration Square. Svirskii tells them the mill owners will not meet their demands but will negotiate with elected mill delegates, who will be immune from prosecution, according to the governor.
- 15 May: Svirskii tells the strikers they can negotiate only about each factory in turn, but they may hold elections wherever they wish. The strikers elect delegates to represent each mill while they are still out in the streets. Later the delegates elect a chairman.
- 17 May: The meetings are moved to the bank of the Talka River, at the suggestion of the police chief.
- 27 May: The delegates' meeting house is closed.
- 3 June: Cossacks break up a workers' meeting, arresting over 20 men. Workers start sabotaging telephone wires and burn down a mill.
- 9 June: The police chief resigns.
- 12 June: All prisoners are released. Most mill owners flee to Moscow. Neither side gives in.
- 27 June: Workers agree to stop striking on 1 July.

The 1905–1907 revolution was at the time the largest wave of strikes and the widest emancipatory movement Poland had ever seen, and it would remain so until the 1970s and 1980s. In 1905, 93.2% of Congress Poland's industrial workers went on strike. The first phase of the revolution consisted primarily of mass strikes, rallies and demonstrations; this later evolved into street skirmishes with the police and army, as well as bomb assassinations and robberies of transports carrying money to tsarist financial institutions. One of the major events of that period was the insurrection in Łódź in June 1905, but unrest occurred in many other areas too. Warsaw was also an active centre of resistance, particularly in terms of strikes, whereas further south the Republika Ostrowiecka and Republika Zagłębiowska were proclaimed (tsarist control was later restored in these areas when martial law was introduced). Until November 1905, Poland was in the vanguard of the revolutionary movement in the Russian Empire despite the vast military forces deployed against it; even as the upheaval waned, large strikes remained more frequent in Poland than in other parts of the Empire in the years 1906–1907. Due to its reach, violence, radicalism, and effects, some Polish historians even consider the events of the 1905 revolution in Poland a fourth Polish uprising against the Russian Empire. Rosa Luxemburg described Poland as "one of the most explosive centres of the revolutionary movement" which "in 1905 marched at the head of the Russian Revolution".

In the Grand Duchy of Finland, the Social Democrats organised the general strike of 1905 (12–19 November [O.S. 30 October – 6 November]). The Red Guards were formed, led by Captain Johan Kock. During the general strike, the Red Declaration, written by Finnish politician and journalist Yrjö Mäkelin, was published in Tampere, demanding dissolution of the Senate of Finland, universal suffrage, political freedoms, and abolition of censorship. Leo Mechelin, leader of the constitutionalists, drafted the November Manifesto; the revolution resulted in the abolition of the Diet of Finland and of the four Estates, and in the creation of the modern Parliament of Finland. It also resulted in a temporary halt to the Russification policy that Russia had started in 1899. On 12 August [O.S.
30 July] 1906, Russian artillerymen and military engineers rose in revolt in the fortress of Sveaborg (later called Suomenlinna), Helsinki. The Finnish Red Guards supported the Sveaborg Rebellion with a general strike, but the mutiny was quelled within 60 hours by loyal troops and ships of the Baltic Fleet.

In the Governorate of Estonia, Estonians called for freedom of the press and assembly, for universal suffrage, and for national autonomy. On 29 October [O.S. 16 October], the Russian army opened fire on a meeting at a street market in Tallinn attended by about 8,000–10,000 people, killing 94 and injuring over 200. The October Manifesto was supported in Estonia and the Estonian flag was displayed publicly for the first time. Jaan Tõnisson used the new political freedoms to widen the rights of Estonians by establishing the first Estonian political party, the National Progress Party. Another, more radical political organisation, the Estonian Social Democratic Workers' Union, was founded as well. The moderate supporters of Tõnisson and the more radical supporters of Jaan Teemant could not agree about how to continue with the revolution, agreeing only that both wanted to limit the rights of the Baltic Germans and to end Russification. Radical views were publicly welcomed, and in December 1905 martial law was declared in Tallinn. A total of 160 manors were looted, and around 400 workers and peasants were subsequently killed by the army. Estonian gains from the revolution were minimal, but the tense stability that prevailed between 1905 and 1917 allowed Estonians to advance the aspiration of national statehood.

Following the shooting of demonstrators in St. Petersburg, a wide-scale general strike began in Riga. On 26 January [O.S. 13 January], Russian army troops opened fire on demonstrators, killing 73 and injuring 200 people. During the middle of 1905, the focus of revolutionary events moved to the countryside, with mass meetings and demonstrations. Some 470 new parish administrative bodies were elected in 94% of the parishes in Latvia. The Congress of Parish Representatives was held in Riga in November. In autumn 1905, armed conflict between the Baltic German nobility and the Latvian peasants began in the rural areas of Livonia and Courland. In Courland, the peasants seized or surrounded several towns. In Livonia, the fighters controlled the Rūjiena–Pärnu railway line. Martial law was declared in Courland in August 1905, and in Livonia in late November. Special punitive expeditions were dispatched in mid-December to suppress the movement. They executed 1,170 people without trial or investigation and burned 300 peasant homes. Thousands were exiled to Siberia. Many Latvian intellectuals escaped only by fleeing to Western Europe or the US. In 1906, the revolutionary movement gradually subsided.

- Artists Valentin Serov, Boris Kustodiev, Ivan Bilibin and Mstislav Dobuzhinsky published their works dedicated to the 1905 Revolution in the satirical magazine Zhupel.
- Novels Mother (1907) by Maxim Gorky and The Silver Dove (1909) by Andrei Bely were written under the influence of the 1905 Revolution. The same authors depicted it in their later works: Andrei Bely in his Petersburg (1913/1922) and Maxim Gorky in The Life of Klim Samgin (1927–1931).
- Battleship Potemkin (1925): Sergei Eisenstein originally intended this film as a pro-Bolshevik narrative of the 1905 Russian Revolution.
- Doctor Zhivago, a 1957 novel by Boris Pasternak set in the years between 1902 and World War II.
- Symphony No. 11 (Shostakovich), subtitled The Year 1905, written in 1957.
Scientists at NASA have detected water vapour for the first time above the surface of Europa, a finding that supports the idea of a liquid water ocean sloshing beneath the miles-thick ice shell of Jupiter's moon. The study, published in the journal Nature Astronomy, measured the vapour by peering at Europa through the W. M. Keck Observatory in Hawaii, US.

Missions to the outer solar system have amassed enough information about Europa to make it a high-priority target of investigation in NASA's search for life. What makes this moon so alluring is the possibility that it may possess all of the ingredients necessary for life, said researchers from NASA's Goddard Space Flight Center in the US. Scientists have evidence that one of these ingredients, liquid water, is present under the icy surface and may sometimes erupt into space in huge geysers. However, no one has been able to confirm the presence of water in these plumes by directly measuring the water molecule itself, NASA said in a statement.

Confirming that water vapour is present above Europa helps scientists better understand the inner workings of the moon, the US space agency said. For example, it helps support an idea—of which scientists are confident—that there is a liquid water ocean, possibly twice as big as Earth's, sloshing beneath this moon's miles-thick ice shell, NASA said. Another source of water for the plumes, some scientists suspect, could be shallow reservoirs of melted water-ice not far below Europa's surface. It is also possible that Jupiter's strong radiation field is stripping water particles from Europa's ice shell, though the recent investigation argued against this mechanism as the source of the observed water.

“Essential chemical elements (carbon, hydrogen, oxygen, nitrogen, phosphorus, and sulphur) and sources of energy, two of three requirements for life, are found all over the solar system. But the third—liquid water—is somewhat hard to find beyond Earth,” said Lucas Paganini, a NASA planetary scientist who led the water detection investigation. “While scientists have not yet detected liquid water directly, we've found the next best thing: water in vapour form,” Paganini said.

The researchers said that they detected enough water releasing from Europa (2,360 kilogrammes per second) to fill an Olympic-size swimming pool within minutes. The scientists also found that the water appears infrequently, at least in amounts large enough to detect from Earth. “For me, the interesting thing about this work is not only the first direct detection of water above Europa, but also the lack thereof within the limits of our detection method,” said Paganini. The team detected the faint yet distinct signal of water vapour just once throughout 17 nights of observations between 2016 and 2017. Looking at the moon from Keck Observatory, the scientists saw water molecules at Europa's leading
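The pool-filling comparison above can be sanity-checked with back-of-the-envelope arithmetic. The short sketch below is illustrative only: the pool volume and water density are assumed values, not figures from the study.

```python
# Rough check of the "Olympic pool within minutes" comparison (illustrative only).
# Assumptions (not from the article): a 50 m x 25 m x 2 m pool holds about
# 2,500 cubic metres of water, i.e. roughly 2.5 million kilograms.

outgassing_rate_kg_per_s = 2360            # rate reported in the study
pool_volume_m3 = 50 * 25 * 2               # assumed Olympic pool dimensions
pool_mass_kg = pool_volume_m3 * 1000       # water density of ~1000 kg per cubic metre

fill_time_min = pool_mass_kg / outgassing_rate_kg_per_s / 60
print(f"Approximate fill time: {fill_time_min:.0f} minutes")  # roughly 18 minutes
```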
Elections for the Senate

The powers and operations of the Senate are inextricably linked with the manner of its election, particularly its direct election by the people of the states by a system of proportional representation. This chapter therefore examines the bases of the system of election as well as describing its salient features.

The constitutional framework

The Constitution provides that “The Senate shall be composed of senators for each State, directly chosen by the people of the State, voting, until the Parliament otherwise provides, as one electorate”. Each Original State initially had six senators and now has twelve. The Parliament is authorised to increase the number of senators elected by each state subject to the qualification that “equal representation of the several Original States shall be maintained and that no Original State shall have less than six senators”. Senators representing the states are elected for terms of six years, half the Senate retiring at three-yearly intervals except in cases of, or following, a simultaneous dissolution of both Houses. A state may not be deprived of its equal representation in the Senate by any alteration of the Constitution without the consent of the electors of the state.

Bases of the constitutional arrangements

The constitutional foundations for composition of the Senate reflect the federal character of the Commonwealth. Arrangements for the Australian Senate correspond with those for the United States Senate in that each state is represented equally irrespective of geographical size or population; and senators are elected for terms of six years. Both Senates are essentially continuing Houses: in Australia half the Senate retires every three years; in the United States, a third of the Senate is elected at each biennial election. A major distinction is, however, that the United States Senate can never be dissolved whereas the Australian Senate may be dissolved in the course of seeking to settle disputes over legislation between the two Houses.

An important innovation in Australia was the requirement that senators should be “directly chosen by the people of the State”. Direct election of United States senators was provided in the constitution by an amendment which took effect in 1913, prior to which they were elected by state legislatures. The innovatory character of Australia's Senate is also illustrated by contrasting it with the Canadian Senate created by the British North America Act 1867. The provinces are not equally represented in the Canadian Senate; and senators are appointed by the national government, initially for life and now until age 75. Composition on this antiquated basis has deprived the Canadian Senate of the legitimacy deriving from popular choice and has meant, in practice, that the Canadian Senate has contributed neither to enhancing the representivity of the Canadian Parliament (the more desirable because of the first-past-the-post method of election used in the House of Commons) nor to assuaging the pressures of Canada's culturally and geographically diverse federation. Prominent proposals for reform of Canada's Senate in recent decades have included equality of representation for provinces and direct election of senators.

The principle of equal representation of the states is vital to the architecture of Australian federalism. It was a necessary inclusion at the time of federation in order to secure popular support for the new Commonwealth in each state, especially the smaller states.
It ensures that a legislative majority in the Senate is geographically distributed across the Commonwealth and prevents a parliamentary majority being formed from the representatives of the three largest cities and their environs alone. In contemporary Australia it acknowledges that the states continue to be the basis of activity in the nation whether for political, commercial, cultural or sporting purposes. Many organisations in Australia, at the national level, are constituted on the basis of equal state representation or with some modification thereof; this includes the major political parties. By contrast, very few nation-wide bodies are organised on the principle of the election and composition of the House of Representatives. Indeed, in Australia's national life, a body such as the House of Representatives is, if not an aberration, at least relatively unusual. This demonstrates that in Australia federalism is organic and not simply a nominal or contrived feature of government and politics. Constitutional provisions governing composition of the Senate thus remain as valid for Australia in the 21st century as they were in securing support for the Commonwealth in the nation-building final decade of the 19th century. In addition to senators elected by the people of the states, the Constitution also provides, in section 122, that in respect of territories, the Parliament “may allow the representation of such territory in either House of the Parliament to the extent and on the terms which it thinks fit”. Since 1975 the Northern Territory and the Australian Capital Territory have each elected two senators. The particular arrangements for election and terms of territory senators are set out in detail below. The principles of direct election by the people and equal representation of the states are entrenched in the Constitution and cannot be altered except by means of referendum and with the consent of every state. On the other hand, the principle of choosing senators “by the people of the State, voting ... as one electorate” is susceptible to change by statutory enactment. It is, however, essential to the effectiveness of the Senate as a component of the bicameral Parliament. Current electoral arrangements and proportional representation As explained in Chapter 1, the Senate, since proportional representation was introduced in 1948, taking effect from 1949, has been the means of a marked improvement in the representivity of the Parliament. The 1948 electoral settlement for the Senate mitigated the dysfunctions of the single member electorate basis of the House of Representatives by enabling additional, discernible bodies of electoral opinion to be represented in Parliament. The consequence has been that parliamentary government of the Commonwealth is not simply a question of majority rule but one of representation. The Senate, because of the method of composition, is the institution in the Commonwealth which reconciles majority rule, as imperfectly expressed in the House of Representatives, with adequate representation. Proportional representation applied in each state with the people voting as one electorate has been twice affirmed. In 1977, the people at referendum agreed to an amendment to the Constitution so that in filling a casual vacancy by the parliament of a state (or the state governor as advised by the state executive council), the person chosen will be drawn, where possible, from the party of the senator whose death or resignation has given rise to the vacancy. 
A senator so chosen completes the term of the senator whose place has been taken and is not required, as was previously the case, to stand for election at the next general election of the House of Representatives or periodical election of the Senate. The previous arrangement had the defect of, on occasions, distorting the representation of a state as expressed in a periodical election. The Constitution thus reinforces a method of electing senators which is itself only embodied in the statute law. The present combination of statute and constitutional law serves to underline and preserve the representative character of the Senate. If the statute law were amended so as to abandon the principle of state-wide electorates for choosing senators in favour of Senate electorates, this would not only have the defect of replicating the House of Representatives system, which by itself is an inadequate means of even trying to represent electoral opinion fairly, but would invalidate the special method of filling a casual vacancy now provided for in section 15 of the Constitution. Single member constituencies would probably be unconstitutional, as they would result in only part of the people of a state voting in each periodical Senate election. There are grounds for concluding that anything other than state-wide electorates and proportional representation would be unconstitutional.

The second affirmation of state-wide electorates for the purpose of electing the Senate may be found in the decision of the Commonwealth Parliament, on the basis of a private senator's bill, to remove the authority of the Queensland Parliament to make laws dividing Queensland “into divisions and determining the number of senators to be chosen for each division”.

The irresistible conclusion of any analysis of basic arrangements for election of senators is that, for reasons of principle and practice, these features are essential: direct election by the people; equality of representation of the states; distinctive method of election based on proportional representation as embodied in the 1948 electoral settlement for the Senate; elections in which each state votes as one electorate; and filling of casual vacancies according to section 15 of the Constitution.

Terms of service – state senators

Except in cases of simultaneous dissolution, senators representing the states are elected for terms of six years. Terms commence on 1 July following the election. The commencement date was originally 1 January but was altered by referendum in 1906 in an ultimately unsuccessful attempt to avoid the problem of unsynchronised elections for both Houses. The terms of senators elected following a dissolution of the Senate commence on 1 July preceding the date of the general election. Following a general election for the Senate, senators are divided into two classes. Unless another simultaneous election for both Houses intervenes, those in the first class retire on 30 June two years after the general election; those in the second class retire on 30 June five years after the general election. The method of dividing senators is described below.

Terms of service – territory senators

Territory senators' terms commence on the date of their election and end on the day of the next election. They therefore do not have the fixed six-year terms commencing on 1 July of the senators elected to represent the states. Their terms are, however, unbroken, which is important in ensuring that the Senate has a full complement of members during an election period.
Their elections coincide with general elections for the House of Representatives.

Number of senators

Under the Constitution each original state is represented by a minimum of six senators. This number has been twice increased, in 1948 (taking effect at the 1949 elections) to 10, and in 1983 (taking effect in the election of 1984) to 12. The Senate's size also increased after 1975 following the election of two senators each by the Australian Capital Territory and the Northern Territory. The size of the Senate was 36 from 1901 until 1949; 60 from 1950 to 1975; 64 from 1976 to 1984; and 76 since 1985. The places of half of the senators for each state are open to election every three years, under the system of rotation. Electoral arrangements for territory senators are described below.

Election timing – periodical elections

Section 13 of the Constitution provides that a periodical election for the Senate must “be made” within one year before the relevant places in the Senate are to become vacant. The relevant places of senators become vacant on 30 June. This means that the election must occur on or after 1 July of the previous year. The question which arises is whether the whole process of election, commencing with the issue of the writs, must occur within one year of the places becoming vacant, or whether only the polling day or subsequent stages must occur within that period, so that the writs for the election could be issued before 1 July. This question has not been definitely decided.

In Vardon v O'Loghlin (1907) 5 CLR 201, the question before the High Court was whether, the election of a senator having been found to be void, this created a vacancy which could be filled by the parliament of the relevant state under section 15 of the Constitution. The Court found that this situation did not create a vacancy which could be filled by that means, but that the senator originally returned as elected was never elected. A contrary argument was raised to the effect that, under section 13 of the Constitution, the term of service of a senator began on 1 January [now 1 July] following the day of his election, and it would lead to confusion if it were held that a place left vacant by the subsequent voiding of the election, perhaps a year or more after the commencement of the term, could not be filled as a vacancy under section 15. In dismissing this argument, the Court, in the judgment delivered by Chief Justice Samuel Griffith, made the following observation:

It is plain, however, that sec. 13 was framed alio intuitu, i.e., for the purpose of fixing the term of service of senators elected in ordinary and regular rotation. The term “election” in that section does not mean the day of nomination or the polling day alone, but comprises the whole proceedings from the issue of the writ to the valid return. And the election spoken of is the periodical election prescribed to be held in the year at the expiration of which the places of elected senators become vacant. The words “the first day of January following the day of his election” in this view mean the day on which he was elected during that election. For the purpose of determining his term of service any accidental delay before that election is validly completed is quite immaterial.
This part of the judgment has been taken to indicate that, in interpreting the provision in section 13 whereby the periodical Senate election must be made within one year of the relevant places becoming vacant, the Court would hold that the whole process of election, not simply the polling day or subsequent stages, must occur within that period. This question, however, has not been distinctly decided. It would still be open to the Court to hold that only the polling day or subsequent stages must occur within the prescribed period, and there are various arguments which could be advanced to support this interpretation. The view that the requirement that the election “be made” within the relevant period means only that the election must be completed in that period is quite persuasive. If it were decided, however, to hold a periodical Senate election with only the polling day or subsequent stages occurring within the prescribed period, there would be a risk of the validity of the election being successfully challenged and the election held to be void. This would lead to the major consequence that the whole election process would have to start again. It may be doubted whether the Court would favour an interpretation which would bring about this consequence. Section 13 of the Constitution, as has been noted, also provides that the term of service of a senator is taken to begin on the first day of July following the day of the election. In this provision, the term “day of …. election” clearly means the polling day for the election. This is in accordance with the finding in Vardon v O'Loghlin. The day of election is polling day provided that the election is valid; if the election is found to be invalid then no election has occurred and the question of what is the day of election does not arise. Election timing – simultaneous general elections The provision for dating a senator's term from 1 July preceding simultaneous general elections for both Houses has been seen to be the source of a problem stemming from the preference of governments, for financial reasons as well as others of party advantage, to avoid separate dates for a general election of the House of Representatives (the term of which is governed by the date of the simultaneous dissolution) and an ensuing periodical election for half the Senate. The consequence in most cases has been to hold an “early” general election of the House to coincide with the next periodical Senate election. An instance where an “early” general election for the House was not subsequently held in order to synchronise with the next periodical election for the Senate was May 1953; the 1955 general election for the House is the only occasion when an “early” general election has been called to coincide with election of senators to fill the places of second class (long term) senators elected following simultaneous elections for both Houses. Elections arising from simultaneous dissolutions, held in August 1914, July 1987 and July 2016 did not give rise in significant form to the issue of keeping elections for the two Houses synchronised because of the close proximity of the commencing dates for Senate and House terms in the relevant circumstances. However, the simultaneous dissolution of May 2016, only days before the last possible date to dissolve both Houses under section 57, led to a longer than usual campaign period to ensure a July election and minimal backdating of senators' terms. 
The early dissolution of the House of Representatives in November 1929 had, in the event, no effect on synchronisation of Senate and House elections because another early dissolution, occasioned by defeat of the Scullin Government on the floor of the House, was needed in December 1931, a date when a periodical election for the Senate was convenient. The House of Representatives was prematurely dissolved in 1963; as a consequence there was a periodical election for the Senate the following year. Subsequently there were general elections for the House in 1966, 1969 and 1972, and periodical elections for the Senate in 1967 and 1970. This sequence of unsynchronised elections ended with the simultaneous dissolutions of April 1974. The case for synchronisation of elections for the two Houses is more a question of convenience and partisan advantage than one of institutional philosophy. Financial considerations simply buttress arguments of party advantage. In a truly bicameral system there is no requirement at all for synchronisation of elections. Proposals to make this a requirement of the Australian Constitution have four times failed at referendum, even though “expert” opinion continues to favour a constitutional amendment of this character. If there is to be change, a more practical approach would be an alteration of the Constitution to provide that the terms of senators elected in a simultaneous dissolution election should be deemed to commence on 1 July following (rather than preceding) the date of election. Provided that the House of Representatives was not subsequently dissolved within two years of election, synchronisation of a general election for the House and a periodical election for the Senate could be restored with relative ease. Such a proposal, if adopted, would remove the current defect in simultaneous dissolution arrangements of circumscribing the standard six-year term for senators by anything up to one year. This approach would, on the other hand, avoid the two major deficiencies posed by simultaneous election proposals: the augmented power placed in the hands of a prime minister by extending executive government authority over the life of the House of Representatives to half the Senate; and diminishing bicameralism by irrevocably tying the electoral schedule for the Senate to that of the House of Representatives. Effective bicameralism requires that the second chamber should have a significant measure of autonomy in its electoral cycle, as well as distinctive electoral arrangements. Issue of writs Writs for the election of senators are issued by the state governor under the authority of the relevant state legislation. The practice is for the governors of the states (when the elections are concurrent) to fix times and polling places identical with those for the elections for the House of Representatives, the writs for which are issued by the Governor-General. In practice, the Prime Minister informs the Governor-General of the requirements of section 12 of the Constitution, which provides that writs for the election of senators are issued by the state governors, observes that it would be desirable that the states should adopt the polling date proposed by the Commonwealth, and requests the Governor-General to invite the state governors to adopt a suggested date. Theoretically, a state could fix some date for the Senate poll other than that suggested by the Commonwealth, provided it is a Saturday. Different states, too, could fix different Saturdays for a Senate poll. 
This power vested in the states to issue writs for Senate elections, fixing the date of polling, gives expression to the state basis of representation in the Senate. The Constitution provides that, in the case of a dissolution of the Senate, writs are issued within ten days from the proclamation of the dissolution. The Governor-General issues the writs for elections of territory senators.

Under changes introduced in the 2007 election, claims for enrolment or transfer of enrolment could not be considered if lodged after 8 pm on the date of issue of the writs, and the rolls closed on the third working day after the writs were issued. These provisions were ruled invalid by the High Court in Rowe v Electoral Commissioner (2010) 243 CLR 1 and replacement legislation providing for the rolls to close seven days after the date of the writs was enacted in 2011. A claim for enrolment or transfer of enrolment received between the close of rolls and polling day (“the suspension period”) that was delayed in the post by an industrial dispute is regarded as having been received before the rolls closed. Claims received during the suspension period are not considered until after polling day. Potential disenfranchisement of claimants for enrolment or transfer during the suspension period was the subject of a challenge before the 2016 election but the challenge was dismissed by the High Court in Murphy & Anor v Electoral Commissioner HCA Trans 111. In Getup Ltd v Electoral Commissioner FCA 869, the Federal Court held that an online enrolment form signed with a digital pen was in order.

Nominations close at least 10 days but not more than 27 days after the issue of the writ. A candidate for election to either House of the Parliament must be at least 18 years old; an Australian citizen; and an elector entitled to vote, or a person qualified to become such an elector. A person meeting the three qualifications may be disqualified for several reasons. Members of the House of Representatives, state parliaments or the legislative assemblies of the Australian Capital Territory or the Northern Territory cannot be chosen or sit as senators. Members of local government bodies, however, are offered some protection by s. 327(3) of the Commonwealth Electoral Act, but the High Court has not ruled conclusively on this matter. Others disqualified under the Constitution, section 44, are:
- anyone who is a citizen or subject of a foreign power;
- anyone convicted and under sentence, or subject to be sentenced, for an offence punishable by Commonwealth or state law by a sentence of 12 months or more;
- anyone who is an undischarged bankrupt;
- anyone who holds an office of profit under the Crown; and
- anyone with a pecuniary interest in any agreement with the Commonwealth Public Service (except as a member of an incorporated company of more than 25 people).

A person convicted of certain electoral-related offences is disqualified for 2 years. For cases of the disqualification of senators and senators elect, see Chapter 6, Senators, Qualifications of senators. No one may nominate as a candidate for more than one election held on the same day. Hence it is not possible for anyone to nominate for more than one division for the House of Representatives, or more than one state or territory for the Senate, or for both the House and the Senate. Nominations must be made by 12 noon on the day nominations close and the onus is on candidates to ensure nominations reach the electoral officer in time.
Candidates may withdraw their nominations at any time up to, but not after, the close of nominations. Nominations of candidates for the Senate, made on the appropriate nomination form (or a facsimile of the form), are made to the Australian Electoral Officer for the state or territory for which the election is to be held. A candidate may be nominated by 100 electors or the registered officer of the registered political party which has endorsed the candidate. Nomination of a candidate of a registered political party not made by the registered officer must be verified. Sitting independent candidates require only one nominator. Nomination forms are not valid unless the persons nominated:
- consent to act if elected;
- declare that they are qualified to be elected and that they are not candidates in any other election to be held on the same day;
- state whether they are Australian citizens by birth or became citizens by other means; and
- provide relevant particulars.

Candidates in a Senate election may make a request on the nomination form to have their names grouped on the ballot paper. A party name or abbreviation (or, for a group endorsed by more than one registered party, a composite name) may be printed on the ballot paper adjacent to the group voting square, together with any party logo. A deposit must be lodged with each nomination. The deposit, payable in legal tender or banker's cheque only, is $2,000 for a Senate nomination. The deposit is returned in a Senate election if, in the case of un-grouped candidates, the candidate's total number of first preference votes is at least four percent of the total number of formal first preference votes; or, where the candidate's name is included in a group, the sum of the first preference votes polled by all the candidates in the group is at least four percent of the total number of formal first preference votes.

Where the number of nominations does not exceed the number of vacancies, the Australian Electoral Officer, on nomination day, declares the candidates elected. In a Senate election, if any candidate dies between the close of nominations and polling day, and the number of remaining candidates is not greater than the number of candidates to be elected, those candidates are declared elected. However, if the remaining candidates are greater in number than the number of candidates to be elected, the election proceeds. A vote recorded on a Senate ballot paper for a deceased candidate is counted to the candidate for whom the voter has recorded the next preference, and the numbers indicating subsequent preferences are regarded as altered accordingly. In a House of Representatives election, if a candidate dies between the close of nominations and polling day, the election in that division is deemed to have wholly failed and does not proceed. A new writ is issued for another election in that division, but this supplementary election is held using the electoral roll prepared for the original election. The statutory provisions regarding death after the close of nominations of a nominated candidate for the Senate could seriously prejudice the prospects of a political party unless a sufficient number of candidates is nominated to avoid disadvantage in the event of a death. The constitutionality of the statutory requirements for the registration of a political party (500 members, no overlapping membership with other parties) was upheld in Mulholland v Australian Electoral Commission (2004) 220 CLR 181.
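To make the four percent deposit-return rule described above concrete, here is a minimal sketch. The grouping logic follows the description in this chapter, but the function, parameter names and sample figures are purely illustrative and are not drawn from the Electoral Act.

```python
def deposit_returned(total_formal_first_prefs, candidate_votes=0, group_votes=None):
    """Illustrative check of the four percent deposit-return threshold.

    For an un-grouped candidate, the candidate's own first preference votes are
    tested; for a grouped candidate, the combined first preferences of the whole
    group are used. (A sketch of the rule as described above, not the statutory text.)
    """
    threshold = 0.04 * total_formal_first_prefs
    relevant_votes = sum(group_votes) if group_votes is not None else candidate_votes
    return relevant_votes >= threshold

# Hypothetical figures: a group polling 120,000 first preferences out of
# 2,500,000 formal first preference votes (4.8%) would have its deposits returned.
print(deposit_returned(2_500_000, group_votes=[90_000, 20_000, 10_000]))  # True
```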
Polling takes place on a Saturday between the hours of 8 am and 6 pm. The Divisional Returning Officer for each electoral Division arranges for appointment of all polling officials for the Division and makes all necessary arrangements for equipping polling places with voting screens, ballot boxes, ballot papers and certified lists of voters. Candidates are prohibited from taking any part in the actual conduct of the polling. They may appoint a scrutineer to represent them at each polling place. The scrutineer has the right to observe the sealing of the empty ballot box before the poll commences at 8 am; observe the questioning of voters by the officer issuing ballot papers; object to the right of any person to vote; and observe all aspects of voting by voters in polling places, hospitals, prisons and remote mobile teams.

Voting is compulsory for all electors with the exception of those living or travelling abroad, itinerant electors and electors located in the Antarctic. Contrary to the widely held belief that an elector only has to attend a polling place and have their name marked off the roll, the electoral Act specifically states that it shall be the duty of every elector to vote in each election and is quite specific about how ballot papers must be marked. The fact that voting is a private act performed in public means that the identity of electors who deface their ballot paper or place it unmarked in the ballot box will never be discovered. Nonetheless, the law is still very clear on this point. Some prisoners are excluded from voting although some of the relevant provisions of the Commonwealth Electoral Act were ruled invalid in the case of Roach v Electoral Commissioner (2007) 233 CLR 162. Replacement legislation was enacted in 2011. The penalty for failing to vote without a valid and sufficient reason is $20 or, if the matter is dealt with in court, a fine not exceeding $50.

Electors may vote at any polling place in the House of Representatives electorate for which they are enrolled, at any polling place in the same state or territory (absent voting) or at an interstate voting centre if they are travelling interstate on election day. Under prescribed circumstances electors may vote by post or cast a pre-poll vote. Special arrangements are also made for ballots to be cast by eligible voters in hospitals, prisons and remote locations including Antarctica, and those travelling or residing abroad.

The ballot paper

A ballot paper for a Senate election has two parts, each reflecting particular methods of registering a vote. Electors may use only one method. The two parts are separated by a thick horizontal line known as the dividing line, and the two methods are referred to as voting “above the line” or “below the line”. Introduced in 1983 to address an increasing proportion of informal votes for the Senate, the provisions for group voting tickets simplified voting for the Senate for electors who chose not to indicate their order of preference for all candidates for that state or territory. By placing the number 1 in a box above the line for their chosen party, group or incumbent senator, voters could thereby adopt the registered preferences of that party, group or senator. The constitutional validity of this method of voting was upheld in McKenzie v Commonwealth (1984) 57 ALR 747, Abbotto v Australian Electoral Commission (1997) 144 ALR 352 and Ditchburn v Australian Electoral Officer for Queensland (1999) 165 ALR 147.
In due course, however, it became increasingly apparent that the system could be exploited by micro-parties with appealing names, whose exchanges of preferences resulted in the election of candidates with minuscule primary votes. Recommendations by the Joint Standing Committee on Electoral Matters in an interim report on the conduct of the 2013 federal election for the abolition of group and individual voting tickets and the adoption of optional preferential voting both above and below the line were given effect in the Commonwealth Electoral Amendment Act 2016. The new provisions were the subject of an immediate challenge that was unanimously dismissed by the High Court, which found that they did not impinge on the constitutional requirements for there to be one method of choosing senators which shall be uniform for all the States (s. 9) or for senators to be directly chosen by the people of the State (s. 7). Where groups of candidates or individual incumbent senators have registered as such, a series of boxes is printed on the top part of the Senate ballot paper above the candidates' names. The voter may vote above the line by numbering at least 6 of the boxes in the order of his or her choice, starting with the number 1. Alternatively, where the voter wishes to indicate preferences among individual Senate candidates on the bottom part of the ballot paper, the voter must place a number 1 in the square opposite the name of the candidate most preferred, and give preference votes for at least 11 other candidates by placing the numbers 2, 3, 4 (and so on, as the case requires) in the squares opposite their names so as to indicate an order of preference for them. The top part of the ballot paper is left blank.
Counting the vote
At the close of the poll each polling place becomes a counting centre under the control of an assistant returning officer who will have been the officer-in-charge of that polling place during the hours of polling. Only ordinary votes (not postal, pre-poll or absentee votes) are counted at the counting centres on election night. Votes for the House of Representatives are counted before Senate ballot papers, as there is widespread community interest in the formation of government and usually considerable time before the Senate terms begin. Furthermore, the nature of the Senate voting system means that a quota cannot be struck on polling night, so only provisional figures can be calculated from the ballot papers counted at polling places. Ballot papers are sorted by the polling officials according to the formal first preference votes marked, and the results are then tabulated and sent to the Divisional Returning Officer. Results are relayed through a computer network to the AEC's Virtual Tally Room where progressive figures are displayed. When scrutiny of ordinary votes at each counting centre ends, ballot papers are placed in sealed parcels and delivered to the Divisional Returning Officer. Other votes are counted at the office of the Divisional Returning Officer after election night. In recent times, amendments to the electoral Act have permitted the computerised scrutiny of votes in Senate elections, which has reduced the time taken to calculate results, particularly in the larger States. After the 2013 election, during the course of a recount of the Western Australian Senate vote, it was discovered that 1370 ballot papers had been lost. An official inquiry failed to locate the papers or identify the circumstances of the loss.
Given the closeness of the results and the different outcome from the recount, the AEC itself lodged a petition with the High Court sitting as the Court of Disputed Returns asking for the election result to be declared void. Two other parties lodged similar petitions. The Court declared the election void, holding that it was precluded by the Commonwealth Electoral Act 1918 from reconstructing the result from earlier records of the lost ballot papers, the loss of which, combined with the closeness of the count, inevitably affected the result. The election was held again on 5 April 2014, with a date for the return of the writs that allowed all elected or re-elected senators to begin their terms on 1 July 2014. Candidates may appoint scrutineers who are entitled to be present throughout the counting of votes. The number of scrutineers for a candidate at each counting centre is limited to the number of officers engaged in the counting.
Formal voting in a Senate election
Following a 2008 decision of the Federal Court sitting as the Court of Disputed Returns, the Court has set out a series of principles to be applied to the consideration of the admission or rejection of ballot papers. In summary, these principles are to (i) err in favour of the franchise; (ii) have regard only to what is on the ballot paper; and (iii) construe the ballot paper as a whole. Subsection 268(3) limits the reasons for informality to those specified and requires a ballot paper to be given effect according to the voter's intention, so far as it is clear. However, the tests which apply to acceptance of a Senate ballot paper as formal are complicated because a Senate vote can be recorded either by numbering of preferences for individual candidates below the line or for parties or groups above the line. Additionally, a ballot paper may be accepted as formal even where the voter has erroneously attempted to record both types of votes. Thus three distinct cases may arise. The first case is a vote above the line. A ballot paper is formal if:
- the numbers 1 to at least 6 are written in the squares printed above the line in order of preference for the parties or groups represented; or
- if there are 6 or fewer squares printed above the line, they are numbered consecutively from 1.
Specific allowances are made for voters who deviate from these requirements. A ballot paper is formal if the voter marks only the number 1 in a box above the line, or the number 1 and one or more higher numbers. In addition, a tick or a cross in a box above the line is accepted as the equivalent of the number 1. If a number is repeated, that number and any higher number are disregarded. If a number is missed, any numbers higher than the missing number are disregarded. The second case is a vote below the line. A ballot paper is formal if:
- the numbers 1 to at least 12 are written in the squares printed below the line in order of preference for individual candidates; or
- if there are 12 or fewer squares printed below the line, they are numbered consecutively from 1.
Specific allowances are again made for voters who deviate from these requirements. If there are more than 6 squares printed below the line on a ballot paper, a vote is formal if the voter has numbered any of those squares consecutively from 1 to 6. In addition, a tick or a cross in a box below the line is accepted as the equivalent of the number 1. If a number is repeated, that number and any higher number are disregarded.
If a number is missed, any numbers higher than the missing number are disregarded. Finally, if a ballot has been marked both above and below the line and each vote would have been formal if recorded on its own, the vote below the line is included in the scrutiny rather than the party or group vote above the line. As noted in Chapter 6, upon the finding that Senator Wood had not been eligible to contest an election for the Senate in July 1987, it was determined that the place should be filled by counting or recounting of ballot papers cast for candidates for election for the Senate at the election. It was held “that the ballot papers for an election to the Senate, conducted under the system of proportional preferential voting prescribed by Part XVIII of the Commonwealth Electoral Act, for which an unqualified person was a candidate, were not invalid but indications of voters' preference for the candidate were ineffective”.
Determining the successful candidates
The essential features of the Senate system of election are as follows:
1. To secure election, candidates must secure a quota of votes. The quota is determined by dividing the total number of formal first preference votes in the count by one more than the number of senators to be elected for the state or territory and increasing the result by one. A quota cannot be determined until the total number of formal ballot papers is calculated, which means waiting until the statutory period (13 days) for the receipt of postal votes has passed.
2. Should a candidate gain an exact quota, the candidate is declared elected and those ballot papers are set aside as finally dealt with, as there are no surplus votes.
3. For each candidate elected with a surplus, commencing with the candidate elected first, a transfer value is calculated for all the candidate's ballot papers. All those ballot papers are then re-examined and the number showing a next available preference for each of the continuing candidates is determined. Each of these numbers, ignoring any fractional remainders, is added to the continuing candidates' respective progressive totals of votes. Surplus votes are transferred at less than their full value. The transfer value is calculated by dividing the successful candidate's total surplus by the total number of the candidate's ballot papers.
4. Where a transfer of ballot papers raises the number of votes obtained by a candidate up to a quota, the candidate is declared elected. No more ballot papers are transferred to that elected candidate at any succeeding count.
5. When all surpluses have been distributed and vacancies remain to be filled, and the number of continuing candidates exceeds the number of unfilled vacancies, exclusion of candidates with the lowest numbers of votes commences. Bulk exclusions are proceeded with if possible; otherwise exclusions of single candidates take place. Excluded candidates' votes are transferred at full value in accordance with their next preferences to the remaining candidates. Under certain circumstances the transfer of a surplus may be deferred until after an exclusion or bulk exclusion.
6. Step 5 is continued, as necessary, until either all vacancies are filled or the number of candidates in the count is equal to the number of vacancies remaining to be filled. In the latter case, the remaining candidates are declared elected.
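To make the arithmetic in steps 1 and 3 concrete, the following minimal Python sketch computes a quota and a surplus transfer value. The function names and figures are illustrative assumptions only; they are not drawn from the Commonwealth Electoral Act or from any actual count, and the sketch does not model the detailed scrutiny rules in steps 4 to 6 (bulk exclusions, deferred surpluses and so on).

```python
def quota(formal_first_preference_votes: int, vacancies: int) -> int:
    # Divide the total formal first preference votes by one more than the
    # number of senators to be elected, then increase the result by one.
    # Integer division is used on the assumption that any fractional
    # remainder from the division is disregarded.
    return formal_first_preference_votes // (vacancies + 1) + 1

def transfer_value(surplus: int, ballot_papers: int) -> float:
    # Surplus votes are transferred at less than full value: the elected
    # candidate's surplus divided by the total number of that candidate's
    # ballot papers.
    return surplus / ballot_papers

# Hypothetical state count: 840,000 formal votes, 6 senators to be elected.
q = quota(840_000, 6)                      # 840,000 // 7 + 1 = 120,001
elected_candidate_papers = 150_000         # papers held by an elected candidate
surplus = elected_candidate_papers - q     # 29,999
tv = transfer_value(surplus, elected_candidate_papers)

# A continuing candidate shown as the next available preference on, say,
# 40,000 of those papers gains int(40,000 * tv) votes, ignoring the
# fractional remainder as step 3 describes.
gained = int(40_000 * tv)
print(q, surplus, round(tv, 6), gained)    # 120001 29999 0.199993 7999
```

On these assumed figures the quota is 120,001 votes, and each of the elected candidate's papers carries roughly a fifth of a vote when the surplus is distributed.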
In counting votes in a Senate election, if only two candidates remain for the last vacancy to be filled and they have an equal number of votes, the Australian Electoral Officer for the state or territory has a casting vote, but does not otherwise vote in the election. Recounts normally occur only when the result of an election is very close. At any time before the declaration of the result of an election, the officer conducting the election may, at the written request of a candidate or on the officer's own decision, recount some or all of the ballot papers. The Electoral Commissioner or an Australian Electoral Officer may direct a recount. A recount last occurred in 2013 after the result of the count in Western Australia was so close as to raise questions about the safety of the original result. The election was ultimately declared void.
Return of the writ
Writs must be returned within 100 days of issue. Following the declaration of the result in a Senate election, the Australian Electoral Officer for a state or territory certifies the names of the candidates elected for the state or territory, and returns the writ and the certificate to the Governor of the state or, in the case of the ACT and the Northern Territory, to the Governor-General. The State Governors forward their respective writs to the Governor-General whose Official Secretary in turn passes them to the Clerk of the Senate for tabling at the swearing in of new Senators.
Meeting of new parliament
Under the Constitution, section 5, after any general election (for the House of Representatives and usually a periodical election for the Senate) the Parliament shall be summoned to meet not later than 30 days after the day appointed for the return of the writs.
Disputed returns and qualifications
Under the Commonwealth Electoral Act the validity of any election or return may be disputed only by petition addressed to the Court of Disputed Returns. The High Court of Australia is the Court of Disputed Returns and it has jurisdiction either to try the petition or to refer it for trial to the Federal Court. A petition must:
- set out the facts relied on to invalidate the election;
- sufficiently identify the specific matters on which the petition relies;
- detail the relief to which the petitioner claims to be entitled;
- be signed;
- be attested by two witnesses whose occupations and addresses are stated;
- be filed in the Registry of the High Court within 40 days after the return of the writ or the notification of the appointment of a person to fill a vacancy;
- be accompanied by the sum of $500 as security for costs.
The Court has wide powers which include power to declare that any person who was returned was not duly elected; to declare any candidate duly elected who was not returned as elected; and to declare any election absolutely void. The requirement for a petition to be lodged within the 40-day limit cannot be set aside. The Court cannot void a whole general election. The Court must sit as an open Court and be guided by the substantial merits and good conscience of each case without regard to legal forms or technicalities, or whether the evidence before it is in accordance with the law of evidence or not. Questions of fact may be remitted to the Federal Court. All decisions of the Court are final and conclusive and without appeal and cannot be questioned in any way.
If the Court of Disputed Returns finds that a candidate has committed or has attempted to commit bribery or undue influence, and that candidate has been elected, then the election will be declared void. Any question arising in the Senate respecting the qualification of a senator or respecting a vacancy may be referred by resolution to the Court of Disputed Returns. For cases on the qualifications of senators, see Chapter 6, Senators, under that heading.
Division of the Senate following simultaneous general elections
After a general election for the Senate, following simultaneous dissolutions of both Houses, it is necessary for the Senate to divide senators into two classes for the purpose of restoring the rotation of members. On all eight occasions that it has been necessary to divide the Senate for the purposes of rotation, the practice has been to allocate senators according to the order of their election. In 2016, the effective part of the resolution provided as follows:
- Senators listed at positions 7 to 12 on the certificate of election of senators for each state shall be allocated to the first class and receive 3-year terms.
- Senators listed at positions 1 to 6 on the certificate of election of senators for each state shall be allocated to the second class and receive 6-year terms.
[update: The division of the Senate is a matter for the Senate itself. However, there was speculation during the 45th Parliament, with the disqualification of numerous senators under section 44 of the Constitution, as to whether the High Court might have a role. If a senator is found to have been disqualified at the time of election, their election is void and the vacancy is filled by a recount of the ballots under the supervision of the Court (a “special count”) to determine the person validly elected: see Chapter 6—Senators, under Qualifications of senators. The usual form of the court order following a special count was that a person is “duly elected for the place for which” the ineligible candidate was returned. One question agitated in hearings in December 2017 was whether such an order also had the effect of granting the incoming senator the term (that is, the 3- or 6-year term) initially allocated by the Senate to the ineligible candidate. Nettle J described as “an attractive proposition” the view put by the Commonwealth Solicitor-General that there is “…a very real question as to whether anyone other than the Senate has a role in determining the three- or six-year issue. It may be that the Court has a role in declaring who the people are, and the Senate then chooses who gets three and who gets six years”: Re Parry; Re Lambie HCATrans 258 (13 December 2017). Moreover, the High Court has held that a person invalidly returned in an election does not have a “term of service” at law for the purposes of section 13 of the Constitution: Vardon v O’Loghlin (1907) 5 CLR 201 at 211, 214. That being the case, it is hard to see how an order of the Senate under section 13 could have any effect in relation to that person, and similarly hard to argue that an incoming senator inherits that (non-existent) term. In February 2018, the Senate moved to remedy any uncertainty by modifying the effect of the August 2016 resolution, so that it would operate by reference to the revised order of election produced in any relevant special count: 13/2/2018, J.2690-1.
In doing so, the Senate preserved the principle adopted at the beginning of the Parliament, that the longer terms be allocated to the senators first elected in the count, and asserted the conventional view that the division of the Senate is a matter for the Senate itself.]
[update: Alternative method for dividing the Senate]
In its report of September 1983 the Joint Select Committee on Electoral Reform proposed that “following a double dissolution election, the Australian Electoral Commission conduct a second count of Senate votes, using the half Senate quota, in order to establish the order of election to the Senate, and therefore the terms of election”. The committee also recommended that there should be a constitutional referendum on “the practice of ranking senators in accordance with their relative success at the election” so that “the issue is placed beyond doubt and removed from the political arena”. The Commonwealth Electoral Act was subsequently amended to authorise a recount of the Senate vote in each state after a dissolution of the Senate to determine who would have been elected in the event of a periodical election for half the Senate. Following the 1987 dissolution of the Senate, the then Leader of the Government in the Senate, Senator John Button, successfully proposed that the method used following previous elections for the full Senate should again be used in determining senators in the first and second classes respectively. The Opposition on that occasion unsuccessfully moved an amendment to utilise section 282 of the Commonwealth Electoral Act for the purpose of determining the two classes of senators, in accordance with the September 1983 recommendation of the Joint Select Committee on Electoral Reform. According to the leading Opposition speaker, Senator Short, the effect of using the historical rather than the proposed new method was that two National Party senators would be senators in the first (three-year) class rather than the second (six-year) class, whilst two Australian Democrat senators would be senators in the second rather than the first class. On 29 June 1998 the Senate agreed to a motion, moved by the Leader of the Opposition in the Senate, Senator Faulkner, indicating support for the use of section 282 of the Commonwealth Electoral Act in a future division of the Senate. The stated reason for the motion was that the new method should not be adopted without the Senate indicating its intention in advance of a simultaneous dissolution, but it was pointed out that the motion could not bind the Senate for the future. An identical motion was moved by Senator Ronaldson (Shadow Special Minister of State) on 22 June 2010 and agreed to without debate. No such resolution preceded the 2016 dissolution and the order of election method was again followed. The recount method would have resulted in two minor party senators being allocated six-year terms at the expense of two major party senators.
Casual vacancies
Casual vacancies in the Senate are created by death, resignation or absence without permission. In the case of resignation, a senator writes to the President, or to the Governor-General if there is no President or the President is absent from the Commonwealth. A resignation may take the following form—
Dear Mr/Madam President
I resign my place as a senator for the State of ..........., pursuant to section 19 of the Constitution of the Commonwealth of Australia.
Where the letter of resignation is sent to the Governor-General, the form may be as follows:
Section 19 of the Constitution provides — "A senator may, by writing addressed to the President, or to the Governor-General if there is no President or if the President is absent from the Commonwealth, resign his place, which thereupon shall become vacant." As the President of the Senate is absent from the Commonwealth, I address my resignation to you. I resign my place as a senator for the State of ..........., pursuant to section 19 of the Constitution of the Commonwealth of Australia.
If the President resigns as a senator, the resignation is addressed to the Governor-General. The following principles have been observed in relation to the manner in which senators may resign their place:
- a resignation by telegram or other form of unsigned message is not effective;
- a resignation must be in writing signed by the senator who wishes to resign and must be received by the President; whether the writing is sent by post or other means is immaterial;
- it is only upon the receipt of the resignation by the President that the senator's place becomes vacant under section 19 of the Constitution;
- a resignation cannot take effect before its receipt by the President;
- a resignation from a current term may not take effect at a future time;
- the safest procedure is for the resignation, in writing, to be delivered to the President in person in order that the President can be satisfied that the writing is what it purports to be, namely, the resignation of the senator in question; resignations transmitted by facsimile or other electronic means and confirmed by telephone are accepted.
On 5 July 1993 Senator Tate, having just commenced a new term as a senator for Tasmania, resigned before taking his seat in the Senate. The resignation of Senator Tate before his swearing in did not affect the procedure for his replacement. The interesting questions that would have arisen had he resigned before the end of his previous term were deferred until 2013, when Senator Bob Carr resigned, having just been elected to a new term starting on 1 July 2014. He submitted what was in effect a “double resignation”, resigning both from his place in respect of his term ending on 30 June and also in respect of his new term commencing on 1 July. Notification of both vacancies was provided to the Governor of NSW by the President of the Senate pursuant to section 21 of the Constitution. The resignation of a senator-elect in Senator Bob Carr's case was taken as giving rise to a double vacancy in respect of his current term and the term to which he had been elected. The death of a senator-elect has also been regarded as creating a casual vacancy to be filled in accordance with section 15 of the Constitution. Presumably a senator-elect could become disqualified and similarly create a casual vacancy. The disqualification of a senator at the time of election, however, does not create a vacancy but a failure of election, which is remedied by a recount of ballot papers. The Constitution, section 20, states that the “place of a senator becomes vacant if for two consecutive months of any session of the Parliament” a senator fails to attend the Senate without its permission. In 1903 the seat of Senator John Ferguson was declared vacant owing to absence without leave for two months. For the purposes of section 20, a record is kept in the Journals of the Senate of senators' attendance.
Method of filling casual vacancies
Casual vacancies are filled in accordance with section 15 of the Constitution. The purpose of the current section 15, inserted by an amendment of the Constitution in 1977, is to preserve as much as possible the proportional representation determined by the electors in elections for the Senate. The main features of the section are as follows:
- When a casual vacancy arises, the Houses of the Parliament, or the House where there is only one House, of the state represented by the vacating senator chooses a person to hold the place until the expiration of the term.
- If the Parliament is not in session, the Governor of the state, with the advice of the Executive Council thereof, may appoint a person to hold the place until the expiration of 14 days from the beginning of the next session of the parliament of the state or the expiration of the term, whichever first happens.
A person chosen is to be, where relevant and possible, a member of the party to which the senator whose death or resignation gave rise to the vacancy belonged. The pertinent paragraph of section 15 states:
Where a vacancy has at any time occurred in the place of a senator chosen by the people of a State and, at the time when he was so chosen, he was publicly recognised by a particular political party as being an endorsed candidate of that party and publicly represented himself to be such a candidate, a person chosen or appointed under this section in consequence of that vacancy, or in consequence of that vacancy and a subsequent vacancy or vacancies, shall, unless there is no member of that party available to be chosen or appointed, be a member of that party.
Section 15 also provides that where:
- in accordance with the last preceding paragraph, a member of a particular political party is chosen or appointed to hold the place of a senator whose place had become vacant; and
- before taking his seat he ceases to be a member of that party (otherwise than by reason of the party having ceased to exist);
he shall be deemed not to have been so chosen or appointed and the vacancy shall be again notified in accordance with section twenty-one of this Constitution.
[update: This last provision gives the recognised party of a departing senator effective control over the choice of a replacement, including by deeming the choice of the state parliament void if “before taking his seat he ceases to be a member of that party”. Following the resignation of Senator Xenophon in 2017, reports that a party member other than the chosen nominee might press a claim to the position came to naught, so the operation of that part of section 15 remains untested: see also Delay in filling casual vacancies, below. In 2020 the vacancy caused by the resignation of then independent Senator Bernardi was filled by a nominee of the Liberal Party, the party he had represented at the 2016 election: 4/2/2020, J.1158; 10/2/2020, J.1271.]
Casual vacancies arising in the Senate representation of the Australian Capital Territory or the Northern Territory are filled by the respective territory legislative assemblies. If the legislature is out of session, a temporary appointment can be made in the case of the Australian Capital Territory by the Chief Minister, and in the case of the Northern Territory by the Administrator. Provisions relating to political parties, similar to those of section 15 of the Constitution, also apply. The term of a senator filling a casual vacancy commences on the date of his or her choice by the appointing body.
When a senator is appointed to a vacant place by the governor of a state and the appointment is “confirmed” by the state parliament within the 14 days allowed by section 15, the senator is not regarded as commencing a new term on the appointment by the parliament and is not sworn again. The 14-day period is regarded as commencing on the day after the first day of the session, in accordance with the normal rule of statutory interpretation. If there is a “gap” between the expiration of the 14-day period and the appointment of the senator by the parliament, the senator is sworn again. The “double resignation” of Senator Bob Carr in 2013 created interesting questions for the Parliament of New South Wales in choosing a replacement. Senator Carr's party nominated one person to fill both the remainder of his current term and the new term to which he had been elected, but the Parliament, after considering advice from the Crown Solicitor, determined that it could fill the current vacancy only and could not act prospectively to fill a future vacancy. The advice was tabled in the New South Wales Legislative Council on 12 November 2013. With the NSW Houses not scheduled to sit between 17 June and 12 August 2014, further advice was sought from the NSW Crown Solicitor about whether an appointment could be made by the Governor and whether a resolution of the Senate encouraging the NSW Parliament to fill the vacancy could somehow act as a “trigger” for the Houses to meet and fill the vacancy. Not surprisingly (NSW having always taken a strict view of when a governor's appointment could be made) the advice on both questions was negative. In any case, the Senate did not contemplate such a resolution. However, the NSW Houses resolved to meet on 2 July 2014 and again chose Senator O'Neill to fill the second vacancy created by the resignation of Senator Bob Carr. For the avoidance of doubt, the President, on 1 July 2014, reminded the NSW Governor of his earlier notification of the vacancy existing from that date.
Delay in filling casual vacancies
The 1977 alteration of the Constitution has not solved all problems in the filling of casual vacancies. There is nothing to compel a state parliament to fill a vacancy. This was illustrated in 1987 following the resignation of Tasmanian Senator Grimes, who had been elected to the Senate as an endorsed candidate of the Australian Labor Party. In accordance with the Constitution, section 15, the Parliament of Tasmania met in joint sitting on 8 May 1987. The Leader of the Australian Labor Party in the House of Assembly and Leader of the Opposition, Mr Batt, nominated John Robert Devereux to fill the vacancy. In the ensuing debate it became apparent that government members as well as a number of independent members of the Legislative Council intended to vote against the nomination. The basis for doing so, in terms of the Constitution, was expressed as follows by Mr Groom, Minister for Forests:
It has been suggested by some people that there is a convention which requires us to accept Mr Devereux's nomination without question, but section 15 of the Constitution clearly states that it is for the Parliament to choose the person to fill the vacancy and not the party. We can choose only a person who is a member of the same party as the retired senator — that is well recognised — but we are not bound to accept the nomination of the party concerned.
The matter shortly came to a vote. Votes were tied at 26 each.
The question was thus resolved in the negative in accordance with the rules adopted for the joint sitting. Subsequently a member of the Legislative Council who had voted “No” in the division nominated William G McKinnon, a financial member of the Australian Labor Party and former member of the Tasmanian Parliament, to fill the vacancy and produced a letter from the nominee agreeing to the nomination. After a brief suspension the chair of the Joint Sitting declared that the “letter is not in order”. He continued: It does not comply with rule 16(6) in that the letter does not declare that the person is eligible to be chosen for the Senate and that the nomination is in accordance with section 15 of the Constitution of the Commonwealth of Australia. Therefore I am in the position of being unable to accept the nomination. The joint sitting adjourned soon afterwards without any further voting. The filling of the casual vacancy was, in the event, overtaken by simultaneous dissolutions of the Senate and the House. In the subsequent election John Devereux was among the endorsed ALP candidates in Tasmania who were elected. In the Senate itself, the Opposition granted a pair to the government following Senator Grimes' resignation so that in party terms relative strengths were maintained. The Opposition's position on the matter was stated in the following terms: “the person appointed to fill casual vacancies of this kind ought to be the person nominated by the retiring senator's political party”. There was no certainty as to the outcome of the dispute. According to Senator Gareth Evans, representing the Attorney-General in the Senate, “we have all the makings, however, of a deadlock, and that is what will prevail in the absence of legal challenge and in the absence of a change of heart in Tasmania at the moment”. Failure to fill a casual vacancy promptly means that a state's representation in the Senate is deficient and the principle of equality of representation infringed. The Senate itself takes a keen interest in prompt filling of casual vacancies and has on several occasions expressed by resolution concern about delay. On 19 March 1987, in the case of the Tasmanian vacancy, the Senate expressed the view that the nominee of the relevant party should be appointed. Because of the delay in filling a casual vacancy created by the resignation of Senator Vallentine on 31 January 1992, the Senate passed a resolution on 5 March 1992 expressing its disapproval “of the action of the Western Australian Government for failing to appoint Christabel Chamarette [the candidate endorsed by the relevant political group] as a Senator for Western Australia, condemns the Western Australian Government for denying electors of that state their rightful representation in the Senate, and condemns the Western Australian Government for the disrespect it has shown to the Senate”. 
On 3 June 1992 the Senate passed the following resolution:
That the Senate —
- believes that casual vacancies in the Senate should be filled as expeditiously as possible, so that no State is without its full representation in the Senate for any time longer than is necessary;
- recognises that under section 15 of the Constitution an appointment to a vacancy in the Senate may be delayed because the Houses of the Parliament of the relevant State are adjourned but have not been prorogued, which, on a strict construction of the section, prevents the Governor of the State making the appointment; and
- recommends that all State Parliaments adopt procedures whereby their Houses, if they are adjourned when a casual vacancy in the Senate is notified, are recalled to fill the vacancy, and whereby the vacancy is filled:
- within 14 days after the notification of the vacancy, or
- where under section 15 of the Constitution the vacancy must be filled by a member of a political party, within 14 days after the nomination by that party is received,
whichever is the later.
This resolution was passed because the government of Western Australia had adopted the “strict construction” referred to in the resolution, that the state governor could not fill the vacancy because the state Parliament was not prorogued but the Houses had adjourned. Other states from time to time have adopted the view that their governors fill vacancies when their Houses are adjourned. This resolution was reaffirmed in 1997. The Senate passed a resolution on 4 March 1997 calling on two states to fill casual vacancies expeditiously. The resolution was prompted largely by statements by the Premier of Queensland that a casual vacancy in that state caused by a mooted resignation of a senator might not be filled in accordance with section 15 of the Constitution. A resolution of 15 May 1997 referred to the tardiness of the Victorian government in filling vacancies. In 2015, a resolution agreed to on 26 March reaffirmed earlier resolutions and called on NSW to take all necessary steps to fill the vacancy caused by the resignation of Senator Faulkner. Despite the 1991 precedent, a governor's appointment was not made after the state Parliament was prorogued, and the vacancy remained unfilled until after the NSW Houses met following the state election. The obligation on states to fill casual vacancies as expeditiously as possible is matched by an obligation on the Senate to swear in and seat the appointees at the earliest possible time. The Senate has always adhered to this principle. [update: On the Senate's final sitting day in 2021 the Victorian Parliament chose a senator to fill a casual vacancy, but too late to enable the senator to be sworn in. He took his seat on the first sitting day in 2022. Two casual vacancies arising in 2022 were filled following the prorogation of the parliament for a general election, and the Senate did not meet again before the terms of those senators expired on 30 June.] A list of casual vacancies filled under section 15 of the Constitution is contained in Appendix 7.
Until 1975 all members of the Senate were elected to represent the people of the states. In the elections in December 1975 following simultaneous dissolution of the two Houses on 11 November 1975 the Australian Capital Territory and the Northern Territory each elected two senators for the first time. Legislation for election of territory senators was enacted in the Senate (Representation of Territories) Act 1973.
This legislation was based on the Constitution, section 122, which provides that, in relation to territories, the Parliament "may allow the representation of such territory in either House of the Parliament to the extent and on the terms which it thinks fit". The provisions for the representation of the territories in the Senate are now contained in the Commonwealth Electoral Act, ss 40-44. The legislation was not enacted without controversy. Indeed, it was one of the bills cited as a ground for the simultaneous dissolutions of 1974 and was eventually passed into law at the joint sitting of that year. It was subsequently twice challenged in the High Court, surviving the first challenge by a majority of 4 to 3, and the second by a majority of 5 to 2. The principal issue in dispute was the contention that territory senators would undermine the constitutional basis of the Senate as a house representing the people by states and that territory representation would disrupt the numerical balance between large and small states. Other questions related to the voting rights of territory senators; the effect of territory senators on the nexus between the sizes of the two Houses and on quorums in the Senate; and applicable criteria in determining whether a territory should be represented in the Senate. A full account of the matter is contained in ASP, 6th ed. That edition concluded that "the broadest possible representation of all the people of Australia best serves that [the Senate's] checks and balances role". Given that each territory's representation is currently limited to two senators, the practice of electing both at the one election by proportional representation preserves the Senate's role as a House which enhances the representative capacity of the Parliament and provides a remedy for the defects in the electoral method used for the House of Representatives. As indicated in Chapter 1, since the 1980 general election members of the House of Representatives for ACT electorates have usually been members of the Australian Labor Party. Throughout much of this period, one senator has been a member of the ALP and the other a member of the Liberal Party. One-party representation in the House has also been common for the Northern Territory, so that its two senators are also essential to providing that territory with balanced representation. The writ for election of senators for a territory is issued by the Governor-General and is addressed to the Australian Electoral Officer for that Territory; following declaration of the result of a Senate election in a territory, the writ is returned to the Governor-General.
Reproductive rights are legal rights and freedoms relating to reproduction and reproductive health that vary amongst countries around the world. The World Health Organization defines reproductive rights as follows:
Reproductive rights rest on the recognition of the basic right of all couples and individuals to decide freely and responsibly the number, spacing and timing of their children and to have the information and means to do so, and the right to attain the highest standard of sexual and reproductive health. They also include the right of all to make decisions concerning reproduction free of discrimination, coercion and violence.
Women's reproductive rights may include some or all of the following: the right to legal and safe abortion; the right to birth control; freedom from coerced sterilization and contraception; the right to access good-quality reproductive healthcare; and the right to education and access in order to make free and informed reproductive choices. Reproductive rights may also include the right to receive education about sexually transmitted infections and other aspects of sexuality, the right to menstrual health and protection from practices such as female genital mutilation (FGM). Reproductive rights began to develop as a subset of human rights at the United Nations' 1968 International Conference on Human Rights. The resulting non-binding Proclamation of Tehran was the first international document to recognize one of these rights when it stated that: "Parents have a basic human right to determine freely and responsibly the number and the spacing of their children." Women's sexual, gynecological, and mental health issues were not a priority of the United Nations until its Decade of Women (1975-1985) brought them to the forefront. States, though, have been slow in incorporating these rights in internationally legally binding instruments. Thus, while some of these rights have already been recognized in hard law, that is, in legally binding international human rights instruments, others have been mentioned only in non-binding recommendations and, therefore, have at best the status of soft law in international law, while a further group is yet to be accepted by the international community and therefore remains at the level of advocacy. Reproductive rights are a subset of sexual and reproductive health and rights.
Proclamation of Tehran
In 1945, the United Nations Charter included the obligation "to promote... universal respect for, and observance of, human rights and fundamental freedoms for all without discrimination as to race, sex, language, or religion". However, the Charter did not define these rights. Three years later, the UN adopted the Universal Declaration of Human Rights (UDHR), the first international legal document to delineate human rights; the UDHR does not mention reproductive rights. Reproductive rights began to appear as a subset of human rights in the 1968 Proclamation of Tehran, which states: "Parents have a basic human right to determine freely and responsibly the number and the spacing of their children". This right was affirmed by the UN General Assembly in the 1969 Declaration on Social Progress and Development which states "The family as a basic unit of society and the natural environment for the growth and well-being of all its members, particularly children and youth, should be assisted and protected so that it may fully assume its responsibilities within the community.
Parents have the exclusive right to determine freely and responsibly the number and spacing of their children." The 1975 UN International Women's Year Conference echoed the Proclamation of Tehran.
Cairo Programme of Action
The twenty-year "Cairo Programme of Action" was adopted in 1994 at the International Conference on Population and Development (ICPD) in Cairo. The non-binding Programme of Action asserted that governments have a responsibility to meet individuals' reproductive needs, rather than demographic targets. It recommended that family planning services be provided in the context of other reproductive health services, including services for healthy and safe childbirth, care for sexually transmitted infections, and post-abortion care. The ICPD also addressed issues such as violence against women, sex trafficking, and adolescent health. The Cairo Programme is the first international policy document to define reproductive health, stating:
Reproductive health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity, in all matters relating to the reproductive system and its functions and processes. Reproductive health therefore implies that people are able to have a satisfying and safe sex life and that they have the capability to reproduce and the freedom to decide if, when and how often to do so. Implicit in this last condition are the right of men and women to be informed [about] and to have access to safe, effective, affordable and acceptable methods of family planning of their choice, as well as other methods for regulation of fertility which are not against the law, and the right of access to appropriate health-care services that will enable women to go safely through pregnancy and childbirth and provide couples with the best chance of having a healthy infant [para. 72].
Unlike previous population conferences, a wide range of interests from grassroots to government level were represented in Cairo. A total of 179 nations attended the ICPD, and overall eleven thousand representatives from governments, NGOs, international agencies and citizen activists participated. The ICPD did not address the far-reaching implications of the HIV/AIDS epidemic. In 1999, recommendations at the ICPD+5 were expanded to include commitment to AIDS education, research, and prevention of mother-to-child transmission, as well as to the development of vaccines and microbicides. The Cairo Programme of Action was adopted by 184 UN member states. Nevertheless, many Latin American and Islamic states made formal reservations to the programme, in particular, to its concept of reproductive rights and sexual freedom, to its treatment of abortion, and to its potential incompatibility with Islamic law. Implementation of the Cairo Programme of Action varies considerably from country to country. In many countries, post-ICPD tensions emerged as the human rights-based approach was implemented. Since the ICPD, many countries have broadened their reproductive health programs and attempted to integrate maternal and child health services with family planning. More attention is paid to adolescent health and the consequences of unsafe abortion. Lara Knudsen observes that the ICPD succeeded in getting feminist language into governments' and population agencies' literature, but in many countries the underlying concepts are not widely put into practice. In two preparatory meetings for the ICPD+10 in Asia and Latin America, the United States, under the George W.
Bush Administration, was the only nation opposing the ICPD's Programme of Action. The 1995 Fourth World Conference on Women in Beijing, in its non-binding Declaration and Platform for Action, supported the Cairo Programme's definition of reproductive health, but established a broader context of reproductive rights:
The human rights of women include their right to have control over and decide freely and responsibly on matters related to their sexuality, including sexual and reproductive health, free of coercion, discrimination and violence. Equal relationships between women and men in matters of sexual relations and reproduction, including full respect for the integrity of the person, require mutual respect, consent and shared responsibility for sexual behavior and its consequences [para. 96].
The Beijing Platform demarcated twelve interrelated critical areas of the human rights of women that require advocacy. The Platform framed women's reproductive rights as "indivisible, universal and inalienable human rights." The platform for the 1995 Fourth World Conference on Women included a section that denounced gender-based violence and included forced sterilization as a human rights violation. However, the international community at large has not confirmed that women have a right to reproductive healthcare, and in the years since the 1995 conference countries have proposed language to weaken reproductive and sexual rights. This conference was also the first to address indigenous rights and women's rights together, combining them into one category needing specific representation. Reproductive rights are highly politicized, making it difficult to enact legislation. The Yogyakarta Principles on the Application of International Human Rights Law in relation to Sexual Orientation and Gender Identity, proposed by a group of experts in November 2006 but not yet incorporated by States in international law, declare in their Preamble that "the international community has recognized the rights of persons to decide freely and responsibly on matters related to their sexuality, including sexual and reproductive health, free from coercion, discrimination, and violence." In relation to reproductive health, Principle 9 on "The Right to Treatment with Humanity while in Detention" requires that "States shall... [p]rovide adequate access to medical care and counseling appropriate to the needs of those in custody, recognizing any particular needs of persons on the basis of their sexual orientation and gender identity, including with regard to reproductive health, access to HIV/AIDS information and therapy and access to hormonal or other therapy as well as to gender-reassignment treatments where desired." Nonetheless, African, Caribbean and Islamic countries, as well as the Russian Federation, have objected to the use of these principles as human rights standards. State abuses against reproductive rights have happened under both right-wing and left-wing governments.
Such abuses include attempts to increase the birth rate by force. One of the most notorious natalist policies of the 20th century was that of communist Romania between 1967 and 1990 under communist leader Nicolae Ceaușescu, whose very aggressive natalist policy included outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people. They also include attempts to decrease the fertility rate, such as China's one-child policy (1978-2015). State-mandated forced marriage was also practiced by authoritarian governments as a way to meet population targets: the Khmer Rouge regime in Cambodia systematically forced people into marriages, in order to increase the population and continue the revolution. Some governments have implemented eugenic policies of forced sterilizations of 'undesirable' population groups. Such policies were carried out against ethnic minorities in Europe and North America in the 20th century, and more recently in Latin America against the Indigenous population in the 1990s; in Peru, President Alberto Fujimori (in office from 1990 to 2000) has been accused of genocide and crimes against humanity as a result of a sterilization program put in place by his administration targeting indigenous people (mainly the Quechuas and the Aymaras).
Prohibition of forced sterilization and forced abortion
Article 39 – Forced abortion and forced sterilisation
Parties shall take the necessary legislative or other measures to ensure that the following intentional conducts are criminalised:
- a. performing an abortion on a woman without her prior and informed consent;
- b. performing surgery which has the purpose or effect of terminating a woman's capacity to naturally reproduce without her prior and informed consent or understanding of the procedure.
Human rights have been used as a framework to analyze and gauge abuses, especially for coercive or oppressive governmental policies. The framing of reproductive (human) rights and population control programs is split along race and class lines, with white, western women predominantly focused on abortion access (especially during the second-wave feminism of the 1970s and 1980s), silencing women of color in the Global South or marginalized women in the Global North (black and indigenous women, prisoners, welfare recipients) who were subjected to forced sterilization or contraceptive usage campaigns. The hemisphere divide has also been framed as Global North feminists advocating for women's bodily autonomy and political rights, while Global South women advocate for basic needs through poverty reduction and equality in the economy. This divide between first-world and third-world women became established as one between feminists focused on women's issues (those from the first world, largely promoting sexual liberation) and women focused on political issues (those from the third world, often opposing dictatorships and their policies). In Latin America, this is complicated as feminists tend to align with first-world ideals of feminism (sexual/reproductive rights, violence against women, domestic violence) and reject religious institutions such as the Catholic Church and Evangelicals, which attempt to control women's reproduction. On the other side, human rights advocates are often aligned with religious institutions that are specifically combating political violence, instead of focusing on issues of individual bodily autonomy.
The view that women should have complete autonomous control over their bodies has been espoused by the United Nations and individual countries, but many of those same countries fail to implement these human rights for their female citizens. This shortfall may be partly due to the delay in including women-specific issues in the human rights framework. However, multiple human rights documents and declarations specifically proclaim the reproductive rights of women, including the ability to make their own reproductive healthcare decisions regarding family planning: the Universal Declaration of Human Rights (1948), the Convention on the Elimination of All Forms of Discrimination Against Women (1979), the UN's Millennium Development Goals, and the new Sustainable Development Goals, which are focused on integrating universal reproductive healthcare access into national family planning programs. Unfortunately, the 2007 Declaration on the Rights of Indigenous Peoples did not address indigenous women's reproductive or maternal healthcare rights or access. Since most existing legally binding international human rights instruments do not explicitly mention sexual and reproductive rights, a broad coalition of NGOs, civil servants, and experts working in international organizations has been promoting a reinterpretation of those instruments to link the realization of the already internationally recognized human rights with the realization of reproductive rights. An example of this linkage is provided by the 1994 Cairo Programme of Action:
Reproductive rights embrace certain human rights that are already recognized in national laws, international human rights documents and other relevant United Nations consensus documents. These rights rest on the recognition of the basic right of all couples and individuals to decide freely and responsibly the number, spacing and timing of their children and to have the information and means to do so, and the right to attain the highest standard of sexual and reproductive health. It also includes the right of all to make decisions concerning reproduction free of discrimination, coercion and violence as expressed in human rights documents. In the exercise of this right, they should take into account the needs of their living and future children and their responsibilities towards the community.
Similarly, Amnesty International has argued that the realisation of reproductive rights is linked with the realisation of a series of recognised human rights, including the right to health, the right to freedom from discrimination, the right to privacy, and the right not to be subjected to torture or ill-treatment. The World Health Organization states that: "Sexual and reproductive health and rights encompass efforts to eliminate preventable maternal and neonatal mortality and morbidity, to ensure quality sexual and reproductive health services, including contraceptive services, and to address sexually transmitted infections (STI) and cervical cancer, violence against women and girls, and sexual and reproductive health needs of adolescents. Universal access to sexual and reproductive health is essential not only to achieve sustainable development but also to ensure that this new framework speaks to the needs and aspirations of people around the world and leads to realisation of their health and human rights." However, not all states have accepted the inclusion of reproductive rights in the body of internationally recognized human rights.
At the Cairo Conference, several states made formal reservations either to the concept of reproductive rights or to its specific content. Ecuador, for instance, stated that:
With regard to the Programme of Action of the Cairo International Conference on Population and Development and in accordance with the provisions of the Constitution and laws of Ecuador and the norms of international law, the delegation of Ecuador reaffirms, inter alia, the following principles embodied in its Constitution: the inviolability of life, the protection of children from the moment of conception, freedom of conscience and religion, the protection of the family as the fundamental unit of society, responsible paternity, the right of parents to bring up their children and the formulation of population and development plans by the Government in accordance with the principles of respect for sovereignty. Accordingly, the delegation of Ecuador enters a reservation with respect to all terms such as "regulation of fertility", "interruption of pregnancy", "reproductive health", "reproductive rights" and "unwanted children", which in one way or another, within the context of the Programme of Action, could involve abortion.
Similar reservations were made by Argentina, the Dominican Republic, El Salvador, Honduras, Malta, Nicaragua, Paraguay, Peru and the Holy See. Islamic countries such as Brunei, Djibouti, Iran, Jordan, Kuwait, Libya, Syria, the United Arab Emirates, and Yemen made broad reservations against any element of the programme that could be interpreted as contrary to the Sharia. Guatemala even questioned whether the conference could legally proclaim new human rights.
The United Nations Population Fund (UNFPA) and the World Health Organization (WHO) advocate for reproductive rights with a primary emphasis on women's rights. In this respect the UN and WHO focus on a range of issues, from access to family planning services, sex education, menopause, and the reduction of obstetric fistula, to the relationship between reproductive health and economic status. The reproductive rights of women are advanced in the context of the right to freedom from discrimination and the social and economic status of women. The group Development Alternatives with Women for a New Era (DAWN) explained the link in the following statement:
Control over reproduction is a basic need and a basic right for all women. Linked as it is to women's health and social status, as well as the powerful social structures of religion, state control and administrative inertia, and private profit, it is from the perspective of poor women that this right can best be understood and affirmed. Women know that childbearing is a social, not a purely personal, phenomenon; nor do we deny that world population trends are likely to exert considerable pressure on resources and institutions by the end of this century. But our bodies have become a pawn in the struggles among states, religions, male heads of households, and private corporations. Programs that do not take the interests of women into account are unlikely to succeed...
Women's reproductive rights have long retained key issue status in the debate on overpopulation. "The only ray of hope I can see – and it's not much – is that wherever women are put in control of their lives, both politically and socially; where medical facilities allow them to deal with birth control and where their husbands allow them to make those decisions, birth rate falls.
Women don't want to have 12 kids of whom nine will die." (David Attenborough)
According to OHCHR: "Women's sexual and reproductive health is related to multiple human rights, including the right to life, the right to be free from torture, the right to health, the right to privacy, the right to education, and the prohibition of discrimination". Attempts have been made to analyse the socioeconomic conditions that affect the realisation of a woman's reproductive rights. The term reproductive justice has been used to describe these broader social and economic issues. Proponents of reproductive justice argue that while the right to legalized abortion and contraception applies to everyone, these choices are only meaningful to those with resources, and that there is a growing gap between access and affordability.
Men's reproductive rights have been claimed by various organizations, both for issues of reproductive health and for other rights related to sexual reproduction. Recently, men's reproductive rights with regard to paternity have become a subject of debate in the U.S. The term "male abortion" was coined by Melanie McCulley, a South Carolina attorney, in a 1998 article. The theory begins with the premise that when a woman becomes pregnant she has the option of abortion, adoption, or parenthood; it argues, in the context of legally recognized gender equality, that in the earliest stages of pregnancy the putative (alleged) father should have the right to relinquish all future parental rights and financial responsibility, leaving the informed mother with the same three options. This concept has been supported by a former president of the feminist organization National Organization for Women, attorney Karen DeCrow. The feminist argument for male reproductive choice contends that the uneven ability to choose experienced by men and women in regard to parenthood is evidence of a state-enforced coercion favoring traditional sex roles. In 2006, the National Center for Men brought a case in the US, Dubay v. Wells (dubbed by some "Roe v. Wade for men"), which argued that in the event of an unplanned pregnancy, when an unmarried woman informs a man that she is pregnant by him, he should have an opportunity to give up all paternity rights and responsibilities. Supporters argue that this would allow the woman time to make an informed decision and give men the same reproductive rights as women. In its dismissal of the case, the U.S. Court of Appeals (Sixth Circuit) stated that "the Fourteenth Amendment does not deny to [the] State the power to treat different classes of persons in different ways." The possibility of giving men a right to a "paper abortion" remains heavily debated.
Intersex and reproductive rights
Intersex, in humans and other animals, is a variation in sex characteristics including chromosomes, gonads, or genitals that do not allow an individual to be distinctly identified as male or female. Such variation may involve genital ambiguity, and combinations of chromosomal genotype and sexual phenotype other than XY-male and XX-female. Intersex persons are often subjected to involuntary "sex normalizing" surgical and hormonal treatments in infancy and childhood, often also including sterilization. UN agencies have begun to take note. On 1 February 2013, Juan E. Méndez, the UN Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment, issued a statement condemning non-consensual surgical intervention on intersex people.
His report stated, "Children who are born with atypical sex characteristics are often subject to irreversible sex assignment, involuntary sterilization, involuntary genital normalizing surgery, performed without their informed consent, or that of their parents, "in an attempt to fix their sex", leaving them with permanent, irreversible infertility and causing severe mental suffering". In May 2014, the World Health Organization issued a joint statement, Eliminating forced, coercive and otherwise involuntary sterilization: An interagency statement, together with the OHCHR, UN Women, UNAIDS, UNDP, UNFPA and UNICEF. The report references the involuntary surgical "sex-normalising or other procedures" on "intersex persons". It questions the medical necessity of such treatments, patients' ability to consent, and the weak evidence base behind them. The report recommends a range of guiding principles to prevent compulsory sterilization in medical treatment, including ensuring patient autonomy in decision-making, ensuring non-discrimination, accountability and access to remedies.
Youth rights and access
In many jurisdictions minors require parental consent or parental notification in order to access various reproductive services, such as contraception, abortion, gynecological consultations, testing for STDs, etc. The requirement that minors have parental consent/notification for HIV/AIDS testing is especially controversial, particularly in areas where the disease is endemic, and it remains a sensitive subject. Balancing minors' rights against parental rights is considered an ethical problem in medicine and law, and there have been many court cases on this issue in the US. An important concept recognized since 1989 by the Convention on the Rights of the Child is that of the evolving capacities of a minor, namely that minors should, in accordance with their maturity and level of understanding, be involved in decisions that affect them. Youth are often denied equal access to reproductive health services because health workers view adolescent sexual activity as unacceptable, or see sex education as the responsibility of parents. Providers of reproductive health care have little accountability to youth clients, a primary factor in denying youth access to reproductive health care. In many countries, regardless of legislation, minors are denied even the most basic reproductive care if they are not accompanied by parents: in India, for instance, in 2017, a 17-year-old girl who was rejected by her family due to her pregnancy was also rejected by hospitals and gave birth in the street. In recent years the lack of reproductive rights for adolescents has been a concern of international organizations such as UNFPA. Mandatory involvement of parents in cases where the minor has sufficient maturity to understand their situation is considered by health organizations to be a violation of minors' rights and detrimental to their health. The World Health Organization has criticized parental consent/notification laws:
Discrimination in health care settings takes many forms and is often manifested when an individual or group is denied access to health care services that are otherwise available to others. It can also occur through denial of services that are only needed by certain groups, such as women.
Examples include specific individuals or groups being subjected to physical and verbal abuse or violence; involuntary treatment; breaches of confidentiality and/or denial of autonomous decision-making, such as the requirement of consent to treatment by parents, spouses or guardians; and lack of free and informed consent. [...] Laws and policies must respect the principles of autonomy in health care decision-making; guarantee free and informed consent, privacy and confidentiality; prohibit mandatory HIV testing; prohibit screening procedures that are not of benefit to the individual or the public; and ban involuntary treatment and mandatory third-party authorization and notification requirements."
According to UNICEF: "When dealing with sexual and reproductive health, the obligation to inform parents and obtain their consent becomes a significant barrier with consequences for adolescents' lives and for public health in general." One specific issue, often seen as a form of hypocrisy on the part of legislators, is setting a higher age of medical consent for reproductive and sexual health services than the age of sexual consent: in such cases the law allows youth to engage in sexual activity, but does not allow them to consent to medical procedures that may arise from being sexually active. UNICEF states that "On sexual and reproductive health matters, the minimum age of medical consent should never be higher than the age of sexual consent."
Levels of youth sexual education in Uganda are relatively low. Comprehensive sex education is not generally taught in schools; even if it were, the majority of young people do not stay in school after the age of fifteen, so information would be limited regardless. Africa experiences high rates of unintended pregnancy, along with high rates of HIV/AIDS. Young women aged 15–24 are eight times more likely to have HIV/AIDS than young men. Sub-Saharan Africa is the world region most affected by HIV/AIDS, with approximately 25 million people living with HIV in 2015, and it accounts for two-thirds of the global total of new HIV infections. Attempted abortions and unsafe abortions are a risk for youth in Africa. On average, there are 2.4 million unsafe abortions in East Africa, 1.8 million in Western Africa, over 900,000 in Middle Africa, and over 100,000 in Southern Africa each year. In Uganda, abortion is illegal except to save the mother's life. However, 78% of teenagers report knowing someone who has had an abortion, and the police do not always prosecute everyone who has an abortion. An estimated 22% of all maternal deaths in the area stem from illegal, unsafe abortions.
Sweden has the highest percentage of lifetime contraceptive use, with 96% of its inhabitants claiming to have used birth control at some point in their life. Sweden also has a high self-reported rate of postcoital pill use. A 2007 anonymous survey of Swedish 18-year-olds showed that three out of four youth were sexually active, with 5% reporting having had an abortion and 4% reporting the contraction of an STI.
Latin America has come to international attention due to its harsh anti-abortion laws. Latin America is home to some of the few countries of the world with a complete ban on abortion, without an exception for saving maternal life. In some of these countries, particularly in Central America, the enforcement of such laws is very aggressive: El Salvador and Nicaragua have drawn international attention for strong enforcement of their complete bans on abortion.
In 2017, Chile relaxed its total ban, allowing abortion to be performed when the woman's life is in danger, when the fetus is unviable, or in cases of rape. In Ecuador, education and class play a large role in determining which young women become pregnant and which do not: 50% of young women who are illiterate become pregnant, compared to 11% of girls with secondary education. The same is true for poorer individuals: 28% become pregnant, while only 11% of young women in wealthier households do. Furthermore, access to reproductive rights, including contraceptives, is limited due to age and the perception of female morality. Health care providers often discuss contraception theoretically, not as a device to be used on a regular basis. Decisions concerning sexual activity often involve secrecy and taboos, as well as a lack of access to accurate information. Even more telling, young women have much easier access to maternal healthcare than they do to contraceptive help, which helps explain high pregnancy rates in the region. Rates of adolescent pregnancy in Latin America number over a million each year.
In the United States, among sexually experienced teenagers, 78% of teenage females and 85% of teenage males used contraception the first time they had sex; 86% and 93% of these same females and males, respectively, reported using contraception the last time they had sex. The male condom is the most commonly used method during first sex, although 54% of young women in the United States rely upon the pill. Young people in the U.S. are no more sexually active than individuals in other developed countries, but they are significantly less knowledgeable about contraception and safe sex practices. As of 2006, only twenty states required sex education in schools; of these, only ten required information about contraception. On the whole, less than 10% of American students receive sex education that includes topical coverage of abortion, homosexuality, relationships, pregnancy, and STI prevention. Abstinence-only education was used throughout much of the United States in the 1990s and early 2000s. Based upon the moral principle that sex outside of marriage is unacceptable, the programs often misled students about their rights to have sex, the consequences, and prevention of pregnancy and STIs. Abortion in the United States has been legal since the 1973 United States Supreme Court decision Roe v. Wade, which decriminalised abortion nationwide and established a minimum period during which abortion is legal (with more or fewer restrictions throughout the pregnancy). That basic framework, modified in Planned Parenthood v. Casey (1992), remains nominally in place, although the effective availability of abortion varies significantly from state to state, as many counties have no abortion providers. Planned Parenthood v. Casey held that a law cannot place legal restrictions imposing an undue burden for "the purpose or effect of placing a substantial obstacle in the path of a woman seeking an abortion of a nonviable fetus." Abortion is a controversial political issue, and regular attempts to restrict it occur in most states. One such case, originating in Texas, led to the Supreme Court case of Whole Woman's Health v. Hellerstedt (2016), in which several Texas restrictions were struck down.
Lack of knowledge about rights
One of the reasons why reproductive rights are poorly realized in many places is that the vast majority of the population does not know what the law is. Not only are ordinary people uninformed, but so are medical doctors.
A study in Brazil of medical doctors found considerable ignorance and misunderstanding of the law on abortion (which is severely restricted, but not completely illegal). In Ghana, abortion, while restricted, is permitted on several grounds, but only 3% of pregnant women and 6% of those seeking an abortion were aware of the legal status of abortion. In Nepal, abortion was legalized in 2002, but a study in 2009 found that only half of women knew that abortion was legalized. Many people also do not understand the laws on sexual violence: in Hungary, where marital rape was made illegal in 1997, a 2006 study found that 62% of people did not know that marital rape was a crime. The United Nations Development Programme states that, in order to advance gender justice, "Women must know their rights and be able to access legal systems", and the 1993 UN Declaration on the Elimination of Violence Against Women states at Art. 4 (d) that [...] "States should also inform women of their rights in seeking redress through such mechanisms".
Gender equality and violence against women
Addressing issues of gender-based violence is crucial for attaining reproductive rights. The United Nations Population Fund refers to "Equality and equity for men and women, to enable individuals to make free and informed choices in all spheres of life, free from discrimination based on gender" and "Sexual and reproductive security, including freedom from sexual violence and coercion, and the right to privacy," as part of achieving reproductive rights, and states that the right to liberty and security of the person, which is fundamental to reproductive rights, obliges states to:
- Take measures to prevent, punish and eradicate all forms of gender-based violence
- Eliminate female genital mutilation/cutting
- "Gender and Reproductive Rights (GRR) aims to promote and protect human rights and gender equality as they relate to sexual and reproductive health by developing strategies and mechanisms for promoting gender equity and equality and human rights in the Department's global and national activities, as well as within the functioning and priority-setting of the Department itself."
- "Violence against women violates women's rights to life, physical and mental integrity, to the highest attainable standard of health, to freedom from torture and it violates their sexual and reproductive rights."
One key issue for achieving reproductive rights is the criminalization of sexual violence. If a woman is not protected from forced sexual intercourse, she is not protected from forced pregnancy, namely pregnancy from rape. In order for a woman to be able to have reproductive rights, she must have the right to choose with whom and when to reproduce and, first of all, to decide whether, when, and under what circumstances to be sexually active. In many countries, these rights of women are not respected, because women do not have a choice in regard to their partner, with forced marriage and child marriage being common in parts of the world; and neither do they have any rights in regard to sexual activity, as many countries do not allow women to refuse to engage in sexual intercourse when they do not want to (because marital rape is not criminalized in those countries) or to engage in consensual sexual intercourse if they want to (because sex outside marriage is illegal in those countries).
In addition to legal barriers, there are also social barriers, because in many countries a complete sexual subordination of a woman to her husband is expected (for instance, in one survey 74% of women in Mali said that a husband is justified in beating his wife if she refuses to have sex with him), while sexual or romantic relations disapproved of by family members, or more generally sex outside marriage, can result in serious violence, such as honor killings.
According to the CDC, "HIV stands for human immunodeficiency virus. It weakens a person's immune system by destroying important cells that fight disease and infection. No effective cure exists for HIV. But with proper medical care, HIV can be controlled." Addressing HIV is an important aspect of reproductive rights because the virus can be transmitted from mother to child during pregnancy or birth, or via breast milk. The WHO states that: "All women, including those with HIV, have the right 'to decide freely and responsibly on the number and spacing of their children and to have access to the information, education and means to enable them to exercise these rights'". The reproductive rights of people living with HIV, and their health, are very important. The link between HIV and reproductive rights exists in regard to four main issues:
- prevention of unwanted pregnancy
- help to plan wanted pregnancy
- healthcare during and after pregnancy
- access to abortion services
Child and forced marriage
The WHO states that the reproductive rights and health of girls in child marriages are negatively affected. UNFPA calls child marriage a "human rights violation" and states that in developing countries, one in every three girls is married before reaching age 18, and one in nine is married under age 15. A forced marriage is a marriage in which one or more of the parties is married without his or her consent or against his or her will. The Istanbul Convention, the first legally binding instrument in Europe in the field of violence against women and domestic violence, requires countries which ratify it to prohibit forced marriage (Article 37) and to ensure that forced marriages can be easily voided without further victimization (Article 32).
Sexual violence in armed conflict
Sexual violence in armed conflict is sexual violence committed by combatants during armed conflict, war, or military occupation, often as spoils of war; but sometimes, particularly in ethnic conflict, the phenomenon has broader sociological motives. It often includes gang rape. Rape is often used as a tactic of war and a threat to international security. Sexual violence in armed conflict is a violation of reproductive rights, and often leads to forced pregnancy and sexually transmitted infections. Such sexual violations affect mostly women and girls, but rape of men can also occur, such as in the Democratic Republic of the Congo.
Maternal death is defined by the World Health Organization (WHO) as "the death of a woman while pregnant or within 42 days of termination of pregnancy, irrespective of the duration and site of the pregnancy, from any cause related to or aggravated by the pregnancy or its management but not from accidental or incidental causes." It is estimated that in 2015, about 303,000 women died during and following pregnancy and childbirth, and 99% of such deaths occurred in developing countries.
Birth control, also known as contraception and fertility control, is a method or device used to prevent pregnancy.
Birth control has been used since ancient times, but effective and safe methods of birth control only became available in the 20th century. Planning, making available, and using birth control is called family planning. Some cultures limit or discourage access to birth control because they consider it to be morally, religiously, or politically undesirable. All birth control methods meet opposition, especially religious opposition, in some parts of the world. Opposition targets not only modern methods but also 'traditional' ones: for example, the Quiverfull movement, a conservative Christian ideology, encourages the maximization of procreation and opposes all forms of birth control, including natural family planning.
According to a worldwide study by the WHO and the Guttmacher Institute, 25 million unsafe abortions (45% of all abortions) occurred every year between 2010 and 2014. 97% of unsafe abortions occur in developing countries in Africa, Asia and Latin America. By contrast, most abortions that take place in Western and Northern Europe and North America are safe. The Committee on the Elimination of Discrimination against Women considers the criminalization of abortion to be among the "violations of women's sexual and reproductive health and rights" and a form of "gender based violence"; paragraph 18 of its General recommendation No. 35 on gender based violence against women, updating general recommendation No. 19, states that: "Violations of women's sexual and reproductive health and rights, such as forced sterilizations, forced abortion, forced pregnancy, criminalisation of abortion, denial or delay of safe abortion and post abortion care, forced continuation of pregnancy, abuse and mistreatment of women and girls seeking sexual and reproductive health information, goods and services, are forms of gender based violence that, depending on the circumstances, may amount to torture or cruel, inhuman or degrading treatment." The same General Recommendation also urges countries, at paragraph 31, to "[...] In particular, repeal: a) Provisions that allow, tolerate or condone forms of gender based violence against women, including [...] legislation that criminalises abortion". An article from the World Health Organization calls safe, legal abortion a "fundamental right of women, irrespective of where they live" and unsafe abortion a "silent pandemic". The article states that "ending the silent pandemic of unsafe abortion is an urgent public-health and human-rights imperative." It also states that "access to safe abortion improves women's health, and vice versa, as documented in Romania during the regime of President Nicolae Ceaușescu" and that "legalisation of abortion on request is a necessary but insufficient step toward improving women's health", citing that in some countries, such as India, where abortion has been legal for decades, access to competent care remains restricted because of other barriers. WHO's Global Strategy on Reproductive Health, adopted by the World Health Assembly in May 2004, noted: "As a preventable cause of maternal mortality and morbidity, unsafe abortion must be dealt with as part of the MDG on improving maternal health and other international development goals and targets."
The WHO's Development and Research Training in Human Reproduction (HRP), whose research concerns people's sexual and reproductive health and lives, has an overall strategy to combat unsafe abortion that comprises four inter-related activities:
- to collate, synthesize and generate scientifically sound evidence on unsafe abortion prevalence and practices;
- to develop improved technologies and implement interventions to make abortion safer;
- to translate evidence into norms, tools and guidelines;
- and to assist in the development of programmes and policies that reduce unsafe abortion and improve access to safe abortion and high-quality post-abortion care.
The UN estimated in 2017 that repealing anti-abortion laws would save the lives of nearly 50,000 women a year. Unsafe abortions take place primarily in countries where abortion is illegal, but they also occur in countries where it is legal but where women cannot access it for various reasons (conscientious objectors among doctors, high prices, lack of knowledge that abortion is legal). Indeed, there are countries where the law is liberal, but in practice it is very difficult to have an abortion, due to most doctors being conscientious objectors. The fact that in some countries where abortion is legal it is de facto very difficult to access is controversial; in its 2017 resolution on the Intensification of efforts to prevent and eliminate all forms of violence against women and girls: domestic violence, the UN urged states to guarantee access to "safe abortion where such services are permitted by national law". Safe and legal abortion services are often very difficult to access for women from rural areas or from lower socioeconomic backgrounds. In 2008, Human Rights Watch stated that "In fact, even where abortion is permitted by law, women often have severely limited access to safe abortion services because of lack of proper regulation, health services, or political will" and estimated that "Approximately 13 percent of maternal deaths worldwide are attributable to unsafe abortion—between 68,000 and 78,000 deaths annually."
The Maputo Protocol, which was adopted by the African Union in the form of a protocol to the African Charter on Human and Peoples' Rights, states at Article 14 (Health and Reproductive Rights) that: "(2). States Parties shall take all appropriate measures to: [...] c) protect the reproductive rights of women by authorising medical abortion in cases of sexual assault, rape, incest, and where the continued pregnancy endangers the mental and physical health of the mother or the life of the mother or the foetus." The Maputo Protocol is the first international treaty to recognize abortion, under certain conditions, as a woman's human right. General comment No. 36 (2018) on article 6 of the International Covenant on Civil and Political Rights, on the right to life, adopted by the Human Rights Committee in 2018, defines, for the first time, a human right to abortion in certain circumstances (however, these UN general comments are considered soft law and, as such, are not legally binding): "Although States parties may adopt measures designed to regulate voluntary terminations of pregnancy, such measures must not result in violation of the right to life of a pregnant woman or girl, or her other rights under the Covenant.
Thus, restrictions on the ability of women or girls to seek abortion must not, inter alia, jeopardize their lives, subject them to physical or mental pain or suffering which violates article 7, discriminate against them or arbitrarily interfere with their privacy. States parties must provide safe, legal and effective access to abortion where the life and health of the pregnant woman or girl is at risk, and where carrying a pregnancy to term would cause the pregnant woman or girl substantial pain or suffering, most notably where the pregnancy is the result of rape or incest or is not viable. In addition, States parties may not regulate pregnancy or abortion in all other cases in a manner that runs contrary to their duty to ensure that women and girls do not have to undertake unsafe abortions, and they should revise their abortion laws accordingly. For example, they should not take measures such as criminalizing pregnancies by unmarried women or apply criminal sanctions against women and girls undergoing abortion or against medical service providers assisting them in doing so, since taking such measures compel women and girls to resort to unsafe abortion. States parties should not introduce new barriers and should remove existing barriers that deny effective access by women and girls to safe and legal abortion, including barriers caused as a result of the exercise of conscientious objection by individual medical providers."
When negotiating the Cairo Programme of Action at the 1994 International Conference on Population and Development (ICPD), the issue was so contentious that delegates eventually decided to omit any recommendation to legalize abortion, instead advising governments to provide proper post-abortion care and to invest in programs that would decrease the number of unwanted pregnancies. In April 2008 the Parliamentary Assembly of the Council of Europe, a group comprising members from 47 European countries, adopted a resolution calling for the decriminalization of abortion within reasonable gestational limits and guaranteed access to safe abortion procedures. The nonbinding resolution was passed on April 16 by a vote of 102 to 69.
During and after the ICPD, some interested parties attempted to interpret the term 'reproductive health' in the sense that it implies abortion as a means of family planning or, indeed, a right to abortion. These interpretations, however, do not reflect the consensus reached at the Conference. For the European Union, where legislation on abortion is generally less restrictive than elsewhere, the Council Presidency has clearly stated that the Council's commitment to promote 'reproductive health' did not include the promotion of abortion. Likewise, the European Commission, in response to a question from a Member of the European Parliament, clarified: "The term 'reproductive health' was defined by the United Nations (UN) in 1994 at the Cairo International Conference on Population and Development. All Member States of the Union endorsed the Programme of Action adopted at Cairo. The Union has never adopted an alternative definition of 'reproductive health' to that given in the Programme of Action, which makes no reference to abortion." With regard to the U.S., only a few days prior to the Cairo Conference, the head of the U.S.
delegation, Vice President Al Gore, had stated for the record: "Let us get a false issue off the table: the US does not seek to establish a new international right to abortion, and we do not believe that abortion should be encouraged as a method of family planning." Some years later, the position of the U.S. Administration in this debate was reconfirmed by U.S. Ambassador to the UN, Ellen Sauerbrey, when she stated at a meeting of the UN Commission on the Status of Women that "nongovernmental organizations are attempting to assert that Beijing in some way creates or contributes to the creation of an internationally recognized fundamental right to abortion". She added: "There is no fundamental right to abortion. And yet it keeps coming up largely driven by NGOs trying to hijack the term and trying to make it into a definition".
Collaborative research from the Institute of Development Studies states that "access to safe abortion is a matter of human rights, democracy and public health, and the denial of such access is a major cause of death and impairment, with significant costs to [international] development". The research highlights the inequities of access to safe abortion both globally and nationally and emphasises the importance of global and national movements for reform to address this. The shift by reproductive rights campaigners from an issue-based agenda (the right to abortion) to framing safe, legal abortion not only as a human right but as bound up with democratic and citizenship rights has been an important way of reframing the abortion debate and the reproductive justice agenda. Meanwhile, the European Court of Human Rights further complicated the question through a landmark judgment (A, B and C v. Ireland), in which it stated that the denial of abortion for health and/or well-being reasons is an interference with an individual's right to respect for private and family life under Article 8 of the European Convention on Human Rights, an interference which in some cases can be justified.
A desire to achieve certain population targets has resulted throughout history in severely abusive practices, in cases where governments ignored human rights and enacted aggressive demographic policies. In the 20th century, several authoritarian governments sought either to increase or to decrease birth rates, often through forceful intervention. One of the most notorious natalist policies was that of communist Romania in the period 1967-1990, under communist leader Nicolae Ceaușescu, who adopted a very aggressive natalist policy which included outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people. Ceaușescu's policy resulted in the deaths of over 9,000 women due to illegal abortions, large numbers of children put into Romanian orphanages by parents who could not cope with raising them, street children in the 1990s (when many orphanages were closed and the children ended up on the streets), and overcrowding in homes and schools. The irony of Ceaușescu's aggressive natalist policy was that a generation that might otherwise not have been born eventually led the Romanian Revolution, which overthrew him and led to his execution. In stark contrast to Ceaușescu's natalist policy was China's one-child policy, in effect from 1978 to 2015, which included abuses such as forced abortions.
This policy has also been deemed responsible for the common practice of sex-selective abortion, which led to an imbalanced sex ratio in the country.
From the 1970s to the 1980s, tension grew between women's health activists, who advanced women's reproductive rights as part of a human rights-based approach, on the one hand, and population control advocates on the other. At the 1984 UN World Population Conference in Mexico City, population control policies came under attack from women's health advocates, who argued that the policies' narrow focus led to coercion and decreased quality of care, and that these policies ignored the varied social and cultural contexts in which family planning was provided in developing countries. In the 1980s the HIV/AIDS epidemic forced a broader discussion of sex into the public discourse in many countries, leading to more emphasis on reproductive health issues beyond reducing fertility. The growing opposition to the narrow population control focus led to a significant departure in the early 1990s from past population control policies. In the United States, abortion opponents have begun to foment conspiracy theories about reproductive rights advocates, accusing them of advancing a racist agenda of eugenics and of trying to reduce the African American birth rate in the U.S.
Female genital mutilation
Female genital mutilation (FGM) is defined as "all procedures that involve partial or total removal of the external female genitalia, or other injury to the female genital organs for non-medical reasons." The procedure has no health benefits, and can cause severe bleeding and problems urinating, cysts, infections, complications in childbirth and increased risk of newborn deaths. It is performed for traditional, cultural or religious reasons in many parts of the world, especially in Africa. The Istanbul Convention prohibits FGM (Article 38).
Bride kidnapping or buying and reproductive slavery
Bride kidnapping, or marriage by abduction, is the practice whereby a woman or girl is abducted for the purpose of a forced marriage. Bride kidnapping has been practiced historically in many parts of the world, and it continues to occur today in some places, especially in Central Asia and the Caucasus, in countries such as Kyrgyzstan, Tajikistan, Kazakhstan, Turkmenistan, Uzbekistan and Armenia, as well as in Ethiopia. Bride kidnapping is often preceded or followed by rape (which may result in pregnancy) in order to force the marriage, a practice also supported by "marry-your-rapist" laws (laws regarding sexual violence, abduction or similar acts, whereby the perpetrator avoids prosecution or punishment if he marries the victim). The abduction of women may happen on an individual scale or on a mass scale. Raptio is a Latin term referring to the large-scale abduction of women, usually for marriage or sexual slavery, particularly during wartime. Bride price, also called bridewealth, is money, property, or another form of wealth paid by a groom or his family to the parents of the woman he marries. The practice of bride price sometimes leads to parents selling young daughters into marriage and to trafficking. Bride price is common across Africa. Such forced marriages often lead to sexual violence and forced pregnancy. In northern Ghana, for example, the payment of bride price signifies a woman's requirement to bear children, and women using birth control are at risk of threats and coercion.
The 1956 Supplementary Convention on the Abolition of Slavery, the Slave Trade, and Institutions and Practices Similar to Slavery defines "institutions and practices similar to slavery" to include:
c) Any institution or practice whereby:
- (i) A woman, without the right to refuse, is promised or given in marriage on payment of a consideration in money or in kind to her parents, guardian, family or any other person or group; or
- (ii) The husband of a woman, his family, or his clan, has the right to transfer her to another person for value received or otherwise; or
- (iii) A woman on the death of her husband is liable to be inherited by another person.
Laws in many countries and states require sperm donors to be either anonymous or known to the recipient, or the laws restrict the number of children each donor may father. Although many donors choose to remain anonymous, new technologies such as the Internet and DNA technology have opened up new avenues for those wishing to know more about the biological father, siblings and half-siblings.
Ethnic minority women
In Peru, President Alberto Fujimori (in office from 1990 to 2000) has been accused of genocide and crimes against humanity as a result of the Programa Nacional de Población, a sterilization program put in place by his administration. During his presidency, Fujimori put in place a program of forced sterilizations against indigenous people (mainly the Quechuas and the Aymaras), in the name of a "public health plan" presented on July 28, 1995. During the 20th century, forced sterilization of Roma women was practiced in European countries, especially in former Communist countries, and there are allegations that these practices continue unofficially in some countries, such as the Czech Republic, Bulgaria, Hungary and Romania. In V.C. v. Slovakia, the European Court of Human Rights ruled in favor of a Roma woman who was the victim of forced sterilization in a state hospital in Slovakia in 2000.
Forced sterilization in the United States was practiced starting in the 19th century. The United States during the Progressive era, ca. 1890 to 1920, was the first country to concertedly undertake compulsory sterilization programs for the purpose of eugenics. Thomas C. Leonard, a professor at Princeton University, describes American eugenics and sterilization as ultimately rooted in economic arguments and further as a central element of Progressivism alongside wage controls, restricted immigration, and the introduction of pension programs. The heads of the programs were avid proponents of eugenics and frequently argued for their programs, which achieved some success nationwide, mainly in the first half of the 20th century. Compulsory sterilization has also been practiced historically in parts of Canada. Two Canadian provinces (Alberta and British Columbia) carried out compulsory sterilization programs in the 20th century with eugenic aims. Canadian compulsory sterilization operated via the same overall mechanisms of institutionalization, judgment, and surgery as the American system. However, one notable difference was in the treatment of non-insane criminals: Canadian legislation never allowed for punitive sterilization of inmates. The Sexual Sterilization Act of Alberta was enacted in 1928 and repealed in 1972. In 1995, Leilani Muir sued the Province of Alberta for forcing her to be sterilized against her will and without her permission in 1959. Since Muir's case, the Alberta government has apologized for the forced sterilization of over 2,800 people.
Nearly 850 Albertans who were sterilized under the Sexual Sterilization Act were awarded CA$142 million in damages.
Roman Catholic Church
The Catholic Church is opposed to artificial contraception, abortion, and sexual intercourse outside marriage. This belief dates back to the first centuries of Christianity. While Roman Catholicism is not the only religion with such views, its religious doctrine is very powerful in influencing countries where most of the population is Catholic. The few countries of the world with complete bans on abortion are Catholic-majority countries, and in Europe strict restrictions on abortion exist in the Catholic-majority countries of Malta (complete ban), Andorra, San Marino and Liechtenstein, and to a lesser extent Poland and Monaco. Some of the countries of Central America, notably El Salvador, have also come to international attention due to very forceful enforcement of their anti-abortion laws. El Salvador has received repeated criticism from the UN. The Office of the UN High Commissioner for Human Rights (OHCHR) named the country's law "one of the most draconian abortion laws in the world" and urged liberalization, and Zeid bin Ra'ad, the United Nations High Commissioner for Human Rights, stated that he was "appalled that as a result of El Salvador's absolute prohibition on abortion, women are being punished for apparent miscarriages and other obstetric emergencies, accused and convicted of having induced termination of pregnancy".
Criticism surrounds certain forms of anti-abortion activism. Anti-abortion violence is a serious issue in some parts of the world, especially in North America. It is recognized as single-issue terrorism. Numerous organizations have also recognized anti-abortion extremism as a form of Christian terrorism. Incidents include vandalism, arson, and bombings of abortion clinics, such as those committed by Eric Rudolph (1996–98), and murders or attempted murders of physicians and clinic staff, as committed by James Kopp (1998), Paul Jennings Hill (1994), Scott Roeder (2009), Michael F. Griffin (1993), and Peter James Knight (2001). Since 1978, anti-abortion violence in the US has included at least 11 murders of medical staff, 26 attempted murders, 42 bombings, and 187 arsons. Some opponents of legalized abortion view the term "reproductive rights" as a euphemism to sway emotions in favor of abortion. National Right to Life has referred to "reproductive rights" as a "fudge term" and "the code word for abortion rights."
- Cook, Rebecca J.; Fathalla, Mahmoud F. (1996). "Advancing Reproductive Rights Beyond Cairo and Beijing". International Family Planning Perspectives. 22 (3): 115–21. doi:10.2307/2950752. JSTOR 2950752. - "Gender and reproductive rights". WHO.int. Archived from the original on 2009-07-26. Retrieved 2010-08-29. - Amnesty International USA (2007). "Stop Violence Against Women: Reproductive rights". SVAW. Amnesty International USA. Archived from the original on 2008-01-20. Retrieved 2007-12-08. - "Tackling the taboo of menstrual hygiene in the European Region". WHO.int. 2018-11-08. Archived from the original on 2019-07-28. - Singh, Susheela (2018). "Inclusion of menstrual health in sexual and reproductive health and rights — Authors' reply". The Lancet Child & Adolescent Health. 2 (8): e19. doi:10.1016/S2352-4642(18)30219-0. - Freedman, Lynn P.; Isaacs, Stephen L. (1993). "Human Rights and Reproductive Choice". Studies in Family Planning. 24 (1): 18–30. doi:10.2307/2939211. JSTOR 2939211. PMID 8475521. - "Template". Nocirc.org.
Retrieved 19 August 2017. - "Proclamation of Teheran". International Conference on Human Rights. 1968. Archived from the original on 2007-10-17. Retrieved 2007-11-08. - Dorkenoo, Efua. (1995). Cutting the rose : female genital mutilation : the practice and its prevention. Minority Rights Publications. ISBN 1873194609. OCLC 905780971. - Center for Reproductive Rights, International Legal Program, Establishing International Reproductive Rights Norms: Theory for Change, US CONG. REC. 108th CONG. 1 Sess. E2534 E2547 (Rep. Smith) (Dec. 8, 2003): We have been leaders in bringing arguments for a woman's right to choose abortion within the rubric of international human rights. However, there is no binding hard norm that recognizes women's right to terminate a pregnancy. (...) While there are hard norms prohibiting sex discrimination that apply to girl adolescents, these are problematic since they must be applied to a substantive right (i.e., the right to health) and the substantive reproductive rights of adolescents are not `hard' (yet!). There are no hard norms on age discrimination that would protect adolescents' ability to exercise their rights to reproductive health, sexual education, or reproductive decisionmaking. In addition, there are no hard norms prohibiting discrimination based on marital status, which is often an issue with respect to unmarried adolescents' access to reproductive health services and information. The soft norms support the idea that the hard norms apply to adolescents under 18. They also fill in the substantive gaps in the hard norms with respect to reproductive health services and information as well as adolescents' reproductive autonomy. (...) There are no hard norms in international human rights law that directly address HIV/AIDS directly. At the same time, a number of human rights bodies have developed soft norms to secure rights that are rendered vulnerable by the HIV/AIDS epidemic. (...) Practices with implications for women's reproductive rights in relation to HIV/AIDS are still not fully covered under existing international law, although soft norms have addressed them to some extent. (...) There is a lack of explicit prohibition of mandatory testing of HIV-positive pregnant women under international law. (...) None of the global human rights treaties explicitly prohibit child marriage and no treaty prescribes an appropriate minimum age for marriage. The onus of specifying a minimum age at marriage rests with the states' parties to these treaties. (...) We have to rely extensively on soft norms that have evolved from the TMBs and that are contained in conference documents to assert that child marriage is a violation of fundamental human rights. - Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 1. ISBN 978-0-8265-1528-5. - "Population Matters search on "reproductive rights"". Populationmatters.org/. Archived from the original on 2014-06-27. Retrieved 2017-08-19. - "unhchr.ch". Unhchr.ch. - Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. pp. 5–6. ISBN 978-0-8265-1528-5. - Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 7. ISBN 978-0-8265-1528-5. - "A/CONF.171/13: Report of the ICPD (94/10/18) (385k)". Un.org. Retrieved 2017-08-19. - Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 9. ISBN 978-0-8265-1528-5. - Bunch, Charlotte; Fried, Susana (1996). 
"Beijing '95: Moving Women's Human Rights from Margin to Center". Signs: Journal of Women in Culture and Society. 22 (1): 200–4. doi:10.1086/495143. JSTOR 3175048. - Merry, S.E. (Editor M. Agosin) (2001). Women, Violence, and the Human Rights System. Women, Gender, and Human Rights: A Global Perspective. New Brunswick: Rutgers University Press. pp. 83–97. - Nowicka, Wanda (2011). "Sexual and reproductive rights and the human rights agenda: controversial and contested". Reproductive Health Matters. 19 (38): 119–128. doi:10.1016/s0968-8080(11)38574-6. ISSN 0968-8080. PMID 22118146. - ROUSSEAU, STEPHANIE. MORALES HUDON, ANAHI. (2019). INDIGENOUS WOMEN'S MOVEMENTS IN LATIN AMERICA : gender and ethnicity in peru, mexico, and bolivia. PALGRAVE MACMILLAN. ISBN 978-1349957194. OCLC 1047563400.CS1 maint: multiple names: authors list (link) - Solinger, Rickie, 1947- author. Reproductive politics : what everyone needs to know. ISBN 9780199811458. OCLC 830323649.CS1 maint: multiple names: authors list (link) - "About the Yogyakarta Principles". Yogyakartaprinciples.org. Archived from the original on 2016-03-04. Retrieved 2017-08-19. - International Service for Human Rights, Majority of GA Third Committee unable to accept report on the human right to sexual education Archived 2013-05-15 at the Wayback Machine - "The Yogyakarta Principles" Preamble and Principle 9. The Rights to Treatment with Humanity While in Detention - United Nations General Assembly, Official Records, Third Committee, Summary record of the 29th meeting held in New York, on Monday, 25 October 2010, at 3 p.m Archived 27 September 2012 at the Wayback Machine. For instance, Malawi, speaking on behalf of all African States, argued that the Yogyakarta Principles were "controversial and unrecognized," while the representative of the Russian Federation said that they "had not been agreed to at the intergovernmental level, and which therefore could not be considered as authoritative expressions of the opinion of the international community" (para. 9, 23). - Natalae Anderson (September 22, 2010). "Memorandum: Charging Forced Marriage as a Crime Against Humanity" (PDF). D.dccam.org. Retrieved 2017-08-19. - "BBC NEWS - World - Americas - Mass sterilisation scandal shocks Peru". News.bbc.co.uk. 2002-07-24. Retrieved 2017-08-19. - "Archived copy" (PDF). Archived from the original (PDF) on 2016-03-04. Retrieved 2015-11-20.CS1 maint: archived copy as title (link) - "Archived copy". Archived from the original on 2016-07-08. Retrieved 2016-09-26.CS1 maint: archived copy as title (link) - Wilson, K. (2017). "In the name of reproductive rights: race, neoliberalism and the embodied violence of population policies" (PDF). New Formations. 91 (91): 50–68. doi:10.3898/NEWF:91.03.2017 – via JSTOR. - Basu, A. (Editors C. R. a. K. McCann, Seung-kyung) (2000). Globalization of the Local/Localization of the Global: Mapping Transnational Women's Movements. In Feminist Theory Reader: Local and Global Perspectives. United Kingdom: Routledge. pp. 68–76. - Mooney, Jadwiga E. Pieper, Auteur. (2009). The politics of motherhood maternity and women's rights in twentieth-century Chile. University of Pittsburgh Press. ISBN 9780822960430. OCLC 690336424.CS1 maint: multiple names: authors list (link) - Bueno-Hansen, Pascha, author. (2015). Feminist and human rights struggles in Peru : decolonizing transitional justice. Urbana: University of Illinois Press. ISBN 9780252097539. OCLC 1004369974.CS1 maint: multiple names: authors list (link) - Kaplan, T. (Editor M. 
Agosin) (2001). Women's Rights as Human Rights: Women as Agents of Social Change. Women, Gender, and Human Rights: A Global Perspective. New Brunswick: Rutgers University Press. pp. 191–204. - "Universal declaration of human rights". 2014-05-28. doi:10.18356/b0fc2dba-en. - Freeman, Marsha A.; Chinkin, Christine; Rudolf, Beate (2012-01-01). "Violence Against Women". The UN Convention on the Elimination of All Forms of Discrimination Against Women. 1. doi:10.5422/fso/9780199565061.003.0019. - "U.N. Millennium Development Goals". - "U.N. Sustainable Development Goals". - Murray, Christopher J.L. (2015). "Shifting to Sustainable Development Goals - Implications for Global Health". New England Journal of Medicine. 373 (15): 1390–1393. doi:10.1056/NEJMp1510082. PMID 26376045. - Bant, Astrid; Girard, Françoise (2008). "Sexuality, health, and human rights: self-identified priorities of indigenous women in Peru". Gender & Development. 16 (2): 247–256. doi:10.1080/13552070802120426. ISSN 1355-2074. - Amnesty International, Defenders of Sexual and Reproductive Rights Archived 2013-10-02 at the Wayback Machine; International Women's Health Coalition and the United Nations, Campaign for an Inter-American Convention on Sexual and Reproductive Rights, Women's Health Collection, Abortion as a human right: possible strategies in unexplored territory. (Sexual Rights and Reproductive Rights), (2003); and Shanthi Dairiam, Applying the CEDAW Convention for the recognition of women's health rights, Arrows For Change, (2002). In this regard, the Center for Reproductive Rights has noted that: Our goal is to ensure that governments worldwide guarantee women's reproductive rights out of an understanding that they are bound to do so. The two principal prerequisites for achieving this goal are: (1) the strengthening of international legal norms protecting reproductive rights; and (2) consistent and effective action on the part of civil society and the international community to enforce these norms. Each of these conditions, in turn, depends upon profound social change at the local, national and international (including regional) levels. (...) Ultimately, we must persuade governments to accept reproductive rights as binding norms. Again, our approach can move forward on several fronts, with interventions both at the national and international levels. Governments' recognition of reproductive rights norms may be indicated by their support for progressive language in international conference documents or by their adoption and implementation of appropriate national-level legislative and policy instruments. In order to counter opposition to an expansion of recognized reproductive rights norms, we have questioned the credibility of such reactionary yet influential international actors as the United States and the Holy See. Our activities to garner support for international protections of reproductive rights include: Lobbying government delegations at UN conferences and producing supporting analyses/materials; fostering alliances with members of civil society who may become influential on their national delegations to the UN; and preparing briefing papers and factsheets exposing the broad anti-woman agenda of our opposition. Center for Reproductive Rights, International Legal Program, Establishing International Reproductive Rights Norms: Theory for Change, US CONG. REC. 108th CONG. 1 Sess. E2534 E2547 (Rep. Smith) (Dec. 8, 2003) - "[programme] Basis for action". Iisd.ca. Retrieved 2015-02-17.
- "WHO | Sexual and reproductive health and rights: a global development, health, and human rights priority". WHO. Retrieved 2019-06-19. - United Nations, Report of the Fourth International Conference on Population and Development, Cario, 5 - 13 September 1994. Guatemala entered the following reservation: Chapter VII: we enter a reservation on the whole chapter, for the General Assembly's mandate to the Conference does not extend to the creation or formulation of rights; this reservation therefore applies to all references in the document to "reproductive rights", "sexual rights", "reproductive health", "fertility regulation", "sexual health", "individuals", "sexual education and services for minors", "abortion in all its forms", "distribution of contraceptives" and "safe motherhood" - "Sir David Attenborough on the roots of Climatic problems". Independent.co.uk. The Independent, UK broadsheet newspaper. - "OHCHR | Sexual and reproductive health and rights". www.ohchr.org. Retrieved 2019-06-19. - "Women's History". Womenshistory.about.com. Retrieved 19 August 2017. - Kirk, Okazawa-Rey 2004 - Best, Kim (Spring 1998). "Men's Reproductive Health Risks: Threats to men's fertility and reproductive health include disease, cancer and exposure to toxins". Network: 7–10. Retrieved 2008-01-02. - McCulley Melanie G (1998). "The male abortion: the putative father's right to terminate his interests in and obligations to the unborn child". The Journal of Law and Policy. VII (1): 1–55. PMID 12666677. - Young, Kathy (Oct 19, 2000). "A man's right to choose". Salon.com. Retrieved May 10, 2011. - Owens, Lisa Lucile (2013). "Coerced Parenthood as Family Policy: Feminism, the Moral Agency of Women, and Men's 'Right to Choose'". Alabama Civil Rights & Civil Liberties Law Review. 5: 1–33. SSRN 2439294. - Traister, Rebecca. (March 13, 2006). "Roe for men?" Salon.com. Retrieved December 17, 2007. - "ROE vs. WADE… FOR MEN: Men's Center files pro-choice lawsuit in federal court". Nationalcenterformen.org. - "U.S. Court of Appeals for the Sixth Circuit, case No. 06-11016" (PDF). - Money, John; Ehrhardt, Anke A. (1972). Man & Woman Boy & Girl. Differentiation and dimorphism of gender identity from conception to maturity. USA: The Johns Hopkins University Press. ISBN 978-0-8018-1405-1. - Domurat Dreger, Alice (2001). Hermaphrodites and the Medical Invention of Sex. USA: Harvard University Press. ISBN 978-0-674-00189-3. - Resolution 1952/2013, Provision version, Children’s right to physical integrity, Council of Europe, 1 October 2013 - Involuntary or coerced sterilisation of intersex people in Australia, Australian Senate Community Affairs Committee, October 2013. - It's time to defend intersex rights, Morgan Carpenter at Australian Broadcasting Corporation, 15 November 2013. - Australian Parliament committee releases intersex rights report, Gay Star News, 28 October 2013. - On the management of differences of sex development, Ethical issues relating to "intersexuality", Opinion No. 20/2012 Archived 2013-06-20 at the Wayback Machine, Swiss National Advisory Commission on Biomedical Ethics, November 2012. - Report of the UN Special Rapporteur on Torture, Office of the UN High Commissioner for Human Rights, February 2013. - WHO/UN interagency statement on involuntary or coerced sterilisation, Organisation Intersex International Australia, 30 May 2014. - Eliminating forced, coercive and otherwise involuntary sterilization, An interagency statement, World Health Organization, May 2014. - Organization, World Health. 
"World Health Organization - HIV and Adolescents from Guidance to Action". apps.who.int. - Uy, Jocelyn R. "DOH backs bill allowing minor to get HIV, AIDS tests without parental consent". Newsinfo.inquirer.net. - "Challenging parental consent laws to increase young people's access to vital HIV services - UNAIDS". Unaids.org. - Maradiegue, Ann (2003). "Minor's Rights Versus Parental Rights: Review of Legal Issues in Adolescent Health Care". Journal of Midwifery & Women's Health. 48 (3): 170–177. doi:10.1016/S1526-9523(03)00070-9. PMID 12764301. - "Sexual and Reproductive Rights of Young People: Autonomous decision making and confidential services" (PDF). International Planned Parenthood Federation. Retrieved October 1, 2017. - Mugisha, Frederick (2009). "Chapter 42: HIV and AIDS, STIs and sexual health among young people". In Furlong, Andy (ed.). Handbook of Youth and Young Adulthood. Routledge. pp. 344–352. ISBN 978-0-415-44541-2. - Lowry, Andrew (August 29, 2017). "Homeless girl in India forced to give birth on street metres away from health centre: She was shivering and unable to lift and cuddle her infant". The Independent. Retrieved October 1, 2017. - "Adolescent sexual and reproductive health - UNFPA - United Nations Population Fund". Unfpa.org. - "Joint United Nations statement on ending discrimination in health care settings — Joint WHO/UN statement". World Health Organization. June 27, 2017. Retrieved October 1, 2017. - "Legal minimum ages and the realization of adolescents' rights" (PDF). Unicef. Retrieved October 12, 2017. - Lukale, Nelly (2012). "Sexual Reproductive Health and Rights for Young People in Africa". ARROWs for Change. 18 (2): 7–8. - Knudson, Lara (2006). Reproductive Rights in a Global Context: South Africa, Uganda, Peru, Denmark, United States, Vietnam, Jordan. Nashville, TN: Vanderbilt University Press. - "HIV/AIDS Factsheet". World Health Organization. Retrieved October 1, 2017. - De Irala, Jokin; Osorio, Alfonso; Carlos, Silvia; Lopez-Del Burgo, Cristina (2011). "Choice of birth control methods among European women and the role of partners and providers" (PDF). Contraception. 84 (6): 558–64. doi:10.1016/j.contraception.2011.04.004. hdl:10171/19110. PMID 22078183. - Larsson, Margareta; Tydén, Tanja; Hanson, Ulf; Häggström-Nordin, Elisabet (2009). "Contraceptive use and associated factors among Swedish high school students". The European Journal of Contraception & Reproductive Health Care. 12 (2): 119–24. doi:10.1080/13625180701217026. PMID 17559009. - "Chile abortion: Court approves easing total ban". BBC. August 21, 2017. Retrieved October 1, 2017. - Freeman, Cordelia (August 29, 2017). "Chile: the long road to abortion reform — After a fierce debate, one of the most restrictive reproductive laws in the world has been eased". The Independent. Retrieved October 1, 2017. - Goicolea, Isabel (2010). "Adolescent Pregnancies in the Amazon Basin of Ecuador: A Rights and Gender Approach to Adolescents' Sexual and Reproductive Health". Global Health Action. 3: 1–11. doi:10.3402/gha.v3i0.5280. PMC 2893010. PMID 20596248. - "Fact Sheet: Contraceptive Use in the United States". Guttmacher Institute. 2004-08-04. Retrieved 24 April 2013. - Alesha Doan (2007). Opposition and Intimidation: The Abortion Wars and Strategies of Political Harassment. University of Michigan Press. p. 57. ISBN 9780472069750. - Casey, 505 U.S. at 877. - "Strict Texas abortion law struck down". 27 June 2016 – via www.bbc.com. - Goldman, Lisa A.; García, Sandra G.; Díaz, Juan; Yam, Eileen A. 
(15 November 2005). "Brazilian obstetrician-gynecologists and abortion: a survey of knowledge, opinions and practices". Reproductive Health. 2: 10. doi:10.1186/1742-4755-2-10. PMC 1308861. PMID 16288647. - "Abortion in Ghana". 24 February 2016. - "NEPAL: Only Half of Women Know Abortion is Legal - Inter Press Service". www.ipsnews.net. - "Wayback Machine". 8 June 2011. - Assembly, United Nations General. "A/RES/48/104 - Declaration on the Elimination of Violence against Women - UN Documents: Gathering a body of global agreements". www.un-documents.net. - "United Nations Population Fund | Supporting the Constellation of Reproductive Rights". UNFPA. Retrieved 2015-02-17. - "United Nations Population Fund | State of World Population 2005". UNFPA. Retrieved 2015-02-17. - "WHO | Gender and Reproductive Rights". Who.int. Retrieved 2015-02-17. - "Sexual and reproductive rights | Amnesty International". Amnesty.org. 2007-11-06. Retrieved 2015-02-17. - "WHO | Gender and human rights". Who.int. 2002-01-31. Retrieved 2015-02-17. - "Bioline International Official Site (site up-dated regularly)". Bioline.org.br. 2015-02-09. Retrieved 2015-02-17. - "Ethics: Honour Crimes". BBC. 1970-01-01. Retrieved 2015-02-17. - "AIDSinfo". UNAIDS. Retrieved 4 March 2013. - "HIV Basics | HIV/AIDS | CDC". Cdc.gov. 2018-07-23. Retrieved 2016-10-05. - "WHO | Reproductive choices for women with HIV". Who.int. Retrieved 2015-02-17. - "Child marriage – a threat to health". www.euro.who.int. 20 December 2012. - "Child marriage - UNFPA - United Nations Population Fund". www.unfpa.org. - "The Convention of Belem do Para and the Istanbul Convention : A response to violence against women worldwide" (PDF). Oas.org. Retrieved 2015-11-20. - "OHCHR | Rape: Weapon of war". www.ohchr.org. Retrieved 2019-06-19. - Pillai Vijayan, Wang Ya-Chien, Maleku Arati (2017). "Women, war, and reproductive health in developing countries". Social Work in Health Care. 56 (1): 28–44. doi:10.1080/00981389.2016.1240134. PMID 27754779.CS1 maint: multiple names: authors list (link) - Melhado, L (2010). "Rates of Sexual Violence Are High in Democratic Republic of the Congo". International Perspectives on Sexual and Reproductive Health. 36 (4): 210. JSTOR 41038670. - Autesserre, Séverine (2012). "Dangerous Tales: Dominant Narratives on the Congo and their Unintended Consequences". African Affairs. 111 (443): 202–222. doi:10.1093/afraf/adr080. - Country Comparison: Maternal Mortality Rate in The CIA World Factbook. Date of Information: 2010 - "WHO - Maternal mortality ratio (per 100 000 live births)". www.who.int. - "Maternal mortality". World Health Organization. - "Definition of Birth control". MedicineNet. Archived from the original on August 6, 2012. Retrieved August 9, 2012. - Hanson, S.J.; Burke, Anne E. (December 21, 2010). "Fertility control: contraception, sterilization, and abortion". In Hurt, K. Joseph; Guile, Matthew W.; Bienstock, Jessica L.; Fox, Harold E.; Wallach, Edward E. (eds.). The Johns Hopkins manual of gynecology and obstetrics (4th ed.). Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins. pp. 382–395. ISBN 978-1-60547-433-5. - Oxford English Dictionary. Oxford University Press. June 2012. - World Health Organization (WHO). "Family planning". Health topics. World Health Organization (WHO). Archived from the original on March 18, 2016. Retrieved March 28, 2016. - Joyce, Kathryn (2006-11-09). "Arrows for the War". ISSN 0027-8378. Retrieved 2019-06-19. - "Worldwide, an estimated 25 million unsafe abortions occur each year". 
World Health Organization. - "WHO: Unsafe Abortion - The Preventable Pandemic". Archived from the original on 2010-01-13. Retrieved 2010-01-16. - "WHO | Preventing unsafe abortion". Who.int. Retrieved 2015-02-17. - "HRP | World Health Organization". Who.int. Retrieved 2015-02-17. - Section, United Nations News Service (27 September 2016). "UN News - Repealing anti-abortion laws would save the lives of nearly 50,000 women a year – UN experts". UN News Service Section. - Duncan, Stephanie Kirchgaessner Pamela; Nardelli, Alberto; Robineau, Delphine (11 March 2016). "Seven in 10 Italian gynaecologists refuse to carry out abortions". The Guardian – via www.theguardian.com. - "Doctors' Refusal to Perform Abortions Divides Croatia :: Balkan Insight". www.balkaninsight.com. - "United Nations Official Document". www.un.org. - "Human Rights Watch: Women's Human Rights: Abortion". www.hrw.org. - "Protocol to the African Charter on Human and Peoples' Rights on the Rights of Women in Africa / Legal Instruments / ACHPR". www.achpr.org. Retrieved 2019-06-19. - "General Comment No. 2 on Article 14.1 (a), (b), (c) and (f) and Article 14. 2 (a) and (c) of the Protocol to the African Charter on Human and Peoples' Rights on the Rights of Women in Africa / Legal Instruments / ACHPR". www.achpr.org. Retrieved 2019-06-19. - Grover, Leena; Keller, Helen (April 2012). "General Comments of the Human Rights Committee and their legitimacy". UN Human Rights Treaty Bodies: Law and Legitimacy. Retrieved 2019-06-19. - Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 6. ISBN 978-0-8265-1528-5. - "Council of Europe Urges Member States to Decriminalize Abortion". Guttmacher.org. 2008-04-18. Retrieved 2015-02-17. - European Parliament, 4 December 2003: Oral Question (H-0794/03) for Question Time at the part-session in December 2003 pursuant to Rule 43 of the Rules of Procedure by Dana Scallon to the Council. In the written record of that session, one reads: Posselt (PPE-DE): "Does the term 'reproductive health’ include the promotion of abortion, yes or no?" - Antonione, Council: "No." - European Parliament, 24 October 2002: Question no 86 by Dana Scallon (H-0670/02) - Jyoti Shankar Singh, Creating a New Consensus on Population (London: Earthscan, 1998), 60 - Lederer, AP/San Francisco Chronicle, 1 March 2005 - Leopold, Reuters, 28 February 2005 - "Unsafe Abortion: A Development Issue". Institute of Development Studies (IDS) Bulletin. 39 (3). July 2009. - Kligman, Gail. "Political Demography: The Banning of Abortion in Ceausescu's Romania". In Ginsburg, Faye D.; Rapp, Rayna, eds. Conceiving the New World Order: The Global Politics of Reproduction. Berkeley, CA: University of California Press, 1995 :234-255. Unique Identifier : AIDSLINE KIE/49442. - Levitt & Dubner, Steven & Stephen (2005). Freakonomics. 80 Strand, London WC2R ORL England: Penguin Group. p. 107. ISBN 9780141019017. - "China forced abortion photo sparks outrage - BBC News". BBC News. 2012-06-14. Retrieved 2017-03-11. - Bulte, E., Heerink, N., & Zhang, X. (2011). "China's one-child policy and 'the mystery of missing women': ethnic minorities and male-biased sex ratios". Oxford Bulletin of Economics and Statistics. 73 (1): 0305–9049. doi:10.1111/j.1468-0084.2010.00601.x.CS1 maint: multiple names: authors list (link) - Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 2. ISBN 978-0-8265-1528-5. - Knudsen, Lara (2006). Reproductive Rights in a Global Context. 
Vanderbilt University Press. pp. 4–5. ISBN 978-0-8265-1528-5. - Dewan, Shaila (February 26, 2010). "To Court Blacks, Foes of Abortion Make Racial Case". New York Times. Retrieved 7 June 2010. - "Female genital mutilation". World Health Organization. - "Archived copy". Archived from the original on 2017-05-31. Retrieved 2017-08-07.CS1 maint: archived copy as title (link) - "One in five girls and women kidnapped for marriage in Kyrgyzstan". Reuters. August 2017. - Ash, Lucy (2010-08-10). "Chechen stolen brides 'exorcised'". BBC News. - "Kidnapped. Raped. Married. The extraordinary rebellion of Ethiopia's". 2010-03-17. - "Ethiopian girls fear forced marriage". 2006-05-14. - Mellen, Ruby (March–April 2017). "The Rapist's Loophole: Marriage". Foreign Policy (223): 20. - "Human rights groups ask NWFP Govt. To ban 'bride price' to curb women Trafficking. - Free Online Library". - "Islands Business - PNG Police blame bride price for violence in marri…". 2013-01-26. Archived from the original on 2013-01-26. - "Bride price practices in Africa". BBC News. 2015-08-06. - Bawah, Ayaga Agula; Akweongo, Patricia; Simmons, Ruth; Phillips, James F. (1999). "Women's fears and men's anxieties: the impact of family planning on gender relations in Northern Ghana". Studies in Family Planning. 30 (1): 54–66. doi:10.1111/j.1728-4465.1999.00054.x. hdl:2027.42/73927. PMID 10216896. Pdf. - "OHCHR | Supplementary Convention on the Abolition of Slavery". - "Mass sterilization scandal shocks Peru". BBC News. July 24, 2002. Archived from the original on June 30, 2006. Retrieved April 30, 2006. - "Czech regret over sterilisation". BBC News. 2009-11-24. Retrieved 2015-02-17. - "PopDev" (PDF). popdev.hampshire.edu. - Denysenko, Marina (2007-03-12). "Europe | Sterilised Roma accuse Czechs". BBC News. Retrieved 2015-02-17. - "Kocáb draws attention to the forced sterilization of Romani women; most recent incident allegedly took place in 2007". Romea.cz. 2009-07-21. Retrieved 2015-02-17. - Archived 2014-03-01 at the Wayback Machine - Iredale, Rachel (2000). "Eugenics And Its Relevance To Contemporary Health Care". Nursing Ethics. 7 (3): 205–14. doi:10.1177/096973300000700303. PMID 10986944. - Leonard, Thomas C. (2005). "Retrospectives: Eugenics and Economics in the Progressive Era" (PDF). Journal of Economic Perspectives. 19 (4): 207–224. doi:10.1257/089533005775196642. Archived (PDF) from the original on 2016-12-18. - Canadian Broadcasting Corporation (CBC) (November 9, 1999). "Alberta Apologizes for Forced Sterilization". CBC News. Archived from the original on November 23, 2012. Retrieved June 19, 2013. - Victims of sterilization finally get day in court. Lawrence Journal-World. December 23, 1996. - "UN rights office urges el Salvador to reform 'draconian' abortion laws". 2017-12-15. - "U.N. Calls on el Salvador to stop jailing women for abortion". Reuters. 2017-11-18. - Watson, Katy (2015-04-28). "The mothers being criminalised in el Salvador". BBC News. - "Gen 38:8-10 NIV - Then Judah said to Onan, "Sleep with - Bible Gateway". Bible Gateway. Retrieved 2016-02-14. - "Contraception and Sterilization". Archived from the original on 2013-11-24. - "Fr. Hardon Archives - The Catholic Tradition on the Morality of Contraception". - "El Salvador: Rape survivor sentenced to 30 years in jail under extreme anti-abortion law". www.amnesty.org. - "Jailed for a miscarriage". BBC News. - CNN, Kimberly Hutcherson. "A brief history of anti-abortion violence". CNN. Retrieved 2019-07-10. - Jelen, Ted G (1998). "Abortion". 
Encyclopedia of Religion and Society. Walnut Creek, California: AltaMira Press. - Smith, G. Davidson (Tim) (1998). "Single Issue Terrorism Commentary". Canadian Security Intelligence Service. Archived from the original on July 14, 2006. Retrieved June 9, 2006. - Al-Khattar, Aref M. (2003). Religion and terrorism: an interfaith perspective. Greenwood Publishing Group. pp. 58–59. ISBN 9780275969233. - Hoffman, Bruce (2006). Inside terrorism. Columbia University Press. p. 116. ISBN 9780231510462. - Harmon, Christopher C. (2000). Terrorism today. Psychology Press. p. 42. ISBN 9780714649986. - Juergensmeyer, Mark (2003). Terror in the mind of God: the global rise of religious violence. University of California Press. p. 4,19. ISBN 9780520240117. - Bryant, Clifton D. (2003). Handbook of death & dying, Volume 1. SAGE. p. 243. ISBN 9780761925149. - McAfee, Ward M. (2010). The Dialogue Comes of Age: Christian Encounters with Other Traditions. Fortress Press. p. 90. ISBN 9781451411157. - Flint, Colin Robert (2006). Introduction to geopolitics. Psychology Press. p. 172. ISBN 9780203503768. - Peoples, James; Bailey, Garrick (2008). Humanity: an introduction to cultural anthropology. Cengage. p. 371. ISBN 978-0495508748. - Dolnik, Adam; Gunaratna, Rohan (2006). "On the Nature of Religious Terrorism". The politics of terrorism: a survey. Taylor & Francis. - The terrorism ahead: confronting transnational violence in the twenty-first century, Paul J. Smith, p 94 - Religion and Politics in America: The Rise of Christian Evangelists, Muhammad Arif Zakaullah, p 109 - Terrorism: An Investigator's Handbook, William E. Dyson, p 43 - Encyclopedia of terrorism, Cindy C. Combs, Martin W. Slann, p 13 - Armed for Life: The Army of God and Anti-Abortion Terror in the United States, Jennifer Jefferis, p 40 - "Threats of violence against US abortion clinics almost doubled in 2017, industry group says". The Independent. 2018-05-07. Retrieved 2019-06-19. - "THE CHOICE "THAT DARE NOT SPEAK ITS NAME"". Nrlc.org. 2003. Archived from the original on 2013-08-04. Retrieved 2017-08-19. - The League of Women Voters on Reproductive Choice - UNFPA Population Issues: Reproductive Rights - American Civil Liberties Union - Women's Global Network for Reproductive Rights Network that links grassroots organizations that are active within this topic - Further readings - Gebhard, Julia, Trimiño, Diana. Reproductive Rights, International Regulation, Max Planck Encyclopedia of Public International Law - Reproductive rights cases before the European Court of Human Rights - The Environmental Politics of Population and Overpopulation A University of California, Berkeley summary about the role of reproductive rights in the current political and ecological context - Introductory note by Djamchid Momtaz, procedural history note and audiovisual material on the Proclamation of Teheran in the Historic Archives of the United Nations Audiovisual Library of International Law - Murray, Melissa and Kristin Luker. Cases on Reproductive Rights and Justice. United States: Foundation Press, 2015. ISBN 978-1609304348.
Asteroid Apophis, discovered in 2004, will make its closest approach to Earth on April 13, 2029. It will come within about 20,000 miles of Earth, closer than the many geostationary satellites orbiting the planet. Apophis, which NASA estimated to be about 1,100 feet across, was initially thought to pose a risk to Earth in 2068, but its orbit has since been projected more precisely by researchers and it now poses no risk to the planet for at least a century.

At the end of 2022, a test following on from moon-bounce experiments in January and October took place in which scientists bounced long-wavelength radio signals off a near-Earth asteroid. The results will not be known for some time, but amateur scientists from around the world reported receiving the outgoing transmission, which will aid the research. At the High-frequency Active Auroral Research Program (HAARP) research facility at Gakona, Alaska, a powerful transmitter sent long-wavelength radio signals into space with the purpose of bouncing them off an asteroid to learn about its interior. The University of New Mexico Long Wavelength Array near Socorro, New Mexico, and the Owens Valley Radio Observatory Long Wavelength Array near Bishop, California, are also involved in the experiment.

The information from this experiment could aid efforts to defend Earth from larger asteroids that could cause significant damage, like Apophis. Several programs exist to quickly detect asteroids, determine their orbit and shape, and image their surface, either with optical telescopes or the planetary radar of the Deep Space Network, NASA's network of large and highly sensitive radio antennas in California, Spain and Australia. Those radar-imaging programs don't provide information about an asteroid's interior, however. They use signals of short wavelengths, which bounce off the surface and provide high-quality external images but don't penetrate an object. Long-wavelength radio signals can reveal the interior of objects. HAARP, using three powerful generators, began transmitting chirping signals of long wavelength and continued sending them uninterrupted until the scheduled end of the 12-hour experiment. Knowing the distribution of mass within a dangerous asteroid could help scientists target devices designed to deflect it away from Earth.

Jessica Matthews, HAARP's program manager, explained: "Our collaboration with JPL [NASA's Jet Propulsion Laboratory] is not only an opportunity to do great science but also involves the global community of citizen scientists. So far we have received over 300 reception reports from the amateur radio and radio astronomy communities from six continents who confirmed the HAARP transmission." The University of Alaska Fairbanks operates HAARP under an agreement with the Air Force, which developed and owned HAARP but transferred the research instruments to UAF in August 2015.
Life expectancy is a statistical measure of the average time an organism is expected to live, based on the year of its birth, its current age and other demographic factors including gender. The most commonly used measure of life expectancy is life expectancy at birth (LEB), which can be defined in two ways. Cohort LEB is the mean length of life of an actual birth cohort (all individuals born in a given year) and can be computed only for cohorts born many decades ago, so that all their members have died. Period LEB is the mean length of life of a hypothetical cohort assumed to be exposed, from birth through death, to the mortality rates observed in a given year. National LEB figures reported by national statistical agencies and international organizations are in fact estimates of period LEB.

In the Bronze Age and the Iron Age, LEB was 26 years; the 2010 world LEB was 67.2 years. In recent years, LEB in Swaziland is about 49 years, while in Japan it is about 83 years. The combination of high infant mortality and deaths in young adulthood from accidents, epidemics, plagues, wars, and childbirth, particularly before modern medicine was widely available, significantly lowers LEB. But for those who survived the early hazards, a life expectancy of around 70 years would not have been uncommon. For example, a society with a LEB of 40 may have few people dying at precisely 40: most will die before 30 or after 55. In populations with high infant mortality rates, LEB is highly sensitive to the rate of death in the first few years of life. Because of this sensitivity to infant mortality, LEB can be grossly misinterpreted, leading one to believe that a population with a low LEB will necessarily have a small proportion of older people. For example, in a hypothetical stationary population in which half the population dies before the age of five but everybody else dies at exactly 70 years old, LEB will be about 36, yet about 25% of the population will be between the ages of 50 and 70. Another measure, such as life expectancy at age 5 ($e_5$), can be used to exclude the effect of infant mortality and provide a simple measure of overall mortality rates other than in early childhood; in the hypothetical population above, life expectancy at age 5 would be another 65 years. Aggregate population measures, such as the proportion of the population in various age groups, should be used alongside individual-based measures like formal life expectancy when analyzing population structure and dynamics.

Mathematically, life expectancy is the mean number of years of life remaining at a given age, assuming age-specific mortality rates remain at their most recently measured levels. It is denoted by $e_x$,[a] which means the mean number of subsequent years of life for someone now aged $x$, according to a particular mortality experience.

Longevity, maximum lifespan, and life expectancy are not synonyms. Life expectancy is defined statistically as the mean number of years remaining for an individual or a group of people at a given age. Longevity refers to the characteristics of the relatively long life span of some members of a population. Maximum lifespan is the age at death for the longest-lived individual of a species. Moreover, because life expectancy is an average, a particular person may die many years before or many years after the "expected" survival. The term "maximum life span" has a quite different meaning and is more related to longevity. Life expectancy is also used in plant and animal ecology, where it is calculated from life tables (also known as actuarial tables).
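The hypothetical stationary population described above can be made concrete with a few lines of arithmetic. The sketch below is a minimal, illustrative Python example, not a statement from the source: it assumes the deaths before age five occur at an average age of 2.5 years (a number chosen only for the example). With that assumption it reproduces the LEB of about 36, the life expectancy at age 5 of another 65 years, and the sizeable share of the population aged 50 to 70.

```python
# Minimal sketch of the hypothetical stationary population described above.
# Assumption (not stated in the text): deaths before age five occur at an
# average age of 2.5 years; everyone else dies at exactly 70.

p_child_death = 0.5      # probability of dying before age five
age_child_death = 2.5    # assumed mean age at death for those early deaths
age_adult_death = 70.0   # everyone else dies at exactly this age

# Period life expectancy at birth (LEB): mean age at death.
leb = p_child_death * age_child_death + (1 - p_child_death) * age_adult_death

# Life expectancy at age 5 (e5): remaining years for those who survived to 5.
e5 = age_adult_death - 5

# Share of the stationary population aged 50-70: person-years lived in that
# age band divided by total person-years lived per birth.
share_50_70 = (1 - p_child_death) * (70 - 50) / leb

print(f"LEB ≈ {leb:.1f} years")                       # ≈ 36, though nobody dies near 36
print(f"e5  = {e5:.0f} additional years")              # 65 additional years
print(f"Population aged 50-70 ≈ {share_50_70:.0%}")    # roughly a quarter
```

The point of the exercise is the one made in the text: a low LEB driven by infant mortality says little about how long the survivors live or how many older people the population contains.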
The term life expectancy may also be used in the context of manufactured objects, but the related term shelf life is used for consumer products, and the terms "mean time to breakdown" (MTTB) and "mean time between failures" (MTBF) are used in engineering.

Human beings are expected to live on average 30–40 years in Swaziland and 82.6 years in Japan, but the latter's recorded life expectancy may have been very slightly increased by counting many infant deaths as stillborn. An analysis published in 2011 in The Lancet attributes Japanese life expectancy to equal opportunities and public health as well as diet. The oldest confirmed recorded age for any human is 122 years, reached by Jeanne Calment, who lived from 1875 to 1997. This is referred to as the "maximum life span", which is the upper boundary of life, the maximum number of years any human is known to have lived. A theoretical study suggests that the maximum life expectancy at birth is limited by the human life characteristic value δ, which is around 104 years. According to a study by biologists Bryan G. Hughes and Siegfried Hekimi, there is no evidence for a limit on human lifespan.

Variation over time

The following information is derived from the 1961 Encyclopædia Britannica and other sources, some with questionable accuracy. Unless otherwise stated, it represents estimates of the life expectancies of the world population as a whole. In many instances, life expectancy varied considerably according to class and gender. Life expectancy at birth takes account of infant mortality but not prenatal mortality.

| Era | Life expectancy at birth (years) | Life expectancy at older age |
|---|---|---|
| Paleolithic | 33 | Based on Neolithic and Bronze Age data, the total life expectancy at 15 would not exceed 34 years. Based on the data from modern hunter-gatherer populations, it is estimated that at 15, life expectancy was an additional 39 years (total 54), with a 0.60 probability of reaching 15. |
| Neolithic | 20 to 33 | Based on Early Neolithic data, total life expectancy at 15 would be 28–33 years |
| Bronze Age and Iron Age | 26 | Based on Early and Middle Bronze Age data, total life expectancy at 15 would be 28–36 years |
| Classical Greece | 25 to 28 | Based on Athens Agora and Corinth data, total life expectancy at 15 would be 37–41 years |
| Classical Rome | 25 | If a child survived to age 25, life expectancy was an additional 28.6 years (total age 53.6 years). |
| Medieval Islamic world | 35+ | Average lifespan of scholars was 59–84.3 years. |
| Pre-Columbian Southern United States | 25–30 | |
| Late medieval English peerage | 30 | At age 21, life expectancy was an additional 43 years (total age 64). |
| Early modern England | 33–40 | 34 years for males in the 18th century. |
| Pre-Champlain Canadian Maritimes | 60 | Samuel de Champlain wrote that in his visits to Mi'kmaq and Huron communities, he met people over 100 years old. Daniel Paul attributes the incredible lifespan in the region to low stress and a healthy diet of lean meats, diverse vegetables and legumes. |
| 18th-century Prussia | 24.7 | For males. |
| 18th-century France | 27.5–30 | For males. |
| 18th-century Qing China | 39.6 | For males. |
| 18th-century Edo Japan | 41.1 | For males. |
| Early 19th-century England | 40 | |
| 1900 world average | 31 | |
| 1950 world average | 48 | |
| 2014 world average | 71.5 | |

Life expectancy increases with age as the individual survives the higher mortality rates associated with childhood. For instance, the table above lists the life expectancy at birth among 13th-century English nobles at 30.
Having survived until the age of 21, a male member of the English aristocracy in this period could expect to live:

- 1200–1300: to age 64
- 1300–1400: to age 45 (because of the bubonic plague)
- 1400–1500: to age 69
- 1500–1550: to age 71

17th-century English life expectancy was only about 35 years, largely because infant and child mortality remained high. Life expectancy was under 25 years in the early Colony of Virginia, and in seventeenth-century New England, about 40 per cent died before reaching adulthood. During the Industrial Revolution, the life expectancy of children increased dramatically. The under-5 mortality rate in London decreased from 745 per 1,000 births in 1730–1749 to 318 per 1,000 in 1810–1829. Public health measures are credited with much of the recent increase in life expectancy. During the 20th century, despite a brief drop due to the 1918 flu pandemic, the average lifespan in the United States increased by more than 30 years, of which 25 years can be attributed to advances in public health.

There are great variations in life expectancy between different parts of the world, mostly caused by differences in public health, medical care, and diet. The impact of AIDS on life expectancy is particularly notable in many African countries. According to projections made by the United Nations (UN) in 2002, the life expectancy at birth for 2010–2015 (if HIV/AIDS did not exist) would have been:

- Botswana: 70.7 years instead of 31.6 years
- South Africa: 69.9 years instead of 41.5 years
- Zimbabwe: 70.5 years instead of 31.8 years

Actual life expectancy in Botswana declined from 65 in 1990 to 49 in 2000 before increasing to 66 in 2011. In South Africa, life expectancy was 63 in 1990, 57 in 2000, and 58 in 2011. And in Zimbabwe, life expectancy was 60 in 1990, 43 in 2000, and 54 in 2011.

In the United States, African-American people have shorter life expectancies than their European-American counterparts. For example, white Americans born in 2010 are expected to live until age 78.9, but black Americans only until age 75.1. This 3.8-year gap, however, is the lowest it has been since 1975 at the latest. The greatest difference was 7.1 years in 1993. In contrast, Asian-American women live the longest of all ethnic groups in the United States, with a life expectancy of 85.8 years. The life expectancy of Hispanic Americans is 81.2 years.

Cities also experience a wide range of life expectancy based on neighborhood breakdowns. This is largely due to economic clustering and poverty conditions that tend to cluster geographically. Multi-generational poverty found in struggling neighborhoods also contributes. In United States cities such as Cincinnati, the life expectancy gap between low-income and high-income neighborhoods reaches 20 years.

Economic circumstances also affect life expectancy. For example, in the United Kingdom, life expectancy in the wealthiest areas is several years higher than in the poorest areas. This may reflect factors such as diet and lifestyle, as well as access to medical care. It may also reflect a selective effect: people with chronic life-threatening illnesses are less likely to become wealthy or to reside in affluent areas. In Glasgow, the disparity is amongst the highest in the world: life expectancy for males in the heavily deprived Calton area stands at 54, which is 28 years less than in the affluent area of Lenzie, which is only 8 km away. A 2013 study found a pronounced relationship between economic inequality and life expectancy.
However, a study by José A. Tapia Granados and Ana Diez Roux at the University of Michigan found that life expectancy actually increased during the Great Depression, and during recessions and depressions in general. The authors suggest that when people are working especially hard during good economic times, they undergo more stress, greater exposure to pollution, and a higher likelihood of injury, among other longevity-limiting factors.

Life expectancy is also likely to be affected by exposure to high levels of highway air pollution or industrial air pollution. This is one way that occupation can have a major effect on life expectancy. Coal miners (and, in prior generations, asbestos cutters) often have lower life expectancies than average. Other factors affecting an individual's life expectancy are genetic disorders, drug use, tobacco smoking, excessive alcohol consumption, obesity, access to health care, diet and exercise.

In the uterus, male fetuses have a higher mortality rate (babies are conceived in a ratio estimated to be from 107 to 170 males to 100 females, but the ratio at birth in the United States is only 105 males to 100 females). Among the smallest premature babies (those under 2 pounds or 900 g), females again have a higher survival rate. At the other extreme, about 90% of individuals aged 110 are female. The difference in life expectancy between men and women in the United States dropped from 7.8 years in 1979 to 5.3 years in 2005, with women expected to live to age 80.1 in 2005. Data from the UK also show the gap in life expectancy between men and women decreasing in later life. This may be attributable to the effects of infant mortality and young adult death rates.

In the past, mortality rates for females in child-bearing age groups were higher than for males at the same age. This is no longer the case, and female human life expectancy is considerably higher than that of males. The reasons for this are not entirely certain. Traditional arguments tend to favor sociological and environmental factors: historically, men have generally consumed more tobacco, alcohol and drugs than women in most societies, and are more likely to die from many associated diseases such as lung cancer, tuberculosis and cirrhosis of the liver. Men are also more likely to die from injuries, whether unintentional (such as occupational, war or car accidents) or intentional (suicide). Men are also more likely to die from most of the leading causes of death (some already stated above) than women. Some of these in the United States include: cancer of the respiratory system, motor vehicle accidents, suicide, cirrhosis of the liver, emphysema, prostate cancer, and coronary heart disease. These far outweigh the female mortality rate from breast cancer and cervical cancer.

Some argue that shorter male life expectancy is merely another manifestation of the general rule, seen in all mammal species, that larger individuals within a species tend, on average, to have shorter lives. Another proposed biological explanation is that women have more resistance to infections and degenerative diseases.

In her extensive review of the existing literature, Kalben concluded that the fact that women live longer than men was observed at least as far back as 1750 and that, with relatively equal treatment, today males in all parts of the world experience greater mortality than females. Kalben's study, however, was restricted to data in Western Europe alone, where the demographic transition occurred relatively early.
In countries such as Hungary, Bulgaria, India and China, males continued to outlive females into the twentieth century. In the United States, of 72 selected causes of death, only 6 yielded greater female than male age-adjusted death rates in 1998. With the exception of birds, for almost all of the animal species studied, males have higher mortality than females. Evidence suggests that the sex mortality differential in people is due to both biological/genetic and environmental/behavioral risk and protective factors.

There is a recent suggestion that mitochondrial mutations that shorten lifespan continue to be expressed in males (but less so in females) because mitochondria are inherited only through the mother. By contrast, natural selection weeds out mitochondria that reduce female survival; therefore such mitochondria are less likely to be passed on to the next generation. This suggests one reason why females tend to live longer than males, though the authors present it as only a partial explanation.

In developed countries, starting around 1880, death rates decreased faster among women, leading to differences in mortality rates between males and females. Before 1880, death rates were the same. In people born after 1900, the death rate of 50- to 70-year-old men was double that of women of the same age. Cardiovascular disease was the main cause of the higher death rates among men. Men may be more vulnerable to cardiovascular disease than women, but this susceptibility was evident only after deaths from other causes, such as infections, started to decline.

In developed countries, the number of centenarians is increasing at approximately 5.5% per year, which means doubling the centenarian population every 13 years, pushing it from some 455,000 in 2009 to 4.1 million in 2050. Japan is the country with the highest ratio of centenarians (347 for every 1 million inhabitants in September 2010). Shimane prefecture had an estimated 743 centenarians per million inhabitants. In the United States, the number of centenarians grew from 32,194 in 1980 to 71,944 in November 2010 (232 centenarians per million inhabitants).

The greater mortality of people with mental disorders may be due to death from injury, from co-morbid conditions, or from medication side effects. Psychiatric medicines can increase the risk of developing diabetes and can cause agranulocytosis; they also affect the gastrointestinal tract, and the mentally ill have roughly four times the risk of gastrointestinal disease. People with diabetes, who make up 9.3% of the U.S. population, have a life expectancy that is reduced by roughly ten to twenty years. Other demographics that tend to have a lower life expectancy than average include transplant recipients and the obese.

Evolution and aging rate

Various species of plants and animals, including humans, have different lifespans. Evolutionary theory states that organisms that, by virtue of their defenses or lifestyle, live for long periods and avoid accidents, disease, predation, etc. are likely to have genes that code for slow aging, which often translates to good cellular repair. One theory is that if predation or accidental deaths prevent most individuals from living to an old age, there will be less natural selection to increase the intrinsic life span. That prediction was supported in a classic study of opossums by Austad; however, the opposite relationship was found in an equally prominent study of guppies by Reznick.
One prominent and very popular theory states that lifespan can be lengthened by a tight budget for food energy, an approach called caloric restriction. Caloric restriction observed in many animals (most notably mice and rats) shows a near doubling of life span from a very limited caloric intake. Support for the theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy, which has been proposed as a key to why animals like giant tortoises can live so long. Studies of humans with life spans of at least 100 years have shown a link to decreased thyroid activity, resulting in a lowered metabolic rate. In a broad survey of zoo animals, no relationship was found between the fertility of the animal and its life span.

The starting point for calculating life expectancy is the age-specific death rates of the population members. If a large amount of data is available, a statistical population can be created that allows the age-specific death rates to be taken simply as the mortality rates actually experienced at each age (the number of deaths divided by the number of years "exposed to risk" in each data cell). However, it is customary to apply smoothing to iron out, as much as possible, the random statistical fluctuations from one year of age to the next. In the past, a very simple model used for this purpose was the Gompertz function, but more sophisticated methods are now used. These are the most common methods now used for that purpose:

- fitting a mathematical formula, such as an extension of the Gompertz function, to the data;
- for relatively small amounts of data, looking at an established mortality table previously derived for a larger population and making a simple adjustment to it (such as multiplying by a constant factor) to fit the data;
- for large amounts of data, looking at the mortality rates actually experienced at each age and applying smoothing (for example, with cubic splines).

While the data required are easily identified in the case of humans, the computation of life expectancy of industrial products and wild animals involves more indirect techniques. The life expectancy and demography of wild animals are often estimated by capturing, marking, and recapturing them. The life of a product, more often termed shelf life, is also computed using similar methods. In the case of long-lived components, such as those used in critical applications (for example, in aircraft), methods like accelerated aging are used to model the life expectancy of a component.

The age-specific death rates are calculated separately for separate groups of data that are believed to have different mortality rates (such as males and females, and perhaps smokers and non-smokers if data are available separately for those groups) and are then used to calculate a life table from which one can calculate the probability of surviving to each age. In actuarial notation, the probability of surviving from age $x$ to age $x+t$ is denoted ${}_t p_x$ and the probability of dying during age $x$ (that is, between ages $x$ and $x+1$) is denoted $q_x$. For example, if 10% of a group of people alive at their 90th birthday die before their 91st birthday, the age-specific death probability at 90 would be 10%. That is a probability, not a mortality rate.

The expected future lifetime of a life aged $x$ in whole years (the curtate expected lifetime of $(x)$) is denoted by the symbol $e_x$.[a] It is the conditional expected future lifetime (in whole years), assuming survival to age $x$.
If $K(x)$ denotes the curtate future lifetime at age $x$, then

$$e_x = \sum_{k=1}^{\infty} k \, \Pr[K(x) = k].$$

Substituting $\Pr[K(x) = k] = {}_k p_x \, q_{x+k}$ in the sum and simplifying gives the equivalent formula

$$e_x = \sum_{k=1}^{\infty} {}_k p_x.$$

If the assumption is made that, on average, people live half a year in the year of death, the complete expectation of future lifetime at age $x$ is $e_x + \tfrac{1}{2}$.

Life expectancy is by definition an arithmetic mean. It can also be calculated by integrating the survival curve from 0 to positive infinity (or equivalently to the maximum lifespan, sometimes called 'omega'). For an extinct or completed cohort (all people born in year 1850, for example), it can of course simply be calculated by averaging the ages at death. For cohorts with some survivors, it is estimated by using mortality experience in recent years; these estimates are called period cohort life expectancies. The statistic is usually based on past mortality experience and assumes that the same age-specific mortality rates will continue into the future. Thus, such life expectancy figures need to be adjusted for temporal trends before calculating how long a currently living individual of a particular age is expected to live. Period life expectancy remains a commonly used statistic to summarize the current health status of a population. However, for some purposes, such as pension calculations, it is usual to adjust the life table used by assuming that age-specific death rates will continue to decrease over the years, as they have usually done in the past. That is often done by simply extrapolating past trends, but some models, such as the Lee–Carter model, exist to account for the evolution of mortality.

As discussed above, on an individual basis, a number of factors correlate with a longer life. Factors that are associated with variations in life expectancy include family history, marital status, economic status, physique, exercise, diet, drug use including smoking and alcohol consumption, disposition, education, environment, sleep, climate, and health care.

Healthy life expectancy

In order to assess the quality of these additional years of life, 'healthy life expectancy' has been calculated for the last 30 years. Since 2001, the World Health Organization has published statistics called Healthy life expectancy (HALE), defined as the average number of years that a person can expect to live in "full health", excluding the years lived in less than full health due to disease and/or injury. Since 2004, Eurostat has published annual statistics called Healthy Life Years (HLY) based on reported activity limitations. The United States uses similar indicators in the framework of the national health promotion and disease prevention plan "Healthy People 2010". More and more countries are using health expectancy indicators to monitor the health of their population.

Forecasting life expectancy and mortality forms an important subdivision of demography. Future trends in life expectancy have huge implications for old-age support programs like U.S. Social Security and pensions, since the cash flow in these systems depends on the number of recipients who are still living (along with the rate of return on the investments or the tax rate in pay-as-you-go systems). With longer life expectancies, the systems see increased cash outflow; if the systems underestimate increases in life expectancy, they will be unprepared for the large payments that will occur as humans live longer and longer.
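Before turning to forecasting, the curtate expectation defined above lends itself to a short computational check. The sketch below is a minimal Python example with invented $q_x$ values, not real mortality data: it computes $e_x = \sum_{k\ge 1} {}_k p_x$ through the equivalent backward recursion $e_x = p_x(1 + e_{x+1})$ and then applies the half-year adjustment for the complete expectation. The smoothing step discussed earlier is deliberately ignored.

```python
# Minimal sketch of the life-table calculation described above: from
# age-specific death probabilities q_x, derive the curtate expectation of
# life e_x = sum over k >= 1 of kp_x, via the recursion e_x = p_x * (1 + e_{x+1}).
# The q_x values below are invented for illustration, not real mortality data.

def life_expectancies(qx):
    """qx[i] = probability of dying between age i and i+1; last entry must be 1.0."""
    n = len(qx)
    ex = [0.0] * n
    # Work backwards from the final age, where e_x = 0 by construction.
    for age in range(n - 2, -1, -1):
        px = 1.0 - qx[age]
        ex[age] = px * (1.0 + ex[age + 1])
    return ex

# Toy table: five "ages" with rising mortality, closed out at the last age.
qx = [0.01, 0.02, 0.05, 0.20, 1.00]
ex = life_expectancies(qx)

for age, (q, e) in enumerate(zip(qx, ex)):
    # Adding half a year approximates the complete expectation of life.
    print(f"age {age}: q_x={q:.2f}  curtate e_x={e:.2f}  complete ≈ {e + 0.5:.2f}")
```

In a real life table the $q_x$ values would come from observed deaths and years of exposure (smoothed as described above), and the same recursion would be run over a full range of ages.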
Life expectancy forecasting is usually based on two different approaches:

- Forecasting the life expectancy directly, generally using ARIMA or other time-series extrapolation procedures. This has the advantage of simplicity, but it cannot account for changes in mortality at specific ages, and the forecast number cannot be used to derive other life table results. Analyses and forecasts using this approach can be done with any common statistical/mathematical software package, like EViews, R, SAS, Stata, Matlab, or SPSS.
- Forecasting age-specific death rates and computing the life expectancy from the results with life table methods. This is usually more complex than simply forecasting life expectancy because the analyst must deal with correlated age-specific mortality rates, but it seems to be more robust than simple one-dimensional time-series approaches. It also yields a set of age-specific rates that may be used to derive other measures, such as survival curves or life expectancies at different ages. The most important approach within this group is the Lee–Carter model, which uses the singular value decomposition on a set of transformed age-specific mortality rates to reduce their dimensionality to a single time series, forecasts that time series, and then recovers a full set of age-specific mortality rates from the forecasted values. Software includes Professor Rob J. Hyndman's R package called `demography` and UC Berkeley's LCFIT system.

Life expectancy is also used in describing the physical quality of life of an area, or for an individual when the value of a life settlement (a life insurance policy sold for a cash asset) is determined. Disparities in life expectancy are often cited as demonstrating the need for better medical care or increased social support. A strongly associated indirect measure is income inequality. For the top 21 industrialized countries, if each person is counted equally, life expectancy is lower in more unequal countries (r = −0.907). There is a similar relationship among states in the US (r = −0.620).

Life expectancy vs. life span

Life expectancy differs from maximum life span. Life expectancy is an average for all people in the population, including those who die shortly after birth, those who die in early adulthood (e.g. in childbirth or war), and those who live unimpeded until old age. Lifespan is an individual-specific concept; maximum lifespan is therefore an upper bound rather than an average. However, these two terms are often confused with each other, to the point that when people hear "life expectancy was 35 years" they often interpret this as meaning that people of that time or place had short maximum life spans. One such example can be seen in the In Search of... episode "The Man Who Would Not Die" (about the Count of St. Germain), where it is stated: "Evidence recently discovered in the British Museum indicates that St. Germain may have well been the long lost third son of Rákóczi born in Transylvania in 1694. If he died in Germany in 1784, he lived 90 years. The average life expectancy in the 18th century was 35 years. Fifty was a ripe old age. Ninety... was forever." In reality, there are other examples of people living significantly longer than the life expectancy of their time period, such as Socrates, Saint Anthony, Michelangelo, and Benjamin Franklin. It can be argued that it is better to compare life expectancy of the period after childhood to get a better handle on life span.
Life expectancy can change dramatically after childhood, even in preindustrial times, as is demonstrated by the Roman Life Expectancy table, which estimates life expectancy to be 25 years at birth but 53 years upon reaching age 25. Studies like Plymouth Plantation: "Dead at Forty" and Life Expectancy by Age, 1850–2004 similarly show a dramatic increase in life expectancy once adulthood was reached.

- Calorie restriction
- DNA damage theory of aging
- Glasgow effect
- Healthcare inequality
- Indefinite lifespan
- Life table
- List of countries by life expectancy
- List of longest-living organisms
- Maximum life span
- Medieval demography
- Mortality rate
- Population Pyramid
- Lindy Effect

a. In standard actuarial notation, $e_x$ refers to the expected future lifetime of $(x)$ in whole years, while $\overset{\circ}{e}_x$ ($e_x$ with a circle above the $e$) denotes the complete expected future lifetime of $(x)$, including the fraction.

- S. Shryok, J. S. Siegel et al. The Methods and Materials of Demography. Washington, DC, US Bureau of the Census, 1973
- Laden, Greg (2011-05-01). "Falsehood: "If this was the Stone Age, I'd be dead by now"". ScienceBlogs. Retrieved 2014-08-31.
- Arthur O'Sullivan; Steven M. Sheffrin (2003). Economics: Principles in Action. Pearson Prentice Hall. p. 473. ISBN 0-13-063085-3.
- John S. Millar; Richard M. Zammuto (1983). "Life Histories of Mammals: An Analysis of Life Tables". Ecology. Ecological Society of America. 64 (4): 631–635. doi:10.2307/1937181. JSTOR 1937181.
- Eliahu Zahavi, Vladimir Torbilo & Solomon Press (1996) Fatigue Design: Life Expectancy of Machine Parts. CRC Press. ISBN 0-8493-8970-4.
- "The World Factbook — Central Intelligence Agency".
- Ansley J. Coale; Judith Banister (December 1996). "Five decades of missing females in China". Proceedings of the American Philosophical Society. 140 (4): 421–450. JSTOR 987286. Also printed as Coale AJ, Banister J (Aug 1994). "Five decades of missing females in China". Demography. 31: 459–79. doi:10.2307/2061752. PMID 7828766.
- Boseley, Sarah (August 30, 2011). "Japan's life expectancy 'down to equality and public health measures'". The Guardian. London. Retrieved August 31, 2011. Japan has the highest life expectancy in the world but the reasons, says an analysis, are as much to do with equality and public health measures as diet. According to a paper in a Lancet series on healthcare in Japan.
- Ikeda, Nayu; Saito, Eiko; Kondo, Naoki; Inoue, Manami; Ikeda, Shunya; Satoh, Toshihiko; Wada, Koji; Stickley, Andrew; Katanoda, Kota; Mizoue, Tetsuya; Noda, Mitsuhiko; Iso, Hiroyasu; Fujino, Yoshihisa; Sobue, Tomotaka; Tsugane, Shoichiro; Naghavi, Mohsen; Ezzati, Majid; Shibuya, Kenji (August 2011). "What has made the population of Japan healthy?". The Lancet. 378 (9796): 1094–105. doi:10.1016/S0140-6736(11)61055-6. PMID 21885105. Reduction in health inequalities with improved average population health was partly attributable to equal educational opportunities and financial access to care.
- Santrock, John (2007). Life Expectancy. A Topical Approach to: Life-Span Development (pp. 128–132). New York, New York: The McGraw-Hill Companies, Inc.
- X. Liu (2015). "Life equations for the senescence process". Biochemistry and Biophysics Reports. 4: 228–233. doi:10.1016/j.bbrep.2015.09.020.
- "No detectable limit to how long people can live" (Press release). Science Daily. June 28, 2017. Retrieved July 4, 2017.
- Hughes, Bryan G.; Hekimi, Siegfried (June 29, 2017). "Many possible maximum lifespan trajectories". Nature.
546: E8–E9. doi:10.1038/nature22786. Retrieved July 4, 2017. - J. Lawrence Angel (May 1969). "The bases of paleodemography". American Journal of Physical Anthropology. 30 (3): 427–437. doi:10.1002/ajpa.1330300314. - Hillard Kaplan; Kim Hill; Jane Lancaster; A. Magdalena Hurtado (2000). "A Theory of Human Life History Evolution: Diet, Intelligence and Longevity" (PDF). Evolutionary Anthropology. 9 (4): 156–185. doi:10.1002/1520-6505(2000)9:4<156::AID-EVAN5>3.0.CO;2-7. Retrieved 12 September 2010. - Galor, Oded; Moav, Omer (2007). "The Neolithic Revolution and Contemporary Variations in Life Expectancy" (PDF). Brown University Working Paper. Retrieved September 12, 2010. - Angel Lawrence J. (1984), "Health as a crucial factor in the changes from hunting to developed farming in the eastern Mediterranean", Proceedings of meeting on Paleopathology at the Origins of Agriculture: 51–73 - Galor, Oded; Moav, Omer (2005). "Natural Selection and the Evolution of Life Expectancy" (PDF). Brown University Working Paper. Retrieved November 4, 2010. - Mogens Herman Hansen, The Shotgun Method, p. 55. - "Mortality". Britannica.com. Retrieved November 4, 2010. - Frier, Bruce. (2000). The Cambridge Ancient History XI: The High Empire, A.D. 70–192. Cambridge University Press. p. 789. ISBN 0-521-04493-6. - Conrad, Lawrence I. (2006). The Western Medical Tradition. Cambridge University Press. p. 137. ISBN 0-521-47564-3. - Jaques, R. Kevin (2006). Authority, Conflict, and the Transmission of Diversity in Medieval Islamic Law. Brill Publishers. p. 188. ISBN 9789004147454. - Ahmad, Ahmad Atif (2007), "Authority, Conflict, and the Transmission of Diversity in Medieval Islamic Law by R. Kevin Jaques", Journal of Islamic Studies, 18 (2): 246–248 , doi:10.1093/jis/etm005 - Bulliet, Richard W. (1983), "The Age Structure of Medieval Islamic Education", Studia Islamica, 57: 105–117 , doi:10.2307/1595484 - Shatzmiller, Maya (1994), Labour in the Medieval Islamic World, Brill Publishers, p. 66, ISBN 9004098968 - "Pre-European Exploration, Prehistory through 1540". Encyclopediaofarkansas.net. October 5, 2010. Retrieved November 4, 2010. - "Time traveller's guide to Medieval Britain". Channel4.com. Retrieved November 4, 2010. - "A millennium of health improvement". BBC News. December 27, 1998. Retrieved November 4, 2010. - "Expectations of Life" by H.O. Lancaster (page 8) - Pomeranz, Kenneth (2000), The Great Divergence: China, Europe, and the Making of the Modern World Economy, Princeton University Press, p. 37, ISBN 978-0-691-09010-8 - Francis, Daniel (2006). Voices and Visions: A Story of Canada. Canada: Oxford University Press. p. 21. ISBN 978-0-19-542169-9. - Paul, Daniel N. (1993). We Were Not the Savages. Nova Scotia, Canada: Nimbus. ISBN 1552662098. - Prentice, Thomson. "Health, history and hard choices: Funding dilemmas in a fast-changing world" (PDF). World Health Organization: Global Health Histories. Retrieved November 4, 2010. - "", Stratfordhall.org. - "Death in Early America Archived December 30, 2010, at the Wayback Machine.". Digital History. - "Modernization - Population Change". Encyclopædia Britannica. - Mabel C. Buer, Health, Wealth and Population in the Early Days of the Industrial Revolution, London: George Routledge & Sons, 1926, page 30 ISBN 0-415-38218-1 - BBC—History—The Foundling Hospital. Published: May 1, 2001. - "Gapminder World". - CDC (1999). "Ten great public health achievements—United States, 1900–1999". MMWR Morb Mortal Wkly Rep. 48 (12): 241–3. PMID 10220250. 
Reprinted in: "From the Centers for Disease Control and Prevention. Ten great public health achievements—United States, 1900–1999". JAMA. 281 (16): 1481. 1999. doi:10.1001/jama.281.16.1481. PMID 10227303. - "Life expectancy at birth, total (years)—Data". - "World Population Prospects—The 2002 Revision", 2003, page 24 - "GHO—By category—Life expectancy—Data by country". - "Wealth & Health of Nations". Gapminder. Retrieved 26 June 2015. - "Life Expectancy | Visual Data". BestLifeRates.org. Retrieved 26 June 2015. - "Deaths: Final Data for 2010", National Vital Statistics Reports, authored by Sherry L. Murphy, Jiaquan Xu, and Kenneth D. Kochanek, volume 61, number 4, page 12, 8 May 2013 - United States Department of Health and Human Services, Office of Minority Health—Asian American/Pacific Islander Profile Archived February 4, 2012, at the Wayback Machine.. Retrieved October 1, 2013. - "The Root Causes of Poverty". Waterfields. Retrieved 2015-03-04. - Department of Health—Tackling health inequalities: Status report on the Programme for Action - "Social factors key to ill health". BBC News. August 28, 2008. Retrieved August 28, 2008. - "GP explains life expectancy gap". BBC News. August 28, 2008. Retrieved August 28, 2008. - Fletcher, Michael A. (March 10, 2013). "Research ties economic inequality to gap in life expectancy". Washington Post. Retrieved March 23, 2013. - "Did The Great Depression Have A Silver Lining? Life Expectancy Increased By 6.2 Years". September 29, 2009. Retrieved April 3, 2011. - firstname.lastname@example.org, Laurent PELE. "How long will I live ? Estimate remaining life expectancy for all countries in the world". - "The World Factbook—Central Intelligence Agency". CIA. Retrieved April 9, 2018. - "The World Factbook—Central Intelligence Agency". CIA. Retrieved April 9, 2018. - Kalben, Barbara Blatt. "Why Men Die Younger: Causes of Mortality Differences by Sex". Society of Actuaries", 2002, p. 17.http://www.soa.org/library/monographs/life/why-men-die-younger-causes-of-mortality-differences-by-sex/2001/january/m-li01-1-05.pdf - Hitti, Miranda (February 28, 2005). "U.S. Life Expectancy Best Ever, Says CDC". eMedicine. WebMD. Retrieved January 18, 2011. - "Life expectancy—care quality indicators". QualityWatch. Nuffield Trust & Health Foundation. Retrieved 16 April 2015. - World Health Organization (2004). "Annex Table 2: Deaths by cause, sex and mortality stratum in WHO regions, estimates for 2002" (PDF). The world health report 2004 - changing history. Retrieved November 1, 2008. - "Telemores, sexual size dimorphism and gender gap in life expectancy". Jerrymondo.tripod.com. Retrieved November 4, 2010. - Samaras Thomas T., Heigh Gregory H. "How human size affects longevity and mortality from degenerative diseases". Townsend Letter for Doctors & Patients. 159 (78–85): 133–139. - Living Standards in the Past: New Perspectives on Well-Being in Asia and Europe edited by Robert C. Allen, Tommy Bengtsson, Martin Dribe - Kalben, Barbara Blatt. Why Men Die Younger: Causes of Mortality Differences by Sex Society of Actuaries, 2002. - "Fruit flies offer DNA clue to why women live longer". August 2, 2012 – via www.bbc.co.uk. - Evolutionary biologist, PZ Myers Mother's Curse - "When Did Women Start to Outlive Men?". Retrieved 2015-07-08. - United Nations "World Population Ageing 2009"; ST/ESA/SER.A/295, Population Division, Department of Economic and Social Affairs, United Nations, New York, Oct. 2010, liv + 73 pp. - Japan Times "Centenarians to Hit Record 44,000". 
The Japan Times, September 15, 2010. Okinawa 667 centenarians per 1 million inhabitants in September 2010, had been for a long time the Japanese prefecture with the largest ratio of centenarians, partly because it also had the largest loss of young and middle-aged population during the Pacific War. - "Resident Population. National Population Estimates for the 2000s. Monthly Postcensal Resident Population, by single year of age, sex, race, and Hispanic Origin" Archived October 10, 2013, at the Wayback Machine., Bureau of the Census (updated monthly). Different figures, based on earlier assumptions (104,754 centenarians on Nov.1, 2009) are provided in "Older Americans Month: May 2010" Archived February 16, 2016, at the Wayback Machine., Bureau of the Census, Facts for Features, March 2, 2010, 5 pp. - "Nearly 1 in 5 Americans Suffers From Mental Illness Each Year" Author Victoria Bekiempis . Publisher Newsweek. February 28, 2014 - "The global prevalence of common mental disorders" Published by International Journal of Epidemiology. March 19, 2014. doi.org/10.1093/ije/dyu038 - "Morbidity and Mortality in People With Serious Mental Illness" (PDF). National Association of State Mental Health Program Directors. 2006. - "The Largest Health Disparity We Don’t Talk About" author Dhruv Khullar. May 30, 2018. New York Times publisher. - "Mortality rate three times as high among mental health service users than in general population" Health and Social Care Gov. UK. 2013 - "Morbidity and Mortality in People With Serious Mental Illness" (PDF). National Association of State Mental Health Program Directors. 2006. - Wahlbeck, Kristian; Westman, Jeanette; Nordentoft, Merete; Gissler, Mika; Laursen, Thomas Munk (December 1, 2011). "Outcomes of Nordic mental health systems: life expectancy of patients with mental disorders". Br J Psychiatry. 199 (6): 453–458. doi:10.1192/bjp.bp.110.085100. PMID 21593516 – via bjp.rcpsych.org. - Reininghaus, Ulrich; Dutta, Rina; Dazzan, Paola; Doody, Gillian A.; Fearon, Paul; Lappin, Julia; Heslin, Margaret; Onyejiaka, Adanna; Donoghue, Kim; Lomas, Ben; Kirkbride, James B.; Murray, Robin M.; Croudace, Tim; Morgan, Craig; Jones, Peter B. (September 27, 2014). "Mortality in Schizophrenia and Other Psychoses: A 10-Year Follow-up of the ӔSOP First-Episode Cohort". Schizophr Bull. 41: sbu138. doi:10.1093/schbul/sbu138. PMC 4393685. PMID 25262443 – via schizophreniabulletin.oxfordjournals.org. - Laursen TM, Munk-Olsen T, Vestergaard M (March 2012). "Life expectancy and cardiovascular mortality in persons with schizophrenia". Curr Opin Psychiatry. 25: 83–8. doi:10.1097/YCO.0b013e32835035ca. PMID 22249081. - "Antipsychotics Linked to Mortality in Parkinson's". Medscape. Retrieved April 9, 2018. - Rosenbaum Lisa (2016). "Closing the Mortality Gap — Mental Illness and Medical Care". New England Journal of Medicine. 375: 1585–1589. doi:10.1056/NEJMms1610125. - "Inquest told" Northampton Chronicle. July 3, 2013. - Kumar PN, Thomas B (2011). "Hyperglycemia associated with olanzapine treatment". Indian J Psychiatry. 53: 176–7. doi:10.4103/0019-5545.82562. PMC 3136028. PMID 21772658. - "Lilly Adds Strong Warning Label to Zyprexa, a Schizophrenia Drug". The New York Times. October 6, 2007. Retrieved April 9, 2018. - Codario, Ronald A. (October 28, 2007). "Type 2 Diabetes, Pre-Diabetes, and the Metabolic Syndrome". Springer Science & Business Media – via Google Books. - "Antipsychotic-Related Metabolic Testing Falls Far Short". MedScape. Retrieved April 9, 2018. - Jose Ma. J. Alvir (1993). 
"Clozapine-Induced Agranulocytosis -- Incidence and Risk Factors in the United States". New England Journal of Medicine. 329: 162–167. doi:10.1056/NEJM199307153290303. - Sonnenburg, Justin Sonnenburg, Erica. "Gut Feelings–the "Second Brain" in Our Gastrointestinal Systems [Excerpt]". Scientific American. Retrieved April 9, 2018. - Mosley, Michael, Michael (July 11, 2012). "The second brain in our stomachs". Retrieved April 9, 2018 – via www.bbc.com. - Rege S, Lafferty T (2008). "Life-threatening constipation associated with clozapine". Australas Psychiatry. 16: 216–9. doi:10.1080/10398560701882203. PMID 18568631. - Hibbard KR, Propst A, Frank DE, Wyse J (2009). "Fatalities associated with clozapine-related constipation and bowel obstruction: a literature review and two case reports". Psychosomatics. 50: 416–9. doi:10.1176/appi.psy.50.4.416. PMID 19687183. - Centers for Disease Control and Prevention - Kiberd Bryce A., Keough-Ryan Tammy, Clase Catherine M. (2003). "Screening for prostate, breast and colorectal cancer in renal transplant recipients". American Journal of Transplantation. 3 (5): 619–625. doi:10.1034/j.1600-6143.2003.00118.x. - Diehr Paula; et al. (2008). "Weight, mortality, years of healthy life, and active life expectancy in older adults". Journal of the American Geriatrics Society. 56 (1): 76–83. doi:10.1111/j.1532-5415.2007.01500.x. - Williams G (1957). "Pleiotropy, natural selection, and the evolution of senescence". Evolution. Society for the Study of Evolution. 11 (4): 398–411. doi:10.2307/2406060. JSTOR 2406060. - Austad SN (1993). "Retarded senescence in an insular population of Virginia opossums". J. Zool. Lond. 229 (4): 695–708. doi:10.1111/j.1469-7998.1993.tb02665.x. - Reznick DN, Bryant MJ, Roff D, Ghalambor CK, Ghalambor DE (2004). "Effect of extrinsic mortality on the evolution of senescence in guppies". Nature. 431 (7012): 1095–1099. doi:10.1038/nature02936. PMID 15510147. - Mitteldorf J, Pepper J (2007). "How can evolutionary theory accommodate recent empirical results on organismal senescence?". Theory in Biosciences. 126 (1): 3–8. doi:10.1007/s12064-007-0001-0. PMID 18087751. - Kirkwood TE (1977). "Evolution of aging". Nature. 270 (5635): 301–304. doi:10.1038/270301a0. PMID 593350. - Hulbert, A. J.; Pamplona, Reinald; Buffenstein, Rochelle; Buttemer, W. A. (October 1, 2007). "Life and Death: Metabolic Rate, Membrane Composition, and Life Span of Animals". Physiol. Rev. 87 (4): 1175–1213. doi:10.1152/physrev.00047.2006. PMID 17928583 – via physrev.physiology.org. - Olshansky, S J; Rattan, Suresh IS (July 25, 2009). "What Determines Longevity: Metabolic Rate or Stability?". 5 (28). - Aguilaniu, Hugo; Durieux, Jenni; Dillin, Andrew (October 15, 2005). "Metabolism, ubiquinone synthesis, and longevity". Genes Dev. 19 (20): 2399–2406. doi:10.1101/gad.1366505. PMID 16230529 – via genesdev.cshlp.org. - "The Longevity Secret for Tortoises Is Held In Their Low Metabolism Rate". Archived from the original on November 12, 2013. - Ricklefs RE, Cadena CD (2007). "Lifespan is unrelated to investment in reproduction in populations of mammals and birds in captivity". Ecol. Lett. 10 (10): 867–872. doi:10.1111/j.1461-0248.2007.01085.x. PMID 17845285. - Anderson, Robert N. (1999) Method for constructing complete annual U.S. life tables. Vital and health statistics. Series 2, Data evaluation and methods research; no. 129 (DHHS publication no. (PHS) 99-1329) PDF - Linda J Young; Jerry H Young (1998) Statistical ecology: a population perspective. Kluwer Academic Publishers, p. 
310 - R. Cunningham; T. Herzog; R. London (2008). Models for Quantifying Risk (Third ed.). Actex. ISBN 978-1-56698-676-2. page 92. - Ronald D. Lee and Lawrence Carter. 1992. "Modeling and Forecasting the Time Series of U.S. Mortality," Journal of the American Statistical Association 87 (September): 659-671. - "WHO | Health Status Statistics: Mortality". www.who.int. Retrieved 2018-03-10. - "The Lee-Carter Method for Forecasting Mortality, with Various Extensions and Applications - SOA" (PDF). SOA. Retrieved April 9, 2018. - "International Human Development Indicators—UNDP". Hdrstats.undp.org. Archived from the original on April 20, 2009. Retrieved November 4, 2010. - Has the relation between income inequality and life expectancy disappeared? Evidence from Italy and top industrialised countries Archived January 9, 2015, at the Wayback Machine. J Epidemiol Community Health 2005;59:158–162. - Inequality in income and mortality in the United States: analysis of mortality and potential pathways BMJ 1996, 312:999. - Wanjek, Christopher (2002). Bad Medicine: Misconceptions and Misuses Revealed, from Distance Healing to Vitamin O. Wiley. pp. 70–71. ISBN 0-471-43499-X. - Wanjek, Christopher (2002), Bad Medicine: Misconceptions and Misuses Revealed, from Distance Healing to Vitamin O, Wiley, pp. 70–71, ISBN 047143499X. - Wanjek, Christopher (2002), Bad Medicine: Misconceptions and Misuses Revealed, from Distance Healing to Vitamin O, Wiley, pp. 70–72, ISBN 047143499X. - Wanjek, Christopher (2002). Bad Medicine: Misconceptions and Misuses Revealed, from Distance Healing to Vitamin O. Wiley. p. 71. ISBN 0-471-43499-X. - Frier, "Demography", 789. - Plymouth Plantation; "Dead at Forty" - Life Expectancy by Age, 1850–2004 - Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher, ISBN 3-7186-4983-7 - Kochanek, Kenneth D., Elizabeth Arias, and Robert N. Anderson (2013), How Did Cause of Death Contribute to Racial Differences in Life Expectancy in the United States in 2010?. Hyattsville, Md.: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics. - Frier, Bruce W. "Demography", in Alan K. Bowman, Peter Garnsey, and Dominic Rathbone, eds., The Cambridge Ancient History XI: The High Empire, A.D. 70–192, (Cambridge: Cambridge University Press, 2000), 827–54. |Wikimedia Commons has media related to Life expectancy.| - Charts for all countries - Our World In Data – Life Expectancy—Visualizations of how life expectancy around the world has changed historically (by Max Roser). Includes life expectancy for different age groups. Charts for all countries, world maps, and links to more data sources. - Global Agewatch has the latest internationally comparable statistics on life expectancy from 195 countries. - Rank Order—Life expectancy at birth from the CIA's World Factbook. - CDC year-by-year life expectancy figures for USA from the USA Centers for Disease Controls and Prevention, National Center for Health Statistics. - Life expectancy in Roman times from the University of Texas. - Animal lifespans: Animal Lifespans from Tesarta Online (Internet Archive); The Life Span of Animals from Dr Bob's All Creatures Site.
Statistics, like all mathematical disciplines, does not infer valid conclusions from nothing. Inferring interesting conclusions about real statistical populations usually requires some background assumptions. Those assumptions must be made carefully, because incorrect assumptions can generate wildly inaccurate conclusions.

Here are some examples of statistical assumptions.
- Independence of observations from each other (casually taking this assumption for granted is an especially common error).
- Independence of observational error from potential confounding effects.
- Exact or approximate normality of observations.
- Linearity of graded responses to quantitative stimuli, e.g. in linear regression.

Classes of assumptions

There are two approaches to statistical inference: model-based inference and design-based inference. Both approaches rely on some statistical model to represent the data-generating process. In the model-based approach, the model is taken to be initially unknown, and one of the goals is to select an appropriate model for inference. In the design-based approach, the model is taken to be known, and one of the goals is to ensure that the sample data are selected randomly enough for inference.

Statistical assumptions can be put into two classes, depending upon which approach to inference is used.
- Model-based assumptions. These include the following three types:
  - Distributional assumptions. Where a statistical model involves terms relating to random errors, assumptions may be made about the probability distribution of these errors. In some cases, the distributional assumption relates to the observations themselves.
  - Structural assumptions. Statistical relationships between variables are often modelled by equating one variable to a function of another (or several others), plus a random error. Models often involve making a structural assumption about the form of the functional relationship, e.g. as in linear regression. This can be generalised to models involving relationships between underlying unobserved latent variables.
  - Cross-variation assumptions. These assumptions involve the joint probability distributions of either the observations themselves or the random errors in a model. Simple models may include the assumption that observations or errors are statistically independent.
- Design-based assumptions. These relate to the way observations have been gathered, and often involve an assumption of randomization during sampling.

The model-based approach is by far the most commonly used in statistical inference; the design-based approach is used mainly with survey sampling. With the model-based approach, all the assumptions are effectively encoded in the model.

Given that the validity of any conclusion drawn from a statistical inference depends on the validity of the assumptions made, it is clearly important that those assumptions should be reviewed at some stage. Some instances—for example where data are lacking—may require that researchers judge whether an assumption is reasonable. Researchers can expand this somewhat to consider what effect a departure from the assumptions might produce. Where more extensive data are available, various types of procedures for statistical model validation are available—e.g. for regression model validation (a brief illustration of such checks follows the references below).
- Kruskal, 1988
- Koch G. G., Gillings D. B. (2006), "Inference, design-based vs. model-based", Encyclopedia of Statistical Sciences (editor—Kotz S.), Wiley-Interscience.
- Cox, 2006, ch.9
- de Gruijter et al., 2006, §2.2
- McPherson, 1990, §3.4.1
- McPherson, 1990, §3.3
- de Gruijter et al., 2006, §2.2.1
- Cox D. R. (2006), Principles of Statistical Inference, Cambridge University Press.
- de Gruijter J., Brus D., Bierkens M., Knotters M. (2006), Sampling for Natural Resource Monitoring, Springer-Verlag.
- Kruskal, William (December 1988). "Miracles and statistics: the casual assumption of independence (ASA Presidential address)". Journal of the American Statistical Association 83 (404): 929–940. JSTOR 2290117.
- McPherson, G. (1990), Statistics in Scientific Investigation: Its Basis, Application and Interpretation, Springer-Verlag. ISBN 0-387-97137-8
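As a brief, hypothetical illustration of the kinds of checks mentioned above (not drawn from the sources cited), the following sketch fits a simple linear regression and inspects two of the model-based assumptions: approximate normality of the errors (via the skewness of the residuals) and independence of the errors (via their lag-1 correlation). The simulated data and all variable names are assumptions made only for this example.

```python
import numpy as np

# Simulated data (an assumption for the example): y depends linearly on x plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Structural assumption: y = a*x + b + error, fitted by least squares.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - (a * x + b)

# Distributional assumption: residual skewness should be near 0 if the errors are roughly normal.
skew = np.mean(residuals**3) / np.std(residuals)**3
# Cross-variation assumption: residuals should be roughly uncorrelated with one another.
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]

print(f"slope={a:.2f}, intercept={b:.2f}, residual skew={skew:.2f}, lag-1 corr={lag1:.2f}")
```

In practice such informal checks would be complemented by formal tests and, for design-based assumptions, by examining how the observations were actually gathered.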
A repeating decimal or recurring decimal is a decimal representation of a number whose digits are periodic (repeating its values at regular intervals) and whose infinitely repeated portion is not zero. It can be shown that a number is rational if and only if its decimal representation is repeating or terminating (i.e. all except finitely many digits are zero). For example, the decimal representation of 1/3 becomes periodic just after the decimal point, repeating the single digit "3" forever, i.e. 0.333.... A more complicated example is 3227/555, whose decimal becomes periodic at the second digit following the decimal point and then repeats the sequence "144" forever, i.e. 5.8144144144.... At present, there is no single universally accepted notation or phrasing for repeating decimals.

The infinitely repeated digit sequence is called the repetend or reptend. If the repetend is a zero, this decimal representation is called a terminating decimal rather than a repeating decimal, since the zeros can be omitted and the decimal terminates before these zeros. Every terminating decimal representation can be written as a decimal fraction, a fraction whose denominator is a power of 10 (e.g. 1.585 = 1585/1000); it may also be written as a ratio of the form k/(2^n·5^m) (e.g. 1.585 = 317/(2^3·5^2)). However, every number with a terminating decimal representation also trivially has a second, alternative representation as a repeating decimal whose repetend is the digit 9. This is obtained by decreasing the final (rightmost) non-zero digit by one and appending a repetend of 9. 1.000... = 0.999... and 1.585000... = 1.584999... are two examples of this. (This type of repeating decimal can be obtained by long division if one uses a modified form of the usual division algorithm.)

Any number that cannot be expressed as a ratio of two integers is said to be irrational. The decimal representation of an irrational number neither terminates nor infinitely repeats but extends forever without repetition. Examples of such irrational numbers are √2 and π.

There are several notational conventions for representing repeating decimals. None of them are accepted universally. In English, there are various ways to read repeating decimals aloud. For example, 1.234 (with the digit group "34" repeating) may be read "one point two repeating three four", "one point two repeated three four", "one point two recurring three four", "one point two repetend three four" or "one point two into infinity three four", etc.

Consider, for example, the long division of 5 by 74 (a short computational trace of this division is sketched below). Observe that at each step we have a remainder; the successive remainders are 56, 42, 50. When we arrive at 50 as the remainder, and bring down the "0", we find ourselves dividing 500 by 74, which is the same problem we began with. Therefore, the decimal repeats: 0.0675675675....

For any given divisor, only finitely many different remainders can occur. In the example above, the 74 possible remainders are 0, 1, 2, ..., 73. If at any point in the division the remainder is 0, the expansion terminates at that point. Then the length of the repetend, also called "period", is defined to be 0.

If 0 never occurs as a remainder, then the division process continues forever, and eventually, a remainder must occur that has occurred before. The next step in the division will yield the same new digit in the quotient, and the same new remainder, as the previous time the remainder was the same. Therefore, the following division will repeat the same results. The repeating sequence of digits is called the "repetend"; it has a certain length greater than 0, also called the "period".
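Since the worked division itself does not survive in the text, here is a minimal computational trace of it (a sketch; the variable names are my own). It divides 5 by 74 step by step and prints each quotient digit and remainder, making the recurrence of the remainder 50 visible.

```python
# Trace the long division of 5 by 74 digit by digit (sketch).
numerator, divisor = 5, 74
remainder = numerator
for step in range(8):                       # a few steps are enough to see the cycle
    remainder *= 10                         # "bring down a zero"
    digit, remainder = divmod(remainder, divisor)
    print(f"step {step}: digit {digit}, remainder {remainder}")
# Output: digits 0, 6, 7, 5, 6, 7, 5, ... with remainders 50, 56, 42, 50, 56, 42, ...
# The remainder 50 recurs, so 5/74 = 0.0675675675... with repetend "675".
```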
Each repeating decimal number satisfies a linear equation with integer coefficients, and its unique solution is a rational number. To illustrate the latter point, the number α = 5.8144144144... above satisfies the equation 10000α − 10α = 58144.144144... − 58.144144... = 58086, whose solution is α = 58086/9990 = 3227/555. The process of how to find these integer coefficients is described below.

Here the fraction considered is the unit fraction 1/n, and ℓ10(n) denotes the length of its (decimal) repetend. The lengths ℓ10(n) of the decimal repetends of 1/n, n = 1, 2, 3, ..., are:

For comparison, the lengths ℓ2(n) of the binary repetends of the fractions 1/n, n = 1, 2, 3, ..., are:

The decimal repetend lengths of 1/p, p = 2, 3, 5, ... (nth prime), are:

The least primes p for which 1/p has decimal repetend length n, n = 1, 2, 3, ..., are:

The least primes p for which k/p has n different cycles (1 ≤ k ≤ p−1), n = 1, 2, 3, ..., are:

A fraction in lowest terms with a prime denominator other than 2 or 5 (i.e. coprime to 10) always produces a repeating decimal. The length of the repetend (period of the repeating decimal segment) of 1/p is equal to the order of 10 modulo p. If 10 is a primitive root modulo p, the repetend length is equal to p − 1; if not, the repetend length is a factor of p − 1. This result can be deduced from Fermat's little theorem, which states that 10^(p−1) ≡ 1 (mod p).

The base-10 repetend of the reciprocal of any prime number greater than 5 is divisible by 9.

If the repetend length of 1/p for prime p is equal to p − 1, then the repetend, expressed as an integer, is called a cyclic number. The list can go on to include the fractions 1/109, 1/113, 1/131, 1/149, 1/167, 1/179, 1/181, 1/193, etc. (sequence in the OEIS). Every proper multiple of a cyclic number (that is, a multiple having the same number of digits) is a rotation:

A fraction which is cyclic thus has a recurring decimal of even length that divides into two sequences in nines' complement form. For example 1/7 starts '142' and is followed by '857', while 6/7 (by rotation) starts '857' followed by its nines' complement '142'.

The rotation of the repetend of a cyclic number always happens in such a way that each successive repetend is a bigger number than the previous one. In the succession above, for instance, we see that 0.142857... < 0.285714... < 0.428571... < 0.571428... < 0.714285... < 0.857142.... This, for cyclic fractions with long repetends, allows us to easily predict what the result of multiplying the fraction by any natural number n will be, as long as the repetend is known.

A proper prime is a prime p which ends in the digit 1 in base 10 and whose reciprocal in base 10 has a repetend with length p − 1. In such primes, each digit 0, 1, ..., 9 appears in the repeating sequence the same number of times as does each other digit (namely, (p − 1)/10 times). They are:

The reason is that 3 is a divisor of 9, 11 is a divisor of 99, 41 is a divisor of 99999, etc. To find the period of 1/p, we can check whether the prime p divides some number 999...999 in which the number of digits divides p − 1. Since the period is never greater than p − 1, we can obtain this by calculating (10^(p−1) − 1)/p. For example, for 11 we get (10^10 − 1)/11 = 909090909, i.e. the ten-digit block 0909090909, which is the two-digit block 09 repeated five times; the period of 1/11 is therefore 2.

Those reciprocals of primes can be associated with several sequences of repeating decimals. For example, the multiples of 1/13 can be divided into two sets, with different repetends. The first set is:

where the repetend of each fraction is a cyclic re-arrangement of 076923.
The second set is:

where the repetend of each fraction is a cyclic re-arrangement of 153846. In general, the set of proper multiples of reciprocals of a prime p consists of n subsets, each with repetend length k, where nk = p − 1.

For an arbitrary integer n, the length L(n) of the decimal repetend of 1/n divides φ(n), where φ is the totient function. The length is equal to φ(n) if and only if 10 is a primitive root modulo n. In particular, it follows that L(p) = p − 1 if and only if p is a prime and 10 is a primitive root modulo p. Then, the decimal expansions of n/p for n = 1, 2, ..., p − 1, all have period p − 1 and differ only by a cyclic permutation. Such numbers p are called full repetend primes.

If p is a prime other than 2 or 5, the decimal representation of the fraction 1/p^2 repeats. For example, the period (repetend length) L(49) must be a factor of λ(49) = 42, where λ(n) is known as the Carmichael function. This follows from Carmichael's theorem, which states that if n is a positive integer then λ(n) is the smallest integer m such that a^m ≡ 1 (mod n) for every integer a coprime to n. The period of 1/p^2 is usually p·T(p), where T(p) is the period of 1/p. There are three known primes for which this is not true, and for those the period of 1/p^2 is the same as the period of 1/p because p^2 divides 10^(p−1) − 1. These three primes are 3, 487, and 56598313 (sequence in the OEIS).

If p and q are primes other than 2 or 5, the decimal representation of the fraction 1/pq repeats. An example is 1/119: the period T of 1/pq is a factor of λ(pq), and it happens to be 48 in this case. More precisely, the period T of 1/pq is LCM(T(p), T(q)), where T(p) is the period of 1/p and T(q) is the period of 1/q (the sketch at the end of this section verifies this for 1/119 = 1/(7 × 17)).

If p, q, r, etc. are primes other than 2 or 5, and k, ℓ, m, etc. are positive integers, then the period of 1/(p^k · q^ℓ · r^m · ...) is LCM(T(p^k), T(q^ℓ), T(r^m), ...), where T(p^k), T(q^ℓ), T(r^m), ... are respectively the periods of the repeating decimals 1/p^k, 1/q^ℓ, 1/r^m, ... as defined above.

An integer that is not coprime to 10 but has a prime factor other than 2 or 5 has a reciprocal that is eventually periodic, but with a non-repeating sequence of digits that precedes the repeating part. Such a reciprocal can be expressed as a fraction whose denominator is of the form (10^n − 1)·10^k, with the factor 10^k accounting for the k non-repeating leading digits (the general form is derived below).

Given a repeating decimal, it is possible to calculate the fraction that produces it. For example:

The procedure below can be applied in particular if the repetend has n digits, all of which are 0 except the final one which is 1. For instance for n = 7:

So this particular repeating decimal corresponds to the fraction 1/(10^n − 1), where the denominator is the number written as n 9s. Knowing just that, a general repeating decimal can be expressed as a fraction without having to solve an equation. For example, one could reason:

It is possible to get a general formula expressing a repeating decimal with an n-digit period (repetend length), beginning right after the decimal point, as a fraction: if the repeating decimal is between 0 and 1, and the repeating block is n digits long, first occurring right after the decimal point, then the fraction (not necessarily reduced) will be the integer represented by the n-digit block divided by the one represented by n 9s. For example,

If the repeating decimal is as above, except that there are k (extra) digits 0 between the decimal point and the repeating n-digit block, then one can simply add k digits 0 after the n digits 9 of the denominator (and, as before, the fraction may subsequently be simplified).
For example,

Any repeating decimal not of the form described above can be written as a sum of a terminating decimal and a repeating decimal of one of the two above types (actually the first type suffices, but that could require the terminating decimal to be negative). For example,

An even faster method is to ignore the decimal point completely and go like this:

It follows that any repeating decimal with period n, and k digits after the decimal point that do not belong to the repeating part, can be written as a (not necessarily reduced) fraction whose denominator is (10^n − 1)·10^k.

Conversely, the period of the repeating decimal of a fraction c/d will be (at most) the smallest number n such that 10^n − 1 is divisible by d. For example, the fraction 2/7 has d = 7, and the smallest n that makes 10^n − 1 divisible by 7 is n = 6, because 999999 = 7 × 142857. The period of the fraction 2/7 is therefore 6.

A repeating decimal can also be expressed as an infinite series. That is, a repeating decimal can be regarded as the sum of an infinite number of rational numbers. To take the simplest example, 0.111... = 1/10 + 1/10^2 + 1/10^3 + ⋯. This is a geometric series with first term a = 1/10 and common factor r = 1/10. Because the absolute value of the common factor is less than 1, we can say that the geometric series converges and find the exact value in the form of a fraction by using the formula for its sum, a/(1 − r); here this gives (1/10)/(1 − 1/10) = 1/9, so 0.111... = 1/9.

The cyclic behavior of repeating decimals in multiplication also leads to the construction of integers which are cyclically permuted when multiplied by certain numbers. For example, 102564 × 4 = 410256; 102564 is the repetend of 4/39 and 410256 the repetend of 16/39.

Various features of repeating decimals extend to the representation of numbers in all other integer bases, not just base 10. For example, in duodecimal, 1/2 = 0.6, 1/3 = 0.4, 1/4 = 0.3 and 1/6 = 0.2 all terminate; 1/5 = 0.2497 repeats with period length 4, in contrast with the equivalent decimal expansion of 0.2; and 1/7 = 0.186A35 has period 6 in duodecimal, just as it does in decimal. (For the place values involved: 10 in base 12 is 12 in base 10, 10^2 in base 12 is 144 in base 10, 21 in base 12 is 25 in base 10, and A5 in base 12 is 125 in base 10.)

For a rational number 0 < p/q < 1 (and an integer base b > 1) there is a simple long-division algorithm that produces the repetend together with its length; a sketch of such an algorithm is given below. Because all the remainders p that arise are non-negative integers less than q, there can be only a finite number of them, with the consequence that they must recur in the while loop. Such a recurrence is detected by the associative array occurs, which records the position at which each remainder first appeared. The new digit z is computed from the current remainder p, which is the only quantity that changes from one iteration to the next. The length L of the repetend equals the number of remainders in the recurring part (see also the discussion of long division above).

Repeating decimals (also called decimal sequences) have found cryptographic and error-correction coding applications. In these applications repeating decimals to base 2 are generally used, which gives rise to binary sequences. The maximum length binary sequence for 1/p (when 2 is a primitive root of p) is given by:
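The long-division algorithm described above (the one using the associative array occurs) is not preserved in the text, so the following is a minimal sketch of how it might look; the function name and any variables beyond p, q, b, z, L and occurs are my own. With b = 2 it also produces the binary sequences mentioned in the last paragraph, and the assertions at the end check two of the period statements made earlier (period 42 for 1/49, and LCM(T(7), T(17)) = 48 for 1/119).

```python
from math import lcm

def repetend(p: int, q: int, b: int = 10):
    """For 0 < p/q < 1, return (non-repeating prefix digits, repetend digits, length L)
    of the base-b expansion of p/q. A terminating expansion is reported with L = 0."""
    occurs = {}                     # remainder -> position at which it first appeared
    digits = []
    pos = 0
    while p != 0 and p not in occurs:
        occurs[p] = pos             # remember where this remainder occurred
        z, p = divmod(p * b, q)     # next digit z and new remainder p
        digits.append(z)
        pos += 1
    if p == 0:                      # remainder 0: the expansion terminates
        return digits, [], 0
    start = occurs[p]               # the expansion repeats from this position on
    return digits[:start], digits[start:], pos - start

print(repetend(1, 7))               # ([], [1, 4, 2, 8, 5, 7], 6)
print(repetend(5, 74))              # ([0], [6, 7, 5], 3)
print(repetend(1, 3, 2))            # ([], [0, 1], 2) -- binary 0.010101...

assert repetend(1, 49)[2] == 42     # period of 1/p^2 for p = 7
assert repetend(1, 119)[2] == lcm(repetend(1, 7)[2], repetend(1, 17)[2]) == 48
```

The dictionary occurs plays exactly the role described above: the first remainder to recur marks both where the repetend starts and how long it is.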
Graphing Linear Equations and Inequalities

Students discuss methods to solve equations and inequalities with one variable. As a review, they write the Addition and Subtraction Properties of Equality and the Multiplication and Division Properties of Equality. Students graph points, write the equation of a line, and find the slope. They practice rewriting equations and inequalities in slope-intercept and standard form, as illustrated in the sketch below.
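As a small illustration of the slope-intercept conversion mentioned above (my own example, not part of the lesson description), the following sketch rewrites a line given in standard form Ax + By = C as y = mx + b:

```python
def slope_intercept(A: float, B: float, C: float):
    """Convert a line in standard form A*x + B*y = C to slope-intercept form y = m*x + b."""
    if B == 0:
        raise ValueError("B = 0 gives a vertical line, which has no slope-intercept form")
    m = -A / B          # slope
    b = C / B           # y-intercept
    return m, b

# Example: 2x + 3y = 6  ->  y = -(2/3)x + 2
print(slope_intercept(2, 3, 6))     # (-0.666..., 2.0)
```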
In modern physics, antimatter is defined as a material composed of the antiparticles (or "partners") of the corresponding particles of ordinary matter. Minuscule numbers of antiparticles are generated daily at particle accelerators – total production has been only a few nanograms – and in natural processes like cosmic ray collisions and some types of radioactive decay, but only a tiny fraction of these have successfully been bound together in experiments to form anti-atoms. No macroscopic amount of antimatter has ever been assembled due to the extreme cost and difficulty of production and handling. In theory, a particle and its anti-particle (for example, proton and antiproton) have the same mass, but opposite electric charge and other differences in quantum numbers. For example, a proton has positive charge while an antiproton has negative charge. A collision between any particle and its anti-particle partner leads to their mutual annihilation, giving rise to various proportions of intense photons (gamma rays), neutrinos, and sometimes less-massive particle-antiparticle pairs. Annihilation usually results in a release of energy that becomes available for heat or work. The amount of the released energy is usually proportional to the total mass of the collided matter and antimatter, in accordance with the mass–energy equivalence equation, E=mc2. Antimatter particles bind with one another to form antimatter, just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton (the antiparticle of the proton) can form an antihydrogen atom. The nuclei of antihelium have been artificially produced with difficulty, and these are the most complex anti-nuclei so far observed. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements. There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. The process by which this inequality between matter and antimatter particles developed is called baryogenesis. Antimatter particles can be defined by their negative baryon number or lepton number, while "normal" (non-antimatter) matter particles have a positive baryon or lepton number. These two classes of particles are the antiparticle partners of one another. The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of "squirts" and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into. The term antimatter was first used by Arthur Schuster in two rather whimsical letters to Nature in 1898, in which he coined the term. He hypothesized antiatoms, as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that it possessed negative gravity. 
The modern theory of antimatter began in 1928, with a paper by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. These were discovered by Carl D. Anderson in 1932 and named positrons (a portmanteau of "positive electron"). Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc. A complete periodic table of antimatter was envisaged by Charles Janet in 1929.

One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as p and p̄, respectively. The same rule applies if one were to address a particle by its constituent components. A proton is made up of quarks, so an antiproton must therefore be formed from antiquarks. Another convention is to distinguish particles by their electric charge. Thus, the electron and positron are denoted simply as e− and e+, respectively. However, to prevent confusion, the two conventions are never mixed.

There are compelling theoretical reasons to believe that, aside from the fact that antiparticles have different signs on all charges (such as electric and baryon charges), matter and antimatter have exactly the same properties. This means a particle and its corresponding antiparticle must have identical masses and decay lifetimes (if unstable). It also implies that, for example, a star made up of antimatter (an "antistar") will shine just like an ordinary star. This idea was tested experimentally in 2016 by the ALPHA experiment, which measured the transition between the two lowest energy states of antihydrogen. The results, which are identical to those of hydrogen, confirmed the validity of quantum mechanics for antimatter.

Almost all matter observable from the Earth seems to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable.

Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays impacting Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (that is, the rest mass of an electron multiplied by c^2).

Observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant antimatter cloud surrounding the galactic center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the galactic center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains kinetic energy while falling into a stellar remnant.
Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish. NASA is trying to determine if such galaxies exist by looking for X-ray and gamma-ray signatures of annihilation events in colliding superclusters. In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter. Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In January 2011, research by the American Astronomical Society discovered antimatter (positrons) originating above thunderstorm clouds; positrons are produced in gamma-ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module. Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). It is hypothesized that during the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, is called baryon asymmetry. The exact mechanism which produced this asymmetry during baryogenesis remains an unsolved problem. One of the necessary conditions for this asymmetry is the violation of CP symmetry, which has been experimentally observed in the weak interaction. Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. This antimatter cannot all have been created in the Big Bang, but is instead attributed to have been produced by cyclic processes at high energies. For instance, electron-positron pairs may be formed in pulsars, as a magnetized neutron star rotation cycle shears electron-positron pairs from the star surface. Therein the antimatter forms a wind which crashes upon the ejecta of the progenitor supernovae. This weathering takes place as "the cold, magnetized relativistic wind launched by the star hits the non-relativistically expanding ejecta, a shock wave system forms in the impact: the outer one propagates in the ejecta, while a reverse shock propagates back towards the star." The former ejection of matter in the outer shock wave and the latter production of antimatter in the reverse shock wave are steps in a space weather cycle. 
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak at about 10 GeV. These results have been interpreted as possibly being due to positron production in annihilation events of massive dark matter particles.

Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.

There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (that is, anti-alpha particles), in cosmic rays. These are actively being searched for, because the detection of natural antihelium would imply the existence of large antimatter structures such as an antistar. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10^−6 for the antihelium to helium flux ratio.

Positrons were reported in November 2008 to have been generated by Lawrence Livermore National Laboratory in larger numbers than by any previous synthetic process. A laser drove electrons through a gold target's nuclei, which caused the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; however, new simulations showed that short bursts of ultra-intense lasers and millimeter-thick gold are a far more effective source.

The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. An antiproton consists of two up antiquarks and one down antiquark. The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having opposite electric charge and magnetic moment from the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues.

In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN.
At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.

In 1995, CERN announced that it had successfully brought into existence nine hot antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri. Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic and were not well suited to study. To overcome this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely, ATHENA and ATRAP.

In 1999, CERN activated the Antiproton Decelerator, a device capable of decelerating antiprotons from 3500 MeV to 5.3 MeV—still too "hot" to produce study-effective antihydrogen, but a huge leap forward. In late 2002 the ATHENA project announced that they had created the world's first "cold" antihydrogen. The ATRAP project released similar results very shortly thereafter. The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning–Malmberg trap. The overall cooling process is workable, but highly inefficient; approximately 25 million antiprotons leave the Antiproton Decelerator and roughly 25,000 make it to the Penning–Malmberg trap, which is about 1/1000, or 0.1%, of the original amount.

The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons with energies less than 100 meV. While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator. This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some of them combine with positrons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion.

In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The ultimate goal of this endeavour is to test CPT symmetry through comparison of the atomic spectra of hydrogen and antihydrogen (see hydrogen spectral series).

In 2016 a new antiproton decelerator and cooler called ELENA (Extra Low ENergy Antiproton decelerator) was built. It takes the antiprotons from the Antiproton Decelerator and cools them to 90 keV, which is "cold" enough to study. This machine works by using high energy and accelerating the particles within the chamber.
More than one hundred antiprotons can be captured per second, a huge improvement, but it would still take several thousand years to make a nanogram of antimatter.

Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time. While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields. Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second. This was the first time that neutral antimatter had been trapped.

On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before. ALPHA has used these trapped atoms to initiate research into the spectral properties of the antihydrogen.

The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN state that, when fully operational, their facilities are capable of producing ten million antiprotons per minute. Assuming a 100% conversion of antiprotons to antihydrogen, it would take 100 billion years to produce 1 gram or 1 mole of antihydrogen (approximately 6.02×10^23 atoms of anti-hydrogen).

Antihelium-3 nuclei (anti-3He) were first observed in the 1970s in proton–nucleus collision experiments at the Institute for High Energy Physics by Y. Prockoshkin's group (Protvino near Moscow, USSR) and later created in nucleus–nucleus collision experiments. Nucleus–nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of artificially created antihelium-4 nuclei (anti-alpha particles, anti-4He) from such collisions.

Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam.

In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes. A proposal was made in 2018 to develop containment technology advanced enough to contain a billion anti-protons in a portable device to be driven to another lab for further experimentation. Scientists claim that antimatter is the costliest material to make.
In 2006, Gerald Smith estimated $250 million could produce 10 milligrams of positrons (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen. This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators), and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions). In comparison, the cost of the Manhattan Project, which produced the first atomic weapon, was estimated at about $23 billion in 2007 dollars (adjusted for inflation).

Several studies funded by the NASA Institute for Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately, the belts of gas giants like Jupiter, hopefully at a lower cost per gram.

Matter–antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use. Antiprotons have also been shown within laboratory experiments to have the potential to treat certain cancers, in a method similar to that currently used for ion (proton) therapy.

Isolated and stored anti-matter could be used as a fuel for interplanetary or interstellar travel as part of an antimatter catalyzed nuclear pulse propulsion or other antimatter rocketry, such as the redshift rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft.

If matter–antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (9×10^16 J/kg) is about 10 orders of magnitude greater than chemical energies, about 3 orders of magnitude greater than the nuclear potential energy that can be liberated, today, using nuclear fission (about 200 MeV per fission reaction, or 8×10^13 J/kg), and about 2 orders of magnitude greater than the best possible results expected from fusion (about 6.3×10^14 J/kg for the proton–proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10^17 J (180 petajoules) of energy (by the mass–energy equivalence formula, E = mc^2), or the rough equivalent of 43 megatons of TNT, slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated. (A short numerical check of these figures appears below.)

Not all of that energy can be utilized by any realistic propulsion technology because of the nature of the annihilation products. While electron–positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a lifetime of 85 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a lifetime of 26 nanoseconds) and can be deflected magnetically to produce thrust.
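A quick numerical check of the energy and production figures quoted above (a sketch of my own; the conversion 1 Mt TNT = 4.184×10^15 J is the standard definition):

```python
# Check the annihilation-energy figures quoted above.
c = 2.998e8                        # speed of light, m/s
E = 2.0 * c**2                     # 1 kg antimatter + 1 kg matter annihilated, E = m*c^2, in joules
print(f"E = {E:.2e} J")            # ~1.8e17 J (180 petajoules)
print(f"~{E / 4.184e15:.0f} Mt TNT equivalent")   # ~43 Mt

# Time to accumulate one mole (~1 g) of antihydrogen at ten million antiprotons
# per minute, assuming 100% conversion, as stated earlier.
atoms_per_year = 1e7 * 60 * 24 * 365.25
print(f"~{6.022e23 / atoms_per_year:.1e} years")  # ~1.1e11, on the order of 100 billion years
```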
Charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons, meaning that from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another would be about 0.22 + (2/3)·0.78 ≈ 0.74).

Antimatter has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible. However, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself.

Matter conservation means conservation of baryonic number A and leptonic number L, A and L being signed (additive) quantum numbers. Positive A and L are associated with matter particles, negative A and L with antimatter particles. All known interactions conserve matter. Antimatter particles are characterized by negative baryonic number A and/or negative leptonic number L. Materialization and annihilation obey conservation of A and L (associated with all known interactions).

Antihydrogen
Antihydrogen (H̄) is the antimatter counterpart of hydrogen. Whereas the common hydrogen atom is composed of an electron and proton, the antihydrogen atom is made up of a positron and antiproton. Scientists hope studying antihydrogen may shed light on the question of why there is more matter than antimatter in the observable universe, known as the baryon asymmetry problem. Antihydrogen is produced artificially in particle accelerators. In 1999, NASA gave a cost estimate of $62.5 trillion per gram of antihydrogen (equivalent to $94 trillion today), making it the most expensive material to produce. This is due to the extremely low yield per experiment, and the high opportunity cost of using a particle accelerator.

Antimatter-catalyzed nuclear pulse propulsion
Antimatter catalyzed nuclear pulse propulsion is a variation of nuclear pulse propulsion based upon the injection of antimatter into a mass of nuclear fuel which normally would not be useful in propulsion. The anti-protons used to start the reaction are consumed, so it is a misnomer to refer to them as a catalyst.

Antimatter comet
Antimatter comets (and antimatter meteoroids) are hypothetical comets (meteoroids) composed solely of antimatter instead of ordinary matter. Although never actually observed, and unlikely to exist anywhere within the Milky Way, they have been hypothesized to exist, and their existence, on the presumption that hypothesis is correct, has been put forward as one possible explanation for various observed natural phenomena over the years.

Antimatter rocket
An antimatter rocket is a proposed class of rockets that use antimatter as their power source. There are several designs that attempt to accomplish this goal.
The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket.

Antimatter weapon
An antimatter weapon is a theoretically possible device using antimatter as a power source, a propellant, or an explosive for a weapon. Antimatter weapons cannot yet be produced due to the current cost of production of antimatter (estimated at 63 trillion dollars per gram), given the extremely limited technology available to create it in sufficient masses to be viable in a weapon, and the fact that it annihilates upon touching ordinary matter, making containment very difficult. The paramount advantage of such a theoretical weapon is that antimatter and matter collisions release the entire mass–energy equivalent of both as energy, an efficiency at least an order of magnitude greater than that of the most efficient fusion weapons (100% versus 7–10%). Annihilation requires and converts exactly equal masses of antimatter and matter in the collision, releasing the entire mass–energy of both, which for 1 gram of antimatter (annihilating with 1 gram of matter) is about 1.8×10^14 joules. Using the convention that 1 kiloton TNT equivalent = 4.184×10^12 joules (or one trillion calories of energy), one gram of antimatter reacting with one gram of ordinary matter results in 42.96 kilotons-equivalent of energy (though there is considerable "loss" through the production of neutrinos).

Antineutron
The antineutron is the antiparticle of the neutron, with symbol n̄. It differs from the neutron only in that some of its properties have equal magnitude but opposite sign. It has the same mass as the neutron, and no net electric charge, but has opposite baryon number (+1 for the neutron, −1 for the antineutron). This is because the antineutron is composed of antiquarks, while neutrons are composed of quarks. The antineutron consists of one up antiquark and two down antiquarks. Since the antineutron is electrically neutral, it cannot easily be observed directly. Instead, the products of its annihilation with ordinary matter are observed. In theory, a free antineutron should decay into an antiproton, a positron and a neutrino in a process analogous to the beta decay of free neutrons. There are theoretical proposals of neutron–antineutron oscillations, a process that would imply violation of baryon number conservation. The antineutron was discovered in proton–antiproton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork in 1956, one year after the antiproton was discovered.

Antiparticle
In particle physics, every type of particle has an associated antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the antielectron (which is often referred to as the positron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron. Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as normal matter (the kind that everyday matter is made of), and the other (usually given the prefix "anti-") as antimatter.
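The energy figures quoted in the passages above follow directly from E = mc², and the neutrino "loss" follows from the decay fractions given earlier. The sketch below (Python, with constants rounded to the precision used in the text) reproduces those numbers.

```python
# Sketch reproducing the annihilation figures quoted above (rounded constants).

C = 2.998e8                 # speed of light, m/s
KILOTON_TNT = 4.184e12      # joules per kiloton of TNT
MEGATON_TNT = 4.184e15      # joules per megaton of TNT

def annihilation_energy(matter_kg, antimatter_kg):
    """Energy released if the two masses annihilate completely (E = m c^2)."""
    return (matter_kg + antimatter_kg) * C**2

# 1 kg of antimatter + 1 kg of matter: ~1.8e17 J, roughly 43 Mt of TNT
e_kg = annihilation_energy(1.0, 1.0)
print(f"{e_kg:.2e} J  =  {e_kg / MEGATON_TNT:.1f} Mt TNT")

# 1 g of antimatter + 1 g of matter: ~1.8e14 J, roughly 43 kt of TNT
e_g = annihilation_energy(1e-3, 1e-3)
print(f"{e_g:.2e} J  =  {e_g / KILOTON_TNT:.1f} kt TNT")

# Fraction of charged-pion energy that ends up in neutrinos (proton-antiproton case):
# 22% directly, plus 2/3 of the 78% carried by the muons.
neutrino_fraction = 0.22 + (2 / 3) * 0.78
print(f"Energy fraction lost to neutrinos: {neutrino_fraction:.2f}")   # ~0.74
```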
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography. The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge–parity (CP) violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle, which can occur in particle accelerators such as the Large Hadron Collider at CERN. Although particles and their antiparticles have opposite charges, electrically neutral particles need not be identical to their antiparticles. The neutron, for example, is made out of quarks, the antineutron from antiquarks, and they are distinguishable from one another because neutrons and antineutrons annihilate each other upon contact. However, other neutral particles are their own antiparticles, such as photons, Z0 bosons, π0 mesons, and hypothetical gravitons and some hypothetical WIMPs.

Antiproton
The antiproton, p̄ (pronounced "p-bar"), is the antiparticle of the proton. Antiprotons are stable, but they are typically short-lived, since any collision with a proton will cause both particles to be annihilated in a burst of energy. The existence of the antiproton with −1 electric charge, opposite to the +1 electric charge of the proton, was predicted by Paul Dirac in his 1933 Nobel Prize lecture. Dirac received the Nobel Prize for his earlier 1928 publication of the Dirac equation, which predicted the existence of positive and negative solutions to Einstein's relativistic energy equation and the existence of the positron, the antimatter analog of the electron, with positive charge but the same mass and spin. The antiproton was first experimentally confirmed in 1955 at the Bevatron particle accelerator by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. In terms of valence quarks, an antiproton consists of two up antiquarks and one down antiquark (the antiquark counterparts of the proton's uud). The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception that the antiproton has electric charge and magnetic moment that are the opposites of those of the proton.
The questions of how matter is different from antimatter, and the relevance of antimatter in explaining how our universe survived the Big Bang, remain open problems—open, in part, due to the relative scarcity of antimatter in today's universe.

Baryogenesis
In physical cosmology, baryogenesis is the hypothetical physical process that took place during the early universe and produced baryonic asymmetry, i.e. the imbalance of matter (baryons) and antimatter (antibaryons) in the observed universe. One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density – that is, matter exists. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter (as opposed to antimatter) under certain conditions. This imbalance would have been exceptionally small, on the order of 1 in every 10,000,000,000 (10^10) particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons. Experiments reported in 2010 at Fermilab, however, seem to show that this imbalance is much greater than previously assumed. In an experiment involving a series of particle collisions, the amount of generated matter was approximately 1% larger than the amount of generated antimatter. The reason for this discrepancy is not yet known. Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons (X) or massive Higgs bosons (H0). The rate at which these events occur is governed largely by the mass of the intermediate X or H0 particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay. Baryogenesis theories are based on different descriptions of the interaction between fundamental particles. Two main theories are electroweak baryogenesis (standard model), which would occur during the electroweak epoch, and GUT baryogenesis, which would occur during or shortly after the grand unification epoch. Quantum field theory and statistical physics are used to describe such possible mechanisms. Baryogenesis is followed by primordial nucleosynthesis, when atomic nuclei began to form.

Baryon asymmetry
In physics, the baryon asymmetry problem, also known as the matter asymmetry problem or the matter–antimatter asymmetry problem, is the observed imbalance between baryonic matter (the type of matter experienced in everyday life) and antibaryonic matter in the observable universe. Neither the standard model of particle physics nor the theory of general relativity provides a known explanation for why this should be so, and it is a natural assumption that the universe be neutral with respect to all conserved charges. The Big Bang should have produced equal amounts of matter and antimatter.
Since this does not seem to have been the case, it is likely that some physical laws acted differently, or did not apply equally, for matter and antimatter. Several competing hypotheses exist to explain the imbalance of matter and antimatter that resulted in baryogenesis. However, there is as yet no consensus theory to explain the phenomenon. As remarked in a 2012 research paper, "The origin of matter remains one of the great mysteries in physics."

CP violation
In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) while its spatial coordinates are inverted ("mirror" or P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch. It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present Universe, and in the study of weak interactions in particle physics.

Gravitational interaction of antimatter
The gravitational interaction of antimatter with matter or antimatter has not been conclusively observed by physicists. While the consensus among physicists is that gravity will attract both matter and antimatter at the same rate that matter attracts matter, there is a strong desire to confirm this experimentally. Antimatter's rarity and tendency to annihilate when brought into contact with matter makes its study a technically demanding task. Furthermore, gravity is much weaker than the other fundamental forces, for reasons still of interest to physicists, complicating efforts to study gravity in systems small enough to be feasibly created in the lab, including antimatter systems. Most methods for the creation of antimatter (specifically antihydrogen) result in high-energy particles and atoms of high kinetic energy, which are unsuitable for gravity-related study. In recent years, first ALPHA and then ATRAP have trapped antihydrogen atoms at CERN; in 2012 ALPHA used such atoms to set the first loose free-fall bounds on the gravitational interaction of antimatter with matter. The result, uncertain to within ±7500% of ordinary gravity, was not precise enough for a clear scientific statement about even the sign of gravity acting on antimatter. Future experiments need to be performed with higher precision, either with beams of antihydrogen (AEGIS) or with trapped antihydrogen (ALPHA or GBAR). In addition to uncertainty regarding whether antimatter is gravitationally attracted to or repelled by other matter, it is also unknown whether the magnitude of the gravitational force is the same. Difficulties in creating quantum gravity theories have led to the idea that antimatter may react with a slightly different magnitude.

Matter
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, "matter" generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However, it does not include massless particles such as photons, or other energy phenomena or waves such as light or sound.
Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example, water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma. Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However, this is only somewhat correct, because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons), and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions; this creates the property of matter which appears to us as matter taking up space. For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, was first put forward by the Greek philosophers Leucippus (~490 BC) and Democritus (~470–380 BC).

PAMELA detector
PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics) was a cosmic ray research module attached to an Earth-orbiting satellite. PAMELA was launched on 15 June 2006 and was the first satellite-based experiment dedicated to the detection of cosmic rays, with a particular focus on their antimatter component, in the form of positrons and antiprotons. Other objectives included long-term monitoring of the solar modulation of cosmic rays, measurements of energetic particles from the Sun, high-energy particles in Earth's magnetosphere, and Jovian electrons. It was also hoped that it might detect evidence of dark matter annihilation. PAMELA operations were terminated in 2016, as were the operations of the host satellite Resurs-DK1.

Positron
The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. The positron has an electric charge of +1 e, a spin of 1/2 (the same as the electron), and the same mass as an electron. When a positron collides with an electron, annihilation occurs. If this collision occurs at low energies, it results in the production of two or more gamma ray photons. Positrons can be created by positron emission radioactive decay (through weak interactions), or by pair production from a sufficiently energetic photon interacting with an atom in a material.
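As a concrete illustration of the low-energy annihilation described above, each of the two gamma ray photons carries the rest energy of one electron. A minimal sketch, using standard rounded constants:

```python
# Sketch: energy of the gamma ray photons from low-energy electron-positron
# annihilation. Each photon carries the rest energy of one electron (m_e * c^2).

M_E = 9.10938e-31      # electron (and positron) mass, kg
C = 2.99792458e8       # speed of light, m/s
EV = 1.602177e-19      # joules per electronvolt

rest_energy_j = M_E * C**2
print(f"Per photon: {rest_energy_j:.3e} J = {rest_energy_j / EV / 1e6:.3f} MeV")  # ~0.511 MeV
print(f"Total for the pair: {2 * rest_energy_j / EV / 1e6:.3f} MeV")              # ~1.022 MeV
```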
Deoxyribonucleic acid (DNA)
DNA is a polymer of the four nucleotides A, C, G, and T, which are joined through a backbone of alternating phosphate and deoxyribose sugar residues. These nitrogen-containing bases occur in complementary pairs as determined by their ability to form hydrogen bonds between them. A always pairs with T through two hydrogen bonds, and G always pairs with C through three hydrogen bonds. The spans of A:T and G:C hydrogen-bonded pairs are nearly identical, allowing them to bridge the sugar-phosphate chains uniformly. This structure, along with the molecule's chemical stability, makes DNA the ideal genetic material. The bonding between complementary bases also provides a mechanism for the replication of DNA and the transmission of genetic information.

In 1953 James D. Watson and Francis H.C. Crick proposed a three-dimensional structure for DNA based on low-resolution X-ray crystallographic data and on Erwin Chargaff's observation that, in naturally occurring DNA, the amount of T equals the amount of A and the amount of G equals the amount of C. Watson and Crick, who shared a Nobel Prize in 1962 for their efforts, postulated that two strands of polynucleotides coil around each other, forming a double helix. The two strands, which are complementary rather than identical, run in opposite directions as determined by the orientation of the 5′ to 3′ phosphodiester bond. The sugar-phosphate chains run along the outside of the helix, and the bases lie on the inside, where they are linked to complementary bases on the other strand through hydrogen bonds.

The double helical structure of normal DNA takes a right-handed form called the B-helix. The helix makes one complete turn approximately every 10 base pairs. B-DNA has two principal grooves, a wide major groove and a narrow minor groove. Many proteins interact in the space of the major groove, where they make sequence-specific contacts with the bases. In addition, a few proteins are known to make contacts via the minor groove. Several structural variants of DNA are known. In A-DNA, which forms under conditions of high salt concentration and minimal water, the base pairs are tilted and displaced toward the minor groove. Left-handed Z-DNA forms most readily in strands that contain sequences with alternating purines and pyrimidines. DNA can form triple helices when two strands containing runs of pyrimidines interact with a third strand containing a run of purines. B-DNA is generally depicted as a smooth helix; however, specific sequences of bases can distort the otherwise regular structure. For example, short tracts of A residues interspersed with short sections of general sequence result in a bent DNA molecule. Inverted base sequences, on the other hand, produce cruciform structures with four-way junctions that are similar to recombination intermediates. Most of these alternative DNA structures have only been characterized in the laboratory, and their cellular significance is unknown. Naturally occurring DNA molecules can be circular or linear.
The genomes of single-celled bacteria and archaea (the prokaryotes), as well as the genomes of mitochondria and chloroplasts (certain functional structures within the cell), are circular molecules. In addition, some bacteria and archaea have smaller circular DNA molecules called plasmids that typically contain only a few genes. Many plasmids are readily transmitted from one cell to another. For a typical bacterium, the genome that encodes all of the genes of the organism is a single contiguous circular molecule that contains a half million to five million base pairs. The genomes of most eukaryotes and some prokaryotes contain linear DNA molecules called chromosomes. Human DNA, for example, consists of 23 pairs of linear chromosomes containing three billion base pairs.

In all cells, DNA does not exist free in solution but rather as a protein-coated complex called chromatin. In prokaryotes, the loose coat of proteins on the DNA helps to shield the negative charge of the phosphodiester backbone. Chromatin also contains proteins that control gene expression and determine the characteristic shapes of chromosomes. In eukaryotes, a section of DNA between 140 and 200 base pairs long winds around a discrete set of eight positively charged histone proteins (a histone octamer), forming a spherical structure called the nucleosome. Additional histone octamers are wrapped by successive sections of DNA, forming a series of nucleosomes like beads on a string. Transcription and replication of DNA are more complicated in eukaryotes because the nucleosome complexes have to be at least partially disassembled for the processes to proceed effectively.

Most prokaryote viruses contain linear genomes that typically are much shorter and contain only the genes necessary for viral propagation. Bacterial viruses called bacteriophages (or phages) may contain both linear and circular forms of DNA. For instance, the genome of bacteriophage λ (lambda), which infects the bacterium Escherichia coli, contains 48,502 base pairs and can exist as a linear molecule packaged in a protein coat. The DNA of phage λ can also exist in a circular form (as described in the section Site-specific recombination) that is able to integrate into the circular genome of the host bacterial cell. Both circular and linear genomes are found among eukaryotic viruses, but they more commonly use RNA as the genetic material.

The strands of the DNA double helix are held together by hydrogen bonding interactions between the complementary base pairs. Heating DNA in solution easily breaks these hydrogen bonds, allowing the two strands to separate—a process called denaturation or melting. The two strands may reassociate when the solution cools, reforming the starting DNA duplex—a process called renaturation or hybridization. These processes form the basis of many important techniques for manipulating DNA. For example, a short piece of DNA called an oligonucleotide can be used to test whether a very long DNA sequence has the complementary sequence of the oligonucleotide embedded within it. Using hybridization, a single-stranded DNA molecule can capture complementary sequences from any source. Single strands from RNA can also reassociate. DNA and RNA single strands can form hybrid molecules that are even more stable than double-stranded DNA. These molecules form the basis of a technique that is used to purify and characterize messenger RNA (mRNA) molecules corresponding to single genes.
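The base pairing and hybridization ideas above lend themselves to a small worked example. The sketch below is illustrative only (the sequences and function names are invented): it builds the complement of an oligonucleotide probe and scans a longer single-stranded sequence for positions where the probe would hybridize.

```python
# Toy illustration of Watson-Crick pairing and probe hybridization.
# A pairs with T, G pairs with C; the complement is reversed because the
# two strands of a duplex are antiparallel.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(strand: str) -> str:
    return strand.upper().translate(COMPLEMENT)[::-1]

def probe_binding_sites(long_seq: str, probe: str) -> list:
    """0-based positions in long_seq where the probe's complement occurs."""
    target = reverse_complement(probe)
    return [i for i in range(len(long_seq) - len(target) + 1)
            if long_seq[i:i + len(target)] == target]

genomic_fragment = "TTGACGTAACGCATGGCCGTAACGCATAA"
print(reverse_complement("ATGCGTTAC"))                     # GTAACGCAT
print(probe_binding_sites(genomic_fragment, "ATGCGTTAC"))  # [5, 18]
```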
DNA melting and reassociation can be monitored by measuring the absorption of ultraviolet (UV) light at a wavelength of 260 nanometres (billionths of a metre). When DNA is in a double-stranded conformation, absorption is fairly weak, but when DNA is single-stranded, the unstacking of the bases leads to an enhancement of absorption called hyperchromicity. Therefore, the extent to which DNA is single-stranded or double-stranded can be determined by monitoring UV absorption. After a DNA molecule has been assembled, it may be chemically modified—sometimes deliberately by special enzymes called DNA methyltransferases and sometimes accidentally by oxidation, ionizing radiation, or the action of chemical carcinogens. DNA can also be cleaved and degraded by enzymes called nucleases.
A Sun-synchronous orbit (sometimes called a heliosynchronous orbit) is a geocentric orbit which combines altitude and inclination in such a way that an object in that orbit will appear to keep the same position, from the perspective of the Sun, during its orbit around the Earth. More technically, it is an orbit arranged in such a way that it precesses once a year. The surface illumination angle will be nearly the same every time that the satellite is overhead. This consistent lighting is a useful characteristic for satellites that image the Earth's surface in visible or infrared wavelengths (e.g. weather and spy satellites) and for other remote sensing satellites (e.g. those carrying ocean and atmospheric remote sensing instruments that require sunlight). For example, a satellite in sun-synchronous orbit might ascend across the equator twelve times a day, each time at approximately 15:00 mean local time. This is achieved by having the osculating orbital plane precess (rotate) approximately one degree eastward each day with respect to the celestial sphere, to keep pace with the Earth's movement around the Sun. The uniformity of Sun angle is achieved by tuning the inclination to the altitude of the orbit (details in the section "Technical details") such that the extra mass near the equator causes the orbital plane of the spacecraft to precess at the desired rate: the plane of the orbit is not fixed in space relative to the distant stars, but rotates slowly about the Earth's axis. Typical sun-synchronous orbits are about 600–800 km in altitude, with periods in the 96–100 minute range, and inclinations of around 98° (i.e. slightly retrograde compared to the direction of Earth's rotation: 0° represents an equatorial orbit and 90° represents a polar orbit). Special cases of the sun-synchronous orbit are the noon/midnight orbit, where the local mean solar time of passage for equatorial longitudes is around noon or midnight, and the dawn/dusk orbit, where the local mean solar time of passage for equatorial longitudes is around sunrise or sunset, so that the satellite rides the terminator between day and night. Riding the terminator is useful for active radar satellites, as the satellites' solar panels can always see the Sun, without being shadowed by the Earth. It is also useful for some satellites with passive instruments that need to limit the Sun's influence on the measurements, as it is possible to always point the instruments towards the night side of the Earth. The dawn/dusk orbit has been used for solar-observing scientific satellites such as Yohkoh, TRACE, Hinode and PROBA2, affording them a nearly continuous view of the Sun.

Equation (24) of the article Orbital perturbation analysis (spacecraft) gives the angular precession per orbit for an orbit around an oblate planet as

Δψ = −3π (k_J2 / (μ p²)) cos i

where
- k_J2 is the coefficient of the second zonal term (1.7555 · 10^10 km^5/s^2) related to the oblateness of the Earth (see Geopotential model),
- μ is the gravitational constant of the Earth (398600.440 km^3/s^2),
- p is the semi-latus rectum of the orbit,
- i is the inclination of the orbit to the equator.

An orbit will be sun-synchronous when the precession rate, ρ = Δψ / P, equals the mean motion of the Earth about the Sun, which is 360° per sidereal year (1.99106 · 10^−7 radians/s), so we must set Δψ / P = 1.99106 · 10^−7 rad/s, where P is the orbital period.
As the orbital period of a spacecraft is P = 2π √(a³/μ) (where a is the semi-major axis of the orbit), and as p ≈ a for a circular or almost circular orbit, it follows that

ρ = Δψ / P = −(3/2) (k_J2 / (√μ · a^(7/2))) cos i

or, when ρ is 360° per year,

cos i = −(a / 12 352 km)^(7/2).

As an example, for a = 7200 km (the spacecraft about 800 km over the Earth's surface) one gets with this formula a sun-synchronous inclination of 98.696°. Note that according to this approximation cos i equals −1 when the semi-major axis equals 12 352 km, which means that only smaller orbits can be sun-synchronous. The period can be in the range from 88 minutes for a very low orbit (a = 6554 km, i = 96°) to 3.8 hours (a = 12 352 km, but this orbit would be equatorial with i = 180°). (A period longer than 3.8 hours may be possible by using an eccentric orbit with p < 12 352 km but a > 12 352 km.)

If one wants a satellite to fly over some given spot on Earth every day at the same hour, it can do between 7 and 16 orbits per day, as shown in the following table. (The table has been calculated assuming the periods given. The orbital period that should be used is actually slightly longer. For instance, a retrograde equatorial orbit that passes over the same spot after 24 hours has a true period about 365/364 ≈ 1.0027 times longer than the time between overpasses. For non-equatorial orbits the factor is closer to 1.)

| Orbits per day | Period | Height above Earth's surface (km) | Maximal latitude |
| 16 | = 1 hr 30 min | 282 | 83.4° |
| 15 | = 1 hr 36 min | 574 | 82.3° |
| 14 | ≈ 1 hr 43 min | 901 | 81.0° |
| 13 | ≈ 1 hr 51 min | 1269 | 79.3° |
| 11 | ≈ 2 hrs 11 min | 2169 | 74.0° |
| 10 | = 2 hrs 24 min | 2730 | 69.9° |
| 9 | = 2 hrs 40 min | 3392 | 64.0° |
| 7 | ≈ 3 hrs 26 min | 5172 | 37.9° |

When one says that a sun-synchronous orbit goes over a spot on the Earth at the same local time each time, this refers to mean solar time, not to apparent solar time. The Sun will not be in exactly the same position in the sky during the course of the year (see Equation of time and Analemma).

The Sun-synchronous orbit is mostly selected for Earth observation satellites that should be operated at a relatively constant altitude suitable for their Earth observation instruments, this altitude typically being between 600 km and 1000 km over the Earth's surface. Because the deviations of the Earth's gravitational field from that of a homogeneous sphere are quite significant at such relatively low altitudes, a strictly circular orbit is not possible for these satellites. Very often a frozen orbit is therefore selected that is slightly higher over the Southern Hemisphere than over the Northern Hemisphere. ERS-1, ERS-2 and Envisat of the European Space Agency, as well as the MetOp spacecraft of EUMETSAT, are all operated in Sun-synchronous, "frozen" orbits.
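Returning to the inclination relation derived above, the following sketch (Python, using the constants quoted in the text; the function name is ours) computes the sun-synchronous inclination for a circular orbit of a given semi-major axis and reproduces the a = 7200 km example.

```python
import math

# Sketch: sun-synchronous inclination from rho = -(3/2) * k_J2 * cos(i) / (sqrt(mu) * a^3.5),
# using the constants quoted in the text.

K_J2 = 1.7555e10        # km^5/s^2, second zonal term coefficient
MU = 398600.440         # km^3/s^2, Earth's gravitational parameter
RHO_SS = 1.99106e-7     # rad/s, 360 degrees per sidereal year

def sun_synchronous_inclination(a_km: float) -> float:
    """Inclination (degrees) of a circular sun-synchronous orbit with semi-major axis a."""
    cos_i = -(2.0 / 3.0) * RHO_SS * math.sqrt(MU) * a_km**3.5 / K_J2
    if cos_i < -1.0:
        raise ValueError("orbit too high to be sun-synchronous")
    return math.degrees(math.acos(cos_i))

print(f"{sun_synchronous_inclination(7200):.3f} deg")   # ~98.7 deg, as in the example above
```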
Solving Systems of Equations
This unit introduces how to systematically solve a system of equations, namely linear equations. Examples of non-linear systems, including systems of 3 unknowns, will be of emphasis.

Graphs of Trigonometric Functions
The unit focuses primarily on how to graph periodic sinusoidal functions, and how to identify features of a waveform to produce an equation by inspection.

Polar Coordinate Functions
An introduction to the polar coordinate system.

Exponents and Radicals
This unit is an extension of what was introduced in Math 1131. To learn how to work with radicals, knowing your exponent laws is crucial. Hence, this unit begins with a thorough review. This chapter introduces you to exponential functions, and how they can be solved using logarithms.

Trigonometric Identities and Equations

Introduction to Periodic Functions
In Part 1 of this course (Math 1131), you were newly introduced to the trigonometric functions: sine, cosine, and tangent. You learned how you can use these functions to solve triangles by setting up ratios, but you never learned what they looked like graphed. It turns out that if you select angles to represent θ for, let's say, y = sin(θ) or y = cos(θ), starting from θ = 0°, a repeating wave is formed. In fact, after every ±360°, the wave cycle repeats itself. Hence, trigonometric functions are commonly used to represent periodic functions – equations that, when graphed, repeat themselves indefinitely unless limits or bounds are defined. When the periodic function produces smooth symmetrical waves, where any portion of the wave can be horizontally translated onto another portion of the curve, it is referred to as a sinusoidal function. Anything that repeats itself over and over can be represented using a periodic function; this includes pistons moving up and down inside a car engine, a Ferris wheel, sound waves, television signals, etc. The behaviour of all of these can be represented mathematically using periodic functions (sine, cosine, tangent) – this is one way we use math to quantify and make sense of the world with numbers. Our main focus in this chapter will be the sine and cosine functions, which have wide applications to alternating current (electricity), mechanical vibrations, and so forth. Our task in this chapter will be to graph such functions containing either sine or cosine, building upon our earlier methods for graphing, and to extract useful information from the function.

Sine Function Analysis
Recall that sine is a ratio comparing the length of the opposite side of a right triangle to the length of its hypotenuse. If we keep the hypotenuse constant at a length of 1, at each given angle starting from an angle of 0°, the ratio will be different. A completed table showing the outputs of sine at angles from 0° to 360° is shown below (intervals of 30° were used for simplicity's sake). If you plot the angles along the x-axis and the outputs along the y-axis, you should get a waveform that looks like this:

Had you continued from 360° to 720°, the wave would have repeated itself. Each repeated portion of the curve is called a cycle. Therefore, in the wave above, you see only 1 complete cycle. The frequency is the number of cycles completed in a given interval (usually the period). The period of any periodic waveform is the horizontal distance occupied by one cycle.
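A short sketch (Python) that generates the kind of table described above: sine sampled every 30° from 0° to 360°. The formatting is ours; it simply reproduces the sampling the lesson describes.

```python
import math

# Sketch: tabulate y = sin(theta) every 30 degrees from 0 to 360,
# the same sampling used for the table in this lesson.

for theta in range(0, 361, 30):
    y = math.sin(math.radians(theta))
    print(f"{theta:3d} deg   sin = {y:6.3f}")
```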
Summary: For the waveform above:
- Number of cycles displayed: 1
- Period: 360° / 1 cycle
- Frequency: cycles / period = 1 cycle per 360°

Depending on the units used to represent the horizontal axis, the period is commonly expressed as:
- degrees per cycle
- radians per cycle
- seconds per cycle
- or any units per cycle

The back-and-forth movement of a waveform is referred to as oscillation. Notice how y = sin(θ) oscillates between 1 and -1 along the y-axis. In nature, however, not everything oscillates between -1 and 1. Similarly, not everything starts at x = 0° and ends at 360° either. In addition, the period might be shorter than 360 degrees per cycle, thereby creating several cycles per 360°, or longer than 360°. Whenever there is a difference in the wave's amplitude (also referred to as the height), a difference in period, or a difference in where the cycle begins (horizontal offset), this is called a transformation of the periodic function. The first transformation we'll focus on is amplitude. The amplitude refers to the distance from the wave's center to its peak. To modify the amplitude of a waveform, a factor a (representing any real number) placed in front of the trigonometric function is multiplied with it. Remember that originally, our equation was y = 1 sin(θ), where the amplitude equaled 1. This is why the wave oscillated between -1 and +1. Had our equation been, say, y = 1.5 sin(θ), then the wave would have oscillated between -1.5 and +1.5 (the amplitude being 1.5). The relationship between the factor a and amplitude is summarized below. Notice that whatever your a value is, you always take its absolute value to quantify the wave's amplitude. Hence, if your equation was y = -3 sin(θ), the amplitude would be a = 3 because the absolute value of -3, written |-3|, is positive 3.
- Therefore, if you ever report a negative amplitude, it is wrong.

Test yourself: Given the sine waves shown below, state the equations for each given what you just learned about amplitude:
Solutions
Red: y = 2 sin(θ)
Black: y = 1 sin(θ)
Fuchsia: y = 1.5 sin(θ)
Green: y = -2 sin(θ) → more on this below

Notice how when the amplitude is greater than one, the wave gets taller (or skinnier). This is why modifying the amplitude is often referred to as a vertical stretch or vertical compression, depending on the a value.
- If a > 1 or a < -1 → vertical stretch
- If -1 < a < 1 → vertical compression

When the leading factor a is negative, it causes the wave to reflect about its center. The best way to show that the wave is reflected when graphing (next section) is to reverse the signs of the wave's peaks. In other words, locate the maximum and minimum points within a cycle, then change the y-coordinates from positive to negative or negative to positive, as shown below:

Cycle, Period, and Frequency
To manipulate the period of a waveform, that is, to make a cycle shorter or longer, the factor b needs to change. Let's see what happens when you change b from 1, which is what it was originally in y = sin(1θ), to 2 or 0.5: Since two cycles were completed in y = sin(2θ) within the same interval as one cycle of y = sin(θ), and only half a cycle was completed in y = sin(0.5θ), we can conclude that:
- When b is between -1 and 0 or 0 and 1, the period per cycle increases − gets bigger relative to 360º.
- -1 < b < 1 (where b ≠ 0), period increases.
- When b is less than -1 or greater than 1, the period per cycle decreases − gets smaller relative to 360º.
- b > 1 or b < -1, period decreases.
- Therefore, the factor b represents the function's number of cycles per 360°.

The relation between the cycle and period can be summarized as: period = 360° / b (when working in degrees) or period = 2π / b (when working in radians). Remember that both of these equations can be manipulated to isolate b if you've been given the graph of a wave, from which you can locate the period along the horizontal axis and then solve for b. Because frequency represents cycles per period, the frequency can be found by taking the reciprocal of the period: frequency = 1 / period.

Question: Find the period and frequency of the function y = sin(6x) both in degrees and radians, and graph one cycle.

From the equation, it's clear that b = 6. We know that when b > 1, the period gets smaller. In fact, 6 waves can fit in the span of 1 wave cycle whose period is 360° or 2π rad (shown below). The period is therefore 360°/6 = 60° (or 2π/6 = π/3 rad), and the frequency is 6 cycles per 360° (2π rad).

Remember, not all cycles start at x = 0; some might start before or after. To shift a periodic function to the left of the origin, a value must be added to the angle (or whatever the variable represents), and to shift it to the right, a value must be subtracted from the variable. Another term for phase shift commonly found in the literature is horizontal translation. For instance:
- y = sin(x + 45°) shifts the wave to the left by 45° (see blue wave)
- y = sin(x – 45°) shifts the wave to the right by 45° (see green wave)
- Generally, this value is denoted by the letter c [ y = a·sin(bθ + c) ], and is related to the phase shift by the formula: phase shift = −c / b (a negative result means a shift to the left, a positive result a shift to the right).
- In the examples provided above, b was equal to 1, so the formula technically wasn't needed. Therefore, you use the formula when the periodic function contains a b value other than 1.

Another interesting feature of the value c is that it's usually an excellent indicator of whether the equation is written in degrees or radians. As illustrated above, c is in degrees, therefore you'd use the formula 360/b instead of 2π/b to find the period. While there's no focus on vertical shifts in this course, it's worth mentioning how they work. If you want to shift the wave up or down, ±d is applied to the equation: y = a·sin(bθ + c) + d. A positive d value shifts it up d units, and a negative d value shifts it down d units. Another term for vertical shift commonly found in the literature is vertical translation upwards or downwards.

Putting it all Together
Using the formulas shown above, you can decode information contained in any sinusoidal function. The video below shows how this is done, and a short worked sketch follows.
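Since the decoding rules of this lesson are just arithmetic, they can be collected into a small helper. The sketch below is illustrative only (the function name and the example coefficients are invented) and uses the degree-based formulas from this lesson.

```python
# Sketch: decode the features of y = a*sin(b*theta + c) + d using the
# degree-based formulas from this lesson.

def describe_sinusoid(a, b, c=0.0, d=0.0):
    return {
        "amplitude": abs(a),           # always reported as a positive number
        "reflected": a < 0,            # a negative a reflects the wave about its centre
        "period_deg": 360.0 / abs(b),  # period = 360 deg / b
        "cycles_per_360": abs(b),      # frequency relative to one 360-deg span
        "phase_shift_deg": -c / b,     # negative -> shift left, positive -> shift right
        "vertical_shift": d,
    }

# Example: y = -2 sin(3*theta + 90 deg) + 1
print(describe_sinusoid(a=-2, b=3, c=90, d=1))
# {'amplitude': 2, 'reflected': True, 'period_deg': 120.0,
#  'cycles_per_360': 3, 'phase_shift_deg': -30.0, 'vertical_shift': 1}
```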
- Volcanic eruptions vary greatly in size.
- The eruption of Sumatra's Toba Volcano, 74,000 years ago, was the largest eruption known to have occurred on Earth in the last 25 million years. Toba is especially significant because it may have killed most people on Earth, creating a genetic bottleneck.
- Volcanologists classify volcanoes as active, dormant, or extinct.
- Volcanic eruptions may be effusive (involving lava flows) or explosive.
- Volcanoes pose many different potential hazards, including death or destruction by lava flows, volcanic gases, or volcanic ash. Secondary hazards from volcanoes include landslides and tsunamis, and effects on weather and climate.
- Scientists monitoring active volcanoes use several different approaches, but predicting eruptions is challenging.
- 50-60 volcanoes are erupting at any time – most in the Ring of Fire around the Pacific Ocean.
- The U.S. contains about 170 potentially active volcanoes, most in Alaska's Aleutian Islands. The most recent U.S. major eruptions have, however, been in Hawaii.
- Identifying minerals and naming igneous rocks is challenging in the field and is most often done by examining thin sections or by obtaining chemical analyses of rocks.
- The International Union of Geological Sciences (IUGS) has promulgated the most widely used rock-naming and classification system for volcanic rocks. For more precision, IUGS names are modified by adding textural descriptors.

4.1 Two Significant Eruptions

4.1.1 Tambora Volcano
The largest volcanic eruption in recent history occurred in 1815 when Mt. Tambora, on the island of Sumbawa, Indonesia, exploded and ejected 160 cubic kilometers of material into the air. The eruption created the large depression seen in Figure 4.1. We call depressions that form this way calderas. The Tambora caldera is 6 kilometers in diameter and 1,100 meters deep. It formed when Tambora's estimated 4,000-meter-high peak blew away and the ground collapsed as the magma chamber below emptied. Tambora is still an active volcano, and several minor eruptions have extruded lava on the caldera's floor since the big eruption in 1815.

The sound of Tambora's 1815 eruption was heard more than 1,400 kilometers away in the Molucca Islands (Figure 4.2). Thick layers of volcanic ash were deposited over a huge region, traveling thousands of kilometers from the eruption site. The ash eventually hardened to become tuff, a type of volcanic rock formed by consolidation of volcanic airfall deposits. Volcanologists estimate that Tambora's eruption killed at least 70,000 people, some because of the immediate effects of the explosion and ashflows, but most because of subsequent starvation and disease. Although the death toll on Sumbawa and nearby Lombok Island was huge, Tambora's effects were felt worldwide. In the months following the eruption, volcanic ash and sulfuric acid, carried around the globe by atmospheric circulation, blocked incoming sunlight and caused the "year without a summer." Average global temperatures fell by 0.5–1 °C (1–2 °F), leading to crop failures and major food shortages in the Northern Hemisphere. Famine led to food riots in northern Europe and contributed to a typhus epidemic in Ireland. The effects were not as great in North America, but still, frost killed most crops in New England and parts of Pennsylvania, and the region experienced snowfall in June and July.

4.1.2 Toba Volcano
We have recorded history documenting Tambora's effects, but it was a relatively small event compared to many earlier prehistoric eruptions.
Scientists interpret the history of those eruptions, and their effects on humans, using incomplete geological, anthropological, and other evidence. Toba Volcano, which erupted multiple times about 74,000 years ago, is an example. It is in northern Sumatra, Indonesia, about 2,500 kilometers (1,550 miles) northwest of Tambora Volcano (see Figure 4.2). Figure 4.3 shows Lake Toba, the largest volcanic lake in the world. It is 100 kilometers long, 30 kilometers wide, and up to 505 meters deep. This lake occupies a large caldera, in many ways similar to the one at Tambora, where the ground collapsed when lava spewed from a magma chamber below. Toba is a supervolcano. Its caldera is thousands of times larger than those of typical volcanoes and 100 times larger than Tambora's. Toba erupted several times, but the last of the eruptions 74,000 years ago – the one that created the present-day caldera – was the largest; it was the largest eruption known to have occurred on Earth in the last 25 million years. Subsequent activity, many years after the large eruption, created some small cinder cones and volcanic islands in the center of Lake Toba, and several younger volcanoes along the lake rim.

Geologists have identified a few other supervolcano eruptions that were larger than Toba. The largest known eruption, the Wah Wah Springs eruption, occurred in Utah 30 million years ago. Another huge eruption created the Fish Canyon Tuff in Colorado 28 million years ago. Figure 4.4 shows some climbers on a Fish Canyon Tuff outcrop. This tuff contains several different layers of slightly different compositions, all composed of thick layers of consolidated volcanic debris. The eruption that created the tuff occurred at La Garita Caldera in southwest Colorado. In all, it produced an estimated 5,000 cubic kilometers of volcanic ash and some coarser material. The tuff layers vary in color due to slightly different compositions and are easily eroded, leading to the colorful rounded outcrops seen in this photograph.

Although there were larger eruptions, Toba is especially significant because it may have affected humans and human diversity. The eruption produced more than 2,800 cubic kilometers of ash that circled the globe, depositing ash layers 15 centimeters deep over most of South Asia. The volcano spread noxious sulfur-rich gases in all directions. Ash and sulfuric acid in the air led to cooling, generally estimated to have been 3–5 °C in all parts of Earth. (Some researchers, however, think the cooling may have been much greater.) Whatever the exact amount of cooling, the effects were a global volcanic winter that lasted years and an overall Earth cooling that lasted, perhaps, 1,000 years. Although human ancestors have been around for more than 10 million years, scientists studying mutation rates have concluded that humans could not have been evolving independently that long – we are all too genetically similar to each other. Even before geologists dated the Toba eruption, geneticists had concluded that humans descended from a small number of ancestors slightly more than 70,000 years ago. There is some debate, but most archaeologists and geneticists believe that Toba's eruption caused a genetic bottleneck. Humans, 74,000 years ago, mostly lived west of Indonesia, in a downwind direction from Toba. So, the direct effects of ash and gases, the longer-term climate change, and loss of food must have been significant. Many scientists believe that the once larger human population was reduced to a few survivors.
Perhaps as few as 100 or 1,000 humans existed after Toba, greatly limiting genetic diversity. Although we do not know exactly where all humans lived before the Toba eruption, ample evidence suggests that there was major migration – after the effects of the eruption waned – from Africa north to Europe from between 70,000 and 60,000 years ago. Some studies also suggest that populations of humans in southern India and on some islands upwind of Toba survived the eruption too. Pliny the Elder (AD 23/24 – 79) was a Roman author and naturalist. He wrote the encyclopedic Naturalis Historia (Natural History), the first encyclopedia written. Pliny was fascinated by nature of all sorts, and when Mt. Vesuvius, in southern Italy, erupted in 79 AD, he took copious notes. The eruption had already destroyed the towns of Pompeii and Herculaneum when Pliny, an officer in the Roman Navy, sailed across the Gulf of Naples to rescue people stranded a few kilometers south of Pompei in the town of Stabiae. Unfortunately, Pliny was overcome by thick clouds of hot volcanic ash and gas and died during the rescue attempt. After the senior Pliny’s death, Pliny the Younger wrote about the eruption of Vesuvius and about his uncle’s demise. His descriptions, based in part on the notes of his uncle, were so detailed and insightful that today we call eruptions similar to that of Mt. Vesuvius Plinian eruptions. Mt. Vesuvius, best known for destroying Pompeii, has erupted many times since 79 AD – the last time in 1944 during the World War II Allied invasion of Italy. Because of its historical significance in a region of high population, Vesuvius has long been the focus of scientific investigation. Although it has not erupted in nearly 80 years, the volcano is considered one of the most dangerous in the world because of its violent eruptions and the many people who live nearby. The outskirts of Naples, with a population of more than 3 million people, is less than 10 km from Vesuvius, and other moderate sized towns sit in the volcano’s shadow. Today, Mt. Vesuvius is the only active volcano on the mainland of Europe. But, there are other active volcanoes in the Mediterranean region. Stromboli, an island volcano 225 km south of Vesuvius, has been erupting nearly continuously since 1932. Figure 4.6 is a night view of a Stromboli eruption in 2015. About 200 people live on Stromboli Island, less than 10 km from the site of active eruptions, and many tourists visit the island every year to watch the fireworks. Tourists, and the small number of residents can, in principle, be evacuated if need be, but there are periodic deaths when people linger too long after warnings are given. In 2019, a hiker died while trying to visit the summit to see an eruption close up. And, Stromboli continues to impress us today. Volcanology, derived from the name of the Roman god of fire, Vulcan, is the study of volcanoes. It is a hybrid science, involving investigations by geologists, geophysicists, geochemists, geodesists, archaeologists, and others. Figure 4.7 shows the first volcanological observatory, the Vesuvius Observatory near Mt. Vesuvius and Naples, that was founded in 1841. Today other dedicated volcano laboratories exist, including notably the United States Geological Survey’s Hawaiian Volcano Observatory on the Big Island of Hawaii. Volcanologists classify volcanoes as active, dormant, or extinct. The distinctions are imprecise and sometimes used in different ways by different people. 
Active volcanoes are those that are erupting today or that have erupted recently. Tambora, for example, is an active volcano, although it has not had a major eruption in 200 years. The Global Volcanism Program defines "recently" as meaning in the past 10,000 years, but some scientists are not that specific. And, in many places we don't have a record of any sort that goes back 10,000 years. Dormant volcanoes are presently inactive but could reasonably be expected to erupt in the future. However, it is often hard to judge reasonably if a volcano may come back to life, and some dormant volcanoes may not even be recognized as volcanoes. The El Chichón volcano in Chiapas, Mexico, was considered extinct until it erupted in 1982. Mt. Lamington, in Papua New Guinea, was not known to be a volcano until it erupted in 1951. Extinct volcanoes are those that will probably never erupt again. If a volcano has not erupted for a long time – longer than the recurrence intervals of previous eruptions – it is generally considered extinct. This sort of determination requires extensive field work, mapping, and age dating. Such information is lacking for most of the world's volcanoes. But still, even with that information, scientists are fallible, and some presumed extinct volcanoes have come back to life. For example, Chaitén volcano in Chile, thought to be extinct, erupted in 2008 but had not erupted in the previous 9,000 years.

Volcanologists study all three types of volcanoes (active, dormant, or extinct), but their tools and purposes vary. Those studying active volcanoes are concerned with predicting volcanic eruptions and protecting human lives. Active volcanoes pose many different potential hazards. Primary hazards include death or destruction by lava flows, volcanic gases, or volcanic ash. Ash, which can rise as high as 40 km into the air, is particularly dangerous because it can cause jet engines to fail and planes to crash. Secondary hazards from volcanoes include landslides, tsunamis (destructive tidal waves), dust inhalation, and other things. In 1985, a lahar (a name given to volcanic mudflows) traveled 48 km down a river valley from the volcano Nevado del Ruiz and engulfed the town of Armero, Colombia. It killed more than 20,000 of the town's 29,000 residents. And, in 1986, CO2 released from volcanic Lake Nyos in Cameroon asphyxiated 2,000 people. On a larger time scale, eruptions can affect weather and climate. For example, the eruptions of Tambora in 1815, of Krakatau in 1883, and of Pinatubo in 1991 all released huge volumes of ash and dust into the atmosphere. The ash and dust led to years-long cold spells, causing crop failures and many deaths. Eruptions also add CO2, sulfate, and other gases to the atmosphere that contribute to long-term climate change. Even smaller volcanic events probably have impacts on climate, but the connections are less certain.

Scientists monitoring active volcanoes use several different approaches. They may install seismographs to measure Earth tremors, because tremors suggest magma movement and possible eruption (Figure 4.9). They may monitor uplift and deformation of Earth's surface using automated land-based surveying systems and satellite measurements, because magma flowing into a region or moving toward the surface often produces bulges in the land above. They may measure gas emissions and temperatures using direct monitoring instruments or, remotely, infrared spectroscopy, because changing gas compositions, increased emissions, or temperature increases often precede an eruption.
Furthermore, they may use other kinds of measurements that also yield valuable information, including measurement of gravity, magnetism, and electrical resistivity. Predicting eruptions and risks is a good thing, but often little can be done except to evacuate potential victims. People have, at times, tried to slow or divert lava flows, but most attempts have been unsuccessful. In 1935, the U.S. Army Air Corps bombed lava flowing from Hawaii's Mauna Loa to stop lava from reaching Hilo – but the lava kept flowing. Walls and earthen impoundments have had minor success several times when Mt. Etna, in Sicily, erupted. In the early 1990s, engineers diverted a flow into a trench and saved the Sicilian town of Zafferana Etnea from destruction. And in 1973, diversion barriers, and a large amount of water spraying to cool the lava, kept an eruption from destroying a town on Heimaey Island, Iceland. For the most part, however, attempts to control lavas have not worked.

Some volcanologists focus their studies on past eruptions – on dormant or extinct volcanoes – to learn how and why volcanoes erupt, to investigate Earth's evolution, or perhaps to discover the effects that volcanoes had on civilization. Their studies include geological mapping and determining the composition and age of volcanic rocks. Those volcanologists collect rock samples in the field, make thin sections for examining rocks with microscopes, and obtain rock and mineral analyses. They study the history of volcanic activity in a region, and how it has changed over time. Additional studies may focus on the ways that volcanic activity has affected Earth's atmosphere, climate, soil formation, and energy and ore deposits.

4.3 Volcanoes Today
Volcanoes can be pretty spectacular and they can be pretty dangerous. But, there are many different kinds that behave in many different ways. All of them, however, are formed by eruptions of magma – very hot molten rock. When we think of volcanoes, we commonly think of steep-sided conical mountains. For example, Figure 4.10 is a view of three spectacular volcanoes in Guatemala, the volcanoes Fuego, Acatenango and Agua. But most volcanic eruptions do not produce cones so beautifully formed. So, volcanic landforms vary greatly in size and shape. Eruptions may produce volcanic mountains such as those seen in this figure, but they may also yield much smaller, irregularly shaped volcanic hills, and some eruptions yield sheet-like flows that cover vast areas of continents. And perhaps the most common, but rarely seen, eruptions produce the flat plains that make up the floors of the world's oceans.

Figure 4.11 shows some of the many volcanoes that erupted between 1986 and 2018. In the United States, volcanoes in Alaska's Aleutian Islands, on Hawaii, and Mt. St. Helens in Washington have erupted during this time span but are not labeled on this map. According to the United States Geological Survey, North America contains nearly 200 active volcanoes. This number is a bit misleading, however, because many of these volcanoes have not erupted for thousands of years. They could, however, erupt in the future. Many other North American volcanoes are considered extinct but, as pointed out previously, extinct volcanoes occasionally come back to life. Most U.S. active volcanoes are in Alaska's Aleutian Islands, where one or two eruptions occur every year. Notable eruptions in the last decade include Mount Cleveland, Bogoslof Island, Pavlof Volcano, and Kanaga Volcano.
But, as seen in Figure 4.12, the Aleutians contain many more volcanoes than just those four. There are about 20 active volcanoes in the Pacific coast states (Washington, Oregon, California). Other active volcanoes are found in New Mexico, Arizona, Colorado, Idaho, and Wyoming. But, in the contiguous United States, only three volcanoes have erupted since our country was created: Mount St. Helens (Washington) last erupted in 2008, Lassen Peak (California) last erupted in 1917, and Mount Hood (Oregon) last erupted in 1791.

Two large active volcanoes, Mauna Loa and Mauna Kea, make up most of the island of Hawaii, the largest of the Hawaiian Islands. Unlike some volcanoes, these two have broad conical shapes. They are shield volcanoes, the largest kind of volcanic mountain. Shield volcanoes have gentle slopes because they form from basaltic lavas, which are very fluid and can spread out and flow long distances. Mauna Loa, the southern of the two volcanoes, last erupted in 1984, but Mauna Kea, the northern volcano, has not erupted for 4,000 years. Figure 4.13 is a view of Mauna Loa from near the summit of Mauna Kea. The foreground contains basaltic rubble. These volcanoes appear as two topographic highs on the map below in Figure 4.14a.

A third Hawaiian volcano, Kilauea, has erupted many times recently. Kilauea, which is more of a region of volcanic activity than a single volcano, occupies the southeast corner of the big island – Hawaii (Figure 4.14). The inset map on the right is an enlargement of that region. Kilauea is the most active U.S. volcano and one of the most active in the world. The volcano has a summit containing several craters just southwest of the village of Volcano, but the summit is really just a bump on the side of the much more massive Mauna Loa Volcano. During the past 70 years, Kilauea has erupted more than 30 times, and eruptions were nearly continuous from 1983 to 2018, most in craters along Kilauea’s East Rift Zone (labeled in Figure 4.14b). Between 2015 and 2018, lava flowed from fissures in the eastern part of the rift zone. The fissure eruptions largely destroyed several towns, including Kaimu, Kapoho, and Kalapana. During the past several years, intermittent eruptions have also occurred in the Halema‘uma‘u crater at Kilauea’s summit.

More than 500 volcanoes have erupted since the beginning of recorded history, and volcanic activity dates back to the early days of Earth. Volcanic rocks more than 3 billion years old are found around the globe, but we cannot estimate the total number of eruptions that have occurred because much of the evidence is gone – covered or erased by later geological events. Worldwide, about 20 volcanoes are actively erupting at any given time. Fifty to sixty volcanoes erupted at some time during the spring of 2021, when I was writing this chapter. Figure 4.15 shows the volcanoes that were erupting on May 28, 2021. Most are in the Ring of Fire around the margins of the Pacific Ocean. Click on the link below the figure to see what is erupting as you read this.

4.4 Effusive and Explosive Eruptions

All volcanic eruptions involve magmas reaching Earth’s surface, but eruptions vary in violence and size. Volcanic eruptions fall into two main categories: effusive eruptions and explosive eruptions. Effusive eruptions are relatively low energy compared with violent explosive eruptions that blast magma and fragmented material into the air. For now, we will focus mostly on effusive eruptions and leave explosive eruptions for the next chapter.
Some effusive eruptions are small and only produce spatter cones (steep-sided mounds of welded lava fragments) around small vents (any opening where magma reaches the surface). Other eruptions may produce larger lava fountains, or flows from fissures, that yield localized lava flows. Figures 4.16 to 4.19 are photos of Hawaiian eruptions. Figure 4.16 shows a spatter cone and a lava lake that formed in Kilauea’s Pu‘u ‘Ō‘ō crater in 2013. Figures 4.17 and 4.18 show 2011 East Rift Zone fissure eruptions where lava fountains are occurring. And Figure 4.19 shows a glowing river of lava surrounded by hardened basalt that is slightly older. This lava flow, which occurred in 2007, was also in Kilauea’s East Rift Zone.

Eruptions of the sorts that typify the recent activity in Hawaii are sometimes called Hawaiian eruptions, or Hawaiian-type eruptions (even if they do not occur in Hawaii). More properly, we call any eruption that involves lava flows on the surface an effusive eruption. Effusive eruptions may be single events, or multiple eruptions in the same area, but the volume and violence of the eruptions are generally relatively small and localized compared with other large eruptions that may have global impacts. Although they occur in other places, most of the best photographs of effusive eruptions come from Hawaii, because that is the site of much recent volcanic activity and because photographic techniques today are much better than in the past. The maps in Figure 4.14 show the part of Hawaii where the most recent volcanism has occurred.

Magmas that produce effusive eruptions are generally basaltic. This is because mafic magmas have low viscosities and generally do not retain high gas contents or gas bubbles that could power a large explosive event. Less common intermediate and silicic lava flows are more viscous than basalt and may form blocky lava flows or steep-sided lava domes – hills of lava too viscous to spread out. Extensive rhyolitic flows are uncommon because silicic magmas tend to erupt explosively instead of flowing effusively.

Effusive eruptions begin when magma moves upwards through fractures and eventually reaches the surface. If eruption activity is concentrated in one place, a single volcano may form. But commonly, as in Hawaii today, linear fracture zones lead to multiple fissure eruptions that occur in rift zones over considerable distances. Kilauea has two rift zones, labeled in the map of Figure 4.14. The East Rift Zone extends 50 kilometers from Kilauea’s summit to the ocean, and then another 80 kilometers beneath the sea. The Southwest Rift Zone, which is not as active, extends a shorter distance from the summit and has only a small part under the ocean. The photos in Figure 4.18, above, show recent eruptions in the East Rift Zone. Figure 4.20 is a photo of hardened basalt from an earlier eruption in the same area.

Effusive eruptions can be spectacular, especially when viewed at night. The photo in Figure 4.21 shows flows emanating from Mt. Etna, Italy, in 2001. Although only moving 2-3 meters per hour, the lava eventually reached the town of Nicolosi and caused considerable damage. Taking photos like this one, with glowing lava rivers, is generally only possible after dark. During the day, steam and smoke reflect sunlight, masking any views of flowing lava. Fluid lavas generally do not explode, but they can contain dissolved gases that expand and propel lava fountains tens to hundreds of meters into the air.
Figure 4.22 shows a lava fountain at the summit of Mount Etna, Italy, in 2021, and Figure 4.23 shows multiple fountains and lava flows at Bardarbunga Volcano, Iceland, in 2014. When fountaining magma returns to Earth it creates lava lakes and lava flows such as those in the photos in this chapter. Once hardened into rock, lavas may contain vesicles that were once the gas bubbles that powered fountains.

4.4.1 Different Kinds of Lava Flows

Basaltic lava flows fall conveniently into three categories; ʻaʻā and pāhoehoe are the most common, and blocky lava flows are less common. Basaltic lava tends to spread out, and lava flows of all three kinds tend to be thin, perhaps up to a few meters thick. The different kinds of flows may be intermingled. The names ʻaʻā and pāhoehoe derive from descriptions of flows on Hawaii. Pāhoehoe flows have a smooth, billowy, or ropy surface. ʻAʻā flows have a rough blocky, spiny, or jagged surface akin to clinker. The rougher texture develops as partially hardened lava tumbles over itself during gradual flow advance. Figure 4.24 is a photo of lava in Craters of the Moon National Monument, Idaho; the smooth flows in this photo are pāhoehoe and the rough flows are ʻaʻā. Figure 4.25 shows a blocky lava flow in Utah.

Figure 4.26a shows a June 1986 eruption from a lava vent on the side of Kilauea’s Pu‘u ‘Ō‘ō cone. The glowing lava rivers flowed for days and have flowed intermittently since then, creating both ʻaʻā and pāhoehoe basalts. Lava from several different East Rift Zone eruptions has traveled downhill several kilometers and covered roads near the Hawaiian coast. Figure 4.26b shows partially solidified basalt and a no-longer-needed “No Parking” sign that restricted parking on Chain of Craters Road in Hawaii Volcanoes National Park. Some Hawaiian flows travel greater distances than the flows shown, but never more than 5 or 10 kilometers.

Lava flowing at the surface may move relatively slowly, but if magma is confined underground in lava tubes where it can maintain high temperatures, it will be less viscous and thus flow much faster. Often, swift magma rivers are capped by thin layers of solidified rock. If the rocks are too thin, they may collapse, producing a skylight and providing a view of the magma below. The geologists in Figure 4.27 are looking through a skylight at magma flowing rapidly through a lava tube in Hawaii Volcanoes National Park. Magmas of this sort may travel as fast as 35 kilometers an hour.

When lava cools and hardens, no matter its composition, it becomes a rock, also called lava. The geologists in Figure 4.27 are standing on recently formed basaltic lava (rock) created by a Kilauea eruption. This basalt, like most basalts, contains a great deal of volcanic glass and no mineral crystals that are visible with the naked eye. Cooling magma droplets from lava fountains often form fine glassy threads called Pele’s hair that collect on flow surfaces; Figure 4.28 shows some examples. Pele’s hair is ubiquitous on the flows in Kilauea’s East Rift Zone today.

4.4.2 Submarine Eruptions

Although eruptions on land are the ones we generally think about, some effusive eruptions are submarine. This photograph (Figure 4.29) shows pillow basalts, named for their shape, on the ocean floor near Hawaii. Pillows like these form when basalt erupts under water and, consequently, cools rapidly before it can flow any long distance. In some places, pillow basalts have been incorporated into continents and uplifted above sea level. They thus become more amenable to study.
Figure 4.30 shows an example of basalt pillows from Pt. Bonita, near San Francisco.

4.5 Naming Volcanic Rocks

We name some volcanic rocks based on their mineralogy. The common primary minerals in volcanic rocks are generally quartz, K-feldspar, plagioclase, muscovite, biotite, amphibole, pyroxene, or olivine (Table 2.1 in Chapter 2). These are the essential minerals sometimes used to assign rock names. However, many volcanic rocks contain small or large amounts of volcanic glass instead of minerals, or contain mineral crystals that are too fine grained for easy identification even with the aid of thin sections and a petrographic microscope.

Although a rock may contain only fine grains, larger phenocrysts, if present, help distinguish different kinds of volcanic rocks. Figure 4.31 shows the common phenocrysts in rocks ranging from those that are silica-rich to those that are relatively silica-poor. These phenocryst minerals are the same minerals that may be present as microscopic crystals in the groundmass. Quartz and K-feldspar are generally restricted to relatively silicic rocks, olivine and pyroxene to relatively mafic rocks. The other minerals are most common in intermediate rocks. Plagioclase is a solid-solution mineral in volcanic rocks and varies from being Na-rich in silicic rocks to being Ca-rich in mafic and ultramafic rocks. Figure 4.31 is only an approximation because the minerals present as phenocrysts also depend on other things besides silica content (the horizontal axis in the figure). And in some porphyries, where the amount of phenocrysts is small compared with the amount of groundmass, phenocryst mineralogy may not represent the overall rock composition well. So, sometimes classifying and naming volcanic rocks based on mineralogy is problematic.

We commonly modify volcanic rock names based on rock textures. For example, this view (Figure 4.32a) of a porphyritic andesite shows an intermediate-composition rock that contains small lath-like plagioclase (light-colored) and a one-centimeter-long phenocryst of black accessory augite (pyroxene). Smaller augite crystals are also present but hard to pick out. Besides texture, we may also modify rock names based on characterizing accessory minerals. For example, the vesicular olivine basalt shown in Figure 4.32b (from La Palma, Canary Islands) contains many 3-5 millimeter-sized vesicles and relatively large green olivine phenocrysts in a fine-grained matrix.

|Chapter 16 (Section 16.2) contains many photos of volcanic rocks in hand specimen and in thin section. If you want to know what they look like, go there.|

4.5.1 The IUGS System for Naming Volcanic Rocks

The International Union of Geological Sciences (IUGS) developed a standard classification system that divides volcanic rocks into fundamental groups, each of which contains several rock types:
• feldspar-containing effusive rocks
• pyroclastic rocks
• ultramafic rocks
• melilitic and related rocks
• high-Mg rocks
Unfortunately, identifying and naming rocks cannot be done the same way for all groups. Some names are assigned based on the minerals a rock contains, some based on the overall rock chemistry, and some by rock texture. The most important groups of volcanic rocks are feldspathic effusive rocks, pyroclastic rocks, and mafic rocks. These groups include the most common volcanic rocks at Earth’s surface. Rocks belonging to other groups are quite rare and we will not discuss naming them further in this book.
And in this chapter, we will only consider the IUGS system for feldspar-containing effusive rocks. Despite some lingering debate and some shortcomings, this IUGS system gives useful and consistent results for any rock that contains at least 10% by volume of identifiable quartz, alkali feldspar, plagioclase, and feldspathoids combined. One difficulty, however, is that the IUGS system often requires obtaining a rock analysis, something that cannot be readily done in the field.

4.5.1.1 The QAPF Classification System

Some effusive rocks contain quartz, alkali feldspar, and plagioclase, singly or in combination. If we can identify these minerals and their relative proportions, the proportions yield an IUGS rock name. Typically, we examine a thin section of a rock to determine the modes (volume %) of quartz (Q), alkali feldspar (A), plagioclase (P), and feldspathoids (F) present. After normalizing the percent values to 100%, we plot the results on the IUGS QAPF diagram, shown in Figure 4.33, to get a rock name.

The QAPF diagram contains two different triangles (top and bottom of Figure 4.33) that share the AP line. The top QAP triangle applies to rocks that contain quartz. The less commonly used upside-down bottom APF triangle applies to rocks that, due to low SiO2 content, do not contain any quartz. If rocks do not contain quartz, they often contain a mineral belonging to the feldspathoid group. So, for low-SiO2 rocks, feldspathoids (most commonly leucite or nepheline) may replace quartz, and naming is based on the modes of alkali feldspar, plagioclase, and a feldspathoid. Because quartz and feldspathoids are never found together, it is generally clear whether to use the top or bottom half of the diagram. Some of the rock names in this system include the term foidite, an abbreviation for feldspathoid-bearing rock.

Most igneous rocks contain minerals other than quartz, alkali feldspar, plagioclase, and feldspathoids. Some may contain abundant mafic minerals such as olivine or pyroxene, but the IUGS system for feldspathic effusive rocks does not consider this. Consequently, two rocks with dissimilar appearance may have the same name using this system. Other effusive rocks may be extremely mafic, containing very little quartz and feldspar, or none at all. For such rocks, we must use the IUGS classification system for ultramafic rocks. Furthermore, when looking at a hand specimen, we may not be able to distinguish plagioclase from alkali feldspar, making the standard IUGS system unworkable.

4.5.1.2 The TAS Alternative

Determining accurate mineral modes for volcanic rocks can be difficult or impossible due to small grain size or the presence of glass. Although some volcanic rocks contain larger phenocrysts, the modes of phenocrysts may not reflect the true mineral modes. The IUGS recommends using a TAS (total-alkali-silica) diagram (Figure 4.34) to name rocks if mineral modes cannot be determined accurately. The TAS system is, today, used more often than the QAPF system for naming volcanic rocks. This figure contains the same diagram presented in Chapter 3 (Figure 3.45) to name magmas, so an advantage is that this system gives a rock and its parent magma the same name. A second advantage is that this system is based on rock chemistry. So, even rocks that contain no visible minerals, or that contain large amounts of glass, can be unambiguously named.
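Because the QAPF procedure is mostly bookkeeping – normalize four modes and pick the correct triangle – a short sketch may help make it concrete. The sketch below is not from this book; the function name and the example rock are hypothetical, and a real classification would still require locating the plotted point within the named fields of Figure 4.33. (The TAS alternative, described next, works directly from a chemical analysis instead.)

```python
# A minimal sketch (not from this book) of the QAPF bookkeeping described above.
# qapf_coordinates and the example modes are illustrative assumptions.

def qapf_coordinates(q, a, p, f):
    """Normalize modal % of quartz (q), alkali feldspar (a), plagioclase (p),
    and feldspathoids (f), and return plotting coordinates for the QAPF diagram."""
    if q > 0 and f > 0:
        raise ValueError("Quartz and feldspathoids never occur together.")
    total = q + a + p + f
    if total < 10:
        raise ValueError("QAPF requires at least 10% Q + A + P + F by volume.")
    # Normalize the four modes to 100%
    q, a, p, f = (100 * x / total for x in (q, a, p, f))
    triangle = "QAP (upper)" if q > 0 else "APF (lower)"
    # Vertical coordinate: % quartz (upper triangle) or % feldspathoid (lower triangle)
    vertical = q if q > 0 else f
    # Horizontal coordinate: plagioclase share of total feldspar
    plag_ratio = 100 * p / (a + p) if (a + p) > 0 else 0.0
    return triangle, vertical, plag_ratio

# Example: a rock with modes of 25% quartz, 5% alkali feldspar, and 45% plagioclase;
# the remaining 25% (mafic minerals, glass) is ignored by the QAPF scheme.
print(qapf_coordinates(q=25, a=5, p=45, f=0))
```

With these assumed modes, the normalized values are roughly Q = 33% and a plagioclase ratio of about 90%, which plots in or near the dacite field on most published QAPF diagrams.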
In the TAS diagram, the vertical axis is the total alkali oxide (Na2O + K2O) content of a rock and the horizontal axis is the silica (SiO2) content. The diagonal red line divides rocks into alkalic (top) and sub-alkalic (bottom) categories. To use the TAS system, we must know the chemical composition of a rock, so we can plot the rock composition appropriately. Acquiring whole-rock chemical analyses is not difficult but is more complicated than making thin sections. It can be time consuming and requires specialized analytical instruments. The most common analysis methods used today are X-ray fluorescence spectroscopy (XRF) and inductively coupled plasma-mass spectrometry (ICP-MS). Chapter 1 contains descriptions and photographs of these instruments.

The TAS and QAPF classification systems are closely related, and the two systems yield the same, or nearly the same, rock names if we can apply them both. The horizontal axis in the TAS diagram (weight % SiO2) correlates with the vertical axis in the QAPF diagram (the amount of quartz or feldspathoids present in a rock) because the modal amounts of either quartz or feldspathoids are functions of SiO2 concentrations. Rocks that contain abundant quartz are relatively rich in SiO2; those that contain less quartz contain less SiO2. Rocks that contain feldspathoids (foidites) are even poorer in SiO2. The vertical axis in the TAS diagram (weight % Na2O + K2O) correlates with the horizontal axis in the QAPF diagram because feldspathic rocks high in Na2O + K2O generally contain abundant K-feldspar; those that contain less Na2O + K2O contain more plagioclase.

4.5.2 Textures of Volcanic Rocks

All basalts have about the same chemical composition and consist primarily of two minerals: plagioclase and clinopyroxene. But different basalts may not appear the same because they have different textures or fabrics. For example, basalt may be glassy, massive, porphyritic, vesicular, or scoriaceous. So, we may modify rock names with textural terms, and say perhaps that a rock is a glassy basalt, a massive basalt, a porphyritic basalt, etc.

The most important things that determine the texture of an igneous rock are magma composition and cooling history. Magma composition is important because it dictates the minerals that can form, and different minerals have different shapes and may form at different temperatures. Consequently, mineral crystals may grow into, or around, crystals of other minerals. Cooling history includes the cooling rate, whether cooling was continuous or occurred in stages, and crystallization temperatures. Temperature is key because it affects the rate of nucleation, which is the rate at which new crystals begin to form from just a few atoms. Temperature also controls the diffusion rate – the rate at which atoms can move (diffuse) to sites of growing crystals. At higher temperatures, nucleation and crystal growth are more rapid than at lower temperatures.

In the sections below, we review the most common textures seen in volcanic rocks. Many of the terms were introduced before, but here we try to put them in a larger context by comparing them and relating them to each other. These terms apply to rocks of many compositions, and some of them are also used to describe textures of plutonic rocks.

4.5.2.1 Grain Size and Related Characteristics

Cooling rate is the most important factor influencing grain size. Over time, crystals tend to grow larger at the expense of smaller crystals.
This process, termed Ostwald ripening, occurs because larger crystals have lower surface energies – and thus are more stable – than smaller crystals. Additionally, multiple crystals may grow into each other and coalesce to produce compound grains, or recrystallize to produce a single larger grain. So, slow cooling generally leads to large crystals and fast cooling does not.

Igneous rocks come with a wide variety of textures related to grain size. As discussed in Chapter 2, intrusive rocks generally are phaneritic, meaning they contain grains visible with the naked eye. Especially coarse-grained intrusive rocks have a pegmatitic texture. Extrusive rocks, in contrast with intrusive rocks, are generally aphanitic. Aphanitic rocks are very fine grained. Crystals, which may be microcrystals up to a few tenths of a millimeter across, can only be seen using a microscope. Cryptocrystalline rocks contain even finer crystals – crystals not visible even with a microscope. Figure 4.35 shows an aphanitic basalt that contains only microscopic grains. The rock has a massive texture, meaning it has uniform grain size, a random arrangement of grains, and no planar or linear features. A thin-section view of this rock would reveal that it is made of augite, plagioclase, and much volcanic glass.

4.5.2.2 Crystalline and Fragmental Rocks

Many extrusive rocks are partly to entirely crystalline. They are made of interlocking crystals that formed when molten rock cooled. For example, the latite in Figure 4.36 (composed of fine crystals of plagioclase, K-feldspar, and dark-colored biotite and augite) is a crystalline rock. Other extrusive rocks are fragmental, also called pyroclastic or volcaniclastic rocks. They consist of some combination of mineral crystals, rock fragments, pumice, or volcanic ash cemented together. The tuff in Figure 4.37 is a good example. The different rock constituents settled from a jumbled cloud of ash and coarser debris and subsequently became welded together during lithification.

4.5.2.3 Vesicles and Amygdules

Vesicles, gas bubbles trapped in magma as it cools and hardens, are a common feature of basalts and many other kinds of volcanic rocks. Vesicles are holes or cavities, spherical or sometimes elongated, that range from millimeters to centimeters in scale. Figure 4.38 shows a vesicular rock that contains millimeter-sized crystals of pale green olivine. Vesicular rocks such as pumice (Figure 4.39) and scoria (Figure 2.18) comprise mostly vesicles and so have very low densities. So, pumiceous rock can sometimes float on water, as seen in Figure 4.40. Scoria and pumice are quite similar, but scoria (which is generally made of dark-colored fine volcanic glass) forms from gas trapped in a lava flow, while pumice results from a gas-rich explosive eruption of foamy molten magma. Both kinds of rocks are made mostly of volcanic glass.

Most vesicles are roughly spherical, but sometimes vesicles form as long tubular vesicles called pipe vesicles (Figure 4.41). Pipe vesicles, which have the appearance of vertical worm holes, form at the base of a lava flow when gas bubbles rise vertically through viscous lava. Secondary minerals, including quartz, calcite, or zeolites, may crystallize in vesicles. The filled cavities are amygdules, and the term amygdaloidal describes rocks containing amygdules. Figure 4.42 shows an example of an amygdaloidal basalt from Kaiserstuhl, in southern Germany. Some of the pipe vesicles in Figure 4.41, too, are filled with secondary minerals.
Amygdules develop during cooling and low-temperature alteration of solid rock, and their contents may be hard to distinguish from minerals produced by low-grade metamorphism. Many of the minerals that precipitate in amygdules, especially some zeolites, are the same ones that form during metamorphism.

4.5.2.4 Porphyries and Phenocrysts

Some extrusive rocks are porphyritic. In these rocks, fine-grained material called groundmass surrounds larger mineral grains called phenocrysts. Often, but not always, porphyritic textures form because a magma cooled in stages instead of all at once. Figures 4.43 and 4.44 show two examples of porphyries: a basalt from the Canary Islands that contains both augite and plagioclase phenocrysts, and a basalt from Utah that contains olivine phenocrysts. For more photos of porphyries, see Figures 2.9 and 2.10. If phenocrysts are very small (like those in the vesicular basalts in Figures 4.32 and 4.38), we may refer to a rock as a microporphyry. Sometimes we use the term phyric to describe rocks that contain phenocrysts and aphyric for those that do not. Vitrophyric rocks contain a matrix made of glass; felsophyric rocks have a matrix made of quartz and feldspar.

Some porphyritic rocks have a groundmass composed of small but identifiable crystals. Often, however, the groundmass shows weak interference colors in thin section, contains significant amounts of glass, and identifying any minerals present is impossible. We use the term cryptocrystalline to describe a groundmass made of minute crystals that cannot be seen, even with a microscope, but that make the groundmass anisotropic.

Most phenocrysts are single mineral crystals, but some are compound. Glomeroporphyritic rocks contain apparent phenocrysts composed of mineral aggregates, or clots, called glomerocrysts. Glomerocrysts form when growing crystals are attracted to each other by surface tension and subsequently crystallize and become interlocked together. The thin-section view in Figure 4.45 shows a glomerocryst that contains plagioclase and augite crystals. The finer groundmass is mostly plagioclase and glass. Small plagioclase crystals are tabular with white to gray interference colors.

4.5.2.5 Variable Amounts of Volcanic Glass

Most extrusive rocks contain volcanic glass. Glass is isotropic and so appears black when viewed under crossed polars with a microscope. In Figure 4.46, for example, isotropic glass surrounds crystals that are mostly plagioclase. Rocks that are entirely glass are holohyaline. Obsidian and tachylite are names for silicic and basaltic holohyaline rocks, respectively. Most obsidian, however, contains tiny microcrystals that give the glass a turbid (somewhat cloudy) look in thin section. Rocks that contain a mix of crystals and glass, like the andesite in Figure 4.46, are hypocrystalline. Rocks composed entirely of crystals, such as the latite in Figure 4.36 and the lunar basalt in Figure 4.47, are holocrystalline rocks. Most holocrystalline rocks, however, are plutonic, not volcanic.

4.5.2.6 Other Microscopic Features

In principle, minerals that crystallize from magma should be homogeneous. Elements will diffuse in and out of growing crystals to maintain chemical equilibrium as temperature changes. Sometimes this does not happen, however, and crystals may become compositionally zoned, like the plagioclase crystal seen in a glassy rhyolite in Figure 4.48. The zoning in this crystal appears as vague rings, sort of like tree rings. In some zoned crystals, zones are more pronounced.
The plagioclase grain in Figure 4.48 contains concentric zones with different compositions. The different compositions have different optical properties and so show different shades of interference colors under crossed polars. This plagioclase grain also exhibits polysynthetic twinning, a kind of twinning involving multiple twin domains that are parallel to each other. Zoned phenocrysts are common in volcanic rocks because rapid cooling does not give crystals time to homogenize. But phenocrysts in plutonic rocks sometimes show zoning as well.

Volcanic glass is unstable at Earth’s surface and, over time, tends to devitrify (crystallize) and turn into minerals. Thus, very old volcanic glass is rare. Silica-rich rocks often develop a felsitic texture (Figure 4.49) composed of remnant glass and extremely small mineral grains called microlites that formed when glass devitrified. Microlites show birefringence but are too small for certain mineral identification. A felsitic matrix also commonly contains crystallites, smaller than microlites, with spherical, rod, or hair-like shapes. The texture seen in Figure 4.50, a trachytic texture, appears somewhat like a felsitic texture but does not form from devitrified glass. The matrix is composed of subparallel feldspar (sanidine) microlites, and their alignment may reflect magma flow at the time of crystallization. This texture is typical of trachytes and chemically similar volcanic rocks. Sometimes glass devitrification produces spherulites – radiating or branching arrays of fibrous or needle-like crystals. Spherulites are common in glassy silicic volcanic rocks and are most commonly aggregates of quartz and feldspar. Figure 4.51 shows spherulites in devitrified glass of a pumice.

4.5.3 Identifying and Naming Volcanic Rocks in the Field

Because identifying different kinds of fine-grained rocks in outcrops, or in hand specimens, is problematic, geologists sometimes use very approximate classification systems. Often, the best we can do when naming volcanic rocks without the aid of analytical instruments is to note rock color and texture. Rock color depends on the minerals present and their grain sizes. Usually, rocks that contain abundant feldspars and quartz have a light color; those that contain abundant mafic minerals (Fe- and Mg-rich minerals such as amphibole, pyroxene, and olivine) have a dark color. The percentage of dark-colored minerals is the rock’s color index. Alkali feldspar, the kind of feldspar that dominates silicic rocks, is often pinkish or tan. Because silicic rocks generally contain alkali feldspar, silicic rocks are generally light-colored (low color index) and may have a pink or tan hue. Mafic rocks are dark-colored (high color index), and intermediate-composition rocks fall somewhere between. Unfortunately, some fine-grained rocks may appear darkly colored even if they are silicic. Often, the best we can do when looking at rocks in the field is to call them basalt if dark colored, andesite if white or gray, and rhyolite if pinkish or tan. Upon thin-section examination, or after obtaining a chemical analysis, the names may have to be corrected.

Table 4.1 is a simple naming system that we can use when examining volcanic rocks in the field. The system is based on rock color, and whether a rock is fine-grained, vesicular, or glassy. The bottom part of the diagram lists the (likely) minerals present in rocks of different sorts, although they may not be visible.
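The rule of thumb just described amounts to a simple lookup, sketched below. This sketch is not from the book; the color categories and the scoria/pumice and obsidian shortcuts are illustrative simplifications of Table 4.1, and, as noted above, names assigned this way may have to be corrected later.

```python
# A minimal sketch (not part of this book) of the field rule of thumb described above.
# The color categories and texture shortcuts are illustrative simplifications.

def field_name(color, texture="fine-grained"):
    """Return an approximate field name from rock color and texture.
    color: 'dark', 'gray', or 'pink/tan'; texture: 'fine-grained', 'vesicular', or 'glassy'."""
    base = {"dark": "basalt", "gray": "andesite", "pink/tan": "rhyolite"}[color]
    if texture == "glassy":
        # Glassy silicic rocks are usually just called obsidian
        return "obsidian" if base == "rhyolite" else f"glassy {base}"
    if texture == "vesicular":
        # Highly vesicular equivalents: scoria (dark, mafic) and pumice (light, silicic)
        return "scoria" if base == "basalt" else "pumice"
    return base

print(field_name("dark", "vesicular"))   # scoria
print(field_name("pink/tan"))            # rhyolite
```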
4.6 Common and Unusual Volcanic Rocks

4.6.1 Ultramafic Volcanic Rocks

Ultramafic plutonic rocks make up most of Earth, because they make up most of the mantle. Although the mantle is out of our reach, ultramafic mantle samples have reached the surface as tectonic fragments caught up in mountain building or as much smaller xenoliths carried up by magmas. Yet, despite the abundance of ultramafic rocks at depth, ultramafic volcanic rocks are rare. Ultramafic magmas, which must originate in the mantle, do not reach the surface to produce ultramafic volcanic rocks today. They did, however, reach the surface during Earth’s early history. Thus, we find rare ultramafic volcanic rocks in old Archean shields, and much less commonly in younger terranes. The dearth of young ultramafic volcanic rocks is probably due to Earth’s cooling. The young Earth was much hotter than today’s Earth. Ultramafic magmas crystallize at high temperatures, but when Earth was warmer, the magmas could make it to the surface before they crystallized. Today they cannot, because they are likely to cool and solidify at depth.

Figure 4.52 is a photo of 2.7-billion-year-old rocks exposed at Pyke Hill, in western Ontario. These are among the best-studied examples of ultramafic lava flows. The flows are a type of volcanic rock called komatiite, named after its type locality on the banks of the Komati River in South Africa. Komatiites are, by definition, ultramafic volcanic rocks that contain more than 18 wt% MgO. Primary minerals in komatiites include much olivine and one or two Mg-rich pyroxenes (augite, pigeonite, or orthopyroxene), along with chromite, plagioclase, or ilmenite. Rarely, komatiites contain amphiboles. All known komatiites have been metamorphosed, and some contain prograde minerals such as tremolite, chlorite, talc, and magnesite. Many have been altered and contain secondary serpentine, chlorite, or brucite, often with carbonate minerals, that have replaced the primary minerals.

One of the most distinctive things about komatiites is that layers within them commonly display a spinifex texture. This term derives from the shape of the needles of a spinifex plant (Figure 4.53). In rocks, spinifex texture appears as long acicular (needle-shaped) crystals of olivine (or pseudomorphs containing altered olivine) or pyroxene that give the rock a bladed appearance. We most easily see spinifex texture on a weathered surface like the one seen in Figure 4.54. Some of the lavas seen in Figure 4.52 display spinifex texture, but the photo is not of high enough resolution for the texture to be visible.

4.6.2 Mafic Volcanic Rocks

In contrast with ultramafic rocks, mafic volcanic rocks are widespread. Basalt is the most common variety of mafic rock and the most common volcanic rock on Earth. Basalt flows dominate in oceanic regions, and basalts are found in some continental areas, too. Most mafic magma is generated by decompression melting of the mantle beneath mid-ocean ridges. Some of this magma erupts on the ocean floor as basaltic lava, but most cools underground, creating gabbros. Small amounts of basalt are also occasionally produced in subduction zones when water released by subducting slabs causes flux melting of overlying mantle. Although most igneous activity occurs at plate margins, basaltic melts may form in plate interiors. Mafic volcanoes and, often, gabbroic (mafic) dikes are commonly associated with hot spots or with continental rifts. For example, the basalt flows in the Hawaiian and Galapagos Islands are products of a hot spot.
Figure 4.55 contains two photos of basalt in the Galapagos. The photo on the left shows Bartolomé Island; the photo comes from the 2003 Russell Crowe movie Master and Commander. The photo on the right is a close-up of the lava flow seen in the distance in the larger photo.

We can visualize basalt compositions and mineralogy using the basalt tetrahedron (Figure 4.56). The tetrahedron has clinopyroxene, quartz, olivine, and nepheline at its corners. Orthopyroxene (which has a composition halfway between olivine and quartz) plots in the center of the front edge. And plagioclase (which has a composition halfway between nepheline and quartz) plots in the center of the back edge. The main variables in this tetrahedron are silica content (which increases from left to right), calcium content (which increases from bottom to top), and alkali content (which increases from front to back). Clinopyroxene, quartz, olivine, orthopyroxene, and plagioclase are the major minerals found in basalts. But they cannot all be present, and basalts fall into one of three categories: alkali basalts, olivine tholeiites, and quartz tholeiites (Table 4.2), separated by the critical plane of silica undersaturation and the plane of silica saturation in the basalt tetrahedron. The three bottom drawings in Figure 4.56 highlight the different kinds of basalt.

Table 4.2 Common minerals in different kinds of basalt
| type of basalt | common minerals |
| alkali basalt | olivine, plagioclase, clinopyroxene |
| olivine tholeiite | plagioclase, clinopyroxene, orthopyroxene |
| quartz tholeiite | clinopyroxene, orthopyroxene, quartz |

Nepheline never coexists with orthopyroxene, and the critical plane of silica undersaturation separates alkali basalts from olivine tholeiites. Olivine cannot coexist with quartz (or else the two would react to yield pyroxene), and the plane of silica saturation separates olivine tholeiites from quartz tholeiites. Basalts of the ocean floor, most large oceanic islands, and continental flood basalts are tholeiites. Alkali basalts are associated with continental rifting and hotspot volcanism, and are occasionally found on ocean islands. The different kinds of basalts depicted in Figure 4.56 seem straightforward, but basalts can be complicated. Many are so fine grained that mineral identification is difficult, and most are partially to entirely glassy. So, basalts of different sorts may be missing all or some of the minerals in the tetrahedron.

4.6.3 Intermediate and Silicic Volcanic Rocks

Intermediate-composition volcanic rocks – most often andesites and closely related dacites (Figure 4.57) – are typically associated with subduction zones, but the exact origin of intermediate-composition magmas is debated today. Perhaps the debate persists because they may form in more than one way. In some places, intermediate-composition magmas appear to form by partial melting of mafic rocks during subduction. In other places, it seems that andesitic magma is what is left over after a basaltic magma has partially crystallized. It also is likely that mixing of magmas, crustal melting, and crustal assimilation play important roles in the production of intermediate magmas.

Like intermediate magmas, the origins of silicic magmas are sometimes debated. If partial melting of ultramafic and mafic mantle and lower crust is the source of silicic melts, there must be much leftover mafic material in the source regions. Yet, evidence for the existence of those leftovers is skimpy at best.
So, although huge silicic batholiths and large rhyolite terranes exist, it seems inconceivable that they could be products of the partial melting of highly mafic, or ultramafic, parent material. The consensus today is that, although getting silicic magmas from anatexis of mantle or lower crust is possible, many silicic melts come from melting of crustal rocks. As occurred beneath Yellowstone, the heat needed to cause the melting may be carried into the crust by rising mafic magmas that originated in the mantle. This process occurs at hot spots, in rift zones, and in the roots of mountain chains. It may occur elsewhere on a smaller scale.

Intermediate and silicic magmas are rare in oceanic terranes but common on continents. They are most often associated with subduction zones, but they also occur at hot spots or in rift zones. These magmas are relatively viscous and often contain high gas contents. So, they tend to erupt explosively, producing ash and other airfall debris that eventually settles and consolidates to become tuffs. Rhyolitic tuffs are arguably the most voluminous type of continental igneous rock. Figure 4.58 is a photo of New Mexico’s Bandelier Tuff – a tuff associated with the Rio Grande Rift. The tuff contains three members, deposited as ash flows at different times 1.85 to 1.05 million years ago, during explosive eruptions that produced the Valles Caldera near Los Alamos. Figure 4.58 shows two of these members, one forming gray slopes and the other exposed in cliffs above. Both have rhyolitic compositions. Bandelier National Monument is very close to where this photo was taken. For a view of the tuff in Bandelier, with its Puebloan dwellings, see Figure 5.14.

Most rhyolites and andesites erupt to produce stratovolcanoes such as those that we find in the Ring of Fire surrounding the Pacific Ocean. Figure 4.59 (below) shows an example – Mt. St. Helens, an active volcano of the Cascade Range in Washington. Most of the rocks produced by this mountain have been rhyolitic, and this photo shows a small rhyolitic dome that developed in the center of the summit crater. Mt. St. Helens is a typical stratovolcano with a long history. It first erupted about 275,000 years ago. During the past 4,000 years, major eruptions have occurred more than a dozen times, including the devastating eruption on May 18, 1980 (described more fully in Chapter 5). Investigations by the United States Geological Survey have concluded that St. Helens magmas originated by flux melting in the mantle, above the subducting Juan de Fuca Plate. Hot magma rose and collected near the base of the crust temporarily before moving up to shallower storage chambers and, ultimately, erupting.

4.6.4 Lava Flows with Intermediate and Silicic Compositions

Generally, high dissolved gas content means that silicic and intermediate magmas erupt explosively. Consequently, effusive eruptions of rhyolitic, dacitic, and andesitic lava are relatively rare. But after many explosive eruptions, magmas may become depleted in gases. So, later eruptions of the same magma may be effusive. Perhaps 10-15% of all lava flows are silicic or intermediate. Examples include the rhyolite flows that have almost filled the Yellowstone Caldera since it formed 640,000 years ago, such as the flow at Obsidian Cliff seen in Figure 4.60. Other rhyolite flows are found west of Yellowstone in the Snake River Plain. The most common occurrences of silicic and intermediate lavas, however, are associated with subduction zone volcanoes.
For example, South America’s Andes Mountains contain many andesite flows (intermediate), along with lesser amounts of dacite and rhyolite flows (silicic). Although most are associated with subduction zone volcanism, andesitic flows are also found on ocean islands. Figure 4.61 is an aerial photo of an andesite flow on Bagana Volcano. Bagana is one of seven active volcanoes on Bougainville Island, in Papua New Guinea, northeast of Australia. The volcano has erupted multiple times during the last 175 years, with major eruptions in 1950, 1952, and 1966. Almost continuous eruptions have been occurring since the early 1970s. Bagana’s andesitic lava flows are sometimes as thick as 150 meters.

As pointed out in Chapter 3 (Section 3.2.2), even if they do erupt effusively, silicic and intermediate magmas may not travel far. Often they produce localized lava domes instead of flows because the magmas are too viscous to travel significant distances. The rhyolite dome on the left in Figure 4.62 is about 650 meters across with steep sides. The photo of Mt. St. Helens on the right shows a smaller silicic lava dome in the volcano’s summit crater in 1982. The dome can also be seen in a more recent (2009) photo, Figure 4.59.

4.6.5 Alkalic Volcanic Rocks

Alkalic volcanic rocks are relatively rare. They mostly occur in continental interiors and may be associated with rift zones such as the East African Rift. These rocks are enriched in sodium, and sometimes potassium, compared with other volcanic rocks. Most petrologists use the descriptor alkalic for rocks that contain more sodium and potassium than can fit into feldspars alone. These rocks vary quite a bit in mineralogy. If low in silica, they commonly contain feldspathoids (typically nepheline), so they may be nephelinites or phonolites of various sorts. With more silica, they may be trachytes (see Figures 4.33 and 4.34). Although most alkalic volcanic rocks are associated with continental rifting, some are found at hot spots. Figure 4.63 is a photo of a trachyte from Gran Canaria, a hot-spot island. This trachyte, like all trachytes, contains mostly alkali feldspar with small amounts of mafic minerals. Typically these rocks contain light-colored phenocrysts of sanidine (high-temperature alkali feldspar). Trachytes may also contain Na-rich pyroxene and amphibole.

Figure 4.56, earlier in this chapter, introduced alkali basalts. They are alkali-rich and silica-poor compared with other kinds of basalt. Alkali basalts sometimes contain nepheline, reflecting their high alkali contents and relatively low silica contents. Alkali basalts, however, contain essential plagioclase and lack K-feldspar, which distinguishes them from other alkalic rocks.

Carbonatites, rare igneous rocks that contain 50% or more igneous carbonate minerals (calcite, dolomite, siderite, or ankerite), are sometimes associated with more common alkalic rocks. Carbonatites contain less than 20 wt% SiO2, and so are the most silica-poor igneous rocks known. They are, however, for the most part plutonic or subvolcanic rocks. An exception is the active Ol Doinyo Lengai Volcano in Tanzania. Figure 4.64 shows black carbonatite lava flowing from a small cone. When the lava cools, it turns white. See also the photo and brief discussion of this volcano in Chapter 3.

Kimberlites are a rare type of alkalic rock named after Kimberley, South Africa, where they were first identified as a source of diamonds.
These ultramafic rocks occur mostly in diatremes called kimberlite pipes, or in dikes, and less commonly in sills. In many ways they are the equivalent of mica-bearing peridotites. These are coarse rocks, generally brecciated, that may contain primary olivine, phlogopite, garnet, pyroxene, ilmenite, and fragments of the rock that the magma passed through during its upward movement. Kimberlites, however, are often altered and may become highly serpentinized. Kimberlites form from magmas that originate at depths greater than 150 km, perhaps as deep as 450 km. They erupt at the surface in explosive eruptions powered by expanding CO2 gas. Besides containing diamonds, kimberlites may contain ultramafic xenoliths and thus provide mantle samples for study. Figure 4.65 is a kimberlite sample from the Chicken Park kimberlite in northernmost Colorado. It contains visible red garnets in a sea of green-gray serpentine.

|Chapter 16 (Section 16.2) contains many photos of volcanic rocks in hand specimen and in thin section. If you want to know what they look like, go there.|

● Figure Credits

Uncredited graphics/photos came from the authors and other primary contributors to this book.

4.0 (opening photo) Eruption of Kilauea, National Park Service (NPS)
Austerity is a political-economic term referring to policies that aim to reduce government budget deficits through spending cuts, tax increases, or a combination of both. Austerity measures are used by governments that find it difficult to pay their debts. The measures are meant to reduce the budget deficit by bringing government revenues closer to expenditures, which is assumed to make the payment of debt easier. Austerity measures also demonstrate a government's fiscal discipline to creditors and credit rating agencies.

In most macroeconomic models, austerity policies generally increase unemployment as government spending falls. Cutbacks in government spending reduce employment in the public sector and may also do so in the private sector. Additionally, tax increases can reduce consumption by cutting household disposable income. Some claim that reducing spending may result in a higher debt-to-GDP ratio because government expenditure itself is a component of GDP. In the aftermath of the Great Recession, for instance, austerity measures in many European countries were followed by rising unemployment and debt-to-GDP ratios despite reductions in budget deficits. When an economy is operating at or near capacity, higher short-term deficit spending (stimulus) can cause interest rates to rise, resulting in a reduction in private investment, which in turn reduces economic growth. However, where there is excess capacity, the stimulus can result in an increase in employment and output.

Austerity measures are typically pursued if there is a threat that a government cannot honour its debt obligations. This may occur when a government has borrowed in foreign currencies (that it has no right to issue), or if it has been legally forbidden from issuing its own currency. In such a situation, banks and investors may lose confidence in a government's ability or willingness to pay, and either refuse to roll over existing debts or demand extremely high interest rates. International financial institutions such as the International Monetary Fund (IMF) may demand austerity measures as part of Structural Adjustment Programmes when acting as lender of last resort. Austerity policies may also appeal to the wealthier class of creditors, who prefer low inflation and the higher probability of payback on their government securities by less profligate governments. More recently, austerity has been pursued after governments became highly indebted by assuming private debts following banking crises. (This occurred after Ireland assumed the debts of its private banking sector during the European debt crisis. This rescue of the private sector resulted in calls to cut back the profligacy of the public sector.)

According to Mark Blyth, the concept of austerity emerged in the 20th century, when large states acquired sizable budgets. However, Blyth argues that the theories and sensibilities about the role of the state and capitalist markets that underlie austerity emerged from the 17th century onwards. Austerity is grounded in liberal economics' view of the state and sovereign debt as deeply problematic.
Blyth traces the discourse of austerity back to John Locke's theory of private property and derivative theory of the state, David Hume's ideas about money and the virtue of merchants, and Adam Smith's theories on economic growth and taxes. On the basis of classic liberal ideas, austerity emerged as a doctrine of neoliberalism in the 20th century.

In the 1930s, during the Great Depression, anti-austerity arguments gained more prominence. John Maynard Keynes became a well-known anti-austerity economist, arguing that "The boom, not the slump, is the right time for austerity at the Treasury." Contemporary Keynesian economists argue that budget deficits are appropriate when an economy is in recession, to reduce unemployment and help spur GDP growth. According to Paul Krugman, since a government is not like a household, reductions in government spending during economic downturns worsen the crisis. Across an economy, one person's spending is another person's income. In other words, if everyone is trying to reduce their spending, the economy can be trapped in what economists call the paradox of thrift, worsening the recession as GDP falls. Krugman argues that, if the private sector is unable or unwilling to consume at a level that increases GDP and employment sufficiently, then the government should be spending more in order to offset the decline in private spending.

An important component of economic output is business investment, but there is no reason to expect it to stabilize at full utilization of the economy's resources. High business profits do not necessarily lead to increased economic growth. (When businesses and banks have a disincentive to spend accumulated capital, such as cash repatriation taxes from profits in overseas tax havens and interest on excess reserves paid to banks, increased profits can lead to decreasing growth.)

Economists Kenneth S. Rogoff and Carmen M. Reinhart wrote in April 2013, "Austerity seldom works without structural reforms – for example, changes in taxes, regulations and labor market policies – and if poorly designed, can disproportionately hit the poor and middle class. Our consistent advice has been to avoid withdrawing fiscal stimulus too quickly, a position identical to that of most mainstream economists." To help improve the U.S. economy, they advocated reductions in mortgage principal for 'underwater' homes – those with negative equity (where the value of the asset is less than the mortgage principal), a condition that can lead to a stagnant housing market with no realistic opportunity to reduce private debts.

In October 2012, the IMF announced that its forecasts for countries that implemented austerity programs had been consistently overoptimistic, suggesting that tax hikes and spending cuts had been doing more damage than expected, and that countries that implemented fiscal stimulus, such as Germany and Austria, did better than expected. The IMF reported that this was due to fiscal multipliers that were considerably larger than expected: for example, the IMF estimated that fiscal multipliers based on data from 28 countries ranged between 0.9 and 1.7. In other words, a fiscal consolidation (i.e., austerity) of 1% of GDP would reduce GDP by between 0.9% and 1.7%, thus inflicting far more economic damage than the multiplier of 0.5 previously assumed in IMF forecasts. In many countries, little is known about the size of multipliers, as data availability limits the scope for empirical research.
For these countries, Nicoletta Batini, Luc Eyraud and Anke Weber propose a simple method – dubbed the "bucket approach" – to come up with reasonable multiplier estimates. The approach bunches countries into groups (or "buckets") with similar multiplier values, based on their characteristics, and takes into account the effect of (some) temporary factors such as the state of the business cycle.

For example, the U.S. Congressional Budget Office estimated that the payroll tax (levied on all wage earners) has a higher multiplier (impact on GDP) than does the income tax (which is levied primarily on wealthier workers). In other words, raising the payroll tax by $1 as part of an austerity strategy would slow the economy more than would raising the income tax by $1, resulting in less net deficit reduction. In theory, it would stimulate the economy and reduce the deficit if the payroll tax were lowered and the income tax raised in equal amounts.

Crowding in or out

The term "crowding out" refers to the extent to which an increase in the budget deficit offsets spending in the private sector. Economist Laura D'Andrea Tyson wrote in June 2012: "By itself an increase in the deficit, either in the form of an increase in government spending or a reduction in taxes, causes an increase in demand. How this affects output, employment, and growth depends on what happens to interest rates. When the economy is operating near capacity, government borrowing to finance an increase in the deficit causes interest rates to rise and higher interest rates reduce or 'crowd out' private investment, reducing growth." This theory explains why large and sustained government deficits take a toll on growth: they reduce capital formation. But this argument rests on how government deficits affect interest rates, and the relationship between government deficits and interest rates varies. When there is considerable excess capacity, an increase in government borrowing to finance an increase in the deficit does not lead to higher interest rates and does not crowd out private investment. Instead, the higher demand resulting from the increase in the deficit bolsters employment and output directly. The resultant increase in income and economic activity in turn encourages, or 'crowds in', additional private spending. Some argue that the 'crowding-in' model is an appropriate solution for current economic conditions.

Government budget balance as a sectoral component

According to economist Martin Wolf, the U.S. and many Eurozone countries experienced rapid increases in their budget deficits in the wake of the 2008 crisis as a result of significant private-sector retrenchment and ongoing capital account surpluses. Policy choices had little to do with these deficit increases. This makes austerity measures counterproductive. Wolf explained that government fiscal balance is one of three major financial sectoral balances in a country's economy, along with the foreign financial sector (capital account) and the private financial sector. By definition, the sum of the surpluses or deficits across these three sectors must be zero. In the U.S. and many Eurozone countries other than Germany, a foreign financial surplus exists because capital is imported (net) to fund the trade deficit. Further, there is a private-sector financial surplus because household savings exceed business investment. By definition, a government budget deficit must exist so all three net to zero: for example, the U.S.
government budget deficit in 2011 was approximately 10% of GDP (8.6% of GDP of which was federal), offsetting a foreign financial surplus of 4% of GDP and a private-sector surplus of 6% of GDP. Wolf explained in July 2012 that the sudden shift in the private sector from deficit to surplus forced the U.S. government balance into deficit: "The financial balance of the private sector shifted towards surplus by the almost unbelievable cumulative total of 11.2 per cent of gross domestic product between the third quarter of 2007 and the second quarter of 2009, which was when the financial deficit of US government (federal and state) reached its peak.... No fiscal policy changes explain the collapse into massive fiscal deficit between 2007 and 2009, because there was none of any importance. The collapse is explained by the massive shift of the private sector from financial deficit into surplus or, in other words, from boom to bust." Wolf also wrote that several European economies face the same scenario and that a lack of deficit spending would likely have resulted in a depression. He argued that a private-sector depression (represented by the private- and foreign-sector surpluses) was being "contained" by government deficit spending. Economist Paul Krugman also explained in December 2011 the causes of the sizable shift from private-sector deficit to surplus in the U.S.: "This huge move into surplus reflects the end of the housing bubble, a sharp rise in household saving, and a slump in business investment due to lack of customers." One reason why austerity can be counterproductive in a downturn is due to a significant private-sector financial surplus, in which consumer savings is not fully invested by businesses. In a healthy economy, private-sector savings placed into the banking system by consumers are borrowed and invested by companies. However, if consumers have increased their savings but companies are not investing the money, a surplus develops. Business investment is one of the major components of GDP. For example, a U.S. private-sector financial deficit from 2004 to 2008 transitioned to a large surplus of savings over investment that exceeded $1 trillion by early 2009, and remained above $800 billion into September 2012. Part of this investment reduction was related to the housing market, a major component of investment. This surplus explains how even significant government deficit spending would not increase interest rates (because businesses still have access to ample savings if they choose to borrow and invest it, so interest rates are not bid upward) and how Federal Reserve action to increase the money supply does not result in inflation (because the economy is awash with savings with no place to go). Economist Richard Koo described similar effects for several of the developed world economies in December 2011: "Today private sectors in the U.S., the U.K., Spain, and Ireland (but not Greece) are undergoing massive deleveraging [paying down debt rather than spending] in spite of record low interest rates. This means these countries are all in serious balance sheet recessions. The private sectors in Japan and Germany are not borrowing, either. With borrowers disappearing and banks reluctant to lend, it is no wonder that, after nearly three years of record low interest rates and massive liquidity injections, industrial economies are still doing so poorly. Flow of funds data for the U.S. 
show a massive shift away from borrowing to savings by the private sector since the housing bubble burst in 2007. The shift for the private sector as a whole represents over 9 percent of U.S. GDP at a time of zero interest rates. Moreover, this increase in private sector savings exceeds the increase in government borrowings (5.8 percent of GDP), which suggests that the government is not doing enough to offset private sector deleveraging." A typical goal of austerity is to reduce the annual budget deficit without sacrificing growth. Over time, this may reduce the overall debt burden, often measured as the ratio of public debt to GDP. During the European debt crisis, many countries embarked on austerity programs, reducing their budget deficits relative to GDP from 2010 to 2011. According to the CIA World Factbook, Greece decreased its budget deficit from 10.4% of GDP in 2010 to 9.6% in 2011. Iceland, Italy, Ireland, Portugal, France, and Spain also decreased their budget deficits relative to GDP from 2010 to 2011. The Eurozone's austerity policy, however, aims at more than the reduction of budget deficits: the drive for fiscal consolidation also shapes the future development of the European Social Model. With the exception of Germany, each of these countries had public-debt-to-GDP ratios that increased from 2010 to 2011, as indicated in the chart at right. Greece's public-debt-to-GDP ratio increased from 143% in 2010 to 165% in 2011, indicating that, despite declining budget deficits, GDP growth was not sufficient to support a decline in the debt-to-GDP ratio for these countries during this period. Eurostat reported that the overall debt-to-GDP ratio for the EA17 was 70.1% in 2008, 80.0% in 2009, 85.4% in 2010, 87.3% in 2011, and 90.6% in 2012. Further, real GDP in the EA17 declined for six straight quarters from Q4 2011 to Q1 2013. Unemployment is another variable considered in evaluating austerity measures. According to the CIA World Factbook, from 2010 to 2011 the unemployment rates in Spain, Greece, Ireland, Portugal, and the UK increased. France and Italy had no significant changes, while in Germany and Iceland the unemployment rate declined. Eurostat reported that Eurozone unemployment reached record levels in March 2013 at 12.1%, up from 11.6% in September 2012 and 10.3% in 2011. Unemployment varied significantly by country. Economist Martin Wolf analyzed the relationship between cumulative GDP growth from 2008 to 2012 and the total reduction in budget deficits due to austerity policies in several European countries in April 2012 (see chart at right). He concluded, "In all, there is no evidence here that large fiscal contractions [budget deficit reductions] bring benefits to confidence and growth that offset the direct effects of the contractions. They bring exactly what one would expect: small contractions bring recessions and big contractions bring depressions." Similarly, economist Paul Krugman analyzed the relationship between GDP and reduction in budget deficits for several European countries in April 2012 and concluded that austerity was slowing growth. He wrote: "this also implies that 1 euro of austerity yields only about 0.4 euros of reduced deficit, even in the short run. No wonder, then, that the whole austerity enterprise is spiraling into disaster." 
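The debt-ratio arithmetic behind these figures is worth making explicit. The sketch below is a minimal, hypothetical illustration (the function name and growth scenarios are made up; only the starting ratio of roughly 143% of GDP and the 9.6% deficit echo the Greek 2010–11 figures quoted above): even as the deficit falls, the debt-to-GDP ratio rises whenever nominal GDP shrinks or grows too slowly.

```python
# Minimal sketch of debt-to-GDP dynamics: the ratio can rise even while the
# deficit falls, if nominal GDP contracts. Illustrative numbers only; the
# starting ratio (~143% of GDP) and deficit (9.6% of GDP) echo the Greek
# figures quoted above, while the growth scenarios are assumptions.

def next_debt_ratio(debt_ratio: float, deficit_ratio: float, nominal_growth: float) -> float:
    """All arguments are fractions of GDP; returns next year's debt-to-GDP ratio."""
    new_debt = debt_ratio + deficit_ratio      # in units of last year's GDP
    new_gdp = 1.0 + nominal_growth             # in units of last year's GDP
    return new_debt / new_gdp

if __name__ == "__main__":
    start_ratio, deficit = 1.43, 0.096
    for growth in (0.03, 0.00, -0.05):         # nominal GDP up 3%, flat, down 5%
        ratio = next_debt_ratio(start_ratio, deficit, growth)
        print(f"nominal growth {growth:+.0%}: debt ratio {ratio:.0%}")
    # nominal growth +3%: debt ratio 148%
    # nominal growth +0%: debt ratio 153%
    # nominal growth -5%: debt ratio 161%
```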
The Greek government-debt crisis brought a package of austerity measures, put forth by the EU and the IMF mostly in the context of the three successive bailouts the country endured from 2010 to 2018; it was met with great anger by the Greek public, leading to riots and social unrest. On 27 June 2011, trade union organizations began a 48-hour labour strike in advance of a parliamentary vote on the austerity package, the first such strike since 1974. Massive demonstrations were organized throughout Greece, intended to pressure members of parliament into voting against the package. The second set of austerity measures was approved on 29 June 2011, with 155 out of 300 members of parliament voting in favor. However, one United Nations official warned that the second package of austerity measures in Greece could violate human rights. Around 2011, the IMF started issuing guidance suggesting that austerity could be harmful when applied without regard to an economy's underlying fundamentals. In 2013 it published a detailed analysis concluding that "if financial markets focus on the short-term behavior of the debt ratio, or if country authorities engage in repeated rounds of tightening in an effort to get the debt ratio to converge to the official target," austerity policies could slow or reverse economic growth and inhibit full employment. Keynesian economists and commentators such as Paul Krugman have suggested that this has in fact been occurring, with austerity yielding worse results in proportion to the extent to which it has been imposed. Overall, Greece lost 25% of its GDP during the crisis. Although government debt increased by only 6% between 2009 and 2017 (from €300 bn to €318 bn), thanks in part to the 2012 debt restructuring, the critical debt-to-GDP ratio shot up from 127% to 179%, mostly because of the severe drop in GDP during the handling of the crisis. In all, the Greek economy suffered the longest recession of any advanced capitalist economy to date, overtaking the US Great Depression. The crisis hit the populace hard: the series of sudden reforms and austerity measures led to impoverishment and loss of income and property, as well as a small-scale humanitarian crisis. Unemployment shot up from 8% in 2008 to 27% in 2013 and remained at 22% in 2017. As a result of the crisis, the Greek political system was upended, social exclusion increased, and hundreds of thousands of well-educated Greeks left the country. In April and May 2012, France held a presidential election in which the winner, François Hollande, had opposed austerity measures, promising to eliminate France's budget deficit by 2017 by canceling recently enacted tax cuts and exemptions for the wealthy, raising the top tax bracket rate to 75% on incomes over one million euros, restoring the retirement age to 60 with a full pension for those who have worked 42 years, restoring 60,000 jobs recently cut from public education, regulating rent increases, and building additional public housing for the poor. In the legislative elections in June, Hollande's Socialist Party won a supermajority capable of amending the French Constitution and enabling the immediate enactment of the promised reforms. Interest rates on French government bonds fell by 30% to record lows, less than 50 basis points above German government bond rates. Latvia's economy returned to growth in 2011 and 2012, making it the fastest-growing economy among the EU's 27 member states, while implementing significant austerity measures. 
Advocates of austerity argue that Latvia represents an empirical example of the benefits of austerity, while critics argue that austerity created unnecessary hardship, with output in 2013 still below the pre-crisis level. According to the CIA World Factbook, "Latvia's economy experienced GDP growth of more than 10% per year during 2006–07, but entered a severe recession in 2008 as a result of an unsustainable current account deficit and large debt exposure amid the softening world economy. Triggered by the collapse of the second largest bank, GDP plunged 18% in 2009. The economy has not returned to pre-crisis levels despite strong growth, especially in the export sector in 2011–12. The IMF, EU, and other international donors provided substantial financial assistance to Latvia as part of an agreement to defend the currency's peg to the euro in exchange for the government's commitment to stringent austerity measures. The IMF/EU program successfully concluded in December 2011. The government of Prime Minister Valdis Dombrovskis remained committed to fiscal prudence and reducing the fiscal deficit from 7.7% of GDP in 2010, to 2.7% of GDP in 2012." The CIA estimated that Latvia's GDP declined by 0.3% in 2010, then grew by 5.5% in 2011 and 4.5% in 2012. Unemployment was 12.8% in 2011 and rose to 14.3% in 2012. Latvia's currency, the lats, weakened from 0.47 lati per U.S. dollar in 2008 to 0.55 in 2012, a move of about 17% against the dollar. Latvia entered the euro zone in 2014. Latvia's trade deficit improved from over 20% of GDP in 2006–07 to under 2% of GDP by 2012. Eighteen months after harsh austerity measures were enacted (including both spending cuts and tax increases), economic growth began to return, although unemployment remained above pre-crisis levels. Latvian exports skyrocketed, and both the trade deficit and the budget deficit decreased dramatically. More than one-third of government positions were eliminated, and remaining public employees took sharp pay cuts. Exports increased after goods prices fell, as private businesses lowered wages in tandem with the government. Paul Krugman wrote in January 2013 that Latvia had yet to regain its pre-crisis level of employment. He also wrote, "So we're looking at a Depression-level slump, and 5 years later only a partial bounceback; unemployment is down but still very high, and the decline has a lot to do with emigration. It's not what you'd call a triumphant success story, any more than the partial US recovery from 1933 to 1936 – which was actually considerably more impressive – represented a huge victory over the Depression. And it's in no sense a refutation of Keynesianism, either. Even in Keynesian models, a small open economy can, in the long run, restore full employment through deflation and internal devaluation; the point, however, is that it involves many years of suffering". Latvian Prime Minister Valdis Dombrovskis defended his policies in a television interview, stating that Krugman refused to admit his error in predicting that Latvia's austerity policy would fail. Krugman had written a blog post in December 2008 entitled "Why Latvia is the New Argentina", in which he argued for Latvia to devalue its currency as an alternative, or in addition, to austerity. Following the financial crisis of 2007–2008, a period of economic recession began in the UK. An austerity programme was initiated in 2010 by the Conservative and Liberal Democrat coalition government. In his June 2010 budget speech, the Chancellor, George Osborne, identified two goals. 
The first was that the structural current budget deficit would be eliminated to "achieve cyclically-adjusted current balance by the end of the rolling, five-year forecast period". The second was that national debt as a percentage of GDP would be falling. The government intended to achieve both goals through a combination of substantial reductions in public expenditure and tax increases. Economists Alberto Alesina, Carlo A. Favero and Francesco Giavazzi, writing in Finance & Development in 2018, argued that deficit reduction policies based on spending cuts typically have almost no effect on output, and hence form a better route to achieving a reduction in the debt-to-GDP ratio than raising taxes. The authors commented that the UK government austerity programme had resulted in growth that was higher than the European average and that the UK's economic performance had been much stronger than the International Monetary Fund had predicted. Austerity programs can be controversial. In the Overseas Development Institute (ODI) briefing paper "The IMF and the Third World", the ODI addresses five major complaints against the IMF's austerity conditions. Complaints include such measures being "anti-developmental", "self-defeating", and tending "to have an adverse impact on the poorest segments of the population". In many situations, austerity programs are implemented by countries that were previously under dictatorial regimes, leading to criticism that citizens are forced to repay the debts of their oppressors. In 2009, 2010, and 2011, workers and students in Greece and other European countries demonstrated against cuts to pensions, public services, and education spending as a result of government austerity measures. Following the announcement of plans to introduce austerity measures in Greece, massive demonstrations occurred throughout the country aimed at pressing parliamentarians to vote against the austerity package. In Athens alone, 19 arrests were made, while 46 civilians and 38 policemen had been injured by 29 June 2011. The third round of austerity was approved by the Greek parliament on 12 February 2012 and met strong opposition, especially in Athens and Thessaloniki, where police clashed with demonstrators. Opponents argue that austerity measures depress economic growth and ultimately cause reduced tax revenues that outweigh the benefits of reduced public spending. Moreover, in countries with already anemic economic growth, austerity can engender deflation, which increases the real burden of existing debt. Such austerity packages can also cause the country to fall into a liquidity trap, causing credit markets to freeze up and unemployment to increase. Opponents point to cases in Ireland and Spain in which austerity measures instituted in response to financial crises in 2009 proved ineffective in combating public debt and placed those countries at risk of defaulting in late 2010. As noted above, the IMF announced in October 2012 that its forecasts for countries that implemented austerity programs had been consistently overoptimistic. These data have been scrutinized by the Financial Times, which found no significant trends when outliers like Germany and Greece were excluded. 
Determining the multipliers used in the research to achieve the results found by the IMF was also described as an "exercise in futility" by Professor Carlos Vegh of the University of Michigan. Moreover, Barry Eichengreen of the University of California, Berkeley and Kevin H. O'Rourke of Oxford University write that the IMF's new estimate of the extent to which austerity restricts growth was much lower than historical data suggest. On 3 February 2015 Joseph Stiglitz wrote: "Austerity had failed repeatedly from its early use under US president Herbert Hoover, which turned the stock-market crash into the Great Depression, to the IMF programs imposed on East Asia and Latin America in recent decades. And yet when Greece got into trouble it was tried again." Government spending actually rose significantly under Hoover, while revenues were flat. Balancing stimulus and austerity Strategies that involve short-term stimulus with longer-term austerity are not mutually exclusive. Steps can be taken in the present that will reduce future spending, such as "bending the curve" on pensions by reducing cost of living adjustments or raising the retirement age for younger members of the population, while at the same time creating short-term spending or tax cut programs to stimulate the economy to create jobs. IMF managing director Christine Lagarde wrote in August 2011, "For the advanced economies, there is an unmistakable need to restore fiscal sustainability through credible consolidation plans. At the same time we know that slamming on the brakes too quickly will hurt the recovery and worsen job prospects. So fiscal adjustment must resolve the conundrum of being neither too fast nor too slow. Shaping a Goldilocks fiscal consolidation is all about timing. What is needed is a dual focus on medium-term consolidation and short-term support for growth. That may sound contradictory, but the two are mutually reinforcing. Decisions on future consolidation, tackling the issues that will bring sustained fiscal improvement, create space in the near term for policies that support growth." Federal Reserve Chair Ben Bernanke wrote in September 2011, "the two goals—achieving fiscal sustainability, which is the result of responsible policies set in place for the longer term, and avoiding creation of fiscal headwinds for the recovery—are not incompatible. Acting now to put in place a credible plan for reducing future deficits over the long term, while being attentive to the implications of fiscal choices for the recovery in the near term, can help serve both objectives." "Age of austerity" The term "age of austerity" was popularised by UK Conservative Party leader David Cameron in his keynote speech to the Conservative Party forum in Cheltenham on 26 April 2009, in which he committed to end years of what he called "excessive government spending". Word of the year Merriam-Webster's Dictionary named the word "austerity" as its "Word of the Year" for 2010 because of the number of web searches this word generated that year. According to the president and publisher of the dictionary, "austerity had more than 250,000 searches on the dictionary's free online [website] tool" and the spike in searches "came with more coverage of the debt crisis". 
Examples of austerity - Argentina — 1952, 1999–2002, 2012, 2018 - Brazil — 2003–2006, 2015–2018 - Canada — 1994 - Cuba — 1991–1999 - Netherlands — 1982–1990, 2003–2006, 2011–2014 - Czech Republic — 2010 - European countries — 2012 - Germany — 2011 - Greece — 2010–2018 - Ireland — 2010–2014 - Israel — 1949–1959 - Italy — 2011–2013 - Japan — 1949 American Occupation, 2010 - Latvia — 2009–2013 - Mexico — 1985 - Nicaragua — 1997 - Palestinian Authority — 2006 - Portugal — 1977–1979, 1983–1985, 2002–2008, 2010–2015 - Puerto Rico — 2009–2018 - Romania — Ceaușescu's 1981–1989 austerity, 2010 - Spain — 1979, 2010–2014 - United States — 1921, 1937, 1946 - United Kingdom — during and after the two World Wars, 1976–1979, 2011–2018. According to economist David Stuckler and physician Sanjay Basu in their study The Body Economic: Why Austerity Kills, a health crisis is being triggered by austerity policies, including up to 10,000 additional suicides that have occurred across Europe and the U.S. since the introduction of austerity programs. Several alternative and right-wing parties have been born in opposition to austerity policies, especially in Western Europe. One notable example is the Alternative für Deutschland in Germany, which became the third-largest party in the Bundestag in 2017, primarily due to its opposition to the ruling CDU and SPD's austerity policies. J. Bradford DeLong and Lawrence Summers explained why an expansionary fiscal policy can be effective in reducing a government's future debt burden, pointing out that the policy has a positive impact on the economy's future productivity level. They pointed out that when an economy is depressed and its nominal interest rate is near zero, the real interest rate charged to firms is linked to output and decreases as real GDP increases, so the actual fiscal multiplier is higher than in normal times; a fiscal stimulus is therefore more effective when interest rates are at the zero bound. As the economy is boosted by government spending, the increased output yields higher tax revenue in proportion to the baseline marginal tax-and-transfer rate, and the economy's long-run growth rate also has to be taken into account, since steady economic growth by itself reduces the debt-to-GDP ratio. On this reasoning an expansionary fiscal policy can be self-financing: a fiscal stimulus leaves the long-term budget in surplus provided the real government borrowing rate is low enough relative to the growth rate, the multiplier, and the tax rate (an illustrative sketch of this condition follows below). Impacts on short-run budget deficit Research by Gauti Eggertsson et al. indicates that a government's fiscal austerity measures can actually increase its short-term budget deficit if the nominal interest rate is very low. In normal times, the government sets tax rates and the central bank controls the nominal interest rate; if that rate is already so low that monetary policy cannot mitigate the negative impact of the austerity measures, the resulting shrinkage of the tax base leaves government revenue, and the budget position, worse off. In this framework austerity measures are counterproductive in the short run whenever the multiplier is larger than a certain threshold. This erosion of the tax base is the effect of the endogenous component of the deficit: if the government increases sales taxes, for example, it reduces the tax base through the negative effect on demand, and this upsets the budget balance. 
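As a minimal illustrative sketch of the self-financing and short-run arguments just described: the notation (fiscal multiplier μ, marginal tax-and-transfer rate τ, hysteresis share η, real government borrowing rate r, long-run growth rate g) and the exact functional forms below are assumptions made here for exposition, not the published DeLong–Summers or Eggertsson equations.

```latex
% Illustrative sketch only. The symbols and the exact forms are assumptions for
% exposition, not the published DeLong-Summers or Eggertsson equations.
% Near-term debt impact of a temporary stimulus \Delta G (extra revenue
% \tau\mu\Delta G partly offsets the outlay):
\[ \Delta D_{\text{now}} = (1 - \mu\tau)\,\Delta G . \]
% If a share \eta of the cyclical output gain persists (hysteresis), the present
% value of the extra future revenue, discounted at the growth-adjusted rate
% r - g > 0, is roughly
\[ \Delta R_{\text{future}} \approx \frac{\eta\,\mu\,\tau}{\,r - g\,}\,\Delta G , \]
% so the stimulus approximately pays for itself when
\[ \eta\,\mu\,\tau \;\ge\; (r - g)\,(1 - \mu\tau) . \]
% For the short-run point attributed to Eggertsson et al.: cutting spending by
% \Delta G at the zero lower bound lowers output by \mu\Delta G and revenue by
% \tau\mu\Delta G, so the deficit changes by
\[ \Delta(\text{deficit}) = (\tau\mu - 1)\,\Delta G , \]
% i.e. austerity worsens the short-run deficit whenever \mu > 1/\tau.
```

Under these assumed forms, with, say, τ = 0.4 and a zero-lower-bound multiplier of 1.7 (the upper end of the IMF range quoted earlier), τμ is about 0.7, so a spending cut would still reduce the short-run deficit, but by only roughly 30 cents per euro cut; a multiplier above 1/τ = 2.5 would make the cut self-defeating.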
No credit risk Supporters of austerity measures tend to use the metaphor that a government's debt is like a household's debt, warning that overspending will lead the government to default. This metaphor, however, does not hold for a country that issues its own currency: such a government can create money, and its central bank can keep the interest rate close to or equal to the nominal risk-free rate. Former Federal Reserve chairman Alan Greenspan has said that the probability of the US defaulting on its debt repayment is zero, because the US government can print money. The Federal Reserve Bank of St. Louis has likewise noted that the US government's debt is denominated in US dollars, and therefore the government will never go bankrupt in its own currency. 
How can you tell if your shiny new algorithm is better than the one you already have? How do you settle an argument between you and your colleague? Order notation will help you choose between algorithms and settle arguments. Order notation, otherwise known as Big "O" Notation, is one way of looking at the limiting behaviour of algorithms and therefore provides a way to compare different algorithms independent of their implementations. It is usually used to show how long an algorithm will take to produce an answer, but it can also be used to show how much memory, or other resources, will be used to produce a result. Order notation can describe the best, average and worst case costs of an algorithm, so be careful with quoted values: they usually refer to the average case and not to the worst case. See later for an example. Working it out Consider a sorted list of 100 numbers drawn at random from the numbers 1 to 1 million. Suppose we were given a random number from the same range and wanted to find out if that number was in our list. The simplest way of doing this is to go through the list and check each item in the list against the number we've been given. Since the list contains a very small subset of the total range, most of the time the number won't be in the list, but we'll still have to check all 100 items each time. Occasionally an item will be in the list, but as this happens very infrequently we can ignore this behaviour when working out the average order of this algorithm. This kind of simplification goes on a lot in simple order calculations. If more precision is required then this edge case can be considered. The speed of the algorithm depends on the size of the list and, regardless of any other factors, the bigger the list the slower the algorithm: if the list doubles in size the algorithm will take twice as long. This algorithm is therefore an O(n) algorithm, where n is the size of the list. It is important to note that if we reduce the size of the source range to the numbers 1 to 200, so that the item is found in the list much more often, the average number of checks falls (a successful search stops about halfway through the list on average), and you will sometimes see this written as O(n/2); formally it is still O(n), just with a smaller constant factor. Now consider looking for a match using a binary search, which we can use because the list is sorted. We will only have to check at most 7 different numbers to find whether the given number is in the list, as we are halving the search space with each test. Also note that if we double the size of the list we only need to do one more check. This means that the binary search algorithm is O(log n). Note that in computer science log means log2 rather than log10 or loge. It is now possible to compare the two algorithms rather than comparing implementations of the algorithms. Clearly, as n, the size of the list, grows, the binary search algorithm becomes better and better, making it the clear winner for large n (a short sketch of both searches follows below). However, the simple search is far easier to code, as writing a working binary search algorithm is not trivial, so it is worth keeping it simple until the speed increase is needed (see the post on Optimization). Interestingly, the space usage of both algorithms is the same, as they work on the list of numbers directly and allocate no new space of their own. In order notation this is known as O(1) or O(k), as the cost is independent of the number of items. When analysing more complex algorithms, break them into simpler parts, work out the order of each part and put them back together. 
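Here is a minimal sketch of the two searches discussed above: a linear scan (O(n)) and a binary search (O(log n)) over the same sorted list. The list size and value range follow the example in the post; the function names and the small self-check at the end are illustrative additions, not part of the original.

```python
# Linear scan vs binary search over the same sorted list (illustrative names).
import random

def linear_search(sorted_list, target):
    """Check every item in turn: O(n) comparisons."""
    for item in sorted_list:
        if item == target:
            return True
    return False

def binary_search(sorted_list, target):
    """Halve the search space on each step: O(log n) comparisons."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return True
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

if __name__ == "__main__":
    numbers = sorted(random.sample(range(1, 1_000_001), 100))
    probe = random.randint(1, 1_000_000)
    # Both searches must agree; only the number of comparisons differs.
    assert linear_search(numbers, probe) == binary_search(numbers, probe)
```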
For instance, an algorithm that searches an input for numbers in a range and then sorts that subset would be O(n + m log m), where n is the size of the input and m is the size of the subset (a short sketch of this appears at the end of this post). Note that as long as the subset stays small relative to the input, the searching part will come to dominate for very large inputs, and this can then be considered an O(n) algorithm. Algorithms are generally divided into different classes depending on the dominant, fastest-growing term. Thus, if you have an algorithm that is O(n² + m) it is in the class of O(n²) algorithms. For the most part, when considering different algorithms, forget about space complexity. Time complexity is the important one. You can often trade off time for space, though, so be aware of how much you're using. - O(1), or constant time, is the best option. Accessing an array is O(1), as is using a hash table. Unfortunately, hash tables usually incur a space penalty as they need to have empty space to minimize hash collisions (different inputs generating the same hash). Also, a poor hashing function will kill you, as resolving a collision is an O(m) operation where m is the number of elements sharing the same hash. - O(log n) is the next best option. Binary search is pretty much the only thing that is O(log n). Useful to remember if you can maintain a sorted list and are doing lots of searching. - O(n) is not bad if you have a small set of things to iterate through. O(n) is very easy to think about, so these should be your "go to" algorithms. - O(n log n) is usually only encountered in sorting, as lots of sorting algorithms have this order. QuickSort is a particularly good sorting algorithm that has O(n log n) as its average behaviour. - O(n²) is getting quite bad, and should only be used when there is no alternative or the set is very small. Sorting algorithms often have O(n²) as their worst case. Look at how to cut down the search space in at least one dimension to try and tame O(n²) algorithms. - O(n³) and higher (i.e. O(2ⁿ), O(n!), etc.) are bad but might be the only way to solve the problem. Do you need the exact solution or will a 'good' one be enough? Use a simpler, less accurate algorithm, or some kind of heuristic. See the travelling salesman problem for examples of tackling a problem where the simple solution is O(n!). Being able to analyze algorithms, rather than implementations of algorithms, is an important tool in the programmer's toolkit. It allows you to make informed choices as well as settle arguments. The next time you and a colleague are going head to head over whose algorithm is best, break out the order notation and settle the argument once and for all.
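As promised above, here is a minimal sketch of that search-then-sort pipeline: an O(n) scan to pick out the values in a range, followed by an O(m log m) sort of just that subset, for O(n + m log m) overall. The function name and sample data are made up for illustration.

```python
# O(n) scan to select values in a range, then O(m log m) sort of the subset.
def values_in_range_sorted(values, low, high):
    subset = [v for v in values if low <= v <= high]   # O(n) scan of the input
    subset.sort()                                      # O(m log m) sort of the subset
    return subset

if __name__ == "__main__":
    data = [42, 7, 93, 15, 88, 3, 61, 27]
    print(values_in_range_sorted(data, 10, 90))        # [15, 27, 42, 61, 88]
```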
The Coordinate Plane and Plotting Points Help The Rectangular Coordinate Plane This section explains the concepts of combining algebra and geometry using the rectangular coordinate plane. The rectangular coordinate plane uses two perpendicular number lines called axes. The horizontal axis is called the x axis. The vertical axis is called the y axis. The intersection of the axes is called the origin. The axes divide the plane into four quadrants, called I, II, III, and IV (see Fig. 11-1). Each point on the plane can be located by its coordinates. The coordinates give the horizontal and vertical distances from the y axis and x axis, respectively. The distances are called the x coordinate (abscissa) and the y coordinate (ordinate), and they are written as an ordered pair (x, y). For example, a point with coordinates (2, 3) is located 2 units to the right of the y axis and 3 units above the x axis (see Fig. 11-2). The point whose coordinates are (2, 3) is located in the first quadrant or QI, since both coordinates are positive. The point whose coordinates are (–4, 1) is located in the second quadrant or QII, since the x coordinate is negative and the y coordinate is positive. The point whose coordinates are (–3, –5) is in the third quadrant or QIII, since both coordinates are negative. The point (3, –1) is located in the fourth quadrant or QIV, since the x coordinate is positive and the y coordinate is negative (see Fig. 11-3). The coordinates of the origin are (0, 0). Any point whose y coordinate is zero is located on the x axis. For example, point P, whose coordinates are (–3, 0), is located on the x axis 3 units to the left of the y axis. Any point whose x coordinate is zero is located on the y axis. For example, the point Q, whose coordinates are (0, 4), is located on the y axis 4 units above the x axis (see Fig. 11-4). Give the coordinates of each point shown in Fig. 11-5. A (–2, 4); B (3, 1); C (–1, –5); D (6, –2); E (2, 0); F (0, –4). The Coordinate Plane and Plotting Points Practice Problems Give the coordinates of each point shown in Fig. 11-6. 1. A (–4, –2) 2. B (2, –5) 3. C (–3, 4) 4. D (1, 5) 5. E (5, 0) 6. F (0, –1) Practice problems for these concepts can be found at: Graphing Practice Test.
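The quadrant rules above can also be summarized programmatically. The following is a small, hypothetical sketch (the function name is made up for illustration); it reproduces the classifications of the example points given in the text.

```python
# Sketch of the quadrant rules described above (illustrative function name).
def locate(x: float, y: float) -> str:
    if x == 0 and y == 0:
        return "origin"
    if x == 0:
        return "on the y axis"
    if y == 0:
        return "on the x axis"
    if x > 0 and y > 0:
        return "QI"
    if x < 0 and y > 0:
        return "QII"
    if x < 0 and y < 0:
        return "QIII"
    return "QIV"   # x > 0 and y < 0

if __name__ == "__main__":
    for point in [(2, 3), (-4, 1), (-3, -5), (3, -1), (-3, 0), (0, 4)]:
        print(point, "->", locate(*point))
    # (2, 3) -> QI, (-4, 1) -> QII, (-3, -5) -> QIII,
    # (3, -1) -> QIV, (-3, 0) -> on the x axis, (0, 4) -> on the y axis
```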
Patriarchy is a historic creation formed by men and women in a process which took nearly 2500 years to its completion. In its earliest form patriarchy appeared as the archaic state. The basic unit of its organization was the patriarchal family, which both expressed and constantly generated its rules and values. We have seen how integrally definitions of gender affected the formation of the state. Let us briefly review the way in which gender became created, defined, and established. The roles and behavior deemed appropriate to the sexes were expressed in values, customs, laws, and social roles. They also, and very importantly, were expressed in leading metaphors, which became part of the cultural construct and explanatory system. The sexuality of women, consisting of their sexual and their reproductive capacities and services, was commodified even prior to the creation of Western civilization. The development of agriculture in the Neolithic period fostered the inter-tribal "exchange of women," not only as a means of avoiding incessant warfare by the cementing of marriage alliances but also because societies with more women could produce more children. In contrast to the economic needs of hunting/gathering societies, agriculturists could use the labor of children to increase production and accumulate surpluses. Men-as-a-group had rights in women which women-as-a-group did not have in men. Women themselves became a resource, acquired by men much as the land was acquired by men. Women were exchanged or bought in marriages for the benefit of their families; later, they were conquered or bought in slavery, where their sexual services were part of their labor and where their children were the property of their masters. In every known society it was women of conquered tribes who were first enslaved, whereas men were killed. It was only after men had learned how to enslave the women of groups who could be defined as strangers, that they learned how to enslave men of those groups and, later, subordinates from within their own societies. Thus, the enslavement of women, combining both racism and sexism, preceded the formation of classes and class oppression. Class differences were, at their very beginnings, expressed and constituted in terms of patriarchal relations. Class is not a separate construct from gender; rather, class is expressed in genderic terms. By the second millennium b.c. in Mesopotamian societies, the daughters of the poor were sold into marriage or prostitution in order to advance the economic interests of their families. The daughters of men of property could command a bride price, paid by the family of the groom to the family of the bride, which frequently enabled the bride's family to secure more financially advantageous marriages for their sons, thus improving the family's economic position. If a husband or father could not pay his debt, his wife and children could be used as pawns, becoming debt slaves to the creditor. These conditions were so firmly established by 1750 b.c. that Hammurabic law made a decisive improvement in the lot of debt pawns by limiting their terms of service to three years, where earlier it had been for life. The product of this commodification of women — bride price, sale price, and children — was appropriated by men. It may very well represent the first accumulation of private property. 
The enslavement of women of conquered tribes became not only a status symbol for nobles and warriors, but it actually enabled the conquerors to acquire tangible wealth through selling or trading the product of the slaves' labor and their reproductive product, slave children. Claude Lévi-Strauss, to whom we owe the concept of "the exchange of women," speaks of the reification of women, which occurred as its consequence. But it is not women who are reified and commodified, it is women's sexuality and reproductive capacity which is so treated. The distinction is important. Women never became "things," nor were they so perceived. Women, no matter how exploited and abused, retained their power to act and to choose to the same, often very limited extent, as men of their group. But women always and to this day lived in a relatively greater state of unfreedom than did men. Since their sexuality, an aspect of their body, was controlled by others, women were not only actually disadvantaged but psychologically restrained in a very special way. For women, as for men of subordinate and oppressed groups, history consisted of their struggle for emancipation and freedom from necessity. But women struggled against different forms of oppression and dominance than did men, and their struggle, up to this time, has lagged behind that of men. The first gender-defined social role for women was to be those who were exchanged in marriage transactions. The obverse gender role for men was to be those who did the exchanging or who defined the terms of the exchanges. Another gender-defined role for women was that of the "stand-in" wife, which became established and institutionalized for women of elite groups. This role gave such women considerable power and privileges, but it depended on their attachment to elite men and was based, minimally, on their satisfactory performance in rendering these men sexual and reproductive services. If a woman failed to meet these demands, she was quickly replaced and thereby lost all her privileges and standing. The gender-defined role of warrior led men to acquire power over men and women of conquered tribes. Such war-induced conquest usually occurred over people already differentiated from the victors by race, ethnicity, or simple tribal difference. In its ultimate origin, "difference" as a distinguishing mark between the conquered and the conquerors was based on the first clearly observable difference, that between the sexes. Men had learned how to assert and exercise power over people slightly different from themselves in the primary exchange of women. In so doing, men acquired the knowledge necessary to elevate "difference" of whatever kind into a criterion for dominance. From its inception in slavery, class dominance took different forms for enslaved men and women: men were primarily exploited as workers; women were always exploited as workers, as providers of sexual services, and as reproducers. The historical record of every slave society offers evidence for this generalization. The sexual exploitation of lower-class women by upper-class men can be shown in antiquity, under feudalism, in the bourgeois households of nineteenth- and twentieth-century Europe, in the complex sex/race relations between women of the colonized countries and their male colonizers — it is ubiquitous and pervasive. For women, sexual exploitation is the very mark of class exploitation. At any given moment in history, each "class" is constituted of two distinct classes — men and women. 
The class position of women became consolidated and actualized through their sexual relationships. It always was expressed within degrees of unfreedom on a spectrum ranging from the slave woman, whose sexual and reproductive capacity was commodified as she herself was; to the slave-concubine, whose sexual performance might elevate her own status or that of her children; then to the "free" wife, whose sexual and reproductive services to one man of the upper classes entitled her to property and legal rights. While each of these groups had vastly different obligations and privileges in regard to property, law, and economic resources, they shared the unfreedom of being sexually and reproductively controlled by men. We can best express the complexity of women's various levels of dependency and freedom by comparing each woman with her brother and considering how the sister's and brother's lives and opportunities would differ. Class for men was and is based on their relationship to the means of production: those who owned the means of production could dominate those who did not. The owners of the means of production also acquired the commodity of female sexual services, both from women of their own class and from women of the subordinate classes. In Ancient Mesopotamia, in classical antiquity, and in slave societies, dominant males also acquired, as property, the product of the reproductive capacity of subordinate women — children, to be worked, traded, married off, or sold as slaves, as the case might be. For women, class is mediated through their sexual ties to a man. It is through the man that women have access to or are denied access to the means of production and to resources. It is through their sexual behavior that they gain access to class. "Respectable women" gain access to class through their fathers and husbands, but breaking the sexual rules can at once declass them. The gender definition of sexual "deviance" marks a woman as "not respectable," which in fact consigns her to the lowest class status possible. Women who withhold heterosexual services (such as single women, nuns, lesbians) are connected to the dominant man in their family of origin and through him gain access to resources. Or, alternatively, they are declassed. In some historical periods, convents and other enclaves for single women created some sheltered space, in which such women could function and retain their respectability. But the vast majority of single women are, by definition, marginal and dependent on the protection of male kin. This is true throughout historical time up to the middle of the twentieth century in the Western world and still is true in most of the under-developed countries today. The group of independent, self-supporting women which exists in every society is small and usually highly vulnerable to economic disaster. Economic oppression and exploitation are based as much on the commodification of female sexuality and the appropriation by men of women's labor power and her reproductive power as on the direct economic acquisition of resources and persons. The archaic state in the Ancient Near East emerged in the second millennium b.c. from the twin roots of men's sexual dominance over women and the exploitation by some men of others. From its inception, the archaic state was organized in such a way that the dependence of male family heads on the king or the state bureaucracy was compensated for by their dominance over their families. 
Male family heads allocated the resources of society to their families the way the state allocated the resources of society to them. The control of male family heads over their female kin and minor sons was as important to the existence of the state as was the control of the king over his soldiers. This is reflected in the various compilations of Mesopotamian laws, especially in the large number of laws dealing with the regulation of female sexuality. From the second millennium b.c. forward control over the sexual behavior of citizens has been a major means of social control in every state society. Conversely, class hierarchy is constantly reconstituted in the family through sexual dominance. Regardless of the political or economic system, the kind of personality which can function in a hierarchical system is created and nurtured within the patriarchal family. The patriarchal family has been amazingly resilient and varied in different times and places. Oriental patriarchy encompassed polygamy and female enclosure in harems. Patriarchy in classical antiquity and in its European development was based upon monogamy, but in all its forms a double sexual standard, which disadvantages women, was part of the system. In modern industrial states, such as in the United States, property relations within the family develop along more egalitarian lines than those in which the father holds absolute power, yet the economic and sexual power relations within the family do not necessarily change. In some cases, sexual relations are more egalitarian, while economic relations remain patriarchal; in other cases the pattern is reversed. In all cases, however, such changes within the family do not alter the basic male dominance in the public realm, in institutions and in government. The family not merely mirrors the order in the state and educates its children to follow it, it also creates and constantly reinforces that order. It should be noted that when we speak of relative improvements in the status of women in a given society, this frequently means only that we are seeing improvements in the degree in which their situation affords them opportunities to exert some leverage within the system of patriarchy. Where women have relatively more economic power, they are able to have somewhat more control over their lives than in societies where they have no economic power. Similarly, the existence of women's groups, associations, or economic networks serves to increase the ability of women to counteract the dictates of their particular patriarchal system. Some anthropologists and historians have called this relative improvement women's "freedom." Such a designation is illusory and unwarranted. Reforms and legal changes, while ameliorating the condition of women and an essential part of the process of emancipating them, will not basically change patriarchy. Such reforms need to be integrated within a vast cultural revolution in order to transform patriarchy and thus abolish it. The system of patriarchy can function only with the cooperation of women. This cooperation is secured by a variety of means: gender indoctrination; educational deprivation; the denial to women of knowledge of their history; the dividing of women, one from the other, by defining "respectability" and "deviance" according to women's sexual activities; by restraints and outright coercion; by discrimination in access to economic resources and political power; and by awarding class privileges to conforming women. 
For nearly four thousand years women have shaped their lives and acted under the umbrella of patriarchy, specifically a form of patriarchy best described as paternalistic dominance. The term describes the relationship of a dominant group, considered superior, to a subordinate group, considered inferior, in which the dominance is mitigated by mutual obligations and reciprocal rights. The dominated exchange submission for protection, unpaid labor for maintenance. In the patriarchal family, responsibilities and obligations are not equally distributed among those to be protected: the male children's subordination to the father's dominance is temporary; it lasts until they themselves become heads of households. The subordination of female children and of wives is lifelong. Daughters can escape it only if they place themselves as wives under the dominance/protection of another man. The basis of paternalism is an unwritten contract for exchange: economic support and protection given by the male for subordination in all matters, sexual service, and unpaid domestic service given by the female. Yet the relationship frequently continues in fact and in law, even when the male partner has defaulted on his obligation. It was a rational choice for women, under conditions of public powerlessness and economic dependency, to choose strong protectors for themselves and their children. Women always shared the class privileges of men of their class as long as they were under "the protection" of a man. For women, other than those of the lower classes, the "reciprocal agreement" went like this: in exchange for your sexual, economic, political, and intellectual subordination to men you may share the power of men of your class to exploit men and women of the lower class. In class society it is difficult for people who themselves have some power, however limited and circumscribed, to see themselves also as deprived and subordinated. Class and racial privileges serve to undercut the ability of women to see themselves as part of a coherent group, which, in fact, they are not, since women uniquely of all oppressed groups occur in all strata of the society. The formation of a group consciousness of women must proceed along different lines. That is the reason why theoretical formulations, which have been appropriate to other oppressed groups, are so inadequate in explaining and conceptualizing the subordination of women. Women have for millennia participated in the process of their own subordination because they have been psychologically shaped so as to internalize the idea of their own inferiority. The unawareness of their own history of struggle and achievement has been one of the major means of keeping women subordinate. The connectedness of women to familial structures made any development of female solidarity and group cohesiveness extremely problematic. Each individual woman was linked to her male kin in her family of origin through ties which implied specific obligations. Her indoctrination, from early childhood on, emphasized her obligation not only to make an economic contribution to the kin and household but also to accept a marriage partner in line with family interests. Another way of saying this is to say that sexual control of women was linked to paternalistic protection and that, in the various stages of her life, she exchanged male protectors, but she never outgrew the childlike state of being subordinate and under protection. 
Other oppressed classes and groups were impelled toward group consciousness by the very conditions of their subordinate status. The slave could clearly mark a line between the interests and bonds to his/her own family and the ties of subservience/protection linking him/her with the master. In fact, protection by slave parents of their own family against the master was one of the most important causes of slave resistance. "Free" women, on the other hand, learned early that their kin would cast them out, should they ever rebel against their dominance. In traditional and peasant societies there are many recorded instances of female family members tolerating and even participating in the chastisement, torture, even death of a girl who had transgressed against the family "honor." In Biblical times, the entire community gathered to stone the adulteress to death. Similar practices prevailed in Sicily, Greece, and Albania into the twentieth century. Bangladesh fathers and husbands cast out their daughters and wives who had been raped by invading soldiers, consigning them to prostitution. Thus, women were often forced to flee from one "protector" to the other, their "freedom" frequently defined only by their ability to manipulate between these protectors. Most significant of all the impediments toward developing group consciousness for women was the absence of a tradition which would reaffirm the independence and autonomy of women at any period in the past. There had never been any woman or group of women who had lived without male protection, as far as most women knew. There had never been any group of persons like them who had done anything significant for themselves. Women had no history — so they were told; so they believed. Thus, ultimately, it was men's hegemony over the symbol system which most decisively disadvantaged women. Male hegemony over the symbol system took two forms: educational deprivation of women and male monopoly on definition. The former happened inadvertently, more the consequence of class dominance and the accession of military elites to power. Throughout historical times, there have always been large loopholes for women of the elite classes, whose access to education was one of the major aspects of their class privilege. But male dominance over definition has been deliberate and pervasive, and the existence of individual highly educated and creative women has, for nearly four thousand years, left barely an imprint on it. We have seen how men appropriated and then transformed the major symbols of female power: the power of the Mother-Goddess and the fertility-goddesses. We have seen how men constructed theologies based on the counterfactual metaphor of male procreativity and redefined female existence in a narrow and sexually dependent way. We have seen, finally, how the very metaphors for gender have expressed the male as norm and the female as deviant; the male as whole and powerful, the female as unfinished, mutilated, and lacking in autonomy. On the basis of such symbolic constructs, embedded in Greek philosophy, the Judeo-Christian theologies, and the legal tradition on which Western civilization is built, men have explained the world in their own terms and defined the important questions so as to make themselves the center of discourse. By making the term "man" subsume "woman" and arrogate to itself the representation of all of humanity, men have built a conceptual error of vast proportion into all of their thought. 
By taking the half for the whole, they have not only missed the essence of whatever they are describing, but they have distorted it in such a fashion that they cannot see it correctly. As long as men believed the earth to be flat, they could not understand its reality, its function, and its actual relationship to other bodies in the universe. As long as men believe their experiences, their viewpoint, and their ideas represent all of human experience and all of human thought, they are not only unable to define correctly in the abstract, but they are unable to describe reality accurately. The androcentric fallacy, which is built into all the mental constructs of Western civilization, cannot be rectified simply by "adding women." What it demands for rectification is a radical restructuring of thought and analysis which once and for all accepts the fact that humanity consists in equal parts of men and women and that the experiences, thoughts, and insights of both sexes must be represented in every generalization that is made about human beings. Today, historical development has for the first time created the necessary conditions by which large groups of women — finally, all women — can emancipate themselves from subordination. Since women's thought has been imprisoned in a confining and erroneous patriarchal framework, the transforming of the consciousness of women about ourselves and our thought is a precondition for change. We have opened this book with a discussion of the significance of history for human consciousness and psychic well-being. History gives meaning to human life and connects each life to immortality, but history has yet another function. In preserving the collective past and reinterpreting it to the present, human beings define their potential and explore the limits of their possibilities. We learn from the past not only what people before us did and thought and intended, but we also learn how they failed and erred. From the days of the Babylonian king-lists forward, the record of the past has been written and interpreted by men and has primarily focused on the deeds, actions, and intentions of males. With the advent of writing, human knowledge moved forward by tremendous leaps and at a much faster rate than ever before. While, as we have seen, women had participated in maintaining the oral tradition and religious and cultic functions in the preliterate period and for almost a millennium thereafter, their educational disadvantaging and their symbolic dethroning had a profound impact on their future development. The gap between the experience of those who could or might (in the case of lower-class males) participate in the creating of the symbol system and those who merely acted but did not interpret became increasingly greater. In her brilliant work The Second Sex, Simone de Beauvoir focused on the historical end product of this development. She described man as autonomous and transcendent, woman as immanent. But her analysis ignored history. Explaining "why women lack concrete means for organizing themselves into a unit" in defense of their own interests, she stated flatly: "They [women] have no past, no history, no religion of their own." De Beauvoir is right in her observation that woman has not "transcended," if by transcendence one means the definition and interpretation of human knowledge. But she was wrong in thinking that therefore woman has had no history. 
Two decades of Women's History scholarship have disproven this fallacy by unearthing an unending list of sources and uncovering and interpreting the hidden history of women. This process of creating a history of women is still ongoing and will need to continue for a long time. We are only beginning to understand its implications. The myth that women are marginal to the creation of history and civilization has profoundly affected the psychology of women and men. It has given men a skewed and essentially erroneous view of their place in human society and in the universe. For women, as shown in the case of Simone de Beauvoir, who surely is one of the best-educated women of her generation, history seemed for millennia to offer only negative lessons and no precedent for significant action, heroism, or liberating example. Most difficult of all was the seeming absence of a tradition which would reaffirm the independence and autonomy of women. It seemed that there had never been any woman or group of women who had lived without male protection. It is significant that all the important examples to the contrary were expressed in myth and fable: amazons, dragon-slayers, women with magic powers. But in real life, women had no history — so they were told and so they believed. And because they had no history they had no future alternatives. In one sense, class struggle can be described as a struggle for the control of the symbol systems of a given society. The oppressed group, while it shares in and partakes of the leading symbols controlled by the dominant, also develops its own symbols. These become in time of revolutionary change, important forces in the creation of alternatives. Another way of saying this is that revolutionary ideas can be generated only when the oppressed have an alternative to the symbol and meaning system of those who dominate them. Thus, slaves living in an environment controlled by their masters and physically subject to the masters' total control, could maintain their humanity and at times set limits to the masters' power by holding on to their own "culture." Such a culture consisted of collective memories, carefully kept alive, of a prior state of freedom and of alternatives to the masters' ritual, symbols, and beliefs. What was decisive for the individual was the ability to identify him/herself with a state different from that of enslavement or subordination. Thus, all males, whether enslaved or economically or racially oppressed, could still identify with those like them — other males — who represented mastery over the symbol system. No matter how degraded, each male slave or peasant was like to the master in his relationship to God. This was not the case for women. Up to the time of the Protestant Reformation the vast majority of women could not confirm and strengthen their humanity by reference to other females in positions of intellectual authority and religious leadership. The few exceptional noblewomen and mystics, mostly cloistered nuns, were by their very rarity unlikely models for the ordinary woman. Where there is no precedent, one cannot imagine alternatives to existing conditions. It is this feature of male hegemony which has been most damaging to women and has ensured their subordinate status for millennia. The denial to women of their history has reinforced their acceptance of the ideology of patriarchy and has undermined the individual woman's sense of self-worth. 
Men's version of history, legitimized as the "universal truth," has presented women as marginal to civilization and as the victim of historical process. To be so presented and to believe it is almost worse than being entirely forgotten. The picture is false, on both counts, as we now know, but women's progress through history has been marked by their struggle against this disabling distortion. Moreover, for more than 2500 years women have been educationally disadvantaged and deprived of the conditions under which to develop abstract thought. Obviously thought is not based on sex; the capacity for thought is inherent in humanity; it can be fostered or discouraged, but it cannot ultimately be restrained. This is certainly true for thought generated by and concerned with daily living, the level of thought on which most men and women operate all their lives. But the generating of abstract thought and of new conceptual models — theory formation — is another matter. This activity depends on the individual thinker's education in the best of existing traditions and on the thinker's acceptance by a group of educated persons who, by criticism and interaction, provide "cultural prodding." It depends on having private time. Finally, it depends on the individual thinker being capable of absorbing such knowledge and then making a creative leap into a new ordering. Women, historically, have been unable to avail themselves of all of these necessary preconditions. Educational discrimination has disadvantaged them in access to knowledge; "cultural prodding," which is institutionalized in the upper reaches of the religious and academic establishments, has been unavailable to them. Universally, women of all classes had less leisure time than men, and, due to their child-rearing and family service function, what free time they had was generally not their own. The time of thinking men, their work and study time, has since the inception of Greek philosophy been respected as private. Like Aristotle's slaves, women "who with their bodies minister to the needs of life" have for more than 2500 years suffered the disadvantages of fragmented, constantly interrupted time. Finally, the kind of character development which makes for a mind capable of seeing new connections and fashioning a new order of abstractions has been exactly the opposite of that required of women, trained to accept their subordinate and service-oriented position in society. Yet there has always existed a tiny minority of privileged women, usually from the ruling elite, who had some access to the same kind of education as did their brothers. From the ranks of such women have come the intellectuals, the thinkers, the writers, the artists. It is such women, throughout history, who have been able to give us a female perspective, an alternative to androcentric thought. They have done so at a tremendous cost and with great difficulty. Those women, who have been admitted to the center of intellectual activity of their day and especially in the past hundred years, academically trained women, have first had to learn "how to think like a man." In the process, many of them have so internalized that learning that they have lost the ability to conceive of alternatives. The way to think abstractly is to define precisely, to create models in the mind and generalize from them. Such thought, men have taught us, must be based on the exclusion of feelings. 
Women, like the poor, the subordinate, the marginals, have close knowledge of ambiguity, of feelings mixed with thought, of value judgments coloring abstractions. Women have always experienced the reality of self and community, known it, and shared it with each other. Yet, living in a world in which they are devalued, their experience bears the stigma of insignificance. Thus they have learned to mistrust their own experience and devalue it. What wisdom can there be in menses? What source of knowledge in the milk-filled breast? What food for abstraction in the daily routine of feeding and cleaning? Patriarchal thought has relegated such gender-defined experiences to the realm of the "natural," the non-transcendent. Women's knowledge becomes mere "intuition," women's talk becomes "gossip." Women deal with the irredeemably particular: they experience reality daily, hourly, in their service function (taking care of food and dirt); in their constantly interruptable time; their splintered attention. Can one generalize while the particular tugs at one's sleeve? He who makes symbols and explains the world and she who takes care of his bodily and psychic needs and of his children — the gulf between them is enormous. Historically, thinking women have had to choose between living a woman's life, with its joys, dailiness, and immediacy, and living a man's life in order to think. The choice for generations of educated women has been cruel and costly. Others have deliberately chosen an existence outside of the sex-gender system, by living alone or with other women. Some of the most significant advances in women's thought were given us by such women, whose personal struggle for an alternative mode of living infused their thinking. But such women, for most of historical time, have been forced to live on the margins of society; they were considered "deviant" and as such found it difficult to generalize from their experience to others and to win influence and approval. Why no female system-builders? Because one cannot think universals when one's self is excluded from the generic. The social cost of having excluded women from the human enterprise of constructing abstract thought has never been reckoned. We can begin to understand the cost of it to thinking women when we accurately name what was done to us and describe, no matter how painful it may be, the ways in which we have participated in the enterprise. We have long known that rape has been a way of terrorizing us and keeping us in subjection. Now we also know that we have participated, although unwittingly, in the rape of our minds. Creative women, writers and artists, have similarly struggled against a distorting reality. A literary canon, which defined itself by the Bible, the Greek classics, and Milton, would necessarily bury the significance and the meaning of women's literary work, as historians buried the activities of women. The effort to resurrect this meaning and to re-evaluate women's literary and artistic work is recent. Feminist literary criticism and poetics have introduced us to a reading of women's literature, which finds a hidden, deliberately "slant," yet powerful world-view. Through the reinterpretations of feminist literary critics we are uncovering among women writers of the eighteenth and nineteenth centuries a female language of metaphors, symbols, and myths. Their themes often are profoundly subversive of the male tradition. 
They feature criticism of the Biblical interpretation of Adam's fall; rejection of the goddess/witch dichotomy; projection or fear of the split self. The powerful aspect of woman's creativity becomes symbolized in heroines endowed with magical powers of goodness or in strong women who are banished to cellars or to live as "the madwoman in the attic." Others write in metaphors upgrading the confined domestic space, making it serve, symbolically as the world. For centuries, we find in the works of literary women a pathetic, almost desperate search for Women's History, long before historical studies as such exist. Nineteenth-century female writers avidly read the work of eighteenth-century female novelists; over and over again they read the "lives" of queens, abbesses, poets, learned women. Early "compilers" searched the Bible and all historical sources to which they had access to create weighty tomes with female heroines. Women's literary voices, successfully marginalized and trivialized by the dominant male establishment, nevertheless survived. The voices of anonymous women were present as a steady undercurrent in the oral tradition, in folksong and nursery rhymes, tales of powerful witches and good fairies. In stitchery, embroidery, and quilting women's artistic creativity expressed an alternate vision. In letters, diaries, prayers, and song the symbol-making force of women's creativity pulsed and persisted. All of this work will be the subject of our inquiry in the next volume. How did women manage to survive under male cultural hegemony; what was their influence and impact on the patriarchal symbol system; how and under what conditions did they come to create an alternate, feminist world-view? These are the questions we will examine in order to chart the rise of feminist consciousness as a historical phenomenon. Women and men have entered historical process under different conditions and have passed through it at different rates of speed. If recording, defining, and interpreting the past marks man's entry into history, this occurred for males in the third millennium b.c. It occurred for women (and only some of them) with a few notable exceptions in the nineteenth century. Until then, all History was for women pre-History. Women's lack of knowledge of our own history of struggle and achievement has been one of the major means of keeping us subordinate. But even those of us already defining ourselves as feminist thinkers and engaged in the process of critiquing traditional systems of ideas are still held back by unacknowledged restraints embedded deeply within our psyches. Emergent woman faces a challenge to her very definition of self. How can her daring thought — naming the hitherto unnamed, asking the questions defined by all authorities as "non-existent" — how can such thought coexist with her life as woman? In stepping out of the constructs of patriarchal thought, she faces, as Mary Daly put it, "existential nothingness." And more immediately, she fears the threat of loss of communication with, approval by, and love from the man (or the men) in her life. Withdrawal of love and the designation of thinking women as "deviant" have historically been the means of discouraging women's intellectual work. In the past, and now, many emergent women have turned to other women as love objects and reinforcers of self. Heterosexual feminists, too, have throughout the ages drawn strength from their friendships with women, from chosen celibacy, or from the separation of sex from love. 
No thinking man has ever been threatened in his self-definition and his love life as the price for his thinking. We should not underestimate the significance of that aspect of gender control as a force restraining women from full participation in the process of creating thought systems. Fortunately, for this generation of educated women, liberation has meant the breaking of this emotional hold and the conscious reinforcement of our selves through the support of other women. Nor is this the end of our difficulties. In line with our historic gender-conditioning, women have aimed to please and have sought to avoid disapproval. This is poor preparation for making the leap into the unknown required of those who fashion new systems. Moreover, each emergent woman has been schooled in patriarchal thought. We each hold at least one great man in our heads. The lack of knowledge of the female past has deprived us of female heroines, a fact which is only recently being corrected through the development of Women's History. So, for a long time, thinking women have refurbished the idea systems created by men, engaging in a dialogue with the great male minds in their heads. Elizabeth Cady Stanton took on the Bible, the Church fathers, the founders of the American republic. Kate Millet argued with Freud, Norman Mailer, and the liberal literary establishment; Simone de Beauvoir with Sartre, Marx, and Camus; all Marxist-Feminists are in a dialogue with Marx and Engels and some also with Freud. In this dialogue woman intends merely to accept whatever she finds useful to her in the great man's system. But in these systems woman — as a concept, a collective entity, an individual — is marginal or subsumed. In accepting such dialogue, thinking woman stays far longer than is useful within the boundaries or the question-setting defined by the "great men." And just as long as she does, the source of new insight is closed to her. Revolutionary thought has always been based on upgrading the experience of the oppressed. The peasant had to learn to trust in the significance of his life experience before he could dare to challenge the feudal lords. The industrial worker had to become "class-conscious," the Black "race-conscious" before liberating thought could develop into revolutionary theory. The oppressed have acted and learned simultaneously — the process of becoming the newly conscious person or group is in itself liberating. So with women. The shift in consciousness we must make occurs in two steps: we must, at least for a time, be woman-centered. We must, as far as possible, leave patriarchal thought behind. To be woman-centered means: asking if women were central to this argument, how would it be defined? It means ignoring all evidence of women's marginality, because, even where women appear to be marginal, this is the result of patriarchal intervention; frequently also it is merely an appearance. The basic assumption should be that it is inconceivable for anything ever to have taken place in the world in which women were not involved, except if they were prevented from participation through coercion and repression. When using methods and concepts from traditional systems of thought, it means using them from the vantage point of the centrality of women. Women cannot be put into the empty spaces of patriarchal thought and systems — in moving to the center, they transform the system. 
To step outside patriarchal thought means: Being skeptical toward every known system of thought; being critical of all assumptions, ordering values and definitions. Testing one's statement by trusting our own, the female experience. Since such experience has usually been trivialized or ignored, it means overcoming the deep-seated resistance within ourselves toward accepting ourselves and our knowledge as valid. It means getting rid of the great men in our heads and substituting for them ourselves, our sisters, our anonymous foremothers. Being critical toward our own thought, which is, after all, thought trained in the patriarchal tradition. Finally, it means developing intellectual courage, the courage to stand alone, the courage to reach farther than our grasp, the courage to risk failure. Perhaps the greatest challenge to thinking women is the challenge to move from the desire for safety and approval to the most "unfeminine" quality of all — that of intellectual arrogance, the supreme hubris which asserts to itself the right to reorder the world. The hubris of the godmakers, the hubris of the male system-builders. The system of patriarchy is a historic construct; it has a beginning; it will have an end. Its time seems to have nearly run its course — it no longer serves the needs of men or women and in its inextricable linkage to militarism, hierarchy, and racism it threatens the very existence of life on earth. What will come after, what kind of structure will be the foundation for alternate forms of social organization we cannot yet know. We are living in an age of unprecedented transformation. We are in the process of becoming. But we already know that woman's mind, at last unfettered after so many millennia, will have its share in providing vision, ordering, solutions. Women at long last are demanding, as men did in the Renaissance, the right to explain, the right to define. Women, in thinking themselves out of patriarchy add transforming insights to the process of redefinition. As long as both men and women regard the subordination of half the human race to the other as "natural," it is impossible to envision a society in which differences do not connote either dominance or subordination. The feminist critique of the patriarchal edifice of knowledge is laying the groundwork for a correct analysis of reality, one which at the very least can distinguish the whole from a part. Women's History, the essential tool in creating feminist consciousness in women, is providing the body of experience against which new theory can be tested and the ground on which women of vision can stand. A feminist world-view will enable women and men to free their minds from patriarchal thought and practice and at last to build a world free of dominance and hierarchy, a world that is truly human.
A physicist explores the history of mathematics among the Babylonians and Egyptians, showing how their scribes in the era from 2000 to 1600 BCE used visualizations of plane geometric figures to invent geometric algebra, even solving problems that we now do by quadratic algebra. Rudman traces the evolution of mathematics from the metric geometric algebra of Babylon and Egypt—which used numeric quantities on diagrams as a means to work out problems—to the nonmetric geometric algebra of Euclid (ca. 300 BCE). From his analysis of Babylonian geometric algebra, the author formulates a "Babylonian Theorem", which he demonstrates was used to derive the Pythagorean Theorem, about a millennium before its purported discovery by Pythagoras. He also concludes that what enabled the Greek mathematicians to surpass their predecessors was the insertion of alphabetic notation onto geometric figures. Such symbolic notation was natural for users of an alphabetic language, but was impossible for the Babylonians and Egyptians, whose writing systems (cuneiform and hieroglyphics, respectively) were not alphabetic. This is a masterful, fascinating, and entertaining book, which will interest both math enthusiasts and students of history. Published by Prometheus Books, January 26, 2010. 248 pages, 6 x 9 in. ISBN 9781591027737.
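For readers who want the flavor of the result in modern notation, the identity commonly presented as the "Babylonian Theorem" can be written algebraically as below. This rendering is only an illustrative assumption about the formulation; Rudman's book states and derives it geometrically rather than symbolically.

```latex
% A common modern rendering of the rectangle-to-square identity (illustrative only;
% the precise statement in Rudman's book may differ): for any two lengths a and b,
%   ((a+b)/2)^2 - ((a-b)/2)^2 = ab.
% Choosing a = m^2 and b = n^2 turns it into a relation among three squares,
% one route to Pythagorean triples.
\[
  \left(\frac{a+b}{2}\right)^{2} - \left(\frac{a-b}{2}\right)^{2} = ab,
  \qquad
  \left(\frac{m^{2}+n^{2}}{2}\right)^{2} = \left(\frac{m^{2}-n^{2}}{2}\right)^{2} + (mn)^{2}.
\]
```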
Welcome to 19th Century, where we dive into the dynamic clash between liberalism and nationalism during this pivotal era. Join us as we explore the intricate interplay of ideologies that shaped politics, culture, and society, defining an era of profound change and transformation. The Clash of Ideologies: Examining the Battle between Liberalism and Nationalism in the 19th Century The 19th century witnessed a major clash of ideologies between liberalism and nationalism. As societies underwent significant transformations due to industrialization and political revolutions, these two competing ideologies emerged as powerful forces shaping the course of history. On one hand, liberalism advocated for individual freedoms, limited government intervention, and economic liberalism. It emphasized the importance of a free market, private property rights, and the rule of law. Liberal thinkers such as John Locke and Adam Smith argued that individuals should be free to pursue their own interests without undue interference from the state. This ideology gained traction among the rising middle classes, who sought to protect their economic and social privileges. On the other hand, nationalism emphasized the primacy of the nation-state and the collective identity of its people. Nationalists believed in the idea of self-determination and the right of nations to govern themselves. They argued that cultural, linguistic, and historical ties should form the basis of political organization. Nationalism often went hand in hand with the demand for independence from imperial powers, resulting in the rise of numerous nation-states in Europe during this period. The clash between liberalism and nationalism was particularly evident in the context of colonialism and imperialism. While liberal ideas espoused individual rights and equality, European powers justified their imperial ventures through nationalist rhetoric, claiming their superior civilization and duty to civilize “lesser” peoples. This tension highlights the complexities and contradictions inherent in these ideologies. Additionally, the clash of ideologies manifested in political struggles and revolutions. In many European countries, liberals and nationalists joined forces to overthrow conservative monarchies and establish representative governments. However, disagreements over the extent of national inclusion and the balance between individual and collective rights led to internal conflicts within these movements. Overall, the clash between liberalism and nationalism in the 19th century shaped the political, social, and economic landscape of the time. These ideologies continue to influence political discourse and the formation of nation-states to this day. Understanding their historical context helps us to comprehend the challenges and complexities of modern societies. What did nationalism and liberalism entail in the 19th century? In the context of the 19th century, nationalism and liberalism played significant roles in shaping the political landscape. Nationalism referred to a sense of pride, loyalty, and identity tied to one’s nation or ethnic group. It emphasized the idea that a nation should have self-determination and be able to govern itself. Nationalism often involved the promotion of national interests and the preservation of cultural and linguistic distinctiveness. 
It fueled movements for independence and the unification of fragmented territories into cohesive nation-states. Liberalism encompassed a range of political and economic ideas centered on individual rights, limited government intervention, and free market capitalism. It emphasized the importance of personal freedom, civil liberties, and equality before the law. Liberals advocated for constitutionalism, representative democracy, and the rule of law. Both nationalism and liberalism challenged the prevailing order of monarchy and absolutism during the 19th century. They were closely intertwined as liberal ideas often influenced nationalist movements, and vice versa. Nationalism provided a sense of unity and purpose for liberal movements, while liberalism offered a framework for achieving the goals of nationalism through legal and political reforms. Together, nationalism and liberalism contributed to significant social and political changes during the 19th century. They fostered the emergence of nation-states based on shared cultural or linguistic identities, led to the overthrow of monarchies in several countries, and inspired the pursuit of individual rights and freedoms. These ideologies continue to shape political discourse and movements around the world to this day. What was the significance of liberalism in the 19th century? Liberalism in the 19th century played a significant role in shaping political, social, and economic developments during this period. It advocated for individual freedom, limited government intervention, and equality before the law. One of the key significances of liberalism was its impact on political systems. Liberal thinkers championed the idea of constitutionalism, advocating for the establishment of representative governments that protected the rights and liberties of individuals. This led to the spread of democratic ideals and the rise of parliamentary systems in many countries. Economically, liberalism promoted free trade, deregulation, and laissez-faire policies. Liberal economists argued for the removal of barriers to trade, such as tariffs and monopolies, which they believed hindered economic growth and innovation. These ideas influenced the development of capitalism and the industrial revolution, allowing for greater economic prosperity and the emergence of the middle class. Socially, liberalism advocated for the protection of individual rights and freedoms. Liberal thinkers emphasized the importance of civil liberties, including freedom of speech, religion, and assembly. They also called for equal opportunities and the abolishment of discriminatory practices, such as slavery and serfdom. Overall, the significance of liberalism in the 19th century lies in its transformative impact on political, economic, and social structures. It laid the groundwork for modern democratic societies, free-market economies, and a greater emphasis on individual rights and freedoms. What were the concepts of nationalism during the 19th century? Nationalism in the 19th century was characterized by a strong sense of loyalty and devotion to one’s nation. It emerged as a powerful political and ideological force during this time period, primarily as a response to the social, economic, and political changes brought about by industrialization and the rise of the nation-state. One of the key concepts of 19th-century nationalism was the belief that people sharing a common language, culture, history, and territory should have their own independent nation-state. 
This idea was known as ethnic nationalism, which emphasized the importance of shared ethnic or cultural identity in defining the boundaries of a nation. Another concept emerged during this time, referred to as civic nationalism. Civic nationalists believed that a nation is not defined by ethnic or cultural factors alone, but also by shared values, political institutions, and citizenship. They argued that individuals who embraced these principles could become members of the nation, regardless of their ethnic backgrounds. The emergence of nationalism in the 19th century led to various movements and struggles for independence and self-determination. Throughout Europe, several nations fought for their sovereignty and sought to establish independent nation-states. Notable examples include Italy and Germany, where fragmented territories were unified through nationalist movements. In addition to political aspirations, nationalism also played a significant role in shaping cultural and intellectual movements during the 19th century. It influenced art, literature, music, and historical narratives, often promoting a romanticized view of the nation’s history and traditions. Overall, 19th-century nationalism marked a significant turning point in the history of nation-states, as it emphasized the importance of national identity and paved the way for the formation of modern nation-states based on linguistic, cultural, and political factors. What were the principles of liberal nationalism in 19th century Europe? Liberal nationalism in 19th century Europe was characterized by a set of principles that aimed to promote the ideas of individual liberties, popular sovereignty, and national self-determination. Firstly, individual liberties were seen as crucial to the liberal nationalist movement. This concept included the protection of individual rights such as freedom of speech, the press, and religion, as well as the belief in the rule of law and limited government interference in people’s lives. Secondly, popular sovereignty was a core principle of liberal nationalism. It emphasized that political power should derive from the consent of the governed, rather than being held solely by monarchs or aristocrats. Liberal nationalists called for representative government and the establishment of constitutions that would ensure the participation of citizens in decision-making processes. Thirdly, national self-determination was another key principle. Liberal nationalists believed that each nation should have the right to determine its own destiny, free from foreign domination or interference. They argued that nations should have their own independent states, where the cultural, linguistic, and historical characteristics of the people could be preserved and celebrated. Overall, the principles of liberal nationalism in 19th century Europe promoted the idea of individual freedoms, popular participation in politics, and the creation of nation-states based on the self-determination of distinct cultural groups. These ideas played a significant role in shaping the political landscape of the time and in inspiring movements for independence and unification across Europe. Frequently Asked Questions What were the key ideological differences between liberalism and nationalism in the 19th century? In the 19th century, there were significant ideological differences between liberalism and nationalism. Liberalism focused on individual freedom, limited government intervention, and a free market economy. 
Liberals believed in the idea of natural rights, such as liberty, property, and equality. They promoted the concept of a constitutional government with checks and balances to protect individual rights and liberties. Liberalism emphasized the importance of civil liberties, including freedom of speech, press, and assembly. Economically, liberals advocated for laissez-faire policies, supporting free trade and minimal government regulation. Nationalism, on the other hand, placed emphasis on the collective identity, culture, and interests of a particular nation or ethnic group. Nationalists believed that every nation should have its own independent state, where its people could exercise self-determination and govern themselves. Nationalism aimed to strengthen national identity and pride, often promoting cultural revival and linguistic unity. It sought to protect and promote the interests of the nation, whether through economic protectionism, territorial expansion, or social cohesion. While both ideologies emerged as powerful forces during the 19th century, they had contrasting priorities. Liberalism was centered on individual rights and freedoms, emphasizing the importance of the individual above the nation. Nationalism, on the other hand, prioritized the nation as a whole, seeking to protect and promote its collective interests and identity. However, it is important to note that these ideologies were not mutually exclusive, and many individuals and movements embraced a combination of liberal and nationalist ideas. How did the rise of nationalism in the 19th century challenge liberal principles such as individual rights and limited government? The rise of nationalism in the 19th century posed significant challenges to liberal principles such as individual rights and limited government. Nationalism refers to a strong sense of loyalty and devotion to one’s own nation, often accompanied by the belief that the nation’s interests should outweigh individual or minority rights. One of the ways in which nationalism challenged liberal principles was through its emphasis on collective identity and solidarity over individual rights. Nationalists argued that the nation as a whole should take precedence over individual liberties. This led to the suppression of dissenting voices and the curtailing of individual freedoms in the name of national unity and strength. Furthermore, the rise of nationalism also undermined the concept of limited government. Nationalist movements often sought to centralize political power in order to strengthen the nation and pursue its interests. This resulted in the expansion of state power and intervention in various aspects of society, including economic policies and cultural practices. Additionally, nationalism contributed to the erosion of liberal principles by promoting exclusivity and discrimination. Nationalist ideologies often emphasized a particular ethnic, linguistic, or cultural identity, leading to the exclusion or marginalization of minority groups. This undermined the notion of equal rights for all individuals, regardless of their background or identity. Overall, the rise of nationalism in the 19th century challenged liberal principles such as individual rights and limited government by prioritizing the interests of the nation over those of the individual. It led to the suppression of dissent, expansions of state powers, and the exclusion of minority groups. 
These tensions between nationalism and liberalism continue to shape political discourse and debates to this day. To what extent did the clash between liberalism and nationalism in the 19th century lead to social and political conflicts, such as the revolutions of 1848 and the unification movements in Italy and Germany? The clash between liberalism and nationalism in the 19th century played a significant role in fueling social and political conflicts, ultimately leading to the revolutions of 1848 and the unification movements in Italy and Germany. During this time, liberalism advocated for individual freedoms, constitutionalism, and limited government intervention, while nationalism focused on the promotion of cultural, ethnic, and linguistic identity and the desire for self-determination. These two ideologies often intersected and conflicted with each other, especially in multi-ethnic states like Austria-Hungary. In the aftermath of the Congress of Vienna in 1815, which reestablished conservative monarchies in Europe after the Napoleonic Wars, liberal and nationalistic sentiments began to rise. Various intellectual and political movements, such as the Enlightenment and the American and French Revolutions, had already laid the groundwork for such ideologies. In the mid-19th century, economic and social changes further intensified the clash between liberalism and nationalism. The spread of industrialization and the emergence of a middle class created new demands for political participation and economic opportunities. Liberals sought to protect individual rights and establish representative governments that would guarantee these rights, while nationalists fought for their respective cultural and linguistic groups to have their own independent states or greater autonomy within larger empires. The revolutions of 1848, also known as the Spring of Nations, were sparked by these tensions. Revolutions erupted across Europe, including in France, Germany, Italy, Austria, and the Balkans. Liberal and nationalist forces often joined together to challenge the conservative regimes in power. However, the revolutions largely failed to achieve lasting change due to internal divisions, lack of coordination, and the superior military strength of the ruling powers. In Italy and Germany, the struggle for unification was deeply influenced by liberal and nationalist ideas. Italy was divided into multiple states, and nationalists sought to unify the Italian peninsula into a single nation-state. Figures like Giuseppe Garibaldi and Count Camillo di Cavour played crucial roles in achieving Italian unification under the leadership of the Kingdom of Piedmont-Sardinia in 1861. Similarly, Germany was divided into numerous independent states, and prominent nationalists like Otto von Bismarck and Wilhelm I sought to unify them into one German nation-state. Through a combination of military victories and diplomatic strategies, Bismarck succeeded in creating the German Empire in 1871. Overall, the clash between liberalism and nationalism during the 19th century led to social and political conflicts manifested in the revolutions of 1848 and the subsequent unification movements in Italy and Germany. These conflicts reshaped the political landscape of Europe, paving the way for the formation of unified nation-states based on liberal principles of self-determination and representative government. In conclusion, the 19th century was a time of profound ideological debate, particularly between liberalism and nationalism. 
Both of these ideologies emerged as responses to the changing political, social, and economic landscape of the time. Liberalism, with its emphasis on individual rights, free markets, and limited government intervention, sought to promote equality and freedom for all individuals. On the other hand, nationalism emphasized the importance of a strong national identity, often rooted in language, culture, or history, and supported the idea that each nation should have its own sovereign state. Throughout the 19th century, these ideologies clashed and coexisted in various ways. Liberalism gained traction as it championed the principles of liberty, progress, and human rights. It advocated for representative governments, free trade, and constitutional reforms. However, nationalism also emerged as a powerful force, fueled by growing sentiments of national pride and unity. It played a significant role in shaping the rise of nation-states across Europe and beyond. The tension between liberalism and nationalism became particularly evident in issues related to imperialism and colonialism. While liberal ideas of individual rights clashed with the realities of colonial domination, nationalism fueled independence movements seeking self-determination. Liberal nationalism emerged as a hybrid ideology that attempted to reconcile the ideals of both liberalism and nationalism, promoting the concept of a nation state that respects individual rights and freedoms. As the century unfolded, it became clear that neither liberalism nor nationalism could fully address the complexities of the time. They were not mutually exclusive, but rather interconnected and influenced by each other. The 19th century was marked by a constant struggle to strike a balance between individual liberties and national interests. In the present day, the legacy of this ideological debate still reverberates. The tension between liberalism and nationalism continues to shape politics and societies worldwide. Understanding the historical context and complexities of these ideologies is crucial for navigating the challenges of our own time, as we grapple with questions of identity, global cooperation, and individual rights. In conclusion, the 19th century witnessed a dynamic interplay between liberalism and nationalism, two powerful ideologies that continue to shape our world today. While neither offered a complete solution, they provided frameworks for understanding and responding to the profound changes of the time. Liberalism and nationalism remain fundamental facets of political discourse, reminding us of the enduring relevance of these debates and the importance of balancing individual freedom with collective identity in any society.
Overview: This lesson applies the strategies for solving multiplication equations to equations with a fraction as the coefficient of the variable.
Objectives: Students will learn how to use multiplication by reciprocals to solve multiplication equations in which the coefficient of the variable is a fraction.
California Content Standards:
1.2 Students add, subtract, multiply, and divide rational numbers (integers, fractions, and terminating decimals) and take positive rational numbers to whole-number powers.
2.0 Students calculate and solve problems involving addition, subtraction, multiplication, and division.
3.3 Develop generalizations of the results obtained and the strategies used and apply them to new problem situations.
Warm-Up Activity: Ask students what the word reciprocal means. (If the product of two numbers is 1, then each factor is the reciprocal of the other.) Ask students to identify the reciprocals of the following numbers: 1/4, 3/5, 7/8, and 1/n. Encourage students to describe what they know about the product of these numbers and their reciprocals (the product equals 1).
Direct instruction for all students: Write the first example, 1/4 h = 6, on the overhead projector. Ask students how they can change the coefficient of h from 1/4 to 1. Lead students to conclude that multiplying the left side of the equation by 4 will give h a coefficient of 1. Remind students that to keep the equation true, they will need to multiply the right side of the equation by 4/1 as well. Solve the example, showing each of the steps indicated:
1/4 h = 6
(4/1)(1/4) h = (6/1)(4/1)
(4/4) h = 24/1
h = 24
Solve the second example, showing each of the steps indicated:
-2/5 x = -20
(-5/2)(-2/5) x = (-20)(-5/2)
(10/10) x = 100/2
x = 50
Working in small groups: Divide the class into small groups. Students will work in pairs, helping each other, developing strategies, and sharing answers (in each pair, one student should have strong math skills).
Group Problem Solving: Have small groups of students consider the following situation. Havermill School needs to increase its students' reading and math scores. Currently 1/7 of the school, 30 students, score above average in reading and math. By the end of the year, the school would like 1/5 of its students to score above average. If the school meets its goal, how many students will score above average?
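For teachers who want a quick way to verify answers, or to extend the lesson into a computer-lab activity, the reciprocal strategy can be mirrored exactly with Python's standard fractions module. This is only an illustrative sketch; the helper name solve_fraction_equation is ours, and the numbers are the lesson's own examples.

```python
from fractions import Fraction

def solve_fraction_equation(coefficient: Fraction, rhs: Fraction) -> Fraction:
    """Solve coefficient * x = rhs by multiplying both sides by the reciprocal."""
    reciprocal = 1 / coefficient          # e.g. the reciprocal of 1/4 is 4/1
    return rhs * reciprocal               # exact rational arithmetic, no rounding

# Example 1 from the lesson: (1/4)h = 6  ->  h = 24
print(solve_fraction_equation(Fraction(1, 4), Fraction(6)))      # 24

# Example 2 from the lesson: (-2/5)x = -20  ->  x = 50
print(solve_fraction_equation(Fraction(-2, 5), Fraction(-20)))   # 50

# Group problem: if 1/7 of the school is 30 students, the school has 210 students,
# and 1/5 of 210 students is 42 students scoring above average.
school_size = solve_fraction_equation(Fraction(1, 7), Fraction(30))
print(school_size, Fraction(1, 5) * school_size)                 # 210 42
```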
Young's modulus is a mechanical property that measures the stiffness of a solid material. It defines the relationship between stress (force per unit area) and strain (proportional deformation) in a material in the linear elasticity regime of a uniaxial deformation. A given uniaxial stress, whether tensile (extension) or compressive (compression), creates more deformation in a material with low stiffness than in a material with high stiffness; Young's modulus is a measure of that stiffness. In SI base units it is expressed in pascals (Pa = kg·m⁻¹·s⁻²), with dimension M·L⁻¹·T⁻². Young's modulus is named after the 19th-century British scientist Thomas Young. However, the concept was developed in 1727 by Leonhard Euler, and the first experiments that used the concept of Young's modulus in its current form were performed by the Italian scientist Giordano Riccati in 1782, pre-dating Young's work by 25 years. The term modulus is the diminutive of the Latin term modus, which means measure. A solid material will undergo elastic deformation when a small load is applied to it in compression or extension. Elastic deformation is reversible (the material returns to its original shape after the load is removed). At near-zero stress and strain, the stress–strain curve is linear, and the relationship between stress and strain is described by Hooke's law, which states that stress is proportional to strain. The coefficient of proportionality is Young's modulus. The higher the modulus, the more stress is needed to create the same amount of strain; an idealized rigid body would have an infinite Young's modulus. Not many materials are linear and elastic beyond a small amount of deformation.
Formula and units: Young's modulus is defined as E = σ / ε, where E is Young's modulus, in pascals; σ is the uniaxial stress, or uniaxial force per unit surface, in pascals; and ε is the strain, or proportional deformation (change in length divided by original length), which is dimensionless. In practice, Young's moduli are given in megapascals (MPa or N/mm²) or gigapascals (GPa or kN/mm²).
Material stiffness should not be confused with: - Strength: the maximal amount of stress the material can withstand while staying in the elastic (reversible) deformation regime; - Stiffness: a global characteristic of the body that depends on its shape, and not only on the local properties of the material; for instance, an I-beam has a higher bending stiffness than a rod of the same material for a given mass per length; - Hardness: relative resistance of the material's surface to penetration by a harder body; - Toughness: the amount of energy that a material can absorb before fracture.
The Young's modulus enables the calculation of the change in the dimension of a bar made of an isotropic elastic material under tensile or compressive loads. For instance, it predicts how much a material sample extends under tension or shortens under compression. The Young's modulus directly applies to cases of uniaxial stress, that is, tensile or compressive stress in one direction and no stress in the other directions. Young's modulus is also used in order to predict the deflection that will occur in a statically determinate beam when a load is applied at a point in between the beam's supports. Other elastic calculations usually require the use of one additional elastic property, such as the shear modulus, bulk modulus or Poisson's ratio. Any two of these parameters are sufficient to fully describe elasticity in an isotropic material. 
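As a rough illustration of the claim that any two elastic parameters determine the rest for an isotropic material, the sketch below computes the shear and bulk moduli from Young's modulus and Poisson's ratio using the standard isotropic relations G = E / (2(1 + ν)) and K = E / (3(1 − 2ν)). The function names and the steel-like input values are illustrative assumptions, not reference data.

```python
def shear_modulus(E: float, nu: float) -> float:
    """Shear modulus G of an isotropic material: G = E / (2 * (1 + nu))."""
    return E / (2.0 * (1.0 + nu))

def bulk_modulus(E: float, nu: float) -> float:
    """Bulk modulus K of an isotropic material: K = E / (3 * (1 - 2 * nu))."""
    return E / (3.0 * (1.0 - 2.0 * nu))

# Illustrative values only (roughly steel-like): E = 200 GPa, nu = 0.30.
E, nu = 200e9, 0.30
print(f"G = {shear_modulus(E, nu) / 1e9:.1f} GPa")   # ~76.9 GPa
print(f"K = {bulk_modulus(E, nu) / 1e9:.1f} GPa")    # ~166.7 GPa
```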
Linear vs non-linear strainEdit Young's modulus represents the factor of proportionality in Hooke's law, which relates the stress and the strain. However, Hooke's law is only valid under the assumption of an elastic and linear response. Any real material will eventually fail and break when stretched over a very large distance or with a very large force; however all solid materials exhibit nearly Hookean behavior for small enough strains or stresses. If the range over which Hooke's law is valid is large enough compared to the typical stress that one expects to apply to the material, the material is said to be linear. Otherwise (if the typical stress one would apply is outside the linear range) the material is said to be non-linear. Steel, carbon fiber and glass among others are usually considered linear materials, while other materials such as rubber and soils are non-linear. However, this is not an absolute classification: if very small stresses or strains are applied to a non-linear material, the response will be linear, but if very high stress or strain is applied to a linear material, the linear theory will not be enough. For example, as the linear theory implies reversibility, it would be absurd to use the linear theory to describe the failure of a steel bridge under a high load; although steel is a linear material for most applications, it is not in such a case of catastrophic failure. In solid mechanics, the slope of the stress–strain curve at any point is called the tangent modulus. It can be experimentally determined from the slope of a stress–strain curve created during tensile tests conducted on a sample of the material. Young's modulus is not always the same in all orientations of a material. Most metals and ceramics, along with many other materials, are isotropic, and their mechanical properties are the same in all orientations. However, metals and ceramics can be treated with certain impurities, and metals can be mechanically worked to make their grain structures directional. These materials then become anisotropic, and Young's modulus will change depending on the direction of the force vector. Anisotropy can be seen in many composites as well. For example, carbon fiber has a much higher Young's modulus (is much stiffer) when force is loaded parallel to the fibers (along the grain). Other such materials include wood and reinforced concrete. Engineers can use this directional phenomenon to their advantage in creating structures. - E is the Young's modulus (modulus of elasticity) - F is the force exerted on an object under tension; - A is the actual cross-sectional area, which equals the area of the cross-section perpendicular to the applied force; - ΔL is the amount by which the length of the object changes (ΔL is positive if the material is stretched , and negative when the material is compressed); - L0 is the original length of the object. Force exerted by stretched or contracted materialEdit The Young's modulus of a material can be used to calculate the force it exerts under specific strain. where F is the force exerted by the material when contracted or stretched by ΔL. Hooke's law for a stretched wire can be derived from this formula: where it comes in saturation But note that the elasticity of coiled springs comes from shear modulus, not Young's modulus. 
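The bulleted definitions above (E, F, A, ΔL, L0) belong to the engineering form of Hooke's law for a uniform bar, E = (F/A)/(ΔL/L0), whose symbols were lost in extraction. A minimal sketch of how that relation is used to predict extension under load, assuming Python; the rod dimensions and load are illustrative numbers, not taken from the article.

```python
def extension_under_load(force_n, area_m2, length_m, youngs_modulus_pa):
    """Change in length of a bar under uniaxial load, from
    E = (F/A) / (ΔL/L0)  =>  ΔL = F * L0 / (A * E)."""
    return force_n * length_m / (area_m2 * youngs_modulus_pa)

# Illustrative numbers: a 2 m steel rod with a 1 cm^2 cross-section,
# loaded with 10 kN, taking E ≈ 200 GPa for steel.
delta_l = extension_under_load(force_n=10e3, area_m2=1e-4,
                               length_m=2.0, youngs_modulus_pa=200e9)
print(f"Extension: {delta_l * 1000:.2f} mm")   # about 1 mm
```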
Elastic potential energyEdit The elastic potential energy stored in a linear elastic material is given by the integral of the Hooke's law: now by expliciting the intensive variables: This means that the elastic potential energy density (i.e., per unit volume) is given by: or, in simple notation, for a linear elastic material: , since the strain is defined . In a nonlinear elastic material the Young's modulus is a function of the strain, so the second equivalence no longer holds and the elastic energy is not a quadratic function of the strain: Relation among elastic constantsEdit For homogeneous isotropic materials simple relations exist between elastic constants (Young's modulus E, shear modulus G, bulk modulus K, and Poisson's ratio ν) that allow calculating them all as long as two are known: Young's modulus can vary somewhat due to differences in sample composition and test method. The rate of deformation has the greatest impact on the data collected, especially in polymers. The values here are approximate and only meant for relative comparison. |Rubber (small strain)||0.01–0.1||1.45–×10−3 14.5| |Low-density polyethylene||0.11–0.86||1.6–×10−2 6.5| |Diatom frustules (largely silicic acid)||0.35–2.77||0.05–0.4| |Polyethylene terephthalate (PET)||2–2.7||0.29–0.39| |Medium-density fiberboard (MDF)||4||0.58| |Wood (along grain)||11||1.60| |Human Cortical Bone||14||2.03| |Glass-reinforced polyester matrix||17.2||2.49| |Aromatic peptide nanotubes||19–27||2.76–3.92| |Amino-acid molecular crystals||21–44||3.04–6.38| |Carbon fiber reinforced plastic (50/50 fibre/matrix, biaxial fabric)||30–50||4.35–7.25| |Magnesium metal (Mg)||45||6.53| |Glass (see chart)[specify]||50–90||7.25–13.1| |Mother-of-pearl (nacre, largely calcium carbonate)||70||10.2| |Tooth enamel (largely calcium phosphate)||83||12| |Stinging nettle fiber||87||12.6| |Carbon fiber reinforced plastic (70/30 fibre/matrix, unidirectional, along fibre)||181||26.3| |Silicon Single crystal, different directions||130–185||18.9–26.8| |polycrystalline Yttrium iron garnet (YIG)||193||28| |single-crystal Yttrium iron garnet (YIG)||200||29| |Aromatic peptide nanospheres||230–275||33.4–40| |Silicon carbide (SiC)||450||65| |Tungsten carbide (WC)||450–650||65–94| |Single-walled carbon nanotube||1,000+||150+| - The Rational mechanics of Flexible or Elastic Bodies, 1638–1788: Introduction to Leonhardi Euleri Opera Omnia, vol. X and XI, Seriei Secundae. Orell Fussli. - IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "modulus of elasticity (Young's modulus), E". - "Elastic Properties and Young Modulus for some Materials". The Engineering ToolBox. Retrieved 2012-01-06. - "Overview of materials for Low Density Polyethylene (LDPE), Molded". Matweb. Archived from the original on January 1, 2011. Retrieved February 7, 2013. - Subhash G, Yao S, Bellinger B, Gretz MR (2005). "Investigation of mechanical properties of diatom frustules using nanoindentation". J Nanosci Nanotechnol. 5 (1): 50–6. doi:10.1166/jnn.2005.006. PMID 15762160. - Ivanovska IL, de Pablo PJ, Sgalari G, MacKintosh FC, Carrascosa JL, Schmidt CF, Wuite GJ (2004). "Bacteriophage capsids: Tough nanoshells with complex elastic properties". Proc Natl Acad Sci USA. 101 (20): 7600–5. Bibcode:2004PNAS..101.7600I. doi:10.1073/pnas.0308198101. PMC . PMID 15133147. - "Styrodur Technical Data" (PDF). BASF. Retrieved 2016-03-15. - "Medium Density Fiberboard (MDF) Material Properties :: MakeItFrom.com". Retrieved February 4, 2016. 
- Rho, JY (1993). "Young's modulus of trabecular and cortical bone material: ultrasonic and microtensile measurements". Journal of Biomechanics. 26 (2): 111–119. doi:10.1016/0021-9290(93)90042-d. PMID 8429054. - "Polyester Matrix Composite reinforced by glass fibers (Fiberglass)". [SubsTech] (2008-05-17). Retrieved on 2011-03-30. - Kol, N.; et al. (June 8, 2005). "Self-Assembled Peptide Nanotubes Are Uniquely Rigid Bioinspired Supramolecular Structures". Nano Letters. 5 (7): 1343–1346. Bibcode:2005NanoL...5.1343K. doi:10.1021/nl0505896. - Niu, L.; et al. (June 6, 2007). "Using the Bending Beam Model to Estimate the Elasticity of Diphenylalanine Nanotubes". Langmuir. 23 (14): 7443–7446. doi:10.1021/la7010106. - Azuri, I.; et al. (November 9, 2015). "Unusually Large Young's Moduli of Amino Acid Molecular Crystals". Angew. Chem. Int. Ed. 54 (46): 13566–13570. doi:10.1002/anie.201505813. - "Composites Design and Manufacture (BEng) – MATS 324". - Nabi Saheb, D.; Jog, JP. (1999). "Natural fibre polymer composites: a review". Advances in Polymer Technology. 18 (4): 351–363. doi:10.1002/(SICI)1098-2329(199924)18:4<351::AID-ADV6>3.0.CO;2-X. - Bodros, E. (2002). "Analysis of the flax fibres tensile behaviour and analysis of the tensile stiffness increase". Composite Part A. 33 (7): 939–948. doi:10.1016/S1359-835X(02)00040-4. - A. P. Jackson,J. F. V. Vincent and R. M. Turner (1988). "The Mechanical Design of Nacre". Proceedings of the Royal Society B. 234 (1277): 415–440. Bibcode:1988RSPSB.234..415J. doi:10.1098/rspb.1988.0056. - DuPont (2001). "Kevlar Technical Guide": 9. - M. Staines, W. H. Robinson and J. A. A. Hood (1981). "Spherical indentation of tooth enamel". Journal of Materials Science. 16 (9): 2551–2556. Bibcode:1981JMatS..16.2551S. doi:10.1007/bf01113595. - Bodros, E.; Baley, C. (15 May 2008). "Study of the tensile properties of stinging nettle fibres (Urtica dioica)". Materials Letters. 62 (14): 2143–2145. doi:10.1016/j.matlet.2007.11.034. - Epoxy Matrix Composite reinforced by 70% carbon fibers [SubsTech]. Substech.com (2006-11-06). Retrieved on 2011-03-30. - "Physical properties of Silicon (Si)". Ioffe Institute Database. Retrieved on 2011-05-27. - E.J. Boyd; et al. (February 2012). "Measurement of the Anisotropy of Young's Modulus in Single-Crystal Silicon". Journal of Microelectromechanical Systems. 21 (1): 243–249. doi:10.1109/JMEMS.2011.2174415. - Chou, H. M.; Case, E. D. (November 1988). "Characterization of some mechanical properties of polycrystalline yttrium iron garnet (YIG) by non-destructive methods". Journal of Materials Science Letters. 7 (11): 1217–1220. doi:10.1007/BF00722341. - YIG properties - "Properties of cobalt-chrome alloys – Heraeus Kulzer cara". Archived from the original on 1 July 2015. Retrieved February 4, 2016. - Adler-Abramovich, L.; et al. (December 17, 2010). "Self-Assembled Organic Nanostructures with Metallic-Like Stiffness". Angewandte Chemie International Edition. 49 (51): 9939–9942. doi:10.1002/anie.201002037. PMID 20878815. - Foley, James C.; et al. (2010). "An Overview of Current Research and Industrial Practices of Be Powder Metallurgy". In Marquis, Fernand D.S. Powder Materials: Current Research and Industrial Practices III. Hoboken, NJ, USA: John Wiley & Sons, Inc. p. 263. doi:10.1002/9781118984239.ch32. - "Molybdenum: physical properties". webelements. Retrieved January 27, 2015. - "Molybdenum, Mo" (PDF). Glemco. Retrieved January 27, 2014. - D.K.Pandey; Singh, D.; Yadawa, P. K.; et al. (2009). "Ultrasonic Study of Osmium and Ruthenium" (PDF). 
Platinum Metals Rev. 53 (4): 91–97. doi:10.1595/147106709X430927. Retrieved November 4, 2014. - L. Forro; et al. "Electronic and mechanical properties of carbon nanotubes" (PDF). - Y. H. Yang; Li, W. Z.; et al. (2011). "Radial elasticity of single-walled carbon nanotube measured by atomic force microscopy". Applied Physics Letters. 98 (4): 041901. Bibcode:2011ApPhL..98d1901Y. doi:10.1063/1.3546170. - Fang Liu; Pingbing Ming & Ju Li. "Ab initio calculation of ideal strength and phonon instability of graphene under tension" (PDF). - Spear and Dismukes (1994). Synthetic Diamond – Emerging CVD Science and Technology. Wiley, N.Y. p. 315. ISBN 978-0-471-53589-8. - Owano, Nancy (Aug 20, 2013). "Carbyne is stronger than any known material". phys.org. - Liu, Mingjie; Artyukhov, Vasilii I; Lee, Hoonkyung; Xu, Fangbo; Yakobson, Boris I (2013). "Carbyne From First Principles: Chain of C Atoms, a Nanorod or a Nanorope?". ACS Nano. 7 (11): 10075–10082. arXiv: . doi:10.1021/nn404177r. - ASTM E 111, "Standard Test Method for Young's Modulus, Tangent Modulus, and Chord Modulus" - The ASM Handbook (various volumes) contains Young's Modulus for various materials and information on calculations. Online version (subscription required) - Matweb: free database of engineering properties for over 115,000 materials - Young's Modulus for groups of materials, and their cost |Homogeneous isotropic linear elastic materials have their elastic properties uniquely determined by any two moduli among these; thus, given any two, any other of the elastic moduli can be calculated according to these formulas.| There are two valid solutions. |Cannot be used when|
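The table note above states that, for a homogeneous isotropic linear elastic material, any two elastic moduli determine the rest. As a small illustration, here is a sketch of two of the standard conversion formulas, G = E/(2(1+ν)) and K = E/(3(1−2ν)), assuming Python; the steel-like input values are illustrative, not taken from the article.

```python
def shear_and_bulk_from_E_nu(E, nu):
    """Standard isotropic relations: given Young's modulus E and Poisson's
    ratio nu, return the shear modulus G and bulk modulus K.
    Cannot be used when nu = -1 or nu = 0.5 (the denominators vanish)."""
    G = E / (2.0 * (1.0 + nu))
    K = E / (3.0 * (1.0 - 2.0 * nu))
    return G, K

# Roughly steel-like values: E = 200 GPa, nu = 0.3
G, K = shear_and_bulk_from_E_nu(200e9, 0.3)
print(f"G ≈ {G/1e9:.0f} GPa, K ≈ {K/1e9:.0f} GPa")  # about 77 GPa and 167 GPa
```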
|Part of a series on| |History and topics| Genetic distance is a measure of the genetic divergence between species or between populations within a species. Populations with many similar alleles have small genetic distances. This indicates that they are closely related and have a recent common ancestor. Genetic distance is useful for reconstructing the history of populations. For example, evidence from genetic distance suggests that African and Eurasian people diverged about 100,000 years ago. Genetic distance is also used for understanding the origin of biodiversity. For example, the genetic distances between different breeds of domesticated animals are often investigated in order to determine which breeds should be protected to maintain genetic diversity. - 1 Biological foundation - 2 Measures of genetic distance - 2.1 Nei's standard genetic distance - 2.2 Cavalli-Sforza chord distance - 2.3 Reynolds, Weir, and Cockerham's genetic distance - 2.4 Other measures of genetic distance - 3 Software - 4 See also - 5 References - 6 External links In the genome of an organism, each gene is located at a specific place called the locus for that gene. Allelic variations at these loci cause phenotypic variation within species (e.g. hair colour, eye colour). However, most alleles do not have an observable impact on the phenotype. Within a population new alleles generated by mutation either die out or spread throughout the population. When a population is split into different isolated populations (by either geographical or ecological factors), mutations that occur after the split will be present only in the isolated population. Random fluctuation of allele frequencies also produces genetic differentiation between populations. This process is known as genetic drift. By examining the differences between allele frequencies between the populations and computing genetic distance, we can estimate how long ago the two populations were separated. Measures of genetic distance Although it is simple to define genetic distance as a measure of genetic divergence, there are several different statistical measures that have been proposed. This has happened because different authors considered different evolutionary models. The most commonly used are Nei's genetic distance, Cavalli-Sforza and Edwards measure, and Reynolds, Weir and Cockerham's genetic distance, listed below. In all the formulae in this section, and represent two different populations for which loci have been studied. Let represent the th allele at the th locus. Nei's standard genetic distance In 1972, Masatoshi Nei published what came to be known as Nei's standard genetic distance. This distance has the nice property that if the rate of genetic change (amino acid substitution) is constant per year or generation then Nei's standard genetic distance (D) increases in proportion to divergence time. This measure assumes that genetic differences are caused by mutation and genetic drift. This distance can also be expressed in terms of the arithmetic mean of gene identity. Let be the probability for the two members of population having the same allele at a particular locus and be the corresponding probability in population . Also, let be the probability for a member of and a member of having the same allele. Now let , and represent the arithmetic mean of , and over all loci, respectively. In other words, where is the total number of loci examined. 
Nei's standard distance can then be written as Cavalli-Sforza chord distance In 1967 Luigi Luca Cavalli-Sforza and A. W. F. Edwards published this measure. It assumes that genetic differences arise due to genetic drift only. One major advantage of this measure is that the populations are represented in a hypersphere, the scale of which is one unit per gene substitution. The chord distance in the hyperdimensional sphere is given by Some authors drop the factor to simplify the formula at the cost of losing the property that the scale is one unit per gene substitution. Reynolds, Weir, and Cockerham's genetic distance In 1983, this measure was published by John Reynolds, B.S. Weir and C. Clark Cockerham. This measure assumes that genetic differentiation occurs only by genetic drift without mutations. It estimates the coancestry coefficient which provides a measure of the genetic divergence by: Other measures of genetic distance Many other measures of genetic distance have been proposed with varying success. Nei's DA distance 1983 This distance assumes that genetic differences arise due to mutation and genetic drift, but this distance measure is known to give more reliable population trees than other distances particularly for microsatellite DNA data. Goldstein distance 1995 Nei's minimum genetic distance 1973 Roger's distance 1972 A commonly used measure of genetic distance is the fixation index which varies between 0 and 1. A value of 0 indicates that two populations are genetically identical (minimal or no genetic diversity between the two populations) whereas a value of 1 indicates that two populations are genetically different (maximum genetic diversity between the two populations). No mutation is assumed. Large populations between which there is much migration, for example, tend to be little differentiated whereas small populations between which there is little migration tend to be greatly differentiated. Fst is a convenient measure of this differentiation, and as a result Fst and related statistics are among the most widely used descriptive statistics in population and evolutionary genetics. But Fst is more than a descriptive statistic and measure of genetic differentiation. Fst is directly related to the Variance in allele frequency among populations and conversely to the degree of resemblance among individuals within populations. If Fst is small, it means that allele frequencies within each population are very similar; if it is large, it means that allele frequencies are very different. - PHYLIP uses GENDIST - Nei's standard genetic distance 1972 - Cavalli-Sforza and Edwards 1967 - Reynolds, Weir, and Cockerham's 1983 - Nei's standard genetic distance (original and unbiased) - Nei's minimum genetic distance (original and unbiased) - Wright's (1978) modification of Roger's (1972) distance - Reynolds, Weir, and Cockerham's 1983 - POPTREE2 Takezaki, Nei, and Tamura (2010, 2014) - Commonly used genetic distances and gene diversity analysis - Nei's standard genetic distance 1972 - Nei's DA distance between populations 1983 - Coefficient of relationship - Degree of consanguinity - Human genetic variation - Human genetic clustering - Allele frequency - Nei, M. (1987). Molecular Evolutionary Genetics. (Chapter 9). New York: Columbia University Press. - Nei, M.; A. K. Roychoudhury (1974). "Genic variation within and between the three major races of man, Caucasoids, Negroids, and Mongoloids". The American Journal of Human Genetics. 26: 421–443. PMC . PMID 4841634. - Ruane, J. (1999). 
A critical review of the value of genetic distance studies in conservation of animal genetic resources. Journal of Animal Breeding and Genetics, 116(5), 317-323. Chicago. - Nei, M. (1972). "Genetic distance between populations". Am. Nat. 106: 283–292. doi:10.1086/282771. - L.L. Cavalli-Sforza; A.W.F. Edwards (1967). "Phylogenetic Analysis -Models and Estimation Procedures". The American Journal of Human Genetics. 19 (3 Part I (May)). - John Reynolds; B.S. Weir; C. Clark Cockerham (November 1983). "Estimation of the coancestry coefficient: Basis for a short-term genetic distance". Genetics. 105: 767–779. - Nei, M. (1987) Genetic distance and molecular phylogeny. In: Population Genetics and Fishery Management (N. Ryman and F. Utter, eds.), University of Washington Press, Seattle, WA, pp. 193-223. - Nei, M., F. Tajima, & Y. Tateno (1983) Accuracy of estimated phylogenetic trees from molecular data. II. Gene frequency data. J. Mol. Evol. 19:153-170. - Takezaki, N. and Nei, M. (1996) Genetic distances and reconstruction of phylogenetic trees from microsatellite DNA. Genetics 144:389-399. - Gillian Cooper; William Amos; Richard Bellamy; Mahveen Ruby Siddiqui; Angela Frodsham; Adrian V. S. Hill; David C. Rubinsztein (1999). "An Empirical Exploration of the Genetic Distance for 213 Human Microsatellite Markers". The American Journal of Human Genetics. 65: 1125–1133. doi:10.1086/302574. - Rogers, J. S. (1972). Measures of similarity and genetic distance. In Studies in Genetics VII. pp. 145−153. University of Texas Publication 7213. Austin, Texas. - The Estimation of Genetic Distance and Population Substructure from Microsatellite allele frequency data., Brent W. Murray (May 1996), McMaster University website on genetic distance - Computing distance by stepwise genetic distance model, web pages of Bruce Walsh at the Department of Ecology and Evolutionary Biology at the University of Arizona
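The symbols in the genetic-distance formulas above were lost in extraction. As a hedged reconstruction of the standard definition the text describes, Nei's (1972) standard distance is D = −ln(J_XY / √(J_X·J_Y)), where J_X, J_Y, and J_XY are the arithmetic means over loci of the within- and between-population gene identities. A minimal sketch, assuming Python; the allele frequencies are made up purely for illustration.

```python
import math

def nei_standard_distance(pop_x, pop_y):
    """Nei's (1972) standard genetic distance from allele-frequency tables.
    pop_x and pop_y are lists of loci; each locus is a list of allele
    frequencies, with alleles listed in the same order in both populations.
    J_X, J_Y, J_XY are mean gene identities over loci, and
    D = -ln(J_XY / sqrt(J_X * J_Y))."""
    jx = sum(sum(p * p for p in locus) for locus in pop_x) / len(pop_x)
    jy = sum(sum(q * q for q in locus) for locus in pop_y) / len(pop_y)
    jxy = sum(sum(p * q for p, q in zip(lx, ly))
              for lx, ly in zip(pop_x, pop_y)) / len(pop_x)
    return -math.log(jxy / math.sqrt(jx * jy))

# Two loci with two alleles each; illustrative frequencies only.
pop_x = [[0.7, 0.3], [0.5, 0.5]]
pop_y = [[0.4, 0.6], [0.2, 0.8]]
print(round(nei_standard_distance(pop_x, pop_y), 4))
```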
This lesson offers a concise but thorough explanation of central angles, arcs, and sectors of a circle. An angle is a central angle if it meets the following two conditions: 1) The vertex of the angle is located at the center of a circle. 2) The rays that make up its sides are radii of the circle. Below, find an illustration of the definition above: You can name it angle ABC. It is important to notice that such an angle is always less than 180 degrees, so it can only be acute, right, or obtuse. An arc is a portion of the circumference of the circle. This is illustrated below in red: A sector is the area enclosed within a central angle and an arc. Again, this is illustrated below, but in green: As you can see, the area in green is included between the arc in red and the angle. When computing the area of a sector, use the following ratio to find out what part of the circle's area is covered by the sector: area of sector = (central angle ÷ 360 degrees) × area of the circle. Example: A circle has a radius of 10 centimeters. This radius and the center of the circle are used to make an angle of 45 degrees. Find the area of the resulting sector. Divide 45 degrees by 360 degrees to determine the fraction of the circle covered by this sector: 45/360 = 1/8. The area of the circle is A = pi × r² = 3.14 × 10² = 3.14 × 100 = 314 square centimeters. Now just multiply 314 by 1/8: 314 × 1/8 = 314/8 = 39.25. The area of the sector is 39.25 square centimeters. If you have any questions about this lesson, do not hesitate to contact me. Now try to do this problem on your own: A circle has a radius of 5 centimeters. This radius and the center of the circle are used to make an angle of 60 degrees. Find the area of the resulting sector.
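A short check of the sector-area recipe in Python (the lesson uses π ≈ 3.14, so it reports 39.25 rather than the 39.27 printed here; the function name is my own).

```python
import math

def sector_area(radius, central_angle_degrees):
    """Area of a circular sector: the fraction of the circle covered by the
    central angle (angle / 360) times the full circle area, pi * r**2."""
    return (central_angle_degrees / 360.0) * math.pi * radius ** 2

print(sector_area(10, 45))  # the worked example: about 39.27 square cm
print(sector_area(5, 60))   # the practice problem: about 13.09 square cm
```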
The electronic devices that you use every day rely on computer code to work. From a cellphone to a washing machine, computer code is used to give instructions to a device. As computers can only understand binary instructions, using just 0s and 1s, computer languages are used to write the instructions and convert them into binary. Teach your children about coding using our handy topic guide! - An algorithm is a set of instructions to perform a task. Before they start working on a computer or device, ask children to write a set of instructions to perform a simple, precise task. For example, write a set of instructions to navigate a simple maze. Ask someone else to perform the instructions exactly as written – can they complete the task successfully? - Use the Code For Life Rapid Router Online resources to guide children through building a program in small steps. - Try one of these “unplugged” coding activities to help your children understand how to construct an algorithm. - Try making a game in Scratch, based on your current book or topic. - Ask your class to decompose a game, analysing it to find out how it works and what objects it uses. These resources from code-it contain everything you need to try this approach with a game called Magic Carpet. - Use some of the ideas in this great post from Doug Stitcher when you are planning your coding unit. - Our Coding and Programming banners are perfect for your display board. - Download these Hour of Code resources, including colourful posters and complete lesson plans for teaching coding. - The Tynker website has plenty of coding projects, from beginner to advanced, for children to work through at their own pace. - You will find lesson plans and resources for teaching coding on the micro:bit site. - Ada Lovelace (1815-1852) is considered the first computer programmer; in 1842 she wrote an algorithm (set of instructions) to use an early computer to calculate numbers. The computer (Babbage’s Analytical Engine) was never finished. - The term “bug” for a problem in a program comes from a real bug – a moth which got trapped in a computer at Harvard University in 1947! - A program that uses malicious code to copy itself and infect computers, is known as a virus. - There are hundreds of different computer languages. Lots of them are very similar, so once you know one it is easier to learn others. - The oldest computer language still in use, FORTRAN, was released by IBM in 1957. Blast off Coding in Scratch A lesson on using variables in gaming for children aged 9-11, including a coding challenge Running time: 24.55 What is Computer Coding? This short video (the first in a series) is a good introduction to coding and how computers and other devices use programs. Running time: 1:55 What is an Algorithm? A simple explanation of algorithms for younger children. Running time: 1:07 Program a Robotic Car A primary class show how they programmed a car to drive to a location. Running Time: Running time: 5:31 Are you teaching your children about other topics? Explore our full collection of guides!
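As a bridge from the "unplugged" algorithm activity to text-based code, here is a tiny, hypothetical example in Python: the same kind of step-by-step instruction list a child might write for the maze task, executed by a short program. The move names, coordinates, and starting point are invented for illustration.

```python
# Moves a child might write as an algorithm, mapped to changes in position.
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def run_algorithm(start, instructions):
    """Follow a list of instructions step by step and return the final position."""
    x, y = start
    for step in instructions:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
    return x, y

# The algorithm: three precise steps, performed in order.
print(run_algorithm((0, 0), ["right", "right", "down"]))   # (2, 1)
```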
ELEMENTS OF EUCLID ||ELEMENTS OF EUCLID euclid--s elements.pdf (Size: 1.8 MB / Downloads: 18) Geometry is the Science of figured Space. Figured Space is of one, two, or three dimensions, according as it consists of lines, surfaces, or solids. The boundaries of solids are surfaces; of surfaces, lines; and of lines, points. Thus it is the province of Geometry to investigate the properties of solids, of surfaces, and of the figures described on surfaces. The simplest of all surfaces is the plane, and that department of Geometry which is occupied with the lines and curves drawn on a plane is called Plane Geometry; that which demonstrates the properties of solids, of curved surfaces, and the figures described on curved surfaces, is Geometry of Three Dimensions. The simplest lines that can be drawn on a plane are the right line and circle, and the study of the properties of the point, the right line, and the circle, is the introduction to Geometry, of which it forms an extensive and important department. This is the part of Geometry on which the oldest Mathematical Book in existence, namely, Euclid’s Elements, is written, and is the subject of the present volume. The conic sections and other curves that can be described on a plane form special branches, and complete the divisions of this, the most comprehensive of all the Sciences. The student will find in Chasles’ Aper¸cu Historique a valuable history of the origin and the development of the methods of Geometry. THEORY OF ANGLES, TRIANGLES, PARALLEL LINES, AND A point is that which has position but not dimensions. A geometrical magnitude which has three dimensions, that is, length, breadth, and thickness, is a solid; that which has two dimensions, such as length and breadth, is a surface; and that which has but one dimension is a line. But a point is neither a solid, nor a surface, nor a line; hence it has no dimensions—that is, it has neither length, breadth, nor thickness. A line is length without breadth. A line is space of one dimension. If it had any breadth, no matter how small, it would be space of two dimensions; and if in addition it had any thickness it would be space of three dimensions; hence a line has neither breadth nor thickness. iii. The intersections of lines and their extremities are points. iv. A line which lies evenly between its extreme points is called a straight or right line, such as AB. If a point move without changing its direction it will describe a right line. The direction in which a point moves in called its “sense.” If the moving point continually changes its direction it will describe a curve; hence it follows that only one right line can be drawn between two points. The following Illustration is due to Professor Henrici:—“If we suspend a weight by a string, the string becomes stretched, and we say it is straight, by which we mean to express that it has assumed a peculiar definite shape. If we mentally abstract from this string all thickness, we obtain the notion of the simplest of all lines, which we call a straight line.” “Elements of human reason,” according to Dugald Stewart, are certain general propositions, the truths of which are self-evident, and which are so fundamental, that they cannot be inferred from any propositions which are more elementary; in other words, they are incapable of demonstration. 
“That two sides of a triangle are greater than the third” is, perhaps, self-evident; but it is not an axiom, inasmuch as it can be inferred by demonstration from other propositions; but we can give no proof of the proposition that “things which are equal to the same are equal to one another,” and, being self-evident, it is an
12.2 Potential energy by Benjamin Crowell, licensed under the . We have already seen many examples of energy related to the distance between interacting objects. When two objects participate in an attractive noncontact force, energy is required to bring them farther apart. In both of the perpetual motion machines that started off the previous chapter, one of the types of energy involved was the energy associated with the distance between the balls and the earth, which attract each other gravitationally. In the perpetual motion machine with the magnet on the pedestal, there was also energy associated with the distance between the magnet and the iron ball, which were attracting each other. The opposite happens with repulsive forces: two socks with the same type of static electric charge will repel each other, and cannot be pushed closer together without supplying energy. In general, the term potential energy, with algebra symbol PE, is used for the energy associated with the distance between two objects that attract or repel each other via a force that depends on the distance between them. Forces that are not determined by distance do not have potential energy associated with them. For instance, the normal force acts only between objects that have zero distance between them, and depends on other factors besides the fact that the distance is zero. There is no potential energy associated with the normal force. Commonplace examples of potential energy include the ones already discussed: the gravitational energy of an object and the earth, and the magnetic energy of the magnet and the iron ball. I have deliberately avoided introducing the term potential energy up until this point, because it tends to produce unfortunate connotations in the minds of students who have not yet been inoculated with a careful description of the construction of a numerical energy scale. Specifically, there is a tendency to generalize the term inappropriately to apply to any situation where there is the "potential" for something to happen: "I took a break from digging, but I had potential energy because I knew I'd be ready to work hard again in a few minutes." All the vital points about potential energy can be made by focusing on the example of gravitational potential energy. For simplicity, we treat only vertical motion, and motion close to the surface of the earth, where the gravitational force is nearly constant. (The generalization to three dimensions and varying forces is more easily accomplished using the concept of work, which is the subject of the next chapter.) To find an equation for gravitational PE, we examine the case of free fall, in which energy is transformed between kinetic energy and gravitational PE. Whatever energy is lost in one form is gained in an equal amount in the other form, so using the notation `DeltaKE` to stand for `KE_f-KE_i` and a similar notation for PE, we have `DeltaKE=-DeltaPE`. It will be convenient to refer to the object as falling, so that PE is being changed into KE, but the math applies equally well to an object slowing down on its way up. We know an equation for kinetic energy, so if we can relate `v` to height, `y`, we will be able to relate `DeltaPE` to `y`, which would tell us what we want to know about potential energy. The `y` component of the velocity can be connected to the height via the constant acceleration equation `v_f^2=v_i^2+2aDeltay`, and Newton's second law provides the acceleration, `a=F"/"m`, in terms of the gravitational force. The algebra is simple because both equations have velocity to the second power. 
Equation can be solved for `v^2` to give `v^2=2KE"/"m`, and substituting this into equation , we find Making use of equations and gives the simple result [change in gravitational PE resulting from a change in height `Deltay`; `F` is the gravitational force on the object i.e., its weight; valid only near the surface of the earth, where `F` is constant] `=>` If you drop a 1-kg rock from a height of 1 m, how many joules of KE does it have on impact with the ground? (Assume that any energy transformed into heat by air friction is negligible.) `=>` If we choose the `y` axis to point up, then `F_y` is negative, and equals `-(1 kg)(g)=-9.8 N`. A decrease in `y` is represented by a negative value of `Deltay`, `Deltay=-1 m`, so the change in potential energy is `-(-9.8 N)(-1 m)approx-10 J`. (The proof that newtons multiplied by meters give units of joules is left as a homework problem.) Conservation of energy says that the loss of this amount of PE must be accompanied by a corresponding increase in KE of 10 J. It may be dismaying to note how many minus signs had to be handled correctly even in this relatively simple example: a total of four. Rather than depending on yourself to avoid any mistakes with signs, it is better to check whether the final result make sense physically. If it doesn't, just reverse the sign. Although the equation for gravitational potential energy was derived by imagining a situation where it was transformed into kinetic energy, the equation can be used in any context, because all the types of energy are freely convertible into each other. `=>` A 50-kg firefighter slides down a 5-m pole at constant velocity. How much heat is produced? `=>` Since she slides down at constant velocity, there is no change in KE. Heat and gravitational PE are the only forms of energy that change. Ignoring plus and minus signs, the gravitational force on her body equals `mg`, and the amount of energy transformed is `(mg)(5 m)=2500 J`. On physical grounds, we know that there must have been an increase (positive change) in the heat energy in her hands and in the flagpole. Question: In a nutshell, why is there a minus sign in the equation? Answer: It is because we increase the PE by moving the object in the opposite direction compared to the gravitational force. Question: Why do we only get an equation for the change in potential energy? Don't I really want an equation for the potential energy itself? Answer: No, you really don't. This relates to a basic fact about potential energy, which is that it is not a well defined quantity in the absolute sense. Only changes in potential energy are unambiguously defined. If you and I both observe a rock falling, and agree that it deposits 10 J of energy in the dirt when it hits, then we will be forced to agree that the 10 J of KE must have come from a loss of 10 joules of PE. But I might claim that it started with 37 J of PE and ended with 27, while you might swear just as truthfully that it had 109 J initially and 99 at the end. It is possible to pick some specific height as a reference level and say that the PE is zero there, but it's easier and safer just to work with changes in PE and avoid absolute PE altogether. Question: You referred to potential energy as the energy that two objects have because of their distance from each other. If a rock falls, the object is the rock. Where's the other object? Answer: Newton's third law guarantees that there will always be two objects. The other object is the planet earth. 
Question: If the other object is the earth, are we talking about the distance from the rock to the center of the earth or the distance from the rock to the surface of the earth? Answer: It doesn't matter. All that matters is the change in distance, `Deltay`, not `y`. Measuring from the earth's center or its surface are just two equally valid choices of a reference point for defining absolute PE. Question: Which object contains the PE, the rock or the earth? Answer: We may refer casually to the PE of the rock, but technically the PE is a relationship between the earth and the rock, and we should refer to the earth and the rock together as possessing the PE. Question: How would this be any different for a force other than gravity? Answer: It wouldn't. The result was derived under the assumption of constant force, but the result would be valid for any other situation where two objects interacted through a constant force. Gravity is unusual, however, in that the gravitational force on an object is so nearly constant under ordinary conditions. The magnetic force between a magnet and a refrigerator, on the other hand, changes drastically with distance. The math is a little more complex for a varying force, but the concepts are the same. Question: Suppose a pencil is balanced on its tip and then falls over. The pencil is simultaneously changing its height and rotating, so the height change is different for different parts of the object. The bottom of the pencil doesn't lose any height at all. What do you do in this situation? Answer: The general philosophy of energy is that an object's energy is found by adding up the energy of every little part of it. You could thus add up the changes in potential energy of all the little parts of the pencil to find the total change in potential energy. Luckily there's an easier way! The derivation of the equation for gravitational potential energy used Newton's second law, which deals with the acceleration of the object's center of mass (i.e., its balance point). If you just define `Deltay` as the height change of the center of mass, everything works out. A huge Ferris wheel can be rotated without putting in or taking out any PE, because its center of mass is staying at the same height. A ball thrown straight up will have the same speed on impact with the ground as a ball thrown straight down at the same speed. How can this be explained using potential energy? (answer in the back of the PDF version of the book) A You throw a steel ball up in the air. How can you prove based on conservation of energy that it has the same speed when it falls back into your hand? What if you throw a feather up --- is energy not conserved in this case? 12.2 Potential energy by Benjamin Crowell,licensed under the .
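A small numerical sketch of the two worked examples above, assuming Python; it uses the relation derived in the text, `DeltaPE = -F Deltay`, with the y axis pointing up. (The firefighter example in the text rounds to about 2500 J, whereas g = 9.8 m/s² gives 2450 J.)

```python
def delta_gravitational_pe(mass_kg, delta_y_m, g=9.8):
    """Change in gravitational PE near the earth's surface, ΔPE = -F Δy,
    with the y axis pointing up, so the gravitational force is F_y = -m g."""
    f_y = -mass_kg * g
    return -f_y * delta_y_m

# The 1 kg rock dropped 1 m: ΔPE ≈ -9.8 J, so about 10 J of KE on impact.
print(delta_gravitational_pe(1.0, -1.0))     # -9.8
# The 50 kg firefighter sliding 5 m down the pole at constant speed:
# the lost PE appears as heat (the text rounds this to 2500 J).
print(-delta_gravitational_pe(50.0, -5.0))   # 2450.0
```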
Kuiper belt, also called Edgeworth-Kuiper belt, flat ring of icy small bodies that revolve around the Sun beyond the orbit of the planet Neptune. It was named for the Dutch American astronomer Gerard P. Kuiper and comprises hundreds of millions of objects—presumed to be leftovers from the formation of the outer planets—whose orbits lie close to the plane of the solar system. The Kuiper belt is thought to be the source of most of the observed short-period comets, particularly those that orbit the Sun in less than 20 years, and for the icy Centaur objects, which have orbits in the region of the giant planets. (Some of the Centaurs may represent the transition from Kuiper belt objects [KBOs] to short-period comets.) Although its existence had been assumed for decades, the Kuiper belt remained undetected until the 1990s, when the prerequisite large telescopes and sensitive light detectors became available. KBOs orbit at a mean distance from the Sun larger than the mean orbital distance of Neptune (about 30 astronomical units [AU]; 4.5 billion km [2.8 billion miles]). The outer edge of the Kuiper belt is more poorly defined but nominally excludes objects that never go closer to the Sun than 47.2 AU (7.1 billion km [4.4 billion miles]), the location of the 2:1 Neptune resonance, where an object makes one orbit for every two of Neptune’s. The Kuiper belt contains the large objects Eris, Pluto, Makemake, Haumea, Quaoar, and many, likely millions, of other smaller bodies. Learn all about Pluto and its pals. Discovery of the Kuiper belt The Irish astronomer Kenneth E. Edgeworth speculated in 1943 that the distribution of the solar system’s small bodies was not bounded by the present distance of Pluto. Kuiper developed a stronger case in 1951. Working from an analysis of the mass distribution of bodies needed to accrete into planets during the formation of the solar system, Kuiper demonstrated that a large residual amount of small icy bodies—inactive comet nuclei—must lie beyond Neptune. A year earlier the Dutch astronomer Jan Oort had proposed the existence of a much-more-distant spherical reservoir of icy bodies, now called the Oort cloud, from which comets are continually replenished. This distant source adequately accounted for the origin of long-period comets—those having periods greater than 200 years. Kuiper noted, however, that comets with very short periods (20 years or less), which all orbit in the same direction as all the planets around the Sun and close to the plane of the solar system, require a nearer, more-flattened source. This explanation, clearly restated in 1988 by the American astronomer Martin Duncan and coworkers, became the best argument for the existence of the Kuiper belt until its direct detection. In 1992 American astronomer David Jewitt and graduate student Jane Luu discovered (15760) 1992 QB1, which was considered the first KBO. The body is about 200–250 km (125–155 miles) in diameter, as estimated from its brightness. It moves in a nearly circular orbit in the plane of the planetary system at a distance from the Sun of about 44 AU (6.6 billion km [4.1 billion miles]). This is outside the orbit of Pluto, which has a mean radius of 39.5 AU (5.9 billion km [3.7 billion miles]). The discovery of 1992 QB1 alerted astronomers to the feasibility of detecting other KBOs, and within 20 years about 1,500 had been discovered. 
On the basis of brightness estimates, the sizes of the larger known KBOs approach or exceed that of Pluto’s largest moon, Charon, which has a diameter of 1,208 km (751 miles). One KBO, given the name Eris, appears to be twice that diameter—i.e., only slightly smaller than Pluto itself. Because of their location outside Neptune’s orbit (mean radius 30.1 AU; 4.5 billion km [2.8 billion miles]), they are also called trans-Neptunian objects (TNOs). Because several KBOs like Eris are nearly as large as Pluto, beginning in the 1990s, astronomers wondered if Pluto should really be considered as a planet or as one of the largest bodies in the Kuiper belt. Evidence mounted that Pluto was a KBO that just happened to have been discovered 62 years before 1992 QB1, and in 2006 the International Astronomical Union voted to classify Pluto and Eris as dwarf planets. KBOs are classified by their semimajor axis (the mean distance from the Sun), their perihelion distance (the closest approach to the Sun), and the inclination of their orbital plane to that formed by the planets of the solar system. Using these parameters, KBOs are often found in three distinct orbital substructures. - Resonant objects: KBOs in mean motion resonance (MMR) with Neptune. An estimated 55,000 KBOs larger than 100 km (60 miles) in diameter orbit the Sun in an integer ratio of Neptune orbital periods. For example, Pluto is in the 3:2 Neptune MMR, completing two orbits around the Sun in the time it takes Neptune to complete three. In fact, nearly one-quarter of all MMR objects are in the 3:2 resonance. In recognition of this kinship, these objects have been dubbed Plutinos. - Hot classicals: KBOs with inclinations drawn from a wide distribution (about 16°) and with perihelion distances of between 35 and 40 AU (5.2 billion and 6 billion km [3.3 billion and 3.7 billion miles]). The hot classical population consists of approximately 120,000 objects with diameters larger than 100 km. This population is estimated to included 80,000 objects whose mean distance from the Sun exceeds 50 AU (7.5 billion km [4.6 billion miles]) and that are therefore sometimes referred to collectively as the “outer” or “detached” Kuiper belt. - Cold classicals: KBOs drawn from a narrow distribution of orbit inclinations (about 2.6°), with mean orbital distances restricted to 42.5–47.2 AU (6.4 billion–7.1 billion km [4 billion–4.4 billion miles]) and perihelion distances smoothly distributed between 38 AU (5.7 billion km [3.5 billion miles]) and 47.2 AU. The cold classical population is approximately 75,000 objects with diameters of 100 km and larger. Within the cold classicals are a small subpopulation called “the kernel” of 25,000 objects with diameters larger than 100 km. The kernel objects have semimajor axes between 43.8 and 44.4 AU (6.55 billion and 6.64 billion km [4.07 billion and 4.13 billion miles]), orbital eccentricities of between 0.03 and 0.08, and a narrow inclination distribution like the rest of the cold classical component. The above list contains the currently well-defined substructures of the orbital space of the Kuiper belt. These objects are in metastable orbits; that is, their orbits are stable over timescales of 100 million to 1 billion years. However, some will chaotically diffuse out of the stable region. As more KBOs are discovered, additional significant orbital populations are likely to be found. 
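The resonant-object distances quoted above follow from Kepler's third law (T² ∝ a³). Here is a hedged sketch, assuming Python and the article's value of about 30.1 AU for Neptune's mean distance; the 2:1 figure it produces (≈47.8 AU) is close to, but not identical with, the ~47.2 AU outer edge quoted earlier, since the exact number depends on the orbital elements adopted for Neptune.

```python
def resonance_semimajor_axis(neptune_a_au, kbo_orbits, neptune_orbits):
    """Semimajor axis of a KBO in mean-motion resonance with Neptune.
    A KBO completing `kbo_orbits` orbits while Neptune completes
    `neptune_orbits` has period ratio T_kbo/T_nep = neptune_orbits/kbo_orbits;
    Kepler's third law (T^2 proportional to a^3) then gives the distance."""
    period_ratio = neptune_orbits / kbo_orbits
    return neptune_a_au * period_ratio ** (2.0 / 3.0)

NEPTUNE_A = 30.1  # Neptune's mean distance from the Sun in AU, from the text
print(resonance_semimajor_axis(NEPTUNE_A, 2, 3))  # 3:2 (Plutinos) ≈ 39.4 AU
print(resonance_semimajor_axis(NEPTUNE_A, 1, 2))  # 2:1 ≈ 47.8 AU
```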
KBOs that have significant gravitational interactions with Neptune are called “scattering KBOs.” Scattering KBOs are on orbits that are unstable on million-year timescales. These objects are thought to be in transition from being metastable KBOs to becoming Centaur objects and eventually short-period comets. The metastable region that supplies the scattering population is not known, but it may be the hot classicals or perhaps the resonant KBOs. Not all scattering orbits are equally unstable, and understanding how a KBO in a metastable orbit becomes a short-period comet is an area of active research. The estimated population of scattering sources (3,000–15,000 objects larger than 100 km in diameter) is significantly smaller than theoretical expectations. Because of the small number of detected sources, the estimated numbers of KBOs are still quite uncertain. Particularly uncertain is the number of small (1–10-km) KBOs if this region of the solar system really is the reservoir for short-period comets. For comparison, there are estimated to be 250 asteroids larger than 100 km in diameter and perhaps 1 million larger than 1 km. If the relation between the number of objects and size for KBOs is similar to that of asteroids, that implies a total Kuiper belt population of more than 100 billion sources larger than 1 km in diameter. This extrapolation is derived from the few hundred sources for which precise detection circumstances are available. However, extrapolating from 300 objects to 100 billion is subject to considerable uncertainty. As noted above, the planet Neptune has a strong gravitational influence over the orbital structure of the Kuiper belt. There are two prevailing models for the formation of structure in the orbital distribution of KBOs. In the “migration” model, Neptune’s mean orbital distance was initially smaller (around 23 AU; 3.4 billion km [2.1 billion miles[) and slowly migrated to its current location of about 30 AU (4.5 billion km [2.8 billion miles]). During this slow orbital growth many KBOs became trapped into orbital resonance with Neptune. However, this model does not produce the hot classical component, and some other process must therefore lead to more-inclined orbits for KBOs. Alternatively, in the “Nice” model (named after the French city where it was first proposed), the giant planets of the solar system formed in a more-compact configuration than is seen today, and through gravitational interaction Neptune and Uranus were scattered to their current locations. The Nice model provides a reasonable representation of the hot component of the Kuiper belt but is less successful at producing the resonant objects and does not provide for a cold classical component. A complete explanation of the formation of structure in the outer solar system may be some combination of these two scenarios or some completely different model of evolution. In addition to the nominal members of the Kuiper belt described above, there are some KBOs whose closest approach to the Sun leaves them well outside the influence of Neptune. Sedna, an object whose closest approach is 76.3 AU (11.4 billion km [7.1 billion miles]), is the most extreme example of these distant outliers. 
These rare objects (only two objects with closest approaches greater than 47.2 AU [7.1 billion km (4.4 billion miles)] and mean Sun distances larger than 200 AU (29.9 billion km (18.6 billion miles)] are currently known) may represent the very outer edge of the Kuiper belt region or the inner edge of an entirely new population of sources. Sedna is sometimes referred to as a member of the inner Oort cloud. Families, binaries, and satellites The Kuiper belt is likely to contain families of objects—that is, populations of objects that are likely to have been derived from a single parent body. The members of a family would have similar heliocentric orbital parameters and surface properties. Only one such group, the nine-member Haumea family, is currently well established. The Haumea family members have orbital parameters that are much more similar than would be expected from standard family production. Modeling the production of the Haumea family, an important step toward confirming that these groups really do come from a single progenitor object, is an ongoing field of research. Pairs of near-equal-sized KBOs that are gravitationally bound together are called binary KBOs. Of the known cold classical KBOs, 15 to 20 percent are in binary systems. The Pluto-Charon system is binary but is unusual in the compactness of the system. The production of binary KBOs requires a large initial population of KBOs, many times larger than that currently observed, for capture into binary pairs to have been possible. Alternatively, binary KBOs might result from a turbulence mechanism at work during the formation of planetesimals in the Kuiper belt. The existence of Kuiper belt binaries appears to preclude a major gravitational scattering of sources in this population, as such effects would have disrupted the observed systems. A few percent of all KBOs are found to have satellites. The term satellite is used instead of binary when there is a large (+10) mass ratio between the primary KBO and the orbiting material. Satellites likely form when two KBOs collide and some of the disrupted material is captured into orbit around large surviving members. The KBO Haumea has at least two such satellites, Hi’iaka and Namaka. The Haumea satellites were likely captured from the debris of the collision that produced the Haumea family of KBOs.
The water cycle, also known as the hydrologic cycle or the hydrological cycle, describes the continuous movement of water on, above and below the surface of the Earth. The mass of water on Earth remains fairly constant over time but the partitioning of the water into the major reservoirs of ice, fresh water, saline water and atmospheric water is variable depending on a wide range of climatic variables. The water moves from one reservoir to another, such as from river to ocean, or from the ocean to the atmosphere, by the physical processes of evaporation, condensation, precipitation, infiltration, surface runoff, and subsurface flow. In doing so, the water goes through different forms: liquid, solid (ice) and vapor. The water cycle involves the exchange of energy, which leads to temperature changes. When water evaporates, it takes up energy from its surroundings and cools the environment. When it condenses, it releases energy and warms the environment. These heat exchanges influence climate. The evaporative phase of the cycle purifies water which then replenishes the land with freshwater. The flow of liquid water and ice transports minerals across the globe. It is also involved in reshaping the geological features of the Earth, through processes including erosion and sedimentation. The water cycle is also essential for the maintenance of most life and ecosystems on the planet. - 1 Description - 2 Residence times - 3 Changes over time - 4 Effects on climate - 5 Effects on biogeochemical cycling - 6 Slow loss over geologic time - 7 History of hydrologic cycle theory - 8 See also - 9 References - 10 Further reading - 11 External links The sun, which drives the water cycle, heats water in oceans and seas. Water evaporates as water vapor into the air. Some ice and snow sublimates directly into water vapor. Evapotranspiration is water transpired from plants and evaporated from the soil. The water molecule H 2O has smaller molecular mass than the major components of the atmosphere, nitrogen and oxygen, N 2 and O 2, hence is less dense. Due to the significant difference in density, buoyancy drives humid air higher. As altitude increases, air pressure decreases and the temperature drops (see Gas laws). The lower temperature causes water vapor to condense into tiny liquid water droplets which are heavier than the air, and fall unless supported by an updraft. A huge concentration of these droplets over a large space up in the atmosphere become visible as cloud. Some condensation is near ground level, and called fog. Atmospheric circulation moves water vapor around the globe; cloud particles collide, grow, and fall out of the upper atmospheric layers as precipitation. Some precipitation falls as snow or hail, sleet, and can accumulate as ice caps and glaciers, which can store frozen water for thousands of years. Most water falls back into the oceans or onto land as rain, where the water flows over the ground as surface runoff. A portion of runoff enters rivers in valleys in the landscape, with streamflow moving water towards the oceans. Runoff and water emerging from the ground (groundwater) may be stored as freshwater in lakes. Not all runoff flows into rivers; much of it soaks into the ground as infiltration. Some water infiltrates deep into the ground and replenishes aquifers, which can store freshwater for long periods of time. Some infiltration stays close to the land surface and can seep back into surface-water bodies (and the ocean) as groundwater discharge. 
Some groundwater finds openings in the land surface and comes out as freshwater springs. In river valleys and floodplains, there is often continuous water exchange between surface water and ground water in the hyporheic zone. Over time, the water returns to the ocean, to continue the water cycle. - Condensed water vapor that falls to the Earth's surface. Most precipitation occurs as rain, but also includes snow, hail, fog drip, graupel, and sleet. Approximately 505,000 km3 (121,000 cu mi) of water falls as precipitation each year, 398,000 km3 (95,000 cu mi) of it over the oceans.[better source needed] The rain on land contains 107,000 km3 (26,000 cu mi) of water per year and a snowing only 1,000 km3 (240 cu mi). 78% of global precipitation occurs over the ocean. - Canopy interception - The precipitation that is intercepted by plant foliage eventually evaporates back to the atmosphere rather than falling to the ground. - The runoff produced by melting snow. - The variety of ways by which water moves across the land. This includes both surface runoff and channel runoff. As it flows, the water may seep into the ground, evaporate into the air, become stored in lakes or reservoirs, or be extracted for agricultural or other human uses. - The flow of water from the ground surface into the ground. Once infiltrated, the water becomes soil moisture or groundwater. A recent global study using water stable isotopes, however, shows that not all soil moisture is equally available for groundwater recharge or for plant transpiration. - Subsurface flow - The flow of water underground, in the vadose zone and aquifers. Subsurface water may return to the surface (e.g. as a spring or by being pumped) or eventually seep into the oceans. Water returns to the land surface at lower elevation than where it infiltrated, under the force of gravity or gravity induced pressures. Groundwater tends to move slowly and is replenished slowly, so it can remain in aquifers for thousands of years. - The transformation of water from liquid to gas phases as it moves from the ground or bodies of water into the overlying atmosphere. The source of energy for evaporation is primarily solar radiation. Evaporation often implicitly includes transpiration from plants, though together they are specifically referred to as evapotranspiration. Total annual evapotranspiration amounts to approximately 505,000 km3 (121,000 cu mi) of water, 434,000 km3 (104,000 cu mi) of which evaporates from the oceans. 86% of global evaporation occurs over the ocean. - The state change directly from solid water (snow or ice) to water vapor by passing the liquid state. - This refers to changing of water vapor directly to ice. - The movement of water through the atmosphere. Without advection, water that evaporated over the oceans could not precipitate over land. - The transformation of water vapor to liquid water droplets in the air, creating clouds and fog. - The release of water vapor from plants and soil into the air. - Water flows vertically through the soil and rocks under the influence of gravity. - Plate tectonics - Water enters the mantle via subduction of oceanic crust. Water returns to the surface via volcanism. The water cycle involves many of these processes. 
Residence times
Average residence time of water in major reservoirs:
- Glaciers: 20 to 100 years
- Seasonal snow cover: 2 to 6 months
- Soil moisture: 1 to 2 months
- Groundwater, shallow: 100 to 200 years
- Groundwater, deep: 10,000 years
- Lakes (see lake retention time): 50 to 100 years
- Rivers: 2 to 6 months
The residence time of a reservoir within the hydrologic cycle is the average time a water molecule will spend in that reservoir (see the list above). It is a measure of the average age of the water in that reservoir. Groundwater can spend over 10,000 years beneath Earth's surface before leaving. Particularly old groundwater is called fossil water. Water stored in the soil remains there very briefly, because it is spread thinly across the Earth, and is readily lost by evaporation, transpiration, stream flow, or groundwater recharge. After evaporating, the residence time in the atmosphere is about 9 days before condensing and falling to the Earth as precipitation. The major ice sheets – Antarctica and Greenland – store ice for very long periods. Ice from Antarctica has been reliably dated to 800,000 years before present, though the average residence time is shorter. In hydrology, residence times can be estimated in two ways. The more common method relies on the principle of conservation of mass and assumes the amount of water in a given reservoir is roughly constant. With this method, residence times are estimated by dividing the volume of the reservoir by the rate at which water either enters or exits the reservoir. Conceptually, this is equivalent to timing how long it would take the reservoir to become filled from empty if no water were to leave (or how long it would take the reservoir to empty from full if no water were to enter).
Changes over time
The water cycle describes the processes that drive the movement of water throughout the hydrosphere. However, much more water is "in storage" for long periods of time than is actually moving through the cycle. The storehouses for the vast majority of all water on Earth are the oceans. It is estimated that of the 332,500,000 mi3 (1,386,000,000 km3) of the world's water supply, about 321,000,000 mi3 (1,338,000,000 km3) is stored in oceans, or about 97%. It is also estimated that the oceans supply about 90% of the evaporated water that goes into the water cycle. During colder climatic periods, more ice caps and glaciers form, and enough of the global water supply accumulates as ice to lessen the amounts in other parts of the water cycle. The reverse is true during warm periods. During the last ice age, glaciers covered almost one-third of Earth's land mass, with the result that the oceans were about 122 m (400 ft) lower than today. During the last global "warm spell," about 125,000 years ago, the seas were about 5.5 m (18 ft) higher than they are now. About three million years ago the oceans could have been up to 50 m (165 ft) higher. The scientific consensus expressed in the 2007 Intergovernmental Panel on Climate Change (IPCC) Summary for Policymakers is for the water cycle to continue to intensify throughout the 21st century, though this does not mean that precipitation will increase in all regions. In subtropical land areas – places that are already relatively dry – precipitation is projected to decrease during the 21st century, increasing the probability of drought. The drying is projected to be strongest near the poleward margins of the subtropics (for example, the Mediterranean Basin, South Africa, southern Australia, and the Southwestern United States). 
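As a rough illustration of the conservation-of-mass method for residence times described above, the sketch below divides an assumed reservoir volume by its throughput. The atmospheric water volume of about 12,900 km3 is an illustrative figure not taken from this article; the 505,000 km3/yr evapotranspiration flux is the value quoted in the text.

```python
# Conservation-of-mass estimate of residence time:
#   residence time = reservoir volume / flux through the reservoir.
# The atmospheric water volume (~12,900 km^3) is an assumed illustrative
# value; the global evapotranspiration flux (505,000 km^3/yr) is the
# figure quoted above.

def residence_time_days(volume_km3: float, flux_km3_per_year: float) -> float:
    """Average residence time in days for a reservoir in steady state."""
    return volume_km3 / flux_km3_per_year * 365.25

atmosphere_volume = 12_900       # km^3, assumed for illustration
evapotranspiration = 505_000     # km^3 per year, from the text
print(f"Atmospheric residence time: "
      f"{residence_time_days(atmosphere_volume, evapotranspiration):.1f} days")
# Prints roughly 9 days, consistent with the value stated above.
```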
Annual precipitation amounts are expected to increase in near-equatorial regions that tend to be wet in the present climate, and also at high latitudes. These large-scale patterns are present in nearly all of the climate model simulations conducted at several international research centers as part of the 4th Assessment of the IPCC. There is now ample evidence that increased hydrologic variability and change in climate has had, and will continue to have, a profound impact on the water sector through the hydrologic cycle, water availability, water demand, and water allocation at the global, regional, basin, and local levels. Research published in Science in 2012, based on surface ocean salinity over the period 1950 to 2000, confirms this projection of an intensified global water cycle, with salty areas becoming more saline and fresher areas becoming more fresh over the period:
Fundamental thermodynamics and climate models suggest that dry regions will become drier and wet regions will become wetter in response to warming. Efforts to detect this long-term response in sparse surface observations of rainfall and evaporation remain ambiguous. We show that ocean salinity patterns express an identifiable fingerprint of an intensifying water cycle. Our 50-year observed global surface salinity changes, combined with changes from global climate models, present robust evidence of an intensified global water cycle at a rate of 8 ± 5% per degree of surface warming. This rate is double the response projected by current-generation climate models and suggests that a substantial (16 to 24%) intensification of the global water cycle will occur in a future 2° to 3° warmer world.
Glacial retreat is also an example of a changing water cycle, where the supply of water to glaciers from precipitation cannot keep up with the loss of water from melting and sublimation. Glacial retreat since 1850 has been extensive. Human activities that alter the water cycle include:
- alteration of the chemical composition of the atmosphere
- construction of dams
- deforestation and afforestation
- removal of groundwater from wells
- water abstraction from rivers
- urbanization (water-sensitive urban design can be practiced to counteract its impact)
Effects on climate
The water cycle is powered by solar energy. 86% of global evaporation occurs from the oceans, reducing their temperature by evaporative cooling. Without the cooling effect of evaporation, the greenhouse effect would lead to a much higher surface temperature of 67 °C (153 °F), and a warmer planet.
Effects on biogeochemical cycling
While the water cycle is itself a biogeochemical cycle, the flow of water over and beneath the Earth is a key component of the cycling of other biogeochemicals. Runoff is responsible for almost all of the transport of eroded sediment and phosphorus from land to waterbodies. The salinity of the oceans is derived from erosion and transport of dissolved salts from the land. Cultural eutrophication of lakes is primarily due to phosphorus, applied in excess to agricultural fields in fertilizers, and then transported overland and down rivers. Both runoff and groundwater flow play significant roles in transporting nitrogen from the land to waterbodies. The dead zone at the outlet of the Mississippi River is a consequence of nitrates from fertilizer being carried off agricultural fields and funnelled down the river system to the Gulf of Mexico. Runoff also plays a part in the carbon cycle, again through the transport of eroded rock and soil. 
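The 16 to 24% figure in the quoted study follows from scaling the observed rate by the assumed warming. A minimal sketch of that arithmetic, using only the numbers quoted above, is given below.

```python
# Arithmetic behind the quoted 16-24% intensification figure: scale the
# observed water-cycle intensification rate per degree of surface warming
# by an assumed 2-3 degC of future warming. All numbers are those quoted
# in the study excerpt above.

rate_per_degC = 0.08          # central estimate: 8% per degree of warming
rate_uncertainty = 0.05       # +/- 5% per degree

for warming in (2.0, 3.0):
    central = rate_per_degC * warming
    low = (rate_per_degC - rate_uncertainty) * warming
    high = (rate_per_degC + rate_uncertainty) * warming
    print(f"{warming:.0f} degC warming: ~{central:.0%} "
          f"(range {low:.0%} to {high:.0%})")
# Central estimates of 16% and 24% reproduce the range quoted above.
```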
Slow loss over geologic time
The hydrodynamic wind within the upper portion of a planet's atmosphere allows light chemical elements such as hydrogen to move up to the exobase, the lower limit of the exosphere, where the gases can then reach escape velocity, entering outer space without impacting other particles of gas. This type of gas loss from a planet into space is known as planetary wind. Planets with hot lower atmospheres could develop humid upper atmospheres that accelerate the loss of hydrogen.
History of hydrologic cycle theory
Floating land mass
In ancient times, it was widely thought that the land mass floated on a body of water, and that most of the water in rivers has its origin under the earth. Examples of this belief can be found in the works of Homer (circa 800 BCE). In the ancient Near East, Hebrew scholars observed that even though the rivers ran into the sea, the sea never became full. Some scholars conclude that the water cycle was described completely during this time in this passage: "The wind goeth toward the south, and turneth about unto the north; it whirleth about continually, and the wind returneth again according to its circuits. All the rivers run into the sea, yet the sea is not full; unto the place from whence the rivers come, thither they return again" (KJV). Scholars are not in agreement as to the date of Ecclesiastes, though most point to a date during the time of King Solomon, son of David and Bathsheba, roughly three thousand years ago, and there is some agreement that the time period is 962–922 BCE. Furthermore, it was also observed that when the clouds were full, they emptied rain on the earth. In addition, during 793–740 BCE a Hebrew prophet, Amos, stated that water comes from the sea and is poured out on the earth. In the Biblical Book of Job, dated between the 7th and 2nd centuries BCE, there is a description of precipitation in the hydrologic cycle: "For he maketh small the drops of water: they pour down rain according to the vapour thereof; Which the clouds do drop and distil upon man abundantly" (KJV).
Precipitation and percolation
In the Adityahridayam (a devotional hymn to the Sun God) of the Ramayana, a Hindu epic dated to the 4th century BCE, it is mentioned in the 22nd verse that the Sun heats up water and sends it down as rain. By roughly 500 BCE, Greek scholars were speculating that much of the water in rivers can be attributed to rain. The origin of rain was also known by then. These scholars maintained the belief, however, that water rising up through the earth contributed a great deal to rivers. Examples of this thinking included Anaximander (570 BCE) (who also speculated about the evolution of land animals from fish) and Xenophanes of Colophon (530 BCE). Chinese scholars such as Chi Ni Tzu (320 BCE) and Lu Shih Ch'un Ch'iu (239 BCE) had similar thoughts. The idea that the water cycle is a closed cycle can be found in the works of Anaxagoras of Clazomenae (460 BCE) and Diogenes of Apollonia (460 BCE). Both Plato (390 BCE) and Aristotle (350 BCE) speculated about percolation as part of the water cycle. Up to the time of the Renaissance, it was thought that precipitation alone was insufficient to feed rivers in a complete water cycle, and that underground water pushing upwards from the oceans was the main contributor to river water. Bartholomew of England held this view (1240 CE), as did Leonardo da Vinci (1500 CE) and Athanasius Kircher (1644 CE). 
The first published thinker to assert that rainfall alone was sufficient for the maintenance of rivers was Bernard Palissy (1580 CE), who is often credited as the "discoverer" of the modern theory of the water cycle. Palissy's theories were not tested scientifically until 1674, in a study commonly attributed to Pierre Perrault. Even then, these beliefs were not accepted in mainstream science until the early nineteenth century.
See also
- Moisture advection
- Moisture recycling
- Planetary boundaries
- Water use
- Deep water cycle
- Global meteoric water line
A register file is an array of processor registers in a central processing unit (CPU). Modern integrated circuit-based register files are usually implemented by way of fast static RAMs with multiple ports. Such RAMs are distinguished by having dedicated read and write ports, whereas ordinary multiported SRAMs will usually read and write through the same ports. The instruction set architecture of a CPU will almost always define a set of registers which are used to stage data between memory and the functional units on the chip. In simpler CPUs, these architectural registers correspond one-for-one to the entries in a physical register file (PRF) within the CPU. More complicated CPUs use register renaming, so that the mapping of which physical entry stores a particular architectural register changes dynamically during execution. The register file is part of the architecture and visible to the programmer, as opposed to the concept of transparent caches.
Register bank switching
Register files may be grouped together as register banks. Some processors have several register banks. ARM processors use banked registers for fast interrupt requests. x86 processors use context switching and fast interrupts to switch between the instruction decoder, GPRs, and register files (if there is more than one) before an instruction is issued, but this exists only on processors that support superscalar execution. Context switching, however, is an entirely different mechanism from ARM's register banking.
The usual layout convention is that a simple array is read out vertically. That is, a single word line, which runs horizontally, causes a row of bit cells to put their data on bit lines, which run vertically. Sense amps, which convert low-swing read bitlines into full-swing logic levels, are usually at the bottom (by convention). Larger register files are then sometimes constructed by tiling mirrored and rotated simple arrays. Register files have one word line per entry per port, one bit line per bit of width per read port, and two bit lines per bit of width per write port. Each bit cell also has a Vdd and Vss. Therefore, the wire pitch area increases as the square of the number of ports, and the transistor area increases linearly. At some point, it may be smaller and/or faster to have multiple redundant register files, with smaller numbers of read ports, rather than a single register file with all the read ports. The MIPS R8000's integer unit, for example, had a 9-read, 4-write-port, 32-entry, 64-bit register file implemented in a 0.7 µm process, which could be seen when looking at the chip from arm's length. Two popular approaches to dividing registers into multiple register files are the distributed register file configuration and the partitioned register file configuration. In principle, any operation that could be done with a 64-bit-wide register file with many read and write ports could be done with a single 8-bit-wide register file with a single read port and a single write port. However, the bit-level parallelism of wide register files with many ports allows them to run much faster and thus, they can do operations in a single cycle that would take many cycles with fewer ports or a narrower bit width or both. The width in bits of the register file is usually the number of bits in the processor word size. 
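The quadratic growth of wire-pitch area with port count, and the case for splitting a heavily read register file into replicated copies, can be illustrated with a rough back-of-the-envelope model. The sketch below is only illustrative; the unit constants are arbitrary assumptions, not figures from this article or any real process.

```python
# Rough area model for one register-file bit cell, following the rule of
# thumb above: wire-pitch area grows as the square of the total port
# count, while transistor area grows roughly linearly with it.
# WIRE_UNIT and XTOR_UNIT are arbitrary illustrative constants.

WIRE_UNIT = 1.0   # area per squared wire track (arbitrary units)
XTOR_UNIT = 4.0   # area per port's transistors (arbitrary units)

def bitcell_area(read_ports: int, write_ports: int) -> float:
    ports = read_ports + write_ports
    return WIRE_UNIT * ports ** 2 + XTOR_UNIT * ports

def compare(read_ports: int, write_ports: int) -> None:
    # One big file versus two replicated copies: each copy serves half
    # the read ports but keeps every write port so both stay coherent.
    single = bitcell_area(read_ports, write_ports)
    split = 2 * bitcell_area((read_ports + 1) // 2, write_ports)
    print(f"{read_ports}R{write_ports}W  single={single:.0f}  two copies={split:.0f}")

compare(9, 4)    # an R8000-like integer file: splitting does not pay off here
compare(16, 4)   # hypothetical wider machine: two smaller copies win
# Whether replication wins depends on the port mix and on the assumed
# constants -- hence "at some point" in the text above.
```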
Occasionally the register file is slightly wider in order to attach "extra" bits to each register, such as a poison bit. If the width of the data word is different from the width of an address—or in some cases, such as the 68000, even when they are the same width—the address registers are in a separate register file from the data registers.
- The decoder is often broken into a pre-decoder and the decoder proper.
- The decoder is a series of AND gates that drive word lines.
- There is one decoder per read or write port. If the array has four read and two write ports, for example, it has 6 word lines per bit cell in the array, and six AND gates per row in the decoder. Note that the decoder has to be pitch-matched to the array, which forces those AND gates to be wide and short.
The basic scheme for a bit cell:
- State is stored in a pair of inverters.
- Data is read out by an nmos transistor onto a bit line.
- Data is written by shorting one side or the other to ground through a two-nmos stack.
- So: read ports take one transistor per bit cell, write ports take four.
Many optimizations are possible:
- Sharing lines between cells, for example, Vdd and Vss.
- Read bit lines are often precharged to something between Vdd and Vss.
- Read bit lines often swing only a fraction of the way to Vdd or Vss. A sense amplifier converts this small-swing signal into a full logic level. Small-swing signals are faster because the bit line has little drive but a great deal of parasitic capacitance.
- Write bit lines may be braided, so that they couple equally to the nearby read bitlines. Because write bitlines are full swing, they can cause significant disturbances on read bitlines.
- If Vdd is a horizontal line, it can be switched off, by yet another decoder, if any of the write ports are writing that line during that cycle. This optimization increases the speed of the write.
- Techniques that reduce the energy used by register files are useful in low-power electronics.
Most register files make no special provision to prevent multiple write ports from writing the same entry simultaneously. Instead, the instruction scheduling hardware ensures that only one instruction in any particular cycle writes a particular entry. If multiple instructions targeting the same register are issued, all but one have their write enables turned off. The crossed inverters take some finite time to settle after a write operation, during which a read operation will either take longer or return garbage. It is common to have bypass multiplexers that forward written data to the read ports when a simultaneous read and write to the same entry is commanded. These bypass multiplexers are often part of a larger bypass network that forwards results which have not yet been committed between functional units. The register file is usually pitch-matched to the datapath that it serves. Pitch matching avoids having many busses passing over the datapath turn corners, which would use a lot of area. But since every unit must have the same bit pitch, every unit in the datapath ends up with the bit pitch forced by the widest unit, which can waste area in the other units. 
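To make the write-conflict and bypass behaviour described above concrete, here is a minimal behavioural sketch of a register file whose read ports forward same-cycle writes. It models only the logical behaviour, not the circuit, and the class and method names are made up for illustration.

```python
# Minimal behavioural sketch (not a circuit model) of a register file
# whose read ports bypass data written to the same entry in the same
# cycle, as described in the text above. Names are illustrative only.

class RegisterFile:
    def __init__(self, entries: int, width_bits: int = 64):
        self.mask = (1 << width_bits) - 1
        self.regs = [0] * entries

    def cycle(self, writes: dict[int, int], reads: list[int]) -> list[int]:
        """One cycle: 'writes' maps entry -> new value (the scheduler must
        ensure at most one writer per entry), 'reads' lists entries to read.
        Reads see same-cycle writes, emulating the bypass multiplexers."""
        results = []
        for entry in reads:
            if entry in writes:              # bypass: forward the new value
                results.append(writes[entry] & self.mask)
            else:                            # normal read of stored state
                results.append(self.regs[entry])
        for entry, value in writes.items():  # commit writes at end of cycle
            self.regs[entry] = value & self.mask
        return results

rf = RegisterFile(entries=32)
rf.cycle(writes={5: 123}, reads=[])
print(rf.cycle(writes={5: 456}, reads=[5, 5]))  # [456, 456]: bypassed, not 123
```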
Register files, because they have two wires per bit per write port, and because all the bit lines must contact the silicon at every bit cell, can often set the pitch of a datapath. Area can sometimes be saved, on machines with multiple units in a datapath, by having two datapaths side-by-side, each of which has a smaller bit pitch than a single datapath would have. This case usually forces multiple copies of a register file, one for each datapath.
The Alpha 21264 (EV6), for instance, was the first large microarchitecture to implement a "shadow register file architecture". It had two copies of the integer register file and two copies of the floating-point register file located in its front end (a future file and a scaled file, each with two read and two write ports), and it took an extra cycle to propagate data between the two during a context switch. The issue logic attempted to reduce the number of operations forwarding data between the two copies; this greatly improved integer performance and helped reduce the impact of the limited number of GPRs in superscalar and speculative execution. The design was later adapted by SPARC, MIPS, and some later x86 implementations.
MIPS also used multiple register files: the R8000 floating-point unit had two copies of the floating-point register file, each with four write and four read ports, and wrote both copies at the same time with a context switch. It did not, however, support integer operation this way, and the integer register file remained a single copy. The shadow register file was later abandoned in newer designs in favor of the embedded market.
SPARC also used a shadow register file architecture for its high-end line. It had up to four copies of the integer register file (future, retired, scaled, and scratch, each with seven read and four write ports) and two copies of the floating-point register file. Unlike in Alpha and x86, however, these are located in the back end, as the retire unit, right after the out-of-order unit and the renaming register files; they do not load instructions during the instruction fetch and decoding stages, and no context switch is needed in this design.
IBM uses the same mechanism as many major microprocessors, deeply merging the register file with the decoder, but its register files work independently of the decoder side and do not involve context switching, unlike Alpha and x86. Most of its register files serve not just a dedicated decoder but extend up to the thread level. For example, POWER8 has up to 8 instruction decoders but up to 32 register files of 32 general-purpose registers each (4 read and 4 write ports) to facilitate simultaneous multithreading; an instruction cannot use registers from any other register file (there is no context switch).
In the x86 processor line, a typical pre-486 CPU did not have an individual register file: all general-purpose registers worked directly with the decoder, and the x87 push-down stack was located within the floating-point unit itself. Starting with the Pentium, a typical Pentium-compatible x86 processor is integrated with one copy of a single-ported architectural register file containing 8 architectural registers, 8 control registers, 8 debug registers, 8 condition code registers, 8 unnamed base registers, one instruction pointer, one flag register, and 6 segment registers in one file. There is one copy of the 8-entry x87 FP push-down stack by default; the MMX registers were virtually simulated from the x87 stack and required x86 registers to supply MMX instructions, aliasing onto the existing stack. 
On the P6, instructions can be stored and executed independently, in parallel, in early pipeline stages before being decoded into micro-operations and renamed for out-of-order execution. Beginning with the P6, register files do not require an additional cycle to propagate data; register files such as the architectural and floating-point files are located between the code buffer and the decoders, in the "retire buffer", the reorder buffer, and the out-of-order engine, connected by a 16-byte ring bus. The register file itself still remains one x86 register file and one x87 stack, and both serve as retirement storage. The x86 register file was increased to dual-ported to provide more bandwidth for result storage. Registers such as the debug, condition code, control, unnamed, and flag registers were stripped from the main register file and placed into individual files between the micro-op ROM and the instruction sequencer. Only inaccessible registers such as the segment registers are now separated from the general-purpose register file (except the instruction pointer); they are located between the scheduler and the instruction allocator, in order to facilitate register renaming and out-of-order execution. The x87 stack was later merged with the floating-point register file after the 128-bit XMM registers debuted in the Pentium III, but the XMM register file is still located separately from the x86 integer register files.
Later P6 implementations (Pentium M, Yonah) introduced a "shadow register file architecture" that expanded to two copies of the dual-ported integer architectural register file, with a context switch between the future/retired file and the scaled file (using the same trick used between the integer and floating-point files). It was intended to solve the register bottleneck that exists in the x86 architecture after micro-op fusion was introduced, but each file still has 8 entries of 32-bit architectural registers, for a total capacity of 32 bytes per file (the segment registers and instruction pointer remain within the file, though they are inaccessible to programs), serving as the speculative file. The second file serves as a scaled shadow register file; without a context switch, the scaled file cannot store some instructions independently. Some instructions from SSE2/SSE3/SSSE3 require this feature for integer operation; for example, instructions such as PSHUFB, PMADDUBSW, PHSUBW, PHSUBD, PHSUBSW, PHADDW, PHADDD, and PHADDSW would require loading EAX/EBX/ECX/EDX from both register files, though it was uncommon for an x86 processor to use the other register file with the same instruction; most of the time, the second file serves as a scaled retired file. The Pentium M architecture still has a single dual-ported FP register file (8-entry MM/XMM) shared among its three decoders, and the FP registers have no shadow register file, as the shadow register file architecture did not include floating-point functions. In processors after the P6, the architectural register files are external and located in the processor's back end after retirement, in contrast to the internal register files located in the inner core for register renaming and the reorder buffer. In Core 2, however, this is now within a unit called the register alias table (RAT), located with the instruction allocator but with the same register capacity as at retirement. 
Core 2 increased the inner ring bus to 24 bytes (allowing more than 3 instructions to be decoded) and extended its register file from dual-ported (one read/one write) to quad-ported (two read/two write). The registers still number 8 entries of 32 bits, 32 bytes per file in total (not counting the 6 segment registers and the instruction pointer, as they cannot be accessed in the file by any code or instruction), and were expanded to 16 entries in x64 for a total of 128 bytes per file. Compared with the Pentium M, its pipeline ports and decoders increased, but they are located with the allocator table instead of the code buffer. Its FP XMM register file is also increased to quad-ported (2 read/2 write); the registers still number 8 entries in 32-bit mode and are extended to 16 entries in x64 mode, and there is still only one copy, as its shadow register file architecture does not include floating-point/SSE functions.
In later x86 implementations, such as Nehalem and later processors, both integer and floating-point registers are incorporated into a unified octa-ported (six read and two write) general-purpose register file (8 + 8 in 32-bit mode and 16 + 16 in x64 per file), while the register file was extended to two copies with an enhanced "shadow register file architecture" in support of hyper-threading, and each thread uses an independent register file for its decoder. Later, Sandy Bridge and onward replaced the shadow register table and architectural registers with a much larger and more advanced physical register file before decoding to the reorder buffer, so that Sandy Bridge and its successors no longer carry an architectural register file.
The Atom line was a modern simplified revision of the P5. It includes a single copy of the register file shared between threads and decoders. The register file is a dual-port design; 8/16-entry GPRs, 8/16-entry debug registers, and 8/16-entry condition code registers are integrated in the same file. However, it has an eight-entry 64-bit shadow-based register file and an eight-entry 64-bit unnamed register file that are separated from the main GPRs, unlike the original P5 design, and located after the execution unit; the file of these registers is single-ported and not exposed to instructions like the scaled shadow register file found on Core/Core 2 (the shadow register file there is made of architectural registers, whereas Bonnell did not have a "shadow register file architecture"); however, the file can be used for renaming purposes, as the Bonnell architecture lacks out-of-order execution. It also had one copy of the XMM floating-point register file per thread. The difference from Nehalem is that Bonnell does not have a unified register file and has no dedicated register file for its hyper-threading. Instead, Bonnell uses a separate rename register per thread even though it is not out of order. Similar to Bonnell, Larrabee and Xeon Phi also each have only one general-purpose integer register file, but the Larrabee has up to 16 XMM register files (8 entries per file), and the Xeon Phi has up to 128 AVX-512 register files, each containing 32 512-bit ZMM registers for vector instruction storage, which can be as big as an L2 cache.
There are some other x86 lines that don't have a register file in their internal design, such as the Geode GX and Vortex86, as well as many embedded processors that aren't Pentium-compatible or are reverse-engineered early 80x86 processors. Therefore, most of them don't have a register file for their decoders; their GPRs are used individually instead. 
Pentium 4, on the other hand, does not have a register file for its decoder, as its x86 GPRs do not exist as such within its structure, owing to the introduction of a physical unified renaming register file (similar to Sandy Bridge, but slightly different because the Pentium 4 cannot use the registers before renaming) in an attempt to replace the architectural register file and skip the x86 decoding scheme. Instead it uses SSE for integer execution and storage before the ALU and after the result; SSE2/SSE3/SSSE3 use the same mechanism for their integer operations.
AMD's early designs such as the K6 do not have a register file like Intel's and do not support a "shadow register file architecture", as they lack the context switch and bypass inverters that such a register file requires to function properly. Instead they use separate GPRs that link directly to a rename register table for their out-of-order CPU, with a dedicated integer decoder and a dedicated floating-point decoder. The mechanism is similar to Intel's pre-Pentium processor line. For example, the K6 processor has four integer rename register files (one eight-entry temporary scratch register file + one eight-entry future register file + one eight-entry fetched register file + one eight-entry unnamed register file) and two FP rename register files (two eight-entry x87 ST files, one feeding fadd and one feeding fmov) that link directly with the x86 EAX for integer renaming and with the XMM0 register for floating-point renaming. The later Athlon included a "shadow register" in its front end, scaled up to a 40-entry unified register file for in-order integer operation before decode; that register file contains 8 scratch registers + a 16-entry future GPR file + a 16-entry unnamed GPR file. Later AMD designs abandoned the shadow register approach and returned to the K6-style scheme of individually, directly linked GPRs. The Phenom, for example, has three integer register files and two SSE register files located in the physical register file and linked directly with the GPRs. This scales down, however, to one integer file plus one floating-point file on Bulldozer. Like early AMD designs, most other x86 manufacturers, such as Cyrix, VIA, DM&P, and SiS, used the same mechanism, resulting in a lack of integer performance without register renaming for their in-order CPUs. Companies like Cyrix and AMD had to increase cache sizes in the hope of reducing this bottleneck. AMD's SSE integer operation works differently from Core 2 and Pentium 4: it uses its separate renaming integer registers to load the value directly before the decode stage. Though in theory it needs only a shorter pipeline than Intel's SSE implementation, the cost of branch prediction is generally much greater and the miss rate higher than Intel's, and it takes at least two cycles to execute an SSE instruction regardless of instruction width, as early AMD implementations could not execute both the FP and integer parts of an SSE instruction the way Intel's implementation did. Unlike Alpha, SPARC, and MIPS, which allow one register file to load/fetch only one operand at a time (so that multiple register files are required to achieve superscalar operation), the ARM processor does not integrate multiple register files to load/fetch instructions. ARM GPRs have no special purpose in the instruction set (the ARM ISA does not require accumulator, index, or stack/base pointer registers; there is no dedicated accumulator, and a base/stack pointer is only mandated in Thumb mode). 
Any GPR can propagate and store multiple instructions independently, since the code is small enough to fit in one register, and the architectural registers act as a table shared by all decoders and instructions, with simple bank switching between decoders. The major difference between ARM and other designs is that ARM allows operation on the same general-purpose registers with quick bank switching, without requiring additional register files for superscalar operation. Although x86 shares with ARM the property that its GPRs can each store any data, x86 will encounter data dependences if more than three unrelated instructions are stored, because its GPRs per file are too few (eight in 32-bit mode and 16 in 64-bit mode, compared to ARM's 13 in 32-bit and 31 in 64-bit), and superscalar operation is impossible without multiple register files to feed its decoders (x86 code is big and complex compared to ARM). As a result, most x86 front ends have become much larger and much more power-hungry than the ARM processor's in order to be competitive (for example, Pentium M & Core 2 Duo, Bay Trail). Some third-party x86-equivalent processors even became noncompetitive with ARM due to having no dedicated register file architecture; in particular AMD, Cyrix, and VIA could not deliver reasonable performance without register renaming and out-of-order execution, which left Intel's Atom as the only in-order x86 processor core in the mobile competition. This remained the case until the x86 Nehalem processor merged its integer and floating-point registers into one single file and introduced a large physical register table and an enhanced allocator table in its front end, before renaming in its out-of-order internal core.
Processors that perform register renaming can arrange for each functional unit to write to a subset of the physical register file. This arrangement can eliminate the need for multiple write ports per bit cell, for large savings in area. The resulting register file, effectively a stack of register files with single write ports, then benefits from replication and subsetting of the read ports. At the limit, this technique would place a stack of 1-write, 2-read regfiles at the inputs to each functional unit. Since regfiles with a small number of ports are often dominated by transistor area, it is best not to push this technique to this limit, but it is useful all the same.
The SPARC ISA defines register windows, in which the 5-bit architectural names of the registers actually point into a window on a much larger register file, with hundreds of entries. Implementing multiported register files with hundreds of entries requires a large area. The register window slides by 16 registers when moved, so that each architectural register name can refer to only a small number of registers in the larger array; e.g. architectural register r20 can only refer to physical registers #20, #36, #52, #68, #84, #100, and #116 if there are just seven windows in the physical file. To save area, some SPARC implementations implement a 32-entry register file in which each cell has seven "bits". Only one is read and writable through the external ports, but the contents of the bits can be rotated. A rotation accomplishes in a single cycle a movement of the register window. Because most of the wires accomplishing the state movement are local, tremendous bandwidth is possible with little power. 
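A minimal sketch of the window-to-physical mapping described above, assuming the slide-by-16, seven-window arrangement used in the example, is shown below; the function name is made up for illustration, and real SPARC windows also overlap and reserve global registers, which the sketch ignores.

```python
# Sketch of the register-window mapping in the example above: the window
# slides by 16 registers, and with seven windows an architectural name
# maps to one of seven fixed physical entries. Real SPARC windows also
# overlap and include globals, which this simplified sketch ignores.

WINDOW_SLIDE = 16
NUM_WINDOWS = 7

def physical_register(arch_reg: int, current_window: int) -> int:
    """Map a 5-bit architectural register name to a physical entry."""
    if not 0 <= arch_reg < 32:
        raise ValueError("architectural register names are 5 bits (0-31)")
    if not 0 <= current_window < NUM_WINDOWS:
        raise ValueError("window index out of range")
    return arch_reg + current_window * WINDOW_SLIDE

# Architectural r20 reaches exactly the seven physical entries listed above.
print([physical_register(20, w) for w in range(NUM_WINDOWS)])
# -> [20, 36, 52, 68, 84, 100, 116]
```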
This same technique is used in the R10000 register renaming mapping file, which stores a 6-bit virtual register number for each of the physical registers. In the renaming file, the renaming state is checkpointed whenever a branch is taken, so that when a branch is detected to be mispredicted, the old renaming state can be recovered in a single cycle. (See Register renaming.)
Ages 4 to 13 + SEN
Maths Circus supports pupils who are achieving or working towards level 4 and above. The program provides pupils with experience of problem solving linked to Ma3 Shape, space and measures and Ma2 Number and algebra. The program encourages pupils to investigate pattern, spatial awareness, time and number. Pupils can progress to harder levels and apply calculation strategies learned in the classroom to problems. When working with 2D and 3D shapes, pupils use everyday language to describe properties and positions. They measure and order objects using direct comparison, and order events. Pupils use mathematical names for common 3D and 2D shapes and describe their properties, including numbers of sides and corners. They distinguish between straight and turning movements, understand angle as a measurement of turn, and recognise right angles in turns. They begin to use everyday non-standard and standard units to measure length and mass. Pupils classify 3D and 2D shapes in various ways using mathematical properties such as reflective symmetry for 2D shapes. They use non-standard units, standard metric units of length, capacity and mass, and standard units of time, in a range of contexts. Pupils draw common 2D shapes in different orientations on grids. They reflect simple shapes in a mirror line. They choose and use appropriate units and instruments, interpreting, with appropriate accuracy, numbers on a range of measuring instruments. They find perimeters of simple shapes and find areas by counting squares. When using shapes, pupils measure and draw angles to the nearest degree, and use language associated with angle. Pupils know the angle sum of a triangle and that of angles at a point. They identify all the symmetries of 2D shapes. They know the rough metric equivalents of imperial units still in daily use and convert one metric unit to another. They make sensible estimates of a range of measures in relation to everyday situations. Pupils understand and use the formula for the area of a rectangle. Pupils count, order, add and subtract numbers when solving problems involving up to 10 objects. They read and write the numbers involved. Pupils count sets of objects reliably, and use mental recall of addition and subtraction facts to 10. They begin to understand the place value of each digit in a number and use this to order numbers up to 100. They choose the appropriate operation when solving addition and subtraction problems. They use the knowledge that subtraction is the inverse of addition. They use mental calculation strategies to solve number problems involving money and measures. They recognise sequences of numbers, including odd and even numbers. Pupils show understanding of place value in numbers up to 1000 and use this to make approximations. They begin to use decimal notation and to recognise negative numbers, in contexts such as money and temperature. Pupils use mental recall of addition and subtraction facts to 20 in solving problems involving larger numbers. They add and subtract numbers with two digits mentally and numbers with three digits using written methods. They use mental recall of the 2, 3, 4, 5 and 10 multiplication tables and derive the associated division facts. They solve whole-number problems involving multiplication or division, including those that give rise to remainders. They use simple fractions that are several parts of a whole and recognise when two simple fractions are equivalent. 
Pupils use their understanding of place value to multiply and divide whole numbers by 10 or 100. In solving number problems, pupils use a range of mental methods of computation with the four operations, including mental recall of multiplication facts up to 10 x 10 and quick derivation of corresponding division facts. In solving problems with or without a calculator, pupils check the reasonableness of their results by reference to their knowledge of the context or to the size of the numbers. Pupils recognise and describe number patterns, and relationships including multiple, factor and square. They begin to use simple formulae expressed in words. Pupils use and interpret co-ordinates in the first quadrant. Pupils use their understanding of place value to multiply and divide whole numbers and decimals by 10, 100 and 1000. They order, add and subtract negative numbers in context. They use all four operations with decimals to two places. Pupils understand and use an appropriate non-calculator method for solving problems that involve multiplying and dividing any three-digit number by any two-digit number. They check their solutions by applying inverse operations or estimating using approximations. They construct, express in symbolic form, and use simple formulae involving one or two operations. They use brackets appropriately. Pupils use and interpret coordinates in all four quadrants. Pupils order and approximate decimals when solving numerical problems and equations [for example, x³ + x = 20], using trial and improvement methods. When exploring number sequences, pupils find and describe in words the rule for the next term or nth term of a sequence where the rule is linear. They formulate and solve linear equations with whole-number coefficients. They represent mappings expressed algebraically, and use Cartesian coordinates for graphical representation, interpreting general features.
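As an illustration of the trial and improvement approach mentioned above, the short sketch below narrows down a solution of x³ + x = 20 by testing a value and refining the guess; it is a generic illustration, not part of the Maths Circus program.

```python
# Generic illustration of "trial and improvement" for x^3 + x = 20, the
# example equation mentioned above: test a value, see whether the result
# is too big or too small, and refine the guess. This sketch is not part
# of the Maths Circus software.

def f(x: float) -> float:
    return x ** 3 + x

low, high = 2.0, 3.0          # f(2) = 10 (too small), f(3) = 30 (too big)
for _ in range(20):           # each trial narrows the interval
    guess = (low + high) / 2
    if f(guess) < 20:
        low = guess           # too small: try something bigger
    else:
        high = guess          # too big: try something smaller

print(f"x is approximately {guess:.3f}")   # about 2.59
```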
Volcanoes are a key part of the Earth system. Most of Earth’s atmosphere, water, and crust were delivered by volcanoes, and volcanoes continue to recycle earth materials. Volcanic eruptions are common. More than a dozen are usually erupting at any time somewhere on Earth, and close to 100 erupt in any year (Loughlin et al., 2015). Volcano landforms and eruptive behavior are diverse, reflecting the large number and complexity of interacting processes that govern the generation, storage, ascent, and eruption of magmas. Eruptions are influenced by the tectonic setting, the properties of Earth’s crust, and the history of the volcano. Yet, despite the great variability in the ways volcanoes erupt, eruptions are all governed by a common set of physical and chemical processes. Understanding how volcanoes form, how they erupt, and their consequences requires an understanding of the processes that cause rocks to melt and change composition, how magma is stored in the crust and then rises to the surface, and the interaction of magma with its surroundings. Our understanding of how volcanoes work and their consequences is also shared with the millions of people who visit U.S. volcano national parks each year. Volcanoes have enormous destructive power. Eruptions can change weather patterns, disrupt climate, and cause widespread human suffering and, in the past, mass extinctions. Globally, volcanic eruptions caused about 80,000 deaths during the 20th century (Sigurdsson et al., 2015). Even modest eruptions, such as the 2010 Eyjafjallajökull eruption in Iceland, have multibillion-dollar global impacts through disruption of air traffic. The 2014 steam explosion at Mount Ontake, Japan, killed 57 people without any magma reaching the surface. Many volcanoes in the United States have the potential for much larger eruptions, such as the 1912 eruption of Katmai, Alaska, the largest volcanic eruption of the 20th century (Hildreth and Fierstein, 2012). The 2008 eruption of the unmonitored Kasatochi volcano, Alaska, distributed volcanic gases over most of the continental United States within a week (Figure 1.1). Finally, volcanoes are important economically. Volcanic heat provides low-carbon geothermal energy. U.S. generation of geothermal energy accounts for nearly one-quarter of the global capacity (Bertani, 2015). In addition, volcanoes act as magmatic and hydrothermal distilleries that create ore deposits, including gold and copper ores. Moderate to large volcanic eruptions are infrequent yet high-consequence events. The impact of the largest possible eruption, similar to the super-eruptions at Yellowstone, Wyoming; Long Valley, California; or Valles Caldera, New Mexico, would exceed that of any other terrestrial natural event. Volcanoes pose the greatest natural hazard over time scales of several decades and longer, and at longer time scales they have the potential for global catastrophe (Figure 1.2). While the continental United States has not suffered a fatal eruption since 1980 at Mount St. Helens, the threat has only increased as more people move into volcanic areas. Volcanic eruptions evolve over very different temporal and spatial scales than most other natural hazards (Figure 1.3). In particular, many eruptions are preceded by signs of unrest that can serve as warnings, and an eruption itself often persists for an extended period of time. For example, the eruption of Kilauea Volcano in Hawaii has continued since 1983. 
We also know the locations of many volcanoes and, hence, where most eruptions will occur. For these reasons, the impacts of at least some types of volcanic eruptions should be easier to mitigate than other natural hazards. Anticipating the largest volcanic eruptions is possible. Magma must rise to Earth’s surface and this movement is usually accompanied by precursors—changes in seismic, deformation, and geochemical signals that can be recorded by ground-based and space-borne instruments. However, depending on the monitoring infrastructure, precursors may present themselves over time scales that range from a few hours (e.g., 2002 Reventador, Ecuador, and 2015 Calbuco, Chile) to decades before eruption (e.g., 1994 Rabaul, Papua New Guinea). Moreover, not all signals of volcanic unrest are immediate precursors to surface eruptions (e.g., currently Long Valley, California, and Campi Flegrei, Italy). Probabilistic forecasts account for this uncertainty using all potential eruption scenarios and all relevant data. An important consideration is that the historical record is short and biased. The instrumented record is even shorter and, for most volcanoes, spans only the last few decades—a miniscule fraction of their lifetime. Knowledge can be extended qualitatively using field studies of volcanic deposits, historical accounts, and proxy data, such as ice and marine sediment cores and speleothem (cave) records. Yet, these too are biased because they commonly do not record small to moderate eruptions. Understanding volcanic eruptions requires contributions from a wide range of disciplines and approaches. Geologic studies play a critical role in reconstructing the past eruption history of volcanoes, especially of the largest events, and in regions with no historical or directly observed eruptions. Geochemical and geophysical techniques are used to study volcano processes at scales ranging from crystals to plumes of volcanic ash. Models reveal essential processes that control volcanic eruptions, and guide data collection. Monitoring provides a wealth of information about the life cycle of volcanoes and vital clues about what kind of eruption is likely and when it may occur. At the request of managers at the National Aeronautics and Space Administration (NASA), the National Science Foundation, and the U.S. Geological Survey (USGS), the National Academies of Sciences, Engineering, and Medicine established a committee to undertake the following tasks: - Summarize current understanding of how magma is stored, ascends, and erupts. - Discuss new disciplinary and interdisciplinary research on volcanic processes and precursors that could lead to forecasts of the type, size, and timing of volcanic eruptions. - Describe new observations or instrument deployment strategies that could improve quantification of volcanic eruption processes and precursors. - Identify priority research and observations needed to improve understanding of volcanic eruptions and to inform monitoring and early warning efforts. The roles of the three agencies in advancing volcano science are summarized in Box 1.1. The committee held four meetings, including an international workshop, to gather information, deliberate, and prepare its report. The report is not intended to be a comprehensive review, but rather to provide a broad overview of the topics listed above. Chapter 2 addresses the opportunities for better understanding the storage, ascent, and eruption of magmas. 
Chapter 3 summarizes the challenges and prospects for forecasting eruptions and their consequences. Chapter 4 highlights repercussions of volcanic eruptions on a host of other Earth systems. Although not explicitly called out in the four tasks, the interactions between volcanoes and other Earth systems affect the consequences of eruptions, and offer opportunities to improve forecasting and obtain new insights into volcanic processes. Chapter 5 summarizes opportunities to strengthen research in volcano science. Chapter 6 provides overarching conclusions. Supporting material appears in appendixes, including a list of volcano databases (see Appendix A), a list of workshop participants (see Appendix B), biographical sketches of the committee members (see Appendix C), and a list of acronyms and abbreviations (see Appendix D). Background information on these topics is summarized in the rest of this chapter. The USGS has identified 169 potentially active volcanoes in the United States and its territories (e.g., Marianas), 55 of which pose a high threat or very high threat (Ewert et al., 2005). Of the total, 84 are monitored by at least one seismometer, and only 3 have gas sensors (as of November 2016).1 Volcanoes are found in the Cascade mountains, Aleutian arc, Hawaii, and the western interior of the continental United States (Figure 1.4). The geographical extent and eruption hazards of these volcanoes are summarized below. The Cascade volcanoes extend from Lassen Peak in northern California to Mount Meager in British Columbia. The historical record contains only small- to moderate-sized eruptions, but the geologic record reveals much larger eruptions (Carey et al., 1995; Hildreth, 2007). Activity tends to be sporadic (Figure 1.5). For example, nine Cascade eruptions occurred in the 1850s, but none occurred between 1915 and 1980, when Mount St. Helens erupted. Consequently, forecasting eruptions in the Cascades is subject to considerable uncertainty. Over the coming decades, there may be multiple eruptions from several volcanoes or no eruptions at all. The Aleutian arc extends 2,500 km across the North Pacific and comprises more than 130 active and potentially active volcanoes. Although remote, these volcanoes pose a high risk to overflying aircraft that carry more than 30,000 passengers a day, and are monitored by a combination of ground- and space-based sensors. One or two small to moderate explosive eruptions occur in the Aleutians every year, and very large eruptions occur less frequently. For example, the world’s largest eruption of the 20th century occurred approximately 300 miles from Anchorage, in 1912. In Hawaii, Kilauea has been erupting largely effusively since 1983, but the location and nature of eruptions can vary dramatically, presenting challenges for disaster preparation. The population at risk from large-volume, rapidly moving lava flows on the flanks of the Mauna Loa volcano has grown tremendously in the past few decades (Dietterich and Cashman, 2014), and few island residents are prepared for the even larger magnitude explosive eruptions that are documented in the last 500 years (Swanson et al., 2014). All western states have potentially active volcanoes, from New Mexico, where lava flows have reached within a few kilometers of the Texas and Oklahoma borders (Fitton et al., 1991), to Montana, which borders the Yellowstone caldera (Christiansen, 1984). 
These volcanoes range from immense calderas that formed from super-eruptions (Mastin et al., 2014) to small-volume basaltic volcanic fields that erupt lava flows and tephra for a few months to a few decades. Some of these eruptions are monogenetic (they erupt just once) and pose a special challenge for forecasting. Rates of activity in these distributed volcanic fields are low, with many eruptions during the past few thousand years (e.g., Dunbar, 1999; Fenton, 2012; Laughlin et al., 1994), but none during the past hundred years. Volcanoes often form prominent landforms, with imposing peaks that tower above the surrounding landscape, large depressions (calderas), or volcanic fields with numerous dispersed cinder cones, shield volcanoes, domes, and lava flows. These various landforms reflect the plate tectonic setting, the ways in which those volcanoes erupt, and the number of eruptions. Volcanic landforms change continuously through the interplay between constructive processes such as eruption and intrusion, and modification by tectonics, climate, and erosion. The stratigraphic and structural architecture of volcanoes yields critical information on eruption history and processes that operate within the volcano. Beneath the volcano lies a magmatic system that in most cases extends through the crust, except during eruption. Depending on the setting, magmas may rise directly from the mantle or be staged in one or more storage regions within the crust before erupting. The uppermost part (within 2–3 km of Earth’s surface) often hosts an active hydrothermal system where meteoric groundwater mingles with magmatic volatiles and is heated by deeper magma. Identifying the extent and vigor of hydrothermal activity is important for three reasons: (1) much of the unrest at volcanoes occurs in hydrothermal systems, and understanding the interaction of hydrothermal and magmatic systems is important for forecasting; (2) pressure buildup can cause sudden and potentially deadly phreatic explosions from the hydrothermal system itself (such as at Ontake, Japan, in 2014), which, in turn, can influence the deeper magmatic system; and (3) hydrothermal systems are energy resources and create ore deposits. Below the hydrothermal system lies a magma reservoir where magma accumulates and evolves prior to eruption. Although magma reservoirs have traditionally been modeled as fluid-filled cavities, there is growing evidence that they may comprise an interconnected complex of vertical and/or horizontal magma-filled cracks, or a partially molten mush zone, or interleaved lenses of magma and solid material (Cashman and Giordano, 2014). In arc volcanoes, magma chambers are typically located 3–6 km below the surface. The magma chamber is usually connected to the surface via a fluid-filled conduit only during eruptions. In some settings, magma may ascend directly from the mantle without being stored in the crust. In the broadest sense, long-lived magma reservoirs comprise both eruptible magma (often assumed to contain less than about 50 percent crystals) and an accumulation of crystals that grow along the margins or settle to the bottom of the magma chamber. Physical segregation of dense crystals and metals can cause the floor of the magma chamber to sag, a process balanced by upward migration of more buoyant melt.
A long-lived magma chamber can thus become increasingly stratified in composition and density. The deepest structure beneath volcanoes is less well constrained. Swarms of low-frequency earthquakes at mid- to lower-crustal depths (10–40 km) beneath volcanoes suggest that fluid is periodically transferred into the base of the crust (Power et al., 2004). Tomographic studies reveal that active volcanic systems have deep crustal roots that contain, on average, a small fraction of melt, typically less than 10 percent. The spatial distribution of that melt fraction, particularly how much is concentrated in lenses or in larger magma bodies, is unknown. Erupted samples preserve petrologic and geochemical evidence of deep crystallization, which requires some degree of melt accumulation. Seismic imaging and sparse outcrops suggest that the proportion of unerupted solidified magma relative to the surrounding country rock increases with depth and that the deep roots of volcanoes are much more extensive than their surface expression. Volcano monitoring is critical for hazard forecasts, eruption forecasts, and risk mitigation. However, many volcanoes are not monitored at all, and others are monitored using only a few types of instruments. Some parameters, such as the mass, extent, and trajectory of a volcanic ash cloud, are more effectively measured by satellites. Other parameters, notably low-magnitude earthquakes and volcanic gas emissions that may signal an impending eruption, require ground-based monitoring on or close to the volcanic edifice. This section summarizes existing and emerging technologies for monitoring volcanoes from the ground and from space.
Monitoring Volcanoes on or Near the Ground
Ground-based monitoring provides data on the location and movement of magma. To adequately capture what is happening inside a volcano, it is necessary to obtain a long-term and continuous record that spans both periods of volcanic quiescence and periods of unrest. High-frequency data sampling and efficient near-real-time relay of information are important, especially when processes within the volcano–magmatic–hydrothermal system are changing rapidly. Many ground-based field campaigns are time intensive and can be hazardous when volcanoes are active. In these situations, telemetry systems permit the safe and continuous collection of data, although conditions can be harsh and instrument lifetimes can be limited. Ground-based volcano monitoring falls into four broad categories: seismic, deformation, gas, and thermal monitoring (Table 1.1).
TABLE 1.1 Ground-Based Instrumentation for Monitoring Volcanoes

| Category | Instrument | Use |
| --- | --- | --- |
| Seismic waves | Geophone | Detect lahars (volcanic mudflows) and pyroclastic density currents |
| | Short-period seismometer | Locate earthquakes, study earthquake mechanics, and detect unrest |
| | Broadband seismometer | Study earthquakes, tremor, and long-period earthquakes to quantify rock failure, fluid movement, and eruption progress |
| | Infrasound detector | Track evolution of near-surface eruptive activity |
| Geodetic | Classical surveying techniques | Detect deformation over broad areas |
| | Tiltmeter | Detect subtle pressurization or volumetric sources |
| | Strainmeter | Detect changing stress distributions |
| | GNSS/Global Positioning System | Model intrusion locations and sizes, detect ash clouds |
| | Photogrammetric and structure-from-motion | Map and identify or measure morphologic changes |
| | Lidar | Precision mapping, detect ash and aerosol heights |
| | Radar | Quantify rapid surface movements and velocities of ballistic pyroclasts |
| Gas | Miniature differential optical absorption spectrometer | Detect sulfur species concentrations and calculate gas flux |
| | Open-path Fourier transform infrared spectroscopy | Quantify gas concentration ratios |
| | Ultraviolet imagers | Detect plume sulfur |
| | Giggenbach-type sampling and multiGAS sensors | Determine chemical and isotopic compositions and make in situ measurements of gas species |
| | Portable laser spectrometer | Measure stable isotopic ratios of gases |
| Thermal | Infrared thermal camera | Detect dome growth, lava breakouts, and emissions of volcanic ash and gas |
| | In situ thermocouple | Monitor fumarole temperatures |
| Hydrologic | Temperature probe | Detect changes in hydrothermal sources |
| | Discharge measurements | Detect changes in pressure or permeability |
| | Sampling for chemical and isotopic composition | Detect magma movement |
| Potential fields | Gravimeter | Detect internal mass movement |
| | Self-potential, resistivity | Detect fluids and identify fractures and voids |
| | Magnetotellurics | 3D location of fluids and magma in shallow crust |
| Other | Cosmic ray muon detector | Tomography |
| | High-speed camera | Image explosion dynamics |
| | Drones | Visually observe otherwise inaccessible surface phenomena |
| | Lightning detection array | Locate lightning and identify ash emissions |

Seismic monitoring tools, including seismometers and infrasound sensors, are used to detect vibrations caused by breakage of rock and movement of fluids and to assess the evolution of eruptive activity. Ambient seismic noise monitoring can image subsurface reservoirs and document changes in wave speed that may reflect stress changes. Deformation monitoring tools, including tiltmeters, borehole strainmeters, the Global Navigation Satellite System (GNSS, which includes the Global Positioning System [GPS]), lidar, radar, and gravimeters, are used to detect the motion of magma and other fluids in the subsurface. Some of these tools, such as GNSS and lidar, are also used to detect erupted products, including ash clouds, pyroclastic density currents, and volcanic bombs. Gas monitoring tools, including a range of sensors (Table 1.1), and direct sampling of gases and fluids are used to detect magma intrusions and changes in magma–hydrothermal interactions. Thermal monitoring tools, such as infrared cameras, are used to detect dome growth and lava breakouts. Continuous video or photographic observations are also commonly used and, despite their simplicity, most directly document volcanic activity.
Less commonly used monitoring technologies, such as self-potential, electromagnetic techniques, and lightning detection are used to constrain fluid movement and to detect ash clouds. In addition, unmanned aerial vehicles (e.g., aircraft and drones) are increasingly being used to collect data. Rapid sample collection and analysis is also becoming more common as a monitoring tool at volcano observatories. A schematic of ground-based monitoring techniques is shown in Figure 1.6.
Monitoring Volcanoes from Space
Satellite-borne sensors and instruments provide synoptic observations during volcanic eruptions when collecting data from the ground is too hazardous or where volcanoes are too remote for regular observation. Repeat-pass data collected over years or decades provide a powerful means for detecting surface changes on active volcanoes. Improvements in instrument sensitivity, data availability, and the computational capacity required to process large volumes of data have led to a dramatic increase in “satellite volcano science.” Although no satellite-borne sensor currently in orbit has been specifically designed for volcano monitoring, a number of sensors measure volcano-relevant parameters, including heat flux, gas and ash emissions, and deformation (Table 1.2).

TABLE 1.2 Satellite-Borne Sensor Suite for Volcano Monitoring

| Sensor type | Use | Example sensors |
| --- | --- | --- |
| High-temporal/low-spatial-resolution multispectral thermal infrared | Detect eruptions and map ash clouds | GOES |
| Low-temporal/moderate-spatial-resolution multispectral thermal infrared | Detect eruptions and map ash clouds with coverage of high latitudes; infer lava effusion rate | AVHRR, MODIS |
| Low-temporal/high-spatial-resolution multispectral visible infrared | Map detailed surface and plumes; infer lava effusion rate | Landsat, ASTER, Sentinel-2 |
| Hyperspectral ultraviolet | Detect and quantify volcanic SO2, BrO, and OClO | OMI |
| Hyperspectral infrared | Detect and quantify volcanic SO2 and H2S in nighttime and winter | IASI, AIRS |
| Microwave limb sounding | Detect volcanic SO2 and HCl in the upper troposphere and stratosphere | MLS |
| Visible–near-infrared multiangle imaging | Determine volcanic ash cloud altitudes and plume speed | MISR |
| Ultraviolet–visible limb scattering | Measure aerosol vertical profiles | OMPS-LP |
| Ultraviolet–near-infrared solar occultation | Measure stratospheric aerosol | SAGE III |
| Spaceborne lidar | Develop vertical profiles of volcanic clouds | CALIPSO |
| Spaceborne W-band radar | Measure volcanic hydrometeors | CloudSat |
| Multiband (X-, C-, L-band) synthetic aperture radar | Measure deformation globally | Sentinel-1a/b, ALOS-2, COSMO-SkyMed, TerraSAR-X, TanDEM-X, Radarsat-2 |

NOTE: AIRS, Atmospheric Infrared Sounder; ALOS, Advanced Land Observing Satellite; ASTER, Advanced Spaceborne Thermal Emission and Reflection Radiometer; AVHRR, Advanced Very High Resolution Radiometer; CALIPSO, Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation; COSMO-SkyMed, Constellation of Small Satellites for Mediterranean Basin Observation; GOES, Geostationary Operational Environmental Satellite; IASI, Infrared Atmospheric Sounding Interferometer; MISR, Multi-angle Imaging SpectroRadiometer; MLS, Microwave Limb Sounder; MODIS, Moderate Resolution Imaging Spectroradiometer; OMI, Ozone Monitoring Instrument; OMPS, Ozone Mapping and Profiler Suite; SAGE, Stratospheric Aerosol and Gas Experiment.

Thermal infrared data are used to detect eruption onset and cessation, calculate lava effusion rates, map lava flows, and estimate ash column heights during explosive eruptions.
In some cases, satellites may capture thermal precursors to eruptions, although low-temperature phenomena are challenging to detect. Both high-temporal/low-spatial-resolution (geostationary orbit) and high-spatial/low-temporal-resolution (polar orbit) thermal infrared observations are needed for global volcano monitoring. Satellite-borne sensors are particularly effective for observing the emission and dispersion of volcanic gas and ash plumes in the atmosphere. Although several volcanic gas species can be detected from space (including SO2, BrO, OClO, H2S, HCl, and CO; Carn et al., 2016), SO2 is the most readily measured, and it is also responsible for much of the impact of eruptions on climate. Satellite measurements of SO2 are valuable for detecting eruptions, estimating global volcanic fluxes and recycling of other volatile species, and tracking volcanic clouds that may be hazardous to aviation in near real time. Volcanic ash cloud altitude is most accurately determined by spaceborne lidar, although spatial coverage is limited. Techniques for measuring volcanic CO2 from space are under development and could lead to earlier detection of preeruptive volcanic degassing. Interferometric synthetic aperture radar (InSAR) enables global-scale background monitoring of volcano deformation (Figure 1.7). InSAR provides much higher spatial resolution than GPS, but lower accuracy and temporal resolution. However, orbit repeat times will diminish as more InSAR missions are launched, such as the European Space Agency’s recently deployed Sentinel-1 satellite and the NASA–Indian Space Research Organisation synthetic aperture radar mission planned for launch in 2020. Eruptions range from violently explosive to gently effusive, from short lived (hours to days) to persistent over decades or centuries, from sustained to intermittent, and from steady to unsteady (Siebert et al., 2015). Eruptions may initiate from processes within the magmatic system (Section 1.3) or be triggered by processes and properties external to the volcano, such as precipitation, landslides, and earthquakes. The eruption behavior of a volcano may change over time. No classification scheme captures this full diversity of behaviors (see Bonadonna et al., 2016), but some common schemes to describe the style, magnitude, and intensity of eruptions are summarized below.
Eruption Magnitude and Intensity
The size of eruptions is usually described in terms of total erupted mass (or volume), often referred to as magnitude, and mass eruption rate, often referred to as intensity. Pyle (2015) quantified magnitude and eruption intensity as follows: magnitude = log10 (mass, in kg) – 7, and intensity = log10 (mass eruption rate, in kg/s) + 3. The Volcanic Explosivity Index (VEI) introduced by Newhall and Self (1982) assigns eruptions to a VEI class based primarily on measures of either magnitude (erupted mass or volume) or intensity (mass eruption rate and/or eruption plume height), with more weight given to magnitude. The VEI classes are summarized in Figure 1.8. The VEI classification is still in use, despite its many limitations, such as its reliance on only a few types of measurements and its poor fit for small to moderate eruptions (see Bonadonna et al., 2016). Smaller VEI events are relatively common, whereas larger VEI events are exponentially less frequent (Siebert et al., 2015).
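To make the magnitude and intensity definitions above concrete, here is a small worked sketch; the erupted mass and eruption duration are illustrative values chosen for the example, not figures taken from this report:

```python
import math

def magnitude(erupted_mass_kg: float) -> float:
    # Pyle (2015): magnitude = log10(mass, in kg) - 7
    return math.log10(erupted_mass_kg) - 7

def intensity(mass_eruption_rate_kg_s: float) -> float:
    # Pyle (2015): intensity = log10(mass eruption rate, in kg/s) + 3
    return math.log10(mass_eruption_rate_kg_s) + 3

# Illustrative eruption: ~1e12 kg of magma erupted over ~9 hours of sustained activity.
mass = 1.0e12                       # kg (assumed)
rate = mass / (9 * 3600)            # ~3.1e7 kg/s (assumed duration)
print(round(magnitude(mass), 1))    # 5.0
print(round(intensity(rate), 1))    # 10.5
```

Because both measures are logarithmic, an eruption one magnitude unit larger involves ten times more erupted mass.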
For example, on average about three VEI 3 eruptions occur each year, whereas there is a 5 percent chance of a VEI 5 eruption and a 0.2 percent chance of a VEI 7 (e.g., Crater Lake, Oregon) event in any year. The style of an eruption encompasses factors such as eruption duration and steadiness, magnitude, gas flux, fountain or column height, and involvement of magma and/or external source of water (phreatic and phreatomagmatic eruptions). Eruptions are first divided into effusive (lava producing) and explosive (pyroclast producing) styles, although individual eruptions can be simultaneously effusive and weakly explosive, and can pass rapidly and repeatedly between eruption styles. Explosive eruptions are further subdivided into styles that are sustained on time scales of hours to days and styles that are short lived (Table 1.3). Classification of eruption style is often qualitative and based on historical accounts of characteristic eruptions from type-volcanoes. However, many type-volcanoes exhibit a range of eruption styles over time (e.g., progressing between Strombolian, Vulcanian, and Plinian behavior; see Fee et al., 2010), which has given rise to terms such as subplinian or violent Strombolian.

TABLE 1.3 Characteristics of Different Eruption Styles

| Style | Characteristics |
| --- | --- |
| Hawaiian | Sustained fountaining of magmatic gas and pyroclasts (up to ~1,000 m) often generating clastogenic, gas-charged lava flows from single vents or from fissures |
| Strombolian | Short-duration, low-vigor, episodic, small (<100s of meters) explosions driven by escape of pockets of gas and ejecting some bombs and spatter |
| Vulcanian | Short-duration, moderately vigorous, magma-fragmenting explosions producing ash-rich columns that may reach heights >1,000 m |
| Surtseyan | Short-duration, weak phreatomagmatic explosive eruptions where fluid magma interacts with standing water |
| Phreatoplinian | Prolonged, powerful phreatomagmatic explosions where viscous magma interacts with surface water or groundwater |
| Dome collapse | Dome collapse pyroclastic flows occur at unstable gas-charged domes either with an explosive central column eruption (e.g., Mount Pelee) or without (e.g., Unzen, Montserrat, and Santiaguito) |
| Plinian | Very powerful, sustained eruptions with columns reaching the stratosphere (>15 km) and sometimes generating large pyroclastic density currents from collapsing eruption columns |

Eruption hazards are diverse (Figure 1.9) and may extend thousands of kilometers from an active volcano. From the perspective of risk and impact, it is useful to distinguish between near-source and distal hazards. Near-source hazards are far more unpredictable than distal hazards. Near-source hazards include those that are airborne, such as tephra fallout, volcanic gases, and volcanic projectiles, and those that are transported laterally on or near the ground surface, such as pyroclastic density currents, lava flows, and lahars. Pyroclastic density currents are hot volcanic flows containing mixtures of gas and micron- to meter-sized volcanic particles. They can travel at velocities exceeding 100 km per hour. The heat combined with the high density of material within these flows obliterates objects in their path, making them the most destructive of volcanic hazards. Lava flows also destroy everything in their path, but usually move slowly enough to allow people to get out of the way. Lahars are mixtures of volcanic debris, sediment, and water that can travel many tens of kilometers along valleys and river channels. They may be triggered during an eruption by interaction between volcanic products and snow, ice, rain, or groundwater. Lahars can be more devastating than the eruption itself. Ballistic blocks are large projectiles that typically fall within 1–5 km from vents. The largest eruptions create distal hazards. Explosive eruptions produce plumes that are capable of dispersing ash hundreds to thousands of kilometers from the volcano. The thickness of ash deposited depends on the intensity and duration of the eruption and the wind direction. Airborne ash and ash fall are the most severe distal hazards and are likely to affect many more people than near-source hazards. They cause respiratory problems and roof collapse, and also affect transport networks and infrastructure needed to support emergency response. Volcanic ash is a serious risk to air traffic. Several jets fully loaded with passengers have temporarily lost power on all engines after encountering dilute ash clouds (e.g., Guffanti et al., 2010). Large lava flows, such as those of the 1783 Laki eruption in Iceland, emit volcanic gases that create respiratory problems and acidic rain more than 1,000 km from the eruption. Observed impacts of basaltic eruptions in Hawaii and Iceland include regional volcanic haze (“vog”) and acid rain that affect both agriculture and human health (e.g., Thordarson and Self, 2003), and fluorine can contaminate grazing land and water supplies (e.g., Cronin et al., 2003). Diffuse degassing of CO2 can build up to deadly concentrations, as occurred at Mammoth Lakes, California, or cause lakes to erupt, leading to massive CO2 releases that suffocate people (e.g., Lake Nyos, Cameroon). Secondary hazards can be more devastating than the initial eruption. Examples include lahars initiated by storms, earthquakes, landslides, and tsunamis from eruptions or flank collapse; volcanic ash remobilized by wind to affect human health and aviation for extended periods of time; and flooding because rain can no longer infiltrate the ground. Volcanic processes are governed by the laws of mass, momentum, and energy conservation. It is possible to develop models for magmatic and volcanic phenomena based on these laws, given sufficient information on mechanical and thermodynamic properties of the different components and how they interact with each other.
Models are being developed for all processes in volcanic systems, including melt transport in the mantle, the evolution of magma bodies within the crust, the ascent of magmas to the surface, and the fate of magma that erupts effusively or explosively. A central challenge for developing models is that volcanic eruptions are complex multiphase and multicomponent systems that involve interacting processes over a wide range of length and time scales. For example, during storage and ascent, the composition, temperature, and physical properties of magma and host rocks evolve. Bubbles and crystals nucleate and grow in this magma and, in turn, greatly influence the properties of the magmas and lavas. In explosive eruptions, magma fragmentation creates a hot mixture of gas and particles with a wide range of sizes and densities. Magma also interacts with its surroundings: the deformable rocks that surround the magma chamber and conduit, the potentially volatile groundwater and surface water, a changing landscape over which pyroclastic density currents and lava flows travel, and the atmosphere through which eruption columns rise. Models for volcanic phenomena that involve a small number of processes and that are relatively amenable to direct observation, such as volcanic plumes, are relatively straightforward to develop and test. In contrast, phenomena that occur underground are more difficult to model because there are more interacting processes. In those cases, direct validation is much more challenging and in many cases impossible. Forecasting ash dispersal using plume models is more straightforward and testable than forecasting the onset, duration, and style of eruption using models that seek to explain geophysical and geochemical precursors. In all cases, however, the use of even imperfect models helps improve the understanding of volcanic systems. Modeling approaches can be divided into three categories:
- Reduced models make simplifying assumptions about dynamics, heat transfer, and geometry to develop first-order explanations for key properties and processes, such as the velocity of lava flows and pyroclastic density currents, the height of eruption columns, the magma chamber size and depth, the dispersal of tephra, and the ascent of magma in conduits. Well-calibrated or tested reduced models offer a straightforward approach for combining observations and models in real time in an operational setting (e.g., ash dispersal forecasting for aviation safety). Models may not need to be complex if they capture the most important processes, although simplifications require testing against more comprehensive models and observations.
- Multiphase and multiphysics models improve scientific understanding of complex processes by invoking fewer assumptions and idealizations than reduced models (Figure 1.10), but at the expense of increased complexity and computational demands. They also require additional components, such as a model for how magma in magma chambers and conduits deforms when stressed; a model for turbulence in pyroclastic density currents and plumes; terms that describe the thermal and mechanical exchange among gases, crystals, and particles; and a description of ash aggregation in eruption columns. A central challenge for multiphysics models is integrating small-scale processes with large-scale dynamics. Many of the models used in volcano science build on understanding developed in other science and engineering fields and for other applications. Multiphysics and multiscale models benefit from rapidly expanding computational capabilities.
- Laboratory experiments simulate processes for which the geometry and physical and thermal processes and properties can be scaled (Mader et al., 2004). Such experiments provide insights on fundamental processes, such as crystal dynamics in flowing magmas, entrainment in eruption columns, propagation of dikes, and sedimentation from pyroclastic density currents (Figure 1.11). Experiments have also been used successfully to develop the subsystem models used in numerical simulations, and to validate computer simulations for known inputs and properties.
The great diversity of existing models reflects to a large extent the many interacting processes that operate in volcanic eruptions and the corresponding simplifying assumptions currently required to construct such models. The challenge in developing models is often highlighted in discrepancies between models and observations of natural systems. Nevertheless, eruption models reveal essential processes governing volcanic eruptions, and they provide a basis for interpreting measurements from prehistoric and active eruptions and for closing observational gaps. Mathematical models offer a guide for what observations will be most useful. They may also be used to make quantitative and testable predictions, supporting forecasting and hazard assessment.
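As a concrete illustration of what a "reduced model" can look like, the sketch below uses the kind of empirical power-law fit that relates eruption-column height to eruption rate. The quarter-power form follows buoyant plume theory, and the coefficients are in the spirit of published fits such as Mastin et al. (2009), but treat the exact numbers here as illustrative assumptions rather than values taken from this report:

```python
def plume_height_km(volume_flow_rate_m3_s: float) -> float:
    """Reduced model: plume height above the vent from dense-rock eruption rate.

    Buoyant plume theory predicts height scaling with roughly the 1/4 power of the
    eruption rate; empirical fits of the form H = a * V**b (with a ~ 2.0 and
    b ~ 0.24 for V in m^3/s and H in km) capture that behavior to first order.
    The coefficients below are illustrative assumptions.
    """
    a, b = 2.0, 0.241
    return a * volume_flow_rate_m3_s ** b

# Example: a sustained eruption feeding ~1e4 m^3/s of dense-rock-equivalent magma.
print(round(plume_height_km(1.0e4), 1))   # roughly 18 km, a stratospheric column
```

The appeal of such a model in an operational setting is that it inverts easily: an observed column height gives a first-order estimate of eruption rate, which can then feed an ash-dispersal forecast.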
Train a model to identify whether a sonar wave bounced off a rock or a mine in the ocean. Sonar (sound navigation and ranging) is a technique based on the principle of reflection of ultrasonic sound waves. These waves propagate through water and reflect on hitting the ocean bed or any object obstructing their path. Sonar has been widely used in submarine navigation, communication with or detection of objects on or under the water surface (like other vessels), hazard identification, etc. There are two types of sonar technology used — passive (listening to the sound emitted by vessels in the ocean) and active (emitting pulses and listening for their echoes). It is important to note that research shows the use of active sonar can cause mass strandings of marine animals. Implementation of the idea on cAInvas — here! This dataset was used in Gorman, R. P., and Sejnowski, T. J. (1988). “Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets” in Neural Networks, Vol. 1, pp. 75–89. The CSV files contain data regarding sonar signals bounced off a metal cylinder (mines — M) and a roughly cylindrical rock (rock — R) at various angles and under various conditions. There are 60 attributes and one categorical column in the dataset. Looking into the spread of categorical values in the dataset, it is a fairly balanced dataset. The category column has R and M to denote the classes. We have to convert them into numeric values. Now that we have re-labeled the classes, we will define class names accordingly for later use.
Balancing the dataset
Even though there is a difference of only 14 samples, in comparison to the total number of data samples available, this difference is significant and needs to be balanced. In order to balance the dataset, there are two options:
- upsampling — resample the minority class so that its count equals that of the class label with the higher count (here, 111).
- downsampling — pick n samples from each class label, where n = the number of samples in the class with the lowest count (here, 97).
Here, we will be upsampling. First, we divide the whole dataset into 2, one for each label. The sample() function of the data frame is used to resample the minority class and obtain 111 samples. The append() function of the data frame is used to combine the rows of the two datasets.
Defining the input and output columns
We define the columns of the data frame to be used as input and output for the model. There are 60 input columns and 1 output column. Splitting the dataset into training and validation sets using a 90–10 split ratio, the datasets are then split into respective X and y arrays for further processing. The training set has 199 samples and the validation set has 23 samples. Here is a peek into the distribution of samples in the training and validation sets. The attributes span almost the same range of values, but the small differences have shifted their means. Using the StandardScaler class of the sklearn.preprocessing module, the values are scaled to have a mean = 0 and variance = 1. The StandardScaler instance is fit on the training input data and then used to transform both the training and validation sets. The model is a simple one with 4 Dense layers, where the first 3 layers use the ReLU activation function and the last one uses the sigmoid activation function. The model is compiled using the binary cross-entropy loss function because the final layer of the model performs a two-class classification using the sigmoid activation function.
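The data-preparation steps described above can be sketched as follows. This is a minimal sketch rather than the notebook's exact code: the file name sonar.csv and the random seeds are assumptions, and pd.concat is used in place of the (now deprecated) DataFrame.append mentioned in the text:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the dataset: 60 numeric attributes plus one categorical column (R/M).
df = pd.read_csv("sonar.csv", header=None)          # file name is an assumption
df[60] = df[60].map({"R": 0, "M": 1})                # relabel classes as numbers
class_names = ["Rock", "Mine"]

# Upsample the minority class (R, 97 rows) to match the majority class (M, 111 rows).
rocks = df[df[60] == 0]
mines = df[df[60] == 1]
rocks_upsampled = rocks.sample(n=len(mines), replace=True, random_state=42)
balanced = pd.concat([mines, rocks_upsampled]).sample(frac=1, random_state=42)

# 60 input columns, 1 output column; 90-10 train/validation split.
X = balanced.iloc[:, :60].values
y = balanced.iloc[:, 60].values
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.1, random_state=42, stratify=y)

# Standardize: fit on the training inputs only, then transform both sets.
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)
```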
The Adam optimizer is used and the accuracy of the model is tracked over epochs. The EarlyStopping callback function of the keras.callbacks module monitors the validation loss and stops the training if it doesn’t improve for 3 consecutive epochs. The restore_best_weights parameter ensures that the model with the least validation loss is restored to the model variable. The model is trained first with a learning rate of 0.01, which is then reduced to 0.001. The model achieved around 91% accuracy on the validation set. Let’s perform predictions on random test data samples. The deepC library, compiler, and inference framework is designed to enable and perform deep learning neural networks by focusing on features of small form-factor devices like microcontrollers, eFPGAs, CPUs, and other embedded devices like Raspberry Pi, ODROID, Arduino, SparkFun Edge, RISC-V, mobile phones, and x86 and ARM laptops, among others. Compiling the model using deepC — Head over to the cAInvas platform (link to notebook given earlier) and check out the predictions by the .exe file! Credits: Ayisha D
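Pulling the model and training steps described above together, a rough sketch might look like this. It assumes the scaled arrays from the previous snippet, and the layer widths and epoch counts are illustrative choices, not values taken from the notebook:

```python
from tensorflow.keras import layers, models, callbacks, optimizers

# Four Dense layers: three ReLU layers and a final sigmoid unit for the
# two-class (rock vs. mine) output. Layer widths are illustrative.
model = models.Sequential([
    layers.Input(shape=(60,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Stop when validation loss has not improved for 3 epochs; keep the best weights.
early_stop = callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

# Stage 1: learning rate 0.01.
model.compile(optimizer=optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=64, callbacks=[early_stop])

# Stage 2: recompile with the lower learning rate of 0.001 and continue training.
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=64, callbacks=[early_stop])

print(model.evaluate(X_val, y_val))   # loss and accuracy on the validation set
```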
4. Solving Equations
Remember this kind of problem from primary school?
? + 5 = 7
We just needed to figure out which number should go into the box to make it a true statement. Clearly, we need to replace the question mark with "2":
2 + 5 = 7
Solving equations using algebra is really no different. Instead of using a box, we use a letter to represent a number. Our task is to find the correct number (or sometimes there may be more than one number) that makes the equation true. Sometimes we can "see" the right answer if it is simple (maybe we can just count up with our fingers, or whatever.) But when our equations become more complicated, we need a process to follow that will eventually give us the answer.
- We are aiming to get x (or whatever letter the question uses) on the left hand side of the equals sign, by itself.
- We solve equations by balancing: whatever we do to one side of an equation, we must do the same to the other side. So if we add 4 to the left hand side, we must add 4 to the right hand side as well. If we multiply on the left side by 2, we multiply on the right side by 2 as well.
Solve the equation x − 6 = 10
We need to "get rid of" the −6 on the left hand side so we are left with x only on the left hand side. The opposite of subtracting 6 is adding 6. If we add 6 to both sides, we will remove the −6 on the left.
x − 6 = 10
x − 6 + 6 = 10 + 6
x = 16
So the value of x needs to be 16 to make the equation true. CHECK by substituting into the original question: 16 − 6 = 10. It checks out okay.
Solve 5x = 35
This time we are answering 5 × ? = 35. We could do this in our heads easily (right?), but if the problem is more complicated, we need to know what to do. On the left, we are multiplying our unknown quantity by 5. We'll use "x" for this quantity.
`5x = 35`
The opposite of multiplying by 5 is dividing by 5. So we divide both sides by 5. We obtain:
`x = 7`
CHECK: 5 × 7 = 35. It checks out okay. [These checks seem silly with easy examples, but it is a really good idea to check your solutions for all the equation problems that you do. It means you can leave the problem feeling good that you have the right answer and also, you learn more about how the solution works.]
Solve `(3x)/4 = 7`
This time we need to do 2 steps to solve the equation. We notice there is a 4 on the bottom of the fraction. This is equivalent to dividing by 4. The opposite of dividing by 4 is multiplying by 4. So we do that first:
`(3x)/4 xx 4=7xx4`
Cancelling the 4's on the left gives:
`3x = 28`
In the middle step we cancelled out the 4's, so we are left with no fraction. Now we need to divide both sides by 3, since we have a "3×" on the left hand side of the equation. This gives:
`x = 28/3`
Some countries (like the USA) will leave the answer as a single fraction (28/3), while the practice in other countries (like the UK and Australia) is to express the answer as a mixed numeral, `9 1/3`.
Is our answer correct? Substituting our answer in the left hand side gives:
`(3x)/4 = 3/4 xx x = 3/4 xx 28/3`
Canceling the 3's (which gives us 1) and the 28 with the 4 gives us 7:
`3/4 xx 28/3 = 7`
The right hand side in the question was 7, so we are confident our answer is correct.
Solve 5 − (x + 2) = 5x
First, we expand out the bracket.
`5 - (x + 2) = 5x`
`5 - x - 2 = 5x`
`3 - x = 5x`
Now we recognise that it is easier to get all the x's on the right side, by adding x to both sides:
`3 = 6x`
Now we divide both sides by 6 and swap the sides:
`x = 0.5`
We check our answer in both sides of the equation. If it works, it must be the right answer.
LHS = `5 - (0.5 + 2) = 2.5`
RHS = `5 xx 0.5 = 2.5` = LHS.
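If you want to double-check this kind of working with a computer, a small SymPy sketch can confirm the solution. SymPy is not part of the lesson; this is just an optional check of the balancing method:

```python
from sympy import symbols, Eq, solveset, S

x = symbols("x")

# Solve 5 - (x + 2) = 5x over the real numbers.
solutions = solveset(Eq(5 - (x + 2), 5 * x), x, domain=S.Reals)
print(solutions)   # {1/2}, matching x = 0.5 found by balancing
```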
Solve 5x − 2(x − 5) = 4x
Expanding the bracket:
`5x - 2(x - 5) = 4x`
`5x - 2x + 10 = 4x`
`3x + 10 = 4x`
Subtracting `3x` from both sides and swapping sides gives:
`x = 10`
CHECK:
LHS = `5 xx 10 - 2(10 - 5) = 50 - 10 = 40`
RHS = `4 xx 10 = 40` = LHS.
If you can, solve the equation − (7 − x) + 5 = x + 7. What do you conclude?
− (7 − x) + 5 = x + 7
Expand out the brackets:
−7 + x + 5 = x + 7
Subtract x from both sides:
−7 + 5 = 7
Simplify the left hand side:
`-2 = 7` ????
This is not possible, so we conclude that there are no possible values for x. [There was a hint in the question that something funny may be going on. Always be aware that an equation may not have solutions. Also, there are times when you get solutions that cannot possibly work, so you have to discount them. We find such examples later in Equations with Radicals.]
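The same kind of optional SymPy check shows what happens when an equation has no solution; again, this is a sketch for checking, not part of the lesson itself:

```python
from sympy import symbols, Eq, solveset, S

x = symbols("x")

# -(7 - x) + 5 simplifies to x - 2, so x - 2 = x + 7 can never hold.
solutions = solveset(Eq(-(7 - x) + 5, x + 7), x, domain=S.Reals)
print(solutions)   # EmptySet: no value of x makes the equation true
```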
Geologists identify minerals by their physical properties. In the field, where geologists may have limited access to advanced technology and powerful machines, they can still identify minerals by testing several physical properties: luster and color, streak, hardness, crystal habit, cleavage and fracture, and some special properties. Only a few common minerals make up the majority of Earth’s rocks and are usually seen as small grains in rocks. Of the several properties used for identifying minerals, it is good to consider which will be most useful for identifying them in small grains surrounded by other minerals.
Luster and Color
The first thing to notice about a mineral is its surface appearance, specifically luster and color. Luster describes the way a mineral’s surface reflects light. Metallic luster looks like a shiny metal such as chrome, steel, silver, or gold. Submetallic luster has a duller appearance. Pewter, for example, shows submetallic luster. Nonmetallic luster doesn’t look like metal and may be described as vitreous (glassy), earthy, silky, pearly, and other surface qualities. Nonmetallic minerals may be shiny, although their vitreous shine is different from metallic luster. See the table for descriptions and examples of nonmetallic luster.

| Luster | Description |
| --- | --- |
| Vitreous/glassy | Surface is shiny like glass |
| Earthy/dull | Dull, like dried mud or clay |
| Silky | Soft shine like silk fabric |
| Pearly | Like the inside of a clam shell or mother-of-pearl |
| Submetallic | Has the appearance of dull metal, like pewter. These minerals would usually still be considered metallic. Submetallic appearance can occur in metallic minerals because of weathering. |

Surface color may be helpful in identifying minerals, although it can be quite variable within the same mineral family. Mineral colors are affected by the main elements as well as impurities in the crystals. These impurities may be rare elements—like manganese, titanium, chromium, or lithium—or even other molecules that are not normally part of the mineral formula. For example, the incorporation of water molecules gives quartz, which is normally clear, a milky color. Some minerals predominantly show a single color. Malachite and azurite are green and blue, respectively, because of their copper content. Other minerals have a predictable range of colors due to elemental substitutions, usually via a solid solution. Feldspars, the most abundant minerals in the earth’s crust, are complex, have solid solution series, and present several colors including pink, white, green, gray, and others. Other minerals also come in several colors, influenced by trace amounts of several elements. The same element may show up as different colors in different minerals. With notable exceptions, color is usually not a definitive property of minerals. For identifying many minerals, a more reliable indicator is streak, which is the color of the powdered mineral. Streak examines the color of a powdered mineral and can be seen when a mineral sample is scratched or scraped on an unglazed porcelain streak plate. A paper page in a field notebook may also be used for the streak of some minerals. Minerals that are harder than the streak plate will not show streak but will scratch the porcelain. For these minerals, a streak test can be obtained by powdering the mineral with a hammer and smearing the powder across a streak plate or notebook paper. While mineral surface colors and appearances may vary, their streak colors can be diagnostically useful.
An example of this property is seen in the iron-oxide mineral hematite. Hematite occurs in a variety of forms, colors, lusters (from shiny metallic silver to earthy red-brown), and different physical appearances. A hematite streak is consistently reddish-brown, no matter what the original specimen looks like. Iron sulfide, or pyrite, is a brassy metallic yellow. Commonly named fool’s gold, pyrite has a characteristic black to greenish-black streak. Hardness measures the ability of a mineral to scratch other substances. The Mohs Hardness Scale gives a number showing the relative scratch-resistance of minerals when compared to a standardized set of minerals of increasing hardness. The Mohs scale was developed by German geologist Friedrich Mohs in 1812, although the idea of identifying minerals by hardness goes back thousands of years. Mohs hardness values are determined by the strength of a mineral’s atomic bonds. The figure shows the minerals associated with specific hardness values, together with some common items readily available for use in field testing and mineral identification. The hardness values run from 1 to 10, with 10 being the hardest; however, the scale is not linear. Diamond defines a hardness of 10 and is actually about four times harder than corundum, which is 9. A steel pocketknife blade, which has a hardness value of 5.5, separates hard from soft minerals on many mineral identification keys. Minerals can be identified by crystal habit, the way their crystals grow and appear in rocks. Crystal shapes are determined by the arrangement of the atoms within the crystal structure. For example, a cubic arrangement of atoms gives rise to a cubic-shaped mineral crystal. Crystal habit refers to typically observed shapes and characteristics; however, habits can be affected by other minerals crystallizing in the same rock. When minerals are constrained so they do not develop their typical crystal habit, they are called anhedral. Subhedral crystals are partially formed shapes. For some minerals, the characteristic crystal habit is to grow crystal faces even when surrounded by other crystals in the rock. An example is garnet. Where minerals grow freely and unconstrained, their crystals can take characteristic shapes and often form crystal faces. A euhedral crystal has a perfectly formed, unconstrained shape. Some minerals crystallize in such tiny crystals that they do not show a specific crystal habit to the naked eye. Other minerals, like pyrite, can have an array of different crystal habits, including cubic, dodecahedral, octahedral, and massive. The table lists typical crystal habits of various minerals.
| Crystal habit | Description | Common examples |
| --- | --- | --- |
| Bladed | Long and flat crystals | Kyanite, amphibole, gypsum |
| Botryoidal/mammillary | Blobby, circular crystals | Hematite, malachite, smithsonite |
| Coating/druse | Crystals that are small and coat surfaces | Quartz, calcite, malachite, azurite |
| Cubic | Cube-shaped crystals | Pyrite, galena, halite |
| Dodecahedral | 12-sided polygon shapes | Garnet, pyrite |
| Dendritic | Branching, tree-like crystals | Mn-oxides, copper, gold |
| Equant | Crystals that do not have a long direction | Olivine, garnet, pyroxene |
| Fibrous | Thin, very long crystals | Serpentine, amphibole, zeolite |
| Lamellar/micaceous | Stacked, very thin, flat crystals | Mica (biotite, muscovite, etc.) |
| Platy | Crystals that are plate-like | Selenite roses, wulfenite, calcite |
| Hexagonal | Crystals with six sides | Quartz, hanksite, corundum |
| Massive/amorphous | Crystals with no obvious shape; microscopic crystals | Limonite, pyrite, azurite, bornite |
| Octahedral | 4-sided double pyramid crystals | Diamond, fluorite, magnetite, pyrite |
| Prismatic | Very long, cylindrical crystals | Tourmaline, beryl, barite |
| Radiating | Crystals that grow from a point and fan out | Pyrite “suns”, pyrophyllite |
| Rhombohedral | Crystals shaped like slanted cubes | Calcite, dolomite |
| Blocky | Sharp-sided crystals with no long direction | Feldspar, pyroxene, calcite |
| Tetrahedral | Three-sided, pyramid-shaped crystals | Magnetite, spinel, tetrahedrite |

Another crystal habit that may be used to identify minerals is striations, which are dark and light parallel lines on a crystal face. Twinning is another, which occurs when the crystal structure replicates in mirror images along certain directions in the crystal. Striations and twinning are related properties in some minerals, including plagioclase feldspar. Striations are optical lines on a cleavage surface. Because of twinning in the crystal, striations show up on one of the two cleavage faces of the plagioclase crystal.
Cleavage and Fracture
Minerals often show characteristic patterns of breaking along specific cleavage planes or show characteristic fracture patterns. Cleavage planes are smooth, flat, parallel planes within the crystal. The cleavage planes may show as reflective surfaces on the crystal, as parallel cracks that penetrate into the crystal, or show on the edge or side of the crystal as a series of steps like rice terraces. Cleavage arises in crystals where the atomic bonds between atomic layers are weaker along some directions than others, meaning they will break preferentially along these planes. Because they develop on atomic surfaces in the crystal, cleavage planes are optically smooth and reflect light, although the actual break on the crystal may appear jagged or uneven. In such cleavages, the cleavage surface may appear like rice terraces on a mountainside that all reflect sunlight from a particular sun angle. Some minerals have strong cleavage; other minerals have only weak cleavage or do not typically demonstrate cleavage. For example, quartz and olivine rarely show cleavage and typically break into conchoidal fracture patterns. Graphite has its carbon atoms arranged into layers with relatively strong bonds within the layer and very weak bonds between the layers. Thus graphite cleaves readily between the layers, and the layers slide easily over one another, giving graphite its lubricating quality. Mineral fracture surfaces may be rough, uneven, or show a conchoidal fracture. Uneven fracture patterns are described as irregular, splintery, or fibrous. A conchoidal fracture has a smooth, curved surface like a shallow bowl or conch shell, often with curved ridges. Natural volcanic glass, called obsidian, breaks with this characteristic conchoidal pattern. To work with cleavage, it is important to remember that cleavage is a result of bonds separating along planes of atoms in the crystal structure.
On some minerals, cleavage planes may be confused with crystal faces. This will usually not be an issue for crystals of minerals that grew together within rocks. The act of breaking the rock to expose a fresh face will most likely break the crystals along cleavage planes. Some cleavage planes are parallel with crystal faces, but many are not. Cleavage planes may show as parallel cracks that penetrate into the crystal (see amphibole below), or show on the edge or side of the crystal as a series of steps like rice terraces. For some minerals, the characteristic crystal habit is to grow crystal faces even when surrounded by other crystals in rock; garnet is an example. Where minerals grow freely and unconstrained, their crystals can take characteristic shapes and often form crystal faces (see quartz below). In some minerals, distinguishing cleavage planes from crystal faces may be challenging for the student. Understanding the nature of cleavage and referring to the number of cleavage planes and cleavage angles on identification keys should provide the student with enough information to distinguish cleavages from crystal faces. Cleavage planes may show as multiple parallel cracks or flat surfaces on the crystal, or be expressed as a series of steps like terraced rice paddies (see the cleavage surfaces on galena above or plagioclase below). Cleavage planes arise from the tendency of mineral crystals to break along specific planes of weakness within the crystal favored by atomic arrangements. The number of cleavage planes, the quality of the cleavage surfaces, and the angles between them are diagnostic for many minerals, and cleavage is one of the most useful properties for identifying minerals. Learning to recognize cleavage is an especially important and useful skill in studying minerals. As an identification property of minerals, cleavage is usually given in terms of the quality of the cleavage (perfect, imperfect, or none), the number of cleavage surfaces, and the angles between the surfaces. The most common numbers of cleavage plane directions in the common rock-forming minerals are one perfect cleavage (as in mica), two cleavage planes (as in feldspar, pyroxene, and amphibole), and three cleavage planes (as in halite, calcite, and galena). One perfect cleavage (as in mica) develops on the top and bottom of the mineral specimen, with many parallel cracks showing on the sides but no angle of intersection. Two cleavage planes intersect at an angle. Common cleavage angles are 60°, 75°, 90°, and 120°. Amphibole has two cleavage planes at 60° and 120°. Galena and halite have three cleavage planes at 90° (cubic cleavage). Calcite cleaves readily in three directions, producing a cleavage figure called a rhomb that looks like a cube squashed over toward one corner, giving rise to the approximately 75° cleavage angles. Pyroxene has an imperfect cleavage with two planes at 90°.
Cleavages on Common Rock-Forming Minerals
- Quartz—none (conchoidal fracture)
- Olivine—none (conchoidal fracture)
- Mica—1 perfect
- Feldspar—2 perfect at 90°
- Pyroxene—2 imperfect at 90°
- Amphibole—2 perfect at 60°/120°
- Calcite—3 perfect at approximately 75°
- Halite, galena, pyrite—3 perfect at 90°
Special properties are unique and identifiable characteristics used to identify minerals or that allow some minerals to be used for special purposes.
Ulexite has a fiber-optic property that can project images through the crystal, like a high-definition television screen (see figure). A simple identifying special property is taste, such as the salty flavor of halite or common table salt (NaCl). Sylvite is potassium chloride (KCl) and has a more bitter taste. Another property geologists may use to identify minerals is a property related to density called specific gravity. Specific gravity measures the weight of a mineral specimen relative to the weight of an equal volume of water. The value is expressed as a ratio between the mineral and water weights. To measure specific gravity, a mineral specimen is first weighed in grams and then submerged in a graduated cylinder filled with pure water at room temperature. The rise in water level is noted using the cylinder’s graduated scale. Since water at room temperature weighs 1 gram per cubic centimeter, the displaced volume in cubic centimeters equals the weight of the displaced water in grams, and the ratio of the two weights gives the specific gravity. Specific gravity is easy to measure in the laboratory but is less useful for mineral identification in the field than other more easily observed properties, except in a few rare cases such as the very dense galena or native gold. The high density of these minerals gives rise to a qualitative property called “heft.” Experienced geologists can roughly assess specific gravity by heft, a subjective quality of how heavy the specimen feels in one’s hand relative to its size. A simple test for identifying calcite and dolomite is to drop a bit of dilute hydrochloric acid (10–15% HCl) on the specimen. If the acid drop effervesces or fizzes on the surface of the rock, the specimen is calcite. If it does not, the specimen is scratched to produce a small amount of powder and tested with acid again. If the acid drop fizzes slowly on the powdered mineral, the specimen is dolomite. The difference between these two minerals can be seen in the video. Geologists who work with carbonate rocks carry a small dropper bottle of dilute HCl in their field kit. Vinegar, which contains acetic acid, can be used for this test and is used to distinguish non-calcite fossils from limestone. While acidic, vinegar produces less of a fizzing reaction because acetic acid is a weaker acid. Some iron-oxide minerals are magnetic and are attracted to magnets. A common name for naturally magnetic iron oxide is lodestone. Others include magnetite (Fe3O4) and ilmenite (FeTiO3). Magnetite is strongly attracted to magnets and can be magnetized. Ilmenite and some types of hematite are weakly magnetic. Some minerals and mineraloids scatter light via a phenomenon called iridescence. This property occurs in labradorite (a variety of plagioclase) and opal. It is also seen in biologically created substances like pearls and seashells. Cut diamonds show iridescence, and the jeweler’s diamond cut is designed to maximize this property. Striations on mineral cleavage faces are an optical property that can be used to separate plagioclase feldspar from potassium feldspar (K-spar). A process called twinning creates parallel zones in the crystal that are repeating mirror images. The actual cleavage angle in plagioclase is slightly different from 90°, and the alternating mirror images in these twinned zones produce a series of parallel lines on one of plagioclase’s two cleavage faces. Light reflects off these twinned lines at slightly different angles which then appear as light and dark lines called striations on the cleavage surface.
Potassium feldspar does not exhibit twinning or striations but may show linear features called exsolution lamellae, also known as perthitic lineation or simply perthite. Because sodium and potassium do not fit into the same feldspar crystal structure, the lines are created by small amounts of sodium feldspar (albite) separating from the dominant potassium feldspar (K-spar) within the crystal structure. The two different feldspars crystallize out into roughly parallel zones within the crystal, which are seen as these linear markings. One of the most interesting special mineral properties is fluorescence. Certain minerals, or trace elements within them, give off visible light when exposed to ultraviolet radiation or black light. Many mineral exhibits have a fluorescence room equipped with black lights so this property can be observed. An even rarer optical property is phosphorescence. Phosphorescent minerals absorb light and then slowly release it, much like a glow-in-the-dark sticker.
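The specific gravity procedure described earlier reduces to a single division once the two measurements are in hand. A minimal sketch with hypothetical numbers (they are not measurements from the text) makes the arithmetic explicit:

```python
def specific_gravity(weight_g: float, displaced_volume_cm3: float) -> float:
    """Weight of the specimen divided by the weight of an equal volume of water.

    At room temperature water weighs about 1 g per cubic centimeter, so the
    displaced volume in cm^3 doubles as the weight of the displaced water in grams.
    """
    return weight_g / displaced_volume_cm3

# Hypothetical specimen: 151 g that raises the water level by 20 cm^3.
print(round(specific_gravity(151, 20), 2))  # 7.55, within the range commonly reported for galena
```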
In a step that brings silicon-based quantum computers closer to reality, researchers at Princeton University have built a device in which a single electron can pass its quantum information to a particle of light. The particle of light, or photon, can then act as a messenger to carry the information to other electrons, creating connections that form the circuits of a quantum computer. The research, published today in the journal Science and conducted at Princeton and HRL Laboratories in Malibu, California, represents a more than five-year effort to build a robust capability for an electron to talk to a photon, said Jason Petta, a Princeton professor of physics.
Princeton Professor of Physics Jason Petta, from left, and physics graduate students David Zajac and Xiao Mi, have built a device that is a step forward for silicon-based quantum computers, which when built will be able to solve problems beyond the capabilities of everyday computers. The device isolates an electron so that it can pass its quantum information to a photon, which can then act as a messenger to carry the information to other electrons to form the circuits of the computer. (Photo by Denise Applewhite, Office of Communications)
“Just like in human interactions, to have good communication a number of things need to work out — it helps to speak the same language and so forth,” Petta said. “We are able to bring the energy of the electronic state into resonance with the light particle, so that the two can talk to each other.” The discovery will help the researchers use light to link individual electrons, which act as the bits, or smallest units of data, in a quantum computer. Quantum computers are advanced devices that, when realized, will be able to perform advanced calculations using tiny particles such as electrons, which follow quantum rules rather than the physical laws of the everyday world. Each bit in an everyday computer can have a value of a 0 or a 1. Quantum bits — known as qubits — can be in a state of 0, 1, or both a 0 and a 1 simultaneously. This superposition, as it is known, enables quantum computers to tackle complex questions that today’s computers cannot solve. Simple quantum computers have already been made using trapped ions and superconductors, but technical challenges have slowed the development of silicon-based quantum devices. Silicon is a highly attractive material because it is inexpensive and is already widely used in today’s smartphones and computers.
The qubit consists of a single electron that is trapped below the surface of a silicon chip (gray). The green, pink and purple wires on top of the silicon structure deliver precise voltages to the qubit. The purple plate reduces electronic interference that can destroy the qubit’s quantum information. By adjusting the voltages in the wires, the researchers can trap a single electron in a double quantum dot and adjust its energy so that it can communicate its quantum information to a nearby photon. (Photo courtesy of the Jason Petta research group, Department of Physics)
The researchers trapped both an electron and a photon in the device, then adjusted the energy of the electron in such a way that the quantum information could transfer to the photon. This coupling enables the photon to carry the information from one qubit to another located up to a centimeter away. Quantum information is extremely fragile — it can be lost entirely due to the slightest disturbance from the environment.
Photons are more robust against disruption and can potentially carry quantum information not just from qubit to qubit in a quantum computer circuit but also between quantum chips via cables. For these two very different types of particles to talk to each other, however, researchers had to build a device that provided the right environment. First, Peter Deelman at HRL Laboratories, a corporate research-and-development laboratory owned by the Boeing Company and General Motors, fabricated the semiconductor chip from layers of silicon and silicon-germanium. This structure trapped a single layer of electrons below the surface of the chip. Next, researchers at Princeton laid tiny wires, each just a fraction of the width of a human hair, across the top of the device. These nanometer-sized wires allowed the researchers to deliver voltages that created an energy landscape capable of trapping a single electron, confining it in a region of the silicon called a double quantum dot. The researchers used those same wires to adjust the energy level of the trapped electron to match that of the photon, which is trapped in a superconducting cavity that is fabricated on top of the silicon wafer. Prior to this discovery, semiconductor qubits could only be coupled to neighboring qubits. By using light to couple qubits, it may be feasible to pass information between qubits at opposite ends of a chip. The electron’s quantum information consists of nothing more than the location of the electron in one of two energy pockets in the double quantum dot. The electron can occupy one or the other pocket, or both simultaneously. By controlling the voltages applied to the device, the researchers can control which pocket the electron occupies. “We now have the ability to actually transmit the quantum state to a photon confined in the cavity,” said Xiao Mi, a graduate student in Princeton’s Department of Physics and first author on the paper. “This has never been done before in a semiconductor device because the quantum state was lost before it could transfer its information.” The success of the device is due to a new circuit design that brings the wires closer to the qubit and reduces interference from other sources of electromagnetic radiation. To reduce this noise, the researchers put in filters that remove extraneous signals from the wires that lead to the device. The metal wires also shield the qubit. As a result, the qubits are 100 to 1,000 times less noisy than the ones used in previous experiments. Jeffrey Cady, a 2015 graduate, helped develop the filters to reduce the noise as part of his undergraduate senior thesis, and graduate student David Zajac led the effort to use overlapping electrodes to confine single electrons in silicon quantum dots. Eventually the researchers plan to extend the device to work with an intrinsic property of the electron known as its spin. “In the long run we want systems where spin and charge are coupled together to make a spin qubit that can be electrically controlled,” Petta said. “We’ve shown we can coherently couple an electron to light, and that is an important step toward coupling spin to light.” David DiVincenzo, a physicist at the Institute for Quantum Information in RWTH Aachen University in Germany, who was not involved in the research, is the author of an influential 1996 paper outlining five minimal requirements necessary for creating a quantum computer. 
Of the Princeton-HRL work, DiVincenzo said: “It has been a long struggle to find the right combination of conditions that would achieve the strong coupling condition for a single-electron qubit. I am happy to see that a region of parameter space has been found where the system can go for the first time into strong-coupling territory.”
Founded in 1746 in Elizabeth as the College of New Jersey, Princeton is one of the nine Colonial Colleges established before the American Revolution as well as the fourth chartered institution of higher education in the American colonies. The university moved to Newark in 1747, then to Princeton in 1756 and was renamed Princeton University in 1896. The present-day College of New Jersey in nearby Ewing Township, New Jersey, is an unrelated institution. Princeton had close ties to the Presbyterian Church, but has never been affiliated with any denomination and today imposes no religious requirements on its students. Princeton now provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences, and engineering. It does not have schools of medicine, law, divinity, or business, but it does offer professional degrees through the Woodrow Wilson School of Public and International Affairs, the Princeton University School of Engineering and Applied Science, and the School of Architecture. The university has ties with the Institute for Advanced Study, Princeton Theological Seminary, and the Westminster Choir College of Rider University. Princeton has been associated with 35 Nobel Laureates, 17 National Medal of Science winners, and three National Humanities Medal winners. On a per-student basis, Princeton has the largest university endowment in the world.
Researchers at Columbia University, Princeton and Harvard University have developed a new approach for analyzing big data that can drastically improve the ability to make accurate predictions about medicine, complex diseases, social science phenomena, and other issues. In a study published in the December 13 issue of Proceedings of the National Academy of Sciences (PNAS), the authors introduce the Influence score, or “I-score,” as a statistic correlated with how much variables inherently can predict, or “predictivity”, which can consequently be used to identify highly predictive variables. “In our last paper, we showed that significant variables may not necessarily be predictive, and that good predictors may not appear statistically significant,” said principal investigator Shaw-Hwa Lo, a professor of statistics at Columbia University. “This left us with an important question: how can we find highly predictive variables then, if not through a guideline of statistical significance? In this article, we provide a theoretical framework from which to design good measures of prediction in general. Importantly, we introduce a variable set’s predictivity as a new parameter of interest to estimate, and provide the I-score as a candidate statistic to estimate variable set predictivity.” Current approaches to prediction generally include using a significance-based criterion for evaluating variables to use in models and evaluating variables and models simultaneously for prediction using cross-validation or independent test data.
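A minimal sketch of a partition-based influence-style statistic of the kind the paper describes is shown below: observations are grouped by the joint levels of a candidate variable set, and squared deviations of the group means from the overall mean are weighted by squared group sizes. The exact normalization and extensions used in the PNAS paper may differ, and the data and function here are purely illustrative.

```python
import numpy as np

def i_score(X, y):
    """Influence-style score for a set of discrete explanatory variables.

    X : (n, k) integer array, each column a discrete variable
    y : (n,) response
    Groups observations by the joint levels of the k variables and returns
    (1/n) * sum_j n_j^2 * (ybar_j - ybar)^2, a partition-based measure that
    grows when the variable set separates the response well.
    """
    X = np.asarray(X)
    y = np.asarray(y, dtype=float)
    n = len(y)
    ybar = y.mean()
    # Group row indices by their joint cell in the partition defined by X.
    cells = {}
    for i, row in enumerate(map(tuple, X)):
        cells.setdefault(row, []).append(i)
    score = 0.0
    for idx in cells.values():
        nj = len(idx)
        score += nj**2 * (y[idx].mean() - ybar) ** 2
    return score / n

# Toy example: y depends on the interaction of x0 and x1; x2 is pure noise.
rng = np.random.default_rng(0)
n = 2000
X = rng.integers(0, 2, size=(n, 3))
y = (X[:, 0] ^ X[:, 1]) + rng.normal(scale=0.5, size=n)

print("predictive pair {x0,x1}:", round(i_score(X[:, :2], y), 1))
print("single variable {x0}   :", round(i_score(X[:, :1], y), 1))
print("noise variable {x2}    :", round(i_score(X[:, 2:3], y), 1))
```

In this toy example neither variable is informative on its own, yet the pair scores far above the noise variable, which echoes the authors’ point that predictive variable sets need not look significant one at a time.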
“Using the I-score prediction framework allows us to define a novel measure of predictivity based on observed data, which in turn enables assessing variable sets for, preferably high, predictivity,” Lo said, adding that, while intuitively obvious, not enough attention has been paid to the consideration of predictivity as a parameter of interest to estimate. Motivated by the needs of current genome-wide association studies (GWAS), the study authors provide such a discussion. In the paper, the authors describe the predictivity for a variable set and show that a simple sample estimate of predictivity does not, by itself, provide usable information for the prediction-oriented researcher. They go on to demonstrate that the I-score can be used to compute a measure that asymptotically approaches predictivity. The I-score can effectively differentiate between noisy and predictive variables, Lo explained, making it helpful in variable selection. A further benefit is that while usual approaches require heavy use of cross-validation data or testing data to evaluate the predictors, the I-score approach does not rely on this as much. “We offer simulations and an application of the I-score on real data to demonstrate the statistic’s predictive performance on sample data,” he said. “These show that the I-score can capture highly predictive variable sets, estimate a lower bound for the theoretical correct prediction rate, and correlate well with the out-of-sample correct rate. We suggest that using the I-score method can aid in finding variable sets with promising prediction rates; however, further research on sample-based measures of predictivity is needed.” The authors conclude that there are many applications for which using the I-score would be useful, for example in formulating predictions about diseases from high-dimensional data such as gene datasets, and in the social sciences for applications ranging from text prediction to forecasts of terrorism, civil war, elections and financial markets. “We’re hoping to impress upon the scientific community the notion that for those of us who might be interested in predicting an outcome of interest, possibly with rather complex or high dimensional data, we might gain by reconsidering the question as one of how to search for highly predictive variables (or variable sets) and using statistics that measure predictivity to help us identify those variables to then predict well,” Lo said. “For statisticians in particular, we’re hoping this opens up a new field of work that would focus on designing new statistics that measure predictivity.”
Researchers at Princeton, Columbia and Harvard have created a new method to analyze big data that better predicts outcomes in health care, politics and other fields. The study appears this week in the journal Proceedings of the National Academy of Sciences. In previous studies, the researchers showed that significant variables might not be predictive and that good predictors might not appear statistically significant. This posed an important question: how can we find highly predictive variables if not through a guideline of statistical significance? Common approaches to prediction include using a significance-based criterion for evaluating variables to use in models and evaluating variables and models simultaneously for prediction using cross-validation or independent test data.
In an effort to reduce the error rate with those methods, the researchers proposed a new measure called the influence score, or I-score, to better measure a variable’s ability to predict. They found that the I-score is effective in differentiating between noisy and predictive variables in big data and can significantly improve the prediction rate. For example, the I-score improved the prediction rate in breast cancer data from 70 percent to 92 percent. The I-score can be applied in a variety of fields, including terrorism, civil war, elections and financial markets. “The practical implications are what drove the project, so they’re quite broad,” says lead author Adeline Lo, a postdoctoral researcher in Princeton’s Department of Politics. “Essentially anytime you might be interested in predicting and identifying highly predictive variables, you might have something to gain by conducting variable selection through a statistic like the I-score, which is related to variable predictivity. That the I-score fares especially well in high dimensional data and with many complex interactions between variables is an extra boon for the researcher or policy expert interested in predicting something with large dimensional data.” Incentives that are designed to enable smarter use of the ocean while also protecting marine ecosystems can and do work, and offer significant hope to help address the multiple environmental threats facing the world’s oceans, researchers conclude in a new analysis. Whether economic or social, incentive-based solutions may be one of the best options for progress in reducing impacts from overfishing, climate change, ocean acidification and pollution, researchers from Oregon State University and Princeton University say in a new report published this week in Proceedings of the National Academy of Sciences. And positive incentives – the “carrot” – work better than negative incentives, or the “stick.” Part of the reason for optimism, the researchers report, is changing awareness, attitudes and social norms around the world, in which resource users and consumers are becoming more informed about environmental issues and demanding action to address them. That sets the stage for economic incentives that can convert near-disaster situations into sustainable fisheries, cleaner water and long-term solutions. “As we note in this report, the ocean is becoming higher, warmer, stormier, more acidic, lower in dissolved oxygen and overfished,” said Jane Lubchenco, the distinguished university professor in the College of Science and advisor in marine studies at Oregon State University, lead author of the new report, and U.S. science envoy for the ocean at the Department of State. “The threats facing the ocean are enormous, and can seem overwhelming. But there’s actually reason for hope, and it’s based on what we’ve learned about the use of incentives to change the way people, nations and institutions behave. We believe it’s possible to make that transition from a vicious to a virtuous cycle. Getting incentives right can flip a disaster to a resounding success.” Simon A. Levin, the James S. McDonnell distinguished university professor in ecology and evolutionary biology at Princeton University and co-author of the publication, had a similar perspective. “It is really very exciting that what, until recently, was theoretical optimism is proving to really work,” Levin said. “This gives me great hope for the future.” The stakes are huge, the scientists point out in their study. 
The global market value of marine and coastal resources and industries is about $3 trillion a year; more than 3 billion people depend on fish for a major source of protein; and marine fisheries involve more than 200 million people. Ocean and coastal ecosystems provide food, oxygen, climate regulation, pest control, recreational and cultural value. “Given the importance of marine resources, many of the 150 or more coastal nations, especially those in the developing world, are searching for new approaches to economic development, poverty alleviation and food security,” said Elizabeth Cerny-Chipman, a postdoctoral scholar working with Lubchenco. “Our findings can provide guidance to them about how to develop sustainably.” In recent years, the researchers said in their report, new incentive systems have been developed that tap into people’s desires for both economic sustainability and global environmental protection. In many cases, individuals, scientists, faith communities, businesses, nonprofit organizations and governments are all changing in ways that reward desirable and dissuade undesirable behaviors. One of the leading examples of progress is the use of “rights-based fisheries.” Instead of a traditional “race to fish” concept based on limited seasons, this growing movement allows fishers to receive a guaranteed fraction of the catch, benefit from a well-managed, healthy fishery and become part of a peer group in which cheating is not tolerated. There are now more than 200 rights-based fisheries covering more than 500 species among 40 countries, the report noted. One was implemented in the Gulf of Mexico red snapper commercial fishery, which was on the brink of collapse after decades of overfishing. A rights-based plan implemented in 2007 has tripled the spawning potential, doubled catch limits and increased fishery revenue by 70 percent. “Multiple turn-around stories in fisheries attest to the potential to end overfishing, recover depleted species, achieve healthier ocean ecosystems, and bring economic benefit to fishermen and coastal communities,” said Lubchenco. “It is possible to have your fish and eat them too.” A success story used by some nations has been combining “territorial use rights in fisheries,” which assign exclusive fishing access in a particular place to certain individuals or communities, together with adjacent marine reserves. Fish recover inside the no-take reserve and “spillover” to the adjacent fished area outside the reserve. Another concept of incentives has been “debt for nature” swaps used in some nations, in which foreign debt is exchanged for protection of the ocean. “In parallel to a change in economic incentives,” said Jessica Reimer, a graduate research assistant with Lubchenco, “there have been changes in behavioral incentives and social norms, such as altruism, ethical values, and other types of motivation that can be powerful drivers of change.” The European Union, based on strong environmental support among its public, has issued warnings and trade sanctions against countries that engage in illegal, unregulated and unreported fishing. In the U.S., some of the nation’s largest retailers, in efforts to improve their image with consumers, have moved toward sale of only certified sustainable seafood. Incentives are not a new idea, the researchers noted. But they emphasize that their power may have been under-appreciated. 
“Recognizing the extent to which a change in incentives can be explicitly used to achieve outcomes related to biodiversity, ecosystem health and sustainability . . . holds particular promise for conservation and management efforts in the ocean,” they wrote in their conclusion. Neural networks using light could lead to superfast computing. Neural networks are taking the world of computing by storm. Researchers have used them to create machines that are learning a huge range of skills that had previously been the unique preserve of humans—object recognition, face recognition, natural language processing, machine translation. All these skills, and more, are now becoming routine for machines. So there is great interest in creating more capable neural networks that can push the boundaries of artificial intelligence even further. The focus of this work is in creating circuits that operate more like neurons, so-called neuromorphic chips. But how to make these circuits significantly faster? Today, we get an answer of sorts thanks to the work of Alexander Tait and pals at Princeton University in New Jersey. These guys have built the world’s first photonic neuromorphic chip and show that it computes at ultrafast speeds. Optical computing has long been the great white hope of computer science. Photons have significantly more bandwidth than electrons and so can process more data more quickly. But the advantages of optical data processing systems have never outweighed the additional cost of making them, and so they have never been widely adopted. That has started to change in some areas of computing, such as analog signal processing, which requires the kind of ultrafast data processing that only photonic chips can provide. Now neural networks are opening up a new opportunity for photonics. “Photonic neural networks leveraging silicon photonic platforms could access new regimes of ultrafast information processing for radio, control, and scientific computing,” say Tait and co. At the heart of the challenge is to produce an optical device in which each node has the same response characteristics as a neuron. The nodes take the form of tiny circular waveguides carved into a silicon substrate in which light can circulate. When released this light then modulates the output of a laser working at threshold, a regime in which small changes in the incoming light have a dramatic impact on the laser’s output. Crucially, each node in the system works with a specific wavelength of light—a technique known as wave division multiplexing. The light from all the nodes can be summed by total power detection before being fed into the laser. And the laser output is fed back into the nodes to create a feedback circuit with a non-linear character. An important question is just how closely this non-linearity mimics neural behavior. Tait and co measure the output and show that it is mathematically equivalent to a device known as a continuous-time recurrent neural network. “This result suggests that programming tools for CTRNNs could be applied to larger silicon photonic neural networks,” they say. That’s an important result because it means the device that Tait and co have made can immediately exploit the vast range of programming nous that has been gathered for these kinds of neural networks. They go on to demonstrate how this can be done using a network consisting of 49 photonic nodes. 
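The benchmark the photonic device maps onto, a continuous-time recurrent neural network, has simple dynamics: each node's state relaxes toward a weighted, nonlinearly transformed sum of the node states plus an external input. Below is a minimal numerical sketch of those standard CTRNN equations with illustrative parameters; it is not a model of the photonic hardware itself.

```python
import numpy as np

def simulate_ctrnn(W, tau, u, x0, dt=1e-3, steps=5000):
    """Integrate a continuous-time recurrent neural network.

    Dynamics: tau * dx/dt = -x + W @ tanh(x) + u
    W   : (N, N) recurrent weight matrix
    tau : (N,) node time constants
    u   : (N,) constant external input
    """
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        dx = (-x + W @ np.tanh(x) + u) / tau
        x = x + dt * dx   # simple forward-Euler step
    return x

# Toy 3-node network with illustrative weights.
rng = np.random.default_rng(1)
N = 3
W = rng.normal(scale=0.8, size=(N, N))
tau = np.full(N, 0.01)          # fast time constants
u = np.array([0.5, -0.2, 0.1])  # constant drive
print(simulate_ctrnn(W, tau, u, x0=np.zeros(N)))
```

In the photonic version described above, the summed, wavelength-multiplexed light fed back through the near-threshold laser plays the role of the nonlinearity in these equations.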
They use this photonic neural network to solve the mathematical problem of emulating a certain kind of differential equation and compare it to an ordinary central processing unit. The results show just how fast photonic neural nets can be. “The effective hardware acceleration factor of the photonic neural network is estimated to be 1,960× in this task,” say Tait and co. That’s a speed-up of three orders of magnitude. That opens the doors to an entirely new industry that could bring optical computing into the mainstream for the first time. “Silicon photonic neural networks could represent first forays into a broader class of silicon photonic systems for scalable information processing,” say Tait and co. Of course, much depends on how well the first generation of electronic neuromorphic chips performs. Photonic neural nets will have to offer significant advantages to be widely adopted and will therefore require much more detailed characterization. Clearly, there are interesting times ahead for photonics. Learn more: World’s First Photonic Neural Network Unveiled
A system that can compare physical objects while potentially protecting sensitive information about the objects themselves has been demonstrated experimentally at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL). This work, by researchers at Princeton University and PPPL, marks an initial confirmation of the application of a powerful cryptographic technique in the physical world. “This is the first experimental demonstration of a physical zero-knowledge proof,” said Sébastien Philippe, a graduate student in the Department of Mechanical and Aerospace Engineering at Princeton University and lead author of the paper. “We have translated a major method of modern cryptography devised originally for computational tasks into use for a physical system.” Cryptography is the science of disguising information. This research, supported by funding from the DOE’s National Nuclear Security Administration through the Consortium for Verification Technology, marks a promising first experimental step toward a technique that could prove useful in future disarmament agreements, pending the results of further development, testing and evaluation. While important questions remain, the technique, first proposed in a 2014 paper in the journal Nature, might have potential application to verify that nuclear warheads presented for disarmament were in fact true warheads. Support for this work came also from the John D. and Catherine T. MacArthur Foundation and the Carnegie Corporation of New York. The research, outlined in a paper in Nature Communications on September 20, 2016, was conducted on a set of 2-inch steel and aluminum cubes arranged in different combinations. Researchers first organized the cubes into a designated “true” pattern and then into a number of “false” ones. Next, they beamed high-energy neutrons into each arrangement and recorded how many passed through to bubble neutron detectors, produced by Yale University, on the other side. When a neutron interacts with a superheated droplet in the detector, it creates a stable macroscopic bubble. To avoid revealing information about the composition and configuration of the cubes, bubbles created in this manner were added to those already preloaded into the detectors.
The preload was designed so that if a valid object were presented, the sum of the preload and the signal detected with the object present would equal the count produced by firing neutrons directly into the detectors – with no object in front of them. The experiment found that for the “true” pattern, the sum of the preload and the object’s signal did indeed equal the count produced by beaming neutrons with nothing in front of the detectors, while the totals for the significantly different “false” arrangements clearly did not. “This was an extremely important experimental demonstration,” said Robert Goldston, a fusion scientist and coauthor of the paper who is former director of PPPL and a Princeton professor of astrophysical sciences. “We had a theoretical idea and have now provided a proven practical example.” Joining him as coauthors are Alex Glaser, associate professor in Princeton’s Woodrow Wilson School of Public and International Affairs and the Department of Mechanical and Aerospace Engineering; and Francesco d’Errico, senior research scientist at the Yale School of Medicine and professor at the University of Pisa, Italy.
When further developed for a possible arms control application, the technique would add bubbles from irradiation of a putative warhead to those already preloaded into detectors by the warhead’s owner. If the total for the new and preloaded bubbles equaled the count produced by beaming neutrons into the detectors with nothing in front of them, the putative weapon would be verified to be a true one. But if the total count for the preload plus warhead irradiation did not match the no-object count, the inspected weapon would be exposed as a spoof. Prior to the test, the inspector would randomly select which preloaded detectors to use with which putative warhead, and which preload to use with a warhead that was, for example, selected from the owner’s active inventory. In a sensitive measurement, such as one involving a real nuclear warhead, the proposition is that no classified data would be exposed or shared in the process, and no electronic components that might be vulnerable to tampering or snooping would be used. Even statistical noise — or random variation in neutron measurement — would convey no data. Indeed, “For the zero-knowledge property to be conserved, neither the signal nor the noise may carry information,” the authors write. A necessary future step is to assess this proposition fully, and to develop and review a concept of operations in detail to determine actual viability and information sensitivity. Important questions yet to be resolved include the details of obtaining and confirming a target warhead during the zero-knowledge measurement; specifics of establishing and maintaining the pre-loaded detectors in a way that ensures inspecting party confidence without revealing any data considered sensitive by the inspected party; and feasibility questions associated with safely deploying active interrogation measurement techniques on actual nuclear warheads in sensitive physical environments, in a way that provides confidence to both the inspected and inspecting parties.
Glaser, Goldston and Boaz Barak, a professor of computer science at Harvard University and former Princeton associate professor, first proposed the concept of a zero-knowledge protocol for warhead verification in the 2014 paper in the journal Nature. That paper led Foreign Policy magazine to name the authors among its “100 Leading Global Thinkers of 2014,” and prompted other research centers to embark on similar projects.
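The count-matching logic described above can be sketched as a toy simulation: the owner preloads each detector with the complement of a true item's expected transmission, so a genuine item brings the total up to the open-beam count while a spoof does not. The numbers below are made up for illustration and do not reflect the PPPL measurement or its statistical treatment.

```python
import numpy as np

rng = np.random.default_rng(42)

OPEN_BEAM = 1000          # mean bubble count with nothing in front of the detector
TRUE_TRANSMISSION = 0.40  # fraction of neutrons a genuine item lets through (illustrative)

def measure(transmission, preload_mean):
    """Total bubbles = preloaded bubbles + bubbles from transmitted neutrons.

    Both contributions are Poisson, so the inspector sees only a noisy total;
    neither the preload nor the item's transmission is revealed on its own.
    """
    preload = rng.poisson(preload_mean)
    signal = rng.poisson(OPEN_BEAM * transmission)
    return preload + signal

# The owner preloads the complement of a true item's expected signal.
preload_mean = OPEN_BEAM * (1 - TRUE_TRANSMISSION)

true_total = measure(TRUE_TRANSMISSION, preload_mean)   # genuine item
spoof_total = measure(0.55, preload_mean)               # spoof with different transmission
reference = rng.poisson(OPEN_BEAM)                      # open-beam reference count

print("reference (no object):", reference)
print("true item total      :", true_total)   # statistically consistent with reference
print("spoof total          :", spoof_total)  # systematically too high
```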
“We are happy to see this important field of research gain new momentum and create new opportunities for collaboration between national laboratories and universities,” Glaser said.
Increased power and slashed energy consumption for data centers
Princeton University researchers have built a new computer chip that promises to boost performance of data centers that lie at the core of online services from email to social media. Data centers – essentially giant warehouses packed with computer servers – enable cloud-based services, such as Gmail and Facebook, as well as store the staggeringly voluminous content available via the internet. Surprisingly, the computer chips at the hearts of the biggest servers that route and process information often differ little from the chips in smaller servers or everyday personal computers. By designing their chip specifically for massive computing systems, the Princeton researchers say they can substantially increase processing speed while slashing energy needs. The chip architecture is scalable; designs can be built that go from a dozen processing units (called cores) to several thousand. Also, the architecture enables thousands of chips to be connected together into a single system containing millions of cores. Called Piton, after the metal spikes driven by rock climbers into mountainsides to aid in their ascent, it is designed to scale.
“With Piton, we really sat down and rethought computer architecture in order to build a chip specifically for data centers and the cloud,” said David Wentzlaff, an assistant professor of electrical engineering and associated faculty in the Department of Computer Science at Princeton University. “The chip we’ve made is among the largest chips ever built in academia and it shows how servers could run far more efficiently and cheaply.” Wentzlaff’s graduate student, Michael McKeown, will give a presentation about the Piton project Tuesday, Aug. 23, at Hot Chips, a symposium on high-performance chips in Cupertino, California. The unveiling of the chip is a culmination of years of effort by Wentzlaff and his students. Mohammad Shahrad, a graduate student in Wentzlaff’s Princeton Parallel Group, said that creating “a physical piece of hardware in an academic setting is a rare and very special opportunity for computer architects.” Other Princeton researchers involved in the project since its 2013 inception are Yaosheng Fu, Tri Nguyen, Yanqi Zhou, Jonathan Balkind, Alexey Lavrov, Matthew Matl, Xiaohua Liang, and Samuel Payne, who is now at NVIDIA. The Princeton team designed the Piton chip, which was manufactured for the research team by IBM. Primary funding for the project has come from the National Science Foundation, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific Research.
The current version of the Piton chip measures six by six millimeters. The chip has over 460 million transistors, each of which is as small as 32 nanometers – too small to be seen by anything but an electron microscope. The bulk of these transistors are contained in 25 cores, the independent processors that carry out the instructions in a computer program. Most personal computer chips have four or eight cores. In general, more cores mean faster processing times, so long as software ably exploits the hardware’s available cores to run operations in parallel. Therefore, computer manufacturers have turned to multi-core chips to squeeze further gains out of conventional approaches to computer hardware.
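That qualifier, that extra cores help only insofar as the software can actually run work in parallel, is the familiar Amdahl's-law trade-off. A quick illustration with made-up workload numbers, not measurements of Piton:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup when only part of a workload parallelizes (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for cores in (4, 8, 25, 1024):
    # 95% of the work parallelizes in this made-up example.
    print(f"{cores:>5} cores -> {amdahl_speedup(0.95, cores):5.1f}x speedup")
```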
In recent years, companies and academic institutions have produced chips with many dozens of cores, but Wentzlaff said the readily scalable architecture of Piton can enable thousands of cores on a single chip, with half a billion cores in the data center. “What we have with Piton is really a prototype for future commercial server systems that could take advantage of a tremendous number of cores to speed up processing,” said Wentzlaff. The Piton chip’s design focuses on exploiting commonality among programs running simultaneously on the same chip. One method to do this is called execution drafting. It works very much like the drafting in bicycle racing, when cyclists conserve energy behind a lead rider who cuts through the air, creating a slipstream. At a data center, multiple users often run programs that rely on similar operations at the processor level. The Piton chip’s cores can recognize these instances and execute identical instructions consecutively, so that they flow one after another, like a line of drafting cyclists. Doing so can increase energy efficiency by about 20 percent compared to a standard core, the researchers said. A second innovation incorporated into the Piton chip parcels out when competing programs access computer memory that exists off of the chip. Called a memory traffic shaper, this function acts like a traffic cop at a busy intersection, considering each program’s needs and adjusting memory requests and waving them through appropriately so they do not clog the system. This approach can yield an 18 percent performance jump compared to conventional allocation. The Piton chip also gains efficiency through its management of memory stored on the chip itself. This memory, known as the cache memory, is the fastest in the computer and used for frequently accessed information. In most designs, cache memory is shared across all of the chip’s cores. But that strategy can backfire when multiple cores access and modify the cache memory. Piton sidesteps this problem by assigning areas of the cache and specific cores to dedicated applications. The researchers say the system can increase efficiency by 29 percent when applied to a 1,024-core architecture. They estimate that this savings would multiply as the system is deployed across millions of cores in a data center. The researchers said these improvements could be implemented while keeping costs in line with current manufacturing standards. To hasten further developments leveraging and extending the Piton architecture, the Princeton researchers have made its design open source and thus available to the public and fellow researchers at the OpenPiton website: http://www. “We’re very pleased with all that we’ve achieved with Piton in an academic setting, where there are far fewer resources than at large, commercial chipmakers,” said Wentzlaff. “We’re also happy to give out our design to the world as open source, which has long been commonplace for software, but is almost never done for hardware.”
Scientists from Princeton University and NASA have confirmed that 1,284 objects observed outside Earth’s solar system by NASA’s Kepler spacecraft are indeed planets. Reported in The Astrophysical Journal on May 10, it is the largest single announcement of new planets to date and more than doubles the number of confirmed planets discovered by Kepler so far to more than 2,300.
The researchers’ discovery hinges on a technique developed at Princeton that allows scientists to efficiently analyze thousands of signals Kepler has identified to determine which are most likely to be caused by planets and which are caused by non-planetary objects such as stars. This automated technique — implemented in a publicly available custom software package called Vespa — computes the chances that the signal is in fact caused by a planet. The researchers used Vespa to compute the reliability values for over 7,000 signals identified in the latest Kepler catalog, and verified the 1,284 planets with 99 percent certainty. They also independently verified 651 additional planet signals that had already been confirmed as planets by other methods. In addition, the researchers identified 428 candidates as likely “false positives,” or signals generated by something other than a planet. A team of researchers at Princeton University has predicted the existence of a new state of matter in which current flows only through a set of surface channels that resemble an hourglass. These channels are created through the action of a newly theorized particle, dubbed the “hourglass fermion,” which arises due to a special property of the material. The tuning of this property can sequentially create and destroy the hourglass fermions, suggesting a range of potential applications such as efficient transistor switching. In an article published in the journal Nature this week, the researchers theorize the existence of these hourglass fermions in crystals made of potassium and mercury combined with either antimony, arsenic or bismuth. The crystals are insulators in their interiors and on their top and bottom surfaces, but perfect conductors on two of their sides where the fermions create hourglass-shaped channels that enable electrons to flow. Scientists at the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) have helped design and test a component that could improve the performance of doughnut-shaped fusion facilities known as tokamaks. Called a “liquid lithium limiter,” the device has circulated the protective liquid metal within the walls of China’s Experimental Advanced Superconducting Tokamak (EAST) and kept the plasma from cooling down and halting fusion reactions. The journal Nuclear Fusion published results of the experiment in March 2016. The research was supported by the DOE Office of Science. “We demonstrated a continuous, recirculating lithium flow for several hours in a tokamak,” said Rajesh Maingi, head of boundary physics research and plasma-facing components at PPPL. “We also demonstrated that the flowing liquid lithium surface was compatible with high plasma confinement and with reduced recycling of the hydrogen isotope deuterium to an extent previously achieved only with evaporated lithium coatings. The recirculating lithium provides a fresh, clean surface that can be used for long-lasting plasma discharges.” A new study from Princeton has revealed how a synthetic protein revives E. coli cells that lack a life-sustaining gene, offering insight into how life can adapt to survive and potentially be reinvented. Researchers in the Hecht lab discovered the unexpected way in which a synthetic protein called SynSerB promotes the growth of cells that lack the natural SerB gene, which encodes an enzyme responsible for the last step in the production of the essential amino acid serine. The findings were published in the Proceedings of the National Academy of Sciences. 
The Hecht group first discovered SynSerB’s ability to rescue serine-depleted E. coli cells in 2011. At that time, they also discovered several other de novo proteins capable of rescuing the deletions of three other essential proteins in E. coli. “These are novel proteins that have never existed on Earth, and aren’t related to anything on Earth, yet they enable life to grow where it otherwise would not,” said Michael Hecht, professor of chemistry at Princeton and corresponding author on the article.
Natural proteins are complex molecular machines constructed from a pool of twenty different amino acids. Typically they range from several dozen to several hundred amino acids in length. In principle, there are more possible protein sequences than atoms in the universe, but through evolution Nature has selected just a small fraction to carry out the cellular functions that make life possible. “Those proteins must be really special,” Hecht said. “The driving question was, ‘Can we do that in the laboratory? Can we come up with non-natural sequences that are that special, from an enormous number of possibilities?’”
To address this question, the Hecht lab developed a library of non-natural proteins guided by a concept called binary design. The idea was to narrow down the number of possible sequences by choosing from eleven select amino acids that were divided into two groups: polar and non-polar. By using only the polar or non-polar characteristics of those amino acids, the researchers could design a plethora of novel proteins to fold into a particular shape based on their affinity to and repulsion from water. Then, by allowing specific positions to take on different amino acids within each group, the researchers were able to produce a diverse library of about one million proteins, each 102 amino acids long. “We had to focus on certain subsets of proteins that we knew would fold and search there first for function,” said Katie Digianantonio, a graduate student in the Hecht lab and first author on the paper. “It’s like instead of searching the whole universe for life, we’re looking in specific solar systems.”
Having found several non-natural proteins that could rescue specific cell lines, the researchers focused in this latest work on how SynSerB promotes cell growth. The most obvious explanation, that SynSerB simply catalyzed the same reaction performed by the enzyme encoded by the deleted SerB gene, was discounted by an early experiment. To discern SynSerB’s mechanism among the multitude of complex biochemical pathways in the cell, the researchers turned to a technique called RNA sequencing. This technique allowed them to take a detailed snapshot of the serine-depleted E. coli cells with and without their synthetic protein and compare the differences. “Instead of guessing and checking, we wanted to look at the overall environment to see what was happening,” Digianantonio said. The RNA sequencing experiment revealed that SynSerB induced overexpression of a protein called HisB, high levels of which have been shown to promote the key reaction normally performed by the missing enzyme. By enlisting the help of HisB, the non-natural protein was able to induce the production of serine, which ultimately allowed the cell to survive. “Life is opportunistic. Some proteins are going to work by acting similarly to what they replaced and some will find another pathway,” Hecht said. “Either way it’s cool.” Learn more: How an artificial protein rescues dying cells
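The binary-design idea described in the article above, fixing a polar/non-polar pattern and letting each position vary within its class, can be sketched as a simple sequence generator. The pattern, residue classes and library-size arithmetic below are illustrative only; the actual Hecht-lab design rules are more constrained.

```python
import random

# Illustrative residue classes for a binary-patterned design (not the exact
# eleven amino acids used in the Hecht lab's library).
POLAR = "DEKNQH"      # surface-facing, water-loving residues
NONPOLAR = "AFILMV"   # core-facing, water-avoiding residues

# A toy patterning: 'P' marks positions constrained to be polar, 'N' non-polar.
PATTERN = "NPPNNPPNPPNNPPNP" * 6   # 96 positions; the real designs were 102 residues

def sample_sequence(pattern=PATTERN, rng=random.Random(0)):
    """Draw one protein sequence consistent with the binary pattern."""
    return "".join(
        rng.choice(POLAR if p == "P" else NONPOLAR) for p in pattern
    )

# Each position choice multiplies the library size, so even this toy pattern
# spans an astronomically large sequence space.
library_size = 1
for p in PATTERN:
    library_size *= len(POLAR if p == "P" else NONPOLAR)

print(sample_sequence()[:40], "...")
print(f"sequences consistent with the pattern: roughly 10^{len(str(library_size)) - 1}")
```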