Indigo
Indigo is a deep and rich color close to the color wheel blue (a primary color in the RGB color space), as well as to some variants of ultramarine, based on the ancient dye of the same name. The word "indigo" comes from the Latin for "Indian", as the dye was originally imported to Europe from India. It is traditionally regarded as a color in the visible spectrum, as well as one of the seven colors of the rainbow: the color between violet and blue; however, sources differ as to its actual position in the electromagnetic spectrum. The first known recorded use of indigo as a color name in English was in 1289. "Indigofera tinctoria" and related species were cultivated in East Asia, Egypt, India, and Peru in antiquity. The earliest direct evidence for the use of indigo dates to around 4000 BC and comes from Huaca Prieta, in contemporary Peru. Pliny the Elder mentions India as the source of the dye, after which it was named. It was imported from there in small quantities via the Silk Road. The Ancient Greek term for the dye meant "Indian dye"; adopted into Latin (as a second-declension noun) as "indicum" or "indico", it gave rise, via Portuguese, to the modern word "indigo". Spanish explorers discovered an American species of indigo and began to cultivate the product in Guatemala. The English and French subsequently began to encourage indigo cultivation in their colonies in the West Indies. Blue dye can be made from two different types of plants: the indigo plant, which produces the best results, and the woad plant "Isatis tinctoria", also known as pastel. For a long time, woad was the main source of blue dye in Europe. Woad was replaced by true indigo as trade routes opened up, and both plant sources have now been largely replaced by synthetic dyes. The Early Modern English word "indigo" referred to the dye, not to the color (hue) itself, and "indigo" is not traditionally part of the basic color-naming system. Modern sources place indigo in the electromagnetic spectrum between 420 and 450 nanometers, which lies on the short-wave side of color wheel (RGB) blue, towards (spectral) violet. However, the correspondence of this definition with colors of actual indigo dyes is disputed. The optical scientists Hardy and Perrin list indigo as between 445 and 464 nm wavelength, which occupies a spectrum segment from roughly the color wheel (RGB) blue extending to the long-wave side, towards azure. Isaac Newton introduced indigo as one of the seven base colors in his work on optics. In the mid-1660s, when Newton bought a pair of prisms at a fair near Cambridge, the East India Company had begun importing indigo dye into England, supplanting the homegrown woad as a source of blue dye. In a pivotal experiment in the history of optics, the young Newton shone a narrow beam of sunlight through a prism to produce a rainbow-like band of colors on the wall. In describing this optical spectrum, Newton acknowledged that the spectrum had a continuum of colors, but named seven: "The originall or primary colours are Red, yellow, Green, Blew, & a violet purple; together with Orang, Indico, & an indefinite varietie of intermediate gradations." He linked the seven prismatic colors to the seven notes of a western major scale, as shown in his color wheel, with orange and indigo as the semitones.
Having decided upon seven colors, he asked a friend to repeatedly divide up the spectrum that was projected from the prism onto the wall: "I desired a friend to draw with a pencil lines cross the image, or pillar of colours, where every one of the seven aforenamed colours was most full and brisk, and also where he judged the truest confines of them to be, whilst I held the paper so, that the said image might fall within a certain compass marked on it. And this I did, partly because my own eyes are not very critical in distinguishing colours, partly because another, to whom I had not communicated my thoughts about this matter, could have nothing but his eyes to determine his fancy in making those marks." Indigo is therefore counted as one of the traditional colors of the rainbow, the order of which is given by the mnemonics "Richard of York gave battle in vain" and "Roy G. Biv". James Clerk Maxwell and Hermann von Helmholtz accepted indigo as an appropriate name for the color flanking violet in the spectrum. Later scientists concluded that Newton named the colors differently from current usage. According to Gary Waldman, "A careful reading of Newton's work indicates that the color he called indigo, we would normally call blue; his blue is then what we would name blue-green, cyan or light blue." If this is true, Newton's seven spectral colors (red, orange, yellow, green, blue, indigo, and violet) would each have denoted a somewhat different band of the spectrum than the same names do today. The human eye does not readily differentiate hues in the wavelengths between what we today call blue and violet. If this is where Newton meant indigo to lie, most individuals would have difficulty distinguishing indigo from its neighbors. According to Isaac Asimov, "It is customary to list indigo as a color lying between blue and violet, but it has never seemed to me that indigo is worth the dignity of being considered a separate color. To my eyes it seems merely deep blue." Modern color scientists typically divide the spectrum between violet and blue at about 450 nm, with no indigo. Like many other colors (orange, rose, and violet are the best-known), indigo gets its name from an object in the natural world: the plant named indigo, once used for dyeing cloth (see also Indigo dye). The color "electric indigo" is a bright and saturated color between the traditional indigo and violet. This is the brightest color indigo that can be approximated on a computer screen; it is a color located between the (primary) blue and the color violet of the RGB color wheel. The web color "blue violet" or "deep indigo" is a tone of indigo brighter than pigment indigo, but not as bright as electric indigo. The color "pigment indigo" is equivalent to the web color indigo and approximates the color indigo that is usually reproduced in pigments and colored pencils. The color of indigo dye is a different color from either spectrum indigo or pigment indigo. This is the actual color of the dye. A vat full of this dye is a darker color, approximating the web color midnight blue. Below are displayed these four major tones of indigo. "Electric indigo" is brighter than the pigment indigo reproduced below. When plotted on the CIE chromaticity diagram, this color is at 435 nanometers, in the middle of the portion of the spectrum traditionally considered indigo, i.e., between 450 and 420 nanometers. This color is only an approximation of spectral indigo, since actual spectral colors are outside the gamut of the sRGB color system.
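To make these numbers concrete, here is a minimal Python sketch. It uses a common piecewise-linear wavelength-to-sRGB approximation (in the spirit of Dan Bruton's widely circulated method; the exact band boundaries are an assumption), together with commonly cited hex values for the screen tones discussed above ("indigo" and "blueviolet" are exact CSS named colors; the "electric indigo" value is the usual quoted approximation).

```python
# Sketch: rough sRGB rendering of spectral indigo (420-450 nm) and the
# named indigo tones. The piecewise-linear bands follow a common
# Bruton-style approximation; this is illustrative, not colorimetric.

def wavelength_to_rgb(nm: float) -> tuple[float, float, float]:
    """Map a visible wavelength (nm) to a rough sRGB triple in [0, 1]."""
    if 380 <= nm < 440:   # violet/indigo region: full blue, fading red
        return ((440 - nm) / 60, 0.0, 1.0)
    if 440 <= nm < 490:   # blue toward cyan
        return (0.0, (nm - 440) / 50, 1.0)
    if 490 <= nm < 510:   # cyan toward green
        return (0.0, 1.0, (510 - nm) / 20)
    if 510 <= nm < 580:   # green toward yellow
        return ((nm - 510) / 70, 1.0, 0.0)
    if 580 <= nm < 645:   # yellow toward red
        return (1.0, (645 - nm) / 65, 0.0)
    if 645 <= nm <= 780:  # red
        return (1.0, 0.0, 0.0)
    return (0.0, 0.0, 0.0)  # outside the visible range

def to_hex(rgb: tuple[float, float, float]) -> str:
    return "#" + "".join(f"{round(c * 255):02X}" for c in rgb)

# 435 nm sits mid-band in the traditional 420-450 nm indigo range:
print(to_hex(wavelength_to_rgb(435)))  # -> #1500FF, a deep blue-violet

# Commonly cited sRGB approximations of the tones discussed above:
INDIGO_TONES = {
    "electric indigo": "#6F00FF",             # brightest screen approximation
    "blue-violet (deep indigo)": "#8A2BE2",   # CSS named color "blueviolet"
    "pigment indigo (web indigo)": "#4B0082", # CSS named color "indigo"
}
```

The #1500FF result is only suggestive: as noted above, sRGB cannot actually reproduce monochromatic 435 nm light, so any on-screen value is an out-of-gamut approximation.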
At right is displayed the web color "blue-violet", a color intermediate in brightness between electric indigo and pigment indigo. It is also known as "deep indigo". The color box at right displays the web color indigo, the color indigo as it would be reproduced by artists' paints, as opposed to the brighter indigo above (electric indigo) that it is possible to reproduce on a computer screen. Its hue is closer to violet than to the indigo dye for which the color is named. Pigment indigo can be obtained by mixing 55% pigment cyan with about 45% pigment magenta. Compare the subtractive colors to the additive colors in the two primary color charts in the article on primary colors to see the distinction between electric colors as reproducible from light on a computer screen (additive colors) and the pigment colors reproducible with pigments (subtractive colors); the additive colors are significantly brighter because they are produced from light instead of pigment. Web color indigo represents the way the color indigo was always reproduced in pigments, paints, or colored pencils in the 1950s. By the 1970s, because of the advent of psychedelic art, artists had become accustomed to brighter pigments. Pigments called "bright indigo" or "bright blue-violet" (the pigment equivalent of the electric indigo reproduced in the section above) became available in artists' pigments and colored pencils. 'Tropical Indigo' is the color that is called "añil" in the "Guía de coloraciones" (Guide to colorations) by Rosa Gallego and Juan Carlos Sanz, a color dictionary published in 2005 that is widely popular in the Hispanophone realm. "Indigo dye" is a greenish dark blue color, obtained from either the leaves of the tropical indigo plant ("Indigofera"), or from woad ("Isatis tinctoria"), or the Chinese indigo ("Persicaria tinctoria"). Many societies make use of the "Indigofera" plant for producing different shades of blue. When cloth is repeatedly boiled in an indigo dye bath and left to dry (the cycle being repeated as needed), the blue pigment on the cloth becomes darker. After dyeing, the cloth is hung in the open air to dry. A Native American woman described the process used by the Cherokee Indians when extracting the dye: "We raised our indigo which we cut in the morning while the dew was still on it; then we put it in a tub and soaked it overnight, and the next day we foamed it up by beating it with a gourd. We let it stand overnight again, and the next day rubbed tallow on our hands to kill the foam. Afterwards, we poured the water off, and the sediment left in the bottom we would pour into a pitcher or crock to let it get dry, and then we would put it into a poke made of cloth (i.e. a sack made of coarse cloth), and then when we wanted any of it to dye [there]with, we would take the dry indigo." In Sa Pa, Vietnam, the tropical indigo ("Indigofera tinctoria") leaves are harvested and, while still fresh, placed inside a tub of room-temperature to lukewarm water, where they are left to sit for 3 to 4 days and allowed to ferment, until the water turns green. Afterwards, crushed limestone (pickling lime) is added to the water, at which time the water with the leaves is vigorously agitated for 15 to 20 minutes, until the water turns blue. The blue pigment settles as sediment at the bottom of the tub. The sediment is scooped out and stored. When dyeing cloth, the pigment is then boiled in a vat of water; the cloth (usually made from yarns of hemp) is inserted into the vat to absorb the dye.
After hanging out to dry, the boiling process is repeated as often as needed to produce a darker color. In literature, Marina Warner's novel "Indigo" (1992) is a retelling of Shakespeare's "The Tempest" and features the production of indigo dye by Sycorax. The French Army adopted dark blue indigo at the time of the French Revolution, as a replacement for the white uniforms previously worn by the Royal infantry regiments. In 1806, Napoleon decided to restore the white coats because of shortages of indigo dye imposed by the British continental blockade. However, the greater practicability of the blue color led to its retention, and indigo remained the dominant color of French military coats until 1914. Spiritualist applications favor electric indigo, because the color is positioned between blue and violet on the spectrum.
https://en.wikipedia.org/wiki?curid=15250
International Monetary Fund
The International Monetary Fund (IMF) is an international organization headquartered in Washington, D.C., consisting of 189 countries working to foster global monetary cooperation, secure financial stability, facilitate international trade, promote high employment and sustainable economic growth, and reduce poverty around the world, while periodically depending on the World Bank for its resources. Formed in 1944 at the Bretton Woods Conference, primarily according to the ideas of Harry Dexter White and John Maynard Keynes, it came into formal existence in 1945 with 29 member countries and the goal of reconstructing the international payment system. It now plays a central role in the management of balance of payments difficulties and international financial crises. Countries contribute funds to a pool through a quota system from which countries experiencing balance of payments problems can borrow money. The fund's resources stood at XDR 477 billion (about US$667 billion). Through the fund and other activities such as the gathering of statistics and analysis, surveillance of its members' economies, and the demand for particular policies, the IMF works to improve the economies of its member countries. The organization's objectives stated in the Articles of Agreement are: to promote international monetary co-operation, international trade, high employment, exchange-rate stability, and sustainable economic growth, and to make resources available to member countries in financial difficulty. IMF funds come from two major sources: quotas and loans. Quotas, which are pooled funds of member nations, generate most IMF funds. The size of a member's quota depends on its economic and financial importance in the world. Nations with larger economic importance have larger quotas. The quotas are increased periodically as a means of boosting the IMF's resources in the form of special drawing rights. The current Managing Director (MD) and Chairwoman of the IMF is Bulgarian economist Kristalina Georgieva, who has held the post since October 1, 2019. Gita Gopinath was appointed Chief Economist of the IMF effective 1 October 2018. She received her PhD in economics from Princeton University. Prior to her IMF appointment she was economic adviser to the Chief Minister of Kerala, India. According to the IMF itself, it works to foster global growth and economic stability by providing policy advice and financing to members, and by working with developing nations to help them achieve macroeconomic stability and reduce poverty. The rationale for this is that private international capital markets function imperfectly and many countries have limited access to financial markets. Such market imperfections, together with balance-of-payments financing, provide the justification for official financing, without which many countries could only correct large external payment imbalances through measures with adverse economic consequences. The IMF provides alternate sources of financing. Upon the founding of the IMF, its three primary functions were: to oversee the fixed exchange rate arrangements between countries, thus helping national governments manage their exchange rates and allowing these governments to prioritize economic growth, and to provide short-term capital to aid the balance of payments. This assistance was meant to prevent the spread of international economic crises.
The IMF was also intended to help mend the pieces of the international economy after the Great Depression and World War II, as well as to provide capital investments for economic growth and projects such as infrastructure. The IMF's role was fundamentally altered by the floating exchange rates post-1971. It shifted to examining the economic policies of countries with IMF loan agreements to determine if a shortage of capital was due to economic fluctuations or economic policy. The IMF also researched what types of government policy would ensure economic recovery. A particular concern of the IMF was to prevent financial crises, such as those in Mexico in 1982, Brazil in 1987, East Asia in 1997–98, and Russia in 1998, from spreading and threatening the entire global financial and currency system. The challenge was to promote and implement policy that reduced the frequency of crises among the emerging market countries, especially the middle-income countries which are vulnerable to massive capital outflows. Rather than maintaining a position of oversight of only exchange rates, their function became one of surveillance of the overall macroeconomic performance of member countries. Their role became a lot more active because the IMF now manages economic policy rather than just exchange rates. In addition, the IMF negotiates conditions on lending and loans under their policy of conditionality, which was established in the 1950s. Low-income countries can borrow on concessional terms, which means there is a period of time with no interest, through the Extended Credit Facility (ECF), the Standby Credit Facility (SCF) and the Rapid Credit Facility (RCF). Nonconcessional loans, which include interest rates, are provided mainly through Stand-By Arrangements (SBA), the Flexible Credit Line (FCL), the Precautionary and Liquidity Line (PLL), and the Extended Fund Facility. The IMF provides emergency assistance via the Rapid Financing Instrument (RFI) to members facing urgent balance-of-payments needs. The IMF is mandated to oversee the international monetary and financial system and monitor the economic and financial policies of its member countries. This activity is known as surveillance and facilitates international co-operation. Since the demise of the Bretton Woods system of fixed exchange rates in the early 1970s, surveillance has evolved largely by way of changes in procedures rather than through the adoption of new obligations. The responsibilities changed from those of guardian to those of overseer of members' policies. The Fund typically analyses the appropriateness of each member country's economic and financial policies for achieving orderly economic growth, and assesses the consequences of these policies for other countries and for the global economy. In 1995 the International Monetary Fund began to work on data dissemination standards with a view to guiding IMF member countries to disseminate their economic and financial data to the public. The International Monetary and Financial Committee (IMFC) endorsed the guidelines for the dissemination standards, which were split into two tiers: the General Data Dissemination System (GDDS) and the Special Data Dissemination Standard (SDDS). The executive board approved the SDDS and GDDS in 1996 and 1997 respectively, and subsequent amendments were published in a revised "Guide to the General Data Dissemination System". The system is aimed primarily at statisticians and aims to improve many aspects of statistical systems in a country.
It is also part of the World Bank Millennium Development Goals and Poverty Reduction Strategy Papers. The primary objective of the GDDS is to encourage member countries to build a framework to improve data quality and statistical capacity, to evaluate statistical needs, and to set priorities in improving the timeliness, transparency, reliability, and accessibility of financial and economic data. Some countries initially used the GDDS, but later upgraded to the SDDS. Some entities that are not themselves IMF members also contribute statistical data to the systems. IMF conditionality is a set of policies or conditions that the IMF requires in exchange for financial resources. The IMF does require collateral from countries for loans but also requires the government seeking assistance to correct its macroeconomic imbalances in the form of policy reform. If the conditions are not met, the funds are withheld. The concept of conditionality was introduced in a 1952 Executive Board decision and later incorporated into the Articles of Agreement. Conditionality is associated with economic theory as well as an enforcement mechanism for repayment. Stemming primarily from the work of Jacques Polak, the theoretical underpinning of conditionality was the "monetary approach to the balance of payments". Some of the conditions for structural adjustment can include measures such as fiscal austerity, privatisation of state enterprises, and trade liberalisation. These conditions are known as the Washington Consensus. These loan conditions ensure that the borrowing country will be able to repay the IMF and that the country will not attempt to solve their balance-of-payment problems in a way that would negatively impact the international economy. The incentive problem of moral hazard (when economic agents maximise their own utility to the detriment of others because they do not bear the full consequences of their actions) is mitigated through conditions rather than providing collateral; countries in need of IMF loans do not generally possess internationally valuable collateral anyway. Conditionality also reassures the IMF that the funds it lends will be used for the purposes defined by the Articles of Agreement and provides safeguards that the country will be able to rectify its macroeconomic and structural imbalances. In the judgment of the IMF, the adoption by the member of certain corrective measures or policies will allow it to repay the IMF, thereby ensuring that the resources will be available to support other members. Historically, borrowing countries have had a good track record of repaying credit extended under the IMF's regular lending facilities with full interest over the duration of the loan. This indicates that IMF lending does not impose a burden on creditor countries, as lending countries receive market-rate interest on most of their quota subscription, plus any of their own-currency subscriptions that are loaned out by the IMF, plus all of the reserve assets that they provide the IMF. The IMF was originally laid out as a part of the Bretton Woods system exchange agreement in 1944. During the Great Depression, countries sharply raised barriers to trade in an attempt to improve their failing economies. This led to the devaluation of national currencies and a decline in world trade. This breakdown in international monetary co-operation created a need for oversight.
The representatives of 45 governments met at the Bretton Woods Conference in the Mount Washington Hotel in Bretton Woods, New Hampshire, in the United States, to discuss a framework for postwar international economic co-operation and how to rebuild Europe. There were two views on the role the IMF should assume as a global economic institution. American delegate Harry Dexter White foresaw an IMF that functioned more like a bank, making sure that borrowing states could repay their debts on time. Most of White's plan was incorporated into the final acts adopted at Bretton Woods. British economist John Maynard Keynes, on the other hand, imagined that the IMF would be a cooperative fund upon which member states could draw to maintain economic activity and employment through periodic crises. This view suggested an IMF that helped governments to act as the United States government had during the New Deal, in response to the great recession of the 1930s. The IMF formally came into existence on 27 December 1945, when the first 29 countries ratified its Articles of Agreement. By the end of 1946 the IMF had grown to 39 members. On 1 March 1947, the IMF began its financial operations, and on 8 May France became the first country to borrow from it. The IMF was one of the key organizations of the international economic system; its design allowed the system to balance the rebuilding of international capitalism with the maximisation of national economic sovereignty and human welfare, also known as embedded liberalism. The IMF's influence in the global economy steadily increased as it accumulated more members. The increase reflected in particular the attainment of political independence by many African countries and, more recently, the 1991 dissolution of the Soviet Union, because most countries in the Soviet sphere of influence had not joined the IMF until then. The Bretton Woods exchange rate system prevailed until 1971, when the United States government suspended the convertibility of the US$ (and dollar reserves held by other governments) into gold. This is known as the Nixon Shock. The changes to the IMF articles of agreement reflecting these changes were ratified by the 1976 Jamaica Accords. Later in the 1970s, large commercial banks began lending to states because they were awash in cash deposited by oil exporters. The lending of the so-called money center banks led to the IMF changing its role in the 1980s after a world recession provoked a crisis that brought the IMF back into global financial governance. The IMF provided two major lending packages in the early 2000s to Argentina (during the 1998–2002 Argentine great depression) and Uruguay (after the 2002 Uruguay banking crisis). However, by the mid-2000s, IMF lending was at its lowest share of world GDP since the 1970s. In May 2010, the IMF participated, in a 3:11 proportion, in the first Greek bailout, which totalled €110 billion, to address the great accumulation of public debt caused by continuing large public sector deficits. As part of the bailout, the Greek government agreed to adopt austerity measures that would reduce the deficit from 11% in 2009 to "well below 3%" in 2014. The bailout did not include debt restructuring measures such as a haircut, to the chagrin of the Swiss, Brazilian, Indian, Russian, and Argentinian Directors of the IMF, with the Greek authorities themselves (at the time, PM George Papandreou and Finance Minister Giorgos Papakonstantinou) ruling out a haircut.
A second bailout package of more than €100 billion was agreed over the course of a few months from October 2011, during which time Papandreou was forced from office. The so-called Troika, of which the IMF is part, are joint managers of this programme, which was approved by the Executive Directors of the IMF on 15 March 2012 for XDR 23.8 billion and saw private bondholders take a haircut of upwards of 50%. In the interval between May 2010 and February 2012 the private banks of the Netherlands, France, and Germany reduced their exposure to Greek debt from €122 billion to €66 billion. At the time, the largest borrowers from the IMF, in order, were Greece, Portugal, Ireland, Romania, and Ukraine. On 25 March 2013, a €10 billion international bailout of Cyprus was agreed by the Troika, at the cost to the Cypriots of its agreement: to close the country's second-largest bank and to impose a one-time bank deposit levy on Bank of Cyprus uninsured deposits. No insured deposit of €100,000 or less was to be affected under the terms of a novel bail-in scheme. The topic of sovereign debt restructuring was taken up by the IMF in April 2013 for the first time since 2005, in a report entitled "Sovereign Debt Restructuring: Recent Developments and Implications for the Fund's Legal and Policy Framework". The paper, which was discussed by the board on 20 May, summarised the recent experiences in Greece, St Kitts and Nevis, Belize, and Jamaica. An explanatory interview with Deputy Director Hugh Bredenkamp was published a few days later, as was a deconstruction by Matina Stevis of the "Wall Street Journal". In the October 2013 Fiscal Monitor publication, the IMF suggested that a capital levy capable of reducing Euro-area government debt ratios to "end-2007 levels" would require a very high tax rate of about 10%. The Fiscal Affairs department of the IMF, headed at the time by Acting Director Sanjeev Gupta, produced a January 2014 report entitled "Fiscal Policy and Income Inequality" that stated that "Some taxes levied on wealth, especially on immovable property, are also an option for economies seeking more progressive taxation ... Property taxes are equitable and efficient, but underutilized in many economies ... There is considerable scope to exploit this tax more fully, both as a revenue source and as a redistributive instrument." At the end of March 2014, the IMF secured an $18 billion bailout fund for the provisional government of Ukraine in the aftermath of the 2014 Ukrainian revolution. In March 2020, Kristalina Georgieva announced that the IMF stood ready to mobilize $1 trillion as its response to the COVID-19 pandemic. This was in addition to the $50 billion fund it had announced two weeks earlier, of which $5 billion had already been requested by Iran. One day earlier, on 11 March, the UK called to pledge £150 billion to the IMF catastrophe relief fund. It came to light on 27 March that "more than 80 poor and middle-income countries" had sought a bailout due to the coronavirus. On 13 April 2020, the IMF said that it "would provide immediate debt relief to 25 member countries under its Catastrophe Containment and Relief Trust (CCRT)" programme. Not all member countries of the IMF are sovereign states, and therefore not all "member countries" of the IMF are members of the United Nations.
Amidst "member countries" of the IMF that are not member states of the UN are non-sovereign areas with special jurisdictions that are officially under the sovereignty of full UN member states, such as Aruba, Curaçao, Hong Kong, and Macau, as well as Kosovo. The corporate members appoint "ex-officio" voting members, who are listed below. All members of the IMF are also International Bank for Reconstruction and Development (IBRD) members and vice versa. Former members are Cuba (which left in 1964), and the Republic of China (Taiwan), which was ejected from the UN in 1980 after losing the support of then United States President Jimmy Carter and was replaced by the People's Republic of China. However, "Taiwan Province of China" is still listed in the official IMF indices. Apart from Cuba, the other UN states that do not belong to the IMF are Andorra, Liechtenstein, Monaco and North Korea. The former Czechoslovakia was expelled in 1954 for "failing to provide required data" and was readmitted in 1990, after the Velvet Revolution. Poland withdrew in 1950—allegedly pressured by the Soviet Union—but returned in 1986. Any country may apply to be a part of the IMF. Post-IMF formation, in the early postwar period, rules for IMF membership were left relatively loose. Members needed to make periodic membership payments towards their quota, to refrain from currency restrictions unless granted IMF permission, to abide by the Code of Conduct in the IMF Articles of Agreement, and to provide national economic information. However, stricter rules were imposed on governments that applied to the IMF for funding. The countries that joined the IMF between 1945 and 1971 agreed to keep their exchange rates secured at rates that could be adjusted only to correct a "fundamental disequilibrium" in the balance of payments, and only with the IMF's agreement. Member countries of the IMF have access to information on the economic policies of all member countries, the opportunity to influence other members' economic policies, technical assistance in banking, fiscal affairs, and exchange matters, financial support in times of payment difficulties, and increased opportunities for trade and investment. The Board of Governors consists of one governor and one alternate governor for each member country. Each member country appoints its two governors. The Board normally meets once a year and is responsible for electing or appointing executive directors to the Executive Board. While the Board of Governors is officially responsible for approving quota increases, special drawing right allocations, the admittance of new members, compulsory withdrawal of members, and amendments to the Articles of Agreement and By-Laws, in practice it has delegated most of its powers to the IMF's Executive Board. The Board of Governors is advised by the International Monetary and Financial Committee and the Development Committee. The International Monetary and Financial Committee has 24 members and monitors developments in global liquidity and the transfer of resources to developing countries. The Development Committee has 25 members and advises on critical development issues and on financial resources required to promote economic development in developing countries. They also advise on trade and environmental issues. The Board of Governors reports directly to the Managing Director of the IMF, Kristalina Georgieva. 24 Executive Directors make up the Executive Board. 
The Executive Directors represent all 189 member countries in a geographically based roster. Countries with large economies have their own Executive Director, but most countries are grouped in constituencies representing four or more countries. Following the "2008 Amendment on Voice and Participation", which came into effect in March 2011, seven countries each appoint an Executive Director: the United States, Japan, China, Germany, France, the United Kingdom, and Saudi Arabia. The remaining 17 Directors represent constituencies consisting of 2 to 23 countries. This Board usually meets several times each week. The Board membership and constituency are scheduled for periodic review every eight years. The IMF is led by a managing director, who is head of the staff and serves as Chairman of the Executive Board. Historically, the IMF's managing director has been European and the president of the World Bank has been from the United States. However, this standard is increasingly being questioned and competition for these two posts may soon open up to include other qualified candidates from any part of the world. In August 2019, the International Monetary Fund removed the age limit of 65 for its managing director position. In 2011 the world's largest developing countries, the BRIC nations, issued a statement declaring that the tradition of appointing a European as managing director undermined the legitimacy of the IMF and called for the appointment to be merit-based. Former managing director Dominique Strauss-Kahn was arrested in connection with charges of sexually assaulting a New York hotel room attendant and resigned on 18 May 2011. The charges were later dropped. On 28 June 2011 Christine Lagarde was confirmed as managing director of the IMF for a five-year term starting on 5 July 2011. She was re-elected by consensus for a second five-year term, starting 5 July 2016, being the only candidate nominated for the post of Managing Director. The managing director is assisted by a First Deputy managing director who, by convention, has always been a national of the United States. Together, the Managing Director and his/her First Deputy lead the senior management of the IMF. Like the Managing Director, the First Deputy traditionally serves a five-year term. Voting power in the IMF is based on a quota system. Each member has a number of basic votes (the basic votes of all members together make up 5.502% of the total votes and are distributed equally among them), plus one additional vote for each special drawing right (SDR) 100,000 of a member country's quota; a worked sketch of this arithmetic follows below. The special drawing right is the unit of account of the IMF and represents a claim to currency. It is based on a basket of key international currencies. The basic votes generate a slight bias in favour of small countries, but the additional votes determined by SDR outweigh this bias. Changes in the voting shares require approval by a super-majority of 85% of voting power. In December 2015, the United States Congress adopted legislation authorising the 2010 Quota and Governance Reforms, allowing them to come into effect. The IMF's quota system was created to raise funds for loans. Each IMF member country is assigned a quota, or contribution, that reflects the country's relative size in the global economy. Each member's quota also determines its relative voting power. Thus, financial contributions from member governments are linked to voting power in the organization.
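Here is a minimal Python sketch of the voting arithmetic just described. The quota figures are invented for illustration (they are not real IMF data), and the sketch assumes the post-2008 rule that basic votes collectively equal 5.502% of total votes, split equally among members.

```python
# Illustrative IMF-style voting arithmetic: one vote per SDR 100,000 of
# quota, plus an equal share of the basic votes, which collectively make
# up 5.502% of all votes. Quotas below are hypothetical, not real data.

def voting_power(quotas_sdr: dict[str, float]) -> dict[str, float]:
    """Return each member's share of total votes, in percent."""
    quota_votes = {m: q / 100_000 for m, q in quotas_sdr.items()}
    total_quota_votes = sum(quota_votes.values())
    # Basic votes are 5.502% of the total, so solve
    # T = total_quota_votes + 0.05502 * T for the total vote count T:
    total_votes = total_quota_votes / (1 - 0.05502)
    basic_per_member = 0.05502 * total_votes / len(quotas_sdr)
    return {m: 100 * (v + basic_per_member) / total_votes
            for m, v in quota_votes.items()}

# Three hypothetical members with quotas of SDR 80bn, 15bn, and 1bn:
print(voting_power({"Big": 80e9, "Mid": 15e9, "Small": 1e9}))
```

With these made-up figures, the smallest member's share rises from about 1% (pure quota count) to about 2.8%, illustrating the slight bias toward small countries noted above, while quota-based votes still dominate.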
This system follows the logic of a shareholder-controlled organization: wealthy countries have more say in the making and revision of rules. Since decision making at the IMF reflects each member's relative economic position in the world, wealthier countries that provide more money to the IMF have more influence than poorer members that contribute less; nonetheless, the IMF focuses on redistribution. Quotas are normally reviewed every five years and can be increased when deemed necessary by the Board of Governors. IMF voting shares are relatively inflexible: countries that grow economically have tended to become under-represented as their voting power lags behind. Reforms to improve the representation of developing countries within the IMF have been suggested. These countries' economies represent a large portion of the global economic system, but this is not reflected in the IMF's decision-making process through the nature of the quota system. Joseph Stiglitz argues, "There is a need to provide more effective voice and representation for developing countries, which now represent a much larger portion of world economic activity since 1944, when the IMF was created." In 2008, a number of quota reforms were passed, including shifting 6% of quota shares to dynamic emerging markets and developing countries. The IMF's membership is divided along income lines: certain countries provide financial resources while others use these resources. Both developed country "creditors" and developing country "borrowers" are members of the IMF. The developed countries provide the financial resources but rarely enter into IMF loan agreements; they are the creditors. Conversely, the developing countries use the lending services but contribute little to the pool of money available to lend because their quotas are smaller; they are the borrowers. Thus, tension is created around governance issues because these two groups, creditors and borrowers, have fundamentally different interests. The criticism is that the system of voting power distribution through a quota system institutionalizes borrower subordination and creditor dominance. The resulting division of the IMF's membership into borrowers and non-borrowers has increased the controversy around conditionality because the borrowers are interested in increasing loan access while creditors want to maintain reassurance that the loans will be repaid. One source found that the average overall use of IMF credit per decade increased, in real terms, by 21% between the 1970s and 1980s, and increased again by just over 22% from the 1980s to the 1991–2005 period. Another study has suggested that since 1950 the continent of Africa alone has received $300 billion from the IMF, the World Bank, and affiliate institutions. A study by Bumba Mukherjee found that developing democratic countries benefit more from IMF programs than developing autocratic countries because policy-making, and the process of deciding where loaned money is used, is more transparent within a democracy. One study done by Randall Stone found that although earlier studies found little impact of IMF programs on balance of payments, more recent studies using more sophisticated methods and larger samples "usually found IMF programs improved the balance of payments". The Exceptional Access Framework was created in 2003, when John B. Taylor was Under Secretary of the US Treasury for International Affairs.
The new Framework became fully operational in February 2003, and it was applied in the subsequent decisions on Argentina and Brazil. Its purpose was to place some sensible rules and limits on the way the IMF makes loans to support governments with debt problems, especially in emerging markets, and thereby move away from the bailout mentality of the 1990s. Such a reform was essential for ending the crisis atmosphere that then existed in emerging markets. The reform was closely related to, and put in place nearly simultaneously with, the actions of several emerging market countries to place collective action clauses in their bond contracts. In 2010, the framework was abandoned so the IMF could make loans to Greece in an unsustainable and political situation. Following the April 2013 staff report on sovereign debt restructuring discussed above, the staff was directed to formulate an updated policy, which was accomplished on 22 May 2014 with a report entitled "The Fund's Lending Framework and Sovereign Debt: Preliminary Considerations", and taken up by the Executive Board on 13 June. The staff proposed that "in circumstances where a (Sovereign) member has lost market access and debt is considered sustainable ... the IMF would be able to provide Exceptional Access on the basis of a debt operation that involves an extension of maturities", which was labeled a "reprofiling operation". These reprofiling operations would "generally be less costly to the debtor and creditors—and thus to the system overall—relative to either an upfront debt reduction operation or a bail-out that is followed by debt reduction ... (and) would be envisaged only when both (a) a member has lost market access and (b) debt is assessed to be sustainable, but not with high probability ... Creditors will only agree if they understand that such an amendment is necessary to avoid a worse outcome: namely, a default and/or an operation involving debt reduction ... Collective action clauses, which now exist in most—but not all—bonds would be relied upon to address collective action problems." According to a 2002 study by Randall W. Stone, the academic literature on the IMF shows "no consensus on the long-term effects of IMF programs on growth". Overseas Development Institute (ODI) research undertaken in 1980 included criticisms of the IMF which support the analysis that it is a pillar of what activist Titus Alexander calls global apartheid. ODI conclusions were that the IMF's very nature of promoting market-oriented approaches attracted unavoidable criticism. On the other hand, the IMF could serve as a scapegoat while allowing governments to blame international bankers. The ODI conceded that the IMF was insensitive to political aspirations of LDCs while its policy conditions were inflexible.
Argentina, which had been considered by the IMF to be a model country in its compliance to policy proposals by the Bretton Woods institutions, experienced a catastrophic economic crisis in 2001, which some believe to have been caused by IMF-induced budget restrictions (which undercut the government's ability to sustain national infrastructure even in crucial areas such as health, education, and security) and privatisation of strategically vital national resources. Others attribute the crisis to Argentina's misdesigned fiscal federalism, which caused subnational spending to increase rapidly. The crisis added to widespread hatred of this institution in Argentina and other South American countries, with many blaming the IMF for the region's economic problems. The trend, as of early 2006, toward moderate left-wing governments in the region and a growing concern with the development of a regional economic policy largely independent of big business pressures has been ascribed to this crisis. In 2006, a senior ActionAid policy analyst, Akanksha Marphatia, stated that IMF policies in Africa undermine any possibility of meeting the Millennium Development Goals (MDGs) due to imposed restrictions that prevent spending on important sectors, such as education and health. In an interview on 19 May 2008, the former Romanian Prime Minister Călin Popescu-Tăriceanu claimed that "Since 2005, IMF is constantly making mistakes when it appreciates the country's economic performances". Former Tanzanian President Julius Nyerere, who claimed that debt-ridden African states were ceding sovereignty to the IMF and the World Bank, famously asked, "Who elected the IMF to be the ministry of finance for every country in the world?" Former chief economist of the IMF and former Reserve Bank of India (RBI) Governor Raghuram Rajan, who predicted the financial crisis of 2007–08, criticised the IMF for remaining a sideline player to the developed world. He criticised the IMF for praising the monetary policies of the US, which he believed were wreaking havoc in emerging markets. He had been critical of the ultra-loose money policies of the Western nations and the IMF. Countries such as Zambia have not received proper aid, with long-lasting effects, leading to concern from economists. Since 2005, Zambia (as well as 29 other African countries) did receive debt write-offs, which helped with the country's medical and education funds. However, Zambia returned to a debt of over half its GDP in less than a decade. American economist William Easterly, sceptical of the IMF's methods, had initially warned that "debt relief would simply encourage more reckless borrowing by crooked governments unless it was accompanied by reforms to speed up economic growth and improve governance," according to "The Economist". The IMF has been criticised for being "out of touch" with local economic conditions, cultures, and environments in the countries where it requires policy reform. The economic advice the IMF gives might not always take into consideration the difference between what spending means on paper and how it is felt by citizens. Countries charge that with excessive conditionality, they do not "own" the programs and the links are broken between a recipient country's people, its government, and the goals being pursued by the IMF. Jeffrey Sachs argues that the IMF's "usual prescription is 'budgetary belt tightening to countries who are much too poor to own belts'".
Sachs wrote that the IMF's role as a generalist institution specialising in macroeconomic issues needs reform. Conditionality has also been criticised because a country can pledge collateral of "acceptable assets" to obtain waivers, if one assumes that all countries are able to provide "acceptable collateral". One view is that conditionality undermines domestic political institutions. The recipient governments are sacrificing policy autonomy in exchange for funds, which can lead to public resentment of the local leadership for accepting and enforcing the IMF conditions. Political instability can result from more leadership turnover as political leaders are replaced in electoral backlashes. IMF conditions are often criticised for reducing government services, thus increasing unemployment. Another criticism is that IMF programs are only designed to address poor governance, excessive government spending, excessive government intervention in markets, and too much state ownership. This assumes that this narrow range of issues represents the only possible problems; everything is standardised and differing contexts are ignored. A country may also be compelled to accept conditions it would not normally accept had it not been in a financial crisis and in need of assistance. Moreover, regardless of the methodologies and data sets used, studies come to the same conclusion: IMF programs exacerbate income inequality. Analyses using the Gini coefficient make clear that countries with IMF programs face increased income inequality. It is claimed that conditionalities retard social stability and hence inhibit the stated goals of the IMF, while Structural Adjustment Programs lead to an increase in poverty in recipient countries. The IMF sometimes advocates "austerity programmes", cutting public spending and increasing taxes even when the economy is weak, to bring budgets closer to a balance, thus reducing budget deficits. Countries are often advised to lower their corporate tax rate. In "Globalization and Its Discontents", Joseph E. Stiglitz, former chief economist and senior vice-president at the World Bank, criticises these policies. He argues that by converting to a more monetarist approach, the purpose of the fund is no longer valid, as it was designed to provide funds for countries to carry out Keynesian reflations, and that the IMF "was not participating in a conspiracy, but it was reflecting the interests and ideology of the Western financial community." Stiglitz concludes, "Modern high-tech warfare is designed to remove physical contact: dropping bombs from 50,000 feet ensures that one does not 'feel' what one does. Modern economic management is similar: from one's luxury hotel, one can callously impose policies about which one would think twice if one knew the people whose lives one was destroying." The researchers Eric Toussaint and Damien Millet argue that the IMF's policies amount to a new form of colonization that does not need a military presence. International politics play an important role in IMF decision making. The clout of member states is roughly proportional to their contributions to IMF finances. The United States has the greatest number of votes and therefore wields the most influence. Domestic politics often come into play, with politicians in developing countries using conditionality to gain leverage over the opposition in order to influence policy.
The IMF is only one of many international organisations, and it is a generalist institution that deals only with macroeconomic issues; its core areas of concern in developing countries are very narrow. One proposed reform is a movement towards close partnership with other specialist agencies such as UNICEF, the Food and Agriculture Organization (FAO), and the United Nations Development Programme (UNDP). Jeffrey Sachs argues in "The End of Poverty" that the IMF and the World Bank have "the brightest economists and the lead in advising poor countries on how to break out of poverty, but the problem is development economics". Development economics needs reform, not the IMF. He also notes that IMF loan conditions should be paired with other reforms, e.g., trade reform in developed nations, debt cancellation, and increased financial assistance for investments in basic infrastructure. IMF loan conditions cannot stand alone and produce change; they need to be partnered with other reforms or other conditions as applicable. The scholarly consensus is that IMF decision-making is not simply technocratic, but also guided by political and economic concerns. The United States is the IMF's most powerful member, and its influence reaches even into decision-making concerning individual loan agreements. The United States has historically been openly opposed to losing what Treasury Secretary Jacob Lew described in 2015 as its "leadership role" at the IMF, and the United States' "ability to shape international norms and practices". Reforms to give more powers to emerging economies were agreed by the G20 in 2010. The reforms could not pass, however, until they were ratified by the US Congress, since 85% of the Fund's voting power was required for the reforms to take effect, and the Americans held more than 16% of the voting power at the time. After repeated criticism, the United States finally ratified the voting reforms at the end of 2015. The OECD countries maintained their overwhelming majority of voting share, and the United States in particular retained its share at over 16%. The criticism of the US-and-Europe-dominated IMF has led to what some consider 'disenfranchising the world' from the governance of the IMF. Raúl Prebisch, the founding secretary-general of the UN Conference on Trade and Development (UNCTAD), wrote that one of "the conspicuous deficiencies of the general economic theory, from the point of view of the periphery, is its false sense of universality." The role of the Bretton Woods institutions has been controversial since the late Cold War, because of claims that IMF policy makers supported military dictatorships friendly to American and European corporations, but also other anti-communist and Communist regimes (such as Mobutu's Zaire and Ceaușescu's Romania, respectively). Critics also claim that the IMF is generally apathetic or hostile to human rights and labour rights. The controversy has helped spark the anti-globalization movement. An example of the IMF's support for a dictatorship was its ongoing support for Mobutu's rule in Zaire, although its own envoy, Erwin Blumenthal, provided a sobering report about the entrenched corruption and embezzlement and the inability of the country to pay back any loans. Arguments in favour of the IMF say that economic stability is a precursor to democracy; however, critics highlight various examples in which democratised countries fell after receiving IMF loans.
A 2017 study found no evidence of IMF lending programs undermining democracy in borrowing countries. To the contrary, it found "evidence for modest but definitively positive conditional differences in the democracy scores of participating and non-participating countries." A number of civil society organisations have criticised the IMF's policies for their impact on access to food, particularly in developing countries. In October 2008, former United States president Bill Clinton delivered a speech to the United Nations on World Food Day, criticising the World Bank and IMF for their policies on food and agriculture. The FPIF remarked that there is a recurring pattern: "the destabilization of peasant producers by a one-two punch of IMF-World Bank structural adjustment programs that gutted government investment in the countryside followed by the massive influx of subsidized U.S. and European Union agricultural imports after the WTO's Agreement on Agriculture pried open markets." A 2009 study concluded that the strict conditions resulted in thousands of deaths from tuberculosis in Eastern Europe, as public health care had to be weakened. In the 21 countries to which the IMF had given loans, tuberculosis deaths rose by 16.6%. In 2009, a book by Rick Rowden titled "The Deadly Ideas of Neoliberalism: How the IMF has Undermined Public Health and the Fight Against AIDS" claimed that the IMF's monetarist approach towards prioritising price stability (low inflation) and fiscal restraint (low budget deficits) was unnecessarily restrictive and has prevented developing countries from scaling up long-term investment in public health infrastructure. The book claimed the consequences have been chronically underfunded public health systems, leading to demoralising working conditions that have fuelled a "brain drain" of medical personnel, all of which has undermined public health and the fight against HIV/AIDS in developing countries. In 2016, the IMF's research department published a report titled "Neoliberalism: Oversold?" which, while praising some aspects of the "neoliberal agenda", claims that the organisation has been "overselling" fiscal austerity policies and financial deregulation, which it claims have exacerbated both financial crises and economic inequality around the world. IMF policies have been repeatedly criticised for making it difficult for indebted countries to say no to environmentally harmful projects that nevertheless generate revenues, such as oil, coal, and forest-destroying lumber and agriculture projects. Ecuador, for example, had to defy IMF advice repeatedly in order to pursue the protection of its rainforests, though paradoxically this need was cited in the IMF argument to provide support to Ecuador. The IMF acknowledged this paradox in a 2010 report that proposed the IMF Green Fund, a mechanism to issue special drawing rights directly to pay for climate harm prevention and potentially other ecological protection as pursued generally by other environmental finance.
While the response to these moves was generally positive, possibly because ecological protection and energy and infrastructure transformation are more politically neutral than pressures to change social policy, some experts voiced concern that the IMF was not representative, and that the IMF's proposals to generate only US$200 billion a year by 2020 with the SDRs as seed funds did not go far enough to undo the general incentive to pursue destructive projects inherent in the world commodity trading and banking systems; such criticisms are often levelled at the World Trade Organization and large global banking institutions. In the context of the European debt crisis, some observers noted that Spain and California, two troubled economies within Europe and the United States respectively, and also Germany, the primary and politically most fragile supporter of a euro currency bailout, would benefit from IMF recognition of their leadership in green technology, and directly from Green Fund-generated demand for their exports, which could also improve their credit ratings. Globalization encompasses three institutions: global financial markets and transnational companies; national governments linked to each other in economic and military alliances led by the United States; and rising "global governments" such as the World Trade Organization (WTO), the IMF, and the World Bank. Charles Derber argues in his book "People Before Profit", "These interacting institutions create a new global power system where sovereignty is globalized, taking power and constitutional authority away from nations and giving it to global markets and international bodies". Titus Alexander argues that this system institutionalises global inequality between western countries and the Majority World in a form of global apartheid, in which the IMF is a key pillar. The establishment of globalised economic institutions has been both a symptom of and a stimulus for globalisation. The development of the World Bank, the IMF, regional development banks such as the European Bank for Reconstruction and Development (EBRD), and multilateral trade institutions such as the WTO signals a move away from the dominance of the state as the primary actor analysed in international affairs. Globalization has thus been transformative in terms of a reconceptualising of state sovereignty. Following United States President Bill Clinton's administration's aggressive financial deregulation campaign in the 1990s, globalisation leaders overturned long-standing restrictions by governments that limited foreign ownership of their banks, deregulated currency exchange, and eliminated restrictions on how quickly money could be withdrawn by foreign investors. The IMF supports women's empowerment and promotes their rights in countries with a significant gender gap. In December 2016, Lagarde was convicted of giving preferential treatment to businessman-turned-politician Bernard Tapie as he pursued a legal challenge against the French government. At the time of the events, Lagarde was the French economic minister. Within hours of her conviction, in which she escaped any punishment, the fund's 24-member executive board put to rest any speculation that she might have to resign, praising her "outstanding leadership" and the "wide respect" she commands around the world. Former managing director Rodrigo Rato was arrested on 16 April 2015 for alleged fraud, embezzlement and money laundering. On 23 February 2017, the Audiencia Nacional found Rato guilty of embezzlement and sentenced him to four years' imprisonment.
In September 2018, the sentence was confirmed by the Supreme Court of Spain. In March 2011, the Ministers of Economy and Finance of the African Union proposed the establishment of an African Monetary Fund. At the 6th BRICS summit in July 2014, the BRICS nations (Brazil, Russia, India, China, and South Africa) announced the BRICS Contingent Reserve Arrangement (CRA) with an initial size of US$100 billion, a framework to provide liquidity through currency swaps in response to actual or potential short-term balance-of-payments pressures. In 2014, the China-led Asian Infrastructure Investment Bank was established. "Life and Debt", a documentary film, deals with the IMF's policies' influence on Jamaica and its economy from a critical point of view. "Debtocracy", a 2011 independent Greek documentary film, also criticises the IMF. Portuguese musician José Mário Branco's 1982 album "FMI" is inspired by the IMF's intervention in Portugal through monitored stabilisation programs in 1977–78. In the 2015 film "Our Brand Is Crisis", the IMF is mentioned as a point of political contention, where the Bolivian population fears its electoral interference.
https://en.wikipedia.org/wiki?curid=15251
Islands of the Clyde The Islands of the Firth of Clyde are the fifth largest of the major Scottish island groups after the Inner and Outer Hebrides, Orkney and Shetland. They are situated in the Firth of Clyde between Ayrshire and Argyll and Bute. There are about forty islands and skerries, of which only four are inhabited and only nine larger than 40 hectares. The largest and most populous are Arran and Bute, and Great Cumbrae and Holy Island are also served by dedicated ferry routes. Unlike the four larger Scottish archipelagos, none of the isles in this group are connected to one another or to the mainland by bridges. The geology and geomorphology of the area is complex, and the islands and the surrounding sea lochs each have distinctive features. The influence of the Atlantic Ocean and the North Atlantic Drift creates a mild, damp oceanic climate. The larger islands have been continuously inhabited since Neolithic times, were influenced by the emergence of the kingdom of Dál Riata from 500 AD, and were then absorbed into the emerging Kingdom of Alba under Kenneth MacAlpin. They experienced Viking incursions during the Early Middle Ages and then became part of the Kingdom of Scotland in the 13th century. There is a diversity of wildlife, including three species of rare endemic tree. The Highland Boundary Fault runs past Bute and through the northern part of Arran, so from a geological perspective some of the islands are in the Highlands and some in the Central Lowlands. As a result, Arran is sometimes referred to as "Scotland in miniature", and the island is a popular destination for geologists, who come to see intrusive igneous landforms such as sills and dykes as well as sedimentary and metasedimentary rocks ranging widely in age. Visiting in 1787, the geologist James Hutton found his first example of an unconformity there, and the spot is one of the most famous places in the study of geology. A group of weakly metamorphosed rocks that form the Highland Border Complex lie discontinuously along the Highland Boundary Fault. One of the most prominent exposures is along Loch Fad on Bute. Ailsa Craig, which lies some distance south of Arran, has been quarried for a rare type of micro-granite containing riebeckite known as "Ailsite", which is used by Kays of Scotland to make curling stones. As of 2004, 60 to 70% of all curling stones in use were made from granite from the island. In common with the rest of Scotland, the Firth of Clyde was covered by ice sheets during the Pleistocene ice ages and the landscape is much affected by glaciation. Arran's highest peaks may have been nunataks at this time. After the last retreat of the ice, sea level changes and the isostatic rise of the land make charting post-glacial coastlines a complex task, but the resultant clifflines behind raised beaches are a prominent feature of the entire coastline. The soils of the islands reflect the diverse geology. Bute has the most productive land, and a pattern of deposits that is typical of the southwest of Scotland. There is a mixture of boulder clay and other glacial deposits in the eroded valleys, and raised beach and marine deposits elsewhere, especially to the south and west, which result in a machair landscape in places inland from the sandy bays, such as at Stravanan. The Firth of Clyde, in which these islands lie, is north of the Irish Sea and has numerous branching inlets, some of them substantial features in their own right. These include Loch Goil, Loch Long, Gare Loch, Loch Fyne and the estuary of the River Clyde.
In places the effect of glaciation on the seabed is pronounced. For example, the Firth is deep between Arran and Bute, although the two islands are only a short distance apart. The islands are all exposed to wind and tide, and various lighthouses, such as those on Ailsa Craig, Pladda and Davaar, act as an aid to navigation. The Firth of Clyde lies between 55 and 56 degrees north, at the same latitude as Labrador in Canada and north of the Aleutian Islands, but the influence of the North Atlantic Drift—the northern extension of the Gulf Stream—ameliorates the winter weather and the area enjoys a mild, damp oceanic climate. Temperatures at sea level are generally cool in winter and mild in summer. Snow seldom lies at sea level and frosts are generally less frequent than on the mainland. In common with most islands of the west coast of Scotland, rainfall is generally high on Bute, the Cumbraes and in the south of Arran, and higher still in the north of Arran. The Arran mountains are wetter still, the summits receiving the greatest annual totals. May, June and July are the sunniest months, with upwards of 200 hours of bright sunshine being recorded on average, southern Bute benefiting from a particularly high number of sunny days. Mesolithic humans arrived in the Firth of Clyde during the fourth millennium BC, probably from Ireland. This was followed by a wave of Neolithic peoples using the same route, and there is some evidence that the Firth of Clyde was a significant route via which mainland Scotland was colonised at this time. A particular style of megalithic structure developed in Argyll, the Clyde estuary and elsewhere in western Scotland that has become known as the Clyde cairn. These cairns are rectangular or trapezoidal in shape, with a small enclosing chamber faced with large slabs of stone set on end and sometimes subdivided into smaller compartments. A forecourt area may have been used for displays or rituals associated with the interment of the dead, who were placed inside the chambers. They are concentrated in Arran, Bute and Kintyre, and it is likely that the Clyde cairns were the earliest forms of Neolithic monument constructed by incoming settlers, although few of the 100 or so examples have been radiocarbon dated. An example at Monamore on Arran has been dated to 3160 BC, although it was almost certainly built earlier than that, possibly c. 4000 BC. There are also numerous standing stones dating from prehistoric times, including six stone circles on Machrie Moor, Arran, and other examples on Great Cumbrae and Bute. Bronze Age settlers also constructed megaliths at various sites, many of them dating from the second millennium BC, although the chambered cairns were replaced by burial cists, found on, for example, Inchmarnock. Settlement evidence, especially from the early part of this era, is however poor. The Queen of the Inch necklace is an article of jewellery made of jet, found on Bute, that dates from circa 2000 BC. During the early Iron Age, Brythonic culture held sway, there being no evidence that the Roman occupation of southern Scotland extended to these islands. During the 2nd century AD Irish influence was at work in the region, and by the 6th century the kingdom of Dál Riata was established. Unlike the P-Celtic speaking Brythons, these Gaels spoke a form of Gaelic that still survives in the Hebrides. Through the efforts of Saint Ninian and others, Christianity slowly supplanted Druidism.
Dál Riata flourished from the time of Fergus Mór in the late fifth century until the Viking incursions that commenced in the late eighth century. Islands close to the shores of modern Ayrshire would have remained part of the Kingdom of Strathclyde during this period, whilst the main islands became part of the emerging Kingdom of Alba founded by Kenneth MacAlpin (Cináed mac Ailpín). The Islands of the Clyde historically formed the border zone between the Norse "Suðreyjar" and Scotland. As such, many of these islands fell under Norse hegemony between the 9th and 13th centuries. The islands of the Clyde may well have formed the power base of Somhairle mac Giolla Brighde and his descendants by the last half of the 12th century. At about this time, the authority of the Steward of Scotland seems to have encroached into the region, and there is reason to suspect that, by the turn of the 13th century, the islands had been absorbed by the expanding Stewart lordship. The western extension of Scottish authority appears to have been one of the factors behind a Norwegian invasion of the region in 1230, in which the invaders seized Rothesay Castle. In 1263 Norwegian troops commanded by Haakon Haakonarson repeated the feat, but the ensuing Battle of Largs between Scots and Norwegian forces, which took place on the shores of the Firth of Clyde, was inconclusive as a military contest. It nevertheless marked an ultimately terminal weakening of Norwegian power in Scotland. Haakon retreated to Orkney, where he died in December 1263, entertained on his death bed by recitations of the sagas. Following this ill-fated expedition, all rights that the Norwegian Crown "had of old therein" in relation to the islands were yielded to the Kingdom of Scotland as a result of the 1266 Treaty of Perth. From the mid-thirteenth century to the present day, all of the islands of the Clyde have remained part of Scotland. From the commencement of the early medieval period until 1387, all of these isles were part of the Diocese of Sodor and Man, based at Peel on the Isle of Man. Thereafter, the seat of the Bishopric of the Isles was relocated to the north, firstly to Snizort on Skye and then to Iona, a state of affairs which continued until the 16th-century Scottish Reformation. The century following 1750 was a time of significant change. New forms of transport, industry and agriculture brought sweeping changes, and an end to traditional ways of life that had endured for centuries. The aftermath of the Battle of Culloden marked the beginning of the end for the clan system, and whilst there were marked improvements in living standards for some, these transformations came at a cost for others. In the early 19th century Alexander, 10th Duke of Hamilton (1767–1852), embarked on a programme of clearances that had a devastating effect on Arran's population. Whole villages were removed and the Gaelic culture of the island was dealt a terminal blow. A memorial to this early form of ethnic cleansing has been constructed on the shore at Lamlash, paid for by a Canadian descendant of the emigrants. From the 1850s to the late 20th century the Clyde Puffer, made famous by the "Vital Spark", was the workhorse of the islands, carrying all kinds of produce and products to and from the islands. The Caledonian Steam Packet Company (CSP) was formed in May 1889 to operate steamer services to and from Gourock for the Caledonian Railway and soon expanded by taking over rival steamer operators.
David MacBrayne Ltd operated the Glasgow to Ardrishaig steamer service as part of the "Royal Route" to Oban. During the 20th century many of the islands were developed as tourist resorts for Glaswegians who went "Doon the Watter", in parallel with mainland resorts such as Largs and Troon. In 1973 CSP and MacBraynes commenced joint Clyde and West Highland operations under the new name of Caledonian MacBrayne. A publicly owned company, it serves Great Cumbrae, Arran and Bute as well as running mainland-to-mainland ferries across the firth. Private companies operate services from Arran to Holy Isle and from McInroy's Point (Gourock) to Hunter's Quay on the Cowal peninsula. The majority of the islands at one time made up the traditional County of Bute. Today the islands are split more or less equally between the modern unitary authorities of Argyll and Bute and North Ayrshire, with only Ailsa Craig and Lady Isle, in South Ayrshire, falling outwith these two council areas. The following table gives a list of the islands of the Firth of Clyde with an area greater than 40 hectares (approximately 100 acres), plus adjacent smaller uninhabited islets, tidal islets only separated at higher stages of the tide, and skerries which are only exposed at lower stages of the tide. Six islands were inhabited in 2001, including Davaar and Sanda with 2 and 1 residents respectively. By the time of the 2011 census neither had a usually resident population. Some islets lie remote from the larger islands and are listed separately here by location. Gare Loch is a small loch which hosts the Faslane Naval Base, the home of the UK's Trident nuclear submarines. At its southern end, the loch opens into the Firth of Clyde via the Rhu narrows. It contains two islets: Green Island and Perch Rock. The Kilbrannan Sound, which lies between Arran and the Kintyre peninsula, contains several islets: An Struthlag, Cour Island, Eilean Carrach (Carradale), Eilean Carrach (Skipness), Eilean Grianain, Eilean Sunadale, Gull Isle, Island Ross and Thorn Isle. In the late 11th century Magnus Barefoot, King of Norway, made an arrangement with King Malcolm III of Scotland that he could take possession of any land on the west coast around which a ship could sail. He had his longship dragged across the long isthmus in the north of Kintyre between East Loch Tarbert and West Loch Tarbert as part of a campaign to increase his possessions. Magnus declared that Kintyre had "better land than the best of the Hebrides", and by taking command of his ship's tiller and "sailing" across the isthmus he was able to claim that the entire peninsula was an island, which remained under Norse rule for more than a dozen years as a result. Loch Fyne, which extends inland from the Sound of Bute, is the longest of Scotland's sea lochs and contains several islets and skerries. These are Duncuan Island, Eilean Ardgaddan, Eilean a' Bhuic, Eilean Aoghainn, Eilean a' Chomhraig, Eilean an Dúnain, Eilean Buidhe (Ardmarnock), Eilean Buidhe (Portavadie), Eilean Fraoch, Eilean Math-ghamhna, Eilean Mór, Glas Eilean, Heather Island, Inverneil Island, Kilbride Island and Liath Eilean. The North Ayrshire islets of Broad Rock, East Islet, Halftide Rock, High Rock and North Islet are all found surrounding Horse Isle. Lady Isle, which lies off the South Ayrshire coast near Troon, once housed "ane old chapell with an excellent spring of water".
However, in June 1821 someone set fire to the "turf and pasture" and permanently destroyed the island's grazing, with gales blowing much of the island's soil into the sea. Neither Loch Goil nor Loch Long, which are fjord-like arms of the firth to the north, contains islands. The following are places along the shores of the Firth of Clyde that are not islands but have misleading names, "eilean" being Gaelic for "island": Eilean na Beithe, Portavadie; Eilean Beag, Cove; Eilean Dubh, Dalchenna, Loch Fyne; Eilean nan Gabhar, Melldalloch, Kyles of Bute; Barmore Island, just north of Tarbert, Kintyre; Eilean Aoidh, south of Portavadie; Eilean Leathan, Kilbrannan Sound just south of Torrisdale Bay; Island Muller, Kilbrannan Sound north of Campbeltown. There are populations of red deer, red squirrel, badger, otter, adder and common lizard. Offshore there are harbour porpoises, basking sharks and various species of dolphin. Davaar is home to a population of wild goats. Over 200 species of bird have been recorded in the area, including black guillemot, eider, peregrine falcon and the golden eagle. In 1981 there were 28 ptarmigan on Arran, but in 2009 it was reported that extensive surveys had been unable to record any. Similarly, the red-billed chough no longer breeds on the island. Arran also has three rare endemic species of tree, the Arran Whitebeams. These are the Scottish or Arran whitebeam, the cut-leaved whitebeam and the Catacol whitebeam, which are amongst the most endangered tree species in the world. They are found in a protected national nature reserve and are monitored by staff from Scottish Natural Heritage. Only 283 Arran whitebeam and 236 cut-leaved whitebeam were recorded as mature trees in 1980. The Catacol whitebeam was discovered in 2007 and steps have been taken to protect the two known specimens. The Roman historian Tacitus refers to the "Clota", meaning the Clyde. The derivation is not certain but is probably from the Brythonic "Clouta", which became "Clut" in Old Welsh. The name's literal meaning is "wash", but it probably refers to the idea of a river goddess being "the washer" or "strongly flowing one". Bute's derivation is also uncertain: "Bót" is the Norse name, possibly from the Old Irish word for "fire", a reference to signal fires. The etymology of Arran is no clearer—Haswell-Smith (2004) offers a Brythonic derivation and a meaning of "high place", although Watson (1926) suggests it may be pre-Celtic.
https://en.wikipedia.org/wiki?curid=15252
International Bank Account Number The International Bank Account Number (IBAN) is an internationally agreed system of identifying bank accounts across national borders to facilitate the communication and processing of cross-border transactions with a reduced risk of transcription errors. It was originally adopted by the European Committee for Banking Standards (ECBS) and later adopted as an international standard under ISO 13616:1997. The current standard is ISO 13616:2007, which indicates SWIFT as the formal registrar. Initially developed to facilitate payments within the European Union, it has been implemented by most European countries and numerous countries in other parts of the world, mainly in the Middle East and the Caribbean. As of February 2016, 69 countries were using the IBAN numbering system. The IBAN consists of up to 34 alphanumeric characters comprising a country code, two check digits, and a number that includes the domestic bank account number, branch identifier, and potential routing information. The check digits enable a check of the bank account number to confirm its integrity before submitting a transaction. Before IBAN, differing national standards for bank account identification (i.e. bank, branch, routing codes, and account number) were confusing for some users. This often led to necessary routing information being missing from payments. Routing information as specified by ISO 9362 (also known as Business Identifier Codes (BIC code), SWIFT ID or SWIFT code, and SWIFT-BIC) does not require a specific format for the transaction, so the identification of accounts and transaction types is left to agreements of the transaction partners. It also does not contain check digits, so errors of transcription were not detectable and it was not possible for a sending bank to validate the routing information prior to submitting the payment. Routing errors caused delayed payments and incurred extra costs to the sending and receiving banks, and often to intermediate routing banks. In 1997, to overcome these difficulties, the International Organization for Standardization (ISO) published ISO 13616:1997. This proposal had a degree of flexibility which the European Committee for Banking Standards (ECBS) believed would make it unworkable, and they produced a "slimmed down" version of the standard which, amongst other things, permitted only upper-case letters and required that the IBAN for each country have a fixed length. ISO 13616:1997 was subsequently withdrawn and replaced by ISO 13616:2003. The standard was revised again in 2007, when it was split into two parts. ISO 13616-1:2007 "specifies the elements of an international bank account number (IBAN) used to facilitate the processing of data internationally in data interchange, in financial environments as well as within and between other industries" but "does not specify internal procedures, file organization techniques, storage media, languages, etc. to be used in its implementation". ISO 13616-2:2007 describes "the Registration Authority (RA) responsible for the registry of IBAN formats that are compliant with ISO 13616-1 [and] the procedures for registering ISO 13616-compliant IBAN formats". The official IBAN registrar under ISO 13616-2:2007 is SWIFT. IBAN imposes a flexible but regular format sufficient for account identification and contains validation information to avoid errors of transcription.
It carries all the routing information needed to get a payment from one bank to another wherever it may be; it contains key bank account details such as country code, branch codes (known as sort codes in the UK and Ireland) and account numbers, and it contains "check digits" which can be validated at source according to a single standard procedure. Where used, IBANs have reduced trans-national money transfer errors to under 0.1% of total payments. The IBAN consists of up to 34 alphanumeric characters: the two-letter country code, followed by the two check digits, followed by the country-specific BBAN. The check digits enable a sanity check of the bank account number to confirm its integrity before submitting a transaction. The IBAN should not contain spaces when transmitted electronically. When printed, it is expressed in groups of four characters separated by a single space, the last group being of variable length, as in "GB82 WEST 1234 5698 7654 32". Permitted IBAN characters are the digits "0" to "9" and the 26 Latin alphabetic characters "A" to "Z". This applies even in countries (e.g., Thailand) where these characters are not used in the national language. The Basic Bank Account Number (BBAN) format is decided by the national central bank or designated payment authority of each country. There is no consistency between the formats adopted. The national authority may register its BBAN format with SWIFT but is not obliged to do so. It may adopt IBAN without registration. SWIFT also acts as the registration authority for the SWIFT system, which is used by most countries that have not adopted IBAN. A major difference between the two systems is that under SWIFT there is no requirement that BBANs used within a country be of a pre-defined length. The BBAN must be of a fixed length for the country and comprise case-insensitive alphanumeric characters. It includes the domestic bank account number, branch identifier, and potential routing information. Each country can have a different national routing/account numbering system, up to a maximum of 30 alphanumeric characters. The check digits enable the sending bank (or its customer) to perform a sanity check of the routing destination and account number from a single string of data at the time of data entry. This check is guaranteed to detect any instance where a single character has been omitted, duplicated or mistyped, or where two characters have been transposed. Thus routing and account number errors are virtually eliminated. One of the design aims of the IBAN was to enable as much validation as possible to be done at the point of data entry: a computer program that accepts an IBAN can validate the country code, the length of the IBAN for that country, and the integrity of the check digits before a payment is submitted. The check digits are calculated using MOD-97-10 as per ISO/IEC 7064:2003 (abbreviated to "mod-97" in this article), which specifies a set of check character systems capable of protecting strings against errors which occur when people copy or key data. In particular, the standard states that all single-character substitution errors and all single transpositions of adjacent characters can be detected, along with a high proportion of other common errors. The underlying rule for IBANs is that the account-servicing financial institution should issue the IBAN, as there are areas where different IBANs could be generated from the same account and branch numbers that would each satisfy the generic IBAN validation rules. In particular, two check digit values that differ by 97 (00 and 97, 01 and 98, 02 and 99) leave the same remainder, so if one of the pair passes the generic mod-97 test for a given account, so does the other; restricting issued check digits to the range 02 to 98 (see below) ensures that only one of each pair can occur in a correctly issued IBAN.
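As a rough illustration of the transmission and print conventions described earlier in this section, the following sketch (Python, with helper names of our own choosing; it is not taken from any of the standards cited above) normalises an IBAN for electronic use and regroups it in blocks of four characters for print:

```python
import re

def electronic_format(iban: str) -> str:
    """Strip all spaces and upper-case the string, since an IBAN
    must not contain spaces when transmitted electronically."""
    return re.sub(r"\s+", "", iban).upper()

def is_permitted(iban: str) -> bool:
    """Only the digits 0-9 and the 26 Latin letters A-Z are permitted."""
    return re.fullmatch(r"[0-9A-Z]+", electronic_format(iban)) is not None

def print_format(iban: str) -> str:
    """Regroup in blocks of four characters separated by single spaces;
    the last group may be shorter, as the printed convention allows."""
    compact = electronic_format(iban)
    return " ".join(compact[i:i + 4] for i in range(0, len(compact), 4))

# Example using the fictitious IBAN quoted above:
assert print_format("GB82WEST12345698765432") == "GB82 WEST 1234 5698 7654 32"
```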
The UN CEFACT TBG5 has published a free IBAN validation service in 32 languages for all 57 countries that had adopted the IBAN standard at the time. They have also published the JavaScript source code of the verification algorithm. An English-language IBAN checker for ECBS member country bank accounts is available on its website. An IBAN is validated by converting it into an integer and performing a basic "mod-97" operation (as described in ISO 7064) on it. If the IBAN is valid, the remainder equals 1. The algorithm of IBAN validation is as follows: first, check that the total IBAN length is correct for the country (if not, the IBAN is invalid); next, move the four initial characters to the end of the string; then replace each letter in the string with two digits, thereby expanding the string, where A = 10, B = 11, ..., Z = 35; finally, interpret the string as a decimal integer and compute the remainder of that number on division by 97. If the remainder is 1, the check digit test is passed and the IBAN might be valid. An example is the fictitious United Kingdom IBAN "GB82 WEST 1234 5698 7654 32" (sort code 12-34-56, account number 98765432). According to the ECBS, "generation of the IBAN shall be the exclusive responsibility of the bank/branch servicing the account". The ECBS document replicates part of the ISO/IEC 7064:2003 standard as a method for generating check digits in the range 02 to 98. Check digits in the ranges 00 to 96, 01 to 97, and 03 to 99 will also provide validation of an IBAN, but the standard is silent as to whether or not these ranges may be used. The preferred algorithm is to construct the IBAN with its two check digits set to "00", convert it to its numeric form as above, compute the remainder on division by 97, and subtract that remainder from 98; the result, padded to two digits where necessary, gives the check digits. Any computer programming language or software package that is used to compute "D" mod 97 directly must have the ability to handle integers of more than 30 digits. In practice, this can only be done by software that either supports arbitrary-precision arithmetic or that can handle 220-bit (unsigned) integers, features that are often not standard. If the application software in use does not provide the ability to handle integers of this size, the modulo operation can be performed in a piece-wise manner (as is the case with the UN CEFACT TBG5 JavaScript program). Piece-wise calculation can be done in many ways. One such way is to compute the remainder of an initial chunk of the digits of "D", prepend that remainder to the next chunk of digits, and repeat until all the digits have been consumed; the final remainder equals "D" mod 97. Applying either method to the example IBAN above yields "D" mod 97 = 1, so it passes the check digit test. International bank transactions use either an IBAN or the ISO 9362 Business Identifier Code system (BIC or SWIFT code) in conjunction with the BBAN (Basic Bank Account Number). The banks of most countries in Europe publish account numbers using both the IBAN format and the nationally recognised identifiers, this being mandatory within the European Economic Area. Day-to-day administration of banking in British Overseas Territories varies from territory to territory; some, such as South Georgia and the South Sandwich Islands, have too small a population to warrant a banking system while others, such as Bermuda, have a thriving financial sector. The use of the IBAN is up to the local government—Gibraltar, being part of the European Union, is required to use the IBAN, as are the Crown dependencies, which use the British clearing system, and the British Virgin Islands have chosen to do so. To date, no other British Overseas Territories have chosen to use the IBAN. Banks in the Caribbean Netherlands also do not use the IBAN. The IBAN designation scheme was chosen as the foundation for electronic straight-through processing in the European Economic Area.
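To make the mod-97 procedure described earlier in this section concrete, here is a minimal sketch in Python. The function names are our own, and this is only a reading of the generic check digit scheme, not a full ISO 13616 implementation: it omits the per-country length and BBAN format rules. Python integers are arbitrary precision, so the more-than-30-digit value of "D" is handled directly; the piece-wise variant shows how the same remainder can be computed where only small integers are available:

```python
def to_numeric(iban: str) -> str:
    """Move the four initial characters (country code and check digits)
    to the end, then replace each letter with two digits: A=10 ... Z=35."""
    compact = iban.replace(" ", "").upper()
    rearranged = compact[4:] + compact[:4]
    return "".join(str(int(ch, 36)) for ch in rearranged)

def validate_iban(iban: str) -> bool:
    """The check digit test passes when D mod 97 equals 1."""
    return int(to_numeric(iban)) % 97 == 1

def mod97_piecewise(numeric: str) -> int:
    """Piece-wise mod-97: fold the digit string through a running
    remainder a few digits at a time, so no big integers are needed."""
    remainder = 0
    for start in range(0, len(numeric), 7):
        remainder = int(str(remainder) + numeric[start:start + 7]) % 97
    return remainder

def generate_check_digits(country_code: str, bban: str) -> str:
    """Create the IBAN with check digits '00', compute D mod 97, and
    subtract the remainder from 98, giving a value in the range 02-98."""
    provisional = country_code + "00" + bban
    return f"{98 - int(to_numeric(provisional)) % 97:02d}"

# The fictitious UK example above (sort code 12-34-56, account 98765432):
assert validate_iban("GB82 WEST 1234 5698 7654 32")
assert mod97_piecewise(to_numeric("GB82 WEST 1234 5698 7654 32")) == 1
assert generate_check_digits("GB", "WEST12345698765432") == "82"
```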
The European Parliament mandated that bank charges be the same for domestic credit transfers as for cross-border credit transfers, as regulated in decision 2560/2001 (updated by 924/2009). This regulation took effect in 2003. Only payments in euro up to €12,500 to a bank account designated by its IBAN were covered by the regulation. The Euro Payments regulation was the foundation for the decision to create a Single Euro Payments Area (SEPA). The European Central Bank has created the TARGET2 interbank network, which unifies the technical infrastructure of the 26 central banks of the European Union (although Sweden has opted out). SEPA is a self-regulatory initiative by the banking sector of Europe as represented in the European Payments Council (EPC). The European Union made the scheme mandatory through the Payment Services Directive published in 2007. Since January 2008, all countries have been required to support SEPA credit transfer, and SEPA direct debit has been mandatory since November 2009. The regulation on SEPA payments increased the charge cap (same price for domestic payments as for cross-border payments) to €50,000. With a further decision of the European Parliament, the IBAN scheme for bank accounts fully replaced the domestic numbering schemes from 31 December 2012. On 16 December 2010, the European Commission published regulations that made IBAN support mandatory for domestic credit transfers by 2013 and for domestic direct debits by 2014 (with 12- and 24-month transition periods respectively). Some countries had already replaced their traditional bank account schemes with the IBAN. These included Switzerland, where the IBAN was introduced for national credit transfers on 1 January 2006 and support for the old bank account numbers was no longer required from 1 January 2010. Based on a 20 December 2011 memorandum, the EU parliament resolved the mandatory dates for the adoption of the IBAN on 14 February 2012. On 1 February 2014, all national systems for credit transfer and direct debit were abolished and replaced by an IBAN-based system. This was then extended to all cross-border SEPA transactions on 1 February 2016 (Article 5 Section 7). After these dates the IBAN is sufficient to identify an account for home and foreign financial transactions in SEPA countries, and banks are no longer permitted to require that the customer supply the BIC for the beneficiary's bank. In the run-up to the 1 February 2014 deadline, it became apparent that many old bank account numbers had not been allocated IBANs—an issue that was addressed on a country-by-country basis. In Germany, for example, Deutsche Bundesbank and the German Banking Industry Committee required all holders of German bank codes ("Bankleitzahl") to publish the specifics of their IBAN generation format, taking into account not only the generation of check digits but also the handling of legacy bank codes, thereby enabling third parties to generate IBANs independently of the bank. The first such catalogue was published in June 2013 as a variant of the old bank code catalogue ("Bankleitzahlendatei"). Banks in numerous non-European countries, including most states of the Middle East, North Africa and the Caribbean, have implemented the IBAN format for account identification. In some countries the IBAN is used on an "ad hoc" basis; an example was Ukraine, where account numbers used for international transfers by some domestic banks had additional aliases that followed the IBAN format as a precursor to formal SWIFT registration.
This practice in Ukraine ended on 1 November 2019, when all Ukrainian banks had fully switched to the IBAN standard. The degree to which a bank verifies the validity of a recipient's bank account number depends on the configuration of the transmitting bank's software—many major software packages supply bank account validation as a standard function. Some banks outside Europe may not recognise the IBAN, though this is expected to diminish with time. Non-European banks usually accept IBANs for accounts in Europe, although they might not treat IBANs differently from other foreign bank account numbers. In particular, they might not check the IBAN's validity prior to sending the transfer. Banks in the United States do not use the IBAN as account numbers for U.S. accounts; they use ABA routing transit numbers. Any adoption of the IBAN standard by U.S. banks would likely be initiated by ANSI ASC X9, the U.S. financial services standards development organization: a working group (X9B20) was established as an X9 subcommittee to generate an IBAN construction for U.S. bank accounts. Canadian financial institutions have not adopted the IBAN; they use routing numbers issued by Payments Canada for domestic transfers and SWIFT for international transfers. There is no formal governmental or private sector regulatory requirement in Canada for the major banks to use the IBAN. Australia and New Zealand do not use the IBAN either; they use Bank State Branch codes for domestic transfers and SWIFT for international transfers. IBAN formats vary by country, with the length and composition of the BBAN decided nationally; Nordea and SWIFT both catalogue the formats of further countries that have adopted, or are in the process of introducing, the IBAN. There is also criticism of the length and readability of the IBAN. Printed on paper, an IBAN is often difficult to read, and it is therefore popular to group its characters in blocks of four. For electronic documents (e.g. a PDF invoice), however, copying and pasting a grouped IBAN can result in errors in online banking forms, although most modern banking systems allow and detect the pasting of both grouped and ungrouped IBANs.
https://en.wikipedia.org/wiki?curid=15253
Infinitive Infinitive is a term in linguistics referring to certain verb forms existing in many languages, most often used as non-finite verbs. As with many linguistic concepts, there is not a single definition applicable to all languages. The word is derived from Late Latin "[modus] infinitivus", a derivative of "infinitus" meaning "unlimited". In traditional descriptions of English, the infinitive is the basic dictionary form of a verb when used non-finitely, with or without the particle "to". Thus "to go" is an infinitive, as is "go" in a sentence like "I must go there" (but not in "I go there", where it is a finite verb). The form without "to" is called the bare infinitive, and the form with "to" is called the full infinitive or "to"-infinitive. In many other languages the infinitive is a single word, often with a characteristic inflectional ending, like "morir" ("(to) die") in Spanish, "manger" ("(to) eat") in French, "portare" ("(to) carry") in Latin, "lieben" ("(to) love") in German, etc. However, some languages have no infinitive forms. Many Native American languages and some languages in Africa and Australia do not have direct equivalents to infinitives or verbal nouns. Instead, they use finite verb forms in ordinary clauses or various special constructions. Being a verb, an infinitive may take objects and other complements and modifiers to form a verb phrase (called an infinitive phrase). Like other non-finite verb forms (such as participles, converbs, gerunds and gerundives), infinitives do not generally have an expressed subject; thus an infinitive verb phrase also constitutes a complete non-finite clause, called an infinitive (infinitival) clause. Such phrases or clauses may play a variety of roles within sentences, often being nouns (for example serving as the subject of a sentence or as a complement of another verb), and sometimes being adverbs or other types of modifier. Many verb forms known as infinitives differ from gerunds (verbal nouns) in that they do not inflect for case or occur in adpositional phrases. Instead, infinitives often originate in earlier inflectional forms of verbal nouns. Unlike finite verbs, infinitives are not usually inflected for tense, person, etc. either, although some degree of inflection sometimes occurs; for example, Latin has distinct active and passive infinitives. An "infinitive phrase" is a verb phrase constructed with the verb in infinitive form. This consists of the verb together with its objects and other complements and modifiers. Infinitive phrases in English may be based on either the full infinitive (introduced by the particle "to") or the bare infinitive (without the particle "to"). Infinitive phrases often have an implied grammatical subject, making them effectively clauses rather than phrases. Such "infinitive clauses" or "infinitival clauses" are one of several kinds of non-finite clause. They can play various grammatical roles, such as a constituent of a larger clause or sentence; for example, an infinitival clause may form a noun phrase or adverb. Infinitival clauses may be embedded within each other in complex ways. For example, the infinitival clause "to get married" may be contained within a finite dependent clause such as "that Brett Favre is going to get married"; this in turn may be contained within another infinitival clause, which is contained in the finite independent clause (the whole sentence).
The grammatical structure of an infinitival clause may differ from that of a corresponding finite clause. For example, in German, the infinitive form of the verb usually goes to the end of its clause, whereas a finite verb (in an independent clause) typically comes in second position. Following certain verbs or prepositions, infinitives commonly "do" have an expressed subject, e.g., "I want them to eat their dinner" and "For him to fail now would be a great disappointment". As these examples illustrate, the subject of the infinitive is in the objective case ("them", "him") in contrast to the nominative case that would be used with a finite verb, e.g., "They ate their dinner." Such accusative and infinitive constructions are present in Latin and Ancient Greek, as well as many modern languages. The unusual case for the subject of an infinitive is an example of exceptional case-marking, in which the infinitive clause's role as the object of a verb or preposition ("want", "for") overrides the pronoun's subject role within the clause. In some languages, infinitives may be marked for grammatical categories such as voice, aspect, and to some extent tense. This may be done by inflection, as with the Latin perfect and passive infinitives, or by periphrasis (with the use of auxiliary verbs), as with the Latin future infinitives or the English perfect and progressive infinitives. Latin has present, perfect and future infinitives, with active and passive forms of each. English has infinitive constructions that are marked (periphrastically) for aspect: perfect, progressive (continuous), or a combination of the two (perfect progressive). These can also be marked for passive voice, as can the plain infinitive: for example, "(to) be eaten" and "(to) have been eaten". Further constructions can be made with other auxiliary-like expressions, such as "(to) be going to eat" or "(to) be about to eat", which have future meaning. Perfect infinitives are also found in other European languages that have perfect forms with auxiliaries similarly to English. For example, "avoir mangé" means "(to) have eaten" in French. Regarding English, the term "infinitive" is traditionally applied to the unmarked form of the verb (the "plain form") when it forms a non-finite verb, whether or not introduced by the particle "to". Hence "sit" and "to sit", each used non-finitely, would both be considered infinitives. The form without "to" is called the "bare infinitive"; the form introduced by "to" is called the "full infinitive" or "to"-infinitive. The other non-finite verb forms in English are the gerund or present participle (the "-ing" form) and the past participle – these are not considered infinitives. Moreover, the unmarked form of the verb is not considered an infinitive when it forms a finite verb, such as a present indicative ("I "sit" every day"), subjunctive ("I suggest that he "sit""), or imperative (""Sit" down!"). (For some irregular verbs the form of the infinitive coincides additionally with that of the past tense and/or past participle, as in the case of "put".) Certain auxiliary verbs are defective in that they do not have infinitives (or any other non-finite forms). This applies to the modal verbs ("can", "must", etc.), as well as certain related auxiliaries such as the "had" of "had better" and the "used" of "used to". (Periphrases can be employed instead in some cases, such as "(to) be able to" for "can", and "(to) have to" for "must".) It also applies to the auxiliary "do", as used in questions, negatives and emphasis, as described under "do"-support.
(Infinitives are negated by simply preceding them with "not". Of course, the verb "do" when forming a main verb can appear in the infinitive.) However, the auxiliary verbs "have" (used to form the perfect) and "be" (used to form the passive voice and continuous aspect) both commonly appear in the infinitive: "I should have finished by now"; "It's thought to have been a burial site"; "Let him be released"; "I hope to be working tomorrow." Huddleston and Pullum's "Cambridge Grammar of the English Language" (2002) does not use the notion of the "infinitive" ("there is no form in the English verb paradigm called 'the infinitive'"), only that of the "infinitival clause", noting that English uses the same form of the verb, the "plain form", in infinitival clauses that it uses in imperative and present-subjunctive clauses. A matter of controversy among prescriptive grammarians and style writers has been the appropriateness of separating the two words of the "to"-infinitive (as in "I expect "to" happily "sit" here"). For details of this, see split infinitive. Opposing linguistic theories typically do not consider the "to"-infinitive a distinct constituent, instead regarding the scope of the particle "to" as an entire verb phrase; thus, "to buy a car" is parsed as "to [buy [a car]]", not as "[to buy] [a car]". The bare infinitive and the "to"-infinitive have a variety of uses in English. The two forms are mostly in complementary distribution – certain contexts call for one, and certain contexts for the other; they are not normally interchangeable, except in occasional instances such as after the verb "help", where either can be used. Infinitives (or infinitive phrases) have a wide range of uses in the language. The infinitive is also the usual dictionary form or citation form of a verb. The form listed in dictionaries is the bare infinitive, although the "to"-infinitive is often used in referring to verbs or in defining other verbs: "The word 'amble' means 'to walk slowly'"; "How do we conjugate the verb "to go"?" For further detail and examples of the uses of infinitives in English, see Bare infinitive and "To"-infinitive in the article on uses of English verb forms. The original Proto-Germanic ending of the infinitive was "-an", with verbs derived from other words ending in "-jan" or "-janan". In German it is "-en" ("sagen"), with "-eln" or "-ern" endings on a few words based on -l or -r roots ("segeln", "ändern"). The use of "zu" with infinitives is similar to English "to", but is less frequent than in English. German infinitives can form nouns, often expressing abstractions of the action, in which case they are of neuter gender: "das Essen" means "the eating", but also "the food". In Dutch, infinitives also end in "-en" ("zeggen" — "to say"), sometimes used with "te" similarly to English "to", e.g., "Het is niet moeilijk te begrijpen" → "It is not hard to understand." The few verbs with stems ending in "-a" have infinitives in "-n" ("gaan" — "to go", "slaan" — "to hit"). Afrikaans has lost the distinction between the infinitive and present forms of verbs, with the exception of the verbs "wees" (to be), which admits the present form "is", and the verb "hê" (to have), whose present form is "het". In the North Germanic languages the final "-n" was lost from the infinitive as early as 500–540 AD, reducing the suffix to "-a". It was later further reduced to "-e" in Danish and some Norwegian dialects (including the written majority language Bokmål).
In the majority of Eastern Norwegian dialects and a few bordering Western Swedish dialects the reduction to "-e" was only partial, leaving some infinitives in "-a" and others in "-e" ("å laga" vs. "å kaste"). In northern parts of Norway the infinitive suffix is completely lost ("å lag’" vs. "å kast’") or only the "-a" is kept ("å laga" vs. "å kast’"). The infinitives of these languages are inflected for passive voice through the addition of "-s" or "-st" to the active form. This suffix appeared in Old Norse as a contraction of "mik" ("me", forming "-mk") or "sik" (reflexive pronoun, forming "-sk") and originally expressed reflexive actions: (hann) "kallar" ("(he) calls") + "-sik" ("himself") > (hann) "kallask" ("(he) calls himself"). The suffixes "-mk" and "-sk" later merged to "-s", which evolved to "-st" in the western dialects. The loss or reduction of "-a" in the active voice in Norwegian did not occur in the passive forms ("-ast", "-as"), except in some dialects that have "-es". The other North Germanic languages have the same vowel in both forms. The formation of the infinitive in the Romance languages reflects that of their ancestor, Latin, in which almost all verbs had an infinitive ending in "-re" (preceded by one of various thematic vowels). For example, in Italian infinitives end in "-are", "-ere", "-rre" (rare), or "-ire" (which is still identical to the Latin forms), and in "-arsi", "-ersi", "-rsi", "-irsi" for the reflexive forms. In Spanish and Portuguese, infinitives end in "-ar", "-er", or "-ir" (Spanish also has reflexive forms in "-arse", "-erse", "-irse"), while similarly in French they typically end in "-re", "-er", "-oir", and "-ir". In Romanian, both short and long-form infinitives exist; the so-called "long infinitives" end in "-are", "-ere", "-ire" and in modern speech are used exclusively as verbal nouns, while there are a few verbs that cannot be converted into the nominal long infinitive. The "short infinitives" used in verbal contexts (e.g., after an auxiliary verb) have the endings "-a", "-ea", "-e", and "-i" (basically removing the ending in "-re"). In Romanian, the infinitive is usually replaced by a clause containing the conjunction "să" plus the subjunctive mood. The only verb that is modal in common modern Romanian is the verb "a putea", "to be able to". However, in popular speech the infinitive after "a putea" is also increasingly replaced by the subjunctive. In all Romance languages, infinitives can also form nouns. Latin infinitives challenged several of the generalizations about infinitives. They did inflect for voice ("amare", "to love", "amari", "to be loved") and for tense ("amare", "to love", "amavisse", "to have loved"), and allowed for an overt expression of the subject ("video Socratem currere", "I see Socrates running"). Romance languages inherited from Latin the possibility of an overt expression of the subject (as in Italian "vedo Socrate correre"). Moreover, the "inflected infinitive" (or "personal infinitive") found in Portuguese and Galician inflects for person and number. These, alongside Sardinian, are the only Indo-European languages that allow infinitives to take person and number endings. This helps to make infinitive clauses very common in these languages; for example, the English finite clause "in order that you/she/we have..." would be translated to Portuguese as "para teres/ela ter/termos..." (Portuguese is a null-subject language).
The Portuguese personal infinitive has no proper tenses, only aspects (imperfect and perfect), but tenses can be expressed using periphrastic structures. For instance, "even though you sing/have sung/are going to sing" could be translated as "apesar de cantares/teres cantado/ires cantar". Other Romance languages (including Spanish, Romanian, Catalan, and some Italian dialects) allow uninflected infinitives to combine with overt nominative subjects. For example, Spanish "al abrir yo los ojos" ("when I opened my eyes") or "sin yo saberlo" ("without my knowing about it"). In Ancient Greek the infinitive has four tenses (present, future, aorist, perfect) and three voices (active, middle, passive). Present and perfect have the same infinitive for both middle and passive, while future and aorist have separate middle and passive forms. Thematic verbs form present active infinitives by adding to the stem the thematic vowel and the infinitive ending "-εν", which contract to "-ειν", e.g., "παιδεύειν". Athematic verbs, and perfect actives and aorist passives, add the suffix "-ναι" instead, e.g., "διδόναι". In the middle and passive, the present middle infinitive ending is "-σθαι", e.g., "δίδοσθαι", and most tenses of thematic verbs add an additional "-ε-" between the ending and the stem, e.g., "παιδεύεσθαι". The infinitive "per se" does not exist in Modern Greek. To see this, consider the ancient Greek "ἐθέλω γράφειν" "I want to write". In Modern Greek this becomes "θέλω να γράψω" "I want that I write". In Modern Greek, the infinitive has thus changed form and function and is used mainly in the formation of periphrastic tense forms and not with an article or alone. Instead of the Ancient Greek infinitive system "γράφειν, γράψειν, γράψαι, γεγραφέναι", Modern Greek uses only the form "γράψει", a development of the ancient Greek aorist infinitive "γράψαι". This form is also invariable. The Modern Greek infinitive has only two forms according to voice: for example, "γράψει" for the active voice and "γραφ(τ)εί" for the passive voice (coming from the ancient passive aorist infinitive "γραφῆναι"). The infinitive in Russian usually ends in "-t’" (ть) preceded by a thematic vowel, or "-ti" (ти) if not preceded by one; some verbs have a stem ending in a consonant and change the "t" to "č’", as in "*mogt’ → moč’" (*могть → мочь) "can". Some other Balto-Slavic languages have the infinitive typically ending in, for example, "-ć" (sometimes "-c") in Polish, "-t’" in Slovak, "-t" (formerly "-ti") in Czech and Latvian (with a handful ending in "-s" in the latter), "-ty" (-ти) in Ukrainian, and "-ts’" (-ць) in Belarusian. Lithuanian infinitives end in "-ti", Slovenian infinitives in "-ti" or "-či", and Croatian infinitives in "-ti" or "-ći". Serbian officially retains the infinitives "-ti" or "-ći", but is more flexible than the other Slavic languages in breaking up the infinitive with a clause. The infinitive nevertheless remains the dictionary form. Bulgarian and Macedonian have lost the infinitive altogether, except in a handful of frozen expressions where it is the same as the third person singular aorist form; even there, a subordinate clause is the more usual alternative. For that reason, the present first-person singular conjugation is the dictionary form in Bulgarian, while Macedonian uses the third person singular form of the verb in the present tense. Hebrew has two infinitives, the infinitive absolute and the infinitive construct.
The infinitive construct is used after prepositions and is inflected with pronominal endings to indicate its subject or object: "bikhtōbh hassōphēr" "when the scribe wrote", "ahare lekhtō" "after his going". When the infinitive construct is preceded by ("lə-", "li-", "lā-", "lo-") "to", it has a similar meaning to the English "to"-infinitive, and this is its most frequent use in Modern Hebrew. The infinitive absolute is used for verb focus and emphasis, as in "mōth yāmūth" (literally "a dying he will die"; figuratively, "he shall indeed/surely die"). This usage is commonplace in the Bible, but in Modern Hebrew it is restricted to high-register literary works. Note, however, that the "to"-infinitive of Hebrew is not the dictionary form; that is the third person singular perfect form. The Finnish grammatical tradition includes many non-finite forms that are generally labeled as (numbered) infinitives, although many of these are functionally converbs. To form the so-called first infinitive, the strong form of the root (without consonant gradation or epenthetic 'e') is used, with certain regular changes applied to the final sounds of the stem. As such, the first infinitive is inconvenient for dictionary use, because the imperative would be closer to the root word. Nevertheless, dictionaries use the first infinitive. There are also four other infinitives, plus a "long" form of the first. Note that all of these must change to reflect vowel harmony, so the fifth infinitive (with a third-person suffix) of "hypätä" "jump" is "hyppäämäisillään" "he was about to jump", not "*hyppäämaisillaan". The Seri language of northwestern Mexico has infinitival forms used in two constructions (with the verb meaning 'want' and with the verb meaning 'be able'). The infinitive is formed by adding a prefix to the stem: either "iha-" (plus a vowel change of certain vowel-initial stems) if the complement clause is transitive, or "ica-" (and no vowel change) if the complement clause is intransitive. The infinitive shows agreement in number with the controlling subject. Examples are: "icatax ihmiimzo" 'I want to go', where "icatax" is the singular infinitive of the verb 'go' (singular root is "-atax"), and "icalx hamiimcajc" 'we want to go', where "icalx" is the plural infinitive. Examples of the transitive infinitive: "ihaho" 'to see it/him/her/them' (root "-aho"), and "ihacta" 'to look at it/him/her/them' (root "-oocta"). In languages without an infinitive, the infinitive is translated either as a "that"-clause or as a verbal noun. For example, in Literary Arabic the sentence "I want to write a book" is translated as either "urīdu an aktuba kitāban" (lit. "I want that I write a book", with a verb in the subjunctive mood) or "urīdu kitābata kitābin" (lit. "I want the writing of a book", with the "masdar" or verbal noun), and in Levantine Colloquial Arabic "biddi aktub kitāb" (subordinate clause with verb in subjunctive). Even in languages that have infinitives, similar constructions are sometimes necessary where English would allow the infinitive. For example, in French the sentence "I want you to come" translates to "Je veux que vous veniez" (lit. "I want that you come", with "come" being in the subjunctive mood). However, "I want to come" is simply "Je veux venir", using the infinitive, just as in English. In Russian, sentences such as "I want you to leave" do not use an infinitive.
Rather, they use the conjunction "чтобы" ("in order to/so that") with the past tense form (most probably a remnant of the subjunctive) of the verb: "Я хочу, чтобы вы ушли" (literally, "I want so that you left").
https://en.wikipedia.org/wiki?curid=15254
Immaculate Conception The Immaculate Conception is a dogma of the Roman Catholic Church which states that the Virgin Mary was free of original sin from the moment of her conception. The doctrine proved highly controversial in the Middle Ages, but it was revived in the 19th century and was adopted as church dogma when Pope Pius IX promulgated "Ineffabilis Deus" in 1854. The move had the overwhelming support of the church's hierarchy, although a few, including the Archbishop of Paris, warned that the doctrine was not stated in the New Testament and could not be deduced from it. Protestants overwhelmingly rejected "Ineffabilis Deus" as an exercise in papal power and the doctrine itself as without foundation in Scripture, and Orthodox Christianity, although it reveres Mary in its liturgy, called on the Roman church to return to the faith of the early centuries. The iconography of the Virgin of the Immaculate Conception shows her standing, with arms outspread or hands clasped in prayer, and her feast day is 8 December. The Immaculate Conception of Mary in the womb of her mother is not to be confused with her purity in the virgin birth of Jesus. The Immaculate Conception of Mary is one of the four Marian dogmas of the Catholic Church, meaning that it is held to be a truth divinely revealed, the denial of which is heresy. Defined by Pope Pius IX in 1854, it states that Mary, through God's grace, was conceived free from the stain of original sin in view of her role as the Mother of God: We declare, pronounce, and define that the doctrine which holds that the most Blessed Virgin Mary, in the first instance of her conception, by a singular grace and privilege granted by Almighty God, in view of the merits of Jesus Christ, the Saviour of the human race, was preserved free from all stain of original sin, is a doctrine revealed by God and therefore to be believed firmly and constantly by all the faithful. The mother of Mary is not a biblical character. She first appears in the late 2nd-century Protevangelium of James, which names her Anne, probably from Hannah, the mother of the prophet Samuel; she and her husband, Saint Joachim, are infertile, but God hears their prayers for a child, and so Mary is conceived and born, and, like Samuel, is taken to spend her childhood in the temple. In the earliest texts, probably representing the original version, the conception occurs without sexual intercourse between Anne and Joachim, but the story does not advance the idea of an immaculate conception. Original sin is the Christian doctrine that each human being is born in a state of sin inherited from the first man, Adam, who disobeyed God in eating the forbidden fruit (of knowledge of good and evil) and, in consequence, transmitted his sin and guilt by heredity to his descendants. The doctrine was defined by Augustine of Hippo (354-430 AD). Engaged in a controversy with the monk Pelagius over the question of whether infants could sin (Pelagius said they could not and therefore would not go to hell if unbaptised), he inserted original sin and the fall from grace into the story of the Garden of Eden and Paul's Letter to the Romans. Augustine identified male semen as the means by which original sin was made heritable, leaving only Jesus Christ, conceived without semen, free of the sin passed down from Adam through the sexual act. Mary's freedom from personal sin was affirmed in the 4th century, but Augustine's argument that original sin was transmitted through sex raised the question of whether she could also be free of the sin of Adam.
The English ecclesiastic and scholar Eadmer (c.1060-c.1126) reasoned that it was possible in view of God's omnipotence and appropriate in view of Mary's role as Mother of God: "Potuit, decuit, fecit", "it was possible, it was fitting, therefore it was done;" Bernard of Clairvaux (1090-1153) and Thomas Aquinas (1225-1274), among others, objected that if Mary were free of original sin at her conception then she would have no need of redemption, making Christ superfluous; they were answered by Duns Scotus (1264-1308), who reasoned that her preservation from original sin was a redemption more perfect than that granted through Christ. Nevertheless, it was not theological theory that initiated discussion of Mary's freedom from mankind's curse, but the celebration of her liturgy: in the eleventh century the celebration of her liturgy, for the popular feast of her conception brought forth the objection that as normal human conception is sinful, to celebrate Mary's conception was to celebrate a sinful event. Some held that no sin had occurred, for Anne had conceived Mary not through sex but by kissing her husband Joachim, and that Anne's father and mother had likewise been conceived, but St Bridget of Sweden (c.1303-1373) told how Mary herself had revealed to her in a vision that although Anne and Joachim conceived their daughter through sexual union, the act was sinless because free of sexual desire. In 1431 the Council of Basel declared Mary's immaculate conception a "pious opinion" consistent with faith and Scripture; the Council of Trent, held in several sessions in the early 1500s, made no explicit declaration on the subject but exempted her from the universality of original sin; and by 1571 the Pope's Breviary (prayerbook) set out an elaborate celebration of the Feast of the Immaculate Conception on 8 December. It was popular support which led to the demand that the papacy elevate the doctrine to the status of dogma. In 1830 Catherine Labouré (May 2, 1806-December 31, 1876) was granted a vision of Mary as the Immaculate Conception standing on a globe while a voice commanded Catherine to have a medal made in imitation of what she saw, and the apparition marked the beginning of a great 19th-century Marian revival. At the time the church was engaged in a struggle against modernity and the promotion of its authority, and in 1849 Pope Pius IX asked the bishops of the church for their views on whether the doctrine should be defined as dogma: ninety percent of those who responded were supportive, and in 1854 the papal bull "Ineffabilis Deus" was promulgated. "Ineffabilis Deus" was one of the pivotal events of the papacy of Pius, pope from 16 June 1846 to his death on 7 February 1878. Up until this point it had been understood that dogma had to be based in Scripture and accepted by tradition, but Mary's immaculate conception is not stated in the New Testament and cannot be deduced from it, and it had caused a virtual civil war between Franciscans and Dominicans during the middle ages. "Ineffabilis Deus" therefore was a novelty, being based instead on the declaration of a special commission to the effect that neither scripture nor tradition were necessary to define dogma, but only the authority of the church expressed in the Pope. 
The Pope and his curia (the special body which forms the central government of the church) accordingly waived the absence of scriptural proof or a "broad and ancient" stream of tradition and promulgated Mary's immaculate conception solely on papal infallibility, itself promulgated as dogma in 1870. Four years later Mary appeared to the young Bernadette Soubirous at Lourdes, in southern France, to announce that she was the Immaculate Conception. The Archbishop of Paris had warned Pius that the Immaculate Conception "could be proved neither from the Scriptures nor from tradition," but "Ineffabilis Deus" found it in the Ark of Salvation (Noah's Ark), Jacob's Ladder, the Burning Bush at Sinai, the Enclosed Garden from the Song of Songs, and many more. From this wealth of support the pope's advisors singled out Genesis 3:15 as the basis for the Immaculate Conception: "The most glorious Virgin ... was foretold by God when he said to the serpent: 'I will put enmity between you and the woman,'" a prophecy which reached fulfillment in the figure of the Woman in the Revelation of John, crowned with stars and trampling the Dragon underfoot. Luke 1:28, and specifically the phrase "full of grace" by which Gabriel greeted Mary, was another reference to her immaculate conception: "she was never subject to the curse and was, together with her Son, the only partaker of perpetual benediction." The feast day of the Immaculate Conception is December 8. Its celebration seems to have begun in the Eastern church in the 7th century and may have spread to Ireland by the 8th, although the earliest well-attested record in the Western church is from England early in the 11th. It was suppressed there after the Norman Conquest (1066), and the first thorough exposition of the doctrine was a response to this suppression. It continued to spread despite strong theological objections (in 1125 St Bernard of Clairvaux wrote to Lyons Cathedral to express his surprise and concern that it had recently begun to be observed there), but in 1477 Sixtus IV, a Franciscan, placed it on the Roman calendar (i.e, list of Church festivals and observances). Pius V suppressed the word "immaculate", but following the promulgation of "Ineffabilis Deus" it was restored with the typically Franciscan phrase "immaculate conception" and given a formulary for the Mass, drawn largely from one composed for Sixtus IV, beginning "O God who by the Immaculate Conception of the Virgin...". By pontifical decree a number of countries are considered to be under the patronage of the Immaculate Conception. These include Argentina, Brazil, Korea, Nicaragua, Paraguay, the Philippines, Spain (including the old kingdoms and the present state), the United States and Uruguay. By royal decree under the House of Bragança, she is the principal Patroness of Portugal. The Roman Missal and the Roman Rite Liturgy of the Hours naturally includes references to Mary's immaculate conception in the feast of the Immaculate Conception. An example is the antiphon that begins: "Tota pulchra es, Maria, et macula originalis non est in te" ("You are all beautiful, Mary, and the original stain [of sin] is not in you." It continues: "Your clothing is white as snow, and your face is like the sun. You are all beautiful, Mary, and the original stain [of sin] is not in you. You are the glory of Jerusalem, you are the joy of Israel, you give honour to our people. 
You are all beautiful, Mary.") On the basis of the original Gregorian chant music, polyphonic settings have been composed by Anton Bruckner, Pablo Casals, Maurice Duruflé, Grzegorz Gerwazy Gorczycki, , José Maurício Nunes Garcia, and . Other prayers honouring Mary's immaculate conception are in use outside the formal liturgy. The Immaculata prayer, composed by Saint Maximillian Kolbe, is a prayer of entrustment to Mary as the Immaculata. A novena of prayers, with a specific prayer for each of the nine days has been composed under the title of the Immaculate Conception Novena. Ave Maris Stella is the vesper hymn of the feast of the Immaculate Conception. The hymn "Immaculate Mary", addressed to Mary as the Immaculately Conceived One, is closely associated with Lourdes. The Immaculate Conception became a popular subject in literature, but its abstract nature meant it was late in appearing as a subject in art. During the Medieval period it was depicted as "Joachim and Anne Meeting at the Golden Gate", meaning Mary's conception through the chaste kiss of her parents at the Golden Gate in Jerusalem; the 14th and 15th centuries were the heyday for this scene, after which it was gradually replaced by more allegorical depictions featuring an adult Mary. The 1476 extension of the feast of the Immaculate Conception to the entire Latin Church reduced the likelihood of controversy for the artist or patron in depicting an image, so that emblems depicting "The Immaculate Conception" began to appear. Many artists in the 15th century faced the problem of how to depict an abstract idea such as the Immaculate Conception, and the problem was not fully solved for 150 years. The Italian Renaissance artist Piero di Cosimo was among those artists who tried new solutions, but none of these became generally adopted so that the subject matter would be immediately recognisable to the faithful. The definitive iconography for the depiction of "Our Lady" seems to have been finally established by the painter and theorist Francisco Pacheco in his "El arte de la pintura" of 1649: a beautiful young girl of 12 or 13, wearing a white tunic and blue mantle, rays of light emanating from her head ringed by twelve stars and crowned by an imperial crown, the sun behind her and the moon beneath her feet. Pacheco's iconography influenced other Spanish artists or artists active in Spain such as El Greco, Bartolomé Murillo, Diego Velázquez, and Francisco Zurbarán, who each produced a number of artistic masterpieces based on the use of these same symbols. The popularity of this particular representation of "The Immaculate Conception" spread across the rest of Europe, and has since remained the best known artistic depiction of the concept: in a heavenly realm, moments after her creation, the spirit of Mary (in the form of a young woman) looks up in awe at (or bows her head to) God. The moon is under her feet and a halo of twelve stars surround her head, possibly a reference to "a woman clothed with the sun" from Revelation 12:1–2. Additional imagery may include clouds, a golden light, and putti. In some paintings the putti are holding lilies and roses, flowers often associated with Mary. Eastern Orthodoxy never accepted Augustine's ideas on original sin, and in consequence did not become involved in the later developments that took place in the Roman Catholic Church, including the Immaculate Conception. 
When in 1894 Pope Leo XIII addressed the Eastern church in his encyclical "Praeclara gratulationis", Ecumenical Patriarch Anthimos replied with an encyclical of his own in which he stigmatised the dogmas of the Immaculate Conception and papal infallibility as "Roman novelties" and called on the Roman church to return to the faith of the early centuries. The Eastern Orthodox liturgy reveres Mary as Mother of God, "immaculate, spotless, and altogether without stain", so that as one contemporary Orthodox bishop puts it, "the Latin dogma seems to us not so much erroneous as superfluous." In the mid-19th century some Catholics who were unable to accept the doctrine of papal infallibility left the Roman Church and formed the Old Catholic Church; their movement rejects the Immaculate Conception. Protestants overwhelmingly condemned the promulgation of "Ineffabilis Deus" as an exercise in papal power, and the doctrine itself as without foundation in Scripture, for it denied that all had sinned and rested on a translation of Luke 1:28 (the "full of grace" passage) that the original Greek did not support. With the exception of some Lutherans and Anglicans, most Protestants therefore teach that Mary was a sinner saved through grace like all believers. Martin Luther showed an abiding devotion to Mary, including her sinlessness and sanctity, and Lutherans hold Mary in high esteem, but the Immaculate Conception does not hold the status of a dogma within Lutheranism. The ecumenical "Lutheran-Catholic Statement on Saints, Mary", issued in 1990 after seven years of study and discussion, affirmed "that the Catholic teaching about the saints and Mary as set forth in the documents of Vatican II does not promote idolatrous belief or practice and is not opposed to the gospel," but conceded that Lutherans and Catholics remained separated "by differing views on matters such as the invocation of saints, the Immaculate Conception and the Assumption of Mary." December 8 is designated a lesser festival of the "Conception of the Blessed Virgin Mary" (without the adjective "immaculate") in the Church of England's "Book of Common Prayer" but although some Anglicans may hold the Immaculate Conception as an optional pious belief, the final report of the Anglican–Roman Catholic International Commission (ARCIC), created in 1969 to further ecumenical progress between the Roman Catholic Church and the Anglican Communion, recorded the disagreement of the Anglicans with the doctrine. A saying of Mohammad recorded in the 9th century by the Muslim scholar Muhammad al-Bukhari quotes the Prophet saying that Satan touches all the descendants of Adam "except Mary and her child"; medieval Christian monks later used this passage to claim that the Quran supported the Immaculate Conception, with the result that Muhammad was even depicted in altarpieces between the 16th and 18th centuries. Islam, however, lacks the concept of original sin: according to the Quran he was immediately forgiven for his sin in Eden, which could therefore never have been passed to his descendants.
https://en.wikipedia.org/wiki?curid=15256
Interactive Fiction Competition The Interactive Fiction Competition (also known as IFComp) is one of several annual competitions for works of interactive fiction. It has been held since 1995. It is intended for fairly short games, as judges are only allowed to spend two hours playing a game before deciding how many points to award it. The competition has been described as the "Super Bowl" of interactive fiction. The competition is organized by "Stephen Granade". Although the first competition had separate sections for Inform and TADS games, subsequent competitions have not been divided into sections and are open to games produced by any method, provided that the software used to play the game is freely available. Anyone can judge the games, and anyone can donate a prize. Almost always, there are enough prizes donated that anyone who enters will get one. Because anyone can judge and participate in the competition, authors are required to make their entries freely available to play. In addition to the main competition, the entries take part in the Miss Congeniality contest, where the participating authors vote for three games (not including their own). This was started in 1998 to distribute that year's surplus prizes; this additional contest has remained unchanged since then, even without the original reason for its existence. There is also a 'Golden Banana of Discord' side contest; the distinction is given to the entry with scores with the highest standard deviation. In 2016, operation of the competition was taken over by the Interactive Fiction Technology Foundation. The competition differs from the XYZZY Awards, as authors must specifically submit games to the Interactive Fiction Competition, but all games released in the past year are eligible for the XYZZY Awards. Many games win awards in both competitions. The following is a list of first place winners to date: A reviewer for "The A.V. Club" said of the 2008 competition, "Once again, the IF Competition delivers some of the best writing in games." The 2008 competition was described as containing "some real standouts both in quality of puzzles and a willingness to stretch the definition of text adventures/interactive fiction."
https://en.wikipedia.org/wiki?curid=15266
Inquests in England and Wales Inquests in England and Wales are held into sudden or unexplained deaths and also into the circumstances of and discovery of a certain class of valuable artefacts known as "treasure trove". In England and Wales, inquests are the responsibility of a coroner, who operates under the jurisdiction of the Coroners and Justice Act 2009. In some circumstances where an inquest cannot view or hear all the evidence, it may be suspended and a public inquiry held with the consent of the Home Secretary. There is a general duty upon every person to report a death to the coroner if an inquest is likely to be required. However, this duty is largely unenforceable in practice and the duty falls on the responsible registrar. The registrar must report a death where: The coroner must hold an inquest where the death is: Where the cause of death is unknown, the coroner may order a post mortem examination in order to determine whether the death was violent. If the death is found to be non-violent, an inquest is unnecessary. In 2004 in England and Wales, there were 514,000 deaths of which 225,500 were referred to the coroner. Of those, 115,800 resulted in post-mortem examinations and there were 28,300 inquests, 570 with a jury. In 2014 the Royal College of Pathologists claimed that up to 10,000 deaths a year recorded as being from natural causes should have been investigated by inquests. They were particularly concerned about people whose death occurred as a result of medical errors. "We believe a medical examiner would have been alerted to what was going on in Mid-Staffordshire long before this long list of avoidable deaths reached the total it did," said Archie Prentice, the pathologists' president. A coroner must summon a jury for an inquest if the death was not a result of natural causes and occurred when the deceased was in state custody (for example in prison, police custody, or whilst detained under the Mental Health Act 1983); or if it was the result of an act or omission of a police officer; or if it was a result of a notifiable accident, poisoning or disease. The senior coroner can also call a jury at his or her own discretion. This discretion has been heavily litigated in light of the Human Rights Act 1998, which means that juries are required now in a broader range of situations than expressly required by statute. The purpose of the inquest is to answer four questions: Evidence must be solely for the purpose of answering these questions and no other evidence is admitted. It is not for the inquest to ascertain "how the deceased died" or "in what broad circumstances", but "how the deceased came by his death", a more limited question. Moreover, it is not the purpose of the inquest to determine, or appear to determine, criminal or civil liability, to apportion guilt or attribute blame. For example, where a prisoner hanged himself in a cell, he came by his death by hanging and it was not the role of the inquest to enquire into the broader circumstances such as the alleged neglect of the prison authorities that might have contributed to his state of mind or given him the opportunity. However, the inquest should set out as many of the facts as the public interest requires. Under the terms of article 2 of the European Convention of Human Rights, governments are required to "establish a framework of laws, precautions, procedures and means of enforcement which will, to the greatest extent reasonably practicable, protect life". 
The European Court of Human Rights has interpreted this as mandating independent official investigation of any death where public servants may be implicated. Since the Human Rights Act 1998 came into force, in those cases alone, the inquest is now to consider the broader question "by what means and in what circumstances". In disasters, such as the 1987 King's Cross fire, a single inquest may be held into several deaths. Inquests are governed by the Rules. The coroner gives notice to near relatives, those entitled to examine witnesses and those whose conduct is likely to be scrutinised. Inquests are held in public except where there are real issues and substantial of national security but only the portions which relate to national security will be held behind closed doors. Individuals with an interest in the proceedings, such as relatives of the deceased, individuals appearing as witnesses, and organisations or individuals who may face some responsibility in the death of the individual, may be represented by a legal professional be that a solicitor or barrister at the discretion of the coroner. Witnesses may be compelled to testify subject to the privilege against self-incrimination. If there are matters of National Security or matters which relate to sensitive matters then under Schedule 1 of the Coroners and Justice Act 2009 an inquest may be suspended and replaced by a public inquiry under s.2 of the Inquiries Act 2005. This can only be ordered by the Home Secretary and must be announced to Parliament with the Coroner in charge being informed and the next of kin being informed. The next of kin and Coroner can appeal the decision of the Home Secretary. The following conclusions (formerly called verdicts) are not mandatory but are strongly recommended: In 2004, 37% of inquests recorded an outcome of death by accident / misadventure, 21% by natural causes, 13% suicide, 10% open verdicts, and 19% other outcomes. Since 2004 it has been possible for the coroner to record a narrative verdict, recording the circumstances of a death without apportioning blame or liability. Since 2009, other possible verdicts have included "alcohol/drug related death" and "road traffic collision". The civil standard of proof, on the balance of probabilities, is used for all conclusions. The standard of proof for suicide and unlawful killing changed in 2018 from beyond all reasonable doubt to the balance of probabilities following a case in the courts of appeal. Owing in particular to the failures to notice the serial murder committed by Harold Shipman, the Coroners and Justice Act 2009 modernised the system with:
https://en.wikipedia.org/wiki?curid=15268
Information retrieval Information retrieval (IR) is the activity of obtaining information system resources that are relevant to an information need from a collection of those resources. Searches can be based on full-text or other content-based indexing. Information retrieval is the science of searching for information in a document, searching for documents themselves, and also searching for the metadata that describes data, and for databases of texts, images or sounds. Automated information retrieval systems are used to reduce what has been called information overload. An IR system is a software system that provides access to books, journals and other documents; stores and manages those documents. Web search engines are the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevancy. An object is an entity that is represented by information in a content collection or database. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. This ranking of results is a key difference of information retrieval searching compared to database searching. Depending on the application the data objects may be, for example, text documents, images, audio, mind maps or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata. Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query. The idea of using computers to search for relevant pieces of information was popularized in the article "As We May Think" by Vannevar Bush in 1945. It would appear that Bush was inspired by patents for a 'statistical machine' - filed by Emanuel Goldberg in the 1920s and '30s - that searched for documents stored on film. The first description of a computer searching for information was described by Holmstrom in 1948, detailing an early mention of the Univac computer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedy, Desk Set. In the 1960s, the first large information retrieval research group was formed by Gerard Salton at Cornell. By the 1970s several different retrieval techniques had been shown to perform well on small text corpora such as the Cranfield collection (several thousand documents). Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s. In 1992, the US Department of Defense along with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. The aim of this was to look into the information retrieval community by supplying the infrastructure that was needed for evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods that scale to huge corpora. 
The introduction of web search engines has boosted the need for very large scale retrieval systems even further. For effectively retrieving relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. The picture on the right illustrates the relationship of some common models. In the picture, the models are categorized according to two dimensions: the mathematical basis and the properties of the model. The evaluation of an information retrieval system' is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval or top-k retrieval, include precision and recall. All measures assume a ground truth notion of relevancy: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevancy.
https://en.wikipedia.org/wiki?curid=15271
International Criminal Tribunal for the former Yugoslavia The International Tribunal for the Prosecution of Persons Responsible for Serious Violations of International Humanitarian Law Committed in the Territory of the Former Yugoslavia since 1991, more commonly referred to as the International Criminal Tribunal for the former Yugoslavia (ICTY), was a body of the United Nations established to prosecute serious crimes committed during the Yugoslav Wars, and to try their perpetrators. The tribunal was an ad hoc court located in The Hague, Netherlands. The Court was established by Resolution 827 of the United Nations Security Council, which was passed on 25 May 1993. It had jurisdiction over four clusters of crimes committed on the territory of the former Yugoslavia since 1991: grave breaches of the Geneva Conventions, violations of the laws or customs of war, genocide, and crimes against humanity. The maximum sentence it could impose was life imprisonment. Various countries signed agreements with the UN to carry out custodial sentences. A total of 161 persons were indicted; the final indictments were issued in December 2004, the last of which were confirmed and unsealed in the spring of 2005. The final fugitive, Goran Hadžić, was arrested on 20 July 2011. The final judgment was issued on 29 November 2017 and the institution formally ceased to exist on 31 December 2017. Residual functions of the ICTY, including oversight of sentences and consideration of any appeal proceedings initiated since 1 July 2013, are under the jurisdiction of a successor body, the International Residual Mechanism for Criminal Tribunals (IRMCT). United Nations Security Council Resolution 808 of 22 February 1993 decided that "an international tribunal shall be established for the prosecution of persons responsible for serious violations of international humanitarian law committed in the territory of the former Yugoslavia since 1991", and calling on the Secretary-General to "submit for consideration by the Council ... a report on all aspects of this matter, including specific proposals and where appropriate options ... taking into account suggestions put forward in this regard by Member States". The Court was originally proposed by German Foreign Minister Klaus Kinkel. By 25 May 1993, the international community had tried to pressure the leaders of the former Yugoslavian republics diplomatically, militarily, politically, economically, and – with Resolution 827 – through juridical means. Resolution 827 of 25 May 1993 approved S/25704 report of the Secretary-General and adopted the Statute of the International Tribunal annexed to it, formally creating the ICTY. It would have jurisdiction over four clusters of crime committed on the territory of the former SFR Yugoslavia since 1991: grave breaches of the Geneva Conventions, violations of the laws or customs of war, genocide, and crime against humanity. The maximum sentence it could impose was life imprisonment. In 1993, the ICTY built its internal infrastructure. 17 states have signed an agreement with the ICTY to carry out custodial sentences. 1993–1994: In the first year of its existence, the Tribunal laid the foundations for its existence as a judicial organ. The Tribunal established the legal framework for its operations by adopting the rules of procedure and evidence, as well as its rules of detention and directive for the assignment of defense counsel. Together these rules established a legal aid system for the Tribunal. 
As the ICTY is part of the United Nations and as it was the first "international" court for "criminal" justice, the development of a juridical infrastructure was considered quite a challenge. However after the first year the first ICTY judges had drafted and adopted all the rules for court proceedings. 1994–1995: The ICTY established its offices within the Aegon Insurance Building in The Hague (which was, at the time, still partially in use by Aegon) and detention facilities in Scheveningen in The Hague (the Netherlands). The ICTY hired now many staff members. By July 1994 there were sufficient staff members in the office of the prosecutor to begin field investigations and by November 1994 the first indictment was presented and confirmed. In 1995, the entire staff numbered more than 200 persons and came from all over the world. Moreover, some governments assigned their legally trained people to the ICTY. In 1994 the first indictment was issued against the Bosnian-Serb concentration camp commander Dragan Nikolić. This was followed on 13 February 1995 by two indictments comprising 21 individuals which were issued against a group of 21 Bosnian-Serbs charged with committing atrocities against Muslim and Croat civilian prisoners. While the war in the former Yugoslavia was still raging, the ICTY prosecutors showed that an international court was viable. However, no accused was arrested. The court confirmed eight indictments against 46 individuals and issued arrest warrants. Bosnian Serb indictee Duško Tadić became the subject of the Tribunal's first trial. Tadić was arrested by German police in Munich in 1994 for his alleged actions in the Prijedor region in Bosnia-Herzegovina (especially his actions in the Omarska, Trnopolje and Keraterm detention camps). He made his first appearance before the ICTY Trial Chamber on 26 April 1995, and pleaded not guilty to all of the charges in the indictment. 1995–1996: Between June 1995 and June 1996, 10 public indictments had been confirmed against a total of 33 individuals. Six of the newly indicted persons were transferred in the Tribunal's detention unit. In addition to Duško Tadic, by June 1996 the tribunal had Tihomir Blaškić, Dražen Erdemović, Zejnil Delalić, Zdravko Mucić, Esad Landžo and Hazim Delić in custody. Erdemović became the first person to enter a guilty plea before the tribunal's court. Between 1995 and 1996, the ICTY dealt with miscellaneous cases involving several detainees, which never reached the trial stage. In 2004, the ICTY published a list of five accomplishments "in justice and law": The United Nations Security Council passed resolutions 1503 in August 2003 and 1534 in March 2004, which both called for the completion of all cases at both the ICTY and its sister tribunal, the International Criminal Tribunal for Rwanda (ICTR) by 2010. In December 2010, the Security Council adopted Resolution 1966, which established the International Residual Mechanism for Criminal Tribunals (IRMCT), a body intended to gradually assume residual functions from both the ICTY and the ICTR as they wound down their mandate. Resolution 1966 called upon the Tribunal to finish its work by 31 December 2014 to prepare for its closure and the transfer of its responsibilities. 
In a "Completion Strategy Report" issued in May 2011, the ICTY indicated that it aimed to complete all trials by the end of 2012 and complete all appeals by 2015, with the exception of Radovan Karadžić whose trial was expected to end in 2014 and Ratko Mladić and Goran Hadžić, who were still at large at that time and were not arrested until later that year. The IRMCT's ICTY branch began functioning on 1 July 2013. Per the Transitional Arrangements adopted by the UN Security Council, the ICTY was to conduct and complete all outstanding first instance trials, including those of Karadžić, Mladić and Hadžić. The ICTY would also conduct and complete all appeal proceedings for which the notice of appeal against the judgement or sentence was filed before 1 July 2013. The IRMCT will handle any appeals for which notice is filed after that date. The final ICTY trial to be completed in the first instance was that of Ratko Mladić, who was convicted on 22 November 2017. The final case to be considered by the ICTY was an appeal proceeding encompassing six individuals, whose sentences were upheld on 29 November 2017. While operating, the Tribunal employed around 900 staff. Its organisational components were Chambers, Registry and the Office of the Prosecutor (OTP). The Prosecutor was responsible for investigating crimes, gathering evidence and prosecutions and was head of the Office of the Prosecutor (OTP). The Prosecutor was appointed by the UN Security Council upon nomination by the UN Secretary-General. The last prosecutor was Serge Brammertz. Previous Prosecutors have been Ramón Escovar Salom of Venezuela (1993–1994), however, he never took up that office, Richard Goldstone of South Africa (1994–1996), Louise Arbour of Canada (1996–1999), and Carla Del Ponte of Switzerland (1999–2007). Richard Goldstone, Louise Arbour and Carla Del Ponte also simultaneously served as the Prosecutor of the International Criminal Tribunal for Rwanda until 2003. Graham Blewitt of Australia served as the Deputy Prosecutor from 1994 until 2004. David Tolbert, the President of the International Center for Transitional Justice, was also appointed Deputy Prosecutor of the ICTY in 2004. Chambers encompassed the judges and their aides. The Tribunal operated three Trial Chambers and one Appeals Chamber. The President of the Tribunal was also the presiding Judge of the Appeals Chamber. At the time of the court's dissolution, there were seven permanent judges and one "ad hoc" judge who served on the Tribunal. A total of 86 judges have been appointed to the Tribunal from 52 United Nations member states. Of those judges, 51 were permanent judges, 36 were "ad litem" judges, and one was an "ad hoc" judge. Note that one judge served as both a permanent and "ad litem" judge, and another served as both a permanent and "ad hoc" judge. UN member and observer states could each submit up to two nominees of different nationalities to the UN Secretary-General. The UN Secretary-General submitted this list to the UN Security Council which selected from 28 to 42 nominees and submitted these nominees to the UN General Assembly. The UN General Assembly then elected 14 judges from that list. Judges served for four years and were eligible for re-election. The UN Secretary-General appointed replacements in case of vacancy for the remainder of the term of office concerned. 
On 21 October 2015, Judge Carmel Agius of Malta was elected President of the ICTY and Liu Daqun of China was elected Vice-President; they have assumed their positions on 17 November 2015. His predecessors were Antonio Cassese of Italy (1993–1997), Gabrielle Kirk McDonald of the United States (1997–1999), Claude Jorda of France (1999–2002), Theodor Meron of the United States (2002–2005), Fausto Pocar of Italy (2005–2008), Patrick Robinson of Jamaica (2008–2011), and Theodor Meron (2011–2015). The Registry was responsible for handling the administration of the Tribunal; activities included keeping court records, translating court documents, transporting and accommodating those who appear to testify, operating the Public Information Section, and such general duties as payroll administration, personnel management and procurement. It was also responsible for the Detention Unit for indictees being held during their trial and the Legal Aid program for indictees who cannot pay for their own defence. It was headed by the Registrar, a position occupied over the years by Theo van Boven of the Netherlands (February 1994 to December 1994), Dorothée de Sampayo Garrido-Nijgh of the Netherlands (1995–2000), Hans Holthuis of the Netherlands (2001–2009), and John Hocking of Australia (May 2009 to December 2017). Those defendants on trial and those who were denied a provisional release were detained at the United Nations Detention Unit on the premises of the Penitentiary Institution Haaglanden, location Scheveningen in Belgisch Park, a suburb of The Hague, located some 3 km by road from the courthouse. The indicted were housed in private cells which had a toilet, shower, radio, satellite TV, personal computer (without internet access) and other luxuries. They were allowed to phone family and friends daily and could have conjugal visits. There was also a library, a gym and various rooms used for religious observances. The inmates were allowed to cook for themselves. All of the inmates mixed freely and were not segregated on the basis of nationality. As the cells were more akin to a university residence instead of a jail, some had derisively referred to the ICT as the "Hague Hilton". The reason for this luxury relative to other prisons is that the first president of the court wanted to emphasise that the indictees were innocent until proven guilty. The Tribunal indicted 161 individuals between 1997 and 2004 and completed proceedings with them as follows: The indictees ranged from common soldiers to generals and police commanders all the way to prime ministers. Slobodan Milošević was the first sitting head of state indicted for war crimes. Other "high level" indictees included Milan Babić, former President of the Republika Srpska Krajina; Ramush Haradinaj, former Prime Minister of Kosovo; Radovan Karadžić, former President of the Republika Srpska; Ratko Mladić, former Commander of the Bosnian Serb Army; and Ante Gotovina, former General of the Croatian Army. The very first hearing at the ICTY was referral request in the Tadić case on 8 November 1994. Croat Serb General and former President of the Republic of Serbian Krajina Goran Hadžić was the last fugitive wanted by the Tribunal to be arrested on 20 July 2011. An additional 23 individuals have been the subject of contempt proceedings. Skeptics argued that an international court could not function while the war in the former Yugoslavia was still going on. 
This would be a huge undertaking for any court, but for the ICTY it would be an even greater one, as the new tribunal still needed judges, a prosecutor, a registrar, investigative and support staff, an extensive interpretation and translation system, a legal aid structure, premises, equipment, courtrooms, detention facilities, guards and all the related funding. Criticisms of the court include: Response to criticism of the work of the ICTY came from various scholars, academicians, and professionals, in various forms and in various publications. Example of Jelena Subotić's response to David Harland's summarize and illustrate underlying point of this debate in a competent manner. In response to Harland's "Selective Justice", Subotić, an assistant professor of political science at Georgia State University and author of "Hijacked Justice: Dealing with the Past in the Balkans", explained that the critics of the Tribunal missing the point, "(...) which is not to deliver justice for past wrongs equally for 'all sides', fostering reconciliation, but to carefully measure each case on its own merits ... We should judge the work of the tribunal by its legal expertise, not by the political outcomes we desire." Marko Hoare said that the accusations of the tribunal's "selective justice" stem from Serbian nationalist propaganda. He wrote: "This is, of course, the claim that hardline Serb nationalists and supporters of Slobodan Milosevic have been making for about the last two decades. Instead of carrying out any research into the actual record of the ICTY in order to support his thesis, Harland simply repeats a string of cliches of the kind that frequently appear in anti-Hague diatribes by Serb nationalists."
https://en.wikipedia.org/wiki?curid=15274
ISO 216 ISO 216 is an international standard for paper sizes, used everywhere except in North America and parts of Latin America. The standard defines the "A", "B" and "C" series of paper sizes, including A4, the most commonly available paper size worldwide. Two supplementary standards, ISO 217 and ISO 269, define related paper sizes; the ISO 269 "C" series is commonly listed alongside the A and B sizes. All ISO 216, ISO 217 and ISO 269 paper sizes (except some envelopes) have the same aspect ratio, :1, within rounding to millimetres. This ratio has the unique property that when cut or folded in half widthways, the halves also have the same aspect ratio. Each ISO paper size is one half of the area of the next larger size in the same series. The oldest known mention of the advantages of basing a paper size on an aspect ratio of is found in a letter written on 25 October 1786 by the German scientist Georg Christoph Lichtenberg to Johann Beckmann. The formats that became ISO paper sizes A2, A3, B3, B4, and B5 were developed in France. They were listed in a 1798 law on taxation of publications that was based in part on page sizes. Searching for a standard system of paper formats on a scientific basis by the association Die Brücke – Internationales Institut zur Organisierung der geistigen Arbeit (The Bridge – International institute to organise intellectual work), as a replacement for the vast variety of other paper formats that had been used before, in order to make paper stocking and document reproduction cheaper and more efficient, Wilhelm Ostwald proposed in 1911, over a hundred years after the “Loi sur le timbre”, a "Weltformat" (world format) for paper sizes based on the ratio 1:, referring to the argument advanced by Lichtenberg 1786 letter, and linking this to the metrical system by using 1 centimetre as the width of the base format. W. Porstmann argued in a long article published in 1918, that a firm basis for the system of paper formats, which deal with surfaces, could not be the length, but the surface, i.e. linking the system of paper formats to the metrical system of measures by the square metre, using the two formulae of "x" / "y" = 1: and "x" × "y" = 1. Porstmann also argued that formats for "containers" of paper like envelopes should be 10% larger than the paper format itself. In 1921, after a long discussion and another intervention by W. Porstmann, the "Normenausschuß der deutschen Industrie" (NADI, "Standardisation Committee of German Industry", today Deutsches Institut für Normung or short DIN) published German standard "DI Norm 476" the specification of 4 series of paper formats with ratio 1:, with series A as the always preferred formats and basis for the other series. All measures are rounded to the nearest millimetre. A0 has a surface area of 1 square metre up to a rounding error, with a width of 841 mm and height of 1189 mm, so an actual area of 0.999949 m2, and A4 recommended as standard paper size for business, administrative and government correspondence and A6 for postcards. Series B is based on B0 with width of 1 metre, C0 is 917 mm × 1297 mm, and D0 771 mm × 1090 mm. Series C is the basis for envelope formats. 
This German standardisation work was accompanied by contributions from other countries, and the published DIN paper-format concept was soon introduced as a national standard in many other countries, for example, Belgium (1924), Netherlands (1925), Norway (1926), Switzerland (1929), Sweden (1930), Soviet Union (1934), Hungary (1938), Italy (1939), Finland (1942), Uruguay (1942), Argentina (1943), Brazil (1943), Spain (1947), Austria (1948), Romania (1949), Japan (1951), Denmark (1953), Czechoslovakia (1953), Israel (1954), Portugal (1954), Yugoslavia (1956), India (1957), Poland (1957), United Kingdom (1959), Venezuela (1962), New Zealand (1963), Iceland (1964), Mexico (1965), South Africa (1966), France (1967), Peru (1967), Turkey (1967), Chile (1968), Greece (1970), Zimbabwe (1970), Singapore (1970), Bangladesh (1972), Thailand (1973), Barbados (1973), Australia (1974), Ecuador (1974), Columbia (1975) and Kuwait (1975). It finally became both an international standard (ISO 216) as well as the official United Nations document format in 1975 and it is today used in almost all countries on this planet, with the exception of North America, Peru, Colombia, and the Dominican Republic. In 1977, a large German car manufacturer performed a study of the paper formats found in their incoming mail and concluded that out of 148 examined countries, 88 already used the A series formats. The main advantage of this system is its scaling. Rectangular paper with an aspect ratio of has the unique property that, when cut or folded in half midway between its longer sides, each half has the same aspect ratio as the whole sheet before it was divided. Equivalently, if one lays two same-sized sheets of paper with an aspect ratio of side-by-side along their longer side, they form a larger rectangle with the aspect ratio of and double the area of each individual sheet. The ISO system of paper sizes exploits these properties of the aspect ratio. In each series of sizes (for example, series A), the largest size is numbered 0 (for example, A0), and each successive size (for example, A1, A2, etc.) has half the area of the preceding sheet and can be cut by halving the length of the preceding size sheet. The new measurement is rounded down to the nearest millimetre. A folded brochure can be made by using a sheet of the next larger size (for example, an A4 sheet is folded in half to make a brochure with size A5 pages. An office photocopier or printer can be designed to reduce a page from A4 to A5 or to enlarge a page from A4 to A3. Similarly, two sheets of A4 can be scaled down to fit one A4 sheet without excess empty paper. This system also simplifies calculating the weight of paper. Under ISO 536, paper's grammage is defined as a sheet's weight in grams (g) per area in square metres (abbreviated or gsm). One can derive the grammage of other sizes by arithmetic division in . A standard A4 sheet made from paper weighs , as it is (four halvings, ignoring rounding) of an A0 page. Thus the weight, and the associated postage rate, can be easily approximated by counting the number of sheets used. ISO 216 and its related standards were first published between 1975 and 1995: Paper in the A series format has an aspect ratio of (≈ 1.414, when rounded). A0 is defined so that it has an area of 1 m before rounding to the nearest millimetre. Successive paper sizes in the series (A1, A2, A3, etc.) 
are defined by halving the area of the preceding paper size and rounding down, so that the long side of is the same length as the short side of A"n". Hence, each next size is nearly exactly half of the prior size. So, an A1 page can fit 2 A2 pages inside the same area. The most used of this series is the size A4, which is and thus almost exactly in area. For comparison, the letter paper size commonly used in North America () is about () wider and () shorter than A4. Then, the size of A5 paper is half of A4, as × ( × ). The geometric rationale for using the square root of 2 is to maintain the aspect ratio of each subsequent rectangle after cutting or folding an A-series sheet in half, perpendicular to the larger side. Given a rectangle with a longer side, "x", and a shorter side, "y", ensuring that its aspect ratio, , will be the same as that of a rectangle half its size, , which means that = , which reduces to = ; in other words, an aspect ratio of 1:. The formula that gives the larger border of the paper size A"n" and prior to rounding is the geometric sequence: The paper size A"n" thus has the dimension and area (before rounding) The length of the long side of A"n" can be calculated as (brackets represent the floor function). The B series is defined in the standard as follows: "A subsidiary series of sizes is obtained by placing the geometrical means between adjacent sizes of the A series in sequence." The use of the geometric mean makes each step in size: B0, A0, B1, A1, B2 ... smaller than the previous one by the same factor. As with the A series, the lengths of the B series have the ratio , and folding one in half (and rounding down to the nearest millimetre) gives the next in the series. The shorter side of B0 is exactly 1 metre. The length of the long side of B"n" can be calculated as: There is also an incompatible Japanese B series which the JIS defines to have 1.5 times the area of the corresponding JIS A series (which is identical to the ISO A series). Thus, the lengths of JIS B series paper are ≈ 1.22 times those of A-series paper. By comparison, the lengths of ISO B series paper are ≈ 1.19 times those of A-series paper. The C series formats are geometric means between the B series and A series formats with the same number (e.g., C2 is the geometric mean between B2 and A2). The width to height ratio is as in the A and B series. The C series formats are used mainly for envelopes. An unfolded A4 page will fit into a C4 envelope. C series envelopes follow the same ratio principle as the A series pages. For example, if an A4 page is folded in half so that it is A5 in size, it will fit into a C5 envelope (which will be the same size as a C4 envelope folded in half). The lengths of ISO C series paper are therefore ≈ 1.09 times those of A-series paper. A, B, and C paper fit together as part of a geometric progression, with ratio of successive side lengths of , though there is no size half-way between B"n" and : A4, C4, B4, "D4", A3, ...; there is such a D-series in the Swedish extensions to the system. The length of the long side of C"n" can be calculated as: The tolerances specified in the standard are: These are related to comparison between series A, B and C. The ISO 216 formats are organized around the ratio 1:; two sheets next to each other together have the same ratio, sideways. In scaled photocopying, for example, two A4 sheets reduced to A5 size fit exactly onto one A4 sheet, and an A4 sheet in magnified size onto an A3 sheet; in each case, there is neither waste nor want. 
The principal countries not generally using the ISO paper sizes are the United States and Canada, which use North American paper sizes. Although they have also officially adopted the ISO 216 paper format, Mexico, Panama, Peru, Colombia, the Philippines, and Chile also use mostly U.S. paper sizes. Rectangular sheets of paper with the ratio 1: are popular in paper folding, such as origami, where they are sometimes called "A4 rectangles" or "silver rectangles". In other contexts, the term "silver rectangle" can also refer to a rectangle in the proportion 1:(1 + ), known as the silver ratio. An important adjunct to the ISO paper sizes, particularly the A series, are the technical drawing line widths specified in ISO 128, and the matching technical pen widths of 0.13, 0.18, 0.25, 0.35, 0.5, 0.7, 1.0, 1.40, and 2.0 mm, as specified in ISO 9175-1. Colour codes are assigned to each size to facilitate easy recognition by the drafter. These sizes increase by a factor of , so that particular pens can be used on particular sizes of paper, and then the next smaller or larger size can be used to continue the drawing after it has been reduced or enlarged, respectively. For example, a continuous thick line on A0 size paper shall be drawn with a 0.7 mm pen, the same line on A1 paper shall be drawn with a 0.5 mm pen, and finally on A2, A3, or A4 paper it shall be drawn with a 0.35 mm pen. The earlier DIN 6775 standard upon which ISO 9175-1 is based also specified a term and symbol for easy identification of pens and drawing templates compatible with the standard, called "Micronorm", which may still be found on some technical drafting equipment.
https://en.wikipedia.org/wiki?curid=15275
Isaac Abendana Isaac Abendana (ca. 1640 – 1699) was the younger brother of Jacob Abendana, and became "hakam" of the Spanish Portuguese Synagogue in London after his brother died. Abendana moved to England before his brother, in 1662, and taught Hebrew at Cambridge University. He completed an unpublished Latin translation of the "Mishnah" for the university in 1671. While he was at Cambridge, Abendana sold Hebrew books to the Bodleian Library of Oxford, and in 1689 he took a teaching position in Magdalen College. In Oxford, he wrote a series of Jewish almanacs for Christians, which he later collected and compiled as the "Discourses on the Ecclesiastical and Civil Polity of the Jews" (1706). Like his brother, he maintained an extensive correspondence with leading Christian scholars of his time, most notably with the philosopher Ralph Cudworth, master of Christ's College, Cambridge.
https://en.wikipedia.org/wiki?curid=15281
Internet Engineering Task Force The Internet Engineering Task Force (IETF) is an open standards organization, which develops and promotes voluntary Internet standards, in particular the standards that comprise the Internet protocol suite (TCP/IP). It has no formal membership roster or membership requirements. All participants and managers are volunteers, though their work is usually funded by their employers or sponsors. The IETF started out as an activity supported by the US federal government, but since 1993 it has operated as a standards-development function under the auspices of the Internet Society, an international membership-based non-profit organization. The IETF is organized into a large number of working groups and "birds of a feather" informal discussion groups, each dealing with a specific topic. The IETF operates in a bottom-up task creation mode, largely driven by these working groups. Each working group has an appointed chairperson (or sometimes several co-chairs); a charter that describes its focus; and what it is expected to produce, and when. It is open to all who want to participate and holds discussions on an open mailing list or at IETF meetings, where the entry fee in July 2014 was US$650 per person. Mid-2018 the fees are: early bird US$700, late payment US$875, student US$150 and a one day pass for US$375. Rough consensus is the primary basis for decision making. There are no formal voting procedures. Because the majority of the IETF's work is done via mailing lists, meeting attendance is not required for contributors. Each working group is intended to complete work on its topic and then disband. In some cases, the working group will instead have its charter updated to take on new tasks as appropriate. The working groups are organized into areas by subject matter. Current areas are Applications, General, Internet, Operations and Management, Real-time Applications and Infrastructure, Routing, Security, and Transport. Each area is overseen by an area director (AD), with most areas having two co-ADs. The ADs are responsible for appointing working group chairs. The area directors, together with the IETF Chair, form the Internet Engineering Steering Group (IESG), which is responsible for the overall operation of the IETF. The Internet Architecture Board (IAB) oversees the IETF's external relationships and relations with the RFC Editor. The IAB provides long-range technical direction for Internet development. The IAB is also jointly responsible for the IETF Administrative Oversight Committee (IAOC), which oversees the IETF Administrative Support Activity (IASA), which provides logistical, etc. support for the IETF. The IAB also manages the Internet Research Task Force (IRTF), with which the IETF has a number of cross-group relations. A Nominating Committee (NomCom) of ten randomly chosen volunteers who participate regularly at meetings is vested with the power to appoint, reappoint, and remove members of the IESG, IAB, IASA, and the IAOC. To date, no one has been removed by a NomCom, although several people have resigned their positions, requiring replacements. In 1993 the IETF changed from an activity supported by the US Federal Government to an independent, international activity associated with the Internet Society, an international membership-based non-profit organization. 
Because the IETF itself does not have members, nor is it an organization "per se", the Internet Society provides the financial and legal framework for the activities of the IETF and its sister bodies (IAB, IRTF). IETF activities are funded by meeting fees, meeting sponsors and by the Internet Society via its organizational membership and the proceeds of the Public Interest Registry. In December 2005 the IETF Trust was established to manage the copyrighted materials produced by the IETF. The Internet Engineering Steering Group (IESG) is a body composed of the Internet Engineering Task Force (IETF) chair and area directors. It provides the final technical review of Internet standards and is responsible for day-to-day management of the IETF. It receives appeals of the decisions of the working groups, and the IESG makes the decision to progress documents in the standards track. The chair of the IESG is the director of the General Area, who also serves as the overall IETF Chair. Members of the IESG include the two directors of each area, together with liaison and "ex officio" members. The first IETF meeting was attended by 21 US Federal Government-funded researchers on 16 January 1986. It was a continuation of the work of the earlier GADS Task Force. Representatives from non-governmental entities were invited to attend starting with the fourth IETF meeting in October 1986. Since that time all IETF meetings have been open to the public. Initially, the IETF met quarterly, but from 1991, it has been meeting three times a year. The initial meetings were very small, with fewer than 35 people in attendance at each of the first five meetings. The maximum attendance during the first 13 meetings was only 120 attendees; this occurred at the 12th meeting, held during January 1989. These meetings have grown in both participation and scope a great deal since the early 1990s; attendance peaked at 2,810 at the December 2000 IETF held in San Diego, California. Attendance declined with industry restructuring during the early 2000s, and is currently around 1,200. The locations of IETF meetings vary greatly. A list of past and future meeting locations can be found on the IETF meetings page. The IETF strives to hold its meetings near where most of the IETF volunteers are located. For many years, the goal was three meetings a year, with two in North America and one in either Europe or Asia, alternating between them every other year. The current goal is to hold three meetings in North America, two in Europe and one in Asia during a two-year period. However, corporate sponsorship of the meetings is also an important factor and the schedule has been modified from time to time in order to decrease operational costs. The IETF also organizes hackathons during the IETF meetings. The focus is on implementing code that will improve standards in terms of quality and interoperability. The details of IETF operations have changed considerably as the organization has grown, but the basic mechanism remains publication of proposed specifications, development based on the proposals, review and independent testing by participants, and republication as a revised proposal, a draft proposal, or eventually as an Internet Standard. IETF standards are developed in an open, all-inclusive process in which any interested individual can participate. All IETF documents are freely available over the Internet and can be reproduced at will. 
Multiple, working, useful, interoperable implementations are the chief requirement before an IETF proposed specification can become a standard. Most specifications are focused on single protocols rather than tightly interlocked systems. This has allowed the protocols to be used in many different systems, and its standards are routinely re-used by bodies which create full-fledged architectures (e.g. 3GPP IMS). Because it relies on volunteers and uses "rough consensus and running code" as its touchstone, results can be slow whenever the number of volunteers is either too small to make progress, or so large as to make consensus difficult, or when volunteers lack the necessary expertise. For protocols like SMTP, which is used to transport e-mail for a user community in the many hundreds of millions, there is also considerable resistance to any change that is not fully backward compatible; IPv6 is a notable exception to this preference. Work within the IETF on ways to improve the speed of the standards-making process is ongoing but, because the number of volunteers with opinions on it is very great, consensus on improvements has been slow to develop. The IETF cooperates with the W3C, ISO/IEC, ITU, and other standards bodies. Statistics are available that show who the top contributors by RFC publication are. While the IETF only allows for participation by individuals, and not by corporations or governments, sponsorship information is available from these statistics. The IETF Chairperson is selected by the Nominating Committee (NomCom) process for a 2-year renewable term. Before 1993, the IETF Chair was selected by the IAB. The IETF works on a broad range of networking technologies which provide the foundation for the Internet's growth and evolution. It aims to improve the efficiency in management of networks as they grow in size and complexity. The IETF is also standardizing protocols for autonomic networking that enable networks to be self-managing. The Internet of Things (IoT) is a network of physical objects or things that are embedded with electronics, sensors and software, and that can exchange data with their operator, manufacturer and other connected devices. Several IETF working groups are developing protocols that are directly relevant to IoT. Transport protocols provide the ability of Internet applications to send data over the Internet. Well-established transport protocols such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are continuously being extended and refined to meet the needs of the global Internet. The IETF divides its work into a number of areas, each with working groups that relate to that area's focus. Area Directors handle the primary task of area management. Area Directors may be advised by one or more Directorates. The area structure is defined by the Internet Engineering Steering Group. The Nominations Committee can be used to add new members. In October 2018, Microsoft and Google engineers introduced a plan to create the Token Binding Protocol in order to stop replay attacks on OAuth tokens.
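Since all IETF documents are freely available over the Internet, retrieving a standard programmatically is straightforward. The following is a minimal Python sketch; it assumes the RFC Editor's well-known plain-text URL pattern, which is a convention of rfc-editor.org rather than anything the IETF specifies:

```python
# Minimal sketch: fetch the plain-text version of an RFC from the RFC Editor.
# Assumes the well-known URL pattern https://www.rfc-editor.org/rfc/rfc<N>.txt
from urllib.request import urlopen

def fetch_rfc(number: int) -> str:
    url = f"https://www.rfc-editor.org/rfc/rfc{number}.txt"
    with urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    text = fetch_rfc(2119)  # "Key words for use in RFCs to Indicate Requirement Levels"
    # Print the first non-blank line of the document.
    print(next(line for line in text.splitlines() if line.strip()))
```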
https://en.wikipedia.org/wiki?curid=15285
ISM band The ISM radio bands are portions of the radio spectrum reserved internationally for industrial, scientific and medical (ISM) purposes other than telecommunications. Examples of applications for the use of radio frequency (RF) energy in these bands include radio-frequency process heating, microwave ovens, and medical diathermy machines. The powerful emissions of these devices can create electromagnetic interference and disrupt radio communication using the same frequency, so these devices are limited to certain bands of frequencies. In general, communications equipment operating in these bands must tolerate any interference generated by ISM applications, and users have no regulatory protection from ISM device operation. Despite the intent of the original allocations, in recent years the fastest-growing use of these bands has been for short-range, low-power wireless communications systems, since these bands can often be used without the government license that would otherwise be required for transmitters; ISM frequencies are often chosen for this purpose precisely because communications there must already tolerate interference. Cordless phones, Bluetooth devices, near field communication (NFC) devices, garage door openers, baby monitors and wireless computer networks (WiFi) may all use the ISM frequencies, although these low-power transmitters are not considered to be ISM devices. The ISM bands are defined by the ITU Radio Regulations (article 5) in footnotes 5.138, 5.150, and 5.280. Individual countries' use of the bands designated in these sections may differ due to variations in national radio regulations. Because communication devices using the ISM bands must tolerate any interference from ISM equipment, unlicensed operations are typically permitted in these bands, since unlicensed operation typically needs to be tolerant of interference from other devices anyway. The ISM bands share allocations with unlicensed and licensed operations; however, due to the high likelihood of harmful interference, licensed use of the bands is typically low. In the United States, uses of the ISM bands are governed by Part 18 of the Federal Communications Commission (FCC) rules, while Part 15 contains the rules for unlicensed communication devices, even those that share ISM frequencies. In Europe, the ETSI is responsible for regulating the use of Short Range Devices, some of which operate in ISM bands. The allocation of radio frequencies is provided according to "Article 5" of the ITU Radio Regulations (edition 2012). In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are the responsibility of the appropriate national administration. An allocation might be primary, secondary, exclusive, or shared. Type A (footnote 5.138): these frequency bands are designated for ISM applications. The use of these frequency bands for ISM applications shall be subject to special authorization by the administration concerned, in agreement with other administrations whose radiocommunication services might be affected. In applying this provision, administrations shall have due regard to the latest relevant ITU-R Recommendations. Type B (footnote 5.150): these frequency bands are also designated for ISM applications. 
Radiocommunication services operating within these bands must accept harmful interference which may be caused by these applications. Footnote 5.280: in Germany, Austria, Bosnia and Herzegovina, Croatia, Macedonia, Liechtenstein, Montenegro, Portugal, Serbia, Slovenia and Switzerland, the band 433.05-434.79 MHz (center frequency 433.92 MHz) is designated for ISM applications. Radiocommunication services of these countries operating within this band must accept harmful interference which may be caused by these applications. Footnote AU: Australia is part of ITU Region 3. The band 433.05 to 434.79 MHz is not a designated ISM band in Australia; however, the operation of low-powered devices in this band is supported through the Radiocommunications class licence for low interference potential devices (LIPDs). The ISM bands were first established at the International Telecommunications Conference of the ITU in Atlantic City, 1947. The American delegation specifically proposed several bands, including the now commonplace 2.4 GHz band, to accommodate the then nascent process of microwave heating; however, FCC annual reports of that time suggest that much preparation was done ahead of these presentations. The report of the 9 August 1947 meeting of the Allocation of Frequencies committee includes the remark: "The delegate of the United States, referring to his request that the frequency 2450 Mc/s be allocated for I.S.M., indicated that there was in existence in the United States, and working on this frequency a diathermy machine and an electronic cooker, and that the latter might eventually be installed in transatlantic ships and airplanes. There was therefore some point in attempting to reach world agreement on this subject." Radio frequencies in the ISM bands have been used for communication purposes, although such devices may experience interference from non-communication sources. In the United States, as early as 1958 Class D Citizens Band, a Part 95 service, was allocated to frequencies that are also allocated to ISM. In the U.S., the FCC first made unlicensed spread spectrum available in the ISM bands in rules adopted on May 9, 1985. Many other countries later developed similar regulations, enabling use of this technology. The FCC action was proposed by Michael Marcus of the FCC staff in 1980 and the subsequent regulatory action took five more years. It was part of a broader proposal to allow civil use of spread spectrum technology and was opposed at the time by mainstream equipment manufacturers and many radio system operators. Industrial, scientific and medical (ISM) applications (of radio frequency energy), or ISM applications for short, are defined in "article 1.15" of the International Telecommunication Union's (ITU) Radio Regulations (RR) as the "operation of equipment or appliances designed to generate and use locally radio frequency energy for industrial, scientific, medical, domestic or similar purposes, excluding applications in the field of telecommunications". The original ISM specifications envisioned that the bands would be used primarily for noncommunication purposes, such as heating. The bands are still widely used for these purposes. For many people, the most commonly encountered ISM device is the home microwave oven operating at 2.45 GHz, which uses microwaves to cook food. 
Industrial heating is another big application area, including induction heating, microwave heat treating, plastic softening, and plastic welding processes. In medical settings, shortwave and microwave diathermy machines use radio waves in the ISM bands to apply deep heating to the body for relaxation and healing. More recently, hyperthermia therapy has used microwaves to heat tissue to kill cancer cells. However, as detailed below, the increasing congestion of the radio spectrum, the increasing sophistication of microelectronics, and the attraction of unlicensed use have in recent decades led to an explosion of uses of these bands for short-range communication systems for wireless devices, which are now by far the largest uses of these bands. These are sometimes called "non-ISM" uses, since they do not fall under the originally envisioned "industrial", "scientific", and "medical" application areas. One of the largest applications has been wireless networking (WiFi). The IEEE 802.11 wireless networking protocols, the standards on which almost all wireless systems are based, use the ISM bands. Virtually all laptops, tablet computers, computer printers and cellphones now have 802.11 wireless modems using the 2.4 and 5.8 GHz ISM bands. Bluetooth is another networking technology using the 2.4 GHz band, which can be problematic given the probability of interference. Near field communication devices such as proximity cards and contactless smart cards use the lower-frequency 13 and 27 MHz ISM bands. Other short-range devices using the ISM bands are: wireless microphones, baby monitors, garage door openers, wireless doorbells, keyless entry systems for vehicles, radio control channels for UAVs (drones), wireless surveillance systems, RFID systems for merchandise, and wild animal tracking systems. Some electrodeless lamp designs are ISM devices, which use RF emissions to excite fluorescent tubes. Sulfur lamps are commercially available plasma lamps, which use a 2.45 GHz magnetron to heat sulfur into a brightly glowing plasma. Long-distance wireless power systems have been proposed and experimented with; these would use high-power transmitters and rectennas, in lieu of overhead transmission lines and underground cables, to send power to remote locations. NASA has studied using microwave power transmission on 2.45 GHz to send energy collected by solar power satellites back to the ground. Also in space applications, the Helicon Double Layer ion thruster is a prototype spacecraft propulsion engine which uses a 13.56 MHz transmission to break down and heat gas into plasma. In recent years ISM bands have also been shared with (non-ISM) license-free error-tolerant communications applications such as wireless sensor networks in the 915 MHz and 2.450 GHz bands, as well as wireless LANs and cordless phones in the 915 MHz, 2.450 GHz, and 5.800 GHz bands. Because unlicensed devices are required to be tolerant of ISM emissions in these bands, unlicensed low-power users are generally able to operate in these bands without causing problems for ISM users. ISM equipment does not necessarily include a radio receiver in the ISM band (e.g. a microwave oven does not have a receiver). In the United States, according to 47 CFR Part 15.5, low-power communication devices must accept interference from licensed users of that frequency band, and the Part 15 device must not cause interference to licensed users. 
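To make the band plan above concrete, here is a minimal Python sketch that tabulates the commonly cited ITU ISM bands and checks whether a given frequency falls inside one. The table is illustrative only and is not a substitute for the Radio Regulations or national allocation tables:

```python
# Illustrative sketch: commonly cited ITU ISM bands. Regional restrictions
# (e.g., 433 MHz via footnote 5.280, 915 MHz in Region 2 only) still apply;
# consult the ITU Radio Regulations and national tables for real designs.
ISM_BANDS_MHZ = [
    (6.765, 6.795),      # 6.78 MHz center (subject to special authorization)
    (13.553, 13.567),    # 13.56 MHz (RFID, NFC)
    (26.957, 27.283),    # 27.12 MHz
    (40.66, 40.70),      # 40.68 MHz
    (433.05, 434.79),    # 433.92 MHz (Region 1 countries per footnote 5.280)
    (902.0, 928.0),      # 915 MHz (Region 2 only)
    (2400.0, 2500.0),    # 2.45 GHz (microwave ovens, WiFi, Bluetooth)
    (5725.0, 5875.0),    # 5.8 GHz
    (24000.0, 24250.0),  # 24.125 GHz
]

def in_ism_band(freq_mhz: float) -> bool:
    """Return True if freq_mhz falls inside one of the tabulated ISM bands."""
    return any(lo <= freq_mhz <= hi for lo, hi in ISM_BANDS_MHZ)

print(in_ism_band(2450.0))  # True  (2.45 GHz)
print(in_ism_band(1900.0))  # False (cellular PCS range, not ISM)
```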
The 915 MHz band should not be used in countries outside Region 2, except those that specifically allow it, such as Australia and Israel, and especially not in countries that use the GSM-900 band for cellphones. The ISM bands are also widely used for radio-frequency identification (RFID) applications, with the most commonly used band being the 13.56 MHz band used by systems compliant with ISO/IEC 14443, including those used by biometric passports and contactless smart cards. In Europe, the use of the ISM band is covered by Short Range Device regulations issued by the European Commission, based on technical recommendations by CEPT and standards by ETSI. In most of Europe, the LPD433 band is allowed for license-free voice communication in addition to PMR446. Wireless LAN devices use the 2.4 GHz and 5.8 GHz wavebands. IEEE 802.15.4, ZigBee and other personal area networks may use the 915 MHz and 2.450 GHz ISM bands because of frequency sharing between different allocations. Wireless LANs and cordless phones can also use bands other than those shared with ISM, but such uses require approval on a country-by-country basis. DECT phones use allocated spectrum outside the ISM bands that differs in Europe and North America. Ultra-wideband LANs require more spectrum than the ISM bands can provide, so the relevant standards such as IEEE 802.15.4a are designed to make use of spectrum outside the ISM bands. Despite the fact that these additional bands are outside the official ITU-R ISM bands, because they are used for the same types of low-power personal communications, they are sometimes incorrectly referred to as ISM bands as well. Several brands of radio control equipment use the 2.4 GHz band range for low-power remote control of toys, from gas-powered cars to miniature aircraft. Worldwide Digital Cordless Telecommunications (WDCT) is a technology that uses the 2.4 GHz radio spectrum. Google's Project Loon uses ISM bands (specifically the 2.4 and 5.8 GHz bands) for balloon-to-balloon and balloon-to-ground communications. Pursuant to 47 CFR Part 97, some ISM bands are used by licensed amateur radio operators for communication, including amateur television.
https://en.wikipedia.org/wiki?curid=15286
Series (mathematics) In mathematics, a series is, roughly speaking, a description of the operation of adding infinitely many quantities, one after the other, to a given starting quantity. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics), through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance. For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical. This paradox was resolved using the concept of a limit during the 17th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could "never" reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise. In modern terminology, any (ordered) infinite sequence $(a_1, a_2, a_3, \ldots)$ of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the $a_i$ one after the other. To emphasize that there are an infinite number of terms, a series may be called an infinite series. Such a series is represented (or denoted) by an expression like $a_1 + a_2 + a_3 + \cdots$, or, using the summation sign, $\sum_{i=1}^{\infty} a_i$. The infinite sequence of additions implied by a series cannot be effectively carried on (at least in a finite amount of time). However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as $n$ tends to infinity (if the limit exists) of the finite sums of the first $n$ terms of the series, which are called the $n$th partial sums of the series. That is, $\sum_{i=1}^{\infty} a_i = \lim_{n\to\infty} \sum_{i=1}^{n} a_i.$ When this limit exists, one says that the series is convergent or summable, or that the sequence $(a_1, a_2, a_3, \ldots)$ is summable. In this case, the limit is called the sum of the series. Otherwise, the series is said to be divergent. Generally, the terms of a series come from a ring, often the field $\mathbb{R}$ of the real numbers or the field $\mathbb{C}$ of the complex numbers. In this case, the set of all series is itself a ring (and even an associative algebra), in which the addition consists of adding the series term by term, and the multiplication is the Cauchy product. An infinite series or simply a series is an infinite sum, represented by an infinite expression of the form $a_1 + a_2 + a_3 + \cdots,$ where $(a_n)$ is any ordered sequence of terms, such as numbers, functions, or anything else that can be added (an abelian group). This is an expression that is obtained from the list of terms $a_1, a_2, a_3, \ldots$ by laying them side by side, and conjoining them with the symbol "+". 
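To make the "limit of partial sums" definition above concrete, here is a minimal Python sketch (illustrative only) that computes partial sums of the geometric series 1 + 1/2 + 1/4 + ⋯ behind Zeno's paradox and watches them approach 2:

```python
# Minimal sketch: partial sums s_n of the geometric series sum(1/2**k),
# illustrating that the partial sums converge to the sum 2.
def partial_sums(terms):
    total = 0.0
    for t in terms:
        total += t
        yield total

terms = (1 / 2**k for k in range(20))
for n, s in enumerate(partial_sums(terms), start=1):
    if n in (1, 2, 5, 10, 20):
        print(f"s_{n} = {s:.10f}")  # approaches 2 as n grows
```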
A series may also be represented by using summation notation, such as $\sum_{n=1}^{\infty} a_n$. If an abelian group "A" of terms has a concept of limit (for example, if it is a metric space), then some series, the convergent series, can be interpreted as having a value in "A", called the "sum of the series". This includes the common cases from calculus in which the group is the field of real numbers or the field of complex numbers. Given a series $s = \sum_{n=1}^{\infty} a_n$, its $k$th partial sum is $s_k = \sum_{n=1}^{k} a_n = a_1 + a_2 + \cdots + a_k.$ By definition, the series $\sum_{n=1}^{\infty} a_n$ "converges" to the limit $L$ (or simply "sums" to $L$), if the sequence of its partial sums has the limit $L$. In this case, one usually writes $L = \sum_{n=1}^{\infty} a_n.$ A series is said to be "convergent" if it converges to some limit or "divergent" when it does not. The value of this limit, if it exists, is then the value of the series. A series is said to converge or to "be convergent" when the sequence $(s_k)$ of partial sums has a finite limit. If the limit of $s_k$ is infinite or does not exist, the series is said to diverge. When the limit of partial sums exists, it is called the value (or sum) of the series, $\sum_{n=1}^{\infty} a_n = \lim_{k\to\infty} s_k.$ An easy way that an infinite series can converge is if all the $a_n$ are zero for $n$ sufficiently large. Such a series can be identified with a finite sum, so it is only infinite in a trivial sense. Working out the properties of the series that converge even if infinitely many terms are non-zero is the essence of the study of series. Consider the example $1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + \frac{1}{2^n} + \cdots.$ It is possible to "visualize" its convergence on the real number line: we can imagine a line of length 2, with successive segments marked off of lengths 1, ½, ¼, etc. There is always room to mark the next segment, because the amount of line remaining is always the same as the last segment marked: when we have marked off ½, we still have a piece of length ½ unmarked, so we can certainly mark the next ¼. This argument does not prove that the sum is "equal" to 2 (although it is), but it does prove that it is "at most" 2. In other words, the series has an upper bound. Given that the series converges, proving that it is equal to 2 requires only elementary algebra. If the series is denoted $S$, it can be seen that $S/2 = \tfrac12 + \tfrac14 + \tfrac18 + \tfrac{1}{16} + \cdots = S - 1.$ Therefore, $S = 2$. The idiom can be extended to other, equivalent notions of series. For instance, a recurring decimal, as in $x = 0.111\ldots,$ encodes the series $\sum_{n=1}^{\infty} \frac{1}{10^n}.$ Since these series always converge to real numbers (because of what is called the completeness property of the real numbers), to talk about the series in this way is the same as to talk about the numbers for which they stand. In particular, the decimal expansion 0.111... can be identified with 1/9. This leads to an argument that $1 = 0.999\ldots$, which only relies on the fact that the limit laws for series preserve the arithmetic operations; this argument is presented in the article 0.999.... Hypergeometric series and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics. Partial summation takes as input a sequence, $(a_n)$, and gives as output another sequence, $(S_N)$. It is thus a unary operation on sequences. Further, this function is linear, and thus is a linear operator on the vector space of sequences, denoted Σ. The inverse operator is the finite difference operator, denoted Δ. These behave as discrete analogs of integration and differentiation, only for series (functions of a natural number) instead of functions of a real variable. For example, the sequence (1, 1, 1, ...) has series (1, 2, 3, 4, ...) 
as its partial summation, which is analogous to the fact that $\int_0^x 1\,dt = x.$ In computer science it is known as prefix sum. Series are classified not only by whether they converge or diverge, but also by the properties of the terms $a_n$ (absolute or conditional convergence); type of convergence of the series (pointwise, uniform); the class of the term $a_n$ (whether it is a real number, arithmetic progression, trigonometric function); etc. When $a_n$ is a non-negative real number for every $n$, the sequence $S_N$ of partial sums is non-decreasing. It follows that a series $\sum a_n$ with non-negative terms converges if and only if the sequence $S_N$ of partial sums is bounded. For example, the series $\sum_{n=1}^{\infty} \frac{1}{n^2}$ is convergent, because the inequality $\frac{1}{n^2} \le \frac{1}{n-1} - \frac{1}{n} \quad (n \ge 2)$ and a telescopic sum argument imply that the partial sums are bounded by 2. The exact value of the original series is $\frac{\pi^2}{6}$ (the Basel problem). A series "converges absolutely" if the series of absolute values $\sum |a_n|$ converges. This is sufficient to guarantee not only that the original series converges to a limit, but also that any reordering of it converges to the same limit. A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent. A famous example is the alternating series $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac12 + \frac13 - \frac14 + \cdots,$ which is convergent (and its sum is equal to ln 2), but the series formed by taking the absolute value of each term is the divergent harmonic series. The Riemann series theorem says that any conditionally convergent series can be reordered to make a divergent series, and moreover, if the $a_n$ are real and $S$ is any real number, that one can find a reordering so that the reordered series converges with sum equal to $S$. Abel's test is an important tool for handling semi-convergent series. If a series has the form $\sum a_n = \sum \lambda_n b_n,$ where the partial sums $B_N = b_1 + \cdots + b_N$ are bounded, $\lambda_n$ has bounded variation, and $\lim \lambda_n B_n$ exists, then the series $\sum a_n$ is convergent. This applies to the pointwise convergence of many trigonometric series, as in $\sum_{n=1}^{\infty} \frac{\sin(nx)}{n}$ with 0 < "x" < 2π. Abel's method consists in writing $b_{n+1} = B_{n+1} - B_n$, and in performing a transformation similar to integration by parts (called summation by parts), that relates the given series to the absolutely convergent series $\sum (\lambda_n - \lambda_{n+1}) B_n.$ The evaluation of truncation errors is an important procedure in numerical analysis (especially validated numerics and computer-assisted proof). When the conditions of the alternating series test are satisfied by $s := \sum_{m=0}^{\infty} (-1)^m u_m$, there is an exact error evaluation. Set $s_n$ to be the partial sum $s_n := \sum_{m=0}^{n} (-1)^m u_m$ of the given alternating series $s$. Then the next inequality holds: $|s - s_n| \le u_{n+1}.$ Taylor's theorem is a statement that includes the evaluation of the error term when the Taylor series is truncated. By using the ratio, we can obtain the evaluation of the error term when the hypergeometric series is truncated. For the matrix exponential, a corresponding error evaluation holds for the truncated series used in the scaling and squaring method. There exist many tests that can be used to determine whether particular series converge or diverge. A series of real- or complex-valued functions $\sum_{n=0}^{\infty} f_n(x)$ converges pointwise on a set "E", if the series converges for each "x" in "E" as an ordinary series of real or complex numbers. Equivalently, the partial sums $s_N(x) = \sum_{n=0}^{N} f_n(x)$ converge to "ƒ"("x") as "N" → ∞ for each "x" ∈ "E". A stronger notion of convergence of a series of functions is the uniform convergence. 
A series converges uniformly if it converges pointwise to the function "ƒ"("x"), and the error in approximating the limit by the "N"th partial sum, $|s_N(x) - f(x)|,$ can be made arbitrarily small "independently" of "x" by choosing a sufficiently large "N". Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the "ƒ""n" are integrable on a closed and bounded interval "I" and converge uniformly, then the series is also integrable on "I" and can be integrated term-by-term. Tests for uniform convergence include Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion. More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a certain set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions converges in mean on a set "E" to a limit function "ƒ" provided $\int_E \left| s_N(x) - f(x) \right|^2 \, dx \to 0$ as "N" → ∞. A power series is a series of the form $\sum_{n=0}^{\infty} a_n (x - c)^n.$ The Taylor series at a point "c" of a function is a power series that, in many cases, converges to the function in a neighborhood of "c". For example, the series $\sum_{n=0}^{\infty} \frac{x^n}{n!}$ is the Taylor series of $e^x$ at the origin and converges to it for every "x". Unless it converges only at "x"="c", such a series converges on a certain open disc of convergence centered at the point "c" in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients $a_n$. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence: to wit, it is uniformly convergent on compact sets. Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required. While many uses of power series refer to their sums, it is also possible to treat power series as "formal sums", meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition. In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras. Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, antiderivative for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. 
In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring. If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term. Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form $\sum_{n=-\infty}^{\infty} a_n x^n.$ If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence. A Dirichlet series is one of the form $\sum_{n=1}^{\infty} \frac{a_n}{n^s},$ where "s" is a complex number. For example, if all $a_n$ are equal to 1, then the Dirichlet series is the Riemann zeta function $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$ Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of "s" is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re "s" > 1, but the zeta function can be extended to a holomorphic function defined on $\mathbb{C} \setminus \{1\}$ with a simple pole at 1. This series can be directly generalized to general Dirichlet series. A series of functions in which the terms are trigonometric functions is called a trigonometric series: $\frac{A_0}{2} + \sum_{n=1}^{\infty} \left( A_n \cos nx + B_n \sin nx \right).$ The most important example of a trigonometric series is the Fourier series of a function. Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π. Mathematicians from Kerala, India studied infinite series around 1350 CE. In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. Leonhard Euler in the 18th century developed the theory of hypergeometric series and q-series. The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence. Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms "convergence" and "divergence" had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form. Abel (1826) in his memoir on the binomial series $1 + \frac{m}{1!}x + \frac{m(m-1)}{2!}x^2 + \cdots$ corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of $m$ and $x$. He showed the necessity of considering the subject of continuity in questions of convergence. 
Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject; of De Morgan (from 1842), whose logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration), Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt (1853). General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to the theory of functions, Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory. The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it successfully were Seidel and Stokes (1847–48). Cauchy took up the problem again (1853), acknowledging Abel's criticism, and reaching the same conclusions which Stokes had already found. Thomae used the doctrine (1866), but there was great delay in recognizing the importance of distinguishing between uniform and non-uniform convergence, in spite of the demands of the theory of functions. A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent. Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch ("Zeitschrift", Vol. I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function $F(x) = 1^n + 2^n + \cdots + (x - 1)^n$. Genocchi (1852) has further contributed to the theory. Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into prominence. Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. Series for the expansion of sines and cosines, of multiple arcs in powers of the sine and cosine of the arc, had been treated by Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701) and still earlier by Vieta. Euler and Lagrange simplified the subject, as did Poinsot, Schröter, Glaisher, and Kummer. Fourier (1807) set for himself a different problem, to expand a given function of "x" in terms of the sines or cosines of multiples of "x", a problem which he embodied in his "Théorie analytique de la chaleur" (1822). Euler had already given the formulas for determining the coefficients in the series; Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820–23) also attacked the problem from a different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly scientific manner (see convergence of Fourier series). Dirichlet's treatment ("Crelle", 1829) of trigonometric series was the subject of criticism and improvement by Riemann (1854), Heine, Lipschitz, Schläfli, and du Bois-Reymond. 
Among other prominent contributors to the theory of trigonometric and Fourier series were Dini, Hermite, Halphen, Krause, Byerly and Appell. Asymptotic series, otherwise asymptotic expansions, are infinite series whose partial sums become good approximations in the limit of some point of the domain. In general they do not converge, but they are useful as sequences of approximations, each of which provides a value close to the desired answer for a finite number of terms. The difference is that an asymptotic series cannot be made to produce an answer as exact as desired, the way that convergent series can. In fact, after a certain number of terms, a typical asymptotic series reaches its best approximation; if more terms are included, most such series will produce worse answers. Under many circumstances, it is desirable to assign a limit to a series which fails to converge in the usual sense. A summability method is such an assignment of a limit to a subset of the set of divergent series which properly extends the classical notion of convergence. Summability methods include Cesàro summation, ("C","k") summation, Abel summation, and Borel summation, in increasing order of generality (and hence applicable to increasingly divergent series). A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes "matrix summability methods", which are methods for summing a divergent series by applying an infinite matrix to the vector of coefficients. The most general method for summing a divergent series is non-constructive, and concerns Banach limits. Definitions may be given for sums over an arbitrary index set $I$. There are two main differences with the usual notion of series: first, there is no specific order given on the set $I$; second, this set may be uncountable. The notion of convergence needs to be strengthened, because the concept of conditional convergence depends on the ordering of the index set. If $a : I \to G$ is a function from an index set $I$ to a set $G$, then the "series" associated to $a$ is the formal sum of the elements $a(x)$ over the index elements $x \in I$, denoted by $\sum_{x \in I} a(x).$ When the index set is the natural numbers $I = \mathbb{N}$, the function $a : \mathbb{N} \to G$ is a sequence denoted by $a(n) = a_n$. A series indexed on the natural numbers is an ordered formal sum and so we rewrite $\sum_{n \in \mathbb{N}} a_n$ as $\sum_{n=0}^{\infty} a_n$ in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers, $\sum_{n=0}^{\infty} a_n = a_0 + a_1 + a_2 + \cdots.$ When summing a family $\{a_i\}$, $i \in I$, of non-negative numbers, one may define $\sum_{i \in I} a_i = \sup \Big\{ \sum_{i \in A} a_i : A \subseteq I,\ A \text{ finite} \Big\} \in [0, +\infty].$ When the supremum is finite, the set of $i \in I$ such that $a_i > 0$ is countable. Indeed, for every $n \ge 1$, the set $A_n = \{ i \in I : a_i > 1/n \}$ is finite, because $\frac{1}{n}\,\mathrm{card}(A_n) \le \sum_{i \in A_n} a_i \le \sum_{i \in I} a_i < \infty.$ If $I$ is countably infinite and enumerated as $I = \{i_0, i_1, \ldots\}$ then the above defined sum satisfies $\sum_{i \in I} a_i = \sum_{k=0}^{+\infty} a_{i_k},$ provided the value $\infty$ is allowed for the sum of the series. Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions. Let "a" : "I" → "X", where "I" is any set and "X" is an abelian Hausdorff topological group. Let "F" be the collection of all finite subsets of "I", with "F" viewed as a directed set, ordered under inclusion with union as join. Define the sum "S" of the family "a" as the limit $S = \lim \Big\{ \sum_{i \in A} a_i : A \in F \Big\}$ if it exists and say that the family "a" is unconditionally summable. 
Saying that the sum "S" is the limit of finite partial sums means that for every neighborhood "V" of 0 in "X", there is a finite subset "A"0 of "I" such that $S - \sum_{i \in A} a_i \in V$ for every finite set $A \supseteq A_0$. Because "F" is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net. For every "W", neighborhood of 0 in "X", there is a smaller neighborhood "V" such that "V" − "V" ⊂ "W". It follows that the finite partial sums of an unconditionally summable family "ai", "i" ∈ "I", form a "Cauchy net", that is, for every "W", neighborhood of 0 in "X", there is a finite subset "A"0 of "I" such that $\sum_{i \in A_1} a_i - \sum_{i \in A_2} a_i \in W$ for all finite sets $A_1, A_2 \supseteq A_0$. When "X" is complete, a family "a" is unconditionally summable in "X" if and only if the finite sums satisfy the latter Cauchy net condition. When "X" is complete and "ai", "i" ∈ "I", is unconditionally summable in "X", then for every subset "J" ⊂ "I", the corresponding subfamily "aj", "j" ∈ "J", is also unconditionally summable in "X". When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group "X" = R. If a family "a" in "X" is unconditionally summable, then for every "W", neighborhood of 0 in "X", there is a finite subset "A"0 of "I" such that "a""i" ∈ "W" for every "i" not in "A"0. If "X" is first-countable, it follows that the set of "i" ∈ "I" such that "ai" ≠ 0 is countable. This need not be true in a general abelian topological group (see examples below). Suppose that "I" = N. If a family "a""n", "n" ∈ N, is unconditionally summable in an abelian Hausdorff topological group "X", then the series in the usual sense converges and has the same sum, $\sum_{n=0}^{\infty} a_n = \sum_{n \in \mathbb{N}} a_n.$ By nature, the definition of unconditional summability is insensitive to the order of the summation. When ∑"a""n" is unconditionally summable, then the series remains convergent after any permutation "σ" of the set N of indices, with the same sum, $\sum_{n=0}^{\infty} a_{\sigma(n)} = \sum_{n=0}^{\infty} a_n.$ Conversely, if every permutation of a series ∑"a""n" converges, then the series is unconditionally convergent. When "X" is complete, then unconditional convergence is also equivalent to the fact that all subseries are convergent; if "X" is a Banach space, this is equivalent to say that for every sequence of signs "ε""n" = ±1, the series $\sum_{n=0}^{\infty} \varepsilon_n a_n$ converges in "X". If "X" is a topological vector space (TVS) and $(x_\alpha)_{\alpha \in A}$ is a (possibly uncountable) family in "X" then this family is summable if the limit $\lim_{H \in \mathcal{F}(A)} x_H$ of the net $(x_H)_{H \in \mathcal{F}(A)}$ converges in "X", where $\mathcal{F}(A)$ is the directed set of all finite subsets of "A" directed by inclusion $\subseteq$ and $x_H := \sum_{\alpha \in H} x_\alpha$. It is called absolutely summable if in addition, for every continuous seminorm "p" on "X", the family $(p(x_\alpha))_{\alpha \in A}$ is summable. If "X" is a normable space and if $(x_\alpha)_{\alpha \in A}$ is an absolutely summable family in "X", then necessarily all but a countable collection of the $x_\alpha$ are 0. Hence, in normed spaces, it is usually only ever necessary to consider series with countably many terms. Summable families play an important role in the theory of nuclear spaces. The notion of series can be easily extended to the case of a seminormed space. If "x""n" is a sequence of elements of a normed space "X" and if "x" is in "X", then the series Σ"x""n" converges to "x" in "X" if the sequence of partial sums $\bigl(\sum_{n=1}^{N} x_n\bigr)$ converges to "x" in "X"; to wit, $\Bigl\| x - \sum_{n=1}^{N} x_n \Bigr\| \to 0$ as "N" → ∞. More generally, convergence of series can be defined in any abelian Hausdorff topological group. 
Specifically, in this case, Σ"x""n" converges to "x" if the sequence of partial sums converges to "x". If ("X", |·|) is a semi-normed space, then the notion of absolute convergence becomes: a series $\sum_n x_n$ of vectors in "X" converges absolutely if $\sum_n |x_n| < +\infty,$ in which case all but at most countably many of the values $|x_n|$ are necessarily zero. If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse only holds in finite-dimensional Banach spaces (a theorem of Dvoretzky and Rogers (1950)). Conditionally convergent series can be considered if "I" is a well-ordered set, for example, an ordinal number "α"0. One may define by transfinite recursion $\sum_{\beta < \alpha + 1} a_\beta = a_\alpha + \sum_{\beta < \alpha} a_\beta$ and, for a limit ordinal "α", $\sum_{\beta < \alpha} a_\beta = \lim_{\gamma \to \alpha} \sum_{\beta < \gamma} a_\beta$ if this limit exists. If all limits exist up to "α"0, then the series converges.
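The summability methods mentioned above can be made concrete. The following minimal Python sketch (illustrative, not from any particular source) computes Cesàro means, the averages of the partial sums, for Grandi's divergent series 1 − 1 + 1 − 1 + ⋯, to which Cesàro summation assigns the value 1/2:

```python
# Minimal sketch of Cesàro summation: the Cesàro means (averages of the
# partial sums) of Grandi's divergent series 1 - 1 + 1 - 1 + ... tend to 1/2.
def cesaro_means(terms):
    partial, running = 0.0, 0.0
    for n, t in enumerate(terms, start=1):
        partial += t        # n-th partial sum s_n
        running += partial  # s_1 + ... + s_n
        yield running / n   # Cesàro mean (s_1 + ... + s_n) / n

grandi = ((-1) ** k for k in range(10_000))
*_, last = cesaro_means(grandi)
print(last)  # 0.5
```

The ordinary partial sums oscillate between 1 and 0 and never converge; averaging them smooths the oscillation, which is exactly why Cesàro summation extends the classical notion of convergence.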
https://en.wikipedia.org/wiki?curid=15287
Interrupt In digital computers, an interrupt is a response by the processor to an event that needs attention from the software. An interrupt condition alerts the processor and serves as a request for the processor to interrupt the currently executing code when permitted, so that the event can be processed in a timely manner. If the request is accepted, the processor responds by suspending its current activities, saving its state, and executing a function called an "interrupt handler" (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, unless the interrupt indicates a fatal error, the processor resumes normal activities after the interrupt handler finishes. Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require attention. Interrupts are also commonly used to implement computer multitasking, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven. Interrupt signals may be issued in response to hardware or software events. These are classified as hardware interrupts or software interrupts, respectively. For any particular processor, the number of interrupt types is limited by the architecture. A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices embedded in processor logic (e.g., the CPU timer in IBM System/370), to communicate that the device needs attention from the operating system (OS) or, if there is no OS, from the "bare-metal" program running on the CPU. Such external devices may be part of the computer (e.g., disk controller) or they may be external peripherals. For example, pressing a keyboard key or moving a mouse plugged into a PS/2 port triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any time during instruction execution. Consequently, all hardware interrupt signals are conditioned by synchronizing them to the processor clock, and acted upon only at instruction execution boundaries. In many systems, each device is associated with a particular IRQ signal. This makes it possible to quickly determine which hardware device is requesting service, and to expedite servicing of that device. On some older systems all interrupts went to the same location and the OS used a specialized instruction to determine the highest priority unmasked interrupt outstanding. On contemporary systems there is generally a distinct interrupt routine for each type of interrupt or for each interrupt source, often implemented as one or more interrupt vector tables. Processors typically have an internal "interrupt mask" register which allows selective enabling and disabling of hardware interrupts. Each interrupt signal is associated with a bit in the mask register; on some systems, the interrupt is enabled when the bit is set and disabled when the bit is clear, while on others, a set bit disables the interrupt. When the interrupt is disabled, the associated interrupt signal will be ignored by the processor. Signals which are affected by the mask are called "maskable interrupts". Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are called "non-maskable interrupts" (NMI). 
NMIs indicate high priority events which cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer. To "mask" an interrupt is to disable it, while to "unmask" an interrupt is to enable it. A "spurious interrupt" is an invalid, short-duration signal on an interrupt input. These are usually caused by glitches resulting from electrical interference, race conditions, or malfunctioning devices. A software interrupt is requested by the processor itself upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler. A software interrupt may be intentionally caused by executing a special instruction which, by design, invokes an interrupt when executed. Such instructions function similarly to subroutine calls and are used for a variety of purposes, such as requesting operating system services and interacting with device drivers (e.g., to read or write storage media). Software interrupts may also be unexpectedly triggered by program execution errors. These interrupts typically are called "traps" or "exceptions". For example, a divide-by-zero exception will be "thrown" (a software interrupt is requested) if the processor executes a divide instruction with divisor equal to zero. Typically, the operating system will catch and handle this exception. Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched; the processor resets the latch when the interrupt handler executes. A "level-triggered interrupt" is requested by holding the interrupt signal at its particular (high or low) active logic level. A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. It negates the signal when the processor commands it to do so, typically after the device has been serviced. The processor samples the interrupt input signal during each instruction cycle. The processor will recognize the interrupt request if the signal is asserted when sampling occurs. Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR. An "edge-triggered interrupt" is an interrupt signaled by a level transition on the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state. If the pulse is too short to be detected by polled I/O then special hardware may be required to detect it. The processor samples the interrupt trigger signal during each instruction cycle, and will respond to the trigger only if the signal is asserted when sampling occurs. Regardless of the triggering method, the processor will begin interrupt processing at the next instruction boundary following a detected trigger. Interrupts may be implemented in hardware as a distinct component with control lines, or they may be integrated into the memory subsystem. 
If implemented in hardware as a distinct component, an interrupt controller circuit such as the IBM PC's Programmable Interrupt Controller (PIC) may be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space. Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to its inactive state, which is its default state. Devices signal an interrupt by briefly driving the line to its non-default state, and let the line float (do not actively drive it) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. (This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time; a toy simulation of this hazard follows this paragraph. To avoid losing interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt the CPU must check all the devices for service requirements. Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, while interrupts from high-priority devices continue to be received and get serviced. If there is a device that the CPU does not know how to service, which may raise spurious interrupts, it won't interfere with interrupt signaling of other devices. However, it is easy for an edge-triggered interrupt to be missed, for example when interrupts are masked for a period, and unless there is some type of hardware latch that records the event it is impossible to recover. This problem caused many "lockups" in early computer hardware because the processor did not know it was expected to do something. More modern hardware often has one or more interrupt status registers that latch interrupt requests; well-written edge-driven interrupt handling code can check these registers to ensure no events are missed. The older Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices sharing IRQ lines should work fine. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them. There are three ways in which multiple devices can share the same line. First is by exclusive conduction (switching) or exclusive connection (to pins). Next is by bus (all connected to the same line, listening): cards on a bus must know when they are to talk and not talk (i.e., the ISA bus). Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators only trigger when the remote side excites the gate beyond a threshold, thus no negotiated speed is required. Each has its speed versus distance advantages. 
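The pulse-merging hazard referenced above can be shown with a toy simulation (a sketch only, not a model of real hardware): two overlapping pulses on a shared wired-OR line produce a single trailing edge, which is why the CPU must poll every device after each detected edge:

```python
# Toy simulation: devices sharing one edge-triggered line. Pulses that overlap
# in time merge into a single pulse, so after detecting the trailing edge the
# CPU must poll every device to find all pending service requests.
def line_level(pulses, t):
    """Wired-OR of per-device pulses: high if any device drives the line."""
    return any(start <= t < end for start, end in pulses)

pulses = [(2, 4), (3, 5), (9, 10)]  # (start, end) of each device's pulse
samples = [line_level(pulses, t) for t in range(12)]

trailing_edges = [t for t in range(1, 12)
                  if samples[t - 1] and not samples[t]]
print(samples)         # the first two requests merge into one pulse (t=2..5)
print(trailing_edges)  # [5, 10]: only two edges for three device requests
```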
The older Industry Standard Architecture (ISA) bus uses edge-triggered interrupts without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices sharing IRQ lines generally work fine. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them.

There are three ways multiple devices "sharing the same line" can be raised. The first is by exclusive conduction (switching) or exclusive connection (to pins). The next is by bus (all connected to the same line, listening): cards on a bus must know when they are to talk and not talk (i.e., the ISA bus). Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators only trigger when the remote side excites the gate beyond a threshold, so no negotiated speed is required. Each approach has its speed-versus-distance advantages.

A trigger, generally, is the method by which excitation is detected: rising edge, falling edge, or threshold (an oscilloscope can trigger on a wide variety of shapes and conditions). Triggering for software interrupts must be built into the software (both in the OS and the application). A C application may have a trigger table (a table of functions) in its header, which both the application and the OS know of and use appropriately; this table is not related to hardware. This should not be confused with hardware interrupts, which signal the CPU (the CPU enacts software from a table of functions, similarly to software interrupts).

Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line, the workload in servicing interrupts grows in proportion to the square of the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signaled interrupts, where the interrupt line is virtual, are favored in new system architectures (such as PCI Express) and relieve this problem to a considerable extent.

Some devices with a poorly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line. ISA cards, due to often cheap design and construction, are notorious for this problem. Such devices are becoming much rarer, as hardware logic becomes cheaper and new system architectures mandate shareable interrupts.

Some systems use a hybrid of level-triggered and edge-triggered signaling. The hardware not only looks for an edge but also verifies that the interrupt signal stays active for a certain period of time. A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input. Because NMIs generally signal major, or even catastrophic, system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This two-step approach helps to eliminate false interrupts from affecting the system.
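A minimal sketch of this two-step qualification, in which an already-detected edge is accepted only if the line stays asserted for a hold time; read_nmi_line() and HOLD_SAMPLES are hypothetical names standing in for a hardware sampling facility:

```c
#include <stdbool.h>

#define HOLD_SAMPLES 8   /* assumed hold time, in sample periods */

static bool read_nmi_line(void) {
    /* Stub: a real implementation would sample the NMI input pin. */
    return true;
}

/* Called after an edge has been detected on the NMI input. */
bool qualify_nmi(void) {
    for (int i = 0; i < HOLD_SAMPLES; i++) {
        if (!read_nmi_line())
            return false;   /* dropped out early: reject as a glitch */
    }
    return true;            /* remained active: treat as a genuine NMI */
}
```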
A "message-signaled interrupt" does not use a physical interrupt line. Instead, a device signals its request for service by sending a short message over some communications medium, typically a computer bus. The message might be of a type reserved for interrupts, or it might be of some pre-existing type such as a memory write. Message-signaled interrupts behave very much like edge-triggered interrupts, in that the interrupt is a momentary signal rather than a continuous condition. Interrupt-handling software treats the two in much the same manner. Typically, multiple pending message-signaled interrupts with the same message (the same virtual interrupt line) are allowed to merge, just as closely spaced edge-triggered interrupts can merge. Message-signaled interrupt vectors can be shared, to the extent that the underlying communication medium can be shared; no additional effort is required. Because the identity of the interrupt is indicated by a pattern of data bits, not requiring a separate physical conductor, many more distinct interrupts can be efficiently handled. This reduces the need for sharing. Interrupt messages can also be passed over a serial bus, not requiring any additional lines. PCI Express, a serial computer bus, uses message-signaled interrupts exclusively.

In a push-button analogy applied to computer systems, the term "doorbell" or "doorbell interrupt" is often used to describe a mechanism whereby a software system can signal or notify a computer hardware device that there is some work to be done. Typically, the software system will place data in some well-known and mutually agreed-upon memory location(s), and "ring the doorbell" by writing to a different memory location. This different memory location is often called the doorbell region, and there may even be multiple doorbells serving different purposes in this region. It is this act of writing to the doorbell region of memory that "rings the bell" and notifies the hardware device that the data are ready and waiting. The hardware device would now know that the data are valid and can be acted upon. It would typically write the data to a hard disk drive, or send them over a network, or encrypt them, etc.

The term "doorbell interrupt" is usually a misnomer. It is similar to an interrupt because it causes some work to be done by the device; however, the doorbell region is sometimes implemented as a polled region, sometimes the doorbell region writes through to physical device registers, and sometimes the doorbell region is hardwired directly to physical device registers. When either writing through or directly to physical device registers, this may cause a real interrupt to occur at the device's central processing unit (CPU), if it has one. Doorbell interrupts can be compared to message-signaled interrupts, as they have some similarities.
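A minimal sketch of ringing such a doorbell, assuming memory-mapped device registers; both addresses and all names are hypothetical placeholders, not any real device's layout:

```c
#include <stdint.h>
#include <string.h>

#define DATA_REGION ((volatile uint8_t *)0x50000000u)   /* agreed data buffer   */
#define DOORBELL    (*(volatile uint32_t *)0x50001000u) /* doorbell register    */

void submit_work(const uint8_t *buf, uint32_t len) {
    memcpy((void *)DATA_REGION, buf, len);   /* 1. place the data */
    /* A real driver would issue a memory barrier here, so the device is
     * guaranteed to see the data before the doorbell write. */
    DOORBELL = len;                          /* 2. "ring the bell" */
}
```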
In multiprocessor systems, a processor may send an interrupt request to another processor via inter-processor interrupts (IPI).

Interrupts provide low overhead and good latency at low load, but degrade significantly at high interrupt rates unless care is taken to prevent several pathologies. The phenomenon where the overall system performance is severely hindered by excessive amounts of processing time spent handling interrupts is called an interrupt storm. There are various forms of livelock, in which the system spends all of its time processing interrupts to the exclusion of other required tasks. Under extreme conditions, a large number of interrupts (like very high network traffic) may completely stall the system. To avoid such problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution.

With multi-core processors, additional performance improvements in interrupt handling can be achieved through receive-side scaling (RSS) when multiqueue NICs are used. Such NICs provide multiple receive queues associated with separate interrupts; by routing each of those interrupts to different cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among multiple cores. Distribution of the interrupts among cores can be performed automatically by the operating system, or the routing of interrupts (usually referred to as "IRQ affinity") can be manually configured. A purely software-based implementation of the receiving traffic distribution, known as "receive packet steering" (RPS), distributes received traffic among cores later in the data path, as part of the interrupt handler functionality. Advantages of RPS over RSS include no requirement for specific hardware, more advanced traffic distribution filters, and a reduced rate of interrupts produced by a NIC. As a downside, RPS increases the rate of inter-processor interrupts (IPIs). "Receive flow steering" (RFS) takes the software-based approach further by accounting for application locality; further performance improvements are achieved by processing interrupt requests on the same cores on which particular network packets will be consumed by the targeted application.

Interrupts are commonly used to service hardware timers, transfer data to and from storage (e.g., disk I/O) and communication interfaces (e.g., UART, Ethernet), handle keyboard and mouse events, and respond to any other time-sensitive events as required by the application system. Non-maskable interrupts are typically used to respond to high-priority requests such as watchdog timer timeouts, power-down signals and traps.

Hardware timers are often used to generate periodic interrupts. In some applications, such interrupts are counted by the interrupt handler to keep track of absolute or elapsed time, or used by the OS task scheduler to manage execution of running processes, or both (a short sketch appears below). Periodic interrupts are also commonly used to invoke sampling from input devices such as analog-to-digital converters, incremental encoder interfaces, and GPIO inputs, and to program output devices such as digital-to-analog converters, motor controllers, and GPIO outputs.

A disk interrupt signals the completion of a data transfer from or to the disk peripheral; this may cause a process to run which is waiting to read or write. A power-off interrupt predicts imminent loss of power, allowing the computer to perform an orderly shutdown while there still remains enough power to do so. Keyboard interrupts typically cause keystrokes to be buffered so as to implement typeahead.

Interrupts are sometimes used to emulate instructions which are unimplemented on some computers in a product family. For example, floating-point instructions may be implemented in hardware on some systems and emulated on lower-cost systems. In the latter case, execution of an unimplemented floating-point instruction will cause an "illegal instruction" exception interrupt. The interrupt handler will implement the floating-point function in software and then return to the interrupted program as if the hardware-implemented instruction had been executed. This provides application software portability across the entire line.

Interrupts are similar to signals, the difference being that signals are used for inter-process communication (IPC), mediated by the kernel (possibly via system calls) and handled by processes, while interrupts are mediated by the processor and handled by the kernel. The kernel may pass an interrupt as a signal to the process that caused it (typical examples are SIGSEGV, SIGBUS, SIGILL and SIGFPE).

Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions. The UNIVAC 1103 computer is generally credited with the earliest use of interrupts, in 1953. Earlier, on the UNIVAC I (1951), "Arithmetic overflow either triggered the execution of a two-instruction fix-up routine at address 0, or, at the programmer's option, caused the computer to stop."
The IBM 650 (1954) incorporated the first occurrence of interrupt masking. The National Bureau of Standards DYSEAC (1954) was the first to use interrupts for I/O. The IBM 704 was the first to use interrupts for debugging, with a "transfer trap", which could invoke a special routine when a branch instruction was encountered. The MIT Lincoln Laboratory TX-2 system (1957) was the first to provide multiple levels of priority interrupts.
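A minimal sketch of the periodic timer tick described earlier, in which the handler simply counts interrupts to track elapsed time; the wiring of timer_isr() to a 1 kHz hardware timer is assumed, and all names are illustrative:

```c
#include <stdint.h>

#define TICK_HZ 1000u                 /* assumed timer frequency: 1 kHz */

static volatile uint32_t ticks;       /* written only in interrupt context */

void timer_isr(void) {                /* invoked on each timer interrupt */
    ticks++;                          /* one tick = 1/TICK_HZ seconds */
}

uint32_t uptime_ms(void) {
    return ticks * (1000u / TICK_HZ); /* elapsed time derived from the count */
}
```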
https://en.wikipedia.org/wiki?curid=15289
Intercalation (timekeeping) Intercalation or embolism in timekeeping is the insertion of a leap day, week, or month into some calendar years to make the calendar follow the seasons or moon phases. Lunisolar calendars may require intercalations of both days and months. The solar or tropical year does not have a whole number of days (it is about 365.24 days), but a calendar year must have a whole number of days. The most common way to reconcile the two is to vary the number of days in the calendar year. In solar calendars, this is done by adding an extra day ("leap day" or "intercalary day") to a common year of 365 days about every four years, causing a leap year to have 366 days (Julian, Gregorian and Indian national calendars). The Decree of Canopus, which was issued by the pharaoh Ptolemy III Euergetes of Ancient Egypt in 239 BCE, decreed a solar leap day system; an Egyptian leap year was not adopted until 25 BC, when the Roman Emperor Augustus successfully instituted a reformed Alexandrian calendar. In the Julian calendar, as well as in the Gregorian calendar, which improved upon it, intercalation is done by adding an extra day to February in each leap year. In the Julian calendar this was done every four years. In the Gregorian, years divisible by 100 but not by 400 were exempted in order to improve accuracy. Thus, 2000 was a leap year; 1700, 1800, and 1900 were not (see the sketch below).

Epagomenal days are days within a solar calendar that are outside any regular month. Usually five epagomenal days are included within every year (Egyptian, Coptic, Ethiopian, Mayan Haab' and French Republican calendars), but a sixth epagomenal day is intercalated every four years in some (Coptic, Ethiopian and French Republican calendars). The Bahá'í calendar includes enough epagomenal days (usually 4 or 5) before the last month ("ʿalāʾ") to ensure that the following year starts on the March equinox. These are known as the Ayyám-i-Há.

The solar year does not have a whole number of lunar months (it is about 12.37 lunations), so a lunisolar calendar must have a variable number of months in a year. Regular years have 12 months, but embolismic years insert a 13th "intercalary", "leap", or "embolismic" month every second or third year (see blue moon). Whether to insert an intercalary month in a given year may be determined using regular cycles such as the 19-year Metonic cycle (Hebrew calendar and the determination of Easter) or using calculations of lunar phases (Hindu lunisolar and Chinese calendars). The Buddhist calendar adds both an intercalary day and month on a usually regular cycle. The tabular Islamic calendar usually has 12 lunar months that alternate between 30 and 29 days every year, but an intercalary day is added to the last month of the year 11 times within a 30-year cycle. Some historians also link the pre-Islamic practice of Nasi' to intercalation. The Solar Hijri calendar is based on solar calculations and is similar to the Gregorian calendar in its structure, and hence in its intercalation, with the exception that the year count starts with the Hegira. The International Earth Rotation and Reference Systems Service can insert or remove leap seconds from the last day of any month (June and December are preferred); these are sometimes described as intercalary. ISO 8601 includes a specification for a 52/53-week year: any year that has 53 Thursdays has 53 weeks, and this extra week may be regarded as intercalary.
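The Gregorian rule described above is simple enough to state in code; a minimal sketch:

```c
#include <stdbool.h>
#include <stdio.h>

/* Gregorian intercalation: every fourth year is a leap year,
 * except century years not divisible by 400. */
bool is_gregorian_leap_year(int year) {
    return (year % 4 == 0) && (year % 100 != 0 || year % 400 == 0);
}

int main(void) {
    printf("%d\n", is_gregorian_leap_year(2000)); /* 1: divisible by 400      */
    printf("%d\n", is_gregorian_leap_year(1900)); /* 0: century, not by 400   */
    printf("%d\n", is_gregorian_leap_year(2024)); /* 1: ordinary leap year    */
    return 0;
}
```

The two divisibility tests mirror the exemptions above: century years are skipped unless they are divisible by 400.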
The "xiuhpōhualli" (year count) system of the Aztec calendar had five intercalary days after the eighteenth and final month, the "nēmontēmi", in which the people reflect on the past year and do fasting.
https://en.wikipedia.org/wiki?curid=15290
Ink Ink is a liquid or paste that contains pigments or dyes and is used to color a surface to produce an image, text, or design. Ink is used for drawing or writing with a pen, brush, or quill. Thicker inks, in paste form, are used extensively in letterpress and lithographic printing. Ink can be a complex medium, composed of solvents, pigments, dyes, resins, lubricants, solubilizers, surfactants, particulate matter, fluorescents, and other materials. The components of inks serve many purposes; the ink's carrier, colorants, and other additives affect the flow and thickness of the ink and its dry appearance. In 2011 worldwide consumption of printing inks generated revenues of more than 20 billion US dollars. Demand from traditional print media is shrinking; on the other hand, more and more printing ink is consumed for packaging.

Many ancient cultures around the world have independently discovered and formulated inks for the purposes of writing and drawing. The knowledge of the inks, their recipes and the techniques for their production comes from archaeological analysis or from the written text itself. The earliest inks from all civilisations are believed to have been made with "lampblack", a kind of soot, as this would have been easily collected as a by-product of fire. Ink was used in Ancient Egypt for writing and drawing on papyrus from at least the 26th century BC. Chinese inks may go back as far as three or maybe four millennia, to the Chinese Neolithic Period. These used plant, animal, and mineral inks based on such materials as graphite that were ground with water and applied with ink brushes. Direct evidence for the earliest Chinese inks, similar to modern inksticks, is around 256 BC at the end of the Warring States period; they were produced from soot and animal glue. The best inks for drawing or painting on paper or silk are produced from the resin of the pine tree; the trees must be between 50 and 100 years old. The Chinese inkstick is produced with a fish glue, whereas Japanese glue (膠 "nikawa") is from cow or stag. India ink was first invented in China, though materials were often traded from India, hence the name. The traditional Chinese method of making the ink was to grind a mixture of hide glue, carbon black, lampblack, and bone black pigment with a pestle and mortar, then pour it into a ceramic dish to dry. To use the dry mixture, a wet brush would be applied until it reliquified. The manufacture of India ink was well-established by the Cao Wei Dynasty (220–265 AD). Indian documents written in Kharosthi with ink have been unearthed in Chinese Turkestan. The practice of writing with ink and a sharp pointed needle was common in early South India. Several Buddhist and Jain sutras in India were compiled in ink.

In ancient Rome, atramentum was used; in an article for the "Christian Science Monitor", Sharon J. Huntington describes these other historical inks: About 1,600 years ago, a popular ink recipe was created. The recipe was used for centuries. Iron salts, such as ferrous sulfate (made by treating iron with sulfuric acid), were mixed with tannin from gallnuts (they grow on trees) and a thickener. When first put to paper, this ink is bluish-black. Over time it fades to a dull brown. Scribes in medieval Europe (about AD 800 to 1500) wrote principally on parchment or vellum. One 12th-century ink recipe called for hawthorn branches to be cut in the spring and left to dry. Then the bark was pounded from the branches and soaked in water for eight days.
The water was boiled until it thickened and turned black. Wine was added during boiling. The ink was poured into special bags and hung in the sun. Once dried, the mixture was mixed with wine and iron salt over a fire to make the final ink.

The reservoir pen, which may have been the first fountain pen, dates back to 953, when Ma'ād al-Mu'izz, the caliph of Egypt, demanded a pen that would not stain his hands or clothes, and was provided with a pen that held ink in a reservoir. In the 15th century, a new type of ink had to be developed in Europe for the printing press by Johannes Gutenberg. According to Martyn Lyons in his book "Books: A Living History", Gutenberg's dye was indelible, oil-based, and made from the soot of lamps (lampblack) mixed with varnish and egg white. Two types of ink were prevalent at the time: the Greek and Roman writing ink (soot, glue, and water) and the 12th-century variety composed of ferrous sulfate, gall, gum, and water. Neither of these handwriting inks could adhere to printing surfaces without creating blurs. Eventually an oily, varnish-like ink made of soot, turpentine, and walnut oil was created specifically for the printing press.

Ink formulas vary, but commonly involve two components: a colorant and a vehicle (binder). Inks generally fall into four classes: aqueous, liquid, paste, and powder. Pigment inks are used more frequently than dyes because they are more color-fast, but they are also more expensive, less consistent in color, and have less of a color range than dyes. Pigments are solid, opaque particles suspended in ink to provide color. Pigment molecules typically link together in crystalline structures that are 0.1–2 µm in size and comprise 5–30 percent of the ink volume. Qualities such as hue, saturation, and lightness vary depending on the source and type of pigment. Dye-based inks are generally much stronger than pigment-based inks and can produce much more color of a given density per unit of mass. However, because dyes are dissolved in the liquid phase, they have a tendency to soak into paper, potentially allowing the ink to bleed at the edges of an image. To circumvent this problem, dye-based inks are made with solvents that dry rapidly or are used with quick-drying methods of printing, such as blowing hot air on the fresh print. Other methods include harder paper sizing and more specialized paper coatings. The latter is particularly suited to inks used in non-industrial settings (which must conform to tighter toxicity and emission controls), such as inkjet printer inks. Another technique involves coating the paper with a charged coating. If the dye has the opposite charge, it is attracted to and retained by this coating, while the solvent soaks into the paper. Cellulose, the wood-derived material most paper is made of, is naturally charged, and so a compound that complexes with both the dye and the paper's surface aids retention at the surface. Such a compound is commonly used in ink-jet printing inks. An additional advantage of dye-based ink systems is that the dye molecules can interact with other ink ingredients, potentially allowing greater benefit as compared to pigmented inks from optical brighteners and color-enhancing agents designed to increase the intensity and appearance of dyes. Dye-based inks can be used for anti-counterfeiting purposes and can be found in some gel inks, fountain pen inks, and inks used for paper currency. These inks react with cellulose to bring about a permanent color change. Such inks are not affected by water, alcohol, and other solvents.
As such, their use is recommended to prevent frauds that involve removing signatures, such as check washing.

There is a misconception that ink is non-toxic even if swallowed; once ingested, ink can be hazardous to one's health. Certain inks, such as those used in digital printers, and even those found in a common pen, can be harmful. Though ink does not easily cause death, repeated skin contact or ingestion can cause effects such as severe headaches, skin irritation, or nervous system damage. These effects can be caused by solvents, or by pigment ingredients such as "p"-Anisidine, which helps create some inks' color and shine. The three main environmental issues with ink are heavy metals, non-renewable oils, and volatile organic compounds. Some regulatory bodies have set standards for the amount of heavy metals in ink. In recent years there has been a trend toward vegetable oils rather than petroleum oils, in response to a demand for better environmental sustainability. Ink uses up non-renewable oils and metals, which has a negative impact on the environment.

Carbon inks were commonly made from lampblack or soot and a binding agent such as gum arabic or animal glue. The binding agent keeps carbon particles in suspension and adhered to paper. Carbon particles do not fade over time even when bleached or when in sunlight. One benefit is that carbon ink does not harm paper; over time, the ink is chemically stable and therefore does not threaten the paper's strength. Despite these benefits, carbon ink is not ideal for permanence and ease of preservation. Carbon ink tends to smudge in humid environments and can be washed off surfaces. The best method of preserving a document written in carbon ink is to store it in a dry environment (Barrow 1972). Recently, carbon inks made from carbon nanotubes have been successfully created. They are similar in composition to traditional inks in that they use a polymer to suspend the carbon nanotubes. These inks can be used in inkjet printers and produce electrically conductive patterns.

Iron gall inks became prominent in the early 12th century; they were used for centuries and were widely thought to be the best type of ink. However, iron gall ink is corrosive and damages paper over time (Waters 1940). Items containing this ink can become brittle and the writing fades to brown. The original scores of Johann Sebastian Bach are threatened by the destructive properties of iron gall ink. The majority of his works are held by the German State Library, and about 25% of those are in advanced stages of decay (American Libraries 2000). The rate at which the writing fades is based on several factors, such as the proportions of ink ingredients, the amount deposited on the paper, and the paper composition (Barrow 1972:16). Corrosion is caused by acid-catalysed hydrolysis and iron(II)-catalysed oxidation of cellulose (Rouchon-Quillet 2004:389). Treatment is a controversial subject. No treatment undoes damage already caused by acidic ink; deterioration can only be stopped or slowed. Some think it best not to treat the item at all for fear of the consequences. Others believe that non-aqueous procedures are the best solution. Yet others think an aqueous procedure may preserve items written with iron gall ink. Aqueous treatments include distilled water at different temperatures, calcium hydroxide, calcium bicarbonate, magnesium carbonate, magnesium bicarbonate, and calcium phytate. There are many possible side effects from these treatments. There can be mechanical damage, which further weakens the paper.
Paper color or ink color may change, and ink may bleed. Other consequences of aqueous treatment are a change of ink texture or the formation of plaque on the surface of the ink (Reibland & de Groot 1999). Iron gall inks require storage in a stable environment, because fluctuating relative humidity increases the rate at which formic acid, acetic acid, and furan derivatives form in the material the ink was used on. Sulfuric acid acts as a catalyst to cellulose hydrolysis, and iron(II) sulfate acts as a catalyst to cellulose oxidation. These chemical reactions physically weaken the paper, causing brittleness.

"Indelible" means "unremovable". Some types of indelible ink have a very short shelf life because of the quickly evaporating solvents used. India, Mexico, Indonesia, Malaysia and other developing countries have used indelible ink in the form of electoral stain to prevent electoral fraud. Election ink based on silver nitrate was first applied in the 1962 Indian general election, after being developed at the National Physical Laboratory of India. The Election Commission in India has used indelible ink for many elections. Indonesia used it in its last election in Aceh. In Mali, the ink is applied to the fingernail. Indelible ink itself is not infallible, as it can be used to commit electoral fraud by marking members of opposing parties before they have had a chance to cast their votes. There are also reports of "indelible" ink washing off voters' fingers in Afghanistan.
https://en.wikipedia.org/wiki?curid=15292
Islamabad Capital Territory Islamabad Capital Territory () is the only federal territory of Pakistan. Located in north-central Pakistan between the provinces of Punjab and Khyber Pakhtunkhwa, it includes the country's federal capital, Islamabad. The territory is represented in the National Assembly by the constituencies NA-52, NA-53 and NA-54. In 1960, land was transferred from Rawalpindi District of Punjab province to establish Pakistan's new capital. According to the 1960s master plan, the Capital Territory included Rawalpindi and was to be composed of several parts; however, Rawalpindi was eventually excluded from the Islamabad master plan in the 1980s. Islamabad is subdivided into five zones. Islamabad Capital Territory comprises urban and rural areas: the rural area consists of 23 union councils comprising 133 villages, while the urban area has 27 union councils.

Islamabad has a humid subtropical climate (Köppen: Cwa), with five seasons: Winter (November–February), Spring (March and April), Summer (May and June), Rainy Monsoon (July and August) and Autumn (September and October). The hottest month is June, when average highs routinely exceed . The wettest month is July, with heavy rainfalls and evening thunderstorms with the possibility of cloudburst and flooding. The coolest month is January. Islamabad's micro-climate is regulated by three artificial reservoirs: Rawal, Simli, and Khanpur Dam. The last is located on the Haro River near the town of Khanpur, about from Islamabad. Simli Dam is north of Islamabad. of the city consists of Margalla Hills National Park. Loi Bher Forest is situated along the Islamabad Highway, covering an area of . The highest monthly rainfall of was recorded during July 1995. Winters generally feature dense fog in the mornings and sunny afternoons. In the city, temperatures stay mild, with snowfall over the higher elevation points on nearby hill stations, notably Murree and Nathia Gali. The temperatures range from in January to in June. The highest recorded temperature was on 23 June 2005, while the lowest temperature was on 17 January 1967. The city has recorded snowfall. On 23 July 2001, Islamabad received a record-breaking of rainfall in just 10 hours. It was the heaviest rainfall in Islamabad in the past 100 years and the highest rainfall in 24 hours as well.

The main administrative authority of the city is the Islamabad Capital Territory Administration, with some help from the Metropolitan Corporation Islamabad and the Capital Development Authority (CDA), which oversees the planning, development, construction, and administration of the city. Islamabad Capital Territory is divided into eight zones: Administrative Zone, Commercial District, Educational Sector, Industrial Sector, Diplomatic Enclave, Residential Areas, Rural Areas and Green Area. Islamabad city is divided into five major zones: Zone I, Zone II, Zone III, Zone IV, and Zone V. Of these, Zone IV is the largest in area. All sectors of Ghauri Town (1, 2, 3, VIP, 5, 4-A, 4-B, 4-C, 5-A, 5-B and sector 7) are located in this zone. Zone I consists mainly of all the developed residential sectors, while Zone II consists of the under-developed residential sectors. Each residential sector is identified by a letter of the alphabet and a number, and covers an area of approximately 4 square kilometres. The sectors are lettered from A to I, and each sector is divided into four numbered sub-sectors. Series A, B, and C are still underdeveloped.
The D series has seven sectors (D-11 to D-17), of which only sector D-12 is completely developed. This series is located at the foot of the Margalla Hills. The E sectors are named from E-7 to E-17. Many foreigners and diplomatic personnel are housed in these sectors. In the revised master plan of the city, the CDA has decided to develop a park on the pattern of Fatima Jinnah Park in sector E-14. Sectors E-8 and E-9 contain the campuses of Bahria University, Air University, and the National Defence University. The F and G series contain the most developed sectors. The F series contains sectors F-5 to F-17; some sectors are still under-developed. F-5 is an important sector for the software industry in Islamabad, as the two software technology parks are located here. The entire F-9 sector is covered with Fatima Jinnah Park. The Centaurus complex will be one of the major landmarks of the F-8 sector. G sectors are numbered G-5 through G-17. Some important places include the Jinnah Convention Center and Serena Hotel in G-5, the Red Mosque in G-6, and the Pakistan Institute of Medical Sciences, the largest medical complex in the capital, located in G-8. The H sectors are numbered H-8 through H-17 and are mostly dedicated to educational and health institutions. The National University of Sciences and Technology covers a major portion of sector H-12. The I sectors are numbered from I-8 to I-18. With the exception of I-8, which is a well-developed residential area, these sectors are primarily part of the industrial zone. Currently two sub-sectors of I-9 and one sub-sector of I-10 are used as industrial areas. The CDA is planning to set up Islamabad Railway Station in Sector I-18 and an Industrial City in sector I-17.

Zone III consists primarily of the Margalla Hills and Margalla Hills National Park. Rawal Lake is in this zone. Zones IV and V consist of Islamabad Park and the rural areas of the city. The Soan River flows into the city through Zone V. While urban Islamabad is home to people from all over Pakistan as well as expatriates, in the rural areas a number of Pothohari-speaking tribal communities can still be recognised.

When the master plan for Islamabad was drawn up in 1960, Islamabad and Rawalpindi, along with the adjoining areas, were to be integrated to form a large metropolitan area called Islamabad/Rawalpindi Metropolitan Area. The area would consist of the developing Islamabad, the old colonial cantonment city of Rawalpindi, and Margalla Hills National Park, including surrounding rural areas. However, Islamabad city is part of the Islamabad Capital Territory, while Rawalpindi is part of Rawalpindi District, which is part of the province of Punjab. Initially, it was proposed that the three areas would be connected by four major highways: Murree Highway, Islamabad Highway, Soan Highway, and Capital Highway. However, to date only two highways have been constructed: Kashmir Highway (the former Murree Highway) and Islamabad Highway. Plans for constructing Margalla Avenue are also underway. Islamabad is the hub of all governmental activities, while Rawalpindi is the centre of all industrial, commercial, and military activities. The two cities are considered sister cities and are highly interdependent.

Islamabad is a net contributor to the Pakistani economy: whilst having only 0.8% of the country's population, it contributes 1% of the country's GDP. Islamabad Stock Exchange, founded in 1989, is Pakistan's third largest stock exchange after the Karachi Stock Exchange and the Lahore Stock Exchange.
The exchange has 118 members, with 104 corporate bodies and 18 individual members. The average daily turnover of the stock exchange is over 1 million shares. As of 2012, Islamabad LTU (Large Tax Unit) was responsible for Rs 371 billion in tax revenue, which amounts to 20% of all the revenue collected by the Federal Board of Revenue.

Islamabad has seen an expansion in information and communications technology with the addition of two Software Technology Parks, which house numerous national and foreign technological and information technology companies. The tech parks are located in the Evacuee Trust Complex and Awami Markaz; Awami Markaz houses 36 IT companies, while the Evacuee Trust Complex houses 29. Call centres for foreign companies have been targeted as another significant area of growth, with the government making efforts to reduce taxes by as much as 10% to encourage foreign investments in the information technology sector. Most of Pakistan's state-owned companies, like PIA, PTV, PTCL, OGDCL, and Zarai Taraqiati Bank Ltd., are based in Islamabad. Headquarters of all major telecommunication operators, such as PTCL, Mobilink, Telenor, Ufone, and China Mobile, are located in Islamabad. Islamabad is an expensive city, and the prices of most fruit, vegetable, and poultry items increased there during 2015–2020.

Islamabad is connected to major destinations around the world through Islamabad International Airport. The airport is the largest in Pakistan, handling 9 million passengers per annum. The airport was built at a cost of $400 million and opened on May 3, 2018, replacing the former Benazir Bhutto International Airport. It is the first greenfield airport in Pakistan, with an area of . The Rawalpindi-Islamabad Metrobus is a bus rapid transit system that serves the twin cities of Rawalpindi and Islamabad. It uses dedicated bus lanes for all of its route, covering 24 bus stations. Islamabad is well connected with other parts of the country through car rental services such as Alvi Transport Network and Pakistan Car Rentals. All major cities and towns are accessible through regular trains and bus services, running mostly from the neighbouring city of Rawalpindi. Lahore and Peshawar are linked to Islamabad through a network of motorways, which has significantly reduced travelling times between these cities. The M-2 Motorway is long and connects Islamabad and Lahore. The M-1 Motorway connects Islamabad with Peshawar and is long. Islamabad is linked to Rawalpindi through the Faizabad Interchange, which has a daily traffic volume of about 48,000 vehicles.

Islamabad has the highest literacy rate in Pakistan, at 95%, and is home to some of Pakistan's major universities, including Quaid-i-Azam University, the International Islamic University, the National University of Sciences and Technology, and the Pakistan Institute of Engineering and Applied Sciences. The Private School Network (PSN) Islamabad works for private educational institutions. The president of PSN is Dr. Muhammad Afzal Babur from Bhara Kahu. PSN is divided into eight zones in Islamabad; in Tarlai Zone, Chaudhary Faisal Ali from Faisal Academy Tarlai Kalan is the Zonal General Secretary of PSN. Quaid-e-Azam University has several faculties. The institute is located in a semi-hilly area, east of the Secretariat buildings and near the base of the Margalla Hills. This post-graduate institute is spread over . The nucleus of the campus has been designed as an axial spine with a library at its center.
Several other universities are also located in the territory. In sport, Islamabad United became the first team to win the Pakistan Super League, in 2016. The federal team now participates in the Pakistan Cup under the captaincy of Misbah-ul-Haq, the former captain of Pakistan, who also captained Islamabad United.
https://en.wikipedia.org/wiki?curid=15294
Non-cognitivism Non-cognitivism is the meta-ethical view that ethical sentences do not express propositions (i.e., statements) and thus cannot be true or false (they are not truth-apt). A noncognitivist denies the cognitivist claim that "moral judgments are capable of being objectively true, because they describe some feature of the world". If moral statements cannot be true, and if one cannot know something that is not true, noncognitivism implies that moral knowledge is impossible. Non-cognitivism entails that non-cognitive attitudes underlie moral discourse and that this discourse therefore consists of non-declarative speech acts, while accepting that its surface features may consistently and efficiently work as if moral discourse were cognitive. The point of interpreting moral claims as non-declarative speech acts is to explain what moral claims mean if they are neither true nor false (as philosophies such as logical positivism entail). Utterances like "Boo to killing!" and "Don't kill" are not candidates for truth or falsity, but have non-cognitive meaning. Emotivism, associated with A. J. Ayer, the Vienna Circle and C. L. Stevenson, suggests that ethical sentences are primarily emotional expressions of one's own attitudes and are intended to influence the actions of the listener. Under this view, "Killing is wrong" is translated as "Killing, boo!" or "I disapprove of killing." A close cousin of emotivism, developed by R. M. Hare, is called universal prescriptivism. Prescriptivists interpret ethical statements as universal "imperatives", prescribing behavior for all to follow. According to prescriptivism, phrases like "Thou shalt not murder!" or "Do not steal!" are the clearest expressions of morality, while reformulations like "Killing is wrong" tend to obscure the meaning of moral sentences. Other forms of non-cognitivism include Simon Blackburn's quasi-realism and Allan Gibbard's norm-expressivism.

Arguments for prescriptivism focus on the "function" of normative statements. Prescriptivists argue that factual statements and prescriptions are totally different, because of different expectations of change in cases of a clash between word and world. In a descriptive sentence, if one asserts that "red is a number", then according to the rules of English that statement would be false. Since the premise describes the objects "red" and "number", anyone with an adequate understanding of English would notice the falseness of the description and thus of the statement. However, if the norm "thou shalt not kill!" is uttered and this norm is violated (by the fact of a person being murdered), the speaker is not to change his sentence upon observing this into "kill other people!", but is to reiterate the moral outrage at the act of killing. Adjusting statements based upon objective reality and adjusting reality based upon statements are contrary uses of language; that is to say, descriptive statements are a different kind of sentence from normative statements. If truth is understood according to the correspondence theory, the question of the truth or falsity of sentences not contingent upon external phenomena cannot be tested (see tautologies). Some cognitivists argue that some expressions like "courageous" have both a factual and a normative component which cannot be distinguished by analysis. Prescriptivists argue that, according to context, either the factual or the normative component of the meaning is dominant.
The sentence "Hero A behaved courageously" is wrong, if A ran away in the face of danger. But the sentence "Be brave and fight for the glory of your country!" has no truth value and cannot be falsified by someone who doesn't join the army. Prescriptivism is also supported by the actual way of speaking. Many moral statements are de facto uttered as recommendations or commands, e.g. when parents or teachers forbid children to do wrong actions. The most famous moral ideas are prescriptions: the Ten Commandments, the command of charity, the categorical imperative, and the Golden Rule command to do or not to do something rather than state that something is or is not the case. Prescriptivism can fit the theist idea of morality as obedience towards god. It is however different from the cognitivist supernaturalism which interprets morality as subjective will of god, while prescriptivism claims that moral rules are universal and can be found by reason alone without reference to a god. According to Hare, prescriptivists cannot argue that amoralists are logically wrong or contradictive. Everyone can choose to follow moral commands or not. This is the human condition according to the Christian reinterpretation of the Choice of Heracles. According to prescriptivism, morality is not about knowledge (of moral facts), but about character (to choose to do the right thing). Actors cannot externalize their responsibility and freedom of will towards some moral truth in the world, virtuous people don't need to wait for some cognition to choose what's right. Prescriptivism is also supported by imperative logic, in which there are no truth values for imperatives, and by the idea of the naturalistic fallacy: even if someone could prove the existence of an ethical property and express it in a factual statement, he could never derive any command from this statement, so the search for ethical properties is pointless. As with other anti-realist meta-ethical theories, non-cognitivism is largely supported by the argument from queerness: ethical properties, if they existed, would be different from any other thing in the universe, since they have no observable effect on the world. People generally have a negative attitude towards murder, which presumably keeps most of us from murdering. But does the actual "wrongness" of murder play an "independent" role? Is there any evidence that there is a property of wrongness that some types of acts have? Some people might think that the strong feelings we have when we see or consider a murder provide evidence of murder's wrongness. But it is not difficult to explain these feelings without saying that "wrongness" was their cause. Thus there is no way of discerning which, if any, ethical properties exist; by Occam's razor, the simplest assumption is that none do. The non-cognitivist then asserts that, since a proposition about an ethical property would have no referent, ethical statements must be something else. Arguments for emotivism focus on what normative statements "express" when uttered by a speaker. A person who says that killing is wrong certainly expresses her disapproval of killing. Emotivists claim that this is "all" she does, that the statement "killing is wrong" is not a truth-apt declaration, and that the burden of evidence is on the cognitivists who want to show that in addition to expressing disapproval, the claim "killing is wrong" is also true. Emotivists ask whether there really is evidence that killing is wrong. 
We have evidence that Jupiter has a magnetic field and that birds are oviparous, but as yet, we do not seem to have found evidence of moral properties, such as "goodness". Emotivists ask why, without such evidence, we should think there "is" such a property. Ethical intuitionists think the evidence comes not from science or reason but from our own feelings: good deeds make us feel a certain way and bad deeds make us feel very differently. But is this enough to show that there are genuinely good and bad deeds? Emotivists think not, claiming that we do not need to postulate the existence of moral "badness" or "wrongness" to explain why considering certain deeds makes us feel disapproval; that all we really observe when we introspect are feelings of disapproval. Thus the emotivist asks why not adopt the simple explanation and say that this is all there is, rather than insist that some intrinsic "badness" (of murder, for example) must be causing feelings when a simpler explanation is available.

One argument against non-cognitivism is that it ignores the external "causes" of emotional and prescriptive reactions. If someone says, "John is a good person," something about John must have inspired that reaction. If John gives to the poor, takes care of his sick grandmother, and is friendly to others, and these are what inspire the speaker to think well of him, it is plausible to say, "John is a good person because he gives to the poor, takes care of his sick grandmother, and is friendly to others." If, in turn, the speaker responds positively to the idea of giving to the poor, then some aspect of that idea must have inspired a positive response; one could argue that that aspect is also the basis of its goodness.

Another argument is the "embedding problem". Consider a sentence such as "She does not realize that eating meat is wrong." Attempts to translate such sentences in an emotivist framework seem to fail (e.g. "She does not realize, 'Boo on eating meat!'"). Prescriptivist translations fare only slightly better ("She does not realize that she is not to eat meat"). Even the act of forming such a construction indicates some sort of cognition in the process. According to some non-cognitivist points of view, these sentences simply assume the false premise that ethical statements are either true or false, and should be translated accordingly. Such literal translations, however, seem divorced from the way people actually use language. A non-cognitivist would have to disagree with someone saying, "'Eating meat is wrong' is a false statement" (since "Eating meat is wrong" is not truth-apt at all), but may be tempted to agree with a person saying, "Eating meat is not wrong." One might more constructively interpret these statements as describing the underlying emotional statement that they express, i.e.: I disapprove/do not disapprove of eating meat, I used to, he doesn't, I do and she doesn't, etc.; however, this interpretation is closer to ethical subjectivism than to non-cognitivism proper. A similar argument against non-cognitivism is that of ethical argument. A common argument might be, "If killing an innocent human is always wrong, and all fetuses are innocent humans, then killing a fetus is always wrong." Most people would consider such an utterance to represent an analytic proposition which is true "a priori". However, if ethical statements do not represent cognitions, it seems odd to use them as premises in an argument, and even odder to assume they follow the same rules of syllogism as true propositions. However, R. M.
Hare, a proponent of universal prescriptivism, has argued that the rules of logic are independent of grammatical mood, and thus the same logical relations may hold between imperatives as hold between indicatives. Many objections to non-cognitivism based on the linguistic characteristics of what purport to be moral judgments were originally raised by Peter Glassen in "The Cognitivity of Moral Judgments", published in "Mind" in January 1959, and in Glassen's follow-up article in the January 1963 issue of the same journal.
https://en.wikipedia.org/wiki?curid=21178
North Sea The North Sea is a sea of the Atlantic Ocean located between Great Britain (England and Scotland), Denmark, Norway, Germany, the Netherlands, Belgium and France. An epeiric (or "shelf") sea on the European continental shelf, it connects to the ocean through the English Channel in the south and the Norwegian Sea in the north. It is more than long and wide, with an area of . The North Sea has long been the site of important European shipping lanes as well as a major fishery. The coast is a popular destination for recreation and tourism in bordering countries, and more recently the sea has developed into a rich source of energy resources, including fossil fuels, wind, and early efforts in wave power.

Historically, the North Sea has featured prominently in geopolitical and military affairs, particularly in Northern Europe. It was also important globally through the power northern Europeans projected worldwide during much of the Middle Ages and into the modern era. The North Sea was the centre of the Vikings' rise. Subsequently, the Hanseatic League, the Dutch Republic, and the British each sought to gain command of the North Sea and thus access to the world's markets and resources. As Germany's only outlet to the ocean, the North Sea continued to be strategically important through both World Wars.

The coast of the North Sea presents a diversity of geological and geographical features. In the north, deep fjords and sheer cliffs mark the Norwegian and Scottish coastlines, whereas in the south, the coast consists primarily of sandy beaches and wide mudflats. Due to the dense population, heavy industrialization, and intense use of the sea and the area surrounding it, there have been various environmental issues affecting the sea's ecosystems. Adverse environmental issues, commonly including overfishing, industrial and agricultural runoff, dredging, and dumping, among others, have led to a number of efforts to prevent degradation of the sea while still making use of its economic potential.

The North Sea is bounded by the Orkney Islands and the east coast of Great Britain to the west, and by the northern and central European mainland to the east and south, including Norway, Denmark, Germany, the Netherlands, Belgium, and France. In the southwest, beyond the Straits of Dover, the North Sea becomes the English Channel connecting to the Atlantic Ocean. In the east, it connects to the Baltic Sea via the Skagerrak and Kattegat, narrow straits that separate Denmark from Norway and Sweden respectively. In the north it is bordered by the Shetland Islands, and connects with the Norwegian Sea, which is a marginal sea of the Arctic Ocean. The North Sea is more than long and wide, with an area of and a volume of . Around the edges of the North Sea are sizeable islands and archipelagos, including Shetland, Orkney, and the Frisian Islands. The North Sea receives freshwater from a number of European continental watersheds, as well as from the British Isles. A large part of the European drainage basin empties into the North Sea, including water from the Baltic Sea. The largest and most important rivers flowing into the North Sea are the Elbe and the Rhine–Meuse. Around 185 million people live in the catchment area of the rivers discharging into the North Sea, which encompasses some highly industrialized areas. For the most part, the sea lies on the European continental shelf with a mean depth of . The only exception is the Norwegian Trench, which extends parallel to the Norwegian shoreline from Oslo to an area north of Bergen.
It is between wide and has a maximum depth of . The Dogger Bank, a vast moraine, or accumulation of unconsolidated glacial debris, rises to a mere below the surface. This feature has produced the finest fishing location of the North Sea. The Long Forties and the Broad Fourteens are large areas with roughly uniform depth in fathoms (forty fathoms and fourteen fathoms deep, respectively). These great banks and others make the North Sea particularly hazardous to navigate, a hazard which has been alleviated by the implementation of satellite navigation systems. The Devil's Hole lies east of Dundee, Scotland. The feature is a series of asymmetrical trenches between long, wide and up to deep. Other areas which are less deep are Cleaver Bank, Fisher Bank and Noordhinder Bank.

The International Hydrographic Organization defines the limits of the North Sea as follows: "On the Southwest." A line joining the Walde Lighthouse (France, 1°55'E) and Leathercoat Point (England, 51°10'N). "On the Northwest." From Dunnet Head (3°22'W) in Scotland to Tor Ness (58°47'N) in the Island of Hoy, thence through this island to the Kame of Hoy (58°55'N), on to Breck Ness on Mainland (58°58'N), through this island to Costa Head (3°14'W) and to Inga Ness (59°17'N) in Westray, through Westray to Bow Head, across to Mull Head (North point of Papa Westray) and on to Seal Skerry (North point of North Ronaldsay), and thence to Horse Island (South point of the Shetland Islands). "On the North." From the North point (Fethaland Point) of the Mainland of the Shetland Islands, across to Graveland Ness (60°39'N) in the Island of Yell, through Yell to Gloup Ness (1°04'W) and across to Spoo Ness (60°45'N) in Unst island, through Unst to Herma Ness (60°51'N), on to the SW point of the Rumblings and to Muckle Flugga (), all these being included in the North Sea area; thence up the meridian of 0°53' West to the parallel of 61°00' North and eastward along this parallel to the coast of Norway, the whole of Viking Bank being thus included in the North Sea. "On the East." The Western limit of the Skagerrak [a line joining Hanstholm () and the Naze (Lindesnes, )].

The average temperature in summer is and in the winter. The average temperatures have been trending higher since 1988, which has been attributed to climate change. Air temperatures in January range on average between and in July between . The winter months see frequent gales and storms. The salinity averages between of water. The salinity has the highest variability where there is fresh water inflow, such as at the Rhine and Elbe estuaries, the Baltic Sea exit and along the coast of Norway. The main pattern of the flow of water in the North Sea is an anti-clockwise rotation along the edges. The North Sea is an arm of the Atlantic Ocean, receiving the majority of its ocean current from the northwest opening and a lesser portion of warm current from the smaller opening at the English Channel. These tidal currents leave along the Norwegian coast. Surface and deep water currents may move in different directions: low-salinity surface coastal waters move offshore, and deeper, denser high-salinity waters move inshore. The North Sea, located on the continental shelf, has different waves from those in deep ocean water: the wave speeds are diminished and the wave amplitudes are increased. In the North Sea there are two amphidromic systems and a third incomplete amphidromic system. In the North Sea the average tide difference in wave amplitude is between zero and .
The Kelvin tide of the Atlantic Ocean is a semidiurnal wave that travels northward. Some of the energy from this wave travels through the English Channel into the North Sea. The wave continues to travel northward in the Atlantic Ocean, and once past the northern tip of Great Britain, the Kelvin wave turns east and south and once again enters the North Sea.

The eastern and western coasts of the North Sea are jagged, formed by glaciers during the ice ages. The coastlines along the southernmost part are covered with the remains of deposited glacial sediment. The Norwegian mountains plunge into the sea, creating deep fjords and archipelagos. South of Stavanger, the coast softens and the islands become fewer. The eastern Scottish coast is similar, though less severe than Norway's. From the north-east of England, the cliffs become lower and are composed of less resistant moraine, which erodes more easily, so that the coasts have more rounded contours. In the Netherlands, Belgium and East Anglia the littoral is low and marshy. The east coast and south-east of the North Sea (Wadden Sea) have coastlines that are mainly sandy and straight owing to longshore drift, particularly along Belgium and Denmark.

The southern coastal areas were originally amphibious flood plains and swampy land. In areas especially vulnerable to storm surges, people settled behind elevated levees and on natural areas of high ground such as spits and geestland. As early as 500 BC, people were constructing artificial dwelling hills higher than the prevailing flood levels. It was only around the beginning of the High Middle Ages, around 1200 AD, that inhabitants began to connect single ring dikes into a dike line along the entire coast, thereby turning amphibious regions between the land and the sea into permanent solid ground. The modern form of the dikes, supplemented by overflow and lateral diversion channels, began to appear in the 17th and 18th centuries, built in the Netherlands. The North Sea floods of 1953 and 1962 were the impetus for further raising of the dikes, as well as for the shortening of the coastline so as to present as little surface area as possible to the punishment of the sea and the storms. Currently, 27% of the Netherlands is below sea level, protected by dikes, dunes, and beach flats. Coastal management today consists of several levels. The dike slope reduces the energy of the incoming sea, so that the dike itself does not receive the full impact. Dikes that lie directly on the sea are especially reinforced. The dikes have, over the years, been repeatedly raised, sometimes up to , and have been made flatter to better reduce wave erosion. Where the dunes are sufficient to protect the land behind them from the sea, these dunes are planted with beach grass ("Ammophila arenaria") to protect them from erosion by wind, water, and foot traffic.

Storm surges threaten, in particular, the coasts of the Netherlands, Belgium, Germany, and Denmark and the low-lying areas of eastern England, particularly around The Wash and the Fens. Storm surges are caused by changes in barometric pressure combined with strong wind-created wave action. The first recorded storm tide flood was the "Julianenflut", on 17 February 1164. In its wake, the Jadebusen (a bay on the coast of Germany) began to form. A storm tide in 1228 is recorded to have killed more than 100,000 people. In 1362, the Second Marcellus Flood, also known as the "Grote Manndrenke", hit the entire southern coast of the North Sea.
Chronicles of the time again record more than 100,000 deaths, large parts of the coast were lost permanently to the sea, including the now legendary lost city of Rungholt. In the 20th century, the North Sea flood of 1953 flooded several nations' coasts and cost more than 2,000 lives. 315 citizens of Hamburg died in the North Sea flood of 1962. Though rare, the North Sea has been the site of a number of historically documented tsunamis. The Storegga Slides were a series of underwater landslides, in which a piece of the Norwegian continental shelf slid into the Norwegian Sea. The immense landslips occurred between 8150 BCE and 6000 BCE, and caused a tsunami up to high that swept through the North Sea, having the greatest effect on Scotland and the Faeroe Islands. The Dover Straits earthquake of 1580 is among the first recorded earthquakes in the North Sea measuring between 5.6 and 5.9 on the Richter scale. This event caused extensive damage in Calais both through its tremors and possibly triggered a tsunami, though this has never been confirmed. The theory is a vast underwater landslide in the English Channel was triggered by the earthquake, which in turn caused a tsunami. The tsunami triggered by the 1755 Lisbon earthquake reached Holland, although the waves had lost their destructive power. The largest earthquake ever recorded in the United Kingdom was the 1931 Dogger Bank earthquake, which measured 6.1 on the Richter magnitude scale and caused a small tsunami that flooded parts of the British coast. Shallow epicontinental seas like the current North Sea have since long existed on the European continental shelf. The rifting that formed the northern part of the Atlantic Ocean during the Jurassic and Cretaceous periods, from about , caused tectonic uplift in the British Isles. Since then, a shallow sea has almost continuously existed between the uplands of the Fennoscandian Shield and the British Isles. This precursor of the current North Sea has grown and shrunk with the rise and fall of the eustatic sea level during geologic time. Sometimes it was connected with other shallow seas, such as the sea above the Paris Basin to the south-west, the Paratethys Sea to the south-east, or the Tethys Ocean to the south. During the Late Cretaceous, about , all of modern mainland Europe except for Scandinavia was a scattering of islands. By the Early Oligocene, , the emergence of Western and Central Europe had almost completely separated the North Sea from the Tethys Ocean, which gradually shrank to become the Mediterranean as Southern Europe and South West Asia became dry land. The North Sea was cut off from the English Channel by a narrow land bridge until that was breached by at least two catastrophic floods between 450,000 and 180,000 years ago. Since the start of the Quaternary period about , the eustatic sea level has fallen during each glacial period and then risen again. Every time the ice sheet reached its greatest extent, the North Sea became almost completely dry. The present-day coastline formed after the Last Glacial Maximum when the sea began to flood the European continental shelf. In 2006 a bone fragment was found while drilling for oil in the North Sea. Analysis indicated that it was a Plateosaurus from 199 to 216 million years ago. This was the deepest dinosaur fossil ever found and the first find for Norway. Copepods and other zooplankton are plentiful in the North Sea. These tiny organisms are crucial elements of the food chain supporting many species of fish. 
Over 230 species of fish live in the North Sea. Cod, haddock, whiting, saithe, plaice, sole, mackerel, herring, pouting, sprat, and sandeel are all very common and are fished commercially. Due to the various depths of the North Sea trenches and differences in salinity, temperature, and water movement, some fish such as blue-mouth redfish and rabbitfish reside only in small areas of the North Sea. Crustaceans are also commonly found throughout the sea. Norway lobster, deep-water prawns, and brown shrimp are all commercially fished, but other species of lobster, shrimp, oyster, mussels and clams all live in the North Sea. Recently non-indigenous species have become established including the Pacific oyster and Atlantic jackknife clam. The coasts of the North Sea are home to nature reserves including the Ythan Estuary, Fowlsheugh Nature Preserve, and Farne Islands in the UK and the Wadden Sea National Parks in Denmark, Germany and the Netherlands. These locations provide breeding habitat for dozens of bird species. Tens of millions of birds make use of the North Sea for breeding, feeding, or migratory stopovers every year. Populations of black legged kittiwakes, Atlantic puffins, northern fulmars, and species of petrels, gannets, seaducks, loons (divers), cormorants, gulls, auks, and terns, and many other seabirds make these coasts popular for birdwatching. The North Sea is also home to marine mammals. Common seals, and harbour porpoises can be found along the coasts, at marine installations, and on islands. The very northern North Sea islands such as the Shetland Islands are occasionally home to a larger variety of pinnipeds including bearded, harp, hooded and ringed seals, and even walrus. North Sea cetaceans include various porpoise, dolphin and whale species. Plant species in the North Sea include species of wrack, among them bladder wrack, knotted wrack, and serrated wrack. Algae, macroalgal, and kelp, such as oarweed and laminaria hyperboria, and species of maerl are found as well. Eelgrass, formerly common in the entirety of the Wadden Sea, was nearly wiped out in the 20th century by a disease. Similarly, sea grass used to coat huge tracts of ocean floor, but have been damaged by trawling and dredging have diminished its habitat and prevented its return. Invasive Japanese seaweed has spread along the shores of the sea clogging harbours and inlets and has become a nuisance. Due to the heavy human populations and high level of industrialization along its shores, the wildlife of the North Sea has suffered from pollution, overhunting, and overfishing. Flamingos and pelicans were once found along the southern shores of the North Sea, but became extinct over the 2nd millennium. Walruses frequented the Orkney Islands through the mid-16th century, as both Sable Island and Orkney Islands lay within its normal range. Gray whales also resided in the North Sea but were driven to extinction in the Atlantic in the 17th century Other species have dramatically declined in population, though they are still found. North Atlantic right whales, sturgeon, shad, rays, skates, salmon, and other species were common in the North Sea until the 20th century, when numbers declined due to overfishing. 
Other factors like the introduction of non-indigenous species, industrial and agricultural pollution, trawling and dredging, human-induced eutrophication, construction on coastal breeding and feeding grounds, sand and gravel extraction, offshore construction, and heavy shipping traffic have also contributed to the decline. For example, a resident killer whale pod was lost in the 1960s, presumably due to the peak in PCB pollution in this time period. The OSPAR commission manages the OSPAR convention to counteract the harmful effects of human activity on wildlife in the North Sea, preserve endangered species, and provide environmental protection. All North Sea border states are signatories of the MARPOL 73/78 Accords, which preserve the marine environment by preventing pollution from ships. Germany, Denmark, and the Netherlands also have a trilateral agreement for the protection of the Wadden Sea, or mudflats, which run along the coasts of the three countries on the southern edge of the North Sea. The North Sea has had various names through history. One of the earliest recorded names was "Septentrionalis Oceanus", or "Northern Ocean," which was cited by Pliny. The name "North Sea" probably came into English, however, via the Dutch "Noordzee", who named it thus either in contrast with the Zuiderzee ("South Sea"), located south of Frisia, or because the sea is generally to the north of the Netherlands. Before the adoption of "North Sea," the names used in English, in American English in particular, were "German Sea" or "German Ocean", referred to the Latin names "Mare Germanicum" and "Oceanus Germanicus", and these persisted in use until the First World War. Other common names in use for long periods were the Latin terms "Mare Frisicum", as well as the English equivalent, "Frisian Sea". The modern names of the sea in the other local languages are: ("West Sea") or "Nordsøen" , , , , , , , Northern Frisian: "Weestsiie" ("West Sea"), , , , , and Zeeuws: "Noôrdzeê". North Sea has provided waterway access for commerce and conquest. Many areas have access to the North Sea because of its long coastline and the European rivers that empty into it. The British Isles had been protected from invasion by the North Sea waters until the Roman conquest of Britain in 43 CE. The Romans established organised ports, which increased shipping, and began sustained trade. When the Romans abandoned Britain in 410, the Germanic Angles, Saxons, and Jutes began the next great migration across the North Sea during the Migration Period. They made successive invasions of the island. The Viking Age began in 793 with the attack on Lindisfarne; for the next quarter-millennium the Vikings ruled the North Sea. In their superior longships, they raided, traded, and established colonies and outposts along the coasts of the sea. From the Middle Ages through the 15th century, the northern European coastal ports exported domestic goods, dyes, linen, salt, metal goods and wine. The Scandinavian and Baltic areas shipped grain, fish, naval necessities, and timber. In turn the North Sea countries imported high-grade cloths, spices, and fruits from the Mediterranean region. Commerce during this era was mainly conducted by maritime trade due to underdeveloped roadways. In the 13th century the Hanseatic League, though centred on the Baltic Sea, started to control most of the trade through important members and outposts on the North Sea. 
The League lost its dominance in the 16th century, as neighbouring states took control of former Hanseatic cities and outposts. Their internal conflict prevented effective cooperation and defence. As the League lost control of its maritime cities, new trade routes emerged that provided Europe with Asian, American, and African goods. The 17th century Dutch Golden Age during which Dutch herring, cod and whale fisheries reached an all time high saw Dutch power at its zenith. Important overseas colonies, a vast merchant marine, powerful navy and large profits made the Dutch the main challengers to an ambitious England. This rivalry led to the first three Anglo-Dutch Wars between 1652 and 1673, which ended with Dutch victories. After the Glorious Revolution in 1688, the Dutch prince William ascended to the English throne. With unified leadership, commercial, military, and political power began to shift from Amsterdam to London. The British did not face a challenge to their dominance of the North Sea until the 20th century. Tensions in the North Sea were again heightened in 1904 by the Dogger Bank incident. During the Russo-Japanese War, several ships of the Russian Baltic Fleet, which was on its way to the Far East, mistook British fishing boats for Japanese ships and fired on them, and then upon each other, near the Dogger Bank, nearly causing Britain to enter the war on the side of Japan. During the First World War, Great Britain's Grand Fleet and Germany's Kaiserliche Marine faced each other in the North Sea, which became the main theatre of the war for surface action. Britain's larger fleet and North Sea Mine Barrage were able to establish an effective blockade for most of the war, which restricted the Central Powers' access to many crucial resources. Major battles included the Battle of Heligoland Bight, the Battle of the Dogger Bank, and the Battle of Jutland. World War I also brought the first extensive use of submarine warfare, and a number of submarine actions occurred in the North Sea. The Second World War also saw action in the North Sea, though it was restricted more to aircraft reconnaissance, and action by fighter/bomber aircraft, submarines, and smaller vessels such as minesweepers and torpedo boats. In the aftermath of the war, hundreds of thousands of tons of chemical weapons were disposed of by being dumped in the North Sea. After the war, the North Sea lost much of its military significance because it is bordered only by NATO member-states. However, it gained significant economic importance in the 1960s as the states around the North Sea began full-scale exploitation of its oil and gas resources. The North Sea continues to be an active trade route. Countries that border the North Sea all claim the of territorial waters, within which they have exclusive fishing rights. The Common Fisheries Policy of the European Union (EU) exists to coordinate fishing rights and assist with disputes between EU states and the EU border state of Norway. After the discovery of mineral resources in the North Sea, the Convention on the Continental Shelf established country rights largely divided along the median line. The median line is defined as the line "every point of which is equidistant from the nearest points of the baselines from which the breadth of the territorial sea of each State is measured." The ocean floor border between Germany, the Netherlands, and Denmark was only reapportioned after protracted negotiations and a judgement of the International Court of Justice. 
As early as 1859, oil was discovered in onshore areas around the North Sea and natural gas as early as 1910. Onshore resources, for example the K12-B field in the Netherlands continue to be exploited today. Offshore test drilling began in 1966 and then, in 1969, Phillips Petroleum Company discovered the Ekofisk oil field distinguished by valuable, low-sulphur oil. Commercial exploitation began in 1971 with tankers and, after 1975, by a pipeline, first to Teesside, England and then, after 1977, also to Emden, Germany. The exploitation of the North Sea oil reserves began just before the 1973 oil crisis, and the climb of international oil prices made the large investments needed for extraction much more attractive. The start in 1973 of the oil reserves by UK allowed them to stop the declining position in the international trade in 1974, and a huge increase after the discovery and exploitation of the huge oil field by Phillips group in 1977 as the Brae field. Although the production costs are relatively high, the quality of the oil, the political stability of the region, and the proximity of important markets in western Europe has made the North Sea an important oil-producing region. The largest single humanitarian catastrophe in the North Sea oil industry was the destruction of the offshore oil platform Piper Alpha in 1988 in which 167 people lost their lives. Besides the Ekofisk oil field, the Statfjord oil field is also notable as it was the cause of the first pipeline to span the Norwegian trench. The largest natural gas field in the North Sea, Troll gas field, lies in the Norwegian trench, dropping over , requiring the construction of the enormous Troll A platform to access it. The price of Brent Crude, one of the first types of oil extracted from the North Sea, is used today as a standard price for comparison for crude oil from the rest of the world. The North Sea contains western Europe's largest oil and natural gas reserves and is one of the world's key non-OPEC producing regions. In the UK sector of the North Sea, the oil industry invested £14.4 billion in 2013, and was on track to spend £13 billion in 2014. Industry body Oil & Gas UK put the decline down to rising costs, lower production, high tax rates, and less exploration. As of January 2018 The North Sea region contains 184 offshore rigs, which makes it the region with the highest number of offshore rigs in the world. The North Sea is Europe's main fishery accounting for over 5% of international commercial fish caught. Fishing in the North Sea is concentrated in the southern part of the coastal waters. The main method of fishing is trawling. In 1995, the total volume of fish and shellfish caught in the North Sea was approximately 3.5 million tonnes. Besides fish, it is estimated that one million tonnes of unmarketable by-catch is caught and discarded each year. In recent decades, overfishing has left many fisheries unproductive, disturbing marine food chain dynamics and costing jobs in the fishing industry. Herring, cod and plaice fisheries may soon face the same plight as mackerel fishing, which ceased in the 1970s due to overfishing. The objective of the European Union Common Fisheries Policy is to minimize the environmental impact associated with resource use by reducing fish discards, increasing productivity of fisheries, stabilising markets of fisheries and fish processing, and supplying fish at reasonable prices for the consumer. 
Whaling was an important economic activity from the 9th until the 13th century for Flemish whalers. The medieval Flemish, Basque and Norwegian whalers who were replaced in the 16th century by Dutch, English, Danes and Germans, took massive numbers of whales and dolphins and nearly depleted the right whales. This activity likely led to the extinction of the Atlantic population of the once common gray whale. By 1902 the whaling had ended. After being absent for 300 years a single gray whale returned, it probably was the first of many more to find its way through the now ice-free Northwest Passage. In addition to oil, gas, and fish, the states along the North Sea also take millions of cubic metres per year of sand and gravel from the ocean floor. These are used for beach nourishment, land reclamation and construction. Rolled pieces of amber may be picked up on the east coast of England. Due to the strong prevailing winds, and shallow water, countries on the North Sea, particularly Germany and Denmark, have used the shore for wind power since the 1990s. The North Sea is the home of one of the first large-scale offshore wind farms in the world, Horns Rev 1, completed in 2002. Since then many other wind farms have been commissioned in the North Sea (and elsewhere). As of 2013 the 630 megawatt (MW) London Array is the largest offshore wind farm in the world, with the 504 (MW) Greater Gabbard wind farm the second largest, followed by the 367 MW Walney Wind Farm. All are off the coast of the UK. These projects will be dwarfed by subsequent wind farms that are in the pipeline, including Dogger Bank at 4,800 MW, Norfolk Bank (7,200 MW), and Irish Sea (4,200 MW). At the end of June 2013 total European combined offshore wind energy capacity was 6,040 MW. UK installed 513.5 MW offshore windpower in the first half-year of 2013. The expansion of offshore wind farms has met with some resistance. Concerns have included shipping collisions and environmental effects on ocean ecology and wildlife such as fish and migratory birds, however, these concerns were found to be negligible in a long-term study in Denmark released in 2006 and again in a UK government study in 2009. There are also concerns about reliability, and the rising costs of constructing and maintaining offshore wind farms. Despite these, development of North Sea wind power is continuing, with plans for additional wind farms off the coasts of Germany, the Netherlands, and the UK. There have also been proposals for a transnational power grid in the North Sea to connect new offshore wind farms. Energy production from tidal power is still in a pre-commercial stage. The European Marine Energy Centre has installed a wave testing system at Billia Croo on the Orkney mainland and a tidal power testing station on the nearby island of Eday. Since 2003, a prototype Wave Dragon energy converter has been in operation at Nissum Bredning fjord of northern Denmark. The beaches and coastal waters of the North Sea are destinations for tourists. The Belgian, Dutch, German and Danish coasts are developed for tourism. The North Sea coast of the United Kingdom has tourist destinations with beach resorts and golf courses. Fife in Scotland is famous for its links golf courses; the coastal city of St. Andrews is renowned as the "Home of Golf". 
The coast of North East England has several tourist towns such as Scarborough, Bridlington, Seahouses, Whitby, Robin Hood's Bay and Seaton Carew, and has some long sandy beaches and links golfing locations such as Seaton Carew Golf Club and Goswick Golf Club. The North Sea Trail is a long-distance trail linking seven countries around the North Sea. Windsurfing and sailing are popular sports because of the strong winds. Mudflat hiking, recreational fishing and birdwatching are among other activities. The climatic conditions on the North Sea coast have been claimed to be healthy. As early as the 19th century, travellers visited the North Sea coast for curative and restorative vacations. The sea air, temperature, wind, water, and sunshine are counted among the beneficial conditions that are said to activate the body's defences, improve circulation, strengthen the immune system, and have healing effects on the skin and the respiratory system. The Wadden Sea in Denmark, Germany and the Netherlands is an UNESCO World Heritage Site. The North Sea is important for marine transport and its shipping lanes are among the busiest in the world. Major ports are located along its coasts: Rotterdam, the busiest port in Europe and the fourth busiest port in the world by tonnage , Antwerp (was 16th) and Hamburg (was 27th), Bremen/Bremerhaven and Felixstowe, both in the top 30 busiest container seaports, as well as the Port of Bruges-Zeebrugge, Europe's leading ro-ro port. Fishing boats, service boats for offshore industries, sport and pleasure craft, and merchant ships to and from North Sea ports and Baltic ports must share routes on the North Sea. The Dover Strait alone sees more than 400 commercial vessels a day. Because of this volume, navigation in the North Sea can be difficult in high traffic zones, so ports have established elaborate vessel traffic services to monitor and direct ships into and out of port. The North Sea coasts are home to numerous canals and canal systems to facilitate traffic between and among rivers, artificial harbours, and the sea. The Kiel Canal, connecting the North Sea with the Baltic Sea, is the most heavily used artificial seaway in the world reporting an average of 89 ships per day not including sporting boats and other small watercraft in 2009. It saves an average of , instead of the voyage around the Jutland peninsula. The North Sea Canal connects Amsterdam with the North Sea.
https://en.wikipedia.org/wiki?curid=21179
Natural Born Killers Natural Born Killers is a 1994 American crime film directed by Oliver Stone and starring Woody Harrelson, Juliette Lewis, Robert Downey Jr., Tom Sizemore, and Tommy Lee Jones. The film tells the story of two victims of traumatic childhoods who became lovers and mass murderers, and are irresponsibly glorified by the mass media. The film is based on an original screenplay by Quentin Tarantino that was heavily revised by Stone, writer David Veloz, and associate producer Richard Rutowski. Tarantino received a story credit though he subsequently disowned the film. Jane Hamsher, Don Murphy, and Clayton Townsend produced the film, with Arnon Milchan, Thom Mount, and Stone as executive producers. The film was released on August 26, 1994 in the United States, and also screened at the Venice Film Festival on August 29, 1994. It was a box office success, grossing over $50 million against a production budget of $34 million, but received a mixed critical reception. Some critics praised the plot, acting, humor, and combination of action and romance, while others found the film overly violent and graphic. Notorious for its violent content and inspiring "copycat" crimes, the film was named the eighth most controversial film in history by "Entertainment Weekly" in 2006. Mickey Knox and his wife Mallory stop at a diner in the New Mexico desert. A duo of rednecks arrive and begin sexually harassing Mallory as she dances by a jukebox. She initially encourages it before beating one of the men viciously. Mickey joins her, and the couple murder everyone in the diner, save one staff member, to whom they proudly declare their names before leaving. The couple camp in the desert, and Mallory reminisces about how she met Mickey, a meat deliveryman who serviced her family's household. After a whirlwind romance, the couple murdered Mallory's sexually abusive father and neglectful mother before having an unofficial marriage ceremony on a bridge. Later, Mickey and Mallory hold a woman hostage in their hotel room. Angered by Mickey's desire for a threesome, Mallory leaves, and Mickey rapes the hostage. Mallory drives to a nearby gas station, where she flirts with a mechanic. They begin to have sex on the hood of a car, but Mallory kills him after suffering a flashback of being raped by her father. The pair continue their killing spree, ultimately claiming fifty-two victims in New Mexico, Arizona, and Nevada. Pursuing them is Detective Jack Scagnetti, who became obsessed with mass murderers after witnessing his mother being shot and killed by Charles Whitman when he was eight. Beneath his heroic facade, he is also a violent psychopath and has murdered prostitutes in his past. Following the Knoxes murder spree is a self-serving tabloid journalist, Wayne Gale, who profiles them on his show, "American Maniacs", soon elevating them to cult hero status. Mickey and Mallory get lost in the desert after taking psychedelic mushrooms, and stumble upon a ranch owned by Warren Red Cloud, a Navajo man who provides them food and shelter. As Mickey and Mallory sleep, Warren senses evil in the couple and attempts to exorcise the demon he perceives in Mickey, chanting over him as he sleeps. Mickey, who has nightmares recounting his abusive parents, awakens during the exorcism, and shoots Warren to death. As they couple flee, they feel inexplicably guilty, and come across a giant field of rattlesnakes, where they are badly bitten. They reach a drugstore to purchase snakebite antidote, but the store is sold out. 
A pharmacist recognizes the couple and sets off an alarm before Mickey kills him. Police arrive shortly after and accost the couple, which ends in a shootout followed by police beating the couple while a news crew films the action. One year later, an imprisoned Mickey and Mallory are scheduled to be transferred to psychiatric hospitals. Scagnetti orders Warden Dwight McClusky to kill the Knoxes during their transfer, and claim they tried to escape. Meanwhile, Gale has persuaded Mickey to give a live interview that will air after the Super Bowl. During the interview, Mickey declares himself a "natural born killer," a phrase that inspires other inmates to start a prison riot. After the interview is terminated by McClusky, Mickey is left alone with Gale, the film crew, and several guards; he manages to overpower a guard and kill most of the people in the room, taking Gale and several others hostage. Gale and his crew give a live television report that profiles the riot. Meanwhile, Scagnetti attempts to seduce Mallory in her cell. She responds by beating him viciously, and another guard subdues her with tear gas. Mickey and Gale reach Mallory's cell, where Mickey kills the guards and engages in a standoff with Scagnetti before Mallory murders him. Gale's entire television crew is killed trying to escape the riot, while Gale himself begins indulging in violence, shooting at prison guards. Mickey and Mallory steal a van and escape into the woods with Gale, to whom they give a final interview before declaring that he must die. He attempts various arguments to change their minds, appealing to their trademark practice of leaving one survivor; Mickey informs him they are leaving a witness to tell the tale, his camera. Gale accepts his fate and is shot to death. Unbeknownst to the three, the entire exchange is transmitted to a horrified news anchor through Gale's in-ear microphone. Several years later, Mickey and Mallory, still on the run, travel in an RV, as a pregnant Mallory watches their two children play. One of the central themes of "Natural Born Killers" is the relationship between real-life violence and the mass media's coverage of it. This thematic preoccupation was declared in the film's promotional materials, with its theatrical poster advertising it as a "bold new film that takes a look at a country seduced by fame, obsessed by crime, and consumed by the media." The character of Wayne Gale, the television host of "American Maniacs", functions in the film as a figurehead of lurid true crime television documentaries, which recycle real-life incidents of violence and criminal activity into entertainment for the general public. On several occasions, expressionistic jump cuts featuring Gale as a blood-soaked Satan are interspersed into the film, which Muir suggests emphasizes the film's assertion that mass media and crime mutually reinforce one another. Media representation of the nuclear family has been identified as another theme in the film, particularly with the depiction of Mallory's dysfunctional family life, which includes a neglectful mother and a father who sexually abuses her. Muir notes that the sequence depicting Mallory's home life—presented as a television sitcom with the title "I Love Mallory" (a parody of "I Love Lucy")—charts "the colossal gulf between the imagery sold to America regarding family life and the truth, for many Americans, of such family life in the 1990s." 
The "sitcom" representation of Mallory's household results in a visual dichotomy between her "life as she imagined it should be (replete with an oppressive laugh track eradicating any scary sense of ambiguity)" and the "grim truth of it." "Natural Born Killers" was based on a screenplay written by Quentin Tarantino, in which a married couple suddenly decide to go on a killing spree. Tarantino had sold an option for his script to producers Jane Hamsher and Don Murphy for $10,000 after he had tried, and failed, to direct it himself for $500,000. Hamsher and Murphy subsequently sold the screenplay to Warner Bros. Around the same time, Oliver Stone was made aware of the script. He was keen to find something more straightforward than his previous production, "Heaven & Earth" (1993), a difficult shoot which had left him exhausted. David Veloz, associate producer Richard Rutowski, and Stone rewrote Tarantino's script, keeping much of the dialogue but changing the focus of the film from journalist Wayne Gale to Mickey and Mallory. The script was revised so drastically that Tarantino was credited for the story only. In a 1993 interview, Tarantino stated that he did not hold any animosity towards Stone, and that he wished the film well. Initially, when producers Hamsher and Murphy had first brought the script to Stone's attention, he had envisioned it as an action film; "something Arnold Schwarzenegger would be proud of." As the project developed however, incidents such as the O. J. Simpson case, the Menendez brothers case, the Tonya Harding/Nancy Kerrigan incident, the Rodney King incident, and the Federal assault of the Branch Davidian sect all took place. Stone came to feel that the media was heavily involved in the outcome of all of these cases, and that the media had become an all-pervasive entity which marketed violence and suffering for the good of ratings. As such, he changed the tone of the film from one of purely action to a "vicious, coldhearted farce" on the media. Coloring Stone's approach to the material, and contributing to the violent nature of the film, were the anger and sadness he felt at the breakdown of his second marriage. He also said in an interview that the film was influenced by the "vitality" of Indian cinema. Stone cast Woody Harrelson partly because, "frankly, he had that American, trashy look. There's something about Woody that evokes Kentucky or white trash." At the time, Harrelson was primarily known for his comedic performances, namely his role on the sitcom "Cheers", and Stone was compelled to cast him against type. Stone cast Lewis for similar reason, noting that, despite her success as portraying a defiled teenage daughter in "Cape Fear" (1991), he felt she could "pull off white trash, too. Juliette has malice in her eyes. She's got adorable eyes, but they jump and they gleam. I just felt they [were both] right. They didn't feel like they were upper-class people." Stone tried to convince Lewis to gain muscle mass for her role as Mallory so that she looked tougher, but she refused, saying she wanted the character to look like a pushover, not a bodybuilder. Robert Downey Jr. was cast as Wayne Gale, the reporter chronicling the Knoxes; Downey prepared for his role as reporter Wayne Gale by spending time with Australian TV shock-king Steve Dunleavy, and later convinced Stone to allow him to portray Gale with an Australian accent. 
Tom Sizemore was cast as Detective Jack Scagnetti, the psychotic police officer with murderous impulses himself, while Tommy Lee Jones was cast as Dwight McClusky, a prison warden who appears in the last act of the film. Rodney Dangerfield, primarily known as a stand-up comedian, portrayed Mallory's rapist father, and was allowed by Stone to rewrite all of his own character's lines. Principal photography took 56 days to shoot. Filming locations included the Rio Grande Gorge Bridge just west of Taos, New Mexico, where the wedding scene was filmed, and Stateville Correctional Center in Joliet, Illinois, where the prison riot was filmed. In Stateville, 80% of the prisoners are incarcerated for violent crimes. For the first two weeks on location at the prison, the extras were actual inmates with rubber weapons. For the subsequent two weeks, 200 extras were needed because the Stateville inmates were on lockdown. According to Tom Sizemore, during filming on the prison set, Stone would play African tribal music at full blast between takes to keep the frantic energy up. While shooting the POV scene wherein Mallory runs into the wire mesh, director of photography Robert Richardson broke his finger and the replacement cameraman cut his eye. According to Oliver Stone, he was not popular with the camera department on set that day. For the scenes involving rear projection, the projected footage was shot prior to principal photography, then edited together, and projected onto the stage, behind the live actors. For example, when Mallory drives past a building and flames are projected onto the wall, this was shot live using footage projected onto the facade of a real building. The famous Coca-Cola polar bear ad is seen twice during the film. According to Stone, Coca-Cola approved the use of the ad without having a full idea of what the film was about. When they saw the completed film, they were furious. "Natural Born Killers" was filmed and edited in a frenzied and psychedelic style, and features both color and black and white cinematography, as well as animation, and other unusual color schemes and visual compositions. Editing of the film lasted approximately 11 months, with the final film containing almost 3,000 cuts (most films have 600–700). The film also employs a wide range of camera angles, featuring Dutch tilts prominently throughout, with the camera rarely angling along a horizontal field of vision. Film scholar Robert Kolker notes that the Dutch angle's employment in the film is "the visual equivalent of a profound dislocation, a loss of object constancy, the slipperiness of subjectivity itself." Kolker comments that, unlike such films as "Bonnie and Clyde" from which "Natural Born Killers" draws influence, "from the very beginning...  the viewer is forced into a dual situation, neither one of which allows easy access to the main characters. One situation, continued throughout the film, is a kind of rhythmic attention created by a startling flow of images. Stone builds his visuals on unexpected linkages and disorienting juxtapositions within the shots and edits." Because the film is thematically preoccupied with media, Stone sought to implement visual elements of popular television into the film's visual tableau: "It had never quite been done before — a mixture of stocks and styles. I was influenced, I have to say, by MTV and some of the styles I saw in the early '80s and '90s on television. But no one had tried that style over the course of 90, 100 minutes." 
Commercials which were commonly on the air at the time of the film's release make brief, intermittent appearances as well. Concurrent with Stone's preoccupation with television as both a visual and thematic reference point, portions of the film are narrated through parodies of popular television series, including a sequence presented in the style of a sitcom about Mallory's dysfunctional family (titled "I Love Mallory"), a parody of "I Love Lucy". In the film's final montage, splices of real-life television news coverage of various criminal cases of the time are included, such as the O. J. Simpson case, the Menendez brothers, and the Tonya Harding/Nancy Kerrigan incident. Film scholar John Kenneth Muir notes this inclusion as an "exclamation point" concluding the film's thesis: "It seems to say, "Welcome to the tabloid-TV culture of America in the 1990, where crime pays and pays well."" The film's soundtrack was produced by Stone and Trent Reznor of Nine Inch Nails, who reportedly watched the film over 50 times to "get in the mood". Reznor reportedly produced the soundtrack while on tour. On his approach to compiling the soundtrack, Reznor told MTV: I suggested to Oliver [Stone] to try to turn the soundtrack into a collage-of-sound, kind of the way the movie used music: make edits, add dialog, and make it something interesting, rather than a bunch of previously released music. Some songs were written especially for the film or soundtrack, such as "Burn" by Nine Inch Nails. In its opening weekend, "Natural Born Killers" grossed a total of $11.2 million in 1,510 theaters, finishing 1st. It finished its theatrical run with a total gross of $50.3 million, against its $34 million budget. On review aggregator Rotten Tomatoes, the film has an approval rating of 46% based on 39 reviews, with an average rating of 5.63/10. The website's critical consensus reads, ""Natural Born Killers" explodes off the screen with style, but its satire is too blunt to offer any fresh insight into celebrity or crime -- pummeling the audience with depravity until the effect becomes deadening." On Metacritic, the film has an average weighted score of 74 out of 100, based on 20 critics, indicating "generally favorable reviews". Audiences polled by CinemaScore gave the film an average grade of "B–" on an A+ to F scale. Roger Ebert of the "Chicago Sun-Times" gave the film four stars out of four and wrote, "Seeing this movie once is not enough. The first time is for the visceral experience, the second time is for the meaning." On his television show, his partner Gene Siskel agreed with him, adding extra praise to the scene featuring Rodney Dangerfield. Other critics found the film unsuccessful in its aims. Hal Hinson of "The Washington Post" claimed that "Stone's sensibility is white-hot and personal. As much as he'd like us to believe that his camera is turned outward on the culture, it's vividly clear that he can't resist turning it inward on himself. This wouldn't be so troublesome if Stone didn't confuse the public and the private." Janet Maslin of "The New York Times" wrote, "for all its surface passions, "Natural Born Killers" never digs deep enough to touch the madness of such events, or even to send them up in any surprising way. Mr. Stone's vision is impassioned, alarming, visually inventive, characteristically overpowering. But it's no match for the awful truth." 
James Berardinelli gave the film a negative review but his criticism was different from many other such pans, which generally said that Oliver Stone was a hypocrite for making an ultra-violent film in the guise of a critique of American attitudes. Berardinelli noted that the movie "hits the bullseye" as a satire of America's lust for bloodshed, but repeated Stone's main point so often and so loudly that it became unbearable. At the 1994 Stinkers Bad Movie Awards, Harrelson was nominated for Worst Actor but lost to Bruce Willis for "Color of Night" and "North". The film was nominated for Worst Picture but lost to "North". "Natural Born Killers" was released on VHS in 1995 by Warner Home Video. A director's cut version of the film was released the following year on VHS by Vidmark/Lionsgate, who also released a non-anamorphic DVD of the director's cut in 2000. Distribution rights to Stone's director's cut reverted from Lionsgate to Warner Bros. in 2009, after which Warner issued an anamorphic DVD edition as well as a Blu-ray. After Quentin Tarantino attempted to publish his original screenplay to "Natural Born Killers" as a paperback book, as he had done with his scripts for "True Romance" and his own directorial efforts, "Reservoir Dogs" and "Pulp Fiction", the producers of "Natural Born Killers" filed a lawsuit against Tarantino, claiming that when he sold the script to them, he had forfeited the publishing rights; eventually, Tarantino was allowed to publish his original script. Tarantino disowned the film, saying, "I hated that fucking movie. If you like my stuff, don't watch that movie." When the film was first handed in to the MPAA, they told Stone they would give it an NC-17 unless he cut it. As such, Stone toned down the violence by cutting approximately four minutes of footage, and the MPAA re-rated the film as an R. In 1996, a Director's Cut was released on home video by Vidmark Entertainment and Pioneer Entertainment, as Warner Bros. wanted nothing to do with that particular version. Warner Home Video later released this cut on Blu-ray. The film was banned completely upon release in Ireland, including – controversially – from cinema clubs. The ban was later quietly lifted. In the UK, though the cinema release was delayed while the BBFC investigated reports that the film caused copycat murders in the USA and France, it was finally shown in cinemas in February 1995. The original intended UK home video release in March 1996 was cancelled due to the Dunblane massacre in Scotland. In the meantime, Channel Five showed the film in November 1997. It was finally released on video in July 2001. "Entertainment Weekly" ranked the film as the eighth most controversial film ever. From almost the moment of its release, the film has been accused of encouraging and inspiring numerous murderers in North America, including the Heath High School shooting and the Columbine High School massacre. The Columbine killers even code-named their attack: "NBK", an acronym for "Natural Born Killers".
https://en.wikipedia.org/wiki?curid=21180
Nancy Reagan Nancy Davis Reagan (born Anne Frances Robbins; July 6, 1921 – March 6, 2016) was an American film actress and the second wife of Ronald Reagan, the 40th president of the United States. She was the first lady of the United States from 1981 to 1989. She was born in New York City. After her parents separated, she lived in Maryland with an aunt and uncle for six years. When her mother remarried in 1929, she moved to Chicago and later took the name Davis from her stepfather. As Nancy Davis, she was a Hollywood actress in the 1940s and 1950s, starring in films such as "The Next Voice You Hear...", "Night into Morning", and "Donovan's Brain". In 1952, she married Ronald Reagan, who was then president of the Screen Actors Guild. They had two children together. Reagan was the first lady of California when her husband was governor from 1967 to 1975, and she began to work with the Foster Grandparents Program. Reagan became First Lady of the United States in January 1981, following her husband's victory in the 1980 presidential election. Early in his first term, she was criticized largely due to her decision to replace the White House china, which had been paid for by private donations. She championed recreational drug prevention causes when she founded the "Just Say No" drug awareness campaign, which was considered her major initiative as First Lady. More discussion of her role ensued following a 1988 revelation that she had consulted an astrologer to assist in planning the president's schedule after the attempted assassination of her husband in 1981. She generally had a strong influence on her husband and played a role in a few of his personnel and diplomatic decisions. After Ronald Reagan's term as president ended, the couple returned to their home in Bel Air, Los Angeles, California. Nancy devoted most of her time to caring for her husband, who was diagnosed with Alzheimer's disease in 1994, until his death at the age of 93 on June 5, 2004. Reagan remained active within the Reagan Library and in politics, particularly in support of embryonic stem cell research, until her death from congestive heart failure at age 94 on March 6, 2016. Anne Frances Robbins was born on July 6, 1921, at Sloane Hospital for Women in Midtown Manhattan. She was the only child of Kenneth Seymour Robbins (1892–1972), a farmer turned car salesman who had been born into a once-prosperous family, and his actress wife, Edith Prescott Luckett (1888–1987). Her godmother was silent-film-star Alla Nazimova. From birth, she was commonly called Nancy. She lived her first two years in Flushing, Queens, a borough of New York City, in a two-story house on Roosevelt Avenue between 149th and 150th Streets. Her parents separated soon after her birth and were divorced in 1928. After their separation, her mother traveled the country to pursue acting jobs and Robbins was raised in Bethesda, Maryland, for six years by her aunt, Virginia Luckett, and uncle, Audley Gailbraith, where she attended Sidwell Friends School for kindergarten through second grade. Nancy later described longing for her mother during those years: "My favorite times were when Mother had a job in New York, and Aunt Virgie would take me by train to stay with her." In 1929, her mother married Loyal Edward Davis (1896–1982), a prominent conservative neurosurgeon who moved the family to Chicago. Nancy and her stepfather got along very well; she later wrote that he was "a man of great integrity who exemplified old-fashioned values." 
He formally adopted her in 1938, and she would always refer to him as her father. At the time of the adoption, her name was legally changed to Nancy Davis. She attended the Girls' Latin School of Chicago (describing herself as an average student), from 1929, until she graduated in 1939, and later attended Smith College in Massachusetts, where she majored in English and drama, graduating in 1943. In 1940, a young Davis had appeared as a National Foundation for Infantile Paralysis volunteer in a memorable short subject film shown in movie theaters to raise donations for the crusade against polio. "The Crippler" featured a sinister figure spreading over playgrounds and farms, laughing over its victims, until finally dispelled by the volunteer. It was very effective in raising contributions. Following her graduation from college, Davis held jobs in Chicago as a sales clerk in Marshall Field's department store and as a nurse's aide. With the help of her mother's colleagues in theatre, including ZaSu Pitts, Walter Huston, and Spencer Tracy, she pursued a professional career as an actress. She first gained a part in Pitts' 1945 road tour of "Ramshackle Inn", moving to New York City. She landed the role of Si-Tchun, a lady-in-waiting, in the 1946 Broadway musical about the Orient, "Lute Song", starring Mary Martin and a pre-fame Yul Brynner. The show's producer told her, "You look like you could be Chinese." After passing a screen test, she moved to California and signed a seven-year contract with Metro-Goldwyn-Mayer Studios Inc. (MGM) in 1949; she later remarked, "Joining Metro was like walking into a dream world." Her combination of attractive appearance—centered on her large eyes—and somewhat distant and understated manner made her hard at first for MGM to cast and publicize. Davis appeared in eleven feature films, usually typecast as a "loyal housewife", "responsible young mother", or "the steady woman". Jane Powell, Debbie Reynolds, Leslie Caron, and Janet Leigh were among the actresses with whom she competed for roles at MGM. Davis' film career began with small supporting roles in two films that were released in 1949, "The Doctor and the Girl" with Glenn Ford and "East Side, West Side" starring Barbara Stanwyck. She played a child psychiatrist in the film noir "Shadow on the Wall" (1950) with Ann Sothern and Zachary Scott; her performance was called "beautiful and convincing" by "New York Times" critic A. H. Weiler. She co-starred in 1950's "The Next Voice You Hear...", playing a pregnant housewife who hears the voice of God from her radio. Influential reviewer Bosley Crowther of "The New York Times" wrote that "Nancy Davis [is] delightful as [a] gentle, plain, and understanding wife." In 1951, Davis appeared in "Night into Morning", her favorite screen role, a study of bereavement starring Ray Milland. Crowther said that Davis "does nicely as the fiancée who is widowed herself and knows the loneliness of grief," while another noted critic, "The Washington Post"'s Richard L. Coe, said Davis "is splendid as the understanding widow." MGM released Davis from her contract in 1952; she sought a broader range of parts, but also married Reagan, keeping her professional name as Davis, and had her first child that year. She soon starred in the science fiction film "Donovan's Brain" (1953); Crowther said that Davis, playing the role of a possessed scientist's "sadly baffled wife," "walked through it all in stark confusion" in an "utterly silly" film. 
In her next-to-last movie, "Hellcats of the Navy" (1957), she played nurse Lieutenant Helen Blair, and appeared in a film for the only time with her husband, playing what one critic called "a housewife who came along for the ride." Another reviewer, however, stated that Davis plays her part satisfactorily, and "does well with what she has to work with." Author Garry Wills has said that Davis was generally underrated as an actress because her constrained part in "Hellcats" was her most widely seen performance. In addition, Davis downplayed her Hollywood goals: promotional material from MGM in 1949 said that her "greatest ambition" was to have a "successful happy marriage"; decades later, in 1975, she would say, "I was never really a career woman but [became one] only because I hadn't found the man I wanted to marry. I couldn't sit around and do nothing, so I became an actress." Ronald Reagan biographer Lou Cannon nevertheless characterized her as a "reliable" and "solid" performer who held her own in performances with better-known actors. After her final film, "Crash Landing" (1958), Davis appeared for a brief time as a guest star in television dramas, such as the "Zane Grey Theatre" episode "The Long Shadow" (1961), where she played opposite Ronald Reagan, as well as "Wagon Train" and "The Tall Man", until she retired as an actress in 1962. During her career, Davis served for nearly ten years on the board of directors of the Screen Actors Guild. Decades later, Albert Brooks attempted to coax her out of acting retirement by offering her the title role opposite himself in his 1996 film "Mother". She declined in order to care for her husband, and Debbie Reynolds played the part. During her Hollywood career, Davis dated many actors, including Clark Gable, Robert Stack, and Peter Lawford; she later called Gable the nicest of the stars she had met. On November 15, 1949, she met Ronald Reagan, who was then president of the Screen Actors Guild. She had noticed that her name had appeared on the Hollywood blacklist. Davis sought Reagan's help to maintain her employment as a guild actress in Hollywood and for assistance in having her name removed from the list. Ronald Reagan informed her that she had been confused with another actress of the same name. The two began dating and their relationship was the subject of many gossip columns; one Hollywood press account described their nightclub-free times together as "the romance of a couple who have no vices". Ronald Reagan was skeptical about marriage, however, following his painful 1949 divorce from Jane Wyman, and he still saw other women. After three years of dating, they eventually decided to marry while discussing the issue in the couple's favorite booth at Chasen's, a restaurant in Beverly Hills. The couple wed on March 4, 1952, at the Little Brown Church in the San Fernando Valley of Los Angeles, in a simple and hastily arranged ceremony designed to avoid the press; the marriage was her first and his second. The only people in attendance were fellow actor William Holden (the best man) and his wife, actress Brenda Marshall (the matron of honor). Nancy was likely already pregnant during the ceremony; the couple's first child, Patricia Ann Reagan (later better known by her professional name, Patti Davis), was born less than eight months later on October 21, 1952. Their son, Ronald Prescott Reagan (later better known as Ron Reagan) was born six years later on May 20, 1958. 
Reagan also became stepmother to Maureen Reagan (1941–2001) and Michael Reagan (b. 1945), her husband's children from his first marriage to Jane Wyman. Observers described Nancy and Ronald's relationship as intimate. As president and first lady, the Reagans were reported to display their affection frequently, with one press secretary noting, "They never took each other for granted. They never stopped courting." Ronald often called Nancy "Mommy"; she called him "Ronnie". While the president was recuperating in the hospital after the 1981 assassination attempt, Nancy wrote in her diary, "Nothing can happen to my Ronnie. My life would be over." In a letter to Nancy, Ronald wrote, "whatever I treasure and enjoy ... all would be without meaning if I didn't have you." In 1998, a few years after her husband had been given a diagnosis of Alzheimer's disease, Nancy told "Vanity Fair", "Our relationship is very special. We were very much in love and still are. When I say my life began with Ronnie, well, it's true. It did. I can't imagine life without him." Nancy was known for the focused and attentive look, termed "the Gaze", that she fastened upon her husband during his speeches and appearances. President Reagan's death in June 2004 ended what Charlton Heston called "the greatest love affair in the history of the American Presidency." Nancy's relationship with her children was not always as close as the bond with her husband. She frequently quarreled with her children and her stepchildren. Her relationship with Patti was the most contentious; Patti flouted American conservatism, rebelled against her parents by joining the nuclear freeze movement, and authored many anti-Reagan books. The nearly 20 years of family feuding left Patti very much estranged from both her mother and father. Soon after her father's Alzheimer's disease was diagnosed, Patti and her mother reconciled and began to speak on a daily basis. Nancy's disagreements with Michael were also public matters; in 1984, she was quoted as saying that the two were in an "estrangement right now". Michael responded that Nancy was trying to cover up for the fact she had not met his daughter, Ashley, who had been born nearly a year earlier. They too eventually made peace. Nancy was thought to be closest to her stepdaughter Maureen during the White House years, but each of the Reagan children experienced periods of estrangement from their parents. Nancy Reagan was First Lady of California during her husband's two terms as governor. She disliked living in the state capital of Sacramento, which lacked the excitement, social life, and mild climate to which she was accustomed in Los Angeles. She first attracted controversy early in 1967; after four months' residence in the California Governor's Mansion in Sacramento, she moved her family into a wealthy suburb because fire officials had labelled the mansion as a "firetrap." Though the Reagans had leased the new house at their expense, the move was viewed as snobbish when the matter was brought to the attention of the general public. Reagan defended her actions as being for the good of her family, a judgment with which her husband readily agreed. Friends of the family later helped support the cost of the leased house, while Reagan supervised construction of a new ranch-style governor's residence in nearby Carmichael. The new residence was finished just as Ronald Reagan left office in 1975, but his successor, Jerry Brown, refused to live there. 
It was sold in 1982, and California governors lived in improvised arrangements until Brown moved into the Governor's Mansion in 2015. In 1967, Governor Reagan appointed his wife to the California Arts Commission, and a year later she was named "Los Angeles Times"' Woman of the Year; in its profile, the "Times" labeled her "A Model First Lady". Her glamour, style, and youthfulness made her a frequent subject for press photographers. As first lady, Reagan visited veterans, the elderly, and the handicapped, and worked with a number of charities. She became involved with the Foster Grandparents Program, helping to popularize it in the United States and Australia. She later expanded her work with the organization after arriving in Washington, and wrote about her experiences in her 1982 book "To Love a Child". The Reagans held dinners for former POWs and Vietnam War veterans during their years as governor and first lady of California. Ronald Reagan's time as governor ended in 1975, and he did not run for a third term; instead, he met with advisors to discuss a possible bid for the presidency in 1976, challenging incumbent President Gerald Ford. Ronald still needed to convince a reluctant Nancy before running, however. She feared for her husband's health and his career as a whole, though she felt that he was the right man for the job and eventually approved. Nancy took on a traditional role in the campaign, holding coffees, luncheons, and talks. She also oversaw personnel, monitored her husband's schedule, and occasionally gave press conferences. The 1976 campaign included the so-called "battle of the queens", contrasting Nancy with First Lady Betty Ford. Both women spoke out over the course of the campaign on similar issues, but with different approaches. Nancy was upset by the warmonger image that the Ford campaign had drawn of her husband. Though he lost the 1976 Republican nomination, Ronald Reagan ran for the presidency a second time in 1980. He succeeded in winning the nomination and defeated incumbent President Jimmy Carter in a landslide. During this second campaign, Nancy played a prominent role, and her management of staff became more apparent. She organized a meeting among her husband and the feuding campaign managers John Sears and Michael Deaver, which resulted in Deaver leaving the campaign and Sears being given full control. After the Reagan camp lost the Iowa caucuses and fell behind in New Hampshire polls, Nancy organized a second meeting and decided it was time to fire Sears and his associates; she gave Sears a copy of the press release announcing his dismissal. Her influence on her husband became particularly notable; her presence at rallies, luncheons, and receptions increased his confidence. Reagan became first lady of the United States when Ronald Reagan was inaugurated as president in January 1981. Early in her husband's presidency, Reagan stated her desire to create a more suitable "first home" in the White House, as the building had fallen into a state of disrepair following years of neglect. White House aide Michael Deaver described the second- and third-floor family residence as having "cracked plaster walls, chipped paint [and] beaten up floors"; rather than use government funds to renovate and redecorate, she sought private donations. In 1981, Reagan directed a major renovation of several White House rooms, including all of the second and third floors and rooms adjacent to the Oval Office, as well as the press briefing room.
The renovation included repainting walls, refinishing floors, repairing fireplaces, and replacing antique pipes, windows, and wires. The closet in the master bedroom was converted into a beauty parlor and dressing room, and the west bedroom was made into a small gymnasium. The first lady secured the assistance of renowned interior designer Ted Graber, popular with affluent West Coast social figures, to redecorate the family living quarters. A Chinese-pattern, hand-painted wallpaper was added to the master bedroom. Family furniture was placed in the president's private study. The first lady and her designer retrieved a number of White House antiques, which had been in storage, and placed them throughout the mansion. In addition, many of Reagan's own collectibles were put out for display, including around twenty-five Limoges boxes, as well as some porcelain eggs and a collection of plates. The extensive redecoration was paid for by private donations. Many significant and long-lasting changes resulted from the renovation and refurbishment, of which Reagan said, "This house belongs to all Americans, and I want it to be something of which they can be proud." The renovations received some criticism for being funded by tax-deductible donations, meaning that some of the cost ultimately came, indirectly, from the tax-paying public. Reagan's interest in fashion was another of her trademarks. While her husband was still president-elect, press reports speculated about Reagan's social life and interest in fashion. In many press accounts, Reagan's sense of style was favorably compared to that of a previous first lady, Jacqueline Kennedy. Friends and those close to her remarked that, while fashionable like Kennedy, she would be different from other first ladies; close friend Harriet Deutsch was quoted as saying, "Nancy has her own imprint." White House photographer Mary Anne Fackelman-Miner, who was assigned to Reagan, said of her, "She always photographed so easily and was at ease in front of the cameras." Reagan's wardrobe consisted of dresses, gowns, and suits made by luxury designers, including James Galanos, Bill Blass, and Oscar de la Renta. Her white, hand-beaded, one-shoulder Galanos gown for the 1981 inauguration was estimated to cost $10,000, while the overall price of her inaugural wardrobe was said to be $25,000. She favored the color red, calling it "a picker-upper", and wore it frequently. Her wardrobe included red so often that the fire-engine shade became known as "Reagan red". She employed two private hairdressers, who styled her hair on a regular basis in the White House. Fashion designers were pleased with the emphasis Reagan placed on clothing. Adolfo said the first lady embodied an "elegant, affluent, well-bred, chic American look", while Bill Blass commented, "I don't think there's been anyone in the White House since Jacqueline Kennedy Onassis who has her flair." William Fine, president of cosmetic company Frances Denney, noted that she "stays in style, but she doesn't become trendy." Though her elegant fashions and wardrobe were hailed as a "glamorous paragon of chic", they were also controversial subjects. In 1982, she revealed that she had accepted thousands of dollars in clothing, jewelry, and other gifts, but defended her actions by stating that the clothes were borrowed and would either be returned or donated to museums, and that she was promoting the American fashion industry. Facing criticism, she soon said she would no longer accept such loans.
While often buying her own clothes, she continued to borrow, and sometimes keep, designer clothes throughout her time as first lady, which came to light in 1988. None of this had been included on financial disclosure forms; the non-reporting of loans under $10,000 in liability violated a voluntary agreement the White House had made in 1982, while not reporting more valuable loans, or clothes not returned, was a possible violation of the Ethics in Government Act. Reagan expressed through her press secretary "regrets that she failed to heed counsel's advice" on disclosing them. Despite the controversy, many designers who allowed her to borrow clothing noted that the arrangement was good for their businesses, as well as for the American fashion industry overall. In 1989, Reagan was honored at the annual gala awards dinner of the Council of Fashion Designers of America, during which she received the council's lifetime achievement award. Barbara Walters said of her, "She has served every day for eight long years the word 'style.'" Approximately a year into her husband's first term, Nancy explored the idea of ordering a new state china service for the White House. A full china service had not been purchased since the Truman administration in the 1940s, and only a partial service was ordered during the Johnson administration. She was quoted as saying, "The White House really badly, badly needs china." Working with Lenox, the primary porcelain manufacturer in America, the first lady chose a design of a scarlet red border with an etched gold band on cream-colored ivory plates, with a raised presidential seal etched in gold in the center. The full service comprised 4,370 pieces, with 19 pieces per individual set, and totaled $209,508. Although it was paid for by private donations, some from the private J. P. Knapp Foundation, the purchase generated considerable controversy, for it was ordered at a time when the nation was undergoing an economic recession. Furthermore, news of the china purchase emerged at the same time that her husband's administration had proposed school lunch regulations that would allow ketchup to be counted as a vegetable. The new china, White House renovations, expensive clothing, and her attendance at the wedding of Charles and Diana, Prince and Princess of Wales, gave her an aura of being "out of touch" with the American people during the recession. This built upon the reputation she had brought to Washington, where many people had concluded that Reagan was a vain and shallow woman, and her taste for splendor inspired the derogatory nickname "Queen Nancy". While Jacqueline Kennedy had also faced some press criticism for her spending habits, Reagan's treatment was much more consistent and negative. In an attempt to deflect the criticism, she self-deprecatingly donned a bag-lady costume at the 1982 Gridiron Dinner and sang "Second-Hand Clothes", mimicking the song "Second-Hand Rose". The skit helped to restore her reputation. Reagan reflected on the criticisms in her 1989 autobiography, "My Turn". She described lunching with former Democratic National Committee chairman Robert S. Strauss, during which Strauss said to her, "When you first came to town, Nancy, I didn't like you at all. But after I got to know you, I changed my mind and said, 'She's some broad!'" Reagan responded, "Bob, based on the press reports I read then, I wouldn't have liked me either!"
After the presidency of Jimmy Carter (who dramatically reduced the formality of presidential functions), Reagan brought a Kennedy-esque glamour back into the White House. She hosted 56 state dinners over eight years. She remarked that hosting the dinners was "the easiest thing in the world. You don't have to do anything. Just have a good time and do a little business. And that's the way Washington works." The White House residence staff found Reagan demanding to work for during the preparation for state dinners: the first lady oversaw every aspect of meal presentation, sometimes requesting one dessert after another be prepared before finally settling on one she approved of. In general, her desire for everything to appear just right in the White House led the residence staff to consider her difficult to work for, with tirades following what she perceived as mistakes. One staffer later recalled, "I remember hearing her call for her personal maid one day and it scared the dickens out of me—just her tone. I never wanted to be on the wrong side of her." She did show loyalty and respect to a number of the staff. In particular, she came to the public defense of a maid who was indicted on charges of helping to smuggle ammunition to Paraguay, providing an affidavit attesting to the maid's good character (even though it was politically inopportune to do so at the time of the Iran–Contra affair); charges were subsequently dropped, and the maid returned to work at the White House. In 1987, Mikhail Gorbachev became the first Soviet leader to visit Washington, D.C. since Nikita Khrushchev made the trip in 1959 at the height of the Cold War. Nancy was in charge of planning and hosting the important and highly anticipated state dinner, with the goal of impressing both the Soviet leader and especially his wife Raisa Gorbacheva. After the meal, she recruited pianist Van Cliburn to play a rendition of "Moscow Nights" for the Soviet delegation, whereupon Mikhail and Raisa broke into song. Secretary of State George P. Shultz later commented on the evening, saying "We felt the ice of the Cold War crumbling." Reagan concluded, "It was a perfect ending for one of the great evenings of my husband's presidency." The first lady launched the "Just Say No" drug awareness campaign, her primary project and major initiative as first lady, in 1982. Reagan first became aware of the need to educate young people about drugs during a 1980 campaign stop at Daytop Village, New York. She remarked in 1981 that "Understanding what drugs can do to your children, understanding peer pressure and understanding why they turn to drugs is ... the first step in solving the problem." Her campaign focused on drug education and informing the youth of the danger of drug abuse. In 1982, Reagan was asked by a schoolgirl what to do when offered drugs; Reagan responded: "Just say no." The phrase proliferated in the popular culture of the 1980s, and was eventually adopted as the name of club organizations and school anti-drug programs. Reagan became actively involved, traveling throughout the United States and several other nations, visiting drug abuse prevention programs and drug rehabilitation centers. She also appeared on television talk shows, recorded public service announcements, and wrote guest articles.
She appeared in single episodes of the television drama "Dynasty" and the sitcom "Diff'rent Strokes" to underscore support for the "Just Say No" campaign, and in a rock music video, "Stop the Madness" (1985). In 1985, Reagan expanded the campaign to an international level by inviting the first ladies of various nations to the White House for a conference on drug abuse. On October 27, 1986, President Reagan signed a drug enforcement bill into law, which granted $1.7 billion in funding to fight the perceived crisis and established mandatory minimum penalties for drug offenses. Although the bill was criticized, Reagan considered it a personal victory. In 1988, she became the first first lady invited to address the United Nations General Assembly, where she spoke on international drug interdiction and trafficking laws. Critics of Reagan's efforts questioned their purpose, labeled her approach to promoting drug awareness as simplistic, and argued that the program did not address many social issues associated with drug use, including unemployment, poverty, and family dissolution. Reagan assumed the role of unofficial "protector" for her husband after the 1981 attempt on his life. On March 30 of that year, President Reagan and three others were shot by a troubled 25-year-old, John Hinckley Jr., as they left the Washington Hilton hotel. Nancy was alerted and arrived at George Washington University Hospital, where the President was being treated. She recalled having seen "emergency rooms before, but I had never seen one like this – with my husband in it." She was escorted into a waiting room, and when granted access to see her husband, he quipped to her, "Honey, I forgot to duck", borrowing the defeated boxer Jack Dempsey's jest to his wife. An early example of the first lady's protective nature occurred when Senator Strom Thurmond entered the President's hospital room that day in March, getting past the Secret Service detail by claiming he was the President's "close friend", presumably to attract media attention. Nancy was outraged and demanded he leave. While the President recuperated in the hospital, the first lady slept with one of his shirts to be comforted by the scent. When Ronald Reagan was released from the hospital on April 12, she escorted him back to the White House. Press accounts framed Reagan as her husband's "chief protector", an extension of their earlier framing of her as a helpmate and a Cold War domestic ideal. As it happened, the day after her husband was shot, Reagan fell off a chair while trying to take down a picture to bring to him in the hospital; she suffered several broken ribs, but was determined not to reveal the injury publicly. During the Reagan administration, Nancy Reagan consulted a San Francisco astrologer, Joan Quigley, who provided advice on which days and times would be optimal for the president's safety and success. White House Chief of Staff Donald Regan grew frustrated with this regimen, which created friction between him and the first lady. This friction escalated with the revelation of the Iran–Contra affair, an administration scandal in which the first lady felt Regan was damaging the president. She thought he should resign, and expressed this to her husband, although he did not share her view. Regan wanted President Reagan to address the Iran–Contra matter in early 1987 by means of a press conference, but the first lady refused to allow her husband to overexert himself, citing his recent prostate surgery and astrological warnings.
Relations deteriorated to the point that Regan hung up on her during a 1987 telephone conversation. According to the recollections of ABC News correspondent Sam Donaldson, when the President heard of this treatment, he demanded—and eventually received—Regan's resignation. Vice President George H. W. Bush is also reported to have suggested to her that Regan be fired. In his 1988 memoir, "For the Record: From Wall Street to Washington", Regan wrote about Nancy Reagan's consultations with an astrologer, and further claimed that Quigley had selected the date of the 1985 Geneva Summit. For her part, Quigley stated in 1998 that she had "absolutely nothing" to do with arranging the summit and added that others were "overemphasizing" her role; however, in 1990, she released a book in which she asserted that she was "in charge" of the President's scheduling during the Reagan administration. Reagan acknowledged in her memoirs that she altered the President's schedule without his knowledge based on astrological advice, but argued that "no political decision was ever based [on astrology]". She added, "Astrology was simply one of the ways I coped with the fear I felt after my husband almost died ... Was astrology one of the reasons [further attempts did not occur]? I don't really believe it was, but I don't really believe it wasn't." Nancy Reagan wielded a powerful influence over President Reagan. In her memoirs, Reagan stated, "I felt panicky every time [Ronald Reagan] left the White House". Following the assassination attempt, she strictly controlled access to the president; occasionally, she even attempted to influence her husband's decision making. Beginning in 1985, she strongly encouraged her husband to hold "summit" conferences with Soviet general secretary Mikhail Gorbachev, and suggested they form a personal relationship beforehand. The two leaders went on to develop a productive relationship through their summit negotiations. The relationship between Nancy Reagan and Raisa Gorbacheva was anything but the friendly, diplomatic one between their husbands; Reagan found Gorbacheva hard to converse with, and their relationship was described as "frosty". The two women usually had tea and discussed differences between the USSR and the United States. Visiting the United States for the first time in 1987, Gorbacheva irked Reagan with lectures on subjects ranging from architecture to socialism, reportedly prompting the American president's wife to quip, "Who does that dame think she is?" Press framing of Reagan changed from that of just helpmate and protector to someone with hidden power. As the image of her as a political interloper grew, she sought to explicitly deny that she was the power behind the throne. At the end of her time as first lady, however, she said that her husband had not been well served by his staff. She acknowledged her role in influencing him on personnel decisions, saying, "In no way do I apologize for it." She wrote in her memoirs, "I don't think I was as bad, or as extreme in my power or my weakness, as I was depicted," but went on, "However the first lady fits in, she has a unique and important role to play in looking after her husband. And it's only natural that she'll let him know what she thinks. I always did that for Ronnie, and I always will." In October 1987, a mammogram detected a lesion in Reagan's left breast, and she was subsequently diagnosed with breast cancer.
She chose to undergo a mastectomy rather than a lumpectomy, and the breast was removed on October 17, 1987. Ten days after the operation, her mother, Edith Luckett Davis, died in Phoenix, Arizona, leading Reagan to dub the period "a terrible month". After the surgery, more women across the country had mammograms, illustrating the influence that the first lady possessed. Though Reagan was a controversial first lady, 56 percent of Americans had a favorable opinion of her when her husband left office on January 20, 1989, with 18 percent having an unfavorable opinion and the balance not giving an opinion. Compared with other first ladies at the time their husbands left office, Reagan's approval was higher than that of Rosalynn Carter or Hillary Clinton. However, she was less popular than Barbara Bush, and her disapproval rating was double Carter's. Upon leaving the White House, the couple returned to California, where they purchased a home in the wealthy East Gate Old Bel Air neighborhood of Bel Air, Los Angeles, dividing their time between Bel Air and the Reagan Ranch in Santa Barbara, California. Ronald and Nancy also regularly attended the Bel Air Church. After leaving Washington, Reagan made numerous public appearances, many on behalf of her husband. She continued to reside at the Bel Air home, where she lived with her husband until he died on June 5, 2004. In late 1989, the former first lady established the Nancy Reagan Foundation, which aimed to continue to educate people about the dangers of substance abuse. The Foundation teamed with the BEST Foundation For A Drug-Free Tomorrow in 1994, and developed the Nancy Reagan Afterschool Program. She continued to travel around the United States, speaking out against drug and alcohol abuse. Her memoirs, "My Turn: The Memoirs of Nancy Reagan" (1989), are an account of her life in the White House, commenting openly on her influence within the Reagan administration and discussing the myths and controversies that surrounded the couple. In 1991, the author Kitty Kelley wrote an unauthorized and largely uncited biography of Reagan, repeating accounts of a poor relationship with her children and introducing rumors of alleged sexual relations with singer Frank Sinatra. A wide range of sources commented that Kelley's largely unsupported claims were most likely false. In 1989, the Internal Revenue Service (IRS) began investigating the Reagans over allegations that they owed additional tax on the gifts and loans of high-fashion clothes and jewelry to the first lady during their time in the White House (recipients benefiting from the display of such items recognize taxable income even if the items are returned). In 1992, the IRS determined the Reagans had failed to include some $3 million worth of fashion items between 1983 and 1988 on their tax returns; they were billed for a large amount of back taxes and interest, which was subsequently paid. After President Reagan revealed that he had been diagnosed with Alzheimer's disease in 1994, she became his primary caregiver and grew actively involved with the National Alzheimer's Association and its affiliate, the Ronald and Nancy Reagan Research Institute in Chicago, Illinois. In April 1997, Nancy Reagan joined President Bill Clinton and former Presidents Ford and Bush in signing the Summit Declaration of Commitment, which advocated participation by private citizens in solving domestic issues within the United States.
Nancy Reagan was awarded the Presidential Medal of Freedom, the nation's highest civilian honor, by President George W. Bush on July 9, 2002. President Reagan had received his own Presidential Medal of Freedom in January 1993. Reagan and her husband were jointly awarded the Congressional Gold Medal on May 16, 2002, at the United States Capitol building, and were only the third president and first lady to receive it; she accepted the medal on behalf of both of them. Ronald Reagan died in their Bel Air home on June 5, 2004. During the seven-day state funeral, Nancy, accompanied by her children and a military escort, led the nation in mourning. She maintained a strong composure, traveling from her home to the Reagan Library for a memorial service, then to Washington, D.C., where her husband's body lay in state for 34 hours prior to a national funeral service at the Washington National Cathedral. She returned to the library in Simi Valley for a sunset memorial service and interment, where, overcome with emotion, she lost her composure and cried in public for the first time during the week. After receiving the folded flag, she kissed the casket and mouthed "I love you" before leaving. During the week, CNN journalist Wolf Blitzer said, "She's a very, very strong woman, even though she looks frail." She had directed the detailed planning of the funeral, which included scheduling all the major events and asking former President George H. W. Bush, as well as former British Prime Minister Margaret Thatcher, former Soviet leader Mikhail Gorbachev, and former Canadian Prime Minister Brian Mulroney, to speak during the National Cathedral service. She paid very close attention to the details, as she had throughout her husband's life. Betsy Bloomingdale, one of Reagan's closest friends, stated, "She looks a little frail. But she is very strong inside. She is. She has the strength. She is doing her last thing for Ronnie. And she is going to get it right." The funeral marked her first major public appearance since she delivered a speech to the 1996 Republican National Convention on her husband's behalf. The funeral had a great impact on her public image. Following substantial criticism during her tenure as first lady, she came to be seen as something of a national heroine, praised by many for supporting and caring for her husband while he suffered from Alzheimer's disease. "U.S. News & World Report" opined, "after a decade in the shadows, a different, softer Nancy Reagan emerged." Following her husband's death, Reagan remained active in politics, particularly relating to stem cell research. Beginning in 2004, she favored what many considered to be the Democratic Party's position, urging President George W. Bush to support federally funded embryonic stem cell research in the hope that this science could lead to a cure for Alzheimer's disease. Although she failed to change the president's position, she did support his campaign for a second term. In 2005, Reagan was honored at a gala dinner at the Ronald Reagan Building in Washington, D.C., where guests included Dick Cheney, Harry Reid, and Condoleezza Rice. In 2007, she attended the national funeral service for Gerald Ford in the Washington National Cathedral. Reagan hosted two debates among the Republican candidates for the 2008 presidential nomination at the Reagan Presidential Library, the first in May 2007 and the second in January 2008.
On March 25, 2008, she formally endorsed Senator John McCain, then the presumptive Republican Party nominee for president, but McCain would go on to lose the election to Barack Obama. Reagan attended the funeral of Lady Bird Johnson in Austin, Texas, on July 14, 2007, and three days later accepted the highest Polish distinction, the Order of the White Eagle, on behalf of Ronald Reagan at the Reagan Library. The Reagan Library opened the temporary exhibit "Nancy Reagan: A First Lady's Style", which displayed over eighty designer dresses belonging to her. Reagan's health and well-being became a prominent concern in 2008. In February, she suffered a fall at her Bel Air home and was taken to Saint John's Health Center in Santa Monica, California. Doctors reported that she did not break her hip as feared, and she was released from the hospital two days later. News commentators noted that Reagan's step had slowed significantly; the following month she walked in very slow strides alongside John McCain. In October 2008, Reagan was admitted to Ronald Reagan UCLA Medical Center after falling at home. Doctors determined that the 87-year-old had fractured her pelvis and sacrum, and could recuperate at home with a regimen of physical therapy. Her mishap prompted the publication of medical articles containing information on how to prevent falls. In January 2009, Reagan was said to be "improving every day and starting to get out more and more." In March 2009, she praised President Barack Obama for reversing the ban on federally funded embryonic stem cell research. She traveled to Washington, D.C. in June 2009 to unveil a statue of her late husband in the Capitol rotunda. She was also on hand as President Obama signed the Ronald Reagan Centennial Commission Act, and lunched privately with Michelle Obama. Reagan revealed in an interview with "Vanity Fair" that Michelle Obama had telephoned her for advice on living and entertaining in the White House. Following the death of Senator Ted Kennedy in August 2009, she said she was "terribly saddened ... Given our political differences, people are sometimes surprised how close Ronnie and I have been to the Kennedy family ... I will miss him." She attended the funeral of Betty Ford in Rancho Mirage, California, on July 12, 2011. Reagan hosted a debate among the candidates for the 2012 Republican presidential nomination at the Reagan Presidential Library on September 7, 2011. She suffered a fall in March 2012, breaking several ribs; the injury prevented her from attending a speech given by Paul Ryan at the Reagan Presidential Library two months later, in May 2012. She endorsed Republican presidential candidate Mitt Romney on May 31, 2012, explaining that her husband would have liked Romney's business background and what she called "strong principles". Following the death of former British Prime Minister Margaret Thatcher in April 2013, she stated, "The world has lost a true champion of freedom and democracy ... Ronnie and I knew her as a dear and trusted friend, and I will miss her." On March 6, 2016, Reagan died of congestive heart failure at the age of 94. On March 7, President Barack Obama issued a presidential proclamation ordering the flag of the United States to be flown at half-staff until sunset on the day of Reagan's interment. Her funeral was held on March 11 at the Ronald Reagan Presidential Library in Simi Valley, California. Representatives from ten first families were in attendance, including former president George W.
Bush and first ladies Michelle Obama, Laura Bush, Hillary Clinton, and Rosalynn Carter. The other representatives were presidential children Steven Ford, Tricia Nixon Cox, Luci Baines Johnson, and Caroline Kennedy, and presidential grandchild Anne Eisenhower Flottl. Other prominent individuals in attendance included California governor Jerry Brown and former governors Arnold Schwarzenegger and Pete Wilson, former House speakers Nancy Pelosi and Newt Gingrich, and former members of the Reagan administration, including George P. Shultz and Edwin Meese. A sizable contingent from the Hollywood entertainment industry attended as well, including Mr. T, Maria Shriver (Schwarzenegger's then-wife), Wayne Newton, Johnny Mathis, Anjelica Huston, John Stamos, Tom Selleck, Bo Derek, and Melissa Rivers. In all there were some 1,000 guests. Eulogies were given by former prime minister of Canada Brian Mulroney, former secretary of state James Baker, Diane Sawyer, Tom Brokaw, and her children Patti Davis and Ron Reagan. After the funeral, Nancy Reagan was interred next to her husband. As noted earlier, Nancy Reagan was awarded both the Presidential Medal of Freedom and the Congressional Gold Medal in 2002. In 1989, she received the Council of Fashion Designers of America's lifetime achievement award. As first lady, Nancy Reagan received an honorary Doctor of Laws degree from Pepperdine University in 1983. Later, in 2009, she received an honorary Doctor of Humane Letters degree from Eureka College in Illinois, her husband's alma mater. As Nancy Davis, she also made a number of television appearances from 1953 to 1962, as a guest star in dramatic shows or installments of anthology series. These included "Ford Television Theatre" (her first appearance with Ronald Reagan came during a 1953 episode titled "First Born"), "Schlitz Playhouse of Stars", "Dick Powell's Zane Grey Theatre" (appearing with Ronald Reagan in the 1961 episode "The Long Shadow"), "Wagon Train", "The Tall Man", and "General Electric Theater" (hosted by Ronald Reagan).
New Brunswick New Brunswick is one of four Atlantic provinces on the east coast of Canada. According to the Constitution of Canada, New Brunswick is the only officially bilingual province. About two-thirds of the population declare themselves anglophones and one-third francophones. One-third of the population describes itself as bilingual. Atypically for Canada, only about half of the population lives in urban areas, mostly in Greater Moncton, Greater Saint John and the capital Fredericton. Unlike the other Maritime provinces, New Brunswick's terrain is mostly forested uplands, with much of the land further from the coast, giving it a harsher climate. New Brunswick is 83% forested and less densely populated than the rest of the Maritimes. Being relatively close to Europe, New Brunswick was among the first places in North America to be explored and settled by Europeans. In 1784, after an influx of refugees from the American Revolutionary War, the province was founded on territory from the partition of Nova Scotia. In 1785 Saint John became the first incorporated city in what is now Canada. The province prospered in the early 1800s and the population grew rapidly, reaching about a quarter of a million by mid-century. In 1867, New Brunswick was one of four founding provinces of the Canadian Confederation, along with Nova Scotia and the Province of Canada (now Ontario and Quebec). After Confederation, wooden shipbuilding and lumbering declined, while protectionism disrupted trade ties with New England. By the mid-1900s New Brunswick was one of the poorest regions of Canada, a situation since mitigated by federal transfer payments and improved support for rural areas. As of 2002, provincial gross domestic product was derived as follows: services (about half being government services and public administration) 43%; construction, manufacturing, and utilities 24%; real estate rental 12%; wholesale and retail 11%; agriculture, forestry, fishing, hunting, mining, oil and gas extraction 5%; transportation and warehousing 5%. Tourism accounts for about 9% of the labour force directly or indirectly. Popular destinations include Fundy National Park and the Hopewell Rocks, Kouchibouguac National Park, and Roosevelt Campobello International Park. In 2013, 64 cruise ships called at the Port of Saint John, carrying, on average, 2,600 passengers each. Indigenous peoples have inhabited the area since about 7000 BC. At the time of European contact, the inhabitants were the Mi'kmaq, the Maliseet, and the Passamaquoddy. Although these peoples did not leave a written record, their languages are present in many place names, such as Aroostook, Bouctouche, Memramcook, Petitcodiac, Quispamsis, Richibucto and Shediac. New Brunswick may have been part of Vinland during the Norse exploration of North America, and Basque, Breton, and Norman fishermen may have visited the Bay of Fundy in the early 1500s. The first documented European visit was by Jacques Cartier in 1534. In 1604, a party including Samuel de Champlain visited the mouth of the Saint John River on Saint-Jean-Baptiste Day, after which the river was named. The site, now the city of Saint John, was later the location of the first permanent European settlement in New Brunswick. French settlement eventually extended up the river to the site of present-day Fredericton. Other settlements in the southeast extended from Beaubassin, near the present-day border with Nova Scotia, to Baie Verte, and up the Petitcodiac, Memramcook, and Shepody Rivers.
By the early 1700s the French settlements formed a part of Acadia, a colonial division of New France. Acadia covered what is now the Maritimes, as well as parts of Quebec and Maine. The British conquest of most of the Acadian peninsula occurred during Queen Anne's War, and was formalized in the Treaty of Utrecht of 1713. After the war, French Acadia was reduced to Île Saint-Jean (Prince Edward Island) and Île-Royale (Cape Breton Island). The ownership of continental Acadia (New Brunswick) remained disputed, with an informal border on the Isthmus of Chignecto. In an effort to limit British expansion into continental Acadia, the French built Fort Beauséjour at the isthmus in 1751. From 1749 to 1755, the British engaged in a campaign to consolidate their control over Nova Scotia. The resulting conflict led to an Acadian exodus to French-controlled territories in North America, including portions of continental Acadia. In 1755, the British captured Fort Beauséjour, severing the Acadian supply lines to Nova Scotia and Île-Royale. Unable to make most of the Acadians sign an unconditional oath of allegiance, British authorities undertook a campaign to expel the Acadians in the early years of the Seven Years' War. Continental Acadia was eventually incorporated into the British colony of Nova Scotia, with nearly all of New France surrendered to the British under the Treaty of Paris in 1763. Acadians who returned from exile discovered several thousand immigrants, mostly from New England, on their former lands. Some settled around Memramcook and along the Saint John River. In 1766, settlers from Pennsylvania founded Moncton, and English settlers from Yorkshire arrived in the Sackville area. However, settlement of the area remained slow through the mid-18th century. After the American Revolution, about 10,000 loyalist refugees settled along the north shore of the Bay of Fundy, commemorated in the province's motto, "Spem reduxit" ("hope restored"). The number reached almost 14,000 by 1784, with about one in ten eventually returning to America. New Brunswick was founded in 1784 upon the partition of Nova Scotia into two areas, which became the Provinces of Nova Scotia and New Brunswick. In the same year New Brunswick formed its first elected assembly. The colony was named New Brunswick in honour of George III, King of Great Britain, King of Ireland, and Prince-elector of Brunswick-Lüneburg in what is now Germany. In 1785, Saint John became Canada's first incorporated city. The population of the colony reached 26,000 in 1806 and 35,000 in 1812. The 1800s saw an age of prosperity based on wood export and shipbuilding, bolstered by the Canadian–American Reciprocity Treaty of 1854 and demand from the American Civil War. St. Martins became the third most productive shipbuilding town in the Maritimes, producing over 500 vessels. In 1848, responsible government was granted, and the 1850s saw the emergence of political parties largely organised along religious and ethnic lines. The first half of the 1800s saw large-scale immigration from Ireland and Scotland, with the population reaching 252,047 by 1861. The notion of unifying the separate colonies of British North America was discussed increasingly in the 1860s. Many felt that the American Civil War was the result of weak central government and wished to avoid such violence and chaos.
The 1864 Charlottetown Conference was intended to discuss a Maritime Union, but concerns over possible conquest by the Americans, coupled with a belief that Britain was unwilling to defend its colonies against American attack, led to a request from the Province of Canada (now Ontario and Quebec) to expand the meeting's scope. In 1866 the US cancelled the Canadian–American Reciprocity Treaty, leading to a loss of trade with New England and prompting a desire to build trade within British North America, while the Fenian raids increased support for union. On 1 July 1867, New Brunswick entered the Canadian Confederation along with Nova Scotia and the Province of Canada. Confederation brought into existence the Intercolonial Railway in 1872, a consolidation of the existing Nova Scotia Railway, European and North American Railway, and Grand Trunk Railway. In 1879 John A. Macdonald's Conservatives enacted the National Policy, which called for high tariffs and opposed free trade, disrupting the trading relationship between the Maritimes and New England. The economic situation was worsened by the decline of the wooden shipbuilding industry. The railways and tariffs did foster the growth of new industries in the province, such as textile manufacturing, iron mills, and sugar refineries, but many of these eventually failed to compete with better-capitalized industry in central Canada. In 1937 New Brunswick had the highest infant mortality and illiteracy rates in Canada. At the end of the Great Depression the New Brunswick standard of living was well below the Canadian average. In 1940 the Rowell–Sirois Commission reported that the federal government's attempts to manage the depression illustrated grave flaws in the Canadian constitution. While the federal government had most of the revenue-gathering powers, the provinces had many expenditure responsibilities, such as healthcare, education, and welfare, which were becoming increasingly expensive. The Commission recommended the creation of equalization payments, implemented in 1957. After Canada joined World War II, 14 New Brunswick army units were organized, in addition to The Royal New Brunswick Regiment, and first deployed in the Italian campaign in 1943. After the Normandy landings they redeployed to northwestern Europe, along with The North Shore Regiment. The British Commonwealth Air Training Plan, a training program for Allied pilots, established bases in Moncton, Chatham, and Pennfield Ridge, as well as a military typing school in Saint John. While relatively unindustrialized before the war, New Brunswick became home to 34 plants working on military contracts, from which the province received over $78 million. Prime Minister William Lyon Mackenzie King, who had promised no conscription, asked voters in a national plebiscite whether they would release the government from that promise; New Brunswick voted 69.1 per cent in favour. The policy was not implemented until 1944, too late for many of the conscripts to be deployed. There were 1,808 New Brunswick fatalities among the armed forces. The Acadians in northern New Brunswick had long been geographically and linguistically isolated from the more numerous English speakers to the south. The population of French origin grew dramatically after Confederation, from about 16 per cent in 1871 to 34 per cent in 1931. Government services were often not available in French, and the infrastructure in Francophone areas was less developed than elsewhere.
In 1960 Premier Louis Robichaud embarked on the New Brunswick Equal Opportunity program, in which education, rural road maintenance, and healthcare fell under the sole jurisdiction of a provincial government that insisted on equal coverage throughout the province, rather than the former county-based system. The flag of New Brunswick, based on the coat of arms, was adopted in 1965. The conventional heraldic representations of a lion and a ship represent colonial ties with Europe and the importance of shipping at the time the coat of arms was assigned. Roughly square, New Brunswick is bordered on the north by Quebec, on the east by the Atlantic Ocean, on the south by the Bay of Fundy, and on the west by the US state of Maine. The southeast corner of the province is connected to Nova Scotia at the Isthmus of Chignecto. Glaciation has left much of New Brunswick's uplands with only shallow, acidic soils, which have discouraged settlement but which are home to enormous forests. New Brunswick's climate is more severe than that of the other Maritime provinces, which are lower and have more shoreline along the moderating sea. New Brunswick has a humid continental climate, with slightly milder winters on the Gulf of St. Lawrence coastline. Elevated parts of the far north of the province have a subarctic climate. Evidence of climate change in New Brunswick can be seen in its more intense precipitation events, more frequent winter thaws, and a snowpack reduced by a quarter to a half. Today the sea level is about 30 cm higher than it was 100 years ago, and it is expected to rise twice that much again by the year 2100. Most of New Brunswick is covered by secondary or tertiary forest. At the start of European settlement, the Maritimes were covered from coast to coast by a forest of mature trees, giants by today's standards. Today less than one per cent of old-growth Acadian forest remains, and the World Wide Fund for Nature lists the Acadian Forest as endangered. Following the frequent large-scale disturbances caused by settlement and timber harvesting, the Acadian forest is not growing back as it was, but is subject to borealization: exposure-resistant species that are well adapted to the frequent large-scale disturbances common in the boreal forest are increasingly abundant. These include jack pine, balsam fir, black spruce, white birch, and poplar. Forest ecosystems support large carnivores such as the bobcat, Canada lynx, and black bear, and the large herbivores moose and white-tailed deer. Fiddlehead greens are harvested from the ostrich fern, which grows on riverbanks. Furbish's lousewort, a perennial herb endemic to the shores of the upper Saint John River, is an endangered species threatened by habitat destruction, riverside development, forestry, littering and recreational use of the riverbank. Many wetlands are being disrupted by the highly invasive introduced species purple loosestrife. Bedrock types range from 1 billion to 200 million years old. Much of the bedrock in the west and north derives from ocean deposits in the Ordovician that were subject to folding and igneous intrusion and that were eventually covered with lava during the Paleozoic, peaking during the Acadian orogeny. During the Carboniferous period, about 340 million years ago, New Brunswick was in the Maritimes Basin, a sedimentary basin near the equator.
Sediments, brought by rivers from surrounding highlands, accumulated there; after being compressed, they produced the Albert oil shales of southern New Brunswick. Eventually, sea water from the Panthalassic Ocean invaded the basin, forming the Windsor Sea. Once this receded, conglomerates, sandstones, and shales accumulated. The rust colour of these was caused by the oxidation of iron in the beds between wet and dry periods. Such late Carboniferous rock formed the Hopewell Rocks, which have been shaped by the extreme tidal range of the Bay of Fundy. In the early Triassic, as Pangea drifted north, it was rent apart, forming the rift valley that is the Bay of Fundy. Magma pushed up through the cracks, forming basalt columns on Grand Manan. New Brunswick lies entirely within the Appalachian Mountain range. All of the rivers of New Brunswick drain into the Gulf of Saint Lawrence to the east or the Bay of Fundy to the south. These watersheds include lands in Quebec and Maine. New Brunswick and the rest of the Maritime Peninsula were covered by thick layers of ice during the last glacial period (the Wisconsinan glaciation). The ice cut U-shaped valleys along the Saint John and Nepisiguit rivers and pushed granite boulders from the Miramichi highlands south and east, leaving them as erratics when the ice receded at the end of the glaciation, along with deposits such as the eskers between Woodstock and St George, which are today sources of sand and gravel. The four Atlantic provinces are Canada's least populated, with New Brunswick the third-least populous at 747,101 in 2016. The Atlantic provinces also have higher proportions of rural population. New Brunswick was largely rural until 1951; since then, the rural-urban split has been roughly even. Population density in the Maritimes is above average among Canadian provinces, which reflects their small size and the fact that they do not possess large, unpopulated hinterlands, as do the other seven provinces and three territories. New Brunswick's 107 municipalities cover about 8 per cent of the province's land mass but are home to about 65 per cent of its population. The three major urban areas are in the south of the province and are Greater Moncton, population 126,424, Greater Saint John, population 122,389, and Greater Fredericton, population 85,688. In the 2001 census, the most commonly reported ethnicities were British 40%, French Canadian and Acadian 31%, Irish 18%, other European 7%, First Nations 3%, Asian Canadian 2%. Each person could choose more than one ethnicity. According to the Canadian Constitution, both English and French are the official languages of New Brunswick, making it the only officially bilingual province. Government and public services are available in both English and French. For education, English-language and French-language systems serve the two linguistic communities at all levels. Anglophone New Brunswickers make up roughly two-thirds of the population, while about one-third are Francophone. Recently there has been growth in the number of people reporting themselves as bilingual, with 34% reporting that they speak both English and French. This reflects a trend across Canada. In the 2011 census, 84% of provincial residents reported themselves as Christian: 52% were Roman Catholic, 8% Baptist, 8% United Church of Canada, 7% Anglican and 9% other Christian. Fifteen per cent of residents reported no religion. As of October 2017, seasonally adjusted employment was 73,400 in the goods-producing sector and 280,900 in the services-producing sector.
Those in the goods-producing industries are mostly employed in manufacturing or construction, while those in services work in social assistance, trades, and health care. A large portion of the economy is controlled by the Irving Group of Companies, which consists of the holdings of the family of K. C. Irving. The companies have significant holdings in agriculture, forestry, food processing, freight transport (including railways and trucking), media, oil, and shipbuilding. The United States is the province's largest export market, accounting for 92% of a foreign trade valued in 2014 at almost $13 billion, with refined petroleum making up 63% of that, followed by seafood products, pulp, paper and sawmill products, and non-metallic minerals (chiefly potash). The value of seafood exports, mostly to the United States, was $1.6 billion in 2016. About half of that came from lobster. Other products include salmon, crab, and herring. In 2015, spending on non-resident tourism in New Brunswick was $441 million, which provided $87 million in tax revenue. Many New Brunswickers are employed in the primary sector of the economy. More than 13,000 New Brunswickers work in agriculture, shipping products worth over $1 billion, half of which is from crops, and half of that from potatoes, mostly in the Saint John River valley. McCain Foods is one of the world's largest manufacturers of frozen potato products. Other products include apples, cranberries, and maple syrup. In 2015, New Brunswick was the biggest producer of wild blueberries in Canada. The value of the livestock sector is about a quarter of a billion dollars, nearly half of which is dairy. Other sectors include poultry, fur, goats, sheep, and pigs. About 83% of New Brunswick is forested. Historically important, the forest industry accounted for more than 80% of exports in the mid-1800s. By the end of the 1800s the industry, along with shipbuilding, was declining due to external economic factors. The 1920s saw the development of a pulp and paper industry. In the mid-1960s, forestry practices changed from the controlled harvest of a commodity to the cultivation of the forests. The industry employs nearly 12,000, generating revenues of around $437 million. Mining was historically unimportant in the province, but has grown since the 1950s. The province's GDP from the mining and quarrying industry in 2015 was $299.5 million. Mines in New Brunswick produce lead, zinc, copper, and potash. Public elementary and secondary education in the province is administered by the provincial Department of Education and Early Childhood Development. New Brunswick has a parallel system of Anglophone and Francophone public schools. The province also operates five public post-secondary institutions: a college and four universities. The four public universities include the oldest English-language university in the country, the University of New Brunswick; the others are Mount Allison University, St. Thomas University, and the Université de Moncton. All four universities offer undergraduate and postgraduate education, and the Université de Moncton and the University of New Brunswick also offer professional education. Medical education programs have been established at both the Université de Moncton and UNBSJ in Saint John (affiliated with the Université de Sherbrooke and Dalhousie University, respectively).
Public colleges in the province are managed as part of the New Brunswick Community College (NBCC) system. In addition to public institutions, the province is home to several private vocational schools, such as the Moncton Flight College, and private universities, the largest being Crandall University. Under Canadian federalism, power is divided between federal and provincial governments. Among areas under federal jurisdiction are citizenship, foreign affairs, national defence, fisheries, criminal law, indigenous policies, and many others. Provincial jurisdiction covers public lands, health, education, and local government, among other things. Jurisdiction is shared for immigration, pensions, agriculture, and welfare. The parliamentary system of government is modelled on the British Westminster system. Forty-nine representatives, nearly always members of political parties, are elected to the Legislative Assembly of New Brunswick. The head of government is the Premier of New Brunswick, normally the leader of the party or coalition with the most seats in the legislative assembly. Governance is handled by the executive council (cabinet), with about 32 ministries. Ceremonial duties of the monarchy in New Brunswick are mostly carried out by the Lieutenant Governor of New Brunswick. Under amendments to the province's Legislative Assembly Act in 2007, a provincial election is held every four years. The two largest political parties are the New Brunswick Liberal Association and the Progressive Conservative Party of New Brunswick. Since the 2018 election, the minor parties in the legislature are the Green Party of New Brunswick and the People's Alliance of New Brunswick. The Court of Appeal of New Brunswick is the highest provincial court, hearing appeals from the lower courts. The court system consists of eight judicial districts, loosely based on the counties. The Chief Justice of New Brunswick serves at the apex of this court structure. Ninety-two per cent of the land in the province, inhabited by about 35% of the population, is under provincial administration and has no local elected representation. The 51% of the province that is Crown land is administered by the Department of Natural Resources and Energy Development. Most of the province is administered as local service districts (LSDs), unincorporated units of local governance. As of 2017, there are 237 LSDs. Services, paid for by property taxes, include fire protection, solid waste management, street lighting, and dog regulation. LSDs may elect advisory committees and work with the Department of Local Government to recommend how to spend locally collected taxes. In 2006 there were three rural communities. The rural community is a relatively new entity; to be created, it requires a population of 3,000 and a tax base of $200 million. In 2006 there were 101 municipalities. Twelve regional service commissions were introduced in 2013 to regulate regional planning and solid waste disposal, and to provide a forum for regional discussion of police and emergency services, climate change adaptation planning, and regional sport, recreational and cultural facilities. The commissions' administrative councils are populated by the mayors of each municipality or rural community within a region. Historically the province was divided into counties with elected governance, but this was abolished in 1966. These were further subdivided into 152 parishes, which also lost their political significance in 1966 but are still used as census subdivisions by Statistics Canada.
New Brunswick has the poorest-performing economy of any Canadian province, with a per capita income of $28,000. The government has historically run at a large deficit. With about half of the population being rural, it is expensive for the government to provide education and health services, which account for 60 per cent of government expenditure. Thirty-six per cent of the provincial budget is covered by federal cash transfers. The government has frequently attempted to create employment through subsidies, which have often failed to generate long-term economic prosperity and have resulted in bad debt; examples include Bricklin, Atcon, and the Marriott call centre in Fredericton. According to a 2014 study by the Atlantic Institute for Market Studies, the large public debt is a very serious problem. Government revenues are shrinking because of a decline in federal transfer payments. Though expenditures are down (through government pension reform and a reduction in the number of public employees), they have increased relative to GDP, necessitating further measures to reduce debt in the future. In the 2014–15 fiscal year, provincial debt reached $12.2 billion, or 37.7 per cent of nominal GDP, an increase over the $10.1 billion recorded in 2011–12. The debt-to-GDP ratio is projected to fall to 36.7 per cent in 2019–20. Publicly owned NB Power operates 13 of New Brunswick's generating stations, deriving power from fuel oil and diesel (1,497 MW), hydro (889 MW), nuclear (660 MW), and coal (467 MW). There were 30 active natural gas production sites in 2012. The Department of Transportation and Infrastructure maintains government facilities and the province's highway network and ferries. The Trans-Canada Highway is not under federal jurisdiction and traverses the province from Edmundston, following the Saint John River Valley through Fredericton and Moncton, and on to Nova Scotia and Prince Edward Island. Via Rail's Ocean service, which connects Montreal to Halifax, is currently the oldest continuously operated passenger route in North America, with stops from west to east at Campbellton, Charlo, Jacquet River, Petit Rocher, Bathurst, Miramichi, Rogersville, Moncton, and Sackville. Canadian National Railway operates freight services along the same route, as well as a subdivision from Moncton to Saint John. The New Brunswick Southern Railway, a division of J. D. Irving Limited, together with its sister company Eastern Maine Railway, forms a continuous main line connecting Saint John and Brownville Junction, Maine. There are some 61 historic places in New Brunswick, including Fort Beauséjour, Kings Landing Historical Settlement and the Village Historique Acadien. Established in 1842, the New Brunswick Museum in Saint John is designated as the provincial museum of New Brunswick; the province is home to a number of other museums as well. New Brunswick is home to many individuals who have worked in music, the performing arts, and the visual arts. Music of New Brunswick includes artists such as Henry Burr, Roch Voisine, Lenny Breau, and Édith Butler. Symphony New Brunswick, based in Saint John, tours extensively in the province, while the Atlantic Ballet Theatre of Canada, based in Moncton, tours nationally and internationally. Theatre New Brunswick, based in Fredericton, tours plays around the province; Canadian playwright Norm Foster saw his early works premiere with Theatre New Brunswick. 
Other live theatre troupes include the Théâtre populaire d'Acadie in Caraquet and Live Bait Theatre in Sackville. The refurbished Imperial and Capitol Theatres are found in Saint John and Moncton, respectively; the more modern Playhouse is in Fredericton. Mount Allison University in Sackville began offering art classes in 1854; the program came into its own under John A. Hammond, who taught there from 1893 to 1916. Alex Colville and Lawren Harris later studied and taught art there, and both Christopher Pratt and Mary Pratt were trained at Mount Allison. The university also opened an art gallery in 1895; it is named for its patron, John Owens of Saint John, and is presently the oldest university-operated art gallery in Canada. Modern New Brunswick artists include landscape painter Jack Humphrey, sculptor Claude Roussel, and Miller Brittain. The province is also home to the Beaverbrook Art Gallery, which was designated as the provincial art gallery in 1994. Julia Catherine Beckwith, born in Fredericton, was Canada's first published novelist. Poet Bliss Carman and his cousin Charles G. D. Roberts were among the first Canadians to achieve international fame for letters. Antonine Maillet was the first non-European winner of France's Prix Goncourt. Other modern writers include Alfred Bailey, Alden Nowlan, John Thompson, Douglas Lochhead, K. V. Johansen, David Adams Richards, Raymond Fraser, and France Daigle. A recent New Brunswick Lieutenant-Governor, Herménégilde Chiasson, is a poet and playwright. The Fiddlehead, established in 1945 at the University of New Brunswick, is Canada's oldest literary magazine. New Brunswick has four daily newspapers: the "Times & Transcript", serving eastern New Brunswick; the "Telegraph-Journal", based in Saint John and distributed province-wide; "The Daily Gleaner", based in Fredericton; and "L'Acadie Nouvelle", based in Caraquet. The three English-language dailies and the majority of the weeklies are owned and operated by Brunswick News, which is privately owned by James K. Irving. Due to its dominant position, critics have accused Brunswick News of bias towards the Irving Group of Companies, including a reluctance to publish stories that are critical of the group. The Canadian Broadcasting Corporation has anglophone television and radio operations in Fredericton, while Télévision de Radio-Canada is based in Moncton. CTV and Global also operate stations in New Brunswick, largely as sub-feeds of their Halifax stations as part of regional networks.
https://en.wikipedia.org/wiki?curid=21182
Nova Scotia Nova Scotia is a province in eastern Canada. With a population of 923,598 as of 2016, it is the most populous of Canada's three Maritime provinces and four Atlantic provinces. It is the country's second-most densely populated province and second-smallest province by area, both after neighbouring Prince Edward Island. Its area includes Cape Breton Island and 3,800 other coastal islands. The peninsula that makes up Nova Scotia's mainland is connected to the rest of North America by the Isthmus of Chignecto, on which the province's land border with New Brunswick is located. The province borders the Bay of Fundy to the northwest and the Atlantic Ocean to the south and east, and is separated from Prince Edward Island and the island of Newfoundland by the Northumberland and Cabot straits, respectively. The land that comprises what is now Nova Scotia has been inhabited by the indigenous Miꞌkmaq people for thousands of years. France's first settlement in North America, Port Royal, was established in 1605 and intermittently served, in various locations, as the capital of the French colony of Acadia for over a hundred years. The Fortress of Louisbourg was a key focal point in the struggle between the British and French for control of the area, changing hands numerous times until France relinquished its claims with the Treaty of Paris in 1763. During the American Revolutionary War, thousands of Loyalists settled in Nova Scotia. In 1848, Nova Scotia became the first British colony to achieve responsible government, and it federated in July 1867 with New Brunswick and the Province of Canada (now Ontario and Quebec) to form what is now the country of Canada. Nova Scotia's capital and largest city is Halifax, which is today home to about 45 per cent of the province's population; it is the thirteenth-largest census metropolitan area in Canada, the largest city in Atlantic Canada, and Canada's second-largest coastal city (after Vancouver). "Nova Scotia" means "New Scotland" in Latin and is the recognized English-language name for the province. In both French (Nouvelle-Écosse) and Scottish Gaelic (Alba Nuadh), the province's name is a direct translation of "New Scotland". In general, Romance and Slavic languages use a direct translation of "New Scotland", while most other languages use direct transliterations of the Latin/English name. The province was first named in the 1621 Royal Charter granting Sir William Alexander the right to settle lands including modern Nova Scotia, Cape Breton Island, Prince Edward Island, New Brunswick and the Gaspé Peninsula. Nova Scotia is Canada's second-smallest province in area, after Prince Edward Island. The province's mainland is the Nova Scotia peninsula, surrounded by the Atlantic Ocean and including numerous bays and estuaries; no part of Nova Scotia is far from the ocean. Cape Breton Island, a large island to the northeast of the Nova Scotia mainland, is also part of the province, as is Sable Island, a small island off the province's southern coast notorious as the site of offshore shipwrecks. Nova Scotia has many ancient fossil-bearing rock formations, which are particularly rich on the Bay of Fundy's shores. Blue Beach near Hantsport and the Joggins Fossil Cliffs, both on the Bay of Fundy's shores, have yielded an abundance of Carboniferous-age fossils. Wasson's Bluff, near the town of Parrsboro, has yielded both Triassic- and Jurassic-age fossils. The province contains 5,400 lakes. 
Nova Scotia lies in the mid-temperate zone and, although the province is almost surrounded by water, its climate is closer to continental than to maritime. The winter and summer temperature extremes of the continental climate are moderated by the ocean. However, winters are cold enough to be classified as continental, though still nearer the freezing point than in inland areas to the west. The Nova Scotian climate is in many ways similar to that of the central Baltic Sea coast in Northern Europe, only wetter and snowier, even though Nova Scotia lies some fifteen parallels further south. Areas not on the Atlantic coast experience warmer summers, more typical of inland areas, and slightly colder winter lows. Described on the provincial vehicle licence plate as Canada's Ocean Playground, Nova Scotia is surrounded by four major bodies of water: the Gulf of Saint Lawrence to the north, the Bay of Fundy to the west, the Gulf of Maine to the southwest, and the Atlantic Ocean to the east. The province includes regions of the Mi'kmaq nation of Mi'kma'ki, a territory that also includes the Maritimes, parts of Maine, Newfoundland and the Gaspé Peninsula. The Mi'kmaq people belong to the large Algonquian language family and inhabited Nova Scotia at the time the first European colonists arrived. Warfare was a notable feature of Nova Scotia during the 17th and 18th centuries. The French arrived in 1604, and Catholic Mi'kmaq and Acadians formed the majority of the population of the colony for the next 150 years. In 1605, French colonists established the first permanent European settlement in the future Canada (and the first north of Florida) at Port Royal, founding what would become known as Acadia. During the first 80 years the French and Acadians lived in Nova Scotia, nine significant military clashes took place as the English and Scottish (later British), Dutch and French fought for possession of the area. These encounters happened at Port Royal, Saint John, Cap de Sable (present-day Port La Tour, Nova Scotia), Jemseg (1674 and 1758) and Baleine (1629). The Acadian Civil War took place from 1640 to 1645. Beginning with King William's War in 1688, a series of six wars took place between the English/British and the French, with Nova Scotia a consistent theatre of conflict between the two powers. Hostilities between the British and French resumed from 1702 to 1713 in Queen Anne's War. The British siege of Port Royal took place in 1710, ending French rule in peninsular Acadia. The subsequent signing of the Treaty of Utrecht in 1713 formally recognized this, while returning Cape Breton Island (Île Royale) and Prince Edward Island (Île Saint-Jean) to the French. Despite the British conquest of Acadia in 1710, Nova Scotia remained primarily occupied by Catholic Acadians and Mi'kmaq, who confined British forces to Annapolis and to Canso. Present-day New Brunswick then still formed part of the French colony of Acadia. Immediately after the capture of Port Royal in 1710, Francis Nicholson announced it would be renamed Annapolis Royal in honour of Queen Anne. As a result of Father Rale's War (1722–1725), the Mi'kmaq signed a series of treaties with Great Britain in 1725. The British signed a treaty (or "agreement") with the Mi'kmaq, but the authorities have often disputed its definition of the Mi'kmaq's rights to hunt and fish on their lands. 
However, conflict among the Acadians, Mi'kmaq, French, and British persisted in the following decades with King George's War (1744–1748). Father Le Loutre's War (1749–1755) began when Edward Cornwallis arrived with 13 transports to establish Halifax on 21 June 1749. A General Court, made up of the governor and the Council, was the highest court in the colony at the time. Jonathan Belcher was sworn in as chief justice of the Nova Scotia Supreme Court on 21 October 1754. The first legislative assembly in Halifax, under the governorship of Charles Lawrence, met on 2 October 1758. During the French and Indian War of 1754–1763 (the North American theatre of the Seven Years' War of 1756–1763), the British deported the Acadians and recruited New England Planters to resettle the colony. The 75-year period of war ended with the Halifax Treaties between the British and the Mi'kmaq (1761). After the war, some Acadians were allowed to return, and the British made treaties with the Mi'kmaq. In 1763, most of Acadia (Cape Breton Island, St. John's Island (now Prince Edward Island), and New Brunswick) became part of Nova Scotia. In 1765, the county of Sunbury was created, comprising the territory of present-day New Brunswick and eastern Maine as far as the Penobscot River. In 1769, St. John's Island became a separate colony. The American Revolution (1775–1783) had a significant impact in shaping Nova Scotia. Initially, Nova Scotia, "the 14th American Colony" as some called it, displayed ambivalence over whether the colony should join the more southern colonies in their defiance of Britain, and rebellion flared at the Battle of Fort Cumberland (1776) and the Siege of Saint John (1777). Throughout the war, American privateers devastated the maritime economy by capturing ships and looting almost every community outside Halifax. These American raids pushed many sympathetic or neutral Nova Scotians into supporting the British. By the end of the war Nova Scotia had outfitted a number of privateers of its own to attack American shipping. British military forces based at Halifax succeeded in preventing American support for rebels in Nova Scotia and deterred any invasion of the colony. However, the British navy failed to establish naval supremacy: while the British captured many American privateers in battles such as the Naval battle off Halifax (1782), many more continued attacks on shipping and settlements until the final months of the war. The Royal Navy struggled to maintain British supply lines, defending convoys from American and French attacks, as in the fiercely fought convoy engagement, the Naval battle off Cape Breton (1781). After the Thirteen Colonies and their French allies forced the British forces to surrender (1781), approximately 33,000 Loyalists (the King's Loyal Americans, allowed to place "United Empire Loyalist" after their names) settled in Nova Scotia (14,000 of them in what became New Brunswick) on lands granted by the Crown as some compensation for their losses. (The British administration divided Nova Scotia and hived off Cape Breton and New Brunswick in 1784.) The Loyalist exodus created new communities across Nova Scotia, including Shelburne, which briefly became one of the larger British settlements in North America, and infused Nova Scotia with additional capital and skills. A number of Black Loyalists are also buried in unmarked graves in the Old Burying Ground in Halifax, Nova Scotia. 
However, the migration also caused political tensions between Loyalist leaders and the leaders of the existing New England Planters settlement. The Loyalist influx also pushed Nova Scotia's 2,000 Mi'kmaq people to the margins, as Loyalist land grants encroached on ill-defined native lands. As part of the Loyalist migration, about 3,000 Black Loyalists arrived; they founded the largest free Black settlement in North America at Birchtown, near Shelburne. Many Nova Scotian communities were settled by British regiments that fought in the war. During the War of 1812, Nova Scotia's contribution to the British war effort involved communities either purchasing or building various privateer ships to attack U.S. vessels. Perhaps the most dramatic moment of the war for Nova Scotia occurred when HMS "Shannon" escorted the captured American frigate USS "Chesapeake" into Halifax Harbour (1813). Many of the U.S. prisoners were kept at Deadman's Island, Halifax. During this century, Nova Scotia became the first colony in British North America and in the British Empire to achieve responsible government, in January–February 1848, becoming self-governing through the efforts of Joseph Howe. Nova Scotia had established representative government in 1758, an achievement later commemorated by the erection of the Dingle Tower in 1908. Nova Scotians fought in the Crimean War of 1853–1856. The Welsford-Parker Monument in Halifax is the second-oldest war monument in Canada (1860) and the only Crimean War monument in North America; it commemorates the 1854–55 Siege of Sevastopol. Thousands of Nova Scotians fought in the American Civil War (1861–1865), primarily on behalf of the North. The British Empire (including Nova Scotia) remained officially neutral in the conflict; as a result, Britain (and Nova Scotia) continued to trade with both the South and the North, and Nova Scotia's economy boomed during the Civil War. Soon after the American Civil War, pro-Confederation premier Charles Tupper led Nova Scotia into the Canadian Confederation on 1 July 1867, along with New Brunswick and the Province of Canada. The Anti-Confederation Party was led by Joseph Howe; almost three months later, in the election of 18 September 1867, it won 18 out of 19 federal seats and 36 out of 38 seats in the provincial legislature. Throughout the 19th century, numerous businesses that developed in Nova Scotia became of pan-Canadian and international importance: the Starr Manufacturing Company (the first skate manufacturer in Canada), the Bank of Nova Scotia, Cunard Line, Alexander Keith's Brewery, and Morse's Tea Company (the first tea company in Canada), among others. Nova Scotia became a world leader in both building and owning wooden sailing ships in the second half of the 19th century, producing the internationally recognized shipbuilders Donald McKay and William Dawson Lawrence. The province's fame as a home of sailors was assured when Joshua Slocum became the first man to sail single-handedly around the world (1895), and international attention continued into the following century with the many racing victories of the "Bluenose" schooner. Nova Scotia was also the birthplace and home of Samuel Cunard, the British shipping magnate (born at Halifax, Nova Scotia) who founded the Cunard Line. In December 1917, about 2,000 people were killed in the Halifax Explosion. In April 2020, a killing spree occurred across the province and became the deadliest rampage in Canada's history. 
According to the 2006 Canadian census, the largest ethnic group in Nova Scotia is Scottish (31.9%), followed by English (31.8%), Irish (21.6%), French (17.9%), German (11.3%), Aboriginal (5.3%), Dutch (4.1%), Black Canadian (2.8%), Welsh (1.9%), Italian (1.5%), and Scandinavian (1.4%); 40.9% of respondents identified their ethnicity as "Canadian". The 2011 Canadian census showed a population of 921,727. Of the 904,285 single-language responses to the census question concerning mother tongue, English was by far the most commonly reported language, followed by French. Nova Scotia is home to the largest Scottish Gaelic-speaking community outside of Scotland, with a small number of native speakers in Pictou County, Antigonish County, and Cape Breton Island, and the language is taught in a number of secondary schools throughout the province. In 2018 the government launched a new Gaelic vehicle licence plate to raise awareness of the language and help fund Gaelic language and culture initiatives; it estimated that there were 2,000 Gaelic speakers in the province. In 1871, the largest religious denominations were Presbyterian with 103,500 (27%); Roman Catholic with 102,000 (26%); Baptist with 73,295 (19%); Anglican with 55,124 (14%); Methodist with 40,748 (10%); Lutheran with 4,958 (1.3%); and Congregationalist with 2,538 (0.65%). According to the 2011 census, Christians were the largest religious group at 78.2% of the population; about 21.2% reported no religion, about 1% were Muslim, and Jews, Hindus and Sikhs together constituted around 0.2%. Nova Scotia's per capita GDP in 2016 was $44,924, significantly lower than the national average per capita GDP of $57,574, and GDP growth has lagged behind the rest of the country for at least the past decade. As of 2017, the median family income in Nova Scotia was $85,970, below the national average of $92,990; in Halifax the figure rises to $98,870. The province is the world's largest exporter of Christmas trees, lobster, gypsum, and wild berries. Its export value of fish exceeds $1 billion, and fish products are received by 90 countries around the world. Nevertheless, the province's imports far exceed its exports: while the two were roughly equal from 1992 until 2004, the trade deficit has since ballooned, and in 2012, exports from Nova Scotia were 12.1% of provincial GDP while imports were 22.6%. Nova Scotia's traditionally resource-based economy has diversified in recent decades. The rise of Nova Scotia as a viable jurisdiction in North America was historically driven by the ready availability of natural resources, especially the fish stocks off the Scotian Shelf. The fishery was a pillar of the economy from its development as part of New France in the 17th century, but it suffered a sharp decline due to overfishing in the late 20th century. The collapse of the cod stocks and the closure of this sector resulted in a loss of approximately 20,000 jobs in 1992. Other sectors in the province were also hit hard, particularly during the last two decades: coal mining in Cape Breton and northern mainland Nova Scotia has virtually ceased, and a large steel mill in Sydney closed during the 1990s. More recently, the high value of the Canadian dollar relative to the US dollar has hurt the forestry industry, leading to the shutdown of a long-running pulp and paper mill near Liverpool. 
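The GDP and trade figures above lend themselves to a quick back-of-the-envelope check. The following sketch (plain Python, using only the numbers quoted in this section; the variable names are illustrative, not from any source) computes the per capita GDP shortfall relative to the national average and the 2012 trade deficit as a share of provincial GDP:

# Back-of-the-envelope check of the figures quoted above (2016 per capita
# GDP, 2012 trade shares). Illustrative arithmetic only; no external data.
ns_gdp_per_capita = 44_924   # Nova Scotia, 2016 (CAD)
ca_gdp_per_capita = 57_574   # Canadian national average, 2016 (CAD)

# Shortfall of Nova Scotia's per capita GDP relative to the national average.
shortfall = 1 - ns_gdp_per_capita / ca_gdp_per_capita
print(f"Per capita GDP shortfall vs. national average: {shortfall:.1%}")  # ~22.0%

# 2012 trade deficit implied by the export and import shares of GDP.
exports_share = 0.121   # exports as a fraction of provincial GDP
imports_share = 0.226   # imports as a fraction of provincial GDP
print(f"Implied trade deficit: {imports_share - exports_share:.1%} of GDP")  # 10.5%

On these figures, Nova Scotia's output per person trails the national average by roughly a fifth, and the 2012 trade gap amounts to about a tenth of provincial GDP. 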
Mining, especially of gypsum and salt, and to a lesser extent silica, peat and barite, is also a significant sector. Since 1991, offshore oil and gas has become an important part of the economy, although production and revenue are now declining. Agriculture, however, remains an important sector in the province, particularly in the Annapolis Valley. Nova Scotia's defence and aerospace sector generates approximately $500 million in revenues and contributes about $1.5 billion to the provincial economy each year; about 40% of Canada's military assets reside in Nova Scotia. Nova Scotia has the fourth-largest film industry in Canada, hosting over 100 productions yearly, more than half of which are the products of international film and television producers. In 2015, the government of Nova Scotia eliminated tax credits for film production in the province, jeopardizing the industry, given that most other jurisdictions continue to offer such credits. The province also has a rapidly developing information and communication technology (ICT) sector, which consists of over 500 companies and employs roughly 15,000 people. In 2006, the manufacturing sector brought in over $2.6 billion in chained GDP, the largest output of any industrial sector in Nova Scotia. Michelin remains by far the largest single employer in this sector, operating three production plants in the province, and is also the province's largest private-sector employer. The Nova Scotia tourism industry includes more than 6,500 direct businesses, supporting nearly 40,000 jobs and contributing approximately $1.3 billion annually to the economy. Cruise ships pay regular visits to the province; in 2010, the Port of Halifax received 261,000 passengers and Sydney 69,000. A 2008 Nova Scotia tourism campaign advertised a fictional mobile phone called the Pomegranate; its promotional website, after describing the "new phone", redirected visitors to tourism information about the province. The tourism industry showcases Nova Scotia's culture, scenery and coastline. Nova Scotia has many museums reflecting its ethnic heritage, including the Glooscap Heritage Centre, Grand-Pré National Historic Site, Hector Heritage Quay and the Black Cultural Centre for Nova Scotia. Other museums tell the story of its working history, such as the Cape Breton Miners' Museum and the Maritime Museum of the Atlantic. Nova Scotia is home to several internationally renowned musicians, and there are visitor centres devoted to Hank Snow, Rita MacNeil, and Anne Murray in their home towns. There are also numerous music and cultural festivals, such as the Stan Rogers Folk Festival, Celtic Colours, the Nova Scotia Gaelic Mod, the Royal Nova Scotia International Tattoo, the Atlantic Film Festival and the Atlantic Fringe Festival. The province has 87 National Historic Sites of Canada, including the Habitation at Port-Royal, the Fortress of Louisbourg and Citadel Hill (Fort George) in Halifax. Nova Scotia has two national parks, Kejimkujik and Cape Breton Highlands, and many other protected areas. The Bay of Fundy has the highest tidal range in the world, and the iconic Peggys Cove is internationally recognized, receiving 600,000-plus visitors a year. Old Town Lunenburg, a port town on the South Shore, has been declared a UNESCO World Heritage Site. Acadian Skies and Mi'kmaq Lands, a starlight reserve in southwestern Nova Scotia, is the first certified UNESCO-Starlight Tourist Destination. 
Starlight tourist destinations are locations that offer star-observing conditions protected from light pollution. Nova Scotia is governed by a parliamentary system within the framework of a constitutional monarchy; the monarchy in Nova Scotia is the foundation of the executive, legislative, and judicial branches. The sovereign is Queen Elizabeth II, who also serves as head of state of 15 other Commonwealth countries, each of Canada's nine other provinces, and the Canadian federal realm, and who resides predominantly in the United Kingdom. The Queen's representative, the Lieutenant Governor of Nova Scotia (at present Arthur Joseph LeBlanc), carries out most of the royal duties in Nova Scotia. The direct participation of the royal and viceroyal figures in governance is limited, though; in practice, their use of the executive powers is directed by the Executive Council, a committee of ministers of the Crown responsible to the unicameral, elected House of Assembly and chosen and headed by the Premier of Nova Scotia (presently Stephen McNeil), the head of government. To ensure the stability of government, the lieutenant governor will usually appoint as premier the current leader of the political party that can obtain the confidence of a plurality in the House of Assembly. The leader of the party with the second-most seats usually becomes the Leader of Her Majesty's Loyal Opposition (presently Tim Houston), part of an adversarial parliamentary system intended to keep the government in check. Each of the 51 Members of the Legislative Assembly in the House of Assembly is elected by single-member plurality in an electoral district or riding. General elections must be called by the lieutenant governor on the advice of the premier, or may be triggered by the government losing a confidence vote in the House. There are three dominant political parties in Nova Scotia: the Liberal Party, the New Democratic Party, and the Progressive Conservative Party. The other two registered parties are the Green Party of Nova Scotia and the Atlantica Party, neither of which has a seat in the House of Assembly. The province's revenue comes mainly from the taxation of personal and corporate income, although taxes on tobacco and alcohol, its stake in the Atlantic Lottery Corporation, and oil and gas royalties are also significant. In 2006–07, the province passed a budget of $6.9 billion, with a projected $72 million surplus; federal equalization payments account for $1.385 billion, or 20.07 per cent, of provincial revenue. The province participates in the HST, a blended sales tax collected by the federal government using the GST tax system. Nova Scotia no longer has any incorporated cities; they were amalgamated into regional municipalities in 1996. The cuisine of Nova Scotia is typically Canadian, with an emphasis on local seafood. One dish peculiar to, and originating from, the province is the Halifax donair, a distant variant of the doner kebab prepared using thinly sliced beef meatloaf and a sweet condensed-milk sauce. Hodge podge, a creamy soup of fresh baby vegetables, is also native to Nova Scotia, and the province is known for a dessert called blueberry fungy or blueberry grunt. There are a number of festivals and cultural events that recur in Nova Scotia or are notable in its history. 
Nova Scotia has produced numerous film actors. Academy Award nominee Ellen Page ("Juno", "Inception") was born in Halifax; five-time Academy Award nominee Arthur Kennedy ("Lawrence of Arabia", "High Sierra") called Nova Scotia his home; and two-time Golden Globe winner Donald Sutherland ("MASH", "Ordinary People") spent most of his youth in the province. Other actors include John Paul Tremblay, Robb Wells, Mike Smith and John Dunsworth of "Trailer Park Boys", and actress Joanne Kelly of "Warehouse 13". Nova Scotia has also produced numerous film directors, such as Thom Fitzgerald ("The Hanging Garden"), Academy Award nominee Daniel Petrie ("Resurrection") and Acadian director Phil Comeau, whose locally set "Le secret de Jérôme" won multiple awards. Nova Scotian stories are the subject of numerous feature films: "Margaret's Museum" (starring Helena Bonham Carter); "The Bay Boy" (directed by Daniel Petrie and starring Kiefer Sutherland); "New Waterford Girl"; "The Story of Adele H." (the story of Adèle Hugo's unrequited love); and two films of "Evangeline" (one starring Miriam Cooper and another starring Dolores del Río). There is a significant film industry in Nova Scotia. Feature filmmaking in Canada began with "Evangeline" (1913), made by the Canadian Bioscope Company in Halifax, which released six films before it closed; the film has since been lost. Award-winning feature films made in the province include "Titanic" (starring Leonardo DiCaprio and Kate Winslet); "The Shipping News" (starring Kevin Spacey and Julianne Moore); "K-19: The Widowmaker" (starring Harrison Ford and Liam Neeson); "Amelia" (starring Hilary Swank, Richard Gere and Ewan McGregor); and "The Lighthouse" (starring Robert Pattinson and Willem Dafoe). Nova Scotia has also produced numerous television series: "This Hour Has 22 Minutes", "Don Messer's Jubilee", "Black Harbour", "Haven", "Trailer Park Boys", "Mr. D", "Call Me Fitz", and "Theodore Tugboat". The "Jesse Stone" film series on CBS, starring Tom Selleck, is also routinely produced in the province. Nova Scotia has long been a centre for artistic and cultural excellence. The capital, Halifax, hosts institutions such as the Nova Scotia College of Art and Design University, the Art Gallery of Nova Scotia, Neptune Theatre, the Dalhousie Arts Centre, Two Planks and a Passion Theatre, and the Ship's Company Theatre. The province is home to avant-garde visual art and traditional crafting, writing and publishing, and a film industry. Many of the historic public sculptures in the province were made by New York sculptor J. Massey Rhind and by Canadian sculptors Hamilton MacCarthy, George Hill, Emanuel Hahn and Louis-Philippe Hébert; some public art was also created by Nova Scotian John Wilson. Nova Scotian George Lang was a stone sculptor who also built many landmark buildings in the province, including the Welsford-Parker Monument. Two valuable monuments in the province are in St. Paul's Church, Halifax: one by John Gibson (for Richard John Uniacke, Jr.) and another by Sir Francis Leggatt Chantrey (for Amelia Ann Smyth). Both Gibson and Chantrey were famous British sculptors of the Victorian era and have numerous sculptures in the Tate, the Museum of Fine Arts, Boston, and Westminster Abbey. Some of the province's greatest painters were Maud Lewis, William Valentine, Maria Morris, Jack L. 
Gray, Mabel Killam Day, Ernest Lawson, Frances Bannerman, Alex Colville, Tom Forrestall and ship portrait artist John O'Brien. Among the most notable artists whose works have been acquired by Nova Scotia are British artist Joshua Reynolds (in the collection of the Art Gallery of Nova Scotia); William Gush and William J. Weaver (both of whom have works in Province House); and Robert Field (Government House); as well as leading American artists Benjamin West (a self-portrait in The Halifax Club and a portrait of a chief justice in the Nova Scotia Supreme Court), John Singleton Copley, and Robert Feke (the latter three have works at the Uniacke Estate). Two famous Nova Scotian photographers are Wallace R. MacAskill and Sherman Hines. Three of the most accomplished illustrators were George Wylie Hutchinson, cartoonist Bob Chambers, and Donald A. Mackay. Numerous Nova Scotian authors have achieved international fame: Thomas Chandler Haliburton ("The Clockmaker"), Alistair MacLeod ("No Great Mischief"), Evelyn Richardson ("We Keep a Light"), Margaret Marshall Saunders ("Beautiful Joe"), Laurence B. Dakin ("Marco Polo"), and Joshua Slocum ("Sailing Alone Around the World"). Other authors include Johanna Skibsrud ("The Sentimentalists"), Alden Nowlan ("Bread, Wine and Salt"), George Elliott Clarke ("Execution Poems"), Lesley Choyce ("Nova Scotia: Shaped by the Sea"), Thomas Raddall ("Halifax: Warden of the North"), Donna Morrissey ("Kit's Law"), and Frank Parker Day ("Rockbound"). Nova Scotia has also been the subject of numerous literary works. Among the international best-sellers are "Last Man Out: The Story of the Springhill Mining Disaster" (by Melissa Fay Greene); "Curse of the Narrows: The Halifax Explosion 1917" (by Laura MacDonald); "In the Village" (a short story by Pulitzer Prize-winning author Elizabeth Bishop); and the National Book Critics Circle Award winner "Rough Crossings" (by Simon Schama). Other authors who have written novels on Nova Scotian stories include Linden MacIntyre ("The Bishop's Man"), Hugh MacLennan ("Barometer Rising"), Rebecca McNutt ("Mandy and Alecto"), Ernest Buckler ("The Valley and the Mountain"), Archibald MacMechan ("Red Snow on Grand Pré"), Henry Wadsworth Longfellow (the long poem "Evangeline"), Lawrence Hill ("The Book of Negroes") and John Mack Faragher ("A Great and Noble Scheme"). Nova Scotia is home to Symphony Nova Scotia, a symphony orchestra based in Halifax. 
The province has produced more than its fair share of famous musicians, including Grammy Award winners Denny Doherty (of The Mamas & the Papas), Anne Murray, and Sarah McLachlan; country singers Hank Snow, George Canyon, and Drake Jensen; jazz vocalist Holly Cole; classical performers Portia White and Barbara Hannigan; the multiple-Juno-Award-nominated rapper Classified; and such diverse artists as Rita MacNeil, Matt Mays, Sloan, Feist, Todd Fancey, The Rankin Family, Natalie MacMaster, Susan Crowe, Buck 65, Joel Plaskett, and the bands April Wine and Grand Dérangement. Numerous songs have been written about Nova Scotia: "The Ballad of Springhill" (written by Peggy Seeger and performed by Irish folk singer Luke Kelly of The Dubliners); several songs by Stan Rogers, including "Bluenose", "Watching the Apples Grow", "The Jeannie C." (which mentions Little Dover, NS), "Barrett's Privateers", "Giant", and "The Rawdon Hills"; "Farewell to Nova Scotia" (traditional); "Blue Nose" (Stompin' Tom Connors); "She's Called Nova Scotia" (Rita MacNeil); "Cape Breton" (David Myles); "Acadian Driftwood" (Robbie Robertson); "Acadie" (Daniel Lanois); "Song for the Mira" (Allister MacGillivray); and "My Nova Scotia Home" (Hank Snow). Nova Scotia has produced many significant songwriters, such as Grammy Award winner Gordie Sampson, who has written songs for Carrie Underwood ("Jesus, Take the Wheel", "Just a Dream", "Get Out of This Town"), Martina McBride ("If I Had Your Name", "You're Not Leavin Me"), LeAnn Rimes ("Long Night", "Save Myself"), and George Canyon ("My Name"). Many of Hank Snow's songs went on to be recorded by the likes of The Rolling Stones, Elvis Presley, and Johnny Cash. Cape Bretoners Allister MacGillivray and Leon Dubinsky have both written songs which, by being covered by so many popular artists and entering the repertoire of so many choirs around the world, have become iconic representations of Nova Scotian style, values and ethos; Dubinsky's pop ballad "We Rise Again" might be called the unofficial anthem of Cape Breton. Music producer Brian Ahern, a Nova Scotian, got his start as music director for CBC television's Singalong Jubilee. He later produced 12 albums for Anne Murray ("Snowbird", "Danny's Song" and "You Won't See Me") and 11 albums for Emmylou Harris, whom he married at his home in Halifax on 9 January 1977. He also produced discs for Johnny Cash, George Jones, Roy Orbison, Glen Campbell, Don Williams, Jesse Winchester and Linda Ronstadt. Sport is an important part of Nova Scotia culture. There are numerous semi-professional, university and amateur sports teams, including the Halifax Mooseheads, 2013 Canadian Hockey League Memorial Cup champions, and the Cape Breton Screaming Eagles, both of the Quebec Major Junior Hockey League. The Halifax Hurricanes of the National Basketball League of Canada also call Nova Scotia home and were the 2016 league champions. Professional soccer came to the province in 2019 in the form of Canadian Premier League club HFX Wanderers FC. The Nova Scotia Open has been a professional golf tournament on the Web.com Tour since 2014. The province has also produced numerous athletes, such as Sidney Crosby (ice hockey), Nathan MacKinnon (ice hockey), Brad Marchand (ice hockey), Colleen Jones (curling), Al MacInnis (ice hockey), TJ Grant (mixed martial arts), Rocky Johnson (wrestling, and father of Dwayne "The Rock" Johnson), George Dixon (boxing) and Kirk Johnson (boxing). 
The achievements of Nova Scotian athletes are presented at the Nova Scotia Sport Hall of Fame. The Minister of Education is responsible for the administration and delivery of education, as defined by the Education Act and other acts relating to colleges, universities and private schools. The powers of the Minister and the Department of Education are defined by the Ministerial regulations and constrained by the Governor-in-Council regulations. All children are legally required to attend school until the age of 16, unless they are home-schooled by a parent. Nova Scotia's public education system is divided into eight regions, including Tri-County (22 schools), Annapolis Valley (42 schools), South Shore (25 schools), Chignecto-Central (67 schools), Halifax (135 schools), Strait (20 schools) and the Cape Breton-Victoria Regional Centre for Education (39 schools); in all, Nova Scotia has more than 450 public schools for children (the regional counts listed here are tallied in the sketch at the end of this section). The public system offers primary to Grade 12. There are also private schools in the province. Public education is administered by seven regional school boards, responsible primarily for English instruction and French immersion, and also province-wide by the Conseil scolaire acadien provincial, which administers French instruction to students whose primary language is French. The Nova Scotia Community College system has 13 campuses around the province. With a focus on training and education, the college was established in 1988 by amalgamating the province's former vocational schools. In addition to the provincial community college system, there are more than 90 registered private colleges in Nova Scotia. Ten universities are also situated in Nova Scotia: Dalhousie University, the University of King's College, Saint Mary's University, Mount Saint Vincent University, NSCAD University, Acadia University, Université Sainte-Anne, Saint Francis Xavier University, Cape Breton University and the Atlantic School of Theology.
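A quick arithmetic check reconciles the school counts above. The sketch below (plain Python, using only the per-region counts listed in this section; note that the text enumerates seven of the eight regions) sums the listed schools and compares the total against the province-wide figure of more than 450:

# Tally of the per-region public school counts listed above.
# The eighth region is not enumerated in the text, so the listed
# counts are expected to fall short of the province-wide total.
schools = {
    "Tri-County": 22,
    "Annapolis Valley": 42,
    "South Shore": 25,
    "Chignecto-Central": 67,
    "Halifax": 135,
    "Strait": 20,
    "Cape Breton-Victoria": 39,
}
listed_total = sum(schools.values())
print(listed_total)          # 350 schools across the seven listed regions
print(450 - listed_total)    # so the unlisted region accounts for 100 or more

On these counts, the seven enumerated regions account for 350 schools, leaving at least 100 to the region not named in the list.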
https://en.wikipedia.org/wiki?curid=21184
Northwest Territories The Northwest Territories (abbr. NT or NWT) is a federal territory of Canada. With a 2016 census population of 41,786, it is the second-largest and the most populous of the three territories in Northern Canada; its estimated population as of 2020 is 44,982. Yellowknife became the territorial capital in 1967, following recommendations by the Carrothers Commission. The Northwest Territories, a portion of the old North-Western Territory, entered the Canadian Confederation on July 15, 1870. Since then, the territory has been divided four times to create new provinces and territories or to enlarge existing ones. Its current borders date from April 1, 1999, when the already-reduced territory was decreased again by the creation of the new territory of Nunavut to the east, via the "Nunavut Act" and the Nunavut Land Claims Agreement. While Nunavut is mostly Arctic tundra, the Northwest Territories has a slightly warmer climate and comprises both boreal forest (taiga) and tundra, and its most northern regions form part of the Canadian Arctic Archipelago. The Northwest Territories is bordered by Canada's two other territories, Nunavut to the east and Yukon to the west, and by the provinces of British Columbia, Alberta, and Saskatchewan to the south; it may also touch Manitoba at a quadripoint to the southeast. The name is descriptive, adopted by the British government during the colonial era to indicate where the territory lay in relation to the rest of Rupert's Land. It is shortened from North-Western Territory, which became the term North-West Territories (see History). In Inuktitut, the Northwest Territories are referred to as Nunatsiaq, "beautiful land". The northernmost region of the territory is home to the Inuvialuit and forms part of Inuit Nunangat, while the southern portion is called Denendeh (an Athabaskan-language word meaning "our land"). Denendeh is the vast Dene country, stretching from central Alaska to Hudson Bay, within which lie the homelands of the numerous Dene nations. There was some discussion of changing the name of the Northwest Territories after the splitting off of Nunavut, possibly to a term from an Indigenous language. One proposal was "Denendeh", as advocated by the former premier Stephen Kakfwi, among others. One of the most popular proposals for a new name, to call the territory "Bob", began as a prank, but for a while it was at or near the top of the public-opinion polls. In the end, a poll conducted prior to division showed that strong support remained for keeping the name "Northwest Territories". This name arguably became more appropriate following division than it had been when the territories extended far into Canada's north-central and northeastern areas. Located in northern Canada, the territory borders Canada's two other territories, Yukon to the west and Nunavut to the east, as well as three provinces: British Columbia to the southwest, and Alberta and Saskatchewan to the south. It possibly meets Manitoba at a quadripoint to the extreme southeast, though surveys have not been completed. Geographical features include Great Bear Lake, the largest lake entirely within Canada, and Great Slave Lake, the deepest body of water in North America, as well as the Mackenzie River and the canyons of the Nahanni National Park Reserve, a national park and UNESCO World Heritage Site. Territorial islands in the Canadian Arctic Archipelago include Banks Island, Borden Island, Prince Patrick Island, and parts of Victoria Island and Melville Island. 
Its highest point is Mount Nirvana, near the border with Yukon. The Northwest Territories extends over a great distance from south to north, producing a wide variation in climate. The southern part of the territory (most of the mainland portion) has a subarctic climate, while the islands and northern coast have a polar climate. Summers in the north are short and cool, with daytime highs of 14–17 degrees Celsius (57–63 °F) and lows of 1–5 degrees Celsius (34–41 °F). Winters are long and harsh, with daytime highs and overnight lows well below freezing and extreme cold on the coldest nights of each year. Extremes are common: summer highs in the south can be hot, and winter temperatures in both the south and the north can fall to severe lows, though milder winter days also occur. Thunderstorms are not rare in the south; in the north they are very rare, but do occur. Tornadoes are extremely rare but have happened, the most notable one striking just outside Yellowknife and destroying a communications tower. The territory has a fairly dry climate, owing to the mountains in the west. About half of the territory is above the tree line, and there are not many trees in most of the eastern areas of the territory or on the northern islands. Prior to the arrival of Europeans, a number of First Nations and Inuit nations occupied the area that became the Northwest Territories. Inuit nations include the Caribou, Central, and Copper nations; First Nations groups include the Beaver, Chipewyan, Dogrib, Nahanni, Sekani, Slavey, and Yellowknives. In 1670, the Hudson's Bay Company (HBC) was formed by royal charter and granted a commercial monopoly over Rupert's Land. The present-day Northwest Territories lay northwest of Rupert's Land and was known as the North-Western Territory; although not formally part of Rupert's Land, the HBC made regular use of the region as part of its trading area. The Treaty of Utrecht saw the British become the only European power with practical access to the North-Western Territory, as France surrendered its claim to the Hudson Bay coast. Europeans visited the region for fur trading and for the exploration of new trade routes, including the Northwest Passage; Arctic expeditions launched in the 19th century include the Coppermine expedition. In 1867, the first Canadian residential school in the region opened at Fort Resolution. The opening of that school was followed by several others across the territory, contributing to the region having the highest percentage of students in residential schools of any area in Canada. The present-day territory came under the authority of the Government of Canada in July 1870, after the Hudson's Bay Company transferred Rupert's Land and the North-Western Territory to the British Crown, which subsequently transferred them to Canada, giving the combined region the name the North-West Territories. 
This immense region comprised all of today's Canada except British Columbia, an early form of Manitoba (a small square area around Winnipeg), the early forms of present-day Ontario and Quebec (the coast of the Great Lakes, the Saint Lawrence River valley and the southern third of modern Quebec), the Maritimes (Nova Scotia, Prince Edward Island and New Brunswick), Newfoundland, the Labrador coast, and the Arctic Islands (except the southern half of Baffin Island). After the 1870 transfer, some of the North-West Territories was whittled away. The province of Manitoba was enlarged in 1881 to a rectangular region composing the modern province's south. By the time British Columbia joined Confederation on July 20, 1871, it had already (in 1866) been granted the portion of the North-Western Territory south of 60 degrees north and west of 120 degrees west, an area that comprised most of the Stickeen Territories. After the North-West Rebellion of 1885, a North-West Territories Council was created in 1887 for regional government of Canada west of the province of Manitoba; the Council was reorganized in 1888 as the Legislative Assembly of the North-West Territories. Frederick Haultain, an Ontario lawyer who practised at Fort Macleod from 1884, became its chairman in 1891 and Premier when the Assembly was reorganized in 1897. The modern provinces of Saskatchewan and Alberta were created in 1905. Contemporary records show Haultain recommended that the NWT become a single province, named Buffalo, but the Canadian government of Sir Wilfrid Laurier acted otherwise. In the meantime, the Province of Ontario was enlarged northwestward in 1882, and Quebec was extended northwards in 1898. Yukon was made a separate territory that year, due to the Klondike Gold Rush, to free the North-West Territories government in Regina from the burden of addressing the problems caused by the sudden boom of population and economic activity and the influx of non-Canadians. One year after the provinces of Alberta and Saskatchewan were created in 1905, the Parliament of Canada renamed the "North-West Territories" the "Northwest Territories", dropping the hyphenated form. Manitoba, Ontario and Quebec acquired the last additions to their modern landmass from the Northwest Territories in 1912. This left only the districts of Mackenzie, Franklin (which absorbed the remnants of Ungava in 1920) and Keewatin within what was then given the name Northwest Territories. In 1925, the boundaries of the Northwest Territories were extended all the way to the North Pole on the sector principle, vastly expanding its territory onto the northern ice cap. Between 1925 and 1999, the Northwest Territories covered a land area larger than that of India. On April 1, 1999, a separate Nunavut territory was formed from the eastern Northwest Territories to represent the Inuit people. The NWT is one of two jurisdictions in Canada, Nunavut being the other, where Aboriginal peoples are in the majority, constituting 50.4% of the population. French was made an official language in 1877 by the territorial government. After a lengthy and bitter debate resulting from a speech from the throne in 1888 by Lieutenant Governor Joseph Royal, the members of the day voted on more than one occasion to nullify that and make English the only language used in the assembly. 
After some conflict with the Confederation government in Ottawa, and a decisive vote on January 19, 1892, the assembly members voted for an English-only territory. Currently, the Northwest Territories' "Official Languages Act" recognizes eleven official languages: Chipewyan, Cree, English, French, Gwich'in, Inuinnaqtun, Inuktitut, Inuvialuktun, North Slavey, South Slavey, and Tłı̨chǫ. NWT residents have a right to use any of these languages in a territorial court and in the debates and proceedings of the legislature. However, laws are legally binding only in their French and English versions, and the NWT government publishes laws and other documents in the territory's other official languages only when the legislature asks it to. Furthermore, access to services in any language is limited to institutions and circumstances where there is significant demand for that language or where it is reasonable to expect it given the nature of the services requested. In practical terms, English-language services are universally available, and there is no guarantee that other languages, including French, will be used by any particular government service, except for the courts. The 2016 census returns showed a population of 41,786; of the 40,565 single-language responses to the census question regarding each inhabitant's mother tongue, English was by far the most commonly reported language. There were also 630 responses of both English and a "non-official language"; 35 of both French and a "non-official language"; 145 of both English and French; and about 400 people who either did not respond to the question, reported multiple non-official languages, or gave some other unenumerable response. The largest religious denominations by number of adherents according to the 2001 census were Roman Catholic with 16,940 (46.7%); the Anglican Church of Canada with 5,510 (14.9%); and the United Church of Canada with 2,230 (6.0%), while a total of 6,465 (17.4%) people stated no religion. As of 2014, there are 33 official communities in the NWT, ranging in size from Yellowknife, with a population of 19,569, to Kakisa, with 36 people. Governance differs from community to community: some are run under various types of First Nations control, while others are designated as a city, town, village or hamlet, but most communities are municipal corporations. Yellowknife is the largest community and has the largest Aboriginal population, 4,520 people (23.4% of the city). However, Behchokǫ̀ (population 1,874) is the largest First Nations community, with 1,696 First Nations people (90.9%), and Inuvik (population 3,243) is the largest Inuvialuit community, with 1,315 Inuvialuit (40.5%). There is one Indian reserve in the NWT, the Hay River Reserve, located on the south shore of the Hay River. The gross domestic product of the Northwest Territories was C$4.856 billion in 2017, and the territory had the highest per capita GDP of all provinces or territories in Canada, C$76,000 in 2009. The NWT's geological resources include gold, diamonds, natural gas and petroleum. British Petroleum (BP) is the only oil company currently producing oil in the territory. NWT diamonds are promoted as an alternative to purchasing blood diamonds; two of the biggest mineral resource companies in the world, BHP Billiton and Rio Tinto, mine many of their diamonds from the NWT. 
In 2010, the NWT accounted for 28.5% of Rio Tinto's total diamond production (3.9 million carats, 17% more than in 2009, from the Diavik Diamond Mine) and 100% of BHP's (3.05 million carats from the EKATI mine). During the winter, many international visitors go to Yellowknife to watch the auroras. Five areas managed by Parks Canada are situated within the territory: Aulavik National Park and Tuktut Nogait National Park lie in the northern part of the Northwest Territories; portions of Wood Buffalo National Park fall within the territory, although most of that park is located in neighbouring Alberta; and Parks Canada also manages two park reserves, the Nááts'ihch'oh National Park Reserve and the Nahanni National Park Reserve. As a territory, the NWT has fewer rights than the provinces. During his term, Premier Kakfwi pushed to have the federal government accord more rights to the territory, including having a greater share of the returns from the territory's natural resources go to the territory. Devolution of powers to the territory was an issue in the 20th general election in 2003, as it has been ever since the territory began electing members in 1881. The Commissioner of the NWT is the chief executive and is appointed by the Governor-in-Council of Canada on the recommendation of the federal Minister of Aboriginal Affairs and Northern Development. The position used to be more administrative and governmental, but with the devolution of more powers to the elected assembly since 1967, it has become symbolic. The Commissioner had full governmental powers until 1980, when the territories were given greater self-government; the Legislative Assembly then began electing a cabinet and a "Government Leader", later known as the Premier. Since 1985 the Commissioner no longer chairs meetings of the Executive Council (or cabinet), and the federal government has instructed commissioners to behave like a provincial lieutenant governor. Unlike lieutenant governors, however, the Commissioner of the Northwest Territories is not a formal representative of the Queen of Canada. Unlike the provincial governments and the government of Yukon, the government of the Northwest Territories has not had political parties, except for the period between 1898 and 1905; it operates instead by consensus government. Its legislature, the Legislative Assembly, is composed of one member elected from each of the nineteen constituencies. After each general election, the new Assembly elects the Premier and the Speaker by secret ballot. Seven MLAs are also chosen as cabinet ministers, with the remainder forming the opposition. The membership of the current Legislative Assembly was set by the 2019 Northwest Territories general election on October 1, 2019; Caroline Cochrane was selected as the new Premier on October 24, 2019. The member of Parliament for the Northwest Territories is Michael McLeod (Liberal Party), and the Commissioner of the Northwest Territories is Margaret Thom. In the Parliament of Canada, the NWT comprises a single Senate division and a single House of Commons electoral district, titled Northwest Territories ("Western Arctic" until 2014). The Northwest Territories is divided into five administrative regions, each with a regional seat, and the Government of the Northwest Territories is organized into a number of departments. Aboriginal issues in the Northwest Territories include the fate of the Dene who, in the 1940s, were employed to carry radioactive uranium ore from the mines on Great Bear Lake. 
Of the more than thirty miners who worked at the Port Radium site, at least fourteen died of various forms of cancer. A study of the community of Deline, "A Village of Widows" by Cindy Kenny-Gilday, indicated that the number of people involved was too small to confirm or deny a link. There has been racial tension based on a history of violent conflict between the Dene and the Inuit, who have recently taken steps towards reconciliation. Land claims in the NWT began with the Inuvialuit Final Agreement, signed on June 5, 1984. It was the first land claim signed in the territory, and the second in Canada. The process culminated in the creation of the Inuit homeland of Nunavut, the result of the Nunavut Land Claims Agreement, the largest land claim in Canadian history. Another land claims agreement with the Tłı̨chǫ people created a region within the NWT called Tli Cho, between Great Bear and Great Slave Lakes, which gives the Tłı̨chǫ control over their own legislative bodies, taxation, resource royalties, and other affairs, though the NWT still maintains control over such areas as health and education. This area includes two of Canada's three diamond mines, at Ekati and Diavik. Among the festivals in the region are the Great Northern Arts Festival, the Snowking Winter Festival, the Folk on the Rocks music festival in Yellowknife, and Rockin the Rocks. Northwest Territories has nine numbered highways. The longest is the Mackenzie Highway, which stretches from Alberta Highway 35's northern terminus at the Alberta–Northwest Territories border on the 60th parallel north to Wrigley, Northwest Territories. Ice roads and winter roads are also prominent and provide road access in winter to towns and mines that would otherwise be fly-in locations. The Yellowknife Highway branches off the Mackenzie Highway and connects it to Yellowknife. The Dempster Highway is the continuation of the Klondike Highway; it starts just west of Dawson City, Yukon, and continues east to Inuvik. As of 2017, the all-season Inuvik-Tuktoyaktuk Highway connects Inuvik to communities along the Arctic Ocean as an extension of the Dempster Highway. Yellowknife did not have all-season road access to the rest of Canada's highway network until the completion of the Deh Cho Bridge in 2012. Prior to that, traffic relied on a ferry service in summer and an ice road in winter to cross the Mackenzie River. This became a problem in spring and fall, when the ice was too thin to carry vehicles but too thick for the ferry to pass, and all goods, from fuel to groceries, had to be airlifted during the transition period. Yellowknife Transit is the public transportation agency in the city, and is the only transit system within the Northwest Territories. Yellowknife Airport is the largest airport in the territory in terms of aircraft movements and passengers. It is the gateway airport to other destinations within the Northwest Territories. As the airport of the territorial capital, it is part of the National Airports System. It is the hub of multiple regional airlines. Major airlines serving destinations within the Northwest Territories include Buffalo Airways, Canadian North, First Air, and North-Wright Airways.
https://en.wikipedia.org/wiki?curid=21186
Nez Perce people The Nez Perce (autonym: Niimíipuu, meaning "the walking people" or "we, the people") are an Indigenous people of the Plateau who have lived on the Columbia River Plateau in the Pacific Northwest region for at least 11,500 years. Members of the Sahaptin language group, the Niimíipuu were the dominant people of the Columbia Plateau for much of that time, especially after acquiring horses, which they bred into the Appaloosa in the 18th century. Prior to "first contact" with Western civilization, the Nimiipuu were economically and culturally influential in trade and war, interacting with other indigenous nations in a vast network stretching from the western shores of Oregon and Washington to the high plains of Montana and the northern Great Basin in southern Idaho and northern Nevada. French explorers and trappers indiscriminately used and popularized the name "Nez Percé" for the Niimíipuu and the nearby Chinook. The name translates as "pierced nose", but only the Chinook used that form of body modification. Today they are a federally recognized tribe, the Nez Perce Tribe of Idaho, and govern their Indian reservation in Idaho through a central government headquartered in Lapwai, Idaho, known as the Nez Perce Tribal Executive Committee (NPTEC). They are one of five federally recognized tribes in the state of Idaho. Some still speak their traditional language, and the Tribe owns and operates two casinos along the Clearwater River in Idaho (in Kamiah and outside Lewiston), health clinics, a police force and court, community centers, salmon fisheries, a radio station, and other enterprises that promote economic and cultural self-determination. Although cut off from most of their horticultural sites throughout the Camas Prairie by an 1863 treaty, confined to reservations in Idaho, Washington and Oklahoma Indian Territory after the Nez Perce War of 1877, and subjected to Dawes Act land allotments in 1887 (today some Nez Perce lease land to farmers or loggers, but the Nez Perce own only 12% of their own reservation), the Nez Perce remain a distinct cultural, political, and economic influence within and outside their reservation. Today, the hatching, harvesting, and eating of salmon remain an important cultural and economic strength of the Nez Perce, through full ownership or co-management of various salmon hatcheries, such as the Kooskia National Fish Hatchery in Kooskia, Idaho, and the Dworshak National Fish Hatchery in Orofino, Idaho. Their name for themselves is "Nimíipuu", meaning "The People" in their language, part of the Sahaptin family. "Nez Percé" is an exonym given by French Canadian fur traders who visited the area regularly in the late 18th century, meaning literally "pierced nose". English-speaking traders and settlers adopted the name in turn. Since the late 20th century, the Nez Perce identify most often as Niimíipuu in Sahaptin. The Lakota/Dakota named them the "Watopala", or "Canoe" people, from "Watopa". However, after Nez Perce became a more common name, they changed it to "Watopahlute". This comes from "pahlute", nasal passage, and is simply a play on words. If translated literally, it would come out as either "Nasal Passage of the Canoe" (Watopa-pahlute) or "Nasal Passage of the Grass" (Wato-pahlute). The Assiniboine called them "Pasú oȟnógA wįcaštA", the Arikara "sinitčiškataríwiš". The tribe also uses the term "Nez Perce", as does the United States Government in its official dealings with them, and as do contemporary historians.
Older historical ethnological works and documents use the French spelling of "Nez Percé", with the diacritic. The original French pronunciation has three syllables. The interpreter of the Lewis and Clark Expedition mistakenly identified this people as the Nez Perce when the team encountered the tribe in 1805. Writing in 1889, anthropologist Alice Fletcher, whom the U.S. government had sent to Idaho to allot the Nez Perce Reservation, explained the mistaken naming. In his journals, William Clark referred to the people as the Chopunnish, a transliteration of a Sahaptin term. According to D.E. Walker in 1998, writing for the Smithsonian, this term is an adaptation of the term "cú·pʼnitpeľu" (the Nez Perce people), formed from "cú·pʼnit" (piercing with a pointed object) and "peľu" (people). By contrast, the "Nez Perce Language Dictionary" analyzes the term "cúpnitpelu" differently from Walker: the prefix "cú-" means "in single file"; the verb "-piní" means "to come out (e.g. of forest, bushes, ice)"; and the suffix "-pelú" means "people or inhabitants of". Together, these three elements, "cú-" + "-piní" + "-pelú", yield "cúpnitpelu", or "the People Walking Single File Out of the Forest". Nez Perce oral tradition indicates the name "Cuupn'itpel'uu" meant "we walked out of the woods or walked out of the mountains" and referred to the time before the Nez Perce had horses. The Nez Perce language, or Niimiipuutímt, is a Sahaptian language related to the several dialects of Sahaptin. The Sahaptian sub-family is one of the branches of the Plateau Penutian family, which in turn may be related to a larger Penutian grouping. The Nez Perce territory at the time of Lewis and Clark (1804–1806) covered parts of present-day Washington, Oregon, Montana, and Idaho, in an area surrounding the Snake (Weyikespe), Grande Ronde, Salmon (Naco’x kuus) ("Chinook Salmon Water") and Clearwater (Koos-Kai-Kai) ("Clear Water") rivers. The tribal area extended from the Bitterroots in the east (the door to the Northwestern Plains of Montana) to the Blue Mountains in the west, between latitudes 45°N and 47°N. In 1800, the Nez Perce had more than 100 permanent villages, ranging from 50 to 600 individuals, depending on the season and social grouping. Archeologists have identified a total of about 300 related sites, including camps and villages, mostly in the Salmon River Canyon. In 1805, the Nez Perce were the largest tribe on the Columbia River Plateau, with a population of about 6,000. By the beginning of the 20th century, the Nez Perce had declined to about 1,800 due to epidemics, conflicts with non-Indians, and other factors. A total of 3,499 Nez Perce were counted in the 2010 Census. Like other Plateau tribes, the Nez Perce had seasonal villages and camps to take advantage of natural resources throughout the year. Their migration followed a recurring pattern from permanent winter villages through several temporary camps, nearly always returning to the same locations each year. The Nez Perce traveled via the Lolo Trail (Salish: Naptnišaqs – "Nez Perce Trail") (Khoo-say-ne-ise-kit) as far east as the Plains (Khoo-sayn / Kuseyn) ("Buffalo country") of Montana to hunt buffalo (Qoq'a lx), and as far west as the Pacific Coast (’Eteyekuus) ("Big Water").
Before the 1957 construction of The Dalles Dam, which flooded this area, Celilo Falls (Silayloo) was a favored location on the Columbia River (Xuyelp) ("The Great River") for salmon (lé'wliks) fishing. The Nez Perce had many allies and trading partners among neighboring peoples, but also enemies and long-standing antagonists. To the north of them lived the Coeur d’Alene (Schitsu'umsh) (’Iskíicu’mix), the Spokane (Sqeliz) (Heyéeynimuu), and, further north, the Kalispel (Ql̓ispé) (Qem’éespel’uu, both meaning "Camas People"), Colville (Páapspaloo) and Kootenay / Kootenai (Ktunaxa) (Kuuspel’úu); to the northwest lived the Palus (Pelúucpuu) and to the west the Cayuse (Lik-si-yu) (Weyíiletpuu – "Ryegrass People"); further west were found the Umatilla (Imatalamłáma) (Hiyówatalampoo), Walla Walla, Wasco (Wecq’úupuu) and Sk'in (Tike’éspel’uu), and northwest of the latter various Yakama bands (Lexéyuu); to the south lived the Snake Indians (various Northern Paiute (Numu) bands (Hey’ǘuxcpel’uu) in the southwest and Bannock (Nimi Pan a'kwati)–Northern Shoshone (Newe) bands (Tiwélqe) in the southeast); to the east lived the Lemhi Shoshone (Lémhaay) and, north of them, the Bitterroot Salish / Flathead (Seliš) (Séelix); further east and northeast on the Northern Plains were the Crow (Apsáalooke) (’Isúuxe) and two powerful alliances – the Iron Confederacy (Nehiyaw-Pwat) (named after the dominating Plains and Woods Cree (Paskwāwiyiniwak and Sakāwithiniwak) and Assiniboine (Nakoda) (Wihnen’íipel’uu)), an alliance of northern plains Native American nations based around the fur trade, which later included the Stoney (Nakoda), Western Saulteaux / Plains Ojibwe (Bungi or Nakawē), and Métis, and the Blackfoot Confederacy (Niitsitapi or Siksikaitsitapi) (’Isq’óyxnix) (composed of three Blackfoot-speaking peoples – the Piegan or Peigan (Piikáni), the Kainai or Bloods (Káínaa), and the Siksika or Blackfoot (Siksikáwa) – later joined by the unrelated Sarcee (Tsuu T'ina) and, for a time, by the Gros Ventre or Atsina (A'aninin)). Because of the large amount of intermarriage between Nez Perce bands and neighboring tribes or bands to forge alliances and peace (often living together in mixed bilingual villages), the following bands were also counted as Nez Perce (today they are viewed as linguistically and culturally closely related, but separate, ethnic groups): The semi-sedentary Nez Perce were hunter-gatherers without agriculture, living in a society in which most or all food was obtained by foraging (collecting wild plants and roots and pursuing wild animals). They depended on hunting, fishing, and the gathering of wild roots and berries. Nez Perce people historically depended on various Pacific salmon and Pacific trout for their food: Chinook salmon or "nacoox" (Oncorhynchus tshawytscha) were eaten the most, but other species, such as Pacific lamprey (Entosphenus tridentatus or Lampetra tridentata) and chiselmouth, were also taken. Other important fishes included the Sockeye salmon (Oncorhynchus nerka), Silver salmon or "ka'llay" (Oncorhynchus kisutch), Chum salmon or dog salmon or "ka'llay" (Oncorhynchus keta), Mountain whitefish or "ci'mey" (Prosopium williamsoni), White sturgeon (Acipenser transmontanus), White sucker or "mu'quc" (Catostomus commersonii), and varieties of trout – West Coast steelhead or "heyey" (Oncorhynchus mykiss), brook trout or "pi'ckatyo" (Salvelinus fontinalis), bull trout or "i'slam" (Salvelinus confluentus), and Cutthroat trout or "wawa'lam" (Oncorhynchus clarkii).
Prior to contact with Europeans, the Nez Perce's traditional hunting and fishing areas spanned from the Cascade Range in the west to the Bitterroot Mountains in the east. Historically, in late May and early June, Nez Perce villagers gathered at communal fishing sites to trap eels, steelhead, and chinook salmon, or to haul in fish with large dip nets. Fishing took place throughout the summer and fall, first on the lower streams and then on the higher tributaries, and catches also included salmon, sturgeon, whitefish, suckers, and varieties of trout. Most of the supplies for winter use came from a second run in the fall, when large numbers of sockeye, silver, and dog salmon appeared in the rivers. Fishing is traditionally an important ceremonial and commercial activity for the Nez Perce tribe. Today Nez Perce fishers participate in tribal fisheries in the mainstem Columbia River between the Bonneville and McNary dams. The Nez Perce also fish for spring and summer Chinook salmon and rainbow trout/steelhead in the Snake River and its tributaries. The Nez Perce tribe runs the Nez Perce Tribal Hatchery on the Clearwater River, as well as several satellite hatchery programs. The first fishing of the season was accompanied by prescribed rituals and a ceremonial feast known as "kooyit". Thanksgiving was offered to the Creator and to the fish for having returned and given themselves to the people as food. In this way, it was hoped that the fish would return the next year. Like salmon, plants contributed to traditional Nez Perce culture in both material and spiritual dimensions. Aside from fish and game, plant foods provided over half of the dietary calories, with winter survival depending largely on dried roots, especially kouse, or "qáamsit" (when fresh) and "qáaws" (when peeled and dried) (Lomatium, especially Lomatium cous), and camas, or "qém'es" (Nez Perce: "sweet") (Camassia quamash). The first was roasted in pits, while the other was ground in mortars and molded into cakes for future use; both plants were traditionally important food and trade items. Women were primarily responsible for gathering and preparing these root crops. Camas bulbs were gathered in the region between the Salmon and Clearwater river drainages. Techniques for preparing and storing winter foods enabled people to survive colder winters with little or no fresh food. Favorite fruits dried for winter were serviceberries or "kel" (Amelanchier alnifolia or Saskatoon berry), black huckleberries or "cemi'tk" (Vaccinium membranaceum), red elderberries or "mi'ttip" (Sambucus racemosa var. melanocarpa), and chokecherries or "ti'ms" (Prunus virginiana var. melanocarpa). Nez Perce textiles were made primarily from dogbane or "qeemu" (Apocynum cannabinum or Indian hemp), tules or "to'ko" (Schoenoplectus acutus var. acutus), and western redcedar or "tala'tat" (Thuja plicata). The most important industrial woods were redcedar, ponderosa pine or "la'qa" (Pinus ponderosa), Douglas fir or "pa'ps" (Pseudotsuga menziesii), sandbar willow or "tax's" (Salix exigua), and hard woods such as Pacific yew or "ta'mqay" (Taxus brevifolia) and syringa or "sise'qiy" (Philadelphus lewisii or Indian arrowwood).
Many fishes and plants important to Nez Perce culture are today state symbols: the black huckleberry or "cemi'tk" is the official state fruit and the Indian arrowwood or "sise'qiy" the state flower of Idaho; the Douglas fir or "pa'ps" is the state tree of Oregon and the ponderosa pine or "la'qa" that of Montana; the Chinook salmon is the state fish of Oregon, the cutthroat trout or "wawa'lam" of Idaho, Montana and Wyoming, and the West Coast steelhead or "heyey" of Washington. The Nez Perce believed in spirits called "weyekins" (Wie-a-kins) which would, they thought, offer "a link to the invisible world of spiritual power". The weyekin would protect one from harm and become a personal guardian spirit. To receive a weyekin, a seeker would go to the mountains alone on a vision quest. This included fasting and meditation over several days. While on the quest, the individual might receive a vision of a spirit, which would take the form of a mammal or bird. This vision could appear physically or in a dream or trance. The weyekin was to bestow the animal's powers on its bearer; for example, a deer might give its bearer swiftness. A person's weyekin was very personal. It was rarely shared with anyone and was contemplated in private. The weyekin stayed with the person until death. Helen Hunt Jackson, author of the 1881 book "A Century of Dishonor", refers to the Nez Perce as "the richest, noblest, and most gentle" of Indian peoples, as well as the most industrious. The museum at the Nez Perce National Historical Park, headquartered in Spalding, Idaho, and managed by the National Park Service, includes a research center, archives, and library. Historical records are available for on-site study and interpretation of Nez Perce history and culture. The park includes 38 sites associated with the Nez Perce in the states of Idaho, Montana, Oregon, and Washington, many of which are managed by local and state agencies. In 1805 William Clark was the first known Euro-American to meet any of the tribe, excluding the aforementioned French Canadian traders. While he, Meriwether Lewis and their men were crossing the Bitterroot Mountains, they ran low on food, and Clark took six hunters and hurried ahead to hunt. On September 20, 1805, near the western end of the Lolo Trail, he found a small camp at the edge of the camas-digging ground now called Weippe Prairie. The explorers were favorably impressed by the Nez Perce whom they met. Preparing to make the remainder of their journey to the Pacific by boats on rivers, they entrusted the keeping of their horses until they returned to "2 brothers and one son of one of the Chiefs." One of these Indians was "Walammottinin" (meaning "Hair Bunched and Tied"), more commonly known as Twisted Hair. He was the father of Chief Lawyer, who by 1877 was a prominent member of the "Treaty" faction of the tribe. The Nez Perce were generally faithful to the trust; the party recovered their horses without serious difficulty when they returned. Recollecting the Nez Perce encounter with the Lewis and Clark party, in 1889 anthropologist Alice Fletcher wrote that "the Lewis and Clark explorers were the first white men that many of the people had ever seen and the women thought them beautiful." She wrote that the Nez Perce "were kind to the tired and hungry party.
They furnished fresh horses and dried meat and fish with wild potatoes and other roots which were good to eat, and the refreshed white men went further on, westward, leaving their bony, wornout horses for the Indians to take care of and have fat and strong when Lewis and Clark should come back on their way home." On their return trip they arrived at the Nez Perce encampment the following spring, again hungry and exhausted. The tribe constructed a large tent for them and again fed them. Desiring fresh red meat, the party offered an exchange for a Nez Perce horse. Quoting from the Lewis and Clark diary, Fletcher writes, "The hospitality of the Chiefs was offended at the idea of an exchange. He observed that his people had an abundance of young horses and that if we were disposed to use that food, we might have as many as we wanted." The party stayed with the Nez Perce for a month before moving on. The Nez Perce were one of the tribal nations at the Walla Walla Council (1855) (along with the Cayuse, Umatilla, Walla Walla, and Yakama), which signed the Treaty of Walla Walla. Under pressure from European Americans, in the late 19th century the Nez Perce split into two groups: one side accepted the coerced relocation to a reservation and the other refused to give up their fertile land in Idaho and Oregon. Those willing to go to a reservation made a treaty in 1877. The flight of the non-treaty Nez Perce began on June 15, 1877, with Chief Joseph, Looking Glass, White Bird, Ollokot, Lean Elk (Poker Joe) and Toohoolhoolzote leading 2,900 men, women and children in an attempt to reach a peaceful sanctuary. They intended to seek shelter with their allies the Crow but, upon the Crow's refusal to help, the Nez Perce tried to reach the camp in Canada of Lakota Chief Sitting Bull, who had migrated there instead of surrendering after the Indian victory at the Battle of the Little Bighorn. The Nez Perce were pursued by over 2,000 soldiers of the U.S. Army on an epic flight to freedom across four states and multiple mountain ranges. The 800 Nez Perce warriors defeated or held off the pursuing troops in 18 battles, skirmishes, and engagements. More than 300 US soldiers and 1,000 Nez Perce (including women and children) were killed in these conflicts. A majority of the surviving Nez Perce were finally forced to surrender on October 5, 1877, after the Battle of the Bear Paw Mountains in Montana, near the Canada–US border. Chief Joseph surrendered to General Oliver O. Howard of the U.S. Cavalry. During the surrender negotiations, Chief Joseph sent a message, usually described as a speech, to the US soldiers. It has become renowned as one of the greatest American speeches: "...Hear me, my chiefs, I am tired. My heart is sick and sad. From where the sun now stands, I will fight no more forever." The route of the Nez Perce flight is preserved by the Nez Perce National Historic Trail, and the annual Cypress Hills ride in June commemorates the Nez Perce people's attempt to escape to Canada. In 1994 the Nez Perce tribe began a breeding program, based on crossbreeding the Appaloosa and a Central Asian breed called the Akhal-Teke, to produce what they called the Nez Perce Horse. They wanted to restore part of their traditional horse culture, in which they had conducted selective breeding of their horses, long considered a marker of wealth and status, and had trained their members in a high quality of horsemanship.
Social disruption due to reservation life and assimilationist pressure from Americans and the government resulted in the destruction of their horse culture in the 19th century. The 20th-century breeding program was financed by the United States Department of Health and Human Services, the Nez Perce tribe, and the nonprofit First Nations Development Institute, which promotes businesses in Native American country that reflect the values and traditions of the peoples. The Nez Perce Horse breed is noted for its speed. The current tribal lands consist of a reservation in north-central Idaho, primarily in the Camas Prairie region south of the Clearwater River, in parts of four counties. In descending order of surface area, the counties are Nez Perce, Lewis, Idaho, and Clearwater. The reservation's population at the 2000 census was 17,959. Due to tribal loss of lands, the population on the reservation is predominantly white, nearly 90% in 1988. The largest community is the city of Orofino, near its northeast corner. Lapwai is the seat of tribal government, and it has the highest percentage of Nez Perce residents, at about 81.4 percent. As with the opening of Native American lands in Oklahoma, where non-natives were allowed to acquire "surplus" land after households had received their plots, the U.S. government opened the Nez Perce reservation for general settlement on November 18, 1895. The proclamation had been signed less than two weeks earlier by President Grover Cleveland. Thousands rushed to grab land on the reservation, staking out their claims even on land owned by Nez Perce families. In addition, the Colville Indian Reservation in eastern Washington is home to the Joseph band of Nez Perce.
https://en.wikipedia.org/wiki?curid=21187
Neolithic The Neolithic (also known as the "New Stone Age"), the final division of the Stone Age, began about 12,000 years ago when the first developments of farming appeared in the Epipalaeolithic Near East, and later in other parts of the world. The Neolithic division lasted (in that part of the world) until the transitional period of the Chalcolithic from about 6,500 years ago (4500 BC), marked by the development of metallurgy, leading up to the Bronze Age and Iron Age. In other places the Neolithic lasted longer. In Northern Europe, the Neolithic lasted until about 1700 BC, while in China it extended until 1200 BC. Other parts of the world (including the Americas and Oceania) remained broadly in the Neolithic stage of development until European contact. The Neolithic comprises a progression of behavioral and cultural characteristics and changes, including the use of wild and domestic crops and of domesticated animals. The term "Neolithic" derives from the Greek "néos", "new", and "líthos", "stone", literally meaning "New Stone Age". The term was coined by Sir John Lubbock in 1865 as a refinement of the three-age system. Following the ASPRO chronology, the Neolithic started around 10,200 BC in the Levant, arising from the Natufian culture, when pioneering use of wild cereals evolved into early farming. The Natufian period or "proto-Neolithic" lasted from 12,500 to 9,500 BC, and is taken to overlap with the Pre-Pottery Neolithic A (PPNA) of 10,200–8800 BC. As the Natufians had become dependent on wild cereals in their diet, and a sedentary way of life had begun among them, the climatic changes associated with the Younger Dryas (about 10,000 BC) are thought to have forced people to develop farming. By 10,200–8,800 BC farming communities had arisen in the Levant and spread to Asia Minor, North Africa and North Mesopotamia. Mesopotamia is the site of the earliest developments of the Neolithic Revolution from around 10,000 BC. Early Neolithic farming was limited to a narrow range of plants, both wild and domesticated (including einkorn wheat, millet and spelt), and to the keeping of dogs, sheep and goats. By about 6900–6400 BC, it included domesticated cattle and pigs, the establishment of permanently or seasonally inhabited settlements, and the use of pottery. Not all of these cultural elements characteristic of the Neolithic appeared everywhere in the same order: the earliest farming societies in the Near East did not use pottery. In other parts of the world, such as Africa, South Asia and Southeast Asia, independent domestication events led to their own regionally distinctive Neolithic cultures, which arose completely independently of those in Europe and Southwest Asia. Early Japanese societies and other East Asian cultures used pottery "before" developing agriculture. In the Middle East, cultures identified as Neolithic began appearing in the 10th millennium BC. Early development occurred in the Levant (e.g. Pre-Pottery Neolithic A and Pre-Pottery Neolithic B) and from there spread eastwards and westwards. Neolithic cultures are also attested in southeastern Anatolia and northern Mesopotamia by around 8000 BC. The prehistoric Beifudi site near Yixian in Hebei Province, China, contains relics of a culture contemporaneous with the Cishan and Xinglongwa cultures of about 6000–5000 BC, Neolithic cultures east of the Taihang Mountains, filling in an archaeological gap between the two Northern Chinese cultures.
The collection of Neolithic findings at the site encompasses two phases. The Neolithic 1 (PPNA) period began roughly around 10,000 BC in the Levant. A temple area in southeastern Turkey at Göbekli Tepe, dated to around 9500 BC, may be regarded as the beginning of the period. This site was developed by nomadic hunter-gatherer tribes, as evidenced by the lack of permanent housing in the vicinity, and may be the oldest known human-made place of worship. At least seven stone circles contain limestone pillars carved with animals, insects, and birds. Stone tools were used by perhaps hundreds of people to create the pillars, which might have supported roofs. Other early PPNA sites dating to around 9500–9000 BC have been found in Tell es-Sultan (ancient Jericho), Israel (notably Ain Mallaha, Nahal Oren, and Kfar HaHoresh), Gilgal in the Jordan Valley, and Byblos, Lebanon. The start of Neolithic 1 overlaps the Tahunian and Heavy Neolithic periods to some degree. The major advance of Neolithic 1 was true farming. In the proto-Neolithic Natufian cultures, wild cereals were harvested, and perhaps early seed selection and re-seeding occurred. The grain was ground into flour. Emmer wheat was domesticated, and animals were herded and domesticated (animal husbandry and selective breeding). In 2006, remains of figs were discovered in a house in Jericho dated to 9400 BC. The figs are of a mutant variety that cannot be pollinated by insects, and therefore the trees can only reproduce from cuttings. This evidence suggests that figs were the first cultivated crop and that their cultivation marks the invention of the technology of farming. This occurred centuries before the first cultivation of grains. Settlements became more permanent, with circular houses, much like those of the Natufians, with single rooms. However, these houses were for the first time made of mudbrick. The settlement had a surrounding stone wall and perhaps a stone tower (as in Jericho). The wall served as protection from nearby groups, as protection from floods, or to keep animals penned. Some of the enclosures also suggest grain and meat storage. The Neolithic 2 (PPNB) began around 8800 BC according to the ASPRO chronology in the Levant (Jericho, West Bank). As with the PPNA dates, there are two versions from the same laboratories noted above. This system of terminology, however, is not convenient for southeast Anatolia and settlements of the middle Anatolia basin. A settlement of 3,000 inhabitants, 'Ain Ghazal, was found on the outskirts of Amman, Jordan. Considered one of the largest prehistoric settlements in the Near East, it was continuously inhabited from approximately 7250 BC to approximately 5000 BC. Settlements have rectangular mud-brick houses where the family lived together in single or multiple rooms. Burial findings suggest an ancestor cult where people preserved skulls of the dead, which were plastered with mud to make facial features. The rest of the corpse could have been left outside the settlement to decay until only the bones were left, then the bones were buried inside the settlement underneath the floor or between houses. Work at the site of 'Ain Ghazal in Jordan has indicated a later Pre-Pottery Neolithic C period.
Juris Zarins has proposed that a Circum-Arabian Nomadic Pastoral Complex developed in the period from the climatic crisis of 6200 BC, partly as a result of an increasing emphasis in PPNB cultures upon domesticated animals, and a fusion with Harifian hunter-gatherers in the Southern Levant, with affiliate connections with the cultures of Fayyum and the Eastern Desert of Egypt. Cultures practicing this lifestyle spread down the Red Sea shoreline and moved east from Syria into southern Iraq. The Neolithic 3 (PN) began around 6400 BC in the Fertile Crescent. By then distinctive cultures had emerged, with pottery like the Halafian (Turkey, Syria, Northern Mesopotamia) and Ubaid (Southern Mesopotamia). This period has been further divided into PNA (Pottery Neolithic A) and PNB (Pottery Neolithic B) at some sites. The Chalcolithic (Stone-Bronze) period began about 4500 BC; the Bronze Age then began about 3500 BC, replacing the Neolithic cultures. Around 10,000 BC the first fully developed Neolithic cultures belonging to the phase Pre-Pottery Neolithic A (PPNA) appeared in the Fertile Crescent. Around 10,700–9400 BC a settlement was established in Tell Qaramel, north of Aleppo. The settlement included two temples dating to 9650 BC. Around 9000 BC during the PPNA, one of the world's first towns, Jericho, appeared in the Levant. It was surrounded by a stone wall and contained a population of 2,000–3,000 people and a massive stone tower. Around 6400 BC the Halaf culture appeared in Syria and Northern Mesopotamia. In 1981 a team of researchers from the Maison de l'Orient et de la Méditerranée, including Jacques Cauvin and Olivier Aurenche, divided Near East Neolithic chronology into ten periods (0 to 9) based on social, economic and cultural characteristics. In 2002 Danielle Stordeur and Frédéric Abbès advanced this system with a division into five periods. They also advanced the idea of a transitional stage between the PPNA and PPNB between 8800 and 8600 BC at sites like Jerf el Ahmar and Tell Aswad. On the alluvial plains of Sumer and Elam, low rainfall made irrigation systems necessary; the Ubaid culture developed there from 6900 BC. Domestication of sheep and goats reached Egypt from the Near East possibly as early as 6000 BC. Graeme Barker states "The first indisputable evidence for domestic plants and animals in the Nile valley is not until the early fifth millennium BC in northern Egypt and a thousand years later further south, in both cases as part of strategies that still relied heavily on fishing, hunting, and the gathering of wild plants" and suggests that these subsistence changes were not due to farmers migrating from the Near East but were an indigenous development, with cereals either indigenous or obtained through exchange. Other scholars argue that the primary stimulus for agriculture and domesticated animals (as well as mud-brick architecture and other Neolithic cultural features) in Egypt was from the Middle East. In southeast Europe agrarian societies first appeared in the 7th millennium BC, attested by one of the earliest farming sites of Europe, discovered in Vashtëmi, southeastern Albania and dating back to 6500 BC. In northwest Europe the Neolithic arrived much later, typically lasting just under 3,000 years, from c. 4500 BC to 1700 BC. Anthropomorphic figurines have been found in the Balkans from 6000 BC, and in Central Europe by around 5800 BC (La Hoguette).
Among the earliest cultural complexes of this area are the Sesklo culture in Thessaly, which later expanded in the Balkans, giving rise to Starčevo-Körös (Cris), Linearbandkeramik, and Vinča. Through a combination of cultural diffusion and migration of peoples, the Neolithic traditions spread west and northwards to reach northwestern Europe by around 4500 BC. The Vinča culture may have created the earliest system of writing, the Vinča signs, though archaeologist Shan Winn believes they most likely represented pictograms and ideograms rather than a truly developed form of writing. The Cucuteni-Trypillian culture built enormous settlements in Romania, Moldova and Ukraine from 5300 to 2300 BC. The megalithic temple complexes of Ġgantija on the Mediterranean island of Gozo (in the Maltese archipelago) and of Mnajdra (Malta) are notable for their gigantic Neolithic structures, the oldest of which date back to around 3600 BC. The Hypogeum of Ħal-Saflieni, Paola, Malta, is a subterranean structure excavated around 2500 BC; originally a sanctuary, it became a necropolis, the only prehistoric underground temple in the world, and shows a degree of artistry in stone sculpture unique in prehistory to the Maltese islands. After 2500 BC, these islands were depopulated for several decades until the arrival of a new influx of Bronze Age immigrants, a culture that cremated its dead and introduced smaller megalithic structures called dolmens to Malta. In most cases these contain small chambers, with the cover made of a large slab placed on upright stones. They are claimed to belong to a population different from that which built the previous megalithic temples. It is presumed the population arrived from Sicily because of the similarity of Maltese dolmens to some small constructions found there. Settled life, encompassing the transition from foraging to farming and pastoralism, began in South Asia in the region of Balochistan, Pakistan, around 7000 BC. At the site of Mehrgarh, Balochistan, the domestication of wheat and barley can be documented, rapidly followed by that of goats, sheep, and cattle. In April 2006, it was announced in the scientific journal "Nature" that the oldest (and first "early Neolithic") evidence for the drilling of teeth "in vivo" (using bow drills and flint tips) had been found in Mehrgarh. In South India, the Neolithic began by 6500 BC and lasted until around 1400 BC, when the Megalithic transition period began. The South Indian Neolithic is characterized by ash mounds from 2500 BC in the Karnataka region, which expanded later to Tamil Nadu. In East Asia, the earliest sites include the Nanzhuangtou culture around 9500–9000 BC, Pengtoushan culture around 7500–6100 BC, and Peiligang culture around 7000–5000 BC. The 'Neolithic' (defined in this paragraph as using polished stone implements) remains a living tradition in small and extremely remote and inaccessible pockets of West Papua (Indonesian New Guinea). Polished stone adzes and axes are still used in areas where the availability of metal implements is limited. This is likely to cease altogether in the next few years as the older generation dies off and steel blades and chainsaws prevail. In 2012, news was released about a new farming site discovered in Munam-ri, Goseong, Gangwon Province, South Korea, which may be the earliest farmland known to date in east Asia.
"No remains of an agricultural field from the Neolithic period have been found in any East Asian country before, the institute said, adding that the discovery reveals that the history of agricultural cultivation at least began during the period on the Korean Peninsula". The farm was dated between 3600 and 3000 BC. Pottery, stone projectile points, and possible houses were also found. "In 2002, researchers discovered prehistoric earthenware, jade earrings, among other items in the area". The research team will perform accelerator mass spectrometry (AMS) dating to retrieve a more precise date for the site. In Mesoamerica, a similar set of events (i.e., crop domestication and sedentary lifestyles) occurred by around 4500 BC, but possibly as early as 11,000–10,000 BC. These cultures are usually not referred to as belonging to the Neolithic; in America different terms are used such as Formative stage instead of mid-late Neolithic, Archaic Era instead of Early Neolithic and Paleo-Indian for the preceding period. The Formative stage is equivalent to the Neolithic Revolution period in Europe, Asia, and Africa. In the southwestern United States it occurred from 500 to 1200 AD when there was a dramatic increase in population and development of large villages supported by agriculture based on dryland farming of maize, and later, beans, squash, and domesticated turkeys. During this period the bow and arrow and ceramic pottery were also introduced. In later periods cities of considerable size developed, and some metallurgy by 700 BCE. Australia, in contrast to New Guinea, has generally been held not to have had a Neolithic period, with a hunter-gatherer lifestyle continuing until the arrival of Europeans. This view can be challenged in terms of the definition of agriculture, but "Neolithic" remains a rarely used and not very useful concept in discussing Australian prehistory. During most of the Neolithic age of Eurasia, people lived in small tribes composed of multiple bands or lineages. There is little scientific evidence of developed social stratification in most Neolithic societies; social stratification is more associated with the later Bronze Age. Although some late Eurasian Neolithic societies formed complex stratified chiefdoms or even states, generally states evolved in Eurasia only with the rise of metallurgy, and most Neolithic societies on the whole were relatively simple and egalitarian. Beyond Eurasia, however, states were formed during the local Neolithic in three areas, namely in the Preceramic Andes with the Norte Chico Civilization, Formative Mesoamerica and Ancient Hawaiʻi. However, most Neolithic societies were noticeably more hierarchical than the Upper Paleolithic cultures that preceded them and hunter-gatherer cultures in general. The domestication of large animals (c. 8000 BC) resulted in a dramatic increase in social inequality in most of the areas where it occurred; New Guinea being a notable exception. Possession of livestock allowed competition between households and resulted in inherited inequalities of wealth. Neolithic pastoralists who controlled large herds gradually acquired more livestock, and this made economic inequalities more pronounced. 
However, evidence of social inequality is still disputed, as settlements such as Çatalhöyük reveal a striking lack of difference in the size of homes and burial sites, suggesting a more egalitarian society with no evidence of the concept of capital, although some homes do appear slightly larger or more elaborately decorated than others. Families and households were still largely independent economically, and the household was probably the center of life. However, excavations in Central Europe have revealed that early Neolithic Linear Ceramic cultures ("Linearbandkeramik") were building large arrangements of circular ditches between 4800 and 4600 BC. These structures (and their later counterparts such as causewayed enclosures, burial mounds, and henges) required considerable time and labour to construct, which suggests that some influential individuals were able to organise and direct human labour, though non-hierarchical and voluntary work remain possibilities. There is a large body of evidence for fortified settlements at "Linearbandkeramik" sites along the Rhine, as at least some villages were fortified for some time with a palisade and an outer ditch. Settlements with palisades and weapon-traumatized bones, such as those found at the Talheim Death Pit, have been discovered and demonstrate that "...systematic violence between groups" and warfare were probably much more common during the Neolithic than in the preceding Paleolithic period. This supplanted an earlier view of the Linear Pottery Culture as living a "peaceful, unfortified lifestyle". Control of labour and inter-group conflict is characteristic of tribal groups with social rank that are headed by a charismatic individual, either a 'big man' or a proto-chief, functioning as a lineage-group head. Whether a non-hierarchical system of organization existed is debatable, and there is no evidence that explicitly suggests that Neolithic societies functioned under any dominating class or individual, as was the case in the chiefdoms of the European Early Bronze Age. Theories to explain the apparent egalitarianism of Neolithic (and Paleolithic) societies have arisen, notably the Marxist concept of primitive communism. The shelter of early people changed dramatically from the Upper Paleolithic to the Neolithic era. In the Paleolithic, people did not normally live in permanent constructions. In the Neolithic, mud-brick houses coated with plaster began to appear. The growth of agriculture made permanent houses possible. Doorways were made on the roof, with ladders positioned both on the inside and outside of the houses. The roof was supported by beams from the inside. The rough ground was covered by platforms, mats, and skins on which residents slept. Stilt-house settlements were common in the Alpine and Pianura Padana (Terramare) region. Remains have been found at the Ljubljana Marshes in Slovenia and at the Mondsee and Attersee lakes in Upper Austria, for example. A significant and far-reaching shift in human subsistence and lifestyle was to be brought about in areas where crop farming and cultivation were first developed: the previous reliance on an essentially nomadic hunter-gatherer subsistence technique or pastoral transhumance was at first supplemented, and then increasingly replaced by, a reliance upon the foods produced from cultivated lands.
These developments are also believed to have greatly encouraged the growth of settlements, since it may be supposed that the increased need to spend more time and labor tending crop fields required more localized dwellings. This trend would continue into the Bronze Age, eventually giving rise to permanently settled farming towns, and later cities and states whose larger populations could be sustained by the increased productivity from cultivated lands. The profound differences in human interactions and subsistence methods associated with the onset of early agricultural practices in the Neolithic have been called the "Neolithic Revolution", a term coined in the 1920s by the Australian archaeologist Vere Gordon Childe. One potential benefit of the development and increasing sophistication of farming technology was the possibility of producing surplus crop yields, in other words, food supplies in excess of the immediate needs of the community. Surpluses could be stored for later use, or possibly traded for other necessities or luxuries. Agricultural life afforded securities that nomadic life could not, and sedentary farming populations grew faster than nomadic ones. However, early farmers were also adversely affected in times of famine, such as may be caused by drought or pests. In instances where agriculture had become the predominant way of life, the sensitivity to these shortages could be particularly acute, affecting agrarian populations to an extent that otherwise may not have been routinely experienced by prior hunter-gatherer communities. Nevertheless, agrarian communities generally proved successful, and their growth and the expansion of territory under cultivation continued. Another significant change undergone by many of these newly agrarian communities was one of diet. Pre-agrarian diets varied by region, season, available local plant and animal resources, and the degree of pastoralism and hunting. The post-agrarian diet was restricted to a limited package of successfully cultivated cereal grains, plants, and, to a variable extent, domesticated animals and animal products. Supplementation of the diet by hunting and gathering was precluded to variable degrees by the increase in population above the carrying capacity of the land and by high sedentary local population concentrations. In some cultures, there would have been a significant shift toward increased starch and plant protein. The relative nutritional benefits and drawbacks of these dietary changes and their overall impact on early societal development are still debated. In addition, increased population density, decreased population mobility, increased continuous proximity to domesticated animals, and continuous occupation of comparatively population-dense sites would have altered sanitation needs and patterns of disease. The identifying characteristic of Neolithic technology is the use of polished or ground stone tools, in contrast to the flaked stone tools used during the Paleolithic era. Neolithic people were skilled farmers, manufacturing a range of tools necessary for the tending, harvesting and processing of crops (such as sickle blades and grinding stones) and food production (e.g. pottery, bone implements). They were also skilled manufacturers of a range of other types of stone tools and ornaments, including projectile points, beads, and statuettes. But what allowed forest clearance on a large scale was, above all other tools, the polished stone axe.
Together with the adze, used for fashioning wood for shelter, structures and canoes, for example, this enabled them to exploit their newly won farmland. Neolithic peoples in the Levant, Anatolia, Syria, northern Mesopotamia and Central Asia were also accomplished builders, utilizing mud-brick to construct houses and villages. At Çatalhöyük, houses were plastered and painted with elaborate scenes of humans and animals. In Europe, long houses built from wattle and daub were constructed. Elaborate tombs were built for the dead. These tombs are particularly numerous in Ireland, where many thousands still exist. Neolithic people in the British Isles built long barrows and chamber tombs for their dead, as well as causewayed camps, henges, flint mines and cursus monuments. It was also important to find ways of preserving food for future months, such as fashioning relatively airtight containers and using substances like salt as preservatives. The peoples of the Americas and the Pacific mostly retained the Neolithic level of tool technology until the time of European contact. Exceptions include copper hatchets and spearheads in the Great Lakes region. Most clothing appears to have been made of animal skins, as indicated by finds of large numbers of bone and antler pins that are ideal for fastening leather. Wool cloth and linen might have become available during the later Neolithic, as suggested by finds of perforated stones that (depending on size) may have served as spindle whorls or loom weights. The clothing worn in the Neolithic Age might be similar to that worn by Ötzi the Iceman, although he was not Neolithic (since he belonged to the later Copper Age). Neolithic human settlements include: The world's oldest known engineered roadway, the Sweet Track in England, dates from 3800 BC, and the world's oldest freestanding structure is the Neolithic temple of Ġgantija in Gozo, Malta. Note: dates are very approximate and are given only as a rough estimate; consult each culture for specific time periods. Early Neolithic periodization: the Levant, 9500–8000 BC; Europe, 5000–4000 BC; elsewhere, varies greatly by region. Middle Neolithic periodization: the Levant, 8000–6000 BC; Europe, 4000–3500 BC; elsewhere, varies greatly by region. Later Neolithic periodization: the Levant, 6500–4500 BC; Europe, 3500–3000 BC; elsewhere, varies greatly by region. Chalcolithic (Eneolithic) periodization: Near East, 4500–3300 BC; Europe, 3000–1700 BC; elsewhere, varies greatly by region; in the Americas, the Eneolithic ended as late as the 19th century AD for some peoples.
https://en.wikipedia.org/wiki?curid=21189
Nomic Nomic is a game created in 1982 by philosopher Peter Suber in which the rules of the game include mechanisms for the players to change those rules, usually beginning through a system of democratic voting. The initial ruleset was designed by Peter Suber, and first published in Douglas Hofstadter's column "Metamagical Themas" in "Scientific American" in June 1982. The column discussed Suber's then-upcoming book, "The Paradox of Self-Amendment", which was published some years later. Nomic now refers to many games, all based on the initial ruleset. The game is in some ways modeled on modern government systems. It demonstrates that in any system where rule changes are possible, a situation may arise in which the resulting laws are contradictory or insufficient to determine what is in fact legal. Because the game models (and exposes conceptual questions about) a legal system and the problems of legal interpretation, it is named after "nómos" (νόμος), Greek for "law". While the victory condition in Suber's initial ruleset is the accumulation of 100 points by the roll of dice, he once said that "this rule is deliberately boring so that players will quickly amend it to please themselves". Players can change the rules to such a degree that points become irrelevant in favor of a true currency, or make victory an unimportant concern. Any rule in the game, including the rules specifying the criteria for winning and even the rule that rules must be obeyed, can be changed. Any loophole in the ruleset, however, may give the first player to discover it the chance to pull a "scam" and modify the rules to win the game. Complicating this process is the fact that Suber's initial ruleset allows for the appointment of judges to preside over issues of rule interpretation. The game can be played face-to-face with as many written notes as are required, or through any of a number of Internet media (usually an archived mailing list or Internet forum). Initially, gameplay occurs in clockwise order, with each player taking a turn. In that turn, they propose a change in rules that all the other players vote on, and then roll a die to determine the number of points they add to their score. If this rule change is passed, it comes into effect at the end of their round. Any rule can be changed with varying degrees of difficulty, including the core rules of the game itself. As such, the gameplay may quickly change. Under Suber's initial ruleset, rules are divided into two types: mutable and immutable. The main difference between these is that immutable rules must be changed into mutable rules (called "transmuting") before they can be modified or removed. Immutable rules also take precedence over mutable ones. A rule change may be: Alternative starting rulesets exist for Internet and mail games, wherein gameplay occurs in alphabetical order by surname, and points added to the score are based on the success of a proposed rule change rather than random dice rolls. Not only can every aspect of the rules be altered in some way over the course of a game of Nomic, but myriad variants also exist: some that have themes, begin with a single rule, or begin with a dictator instead of a democratic process to validate rules. Others combine Nomic with an existing game (such as Monopoly, chess, or, in one humorously paradoxical attempt, Mornington Crescent). There is even a version in which the players are games of Nomic themselves.
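The basic mechanics described above, turn order, proposals, voting, mutable versus immutable rules, and dice scoring toward 100 points, lend themselves to a short illustration. The following Python fragment is a minimal sketch only, not Suber's actual published ruleset: the class names, the voting thresholds, and the sample rules are illustrative assumptions, and the rule texts are stored rather than interpreted, just as a human game leaves interpretation to the players.

import random

class Rule:
    """A numbered rule; immutable rules take precedence and resist change."""
    def __init__(self, number, text, mutable):
        self.number = number
        self.text = text      # stored, not interpreted: players judge meaning
        self.mutable = mutable

class NomicGame:
    def __init__(self, players):
        self.players = players
        self.scores = {p: 0 for p in players}
        self.winning_score = 100
        self.rules = [
            Rule(101, "All players must always abide by all the rules.", mutable=False),
            Rule(301, "A player wins by reaching the winning score.", mutable=True),
        ]

    def propose(self, rule, votes):
        """Adopt, amend, or replace a rule if the vote clears the threshold."""
        target = next((r for r in self.rules if r.number == rule.number), None)
        # Assumption for this sketch: touching an immutable rule needs unanimity
        # (standing in for Suber's "transmutation"); mutable rules need a majority.
        needed = len(self.players) if (target and not target.mutable) \
                 else len(self.players) // 2 + 1
        if sum(votes.values()) < needed:
            return False
        if target:
            self.rules.remove(target)
        self.rules.append(rule)
        return True

    def take_turn(self, player):
        """Initial-ruleset scoring: roll a die, add it, check for a winner."""
        self.scores[player] += random.randint(1, 6)
        return self.scores[player] >= self.winning_score

game = NomicGame(["Alice", "Bob", "Carol"])
# A mutable rule change passes 2-1:
adopted = game.propose(Rule(301, "A player wins by reaching 100 points.", True),
                       votes={"Alice": True, "Bob": True, "Carol": False})
print("proposal adopted:", adopted)
winner = None
while winner is None:
    for p in game.players:
        if game.take_turn(p):
            winner = p
            break
print(winner, "wins with", game.scores[winner], "points")

In Suber's published ruleset the voting threshold is itself defined by a rule and can therefore be amended, which is precisely the self-reference the game is designed to explore; a fixed threshold like the one hard-coded here would be among the first things players would vote to change.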
Even more unusual variants include a ruleset in which the rules are hidden from players' view, and a game which, instead of allowing voting on rules, splits into two sub-games, one with the rule and one without it. Online versions often have initial rulesets where play is not turn-based; typically, players in such games may propose rule changes at any time, rather than having to wait for their turn. One spin-off of a now-defunct Nomic (Nomic World) is called the Fantasy Rules Committee; it adds every legal rule submitted by a player to the ruleset until the players run out of ideas, after which all the "fantasy rules" are repealed and the game begins again. The game of Nomic is particularly suited to being played online, where all proposals and rules can be shared in web pages or email archives for ease of reference. Such games of Nomic sometimes last for a very long time – Agora has been running since 1993. The longevity of nomic games can pose a serious problem, in that the rulesets can grow so complex that current players do not fully understand them and prospective players are deterred from joining. One currently active game, BlogNomic, gets around this problem by dividing the game into "dynasties"; every time someone wins, a new dynasty begins, and all the rules except a privileged few are repealed. This keeps the game relatively simple and accessible. Nomicron (now defunct) was similar in that it had rounds – when a player won a round, a convention was started to plan for the next round. A game of Nomic on Reddit (now defunct) used a similar mechanism modeled on Nomicron's system. Another facet of Nomic is the way in which the implementation of the rules affects the way the game of Nomic itself works. ThermodyNomic, for example, had a ruleset in which rule changes were carefully considered before implementation, and rules were rarely introduced which provided loopholes for the players to exploit. B Nomic, by contrast, was once described by one of its players as "the equivalent of throwing logical hand grenades". This is essentially part of the differentiation between "procedural" games, where the aim (acknowledged or otherwise) is to tie the entire ruleset into a paradoxical condition during each turn (a player who has no legal move available wins), and "substantive" games, which try to avoid paradox and reward winning by achieving certain goals, such as attaining a given number of points. While "Nomic" is traditionally capitalized as the proper name of the game it describes, it has also sometimes been used in a more informal way as a lowercased generic term, "nomic", referring to anything with Nomic-like characteristics, including games where the rules may be changed during play as well as non-gaming situations where it can be alleged that "rules lawyers" are tinkering with the process used to amend rules and policies (in an organization or community) in a manner akin to a game of Nomic. In a computerized Nomic, the rules are interpreted by a computer, rather than by humans. This implies that the rules should be written in a language that a computer can understand, typically some sort of programming language or Game Description Language. Nomyx is such an implementation. The Nomyx website is defunct, but the code is still on GitHub.
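To make the idea of computer-interpreted rules concrete, the sketch below encodes each rule as an executable predicate over a proposed action and the game state, so that legality is computed rather than argued, and amending the game becomes an ordinary mutation of the rule set. This is a minimal illustration under invented assumptions (the action format and the sample rules are made up for the example) and does not represent Nomyx's actual design or API.

# Each rule is an executable predicate: it takes a proposed action and the
# current game state, and returns whether it permits the action.
rules = {
    201: lambda action, state: action["points"] <= 6,
    202: lambda action, state: state["scores"][action["player"]] < 100,
}

def is_legal(action, state):
    """An action is legal only if every currently enacted rule permits it."""
    return all(rule(action, state) for rule in rules.values())

state = {"scores": {"Alice": 97, "Bob": 40}}
print(is_legal({"player": "Alice", "points": 5}, state))   # True
print(is_legal({"player": "Alice", "points": 7}, state))   # False: violates rule 201

# A rule change is just a mutation of the rule set itself:
rules[203] = lambda action, state: action["player"] != "Bob"
print(is_legal({"player": "Bob", "points": 3}, state))     # now False, under rule 203

A fuller system must also decide what happens when the rules themselves are ill-formed or mutually contradictory, which is the same interpretive problem the human game is designed to expose.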
https://en.wikipedia.org/wiki?curid=21190
Nintendo Since then, Nintendo has produced some of the most successful consoles in the video game industry, such as the Game Boy, the Super Nintendo Entertainment System, the Wii, and the Nintendo Switch. Nintendo has also released numerous influential franchises, including "Donkey Kong", "Mario", "The Legend of Zelda", "Kirby", "Metroid", "Fire Emblem", "Splatoon", "Super Smash Bros", and "Pokémon". Nintendo has multiple subsidiaries in Japan and abroad, in addition to business partners such as The Pokémon Company and HAL Laboratory. Both the company and its staff have received numerous awards for their achievements, including Emmy Awards for Technology & Engineering, Game Developers Choice Awards and British Academy Games Awards among others. Nintendo is one of the wealthiest and most valuable companies in the Japanese market. Nintendo was founded as Nintendo Koppai on September 23, 1889 by craftsman Fusajiro Yamauchi in Kyoto, Japan, to produce and distribute "hanafuda" ('flower cards'). The word "Nintendo" can be translated as 'leave luck to heaven,' or alternatively as 'the temple of free hanafuda.' Yamauchi manufactured these playing cards using white mulberry bark, which he himself painted by hand. He was able to market them despite the fact that gambling had been prohibited by Japanese authorities since 1633, because the cards incorporated illustrations rather than numbers. With the increase in popularity of the cards, Yamauchi hired assistants to mass-produce in order to satisfy demand. Despite a favorable start, however, the slow and expensive manufacturing process and high product price led the company to financial difficulties. Adding to the issue was the limited niche market to which the company belonged, as well as the long durability of the cards, which impacted sales due to the low replacement rate of the product. As a solution, Yamauchi produced a cheaper and lower-quality line of playing cards, "Tengu", while also seeking to offer his products in other cities such as Osaka, where considerable profits were found in card games. In addition, local merchants were interested in the prospect of a continuous renewal of decks, thus avoiding the suspicions that reusing cards would generate. According to data from the company itself, Nintendo's first western-style deck was put on the market in 1902, although other documents postpone the date to 1907, shortly after the Russo-Japanese War. The war created considerable difficulties for companies in the leisure sector, which were subject to new levies such as the "Karuta Zei" ('playing cards tax'). Despite this, Nintendo subsisted and, in 1907, entered into an agreement with Nihon Senbai—later known as the Japan Tobacco—to market its cards to various cigarette stores throughout the country. Japanese culture stipulated that for Nintendo Koppai to continue as a family business after Yamauchi's retirement, Yamauchi had to adopt his son-in-law so that he may take over the business. As result, Sekiryo Kaneda adopted the Yamauchi surname in 1907 and became the second president of Nintendo Koppai in 1929. By that time, Nintendo Koppai was the largest card game company in Japan. In 1933, Sekiryo Kaneda established the company as a general partnership titled Yamauchi Nintendo & Co. Ltd., investing in the construction of a new corporate headquarters located next to the original building, near the Toba-kaidō train station. 
Because Sekiryo's marriage to Yamauchi's daughter produced no male heirs, he planned to adopt his son-in-law Shikanojo Inaba, an artist in the company's employ and the father of his grandson Hiroshi, born in 1927. However, Inaba abandoned his family and the company, so Hiroshi was made Sekiryo's eventual successor. World War II negatively impacted the company as Japanese authorities prohibited the diffusion of foreign card games, and as the priorities of Japanese society shifted, its interest in recreational activities waned. During this time, Nintendo was partly supported by a financial injection from Hiroshi's wife Michiko Inaba, who came from a wealthy family. In 1947, Sekiryo founded the distribution company Marufuku Co. Ltd. In 1950, due to Sekiryo's deteriorating health, Hiroshi assumed the presidency of Nintendo. His first actions involved several important changes in the operation of the company: in 1951, he changed the company name to Nintendo Playing Card Co. Ltd., while the Marufuku Company adopted the name Nintendo Karuta Co. Ltd. In 1952, he centralized the production of cards in the Kyoto factories, which led to the expansion of the offices. The company's new line of plastic cards enjoyed considerable success in Japan. Some of the company's employees, accustomed to a more cautious and conservative leadership, viewed the new measures with concern, and the rising tension led to a call for a strike. However, the measure had no major impact, as Hiroshi resorted to the dismissal of several dissatisfied workers. In 1959, Nintendo entered into an agreement with Walt Disney to incorporate his company's animated characters into the cards. Nintendo also developed a distribution system that allowed it to offer its products in toy stores. By 1961, the company had sold more than 1.5 million card packs and held a high market share, for which it relied on televised advertising campaigns. The need for diversification led the company to list stock on the second section of the Osaka and Kyoto stock exchanges, in addition to becoming a public company and changing its name to Nintendo Co., Ltd. in 1963. In 1964, Nintendo earned an income of ¥150 million. Although the company was experiencing a period of economic prosperity, the Disney cards and derived products made it dependent on the children's market. The situation was exacerbated by the falling sales of its adult-oriented "hanafuda" cards caused by Japanese society gravitating toward other hobbies such as pachinko, bowling and nightly outings. When Disney card sales began to show signs of exhaustion, Nintendo realized that it had no real alternative with which to alleviate the situation. After the 1964 Tokyo Olympics, Nintendo's stock price plummeted to its lowest recorded level of ¥60. Between 1963 and 1968, Yamauchi invested in several business lines for Nintendo that were far from its traditional market and, for the most part, were unsuccessful. Among these ventures were packages of instant rice, a chain of love hotels, and a taxi service named "Daiya". Although the taxi service was better received than the previous efforts, Yamauchi rejected this initiative after a series of disagreements with local unions. Yamauchi's experience with the previous initiatives led him to increase Nintendo's investment in a research and development department directed by Hiroshi Imanishi, an employee with a long history in other areas of the company. In 1969, Gunpei Yokoi joined the department and was responsible for coordinating various projects. 
Yokoi's experience in manufacturing electronic devices led Yamauchi to put him in charge of the company's games department, and his products would be mass-produced. During this period, Nintendo built a new production plant in Uji City, just outside of Kyoto, and distributed classic tabletop games such as chess, shogi, go, and mahjong, as well as other foreign games under the Nippon Game brand. The company's restructuring preserved a couple of areas dedicated to "hanafuda" card manufacturing. The early 1970s represented a watershed moment in Nintendo's history as it released Japan's first electronic toy—the Nintendo Beam Gun, an optoelectronic pistol designed by Masayuki Uemura. In total, more than a million units were sold. During that period, Nintendo began trading on the main section of the Osaka stock exchange and opened a new headquarters. Other popular toys released at the time include the Ultra Hand, the Ultra Machine, the Ultra Scope, and the Love Tester, all designed by Yokoi. The Ultra Hand sold more than 1.2 million units in Japan. The growing demand for Nintendo's products led Yamauchi to further expand the offices, for which he acquired the surrounding land and assigned the production of cards to the original Nintendo building. Meanwhile, Yokoi, Uemura, and new employees such as Genyo Takeda, continued to develop innovative products for the company. The Laser Clay Shooting System was released in 1973 and managed to surpass bowling in popularity. In 1974, Nintendo released "Wild Gunman", a skeet shooting simulator consisting of a 16 mm image projector with a sensor that detects a beam from the player's light gun. Both the Laser Clay Shooting System and "Wild Gunman" were successfully exported to Europe and North America. Despite this, Nintendo's production speeds were still slow compared to rival companies such as Bandai and Tomy, and their prices were high, which led to the discontinuation of some of their light gun products. The subsidiary Nintendo Leisure System Co., Ltd., which developed these products, was closed as a result of the economic impact dealt by the 1973 oil crisis. Yamauchi, motivated by the successes of Atari and Magnavox with their video game consoles, acquired the Japanese distribution rights for the Magnavox Odyssey in 1974, and reached an agreement with Mitsubishi Electric to develop similar products between 1975 and 1978, including the first microprocessor for video games systems, the Color TV-Game series, and an arcade game inspired by Othello. During this period, Takeda developed the video game "EVR Race", and Shigeru Miyamoto joined Yokoi's team with the responsibility of designing the casing for the Color TV-Game consoles. In 1978, Nintendo's research and development department was split into two facilities, Nintendo Research & Development 1 and Nintendo Research & Development 2, respectively managed by Yokoi and Uemura. Two key events in Nintendo's history occurred in 1979: its American subsidiary was opened in New York City, and a new department focused on arcade game development was created. In 1980, the first handheld video game system, the Game & Watch, was created by Yokoi from the technology used in portable calculators. It became one of Nintendo's most successful products, with over 43.4 million units sold worldwide during its production period, and for which 59 games were made in total. 
Nintendo's success in arcade games grew in 1981 with the release of "Donkey Kong", which was developed by Miyamoto and one of the first video games that allowed the player character to jump. The character, Jumpman, would later become Mario and Nintendo's official mascot. Mario was named after Mario Segale, the landlord of Nintendo's offices in Tukwila, Washington. In 1983, Nintendo opened a new production facility in Uji and was listed on the first section of the Tokyo Stock Exchange. Uemura, taking inspiration from the ColecoVision, began creating a new video game console that would incorporate a ROM cartridge format for video games as well as both a central processing unit and a physics processing unit. The Family Computer, or Famicom, was released in Japan in July 1983 along with three games adapted from their original arcade versions: "Donkey Kong", "Donkey Kong Jr." and "Popeye". Its success was such that in 1984, it surpassed the market share held by Sega's SG-1000. At this time, Nintendo adopted a series of guidelines that involved the validation of each game produced for the Famicom before its distribution on the market, agreements with developers to ensure that no Famicom game would be adapted to other consoles within two years of its release, and restricting developers from producing more than five games a year for the Famicom. In the early 1980s, several video game consoles proliferated in the United States, as well as low-quality games produced by third-party developers, which oversaturated the market and led to the video game crash of 1983. Consequently, a recession hit the American video game industry, whose revenues went from over $3 billion to $100 million between 1983 and 1985. Nintendo's initiative to launch the Famicom in America was also impacted. To differentiate the Famicom from its competitors in America, Nintendo opted to redesign the Famicom as an "entertainment system" compatible with "Game Paks", a euphemism for cartridges, and with a design reminiscent of a VCR. Nintendo implemented a lockout chip in the Game Paks that gave it control on what games were published for the console to avoid the market saturation that occurred in the United States' market. The resulting product was the Nintendo Entertainment System, or NES, which was released in North America in 1985. The landmark titles "Super Mario Bros." and "The Legend of Zelda" were produced for the console by Miyamoto and Takashi Tezuka. The work of composer Koji Kondo for both games reinforced the idea that musical themes could act as a compliment to game mechanics rather than simply a miscellaneous element. Production of the NES lasted until 1995, and production of the Famicom lasted until 2003. In total, around 62 million Famicom and NES consoles were sold worldwide. During this period, Nintendo created a measure against piracy of its video games in the form of the Official Nintendo Seal of Quality, a seal that was added to their products so that customers may recognize their authenticity in the market. By this time, Nintendo's network of electronic suppliers had extended to around thirty companies, among which were Ricoh—Nintendo's main source for semiconductors—and the Sharp Corporation. In 1988, Gunpei Yokoi and his team at Nintendo R&D1 conceived the Game Boy, the first handheld video game console to be compatible with interchangeable game cartridges. Nintendo released the Game Boy in 1989. 
In North America, the Game Boy was bundled with the popular third-party game "Tetris" after a difficult negotiation process with Elektronorgtechnica. The Game Boy was a significant success: in its first two weeks of sale in Japan, it sold out its initial inventory of 300,000 units, while in the United States, an additional 40,000 units were sold on its first day of distribution. Around this time, Nintendo entered into an agreement with Sony to develop the Super Famicom CD-ROM Adapter, a peripheral for the upcoming Super Famicom capable of playing CD-ROMs. However, the collaboration did not last as Yamauchi preferred to continue developing the technology with Philips, which would result in the CD-i, and Sony's independent efforts resulted in the creation of the PlayStation console. The first issue of the magazine "Nintendo Power", which had an annual circulation of 1.5 million copies in the United States, was published in 1988. In July 1989, Nintendo held the first Nintendo Space World trade show under the name "Shoshinkai" for the purpose of announcing and demonstrating upcoming Nintendo products. The same year, the first World of Nintendo stores-within-a-store, which carried official Nintendo merchandise, were opened in the United States. According to company information, more than 25% of homes in the United States had an NES in 1989. The late 1980s marked the slip of Nintendo's dominance in the video game market with the appearance of NEC's PC Engine and Sega's Mega Drive, game systems designed with a 16-bit architecture that allowed for improved graphics and audio compared to the NES. In response to the competition, Uemura designed the Super Famicom, which launched in 1990. The first batch of 300,000 consoles sold out in a matter of hours. The following year, as with the NES, Nintendo distributed a modified version of the Super Famicom to the United States market, titled the Super Nintendo Entertainment System (SNES). Launch games for the Super Famicom and SNES include "Super Mario World", "F-Zero", "Pilotwings", "SimCity", and "Gradius III". By mid-1992, over 46 million Super Famicom and SNES consoles were sold. The console's life cycle lasted until 1999 in the United States, and until 2003 in Japan. In March 1990, the first Nintendo World Championship was held, with participants from 29 American cities competing for the title of "best Nintendo player in the world". In June 1990, the subsidiary Nintendo of Europe was opened in Großostheim, Germany; in 1993, subsequent subsidiaries were established in the Netherlands (where Bandai had previously distributed Nintendo's products), France, the United Kingdom, Spain, Belgium and Australia. In 1992, Nintendo acquired a majority stake in the Seattle Mariners baseball team, and sold its shares in 2016. Nintendo ceased manufacturing arcade games and systems in September 1992. In 1993, "Star Fox" was released, which marked an industry milestone by being the first video game to make use of the Super FX chip. The proliferation of graphically violent video games, such as "Mortal Kombat", caused controversy and led to the creation of the Interactive Digital Software Association and the Entertainment Software Rating Board, in whose development Nintendo collaborated during 1994. These measures also encouraged Nintendo to abandon the content guildelines it had enforced since the release of the NES. 
Commercial strategies implemented by Nintendo during this time include the Nintendo Gateway System, an in-flight entertainment service available for airlines, cruise ships and hotels, and the "Play It Loud!" advertising campaign for Game Boys with different-colored casings. The Advanced Computer Modelling graphics used in "Donkey Kong Country" for the SNES and "Donkey Kong Land" for the Game Boy were a technological innovation, as was the Satellaview satellite modem peripheral for the Super Famicom, which allowed the digital transmission of data by means of a communications satellite. In mid-1993, Nintendo and Silicon Graphics announced a strategic alliance to develop the Nintendo 64. NEC, Toshiba and Sharp also contributed technology to the console. The Nintendo 64 was marketed as one of the first consoles to be designed with 64-bit architecture. As part of an agreement with Midway Games, the arcade games "Killer Instinct" and "Cruis'n USA" were ported to the console. Although the Nintendo 64 was planned for release in 1995, the production schedules of third-party developers influenced a delay, and the console was released in June and September 1996 in Japan and the United States respectively, and in March 1997 in Europe. By the end of its production in 2002, around 33 million Nintendo 64 consoles were sold worldwide, and it is considered one of the most recognized video game systems in history. 388 games were produced for the Nintendo 64 in total, some of which – particularly "Super Mario 64", "" and "GoldenEye 007" – have been distinguished as some of the greatest of all time. In 1995, Nintendo released the Virtual Boy, a console designed by Gunpei Yokoi with virtual reality technology and stereoscopic graphics. Critics were generally disappointed with the quality of the games and red-colored graphics, and complained of gameplay-induced headaches. The system sold poorly and was quietly discontinued. Amid the system's failure, Yokoi formally retired from Nintendo. In February 1996, "Pocket Monsters Red" and "Green", known internationally as "Pokémon Red" and "Blue", was released in Japan by Game Freak for the Game Boy, and established the popular "Pokémon" franchise. In 1997, Nintendo released the Rumble Pak, a plug-in device that connects to the Nintendo 64 controller and produces a vibration during certain moments of a game. In 1998, the Game Boy Color was released. In addition to allowing backward compatibility with Game Boy games, the console's similar capacity to the NES resulted in select adaptations of games from that library, such as "Super Mario Bros. Deluxe". Since then, over 118.6 million Game Boy and Game Boy Color consoles have been sold worldwide. In May 1999, with the advent of the PlayStation 2, Nintendo entered an agreement with IBM and Panasonic to develop the 128-bit Gekko processor and the DVD drive to be used in Nintendo's next home console. Meanwhile, a series of administrative changes occurred in 2000, when Nintendo's corporate offices were moved to the Minami-ku neighborhood in Kyoto, and Nintendo Benelux was established to manage the Dutch and Belgian territories. The year 2001 marked the introduction of two new Nintendo consoles: the Game Boy Advance, which was designed by Gwénaël Nicolas and stylistically departed from its predecessors, and the GameCube. During the first week of the Game Boy Advance's North American release in June 2001, over 500,000 units were sold, making it the fastest-selling video game console in the United States at the time. 
By the end of its production cycle in 2010, more than 81.5 million units had been sold worldwide. As for the GameCube, despite such distinguishing features as the miniDVD format of its games and internet connectivity for a limited number of games, its sales were lower than those of its predecessors, and during the six years of its production, 21.7 million units were sold worldwide. An innovative product developed by Nintendo during this time was the Nintendo e-Reader, a Game Boy Advance peripheral that allows the transfer of data stored on a series of cards to the console. In 2002, the Pokémon Mini was released. Its dimensions were smaller than that of the Game Boy Advance and it weighed 70 grams, making it the smallest video game console in history. Nintendo collaborated with Sega and Namco to develop Triforce, an arcade board to facilitate the conversion of arcade titles to the GameCube. Following the European release of the GameCube in May 2002, Hiroshi Yamauchi announced his resignation as the president of Nintendo, and Satoru Iwata was selected by the company as his successor. Yamauchi would remain as advisor and director of the company until 2005, and he died in 2013. Iwata's appointment as president ended the Yamauchi succession at the helm of the company, a practice that had been in place since its foundation. In 2003, Nintendo released the Game Boy Advance SP, an improved version of the Game Boy Advance that incorporated a folding design, an illuminated display and a rechargeable battery. By the end of its production cycle in 2010, over 43.5 million units had been sold worldwide. Nintendo also released the Game Boy Player, a peripheral that allows Game Boy and Game Boy Advance games to be played on the GameCube. In 2004, Nintendo released the Nintendo DS, which featured such innovations as dual screens – one of which being a touchscreen – and wireless connectivity for multiplayer play. Throughout its lifetime, more than 154 million units were sold, making it the most successful handheld console and the second best-selling console in history. In 2005, Nintendo released the Game Boy Micro, the last system in the Game Boy line. Sales did not meet Nintendo's expectations, with 2.5 million units being sold by 2007. In mid-2005, the Nintendo World Store was inaugurated in New York City. Nintendo's next home console was conceived in 2001, although the designing commenced in 2003, taking inspiration from the Nintendo DS. The Wii was released in November 2006, with a total of 33 launch titles. With the Wii, Nintendo sought to reach a broader demographic than its seventh generation competitors, with the intention of also encompassing the "non-consumer" sector. To this end, Nintendo invested in a $200 million advertising campaign. The Wii's innovations include the Wii Remote controller, equipped with an accelerometer system and infrared sensors that allow it to detect its position in a three-dimensional environment with the aid of a sensor bar; the Nunchuk peripheral that includes an analog controller as well as an accelerometer; and the Wii MotionPlus expansion that increases the sensitivity of the main controller with the aid of gyroscopes. By 2016, more than 101 million Wii consoles had been sold worldwide, making it the most successful console of its generation, a distinction that Nintendo had not achieved since the 1990s with the SNES. A number of accessories were released for the Wii from 2007 to 2010, such as the Wii Balance Board, the Wii Wheel and the WiiWare download service. 
In 2009, Nintendo Iberica S.A. expanded its commercial operations to Portugal through a new office in Lisbon. By that year, Nintendo held a 68.3% share of the worldwide handheld gaming market. In 2010, Nintendo celebrated the 25th anniversary of Mario's debut appearance, for which certain allusive products were put on sale. The event included the release of "Super Mario All-Stars 25th Anniversary Edition" and special editions of the Nintendo DSi XL and Wii. Following an announcement in March 2010, Nintendo released the Nintendo 3DS in 2011. The console is capable of producing stereoscopic effects without the need for 3D glasses. By 2018, more than 69 million units had been sold worldwide; the figure increased to 75 million by the start of 2019. In 2011, Nintendo celebrated the 25th anniversary of "The Legend of Zelda" with the orchestra concert tour and the video game "". The years 2012 and 2013 marked the introduction of two new Nintendo game consoles: the Wii U, which incorporated high-definition graphics and a Gamepad controller with near-field communication technology, and the Nintendo 2DS, a version of the 3DS that lacks the clamshell-like design of Nintendo's previous handheld consoles and the stereoscopic effects of the 3DS. With 13.5 million units sold worldwide, the Wii U is the least successful video game console in Nintendo's history. In 2014, a new line of products was released consisting of figures of Nintendo characters called amiibos. On September 25, 2013, Nintendo announced its acquisition of a 28% stake in PUX Corporation, a subsidiary of Panasonic, for the purpose of developing facial, voice and text recognition for its video games. Due to a 30% decrease in company income between April and December 2013, Iwata announced a temporary 50% cut to his salary, with other executives seeing reductions by 20%–30%. In January 2015, Nintendo ceased operations in the Brazilian market due in part to high import duties. Although this did not affect the rest of Nintendo's Latin American market due to an alliance with Juegos de Video Latinoamérica, in 2017, Nintendo reached an agreement with NC Games for Nintendo's products to resume distribution in Brazil. On July 11, 2015, Iwata died of bile duct cancer, and after a couple of months in which Miyamoto and Takeda jointly operated the company, Tatsumi Kimishima was named as Iwata's successor on September 16, 2015. As part of the management's restructuring, Miyamoto and Takeda were respectively named creative and technological advisors. The financial losses caused by the Wii U, along with Sony's intention to release its video games to other platforms such as smart TVs, motivated Nintendo to rethink its strategy concerning the production and distribution of its properties. In 2015, Nintendo formalized agreements with DeNA and Universal Parks & Resorts to extend its presence to smart devices and amusement parks respectively. In March 2016, Nintendo's first mobile app for the iOS and Android systems, "Miitomo", was released. Since then, Nintendo has produced other similar apps, such as "Super Mario Run", "Fire Emblem Heroes", "", "Mario Kart Tour" and "Pokémon Go", the last being developed by Niantic and having generated $115 million in revenue for Nintendo. The theme park area Super Nintendo World is set to open at Universal Studios Japan in 2020. In March 2016, the loyalty program My Nintendo replaced Club Nintendo. The NES Classic Edition was released in November 2016. 
The console is a redesigned version of the NES that includes support for the HDMI interface and Wiimote compatibility. Its successor, the Super NES Classic Edition, was released in September 2017. By October 2018, around ten million units of both consoles combined had been sold worldwide. The Wii U's replacement in the eighth generation of video game consoles, the Nintendo Switch, was released in March 2017. The Nintendo Switch features a hybrid design as a home and handheld console, independently functioning Joy-Con controllers that each contain an accelerometer and gyroscope, and the simultaneous wireless connection of up to eight consoles. To expand its games catalog, Nintendo entered alliances with several third-party and independent developers; by February 2019, more than 1,800 games had been released for the Nintendo Switch. Worldwide sales of the Nintendo Switch exceeded 55 million units by March 2020. In April 2018, the Nintendo Labo line was released, consisting of cardboard accessories that interact with the Nintendo Switch and the Joy-Con controllers. The Nintendo Labo Variety Kit sold more than a million units in its first year on the market. In 2018, Shuntaro Furukawa replaced Kimishima as company president, while in 2019, Doug Bowser replaced Nintendo of America president Reggie Fils-Aimé. In April 2019, Nintendo formed an alliance with Tencent to distribute the Nintendo Switch in China starting in December. In April 2020, ValueAct Capital Partners announced an acquisition of $1.1 billion in Nintendo stock purchases, giving them an overall stake of 2% in Nintendo. Although the 2019 coronavirus disease pandemic caused delays in the production and distribution of some of Nintendo's products, the situation "had limited impact on business results"; in May 2020, Nintendo reported a 75% increase in income compared to the previous fiscal year, mainly contributed by the Nintendo Switch Online service. Nintendo's central focus is the research, development, production and distribution of entertainment products, primarily video game software and hardware and card games, and its main markets are Japan, America and Europe, although more than 70% of its total sales come from the latter two territories. Since the launch of the Color TV-Game in 1977, Nintendo has produced and distributed various video game consoles, including home, handheld, dedicated and hybrid consoles. Each console is compatible with a variety of accessories and controllers, such as the NES Zapper, the Game Boy Camera, the Super NES Mouse, the Rumble Pak, the Wii MotionPlus, the Wii U Pro Controller and the Nintendo Switch Pro Controller. The first video games produced by Nintendo were arcade titles, with "EVR Race" (1975) being the company's first electromechanical title, and "Donkey Kong" (1981) being the first platform game in history. Since then, both Nintendo and other development companies have produced and distributed an extensive catalogue of video games for Nintendo's different consoles. Nintendo's games are sold in both physical and digital formats; the latter are distributed via services such as the Nintendo eShop and the Nintendo Network. Nintendo of America has engaged in several high-profile marketing campaigns to define and position its brand. One of its earliest and most enduring slogans was "Now you're playing with power!," used first to promote its Nintendo Entertainment System. 
It modified the slogan to include "SUPER power" for the Super Nintendo Entertainment System, and "PORTABLE power" for the Game Boy. Its 1994 "Play It Loud!" campaign played upon teenage rebellion and fostered an edgy reputation. During the Nintendo 64 era, the slogan was "Get N or get out." During the GameCube era, the "Who Are You?" suggested a link between the games and the players' identities. The company promoted its Nintendo DS handheld with the tagline "Touching is Good." For the Wii, they used the "Wii would like to play" slogan to promote the console with the people who tried the games including "Super Mario Galaxy" and "Super Paper Mario". The Nintendo 3DS used the slogan "Take a look inside." The Wii U used the slogan "How U will play next." The Nintendo Switch uses the slogan "Switch and Play" in North America, and "Play anywhere, anytime, with anyone" in Europe. During the peak of Nintendo's success in the video game industry in the 1990s, their name was ubiquitously used to refer to any video game console, regardless of the manufacturer. To prevent their trademark from becoming generic, Nintendo pushed usage of the term "game console", and succeeded in preserving their trademark. Used since the 1960s, Nintendo's most recognizable logo is the racetrack shape, especially the red-colored wordmark typically (though not always) displayed on a white background, primarily used in the Western markets from 1985 to 2006. In Japan, a monochromatic version that lacks a colored background is on Nintendo's own Famicom, Super Famicom, Nintendo 64, GameCube, and handheld console packaging and marketing. Since 2006, in conjunction with the launch of the Wii, Nintendo changed its logo to a gray variant that lacks a colored background inside the wordmark, making it transparent. Nintendo's official, corporate logo remains this variation. For consumer products and marketing, a white variant on a red background has been used since 2015, and has been in full effect since the launch of the Nintendo Switch in 2017. Nintendo's internal research and development operations are divided into three main divisions: The Nintendo Entertainment Planning & Development division is the primary software development division at Nintendo, formed as a merger between their former Entertainment Analysis & Development and Software Planning & Development divisions in 2015. Led by Shinya Takahashi, the division holds the largest concentration of staff at the company, housing more than 800 engineers, producers, directors, planners and designers. The Nintendo Platform Technology Development division is a combination of Nintendo's former Integrated Research & Development (or IRD) and System Development (or SDD) divisions. Led by Ko Shiota, the division is responsible for designing hardware and developing Nintendo's operating systems, developer environment and internal network as well as maintenance of the Nintendo Network. The Nintendo Business Development division was formed following Nintendo's foray into software development for smart devices such as mobile phones and tablets. They are responsible for refining Nintendo's business model for the dedicated video game system business, and for furthering Nintendo's venture into development for smart devices. Although most of the research and development is being done in Japan, there are some R&D facilities in the United States, Europe and China that are focused on developing software and hardware technologies used in Nintendo products. 
Although they all are subsidiaries of Nintendo (and therefore first-party), they are often referred to as external resources when being involved in joint development processes with Nintendo's internal developers by the Japanese personal involved. This can be seen in the "Iwata asks..." interview series. Nintendo Software Technology (NST) and Nintendo Technology Development (NTD) are located in Redmond, Washington, United States, while Nintendo European Research & Development ("NERD") is located in Paris, France, and Nintendo Network Service Database (NSD) is located in Kyoto, Japan. Most external first-party software development is done in Japan, since the only overseas subsidiary is Retro Studios in the United States. Although these studios are all subsidiaries of Nintendo, they are often referred to as external resources when being involved in joint development processes with Nintendo's internal developers by the Nintendo Entertainment Planning & Development (EPD) division. 1-Up Studio and Nd Cube are located in Tokyo, Japan, while Monolith Soft has one studio located in Tokyo and another in Kyoto. Retro Studios is located in Austin, Texas. Nintendo also established The Pokémon Company alongside Creatures and Game Freak in order to effectively manage the Pokémon brand. Similarly, Warpstar Inc. was formed through a joint investment with HAL Laboratory, which was in charge of the "" animated series. Bergsala, a third-party company based in Sweden, exclusively handles Nintendo operations in the Scandinavian region. Bergsala's relationship with Nintendo was established in 1981 when the company sought to distribute "Game & Watch" units to Sweden, which later expanded to the NES console by 1986. Bergsala were the only non-Nintendo owned distributor of Nintendo's products, up until 2019 when Tor Gaming gained distribution rights in Israel. Nintendo has partnered with Tencent to release Nintendo products in China, following the lifting of the country's console ban in 2015. In addition to distributing hardware, Tencent will help bring Nintendo's games through the governmental approval process for video game software. In January 2019, it was reported by ynet and IGN Israel that negotiations about official distribution of Nintendo products in the country were ongoing. After two months, IGN Israel announced that Tor Gaming Ltd., a company that established in earlier 2019, gained a distribution agreement with Nintendo of Europe, handling official retailing beginning at the start of March, followed by opening an official online store the next month. In June 2019, Tor Gaming launched an official Nintendo Store at Dizengoff Center in Tel Aviv, making it the second official Nintendo Store worldwide, 13 years after NYC. Headquartered in Kyoto, Japan since the beginning, Nintendo Co., Ltd. oversees the organization's global operations and manages Japanese operations specifically. The company's two major subsidiaries, Nintendo of America and Nintendo of Europe, manage operations in North America and Europe respectively. Nintendo Co., Ltd. moved from its original Kyoto location to a new office in Higashiyama-ku, Kyoto, in 2000, this became the research and development building when the head office relocated to its location in Minami-ku, Kyoto. Nintendo founded its North American subsidiary in 1980 as Nintendo of America (NoA). Hiroshi Yamauchi appointed his son-in-law Minoru Arakawa as president, who in turn hired his own wife and Yamauchi's daughter Yoko Yamauchi as the first employee. 
The Arakawa family moved from Vancouver to select an office in Manhattan, New York, due to its central status in American commerce. Both from extremely affluent families, their goals were set more by achievement than moneyand all their seed capital and products would now also be automatically inherited from Nintendo in Japan, and their inaugural target is the existing $8 billion-per-year coin-op arcade video game market and largest entertainment industry in the US, which already outclassed movies and television combined. During the couple's arcade research excursions, NoA hired gamer youths to work in the filthy, hot, ratty warehouse in New Jersey for the receiving and service of game hardware from Japan. In late 1980 NoA contracted the Seattle-based arcade sales and distribution company Far East Video, consisting solely of experienced arcade salespeople Ron Judy and Al Stone. The two had already built a decent reputation and a distribution network, founded specifically for the independent import and sales of games from Nintendo because the Japanese company had for years been the under-represented maverick in America. Now as direct associates to the new NoA, they told Arakawa they could always clear all Nintendo inventory if Nintendo produced better games. Far East Video took NoA's contract for a fixed per-unit commission on the exclusive American distributorship of Nintendo games, to be settled by their Seattle-based lawyer, Howard Lincoln. Based on favorable test arcade sites in Seattle, Arakawa wagered most of NoA's modest finances on a huge order of 3,000 "Radar Scope" cabinets. He panicked when the game failed in the fickle market upon its arrival from its four-month boat ride from Japan. Far East Video was already in financial trouble due to declining sales and Ron Judy borrowed his aunt's life savings of $50,000, while still hoping Nintendo would develop its first "Pac-Man"-sized hit. Arakawa regretted founding the Nintendo subsidiary, with the distressed Yoko trapped between her arguing husband and father. Amid financial threat, Nintendo of America relocated from Manhattan to the Seattle metro to remove major stressors: the frenetic New York and New Jersey lifestyle and commute, and the extra weeks or months on the shipping route from Japan as was suffered by the "Radar Scope" disaster. With the Seattle harbor being the US's closest to Japan at only nine days by boat, and having a lumber production market for arcade cabinets, Arakawa's real estate scouts found a warehouse for rent containing three officesone for Arakawa and one for Judy and Stone. This warehouse in the Tukwila suburb was owned by Mario Segale after whom the Mario character would be named, and was initially managed by former Far East Video employee Don James. After one month, James recruited his college friend Howard Phillips as assistant, who soon took over as warehouse manager. The company remained at fewer than 10 employees for some time, handling sales, marketing, advertising, distribution, and limited manufacturing of arcade cabinets and "Game & Watch" handheld units, all sourced and shipped from Nintendo. Arakawa was still panicked over NoA's ongoing financial crisis. With the parent company having no new game ideas, he had been repeatedly pleading for Yamauchi to reassign some top talent away from existing Japanese products to develop something for Americaespecially to redeem the massive dead stock of "Radar Scope" cabinets. 
Since all of Nintendo's key engineers and programmers were busy, and with NoA representing only a tiny fraction of the parent's overall business, Yamauchi allowed only the assignment of Gunpei Yokoi's young assistant who had no background in engineering, Shigeru Miyamoto. NoA's staffexcept the sole young gamer Howard Phillipswere uniformly revolted at the sight of the freshman developer Miyamoto's debut game, which they had imported in the form of emergency conversion kits for the overstock of "Radar Scope" cabinets. The kits transformed the cabinets into NoA's massive windfall gain of from Miyamoto's smash hit "Donkey Kong" in 1981–1983 alone. They sold 4,000 new arcade units each month in America, making the 24-year-old Phillips "the largest volume shipping manager for the entire Port of Seattle". Arakawa used these profits to buy of land in Redmond in July 1982 and to perform the $50 million launch of the Nintendo Entertainment System in 1985 which revitalized the entire video game industry from its devastating 1983 crash. A second warehouse in Redmond was soon secured, and managed by Don James. The company stayed at around 20 employees for some years. The organization was reshaped nationwide in the following decades, and those core sales and marketing business functions are now directed by the office in Redwood City, California. The company's distribution centers are Nintendo Atlanta in Atlanta, Georgia, and Nintendo North Bend in North Bend, Washington. , the Nintendo North Bend facility processes more than 20,000 orders a day to Nintendo customers, which include retail stores that sell Nintendo products in addition to consumers who shop Nintendo's website. Nintendo of America operates two retail stores in the United States: Nintendo New York on Rockefeller Plaza in New York City, which is open to the public; and Nintendo Redmond, co-located at NoA headquarters in Redmond, Washington, which is open only to Nintendo employees and invited guests. Nintendo of America's Canadian branch, Nintendo of Canada, is based in Vancouver, British Columbia with a distribution center in Toronto, Ontario. Nintendo Treehouse is NoA's localization team, composed of around 80 staff who are responsible for translating text from Japanese to English, creating videos and marketing plans, and quality assurance. Nintendo's European subsidiary was established in June 1990, based in Großostheim, Germany. The company handles operations across Europe excluding Scandinavia, as well as South Africa. Nintendo of Europe's United Kingdom branch (Nintendo UK) handles operations in that country and in Ireland from its headquarters in Windsor, Berkshire. In June 2014, NOE initiated a reduction and consolidation process, yielding a combined 130 layoffs: the closing of its office and warehouse, and termination of all employment, in Großostheim; and the consolidation of all of those operations into, and terminating some employment at, its Frankfurt location. As of July 2018, the company employs 850 people. In 2019, NoE signed with Tor Gaming Ltd. for official distribution in Israel. Nintendo's Australian subsidiary is based in Melbourne, Victoria. It handles the publishing, distribution, sales, and marketing of Nintendo products in Australia, New Zealand, and Oceania (Cook Islands, Fiji, New Caledonia, Papua New Guinea, Samoa, and Vanuatu). It also manufactures some Wii games locally. 
Nintendo Australia is also a third-party distributor of some games from Rising Star Games, Bandai Namco Entertainment, Atlus, The Tetris Company, Sega, Koei Tecmo, and Capcom. Originally a Chinese joint venture between its founder, Wei Yen, and Nintendo, manufactures and distributes official Nintendo consoles and games for the mainland Chinese market, under the iQue brand. The product lineup for the Chinese market is considerably different from that for other markets. For example, Nintendo's only console in China is the iQue Player, a modified version of the Nintendo 64. In 2013, the company became a fully owned subsidiary of Nintendo. Nintendo's South Korean subsidiary was established on 7 July 2006, and is based in Seoul. In March 2016, the subsidiary was heavily downsized due to a corporate restructuring after analyzing shifts in the current market, laying off 80% of its employees, leaving only ten people, including CEO Hiroyuki Fukuda. This did not affect any games scheduled for release in South Korea, and Nintendo continued operations there as usual. For many years, Nintendo had a policy of strict content guidelines for video games published on its consoles. Although Nintendo allowed graphic violence in its video games released in Japan, nudity and sexuality were strictly prohibited. Former Nintendo president Hiroshi Yamauchi believed that if the company allowed the licensing of pornographic games, the company's image would be forever tarnished. Nintendo of America went further in that games released for Nintendo consoles could not feature nudity, sexuality, profanity (including racism, sexism or slurs), blood, graphic or domestic violence, drugs, political messages or religious symbols (with the exception of widely unpracticed religions, such as the Greek Pantheon). The Japanese parent company was concerned that it may be viewed as a "Japanese Invasion" by forcing Japanese community standards on North American and European children. Past the strict guidelines, some exceptions have occurred: "Bionic Commando" (though swastikas were eliminated in the US version), "Smash TV" and "" contain human violence, the latter also containing implied sexuality and tobacco use; "River City Ransom" and "" contain nudity, and the latter also contains religious images, as do "" and "". A known side effect of this policy is the Genesis version of "Mortal Kombat" having more than double the unit sales of the Super NES version, mainly because Nintendo had forced publisher Acclaim to recolor the red blood to look like white sweat and replace some of the more gory graphics in its release of the game, making it less violent. By contrast, Sega allowed blood and gore to remain in the Genesis version (though a code is required to unlock the gore). Nintendo allowed the Super NES version of "Mortal Kombat II" to ship uncensored the following year with a content warning on the packaging. Video game ratings systems were introduced with the Entertainment Software Rating Board of 1994 and the Pan European Game Information of 2003, and Nintendo discontinued most of its censorship policies in favor of consumers making their own choices. Today, changes to the content of games are done primarily by the game's developer or, occasionally, at the request of Nintendo. The only clear-set rule is that ESRB AO-rated games will not be licensed on Nintendo consoles in North America, a practice which is also enforced by Sony and Microsoft, its two greatest competitors in the present market. 
Nintendo has since allowed several mature-content games to be published on its consoles, including these: "Perfect Dark", "Conker's Bad Fur Day", "Doom", "Doom 64", "BMX XXX", the "Resident Evil" series, "Killer7", the "Mortal Kombat" series, "", "BloodRayne", "Geist", "", "Bayonetta 2", "Devil's Third", and "". Certain games have continued to be modified, however. For example, Konami was forced to remove all references to cigarettes in the 2000 Game Boy Color game "Metal Gear Solid" (although the previous NES version of "Metal Gear" and the subsequent GameCube game "" both included such references, as did Wii game "MadWorld"), and maiming and blood were removed from the Nintendo 64 port of "Cruis'n USA". Another example is in the Game Boy Advance game "Mega Man Zero 3", in which one of the bosses, called Hellbat Schilt in the Japanese and European releases, was renamed Devilbat Schilt in the North American localization. In North America releases of the "Mega Man Zero" games, enemies and bosses killed with a saber attack do not gush blood as they do in the Japanese versions. However, the release of the Wii was accompanied by a number of even more controversial games, such as "Manhunt 2", "No More Heroes", "", and "MadWorld", the latter three of which were published exclusively for the console. Nintendo of America also had guidelines before 1993 that had to be followed by its licensees to make games for the Nintendo Entertainment System, in addition to the above content guidelines. Guidelines were enforced through the 10NES lockout chip. The last rule was circumvented in a number of ways; for example, Konami, wanting to produce more games for Nintendo's consoles, formed Ultra Games and later Palcom to produce more games as a technically different publisher. This disadvantaged smaller or emerging companies, as they could not afford to start additional companies. In another side effect, Square Co (now Square Enix) executives have suggested that the price of publishing games on the Nintendo 64 along with the degree of censorship and control that Nintendo enforced over its games, most notably "Final Fantasy VI", were factors in switching its focus towards Sony's PlayStation console. In 1993, a class action suit was taken against Nintendo under allegations that their lockout chip enabled unfair business practices. The case was settled, with the condition that California consumers were entitled to a $3 discount coupon for a game of Nintendo's choice. Nintendo has generally been proactive to assure its intellectual property in both hardware and software is protected. With the NES system, Nintendo employed a lock-out system that only allowed authorized game cartridges they manufactured to be playable on the system. Nintendo has used emulation by itself or licensed from third parties to provide means to re-release games from their older platforms on newer systems, with Virtual Console, which re-released classic games as downloadable titles, the NES and SNES library for Nintendo Switch Online subscribers, and with dedicated consoles like the NES Mini and SNES Mini. However, Nintendo has taken a hard stance against unlicensed emulation of its video games and consoles, stating that it is the single largest threat to the intellectual property rights of video game developers. 
Further, Nintendo has taken action against fan-made games which have used significant facets of their IP, issuing cease & desist letters to these projects or Digital Millennium Copyright Act-related complaints to services that host these projects. In recent years, Nintendo has taken legal action against sites that knowingly distribute ROM images of their games. On 19 July 2018, Nintendo sued Jacob Mathias, the owner of ROM image distribution websites LoveROMs and LoveRetro, for "brazen and mass-scale infringement of Nintendo's intellectual property rights." Nintendo settled with Mathias in November 2018 for more than along with relinquishing all ROM images in their ownership. While Nintendo is likely to have agreed to a smaller fine in private, the large amount was seen as a deterrent to prevent similar sites from sharing ROM images. Nintendo filed a separate suit against RomUniverse in September 2019 which also offered infringing copies of Nintendo DS and Switch games in addition to ROM images. Nintendo also successfully won a suit in the United Kingdom that same month to force the major Internet service providers in the country to block access to sites that offered copyright-infringing copies of Switch software or hacks for the Nintendo Switch to run unauthorized software. Ironically, individuals who hacked the Wii Virtual Console version of "Super Mario Bros." discovered that the ROM image Nintendo used had likely been downloaded from a ROM distribution site. In a notable case, Nintendo sought enforcement action against a hacker that for several years had gotten into Nintendo's internal database by various means including phishing to obtain plans of what games and hardware they had planned to announce for upcoming shows like E3, leaking this information to the Internet, impacting how Nintendo's own announcements were received. Though the person was a minor when Nintendo brought the United States Federal Bureau of Investigation (FBI) to investigate, and had been warned by the FBI to desist, the person continued over 2018 and 2019 as an adult, taunting their actions over social media. They were arrested in July 2019, and in addition to documents confirming the hacks, the FBI found a number of unauthorized game files as well as child pornography on their computers, leading to their admission of guilt for all crimes in January 2020. Similarly, Nintendo spent significant time to identify who had leaked information about "Pokémon Sword" and "Shield" several weeks before its planned Nintendo Directs, ultimately tracing the leaks back to a Portugal game journalist who leaked the information from official review copies of the game and subsequently severed ties with the publication. Despite efforts to protects its IP, a major leak of documents, including source code, design documents, hardware drawings and documentation and other internal information primarily related to the Nintendo 64, GameCube, and Wii, appeared in May 2020. The leak may have been related to BroadOn, a company that Nintendo had contracted to help with the Wii's design, but could also be the work of Zammis Clark, a Malwarebytes employee and hacker who infiltrated Nintendo's servers between March and May 2018. The gold sunburst seal was first used by Nintendo of America, and later Nintendo of Europe. It is displayed on any game, system, or accessory licensed for use on one of its video game consoles, denoting the game has been properly approved by Nintendo. 
The seal is also displayed on any Nintendo-licensed merchandise, such as trading cards, game guides, or apparel, albeit with the words "Official Nintendo Licensed Product." In 2008, game designer Sid Meier cited the Seal of Quality as one of the three most important innovations in video game history, as it helped set a standard for game quality that protected consumers from shovelware. In NTSC regions, this seal is an elliptical starburst named the "Official Nintendo Seal". Originally, for NTSC countries, the seal was a large, black and gold circular starburst. The seal read as follows: "This seal is your assurance that NINTENDO has approved and guaranteed the quality of this product." This seal was later altered in 1988: "approved and guaranteed" was changed to "evaluated and approved." In 1989, the seal became gold and white, as it currently appears, with a shortened phrase, "Official Nintendo Seal of Quality." It was changed in 2003 to read "Official Nintendo Seal." The seal currently reads: In PAL regions, the seal is a circular starburst named the "Original Nintendo Seal of Quality." Text near the seal in the Australian Wii manual states: In 1992, Nintendo teamed with the Starlight Children's Foundation to build Starlight Fun Center mobile entertainment units and install them in hospitals. 1,000 Starlight Nintendo Fun Center units were installed by the end of 1995. These units combine several forms of multimedia entertainment, including gaming, and serve as a distraction to brighten moods and boost kids' morale during hospital stays. Nintendo has consistently been ranked last in Greenpeace's "Guide to Greener Electronics" due to Nintendo's failure to publish information. Similarly, they are ranked last in the Enough Project's "Conflict Minerals Company Rankings" due to Nintendo's refusal to respond to multiple requests for information. Like many other electronics companies, Nintendo offers a take-back recycling program which allows customers to mail in old products they no longer use. Nintendo of America claimed that it took in 548 tons of returned products in 2011, 98% of which was either reused or recycled.
https://en.wikipedia.org/wiki?curid=21197
Nobel Prize The Nobel Prize (, ; ; ) is a set of annual international awards bestowed in several categories by Swedish and Norwegian institutions in recognition of academic, cultural, or scientific advances. The will of the Swedish chemist, engineer and industrialist Alfred Nobel established the five Nobel prizes in 1895. The prizes in Chemistry, Literature, Peace, Physics, and Physiology or Medicine were first awarded in 1901. The prizes are widely regarded as the most prestigious awards available in their respective fields. In 1968, Sveriges Riksbank, Sweden's central bank, established the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. The award is based on a donation received by the Nobel Foundation in 1968 from Sveriges Riksbank on the occasion of the bank's 300th anniversary. The first Prize in Economic Sciences was awarded to Ragnar Frisch and Jan Tinbergen in 1969. The Prize in Economic Sciences is awarded by the Royal Swedish Academy of Sciences, Stockholm, Sweden, according to the same principles as for the Nobel Prizes that have been awarded since 1901. However, as it is not one of the prizes that Alfred Nobel established in his will in 1895, it is not a Nobel Prize. The Royal Swedish Academy of Sciences awards the Nobel Prize in Chemistry, the Nobel Prize in Physics, and the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel; the Nobel Assembly at the Karolinska Institute awards the Nobel Prize in Physiology or Medicine; the Swedish Academy grants the Nobel Prize in Literature; and the Norwegian Nobel Committee awards the Nobel Peace Prize. Between 1901 and 2019, the Nobel Prizes (and the Prizes in Economic Sciences, from 1969 on) were awarded 597 times to 950 people and organizations. With some receiving the Nobel Prize more than once, this makes a total of 27 organizations and 908 individuals. The prize ceremonies take place annually in Stockholm, Sweden (with the exception of the Peace Prize ceremony, which is held in Oslo, Norway). Each recipient (known as a "laureate") receives a gold medal, a diploma, and a sum of money that has been decided by the Nobel Foundation. (, each prize is worth 9,000,000 SEK, or about , €848,678, or £716,224.) Medals made before 1980 were struck in 23-carat gold, and later in 18-carat green gold plated with a 24-carat gold coating. The prize is not awarded posthumously; however, if a person is awarded a prize and dies before receiving it, the prize may still be presented. A prize may not be shared among more than three individuals, although the Nobel Peace Prize can be awarded to organizations of more than three people. Alfred Nobel () was born on 21 October 1833 in Stockholm, Sweden, into a family of engineers. He was a chemist, engineer, and inventor. In 1894, Nobel purchased the Bofors iron and steel mill, which he made into a major armaments manufacturer. Nobel also invented ballistite. This invention was a precursor to many smokeless military explosives, especially the British smokeless powder cordite. As a consequence of his patent claims, Nobel was eventually involved in a patent infringement lawsuit over cordite. Nobel amassed a fortune during his lifetime, with most of his wealth coming from his 355 inventions, of which dynamite is the most famous. In 1888, Nobel was astonished to read his own obituary, titled "The merchant of death is dead", in a French newspaper. It was Alfred's brother Ludvig who had died; the obituary was eight years premature. 
The article disconcerted Nobel and made him apprehensive about how he would be remembered. This inspired him to change his will. On 10 December 1896, Alfred Nobel died in his villa in San Remo, Italy, from a cerebral haemorrhage. He was 63 years old. Nobel wrote several wills during his lifetime. He composed the last over a year before he died, signing it at the Swedish–Norwegian Club in Paris on 27 November 1895. To widespread astonishment, Nobel's last will specified that his fortune be used to create a series of prizes for those who confer the "greatest benefit on mankind" in physics, chemistry, physiology or medicine, literature, and peace. Nobel bequeathed 94% of his total assets, 31 million SEK (c. US$186 million, €150 million in 2008), to establish the five Nobel Prizes. Owing to skepticism surrounding the will, it was not approved by the Storting in Norway until 26 April 1897. The executors of the will, Ragnar Sohlman and Rudolf Lilljequist, formed the Nobel Foundation to take care of the fortune and to organise the awarding of prizes. Nobel's instructions named a Norwegian Nobel Committee to award the Peace Prize, the members of whom were appointed shortly after the will was approved in April 1897. Soon thereafter, the other prize-awarding organizations were designated. These were Karolinska Institute on 7 June, the Swedish Academy on 9 June, and the Royal Swedish Academy of Sciences on 11 June. The Nobel Foundation reached an agreement on guidelines for how the prizes should be awarded; and, in 1900, the Nobel Foundation's newly created statutes were promulgated by King Oscar II. In 1905, the personal union between Sweden and Norway was dissolved. According to his will and testament read in Stockholm on 30 December 1896, a foundation established by Alfred Nobel would reward those who serve humanity. The Nobel Prize was funded by Alfred Nobel's personal fortune. According to the official sources, Alfred Nobel bequeathed from the shares 94% of his fortune to the Nobel Foundation that now forms the economic base of the Nobel Prize. The Nobel Foundation was founded as a private organization on 29 June 1900. Its function is to manage the finances and administration of the Nobel Prizes. In accordance with Nobel's will, the primary task of the Foundation is to manage the fortune Nobel left. Robert and Ludvig Nobel were involved in the oil business in Azerbaijan, and according to Swedish historian E. Bargengren, who accessed the Nobel family archive, it was this "decision to allow withdrawal of Alfred's money from Baku that became the decisive factor that enabled the Nobel Prizes to be established". Another important task of the Nobel Foundation is to market the prizes internationally and to oversee informal administration related to the prizes. The Foundation is not involved in the process of selecting the Nobel laureates. In many ways, the Nobel Foundation is similar to an investment company, in that it invests Nobel's money to create a solid funding base for the prizes and the administrative activities. The Nobel Foundation is exempt from all taxes in Sweden (since 1946) and from investment taxes in the United States (since 1953). Since the 1980s, the Foundation's investments have become more profitable and as of 31 December 2007, the assets controlled by the Nobel Foundation amounted to 3.628 billion Swedish "kronor" (c. US$560 million). According to the statutes, the Foundation consists of a board of five Swedish or Norwegian citizens, with its seat in Stockholm. 
The Chairman of the Board is appointed by the Swedish King in Council, with the other four members appointed by the trustees of the prize-awarding institutions. An Executive Director is chosen from among the board members, a Deputy Director is appointed by the King in Council, and two deputies are appointed by the trustees. However, since 1995, all the members of the board have been chosen by the trustees, and the Executive Director and the Deputy Director appointed by the board itself. As well as the board, the Nobel Foundation is made up of the prize-awarding institutions (the Royal Swedish Academy of Sciences, the Nobel Assembly at Karolinska Institute, the Swedish Academy, and the Norwegian Nobel Committee), the trustees of these institutions, and auditors. The capital of the Nobel Foundation today is invested 50% in shares, 20% bonds and 30% other investments (e.g. hedge funds or real estate). The distribution can vary by 10 percent. At the beginning of 2008, 64% of the funds were invested mainly in American and European stocks, 20% in bonds, plus 12% in real estate and hedge funds. In 2011, the total annual cost was approximately 120 million krona, with 50 million krona as the prize money. Further costs to pay institutions and persons engaged in giving the prizes were 27.4 million krona. The events during the Nobel week in Stockholm and Oslo cost 20.2 million krona. The administration, Nobel symposium, and similar items had costs of 22.4 million krona. The cost of the Economic Sciences prize of 16.5 Million krona is paid by the Sveriges Riksbank. Once the Nobel Foundation and its guidelines were in place, the Nobel Committees began collecting nominations for the inaugural prizes. Subsequently, they sent a list of preliminary candidates to the prize-awarding institutions. The Nobel Committee's Physics Prize shortlist cited Wilhelm Röntgen's discovery of X-rays and Philipp Lenard's work on cathode rays. The Academy of Sciences selected Röntgen for the prize. In the last decades of the 19th century, many chemists had made significant contributions. Thus, with the Chemistry Prize, the Academy "was chiefly faced with merely deciding the order in which these scientists should be awarded the prize". The Academy received 20 nominations, eleven of them for Jacobus van 't Hoff. Van 't Hoff was awarded the prize for his contributions in chemical thermodynamics. The Swedish Academy chose the poet Sully Prudhomme for the first Nobel Prize in Literature. A group including 42 Swedish writers, artists, and literary critics protested against this decision, having expected Leo Tolstoy to be awarded. Some, including Burton Feldman, have criticised this prize because they consider Prudhomme a mediocre poet. Feldman's explanation is that most of the Academy members preferred Victorian literature and thus selected a Victorian poet. The first Physiology or Medicine Prize went to the German physiologist and microbiologist Emil von Behring. During the 1890s, von Behring developed an antitoxin to treat diphtheria, which until then was causing thousands of deaths each year. The first Nobel Peace Prize went to the Swiss Jean Henri Dunant for his role in founding the International Red Cross Movement and initiating the Geneva Convention, and jointly given to French pacifist Frédéric Passy, founder of the Peace League and active with Dunant in the Alliance for Order and Civilization. 
In 1938 and 1939, Adolf Hitler's Third Reich forbade three laureates from Germany (Richard Kuhn, Adolf Friedrich Johann Butenandt, and Gerhard Domagk) from accepting their prizes. Each man was later able to receive the diploma and medal. Even though Sweden was officially neutral during the Second World War, the prizes were awarded irregularly. In 1939, the Peace Prize was not awarded. No prize was awarded in any category from 1940 to 1942, due to the occupation of Norway by Germany. In the subsequent year, all prizes were awarded except those for literature and peace. During the occupation of Norway, three members of the Norwegian Nobel Committee fled into exile. The remaining members escaped persecution from the Germans when the Nobel Foundation stated that the Committee building in Oslo was Swedish property. Thus it was a safe haven from the German military, which was not at war with Sweden. These members kept the work of the Committee going, but did not award any prizes. In 1944, the Nobel Foundation, together with the three members in exile, made sure that nominations were submitted for the Peace Prize and that the prize could be awarded once again. In 1968, Sweden's central bank Sveriges Riksbank celebrated its 300th anniversary by donating a large sum of money to the Nobel Foundation to be used to set up a prize in honour of Alfred Nobel. The following year, the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel was awarded for the first time. The Royal Swedish Academy of Sciences became responsible for selecting laureates. The first laureates for the Economics Prize were Jan Tinbergen and Ragnar Frisch "for having developed and applied dynamic models for the analysis of economic processes". The Board of the Nobel Foundation decided that after this addition, it would allow no further new prizes. The award process is similar for all of the Nobel Prizes, the main difference being who can make nominations for each of them. Nomination forms are sent by the Nobel Committee to about 3,000 individuals, usually in September the year before the prizes are awarded. These individuals are generally prominent academics working in a relevant area. Regarding the Peace Prize, inquiries are also sent to governments, former Peace Prize laureates, and current or former members of the Norwegian Nobel Committee. The deadline for the return of the nomination forms is 31 January of the year of the award. The Nobel Committee nominates about 300 potential laureates from these forms and additional names. The nominees are not publicly named, nor are they told that they are being considered for the prize. All nomination records for a prize are sealed for 50 years from the awarding of the prize. The Nobel Committee then prepares a report reflecting the advice of experts in the relevant fields. This, along with the list of preliminary candidates, is submitted to the prize-awarding institutions. The institutions meet to choose the laureate or laureates in each field by a majority vote. Their decision, which cannot be appealed, is announced immediately after the vote. A maximum of three laureates and two different works may be selected per award. Except for the Peace Prize, which can be awarded to institutions, the awards can only be given to individuals. Although posthumous nominations are not presently permitted, individuals who died in the months between their nomination and the decision of the prize committee were originally eligible to receive the prize. 
This has occurred twice: the 1931 Literature Prize awarded to Erik Axel Karlfeldt, and the 1961 Peace Prize awarded to UN Secretary General Dag Hammarskjöld. Since 1974, laureates must be thought alive at the time of the October announcement. There has been one laureate, William Vickrey, who in 1996 died after the prize (in Economics) was announced but before it could be presented. On 3 October 2011, the laureates for the Nobel Prize in Physiology or Medicine were announced; however, the committee was not aware that one of the laureates, Ralph M. Steinman, had died three days earlier. The committee was debating about Steinman's prize, since the rule is that the prize is not awarded posthumously. The committee later decided that as the decision to award Steinman the prize "was made in good faith", it would remain unchanged. Nobel's will provided for prizes to be awarded in recognition of discoveries made "during the preceding year". Early on, the awards usually recognised recent discoveries. However, some of those early discoveries were later discredited. For example, Johannes Fibiger was awarded the 1926 Prize in Physiology or Medicine for his purported discovery of a parasite that caused cancer. To avoid repeating this embarrassment, the awards increasingly recognised scientific discoveries that had withstood the test of time. According to Ralf Pettersson, former chairman of the Nobel Prize Committee for Physiology or Medicine, "the criterion 'the previous year' is interpreted by the Nobel Assembly as the year when the full impact of the discovery has become evident." The interval between the award and the accomplishment it recognises varies from discipline to discipline. The Literature Prize is typically awarded to recognise a cumulative lifetime body of work rather than a single achievement. The Peace Prize can also be awarded for a lifetime body of work. For example, 2008 laureate Martti Ahtisaari was awarded for his work to resolve international conflicts. However, they can also be awarded for specific recent events. For instance, Kofi Annan was awarded the 2001 Peace Prize just four years after becoming the Secretary-General of the United Nations. Similarly Yasser Arafat, Yitzhak Rabin, and Shimon Peres received the 1994 award, about a year after they successfully concluded the Oslo Accords. Awards for physics, chemistry, and medicine are typically awarded once the achievement has been widely accepted. Sometimes, this takes decades – for example, Subrahmanyan Chandrasekhar shared the 1983 Physics Prize for his 1930s work on stellar structure and evolution. Not all scientists live long enough for their work to be recognised. Some discoveries can never be considered for a prize if their impact is realised after the discoverers have died. Except for the Peace Prize, the Nobel Prizes are presented in Stockholm, Sweden, at the annual Prize Award Ceremony on 10 December, the anniversary of Nobel's death. The recipients' lectures are normally held in the days prior to the award ceremony. The Peace Prize and its recipients' lectures are presented at the annual Prize Award Ceremony in Oslo, Norway, usually on 10 December. The award ceremonies and the associated banquets are typically major international events. The Prizes awarded in Sweden's ceremonies' are held at the Stockholm Concert Hall, with the Nobel banquet following immediately at Stockholm City Hall. 
The Nobel Peace Prize ceremony has been held at the Norwegian Nobel Institute (1905–1946), at the auditorium of the University of Oslo (1947–1989), and at Oslo City Hall (1990–present). The highlight of the Nobel Prize Award Ceremony in Stockholm occurs when each Nobel laureate steps forward to receive the prize from the hands of the King of Sweden. In Oslo, the Chairman of the Norwegian Nobel Committee presents the Nobel Peace Prize in the presence of the King of Norway. At first, King Oscar II did not approve of awarding grand prizes to foreigners. It is said that he changed his mind once his attention had been drawn to the publicity value of the prizes for Sweden. After the award ceremony in Sweden, a banquet is held in the Blue Hall at the Stockholm City Hall, which is attended by the Swedish Royal Family and around 1,300 guests. The Nobel Peace Prize banquet is held in Norway at the Oslo Grand Hotel after the award ceremony. Apart from the laureate, guests include the President of the Storting, on occasion the Swedish prime minister, and, since 2006, the King and Queen of Norway. In total, about 250 guests attend. According to the statutes of the Nobel Foundation, each laureate is required to give a public lecture on a subject related to the topic of their prize. The Nobel lecture as a rhetorical genre took decades to reach its current format. These lectures normally occur during Nobel Week (the week leading up to the award ceremony and banquet, which begins with the laureates arriving in Stockholm and normally ends with the Nobel banquet), but this is not mandatory. The laureate is only obliged to give the lecture within six months of receiving the prize, but some have happened even later. For example, US President Theodore Roosevelt received the Peace Prize in 1906 but gave his lecture in 1910, after his term in office. The lectures are organized by the same association which selected the laureates. The Nobel Foundation announced on 30 May 2012 that it had awarded the contract for the production of the five (Swedish) Nobel Prize medals to Svenska Medalj AB. Between 1902 and 2010, the Nobel Prize medals were minted by Myntverket (the Swedish Mint), Sweden's oldest company, which ceased operations in 2011 after 107 years. In 2011, the Mint of Norway, located in Kongsberg, made the medals. The Nobel Prize medals are registered trademarks of the Nobel Foundation. Each medal features an image of Alfred Nobel in left profile on the obverse. The medals for physics, chemistry, physiology or medicine, and literature have identical obverses, showing the image of Alfred Nobel and the years of his birth and death. Nobel's portrait also appears on the obverse of the Peace Prize medal and the medal for the Economics Prize, but with a slightly different design. For instance, the laureate's name is engraved on the rim of the Economics medal. The image on the reverse of a medal varies according to the institution awarding the prize. The reverse sides of the medals for chemistry and physics share the same design. All medals made before 1980 were struck in 23 carat gold. Since then, they have been struck in 18 carat green gold plated with 24 carat gold. The weight of each medal varies with the value of gold, but averages about for each medal. The diameter is and the thickness varies between and . Because of the high value of their gold content and tendency to be on public display, Nobel medals are subject to medal theft. 
During World War II, the medals of German scientists Max von Laue and James Franck were sent to Copenhagen for safekeeping. When Germany invaded Denmark, Hungarian chemist (and Nobel laureate himself) George de Hevesy dissolved them in aqua regia (nitro-hydrochloric acid), to prevent confiscation by Nazi Germany and to prevent legal problems for the holders. After the war, the gold was recovered from solution, and the medals re-cast. Nobel laureates receive a diploma directly from the hands of the King of Sweden, or in the case of the peace prize, the Chairman of the Norwegian Nobel Committee. Each diploma is uniquely designed by the prize-awarding institutions for the laureates that receive them. The diploma contains a picture and text in Swedish which states the name of the laureate and normally a citation of why they received the prize. None of the Nobel Peace Prize laureates has ever had a citation on their diplomas. The laureates are given a sum of money when they receive their prizes, in the form of a document confirming the amount awarded. The amount of prize money depends upon how much money the Nobel Foundation can award each year. The purse has increased since the 1980s, when the prize money was 880,000 SEK per prize (c. 2.6 million SEK altogether, US$350,000 today). In 2009, the monetary award was 10 million SEK (US$1.4 million). In June 2012, it was lowered to 8 million SEK. If two laureates share the prize in a category, the award grant is divided equally between the recipients. If there are three, the awarding committee has the option of dividing the grant equally, or awarding one-half to one recipient and one-quarter to each of the others. It is common for recipients to donate prize money to benefit scientific, cultural, or humanitarian causes. Among other criticisms, the Nobel Committees have been accused of having a political agenda, and of omitting more deserving candidates. They have also been accused of Eurocentrism, especially for the Literature Prize. Among the most criticised Nobel Peace Prizes was the one awarded to Henry Kissinger and Lê Đức Thọ. This led to the resignation of two Norwegian Nobel Committee members. Kissinger and Thọ were awarded the prize for negotiating a ceasefire between North Vietnam and the United States in January 1973. However, when the award was announced, both sides were still engaging in hostilities. Critics sympathetic to the North announced that Kissinger was not a peace-maker but the opposite, responsible for widening the war. Those hostile to the North and what they considered its deceptive practices during negotiations were deprived of a chance to criticise Lê Đức Thọ, as he declined the award. Yasser Arafat, Shimon Peres, and Yitzhak Rabin received the Peace Prize in 1994 for their efforts in making peace between Israel and Palestine. Immediately after the award was announced, one of the five Norwegian Nobel Committee members denounced Arafat as a terrorist and resigned. Additional misgivings about Arafat were widely expressed in various newspapers. Another controversial Peace Prize was that awarded to Barack Obama in 2009. Nominations had closed only eleven days after Obama took office as President of the United States, but the actual evaluation occurred over the next eight months. Obama himself stated that he did not feel deserving of the award, or worthy of the company in which it would place him. 
Past Peace Prize laureates were divided, some saying that Obama deserved the award, and others saying he had not secured the achievements to yet merit such an accolade. Obama's award, along with the previous Peace Prizes for Jimmy Carter and Al Gore, also prompted accusations of a left-wing bias. The award of the 2004 Literature Prize to Elfriede Jelinek drew a protest from a member of the Swedish Academy, Knut Ahnlund. Ahnlund resigned, alleging that the selection of Jelinek had caused "irreparable damage to all progressive forces, it has also confused the general view of literature as an art". He alleged that Jelinek's works were "a mass of text shovelled together without artistic structure". The 2009 Literature Prize to Herta Müller also generated criticism. According to "The Washington Post", many US literary critics and professors were ignorant of her work. This made those critics feel the prizes were too Eurocentric. In 1949, the neurologist António Egas Moniz received the Physiology or Medicine Prize for his development of the prefrontal leucotomy. The previous year, Dr. Walter Freeman had developed a version of the procedure which was faster and easier to carry out. Due in part to the publicity surrounding the original procedure, Freeman's procedure was prescribed without due consideration or regard for modern medical ethics. Endorsed by such influential publications as "The New England Journal of Medicine", leucotomy or "lobotomy" became so popular that about 5,000 lobotomies were performed in the United States in the three years immediately following Moniz's receipt of the Prize. The Norwegian Nobel Committee confirmed that Mahatma Gandhi was nominated for the Peace Prize in 1937–39, 1947 and, a few days before he was assassinated in January 1948. Later, members of the Norwegian Nobel Committee expressed regret that he was not given the prize. Geir Lundestad, Secretary of Norwegian Nobel Committee in 2006, said, "The greatest omission in our 106 year history is undoubtedly that Mahatma Gandhi never received the Nobel Peace prize. Gandhi could do without the Nobel Peace prize. Whether Nobel committee can do without Gandhi is the question". In 1948, the year of Gandhi's death, the Nobel Committee declined to award a prize on the grounds that "there was no suitable living candidate" that year. Later, when the 14th Dalai Lama was awarded the Peace Prize in 1989, the chairman of the committee said that this was "in part a tribute to the memory of Mahatma Gandhi". Other high-profile individuals with widely recognised contributions to peace have been missed out. "Foreign Policy" lists Eleanor Roosevelt, Václav Havel, Ken Saro-Wiwa, Sari Nusseibeh, and Corazon Aquino as people who "never won the prize, but should have". In 1965, UN Secretary General U Thant was informed by the Norwegian Permanent Representative to the UN that he would be awarded that year's prize and asked whether or not he would accept. He consulted staff and later replied that he would. At the same time, Chairman Gunnar Jahn of the Nobel Peace prize committee, lobbied heavily against giving U Thant the prize and the prize was at the last minute awarded to UNICEF. The rest of the committee all wanted the prize to go to U Thant, for his work in defusing the Cuban Missile Crisis, ending the war in the Congo, and his ongoing work to mediate an end to the Vietnam War. The disagreement lasted three years and in 1966 and 1967 no prize was given, with Gunnar Jahn effectively vetoing an award to U Thant. 
The Literature Prize also has controversial omissions. Adam Kirsch has suggested that many notable writers have missed out on the award for political or extra-literary reasons. The heavy focus on European and Swedish authors has been a subject of criticism. The Eurocentric nature of the award was acknowledged by Peter Englund, the 2009 Permanent Secretary of the Swedish Academy, as a problem with the award and was attributed to the tendency for the academy to relate more to European authors. This tendency towards European authors still leaves some European writers on a list of notable writers that have been overlooked for the Literature Prize, including Europe's Leo Tolstoy, Anton Chekhov, J. R. R. Tolkien, Émile Zola, Marcel Proust, Vladimir Nabokov, James Joyce, August Strindberg, Simon Vestdijk, Karel Čapek, the New World's Jorge Luis Borges, Ezra Pound, John Updike, Arthur Miller, Mark Twain, and Africa's Chinua Achebe. Candidates can receive multiple nominations the same year. Gaston Ramon received a total of 155 nominations in physiology or medicine from 1930 to 1953, the last year with public nomination data for that award . He died in 1963 without being awarded. Pierre Paul Émile Roux received 115 nominations in physiology or medicine, and Arnold Sommerfeld received 84 in physics. These are the three most nominated scientists without awards in the data published . Otto Stern received 79 nominations in physics 1925–43 before being awarded in 1943. The strict rule against awarding a prize to more than three people is also controversial. When a prize is awarded to recognise an achievement by a team of more than three collaborators, one or more will miss out. For example, in 2002, the prize was awarded to Koichi Tanaka and John Fenn for the development of mass spectrometry in protein chemistry, an award that did not recognise the achievements of Franz Hillenkamp and Michael Karas of the Institute for Physical and Theoretical Chemistry at the University of Frankfurt. According to one of the nominees for the prize in physics, the three person limit deprived him and two other members of his team of the honor in 2013: the team of Carl Hagen, Gerald Guralnik, and Tom Kibble published a paper in 1964 that gave answers to how the cosmos began, but did not share the 2013 Physics Prize awarded to Peter Higgs and François Englert, who had also published papers in 1964 concerning the subject. All five physicists arrived at the same conclusion, albeit from different angles. Hagen contends that an equitable solution is to either abandon the three limit restriction, or expand the time period of recognition for a given achievement to two years. Similarly, the prohibition of posthumous awards fails to recognise achievements by an individual or collaborator who dies before the prize is awarded. The Economics Prize was not awarded to Fischer Black, who died in 1995, when his co-author Myron Scholes received the honor in 1997 for their landmark work on option pricing along with Robert C. Merton, another pioneer in the development of valuation of stock options. In the announcement of the award that year, the Nobel committee prominently mentioned Black's key role. Political subterfuge may also deny proper recognition. Lise Meitner and Fritz Strassmann, who co-discovered nuclear fission along with Otto Hahn, may have been denied a share of Hahn's 1944 Nobel Chemistry Award due to having fled Germany when the Nazis came to power. 
The Meitner and Strassmann roles in the research was not fully recognised until years later, when they joined Hahn in receiving the 1966 Enrico Fermi Award. Alfred Nobel left his fortune to finance annual prizes to be awarded "to those who, during the preceding year, shall have conferred the greatest benefit on mankind". He stated that the Nobel Prizes in Physics should be given "to the person who shall have made the most important 'discovery' or 'invention' within the field of physics". Nobel did not emphasise discoveries, but they have historically been held in higher respect by the Nobel Prize Committee than inventions: 77% of the Physics Prizes have been given to discoveries, compared with only 23% to inventions. Christoph Bartneck and Matthias Rauterberg, in papers published in "Nature" and "Technoetic Arts", have argued this emphasis on discoveries has moved the Nobel Prize away from its original intention of rewarding the greatest contribution to society. In terms of the most prestigious awards in STEM fields, only a small proportion have been awarded to women. Out of 210 laureates in Physics, 181 in Chemistry and 216 in Medicine between 1901 and 2018, there were only three female laureates in physics, five in chemistry and 12 in medicine. Factors proposed to contribute to the discrepancy between this and the roughly equal human sex ratio include biased nominations, fewer women than men being active in the relevant fields, Nobel Prizes typically being awarded decades after the research was done (reflecting a time when gender bias in the relevant fields was greater), a greater delay in awarding Nobel Prizes for women's achievements making longevity a more important factor for women (Nobel Prizes are not awarded posthumously), and a tendency to omit women from jointly awarded Nobel Prizes. Despite these factors, Marie Curie is to date the only person awarded Nobel Prizes in two different sciences (Physics in 1903, Chemistry in 1911); she is one of only three people who have received two Nobel Prizes in sciences (see Multiple laureates below). Four people have received two Nobel Prizes. Marie Curie received the Physics Prize in 1903 for her work on radioactivity and the Chemistry Prize in 1911 for the isolation of pure radium, making her the only person to be awarded a Nobel Prize in two different sciences. Linus Pauling was awarded the 1954 Chemistry Prize for his research into the chemical bond and its application to the structure of complex substances. Pauling was also awarded the Peace Prize in 1962 for his activism against nuclear weapons, making him the only laureate of two unshared prizes. John Bardeen received the Physics Prize twice: in 1956 for the invention of the transistor and in 1972 for the theory of superconductivity. Frederick Sanger received the prize twice in Chemistry: in 1958 for determining the structure of the insulin molecule and in 1980 for inventing a method of determining base sequences in DNA. Two organizations have received the Peace Prize multiple times. The International Committee of the Red Cross received it three times: in 1917 and 1944 for its work during the world wars; and in 1963 during the year of its centenary. The United Nations High Commissioner for Refugees has been awarded the Peace Prize twice for assisting refugees: in 1954 and 1981. The Curie family has received the most prizes, with four prizes awarded to five individual laureates. Marie Curie received the prizes in Physics (in 1903) and Chemistry (in 1911). 
Her husband, Pierre Curie, shared the 1903 Physics prize with her. Their daughter, Irène Joliot-Curie, received the Chemistry Prize in 1935 together with her husband Frédéric Joliot-Curie. In addition, the husband of Marie Curie's second daughter, Henry Labouisse, was the director of UNICEF when he accepted the Nobel Peace Prize in 1965 on that organisation's behalf. Although no family matches the Curie family's record, there have been several with two laureates. The husband-and-wife team of Gerty Cori and Carl Ferdinand Cori shared the 1947 Prize in Physiology or Medicine as did the husband-and-wife team of May-Britt Moser and Edvard Moser in 2014 (along with John O'Keefe). J. J. Thomson was awarded the Physics Prize in 1906 for showing that electrons are particles. His son, George Paget Thomson, received the same prize in 1937 for showing that they also have the properties of waves. William Henry Bragg and his son, William Lawrence Bragg, shared the Physics Prize in 1915 for inventing the X-ray crystallography. Niels Bohr was awarded the Physics prize in 1922, as was his son, Aage Bohr, in 1975. Manne Siegbahn, who received the Physics Prize in 1924, was the father of Kai Siegbahn, who received the Physics Prize in 1981. Hans von Euler-Chelpin, who received the Chemistry Prize in 1929, was the father of Ulf von Euler, who was awarded the Physiology or Medicine Prize in 1970. C. V. Raman was awarded the Physics Prize in 1930 and was the uncle of Subrahmanyan Chandrasekhar, who was awarded the same prize in 1983. Arthur Kornberg received the Physiology or Medicine Prize in 1959; Kornberg's son, Roger later received the Chemistry Prize in 2006. Jan Tinbergen, who was awarded the first Economics Prize in 1969, was the brother of Nikolaas Tinbergen, who received the 1973 Physiology or Medicine Prize. Alva Myrdal, Peace Prize laureate in 1982, was the wife of Gunnar Myrdal who was awarded the Economics Prize in 1974. Economics laureates Paul Samuelson and Kenneth Arrow were brothers-in-law. Frits Zernike, who was awarded the 1953 Physics Prize, is the great-uncle of 1999 Physics laureate Gerard 't Hooft. In 2019, married couple Abhijit Banerjee and Esther Duflo were awarded the Nobel Prize for Economics. Two laureates have voluntarily declined the Nobel Prize. In 1964, Jean-Paul Sartre was awarded the Literature Prize but refused, stating, "A writer must refuse to allow himself to be transformed into an institution, even if it takes place in the most honourable form." Lê Đức Thọ, chosen for the 1973 Peace Prize for his role in the Paris Peace Accords, declined, stating that there was no actual peace in Vietnam. George Bernard Shaw attempted to decline the prize money while accepting the 1925 Literature Prize; eventually it was agreed to use it to found the Anglo-Swedish Literary Foundation. During the Third Reich, Adolf Hitler hindered Richard Kuhn, Adolf Butenandt, and Gerhard Domagk from accepting their prizes. All of them were awarded their diplomas and gold medals after World War II. In 1958, Boris Pasternak declined his prize for literature due to fear of what the Soviet Union government might do if he travelled to Stockholm to accept his prize. In return, the Swedish Academy refused his refusal, saying "this refusal, of course, in no way alters the validity of the award." The Academy announced with regret that the presentation of the Literature Prize could not take place that year, holding it back until 1989 when Pasternak's son accepted the prize on his behalf. 
Aung San Suu Kyi was awarded the Nobel Peace Prize in 1991, but her children accepted the prize because she had been placed under house arrest in Burma; Suu Kyi delivered her speech two decades later, in 2012. Liu Xiaobo was awarded the Nobel Peace Prize in 2010 while he and his wife were under house arrest in China as political prisoners, and he was unable to accept the prize in his lifetime. Being a symbol of scientific or literary achievement that is recognisable worldwide, the Nobel Prize is often depicted in fiction. This includes films like "The Prize" and "Nobel Son" about fictional Nobel laureates as well as fictionalised accounts of stories surrounding real prizes such as "Nobel Chor", a film based on the theft of Rabindranath Tagore's prize. The memorial symbol "Planet of Alfred Nobel" was opened in Dnipropetrovsk University of Economics and Law in 2008. On the globe, there are 802 Nobel laureates' reliefs made of a composite alloy obtained when disposing of military strategic missiles.
https://en.wikipedia.org/wiki?curid=21201
Niels Bohr Niels Henrik David Bohr (; 7 October 1885 – 18 November 1962) was a Danish physicist who made foundational contributions to understanding atomic structure and quantum theory, for which he received the Nobel Prize in Physics in 1922. Bohr was also a philosopher and a promoter of scientific research. Bohr developed the Bohr model of the atom, in which he proposed that energy levels of electrons are discrete and that the electrons revolve in stable orbits around the atomic nucleus but can jump from one energy level (or orbit) to another. Although the Bohr model has been supplanted by other models, its underlying principles remain valid. He conceived the principle of complementarity: that items could be separately analysed in terms of contradictory properties, like behaving as a wave or a stream of particles. The notion of complementarity dominated Bohr's thinking in both science and philosophy. Bohr founded the Institute of Theoretical Physics at the University of Copenhagen, now known as the Niels Bohr Institute, which opened in 1920. Bohr mentored and collaborated with physicists including Hans Kramers, Oskar Klein, George de Hevesy, and Werner Heisenberg. He predicted the existence of a new zirconium-like element, which was named hafnium, after the Latin name for Copenhagen, where it was discovered. Later, the element bohrium was named after him. During the 1930s Bohr helped refugees from Nazism. After Denmark was occupied by the Germans, he had a famous meeting with Heisenberg, who had become the head of the German nuclear weapon project. In September 1943 word reached Bohr that he was about to be arrested by the Germans, and he fled to Sweden. From there, he was flown to Britain, where he joined the British Tube Alloys nuclear weapons project, and was part of the British mission to the Manhattan Project. After the war, Bohr called for international cooperation on nuclear energy. He was involved with the establishment of CERN and the Research Establishment Risø of the Danish Atomic Energy Commission and became the first chairman of the Nordic Institute for Theoretical Physics in 1957. Bohr was born in Copenhagen, Denmark, on 7 October 1885, the second of three children of Christian Bohr, a professor of physiology at the University of Copenhagen, and Ellen Adler Bohr, who came from a wealthy Danish Jewish family prominent in banking and parliamentary circles. He had an elder sister, Jenny, and a younger brother Harald. Jenny became a teacher, while Harald became a mathematician and footballer who played for the Danish national team at the 1908 Summer Olympics in London. Niels was a passionate footballer as well, and the two brothers played several matches for the Copenhagen-based Akademisk Boldklub (Academic Football Club), with Niels as goalkeeper. Bohr was educated at Gammelholm Latin School, starting when he was seven. In 1903, Bohr enrolled as an undergraduate at Copenhagen University. His major was physics, which he studied under Professor Christian Christiansen, the university's only professor of physics at that time. He also studied astronomy and mathematics under Professor Thorvald Thiele, and philosophy under Professor Harald Høffding, a friend of his father. In 1905 a gold medal competition was sponsored by the Royal Danish Academy of Sciences and Letters to investigate a method for measuring the surface tension of liquids that had been proposed by Lord Rayleigh in 1879. This involved measuring the frequency of oscillation of the radius of a water jet. 
Bohr conducted a series of experiments using his father's laboratory in the university; the university itself had no physics laboratory. To complete his experiments, he had to make his own glassware, creating test tubes with the required elliptical cross-sections. He went beyond the original task, incorporating improvements into both Rayleigh's theory and his method, by taking into account the viscosity of the water, and by working with finite amplitudes instead of just infinitesimal ones. His essay, which he submitted at the last minute, won the prize. He later submitted an improved version of the paper to the Royal Society in London for publication in the "Philosophical Transactions of the Royal Society". Harald became the first of the two Bohr brothers to earn a master's degree, which he earned for mathematics in April 1909. Niels took another nine months to earn his on the electron theory of metals, a topic assigned by his supervisor, Christiansen. Bohr subsequently elaborated his master's thesis into his much-larger Doctor of Philosophy (dr. phil.) thesis. He surveyed the literature on the subject, settling on a model postulated by Paul Drude and elaborated by Hendrik Lorentz, in which the electrons in a metal are considered to behave like a gas. Bohr extended Lorentz's model, but was still unable to account for phenomena like the Hall effect, and concluded that electron theory could not fully explain the magnetic properties of metals. The thesis was accepted in April 1911, and Bohr conducted his formal defence on 13 May. Harald had received his doctorate the previous year. Bohr's thesis was groundbreaking, but attracted little interest outside Scandinavia because it was written in Danish, a Copenhagen University requirement at the time. In 1921, the Dutch physicist Hendrika Johanna van Leeuwen would independently derive a theorem from Bohr's thesis that is today known as the Bohr–van Leeuwen theorem. In 1910, Bohr met Margrethe Nørlund, the sister of the mathematician Niels Erik Nørlund. Bohr resigned his membership in the Church of Denmark on 16 April 1912, and he and Margrethe were married in a civil ceremony at the town hall in Slagelse on 1 August. Years later, his brother Harald similarly left the church before getting married. Bohr and Margrethe had six sons. The oldest, Christian, died in a boating accident in 1934, and another, Harald, died from childhood meningitis. Aage Bohr became a successful physicist, and in 1975 was awarded the Nobel Prize in physics, like his father. became a physician; , a chemical engineer; and Ernest, a lawyer. Like his uncle Harald, Ernest Bohr became an Olympic athlete, playing field hockey for Denmark at the 1948 Summer Olympics in London. In September 1911, Bohr, supported by a fellowship from the Carlsberg Foundation, travelled to England. At the time, it was where most of the theoretical work on the structure of atoms and molecules was being done. He met J. J. Thomson of the Cavendish Laboratory and Trinity College, Cambridge. He attended lectures on electromagnetism given by James Jeans and Joseph Larmor, and did some research on cathode rays, but failed to impress Thomson. He had more success with younger physicists like the Australian William Lawrence Bragg, and New Zealand's Ernest Rutherford, whose 1911 small central nucleus Rutherford model of the atom had challenged Thomson's 1904 plum pudding model. 
Bohr received an invitation from Rutherford to conduct post-doctoral work at Victoria University of Manchester, where Bohr met George de Hevesy and Charles Galton Darwin (whom Bohr referred to as "the grandson of the real Darwin"). Bohr returned to Denmark in July 1912 for his wedding, and travelled around England and Scotland on his honeymoon. On his return, he became a "privatdocent" at the University of Copenhagen, giving lectures on thermodynamics. Martin Knudsen put Bohr's name forward for a "docent", which was approved in July 1913, and Bohr then began teaching medical students. His three papers, which later became famous as "the trilogy", were published in "Philosophical Magazine" in July, September and November of that year. He adapted Rutherford's nuclear structure to Max Planck's quantum theory and so created his Bohr model of the atom. Planetary models of atoms were not new, but Bohr's treatment was. Taking the 1912 paper by Darwin on the role of electrons in the interaction of alpha particles with a nucleus as his starting point, he advanced the theory of electrons travelling in orbits around the atom's nucleus, with the chemical properties of each element being largely determined by the number of electrons in the outer orbits of its atoms. He introduced the idea that an electron could drop from a higher-energy orbit to a lower one, in the process emitting a quantum of discrete energy. This became a basis for what is now known as the old quantum theory. In 1885, Johann Balmer had come up with his Balmer series to describe the visible spectral lines of a hydrogen atom: where λ is the wavelength of the absorbed or emitted light and "R"H is the Rydberg constant. Balmer's formula was corroborated by the discovery of additional spectral lines, but for thirty years, no one could explain why it worked. In the first paper of his trilogy, Bohr was able to derive it from his model: where "m"e is the electron's mass, "e" is its charge, "h" is Planck's constant and "Z" is the atom's atomic number (1 for hydrogen). The model's first hurdle was the Pickering series, lines which did not fit Balmer's formula. When challenged on this by Alfred Fowler, Bohr replied that they were caused by ionised helium, helium atoms with only one electron. The Bohr model was found to work for such ions. Many older physicists, like Thomson, Rayleigh and Hendrik Lorentz, did not like the trilogy, but the younger generation, including Rutherford, David Hilbert, Albert Einstein, Enrico Fermi, Max Born and Arnold Sommerfeld saw it as a breakthrough. The trilogy's acceptance was entirely due to its ability to explain phenomena which stymied other models, and to predict results that were subsequently verified by experiments. Today, the Bohr model of the atom has been superseded, but is still the best known model of the atom, as it often appears in high school physics and chemistry texts. Bohr did not enjoy teaching medical students. He decided to return to Manchester, where Rutherford had offered him a job as a reader in place of Darwin, whose tenure had expired. Bohr accepted. He took a leave of absence from the University of Copenhagen, which he started by taking a holiday in Tyrol with his brother Harald and aunt Hanna Adler. There, he visited the University of Göttingen and the Ludwig Maximilian University of Munich, where he met Sommerfeld and conducted seminars on the trilogy. 
The First World War broke out while they were in Tyrol, greatly complicating the trip back to Denmark and Bohr's subsequent voyage with Margrethe to England, where he arrived in October 1914. They stayed until July 1916, by which time he had been appointed to the Chair of Theoretical Physics at the University of Copenhagen, a position created especially for him. His docentship was abolished at the same time, so he still had to teach physics to medical students. New professors were formally introduced to King Christian X, who expressed his delight at meeting such a famous football player. In April 1917 Bohr began a campaign to establish an Institute of Theoretical Physics. He gained the support of the Danish government and the Carlsberg Foundation, and sizeable contributions were also made by industry and private donors, many of them Jewish. Legislation establishing the Institute was passed in November 1918. Now known as the Niels Bohr Institute, it opened on 3 March 1921, with Bohr as its director. His family moved into an apartment on the first floor. Bohr's institute served as a focal point for researchers into quantum mechanics and related subjects in the 1920s and 1930s, when most of the world's best known theoretical physicists spent some time in his company. Early arrivals included Hans Kramers from the Netherlands, Oskar Klein from Sweden, George de Hevesy from Hungary, Wojciech Rubinowicz from Poland and Svein Rosseland from Norway. Bohr became widely appreciated as their congenial host and eminent colleague. Klein and Rosseland produced the Institute's first publication even before it opened. The Bohr model worked well for hydrogen, but could not explain more complex elements. By 1919, Bohr was moving away from the idea that electrons orbited the nucleus and developed heuristics to describe them. The rare-earth elements posed a particular classification problem for chemists, because they were so chemically similar. An important development came in 1924 with Wolfgang Pauli's discovery of the Pauli exclusion principle, which put Bohr's models on a firm theoretical footing. Bohr was then able to declare that the as-yet-undiscovered element 72 was not a rare-earth element, but an element with chemical properties similar to those of zirconium. He was immediately challenged by the French chemist Georges Urbain, who claimed to have discovered a rare-earth element 72, which he called "celtium". At the Institute in Copenhagen, Dirk Coster and George de Hevesy took up the challenge of proving Bohr right and Urbain wrong. Starting with a clear idea of the chemical properties of the unknown element greatly simplified the search process. They went through samples from Copenhagen's Museum of Mineralogy looking for a zirconium-like element and soon found it. The element, which they named hafnium ("Hafnia" being the Latin name for Copenhagen) turned out to be more common than gold. In 1922 Bohr was awarded the Nobel Prize in Physics "for his services in the investigation of the structure of atoms and of the radiation emanating from them". The award thus recognised both the Trilogy and his early leading work in the emerging field of quantum mechanics. For his Nobel lecture, Bohr gave his audience a comprehensive survey of what was then known about the structure of the atom, including the correspondence principle, which he had formulated. This states that the behaviour of systems described by quantum theory reproduces classical physics in the limit of large quantum numbers. 
The discovery of Compton scattering by Arthur Holly Compton in 1923 convinced most physicists that light was composed of photons, and that energy and momentum were conserved in collisions between electrons and photons. In 1924, Bohr, Kramers and John C. Slater, an American physicist working at the Institute in Copenhagen, proposed the Bohr–Kramers–Slater theory (BKS). It was more a programme than a full physical theory, as the ideas it developed were not worked out quantitatively. BKS theory became the final attempt at understanding the interaction of matter and electromagnetic radiation on the basis of the old quantum theory, in which quantum phenomena were treated by imposing quantum restrictions on a classical wave description of the electromagnetic field. Modelling atomic behaviour under incident electromagnetic radiation using "virtual oscillators" at the absorption and emission frequencies, rather than the (different) apparent frequencies of the Bohr orbits, led Max Born, Werner Heisenberg and Kramers to explore different mathematical models. They led to the development of matrix mechanics, the first form of modern quantum mechanics. The BKS theory also generated discussion of, and renewed attention to, difficulties in the foundations of the old quantum theory. The most provocative element of BKS – that momentum and energy would not necessarily be conserved in each interaction, but only statistically – was soon shown to be in conflict with experiments conducted by Walther Bothe and Hans Geiger. In light of these results, Bohr informed Darwin that "there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible". The introduction of spin by George Uhlenbeck and Samuel Goudsmit in November 1925 was a milestone. The next month, Bohr travelled to Leiden to attend celebrations of the 50th anniversary of Hendrick Lorentz receiving his doctorate. When his train stopped in Hamburg, he was met by Wolfgang Pauli and Otto Stern, who asked for his opinion of the spin theory. Bohr pointed out that he had concerns about the interaction between electrons and magnetic fields. When he arrived in Leiden, Paul Ehrenfest and Albert Einstein informed Bohr that Einstein had resolved this problem using relativity. Bohr then had Uhlenbeck and Goudsmit incorporate this into their paper. Thus, when he met Werner Heisenberg and Pascual Jordan in Göttingen on the way back, he had become, in his own words, "a prophet of the electron magnet gospel". Heisenberg first came to Copenhagen in 1924, then returned to Göttingen in June 1925, shortly thereafter developing the mathematical foundations of quantum mechanics. When he showed his results to Max Born in Göttingen, Born realised that they could best be expressed using matrices. This work attracted the attention of the British physicist Paul Dirac, who came to Copenhagen for six months in September 1926. Austrian physicist Erwin Schrödinger also visited in 1926. His attempt at explaining quantum physics in classical terms using wave mechanics impressed Bohr, who believed it contributed "so much to mathematical clarity and simplicity that it represents a gigantic advance over all previous forms of quantum mechanics". When Kramers left the Institute in 1926 to take up a chair as professor of theoretical physics at the Utrecht University, Bohr arranged for Heisenberg to return and take Kramers's place as a "lektor" at the University of Copenhagen. 
Heisenberg worked in Copenhagen as a university lecturer and assistant to Bohr from 1926 to 1927. Bohr became convinced that light behaved like both waves and particles and, in 1927, experiments confirmed the de Broglie hypothesis that matter (like electrons) also behaved like waves. He conceived the philosophical principle of complementarity: that items could have apparently mutually exclusive properties, such as being a wave or a stream of particles, depending on the experimental framework. He felt that it was not fully understood by professional philosophers. In Copenhagen in 1927 Heisenberg developed his uncertainty principle, which Bohr embraced. In a paper he presented at the Volta Conference at Como in September 1927, he demonstrated that the uncertainty principle could be derived from classical arguments, without quantum terminology or matrices. Einstein preferred the determinism of classical physics over the probabilistic new quantum physics to which he himself had contributed. Philosophical issues that arose from the novel aspects of quantum mechanics became widely celebrated subjects of discussion. Einstein and Bohr had good-natured arguments over such issues throughout their lives. In 1914 Carl Jacobsen, the heir to Carlsberg breweries, bequeathed his mansion to be used for life by the Dane who had made the most prominent contribution to science, literature or the arts, as an honorary residence (). Harald Høffding had been the first occupant, and upon his death in July 1931, the Royal Danish Academy of Sciences and Letters gave Bohr occupancy. He and his family moved there in 1932. He was elected president of the Academy on 17 March 1939. By 1929 the phenomenon of beta decay prompted Bohr to again suggest that the law of conservation of energy be abandoned, but Enrico Fermi's hypothetical neutrino and the subsequent 1932 discovery of the neutron provided another explanation. This prompted Bohr to create a new theory of the compound nucleus in 1936, which explained how neutrons could be captured by the nucleus. In this model, the nucleus could be deformed like a drop of liquid. He worked on this with a new collaborator, the Danish physicist Fritz Kalckar, who died suddenly in 1938. The discovery of nuclear fission by Otto Hahn in December 1938 (and its theoretical explanation by Lise Meitner) generated intense interest among physicists. Bohr brought the news to the United States where he opened the Fifth Washington Conference on Theoretical Physics with Fermi on 26 January 1939. When Bohr told George Placzek that this resolved all the mysteries of transuranic elements, Placzek told him that one remained: the neutron capture energies of uranium did not match those of its decay. Bohr thought about it for a few minutes and then announced to Placzek, Léon Rosenfeld and John Wheeler that "I have understood everything." Based on his liquid drop model of the nucleus, Bohr concluded that it was the uranium-235 isotope and not the more abundant uranium-238 that was primarily responsible for fission with thermal neutrons. In April 1940, John R. Dunning demonstrated that Bohr was correct. In the meantime, Bohr and Wheeler developed a theoretical treatment which they published in a September 1939 paper on "The Mechanism of Nuclear Fission". Heisenberg said of Bohr that he was "primarily a philosopher, not a physicist". Bohr read the 19th-century Danish Christian existentialist philosopher, Søren Kierkegaard. 
Richard Rhodes argued in "The Making of the Atomic Bomb" that Bohr was influenced by Kierkegaard through Høffding. In 1909, Bohr sent his brother Kierkegaard's "Stages on Life's Way" as a birthday gift. In the enclosed letter, Bohr wrote, "It is the only thing I have to send home; but I do not believe that it would be very easy to find anything better ... I even think it is one of the most delightful things I have ever read." Bohr enjoyed Kierkegaard's language and literary style, but mentioned that he had some disagreement with Kierkegaard's philosophy. Some of Bohr's biographers suggested that this disagreement stemmed from Kierkegaard's advocacy of Christianity, while Bohr was an atheist. There has been some dispute over the extent to which Kierkegaard influenced Bohr's philosophy and science. David Favrholdt argued that Kierkegaard had minimal influence over Bohr's work, taking Bohr's statement about disagreeing with Kierkegaard at face value, while Jan Faye argued that one can disagree with the content of a theory while accepting its general premises and structure. Regarding the nature of physics and quantum mechanics Bohr opined that "There is no quantum world. This is only an abstract physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature". The rise of Nazism in Germany prompted many scholars to flee their countries, either because they were Jewish or because they were political opponents of the Nazi regime. In 1933, the Rockefeller Foundation created a fund to help support refugee academics, and Bohr discussed this programme with the President of the Rockefeller Foundation, Max Mason, in May 1933 during a visit to the United States. Bohr offered the refugees temporary jobs at the Institute, provided them with financial support, arranged for them to be awarded fellowships from the Rockefeller Foundation, and ultimately found them places at institutions around the world. Those that he helped included Guido Beck, Felix Bloch, James Franck, George de Hevesy, Otto Frisch, Hilde Levi, Lise Meitner, George Placzek, Eugene Rabinowitch, Stefan Rozental, Erich Ernst Schneider, Edward Teller, Arthur von Hippel and Victor Weisskopf. In April 1940, early in the Second World War, Nazi Germany invaded and occupied Denmark. To prevent the Germans from discovering Max von Laue's and James Franck's gold Nobel medals, Bohr had de Hevesy dissolve them in aqua regia. In this form, they were stored on a shelf at the Institute until after the war, when the gold was precipitated and the medals re-struck by the Nobel Foundation. Bohr's own medal had been donated to an auction to the Fund for Finnish Relief, and was auctioned off in March 1940, along with the medal of August Krogh. The buyer later donated the two medals to the Danish Historical Museum in Frederiksborg Castle, where they are still kept. Bohr kept the Institute running, but all the foreign scholars departed. Bohr was aware of the possibility of using uranium-235 to construct an atomic bomb, referring to it in lectures in Britain and Denmark shortly before and after the war started, but he did not believe that it was technically feasible to extract a sufficient quantity of uranium-235. In September 1941, Heisenberg, who had become head of the German nuclear energy project, visited Bohr in Copenhagen. During this meeting the two men took a private moment outside, the content of which has caused much speculation, as both gave differing accounts. 
According to Heisenberg, he began to address nuclear energy, morality and the war, to which Bohr seems to have reacted by terminating the conversation abruptly while not giving Heisenberg hints about his own opinions. Ivan Supek, one of Heisenberg's students and friends, claimed that the main subject of the meeting was Carl Friedrich von Weizsäcker, who had proposed trying to persuade Bohr to mediate peace between Britain and Germany. In 1957, Heisenberg wrote to Robert Jungk, who was then working on the book "". Heisenberg explained that he had visited Copenhagen to communicate to Bohr the views of several German scientists, that production of a nuclear weapon was possible with great efforts, and this raised enormous responsibilities on the world's scientists on both sides. When Bohr saw Jungk's depiction in the Danish translation of the book, he drafted (but never sent) a letter to Heisenberg, stating that he never understood the purpose of Heisenberg's visit, was shocked by Heisenberg's opinion that Germany would win the war, and that atomic weapons could be decisive. Michael Frayn's 1998 play "Copenhagen" explores what might have happened at the 1941 meeting between Heisenberg and Bohr. A BBC television film version of the play was first screened on 26 September 2002, with Stephen Rea as Bohr, and Daniel Craig as Heisenberg. The same meeting had previously been dramatised by the BBC's "Horizon" science documentary series in 1992, with Anthony Bate as Bohr, and Philip Anthony as Heisenberg. The meeting is also dramatized in the Norwegian/Danish/British miniseries "The Heavy Water War". In September 1943, word reached Bohr and his brother Harald that the Nazis considered their family to be Jewish, since their mother was Jewish, and that they were therefore in danger of being arrested. The Danish resistance helped Bohr and his wife escape by sea to Sweden on 29 September. The next day, Bohr persuaded King Gustaf V of Sweden to make public Sweden's willingness to provide asylum to Jewish refugees. On 2 October 1943, Swedish radio broadcast that Sweden was ready to offer asylum, and the mass rescue of the Danish Jews by their countrymen followed swiftly thereafter. Some historians claim that Bohr's actions led directly to the mass rescue, while others say that, though Bohr did all that he could for his countrymen, his actions were not a decisive influence on the wider events. Eventually, over 7,000 Danish Jews escaped to Sweden. When the news of Bohr's escape reached Britain, Lord Cherwell sent a telegram to Bohr asking him to come to Britain. Bohr arrived in Scotland on 6 October in a de Havilland Mosquito operated by the British Overseas Airways Corporation (BOAC). The Mosquitos were unarmed high-speed bomber aircraft that had been converted to carry small, valuable cargoes or important passengers. By flying at high speed and high altitude, they could cross German-occupied Norway, and yet avoid German fighters. Bohr, equipped with parachute, flying suit and oxygen mask, spent the three-hour flight lying on a mattress in the aircraft's bomb bay. During the flight, Bohr did not wear his flying helmet as it was too small, and consequently did not hear the pilot's intercom instruction to turn on his oxygen supply when the aircraft climbed to high altitude to overfly Norway. He passed out from oxygen starvation and only revived when the aircraft descended to lower altitude over the North Sea. 
Bohr's son Aage followed his father to Britain on another flight a week later, and became his personal assistant. Bohr was warmly received by James Chadwick and Sir John Anderson, but for security reasons Bohr was kept out of sight. He was given an apartment at St James's Palace and an office with the British Tube Alloys nuclear weapons development team. Bohr was astonished at the amount of progress that had been made. Chadwick arranged for Bohr to visit the United States as a Tube Alloys consultant, with Aage as his assistant. On 8 December 1943, Bohr arrived in Washington, D.C., where he met with the director of the Manhattan Project, Brigadier General Leslie R. Groves, Jr. He visited Einstein and Pauli at the Institute for Advanced Study in Princeton, New Jersey, and went to Los Alamos in New Mexico, where the nuclear weapons were being designed. For security reasons, he went under the name of "Nicholas Baker" in the United States, while Aage became "James Baker". In May 1944 the Danish resistance newspaper De frie Danske reported that they had learned that 'the famous son of Denmark Professor Niels Bohr' in October the previous year had fled his country via Sweden to London and from there travelled to Moscow from where he could be assumed to support the war effort. Bohr did not remain at Los Alamos, but paid a series of extended visits over the course of the next two years. Robert Oppenheimer credited Bohr with acting "as a scientific father figure to the younger men", most notably Richard Feynman. Bohr is quoted as saying, "They didn't need my help in making the atom bomb." Oppenheimer gave Bohr credit for an important contribution to the work on modulated neutron initiators. "This device remained a stubborn puzzle," Oppenheimer noted, "but in early February 1945 Niels Bohr clarified what had to be done." Bohr recognised early that nuclear weapons would change international relations. In April 1944, he received a letter from Peter Kapitza, written some months before when Bohr was in Sweden, inviting him to come to the Soviet Union. The letter convinced Bohr that the Soviets were aware of the Anglo-American project, and would strive to catch up. He sent Kapitza a non-committal response, which he showed to the authorities in Britain before posting. Bohr met Churchill on 16 May 1944, but found that "we did not speak the same language". Churchill disagreed with the idea of openness towards the Russians to the point that he wrote in a letter: "It seems to me Bohr ought to be confined or at any rate made to see that he is very near the edge of mortal crimes." Oppenheimer suggested that Bohr visit President Franklin D. Roosevelt to convince him that the Manhattan Project should be shared with the Soviets in the hope of speeding up its results. Bohr's friend, Supreme Court Justice Felix Frankfurter, informed President Roosevelt about Bohr's opinions, and a meeting between them took place on 26 August 1944. Roosevelt suggested that Bohr return to the United Kingdom to try to win British approval. When Churchill and Roosevelt met at Hyde Park on 19 September 1944, they rejected the idea of informing the world about the project, and the aide-mémoire of their conversation contained a rider that "enquiries should be made regarding the activities of Professor Bohr and steps taken to ensure that he is responsible for no leakage of information, particularly to the Russians". In June 1950, Bohr addressed an "Open Letter" to the United Nations calling for international cooperation on nuclear energy. 
In the 1950s, after the Soviet Union's first nuclear weapon test, the International Atomic Energy Agency was created along the lines of Bohr's suggestion. In 1957 he received the first ever Atoms for Peace Award. With the war now ended, Bohr returned to Copenhagen on 25 August 1945, and was re-elected President of the Royal Danish Academy of Arts and Sciences on 21 September. At a memorial meeting of the Academy on 17 October 1947 for King Christian X, who had died in April, the new king, Frederick IX, announced that he was conferring the Order of the Elephant on Bohr. This award was normally awarded only to royalty and heads of state, but the king said that it honoured not just Bohr personally, but Danish science. Bohr designed his own coat of arms which featured a taijitu (symbol of yin and yang) and a motto in , "opposites are complementary". The Second World War demonstrated that science, and physics in particular, now required considerable financial and material resources. To avoid a brain drain to the United States, twelve European countries banded together to create CERN, a research organisation along the lines of the national laboratories in the United States, designed to undertake Big Science projects beyond the resources of any one of them alone. Questions soon arose regarding the best location for the facilities. Bohr and Kramers felt that the Institute in Copenhagen would be the ideal site. Pierre Auger, who organised the preliminary discussions, disagreed; he felt that both Bohr and his Institute were past their prime, and that Bohr's presence would overshadow others. After a long debate, Bohr pledged his support to CERN in February 1952, and Geneva was chosen as the site in October. The CERN Theory Group was based in Copenhagen until their new accommodation in Geneva was ready in 1957. Victor Weisskopf, who later became the Director General of CERN, summed up Bohr's role, saying that "there were other personalities who started and conceived the idea of CERN. The enthusiasm and ideas of the other people would not have been enough, however, if a man of his stature had not supported it." Meanwhile, Scandinavian countries formed the Nordic Institute for Theoretical Physics in 1957, with Bohr as its chairman. He was also involved with the founding of the Research Establishment Risø of the Danish Atomic Energy Commission, and served as its first chairman from February 1956. Bohr died of heart failure at his home in Carlsberg on 18 November 1962. He was cremated, and his ashes were buried in the family plot in the Assistens Cemetery in the Nørrebro section of Copenhagen, along with those of his parents, his brother Harald, and his son Christian. Years later, his wife's ashes were also interred there. On 7 October 1965, on what would have been his 80th birthday, the Institute for Theoretical Physics at the University of Copenhagen was officially renamed to what it had been called unofficially for many years: the Niels Bohr Institute. Bohr received numerous honours and accolades. In addition to the Nobel Prize, he received the Hughes Medal in 1921, the Matteucci Medal in 1923, the Franklin Medal in 1926, the Copley Medal in 1938, the Order of the Elephant in 1947, the Atoms for Peace Award in 1957 and the Sonning Prize in 1961. He became foreign member of the Royal Netherlands Academy of Arts and Sciences in 1923, and of the Royal Society in 1926. 
The Bohr model's semicentennial was commemorated in Denmark on 21 November 1963 with a postage stamp depicting Bohr, the hydrogen atom and the formula for the difference of any two hydrogen energy levels: formula_3. Several other countries have also issued postage stamps depicting Bohr. In 1997, the Danish National Bank began circulating the 500-krone banknote with the portrait of Bohr smoking a pipe. An asteroid, 3948 Bohr, was named after him, as was the Bohr lunar crater and bohrium, the chemical element with atomic number 107.
https://en.wikipedia.org/wiki?curid=21210
National Football League The National Football League (NFL) is a professional American football league consisting of 32 teams, divided equally between the National Football Conference (NFC) and the American Football Conference (AFC). The NFL is one of the four major professional sports leagues in North America, the highest professional level of American football in the world, the wealthiest professional sport league by revenue, and the sport league with the most valuable teams. The NFL's 17-week regular season runs from early September to late December, with each team playing 16 games and having one bye week. Following the conclusion of the regular season, seven teams from each conference (four division winners and three wild card teams) advance to the playoffs, a single-elimination tournament culminating in the Super Bowl, which is usually held on the first Sunday in February and is played between the champions of the NFC and AFC. The NFL was formed in 1920 as the American Professional Football Association (APFA) before renaming itself the National Football League for the 1922 season. After initially determining champions through end-of-season standings, a playoff system was implemented in 1933 that culminated with the until 1966. Following an agreement to merge the NFL with the American Football League (AFL), the Super Bowl was first held in 1967 to determine a champion between the best teams from the two leagues and has remained as the final game of each NFL season since the merger was completed in 1970. Today, the NFL has the highest average attendance (67,591) of any professional sports league in the world and is the most popular sports league in the United States. The Super Bowl is also among the biggest club sporting events in the world, with the individual games accounting for many of the most watched television programs in American history and all occupying the Nielsen's Top 5 tally of the all-time most watched U.S. television broadcasts by 2015. The Green Bay Packers hold the most combined NFL championships with 13, winning nine titles before the Super Bowl era and four Super Bowls afterwards. Since the creation of the Super Bowl, the Pittsburgh Steelers and New England Patriots both have the most championship titles at six. On August 20, 1920, a meeting was held by representatives of the Akron Pros, Canton Bulldogs, Cleveland Indians, and Dayton Triangles at the Jordan and Hupmobile auto showroom in Canton, Ohio. This meeting resulted in the formation of the American Professional Football Conference (APFC), a group who, according to the "Canton Evening Repository", intended to "raise the standard of professional football in every way possible, to eliminate bidding for players between rival clubs and to secure cooperation in the formation of schedules". Another meeting was held on September 17, 1920 with representatives from teams from four states: Akron, Canton, Cleveland, and Dayton from Ohio; the Hammond Pros and Muncie Flyers from Indiana; the Rochester Jeffersons from New York; and the Rock Island Independents, Decatur Staleys, and Racine (Chicago) Cardinals from Illinois. The league was renamed to the American Professional Football Association (APFA). The league elected Jim Thorpe as its first president, and consisted of 14 teams (the Buffalo All-Americans, Chicago Tigers, Columbus Panhandles, and Detroit Heralds joined the league during the year). The Massillon Tigers from Massillon, Ohio was also at the September 17 meeting, but did not field a team in 1920. 
Only two of these teams, the Decatur Staleys (now the Chicago Bears) and the Chicago Cardinals (now the Arizona Cardinals), remain. Although the league did not maintain official standings for its 1920 inaugural season and teams played schedules that included non-league opponents, the APFA awarded the Akron Pros the championship by virtue of their (8 wins, 0 losses, and 3 ties) record. The first event occurred on September 26, 1920 when the Rock Island Independents defeated the non-league St. Paul Ideals 48–0 at Douglas Park. On October 3, 1920, the first full week of league play occurred. The following season resulted in the Chicago Staleys controversially winning the title over the Buffalo All-Americans. On June 24, 1922, the APFA changed its name to the National Football League (NFL). In 1932, the season ended with the Chicago Bears () and the Portsmouth Spartans () tied for first in the league standings. At the time, teams were ranked on a single table and the team with the highest winning percentage (not including ties, which were not counted towards the standings) at the end of the season was declared the champion; the only tiebreaker was that in the event of a tie, if two teams played twice in a season, the result of the second game determined the title (the source of the 1921 controversy). This method had been used since the league's creation in 1920, but no situation had been encountered where two teams were tied for first. The league quickly determined that a playoff game between Chicago and Portsmouth was needed to decide the league's champion. The teams were originally scheduled to play the playoff game, officially a regular season game that would count towards the regular season standings, at Wrigley Field in Chicago, but a combination of heavy snow and extreme cold forced the game to be moved indoors to Chicago Stadium, which did not have a regulation-size football field. Playing with altered rules to accommodate the smaller playing field, the Bears won the game 9–0 and thus won the championship. Fan interest in the "de facto" championship game led the NFL, beginning in 1933, to split into two divisions with a championship game to be played between the division champions. The 1934 season also marked the first of 12 seasons in which African Americans were absent from the league. The "de facto" ban was rescinded in 1946, following public pressure and coinciding with the removal of a similar ban in Major League Baseball. The NFL was always the foremost professional football league in the United States; it nevertheless faced a large number of rival professional leagues through the 1930s and 1940s. Rival leagues included at least three separate American Football Leagues and the All-America Football Conference (AAFC), on top of various regional leagues of varying caliber. Three NFL teams trace their histories to these rival leagues, including the Los Angeles Rams (who came from a 1936 iteration of the American Football League), the Cleveland Browns and San Francisco 49ers (the last two of which came from the AAFC). By the 1950s, the NFL had an effective monopoly on professional football in the United States; its only competition in North America was the professional Canadian football circuit, which formally became the Canadian Football League (CFL) in 1958. With Canadian football being a different football code than the American game, the CFL established a niche market in Canada and still survives as an independent league. 
A new professional league, the fourth American Football League (AFL), began play in 1960. The upstart AFL began to challenge the established NFL in popularity, gaining lucrative television contracts and engaging in a bidding war with the NFL for free agents and draft picks. The two leagues announced a merger on June 8, 1966, to take full effect in 1970. In the meantime, the leagues would hold a common draft and championship game. The game, the Super Bowl, was held four times before the merger, with the NFL winning Super Bowl I and Super Bowl II, and the AFL winning Super Bowl III and Super Bowl IV. After the league merged, it was reorganized into two conferences: the National Football Conference (NFC), consisting of most of the pre-merger NFL teams, and the American Football Conference (AFC), consisting of all of the AFL teams as well as three pre-merger NFL teams. Today, the NFL is the most popular sports league in North America; much of its growth is attributed to former Commissioner Pete Rozelle, who led the league from 1960 to 1989. Overall annual attendance increased from 3 million at the beginning of his tenure to 17 million by the end of his tenure, and 400 million global viewers watched 1989's Super Bowl XXIII. The NFL established NFL Properties in 1963. The league's licensing wing, NFL Properties earns the league billions of dollars annually; Rozelle's tenure also marked the creation of NFL Charities and a national partnership with United Way. Paul Tagliabue was elected as commissioner to succeed Rozelle; his seventeen-year tenure, which ended in 2006, was marked by large increases in television contracts and the addition of four expansion teams, as well as the introduction of league initiatives to increase the number of minorities in league and team management roles. The league's current Commissioner, Roger Goodell, has focused on reducing the number of illegal hits and making the sport safer, mainly through fining or suspending players who break rules. These actions are among many the NFL is taking to reduce concussions and improve player safety. From 1920 to 1934, the NFL did not have a set number of games for teams to play, instead setting a minimum. The league mandated a 12-game regular season for each team beginning in 1935, later shortening this to 11 games in 1937 and 10 games in 1943, mainly due to World War II. After the war ended, the number of games returned to 11 games in 1946 and to 12 in 1947. The NFL went to a 14-game schedule in 1961, which it retained until switching to the current 16-game schedule in 1978. Proposals to increase the regular season to 18 games have been made, but have been rejected in labor negotiations with the National Football League Players Association (NFLPA). The NFL operated in a two-conference system from 1933 to 1966, where the champions of each conference would meet in the . If two teams tied for the conference lead, they would meet in a one-game playoff to determine the conference champion. In 1967, the NFL expanded from 15 teams to 16 teams. Instead of just evening out the conferences by adding the expansion New Orleans Saints to the seven-member Western Conference, the NFL realigned the conferences and split each into two four-team divisions. The four division champions would meet in the NFL playoffs, a two-round playoff. The NFL also operated the Playoff Bowl (officially the Bert Bell Benefit Bowl) from 1960 to 1969. 
Effectively a third-place game, pitting the two conference runners-up against each other, the league considers Playoff Bowls to have been exhibitions rather than playoff games. The league discontinued the Playoff Bowl in 1970 due to its perception as a game for losers. Following the addition of the former AFL teams into the NFL in 1970, the NFL split into two conferences with three divisions each. The expanded league, now with twenty-six teams, would also feature an expanded eight-team playoff, the participants being the three division champions from each conference as well as one 'wild card' team (the team with the best win percentage) from each conference. In 1978, the league added a second wild card team from each conference, bringing the total number of playoff teams to ten, and a further two wild card teams were added in 1990 to bring the total to twelve. When the NFL expanded to 32 teams in 2002, the league realigned, changing the division structure from three divisions in each conference to four divisions in each conference. As each division champion gets a playoff bid, the number of wild card teams from each conference dropped from three to two. The playoffs expanded again in 2020, adding two more wild card teams to bring the total to fourteen playoff teams. At the corporate level, the National Football League considers itself a trade association made up of and financed by its 32 member teams. Up until 2015, the league was an unincorporated nonprofit 501(c)(6) association. Section 501(c)(6) of the Internal Revenue Code provides an exemption from federal income taxation for "Business leagues, chambers of commerce, real-estate boards, boards of trade, or professional football leagues (whether or not administering a pension fund for football players), not organized for profit and no part of the net earnings of which inures to the benefit of any private shareholder or individual.". In contrast, each individual team (except the non-profit Green Bay Packers) is subject to tax because they make a profit. The NFL gave up the tax exempt status in 2015 following public criticism; in a letter to the club owners, Commissioner Roger Goodell labeled it a "distraction", saying "the effects of the tax exempt status of the league office have been mischaracterized repeatedly in recent years... Every dollar of income generated through television rights fees, licensing agreements, sponsorships, ticket sales, and other means is earned by the 32 clubs and is taxable there. This will remain the case even when the league office and Management Council file returns as taxable entities, and the change in filing status will make no material difference to our business." As a result, the league office might owe around US$10 million in income taxes, but it is no longer required to disclose the salaries of its executive officers. The league has three defined officers: the commissioner, secretary, and treasurer. Each conference has one defined officer, the president, which is essentially an honorary position with few powers and mostly ceremonial duties (such as awarding the conference championship trophy). The commissioner is elected by affirmative vote of two-thirds or 18 (whichever is greater) of the members of the league, while the president of each conference is elected by an affirmative vote of three-fourths or ten of the conference members. The commissioner appoints the secretary and treasurer and has broad authority in disputes between clubs, players, coaches, and employees. 
He is the "principal executive officer" of the NFL and also has authority in hiring league employees, negotiating television contracts, disciplining individuals that own part or all of an NFL team, clubs, or employed individuals of an NFL club if they have violated league bylaws or committed "conduct detrimental to the welfare of the League or professional football". The commissioner can, in the event of misconduct by a party associated with the league, suspend individuals, hand down a fine of up to US$500,000, cancel contracts with the league, and award or strip teams of draft picks. In extreme cases, the commissioner can offer recommendations to the NFL's Executive Committee up to and including the "cancellation or forfeiture" of a club's franchise or any other action he deems necessary. The commissioner can also issue sanctions up to and including a lifetime ban from the league if an individual connected to the NFL has bet on games or failed to notify the league of conspiracies or plans to bet on or fix games. The current Commissioner of the National Football League is Roger Goodell, who was elected in 2006 after Paul Tagliabue, the previous commissioner, retired. NFL revenue is from three primary sources: NFL Ventures (merchandising), NFL Enterprises (NFL Network and NFL Sunday Ticket, which the league controls), and the television contract. The league distributes such revenue equally among teams, regardless of performance. each team receives $255 million annually from the league's television contract, up 150% from $99.9 million in 2010. Most NFL teams' financial statements are secret. "The Kansas City Star" obtained the Kansas City Chiefs' tax returns for 2008–2010. According to the "Star", the team's revenue rose from $231 million in 2008 to $302 million in 2010. In 2010, two thirds of revenue came from the league: $99.8 million from NFL Ventures ($55.3 million) and NFL Enterprises ($44.6 million), and the $99.9 million share of the television contract. The remaining one third was from tickets ($42.4 million), corporate sponsorships ($6.6 million), food sales ($5 million), parking passes ($4.7 million), in-stadium advertising ($3.7 million), radio contract ($2.7 million), and miscellaneous sources. The largest Chiefs expense in 2010 was $148 million for players, coaches, and other employees. Of the $38 million in operating income, Clark, Lamar Jr., two other children, and widow of former team owner Lamar Hunt divided $17.6 million, and reinvested the remaining $20 million into the team. According to economist Richard D. Wolff, the NFL's revenue model is in contravention of the typical corporate structure. By redistributing profits to all teams the NFL is ensuring that one team will not dominate the league through excessive earnings. Roger Noll described the revenue sharing as the league's "most important structural weakness", however, as there is no disincentive against a team playing badly and the largest cost item, player salaries, is capped. The NFL consists of 32 clubs divided into two conferences of 16 teams in each. Each conference is divided into four divisions of four clubs in each. During the regular season, each team is allowed a maximum of 55 players on its roster; only 48 of these may be active (eligible to play) on game days. Each team can also have a 12-player practice squad separate from its main roster. Each NFL club is granted a franchise, the league's authorization for the team to operate in its home city. 
This franchise covers 'Home Territory' (the 75 miles surrounding the city limits, or, if the team is within 100 miles of another league city, half the distance between the two cities) and 'Home Marketing Area' (Home Territory plus the rest of the state the club operates in, as well as the area the team operates its training camp in for the duration of the camp). Each NFL member has the exclusive right to host professional football games inside its Home Territory and the exclusive right to advertise, promote, and host events in its Home Marketing Area. There are a couple of exceptions to this rule, mostly relating to teams with close proximity to each other: teams that operate in the same city (e.g. New York City and Los Angeles) or the same state (e.g. California, Florida, and Texas) share the rights to the city's Home Territory and the state's Home Marketing Area, respectively. Every NFL team is based in the contiguous United States. Although no team is based in a foreign country, the Jacksonville Jaguars began playing one home game a year at Wembley Stadium in London, England, in 2013 as part of the NFL International Series. The Jaguars' agreement with Wembley was originally set to expire in 2016, but has since been extended through 2020. The Buffalo Bills played one home game every season at Rogers Centre in Toronto, Ontario, Canada, as part of the Bills Toronto Series from to . Mexico also hosted an NFL regular-season game, a 2005 game between the San Francisco 49ers and Arizona Cardinals known as "Fútbol Americano", and 39 international preseason games were played from 1986 to 2005 as part of the American Bowl series. The Raiders and Houston Texans played a game in Mexico City at Estadio Azteca on November 21, 2016. According to "Forbes", the Dallas Cowboys, at approximately US$5 billion, are the most valuable NFL franchise and the most valuable sports team in the world. Also, 26 of the 32 NFL teams rank among the Top 50 most valuable sports teams in the world; and 16 of the NFL's owners are listed on the Forbes 400, the most of any sports league or organization. The 32 teams are organized into eight geographic divisions of four teams each. These divisions are further organized into two conferences, the National Football Conference and the American Football Conference. The two-conference structure has its origins in a time when major American professional football was organized into two independent leagues, the National Football League and its younger rival, the American Football League. The leagues merged in the late 1960s, adopting the older league's name and reorganizing slightly to ensure the same number of teams in both conferences. The NFL season format consists of a four-week preseason, a seventeen-week regular season (each team plays 16 games), and a fourteen-team single-elimination playoff culminating in the Super Bowl, the league's championship game. The NFL preseason begins with the Pro Football Hall of Fame Game, played at Fawcett Stadium in Canton. Each NFL team is required to schedule four preseason games, two of which must be at its designated home stadium, but the teams involved in the Hall of Fame game, as well as any teams that played in an American Bowl game, play five preseason games. Preseason games are exhibition matches and do not count towards regular-season totals. 
Because the preseason does not count towards standings, teams generally do not focus on winning games; instead, they are used by coaches to evaluate their teams and by players to show their performance, both to their current team and to other teams if they get cut. The quality of preseason games has been criticized by some fans, who dislike having to pay full price for exhibition games, as well as by some players and coaches, who dislike the risk of injury the games have, while others have felt the preseason is a necessary part of the NFL season. This chart of the 2019 season standings displays an application of the NFL scheduling formula. The Chiefs in 2019 (highlighted in green) finished in first place in the AFC West. Thus, in 2020, the Chiefs will play two games against each of its division rivals (highlighted in light blue), one game against each team in the AFC East and NFC South (highlighted in yellow), and one game each against the first-place finishers in the AFC North and AFC South (highlighted in orange). Currently, the thirteen opponents each team faces over the 16-game regular season schedule are set using a pre-determined formula: The league runs a seventeen-week, 256-game regular season. Since 2001, the season has begun the week after Labor Day (first Monday in September) and concluded the week after Christmas. The opening game of the season is normally a home game on a Thursday for the league's defending champion. Most NFL games are played on Sundays, with a Monday night game typically held at least once a week and Thursday night games occurring on most weeks as well. NFL games are not normally played on Fridays or Saturdays until late in the regular season, as federal law prohibits professional football leagues from competing with college or high school football. Because high school and college teams typically play games on Friday and Saturday, respectively, the NFL cannot hold games on those days until the Friday before the third Saturday in December. While Saturday games late in the season are common, the league rarely holds Friday games, the most recent one being on Christmas Day in 2009. NFL games are rarely scheduled for Tuesday or Wednesday, and those days have only been used twice since 1948: in 2010, when a Sunday game was rescheduled to Tuesday due to a blizzard, and in 2012, when the Kickoff game was moved from Thursday to Wednesday to avoid conflict with the Democratic National Convention. NFL regular season matchups are determined according to a scheduling formula. Within a division, all four teams play fourteen out of their sixteen games against common opponents – two games (home and away) are played against the other three teams in the division, while one game is held against all the members of a division from the NFC and a division from the AFC as determined by a rotating cycle (three years for the conference the team is in, and four years in the conference they are not in). The other two games are intraconference games, determined by the standings of the previous year – for example, if a team finishes first in its division, it will play two other first-place teams in its conference, while a team that finishes last would play two other last-place teams in the conference. In total, each team plays sixteen games and has one bye week, where they do not play any games. 
Although a team's home and away opponents are known by the end of the previous year's regular season, the exact dates and times for NFL games are not determined until much later because the league has to account for, among other things, the Major League Baseball postseason and local events that could pose a scheduling conflict with NFL games. During the 2010 season, over 500,000 potential schedules were created by computers, 5,000 of which were considered "playable schedules" and were reviewed by the NFL's scheduling team. After arriving at what they felt was the best schedule out of the group, nearly 50 more potential schedules were developed to try to ensure that the chosen schedule would be the best possible one. Following the conclusion of the regular season, the NFL Playoffs, a fourteen-team single elimination tournament, is then held. Seven teams are selected from each conference: the winners of each of the four divisions as well as three wild card teams (the three remaining teams with the best overall record). These teams are seeded according to overall record, with the division champions always ranking higher than any of the wild card teams. The top team (seeded one) from each conference are awarded a bye week, while the remaining six teams (seeded 2–7) from each conference compete in the first round of the playoffs, the Wild Card round, with the second seed competing against the seventh, the third seed competing against the sixth seed and the fourth seed competing against the fifth seed. The winners of the Wild Card round advance to the Divisional Round, which matches the lower seeded team against the first seed and the higher seeded team against the second seed. The winners of those games then compete in the Conference Championships, with the higher remaining seed hosting the lower remaining seed. The AFC and NFC champions then compete in the Super Bowl to determine the league champion. The only other postseason event hosted by the NFL is the Pro Bowl, the league's all-star game. Since 2009, the Pro Bowl has been held the week before the Super Bowl; in previous years, the game was held the week following the Super Bowl, but in an effort to boost ratings, the game was moved to the week before. Because of this, players from the teams participating in the Super Bowl are exempt from participating in the game. The Pro Bowl is not considered as competitive as a regular-season game because the biggest concern of teams is to avoid injuries to the players. The National Football League has used three different trophies to honor its champion over its existence. The first trophy, the Brunswick-Balke Collender Cup, was donated to the NFL (then APFA) in 1920 by the Brunswick-Balke Collender Corporation. The trophy, the appearance of which is only known by its description as a "silver loving cup", was intended to be a traveling trophy and not to become permanent until a team had won at least three titles. The league awarded it to the Akron Pros, champions of the inaugural 1920 season; however, the trophy was discontinued and its current whereabouts are unknown. A second trophy, the Ed Thorp Memorial Trophy, was issued by the NFL from 1934 to 1967. The trophy's namesake, Ed Thorp, was a referee in the league and a friend to many early league owners; upon his death in 1934, the league created the trophy to honor him. 
In addition to the main trophy, which would be in the possession of the current league champion, the league issued a smaller replica trophy to each champion, who would maintain permanent control over it. The current location of the Ed Thorp Memorial Trophy, long thought to be lost, is believed to be possessed by the Green Bay Packers Hall of Fame. The current trophy of the NFL is the Vince Lombardi Trophy. The Super Bowl trophy was officially renamed in 1970 after Vince Lombardi, who as head coach led the Green Bay Packers to victories in the first two Super Bowls. Unlike the previous trophies, a new Vince Lombardi Trophy is issued to each year's champion, who maintains permanent control of it. Lombardi Trophies are made by Tiffany & Co. out of sterling silver and are worth anywhere from US$25,000 to US$300,000. Additionally, each player on the winning team as well as coaches and personnel are awarded Super Bowl rings to commemorate their victory. The winning team chooses the company that makes the rings; each ring design varies, with the NFL mandating certain ring specifications (which have a degree of room for deviation), in addition to requiring the Super Bowl logo be on at least one side of the ring. The losing team are also awarded rings, which must be no more than half as valuable as the winners' rings, but those are almost never worn. The conference champions receive trophies for their achievement. The champions of the NFC receive the George Halas Trophy, named after Chicago Bears founder George Halas, who is also considered as one of the co-founders of the NFL. The AFC champions receive the Lamar Hunt Trophy, named after Lamar Hunt, the founder of the Kansas City Chiefs and the principal founder of the American Football League. Players on the winning team also receive a conference championship ring. The NFL recognizes a number of awards for its players and coaches at its annual NFL Honors presentation. The most prestigious award is the AP Most Valuable Player (MVP) award. Other major awards include the AP Offensive Player of the Year, AP Defensive Player of the Year, AP Comeback Player of the Year, and the AP Offensive and Defensive Rookie of the Year awards. Another prestigious award is the Walter Payton Man of the Year Award, which recognizes a player's off-field work in addition to his on-field performance. The NFL Coach of the Year award is the highest coaching award. The NFL also gives out weekly awards such as the FedEx Air & Ground NFL Players of the Week and the Pepsi MAX NFL Rookie of the Week awards. In the United States, the National Football League has television contracts with four networks: CBS, ESPN, Fox, and NBC. Collectively, these contracts cover every regular season and postseason game. In general, CBS televises afternoon games in which the away team is an AFC team, and Fox carries afternoon games in which the away team belongs to the NFC. These afternoon games are not carried on all affiliates, as multiple games are being played at once; each network affiliate is assigned one game per time slot, according to a complicated set of rules. Since 2011, the league has reserved the right to give Sunday games that, under the contract, would normally air on one network to the other network (known as "flexible scheduling"). 
The only way to legally watch a regionally televised game not being carried on the local network affiliates is to purchase NFL Sunday Ticket, the league's out-of-market sports package, which is only available to subscribers to the DirecTV satellite service. The league also provides RedZone, an omnibus telecast that cuts to the most relevant plays in each game, live as they happen. In addition to the regional games, the league also has packages of telecasts, mostly in prime time, that are carried nationwide. NBC broadcasts the primetime "Sunday Night Football" package', which includes the Thursday NFL Kickoff game that starts the regular season and a primetime Thanksgiving Day game. ESPN carries all Monday Night Football games. The NFL's own network, NFL Network, broadcasts a series titled "Thursday Night Football", which was originally exclusive to the network, but which in recent years has had several games simulcast on CBS (since 2014) and NBC (since 2016) (except the Thanksgiving and kickoff games, which remain exclusive to NBC). For the 2017 season, the NFL Network will broadcast 18 regular season games under its "Thursday Night Football" brand, 16 Thursday-evening contests (10 of which are simulcast on either NBC or CBS) as well as one of the NFL International Series games on a Sunday morning and one of the 2017 Christmas afternoon games. In addition, 10 of the Thursday night games will be streamed live on Amazon Prime. In 2017, the NFL games occupied the top three rates for a 30-second advertisement: $699,602 for Sunday Night Football, $550,709 for Thursday Night Football (NBC), and $549,791 for Thursday Night Football (CBS). The Super Bowl television rights are rotated on a three-year basis between CBS, Fox, and NBC. In 2011, all four stations signed new nine-year contracts with the NFL, each running until 2022; CBS, Fox, and NBC are estimated by "Forbes" to pay a combined total of US$3 billion a year, while ESPN will pay US$1.9 billion a year. The league also has deals with Spanish-language broadcasters NBC Universo, Fox Deportes, and ESPN Deportes, which air Spanish language dubs of their respective English-language sister networks' games. The league's contracts do not cover preseason games, which individual teams are free to sell to local stations directly; a minority of preseason games are distributed among the league's national television partners. Through the 2014 season, the NFL had a blackout policy in which games were 'blacked out' on local television in the home team's area if the home stadium was not sold out. Clubs could elect to set this requirement at only 85%, but they would have to give more ticket revenue to the visiting team; teams could also request a specific exemption from the NFL for the game. The vast majority of NFL games were not blacked out; only 6% of games were blacked out during the 2011 season, and only two games were blacked out in and none in . The NFL announced in March 2015 that it would suspend its blackout policy for at least the 2015 season. According to Nielsen, the NFL regular season since 2012 was watched by at least 200 million individuals, accounting for 80% of all television households in the United States and 69% of all potential viewers in the United States. NFL regular season games accounted for 31 out of the top 32 most-watched programs in the fall season and an NFL game ranked as the most-watched television show in all 17 weeks of the regular season. 
At the local level, NFL games were the highest-ranked shows in NFL markets 92% of the time. Super Bowls account for the 22 most-watched programs (based on total audience) in US history, including a record 167 million people that watched Super Bowl XLVIII, the conclusion to the 2013 season. In addition to radio networks run by each NFL team, select NFL games are broadcast nationally by Westwood One (known as Dial Global for the 2012 season). These games are broadcast on over 500 networks, giving all NFL markets access to each primetime game. The NFL's deal with Westwood One was extended in 2012 and will run through 2017. Some broadcasting innovations have either been introduced or popularized during NFL telecasts. Among them, the Skycam camera system was used for the first time in a live telecast, at a 1984 preseason NFL game in San Diego between the Chargers and 49ers, and televised by CBS. Commentator John Madden famously used a telestrator during games between the early 1980s to the mid-2000s, boosting the device's popularity. The NFL, as a one-time experiment, distributed the October 25, 2015 International Series game from Wembley Stadium in London between the Buffalo Bills and Jacksonville Jaguars. The game was live streamed on the Internet exclusively via Yahoo!, except for over-the-air broadcasts on the local CBS-TV affiliates in the Buffalo and Jacksonville markets. In 2015, the NFL began sponsoring a series of public service announcements to bring attention to domestic abuse and sexual assault in response to what was seen as poor handling of incidents of violence by players. Each April (excluding 2014 when it took place in May), the NFL holds a draft of college players. The draft consists of seven rounds, with each of the 32 clubs getting one pick in each round. The draft order for non-playoff teams is determined by regular-season record; among playoff teams, teams are first ranked by the furthest round of the playoffs they reached, and then are ranked by regular-season record. For example, any team that reached the divisional round will be given a higher pick than any team that reached the conference championships, but will be given a lower pick than any team that did not make the divisional round. The Super Bowl champion always drafts last, and the losing team from the Super Bowl always drafts next-to-last. All potential draftees must be at least three years removed from high school in order to be eligible for the draft. Underclassmen that have met that criterion to be eligible for the draft must write an application to the NFL by January 15 renouncing their remaining college eligibility. Clubs can trade away picks for future draft picks, but cannot trade the rights to players they have selected in previous drafts. Aside from the 7 picks each club gets, compensatory draft picks are given to teams that have lost more compensatory free agents than they have gained. These are spread out from rounds 3 to 7, and a total of 32 are given. Clubs are required to make their selection within a certain period of time, the exact time depending on which round the pick is made in. If they fail to do so on time, the clubs behind them can begin to select their players in order, but they do not lose the pick outright. This happened in the 2003 draft, when the Minnesota Vikings failed to make their selection on time. The Jacksonville Jaguars and Carolina Panthers were able to make their picks before the Vikings were able to use theirs. 
Selected players are only allowed to negotiate contracts with the team that picked them, but if they choose not to sign they become eligible for the next year's draft. Under the current collective bargaining contract, all contracts to drafted players must be four-year deals with a club option for a fifth. Contracts themselves are limited to a certain amount of money, depending on the exact draft pick the player was selected with. Players who were draft eligible but not picked in the draft are free to sign with any club. The NFL operates several other drafts in addition to the NFL draft. The league holds a supplemental draft annually. Clubs submit emails to the league stating the player they wish to select and the round they will do so, and the team with the highest bid wins the rights to that player. The exact order is determined by a lottery held before the draft, and a successful bid for a player will result in the team forfeiting the rights to its pick in the equivalent round of the next NFL draft. Players are only eligible for the supplemental draft after being granted a petition for special eligibility. The league holds expansion drafts, the most recent happening in 2002 when the Houston Texans began play as an expansion team. Other drafts held by the league include an allocation draft in 1950 to allocate players from several teams that played in the dissolved All-America Football Conference and a supplemental draft in 1984 to give NFL teams the rights to players who had been eligible for the main draft but had not been drafted because they had signed contracts with the United States Football League or Canadian Football League. Like the other major sports leagues in the United States, the NFL maintains protocol for a disaster draft. In the event of a 'near disaster' (less than 15 players killed or disabled) that caused the club to lose a quarterback, they could draft one from a team with at least three quarterbacks. In the event of a 'disaster' (15 or more players killed or disabled) that results in a club's season being canceled, a restocking draft would be held. Neither of these protocols has ever had to be implemented. Free agents in the National Football League are divided into restricted free agents, who have three accrued seasons and whose current contract has expired, and unrestricted free agents, who have four or more accrued seasons and whose contract has expired. An accrued season is defined as "six or more regular-season games on a club's active/inactive, reserved/injured or reserve/physically unable to perform lists". Restricted free agents are allowed to negotiate with other clubs besides their former club, but the former club has the right to match any offer. If they choose not to, they are compensated with draft picks. Unrestricted free agents are free to sign with any club, and no compensation is owed if they sign with a different club. Clubs are given one franchise tag to offer to any unrestricted free agent. The franchise tag is a one-year deal that pays the player 120% of his previous contract or no less than the average of the five highest-paid players at his position, whichever is greater. There are two types of franchise tags: exclusive tags, which do not allow the player to negotiate with other clubs, and non-exclusive tags, which allow the player to negotiate with other clubs but gives his former club the right to match any offer and two first-round draft picks if they decline to match it. 
Clubs also have the option to use a transition tag, which is similar to the non-exclusive franchise tag but offers no compensation if the former club refuses to match the offer. Due to that stipulation, the transition tag is rarely used, even with the removal of the "poison pill" strategy (offering a contract with stipulations that the former club would be unable to match) that essentially ended the usage of the tag league-wide. Each club is subject to a salary cap, which is set at US$188.2 million for the 2019 season, US$11 million more than that of 2018. Members of clubs' practice squads, despite being paid by and working for their respective clubs, are also simultaneously a kind of free agent and are able to sign to any other club's active roster (provided their new club is not their previous club's next opponent within a set number of days) without compensation to their previous club; practice squad players cannot be signed to other clubs' practice squads, however, unless released by their original club first.
https://en.wikipedia.org/wiki?curid=21211
Nazi Germany Nazi Germany, officially known as the German Reich until 1943 and Greater German Reich in 1943–45, was the German state between 1933 and 1945, when Adolf Hitler and the Nazi Party (NSDAP) controlled the country, which they transformed into a dictatorship. Under Hitler's rule, Germany became a totalitarian state in which nearly all aspects of life were controlled by the government. The Third Reich – meaning "Third Realm" or "Third Empire" – alluded to the Nazis' conceit that Nazi Germany was the successor to the earlier Holy Roman Empire (800–1806) and German Empire (1871–1918). The Third Reich, which Hitler and the Nazis referred to as the Thousand Year Reich, ended in May 1945 after just 12 years, when the Allies defeated Germany, ending World War II in Europe. On 30 January 1933, Hitler was appointed Chancellor of Germany, the head of government, by the President of the Weimar Republic, Paul von Hindenburg, the head of state. The Nazi Party then began to eliminate all political opposition and consolidate its power. Hindenburg died on 2 August 1934, and Hitler became dictator of Germany by merging the offices and powers of the chancellery and presidency. A national referendum held on 19 August 1934 confirmed Hitler as sole "Führer" (leader) of Germany. All power was centralised in Hitler's person, and his word became the highest law. The government was not a coordinated, co-operating body, but a collection of factions struggling for power and Hitler's favour. In the midst of the Great Depression, the Nazis restored economic stability and ended mass unemployment using heavy military spending and a mixed economy. Using deficit spending, the regime undertook a massive secret rearmament program and the construction of extensive public works projects, including "Autobahnen" (motorways). The return to economic stability boosted the regime's popularity. Racism, Nazi eugenics, and especially antisemitism were central ideological features of the regime. The Germanic peoples were considered by the Nazis to be the master race, the purest branch of the Aryan race. Discrimination against and persecution of Jews and Romani people began in earnest after the seizure of power. The first concentration camps were established in March 1933. Jews and others deemed undesirable were imprisoned, and liberals, socialists, and communists were killed, imprisoned, or exiled. Christian churches and citizens who opposed Hitler's rule were oppressed, and many of their leaders were imprisoned. Education focused on racial biology, population policy, and fitness for military service. Career and educational opportunities for women were curtailed. Recreation and tourism were organised via the Strength Through Joy program, and the 1936 Summer Olympics showcased Germany on the international stage. Propaganda Minister Joseph Goebbels made effective use of film, mass rallies, and Hitler's hypnotic oratory to influence public opinion. The government controlled artistic expression, promoting specific art forms and banning or discouraging others. In the years leading up to the war, the Nazi regime dominated its neighbours through military threats, making increasingly aggressive territorial demands and threatening war if they were not met. It seized Austria and almost all of Czechoslovakia in 1938 and 1939. Germany signed a non-aggression pact with the Soviet Union and invaded Poland on 1 September 1939, launching World War II in Europe. By early 1941, Germany controlled much of Europe.
"Reichskommissariats" took control of conquered areas and a German administration was established in the remainder of Poland. Germany exploited the raw materials and labour of both its occupied territories and its allies. Genocide and mass murder became hallmarks of the regime. Starting in 1939, hundreds of thousands of German citizens with mental or physical disabilities were killed in hospitals and asylums. "Einsatzgruppen" paramilitary death squads accompanied the German armed forces inside the occupied territories and conducted the mass killings of millions of Jews and other Holocaust victims. After 1941, millions of others were imprisoned, worked to death, or murdered in Nazi concentration camps and extermination camps. This genocide is known as the Holocaust. While the German invasion of the Soviet Union in 1941 was initially successful, the Soviet resurgence and entry of the United States into the war meant the "Wehrmacht" (German armed forces) lost the initiative on the Eastern Front in 1943 and by late 1944 had been pushed back to the pre-1939 border. Large-scale aerial bombing of Germany escalated in 1944 and the Axis powers were driven back in Eastern and Southern Europe. After the Allied invasion of France, Germany was conquered by the Soviet Union from the east and the other Allies from the west, and capitulated in May 1945. Hitler's refusal to admit defeat led to massive destruction of German infrastructure and additional war-related deaths in the closing months of the war. The victorious Allies initiated a policy of denazification and put many of the surviving Nazi leadership on trial for war crimes at the Nuremberg trials. Common English terms for the German state in the Nazi era are "Nazi Germany" and "Third Reich". The latter, a translation of the Nazi propaganda term "Drittes Reich", was first used in "Das Dritte Reich", a 1923 book by Arthur Moeller van den Bruck. The book counted the Holy Roman Empire (962–1806) as the first Reich and the German Empire (1871–1918) as the second. Germany was known as the Weimar Republic during the years 1919 to 1933. It was a republic with a semi-presidential system. The Weimar Republic faced numerous problems, including hyperinflation, political extremism (including violence from left- and right-wing paramilitaries), contentious relationships with the Allied victors of World War I, and a series of failed attempts at coalition government by divided political parties. Severe setbacks to the German economy began after World War I ended, partly because of reparations payments required under the 1919 Treaty of Versailles. The government printed money to make the payments and to repay the country's war debt, but the resulting hyperinflation led to inflated prices for consumer goods, economic chaos, and food riots. When the government defaulted on their reparations payments in January 1923, French troops occupied German industrial areas along the Ruhr and widespread civil unrest followed. The National Socialist German Workers' Party ("Nationalsozialistische Deutsche Arbeiterpartei", NSDAP), commonly known as the Nazi Party, was founded in 1920. It was the renamed successor of the German Workers' Party (DAP) formed one year earlier, and one of several far-right political parties then active in Germany. The Nazi Party platform included destruction of the Weimar Republic, rejection of the terms of the Treaty of Versailles, radical antisemitism, and anti-Bolshevism. 
They promised a strong central government, increased "Lebensraum" ("living space") for Germanic peoples, formation of a national community based on race, and racial cleansing via the active suppression of Jews, who would be stripped of their citizenship and civil rights. The Nazis proposed national and cultural renewal based upon the "Völkisch" movement. The party, especially its paramilitary organisation "Sturmabteilung" (SA; Storm Detachment), or Brownshirts, used physical violence to advance their political position, disrupting the meetings of rival organisations and attacking their members as well as Jewish people on the streets. Such far-right armed groups were common in Bavaria, and were tolerated by the sympathetic far-right state government of Gustav Ritter von Kahr. When the stock market in the United States crashed on 24 October 1929, the effect in Germany was dire. Millions were thrown out of work and several major banks collapsed. Hitler and the Nazis prepared to take advantage of the emergency to gain support for their party. They promised to strengthen the economy and provide jobs. Many voters decided the Nazi Party was capable of restoring order, quelling civil unrest, and improving Germany's international reputation. After the federal election of July 1932, the party was the largest in the Reichstag, holding 230 seats with 37.4 percent of the popular vote. Although the Nazis won the greatest share of the popular vote in the two Reichstag general elections of 1932, they did not have a majority. Hitler therefore led a short-lived coalition government formed with the German National People's Party. Under pressure from politicians, industrialists, and the business community, President Paul von Hindenburg appointed Hitler as Chancellor of Germany on 30 January 1933. This event is known as the "Machtergreifung" ("seizure of power"). On the night of 27 February 1933, the Reichstag building was set afire. Marinus van der Lubbe, a Dutch communist, was found guilty of starting the blaze. Hitler proclaimed that the arson marked the start of a communist uprising. The Reichstag Fire Decree, imposed on 28 February 1933, rescinded most civil liberties, including rights of assembly and freedom of the press. The decree also allowed the police to detain people indefinitely without charges. The legislation was accompanied by a propaganda campaign that built public support for the measure. Violent suppression of communists by the SA was undertaken nationwide, and 4,000 members of the Communist Party of Germany were arrested. In March 1933, the Enabling Act, an amendment to the Weimar Constitution, passed in the Reichstag by a vote of 444 to 94. This amendment allowed Hitler and his cabinet to pass laws—even laws that violated the constitution—without the consent of the president or the Reichstag. As the bill required a two-thirds majority to pass, the Nazis used intimidation tactics as well as the provisions of the Reichstag Fire Decree to keep several Social Democratic deputies from attending; the Communists had already been banned. On 10 May, the government seized the assets of the Social Democrats, and the party was banned on 22 June. On 21 June, the SA raided the offices of the German National People's Party – their former coalition partners – which then disbanded on 29 June. The remaining major political parties followed suit. On 14 July 1933 Germany became a one-party state with the passage of a law decreeing the Nazi Party to be the sole legal party in Germany.
The founding of new parties was also made illegal, and all remaining political parties which had not already been dissolved were banned. The Enabling Act would subsequently serve as the legal foundation for the dictatorship the Nazis established. Further elections in November 1933, 1936, and 1938 were Nazi-controlled, with only members of the Party and a small number of independents elected. The Hitler cabinet used the terms of the Reichstag Fire Decree and later the Enabling Act to initiate the process of "Gleichschaltung" ("co-ordination"), which brought all aspects of life under party control. Individual states not controlled by elected Nazi governments or Nazi-led coalitions were forced to agree to the appointment of Reich Commissars to bring the states in line with the policies of the central government. These Commissars had the power to appoint and remove local governments, state parliaments, officials, and judges. In this way Germany became a "de facto" unitary state, with all state governments controlled by the central government under the Nazis. The state parliaments and the "Reichsrat" (federal upper house) were abolished in January 1934, with all state powers being transferred to the central government. All civilian organisations, including agricultural groups, volunteer organisations, and sports clubs, had their leadership replaced with Nazi sympathisers or party members; these civic organisations either merged with the Nazi Party or faced dissolution. The Nazi government declared a "Day of National Labor" for May Day 1933, and invited many trade union delegates to Berlin for celebrations. The day after, SA stormtroopers demolished union offices around the country; all trade unions were forced to dissolve and their leaders were arrested. The Law for the Restoration of the Professional Civil Service, passed in April, removed from their jobs all teachers, professors, judges, magistrates, and government officials who were Jewish or whose commitment to the party was suspect. This meant that the churches were the only non-political institutions not under Nazi control. The Nazi regime abolished the symbols of the Weimar Republic—including the black, red, and gold tricolour flag—and adopted reworked symbolism. The previous imperial black, white, and red tricolour was restored as one of Germany's two official flags; the second was the swastika flag of the Nazi Party, which became the sole national flag in 1935. The Party anthem "Horst-Wessel-Lied" ("Horst Wessel Song") became a second national anthem. Germany was still in a dire economic situation, as six million people were unemployed and the trade deficit was daunting. Using deficit spending, public works projects were undertaken beginning in 1934, creating 1.7 million new jobs by the end of that year alone. Average wages began to rise. The SA leadership continued to apply pressure for greater political and military power. In response, Hitler used the "Schutzstaffel" (SS) and Gestapo to purge the entire SA leadership. Hitler targeted SA "Stabschef" (Chief of Staff) Ernst Röhm and other SA leaders who—along with a number of Hitler's political adversaries (such as Gregor Strasser and former chancellor Kurt von Schleicher)—were arrested and shot. Up to 200 people were killed from 30 June to 2 July 1934 in an event that became known as the Night of the Long Knives. On 2 August 1934, Hindenburg died.
The previous day, the cabinet had enacted the "Law Concerning the Highest State Office of the Reich", which stated that upon Hindenburg's death the office of president would be abolished and its powers merged with those of the chancellor. Hitler thus became head of state as well as head of government and was formally named "Führer und Reichskanzler" ("Leader and Chancellor"), although eventually "Reichskanzler" was dropped. Germany was now a totalitarian state with Hitler at its head. As head of state, Hitler became Supreme Commander of the armed forces. The new law provided an altered loyalty oath for servicemen so that they affirmed loyalty to Hitler personally rather than to the office of supreme commander or the state. On 19 August, the merger of the presidency with the chancellorship was approved by 90 percent of the electorate in a plebiscite. Most Germans were relieved that the conflicts and street fighting of the Weimar era had ended. They were deluged with propaganda orchestrated by Minister of Public Enlightenment and Propaganda Joseph Goebbels, who promised peace and plenty for all in a united, Marxist-free country without the constraints of the Versailles Treaty. The Nazi Party obtained and legitimised power through its initial revolutionary activities, then through manipulation of legal mechanisms, the use of police powers, and by taking control of the state and federal institutions. The first major Nazi concentration camp, initially for political prisoners, was opened at Dachau in 1933. Hundreds of camps of varying size and function were created by the end of the war. Beginning in April 1933, scores of measures defining the status of Jews and their rights were instituted. These measures culminated in the Nuremberg Laws of 1935, which stripped Jews of their basic rights. The Nazis took from the Jews their wealth, their right to intermarry with non-Jews, and their right to occupy many fields of labour (such as law, medicine, or education). Eventually the Nazis declared that Jews were undesirable to remain among German citizens and society. In the early years of the regime, Germany was without allies, and its military was drastically weakened by the Versailles Treaty. France, Poland, Italy, and the Soviet Union each had reasons to object to Hitler's rise to power. Poland suggested to France that the two nations engage in a preventive war against Germany in March 1933. Fascist Italy objected to German claims in the Balkans and on Austria, which Benito Mussolini considered to be in Italy's sphere of influence. As early as February 1933, Hitler announced that rearmament must begin, albeit clandestinely at first, as to do so was in violation of the Versailles Treaty. On 17 May 1933, Hitler gave a speech before the Reichstag outlining his desire for world peace and accepted an offer from American President Franklin D. Roosevelt for military disarmament, provided the other nations of Europe did the same. When the other European powers failed to accept this offer, Hitler pulled Germany out of the World Disarmament Conference and the League of Nations in October, claiming its disarmament clauses were unfair if they applied only to Germany. In a referendum held in November, 95 percent of voters supported Germany's withdrawal. In 1934, Hitler told his military leaders that a war in the east should begin in 1942.
The Saarland, which had been placed under League of Nations supervision for 15 years at the end of World War I, voted in January 1935 to become part of Germany. In March 1935, Hitler announced the creation of an air force and that the "Reichswehr" would be increased to 550,000 men. Britain agreed to Germany building a naval fleet with the signing of the Anglo-German Naval Agreement on 18 June 1935. When the Italian invasion of Ethiopia led to only mild protests by the British and French governments, on 7 March 1936 Hitler used the Franco-Soviet Treaty of Mutual Assistance as a pretext to order the army to march 3,000 troops into the demilitarised zone in the Rhineland, in violation of the Versailles Treaty. As the territory was part of Germany, the British and French governments did not feel that attempting to enforce the treaty was worth the risk of war. In the one-party election held on 29 March, the Nazis received 98.9 percent support. In 1936, Hitler signed the Anti-Comintern Pact with Japan and a non-aggression agreement with Mussolini, who was soon referring to a "Rome-Berlin Axis". Hitler sent military supplies and assistance to the Nationalist forces of General Francisco Franco in the Spanish Civil War, which began in July 1936. The German Condor Legion included a range of aircraft and their crews, as well as a tank contingent. The aircraft of the Legion destroyed the city of Guernica in 1937. The Nationalists were victorious in 1939 and became an informal ally of Nazi Germany. In February 1938, Hitler emphasised to Austrian Chancellor Kurt Schuschnigg the need for Germany to secure its frontiers. Schuschnigg scheduled a plebiscite regarding Austrian independence for 13 March, but Hitler sent an ultimatum to Schuschnigg on 11 March demanding that he hand over all power to the Austrian Nazi Party or face an invasion. German troops entered Austria the next day, to be greeted with enthusiasm by the populace. The Republic of Czechoslovakia was home to a substantial minority of Germans, who lived mostly in the Sudetenland. Under pressure from separatist groups within the Sudeten German Party, the Czechoslovak government offered economic concessions to the region. Hitler decided not just to incorporate the Sudetenland into the Reich, but to destroy the country of Czechoslovakia entirely. The Nazis undertook a propaganda campaign to try to generate support for an invasion. Top German military leaders opposed the plan, as Germany was not yet ready for war. The crisis led to war preparations by Britain, Czechoslovakia, and France (Czechoslovakia's ally). Attempting to avoid war, British Prime Minister Neville Chamberlain arranged a series of meetings, the result of which was the Munich Agreement, signed on 29 September 1938. The Czechoslovak government was forced to accept the Sudetenland's annexation into Germany. Chamberlain, who declared that the agreement had brought "peace for our time", was greeted with cheers when he landed in London. In addition to the German annexation, Poland seized a narrow strip of land near Cieszyn on 2 October, while as a consequence of the Munich Agreement, Hungary demanded and received territory along its northern border in the First Vienna Award on 2 November. Following negotiations with President Emil Hácha, Hitler seized the rest of the Czech half of the country on 15 March 1939 and created the Protectorate of Bohemia and Moravia, one day after the proclamation of the Slovak Republic in the Slovak half.
Also on 15 March, Hungary occupied and annexed the recently proclaimed and unrecognised Carpatho-Ukraine and an additional sliver of land disputed with Slovakia. Austrian and Czech foreign exchange reserves were seized by the Nazis, as were stockpiles of raw materials such as metals and completed goods such as weaponry and aircraft, which were shipped to Germany. The "Reichswerke Hermann Göring" industrial conglomerate took control of steel and coal production facilities in both countries. In January 1934, Germany had signed a non-aggression pact with Poland. In March 1939, Hitler demanded the return of the Free City of Danzig and the Polish Corridor, a strip of land that separated East Prussia from the rest of Germany. The British announced they would come to the aid of Poland if it was attacked. Hitler, believing the British would not actually take action, ordered that an invasion plan be readied for September 1939. On 23 May, Hitler described to his generals his overall plan of not only seizing the Polish Corridor but greatly expanding German territory eastward at the expense of Poland. He expected that this time the Germans would be met by force. The Germans reaffirmed their alliance with Italy and signed non-aggression pacts with Denmark, Estonia, and Latvia, whilst trade links were formalised with Romania, Norway, and Sweden. Foreign Minister Joachim von Ribbentrop negotiated a non-aggression pact with the Soviet Union, the Molotov–Ribbentrop Pact, signed in August 1939. The treaty also contained secret protocols dividing Poland and the Baltic states into German and Soviet spheres of influence. Germany's wartime foreign policy involved the creation of allied governments controlled directly or indirectly from Berlin. The Nazis intended to obtain soldiers from allies such as Italy and Hungary and workers and food supplies from allies such as Vichy France. Hungary was the fourth nation to join the Axis, signing the Tripartite Pact on 20 November 1940. German efforts to secure oil included negotiating a supply from their new ally, Romania, which signed the Pact on 23 November, followed a day later by the Slovak Republic; Bulgaria signed the pact on 1 March 1941. By late 1942, there were 24 divisions from Romania on the Eastern Front, 10 from Italy, and 10 from Hungary. Germany assumed full control in France in 1942, Italy in 1943, and Hungary in 1944. Although Japan was a powerful ally, the relationship was distant, with little co-ordination or co-operation. For example, Germany refused to share their formula for synthetic oil from coal until late in the war. Germany invaded Poland and captured the Free City of Danzig on 1 September 1939, beginning World War II in Europe. Honouring their treaty obligations, Britain and France declared war on Germany two days later. Poland fell quickly, as the Soviet Union attacked from the east on 17 September. Reinhard Heydrich, chief of the "Sicherheitspolizei" (SiPo; Security Police) and "Sicherheitsdienst" (SD; Security Service), ordered on 21 September that Polish Jews should be rounded up and concentrated into cities with good rail links. Initially the intention was to deport them further east, or possibly to Madagascar. Using lists prepared in advance, some 65,000 Polish intelligentsia, noblemen, clergy, and teachers were killed by the end of 1939 in an attempt to destroy Poland's identity as a nation. Soviet forces advanced into Finland in the Winter War, and German forces saw action at sea.
But little other activity occurred until May, so the period became known as the "Phoney War". From the start of the war, a British blockade on shipments to Germany affected its economy. Germany was particularly dependent on foreign supplies of oil, coal, and grain. Because of trade embargoes and the blockade, imports into Germany declined by 80 percent. To safeguard Swedish iron ore shipments to Germany, Hitler ordered the invasion of Denmark and Norway, which began on 9 April. Denmark fell after less than a day, while most of Norway followed by the end of the month. By early June, Germany occupied all of Norway. Against the advice of many of his senior military officers, Hitler ordered an attack on France and the Low Countries, which began in May 1940. German forces quickly conquered Luxembourg and the Netherlands. After outmanoeuvring the Allies in Belgium and forcing the evacuation of many British and French troops at Dunkirk, France fell as well, surrendering to Germany on 22 June. The victory in France resulted in an upswing in Hitler's popularity and an upsurge in war fever in Germany. In violation of the provisions of the Hague Convention, industrial firms in the Netherlands, France, and Belgium were put to work producing war materiel for Germany. The Nazis seized from the French thousands of locomotives and rolling stock, stockpiles of weapons, and raw materials such as copper, tin, oil, and nickel. Payments for occupation costs were levied upon France, Belgium, and Norway. Barriers to trade led to hoarding, black markets, and uncertainty about the future. Food supplies were precarious; production dropped in most of Europe. Famine was experienced in many occupied countries. Hitler's peace overtures to the new British Prime Minister Winston Churchill were rejected in July 1940. Grand Admiral Erich Raeder had advised Hitler in June that air superiority was a pre-condition for a successful invasion of Britain, so Hitler ordered a series of aerial attacks on Royal Air Force (RAF) airbases and radar stations, as well as nightly air raids on British cities, including London, Plymouth, and Coventry. The German Luftwaffe failed to defeat the RAF in what became known as the Battle of Britain, and by the end of October, Hitler realised that air superiority would not be achieved. He permanently postponed the invasion, a plan which the commanders of the German army had never taken entirely seriously. Several historians, including Andrew Gordon, believe the primary reason for the failure of the invasion plan was the superiority of the Royal Navy, not the actions of the RAF. In February 1941, the German "Afrika Korps" arrived in Libya to aid the Italians in the North African Campaign. On 6 April, Germany launched an invasion of Yugoslavia and Greece. All of Yugoslavia and parts of Greece were subsequently divided between Germany, Hungary, Italy, and Bulgaria. On 22 June 1941, contravening the Molotov–Ribbentrop Pact, about 3.8 million Axis troops attacked the Soviet Union. In addition to Hitler's stated purpose of acquiring "Lebensraum", this large-scale offensive—codenamed Operation Barbarossa—was intended to destroy the Soviet Union and seize its natural resources for subsequent aggression against the Western powers. The reaction among Germans was one of surprise and trepidation, as many were concerned about how much longer the war would continue or suspected that Germany could not win a war fought on two fronts. The invasion conquered a huge area, including the Baltic states, Belarus, and west Ukraine.
After the successful Battle of Smolensk in September 1941, Hitler ordered Army Group Centre to halt its advance on Moscow and temporarily divert its Panzer groups to aid in the encirclement of Leningrad and Kiev. This pause provided the Red Army with an opportunity to mobilise fresh reserves. The Moscow offensive, which resumed in October 1941, ended disastrously in December. On 7 December 1941, Japan attacked Pearl Harbor, Hawaii. Four days later, Germany declared war on the United States. Food was in short supply in the conquered areas of the Soviet Union and Poland, as the retreating armies had burned the crops in some areas, and much of the remainder was sent back to the Reich. In Germany, rations were cut in 1942. In his role as Plenipotentiary of the Four Year Plan, Hermann Göring demanded increased shipments of grain from France and fish from Norway. The 1942 harvest was good, and food supplies remained adequate in Western Europe. Germany, and Europe as a whole, was almost totally dependent on foreign oil imports. In an attempt to resolve the shortage, in June 1942 Germany launched "Fall Blau" ("Case Blue"), an offensive against the Caucasian oilfields. The Red Army launched a counter-offensive on 19 November and encircled the Axis forces, who were trapped in Stalingrad on 23 November. Göring assured Hitler that the 6th Army could be supplied by air, but this turned out to be infeasible. Hitler's refusal to allow a retreat led to the deaths of 200,000 German and Romanian soldiers; of the 91,000 men who surrendered in the city on 31 January 1943, only 6,000 returned to Germany after the war. Losses continued to mount after Stalingrad, leading to a sharp reduction in the popularity of the Nazi Party and deteriorating morale. Soviet forces continued to push westward after the failed German offensive at the Battle of Kursk in the summer of 1943. By the end of 1943, the Germans had lost most of their eastern territorial gains. In Egypt, Field Marshal Erwin Rommel's "Afrika Korps" was defeated by British forces under Field Marshal Bernard Montgomery in October 1942. The Allies landed in Sicily in July 1943 and in Italy in September. Meanwhile, American and British bomber fleets based in Britain began operations against Germany. Many sorties were intentionally given civilian targets in an effort to destroy German morale. German aircraft production could not keep pace with losses, and without air cover the Allied bombing campaign became even more devastating. By targeting oil refineries and factories, the Allies crippled the German war effort by late 1944. On 6 June 1944, American, British, and Canadian forces established a front in France with the D-Day landings in Normandy. On 20 July 1944, Hitler survived an assassination attempt. He ordered brutal reprisals, resulting in 7,000 arrests and the execution of more than 4,900 people. The failed Ardennes Offensive (16 December 1944 – 25 January 1945) was the last major German offensive on the western front, and Soviet forces entered Germany on 27 January. Hitler's refusal to admit defeat and his insistence that the war be fought to the last man led to unnecessary death and destruction in the war's closing months. Through his Justice Minister Otto Georg Thierack, Hitler ordered that anyone who was not prepared to fight should be court-martialed, and thousands of people were put to death. In many areas, people surrendered to the approaching Allies in spite of the exhortations of local leaders to continue to fight.
Hitler ordered the destruction of transport, bridges, industries, and other infrastructure—a scorched earth decree—but Armaments Minister Albert Speer prevented this order from being fully carried out. During the Battle of Berlin (16 April 1945 – 2 May 1945), Hitler and his staff lived in the underground "Führerbunker" while the Red Army approached. On 30 April, when Soviet troops were within two blocks of the Reich Chancellery, Hitler committed suicide along with Eva Braun, his long-time girlfriend whom he had married the previous day. On 2 May, General Helmuth Weidling unconditionally surrendered Berlin to Soviet General Vasily Chuikov. Hitler was succeeded by Grand Admiral Karl Dönitz as Reich President and Goebbels as Reich Chancellor. Goebbels and his wife Magda committed suicide the next day after murdering their six children. Between 4 and 8 May 1945, most of the remaining German armed forces unconditionally surrendered. The German Instrument of Surrender was signed on 8 May, marking the end of the Nazi regime and the end of World War II in Europe. Popular support for Hitler almost completely disappeared as the war drew to a close. Suicide rates in Germany increased, particularly in areas where the Red Army was advancing. Among soldiers and party personnel, suicide was often deemed an honourable and heroic alternative to surrender. First-hand accounts and propaganda about the uncivilised behaviour of the advancing Soviet troops caused panic among civilians on the Eastern Front, especially women, who feared being raped. More than a thousand people (out of a population of around 16,000) committed suicide in Demmin on and around 1 May 1945 as the 65th Army of the 2nd Belorussian Front first broke into a distillery and then rampaged through the town, committing mass rapes, arbitrarily executing civilians, and setting fire to buildings. High numbers of suicides took place in many other locations, including Neubrandenburg (600 dead), Stolp in Pomerania (1,000 dead), and Berlin, where at least 7,057 people committed suicide in 1945. Estimates of the total German war dead range from 5.5 to 6.9 million persons. A study by German historian Rüdiger Overmans puts the number of German military dead and missing at 5.3 million, including 900,000 men conscripted from outside of Germany's 1937 borders. Richard Overy estimated in 2014 that about 353,000 civilians were killed in Allied air raids. Other civilian deaths include 300,000 Germans (including Jews) who were victims of Nazi political, racial, and religious persecution and 200,000 who were murdered in the Nazi euthanasia program. Political courts called "Sondergerichte" sentenced some 12,000 members of the German resistance to death, and civil courts sentenced an additional 40,000 Germans. Mass rapes of German women also took place. As a result of its defeat in World War I and the resulting Treaty of Versailles, Germany lost Alsace-Lorraine, Northern Schleswig, and Memel. The Saarland became a protectorate of France under the condition that its residents would later decide by referendum which country to join, and Poland became a separate nation and was given access to the sea by the creation of the Polish Corridor, which separated East Prussia from the rest of Germany, while Danzig was made a free city. Germany regained control of the Saarland through a referendum held in 1935 and annexed Austria in the "Anschluss" of 1938. The Munich Agreement of 1938 gave Germany control of the Sudetenland, and Germany seized the remainder of Czechoslovakia six months later.
Under threat of invasion by sea, Lithuania surrendered the Memel district in March 1939. Between 1939 and 1941, German forces invaded Poland, Denmark, Norway, France, Luxembourg, the Netherlands, Belgium, Yugoslavia, Greece, and the Soviet Union. Germany annexed parts of northern Yugoslavia in April 1941, while Mussolini ceded Trieste, South Tyrol, and Istria to Germany in 1943. Some of the conquered territories were incorporated into Germany as part of Hitler's long-term goal of creating a Greater Germanic Reich. Several areas, such as Alsace-Lorraine, were placed under the authority of an adjacent "Gau" (regional district). The "Reichskommissariate" (Reich Commissariats), quasi-colonial regimes, were established in some occupied countries. Areas placed under German administration included the Protectorate of Bohemia and Moravia, "Reichskommissariat Ostland" (encompassing the Baltic states and Belarus), and "Reichskommissariat Ukraine". Conquered areas of Belgium and France were placed under control of the Military Administration in Belgium and Northern France. Belgian Eupen-Malmedy, which had been part of Germany until 1919, was annexed. Part of Poland was incorporated into the Reich, and the General Government was established in occupied central Poland. The governments of Denmark, Norway ("Reichskommissariat Norwegen"), and the Netherlands ("Reichskommissariat Niederlande") were placed under civilian administrations staffed largely by natives. Hitler intended to eventually incorporate many of these areas into the Reich. Germany occupied the Italian protectorate of Albania and the Italian governorate of Montenegro in 1943 and installed a puppet government in occupied Serbia in 1941. The Nazis were a far-right fascist political party which arose during the social and financial upheavals following the end of World War I. The Party remained small and marginalised, receiving 2.6 percent of the federal vote in 1928, prior to the onset of the Great Depression in 1929. In 1930 the Party won 18.3 percent of the federal vote, making it the Reichstag's second-largest political party. While in prison after the failed Beer Hall Putsch of 1923, Hitler had written "Mein Kampf", which laid out his plan for transforming German society into one based on race. Nazi ideology brought together elements of antisemitism, racial hygiene, and eugenics, and combined them with pan-Germanism and territorial expansionism with the goal of obtaining more "Lebensraum" for the Germanic people. The regime attempted to obtain this new territory by attacking Poland and the Soviet Union, intending to deport or kill the Jews and Slavs living there, who were viewed as being inferior to the Aryan master race and part of a Jewish-Bolshevik conspiracy. The Nazi regime believed that only Germany could defeat the forces of Bolshevism and save humanity from world domination by International Jewry. Others deemed by the Nazis to be "life unworthy of life" included the mentally and physically disabled, Romani people, homosexuals, Jehovah's Witnesses, and social misfits. Influenced by the "Völkisch" movement, the regime was against cultural modernism and supported the development of an extensive military at the expense of intellectualism. Creativity and art were stifled, except where they could serve as propaganda media. The party used symbols such as the Blood Flag and rituals such as the Nazi Party rallies to foster unity and bolster the regime's popularity.
Hitler ruled Germany autocratically by asserting the "Führerprinzip" ("leader principle"), which called for absolute obedience of all subordinates. He viewed the government structure as a pyramid, with himself—the infallible leader—at the apex. Party rank was not determined by elections, and positions were filled through appointment by those of higher rank. The party used propaganda to develop a cult of personality around Hitler. Historians such as Kershaw emphasise the psychological impact of Hitler's skill as an orator. Roger Gill states: "His moving speeches captured the minds and hearts of a vast number of the German people: he virtually hypnotized his audiences". While top officials reported to Hitler and followed his policies, they had considerable autonomy. He expected officials to "work towards the Führer" – to take the initiative in promoting policies and actions in line with party goals and Hitler's wishes, without his involvement in day-to-day decision-making. The government was a disorganised collection of factions led by the party elite, who struggled to amass power and gain the Führer's favour. Hitler's leadership style was to give contradictory orders to his subordinates and to place them in positions where their duties and responsibilities overlapped. In this way he fostered distrust, competition, and infighting among his subordinates to consolidate and maximise his own power. Successive "Reichsstatthalter" decrees between 1933 and 1935 abolished the existing "Länder" (constituent states) of Germany and replaced them with new administrative divisions, the "Gaue", governed by Nazi leaders ("Gauleiters"). The change was never fully implemented, as the Länder were still used as administrative divisions for some government departments such as education. This led to a bureaucratic tangle of overlapping jurisdictions and responsibilities typical of the administrative style of the Nazi regime. Jewish civil servants lost their jobs in 1933, except for those who had seen military service in World War I. Members of the Party or party supporters were appointed in their place. As part of the process of "Gleichschaltung", the Reich Local Government Law of 1935 abolished local elections, and mayors were appointed by the Ministry of the Interior. In August 1934, civil servants and members of the military were required to swear an oath of unconditional obedience to Hitler. These laws became the basis of the "Führerprinzip", the concept that Hitler's word overrode all existing laws. Any acts that were sanctioned by Hitler—even murder—thus became legal. All legislation proposed by cabinet ministers had to be approved by the office of Deputy Führer Rudolf Hess, who could also veto top civil service appointments. Most of the judicial system and legal codes of the Weimar Republic remained in place to deal with non-political crimes. The courts issued and carried out far more death sentences than before the Nazis took power. People who were convicted of three or more offences—even petty ones—could be deemed habitual offenders and jailed indefinitely. People such as prostitutes and pickpockets were judged to be inherently criminal and a threat to the community. Thousands were arrested and confined indefinitely without trial. A new type of court, the "Volksgerichtshof" ("People's Court"), was established in 1934 to deal with political cases. This court handed out over 5,000 death sentences until its dissolution in 1945. 
The death penalty could be imposed for offences such as being a communist, printing seditious leaflets, or even making jokes about Hitler or other officials. The Gestapo was in charge of investigative policing to enforce National Socialist ideology as it located and confined political offenders, Jews, and others deemed undesirable. Political offenders who were released from prison were often immediately re-arrested by the Gestapo and confined in a concentration camp. The Nazis used propaganda to promulgate the concept of "Rassenschande" ("race defilement") to justify the need for racial laws. In September 1935, the Nuremberg Laws were enacted. These laws initially prohibited sexual relations and marriages between Aryans and Jews and were later extended to include "Gypsies, Negroes or their bastard offspring". The law also forbade the employment of German women under the age of 45 as domestic servants in Jewish households. The Reich Citizenship Law stated that only those of "German or related blood" could be citizens. Thus Jews and other non-Aryans were stripped of their German citizenship. The law also permitted the Nazis to deny citizenship to anyone who was not supportive enough of the regime. A supplementary decree issued in November defined as Jewish anyone with three Jewish grandparents, or two grandparents if the Jewish faith was followed. The unified armed forces of Germany from 1935 to 1945 were called the "Wehrmacht" (defence force). This included the "Heer" (army), "Kriegsmarine" (navy), and the "Luftwaffe" (air force). From 2 August 1934, members of the armed forces were required to pledge an oath of unconditional obedience to Hitler personally. In contrast to the previous oath, which required allegiance to the constitution of the country and its lawful establishments, this new oath required members of the military to obey Hitler even if they were being ordered to do something illegal. Hitler decreed that the army would have to tolerate and even offer logistical support to the "Einsatzgruppen"—the mobile death squads responsible for millions of deaths in Eastern Europe—when it was tactically possible to do so. "Wehrmacht" troops also participated directly in the Holocaust by shooting civilians or committing genocide under the guise of anti-partisan operations. The party line was that the Jews were the instigators of the partisan struggle and therefore needed to be eliminated. On 8 July 1941, Heydrich announced that all Jews in the eastern conquered territories were to be regarded as partisans and gave the order for all male Jews between the ages of 15 and 45 to be shot. By August, this was extended to include the entire Jewish population. In spite of efforts to prepare the country militarily, the economy could not sustain a lengthy war of attrition. A strategy was developed based on the tactic of "Blitzkrieg" ("lightning war"), which involved using quick coordinated assaults that avoided enemy strong points. Attacks began with artillery bombardment, followed by bombing and strafing runs. Next the tanks would attack, and finally the infantry would move in to secure the captured area. Victories continued through mid-1940, but the failure to defeat Britain was the first major turning point in the war. The decision to attack the Soviet Union and the decisive defeat at Stalingrad led to the retreat of the German armies and the eventual loss of the war. The total number of soldiers who served in the "Wehrmacht" from 1935 to 1945 was around 18.2 million, of whom 5.3 million died.
The "Sturmabteilung" (SA; Storm Detachment), or Brownshirts, founded in 1921, was the first paramilitary wing of the Nazi Party; their initial assignment was to protect Nazi leaders at rallies and assemblies. They also took part in street battles against the forces of rival political parties and violent actions against Jews and others. Under Ernst Röhm's leadership the SA grew by 1934 to over half a million members—4.5 million including reserves—at a time when the regular army was still limited to 100,000 men by the Versailles Treaty. Röhm hoped to assume command of the army and absorb it into the ranks of the SA. Hindenburg and Defence Minister Werner von Blomberg threatened to impose martial law if the activities of the SA were not curtailed. Therefore, less than a year and a half after seizing power, Hitler ordered the deaths of the SA leadership, including Rohm. After the purge of 1934, the SA was no longer a major force. Initially a small bodyguard unit under the auspices of the SA, the "Schutzstaffel" (SS; Protection Squadron) grew to become one of the largest and most powerful groups in Nazi Germany. Led by "Reichsführer-SS" Heinrich Himmler from 1929, the SS had over a quarter million members by 1938. Himmler initially envisioned the SS as being an elite group of guards, Hitler's last line of defence. The Waffen-SS, the military branch of the SS, evolved into a second army. It was dependent on the regular army for heavy weaponry and equipment, and most units were under tactical control of the High Command of the Armed Forces (OKW). By the end of 1942, the stringent selection and racial requirements that had initially been in place were no longer followed. With recruitment and conscription based only on expansion, by 1943 the Waffen-SS could not longer claim to be an elite fighting force. SS formations committed many war crimes against civilians and allied servicemen. From 1935 onward, the SS spearheaded the persecution of Jews, who were rounded up into ghettos and concentration camps. With the outbreak of World War II, the SS "Einsatzgruppen" units followed the army into Poland and the Soviet Union, where from 1941 to 1945 they killed more than two million people, including 1.3 million Jews. A third of the "Einsatzgruppen" members were recruited from Waffen-SS personnel. The "SS-Totenkopfverbände" (death's head units) ran the concentration camps and extermination camps, where millions more were killed. Up to 60,000 Waffen-SS men served in the camps. In 1931, Himmler organised an SS intelligence service which became known as the "Sicherheitsdienst" (SD; Security Service) under his deputy, Heydrich. This organisation was tasked with locating and arresting communists and other political opponents. Himmler established the beginnings of a parallel economy under the auspices of the SS Economy and Administration Head Office. This holding company owned housing corporations, factories, and publishing houses. The most pressing economic matter the Nazis initially faced was the 30 percent national unemployment rate. Economist Dr. Hjalmar Schacht, President of the Reichsbank and Minister of Economics, created a scheme for deficit financing in May 1933. Capital projects were paid for with the issuance of promissory notes called Mefo bills. When the notes were presented for payment, the Reichsbank printed money. Hitler and his economic team expected that the upcoming territorial expansion would provide the means of repaying the soaring national debt. 
Schacht's administration achieved a rapid decline in the unemployment rate, the largest of any country during the Great Depression. Economic recovery was uneven, with reduced hours of work and erratic availability of necessities, leading to disenchantment with the regime as early as 1934. In October 1933, the Junkers Aircraft Works was expropriated. In concert with other aircraft manufacturers and under the direction of Aviation Minister Göring, production was ramped up. From a workforce of 3,200 people producing 100 units per year in 1932, the industry grew in less than ten years to employ a quarter of a million workers manufacturing over 10,000 technically advanced aircraft annually. An elaborate bureaucracy was created to regulate imports of raw materials and finished goods with the intention of eliminating foreign competition in the German marketplace and improving the nation's balance of payments. The Nazis encouraged the development of synthetic replacements for materials such as oil and textiles. As the market was experiencing a glut and prices for petroleum were low, in 1933 the Nazi government made a profit-sharing agreement with IG Farben, guaranteeing them a 5 percent return on capital invested in their synthetic oil plant at Leuna. Any profits in excess of that amount would be turned over to the Reich. By 1936, Farben regretted making the deal, as excess profits were by then being generated. In another attempt to secure an adequate wartime supply of petroleum, Germany intimidated Romania into signing a trade agreement in March 1939. Major public works projects financed with deficit spending included the construction of a network of "Autobahnen" and funding for programmes initiated by the previous government for housing and agricultural improvements. To stimulate the construction industry, credit was offered to private businesses and subsidies were made available for home purchases and repairs. On the condition that the wife would leave the workforce, a loan of up to 1,000 Reichsmarks could be accessed by young couples of Aryan descent who intended to marry, and the amount that had to be repaid was reduced by 25 percent for each child born. The caveat that the woman had to remain unemployed outside the home was dropped by 1937 due to a shortage of skilled labourers. Envisioning widespread car ownership as part of the new Germany, Hitler arranged for designer Ferdinand Porsche to draw up plans for the "KdF-wagen" (Strength Through Joy car), intended to be an automobile that everyone could afford. A prototype was displayed at the International Motor Show in Berlin on 17 February 1939. With the outbreak of World War II, the factory was converted to produce military vehicles. None were sold until after the war, when the vehicle was renamed the Volkswagen (people's car). Six million people were unemployed when the Nazis took power in 1933, and by 1937 there were fewer than a million. This was in part due to the removal of women from the workforce. Real wages dropped by 25 percent between 1933 and 1938. After the dissolution of the trade unions in May 1933, their funds were seized and their leadership arrested, including those who attempted to co-operate with the Nazis. A new organisation, the German Labour Front, was created and placed under Nazi Party functionary Robert Ley. The average work week was 43 hours in 1933; by 1939 this had increased to 47 hours. By early 1934, the focus shifted towards rearmament.
By 1935, military expenditures accounted for 73 percent of the government's purchases of goods and services. On 18 October 1936, Hitler named Göring as Plenipotentiary of the Four Year Plan, intended to speed up rearmament. In addition to calling for the rapid construction of steel mills, synthetic rubber plants, and other factories, Göring instituted wage and price controls and restricted the issuance of stock dividends. Large expenditures were made on rearmament in spite of growing deficits. Plans unveiled in late 1938 for massive increases to the navy and air force were impossible to fulfil, as Germany lacked the finances and material resources to build the planned units, as well as the necessary fuel required to keep them running. With the introduction of compulsory military service in 1935, the "Reichswehr", which had been limited to 100,000 men by the terms of the Versailles Treaty, expanded to 750,000 men on active service at the start of World War II, with a million more in the reserve. By January 1939, unemployment was down to 301,800, and it dropped to only 77,500 by September. The Nazi war economy was a mixed economy that combined a free market with central planning. Historian Richard Overy describes it as being somewhere in between the command economy of the Soviet Union and the capitalist system of the United States. In 1942, after the death of Armaments Minister Fritz Todt, Hitler appointed Albert Speer as his replacement. Wartime rationing of consumer goods led to an increase in personal savings, funds which were in turn lent to the government to support the war effort. By 1944, the war was consuming 75 percent of Germany's gross domestic product, compared to 60 percent in the Soviet Union and 55 percent in Britain. Speer improved production by centralising planning and control, reducing production of consumer goods, and using forced labour and slavery. The wartime economy eventually relied heavily upon the large-scale employment of slave labour. Germany imported and enslaved some 12 million people from 20 European countries to work in factories and on farms. Approximately 75 percent were Eastern European. Many were casualties of Allied bombing, as they received poor air raid protection. Poor living conditions led to high rates of sickness, injury, and death, as well as sabotage and criminal activity. The wartime economy also relied upon large-scale robbery, initially through the state seizing the property of Jewish citizens and later by plundering the resources of occupied territories. Foreign workers brought into Germany were put into four classifications: guest workers, military internees, civilian workers, and Eastern workers. Each group was subject to different regulations. The Nazis issued a ban on sexual relations between Germans and foreign workers. By 1944, over half a million women served as auxiliaries in the German armed forces. The number of women in paid employment increased by only 271,000 (1.8 percent) from 1939 to 1944. As the production of consumer goods had been cut back, women left those industries for employment in the war economy. They also took jobs formerly held by men, especially on farms and in family-owned shops. Very heavy strategic bombing by the Allies targeted refineries producing synthetic oil and gasoline, as well as the German transportation system, especially rail yards and canals. The armaments industry began to break down by September 1944.
By November, fuel coal was no longer reaching its destinations and the production of new armaments was no longer possible. Overy argues that the bombing strained the German war economy and forced it to divert up to one-fourth of its manpower and industry into anti-aircraft resources, which very likely shortened the war. During the course of the war, the Nazis extracted considerable plunder from occupied Europe. Historian and war correspondent William L. Shirer writes: "The total amount of [Nazi] loot will never be known; it has proved beyond man's capacity to accurately compute." Gold reserves and other foreign holdings were seized from the national banks of occupied nations, while large "occupation costs" were usually imposed. By the end of the war, occupation costs were calculated by the Nazis at 60 billion Reichsmarks, with France alone paying 31.5 billion. The Bank of France was forced to provide 4.5 billion Reichsmarks in "credits" to Germany, while a further 500,000 Reichsmarks were assessed against Vichy France by the Nazis in the form of "fees" and other miscellaneous charges. The Nazis exploited other conquered nations in a similar way. After the war, the United States Strategic Bombing Survey concluded Germany had obtained 104 billion Reichsmarks in the form of occupation costs and other wealth transfers from occupied Europe, including two-thirds of the gross domestic product of Belgium and the Netherlands. Nazi plunder included private and public art collections, artefacts, precious metals, books, and personal possessions. Hitler and Göring in particular were interested in acquiring looted art treasures from occupied Europe, the former planning to use the stolen art to fill the galleries of the planned "Führermuseum" (Leader's Museum), and the latter for his personal collection. Göring, having stripped almost all of occupied Poland of its artworks within six months of Germany's invasion, ultimately grew a collection valued at over 50 million Reichsmarks. In 1940, the Reichsleiter Rosenberg Taskforce was established to loot artwork and cultural material from public and private collections, libraries, and museums throughout Europe. France saw the greatest extent of Nazi plunder. Some 26,000 railroad cars of art treasures, furniture, and other looted items were sent to Germany from France. By January 1941, Rosenberg estimated the looted treasures from France to be valued at over one billion Reichsmarks. In addition, soldiers looted or purchased goods such as produce and clothing—items that were becoming harder to obtain in Germany—for shipment home. Goods and raw materials were also taken. In France, large quantities of cereals were seized during the course of the war, including 75 percent of its oats. In addition, 80 percent of the country's oil and 74 percent of its steel production were taken. The valuation of this loot is estimated to be 184.5 billion francs. In Poland, Nazi plunder of raw materials began even before the German invasion had concluded. Following Operation Barbarossa, the Soviet Union was also plundered. In 1943 alone, 9,000,000 tons of cereals, along with large quantities of fodder, potatoes, and meat, were sent back to Germany. During the course of the German occupation, some 12 million pigs and 13 million sheep were taken. The value of this plunder is estimated at 4 billion Reichsmarks. This relatively low number in comparison to the occupied nations of Western Europe can be attributed to the devastating fighting on the Eastern Front.
Racism and antisemitism were basic tenets of the Nazi Party and the Nazi regime. Nazi Germany's racial policy was based on their belief in the existence of a superior master race. The Nazis postulated the existence of a racial conflict between the Aryan master race and inferior races, particularly Jews, who were viewed as a mixed race that had infiltrated society and were responsible for the exploitation and repression of the Aryan race. Discrimination against Jews began immediately after the seizure of power. Following a month-long series of attacks by members of the SA on Jewish businesses and synagogues, on 1 April 1933 Hitler declared a national boycott of Jewish businesses. The Law for the Restoration of the Professional Civil Service passed on 7 April forced all non-Aryan civil servants to retire from the legal profession and civil service. Similar legislation soon deprived other Jewish professionals of their right to practise, and on 11 April a decree was promulgated that stated anyone who had even one Jewish parent or grandparent was considered non-Aryan. As part of the drive to remove Jewish influence from cultural life, members of the National Socialist Student League removed from libraries any books considered un-German, and a nationwide book burning was held on 10 May. The regime used violence and economic pressure to encourage Jews to voluntarily leave the country. Jewish businesses were denied access to markets, forbidden to advertise, and deprived of access to government contracts. Citizens were harassed and subjected to violent attacks. Many towns posted signs forbidding entry to Jews. In November 1938 a young Jewish man requested an interview with the German ambassador in Paris and met with a legation secretary, whom he shot and killed to protest his family's treatment in Germany. This incident provided the pretext for a pogrom the Nazis incited against the Jews on 9 November 1938. Members of the SA damaged or destroyed synagogues and Jewish property throughout Germany. At least 91 German Jews were killed during this pogrom, later called "Kristallnacht", the Night of Broken Glass. Further restrictions were imposed on Jews in the coming months – they were forbidden to own businesses or work in retail shops, drive cars, go to the cinema, visit the library, or own weapons, and Jewish pupils were removed from schools. The Jewish community was fined one billion marks to pay for the damage caused by "Kristallnacht" and told that any insurance settlements would be confiscated. By 1939, around 250,000 of Germany's 437,000 Jews had emigrated to the United States, Argentina, Great Britain, Palestine, and other countries. Many chose to stay in continental Europe. Emigrants to Palestine were allowed to transfer property there under the terms of the Haavara Agreement, but those moving to other countries had to leave virtually all their property behind, and it was seized by the government. Like the Jews, the Romani people were subjected to persecution from the early days of the regime. The Romani were forbidden to marry people of German extraction. They were shipped to concentration camps starting in 1935 and many were killed. Following the invasion of Poland, 2,500 Roma and Sinti people were deported from Germany to the General Government, where they were imprisoned in labour camps. The survivors were likely exterminated at Bełżec, Sobibor, or Treblinka. 
A further 5,000 Sinti and Austrian Lalleri people were deported to the Łódź Ghetto in late 1941, where half were estimated to have died. The Romani survivors of the ghetto were subsequently moved to the Chełmno extermination camp in early 1942. The Nazis intended to deport all Romani people from Germany, and confined them to "Zigeunerlager" (Gypsy camps) for this purpose. Himmler ordered their deportation from Germany in December 1942, with few exceptions. A total of 23,000 Romani were deported to Auschwitz concentration camp, of whom 19,000 died. Outside of Germany, the Romani people were regularly used for forced labour, though many were killed. In the Baltic states and the Soviet Union, 30,000 Romani were killed by the SS, the German Army, and "Einsatzgruppen". In occupied Serbia, 1,000 to 12,000 Romani were killed, while nearly all 25,000 Romani living in the Independent State of Croatia were killed. Estimates at the end of the war put the total death toll at around 220,000, which equalled approximately 25 percent of the Romani population in Europe. Action T4 was a programme of systematic murder of the physically and mentally handicapped and patients in psychiatric hospitals that took place mainly from 1939 to 1941, and continued until the end of the war. Initially the victims were shot by the "Einsatzgruppen" and others; gas chambers and gas vans using carbon monoxide were used by early 1940. Under the Law for the Prevention of Hereditarily Diseased Offspring, enacted on 14 July 1933, over 400,000 individuals underwent compulsory sterilisation. Over half were those considered mentally deficient, which included not only people who scored poorly on intelligence tests, but those who deviated from expected standards of behaviour regarding thrift, sexual behaviour, and cleanliness. Most of the victims came from disadvantaged groups such as prostitutes, the poor, the homeless, and criminals. Other groups persecuted and killed included Jehovah's Witnesses, homosexuals, social misfits, and members of the political and religious opposition. Germany's war in the East was based on Hitler's long-standing view that Jews were the great enemy of the German people and that "Lebensraum" was needed for Germany's expansion. Hitler focused his attention on Eastern Europe, aiming to conquer Poland and the Soviet Union. After the occupation of Poland in 1939, all Jews living in the General Government were confined to ghettos, and those who were physically fit were required to perform compulsory labour. In 1941 Hitler decided to destroy the Polish nation completely; within 15 to 20 years the General Government was to be cleared of ethnic Poles and resettled by German colonists. About 3.8 to 4 million Poles would remain as slaves, part of a slave labour force of 14 million the Nazis intended to create using citizens of conquered nations. The "Generalplan Ost" ("General Plan for the East") called for deporting the population of occupied Eastern Europe and the Soviet Union to Siberia, for use as slave labour or to be murdered. To determine who should be killed, Himmler created the "Volksliste", a system of classification of people deemed to be of German blood. He ordered that those of Germanic descent who refused to be classified as ethnic Germans should be deported to concentration camps, have their children taken away, or be assigned to forced labour. The plan also included the kidnapping of children deemed to have Aryan-Nordic traits, who were presumed to be of German descent. 
The goal was to implement "Generalplan Ost" after the conquest of the Soviet Union, but when the invasion failed Hitler had to consider other options. One suggestion was a mass forced deportation of Jews to Poland, Palestine, or Madagascar. In addition to eliminating Jews, the Nazis planned to reduce the population of the conquered territories by 30 million people through starvation in an action called the Hunger Plan. Food supplies would be diverted to the German army and German civilians. Cities would be razed and the land allowed to return to forest or resettled by German colonists. Together, the Hunger Plan and "Generalplan Ost" would have led to the starvation of 80 million people in the Soviet Union. These partially fulfilled plans resulted in the democidal deaths of an estimated 19.3 million civilians and prisoners of war (POWs) throughout the USSR and elsewhere in Europe. During the course of the war, the Soviet Union lost a total of 27 million people; less than nine million of these were combat deaths. One in four of the Soviet population was killed or wounded. Around the time of the failed offensive against Moscow in December 1941, Hitler resolved that the Jews of Europe were to be exterminated immediately. While the murder of Jewish civilians had been ongoing in the occupied territories of Poland and the Soviet Union, plans for the total eradication of the Jewish population of Europe—eleven million people—were formalised at the Wannsee Conference on 20 January 1942. Some would be worked to death and the rest would be killed in the implementation of the Final Solution to the Jewish Question. Initially the victims were killed by "Einsatzgruppen" firing squads, then by stationary gas chambers or by gas vans, but these methods proved impractical for an operation of this scale. By 1942 extermination camps equipped with gas chambers were established at Auschwitz, Chełmno, Sobibor, Treblinka, and elsewhere. The total number of Jews murdered is estimated at 5.5 to six million, including over a million children. The Allies received information about the murders from the Polish government-in-exile and Polish leadership in Warsaw, based mostly on intelligence from the Polish underground. German citizens had access to information about what was happening, as soldiers returning from the occupied territories reported on what they had seen and done. Historian Richard J. Evans states that most German citizens disapproved of the genocide. Poles were viewed by the Nazis as subhuman non-Aryans, and during the German occupation of Poland 2.7 million ethnic Poles were killed. Polish civilians were subject to forced labour in German industry, internment, wholesale expulsions to make way for German colonists, and mass executions. The German authorities engaged in a systematic effort to destroy Polish culture and national identity. During Operation AB-Aktion, many university professors and members of the Polish intelligentsia were arrested, transported to concentration camps, or executed. During the war, Poland lost an estimated 39 to 45 percent of its physicians and dentists, 26 to 57 percent of its lawyers, 15 to 30 percent of its teachers, 30 to 40 percent of its scientists and university professors, and 18 to 28 percent of its clergy. The Nazis captured 5.75 million Soviet prisoners of war, more than they took from all the other Allied powers combined. Of these, they killed an estimated 3.3 million, 2.8 million of them between June 1941 and January 1942. 
Many POWs starved to death or resorted to cannibalism while being held in open-air pens at Auschwitz and elsewhere. From 1942 onward, Soviet POWs were viewed as a source of forced labour, and received better treatment so they could work. By December 1944, 750,000 Soviet POWs were working, including in German armaments factories (in violation of the Hague and Geneva conventions), mines, and farms. Antisemitic legislation passed in 1933 led to the removal of all Jewish teachers, professors, and officials from the education system. Most teachers were required to belong to the "Nationalsozialistischer Lehrerbund" (NSLB; National Socialist Teachers League) and university professors were required to join the National Socialist German Lecturers League. Teachers had to take an oath of loyalty and obedience to Hitler, and those who failed to show sufficient conformity to party ideals were often reported by students or fellow teachers and dismissed. Lack of funding for salaries led to many teachers leaving the profession. The average class size increased from 37 in 1927 to 43 in 1938 due to the resulting teacher shortage. Frequent and often contradictory directives were issued by Interior Minister Wilhelm Frick, Bernhard Rust of the Reich Ministry of Science, Education and Culture, and other agencies regarding the content of lessons and acceptable textbooks for use in primary and secondary schools. Books deemed unacceptable to the regime were removed from school libraries. Indoctrination in National Socialist thought was made compulsory in January 1934. Students selected as future members of the party elite were indoctrinated from the age of 12 at Adolf Hitler Schools for primary education and National Political Institutes of Education for secondary education. Detailed National Socialist indoctrination of future holders of elite military rank was undertaken at Order Castles. Primary and secondary education focused on racial biology, population policy, culture, geography, and physical fitness. The curriculum in most subjects, including biology, geography, and even arithmetic, was altered to change the focus to race. Military education became the central component of physical education, and education in physics was oriented toward subjects with military applications, such as ballistics and aerodynamics. Students were required to watch all films prepared by the school division of the Reich Ministry of Public Enlightenment and Propaganda. At universities, appointments to top posts were the subject of power struggles between the education ministry, the university boards, and the National Socialist German Students' League. In spite of pressure from the League and various government ministries, most university professors did not make changes to their lectures or syllabus during the Nazi period. This was especially true of universities located in predominantly Catholic regions. Enrolment at German universities declined from 104,000 students in 1931 to 41,000 in 1939, but enrolment in medical schools rose sharply as Jewish doctors had been forced to leave the profession, so medical graduates had good job prospects. From 1934, university students were required to attend frequent and time-consuming military training sessions run by the SA. First-year students also had to serve six months in a labour camp for the Reich Labour Service; an additional ten weeks of service was required of second-year students. Women were a cornerstone of Nazi social policy. 
The Nazis opposed the feminist movement, claiming that it was the creation of Jewish intellectuals, instead advocating a patriarchal society in which the German woman would recognise that her "world is her husband, her family, her children, and her home". Feminist groups were shut down or incorporated into the National Socialist Women's League, which coordinated groups throughout the country to promote motherhood and household activities. Courses were offered on childrearing, sewing, and cooking. Prominent feminists, including Anita Augspurg, Lida Gustava Heymann, and Helene Stöcker, felt forced to live in exile. The League published the "NS-Frauen-Warte", the only Nazi-approved women's magazine in Germany; despite some propaganda aspects, it was predominantly an ordinary women's magazine. Women were encouraged to leave the workforce, and the creation of large families by racially suitable women was promoted through a propaganda campaign. Women received a bronze award—known as the "Ehrenkreuz der Deutschen Mutter" (Cross of Honour of the German Mother)—for giving birth to four children, silver for six, and gold for eight or more. Large families received subsidies to help with expenses. Though the measures led to increases in the birth rate, the number of families having four or more children declined by five percent between 1935 and 1940. Removing women from the workforce did not have the intended effect of freeing up jobs for men, as women were for the most part employed as domestic servants, weavers, or in the food and drink industries—jobs that were not of interest to men. Nazi philosophy prevented large numbers of women from being hired to work in munitions factories in the build-up to the war, so foreign labourers were brought in. After the war started, slave labourers were extensively used. In January 1943, Hitler signed a decree requiring all women under the age of fifty to report for work assignments to help the war effort. Thereafter women were funnelled into agricultural and industrial jobs, and by September 1944, 14.9 million women were working in munitions production. Nazi leaders endorsed the idea that rational and theoretical work was alien to a woman's nature, and as such discouraged women from seeking higher education. A law passed in April 1933 limited the number of females admitted to university to ten percent of the number of male attendees. This resulted in female enrolment in secondary schools dropping from 437,000 in 1926 to 205,000 in 1937. The number of women enrolled in post-secondary schools dropped from 128,000 in 1933 to 51,000 in 1938. However, with the requirement that men be enlisted into the armed forces during the war, women comprised half of the enrolment in the post-secondary system by 1944. Women were expected to be strong, healthy, and vital. The sturdy peasant woman who worked the land and bore strong children was considered ideal, and women were praised for being athletic and tanned from working outdoors. Organisations were created for the indoctrination of Nazi values. From 25 March 1939, membership in the Hitler Youth was made compulsory for all children over the age of ten. The "Jungmädelbund" (Young Girls League) section of the Hitler Youth was for girls age 10 to 14 and the "Bund Deutscher Mädel" (BDM; League of German Girls) was for young women age 14 to 18. The BDM's activities focused on physical education, with activities such as running, long jumping, somersaulting, tightrope walking, marching, and swimming. 
The Nazi regime promoted a liberal code of conduct regarding sexual matters and was sympathetic to women who bore children out of wedlock. Promiscuity increased as the war progressed, with unmarried soldiers often intimately involved with several women simultaneously. Soldiers' wives were frequently involved in extramarital relationships. Sex was sometimes used as a commodity to obtain better work from a foreign labourer. Pamphlets enjoined German women to avoid sexual relations with foreign workers as a danger to their blood. With Hitler's approval, Himmler intended that the new society of the Nazi regime should destigmatise illegitimate births, particularly of children fathered by members of the SS, who were vetted for racial purity. His hope was that each SS family would have between four and six children. The "Lebensborn" (Fountain of Life) association, founded by Himmler in 1935, created a series of maternity homes to accommodate single mothers during their pregnancies. Both parents were examined for racial suitability before acceptance. The resulting children were often adopted into SS families. The homes were also made available to the wives of SS and Nazi Party members, who quickly filled over half the available spots. Existing laws banning abortion except for medical reasons were strictly enforced by the Nazi regime. The number of abortions declined from 35,000 per year at the start of the 1930s to fewer than 2,000 per year at the end of the decade, though in 1935 a law was passed allowing abortions for eugenics reasons. Nazi Germany had a strong anti-tobacco movement, as pioneering research by Franz H. Müller in 1939 demonstrated a causal link between smoking and lung cancer. The Reich Health Office took measures to try to limit smoking, including producing lectures and pamphlets. Smoking was banned in many workplaces, on trains, and among on-duty members of the military. Government agencies also worked to control other carcinogenic substances such as asbestos and pesticides. As part of a general public health campaign, water supplies were cleaned up, lead and mercury were removed from consumer products, and women were urged to undergo regular screenings for breast cancer. Government-run health care insurance plans were available, but Jews were denied coverage starting in 1933. That same year, Jewish doctors were forbidden to treat government-insured patients. In 1937, Jewish doctors were forbidden to treat non-Jewish patients and in 1938 their right to practise medicine was removed entirely. Medical experiments, many of them pseudoscientific, were performed on concentration camp inmates beginning in 1941. The most notorious doctor to perform medical experiments was SS-"Hauptsturmführer" Dr. Josef Mengele, camp doctor at Auschwitz. Many of his victims died or were intentionally killed. Concentration camp inmates were made available for purchase by pharmaceutical companies for drug testing and other experiments. Nazi society had elements supportive of animal rights and many people were fond of zoos and wildlife. The government took several measures to ensure the protection of animals and the environment. In 1933, the Nazis enacted a stringent animal-protection law that affected what was allowed for medical research. The law was only loosely enforced, and in spite of a ban on vivisection, the Ministry of the Interior readily handed out permits for experiments on animals. 
The Reich Forestry Office under Göring enforced regulations that required foresters to plant a variety of trees to ensure suitable habitat for wildlife, and a new Reich Animal Protection Act became law in 1933. The regime enacted the Reich Nature Protection Act in 1935 to protect the natural landscape from excessive economic development. It allowed for the expropriation of privately owned land to create nature preserves and aided in long-range planning. Perfunctory efforts were made to curb air pollution, but little enforcement of existing legislation was undertaken once the war began. When the Nazis seized power in 1933, roughly 67 percent of the population of Germany was Protestant, 33 percent was Roman Catholic, and Jews made up less than 1 percent. According to the 1939 census, 54 percent considered themselves Protestant, 40 percent Roman Catholic, 3.5 percent "Gottgläubig" (God-believing; a Nazi religious movement) and 1.5 percent nonreligious. Under the "Gleichschaltung" process, Hitler attempted to create a unified Protestant Reich Church from Germany's 28 existing Protestant state churches, with the ultimate goal of eradicating the churches in Germany. Pro-Nazi Ludwig Müller was installed as Reich Bishop and the pro-Nazi pressure group German Christians gained control of the new church. They objected to the Old Testament because of its Jewish origins and demanded that converted Jews be barred from their church. Pastor Martin Niemöller responded with the formation of the Confessing Church, from which some clergymen opposed the Nazi regime. When in 1935 the Confessing Church synod protested the Nazi policy on religion, 700 of their pastors were arrested. Müller resigned and Hitler appointed Hanns Kerrl as Minister for Church Affairs to continue efforts to control Protestantism. In 1936, a Confessing Church envoy protested to Hitler against the religious persecutions and human rights abuses. Hundreds more pastors were arrested. The church continued to resist and by early 1937 Hitler abandoned his hope of uniting the Protestant churches. Niemöller was arrested on 1 July 1937 and spent most of the next seven years in Sachsenhausen concentration camp and Dachau. Theological universities were closed and pastors and theologians of other Protestant denominations were also arrested. Persecution of the Catholic Church in Germany followed the Nazi takeover. Hitler moved quickly to eliminate political Catholicism, rounding up functionaries of the Catholic-aligned Bavarian People's Party and Catholic Centre Party, which along with all other non-Nazi political parties ceased to exist by July. The "Reichskonkordat" (Reich Concordat) treaty with the Vatican was signed in 1933, amid continuing harassment of the church in Germany. The treaty required the regime to honour the independence of Catholic institutions and prohibited clergy from involvement in politics. Hitler routinely disregarded the Concordat, closing all Catholic institutions whose functions were not strictly religious. Clergy, nuns and lay leaders were targeted, with thousands of arrests over the ensuing years, often on trumped-up charges of currency smuggling or immorality. Several Catholic leaders were targeted in the 1934 Night of the Long Knives assassinations. Most Catholic youth groups refused to dissolve themselves and Hitler Youth leader Baldur von Schirach encouraged members to attack Catholic boys in the streets. 
Propaganda campaigns claimed the church was corrupt, restrictions were placed on public meetings and Catholic publications faced censorship. Catholic schools were required to reduce religious instruction and crucifixes were removed from state buildings. Pope Pius XI had the "Mit brennender Sorge" ("With Burning Concern") encyclical smuggled into Germany for Passion Sunday 1937 and read from every pulpit as it denounced the systematic hostility of the regime toward the church. In response, Goebbels renewed the regime's crackdown and propaganda against Catholics. Enrolment in denominational schools dropped sharply and by 1939 all such schools were disbanded or converted to public facilities. Later Catholic protests included the 22 March 1942 pastoral letter by the German bishops on "The Struggle against Christianity and the Church". About 30 percent of Catholic priests were disciplined by police during the Nazi era. A vast security network spied on the activities of clergy and priests were frequently denounced, arrested or sent to concentration camps – many to the dedicated clergy barracks at Dachau. In the areas of Poland annexed in 1939, the Nazis instigated a brutal suppression and systematic dismantling of the Catholic Church. Alfred Rosenberg, head of the Nazi Party Office of Foreign Affairs and Hitler's appointed cultural and educational leader for Nazi Germany, considered Catholicism to be among the Nazis' chief enemies. He planned the "extermination of the foreign Christian faiths imported into Germany", and for the Bible and Christian cross to be replaced in all churches, cathedrals, and chapels with copies of "Mein Kampf" and the swastika. Other sects of Christianity were also targeted, with Chief of the Nazi Party Chancellery Martin Bormann publicly proclaiming in 1941, "National Socialism and Christianity are irreconcilable." Shirer writes that opposition to Christianity within Party leadership was so pronounced that "the Nazi regime intended to eventually destroy Christianity in Germany, if it could, and substitute the old paganism of the early tribal Germanic gods and the new paganism of the Nazi extremists." While no unified resistance movement opposing the Nazi regime existed, acts of defiance such as sabotage and labour slowdowns took place, as well as attempts to overthrow the regime or assassinate Hitler. The banned Communist and Social Democratic parties set up resistance networks in the mid-1930s. These networks achieved little beyond fomenting unrest and initiating short-lived strikes. Carl Friedrich Goerdeler, who initially supported Hitler, changed his mind in 1936 and was later a participant in the July 20 plot. The Red Orchestra spy ring provided information to the Allies about Nazi war crimes, helped orchestrate escapes from Germany, and distributed leaflets. The group was detected by the Gestapo and more than 50 members were tried and executed in 1942. Communist and Social Democratic resistance groups resumed activity in late 1942, but were unable to achieve much beyond distributing leaflets. The two groups saw themselves as potential rival parties in post-war Germany, and for the most part did not co-ordinate their activities. The White Rose resistance group was primarily active in 1942–43, and many of its members were arrested or executed, with the final arrests taking place in 1944. Another civilian resistance group, the Kreisau Circle, had some connections with the military conspirators, and many of its members were arrested after the failed 20 July plot. 
While civilian efforts had an impact on public opinion, the army was the only organisation with the capacity to overthrow the government. A major plot by men in the upper echelons of the military originated in 1938. They believed Britain would go to war over Hitler's planned invasion of Czechoslovakia, and Germany would lose. The plan was to overthrow Hitler or possibly assassinate him. Participants included Generaloberst Ludwig Beck, Generaloberst Walther von Brauchitsch, Generaloberst Franz Halder, Admiral Wilhelm Canaris, and Generalleutnant Erwin von Witzleben, who joined a conspiracy headed by Oberstleutnant Hans Oster and Major Helmuth Groscurth of the Abwehr. The planned coup was cancelled after the signing of the Munich Agreement in September 1938. Many of the same people were involved in a coup planned for 1940, but again the participants changed their minds and backed down, partly because of the popularity of the regime after the early victories in the war. Attempts to assassinate Hitler resumed in earnest in 1943, with Henning von Tresckow joining Oster's group and attempting to blow up Hitler's plane. Several more attempts followed before the failed 20 July 1944 plot, which was at least partly motivated by the increasing prospect of a German defeat in the war. The plot, part of Operation Valkyrie, involved Claus von Stauffenberg planting a bomb in the conference room at Wolf's Lair at Rastenburg. Hitler, who narrowly survived, later ordered savage reprisals resulting in the execution of more than 4,900 people. The regime promoted the concept of "Volksgemeinschaft", a national German ethnic community. The goal was to build a classless society based on racial purity and the perceived need to prepare for warfare, conquest and a struggle against Marxism. The German Labour Front founded the "Kraft durch Freude" (KdF; Strength Through Joy) organisation in 1933. As well as taking control of tens of thousands of privately run recreational clubs, it offered highly regimented holidays and entertainment such as cruises, vacation destinations and concerts. The "Reichskulturkammer" (Reich Chamber of Culture) was organised under the control of the Propaganda Ministry in September 1933. Sub-chambers were set up to control aspects of cultural life such as film, radio, newspapers, fine arts, music, theatre and literature. Members of these professions were required to join their respective organisation. Jews and people considered politically unreliable were prevented from working in the arts, and many emigrated. Books and scripts had to be approved by the Propaganda Ministry prior to publication. Standards deteriorated as the regime sought to use cultural outlets exclusively as propaganda media. Radio became popular in Germany during the 1930s; over 70 percent of households owned a receiver by 1939, more than any other country. By July 1933, radio station staffs were purged of leftists and others deemed undesirable. Propaganda and speeches were typical radio fare immediately after the seizure of power, but as time went on Goebbels insisted that more music be played so that listeners would not turn to foreign broadcasters for entertainment. Newspapers, like other media, were controlled by the state; the Reich Press Chamber shut down or bought newspapers and publishing houses. By 1939, over two-thirds of the newspapers and magazines were directly owned by the Propaganda Ministry. 
The Nazi Party daily newspaper, the "Völkischer Beobachter" ("Ethnic Observer"), was edited by Rosenberg, who also wrote "The Myth of the Twentieth Century", a book of racial theories espousing Nordic superiority. Goebbels controlled the wire services and insisted that all newspapers in Germany only publish content favourable to the regime. Under Goebbels, the Propaganda Ministry issued two dozen directives every week on exactly what news should be published and what angles to use; the typical newspaper followed the directives closely, especially regarding what to omit. Newspaper readership plummeted, partly because of the decreased quality of the content and partly because of the surge in popularity of radio. Propaganda became less effective towards the end of the war, as people were able to obtain information outside of official channels. Authors of books left the country in droves and some wrote material critical of the regime while in exile. Goebbels recommended that the remaining authors concentrate on books themed on Germanic myths and the concept of blood and soil. By the end of 1933, over a thousand books—most of them by Jewish authors or featuring Jewish characters—had been banned by the Nazi regime. Nazi book burnings took place; nineteen such events were held on the night of 10 May 1933. Tens of thousands of books from dozens of figures, including Albert Einstein, Sigmund Freud, Helen Keller, Alfred Kerr, Marcel Proust, Erich Maria Remarque, Upton Sinclair, Jakob Wassermann, H. G. Wells, and Émile Zola were publicly burned. Pacifist works, and literature espousing liberal, democratic values were targeted for destruction, as well as any writings supporting the Weimar Republic or those written by Jewish authors. Hitler took a personal interest in architecture and worked closely with state architects Paul Troost and Albert Speer to create public buildings in a neoclassical style based on Roman architecture. Speer constructed imposing structures such as the Nazi party rally grounds in Nuremberg and a new Reich Chancellery building in Berlin. Hitler's plans for rebuilding Berlin included a gigantic dome based on the Pantheon in Rome and a triumphal arch more than double the height of the Arc de Triomphe in Paris. Neither structure was built. Hitler's belief that abstract, Dadaist, expressionist and modern art were decadent became the basis for policy. Many art museum directors lost their posts in 1933 and were replaced by party members. Some 6,500 modern works of art were removed from museums and replaced with works chosen by a Nazi jury. Exhibitions of the rejected pieces, under titles such as "Decadence in Art", were launched in sixteen different cities by 1935. The Degenerate Art Exhibition, organised by Goebbels, ran in Munich from July to November 1937. The exhibition proved wildly popular, attracting over two million visitors. Composer Richard Strauss was appointed president of the "Reichsmusikkammer" (Reich Music Chamber) on its founding in November 1933. As was the case with other art forms, the Nazis ostracised musicians who were deemed racially unacceptable and for the most part disapproved of music that was too modern or atonal. Jazz was considered especially inappropriate and foreign jazz musicians left the country or were expelled. Hitler favoured the music of Richard Wagner, especially pieces based on Germanic myths and heroic stories, and attended the Bayreuth Festival each year from 1933 to 1942. 
Movies were popular in Germany in the 1930s and 1940s, with admissions of over a billion people in 1942, 1943 and 1944. By 1934, German regulations restricting currency exports made it impossible for US film makers to take their profits back to America, so the major film studios closed their German branches. Exports of German films plummeted, as their antisemitic content made them impossible to show in other countries. The two largest film companies, Universum Film AG and Tobis, were purchased by the Propaganda Ministry, which by 1939 was producing most German films. The productions were not always overtly propagandistic, but generally had a political subtext and followed party lines regarding themes and content. Scripts were pre-censored. Leni Riefenstahl's "Triumph of the Will" (1935)—documenting the 1934 Nuremberg Rally—and "Olympia" (1938)—covering the 1936 Summer Olympics—pioneered techniques of camera movement and editing that influenced later films. New techniques such as telephoto lenses and cameras mounted on tracks were employed. Both films remain controversial, as their aesthetic merit is inseparable from their propagandising of National Socialist ideals. The Allied powers organised war crimes trials, beginning with the Nuremberg trials, held from November 1945 to October 1946, of 23 top Nazi officials. They were charged with four counts—conspiracy to commit crimes, crimes against peace, war crimes and crimes against humanity—in violation of international laws governing warfare. All but three of the defendants were found guilty and twelve were sentenced to death. Twelve Subsequent Nuremberg trials of 184 defendants were held between 1946 and 1949. Between 1946 and 1949, the Allies investigated 3,887 cases, of which 489 were brought to trial. The result was convictions of 1,426 people; 297 of these were sentenced to death and 279 to life in prison, with the remainder receiving lesser sentences. About 65 percent of the death sentences were carried out. Poland was more active than other nations in investigating war crimes, for example prosecuting 673 of the total 789 Auschwitz staff brought to trial. The political programme espoused by Hitler and the Nazis brought about a world war, leaving behind a devastated and impoverished Europe. Germany itself suffered wholesale destruction, characterised as "Stunde Null" (Zero Hour). The number of civilians killed during the Second World War was unprecedented in the history of warfare. As a result, Nazi ideology and the actions taken by the regime are almost universally regarded as gravely immoral. Historians, philosophers, and politicians often use the word "evil" to describe Hitler and the Nazi regime. Interest in Nazi Germany continues in the media and the academic world. While Evans remarks that the era "exerts an almost universal appeal because its murderous racism stands as a warning to the whole of humanity", young neo-Nazis enjoy the shock value that the use of Nazi symbols or slogans provides. The display or use of Nazi symbolism such as flags, swastikas, or greetings is illegal in Germany and Austria. The process of denazification, which was initiated by the Allies as a way to remove Nazi Party members from positions of influence, was only partially successful, as the need for experts in such fields as medicine and engineering was too great. However, expression of Nazi views was frowned upon, and those who expressed such views were frequently dismissed from their jobs. 
From the immediate post-war period through the 1950s, people avoided talking about the Nazi regime or their own wartime experiences. While virtually every family that suffered losses during the war has a story to tell, Germans kept quiet about their experiences and felt a sense of communal guilt, even if they were not directly involved in war crimes. The trial of Adolf Eichmann in 1961 and the broadcast of the television miniseries "Holocaust" in 1979 brought the process of "Vergangenheitsbewältigung" (coping with the past) to the forefront for many Germans. Once the study of Nazi Germany was introduced into the school curriculum in the 1970s, people began researching the experiences of their family members. Study of the era and a willingness to critically examine its mistakes has led to the development of a strong democracy in Germany, but with lingering undercurrents of antisemitism and neo-Nazi thought. In 2017 a Körber Foundation survey found that 40 percent of 14-year-olds in Germany did not know what Auschwitz was. The journalist Alan Posener attributed the country's "growing historical amnesia" in part to a failure by the German film and television industry to reflect the country's history accurately.
https://en.wikipedia.org/wiki?curid=21212
Naraoiidae Naraoiidae is a family of extinct, soft-shelled, trilobite-like arthropods that belongs to the order Nectaspida. Species included in the Naraoiidae are known from the second half of the Lower Cambrian to the end of the Upper Silurian. The total number of collection sites is limited and distributed over a vast period of time: Maotianshan Shale and Balang Formation (China), Burgess Shale and Bertie Formation (Canada), the Šárka Formation (Czech Republic), Emu Bay Shale (Australia), Idaho and Utah (USA). This is probably due to the rare occurrence of the right circumstances for soft tissue preservation, needed for these non-calcified exoskeletons. Naraoiids probably were deposit feeders ("Naraoia" and "Pseudonaraoia") or predators or scavengers ("Misszhouia"), living on the sea floor. The species of the family "Naraoiidae" are almost flat (dorso-ventrally). The upper (or dorsal) side of the body consists of a non-calcified transversely oval or semi-circular headshield (cephalon), and a circular to long oval tailshield (pygidium) equal to or longer than the cephalon, without any body segments in between. The body is narrowed at the articulation between cephalon and pygidium. The antennae are long and many-segmented. There are no eyes. The 17 to 25 pairs of legs have two branches on a common basis, like those of trilobites. The outer (dorsal) branches of the limbs (exopods) have flattened side branches (setae) on the shaft (probably acting as gills). The inner branches (or endopods) are composed of 6 or 7 segments (or podomeres). Naraoiidae lack thoracic segments (or tergites), while the species of the sister family Liwiidae have between 3 and 6 tergites. The taxonomic placement of the Naraoiidae was long debated, until detailed appendages were uncovered showing that "N. compacta" shares biramous legs of very comparable anatomy with trilobites. Some debate continues as to whether the parent taxon Nectaspida should be included in the Trilobita or is better placed as a sister group.
https://en.wikipedia.org/wiki?curid=21214
Northwest Passage The Northwest Passage (NWP) is the sea route to the Pacific Ocean through the Arctic Ocean, along the northern coast of North America via waterways through the Canadian Arctic Archipelago. The eastern route along the Arctic coasts of Norway and Siberia is accordingly called the Northeast Passage (NEP). The various islands of the archipelago are separated from one another and from the Canadian mainland by a series of Arctic waterways collectively known as the Northwest Passages or Northwestern Passages. For centuries, European explorers sought a navigable passage as a possible trade route to Asia. An ice-bound northern route was discovered in 1850 by the Irish explorer Robert McClure; it was through a more southerly opening in an area explored by the Scotsman John Rae in 1854 that Norwegian Roald Amundsen made the first complete passage in 1903–1906. Until 2009, the Arctic pack ice prevented regular marine shipping throughout most of the year. Arctic sea ice decline has since rendered the waterways more navigable. The contested sovereignty claims over the waters may complicate future shipping through the region: the Canadian government maintains that the Northwestern Passages are part of Canadian Internal Waters, but the United States and various European countries claim that they are an international strait and transit passage, allowing free and unencumbered passage. If, as has been claimed, parts of the eastern end of the Passage are too shallow for deep-draught vessels, the route's viability as a Euro-Asian shipping route is reduced. In 2016 a Chinese shipping line expressed a desire to make regular cargo voyages through the passage to the eastern United States and Europe, after a successful passage by "Nordic Orion" of 73,500 tonnes deadweight tonnage in September 2013. Fully loaded, "Nordic Orion" sat too deep in the water to sail through the Panama Canal. Before the Little Ice Age (late Middle Ages to the 19th century), Norwegian Vikings sailed as far north and west as Ellesmere Island, Skraeling Island and Ruin Island for hunting expeditions and trading with the Inuit and people of the Dorset culture who already inhabited the region. Between the end of the 15th century and the 20th century, colonial powers from Europe dispatched explorers in an attempt to discover a commercial sea route north and west around North America. The Northwest Passage represented a new route to the established trading nations of Asia. England called the hypothetical northern route the "Northwest Passage". The desire to establish such a route motivated much of the European exploration of both coasts of North America, also known as the New World. When it became apparent that there was no route through the heart of the continent, attention turned to the possibility of a passage through northern waters. There was a lack of scientific knowledge about conditions; for instance, some people believed that seawater was incapable of freezing. (As late as the mid-18th century, Captain James Cook had reported that Antarctic icebergs had yielded fresh water, seemingly confirming the hypothesis.) Explorers thought that an open water route close to the North Pole must exist. The belief that a route lay to the far north persisted for several centuries and led to numerous expeditions into the Arctic. Many ended in disaster, including that by Sir John Franklin in 1845. While searching for him, the McClure Arctic Expedition discovered the Northwest Passage in 1850. 
In 1906, the Norwegian explorer Roald Amundsen first successfully completed a passage from Greenland to Alaska in the sloop "Gjøa". Since that date, several fortified ships have made the journey. From east to west, the direction of most early exploration attempts, expeditions entered the passage from the Atlantic Ocean via the Davis Strait and through Baffin Bay, both of which are in Canada. Five to seven routes have been taken through the Canadian Arctic Archipelago, via the McClure Strait, Dease Strait, and the Prince of Wales Strait, but not all of them are suitable for larger ships. From there, ships passed through the Beaufort Sea, Chukchi Sea, and Bering Strait (separating Russia and Alaska) into the Pacific Ocean. In the 21st century, major changes to the ice pack due to climate change have stirred speculation that the passage may become clear enough of ice to permit safe commercial shipping for at least part of the year. On August 21, 2007, the Northwest Passage became open to ships without the need of an icebreaker. According to Nalan Koc of the Norwegian Polar Institute, this was the first time the Passage had been clear since they began keeping records in 1972. The Northwest Passage opened again on August 25, 2008. It is usually reported in mainstream media that ocean thawing will open up the Northwest Passage (and the Northern Sea Route) for various kinds of ships, making it possible to sail around the Arctic ice cap and possibly cutting thousands of miles off shipping routes. Warning that the NASA satellite images indicated the Arctic may have entered a "death spiral" caused by climate change, Professor Mark Serreze, a sea ice specialist at the U.S. National Snow and Ice Data Center (NSIDC), said: "The passages are open. It's a historic event. We are going to see this more and more as the years go by." On the other hand, some thick sections of ice will remain hard to melt in the shorter term. Such drifting and large chunks of ice, especially in springtime, can be problematic as they can clog entire straits or severely damage a ship's hull. Cargo routes may therefore be slower and uncertain, depending on prevailing conditions and the ability to predict them. Because much containerized traffic operates in a just-in-time mode (which does not tolerate delays well), and because of the relative isolation of the passage (which impedes shipping companies from optimizing their operations by grouping multiple stopovers on the same itinerary), the Northwest Passage and other Arctic routes are not always seen as promising shipping lanes by industry insiders, at least for the time being. The uncertainty related to physical damage to ships is also thought to translate into higher insurance premiums, especially because of the technical challenges posed by Arctic navigation (as of 2014, only 12 percent of Canada's Arctic waters have been charted to modern standards). The Beluga group of Bremen, Germany, sent the first Western commercial vessels through the Northern Sea Route (Northeast Passage) in 2009. Canada's Prime Minister Stephen Harper announced that ships entering the Northwest Passage should first report to his government. The first commercial cargo ship to sail through the Northwest Passage was SS "Manhattan", in August 1969; at 115,000 deadweight tonnage, she was the largest commercial vessel ever to navigate the Passage. The largest passenger ship to navigate the Northwest Passage was the cruise liner "Crystal Serenity", of gross tonnage 69,000. 
Starting on August 10, 2016, the ship sailed from Vancouver to New York City with 1,500 passengers and crew, taking 28 days. In 2018, two of the freighters leaving Baffinland's port in the Milne Inlet, on Baffin Island's north shore, were bound for ports in Asia. Those freighters did not sail west through the remainder of the Northwest Passage; instead, they sailed east, rounded the tip of Greenland, and transited Russia's Northern Sea Route. The Northwest Passage includes three sections. Many attempts were made to find a salt water exit west from Hudson Bay, but the Fury and Hecla Strait in the far north is blocked by ice. The eastern entrance and main axis of the Northwest Passage, the Parry Channel, was found in 1819. The approach from the west through Bering Strait is impractical because of the need to sail around ice near Point Barrow. East of Point Barrow the coast is fairly clear in summer. This area was mapped in pieces by overland expeditions in 1821–1839. This leaves the large rectangle north of the coast, south of Parry Channel and east of Baffin Island. This area was mostly mapped in 1848–1854 by ships looking for Franklin's lost expedition. The first crossing was made by Amundsen in 1903–1906. He used a small ship and hugged the coast. The International Hydrographic Organization formally defines the limits of the Northwestern Passages. As a result of their westward explorations and their settlement of Greenland, the Vikings sailed as far north and west as Ellesmere Island and Skraeling Island for hunting expeditions and trading with Inuit groups. The subsequent arrival of the Little Ice Age is thought to have been one of the reasons that European seafaring into the Northwest Passage ceased until the late 15th century. In 1539, Hernán Cortés commissioned Francisco de Ulloa to sail along the Baja California Peninsula on the western coast of North America. Ulloa concluded that the Gulf of California was the southernmost section of a strait supposedly linking the Pacific with the Gulf of Saint Lawrence. His voyage perpetuated the notion of the Island of California and saw the beginning of a search for the Strait of Anián. The strait probably took its name from Ania, a Chinese province mentioned in a 1559 edition of Marco Polo's book; it first appears on a map issued by Italian cartographer Giacomo Gastaldi about 1562. Five years later Bolognino Zaltieri issued a map showing a narrow and crooked Strait of Anian separating Asia from the Americas. The strait grew in European imagination as an easy sea lane linking Europe with the residence of Khagan (the Great Khan) in Cathay (northern China). Cartographers and seamen tried to demonstrate its reality. Sir Francis Drake sought the western entrance in 1579. The Greek pilot Juan de Fuca, sailing from Acapulco (in Mexico) under the flag of the Spanish crown, claimed he had sailed the strait from the Pacific to the North Sea and back in 1592. The Spaniard Bartholomew de Fonte claimed to have sailed from Hudson Bay to the Pacific via the strait in 1640. The first recorded attempt to discover the Northwest Passage was the east-west voyage of John Cabot in 1497, sent by Henry VII in search of a direct route to the Orient. In 1524, Charles V sent Estêvão Gomes to find a northern Atlantic passage to the Spice Islands. An English expedition was launched in 1576 by Martin Frobisher, who took three trips west to what is now the Canadian Arctic in order to find the passage. Frobisher Bay, which he first charted, is named after him. 
As part of another expedition, in July 1583 Sir Humphrey Gilbert, who had written a treatise on the discovery of the passage and was a backer of Frobisher, claimed the territory of Newfoundland for the English crown. On August 8, 1585, the English explorer John Davis entered Cumberland Sound, Baffin Island. The major rivers on the east coast were also explored in case they could lead to a transcontinental passage. Jacques Cartier's explorations of the Saint Lawrence River in 1535 were initiated in hope of finding a way through the continent. Cartier became persuaded that the St. Lawrence was the Passage; when he found the way blocked by rapids at what is now Montreal, he was so certain that these rapids were all that was keeping him from China (in French, "la Chine") that he named the rapids for China. Samuel de Champlain renamed them Sault Saint-Louis in 1611, but the name was changed to Lachine Rapids in the mid-19th century. In 1602, George Weymouth, sailing in "Discovery", became the first European to explore what would later be called Hudson Strait. Weymouth's expedition to find the Northwest Passage was funded jointly by the British East India Company and the Muscovy Company. "Discovery" was the same ship later used by Henry Hudson on his final voyage. John Knight, employed by the British East India Company and the Muscovy Company, set out in 1606 to follow up on Weymouth's discoveries and find the Northwest Passage. After his ship ran aground and was nearly crushed by ice, Knight disappeared while searching for a better anchorage. In 1609, Henry Hudson sailed up what is now called the Hudson River in search of the Passage; encouraged by the saltiness of the water in the estuary, he reached present-day Albany, New York, before giving up. On September 14, 1609, the explorer Henry Hudson entered the Tappan Zee while sailing upstream from New York Harbor. At first, Hudson believed the widening of the river indicated that he had found the Northwest Passage. He proceeded upstream as far as present-day Troy before concluding that no such strait existed there. He later explored the Arctic and Hudson Bay. In 1611, while in James Bay, Hudson's crew mutinied. They set Hudson and his teenage son John, along with seven sick, infirm, or loyal crewmen, adrift in a small open boat. He was never seen again. Cree oral legend reports that the survivors lived and traveled with the Cree for more than a year. A mission was sent out in 1612, again in "Discovery", commanded by Sir Thomas Button to find Henry Hudson and continue through the Northwest Passage. After failing to find Hudson, and exploring the west coast of Hudson Bay, Button returned home due to illness in the crew. In 1614, William Gibbons attempted to find the Passage, but was turned back by ice. The next year, 1615, Robert Bylot, a survivor of Hudson's crew, returned to Hudson Strait in "Discovery", but was turned back by ice. Bylot tried again in 1616 with William Baffin. They sailed as far as Lancaster Sound and reached 77°45′ North latitude, a record which stood for 236 years, before being blocked by ice. On May 9, 1619, under the auspices of King Christian IV of Denmark–Norway, Jens Munk set out with 65 men and the king's two ships, "Einhörningen" (Unicorn), a small frigate, and "Lamprenen" (Lamprey), a sloop, which were outfitted under his own supervision. His mission was to discover the Northwest Passage to the Indies and China. 
Munk penetrated Davis Strait as far north as 69°, found Frobisher Bay, and then spent almost a month fighting his way through Hudson Strait. In September 1619, he found the entrance to Hudson Bay and spent the winter near the mouth of the Churchill River. Cold, famine, and scurvy destroyed so many of his men that only he and two other men survived. With these men, he sailed for home with "Lamprey" on July 16, 1620, reaching Bergen, Norway, on September 20, 1620. René-Robert Cavelier, Sieur de La Salle, built the sailing ship "Le Griffon" in his quest to find the Northwest Passage via the upper Great Lakes. "Le Griffon" disappeared in 1679 on the return trip of her maiden voyage. In the spring of 1682, La Salle made his famous voyage down the Mississippi River to the Gulf of Mexico. La Salle led an expedition from France in 1684 to establish a French colony on the Gulf of Mexico. He was murdered by his followers in 1687. Henry Ellis, born in Ireland, was part of a company aiming to discover the Northwest Passage in May 1746. After a fire on board the ship was extinguished with difficulty, he sailed to Greenland, where he traded goods with the Inuit peoples on July 8, 1746. He crossed to Fort Nelson and spent the summer on the Hayes River. He renewed his efforts in June 1747, without success, before returning to England. In 1772, Samuel Hearne travelled overland northwest from Hudson Bay to the Arctic Ocean, thereby proving that there was no strait connecting Hudson Bay to the Pacific Ocean. Most Northwest Passage expeditions originated in Europe or on the east coast of North America, seeking to traverse the Passage in the westbound direction. Some progress was made in exploring the western reaches of the imagined passage. In 1728 Vitus Bering, a Danish Navy officer in Russian service, used the strait first discovered by Semyon Dezhnyov in 1648 but later credited to and named after Bering (the Bering Strait). He concluded that North America and Russia were separate land masses by sailing between them. In 1741, with Lieutenant Aleksei Chirikov, he explored the seas beyond Siberia, seeking further lands. While they were separated, Chirikov discovered several of the Aleutian Islands and Bering charted the Alaskan region. Bering's ship was wrecked off the Kamchatka Peninsula, and many of his crew were disabled by scurvy. The Spanish made several voyages to the northwest coast of North America during the late 18th century. Determining whether a Northwest Passage existed was one of the motives for their efforts. Among the voyages that involved careful searches for a Passage were the 1775 and 1779 voyages of Juan Francisco de la Bodega y Quadra. The journal of Francisco Antonio Mourelle, who served as Quadra's second in command in 1775, fell into English hands. It was translated and published in London, stimulating exploration. Captain James Cook made use of the journal during his explorations of the region. In 1791 Alessandro Malaspina sailed to Yakutat Bay, Alaska, which was rumoured to be a Passage. In 1790 and 1791 Francisco de Eliza led several exploring voyages into the Strait of Juan de Fuca, searching for a possible Northwest Passage and finding the Strait of Georgia. To fully explore this new inland sea, an expedition under Dionisio Alcalá Galiano was sent in 1792. He was explicitly ordered to explore all channels that might turn out to be a Northwest Passage. In 1776, Captain James Cook was dispatched by the Admiralty in Great Britain on an expedition to explore the Passage. 
A 1745 act, when extended in 1775, promised a £20,000 prize for whoever discovered the passage. Initially the Admiralty had wanted Charles Clerke to lead the expedition, with Cook (in retirement following his exploits in the Pacific) acting as a consultant. However, Cook had researched Bering's expeditions, and the Admiralty ultimately placed their faith in the veteran explorer to lead, with Clerke accompanying him. After journeying through the Pacific, to make an attempt from the west, Cook began at Nootka Sound in April 1778. He headed north along the coastline, charting the lands and searching for the regions sailed by the Russians 40 years previously. The Admiralty's orders had commanded the expedition to ignore all inlets and rivers until they reached a latitude of 65°N. Cook, however, failed to make any progress in sighting a Northwestern Passage. Various officers on the expedition, including William Bligh, George Vancouver, and John Gore, thought the existence of a route was 'improbable'. Before reaching 65°N they found the coastline pushing them further south, but Gore convinced Cook to sail on into the Cook Inlet in the hope of finding the route. They continued to the limits of the Alaskan peninsula and the start of the chain of Aleutian Islands. Despite reaching 70°N, they encountered nothing but icebergs. From 1792 to 1794, the Vancouver Expedition (led by George Vancouver who had previously accompanied Cook) surveyed in detail all the passages from the Northwest Coast. He confirmed that there was no such passage south of the Bering Strait. This conclusion was supported by the evidence of Alexander MacKenzie, who explored the Arctic and Pacific Oceans in 1793. In the first half of the 19th century, some parts of the Northwest Passage (north of the Bering Strait) were explored separately by many expeditions, including those by John Ross, Elisha Kent Kane, William Edward Parry, and James Clark Ross; overland expeditions were also led by John Franklin, George Back, Peter Warren Dease, Thomas Simpson, and John Rae. In 1826 Frederick William Beechey explored the north coast of Alaska, discovering Point Barrow. Sir Robert McClure was credited with the discovery of the Northwest Passage in 1851 when he looked across McClure Strait from Banks Island and viewed Melville Island. However, this strait was not navigable to ships at that time. The only usable route linking the entrances of Lancaster Sound and Dolphin and Union Strait was discovered by John Rae in 1854. In 1845, a lavishly equipped two-ship expedition led by Sir John Franklin sailed to the Canadian Arctic to chart the last unknown swaths of the Northwest Passage. Confidence was high, as they estimated there was less than remaining of unexplored Arctic mainland coast. When the ships failed to return, relief expeditions and search parties explored the Canadian Arctic, which resulted in a thorough charting of the region, along with a possible passage. Many artifacts from the expedition were found over the next century and a half, including notes that the ships were ice-locked in 1846 near King William Island, about halfway through the passage, and unable to break free. Records showed Franklin died in 1847 and Captain Francis Rawdon Moira Crozier took over command. In 1848 the expedition abandoned the two ships and its members tried to escape south across the tundra by sledge. Although some of the crew may have survived into the early 1850s, no evidence has ever been found of any survivors. 
In 1853 explorer John Rae was told by local Inuit about the disastrous fate of Franklin's expedition, but his reports were not welcomed in Britain. Starvation, exposure and scurvy all contributed to the men's deaths. In 1981 Owen Beattie, an anthropologist from the University of Alberta, examined remains from sites associated with the expedition. This led to further investigations and the examination of tissue and bone from the frozen bodies of three seamen, John Torrington, William Braine and John Hartnell, exhumed from the permafrost of Beechey Island. Laboratory tests revealed high concentrations of lead in all three (the expedition carried 8,000 tins of food sealed with a lead-based solder). Another researcher has suggested botulism caused deaths among crew members. New evidence, confirming reports first made by John Rae in 1854 based on Inuit accounts, has shown that the last of the crew resorted to cannibalism of deceased members in an effort to survive. During the search for Franklin, Commander Robert McClure and his crew in traversed the Northwest Passage from west to east in the years 1850 to 1854, partly by ship and partly by sledge. McClure started out from England in December 1849, sailed the Atlantic Ocean south to Cape Horn and entered the Pacific Ocean. He sailed the Pacific north and passed through the Bering Strait, turning east at that point and reaching Banks Island. McClure's ship was trapped in the ice for three winters near Banks Island, at the western end of Viscount Melville Sound. Finally McClure and his crew—who were by that time dying of starvation—were found by searchers who had travelled by sledge over the ice from a ship of Sir Edward Belcher's expedition. They rescued McClure and his crew, returning with them to Belcher's ships, which had entered the Sound from the east. McClure and his crew returned to England in 1854 on one of Belcher's ships. They were the first people known to circumnavigate the Americas and to discover and transit the Northwest Passage, albeit by ship and by sledge over the ice. (Both McClure and his ship were found by a party from HMS "Resolute", one of Belcher's ships, so his sledge journey was relatively short.) This was an astonishing feat for that day and age, and McClure was knighted and promoted in rank. (He was made rear-admiral in 1867.) Both he and his crew also shared £10,000 awarded them by the British Parliament. In July 2010 Canadian archaeologists found his ship, HMS "Investigator," fairly intact but sunk about below the surface. The expeditions by Franklin and McClure were in the tradition of British exploration: well-funded ship expeditions using modern technology, and usually including British Naval personnel. By contrast, John Rae was an employee of the Hudson's Bay Company, which operated a far-flung trade network and drove exploration of the Canadian North. They adopted a pragmatic approach and tended to be land-based. While Franklin and McClure tried to explore the passage by sea, Rae explored by land. He used dog sleds and techniques of surviving in the environment which he had learned from the native Inuit. The Franklin and McClure expeditions each employed hundreds of personnel and multiple ships. John Rae's expeditions included fewer than ten people and succeeded. Rae was also the explorer with the best safety record, having lost only one man in years of traversing Arctic lands. In 1854, Rae returned to the cities with information from the Inuit about the disastrous fate of the Franklin expedition. 
The first explorer to conquer the Northwest Passage solely by ship was the Norwegian explorer Roald Amundsen. In a three-year journey between 1903 and 1906, Amundsen explored the passage with a crew of six. Amundsen, who had sailed to escape creditors seeking to stop the expedition, completed the voyage in the converted 45 net register tonnage () herring boat "Gjøa". "Gjøa" was much smaller than vessels used by other Arctic expeditions and had a shallow draft. Amundsen intended to hug the shore, live off the limited resources of the land and sea through which he was to travel, and had determined that he needed to have a tiny crew to make this work. (Trying to support much larger crews had contributed to the catastrophic failure of John Franklin's expedition fifty years previously). The ship's shallow draft was intended to help her traverse the shoals of the Arctic straits. Amundsen set out from Kristiania (Oslo) in June 1903 and was west of the Boothia Peninsula by late September. "Gjøa" was put into a natural harbour on the south shore of King William Island; by October 3 she was iced in. There the expedition remained for nearly two years, with the expedition members learning from the local Inuit people and undertaking measurements to determine the location of the North Magnetic Pole. The harbour, now known as Gjoa Haven, later developed as the only permanent settlement on the island. After completing the Northwest Passage portion of this trip and having anchored near Herschel Island, Amundsen skied to the city of Eagle, Alaska. He sent a telegram announcing his success and skied the return to rejoin his companions. Although his chosen east–west route, via the Rae Strait, contained young ice and thus was navigable, some of the waterways were extremely shallow ( deep), making the route commercially impractical. The first traversal of the Northwest Passage via dog sled was accomplished by Greenlander Knud Rasmussen while on the Fifth Thule Expedition (1921–1924). Rasmussen and two Greenland Inuit travelled from the Atlantic to the Pacific over the course of 16 months via dog sled. Canadian Royal Canadian Mounted Police officer Henry Larsen was the second to sail the passage, crossing west to east, leaving Vancouver on June 23, 1940 and arriving at Halifax on October 11, 1942. More than once on this trip, he was uncertain whether , a Royal Canadian Mounted Police "ice-fortified" schooner, would survive the pressures of the sea ice. At one point, Larsen wondered "if we had come this far only to be crushed like a nut on a shoal and then buried by the ice." The ship and all but one of her crew survived the winter on Boothia Peninsula. Each of the men on the trip was awarded a medal by Canada's sovereign, King George VI, in recognition of this feat of Arctic navigation. Later in 1944, Larsen's return trip was far more swift than his first. He made the trip in 86 days to sail back from Halifax, Nova Scotia, to Vancouver, British Columbia. He set a record for traversing the route in a single season. The ship, after extensive upgrades, followed a more northerly, partially uncharted route. In 1954, completed the east-to-west transit, under the command of Captain O.C.S. Robertson, conducting hydrographic soundings along the route. She was the first warship (and the first deep draft ship) to transit the Northwest Passage and the first warship to circumnavigate North America. In 1956, HMCS "Labrador" again completed the east-to-west transit, this time under the command of Captain T.C. Pullen. 
On July 1, 1957, the United States Coast Guard cutter departed in company with and to search for a deep-draft channel through the Arctic Ocean and to collect hydrographic information. The US Coast Guard Squadron was escorted through Bellot Strait and the Eastern Arctic by HMCS "Labrador". Upon her return to Greenland waters, "Storis" became the first U.S.-registered vessel to circumnavigate North America. Shortly after her return in late 1957, she was reassigned to her new home port of Kodiak, Alaska. In 1960, completed the first submarine transit of the Northwest Passage, heading east-to-west. In 1969, SS "Manhattan" made the passage, accompanied by the Canadian icebreakers and . The U.S. Coast Guard icebreakers and also sailed in support of the expedition. "Manhattan" was a specially reinforced supertanker sent to test the viability of the passage for the transport of oil. While "Manhattan" succeeded, the route was deemed not to be cost-effective. The United States built the Alaska Pipeline instead. In June 1977, sailor Willy de Roos left Belgium to attempt the Northwest Passage in his steel yacht "Williwaw". He reached the Bering Strait in September and after a stopover in Victoria, British Columbia, went on to round Cape Horn and sail back to Belgium, thus being the first sailor to circumnavigate the Americas entirely by ship. In 1981 as part of the Transglobe Expedition, Ranulph Fiennes and Charles R. Burton completed the Northwest Passage. They left Tuktoyaktuk on July 26, 1981, in the open Boston Whaler and reached Tanquary Fiord on August 31, 1981. Their journey was the first open-boat transit from west to east and covered around , taking a route through Dolphin and Union Strait following the south coast of Victoria and King William islands, north to Resolute Bay via Franklin Strait and Peel Sound, around the south and east coasts of Devon Island, through Hell Gate and across Norwegian Bay to Eureka, Greely Bay and the head of Tanquary Fiord. Once they reached Tanquary Fiord, they had to trek via Lake Hazen to Alert before setting up their winter base camp. In 1984, the commercial passenger vessel (which sank in the Antarctic Ocean in 2007) became the first cruise ship to navigate the Northwest Passage. In July 1986, Jeff MacInnis and Mike Beedell set out on an catamaran called "Perception" on a 100-day sail, west to east, through the Northwest Passage. This pair was the first to sail the passage, although they had the benefit of doing so over a couple of summers. In July 1986, David Scott Cowper set out from England in a lifeboat named "Mabel El Holland", and survived three Arctic winters in the Northwest Passage before reaching the Bering Strait in August 1989. He continued around the world via the Cape of Good Hope to return to England on September 24, 1990. His was the first vessel to circumnavigate the world via the Northwest Passage. On July 1, 2000, the Royal Canadian Mounted Police patrol vessel , having assumed the name "St Roch II", departed Vancouver on a "Voyage of Rediscovery." "Nadon"s mission was to circumnavigate North America via the Northwest Passage and the Panama Canal, recreating the epic voyage of her predecessor, "St. Roch." The Voyage of Rediscovery was intended to raise awareness concerning "St. Roch" and kick off the fund-raising efforts necessary to ensure the continued preservation of "St. Roch". The voyage was organized by the Vancouver Maritime Museum and supported by a variety of corporate sponsors and agencies of the Canadian government. 
"Nadon" is an aluminum, catamaran-hulled, high-speed patrol vessel. To make the voyage possible, she was escorted and supported by the Canadian Coast Guard icebreaker . The Coast Guard vessel was chartered by the Voyage of Rediscovery and crewed by volunteers. Throughout the voyage, she provided a variety of necessary services, including provisions and spares, fuel and water, helicopter facilities, and ice escort; she also conducted oceanographic research during the voyage. The Voyage of Rediscovery was completed in five and a half months, with "Nadon" reaching Vancouver on December 16, 2000. On September 1, 2001, "Northabout", an aluminium sailboat with diesel engine, built and captained by Jarlath Cunnane, completed the Northwest Passage east-to-west from Ireland to the Bering Strait. The voyage from the Atlantic to the Pacific was completed in 24 days. Cunnane cruised in "Northabout" in Canada for two years before returning to Ireland in 2005 via the Northeast Passage; he completed the first east-to-west circumnavigation of the pole by a single sailboat. The Northeast Passage return along the coast of Russia was slower, starting in 2004, requiring an ice stop and winter over in Khatanga, Siberia. He returned to Ireland via the Norwegian coast in October 2005. On January 18, 2006, the Cruising Club of America awarded Jarlath Cunnane their Blue Water Medal, an award for "meritorious seamanship and adventure upon the sea displayed by amateur sailors of all nationalities." On July 18, 2003, a father-and-son team, Richard and Andrew Wood, with Zoe Birchenough, sailed the yacht "Norwegian Blue" into the Bering Strait. Two months later she sailed into the Davis Strait to become the first British yacht to transit the Northwest Passage from west to east. She also became the only British vessel to complete the Northwest Passage in one season, as well as the only British sailing yacht to return from there to British waters. In 2006, a scheduled cruise liner () successfully ran the Northwest Passage, helped by satellite images telling the location of sea ice. On May 19, 2007, a French sailor, Sébastien Roubinet, and one other crew member left Anchorage, Alaska, in "Babouche", a ice catamaran designed to sail on water and slide over ice. The goal was to navigate west to east through the Northwest Passage by sail only. Following a journey of more than , Roubinet reached Greenland on September 9, 2007, thereby completing the first Northwest Passage voyage made in one season without engine. In April 2009, planetary scientist Pascal Lee and a team of four on the Northwest Passage Drive Expedition drove the HMP "Okarian" Humvee rover a record-setting on sea-ice from Kugluktuk to Cambridge Bay, Nunavut, the longest distance driven on sea-ice in a road vehicle. The HMP "Okarian" was being ferried from the North American mainland to the Haughton–Mars Project (HMP) Research Station on Devon Island, where it would be used as a simulator of future pressurized rovers for astronauts on the Moon and Mars. The HMP "Okarian" was eventually flown from Cambridge Bay to Resolute Bay in May 2009, and then driven again on sea-ice by Lee and a team of five from Resolute to the West coast of Devon Island in May 2010. The HMP "Okarian" reached the HMP Research Station in July 2011. The Northwest Passage Drive Expedition is captured in the motion picture documentary film "Passage To Mars" (2016). 
In 2009, sea ice conditions were such that at least nine small vessels and two cruise ships completed the transit of the Northwest Passage. These trips included one by Eric Forsyth on board the Westsail sailboat "Fiona", a boat he built in the 1980s. Self-financed, Forsyth, a retired engineer from the Brookhaven National Laboratory, and winner of the Cruising Club of America's Blue Water Medal, sailed the Canadian Archipelago with sailor Joey Waits, airline captain Russ Roberts and carpenter David Wilson. After successfully sailing the Passage, the 77-year-old Forsyth completed the circumnavigation of North America, returning to his home port on Long Island, New York. Cameron Dueck and his crew aboard the 40-foot sailing yacht Silent Sound also transited in the summer of 2009. Their voyage began in Victoria, BC on June 6 and they arrived in Halifax on October 10. Dueck wrote a book about the voyage called The New Northwest Passage. On September 9, 2010, Bear Grylls and a team of five completed a point-to-point navigation between Pond Inlet and Tuktoyaktuk in the Northwest Territories on a rigid inflatable boat (RIB). The expedition drew attention to how the effects of global warming made this journey possible and raised funds for the Global Angels charity. On August 30, 2012 Sailing yacht , , an English SY, successfully completed the Northwest Passage in Nome, Alaska, while sailing a northern route never sailed by a sailing pleasure vessel before. After six cruising seasons in the Arctic (Greenland, Baffin Bay, Devon Island, Kane Basin, Lancaster Sound, Peel Sound, Regent Sound) and four seasons in the South (Antarctic Peninsula, Patagonia, Falkland Islands, South Georgia), SY "Billy Budd", owned by and under the command of an Italian sporting enthusiast, Mariacristina Rapisardi. Crewed by Marco Bonzanigo, five Italian friends, one Australian, one Dutch, one South African, and one New Zealander, it sailed through the Northwest Passage. The northernmost route was chosen. "Billy Budd" sailed through the Parry Channel, Viscount Melville Sound and Prince of Wales Strait, a channel long and wide which flows south into the Amundsen Gulf. During the passage "Billy Budd" – likely a first for a pleasure vessel – anchored in Winter Harbour in Melville Island, the very same site where almost 200 years ago Sir William Parry was blocked by ice and forced to winter. On August 29, 2012, the Swedish yacht "Belzebub II," a fibreglass cutter captained by Canadian Nicolas Peissel, Swede Edvin Buregren and Morgan Peissel, became the first sailboat in history to sail through McClure Strait, part of a journey of achieving the most northerly Northwest Passage. "Belzebub II" departed Newfoundland following the coast of Greenland to Qaanaaq before tracking the sea ice to Grise Fiord, Canada's most northern community. From there the team continued through Parry Channel into McClure Strait and the Beaufort Sea, tracking the highest latitudes of 2012's record sea ice depletion before completing their Northwest Passage September 14, 2012. The expedition received extensive media coverage, including recognition by former U.S. Vice President Al Gore. The accomplishment is recorded in the Polar Scott Institute's record of Northwest Passage Transits and recognized by the Explorers Club and the Royal Canadian Geographic Society. 
At 18:45 GMT on September 18, 2012, "Best Explorer", a steel cutter , skipper Nanni Acquarone, passing between the two Diomedes, was the first Italian sailboat to complete the Northwest Passage along the classical Amundsen route. Twenty-two Italian amateur sailors took part of the trip, in eight legs from Tromsø, Norway, to King Cove, Alaska, totalling . Later in 2019 "Best Explorer" skppered again by Nanni Acquarone became the first Italian sailboat to circumnavigate the Arctic sailing north of Siberia from Petropavlovsk-Kamchatsky to Tromsø and the second ever to do it clockwise. Setting sail from Nome, Alaska, on August 18, 2012, and reaching Nuuk, Greenland, on September 12, 2012, became the largest passenger vessel to transit the Northwest Passage. The ship, carrying 481 passengers, for 26 days and at sea, followed in the path of Captain Roald Amundsen. "The World" transit of the Northwest Passage was documented by "National Geographic" photographer Raul Touzon. In September 2013, became the first commercial bulk carrier to transit the Northwest Passage. She was carrying a cargo of of coking coal from Port Metro Vancouver, Canada, to the Finnish Port of Pori, more than would have been possible via the traditional Panama Canal route. The Northwest Passage shortened the distance by compared to traditional route via the Panama Canal. In August and September 2016 a cruise ship was sailed through the Northwest Passage. The ship "Crystal Serenity", (with 1,000 passengers, and 600 crew) left Seward, Alaska, used Amundsen's route and reached New York on September 17. Tickets for the 32-day trip started at $22,000 and were quickly sold out. The trip was repeated in 2017. In 2017 33 vessels made a complete transit, breaking the prior record of 20 in 2012. In September 2018, sailing yacht "Infinity" (a 36·6 m ketch) and her 22 person crew successfully sailed through the Northwest Passage. This was part of their mission to plant the Flag of Planet Earth on the remaining Arctic ice. Supported by the initiative, EarthToday, this voyage was a symbol for future global collaboration against climate change. The Flag of Planet Earth was planted on September 21, 2018, the International Day of Peace. The Canadian government classifies the waters of the Northwest Passage in the Canadian Arctic Archipelago, as internal waters of Canada as per the United Nations Convention on the Law of the Sea and by the precedent in the drawing of baselines for other archipelagos, giving Canada the right to bar transit through these waters. Some maritime nations, including the United States and some of the European Union, claim these waters to be an international strait, where foreign vessels have the right of "transit passage." In such a regime, Canada would have the right to enact fishing and environmental regulation, and fiscal and smuggling laws, as well as laws intended for the safety of shipping, but not the right to close the passage. If the passage's deep waters become completely ice-free in summer months, they will be particularly enticing for supertankers that are too big to pass through the Panama Canal and must otherwise navigate around the tip of South America. The dispute between Canada and the United States arose in 1969 with the trip of the U.S. oil tanker through the Arctic Archipelago. The prospect of more American traffic headed to the Prudhoe Bay Oil Field made the Canadian government realize that political action was required. In 1985, the U.S. 
Coast Guard icebreaker passed through from Greenland to Alaska; the ship submitted to inspection by the Canadian Coast Guard before passing through, but the event infuriated the Canadian public and resulted in a diplomatic incident. The United States government, when asked by a Canadian reporter, indicated that they did not ask for permission as they insist that the waters were an international strait. The Canadian government issued a declaration in 1986 reaffirming Canadian rights to the waters. The United States refused to recognize the Canadian claim. In 1988 the governments of Canada and the United States signed an agreement, "Arctic Cooperation," that resolved the practical issue without solving the sovereignty questions. Under the law of the sea, ships engaged in transit passage are not permitted to engage in research. The agreement states that all U.S. Coast Guard and Navy vessels are engaged in research, and so would require permission from the Government of Canada to pass through. However, in late 2005, it was reported that U.S. nuclear submarines had travelled unannounced through Canadian Arctic waters, breaking the "Arctic Cooperation" agreement and sparking outrage in Canada. In his first news conference after the 2006 federal election, Prime Minister-designate Stephen Harper contested an earlier statement made by the U.S. ambassador that Arctic waters were international, stating the Canadian government's intention to enforce its sovereignty there. The allegations arose after the U.S. Navy released photographs of surfaced at the North Pole. On April 9, 2006, Canada's Joint Task Force (North) declared that the Canadian Forces will no longer refer to the region as the Northwest Passage, but as the Canadian Internal Waters. The declaration came after the successful completion of Operation Nunalivut (Inuktitut for "the land is ours"), which was an expedition into the region by five military patrols. In 2006 a report prepared by the staff of the Parliamentary Information and Research Service of Canada suggested that because of the September 11 attacks, the United States might be less interested in pursuing the international waterways claim in the interests of having a more secure North American perimeter. This report was based on an earlier paper, "The Northwest Passage Shipping Channel: Is Canada's Sovereignty Really Floating Away?" by Andrea Charron, given to the 2004 Canadian Defence and Foreign Affairs Institute Symposium. Later in 2006 former United States Ambassador to Canada, Paul Cellucci agreed with this position; however, the succeeding ambassador, David Wilkins, stated that the Northwest Passage was in international waters. On July 9, 2007, Prime Minister Harper announced the establishment of a deep-water port in the far North. In the press release Harper said, "Canada has a choice when it comes to defending our sovereignty over the Arctic. We either use it or lose it. And make no mistake, this Government intends to use it. Because Canada's Arctic is central to our national identity as a northern nation. It is part of our history. And it represents the tremendous potential of our future." On July 10, 2007, Rear Admiral Timothy McGee of the U.S. Navy and Rear Admiral Brian Salerno of the U.S. Coast Guard announced that the United States would be increasing its ability to patrol the Arctic. In June 2019, the U.S. 
State Department spokesperson Morgan Ortagus said the United States "believes that Canada's claim of the Northwest Passage are internal waters of Canada as inconsistent with international law" despite historical precedent regarding archipelago baselines. In the summer of 2000, two Canadian ships took advantage of thinning summer ice cover on the Arctic Ocean to make the crossing. It is thought that climate change is likely to open the passage for increasing periods, making it potentially attractive as a major shipping route. However, the passage through the Arctic Ocean would require significant investment in escort vessels and staging ports, and it would remain seasonal. Therefore, the Canadian commercial marine transport industry does not anticipate the route as a viable alternative to the Panama Canal within the next 10 to 20 years (as of 2004). On September 14, 2007, the European Space Agency stated that ice loss that year had opened up the historically impassable passage, setting a new low of ice cover as seen in satellite measurements which went back to 1978. According to the Arctic Climate Impact Assessment, the latter part of the 20th century and the start of the 21st had seen marked shrinkage of ice cover. The extreme loss in 2007 rendered the passage "fully navigable." However, the ESA study was based only on analysis of satellite images and could in practice not confirm anything about the actual navigation of the waters of the passage. ESA suggested the passage would be navigable "during reduced ice cover by multi-year ice pack" (namely sea ice surviving one or more summers) where previously any traverse of the route had to be undertaken during favourable seasonable climatic conditions or by specialist vessels or expeditions. The agency's report speculated that the conditions prevalent in 2007 had shown the passage may "open" sooner than expected. An expedition in May 2008 reported that the passage was not yet continuously navigable even by an icebreaker and not yet ice-free. Scientists at a meeting of the American Geophysical Union on December 13, 2007, revealed that NASA satellites observing the western Arctic showed a 16% decrease in cloud coverage during the summer of 2007 compared to 2006. This would have the effect of allowing more sunlight to penetrate Earth's atmosphere and warm the Arctic Ocean waters, thus melting sea ice and contributing to the opening the Northwest Passage. In 2006 the cruise liner MS "Bremen" successfully ran the Northwest Passage, helped by satellite images telling where sea ice was. On November 28, 2008, the Canadian Broadcasting Corporation reported that the Canadian Coast Guard confirmed the first commercial ship sailed through the Northwest Passage. In September 2008, , owned by Desgagnés Transarctik Inc. and, along with the Arctic Cooperative, is part of Nunavut Sealift and Supply Incorporated (NSSI), transported cargo from Montreal to the hamlets of Cambridge Bay, Kugluktuk, Gjoa Haven, and Taloyoak. A member of the crew is reported to have claimed that "there was no ice whatsoever." Shipping from the east was to resume in the fall of 2009. Although sealift is an annual feature of the Canadian Arctic this is the first time that the western communities have been serviced from the east. The western portion of the Canadian Arctic is normally supplied by Northern Transportation Company Limited (NTCL) from Hay River, and the eastern portion by NNSI and NTCL from Churchill and Montreal. 
In January 2010, the ongoing reduction in the Arctic sea ice led telecoms cable specialist Kodiak-Kenai Cable to propose the laying of a fiberoptic cable connecting London and Tokyo, by way of the Northwest Passage, saying the proposed system would nearly cut in half the time it takes to send messages from the United Kingdom to Japan. In September 2013, the first large ice strengthened sea freighter, "Nordic Orion", used the passage. In 2016 a new record was set when the cruise ship "Crystal Serenity" transited with 1,700 passengers and crew. "Crystal Serenity" is the largest cruise ship to navigate the Northwest Passage. Starting on August 10, 2016, the ship sailed from Vancouver to New York City, taking 28 days for the journey. Scientists believe that reduced sea ice in the Northwest Passage has permitted some new species to migrate across the Arctic Ocean. The gray whale "Eschrichtius robustus" has not been seen in the Atlantic since it was hunted to extinction there in the 18th century, but in May 2010, one such whale turned up in the Mediterranean. Scientists speculated the whale had followed its food sources through the Northwest Passage and simply kept on going. The plankton species "Neodenticula seminae" had not been recorded in the Atlantic for 800,000 years. Over the past few years, however, it has become increasingly prevalent there. Again, scientists believe that it got there through the reopened Northwest Passage. In August 2010, two bowhead whales from West Greenland and Alaska respectively, entered the Northwest Passage from opposite directions and spent approximately 10 days in the same area.
https://en.wikipedia.org/wiki?curid=21215
Nevada Nevada ( or , ) is a state in the Western United States. It is bordered by Oregon to the northwest, Idaho to the northeast, California to the west, Arizona to the southeast, and Utah to the east. Nevada is the 7th most extensive, the 32nd most populous, but the 9th least densely populated of the U.S. states. Nearly three-quarters of Nevada's people live in Clark County, which contains the Las Vegas–Paradise metropolitan area, including three of the state's four largest incorporated cities. Nevada's capital is Carson City. Nevada is officially known as the "Silver State" because of the importance of silver to its history and economy. It is also known as the "Battle Born State" because it achieved statehood during the Civil War (the words "Battle Born" also appear on the state flag); as the "Sagebrush State", for the native plant of the same name; and as the "Sage-hen State". Nevada is largely desert and semi-arid, much of it within the Great Basin. Areas south of the Great Basin are within the Mojave Desert, while Lake Tahoe and the Sierra Nevada lie on the western edge. About 86% of the state's land is managed by various jurisdictions of the U.S. federal government, both civilian and military. Before European contact — and still today, American Indians of the Paiute, Shoshone, and Washoe tribes inhabited the land that is now Nevada. The first Europeans to explore the region were Spanish. They called the region "Nevada" (snowy) because of the snow which covered the mountains in winter. The area formed part of the Viceroyalty of New Spain, and became part of Mexico when it gained independence in 1821. The United States annexed the area in 1848 after its victory in the Mexican–American War, and it was incorporated as part of Utah Territory in 1850. The discovery of silver at the Comstock Lode in 1859 led to a population boom that became an impetus to the creation of Nevada Territory out of western Utah Territory in 1861. Nevada became the 36th state on October 31, 1864, as the second of two states added to the Union during the Civil War (the first being West Virginia). Nevada has a reputation for its libertarian laws. In 1940, with a population of just over 110,000 people, Nevada was by far the least-populated state, with less than half the population of the next least-populated state. However, legalized gambling and lenient marriage and divorce laws transformed Nevada into a major tourist destination in the 20th century. Nevada is the only U.S. state where prostitution is legal, though it is illegal in its most populated regions — Clark County (Las Vegas), Washoe County (Reno) and Carson City (which, as an independent city, is not within the boundaries of any county). The tourism industry remains Nevada's largest employer, with mining continuing as a substantial sector of the economy: Nevada is the fourth-largest producer of gold in the world. The name "Nevada" comes from the Spanish "nevada" , meaning "snow-covered". Nevadans pronounce the second syllable with the "a" as in "trap" () while some people from outside of the state can pronounce it with the "a" as in "palm" (). Although the latter pronunciation is closer to the Spanish pronunciation, it is not the pronunciation used by Nevadans. State Assemblyman Harry Mortenson proposed a bill to recognize the alternate (quasi-Spanish) pronunciation of Nevada, though the bill was not supported by most legislators and never received a vote. The Nevadan pronunciation is the one used by the state legislature. 
At one time, the state's official tourism organization, TravelNevada, stylized the name of the state as "Nevăda", with a breve over the "a" indicating the locally preferred pronunciation which was also available as a license plate design until 2007. Nevada is almost entirely within the Basin and Range Province and is broken up by many north-south mountain ranges. Most of these ranges have endorheic valleys between them, which belies the image portrayed by the term Great Basin. Much of the northern part of the state is within the Great Basin, a mild desert that experiences hot temperatures in the summer and cold temperatures in the winter. Occasionally, moisture from the Arizona Monsoon will cause summer thunderstorms; Pacific storms may blanket the area with snow. The state's highest recorded temperature was in Laughlin (elevation of ) on June 29, 1994. The coldest recorded temperature was set in San Jacinto in 1972, in the northeastern portion of the state. The Humboldt River crosses the state from east to west across the northern part of the state, draining into the Humboldt Sink near Lovelock. Several rivers drain from the Sierra Nevada eastward, including the Walker, Truckee, and Carson rivers. All of these rivers are endorheic basins, ending in Walker Lake, Pyramid Lake, and the Carson Sink, respectively. However, not all of Nevada is within the Great Basin. Tributaries of the Snake River drain the far north, while the Colorado River, which also forms much of the boundary with Arizona, drains much of southern Nevada. The mountain ranges, some of which have peaks above , harbor lush forests high above desert plains, creating sky islands for endemic species. The valleys are often no lower in elevation than , while some in central Nevada are above . The southern third of the state, where the Las Vegas area is situated, is within the Mojave Desert. The area receives less rain in the winter but is closer to the Arizona Monsoon in the summer. The terrain is also lower, mostly below , creating conditions for hot summer days and cool to chilly winter nights. Nevada and California have by far the longest diagonal line (in respect to the cardinal directions) as a state boundary at just over . This line begins in Lake Tahoe nearly offshore (in the direction of the boundary), and continues to the Colorado River where the Nevada, California, and Arizona boundaries merge southwest of the Laughlin Bridge. The largest mountain range in the southern portion of the state is the Spring Mountain Range, just west of Las Vegas. The state's lowest point is along the Colorado River, south of Laughlin. Nevada has 172 mountain summits with of prominence. Nevada ranks second in the United States by the number of mountains, behind Alaska, and ahead of California, Montana, and Washington. Nevada is the most mountainous state in the contiguous United States. Nevada is the driest state in the United States. It is made up of mostly desert and semi-arid climate regions, and, with the exception of the Las Vegas Valley, the average summer diurnal temperature range approaches in much of the state. While winters in northern Nevada are long and fairly cold, the winter season in the southern part of the state tends to be of short duration and mild. Most parts of Nevada receive scarce precipitation during the year. The most rain that falls in the state falls on the lee side (east and northeast slopes) of the Sierra Nevada. The average annual rainfall per year is about ; the wettest parts get around . 
Nevada's highest recorded temperature is at Laughlin on June 29, 1994 and the lowest recorded temperature is at San Jacinto on January 8, 1937. Nevada's reading is the third highest statewide record high temperature of a U.S. state, just behind Arizona's reading and California's reading. The vegetation of Nevada is diverse and differs by state area. Nevada contains six biotic zones: alpine, sub-alpine, ponderosa pine, pinion-juniper, sagebrush and creosotebush. Nevada is divided into political jurisdictions designated as "counties". Carson City is officially a consolidated municipality; however, for many purposes under state law, it is considered to be a county. As of 1919, there were 17 counties in the state, ranging from . Lake County, one of the original nine counties formed in 1861, was renamed Roop County in 1862. Part of the county became Lassen County, California in 1864. In 1883, Washoe County annexed the portion that remained in Nevada. In 1969, Ormsby County was dissolved and the Consolidated Municipality of Carson City was created by the Legislature in its place coterminous with the old boundaries of Ormsby County. Bullfrog County was formed in 1987 from part of Nye County. After the creation was declared unconstitutional, the county was abolished in 1989. Humboldt county was designated as a county in 1856 by Utah Territorial Legislature and again in 1861 by the new Nevada Legislature. Clark County is the most populous county in Nevada, accounting for nearly three-quarters of its residents. Las Vegas, Nevada's most populous city, has been the county seat since the county was created in 1909 from a portion of Lincoln County, Nevada. Before that, it was a part of Arizona Territory. Clark County attracts numerous tourists: An estimated 44 million people visited Clark County in 2014. Washoe County is the second-most populous county of Nevada. Its county seat is Reno. Washoe County includes the Reno–Sparks metropolitan area. Lyon County is the third most populous county. It was one of the nine original counties created in 1861. It was named after Nathaniel Lyon, the first Union General to be killed in the Civil War. Its current county seat is Yerington. Its first county seat was established at Dayton on November 29, 1861. Francisco Garcés was the first European in the area. Nevada was annexed as a part of the Spanish Empire in the northwestern territory of New Spain. Administratively, the area of Nevada was part of the Commandancy General of the Provincias Internas in the Viceroyalty of New Spain. Nevada became a part of Alta California ("Upper California") province in 1804 when the Californias were split. With the Mexican War of Independence won in 1821, the province of Alta California became a territory (state) of Mexico, with a small population. Jedediah Smith entered the Las Vegas Valley in 1827, and Peter Skene Ogden traveled the Humboldt River in 1828. When the Mormons created the State of Deseret in 1847, they laid claim to all of Nevada within the Great Basin and the Colorado watershed. They also founded the first white settlement in what is now Nevada, Mormon Station (modern-day Genoa), in 1851. In June 1855, William Bringhurst and 29 fellow Mormon missionaries from Utah arrived at a site just northeast of downtown Las Vegas and built a 150-foot square adobe fort, the first permanent structure erected in the valley, which remained under the control of Salt Lake City until the winter of 1858–1859. 
As a result of the Mexican–American War and the Treaty of Guadalupe Hidalgo, Mexico permanently lost Alta California in 1848. The new areas acquired by the United States continued to be administered as territories. As part of the Mexican Cession (1848) and the subsequent California Gold Rush that used Emigrant Trails through the area, the state's area evolved first as part of the Utah Territory, then the Nevada Territory (March 2, 1861; named for the Sierra Nevada). See History of Utah, History of Las Vegas, and the discovery of the first major U.S. deposit of silver ore in Comstock Lode under Virginia City, Nevada, in 1859. On March 2, 1861, the Nevada Territory separated from the Utah Territory and adopted its current name, shortened from "The Sierra Nevada" (Spanish for "snow-covered mountain range"). The 1861 southern boundary is commemorated by Nevada Historical Markers 57 and 58 in Lincoln and Nye counties. Eight days before the presidential election of 1864, Nevada became the 36th state in the union, despite lacking the minimum requisite 60,000 residents in order to become a state. (At the time Nevada's population was little more than 10,000.) Rather than sending the Nevada constitution to Washington by Pony Express, the full text was sent by telegraph at a cost of $4,303.27 — the most costly telegraph on file at the time for a single dispatch. Finally, the response from Washington came on October 31, 1864: "the pain is over, the child is born, Nevada this day was admitted into the Union". Statehood was rushed to the date of October 31 to help ensure Abraham Lincoln's reelection on November8 and post-Civil War Republican dominance in Congress, as Nevada's mining-based economy tied it to the more industrialized Union. As it turned out, however, Lincoln and the Republicans won the election handily and did not need Nevada's help. Nevada is one of only two states to significantly expand its borders after admission to the Union. (The other is Missouri, which acquired additional territory in 1837 due to the Platte Purchase.) In 1866 another part of the western Utah Territory was added to Nevada in the eastern part of the state, setting the current eastern boundary. Nevada achieved its current southern boundaries on January 18, 1867, when it absorbed the portion of Pah-Ute County in the Arizona Territory west of the Colorado River, essentially all of present-day Nevada south of the 37th parallel. The transfer was prompted by the discovery of gold in the area, and officials thought Nevada would be better able to oversee the expected population boom. This area includes most of what is now Clark County and the Las Vegas metropolitan area. Mining shaped Nevada's economy for many years (see "Silver mining in Nevada"). When Mark Twain lived in Nevada during the period described in "Roughing It", mining had led to an industry of speculation and immense wealth. However, both mining and population declined in the late 19th century. However, the rich silver strike at Tonopah in 1900, followed by strikes in Goldfield and Rhyolite, again put Nevada's population on an upward trend. Unregulated gambling was commonplace in the early Nevada mining towns but was outlawed in 1909 as part of a nationwide anti-gambling crusade. Because of subsequent declines in mining output and the decline of the agricultural sector during the Great Depression, Nevada again legalized gambling on March 19, 1931, with approval from the legislature. Governor Fred B. 
Balzar's signature enacted the most liberal divorce laws in the country and open gambling. The reforms came just eight days after the federal government presented the $49 million construction contract for Boulder Dam (now Hoover Dam). The Nevada Test Site, northwest of the city of Las Vegas, was founded on January 11, 1951, for the testing of nuclear weapons. The site consists of about of the desert and mountainous terrain. Nuclear testing at the Nevada Test Site began with a bomb dropped on Frenchman Flat on January 27, 1951. The last atmospheric test was conducted on July 17, 1962, and the underground testing of weapons continued until September 23, 1992. The location is known for having the highest concentration of nuclear-detonated weapons in the U.S. Over 80% of the state's area is owned by the federal government. The primary reason for this is homesteads were not permitted in large enough sizes to be viable in the arid conditions that prevail throughout desert Nevada. Instead, early settlers would homestead land surrounding a water source, and then graze livestock on the adjacent public land, which is useless for agriculture without access to water (this pattern of ranching still prevails). The United States Census Bureau estimates the population of Nevada on July 1, 2019, was 3,080,156, an increase of 45,764 residents (1.51%) since the 2018 US Census estimate and an increase of 379,605 residents (14.06%) since the 2010 United States Census. Nevada had the highest percentage growth in population from 2017 to 2018. At the 2010 Census, 6.9% of the state's population were reported as under 5, 24.6% were under 18, and 12.0% were 65 or older. Females made up about 49.5% of the population. Since the 2010 census, the population of Nevada had a natural increase of 87,581 (the net difference between 222,508 births and 134,927 deaths); and an increase due to net migration of 146,626 (of which 104,032 was due to domestic and 42,594 was due to international migration). The center of population of Nevada is in southern Nye County. In this county, the unincorporated town of Pahrump, west of Las Vegas on the California state line, has grown very rapidly from 1980 to 2010. At the 2010 census, the town had 36,441 residents. Las Vegas grew from a gulch of 100 people in 1900 to 10,000 by 1950 to 100,000 by 1970, and was America's fastest-growing city and metropolitan area from 1960 to 2000. From about the 1940s until 2003, Nevada was the fastest-growing state in the U.S. percentage-wise. Between 1990 and 2000, Nevada's population increased by 66%, while the nation's population increased by 13%. More than two-thirds of the population live in Clark County, which is coextensive with the Las Vegas metropolitan area. Thus, in terms of population, Nevada is one of the most centralized states in the nation. Henderson and North Las Vegas are among the top 20 fastest-growing U.S. cities with populations over 100,000. The rural community of Mesquite northeast of Las Vegas was an example of micropolitan growth in the 1990s and 2000s. Other desert towns like Indian Springs and Searchlight on the outskirts of Las Vegas have seen some growth as well. Since 1950, the rate of population born in Nevada has never peaked 27 percent, the lowest rate of all states. In 2012, only 25% of Nevadans were born in-state. Large numbers of new residents in the state originate from California, which led some locals to feel their state is being "Californicated". 
According to the 2017 American Community Survey, 28.2% of Nevada's population were of Hispanic or Latino origin (of any race): Mexican (21.4%), Puerto Rican (0.9%), Cuban (1.0%), and other Hispanic or Latino origin (4.8%). The five largest non-Hispanic White ancestry groups were: German (11.3%), Irish (9.0%), English (6.9%), Italian (5.8%), and American (4.7%). In 1980, non-Hispanic whites made up 83.3% of the state's population. As of 2011, 63.6% of Nevada's population younger than age1 were minorities. Las Vegas is a minority majority city. According to the United States Census Bureau estimates, as of July 1, 2018, non-Hispanic Whites made up 48.7% of Nevada's population. In Douglas, Mineral, and Pershing counties, a plurality of residents are of Mexican ancestry. In Nye County and Humboldt County, residents are mostly of German ancestry; Washoe County has many Irish Americans. Americans of English descent form pluralities in Lincoln County, Churchill County, Lyon County, White Pine County, and Eureka County. Asian Americans lived in the state since the California Gold Rush of the 1850s brought thousands of Chinese miners to Washoe county. They were followed by a few hundred Japanese farmworkers in the late 19th century. By the late 20th century, many immigrants from China, Japan, Korea, the Philippines, Bangladesh, India, and Vietnam came to the Las Vegas metropolitan area. The city now has one of America's most prolific Asian American communities, with a mostly Chinese and Taiwanese area known as "Chinatown" west of I-15 on Spring Mountain Road. Filipino Americans form the largest Asian American group in the state, with a population of more than 113,000. They comprise 56.5% of the Asian American population in Nevada and constitute about 4.3% of the entire state's population. Mining booms drew many Greek and Eastern European immigrants to Nevada. Native American tribes in Nevada are the Koso, Paiute, Panamint, Shoshoni, Walapi, Washoe and Ute tribes. The top countries of origin for immigrants in Nevada were Mexico (39.5 percent of immigrants), the Philippines (14.3 percent), El Salvador (5.2 percent), China (3.1 percent), and Cuba (3 percent). "Note: Births within the table do not add up, due to Hispanics being counted both by their ethnicity and by their race, giving a higher overall number." A small percentage of Nevada's population lives in rural areas. The culture of these places differs significantly from major metropolitan areas. People in these rural counties tend to be native Nevada residents, unlike in the Las Vegas and Reno areas, where the vast majority of the population was born in another state. The rural population is also less diverse in terms of race and ethnicity. Mining plays an important role in the economies of the rural counties, with tourism being less prominent. Ranching also has a long tradition in rural Nevada. Church attendance in Nevada is among the lowest of all U.S. states. In a 2009 Gallup poll only 30% of Nevadans said they attended church weekly or almost weekly, compared to 42% of all Americans (only four states were found to have a lower attendance rate than Nevada's). Major religious affiliations of the people of Nevada are: Protestant 35%, no religion 28%, Roman Catholic 25%, Latter-day Saint 4%, Jewish 2%, Hindu less than 1%, Buddhist 0.5% and Islam less than 0.1%. Parts of Nevada (in the eastern parts of the state) are situated in the Mormon Corridor. 
The largest denominations by number of adherents in 2010 were the Roman Catholic Church with 451,070; The Church of Jesus Christ of Latter-day Saints with 175,149; and the Southern Baptist Convention with 45,535; Buddhist congregations 14,727; Bahá'í 1,723; and Muslim 1,700. The Jewish community is represented by The Rohr Jewish Learning Institute and Chabad. The economy of Nevada is tied to tourism (especially entertainment and gambling related), mining, and cattle ranching. Nevada's industrial outputs are tourism, mining, machinery, printing and publishing, food processing, and electric equipment. The Bureau of Economic Analysis estimates Nevada's total state product in 2018 was $170 billion. The state's per capita personal income in 2018 was $43,820, ranking 35th in the nation. Nevada's state debt in 2012 was calculated to be $7.5 billion, or $3,100 per taxpayer. As of December 2014, the state's unemployment rate was 6.8%. The economy of Nevada has long been tied to vice industries. "[Nevada was] founded on mining and refounded on sin—beginning with prizefighting and easy divorce a century ago and later extending to gaming and prostitution", said the August 21, 2010 issue of "The Economist". In portions of the state outside of the Las Vegas and Reno metropolitan areas mining plays a major economic role. By value, gold is by far the most important mineral mined. In 2004, of gold worth $2.84 billion were mined in Nevada, and the state accounted for 8.7% of world gold production (see "Gold mining in Nevada"). Silver is a distant second, with worth $69 million mined in 2004 (see "Silver mining in Nevada"). Other minerals mined in Nevada include construction aggregates, copper, gypsum, diatomite and lithium. Despite its rich deposits, the cost of mining in Nevada is generally high, and output is very sensitive to world commodity prices. Cattle ranching is a major economic activity in rural Nevada. Nevada's agricultural outputs are cattle, hay, alfalfa, dairy products, onions, and potatoes. As of January 1, 2006, there were an estimated 500,000 head of cattle and 70,000 head of sheep in Nevada. Most of these animals forage on rangeland in the summer, with supplemental feed in the winter. Calves are generally shipped to out-of-state feedlots in the fall to be fattened for the market. Over 90% of Nevada's of cropland is used to grow hay, mostly alfalfa, for livestock feed. The largest employers in the state, as of the first fiscal quarter of 2011, are the following, according to the Nevada Department of Employment, Training and Rehabilitation: Amtrak's "California Zephyr" train uses the Union Pacific's original transcontinental railroad line in daily service from Chicago to Emeryville, California, serving Elko, Winnemucca, and Reno. Las Vegas has had no passenger train service since Amtrak's Desert Wind was discontinued in 1997. Amtrak Thruway Motorcoaches provide connecting service from Las Vegas to trains at Needles, California, Los Angeles, and Bakersfield, California; and from Stateline, Nevada, to Sacramento, California. There have been a number of proposals to re-introduce service to either Los Angeles or Southern California. The Union Pacific Railroad has some railroads in the north and south of Nevada. Greyhound Lines provide some bus service to the state. Interstate 15 passes through the southern tip of the state, serving Las Vegas and other communities. I-215 and spur route I-515 also serve the Las Vegas metropolitan area. 
Interstate 80 crosses the northern part of Nevada, roughly following the path of the Humboldt River from Utah in the east and the Truckee River westward through Reno into California. It has a spur route, I-580. Nevada is also served by several U.S. highways: US 6, US 50, US 93, US 95, and US 395. There are also 189 Nevada state routes. Many of Nevada's counties have a system of county routes as well, though many are not signed or paved in rural areas. Nevada is one of a few states in the U.S. that does not have a continuous interstate highway linking its two major population centers; the road connection between the Las Vegas and Reno areas is a combination of several different Interstate and U.S. highways. The proposed routing of Interstate 11 may eventually remedy this. The state is one of just a few in the country to allow semi-trailer trucks with three trailers, what might be called a "road train" in Australia. American versions, however, are usually smaller, in part because they must ascend and descend some fairly steep mountain passes.

RTC Transit is the public transit system in the Las Vegas metropolitan area. The agency is the largest transit agency in the state and operates a network of bus service across the Las Vegas Valley, including the use of The Deuce, double-decker buses, on the Las Vegas Strip and several outlying routes. RTC RIDE operates a system of local transit bus service throughout the Reno–Sparks metropolitan area. Other transit systems in the state include Carson City's JAC; most other counties in the state have no public transportation at all. Additionally, a monorail system provides public transportation in the Las Vegas area. The Las Vegas Monorail line serves several casino properties and the Las Vegas Convention Center on the east side of the Las Vegas Strip, running near Paradise Road, with a possible future extension to McCarran International Airport. Several hotels also run their own monorail lines between each other, which are typically several blocks in length. McCarran International Airport in Las Vegas is the busiest airport serving Nevada. The Reno–Tahoe International Airport (formerly known as Reno Cannon International Airport) is the other major airport in the state.

Nevada has had a thriving solar energy sector. An independent study in 2013 concluded that solar users created a $36 million net benefit. However, in December 2015, the Public Utilities Commission let the state's only power company, NV Energy, charge higher rates and fees to solar panel users, leading to an immediate collapse of rooftop solar panel use. In December 1987, Congress amended the Nuclear Waste Policy Act to designate the Yucca Mountain nuclear waste repository as the only site to be characterized as a permanent repository for all of the nation's highly radioactive waste.

Under the Constitution of the State of Nevada, the powers of the Nevada government are divided among three separate departments: the executive, consisting of the Governor of Nevada and their cabinet along with the other elected constitutional officers; the legislative, consisting of the Nevada Legislature, which includes the Assembly and the Senate; and the judicial, consisting of the Supreme Court of Nevada and lower courts. The Governor of Nevada is the chief magistrate of Nevada, the head of the executive department of the state's government, and the commander-in-chief of the state's military forces. The current Governor of Nevada is Steve Sisolak, a Democrat.
The Nevada Legislature is a bicameral body divided into an Assembly and a Senate. Members of the Assembly serve two-year terms, and members of the Senate serve four-year terms. Term limits took effect for both houses in 2010: senators and Assembly members are limited to a lifetime maximum of twelve years of service in each house, whether by appointment or election, a provision of the constitution upheld unanimously by the Supreme Court of Nevada. Each session of the Legislature meets for a constitutionally mandated 120 days in every odd-numbered year, or longer if the Governor calls a special session. On December 18, 2018, Nevada became the first state in the United States with a female-majority legislature: women hold nine of the 21 seats in the Nevada Senate and 23 of the 42 seats in the Nevada Assembly.

The Supreme Court of Nevada is the state supreme court and the head of the Nevada Judiciary. Original jurisdiction is divided between the district courts (with general jurisdiction) and the justice courts and municipal courts (both of limited jurisdiction). Appeals from the district courts are made directly to the Nevada Supreme Court, which, under a deflective model of jurisdiction, has the discretion to send cases to the Court of Appeals for final resolution.

Incorporated towns in Nevada, known as cities, are given the authority to legislate anything not prohibited by law. A movement has begun to grant home rule to incorporated Nevada cities, giving them more flexibility and fewer restrictions from the Legislature. Town boards for unincorporated towns are limited local governments created either by the local county commission or by referendum; they serve a purely advisory role and in no way diminish the responsibilities of the county commission that creates them.

In 1900, Nevada's population was the smallest of all states and was shrinking, as the difficulties of living in a "barren desert" began to outweigh the lure of silver for many early settlers. Historian Lawrence Friedman has explained what happened next: "Nevada, in a burst of ingenuity, built an economy by exploiting its sovereignty. Its strategy was to legalize all sorts of things that were illegal in California... after the easy divorce came easy marriage and casino gaming. Even prostitution is legal in Nevada, in any county that decides to allow it. Quite a few of them do." With the advent of air conditioning for summertime use and Southern Nevada's mild winters, the fortunes of the state began to turn around, as they did for Arizona, making these two states the fastest growing in the Union.

Nevada is the only state where prostitution is legal, in a licensed brothel in a county which has specifically voted to permit it. It is illegal in larger jurisdictions such as Clark County (which contains Las Vegas), Washoe County (which contains Reno), and the independent city of Carson City.

Nevada's early reputation as a "divorce haven" arose from the fact that, before the no-fault divorce revolution of the 1970s, divorces were difficult to obtain in the United States. Already having legalized gambling and prostitution, Nevada continued the trend of boosting its profile by adopting one of the most liberal divorce statutes in the nation. This resulted in "Williams v. North Carolina" (1942), in which the U.S. Supreme Court ruled North Carolina had to give "full faith and credit" to a Nevada divorce. The Court modified its decision in "Williams v.
North Carolina" (1945), , by holding a state need not recognize a Nevada divorce unless one of the parties was domiciled there at the time the divorce was granted and the forum state was entitled to make its own determination. As of 2009, Nevada's divorce rate was above the national average. Nevada's tax laws are intended to draw new residents and businesses to the state. Nevada has no personal income tax or corporate income tax. Since Nevada does not collect income data it cannot share such information with the federal government, the IRS. The state sales tax (similar to VAT or GST) in Nevada is variable depending upon the county. The statewide tax rate is 6.85%, with five counties (Elko, Esmeralda, Eureka, Humboldt, and Mineral) charging this amount. Counties may impose additional rates via voter approval or through approval of the state legislature; therefore, the applicable sales tax varies by county from 6.85% to 8.375% (Clark County). Clark County, which includes Las Vegas, imposes four separate county option taxes in addition to the statewide rate: 0.25% for flood control, 0.50% for mass transit, 0.25% for infrastructure, and 0.25% for more cops. In Washoe County, which includes Reno, the sales tax rate is 7.725%, due to county option rates for flood control, the ReTRAC train trench project, and mass transit, and an additional county rate approved under the Local Government Tax Act of 1991. The minimum Nevada sales tax rate changed on July 1, 2009. The lodging tax rate in unincorporated Clark County, which includes the Las Vegas Strip, is 12%. Within the boundaries of the cities of Las Vegas and Henderson, the lodging tax rate is 13%. Corporations such as Apple Inc. allegedly have set up investment companies and funds in Nevada to avoid paying taxes. In 2009, the Nevada Legislature passed a bill creating a domestic partnership registry that enables gay couples to enjoy the same rights as married couples. In June 2015, gay marriage became legal in Nevada. Nevada provides a friendly environment for the formation of corporations, and many (especially California) businesses have incorporated in Nevada to take advantage of the benefits of the Nevada statute. Nevada corporations offer great flexibility to the Board of Directors and simplify or avoid many of the rules that are cumbersome to business managers in some other states. In addition, Nevada has no franchise tax, although it does require businesses to have a license for which the business has to pay the state. Similarly, many U.S. states have usury laws limiting the amount of interest a lender can charge, but federal law allows corporations to 'import' these laws from their home state. Nevada has very liberal alcohol laws. Bars are permitted to remain open 24 hours, with no "last call". Liquor stores, convenience stores and supermarkets may also sell alcohol 24  hours per day and may sell beer, wine and spirits. In 2016, Nevada voters approved Question2, which legalized the possession, transportation and cultivation of personal use amounts of marijuana for adults age 21 years and older, and authorized the creation of a regulated market for the sale of marijuana to adults age 21 years and older through state-licensed retail outlets. Nevada voters had previously approved medical marijuana in 2000, but rejected marijuana legalization in a similar referendum in 2006. Marijuana in all forms remains illegal under federal law. 
In 2009, the Nevada Legislature passed a bill creating a domestic partnership registry that enables gay couples to enjoy the same rights as married couples. In October 2014, same-sex marriage became legal in Nevada.

Nevada provides a friendly environment for the formation of corporations, and many (especially California) businesses have incorporated in Nevada to take advantage of the benefits of the Nevada statute. Nevada corporations offer great flexibility to the board of directors and simplify or avoid many of the rules that are cumbersome to business managers in some other states. In addition, Nevada has no franchise tax, although it does require businesses to hold a state business license, for which they must pay a fee. Similarly, while many U.S. states have usury laws limiting the amount of interest a lender can charge, federal law allows corporations to "import" these laws from their home state.

Nevada has very liberal alcohol laws. Bars are permitted to remain open 24 hours, with no "last call". Liquor stores, convenience stores, and supermarkets may also sell alcohol 24 hours per day, and may sell beer, wine, and spirits. In 2016, Nevada voters approved Question 2, which legalized the possession, transportation, and cultivation of personal-use amounts of marijuana for adults age 21 and older, and authorized the creation of a regulated market for the sale of marijuana to adults through state-licensed retail outlets. Nevada voters had previously approved medical marijuana in 2000, but rejected marijuana legalization in a similar referendum in 2006. Marijuana in all forms remains illegal under federal law.

Aside from cannabis legalization, non-alcohol drug laws are a notable exception to Nevada's otherwise libertarian principles: the state has among the harshest penalties for drug offenders in the country, and it remains the only state to still use mandatory minimum sentencing guidelines for possession of drugs.

Nevada voters enacted a smoking ban ("The Nevada Clean Indoor Air Act") in November 2006 that became effective on December 8, 2006. It outlaws smoking in most workplaces and public places. Smoking is permitted in bars, but only if the bar serves no food or is inside a larger casino. Smoking is also permitted in casinos, certain hotel rooms, tobacco shops, and brothels. However, some businesses do not obey this law, and the government tends not to enforce it. In 2011, smoking restrictions in Nevada were loosened for certain places that admit only people 21 or older.

In 2006, the crime rate in Nevada was about 24% higher than the national average, though crime has since decreased. Property crimes accounted for about 85% of the total crime rate in Nevada, 21% higher than the national rate; the remainder were violent crimes.

Due to heavy growth in the southern portion of the state, there is a noticeable divide between the politics of northern and southern Nevada. The north has long maintained control of key positions in state government, even while the population of southern Nevada is larger than the rest of the state combined. The north sees the high-population south becoming more influential and perhaps commanding majority rule; the south sees the north as the "old guard" trying to rule as an oligarchy. This has fostered some resentment. However, due to a term-limit amendment passed by Nevada voters in 1994, and again in 1996, some of the north's hold over key positions will soon be forfeited to the south, leaving northern Nevada with less power.

Historically, northern Nevada has been very Republican. The more rural counties of the north are among the most conservative regions of the country. Carson City, the state's capital, is a Republican-leaning swing city/county. Washoe County, home to Reno, has historically been strongly Republican but has become more of a Democratic-leaning swing county. Clark County, home to Las Vegas, has been a stronghold for the Democratic Party since it was founded in 1909, having voted Republican only six times and once for a third-party candidate. Clark and Washoe counties have long dominated the state's politics; between them, they cast 87 percent of Nevada's vote and elect a substantial majority of the state legislature. The last Republican to carry Clark County was George H. W. Bush in 1988, and the last Republican to carry Washoe County was George W. Bush in 2004. The great majority of the state's elected officials are from either Las Vegas or Reno.

Nevada voted for the winner in nearly every presidential election from 1912 to 2012, the exception being 1976, when it voted for Gerald Ford over Jimmy Carter. This includes Nevada supporting Democrats John F. Kennedy and Lyndon B. Johnson in 1960 and 1964, respectively; Republican Richard Nixon in 1968 and 1972; Republican Ronald Reagan in 1980 and 1984; Republican George H. W. Bush in 1988; Democrat Bill Clinton in 1992 and 1996; Republican George W. Bush in 2000 and 2004; and Democrat Barack Obama in 2008 and 2012.
This gives the state status as a political bellwether: from 1912 to 2012, Nevada was carried by the presidential victor more often than any other state (26 of 27 elections). In 2016, Nevada lost its bellwether status when it narrowly cast its votes for Hillary Clinton. Nevada was one of only three states in the American West won by John F. Kennedy in the election of 1960, albeit narrowly. The state's U.S. Senators are Democrats Catherine Cortez Masto and Jacky Rosen. The governorship is held by Steve Sisolak, a Democrat.

Nevada is the only U.S. state to offer a "none of the above" option on its ballots. Officially called "None of These Candidates", the option was first added to the ballot in 1975 and is used in all statewide elections, including those for president, U.S. Senate, and all state constitutional offices. In the event "None of These Candidates" receives a plurality of votes in the election, the candidate with the next-highest total is elected.
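Since that last clause is easy to mis-read, here is a toy Python sketch of the rule just described; the function name, candidate names, and vote totals are all hypothetical, invented purely for illustration.

```python
# Toy illustration of Nevada's "None of These Candidates" rule:
# the option may receive the most votes, but it cannot hold office,
# so the highest-polling actual candidate is elected.
NOTA = "None of These Candidates"

def nevada_winner(votes: dict) -> str:
    """Return the candidate elected under the rule described above."""
    real_candidates = {name: n for name, n in votes.items() if name != NOTA}
    return max(real_candidates, key=real_candidates.get)

# Hypothetical race: NOTA wins the plurality, yet Candidate A is elected.
ballot = {NOTA: 40_000, "Candidate A": 35_000, "Candidate B": 25_000}
print(nevada_winner(ballot))  # -> Candidate A
```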
Education in Nevada is achieved through public and private elementary, middle, and high schools, as well as colleges and universities. A May 2015 educational reform law expanded school choice options to 450,000 Nevada students who are at up to 185% of the federal poverty level. Education savings accounts (ESAs) are enabled by the new law to help pay the tuition for private schools; alternatively, families "can use funds in these accounts to also pay for textbooks and tutoring". Approximately 86.9% of Nevada high school students graduate, putting the state below the national average of 88.3%. The Nevada Aerospace Hall of Fame provides educational resources and promotes the aerospace and aviation history of the state.

There are 68 designated wilderness areas in Nevada, under the jurisdiction of the National Park Service, U.S. Forest Service, and Bureau of Land Management. The Nevada state parks comprise protected areas managed by the state of Nevada, including state parks, state historic sites, and state recreation areas. There are 24 state park units, including Van Sickle Bi-State Park, which opened in July 2011 and is operated in partnership with the state of California.

Resort areas like Las Vegas, Reno, Lake Tahoe, and Laughlin attract visitors from around the nation and world. In FY08 their 266 casinos (not counting ones with annual revenue under a million dollars) brought in $12 billion in gaming revenue and another $13 billion in non-gaming revenue. A review of gaming statistics can be found at "Nevada gaming area". Nevada has by far the most hotel rooms per capita in the United States. According to the American Hotel and Lodging Association, there were 187,301 rooms in 584 hotels (of 15 or more rooms). The state is ranked just below California, Texas, Florida, and New York in the total number of rooms, but those states have much larger populations. Nevada has one hotel room for every 14 residents, far above the national average of one hotel room per 67 residents.

Prostitution is legal in parts of Nevada in licensed brothels, but only counties with populations under 400,000 have the option to legalize it. Although prostitution is not a major part of the Nevada economy, employing roughly 300 women as independent contractors, it is a very visible endeavor. Of the 14 counties permitted to legalize prostitution under state law, eight have chosen to legalize brothels. State law prohibits prostitution in Clark County (which contains Las Vegas) and Washoe County (which contains Reno); however, prostitution is legal in Storey County, which is part of the Reno–Sparks metropolitan area.

The Las Vegas Valley is home to three major professional teams: the Vegas Golden Knights of the National Hockey League, who began play in the 2017–18 NHL season at T-Mobile Arena on the Las Vegas Strip in Paradise, Nevada; the Las Vegas Raiders of the National Football League, who begin play at Allegiant Stadium in Las Vegas in 2020 after moving from Oakland, California; and the Las Vegas Aces of the WNBA, who began playing in 2018 at the Mandalay Bay Events Center after moving from San Antonio.

Nevada takes pride in college sports, most notably its college football. College teams in the state include the Nevada Wolf Pack (representing the University of Nevada, Reno) and the UNLV Rebels (representing the University of Nevada, Las Vegas), both in the Mountain West Conference (MW). UNLV is most remembered for its men's basketball program, which experienced its height of supremacy in the late 1980s and early 1990s. Coached by Jerry Tarkanian, the Runnin' Rebels became one of the most elite programs in the country. In 1990, UNLV won the Men's Division I Championship by defeating Duke 103–73, which set tournament records for most points scored by a team and largest margin of victory in the national title game. In 1991, UNLV finished the regular season undefeated, a feat that would not be matched in Division I men's basketball for more than 20 years. Forward Larry Johnson won several awards, including the Naismith Award. UNLV reached the Final Four yet again, but lost their national semifinal against Duke 79–77. The Runnin' Rebels were the Associated Press pre-season No. 1 in back-to-back seasons (1989–90, 1990–91); North Carolina is the only other team to accomplish that (2007–08, 2008–09).

The state's involvement in major-college sports is not limited to its local schools. In the 21st century, the Las Vegas area has become a significant regional center for college basketball conference tournaments. The MW, West Coast Conference, and Western Athletic Conference all hold their men's and women's tournaments in the area, and the Pac-12 holds its men's tournament there as well. The Big Sky Conference, after decades of holding its men's and women's conference tournaments at campus sites, began holding both tournaments in Reno in 2016.

Las Vegas has hosted several professional boxing matches, most recently at the MGM Grand Garden Arena, with bouts such as Mike Tyson vs. Evander Holyfield, Evander Holyfield vs. Mike Tyson II, Oscar De La Hoya vs. Floyd Mayweather, and Oscar De La Hoya vs. Manny Pacquiao, and at the newer T-Mobile Arena, with Canelo Álvarez vs. Amir Khan. With the significant rise in popularity of mixed martial arts (MMA), a number of fight leagues such as the UFC have taken an interest in Las Vegas as a primary event location due to the number of suitable host venues. The Mandalay Bay Events Center and MGM Grand Garden Arena are among the more popular venues for fighting events such as MMA and have hosted several UFC and other MMA title fights. The city has hosted more UFC events than any other, with 86.

The state is also home to the Las Vegas Motor Speedway, which hosts NASCAR's Pennzoil 400 and South Point 400. Two venues in the immediate Las Vegas area host major annual events in rodeo. The Thomas & Mack Center, built for UNLV men's basketball, hosts the National Finals Rodeo.
The PBR World Finals, operated by the bull-riding-only Professional Bull Riders, was also held at the Thomas & Mack Center before moving to T-Mobile Arena in 2016. The state is also home to one of the most famous tennis players of all time, Andre Agassi, and baseball superstar Bryce Harper.

Several United States Navy ships have been named USS "Nevada" in honor of the state.

Area 51 is near Groom Lake, a dry salt lake bed. The much smaller Creech Air Force Base is in Indian Springs, Nevada; Hawthorne Army Depot is in Hawthorne; the Tonopah Test Range is near Tonopah; and Nellis AFB is in the northeast part of the Las Vegas Valley. Naval Air Station Fallon is in Fallon, and NSAWC (pronounced "EN-SOCK") is in western Nevada. NSAWC consolidated three command centers into a single command structure under a flag officer on July 11, 1996: the Naval Strike Warfare Center (STRIKE "U"), based at NAS Fallon since 1984, was joined with the Navy Fighter Weapons School (TOPGUN) and the Carrier Airborne Early Warning Weapons School (TOPDOME), which both moved from NAS Miramar as a result of a 1993 Base Realignment and Closure (BRAC) decision that transferred that installation back to the Marine Corps as MCAS Miramar. The Seahawk Weapon School was added in 1998 to provide tactical training for Navy helicopters. These bases host a number of activities, including the Joint Unmanned Aerial Systems Center of Excellence, the Naval Strike and Air Warfare Center, the Nevada Test and Training Range, Red Flag, the U.S. Air Force Thunderbirds, the United States Air Force Warfare Center, the United States Air Force Weapons School, and the United States Navy Fighter Weapons School.
Native Americans in the United States

Native Americans, also known as American Indians, Indigenous Americans, and other terms, are the indigenous peoples of the United States, except Hawaii and the territories of the United States. There are 574 federally recognized tribes living within the US, about half of which are associated with Indian reservations. The term "American Indian" excludes Native Hawaiians and some Alaskan Natives, while "Native Americans" (as defined by the US Census) are American Indians plus Alaska Natives of all ethnicities. The US Census does not count Native Hawaiians or Chamorro as Native Americans; they are instead included in the Census grouping of "Native Hawaiian and other Pacific Islander".

The ancestors of living Native Americans arrived in what is now the United States at least 15,000 years ago, possibly much earlier, from Asia via Beringia. A vast variety of peoples, societies, and cultures subsequently developed. European colonization of the Americas, which began in 1492, resulted in a precipitous decline in Native American population through introduced diseases, warfare, ethnic cleansing, and slavery. After its formation, the United States, as part of its policy of settler colonialism, continued to wage war and perpetrate massacres against many Native American peoples, removed them from their ancestral lands, and subjected them to one-sided treaties and to discriminatory government policies, later focused on forced assimilation, into the 20th century. Since the 1960s, Native American self-determination movements have resulted in changes to the lives of Native Americans, though there are still many contemporary issues faced by them. Today, there are over five million Native Americans in the United States, 78% of whom live outside reservations.

When the United States was created, established Native American tribes were generally considered semi-independent nations, as they generally lived in communities separate from white settlers. The federal government signed treaties at a government-to-government level until the Indian Appropriations Act of 1871 ended recognition of independent native nations and started treating them as "domestic dependent nations" subject to federal law. This law did preserve the rights and privileges agreed to under the treaties, including a large degree of tribal sovereignty. For this reason, many (but not all) Native American reservations are still independent of state law, and the actions of tribal citizens on these reservations are subject only to tribal courts and federal law.

The Indian Citizenship Act of 1924 granted U.S. citizenship to all Native Americans born in the United States who had not yet obtained it. This emptied the "Indians not taxed" category established by the United States Constitution, allowed natives to vote in state and federal elections, and extended the Fourteenth Amendment protections granted to people "subject to the jurisdiction" of the United States. However, some states continued to deny Native Americans voting rights for several decades. Bill of Rights protections do not apply to tribal governments, except for those mandated by the Indian Civil Rights Act of 1968.

Since the end of the 15th century, the migration of Europeans to the Americas has led to centuries of population, cultural, and agricultural transfer and adjustment between Old and New World societies, a process known as the Columbian exchange.
As most Native American groups had historically preserved their histories through oral traditions and artwork, the first written sources on the contact period were produced by Europeans.

Ethnographers commonly classify the indigenous peoples of North America into ten geographical regions with shared cultural traits, called cultural areas. Some scholars combine the Plateau and Great Basin regions into the Intermontane West, some separate Prairie peoples from Great Plains peoples, and some separate Great Lakes tribes from the Northeastern Woodlands.

At the time of first contact, the indigenous cultures were quite different from those of the proto-industrial and mostly Christian immigrants. Some Northeastern and Southwestern cultures, in particular, were matrilineal and operated on a more collective basis than that with which Europeans were familiar. The majority of indigenous American tribes maintained their hunting grounds and agricultural lands for use of the entire tribe, while Europeans at that time had patriarchal cultures and had developed concepts of individual property rights with respect to land that were extremely different. The differences in cultures between the established Native Americans and immigrant Europeans, as well as shifting alliances among different nations in times of war, caused extensive political tension, ethnic violence, and social disruption.

Even before the European settlement of what is now the United States, Native Americans suffered high fatalities from contact with new European diseases, to which they had not yet acquired immunity; the diseases were endemic to the Spanish and other Europeans, and spread by direct contact and likely through pigs that escaped from expeditions. Smallpox epidemics are thought to have caused the greatest loss of life for indigenous populations. William M. Denevan, noted author and Professor Emeritus of Geography at the University of Wisconsin–Madison, said on this subject in his essay "The Pristine Myth: The Landscape of the Americas in 1492": "The decline of native American populations was rapid and severe, probably the greatest demographic disaster ever. Old World diseases were the primary killer. In many regions, particularly the tropical lowlands, populations fell by 90 percent or more in the first century after the contact."

Estimates of the pre-Columbian population of what today constitutes the U.S. vary significantly, ranging from William M. Denevan's 3.8 million in his 1992 work "The Native Population of the Americas in 1492" to 18 million in Henry F. Dobyns' "Their Number Become Thinned" (1983). Dobyns' work, being by far the highest single point estimate within professional academic research on the topic, has been criticized as "politically motivated". Perhaps Dobyns' most vehement critic is David Henige, a bibliographer of Africana at the University of Wisconsin, whose "Numbers From Nowhere" (1998) is described as "a landmark in the literature of demographic fulmination". "Suspect in 1966, it is no less suspect nowadays," Henige wrote of Dobyns's work. "If anything, it is worse."

After the thirteen colonies revolted against Great Britain and established the United States, President George Washington and Secretary of War Henry Knox conceived of the idea of "civilizing" Native Americans in preparation for assimilation as U.S. citizens. Assimilation (whether voluntary, as with the Choctaw, or forced) became a consistent policy through successive American administrations.
During the 19th century, the ideology of manifest destiny became integral to the American nationalist movement. Expansion of European-American populations to the west after the American Revolution resulted in increasing pressure on Native American lands, warfare between the groups, and rising tensions. In 1830, the U.S. Congress passed the Indian Removal Act, authorizing the government to relocate Native Americans from their homelands within established states to lands west of the Mississippi River, accommodating European-American expansion. This resulted in the ethnic cleansing of many tribes, with the brutal forced marches coming to be known as the Trail of Tears.

Contemporary Native Americans have a unique relationship with the United States because they may be members of nations, tribes, or bands with sovereignty and treaty rights. Cultural activism since the late 1960s has increased political participation and led to an expansion of efforts to teach and preserve indigenous languages for younger generations and to establish a greater cultural infrastructure: Native Americans have founded independent newspapers and online media, recently including First Nations Experience, the first Native American television channel; established Native American studies programs, tribal schools, universities, museums, and language programs; and have increasingly been published as authors in numerous genres.

The terms used to refer to Native Americans have at times been controversial. The ways Native Americans refer to themselves vary by region and generation, with many older Native Americans self-identifying as "Indians" or "American Indians", while younger Native Americans often identify as "Indigenous" or "Aboriginal". The term "Native American" has not traditionally included Native Hawaiians or certain Alaskan Natives, such as the Aleut, Yup'ik, or Inuit peoples. By comparison, the indigenous peoples of Canada are generally known as First Nations.

It is not definitively known how or when Native Americans first settled the Americas and the present-day United States. The prevailing theory proposes that people migrated from Eurasia across Beringia, a land bridge that connected Siberia to present-day Alaska during the Ice Age, and then spread southward throughout the Americas over subsequent generations. Genetic evidence suggests that at least three waves of migrants arrived from Asia, with the first occurring at least fifteen thousand years ago. These migrations may have begun as early as 30,000 years ago and continued through to about 10,000 years ago, when the land bridge became submerged by the rising sea level caused by the end of the last glacial period.

The pre-Columbian era incorporates all period subdivisions in the history and prehistory of the Americas before the appearance of significant European influences on the American continents, spanning the time from the original settlement in the Upper Paleolithic period to European colonization during the Early Modern period. While technically referring to the era before Christopher Columbus' voyages of 1492 to 1504, in practice the term usually includes the history of American indigenous cultures until they were conquered or significantly influenced by Europeans, even if this happened decades or even centuries after Columbus' initial landing.
Native American cultures are not normally included in characterizations of advanced Stone Age cultures as "Neolithic", a category that more often includes only the cultures of Eurasia, Africa, and other regions. The archaeological periods used are the classifications of archaeological periods and cultures established in Gordon Willey and Philip Phillips' 1958 book "Method and Theory in American Archaeology". They divided the archaeological record in the Americas into five phases; see Archaeology of the Americas.

Numerous Paleoindian cultures occupied North America, with some arrayed around the Great Plains and Great Lakes of the modern United States and Canada, as well as adjacent areas to the west and southwest. According to the oral histories of many of the indigenous peoples of the Americas, they have been living on this continent since their genesis, described by a wide range of traditional creation stories. Other tribes have stories that recount migrations across long tracts of land and a great river, believed to be the Mississippi River. Genetic and linguistic data connect the indigenous people of this continent with ancient northeast Asians. Archeological and linguistic data have enabled scholars to discover some of the migrations within the Americas.

Archeological evidence at the Gault site near Austin, Texas, demonstrates that pre-Clovis peoples settled in Texas some 16,000–20,000 years ago. Evidence of pre-Clovis cultures has also been found in the Paisley Caves in south-central Oregon and in butchered mastodon bones in a sinkhole near Tallahassee, Florida. More convincingly but also controversially, another pre-Clovis site has been discovered at Monte Verde, Chile.

The Clovis culture, a megafauna-hunting culture, is primarily identified by the use of fluted spear points. Artifacts from this culture were first excavated in 1932 near Clovis, New Mexico. The Clovis culture ranged over much of North America and also appeared in South America. The culture is identified by the distinctive Clovis point, a flaked flint spear point with a notched flute, by which it was inserted into a shaft. Dating of Clovis materials has been by association with animal bones and by the use of carbon dating methods. Recent reexaminations of Clovis materials using improved carbon-dating methods produced results of 11,050 and 10,800 radiocarbon years B.P. (roughly 9100 to 8850 BCE).
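A brief note on those dates: radiocarbon "years B.P." (before present) count back from a 1950 CE datum by convention, so the calendar-era labels quoted above follow from simple subtraction. This is only a rough sketch; converting radiocarbon years to true calendar years also requires calibration, which is what the "roughly" in the text signals.

```latex
% Rough conversion from radiocarbon years B.P. (datum 1950 CE) to BCE,
% ignoring radiocarbon calibration:
%   year BCE \approx years BP - 1950
\[
  11{,}050 - 1950 \approx 9100\ \text{BCE}, \qquad
  10{,}800 - 1950 \approx 8850\ \text{BCE}
\]
```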
The Folsom tradition was characterized by the use of Folsom points as projectile tips and by activities known from kill sites, where slaughter and butchering of bison took place. Folsom tools were left behind between 9000 BCE and 8000 BCE.

Na-Dené-speaking peoples entered North America starting around 8000 BCE, reaching the Pacific Northwest by 5000 BCE, and from there migrating along the Pacific Coast and into the interior. Linguists, anthropologists, and archaeologists believe their ancestors comprised a separate migration into North America, later than the first Paleo-Indians. They migrated into Alaska and northern Canada, south along the Pacific Coast, into the interior of Canada, and south to the Great Plains and the American Southwest. Na-Dené-speaking peoples were the earliest ancestors of the Athabascan-speaking peoples, including the present-day and historical Navajo and Apache. They constructed large multi-family dwellings in their villages, which were used seasonally; people did not live in them year-round, but stayed for the summer to hunt and fish and to gather food supplies for the winter.

Since the 1990s, archeologists have explored and dated eleven Middle Archaic sites in present-day Louisiana and Florida at which early cultures built complexes with multiple earthwork mounds; they were societies of hunter-gatherers rather than the settled agriculturalists believed necessary, according to the theory of the Neolithic Revolution, to sustain such large villages over long periods. The prime example is Watson Brake in northern Louisiana, whose 11-mound complex is dated to 3500 BCE, making it the oldest dated site in North America for such complex construction. It is nearly 2,000 years older than the Poverty Point site. Construction of the mounds went on for 500 years, until the site was abandoned about 2800 BCE, probably due to changing environmental conditions.

The Oshara tradition people lived from 700 to 1000 CE. They were part of the Southwestern Archaic tradition centered in north-central New Mexico, the San Juan Basin, the Rio Grande Valley, southern Colorado, and southeastern Utah.

Poverty Point culture is a Late Archaic archaeological culture that inhabited the area of the lower Mississippi Valley and the surrounding Gulf Coast. The culture thrived from 2200 BCE to 700 BCE, during the Late Archaic period. Evidence of this culture has been found at more than 100 sites, from the major complex at Poverty Point, Louisiana (a UNESCO World Heritage Site) across a range to the Jaketown Site near Belzoni, Mississippi.

The Formative, Classic, and post-Classic stages are sometimes incorporated together as the Post-archaic period, which runs from 1000 BCE onward. Sites and cultures include the Adena, Old Copper, Oasisamerica, Woodland, Fort Ancient, Hopewell tradition, and Mississippian cultures.

The Woodland period of North American pre-Columbian cultures refers to the time period from roughly 1000 BCE to 1000 CE in the eastern part of North America. The Eastern Woodlands cultural region covers what is now eastern Canada south of the Subarctic region and the Eastern United States, extending to the Gulf of Mexico. The Hopewell tradition describes the common aspects of the culture that flourished along rivers in the northeastern and midwestern United States from 100 BCE to 500 CE, in the Middle Woodland period. The Hopewell tradition was not a single culture or society, but a widely dispersed set of related populations connected by a common network of trade routes. This period is considered a developmental stage without any massive changes in a short period, instead having continuous development in stone and bone tools, leather working, textile manufacture, tool production, cultivation, and shelter construction.

The indigenous peoples of the Pacific Northwest Coast were of many nations and tribal affiliations, each with distinctive cultural and political identities, but they shared certain beliefs, traditions, and practices, such as the centrality of salmon as a resource and spiritual symbol. Their gift-giving feast, the potlatch, is a highly complex event where people gather to commemorate special events, such as the raising of a totem pole or the appointment or election of a new chief. The most famous artistic feature of the culture is the totem pole, with carvings of animals and other characters to commemorate cultural beliefs, legends, and notable events.

The Mississippian culture was a mound-building Native American civilization that archaeologists date from approximately 800 CE to 1600 CE, varying regionally.
It was composed of a series of urban settlements and satellite villages (suburbs) linked together by a loose trading network, the largest city being Cahokia, believed to be a major religious center. The civilization flourished in what is now the Midwestern, Eastern, and Southeastern United States. Numerous pre-Columbian societies were sedentary, such as the Pueblo peoples, Mandan, Hidatsa, and others, and some established large settlements, even cities, such as Cahokia, in what is now Illinois. The Iroquois League of Nations or "People of the Long House" was a politically advanced, democratic society, which is thought by some historians to have influenced the United States Constitution, the Senate having passed a resolution to this effect in 1988. Other historians have contested this interpretation and believe the impact was minimal, or did not exist, pointing to numerous differences between the two systems and the ample precedents for the Constitution in European political thought.

After 1492, European exploration and colonization of the Americas revolutionized how the Old and New Worlds perceived themselves. Many of the first major contacts were made in Florida and on the Gulf coast by Spanish explorers.

From the 16th through the 19th centuries, the population of Native Americans sharply declined. Most mainstream scholars believe that, among the various contributing factors, epidemic disease was the overwhelming cause of the population decline, because of the Native Americans' lack of immunity to new diseases brought from Europe. It is difficult to estimate the number of pre-Columbian Native Americans who were living in what is today the United States of America. Estimates range from a low of 2.1 million to a high of 18 million (Dobyns 1983). By 1800, the Native population of the present-day United States had declined to approximately 600,000, and only 250,000 Native Americans remained in the 1890s. Chicken pox and measles, endemic but rarely fatal among Europeans (long after being introduced from Asia), often proved deadly to Native Americans. In the century following the arrival of the Spanish to the Americas, large disease epidemics depopulated large parts of the eastern United States.

There are a number of documented cases in which diseases were deliberately spread among Native Americans as a form of biological warfare. The most well-known example occurred in 1763, when Sir Jeffery Amherst, Commander-in-Chief of the Forces of the British Army, wrote praising the use of smallpox-infected blankets to "extirpate" the Indian race. Blankets infected with smallpox were given to Native Americans besieging Fort Pitt. The effectiveness of the attempt is unclear.

In 1634, Fr. Andrew White of the Society of Jesus established a mission in what is now the state of Maryland; the purpose of the mission, stated through an interpreter to the chief of an Indian tribe there, was "to extend civilization and instruction to his ignorant race, and show them the way to heaven". Fr. Andrew's diaries report that by 1640 a community had been founded which they named St. Mary's, and the Indians were sending their children there "to be educated among the English". This included the daughter of the Piscataway Indian chief Tayac, which exemplifies not only a school for Indians, but either a school for girls or an early co-ed school.
The same records report that in 1677, "a school for humanities was opened by our Society in the centre of [Maryland], directed by two of the Fathers; and the native youth, applying themselves assiduously to study, made good progress. Maryland and the recently established school sent two boys to St. Omer who yielded in abilities to few Europeans, when competing for the honor of being first in their class. So that not gold, nor silver, nor the other products of the earth alone, but men also are gathered from thence to bring those regions, which foreigners have unjustly called ferocious, to a higher state of virtue and cultivation."

Through the mid-17th century, the Beaver Wars were fought over the fur trade between the Iroquois and the Hurons, the northern Algonquians, and their French allies. During the war the Iroquois destroyed several large tribal confederacies, including the Huron, Neutral, Erie, Susquehannock, and Shawnee, became dominant in the region, and enlarged their territory.

In 1727, the Sisters of the Order of Saint Ursula founded Ursuline Academy in New Orleans, which is currently the oldest continuously operating school for girls and the oldest Catholic school in the United States. From the time of its foundation, it offered the first classes for Native American girls, and it would later offer classes for female African-American slaves and free women of color.

Between 1754 and 1763, many Native American tribes were involved in the French and Indian War/Seven Years' War. Those involved in the fur trade tended to ally with French forces against British colonial militias. The British had made fewer allies, but they were joined by some tribes that wanted to prove assimilation and loyalty in support of treaties to preserve their territories. They were often disappointed when such treaties were later overturned. The tribes had their own purposes, using their alliances with the European powers to battle traditional Native enemies. Some Iroquois who were loyal to the British, and helped them fight in the American Revolution, fled north into Canada.

After European explorers reached the West Coast in the 1770s, smallpox rapidly killed at least 30% of Northwest Coast Native Americans. For the next eighty to one hundred years, smallpox and other diseases devastated native populations in the region. Puget Sound area populations, once estimated as high as 37,000 people, were reduced to only 9,000 survivors by the time settlers arrived en masse in the mid-19th century. Smallpox epidemics of 1780–82 and 1837–38 brought devastation and drastic depopulation among the Plains Indians. In 1832, the federal government established a smallpox vaccination program for Native Americans (the Indian Vaccination Act of 1832); it was the first federal program created to address a health problem of Native Americans.

With the meeting of two worlds, animals, insects, and plants were carried from one to the other, both deliberately and by chance, in what is called the Columbian Exchange. In the 16th century, Spaniards and other Europeans brought horses to Mexico. Some of the horses escaped and began to breed and increase their numbers in the wild. As Native Americans adopted the use of the animals, they began to change their cultures in substantial ways, especially by extending their nomadic ranges for hunting. The reintroduction of the horse to North America had a profound impact on Native American culture of the Great Plains.
King Philip's War, also called Metacom's War or Metacom's Rebellion, was the last major armed conflict between the Native American inhabitants of present-day southern New England and English colonists and their Native American allies, lasting from 1675 to 1676. It continued in northern New England (primarily on the Maine frontier) even after King Philip was killed, until a treaty was signed at Casco Bay in April 1678. Some European philosophers considered Native American societies to be truly "natural" and representative of a golden age known to them only through folk history.

During the American Revolution, the newly proclaimed United States competed with the British for the allegiance of Native American nations east of the Mississippi River. Most Native Americans who joined the struggle sided with the British, based both on their trading relationships and on hopes that colonial defeat would result in a halt to further colonial expansion onto Native American land. The first native community to sign a treaty with the new United States government was the Lenape. In 1779, the Sullivan Expedition was carried out during the American Revolutionary War against the British and the four allied nations of the Iroquois; George Washington gave orders that made it clear he wanted the Iroquois threat completely eliminated. The British made peace with the Americans in the Treaty of Paris (1783), through which they ceded vast Native American territories to the United States without informing or consulting the Native Americans. The United States was eager to expand, develop farming and settlements in new areas, and satisfy the land hunger of settlers from New England and new immigrants. The national government initially sought to purchase Native American land by treaties; the states and settlers were frequently at odds with this policy.

United States policy toward Native Americans continued to evolve after the American Revolution. George Washington and Henry Knox believed that Native Americans were equals but that their society was inferior. Washington formulated a policy to encourage the "civilizing" process, laying out a six-point plan for civilization. In the late 18th century, reformers, starting with Washington and Knox, supported educating native children and adults in efforts to "civilize" or otherwise assimilate Native Americans into the larger society (as opposed to relegating them to reservations). The Civilization Fund Act of 1819 promoted this civilization policy by providing funding to societies (mostly religious) that worked on Native American improvement.

The population of California Indians was reduced by 90% during the 19th century: from more than 200,000 in the early 19th century to approximately 15,000 at the end of the century, mostly due to disease. Epidemics swept through California Indian country, such as the 1833 malaria epidemic. The population went into decline as a result of the Spanish authorities forcing Native Californians to live in the missions, where they contracted diseases to which they had little immunity. Dr. Cook estimates that 15,250, or 45% of the population decrease in the missions, was caused by disease. Two epidemics of measles, one in 1806 and the other in 1828, caused many deaths. The mortality rates were so high that the missions were constantly dependent upon new conversions. During the California Gold Rush, many natives were killed by incoming settlers as well as by militia units financed and organized by the California government.
Some scholars contend that the state financing of these militias, as well as the U.S. government's role in other massacres in California, such as the Bloody Island and Yontoket massacres, in which up to 400 or more natives were killed in each massacre, constitutes a campaign of genocide against the native people of California.

As American expansion continued, Native Americans resisted settlers' encroachment in several regions of the new nation (and in unorganized territories), from the Northwest to the Southeast, and then in the West, as settlers encountered the Native American tribes of the Great Plains. East of the Mississippi River, an intertribal army led by Tecumseh, a Shawnee chief, fought a number of engagements in the Northwest during the period 1811–12, known as Tecumseh's War. During the War of 1812, Tecumseh's forces allied themselves with the British. After Tecumseh's death, the British ceased to aid the Native Americans south and west of Upper Canada, and American expansion proceeded with little resistance. Conflicts in the Southeast include the Creek War and the Seminole Wars, both before and after the Indian removals of most members of the Five Civilized Tribes. In the 1830s, President Andrew Jackson signed the Indian Removal Act of 1830, a policy of relocating Indians from their homelands to Indian Territory and reservations in surrounding areas to open their lands for non-native settlement. This resulted in the Trail of Tears.

In July 1845, the New York newspaper editor John L. O'Sullivan coined the phrase "Manifest Destiny" as the "design of Providence" supporting the territorial expansion of the United States. Manifest Destiny had serious consequences for Native Americans, since continental expansion for the U.S. took place at the cost of their occupied land. A justification for the policy of conquest and subjugation of the indigenous people emanated from the stereotyped perceptions of all Native Americans as "merciless Indian savages" (as described in the United States Declaration of Independence). Sam Wolfson in "The Guardian" writes, "The declaration's passage has often been cited as an encapsulation of the dehumanizing attitude toward indigenous Americans that the US was founded on."

The Indian Appropriations Act of 1851 set the precedent for modern-day Native American reservations by allocating funds to move western tribes onto reservations, since there were no more lands available for relocation. Native American nations on the plains in the west continued armed conflicts with the U.S. throughout the 19th century, in what were known generally as the Indian Wars. Notable conflicts in this period include the Dakota War, Great Sioux War, Snake War, Colorado War, and Texas–Indian wars. Expressing the frontier anti-Indian sentiment of the era, Theodore Roosevelt believed the Indians were destined to vanish under the pressure of white civilization, stating as much in an 1886 lecture.

One of the last and most notable events of the Indian Wars was the Wounded Knee Massacre in 1890. In the years leading up to it, the U.S. government had continued to seize Lakota lands. A Ghost Dance ritual on the Northern Lakota reservation at Wounded Knee, South Dakota, led to the U.S. Army's attempt to subdue the Lakota.
The dance was part of a religious movement founded by the Northern Paiute spiritual leader Wovoka that told of the return of the Messiah to relieve the suffering of Native Americans, and promised that if they would live righteous lives and perform the Ghost Dance properly, the European American colonists would vanish, the bison would return, and the living and the dead would be reunited in an Edenic world. On December 29 at Wounded Knee, gunfire erupted, and U.S. soldiers killed up to 300 Indians, mostly old men, women, and children.

Native Americans served in both the Union and Confederate military during the American Civil War. At the outbreak of the war, for example, the minority party of the Cherokees gave its allegiance to the Confederacy, while the majority party originally went for the North. Native Americans fought knowing they might jeopardize their independence, unique cultures, and ancestral lands if they ended up on the losing side of the Civil War. During the war, 28,693 Native Americans served in the Union and Confederate armies, participating in battles such as Pea Ridge, Second Manassas, Antietam, Spotsylvania, and Cold Harbor, and in the Federal assaults on Petersburg. A few Native American tribes, such as the Creek and the Choctaw, were slaveholders and found a political and economic commonality with the Confederacy; the Choctaw owned over 2,000 slaves.

In the 19th century, the incessant westward expansion of the United States incrementally compelled large numbers of Native Americans to resettle further west, often by force and almost always reluctantly. Native Americans believed this forced relocation illegal, given the Treaty of Hopewell of 1785. Under President Andrew Jackson, the United States Congress passed the Indian Removal Act of 1830, which authorized the President to conduct treaties to exchange Native American land east of the Mississippi River for lands west of the river. As many as 100,000 Native Americans relocated to the West as a result of this Indian removal policy. In theory, relocation was supposed to be voluntary, and many Native Americans did remain in the East. In practice, great pressure was put on Native American leaders to sign removal treaties. The most egregious violation, the Trail of Tears, was the removal of the Cherokee by President Jackson to Indian Territory. The 1864 deportation of the Navajos by the U.S. government occurred when 8,000 Navajos were forced to an internment camp in Bosque Redondo, where, under armed guards, more than 3,500 Navajo and Mescalero Apache men, women, and children died from starvation and disease.

In 1817, the Cherokee became the first Native Americans recognized as U.S. citizens. Under Article 8 of the 1817 Cherokee treaty, "Upwards of 300 Cherokees (Heads of Families) in the honest simplicity of their souls, made an election to become American citizens". After the American Civil War, the Civil Rights Act of 1866 stated "that all persons born in the United States, and not subject to any foreign power, excluding Indians not taxed, are hereby declared to be citizens of the United States". In 1871, Congress added a rider to the Indian Appropriations Act, signed into law by President Ulysses S. Grant, ending United States recognition of additional Native American tribes or independent nations and prohibiting additional treaties.
After the Indian wars in the late 19th century, the government established Native American boarding schools, initially run primarily by or affiliated with Christian missionaries. At this time, American society thought that Native American children needed to be acculturated to the general society. The boarding school experience was a total immersion in modern American society, but it could prove traumatic to children, who were forbidden to speak their native languages. They were taught Christianity, not allowed to practice their native religions, and in numerous other ways forced to abandon their Native American identities. Before the 1930s, schools on the reservations provided no schooling beyond the sixth grade; to obtain more education, attending a boarding school was usually necessary. Small reservations with a few hundred people usually sent their children to nearby public schools. The "Indian New Deal" of the 1930s closed many of the boarding schools and downplayed the assimilationist goals. The Indian Division of the Civilian Conservation Corps operated large-scale construction projects on the reservations, building thousands of new schools and community buildings. Under the leadership of John Collier, the Bureau of Indian Affairs (BIA) brought in progressive educators to reshape Indian education. By 1938 the BIA taught 30,000 students in 377 boarding and day schools, or 40% of all Indian children in school. The Navajo largely opposed schooling of any sort, but the other tribes accepted the system. There were now high schools on larger reservations, educating not only teenagers but also an adult audience; there were still no Indian facilities for higher education. The reformers deemphasized textbooks, emphasized self-esteem, and started teaching Indian history. They promoted traditional arts and crafts of the sort that could be conducted on the reservations, such as making jewelry. The New Deal reformers met significant resistance from parents and teachers, and had mixed results. World War II brought younger Indians in contact with the broader society through military service and work in the munitions industries. The role of schooling was changed to focus on vocational education for jobs in urban America. Since the rise of self-determination for Native Americans, they have generally emphasized education of their children at schools near where they live. In addition, many federally recognized tribes have taken over operations of such schools and added programs of language retention and revival to strengthen their cultures. Beginning in the 1970s, tribes have also founded colleges on their reservations, controlled and operated by Native Americans, to educate their young for jobs as well as to pass on their cultures. On August 29, 1911, Ishi, generally considered to have been the last Native American to live most of his life without contact with European-American culture, was discovered near Oroville, California. In 1919, the United States under President Woodrow Wilson granted citizenship to all Native Americans who had served in World War I. Nearly 10,000 men had enlisted and served, a high number in relation to their population. Despite this, in many areas Native Americans faced local resistance when they tried to vote and were discriminated against with barriers to voter registration. On June 2, 1924, U.S. President Calvin Coolidge signed the Indian Citizenship Act, which made all Native Americans born in the United States and its territories American citizens.
Prior to passage of the act, nearly two-thirds of Native Americans were already U.S. citizens, through marriage, military service, or accepting land allotments. The Act extended citizenship to "all non-citizen Indians born within the territorial limits of the United States". Charles Curtis, a Congressman and longtime US Senator from Kansas, was of Kaw, Osage, Potawatomi, and European ancestry. After serving as a United States Representative and being repeatedly re-elected as United States Senator from Kansas, Curtis served as Senate Minority Whip for 10 years and as Senate Majority Leader for five years. He was very influential in the Senate. In 1928 he ran as the vice-presidential candidate on the ticket with Herbert Hoover, and served as vice president from 1929 to 1933. He was the first person with significant Native American ancestry, and the first person with acknowledged non-European ancestry, to be elected to either of the highest offices in the land. American Indians today in the United States have all the rights guaranteed in the U.S. Constitution, can vote in elections, and can run for political office. Controversies remain over the extent of the federal government's jurisdiction over tribal affairs, sovereignty, and cultural practices. Mid-century, the Indian termination policy and the Indian Relocation Act of 1956 marked a new direction for assimilating Native Americans into urban life. Some 44,000 Native Americans served in the United States military during World War II: at the time, one-third of all able-bodied Indian men from eighteen to fifty years of age. Described as the first large-scale exodus of indigenous peoples from the reservations since the removals of the 19th century, the men's service with the U.S. military in the international conflict was a turning point in Native American history. The overwhelming majority of Native Americans welcomed the opportunity to serve; their voluntary enlistment rate was 40% higher than the rate of those drafted. Their fellow soldiers often held them in high esteem, in part since the image of the tough Native American warrior had become a part of the fabric of American historical legend. White servicemen sometimes showed a lighthearted respect toward Native American comrades by calling them "chief". The resulting increase in contact with the world outside of the reservation system brought profound changes to Native American culture. "The war", said the U.S. Indian Commissioner in 1945, "caused the greatest disruption of Native life since the beginning of the reservation era", affecting the habits, views, and economic well-being of tribal members. The most significant of these changes was the opportunity—as a result of wartime labor shortages—to find well-paying work in cities, and many people relocated to urban areas, particularly on the West Coast with the buildup of the defense industry. There were also losses as a result of the war. For instance, a total of 1,200 Pueblo men served in World War II; only about half came home alive. In addition, many more Navajo served as code talkers for the military in the Pacific. The code they made, although cryptologically very simple, was never cracked by the Japanese.
Military service and urban residency contributed to the rise of American Indian activism, particularly after the 1960s and the occupation of Alcatraz Island (1969–1971) by a group of Indian students from San Francisco. In the same period, the American Indian Movement (AIM) was founded in Minneapolis, and chapters were established throughout the country, where American Indians combined spiritual and political activism. Political protests gained national media attention and the sympathy of the American public. Through the mid-1970s, conflicts between governments and Native Americans occasionally erupted into violence. A notable late 20th-century event was the Wounded Knee incident on the Pine Ridge Indian Reservation. Upset with the tribal government and the failures of the federal government to enforce treaty rights, about 300 Oglala Lakota and AIM activists took control of Wounded Knee on February 27, 1973. Indian activists from around the country joined them at Pine Ridge, and the occupation became a symbol of rising American Indian identity and power. Federal law enforcement officials and the National Guard cordoned off the town, and the two sides had a standoff for 71 days. Amid heavy gunfire, one United States Marshal was wounded and paralyzed. In late April, a Cherokee man and a local Lakota man were killed by gunfire; the Lakota elders ended the occupation to ensure no more lives were lost. In June 1975, two FBI agents seeking to make an armed robbery arrest at Pine Ridge Reservation were wounded in a firefight and killed at close range. The AIM activist Leonard Peltier was sentenced in 1976 to two consecutive terms of life in prison for the FBI agents' deaths. In 1968, the government enacted the Indian Civil Rights Act. This gave tribal members most of the protections against abuses by tribal governments that the Bill of Rights accords to all U.S. citizens with respect to the federal government. In 1975, the U.S. government passed the Indian Self-Determination and Education Assistance Act, marking the culmination of fifteen years of policy changes. It resulted from American Indian activism, the Civil Rights Movement, and community development aspects of President Lyndon Johnson's social programs of the 1960s. The Act recognized the right and need of Native Americans for self-determination. It marked the U.S. government's turn away from the 1950s policy of termination of the relationship between tribes and the government. The U.S. government encouraged Native Americans' efforts at self-government and determining their futures. Tribes have developed organizations to administer their own social, welfare, and housing programs, for instance. Tribal self-determination has created tension with respect to the federal government's historic trust obligation to care for Indians; however, the Bureau of Indian Affairs has never lived up to that responsibility. Navajo Community College, now called Diné College, the first tribal college, was founded in Tsaile, Arizona, in 1968 and accredited in 1979. Tensions immediately arose between two philosophies: one that the tribal colleges should have the same criteria, curriculum, and procedures for educational quality as mainstream colleges; the other that the faculty and curriculum should be closely adapted to the particular historical culture of the tribe. There was a great deal of turnover, exacerbated by very tight budgets. In 1994, the U.S.
Congress passed legislation recognizing the tribal colleges as land-grant colleges, which provided opportunities for large-scale funding. Thirty-two tribal colleges in the United States belong to the American Indian Higher Education Consortium. By the early 21st century, tribal nations had also established numerous language revival programs in their schools. In addition, Native American activism has led major universities across the country to establish Native American studies programs and departments, increasing awareness of the strengths of Indian cultures, providing opportunities for academics, and deepening research on history and cultures in the United States. Native Americans have entered academia; journalism and media; politics at local, state, and federal levels; and public service, for instance, influencing medical research and policy to identify issues related to American Indians. In 2009, an "apology to Native Peoples of the United States" was included in the Defense Appropriations Act. It stated that the U.S. "apologizes on behalf of the people of the United States to all Native Peoples for the many instances of violence, maltreatment, and neglect inflicted on Native Peoples by citizens of the United States". In 2013, tribal jurisdiction under the Violence Against Women Act was extended to cover persons in Indian Country who were not tribal members. This closed a gap that had prevented tribal police or courts from arresting or prosecuting abusive partners of tribal members who were not Native or were from another tribe. Migration to urban areas continued to grow, with 70% of Native Americans living in urban areas in 2012, up from 45% in 1970 and 8% in 1940. Urban areas with significant Native American populations include Minneapolis, Denver, Albuquerque, Phoenix, Tucson, Chicago, Oklahoma City, Houston, New York City, Los Angeles, and Rapid City. Many lived in poverty. Racism, unemployment, drugs, and gangs were common problems which Indian social service organizations such as the Little Earth housing complex in Minneapolis attempted to address. Grassroots efforts to support urban Indigenous populations have also taken place, as in the case of Bringing the Circle Together in Los Angeles. The 2010 Census showed that the U.S. population on April 1, 2010, was 308.7 million. Out of the total U.S. population, 2.9 million people, or 0.9 percent, reported American Indian or Alaska Native alone. In addition, 2.3 million people, or another 0.7 percent, reported American Indian or Alaska Native in combination with one or more other races. Together, these two groups totaled 5.2 million people. Thus, 1.7 percent of all people in the United States identified as American Indian or Alaska Native, either alone or in combination with one or more other races. The 2010 census used the Office of Management and Budget definition, under which "American Indian or Alaska Native" refers to a person having origins in any of the original peoples of North and South America (including Central America) and who maintains tribal affiliation or community attachment. The 2010 census permitted respondents to self-identify as being of one or more races. Self-identification dates from the census of 1960; prior to that, the race of the respondent was determined by the opinion of the census taker. The option to select more than one race was introduced in 2000. If American Indian or Alaska Native was selected, the form requested the individual provide the name of the "enrolled or principal tribe".
The census counted 248,000 Native Americans in 1890, 332,000 in 1930, and 334,000 in 1940, including those on and off reservations in the 48 states. Total spending on Native Americans averaged $38 million a year in the late 1920s, dropping to a low of $23 million in 1933 and returning to $38 million in 1940. Today, 78% of Native Americans live outside a reservation. Full-blood individuals are more likely to live on a reservation than mixed-blood individuals. The Navajo, with 286,000 full-blood individuals, are the largest tribe if only full-blood individuals are counted; they are also the tribe with the highest proportion of full-blood individuals, 86.3%. The Cherokee have a different history: with 819,000 individuals, they are the largest tribe overall, of whom 284,000 are full-blood. According to 2003 United States Census Bureau estimates, a little over one third of the 2,786,652 Native Americans in the United States live in three states: California (413,382), Arizona (294,137), and Oklahoma (279,559). In 2010, the U.S. Census Bureau estimated that about 0.8% of the U.S. population was of American Indian or Alaska Native descent. This population is unevenly distributed across the country. Below, all fifty states, as well as the District of Columbia and Puerto Rico, are listed by the proportion of residents citing American Indian or Alaska Native ancestry, based on the 2010 U.S. Census. In 2006, the U.S. Census Bureau estimated that less than 1.0% of the U.S. population was of Native Hawaiian or Pacific Islander descent. This population is unevenly distributed across twenty-six states. Below are the twenty-six states that had at least 0.1%, listed by the proportion of residents citing Native Hawaiian or Pacific Islander ancestry, based on 2006 estimates. Below are numbers for U.S. citizens self-identifying with selected tribal groupings, according to the 2000 U.S. census. There are 573 federally recognized tribal governments in the United States. These tribes possess the right to form their own governments, to enforce laws (both civil and criminal) within their lands, to tax, to establish requirements for membership, to license and regulate activities, to zone, and to exclude persons from tribal territories. Limitations on tribal powers of self-government include the same limitations applicable to states; for example, neither tribes nor states have the power to make war, engage in foreign relations, or coin money (this includes paper currency). Many Native Americans and advocates of Native American rights point out that the U.S. federal government's claim to recognize the "sovereignty" of Native American peoples falls short, given that the United States wishes to govern Native American peoples and treat them as subject to U.S. law. Such advocates contend that full respect for Native American sovereignty would require the U.S.
government to deal with Native American peoples in the same manner as any other sovereign nation, handling matters related to relations with Native Americans through the Secretary of State, rather than the Bureau of Indian Affairs. The Bureau of Indian Affairs reports on its website that its "responsibility is the administration and management of" land held in trust by the United States for American Indians, Indian tribes, and Alaska Natives. Many Native Americans and advocates of Native American rights believe that it is condescending for such lands to be considered "held in trust" and regulated in any fashion by any authority other than their own tribes, whether the U.S. or Canadian governments or any other non-Native American authority. As of 2000, the largest groups in the United States by population were Navajo, Cherokee, Choctaw, Sioux, Chippewa, Apache, Blackfeet, Iroquois, and Pueblo. In 2000, eight of ten Americans with Native American ancestry were of mixed ancestry. It is estimated that by 2100 that figure will rise to nine out of ten. In addition, there are a number of tribes that are recognized by individual states, but not by the federal government. The rights and benefits associated with state recognition vary from state to state. Some tribal groups have been unable to document the cultural continuity required for federal recognition. The Muwekma Ohlone of the San Francisco Bay Area are pursuing litigation in the federal court system to establish recognition. Many of the smaller eastern tribes, long considered remnants of extinct peoples, have been trying to gain official recognition of their tribal status. Several tribes in Virginia and North Carolina have gained state recognition. Federal recognition confers some benefits, including the right to label arts and crafts as Native American and permission to apply for grants that are specifically reserved for Native Americans. But gaining federal recognition as a tribe is extremely difficult; to be established as a tribal group, members have to submit extensive genealogical proof of tribal descent and continuity of the tribe as a culture. In July 2000, the Washington State Republican Party adopted a resolution recommending that the executive and legislative branches of the U.S. government terminate tribal governments. In 2007, a group of Democratic Party congressmen and congresswomen introduced a bill in the U.S. House of Representatives to "terminate" the Cherokee Nation. This was related to the Cherokee's voting to exclude Cherokee Freedmen as members of the tribe unless they had a Cherokee ancestor on the Dawes Rolls, although all Cherokee Freedmen and their descendants had been members since 1866. As of 2004, various Native Americans are wary of attempts by others to gain control of their reservation lands for natural resources, such as coal and uranium in the West. In the state of Virginia, Native Americans face a unique problem. Until 2017, Virginia had no federally recognized tribes, although the state had recognized eight. This is related historically to the greater impact of disease and warfare on the Virginia Indian populations, as well as their intermarriage with Europeans and Africans. Some people confused ancestry with culture, but groups of Virginia Indians maintained their cultural continuity. Most of their early reservations were ended under the pressure of early European settlement.
Some historians also note the problems of Virginia Indians in establishing documented continuity of identity, due to the work of Walter Ashby Plecker, registrar of the state's Bureau of Vital Statistics from 1912 to 1946. Plecker applied his own interpretation of the one-drop rule, enacted in law in 1924 as the state's Racial Integrity Act, which recognized only two races: "white" and "colored". A segregationist, Plecker believed that the state's Native Americans had been "mongrelized" by intermarriage with African Americans; to him, ancestry determined identity, rather than culture. He thought that some people of partial black ancestry were trying to "pass" as Native Americans. Plecker thought that anyone with any African heritage had to be classified as colored, regardless of appearance, amount of European or Native American ancestry, and cultural or community identification. Plecker pressured local governments into reclassifying all Native Americans in the state as "colored", and gave them lists of family surnames to examine for reclassification based on his interpretation of data and the law. This led to the state's destruction of accurate records related to families and communities who identified as Native American (as in church records and daily life). Through his actions, different members of the same family were sometimes split by being classified as "white" or "colored". He did not allow people to enter their primary identification as Native American in state records. In 2009, the Senate Indian Affairs Committee endorsed a bill that would grant federal recognition to tribes in Virginia. To achieve federal recognition and its benefits, tribes must prove continuous existence since 1900. The federal government has maintained this requirement, in part because, through participation on councils and committees, federally recognized tribes have been adamant that other groups satisfy the same requirements as they did. The Civil Rights Movement was a very significant moment for the rights of Native Americans and other people of color. Native Americans had faced racism and prejudice for hundreds of years, and this increased after the American Civil War. Native Americans, like African Americans, were subjected to the Jim Crow laws and segregation in the Deep South, especially after they were made citizens through the Indian Citizenship Act of 1924. As a body of law, Jim Crow institutionalized economic, educational, and social disadvantages for Native Americans and other people of color living in the South. Native American identity was especially targeted by a system that wanted to recognize only white or colored, and the government began to question the legitimacy of some tribes because they had intermarried with African Americans. Native Americans were also discriminated against and discouraged from voting in the southern and western states. In the South, segregation was a major problem for Native Americans seeking education, but the NAACP's legal strategy would later change this. Court victories such as Brown v. Board of Education, a major triumph for the Civil Rights Movement headed by the NAACP, inspired Native Americans to start participating in the Civil Rights Movement. Dr. Martin Luther King Jr. began assisting Native Americans in the South in the late 1950s after they reached out to him. At that time the remaining Creek in Alabama were trying to completely desegregate schools in their area.
In this case, light-complexioned Native children were allowed to ride school buses to previously all-white schools, while dark-skinned Native children from the same band were barred from riding the same buses. Tribal leaders, upon hearing of King's desegregation campaign in Birmingham, Alabama, contacted him for assistance. He promptly responded and, through his intervention, the problem was quickly resolved. King would later make trips to Arizona, visiting Native Americans on reservations and in churches, encouraging them to be involved in the Civil Rights Movement. In his book "Why We Can't Wait", King writes: Our nation was born in genocide when it embraced the doctrine that the original American, the Indian, was an inferior race. Even before there were large numbers of Negroes on our shores, the scar of racial hatred had already disfigured colonial society. From the sixteenth century forward, blood flowed in battles over racial supremacy. We are perhaps the only nation which tried as a matter of national policy to wipe out its indigenous population. Moreover, we elevated that tragic experience into a noble crusade. Indeed, even today we have not permitted ourselves to reject or to feel remorse for this shameful episode. Our literature, our films, our drama, our folklore all exalt it. Native Americans subsequently participated actively in and supported the NAACP and the Civil Rights Movement. The National Indian Youth Council (NIYC) arose in 1961 to fight for Native American rights during the Civil Rights Movement and was a strong supporter of Dr. Martin Luther King Jr. During the 1963 March on Washington there was a sizable Native American contingent, including many from South Dakota and many from the Navajo Nation. Native Americans also participated in the Poor People's Campaign in 1968. The NIYC was a very active supporter of the Poor People's Campaign, unlike the National Congress of American Indians (NCAI); the NIYC and other Native organizations met with King in March 1968, but the NCAI disagreed on how to approach the anti-poverty campaign and decided against participating in the march. The NCAI wished to pursue its battles in the courts and with Congress, unlike the NIYC. The NAACP also inspired the creation of the Native American Rights Fund (NARF), which was patterned after the NAACP's Legal Defense and Education Fund. Furthermore, the NAACP continued to organize to stop mass incarceration and end the criminalization of Native Americans and other communities of people of color. On May 1, 1968, during a meeting with Secretary of State Dean Rusk, the activist Mel Thom delivered a statement written by members of the Workshop on American Indian Affairs and the NIYC. Native American struggles amid poverty to maintain life on the reservation or in larger society have resulted in a variety of health issues, some related to nutrition and health practices. The community suffers from a vulnerability to, and a disproportionately high rate of, alcoholism. Recent studies also point to rising rates of stroke, heart disease, and diabetes in the Native American population. In a study conducted in 2006–2007, non-Native Americans admitted they rarely encountered Native Americans in their daily lives. While sympathetic toward Native Americans and expressing regret over the past, most people had only a vague understanding of the problems facing Native Americans today.
For their part, Native Americans told researchers that they believed they continued to face prejudice, mistreatment, and inequality in the broader society. Federal contractors and subcontractors, such as businesses and educational institutions, are legally required to adopt equal opportunity employment and affirmative action measures intended to prevent discrimination against employees or applicants for employment on the basis of "color, religion, sex, or national origin". For this purpose, a Native American is defined as "A person having origins in any of the original peoples of North and South America (including Central America), and who maintains a tribal affiliation or community attachment". However, self-reporting is permitted: "Educational institutions and other recipients should allow students and staff to self-identify their race and ethnicity unless self-identification is not practicable or feasible." Self-reporting opens the door to "box checking" by people who, despite not having a substantial relationship to Native American culture, innocently or fraudulently check the box for Native American. The passage of the Indian Relocation Act saw a 56% increase in Native American city dwellers over 40 years. The Native American urban poverty rate exceeds the reservation poverty rate, owing to discrimination in hiring processes. The difficulties that Native Americans face in the workforce, such as a lack of promotions and wrongful terminations, are attributed to racial stereotypes and implicit biases. Native American business owners are seldom offered auxiliary resources that are crucial for entrepreneurial success. American Indian activists in the United States and Canada have criticized the use of Native American mascots in sports as perpetuating stereotypes; such use is considered cultural appropriation. There has been a steady decline in the number of secondary school and college teams using such names, images, and mascots. Some tribal team names have been approved by the tribe in question, such as the Seminole Tribe of Florida's approving use of their name for the teams of Florida State University. Among professional teams, only the NBA's Golden State Warriors have discontinued the use of Native American-themed logos, doing so in 1971. Controversy has remained regarding teams such as the NFL's Washington Redskins, whose name is considered to be a racial slur, and MLB's Cleveland Indians, whose use of a caricature called Chief Wahoo has also faced protest. Native Americans have been depicted by American artists in various ways at different periods. A number of 19th- and 20th-century United States and Canadian painters, often motivated by a desire to document and preserve Native culture, specialized in Native American subjects. Among the most prominent of these were Elbridge Ayer Burbank, George Catlin, Seth Eastman, Paul Kane, W. Langdon Kihn, Charles Bird King, Joseph Henry Sharp, and John Mix Stanley. Early portrayals of Native Americans in 20th-century movies and television were performed by European Americans dressed in mock traditional attire. Examples included "The Last of the Mohicans" (1920), "Hawkeye and the Last of the Mohicans" (1957), and "F Troop" (1965–67). In later decades, Native American actors such as Jay Silverheels in "The Lone Ranger" television series (1949–57) came to prominence. Roles of Native Americans were limited and not reflective of Native American culture.
By the 1970s some Native American film roles began to show more complexity, as in "Little Big Man" (1970), "Billy Jack" (1971), and "The Outlaw Josey Wales" (1976), though Native Americans were still depicted in minor supporting roles. For years, Native people on U.S. television were relegated to secondary, subordinate roles. During the years of the series "Bonanza" (1959–1973), no major or secondary Native characters appeared on a consistent basis. The series "The Lone Ranger" (1949–1957), "Cheyenne" (1955–1963), and "Law of the Plainsman" (1959–1963) had Native characters who were essentially aides to the central white characters. This continued in such series as "How the West Was Won". These programs resembled the "sympathetic" yet contradictory film "Dances With Wolves" of 1990, in which, according to Ella Shohat and Robert Stam, the narrative choice was to relate the Lakota story as told through a Euro-American voice, for wider impact among a general audience. Like the 1992 remake of "The Last of the Mohicans" and "Geronimo: An American Legend" (1993), "Dances with Wolves" employed a number of Native American actors and made an effort to portray Indigenous languages. In 2009, "We Shall Remain", a television documentary by Ric Burns and part of the "American Experience" series, presented five episodes "from a Native American perspective". It represented "an unprecedented collaboration between Native and non-Native filmmakers and involves Native advisors and scholars at all levels of the project". The five episodes explore the impact of King Philip's War on the northeastern tribes, the "Native American confederacy" of Tecumseh's War, the U.S.-forced relocation of Southeastern tribes known as the Trail of Tears, and the pursuit and capture of Geronimo and the Apache Wars, and conclude with the Wounded Knee incident, participation by the American Indian Movement, and the increasing resurgence of modern Native cultures since. Native Americans are often known as Indians or American Indians. The term "Native American" was introduced in the United States in preference to the older term "Indian" to distinguish the indigenous peoples of the Americas from the people of India and to avoid the negative stereotypes associated with the term "Indian". However, a 1995 U.S. Census Bureau survey found that a plurality of indigenous Americans preferred the term "American Indian" to "Native American", and many tribes include the word "Indian" in their formal titles. Criticism of the neologism "Native American" comes from diverse sources. Russell Means, an American Indian activist, opposed the term "Native American" because he believed it was imposed by the government without the consent of American Indians. He also argued that the use of the word "Indian" derives not from a confusion with India but from a Spanish expression "en Dios", meaning "in God" (and a near-homophone of the Spanish word for "Indians", "indios"). Most American Indians are comfortable with "Indian", "American Indian", and "Native American", and the terms are often used interchangeably. The traditional term is reflected in the name chosen for the National Museum of the American Indian, which opened in 2004 on the Mall in Washington, D.C. Gambling has become a leading industry for many tribes. Casinos operated by many Native American governments in the United States are creating a stream of gambling revenue that some communities are beginning to leverage to build diversified economies.
Although many Native American tribes have casinos, the impact of Native American gaming is widely debated. Some tribes, such as the Winnemem Wintu of Redding, California, feel that casinos and their proceeds destroy culture from the inside out. These tribes refuse to participate in the gambling industry. Numerous tribes around the country have entered the financial services market, including the Otoe-Missouria, Tunica-Biloxi, and the Rosebud Sioux. Because of the challenges involved in starting a financial services business from scratch, many tribes hire outside consultants and vendors to help them launch these businesses and manage the regulatory issues involved. Similar to the tribal sovereignty debates that occurred when tribes first entered the gaming industry, the tribes, states, and federal government are currently in disagreement regarding who possesses the authority to regulate these e-commerce business entities. Under the 1885 Major Crimes Act (18 U.S.C. §§ 1153, 3242) and subsequent court decisions, serious crime, historically endemic on reservations, must be investigated by the federal government, usually the Federal Bureau of Investigation, and prosecuted by United States Attorneys of the United States federal judicial district in which the reservation lies. A December 13, 2009 "New York Times" article about growing gang violence on the Pine Ridge Indian Reservation estimated that there were 39 gangs with 5,000 members on that reservation alone. Navajo country recently reported 225 gangs in its territory. As of 2012, a high incidence of rape continued to impact Native American women and Alaska Native women. According to the Department of Justice, 1 in 3 Native women have suffered rape or attempted rape, more than twice the national rate. About 46 percent of Native American women have been raped, beaten, or stalked by an intimate partner, according to a 2010 study by the Centers for Disease Control. According to Professor N. Bruce Duthu, "More than 80 percent of Indian victims identify their attacker as non-Indian". Today, aside from the tribes that successfully run casinos, many tribes struggle, as they are often located on reservations isolated from the main economic centers of the country. The estimated 2.1 million Native Americans are the most impoverished of all ethnic groups. According to the 2000 Census, an estimated 400,000 Native Americans reside on reservation land. While some tribes have had success with gaming, only 40% of the 562 federally recognized tribes operate casinos. According to a 2007 survey by the U.S. Small Business Administration, only 1% of Native Americans own and operate a business. The barriers to economic development on Native American reservations, identified by Joseph Kalt and Stephen Cornell of the Harvard Project on American Indian Economic Development at Harvard University in their report "What Can Tribes Do? Strategies and Institutions in American Indian Economic Development" (2008), can be summarized as follows. A major barrier to development is the lack of entrepreneurial knowledge and experience within Indian reservations. "A general lack of education and experience about business is a significant challenge to prospective entrepreneurs", according to a 2004 report on Native American entrepreneurship by the Northwest Area Foundation. "Native American communities that lack entrepreneurial traditions and recent experiences typically do not provide the support that entrepreneurs need to thrive.
Consequently, experiential entrepreneurship education needs to be embedded into school curricula and after-school and other community activities. This would allow students to learn the essential elements of entrepreneurship from a young age and encourage them to apply these elements throughout life". "Rez Biz" magazine addresses these issues. Some scholars argue that the existing theories and practices of economic development are not suitable for Native American communities—given the lifestyle, economic, and cultural differences, as well as the unique history of Native American-U.S. relations. Most economic development research has not been conducted on Native American communities. The federal government fails to consider place-based issues of American Indian poverty by generalizing the demographic. In addition, the concept of economic development threatens to upend the multidimensionality of Native American culture. The dominance of federal government involvement in indigenous developmental activities perpetuates and exacerbates the salvage paradigm. Native land that is owned by individual Native Americans sometimes cannot be developed because of fractionalization. Fractionalization occurs when a landowner dies and the land is inherited by the children, but not subdivided; over generations, a single parcel might come to be owned by 50 different individuals. A majority of those holding an interest must agree to any proposal to develop the land, and establishing this consent is time-consuming, cumbersome, and sometimes impossible. Another landownership issue on reservations is checkerboarding, where Tribal land is interspersed with land owned by the federal government on behalf of Natives, individually owned plots, and land owned by non-Native individuals. This prevents Tribal governments from securing plots of land large enough for economic development or agricultural uses. Because reservation land is owned "in trust" by the federal government, individuals living on reservations cannot build equity in their homes. This bars Native Americans from getting loans, as there is nothing that a bank can collect if the loan is not paid. Past efforts to encourage landownership (such as the Dawes Act) resulted in a net loss of Tribal land. Once Native American landowners were deemed familiar with their smallholder status, trust restrictions were lifted and their land was transferred back to them, contingent on a transaction fee paid to the federal government. The transfer fee discouraged Native American land ownership, and by the 1920s 65% of tribal-owned land had been sold to non-Native Americans. Activists against property rights point to historical evidence of communal ownership of land and resources by tribes. They claim that because of this history, property rights are foreign to Natives and have no place in the modern reservation system. Those in favor of property rights cite examples of tribes negotiating with colonial communities or other tribes about fishing and hunting rights in an area. Land ownership was also a challenge because of the differing definitions of land held by Natives and Europeans. Most Native American tribes thought of property rights more as "borrowing" the land, while Europeans thought of land as individual property. Efforts such as the Oklahoma Indian Welfare Act were attempts to keep tribal land in Native American hands; however, such bureaucratic decisions only expanded the size of the bureaucracy.
The knowledge disconnect between the decision-making bureaucracy and Native American stakeholders resulted in ineffective development efforts. Traditional Native American entrepreneurship does not prioritize profit maximization; rather, business transactions must align with Native American social and cultural values. In response to indigenous business philosophy, the federal government created policies that aimed to formalize Native American business practices, which undermined the Native American status quo. Additionally, legal disputes interfered with tribal land leasing and were settled with verdicts against tribal sovereignty. Often, bureaucratic overseers of development are far removed from Native American communities and lack the knowledge and understanding needed to develop plans or make resource allocation decisions. Heavy top-down involvement in developmental operations leads bureaucrats to pursue self-serving agendas; such incidents include fabricated reports that exaggerate results. While Native American urban poverty is attributed to hiring and workplace discrimination in a heterogeneous setting, reservation and trust land poverty rates are endogenous to the scarcity of opportunities in isolated regions. Historical trauma is described as collective emotional and psychological damage throughout a person's lifetime and across multiple generations. Examples of historical trauma can be seen in the Wounded Knee Massacre of 1890, in which over 200 unarmed Lakota were killed, and the Dawes Allotment Act of 1887, under which American Indians lost four-fifths of their land. American Indian youth have higher rates of substance and alcohol abuse deaths than the general population. Many American Indians can trace the beginning of their substance and alcohol abuse to a traumatic event related to their offender's own substance abuse. A person's substance abuse can be described as a defense mechanism against the user's emotions and trauma. For American Indians, alcoholism is a symptom of trauma passed from generation to generation and influenced by oppressive behaviors and policies of the dominant Euro-American society. Boarding schools were designed to "Kill the Indian, Save the Man". Shame among American Indians can be attributed to the hundreds of years of discrimination. The culture of Pre-Columbian North America is usually defined by the concept of the culture area, namely a geographical region where shared cultural traits occur. The Northwest culture area, for example, shared common traits such as salmon fishing, woodworking, large villages or towns, and a hierarchical social structure. Though cultural features, language, clothing, and customs vary enormously from one tribe to another, there are certain elements which are encountered frequently and shared by many tribes. Early European American scholars described the Native Americans as having a society dominated by clans. European colonization of the Americas had a major impact on Native American culture through what is known as the Columbian exchange. The Columbian exchange, also known as the Columbian interchange, was the widespread transfer of plants, animals, culture, human populations, technology, and ideas between the Americas and the Old World in the 15th and 16th centuries, following Christopher Columbus's 1492 voyage.
The Columbian exchange generally had a destructive impact on Native American culture through disease and a 'clash of cultures', whereby European values of private property, the family, and labor led to conflict, the appropriation of traditional communal lands, and changes in how the indigenous tribes practiced slavery. The impact of the Columbian exchange was not entirely negative, however. For example, the re-introduction of the horse to North America allowed the Plains Indians to revolutionize their way of life by making hunting, trading, and warfare far more effective, and to greatly improve their ability to transport possessions and move their settlements. The Great Plains tribes were still hunting the bison when they first encountered the Europeans. The Spanish reintroduction of the horse to North America in the 17th century and Native Americans' learning to use them greatly altered the Native Americans' culture, including changing the way in which they hunted large game. Horses became such a valuable, central element of Native lives that they were counted as a measure of wealth. In the early years, as these native peoples encountered European explorers and settlers and engaged in trade, they exchanged food, crafts, and furs for blankets, iron and steel implements, horses, trinkets, firearms, and alcoholic beverages. Among Native American languages, the Na-Dené, Algic, and Uto-Aztecan families are the largest in terms of number of languages. Uto-Aztecan has the most speakers (1.95 million) if the languages in Mexico are considered (mostly due to 1.5 million speakers of Nahuatl); Na-Dené comes in second with approximately 200,000 speakers (nearly 180,000 of these are speakers of Navajo), and Algic in third with about 180,000 speakers (mainly Cree and Ojibwe). Na-Dené and Algic have the widest geographic distributions: Algic currently spans from northeastern Canada across much of the continent down to northeastern Mexico (due to later migrations of the Kickapoo) with two outliers in California (Yurok and Wiyot); Na-Dené spans from Alaska and western Canada through Washington, Oregon, and California to the U.S. Southwest and northern Mexico (with one outlier in the Plains). Several families consist of only two or three languages. Demonstrating genetic relationships has proved difficult due to the great linguistic diversity present in North America. Two large (super-)family proposals, Penutian and Hokan, look particularly promising. However, even after decades of research, a large number of separate families remain. A number of English words have been derived from Native American languages. To counteract a shift to English, some Native American tribes have initiated language immersion schools for children, where a native Indian language is the medium of instruction. For example, the Cherokee Nation initiated a 10-year language preservation plan that involved raising new fluent speakers of the Cherokee language from childhood on up through school immersion programs, as well as a collaborative community effort to continue to use the language at home. This plan was part of an ambitious goal that, in 50 years, 80% or more of the Cherokee people will be fluent in the language. The Cherokee Preservation Foundation has invested $3 million in opening schools, training teachers, and developing curricula for language education, as well as initiating community gatherings where the language can be actively used.
Formed in 2006, the Kituwah Preservation & Education Program (KPEP) on the Qualla Boundary focuses on language immersion programs for children from birth to fifth grade, developing cultural resources for the general public, and community language programs to foster the Cherokee language among adults. There is also a Cherokee language immersion school in Tahlequah, Oklahoma, that educates students from pre-school through eighth grade. Because Oklahoma's official language is English, Cherokee immersion students are hindered when taking state-mandated tests because they have little competence in English. The Oklahoma Department of Education reported the following results on the 2012 state tests: 11% of the school's sixth-graders showed proficiency in math and 25% in reading; 31% of the seventh-graders showed proficiency in math and 87% in reading; and 50% of the eighth-graders showed proficiency in math and 78% in reading. The Oklahoma Department of Education listed the charter school as a Targeted Intervention school, meaning the school was identified as a low-performing school but not so low-performing that it was a Priority School. Ultimately, the school made a C, or a 2.33 grade point average, on the state's A-F report card system. The report card shows the school getting an F in mathematics achievement and mathematics growth, a C in social studies achievement, a D in reading achievement, and an A in reading growth and student attendance. "The C we made is tremendous," said school principal Holly Davis. "[T]here is no English instruction in our school's younger grades, and we gave them this test in English." She said she had anticipated the low grade because it was the school's first year as a state-funded charter school, and many students had difficulty with English. Eighth graders who graduate from the Tahlequah immersion school are fluent speakers of the language, and they usually go on to attend Sequoyah High School, where classes are taught in both English and Cherokee. Historical diets of Native Americans differed dramatically from region to region. Different peoples might rely more heavily on agriculture, horticulture, hunting, fishing, or the gathering of wild plants and fungi. Tribes developed diets best suited to their environments. The Iñupiat, Yupiit, Unangan, and fellow Alaska Natives fished, hunted, and harvested wild plants, but did not rely on agriculture. Coastal peoples relied more heavily on sea mammals, fish, and fish eggs, while inland peoples hunted caribou and moose. Alaska Natives prepared and preserved dried and smoked meat and fish. Pacific Northwest tribes crafted long seafaring dugouts for fishing. In the Eastern Woodlands, early peoples independently invented agriculture and by 1800 BCE had developed the crops of the Eastern Agricultural Complex, which include squash ("Cucurbita pepo ssp. ovifera"), sunflower ("Helianthus annuus var. macrocarpus"), goosefoot ("Chenopodium berlandieri"), and marsh elder ("Iva annua var. macrocarpa"). The Sonoran Desert region, including parts of Arizona and California, part of a region known as Aridoamerica, relied heavily on the tepary bean ("Phaseolus acutifolius") as a staple crop. This and other desert crops, including mesquite bean pods, "tunas" (prickly pear fruit), cholla buds, saguaro cactus fruit, and acorns, are being actively promoted today by Tohono O'odham Community Action. In the Southwest, some communities developed irrigation techniques while others, such as the Hopi, dry-farmed.
They filled storehouses with grain as protection against the area's frequent droughts. Maize, or corn, first cultivated in what is now Mexico, was traded north into Aridoamerica and Oasisamerica in the Southwest. From there, maize cultivation spread throughout the Great Plains and the Eastern Woodlands by 200 CE. Native farmers practiced polycropping maize, beans, and squash; these crops are known as the Three Sisters. The beans replaced the nitrogen that the maize leached from the ground, and used the corn stalks as supports for climbing. Agricultural gender roles among Native Americans varied from region to region. In the Southwest area, men prepared the soil with hoes; the women were in charge of planting, weeding, and harvesting the crops. In most other regions, the women were in charge of most agriculture, including clearing the land. Clearing the land was an immense chore, since the Native Americans rotated fields. Europeans in the eastern part of the continent observed that Native Americans cleared large areas for cropland. Their fields in New England sometimes covered hundreds of acres. Colonists in Virginia noted thousands of acres under cultivation by Native Americans. Early farmers commonly used tools such as the hoe, maul, and dibber. The hoe was the main tool used to till the land and prepare it for planting; then it was used for weeding. The first versions were made out of wood and stone. When the settlers brought iron, Native Americans switched to iron hoes and hatchets. The dibber was a digging stick, used to plant the seed. Once the plants were harvested, women prepared the produce for eating. They used the maul to grind the corn into mash. It was cooked and eaten that way or baked as corn bread. Native American religious practices differ widely across the country. These spiritualities may accompany adherence to another faith or can represent a person's primary religious identity. Much Native American spiritualism exists in a tribal-cultural continuum, and as such cannot be easily separated from tribal identity itself. Cultural religious practices of some tribes include the use of sacred herbs such as tobacco, sweetgrass, or sage. Many Plains tribes have sweatlodge ceremonies, though the specifics of the ceremony vary among tribes. Fasting, singing, prayer in the ancient languages of their people, and sometimes drumming are also common. The Midewiwin Lodge is a medicine society inspired by the oral history and prophecies of the Ojibwa (Chippewa) and related tribes. Another significant religious body among Native peoples is known as the Native American Church. It is a syncretistic church incorporating elements of Native spiritual practice from a number of different tribes as well as symbolic elements from Christianity. Its main rite is the peyote ceremony. Prior to 1890, traditional religious beliefs included Wakan Tanka. In the American Southwest, especially New Mexico, a syncretism between the Catholicism brought by Spanish missionaries and the native religion is common; the religious drums, chants, and dances of the Pueblo people are regularly part of Masses at Santa Fe's Saint Francis Cathedral. Native American-Catholic syncretism is also found elsewhere in the United States (e.g., the National Kateri Tekakwitha Shrine in Fonda, New York, and the National Shrine of the North American Martyrs in Auriesville, New York).
The eagle feather law (Title 50 Part 22 of the Code of Federal Regulations) stipulates that only individuals of certifiable Native American ancestry enrolled in a federally recognized tribe are legally authorized to obtain eagle feathers for religious or spiritual use. The law does not allow Native Americans to give eagle feathers to non-Native Americans. Gender roles are differentiated in many Native American tribes. Many Natives have historically defied colonial expectations of sexuality and gender, and continue to do so in contemporary life. Whether a particular tribe is predominantly matrilineal or patrilineal, often both sexes have some degree of decision-making power within the tribe. Many Nations, such as the Haudenosaunee Five Nations and the Southeast Muskogean tribes, have matrilineal or Clan Mother systems, in which property and hereditary leadership are controlled by and passed through the maternal lines. In these Nations, the children are considered to belong to the mother's clan. In Cherokee culture, women own the family property. When traditional young women marry, their husbands may join them in their mother's household. Matrilineal structures enable young women to have assistance in childbirth and child-rearing, and protect them in case of conflicts between the couple. If a couple separates or the man dies, the woman has her family to assist her. In matrilineal cultures, the mother's brothers are usually the leading male figures in her children's lives; fathers have no standing in their wife's and children's clan, as they still belong to their own mother's clan. Hereditary clan chief positions pass through the mother's line, and chiefs have historically been selected on the recommendation of women elders, who could also disapprove of a chief. In the patrilineal tribes, such as the Omaha, Osage, Ponca, and Lakota, hereditary leadership passes through the male line, and children are considered to belong to the father and his clan. In patrilineal tribes, if a woman marries a non-Native, she is no longer considered part of the tribe, and her children are considered to share the ethnicity and culture of their father. In patriarchal tribes, gender roles tend to be rigid. Men have historically hunted, traded, and made war while, as life-givers, women have had primary responsibility for the survival and welfare of the families (and the future of the tribe). Women usually gather and cultivate plants, use plants and herbs to treat illnesses, care for the young and the elderly, make all the clothing and instruments, and process and cure meat and skins from the game. Some mothers use cradleboards to carry an infant while working or traveling. In matriarchal and egalitarian nations, the gender roles are usually not so clear-cut, and are even less so in the modern era. At least several dozen tribes allowed men to marry sisters (sororal polygyny), with procedural and economic limits. Lakota, Dakota, and Nakota girls are encouraged to learn to ride, hunt, and fight. Though fighting in war has mostly been left to the boys and men, occasionally women have fought as well – both in battles and in defense of the home – especially if the tribe was severely threatened. Native American leisure time led to competitive individual and team sports. Jim Thorpe, Joe Hipp, Notah Begay III, Chris Wondolowski, Jacoby Ellsbury, Joba Chamberlain, Kyle Lohse, Sam Bradford, Jack Brisco, Tommy Morrison, Billy Mills, Angel Goodrich, Shoni Schimmel, and Kyrie Irving are well-known professional athletes.
Native American ball sports, sometimes referred to as lacrosse, stickball, or baggataway, were often used to settle disputes rather than going to war, as a civil way to resolve potential conflict. The Choctaw called it "isitoboli" ("Little Brother of War"); the Onondaga name was "dehuntshigwa'es" ("men hit a rounded object"). There are three basic versions, classified as Great Lakes, Iroquoian, and Southern. The game is played with one or two rackets or sticks and one ball. The object of the game is to land the ball in the opposing team's goal (either a single post or net) to score, and to prevent the opposing team from scoring on your goal. The game involves as few as 20 or as many as 300 players, with no height or weight restrictions and no protective gear. The goals could be anywhere from a few hundred feet to a few miles apart; a modern lacrosse field, by contrast, is about 100 yards long. Chunkey was a game played with a disk-shaped stone about 1–2 inches in diameter. The disk was rolled down a corridor at great speed, and players threw wooden shafts at it as it passed. The object of the game was to strike the disk or prevent your opponents from hitting it. Jim Thorpe, a Sauk and Fox Native American, was an all-round athlete who played football and baseball in the early 20th century. Future President Dwight Eisenhower injured his knee while trying to tackle the young Thorpe. In a 1961 speech, Eisenhower recalled Thorpe: "Here and there, there are some people who are supremely endowed. My memory goes back to Jim Thorpe. He never practiced in his life, and he could do anything better than any other football player I ever saw." At the time of the 1912 Olympics, Thorpe could run the 100-yard dash in 10 seconds flat, the 220 in 21.8 seconds, the 440 in 51.8 seconds, the 880 in 1:57, the mile in 4:35, the 120-yard high hurdles in 15 seconds, and the 220-yard low hurdles in 24 seconds. He could long jump 23 ft 6 in and high-jump 6 ft 5 in, and he also competed in the pole vault, shot put, javelin, and discus. Thorpe entered the U.S. Olympic trials for the pentathlon and the decathlon. Louis Tewanima, of the Hopi people, was an American two-time Olympic distance runner and silver medalist in the 10,000 meter run in 1912. He ran for the Carlisle Indian School, where he was a teammate of Jim Thorpe. His silver medal in 1912 remained the best U.S. achievement in this event until another Indian, Billy Mills, won the gold medal in 1964. Tewanima also competed at the 1908 Olympics, where he finished in ninth place in the marathon. Ellison Brown, of the Narragansett people from Rhode Island, better known as "Tarzan" Brown, won two Boston Marathons (1936, 1939) and competed on the United States Olympic team in the 1936 Olympic Games in Berlin, Germany, but did not finish due to injury. He qualified for the 1940 Olympic Games in Helsinki, Finland, but the games were canceled due to the outbreak of World War II. Billy Mills, a Lakota and USMC officer, won the gold medal in the 10,000 meter run at the 1964 Tokyo Olympics. He was the only American ever to win Olympic gold in this event. An unknown before the Olympics, Mills had finished second in the U.S. Olympic trials. Billy Kidd, part Abenaki from Vermont, became the first American male to medal in alpine skiing in the Olympics, taking silver at age 20 in the slalom at the 1964 Winter Olympics in Innsbruck, Austria. 
Six years later at the 1970 World Championships, Kidd won the gold medal in the combined event and took the bronze medal in the slalom. Ashton Locklear (Lumbee), an uneven bars specialist, was an alternate for the 2016 Summer Olympics U.S. gymnastics team, the Final Five. Kyrie Irving (Sioux) also helped Team USA win the gold medal at the 2016 Summer Olympics. With the win, he became just the fourth member of Team USA to capture an NBA championship and an Olympic gold medal in the same year, joining LeBron James, Michael Jordan, and Scottie Pippen. Traditional Native American music is almost entirely monophonic, but there are notable exceptions. Native American music often includes drumming or the playing of rattles or other percussion instruments, but little other instrumentation. Flutes and whistles made of wood, cane, or bone are also played, generally by individuals, but in former times also by large ensembles (as noted by the Spanish conquistador de Soto). The tuning of modern flutes is typically pentatonic. Performers with Native American parentage have occasionally appeared in American popular music, including Rita Coolidge, Wayne Newton, Gene Clark, Buffy Sainte-Marie, Blackfoot, Redbone (members are also of Mexican descent), and CocoRosie. Some, such as John Trudell, have used music to comment on life in Native America. Other musicians such as R. Carlos Nakai, Joanne Shenandoah and Robert "Tree" Cody integrate traditional sounds with modern sounds in instrumental recordings, whereas the music of the artist Charles Littleleaf is derived from ancestral heritage as well as nature. A variety of small and medium-sized recording companies offer an abundance of recent music by Native American performers young and old, ranging from pow-wow drum music to hard-driving rock-and-roll and rap. In the international world of ballet, Maria Tallchief was considered America's first major prima ballerina and was the first person of Native American descent to hold the rank; she and her sister Marjorie Tallchief both became star ballerinas. The most widely practiced public musical form among Native Americans in the United States is that of the pow-wow. At pow-wows, such as the annual Gathering of Nations in Albuquerque, New Mexico, members of drum groups sit in a circle around a large drum. Drum groups play in unison while they sing in a native language, and dancers in colorful regalia dance clockwise around the drum groups in the center. Familiar pow-wow songs include honor songs, intertribal songs, crow-hops, sneak-up songs, grass-dances, two-steps, welcome songs, going-home songs, and war songs. Most indigenous communities in the United States also maintain traditional songs and ceremonies, some of which are shared and practiced exclusively within the community. The Iroquois, living around the Great Lakes and extending east and north, used strings or belts called "wampum" that served a dual function: the knots and beaded designs mnemonically chronicled tribal stories and legends, and further served as a medium of exchange and a unit of measure. The keepers of the articles were seen as tribal dignitaries. Pueblo peoples crafted impressive items associated with their religious ceremonies. "Kachina" dancers wore elaborately painted and decorated masks as they ritually impersonated various ancestral spirits. Pueblo people are particularly noted for their traditional high-quality pottery, often with geometric designs and floral, animal and bird motifs. 
Sculpture was not highly developed, but carved stone and wood fetishes were made for religious use. Superior weaving, embroidered decorations, and rich dyes characterized the textile arts. Both turquoise and shell jewelry were created, as were formalized pictorial arts. Navajo spirituality focused on the maintenance of a harmonious relationship with the spirit world, often achieved by ceremonial acts, usually incorporating sandpainting. For the Navajo, the sand painting is not merely a representational object but a dynamic spiritual entity with a life of its own, which helps the patient at the center of the ceremony re-establish a connection with the life force. These vivid, intricate, and colorful sand creations were erased at the end of the healing ceremony. The Native American arts and crafts industry brings in more than $1 billion in gross sales annually. Native American art comprises a major category in the world art collection. Native American contributions include pottery, paintings, jewelry, weavings, sculpture, basketry, and carvings. Franklin Gritts was a Cherokee artist who taught students from many tribes at Haskell Institute (now Haskell Indian Nations University) in the 1940s, the "Golden Age" of Native American painters. The integrity of certain Native American artworks is protected by the Indian Arts and Crafts Act of 1990, which prohibits representation of art as Native American when it is not the product of an enrolled Native American artist. Attorney Gail Sheffield and others claim that this law has had "the unintended consequence of sanctioning discrimination against Native Americans whose tribal affiliation was not officially recognized". Native artists such as Jeanne Rorex Bridges (Echota Cherokee), who was not enrolled, ran the risk of fines or imprisonment if they continued to sell their art while affirming their Indian heritage. Interracial relations between Native Americans, Europeans, and Africans are a complex issue that has been mostly neglected, with "few in-depth studies on interracial relationships". Some of the first documented cases of European/Native American intermarriage and contact were recorded in post-Columbian Mexico. One case is that of Gonzalo Guerrero, a European from Spain, who was shipwrecked along the Yucatan Peninsula and fathered three Mestizo children with a Mayan noblewoman. Another is the case of Hernán Cortés and his mistress La Malinche, who gave birth to one of the first multiracial people in the Americas. European impact was immediate, widespread, and profound during the early years of colonization and nationhood. Europeans living among Native Americans were often called "white Indians". They "lived in native communities for years, learned native languages fluently, attended native councils, and often fought alongside their native companions". Early contact was often charged with tension and emotion, but also had moments of friendship, cooperation, and intimacy. Marriages took place in English, Spanish, and French colonies between Native Americans and Europeans, though Native American women were also the victims of rape. Given the preponderance of men among the colonists in the early years, European men generally turned to Native American women for sexual relationships, whether through marriage, informal relationships, or rape. There was fear on both sides, as the different peoples realized how different their societies were. The whites regarded the Indians as "savage" because they were not Christian. 
They were suspicious of cultures which they did not understand. The Native American author Andrew J. Blackbird wrote in his "History of the Ottawa and Chippewa Indians of Michigan" (1897) that white settlers introduced some immoralities into Native American tribes. Many Native Americans suffered because the Europeans introduced alcohol, and the whiskey trade resulted in alcoholism among the people, who were alcohol-intolerant. The U.S. government had two purposes when making land agreements with Native Americans: to open up more land for white settlement, and to ease tensions between whites and Native Americans by forcing the Native Americans to use the land in the same way as the whites did, for subsistence farms. The government used a variety of strategies to achieve these goals; many treaties required Native Americans to become farmers in order to keep their land. Government officials often did not translate the documents which Native Americans were forced to sign, and native chiefs often had little or no idea what they were signing. For a Native American man to marry a white woman, he had to get the consent of her parents, as long as "he can prove to support her as a white woman in a good home". In the early 19th century, the Shawnee Tecumseh and the blonde-haired, blue-eyed Rebecca Galloway had an interracial affair. In the late 19th century, three European-American middle-class women teachers at Hampton Institute married Native American men whom they had met as students. As European-American women started working independently at missions and Indian schools in the western states, there were more opportunities for their meeting and developing relationships with Native American men. For instance, Charles Eastman, a man of European and Lakota descent whose father sent both his sons to Dartmouth College, got his medical degree at Boston University and returned to the West to practice. He was the grandson of Seth Eastman, a military officer from Maine, and a chief's daughter. Eastman married Elaine Goodale, whom he met in South Dakota; Goodale was a young European-American teacher from Massachusetts and a reformer who was appointed as the U.S. superintendent of Native American education for the reservations in the Dakota Territory. They had six children together. The majority of Native American tribes did practice some form of slavery before the European introduction of African slavery into North America, but none exploited slave labor on a large scale. Most Native American tribes did not barter captives in the pre-colonial era, although they sometimes exchanged enslaved individuals with other tribes in peace gestures or in exchange for their own members. When Europeans arrived as colonists in North America, Native Americans changed their practice of slavery dramatically: they began selling war captives to Europeans rather than integrating them into their own societies as they had done before. As the demand for labor in the West Indies grew with the cultivation of sugar cane, Europeans enslaved Native Americans in the Thirteen Colonies, and some were exported to the "sugar islands". The British settlers, especially those in the southern colonies, purchased or captured Native Americans to use as forced labor in cultivating tobacco, rice, and indigo. Accurate records of the numbers enslaved do not exist because vital statistics and census reports were at best infrequent. 
Scholars estimate that tens to hundreds of thousands of Native Americans may have been enslaved by the Europeans, sold by Native Americans themselves or by Europeans. Slaves became a caste of people who were foreign to the English (Native Americans, Africans and their descendants) and non-Christians. The Virginia General Assembly defined some terms of slavery in 1705. The slave trade in Native Americans lasted only until around 1750. It gave rise to a series of devastating wars among the tribes, including the Yamasee War. The Indian Wars of the early 18th century, combined with the increasing importation of African slaves, effectively ended the Native American slave trade by 1750. Colonists found that Native American slaves could easily escape, as they knew the country. The wars cost the lives of numerous colonial slave traders and disrupted their early societies. The remaining Native American groups banded together to face the Europeans from a position of strength. Many surviving Native American peoples of the southeast strengthened their loose coalitions of language groups and joined confederacies such as the Choctaw, the Creek, and the Catawba for protection. Even after the Indian slave trade ended in 1750, the enslavement of Native Americans continued in the West, and also in the Southern states, mostly through kidnappings. Both Native American and African enslaved women suffered rape and sexual harassment by male slaveholders and other white men. Africans and Native Americans have interacted for centuries. The earliest record of Native American and African contact occurred in April 1502, when Spanish colonists transported the first Africans to Hispaniola to serve as slaves. Sometimes Native Americans resented the presence of African Americans; the "Catawba tribe in 1752 showed great anger and bitter resentment when an African American came among them as a trader". To gain favor with Europeans, the Cherokee exhibited the strongest color prejudice of all Native Americans. Because of European fears of a unified revolt of Native Americans and African Americans, the colonists tried to encourage hostility between the ethnic groups: "Whites sought to convince Native Americans that African Americans worked against their best interests." A 1751 South Carolina law was written to this effect, and in 1758 the governor of South Carolina, James Glen, argued along similar lines. Europeans considered both races inferior and made efforts to make Native Americans and Africans enemies of each other. Native Americans were rewarded if they returned escaped slaves, and African Americans were rewarded for fighting in the late 19th-century Indian Wars. "Native Americans, during the transitional period of Africans becoming the primary race enslaved, were enslaved at the same time and shared a common experience of enslavement. They worked together, lived together in communal quarters, produced collective recipes for food, shared herbal remedies, myths and legends, and in the end they intermarried." Because of a shortage of men due to warfare, many tribes encouraged marriage between the two groups, to create stronger, healthier children from the unions. In the 18th century, many Native American women married freed or runaway African men due to a decrease in the population of men in Native American villages. Records show that many Native American women bought African men but, unknown to the European sellers, the women freed and married the men into their tribe. 
When African men married or had children by a Native American woman, their children were born free, because the mother was free (according to the principle of "partus sequitur ventrem", which the colonists incorporated into law). While numerous tribes used captive enemies as servants and slaves, they also often adopted younger captives into their tribes to replace members who had died. In the Southeast, a few Native American tribes, especially the Cherokee, Choctaw, and Creek, began to adopt a slavery system similar to that of the American colonists, buying African American slaves. Though less than 3% of Native Americans owned slaves, divisions grew among the Native Americans over slavery. Among the Cherokee, records show that slaveholders in the tribe were largely the children of European men who had shown their children the economics of slavery. As European colonists took slaves into frontier areas, there were more opportunities for relationships between African and Native American peoples. In the 2010 Census, nearly 3 million people indicated that their race was Native American (including Alaska Native). Of these, more than 27% specifically indicated "Cherokee" as their ethnic origin. Many of the First Families of Virginia claim descent from Pocahontas or some other "Indian princess". This phenomenon has been dubbed the "Cherokee Syndrome". Across the US, numerous individuals cultivate an opportunistic ethnic identity as Native American, sometimes through Cherokee heritage groups or Indian wedding blessings. Many tribes, especially those in the Eastern United States, are primarily made up of individuals with an unambiguous Native American identity, despite being predominantly of European ancestry. More than 75% of those enrolled in the Cherokee Nation have less than one-quarter Cherokee blood, and the current Principal Chief of the Cherokee Nation, Bill John Baker, is 1/32 Cherokee, amounting to about 3%. Historically, numerous Native Americans assimilated into colonial and later American society, e.g. through adopting English and converting to Christianity. In many cases, this process occurred through forced assimilation of children sent off to special boarding schools far from their families. Those who could pass for white had the advantage of white privilege. Today, after generations of racial whitening through hypergamy and interracial marriage, many Native Americans are visually indistinguishable from White Americans, unlike mestizos in the United States, who may in fact have little or no non-indigenous ancestry. Considered a property that would hold Indians back on the road to civilization, Indian blood could be diluted over generations through interbreeding with Euro-American populations. Native Americans were seen as capable of cultural evolution (unlike Africans) and therefore of cultural absorption into the white populace. "Kill the Indian, save the man" was a mantra of nineteenth-century U.S. assimilation policies. Native Americans are more likely than any other racial group to practice interracial marriage, resulting in an ever-declining proportion of indigenous blood among those who claim a Native American identity. Some tribes will even resort to disenrollment of tribal members unable to provide scientific "proof" of Native ancestry, usually through a Certificate of Degree of Indian Blood. Disenrollment has become a contentious issue in Native American reservation politics. 
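To make the fractional arithmetic above concrete, here is a minimal sketch in Python (with hypothetical figures; it does not model any tribe's actual enrollment rules) of how a "blood quantum" is conventionally computed, a child's quantum being the average of the parents' quanta:

    from fractions import Fraction

    def child_quantum(parent_a, parent_b):
        # Conventional blood-quantum arithmetic: the child's quantum
        # is the average of the two parents' quanta.
        return (parent_a + parent_b) / 2

    # One full-quantum ancestor five generations back, with every
    # subsequent marriage to a partner of zero quantum:
    q = Fraction(1)
    for _ in range(5):
        q = child_quantum(q, Fraction(0))

    print(q, float(q))  # 1/32 0.03125 -- the "1/32, about 3%" cited above

Five halvings of a full quantum yield 1/32, which is why a single ancestor five generations back corresponds to roughly 3% ancestry. 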
Intertribal mixing was common among many Native American tribes prior to European contact, as they would adopt captives taken in warfare. Individuals often had ancestry from more than one tribe, particularly after tribes lost so many members from disease in the colonial era and after. Bands or entire tribes occasionally split or merged to form more viable groups in reaction to the pressures of climate, disease and warfare. A number of tribes traditionally adopted captives into their group to replace members who had been captured or killed in battle. Such captives were from rival tribes and later were taken from raids on European settlements. Some tribes also sheltered or adopted white traders and runaway slaves, and others owned slaves of their own. Tribes with long trading histories with Europeans show a higher rate of European admixture, reflecting years of intermarriage between Native American women and European men, often seen as advantageous to both sides. A number of paths to genetic and ethnic diversity among Native Americans have thus occurred. In recent years, genetic genealogists have been able to estimate the proportion of Native American ancestry carried by the African-American population. The literary and history scholar Henry Louis Gates Jr. had experts on his TV programs who discussed African-American ancestry. They stated that 5% of African Americans have at least 12.5% Native American ancestry, equivalent to one great-grandparent, which may represent more than one distant ancestor. A greater percentage could have a smaller proportion of Indian ancestry, but their conclusions show that popular estimates of Native American admixture may have been too high. More recent genetic research from 2015 has found varied ancestries showing different tendencies by region and sex of ancestors. Though DNA testing is limited, these studies found that on average, African Americans have 73.2–82.1% West African, 16.7–29% European, and 0.8–2% Native American genetic ancestry, with large variation between individuals. DNA testing is not sufficient to qualify a person for specific tribal membership, as it cannot distinguish among Native American tribes; however, some tribes such as the Meskwaki Nation require a DNA test in order to enroll in the tribe. Most DNA testing examines only a few lineages, which comprise a minuscule percentage of one's total ancestry, less than 1 percent of total DNA. Every human being has about one thousand ancestors going back ten generations (two to the tenth power is 1,024 ancestral slots). In "Native American DNA: Tribal Belonging and the False Promise of Genetic Science", Kim Tallbear states that a person "... could have up to two Native American grandparents and show no sign of Native American ancestry. For example, a genetic male could have a maternal grandfather (from whom he did not inherit his Y chromosome) and a paternal grandmother (from whom he did not inherit his mtDNA) who were descended from Native American founders, but mtDNA and Y-chromosome analyses would not detect them." Native American identity has historically been based on culture, not just biology, as many American Indian peoples adopted captives from their enemies and assimilated them into their tribes. The Indigenous Peoples Council on Biocolonialism (IPCB) notes that "Native American markers" are not found solely among Native Americans: while they occur more frequently among Native Americans, they are also found in people in other parts of the world. 
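The counting argument behind these limits can be sketched in a few lines (a Python illustration of the reasoning above, not of any commercial test's method): uniparental tests follow exactly one strict maternal line and one strict paternal line out of all ancestral slots:

    # Ancestral slots n generations back, versus the two lineages
    # (strict matriline + strict patriline) that mtDNA and Y-DNA trace.
    for n in (5, 10):
        slots = 2 ** n
        traced = 2
        print(f"{n} generations: {slots} slots, tests trace {traced / slots:.2%}")

    # 10 generations give 1024 slots -- the "about one thousand
    # ancestors" cited above -- of which the two tests cover under 0.2%.

This is why, as Tallbear notes, a Native American grandparent who sits on neither the strict matriline nor the strict patriline is invisible to mtDNA and Y-chromosome analysis. 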
Geneticists state: Not all Native Americans have been tested; especially with the large number of deaths due to disease such as smallpox, it is unlikely that Native Americans only have the genetic markers they have identified [so far], even when their maternal or paternal bloodline does not include a [known] non-Native American. To receive tribal services, a Native American must be a certified (or enrolled) member of a federally recognized tribal organization. Each tribal government makes its own rules for the eligibility of citizens or tribal members. Among tribes, qualification for enrollment may be based upon a required percentage of Native American "blood" (the "blood quantum") of an individual seeking recognition, or upon documented descent from an ancestor on the Dawes Rolls or other registers. The federal government, however, has its own standards related to who qualifies for services available to certified Native Americans. For instance, federal scholarships for Native Americans require the student both to be enrolled in a federally recognized tribe and to be of at least one-quarter Native American descent (equivalent to one grandparent), attested to by a Certificate of Degree of Indian Blood (CDIB) card issued by the federal government. Some tribes have begun requiring genealogical DNA testing of individuals applying for membership, but this is usually related to an individual's proving parentage or direct descent from a certified member. Requirements for tribal membership vary widely by tribe. The Cherokee require documented direct genealogical descent from a Native American listed on the 1906 Dawes Rolls. Tribal rules regarding recognition of members who have heritage from multiple tribes are equally diverse and complex. Federally recognized tribes do not accept genetic-ancestry results as appropriate documentation for enrollment and do not advise applicants to submit such documentation. Tribal membership conflicts have led to a number of legal disputes, court cases, and the formation of activist groups. One example of this is the Cherokee Freedmen. Today, they include descendants of African Americans once enslaved by the Cherokees, who were granted, by federal treaty, citizenship in the historic Cherokee Nation as freedmen after the Civil War. The modern Cherokee Nation, in the early 1980s, passed a law requiring that all members prove descent from a Cherokee Native American (not Cherokee Freedmen) listed on the Dawes Rolls, resulting in the exclusion of some individuals and families who had been active in Cherokee culture for years. Since the census of 2000, people may identify as being of more than one race. Since the 1960s, the number of people claiming Native American ancestry has grown significantly, and by the 2000 census it had more than doubled. Sociologists attribute this dramatic change to "ethnic shifting" or "ethnic shopping"; they believe that it reflects a willingness of people to question their birth identities and adopt new ethnicities which they find more compatible. The author Jack Hitt has written about this phenomenon. The journalist Mary Annette Pember notes that identifying with Native American culture may be a result of a person's increased interest in genealogy, the romanticization of the lifestyle, and a family tradition of Native American ancestors in the distant past. There are different issues if a person wants to pursue enrollment as a member of a tribe. 
Different tribes have different requirements for tribal membership; in some cases persons are reluctant to enroll, seeing it as a method of control initiated by the federal government; and there are individuals who are 100% Native American but, because of their mixed tribal heritage, do not qualify to belong to any individual tribe. The genetic history of indigenous peoples of the Americas primarily focuses on human Y-chromosome DNA haplogroups and human mitochondrial DNA haplogroups. "Y-DNA" is passed solely along the patrilineal line, from father to son, while "mtDNA" is passed down the matrilineal line, from mother to offspring of both sexes. Neither recombines, and thus Y-DNA and mtDNA change only by chance mutation at each generation, with no intermixture between parents' genetic material. Autosomal "atDNA" markers are also used, but differ from mtDNA and Y-DNA in that they recombine, mixing contributions from all ancestral lines. Autosomal DNA is generally used to measure the average continent-of-ancestry genetic admixture in the entire human genome and in related isolated populations. Within mtDNA, genetic scientists have found specific nucleotide sequences classified as "Native American markers" because the sequences are understood to have been inherited through the generations of genetic females within populations that first settled the "New World". There are five primary Native American mtDNA haplogroups, in which there are clusters of closely linked markers inherited together. All five haplogroups have been identified by researchers in prehistoric Native North American samples, and it is commonly asserted that the majority of living Native Americans possess one of the five common mtDNA haplogroup markers. The genetic pattern indicates Indigenous Americans experienced two very distinctive genetic episodes: first the initial peopling of the Americas, and second the European colonization of the Americas. The former is the determinant factor for the number of gene lineages, zygosity mutations and founding haplotypes present in today's Indigenous Amerindian populations. Human settlement of the New World occurred in stages from the Bering Sea coastline, with an initial layover of 15,000 to 20,000 years on Beringia for the small founding population. The micro-satellite diversity and distribution of the Y lineage specific to South America indicate that certain Amerindian populations have been isolated since the initial colonization of the region. The Na-Dené, Inuit and Indigenous Alaskan populations, however, exhibit haplogroup Q-M242 (Y-DNA) mutations that are distinct from those of other indigenous Amerindians, along with various distinct mtDNA and atDNA mutations. This suggests that the Paleo-Indian migrants into the northern extremes of North America and Greenland were descended from a later, independent migrant population. Genetic analyses of HLA I and HLA II genes, as well as HLA-A, -B, and -DRB1 gene frequencies, link the Ainu people of northern Japan and southeastern Russia to some Indigenous peoples of the Americas, especially to populations on the Pacific Northwest Coast such as the Tlingit. The scientists suggest that the main ancestor of the Ainu and of some Native American groups can be traced back to Paleolithic groups in Southern Siberia.
Nights into Dreams Nights into Dreams is a 1996 action game developed by Sonic Team and published by Sega for the Sega Saturn. Development began after the release of "Sonic & Knuckles" in 1994, although the concept originated during the development of "Sonic the Hedgehog 2" two years prior. Development was led by Sonic Team veterans Yuji Naka, Naoto Ohshima, and Takashi Iizuka. Naka began the project with the main idea revolving around flight, and Ohshima designed the character Nights to resemble an angel that could fly like a bird. Ohshima designed Nights as an androgynous character. The team conducted research on dreaming and REM sleep, and was influenced by the works and theories of the psychoanalysts Carl Jung and Sigmund Freud. An analogue controller, the Saturn 3D controller, was designed alongside the game and included with some retail copies. "Nights into Dreams" received acclaim for its graphics, gameplay, soundtrack, and atmosphere, and has appeared on several lists of the greatest games of all time. An abbreviated Christmas-themed version, "Christmas Nights", was released in December 1996. The game was ported to the PlayStation 2 in 2008 in Japan, and a high-definition version was released worldwide for PlayStation 3, Xbox 360, and Windows in 2012. A sequel, "Nights: Journey of Dreams", was released for the Wii in 2007. "Nights into Dreams" is split into seven levels, referred to as "Dreams". The levels are distributed between the two teenage characters: three are unique to Claris, three to Elliot, and both play through an identical final seventh level, "Twin Seeds". Initially, only Claris' "Spring Valley" and Elliot's "Splash Garden" levels are available, and successful completion of one of these unlocks the next level in that character's path. Previously completed stages may be revisited to improve the player's high scores; a grade between A and F is given to the player upon completion, but a "C" grade (or better) in all the selected character's levels must be achieved to unlock the relevant "Twin Seeds" stage for that character. Points are accumulated depending on how fast the player completes a level, and extra points are awarded when the player flies through rings. Each level is split into four "Mares" set in Nightopia and a boss fight which takes place in Nightmare. In each level, players initially control Claris or Elliot, who immediately have their Ideya (spherical objects that contain emotions) of hope, wisdom, intelligence and purity stolen from them by Wizeman's minions, leaving behind only their Ideya of courage. The goal of each Mare is to recover one of the stolen Ideya by collecting 20 blue chips and delivering them to the cage holding the Ideya, which overloads and releases the orb it holds. If the player walks around the landscape for too long, they are pursued by a sentient alarm clock which awakens the character and ends the level if it comes into contact with the player. The majority of the gameplay centres on flying sequences, which are triggered by walking into the Ideya Palace near the start of each level so that the character merges with the imprisoned Nights. Once the flying sequence is initiated, the time limit begins. In the flying sections, the player controls Nights' flight along a predetermined route through each Mare, resembling that of a 2D platformer. The player has only a limited period of time available before Nights falls to the ground and transforms back into Claris or Elliot, and each collision with an enemy subtracts five seconds from the time remaining. The player's time is replenished each time they return an Ideya to the Ideya Palace. 
While flying, Nights can use a "Drill Dash" to travel faster, as well as to defeat certain enemies scattered throughout the level. Grabbing onto certain enemies causes Nights to spin around, which launches both Nights and the enemy in the direction in which the boost was initiated. Various acrobatic manoeuvres can be performed, including the "Paraloop", whereby flying around in a complete circle and connecting the trail of stars left in Nights' wake causes any items within the loop to be attracted towards Nights. The game features a combo system known as "Linking", whereby actions such as collecting items and flying through rings are worth more points when performed in quick succession. Power-ups may be gained by flying through certain predetermined rings, indicated by a bonus barrel. The power-ups include a speed boost, a point multiplier and an air pocket. The player receives a grade based on their score at the end of each Mare, and an overall grade for the level after clearing all four Mares. Nights is then transported to Nightmare for a boss fight against one of Wizeman's "Level Two" Nightmaren. Each boss fight has a time limit, and the game ends if the player runs out of time during the battle. Upon winning the boss fight, the player is awarded a score multiplier based on how quickly the boss was defeated, which is then applied to the score earned in the Nightopia section to produce the player's final score for that Dream. The game also features a multiplayer mode, which allows two players to battle each other using a splitscreen. One player controls Nights, whereas the other controls Reala. The winner is the first player to defeat the other, accomplished by hitting or paralooping the opponent three times. The game features an artificial life system known as "A-Life", which involves entities called Nightopians and keeps track of their moods. It is possible to have them mate with other Nightopians, which creates hybrids known as "Superpians". The more the game is played, the more inhabitants appear, and environmental features and aesthetics change. The A-Life system features an evolving music engine, allowing tempo, pitch, and melody to alter depending on the state of the Nightopians within the level. The system also reads the Sega Saturn's internal clock, altering features in the A-Life system depending on the time. Every night, all human dreams are played out in Nightopia and Nightmare, the two parts of the dream world. In Nightopia, distinct aspects of dreamers' personalities are represented by luminous coloured spheres known as "Ideya". The evil ruler of Nightmare, Wizeman the Wicked, is stealing this dream energy from sleeping visitors in order to gather power and take control of Nightopia and eventually the real world. To achieve this, he creates five beings called "Nightmaren": jester-like, flight-capable beings, which include Jackle, Clawz, Gulpo, Gillwing and Puffy, as well as many minor Nightmaren. He also creates two "Level One" Nightmaren: Nights and Reala. However, Nights rebels against Wizeman's plans, and is punished by being imprisoned inside an Ideya palace, a container for dreamers' Ideya. One day, Elliot Edwards and Claris Sinclair, two teenagers from the city of Twin Seeds, each experience failure. Elliot is a basketball player who enjoys playing with his friends; challenged by a group of older students, he suffers a humiliating defeat on the court. Claris is a talented singer whose ambition is to perform on stage. 
She auditions for a part in the events commemorating the centenary of the city of Twin Seeds. Standing in front of the judges, she is overcome by stage fright and does not perform well, which causes her to lose all hope of getting the role. When they go to sleep that night, both Elliot and Claris suffer nightmares that replay the events. They escape into Nightopia and find that they both possess the rare Red Ideya of Courage, the only type that Wizeman cannot steal. Once in Nightopia, they discover and release Nights, who tells them about dreams, about Wizeman and about his plans; the three begin a journey to stop Wizeman and restore peace to Nightopia. When they defeat Wizeman and Reala, peace returns to Nightopia and the world of Nightmare is suppressed. The next day, back in Twin Seeds, a centenary ceremony begins. Elliot is seen walking through the parade until he has a vision of Nights looking at him through a billboard. Realizing that Claris is performing in a hall, Elliot runs through the crowd and sees Claris on stage in front of a large audience, singing well. The two look at each other, and the scene transitions to a spring valley in Nightopia, leaving it ambiguous whether what they achieved was real or just a dream. The concept for "Nights" originated during the development of "Sonic the Hedgehog 2" in 1992, but development did not begin until after the release of "Sonic & Knuckles" in late 1994. Programming began in April 1995 and total development spanned six months. The development team consisted of staff who had worked on previous "Sonic the Hedgehog" games: Yuji Naka was lead programmer and producer, while Naoto Ohshima and Takashi Iizuka were director and lead designer, respectively. Naka and Ohshima felt they had spent enough time with the "Sonic" franchise and were eager to work on new concepts. According to Naka, the initial development team consisted of seven people, and grew to 20 as programmers arrived. "Sonic the Hedgehog" creator and project director Ohshima created the character of Nights based on his inspirations from travelling around Europe and western Asia. He came to the conclusion that the character should resemble an angel and fly like a bird. Naka originally intended to make "Nights into Dreams" a slow-paced game, but as development progressed the gameplay pace gradually increased, in a similar vein to the "Sonic" games. The initial concept envisioned the flying character as rendered 2D sprite art, with side-scrolling features similar to "Sonic the Hedgehog". The team were hesitant to switch the game from 2D to 3D, as Naka was sceptical that appealing characters could be created with polygons, in contrast to traditional pixel sprites, which Sonic Team's designers found "more expressive". According to Iizuka, the game design and story took two years to finalise. The game's difficulty was designed with the intent that young and inexperienced players would be able to complete the game, while more experienced players would be compelled by the replay value. The game was developed using Silicon Graphics workstations for graphical design and Sega Saturn emulators running on Hewlett-Packard machines for programming. There were problems during the early stages of development because of a lack of games to use as reference; the team had to redesign the Spring Valley level numerous times and build "everything from scratch". 
The team used the Sega Graphics Library operating system, said by many developers to make programming for the Saturn dramatically easier, only sparingly, instead creating the game almost entirely with custom libraries. Because the Sonic Team offices did not include soundproof studios, team members recorded sound effects at night. According to Naka, every phrase in the game has a meaning; for example, "abayo" is Japanese slang for "goodbye". The team felt that the global market would be less resistant to a game featuring full 3D CGI cutscenes than 2D anime. Norihiro Nishiyama, the designer of the in-game movies, felt that the 3D cutscenes were a good method of showing the different concepts of dreaming and waking up. Naka said that the movies incorporate realism to make it more difficult for the player to distinguish the boundary between dreams and reality. Development took longer than expected because of the team's inexperience with the Saturn hardware and uncertainty about using the full 560 megabytes of space on the CD-ROM. The team initially thought that the game would consume around 100 megabytes of data, and at one point considered releasing it on two separate discs. Iizuka said that the most difficult part of development was finding a way of handling the "contradiction" of using 2D sidescroller controls in a fully 3D game. Naka limited the game's flying mechanic to "invisible 2D tracks" because early beta testing revealed that the game was too difficult to play in full 3D. The standard Saturn gamepad was found to be insufficient to control Nights in flight, so the team developed the Saturn analogue controller to be used with the game. It took about six months to develop, and the team went through a large number of ideas for alternative controllers, including one shaped like a Nights doll. Iizuka said that the game was inspired by anime and Cirque du Soleil's "Mystère" theatrical performance. The development team researched dream sequences and REM sleep, including the works of the psychoanalysts Carl Jung, Sigmund Freud and Friedrich Holtz. Iizuka analysed Jung's theories of dream archetypes and spent a considerable amount of time studying dreams and the theories associated with them. Naka said that the main protagonist, Nights, is reflective of Jung's analytical "shadow" theory, whereas the two central characters, Claris and Elliot, were inspired by Jung's animus and anima. "Nights into Dreams" was introduced alongside an optional gamepad, the Saturn 3D controller, included with some copies of the game. It features an analogue stick and analogue triggers designed specifically for the game to make movement easier. Sonic Team noted the successful twinning of the Nintendo 64 controller with "Super Mario 64" (1996), and realised that the default Saturn controller was better suited to arcade games than to "Nights into Dreams". During development, the film director Steven Spielberg visited the Sonic Team studio and became the first person outside the team to play the game. Naka asked him to use an experimental version of the Saturn 3D controller, and it was jokingly referred to as the "Spielberg controller" throughout development. Because the Nights character tested as skewing very young in focus groups, Sega used a nighttime scene for the cover art to give the game a more mature look. The game was marketed with a budget of $10 million, which included television and print advertisements in the United States. In the US, the game was advertised with the slogan "Prepare to fly." 
Christmas Nights, or Christmas NiGHTS into Dreams..., is a Christmas-themed two-level version of "Nights into Dreams" that was released in December 1996. It was introduced in Japan as part of a Christmas Sega Saturn bundle; elsewhere it was given away with the purchase of Saturn games such as "Daytona USA Championship Circuit Edition" (1996) or with issues of "Sega Saturn Magazine" and "Next Generation Magazine". In the United Kingdom, "Christmas Nights" was not included with "Sega Saturn Magazine" until December 1997. In a 2007 interview, Iizuka stated that "Christmas Nights" was created to increase Saturn console sales. Development began in July 1996 and took three to four months, according to Naka. "Christmas Nights" follows Elliot and Claris during the holiday season following their adventures with Nights. Realizing that the Christmas Star is missing from the Twin Seeds Christmas tree, the pair travel to Nightopia to find it, where they reunite with Nights and retrieve the Christmas Star from Gillwing's lair. "Christmas Nights" contains the full version of Claris' Spring Valley dream level from "Nights into Dreams", playable as both Claris and Elliot. The Saturn's internal clock changes elements according to the date and time: December activates "Christmas Nights" mode, replacing item boxes with Christmas presents, greenery with snow and gumdrops, rings with wreaths, and Ideya captures with Christmas trees; Nightopians wear elf costumes, and the music is replaced with a rendition of "Jingle Bells" and an a cappella version of the "Nights" theme song. During the "Winter Nights" period, the Spring Valley weather changes according to the hour. Other changes apply on New Year's Day, and on April Fools' Day Reala replaces Nights as the playable character. The game features several unlockable bonuses, such as being able to play the game's soundtrack, observe the status of the A-Life system, experiment with the game's music mixer, play a time attack on one Mare, or play as Sega's mascot Sonic the Hedgehog in the minigame "Sonic the Hedgehog: Into Dreams". Sonic may only play through Spring Valley on foot, and must defeat the boss: an inflatable Dr. Robotnik. The music for this minigame is a remixed version of "Final Fever", the final boss battle music from the Japanese and European versions of "Sonic CD" (1993). In the HD version of "Nights", the "Christmas Nights" content is playable after the game has been cleared once. Sonic Team made a prototype sequel for the Saturn with the title "Air Nights", and later moved development to the Dreamcast. In August 1999, Naka confirmed that a sequel was in development; by December 2000, however, it had been cancelled. Naka expressed reluctance to develop a sequel, but later said he was interested in using "Nights into Dreams" "to reinforce Sega's identity". Aside from a handheld electronic game released by Tiger Electronics and small minigames featured in several Sega games, no sequel was released for a Sega console. On 1 April 2007, a sequel, "Nights: Journey of Dreams", was announced for the Wii. The game was first previewed in the Portuguese publication "Maxi Consolas", after the release of short reveals from the "Official Nintendo Magazine" and "Game Reactor". The sequel is a Wii exclusive, making use of the Wii Remote. The gameplay involves the use of various masks, and features a multiplayer mode for two players in addition to Nintendo Wi-Fi Connection online functions. The game was developed by Sega Studio USA, with Iizuka, one of the designers of the original game, serving as producer. 
It was released in Japan and the United States in December 2007, and in Europe and Australia on 18 January 2008. In 2010, Iizuka said that he would be interested in making a third "Nights into Dreams" game. "Nights into Dreams" has an average score of 89 percent at GameRankings, based on an aggregate of nine reviews. In Japan, "Nights into Dreams" was the bestselling Saturn game and the 21st-bestselling game of 1996. The graphics and flight mechanics were the most praised aspects. Tom Guise from "Computer and Video Games" heralded the game's flight system and freedom as captivating, and stated that "Nights into Dreams" is the "perfect evolution" of a "Sonic" game. Scary Larry of "GamePro" said flying using the analogue joystick "is a breeze" and that the gameplay is fun, enjoyable, and impressive. He gave it a 4.5 out of 5 for graphics and a 5 out of 5 in every other category (sound, control, and FunFactor). "Entertainment Weekly" said its "graceful acrobatic stunts" offer "a more compelling sensation of soaring than most flight simulators". "Edge" praised the game's analogue controller and called the levels "well-designed and graphically unrivaled", but the reviewer expressed disappointment in the limited level count compared to "Super Mario 64", and suggested that "Nights" seemed to prioritise technical achievements and Saturn selling points over gameplay with as clear a focus as "Sonic". Martin Robinson from Eurogamer opined that the flight mechanics were a "giddy thrill". Colin Ferris from Game Revolution praised the graphics and speed of the game as breathtaking and awe-inspiring, concluding that it offered the best qualities of the fifth-generation machines. "GameFan" praised the combination of "lush graphics, amazing music, and totally unique gameplay". "Next Generation" criticised the speed, saying that the only disappointing aspect was the way "it all rushes by so fast". However, the magazine praised the two-player mode and the innovative method of grading the player once they completed a level. "Electronic Gaming Monthly"'s four reviewers were impressed with both the technical aspects and the style of the graphics, and said the levels are great fun to explore, though they expressed disappointment that the game was not genuinely 3D and said it did not manage to surpass "Super Mario 64". Levi Buchanan from IGN believed that the console "was not built to handle "Nights"" due to the game occasionally clipping and warping, though he admitted that the graphics were "pretty darn good". A reviewer from "Mean Machines Sega" praised the game's vibrant colours and detailed textures, and described its animation as being "fluid as water". The reviewer also noted occasional pop-in and glitching during the game. Rad Automatic from the British "Sega Saturn Magazine" praised the visuals and colour scheme as rich in both texture and detail, while suggesting that "Nights into Dreams" is "one of the most captivating games the Saturn has witnessed yet". "Next Generation" similarly commended the game's visuals, stating that they were "beyond a doubt" the most fluid and satisfying of any game on any system. Upon release, the Japanese "Sega Saturn Magazine" opined that the game would have a significant impact on the video game industry, particularly in the action game genre. The reviewer also stated that the game felt better with the analogue pad than with the conventional controller, praising the light and smooth feel the analogue pad gave the gameplay. 
Reviewers also praised the game's soundtrack and audio effects. Paul Davies from "Computer and Video Games" cited the game as having "the best music ever"; in the same review, Tom Guise credited the music with creating a hypnotically magical atmosphere. Ferris stated that the music and sound effects were those of a dream world, and asserted that they were fitting for a game like "Nights into Dreams". IGN's Buchanan praised the game's soundtrack, stating that each stage's music is "quite good" and that the sound effects "fit in perfectly with the dream universe". In "Electronic Gaming Monthly"'s "Best of '96" awards, "Nights into Dreams" was a runner-up for Flying Game of the Year (behind "Pilotwings 64"), Nights was a runner-up for Coolest Mascot (behind Mario), and the Saturn analogue controller, which the magazine called the "Nights Controller", won Best Peripheral. The following year, "EGM" ranked it the 70th best console video game of all time, describing it as "Unlike anything you've seen before ... a 2.5-D platform game without the platforms." Since its release, "Nights into Dreams" has appeared on several best-game-of-all-time lists. In a January 2000 poll by "Computer and Video Games", readers placed the game 15th on their "100 Greatest Games" list, directly behind "Super Mario 64". IGN ranked the game as the 94th best game of all time in its "Top 100 Games" list in 2007, and in 2008, Levi Buchanan ranked it fourth in his list of the top 10 Sega Saturn games. "Next Generation" ranked the game 25th in its list of the "100 Greatest Games of All Time" in its September 1996 issue (i.e. one month before the magazine actually reviewed the game, and roughly two months before it saw release outside Japan). 1UP ranked the game third in its "Top Ten Cult Classics" list. In 2014, GamesRadar listed "Nights into Dreams" as the best Sega Saturn game of all time, stating that the game "tapped into a new kind of platform gameplay for its era". Sega released a remake of "Nights into Dreams" for the PlayStation 2 exclusively in Japan on 21 February 2008. It includes 16:9 widescreen support and an illustration gallery, and features the ability to play the game with the classic Saturn graphics. The game was also featured in a bundle named the Nightopia Dream Pack, which includes a reprint of a picture book that was released in Japan alongside the original Saturn game. A "Nights into Dreams" handheld electronic game was released by Tiger Electronics in 1997, and a port of it was later released for Tiger's unsuccessful R-Zone console. A high-definition remaster of the PlayStation 2 version was released for PlayStation Network on 2 October 2012 and for Xbox Live Arcade on 5 October 2012. A Windows version was released via Steam on 17 December 2012, with online score leaderboards and the option to play with enhanced graphics or with the original Saturn graphics. The HD version also includes "Christmas Nights", but the two-player mode and the "Sonic the Hedgehog" level were removed. Claris and Elliot make a cameo appearance in Sonic Team's "Burning Rangers" (1998), with both sending the Rangers emails thanking them for their help. "Nights into Dreams"-themed pinball areas feature in "Sonic Adventure" (1998) and "Sonic Pinball Party" (2003), with the soundtrack also featured in the latter game. The PlayStation 2 games "" (2003) and "Sega SuperStars" (2004) both feature minigames based on "Nights into Dreams", in which Nights is controlled using the player's body. 
Nights is also an unlockable character in "Sonic Riders" (2006) and "Sonic Riders: Zero Gravity" (2008). A minigame version of "Nights into Dreams" is playable through the Nintendo GameCube – Game Boy Advance link cable connectivity of "Phantasy Star Online Episode I & II" (2000) and "Billy Hatcher and the Giant Egg" (2003). Following a successful fan campaign by a "Nights into Dreams" fansite, the character Nights was integrated into "Sonic & Sega All-Stars Racing" (2010) as a traffic guard. Nights and Reala also appear as playable characters in "Sega Superstars Tennis" (2008) and "Sonic & All-Stars Racing Transformed" (2012), the latter of which also features a "Nights into Dreams"-themed racetrack. The limited Deadly Six edition of "Sonic Lost World" (2013) features a "Nights into Dreams"-inspired stage, "Nightmare Zone", as downloadable content. In February 1998, Archie Comics adapted "Nights into Dreams" into a three-issue comic book miniseries to test whether a Nights comic would sell well in North America. The first miniseries was loosely based on the game, with Nights identified as male despite the character's androgynous design. The company later released a second three-issue miniseries, continuing the story of the first, but the series did not sell well enough to warrant an ongoing title. The franchise was later added to the list of guest franchises featured in Archie Comics' "Worlds Unite" crossover between its "Sonic the Hedgehog" and "Mega Man" comics.
https://en.wikipedia.org/wiki?curid=21218
Neuromyotonia Neuromyotonia (NMT) is a form of peripheral nerve hyperexcitability that causes spontaneous muscular activity resulting from repetitive motor unit action potentials of peripheral origin. The prevalence of NMT is unknown. Neuromyotonia and Morvan's syndrome are the most severe types in the peripheral nerve hyperexcitability spectrum; two more common and less severe syndromes in the spectrum are cramp fasciculation syndrome and benign fasciculation syndrome. NMT is a diverse disorder. As a result of muscular hyperactivity, patients may present with muscle cramps, stiffness, myotonia-like symptoms (slow relaxation), associated walking difficulties, hyperhidrosis (excessive sweating), myokymia (quivering of a muscle), fasciculations (muscle twitching), fatigue, exercise intolerance, myoclonic jerks and other related symptoms. The symptoms (especially the stiffness and fasciculations) are most prominent in the calves, legs, trunk, and sometimes the face and neck, but can also affect other body parts. NMT symptoms may fluctuate in severity and frequency, and range from mere inconvenience to debilitating. At least a third of people also experience sensory symptoms. NMT has three recognized causes. The acquired form is the most common, accounting for up to 80 percent of all cases, and is suspected to be autoimmune-mediated, usually involving antibodies against the neuromuscular junction; its exact cause is unknown. However, autoreactive antibodies can be detected in a variety of peripheral (e.g. myasthenia gravis, Lambert-Eaton myasthenic syndrome) and central nervous system (e.g. paraneoplastic cerebellar degeneration, paraneoplastic limbic encephalitis) disorders. Their causative role has been established in some of these diseases but not all. Neuromyotonia is considered to be one of these, with accumulating evidence for an autoimmune origin over the last few years. Autoimmune neuromyotonia is typically caused by antibodies that bind to potassium channels on the motor nerve, resulting in continuous hyperexcitability. Onset is typically seen between the ages of 15 and 60, with most patients experiencing symptoms before the age of 40. Some neuromyotonia cases not only improve after plasma exchange but also have antibodies in their serum samples against voltage-gated potassium channels. Moreover, these antibodies have been demonstrated to reduce potassium channel function in neuronal cell lines. Diagnosis is clinical and initially consists of ruling out more common conditions, disorders, and diseases, and usually begins at the general practitioner level. A doctor may conduct a basic neurological exam, including tests of coordination, strength, reflexes, and sensation. A doctor may also run a series of tests that include blood work and MRIs. From there, a patient is likely to be referred to a neurologist or a neuromuscular specialist, who may run a series of more specialized tests, including needle electromyography (EMG) and nerve conduction studies (NCS), which are the most important tests; chest CT (to rule out a paraneoplastic cause); and specific blood work looking for voltage-gated potassium channel antibodies, acetylcholine receptor antibodies, serum immunofixation, TSH, ANA, and ESR, as well as EEG. Neuromyotonia is characterized electromyographically by doublet, triplet or multiplet single-unit discharges that have a high, irregular intraburst frequency. Fibrillation potentials and fasciculations are often also present with electromyography. 
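The electromyographic criterion above can be illustrated with a short worked sketch. The following Python fragment is illustrative only and is not drawn from any clinical guideline: it groups a motor unit's spike times into bursts and computes each burst's intraburst firing frequency. The 15 ms gap threshold, the burst-size labels, and the example spike times are assumptions chosen purely for demonstration.

from typing import List

def group_bursts(spike_times_s: List[float], max_gap_s: float = 0.015) -> List[List[float]]:
    # Group sorted spike times into bursts: spikes separated by no more than
    # max_gap_s (an assumed 15 ms here) are taken to belong to the same burst.
    bursts: List[List[float]] = []
    for t in spike_times_s:
        if bursts and t - bursts[-1][-1] <= max_gap_s:
            bursts[-1].append(t)
        else:
            bursts.append([t])
    return bursts

def intraburst_frequency_hz(burst: List[float]) -> float:
    # Mean firing rate within one burst, in spikes per second.
    if len(burst) < 2:
        return 0.0
    return (len(burst) - 1) / (burst[-1] - burst[0])

# Example: a triplet firing at about 250 Hz, followed by an isolated spike.
spikes = [0.000, 0.004, 0.008, 0.500]
for burst in group_bursts(spikes):
    label = {1: "single", 2: "doublet", 3: "triplet"}.get(len(burst), "multiplet")
    print(label, round(intraburst_frequency_hz(burst), 1), "Hz")

On this toy input the script reports a triplet at 250.0 Hz, the kind of high-frequency grouped discharge the criterion describes; actual diagnosis rests on EMG findings interpreted by a specialist, not on a simple threshold like this.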
Because the condition is so rare, it can often be years before a correct diagnosis is made. NMT is not fatal, and many of the symptoms can be controlled. However, because NMT mimics some symptoms of motor neuron disease (ALS) and other more severe diseases, which may be fatal, there can often be significant anxiety until a diagnosis is made. In some rare cases, acquired neuromyotonia has been misdiagnosed as amyotrophic lateral sclerosis (ALS), particularly when fasciculations are evident in the absence of other clinical features of ALS. However, fasciculations are rarely the first sign of ALS, as the hallmark sign is weakness. Similarly, multiple sclerosis has been the initial misdiagnosis in some NMT patients. An accurate diagnosis requires assessment by a trained neuromuscular specialist. People diagnosed with benign fasciculation syndrome (BFS) or enhanced physiological tremor (EPT) may experience symptoms similar to those of NMT, although it is unclear whether BFS or EPT are weaker forms of NMT. There are three main types of NMT. Neuromyotonia is a type of peripheral nerve hyperexcitability, an umbrella diagnosis that includes (in order of severity of symptoms from least severe to most severe) benign fasciculation syndrome, cramp fasciculation syndrome, neuromyotonia and Morvan's syndrome. Some doctors will only give the diagnosis of peripheral nerve hyperexcitability, as the differences between these conditions are largely a matter of the severity of the symptoms and can be subjective. However, some objective EMG criteria have been established to help distinguish between them. Moreover, the generic use of the term "peripheral nerve hyperexcitability syndromes" to describe the aforementioned conditions is recommended and endorsed by several prominent researchers and practitioners in the field. There is no known cure for neuromyotonia, but the condition is treatable. Anticonvulsants, including phenytoin and carbamazepine, usually provide significant relief from the stiffness, muscle spasms, and pain associated with neuromyotonia. Plasma exchange and IVIg treatment may provide short-term relief for patients with some forms of the acquired disorder. It is speculated that plasma exchange interferes with the function of the voltage-dependent potassium channels, one of the underlying issues of hyperexcitability in autoimmune neuromyotonia. Botox injections also provide short-term relief. Immunosuppressants such as prednisone may provide long-term relief for patients with some forms of the acquired disorder. The long-term prognosis is uncertain and depends mostly on the underlying cause, i.e. autoimmune, paraneoplastic, etc. However, in recent years increased understanding of the basic mechanisms of NMT and autoimmunity has led to the development of novel treatment strategies. NMT disorders are now amenable to treatment, and their prognoses are good. Many patients respond well to treatment, which usually provides significant relief of symptoms. Some cases of spontaneous remission have been noted, including Isaacs' original two patients when followed up 14 years later. While NMT symptoms may fluctuate, they generally do not deteriorate into anything more serious, and with the correct treatment the symptoms are manageable. A very small proportion of cases with NMT may develop central nervous system findings in their clinical course, causing a disorder called Morvan's syndrome, and they may also have antibodies against potassium channels in their serum samples. 
Sleep disturbance is only one of a variety of clinical features observed in Morvan's syndrome, which range from confusion and memory loss to hallucinations and delusions; Morvan's syndrome, however, is a separate disorder. Some studies have linked NMT with certain types of cancers, mostly lung and thymus, suggesting that NMT may be paraneoplastic in some cases. In these cases, the underlying cancer will determine prognosis. However, most examples of NMT are autoimmune and not associated with cancer.
https://en.wikipedia.org/wiki?curid=21221
Neurology Neurology (from Greek "neuron", "string, nerve", and the suffix "-logia", "study of") is a branch of medicine dealing with disorders of the nervous system. Neurology deals with the diagnosis and treatment of all categories of conditions and disease involving the central and peripheral nervous systems (and their subdivisions, the autonomic and somatic nervous systems), including their coverings, blood vessels, and all effector tissue, such as muscle. Neurological practice relies heavily on the field of neuroscience, the scientific study of the nervous system. A neurologist is a physician specializing in neurology and trained to investigate, diagnose and treat neurological disorders. Neurologists may also be involved in clinical research, clinical trials, and basic or translational research. While neurology is a nonsurgical specialty, its corresponding surgical specialty is neurosurgery. Significant overlap occurs between the fields of neurology and psychiatry, with the boundary between the two disciplines and the conditions they treat being somewhat nebulous. Many neurological disorders have been described; these can affect the central nervous system (brain and spinal cord), the peripheral nervous system, the autonomic nervous system, and the muscular system. The academic discipline began between the 17th and 19th centuries with the work and research of many neurologists such as Thomas Willis, Robert Whytt, Matthew Baillie, Charles Bell, Moritz Heinrich Romberg, Duchenne de Boulogne, William A. Hammond, Jean-Martin Charcot, and John Hughlings Jackson. Many neurologists also have additional training or interest in one area of neurology, such as stroke, epilepsy, neuromuscular medicine, sleep medicine, pain management, or movement disorders. In the United States and Canada, neurologists are physicians who have completed postgraduate training in neurology after graduation from medical school. Neurologists complete, on average, about eight years of education and clinical training after their four-year undergraduate degree: a medical degree (DO or MD), which comprises four years of study, followed by a four-year residency consisting of one year of internal medicine internship training and three years of training in neurology. Some neurologists receive additional subspecialty training focusing on a particular area of the field. These training programs are called fellowships, and are one to two years in duration. Subspecialties include brain injury medicine, clinical neurophysiology, epilepsy, hospice and palliative medicine, neurodevelopmental disabilities, neuromuscular medicine, pain medicine, sleep medicine, neurocritical care, vascular neurology (stroke), behavioral neurology, child neurology, headache, multiple sclerosis, neuroimaging, and neurorehabilitation. In Germany, a compulsory year of psychiatry must be completed as part of a neurology residency. In the United Kingdom and Ireland, neurology is a subspecialty of general (internal) medicine. After five years of medical school and two years as a Foundation Trainee, an aspiring neurologist must pass the examination for Membership of the Royal College of Physicians (or the Irish equivalent) and complete two years of core medical training before entering specialist training in neurology. 
Up to the 1960s, some intending to become neurologists would also spend two years working in psychiatric units before obtaining a diploma in psychological medicine. However, that was uncommon and, now that the MRCPsych takes three years to obtain, would no longer be practical. A period of research is essential, and obtaining a higher degree aids career progression. Many found that progression was eased by an attachment to the Institute of Neurology at Queen Square, London. Some neurologists enter the field of rehabilitation medicine (known as physiatry in the US) to specialise in neurological rehabilitation, which may include stroke medicine as well as brain injuries. During a neurological examination, the neurologist reviews the patient's health history with special attention to the current condition. The patient then undergoes a neurological exam. Typically, the exam tests mental status, function of the cranial nerves (including vision), strength, coordination, reflexes, and sensation. This information helps the neurologist determine whether the problem exists in the nervous system and establish the clinical localization. Localization of the pathology is the key process by which neurologists develop their differential diagnosis. Further tests may be needed to confirm a diagnosis and ultimately guide therapy and appropriate management. Neurologists examine patients who are referred to them by other physicians in both the inpatient and outpatient settings. Neurologists begin their interactions with patients by taking a comprehensive medical history, and then performing a physical examination focusing on evaluating the nervous system. Components of the neurological examination include assessment of the patient's cognitive function, cranial nerves, motor strength, sensation, reflexes, coordination, and gait. In some instances, neurologists may order additional diagnostic tests as part of the evaluation. Commonly employed tests in neurology include imaging studies such as computed axial tomography (CAT) scans, magnetic resonance imaging (MRI), and ultrasound of major blood vessels of the head and neck. Neurophysiologic studies, including electroencephalography (EEG), needle electromyography (EMG), nerve conduction studies (NCSs) and evoked potentials, are also commonly ordered. Neurologists frequently perform lumbar punctures to assess characteristics of a patient's cerebrospinal fluid. Advances in genetic testing have made it an important tool in the classification of inherited neuromuscular disease and the diagnosis of many other neurogenetic diseases. The role of genetic influences on the development of acquired neurologic diseases is an active area of research. Some of the commonly encountered conditions treated by neurologists include headaches, radiculopathy, neuropathy, stroke, dementia, seizures and epilepsy, Alzheimer's disease, attention deficit/hyperactivity disorder, Parkinson's disease, Tourette's syndrome, multiple sclerosis, head trauma, sleep disorders, neuromuscular diseases, and various infections and tumors of the nervous system. Neurologists are also asked to evaluate unresponsive patients on life support to confirm brain death. Treatment options vary depending on the neurological problem. They can include referring the patient to a physiotherapist, prescribing medications, or recommending a surgical procedure. Some neurologists specialize in certain parts of the nervous system or in specific procedures. 
For example, clinical neurophysiologists specialize in the use of EEG and intraoperative monitoring to diagnose certain neurological disorders. Other neurologists specialize in the use of electrodiagnostic medicine studies – needle EMG and NCSs. In the US, physicians do not typically specialize in all the aspects of clinical neurophysiology – i.e. sleep, EEG, EMG, and NCSs. The American Board of Clinical Neurophysiology certifies US physicians in general clinical neurophysiology, epilepsy, and intraoperative monitoring. The American Board of Electrodiagnostic Medicine certifies US physicians in electrodiagnostic medicine and certifies technologists in nerve-conduction studies. Sleep medicine is a subspecialty field in the US under several medical specialties, including anesthesiology, internal medicine, family medicine, and neurology. Neurosurgery is a distinct specialty that involves a different training path and emphasizes the surgical treatment of neurological disorders. In addition, many researchers who are not physicians but hold doctoral degrees (usually PhDs) in subjects such as biology and chemistry study and research the nervous system. Working in laboratories in universities, hospitals, and private companies, these neuroscientists perform clinical and laboratory experiments and tests to learn more about the nervous system and find cures or new treatments for diseases and disorders. A great deal of overlap occurs between neuroscience and neurology. Many neurologists work in academic training hospitals, where they conduct research as neuroscientists in addition to treating patients and teaching neurology to medical students. Neurologists are responsible for the diagnosis, treatment, and management of all the conditions mentioned above. When surgical or endovascular intervention is required, the neurologist may refer the patient to a neurosurgeon or an interventional neuroradiologist. In some countries, additional legal responsibilities of a neurologist may include making a finding of brain death when it is suspected that a patient has died. Neurologists frequently care for people with hereditary (genetic) diseases when the major manifestations are neurological, as is frequently the case. Lumbar punctures are frequently performed by neurologists. Some neurologists may develop an interest in particular subfields, such as stroke, dementia, movement disorders, neurointensive care, headaches, epilepsy, sleep disorders, chronic pain management, multiple sclerosis, or neuromuscular diseases. Some overlap also occurs with other specialties, varying from country to country and even within a local geographic area. Acute head trauma is most often treated by neurosurgeons, whereas sequelae of head trauma may be treated by neurologists or specialists in rehabilitation medicine. Although stroke cases have traditionally been managed by internal medicine or hospitalists, the emergence of vascular neurology and interventional neuroradiology has created a demand for stroke specialists. The establishment of Joint Commission-certified stroke centers has increased the role of neurologists in stroke care in many primary, as well as tertiary, hospitals. Some cases of nervous system infectious diseases are treated by infectious disease specialists. Most cases of headache, at least the less severe ones, are diagnosed and treated primarily by general practitioners. 
Likewise, most cases of sciatica are treated by general practitioners, though they may be referred to neurologists or surgeons (neurosurgeons or orthopedic surgeons). Sleep disorders are also treated by pulmonologists and psychiatrists. Cerebral palsy is initially treated by pediatricians, but care may be transferred to an adult neurologist after the patient reaches a certain age. In the US, physical medicine and rehabilitation physicians also diagnose and treat patients with neuromuscular diseases through the use of electrodiagnostic studies (needle EMG and nerve-conduction studies) and other diagnostic tools. In the United Kingdom and other countries, many of the conditions encountered by older patients, such as movement disorders including Parkinson's disease, stroke, dementia, or gait disorders, are managed predominantly by specialists in geriatric medicine. Clinical neuropsychologists are often called upon to evaluate brain-behavior relationships for the purpose of assisting with differential diagnosis, planning rehabilitation strategies, documenting cognitive strengths and weaknesses, and measuring change over time (e.g., identifying abnormal aging or tracking the progression of a dementia). In some countries, e.g. the US and Germany, neurologists may subspecialize in clinical neurophysiology, the field responsible for EEG and intraoperative monitoring, or in electrodiagnostic medicine: nerve conduction studies, EMG, and evoked potentials. In other countries, this is an autonomous specialty (e.g., the United Kingdom, Sweden, Spain). Although mental illnesses are believed by many to be neurological disorders affecting the central nervous system, traditionally they are classified separately and treated by psychiatrists. In a 2002 review article in the "American Journal of Psychiatry", Professor Joseph B. Martin, Dean of Harvard Medical School and a neurologist by training, wrote, "the separation of the two categories is arbitrary, often influenced by beliefs rather than proven scientific observations. And the fact that the brain and mind are one makes the separation artificial anyway". Neurological disorders often have psychiatric manifestations, such as poststroke depression, depression and dementia associated with Parkinson's disease, and mood and cognitive dysfunctions in Alzheimer's disease and Huntington disease, to name a few. Hence, the sharp distinction between neurology and psychiatry is not always on a biological basis. The dominance of psychoanalytic theory in the first three-quarters of the 20th century has since been largely replaced by a focus on pharmacology. Despite the shift to a medical model, brain science has not advanced to a point where scientists or clinicians can point to readily discernible pathologic lesions or genetic abnormalities that in and of themselves serve as reliable or predictive biomarkers of a given mental disorder. The emerging field of neurological enhancement highlights the potential of therapies to improve such things as workplace efficacy, attention in school, and overall happiness in personal lives. However, this field has also given rise to questions about neuroethics and the psychopharmacology of lifestyle drugs.
https://en.wikipedia.org/wiki?curid=21226
Niue Niue is an island country in the South Pacific Ocean, northeast of New Zealand. Niue's land area is about 261 square kilometres (101 sq mi) and its population, predominantly Polynesian, was about 1,600 in 2016. The island is commonly referred to as "The Rock", which comes from the traditional name "Rock of Polynesia". Niue is one of the world's largest coral islands. The terrain of the island has two noticeable levels. The higher level is made up of a limestone cliff running along the coast, with a plateau in the centre of the island reaching approximately 60 metres (200 feet) above sea level. The lower level is a coastal terrace approximately 0.5 km (0.3 miles) wide and about 25–27 metres (80–90 feet) high, which slopes down and meets the sea in small cliffs. A coral reef surrounds the island, with the only major break in the reef being in the central western coast, close to the capital, Alofi. Niue is a self-governing state in free association with New Zealand, and New Zealand conducts most diplomatic relations on its behalf. Niueans are citizens of New Zealand, and Queen Elizabeth II is head of state in her capacity as Queen of New Zealand. Between 90% and 95% of Niuean people live in New Zealand, along with about 70% of the speakers of the Niuean language. Niue is a bilingual country, with 30% of the population speaking both Niuean and English. The percentage of monolingual English-speaking people is only 11%, while 46% are monolingual Niuean speakers. Niue is not a member of the United Nations (UN), but UN organisations have accepted its status as a freely associated state as equivalent to independence for the purposes of international law. As such, Niue is a member of some UN specialised agencies (such as UNESCO and the WHO), and is invited, alongside the other non-UN member state, the Cook Islands, to attend United Nations conferences open to "all states". Niue has been a member of the Pacific Community since 1980. Niue is subdivided into 14 "villages" (municipalities). Each village has a village council that elects its chairperson. The villages are at the same time electoral districts; each village sends an assemblyperson to the Parliament of Niue. A small democratic nation, Niue holds legislative elections every three years. The "Niue Integrated Strategic Plan" (NISP), adopted in 2003, is the national development plan, setting national priorities for development in areas such as financial sustainability. Since the late 20th century, Niue has become a leader in green growth; the European Union is helping the nation convert to renewable energy. In January 2004, Niue was hit by Cyclone Heta, which caused extensive damage to the island, including wiping out most of South Alofi. The disaster set the island back about two years from its planned timeline to implement the NISP, since national efforts concentrated on recovery. Polynesians from Samoa settled Niue around 900 AD. Further settlers arrived from Tonga in the 16th century. Until the beginning of the 18th century, Niue appears to have had no national government or national leader; chiefs and heads of families exercised authority over segments of the population. A succession of "patu-iki" (kings) ruled, beginning with Puni-mata. Tui-toga, who reigned from 1875 to 1887, was the first Christian king. The first Europeans to sight Niue sailed under Captain James Cook in 1774. Cook made three attempts to land, but the inhabitants refused to grant permission to do so. 
He named the island "Savage Island" because, as legend has it, the natives who "greeted" him were painted in what appeared to be blood. The substance on their teeth was hulahula, a native red fe'i banana. For the next couple of centuries, Niue was known as Savage Island until its original name, "Niue", which translates as "behold the coconut", regained use. Whaling vessels were some of the most regular visitors to the island in the nineteenth century. The first on record was the "Fanny" in February 1824. The last known whaler to visit was the "Albatross" in November 1899. The next notable European visitors represented the London Missionary Society; they arrived on the "Messenger of Peace". After many years of trying to land a European missionary, they abducted a Niuean named Nukai Peniamina and trained him as a pastor at the Malua Theological College in Samoa. Peniamina returned in 1846 on the "John Williams" as a missionary with the help of Toimata Fakafitifonua. He was finally allowed to land in Uluvehi Mutalau after a number of attempts in other villages had failed. The chiefs of Mutalau village allowed him to land and assigned over 60 warriors to protect him day and night at the fort in Fupiu. In July 1849 Captain John Erskine visited the island in HMS "Havannah". Christianity was first taught to the Mutalau people before it spread to all the villages. Originally other major villages opposed the introduction of Christianity and had sought to kill Peniamina. The people from the village of Hakupu, although the last village to receive Christianity, came and asked for a "word of God"; hence, their village was renamed "Ha Kupu Atua" meaning "any word of God", or "Hakupu" for short. In 1889 the chiefs and rulers of Niue, in a letter to Queen Victoria, asked her "to stretch out towards us your mighty hand, that Niue may hide herself in it and be safe". After expressing anxiety lest some other nation should take possession of the island, the letter continued: "We leave it with you to do as seems best to you. If you send the flag of Britain that is well; or if you send a Commissioner to reside among us, that will be well". The British did not initially take up the offer. In 1900 a petition by the Cook Islanders asking for annexation included Niue "if possible". In a document dated 19 October 1900, the "King" and Chiefs of Niue consented to "Queen Victoria taking possession of this island". A despatch to the Secretary of State for the Colonies from the Governor of New Zealand referred to the views expressed by the Chiefs in favour of "annexation" and to this document as "the deed of cession". A British Protectorate was declared, but it remained short-lived. Niue was brought within the boundaries of New Zealand on 11 June 1901 by the same Order and Proclamation as the Cook Islands. The Order limited the islands to which it related by reference to an area in the Pacific described by co-ordinates, and Niue, at 19.02 S., 169.55 W, lies within that area. The New Zealand Parliament restored self-government in Niue with the 1974 constitution, following a referendum in 1974 in which Niueans had three options: independence, self-government or continuation as a New Zealand territory. The majority selected self-government, and Niue's written constitution was promulgated as supreme law. Robert Rex, ethnically part European, part native, was elected by the Niue Assembly as the first premier, a position he held until his death 18 years later. Rex became the first Niuean to receive a knighthood – in 1984. 
In January 2004 Cyclone Heta hit Niue, killing two people and causing extensive damage to the entire island, including wiping out most of the south of the capital, Alofi. On 7 March 2020, the International Dark Sky Association announced that Niue had become the first Dark Sky Preserve Nation. The Niue Constitution Act of 1974 vests executive authority in Her Majesty the Queen in Right of New Zealand and in the governor-general of New Zealand. The Constitution specifies that everyday practice involves the exercise of sovereignty by Cabinet, composed of the premier (Sir Toke Talagi since 2008) and three other ministers. The premier and ministers are members of the Niue Legislative Assembly, the nation's parliament. The Assembly consists of 20 members, 14 of them elected by the electors of each village constituency, and six by all registered voters in all constituencies. Electors must be New Zealand citizens, resident for at least three months, and candidates must be electors and resident for 12 months. Everyone born in Niue must register on the electoral roll. Niue has no political parties; all Assembly members are independents. The only Niuean political party to have ever existed, the Niue People's Party (1987–2003), won once (in 2002) before disbanding the following year. The Legislative Assembly elects the speaker as its first official in the first sitting of the Assembly following an election. The speaker calls for nominations for premier; the candidate with the most votes from the 20 members is elected. The premier selects three other members to form a Cabinet, the executive arm of government. General elections take place every three years, most recently on 30 May 2020. The judiciary, independent of the executive and the legislature, includes a High Court and a Court of Appeal, with appeals to the Judicial Committee of the Privy Council in London. Niue has been self-governing in free association with New Zealand since 3 September 1974, when the people endorsed the Constitution in a plebiscite. Niue is fully responsible for its internal affairs. Niue's position concerning its external relations is less clear-cut. Section 6 of the Niue Constitution Act provides that: "Nothing in this Act or in the Constitution shall affect the responsibilities of Her Majesty the Queen in right of New Zealand for the external affairs and defence of Niue." Section 8 elaborates but still leaves the position unclear: "Effect shall be given to the provisions of sections 6 and 7 [concerning external affairs and defence and economic and administrative assistance respectively] of this Act, and to any other aspect of the relationship between New Zealand and Niue which may from time to time call for positive co-operation between New Zealand and Niue after consultation between the Prime Minister of New Zealand and the Premier of Niue, and in accordance with the policies of their respective Governments; and, if it appears desirable that any provision be made in the law of Niue to carry out these policies, that provision may be made in the manner prescribed in the Constitution, but not otherwise." Niue has a representative mission in Wellington, New Zealand. It is a member of the Pacific Islands Forum and a number of regional and international agencies. It is not a member of the United Nations, but is a state party to the United Nations Convention on the Law of the Sea, the United Nations Framework Convention on Climate Change, the Ottawa Treaty and the Treaty of Rarotonga. 
The country has been a member state of UNESCO since 26 October 1993. Traditionally, Niue's foreign relations and defence have been regarded as the responsibility of New Zealand. However, in recent years Niue has begun to conduct its own foreign relations, independent of New Zealand, in some spheres. It established diplomatic relations with the People's Republic of China on 12 December 2007. The joint communique signed by Niue and China differs in its treatment of the Taiwan question from that agreed by New Zealand and China. New Zealand "acknowledged" China's position on Taiwan but has never expressly agreed with it, whereas Niue "recognises that there is only one China in the world, the Government of the People's Republic of China is the sole legal government representing the whole of China and Taiwan is an inalienable part of the territory of China." Niue established diplomatic relations with India on 30 August 2012. On 10 June 2014 the Government of Niue announced that Niue had established diplomatic relations with Turkey; the Minister of Infrastructure, Dalton Tagelagi, formalised the agreement at the Pacific Small Island States Foreign Ministers meeting in Istanbul, Turkey. The Memorandum of Understanding with Turkey is part of Niue's widening foreign relationships with countries including the People's Republic of China, India, Australia, Thailand, Samoa, the Cook Islands and Singapore. The people of Niue have fought as part of the New Zealand military. In World War I, Niue sent about 200 soldiers as part of the New Zealand (Māori) Pioneer Battalion in the New Zealand forces. Niue is not a republic, but its full name was listed as "the Republic of Niue" for a number of years on the ISO list of country names (ISO 3166-1). In its newsletter of 14 July 2011, the ISO acknowledged that this was a mistake and the words "the Republic of" were deleted from the ISO list of country names. Niue is a raised coral atoll in the southern Pacific Ocean, east of Tonga. There are three outlying coral reefs within the Exclusive Economic Zone, with no land area. Besides these, Albert Meyer Reef (to the southwest) is not officially claimed by Niue, and the existence of Haymet Rocks (east-southeast) is in doubt. Niue is one of the world's largest coral islands. The terrain consists of steep limestone cliffs along the coast with a central plateau rising to about 60 metres (200 feet) above sea level. A coral reef surrounds the island, with the only major break in the reef being in the central western coast, close to the capital, Alofi. A number of limestone caves occur near the coast. The island is roughly oval in shape, with two large bays indenting the western coast, Alofi Bay in the centre and Avatele Bay in the south. Between these is the promontory of Halagigie Point. A small peninsula, TePā Point (Blowhole Point), is close to the settlement of Avatele in the southwest. Most of the population resides close to the west coast, around the capital, and in the northwest. Some of the soils are geochemically very unusual. They are extremely weathered tropical soils, with high levels of iron and aluminium oxides (oxisol) and mercury, and they contain high levels of natural radioactivity. There is almost no uranium, but the radionuclides Th-230 and Pa-231 head the decay chains. 
This is the same distribution of elements as found naturally on very deep seabeds, but the geochemical evidence suggests that the origin of these elements is extreme weathering of coral and brief sea submergence 120,000 years ago. Endothermal upwelling, by which mild volcanic heat draws deep seawater up through the porous coral, almost certainly contributes. No adverse health effects from the radioactivity or the other trace elements have been demonstrated, and calculations show that the level of radioactivity is probably much too low to be detected in the population. These unusual soils are very rich in phosphate, but it is not accessible to plants, being in the very insoluble form of iron phosphate, or crandallite. It is thought that similar radioactive soils may exist on Lifou and Mare near New Caledonia, and Rennell in the Solomon Islands, but no other locations are known. According to the World Health Organization, residents are evidently very susceptible to skin cancer: in 2002, Niue reported skin cancer deaths at a rate of 2,482 per 100,000 people, far higher than any other country. Niue is separated from New Zealand by the International Date Line. The time difference is 23 hours during the Southern Hemisphere winter and 24 hours when New Zealand uses daylight saving time; the short code sketch at the end of this section illustrates the offset. The island has a tropical climate, with most rainfall occurring between November and April. A leader in green growth, Niue is also focusing on solar power provision, with help from the European Union. However, Niue currently has one of the highest per-capita rates of greenhouse gas production in the world. Niue aims to become 80% renewable by 2025. The Niue Island Organic Farmers Association is currently paving the way toward a Multilateral Environmental Agreement (MEA) committed to making Niue the world's first fully organic nation by 2020. In July 2009 a solar panel system was installed, injecting about 50 kW into the Niue national power grid. This is nominally 6% of the average 833 kW electricity production. The solar panels are at Niue High School (20 kW), the Niue Power Corporation office (1.7 kW) and Niue Foou Hospital (30 kW). The EU-funded grid-connected photovoltaic systems were supplied under the REP-5 programme and installed by the Niue Power Corporation on the roofs of the high school and the power station office and on ground-mounted support structures in front of the hospital. They are monitored and maintained by the NPC. In 2014 two additional solar power installations were added to the Niue national power grid: one, funded under Japan's PALM5 programme, is located outside the Tuila power station and is so far the only installation with battery storage; the other, funded by the European Union, is located opposite the Niue International Airport terminal. Niue's economy is small. Its gross domestic product (GDP) was NZ$17 million in 2003, or US$10 million at purchasing power parity, and had increased to US$24.9 million by 2016. Niue uses the New Zealand dollar. The Niue Integrated Strategic Plan (NISP) is the national development plan, setting national priorities for development. Cyclone Heta set the island back about two years from its planned timeline to implement the NISP, since national efforts concentrated on recovery. In 2008, Niue had yet to fully recover. After Heta, the government made a major commitment to rehabilitate and develop the private sector. 
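The 23- versus 24-hour offset mentioned above follows from Niue sitting at UTC-11 year-round while New Zealand observes daylight saving time. A minimal sketch using Python's standard zoneinfo module (with the IANA tz database zone names "Pacific/Niue" and "Pacific/Auckland") can verify it, assuming tz data is available on the system:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; some platforms also need the tzdata package

def offset_hours(instant_utc: datetime) -> float:
    # Hours by which Auckland clock time is ahead of Niue clock time.
    auckland = instant_utc.astimezone(ZoneInfo("Pacific/Auckland")).utcoffset()
    niue = instant_utc.astimezone(ZoneInfo("Pacific/Niue")).utcoffset()
    return (auckland - niue).total_seconds() / 3600

# July (southern winter, New Zealand standard time): expect 23 hours.
print(offset_hours(datetime(2021, 7, 1, tzinfo=timezone.utc)))   # 23.0
# January (New Zealand daylight saving time): expect 24 hours.
print(offset_hours(datetime(2021, 1, 15, tzinfo=timezone.utc)))  # 24.0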
The government allocated $1 million for the private sector, and spent it on helping businesses devastated by the cyclone and on construction of the Fonuakula Industrial Park. This industrial park is now completed and some businesses are already operating from there. The Fonuakula Industrial Park is managed by the Niue Chamber of Commerce, a not-for-profit organisation providing advisory services to businesses. The government and the Reef Group from New Zealand started two joint ventures in 2003 and 2004 to develop fisheries and a 120-hectare noni juice operation. Noni fruit comes from "Morinda citrifolia", a small tree with edible fruit. Niue Fish Processors Ltd (NFP) is a joint venture company processing fresh fish, mainly tuna (yellowfin, big eye and albacore), for export to overseas markets. NFP operates out of a state-of-the-art fish plant in Amanau Alofi South, completed and opened in October 2004. Niue is negotiating free trade agreements with other Pacific countries, PICTA Trade in Services (PICTA TIS), Economic Partnership Agreements with the European Union, and PACERPlus with Australia and New Zealand. The Office of the Chief Trade Adviser (OCTA) has been set up to assist Niue and other Pacific countries in the negotiation of PACERPlus. In August 2005, an Australian mining company, Yamarna Goldfields, suggested that Niue might have the world's largest deposit of uranium. By early September these hopes were seen as overoptimistic, and in late October the company cancelled its plans, announcing that exploratory drilling had identified nothing of commercial value. The Australian Securities and Investments Commission filed charges in January 2007 against two directors of the company, now called Mining Projects Group Ltd, alleging that their conduct had been deceptive and that they engaged in insider trading. This case was settled out of court in July 2008, both sides withdrawing their claims. Remittances from expatriates were a major source of foreign exchange in the 1970s and early 1980s. Continuous migration to New Zealand has shifted most members of nuclear and extended families there, removing the need to send remittances back home. In the late 1990s, PFTAC conducted studies on the balance of payments, which confirmed that Niueans are receiving few remittances but are sending more money overseas. Foreign aid has been Niue's principal source of income. Although most aid comes from New Zealand, it is currently being phased out, with reductions of NZ$250,000 each year. The country will need to rely more upon its own economy. The government generates some revenue, mainly from income tax, import tax and the lease of phone lines. The government briefly considered offshore banking but, under pressure from the US Treasury, agreed to end its support for schemes designed to minimise tax in countries like New Zealand. Niue provides automated companies registration, administered by the New Zealand Ministry of Economic Development. The Niue Legislative Assembly passed the Niue Consumption Tax Act in the first week of February 2009, and the 12.5% tax on goods and services was expected to take effect on 1 April 2009. Income tax has been lowered, and import tax may be reset to zero except for "sin" items like tobacco, alcohol and soft drinks. Tax on secondary income has been lowered from 35% to 10%, with the stated goal of fostering increased labour productivity. 
In 1997, the Internet Assigned Numbers Authority (IANA), under contract with the US Department of Commerce, assigned the Internet Users Society-Niue (IUS-N), a private nonprofit, as manager of the .nu top-level domain on the Internet. IUS-N's charitable purpose was – and continues to be – to use revenue from the registration of .nu domain names to fund low-cost or free Internet services for the people of Niue. In a letter to ICANN in 2007, IUS-N's independent auditors reported that IUS-N had invested US$3 million in Internet services in Niue between 1999 and 2005 from .nu domain name registration revenue during that period. In 1999, IUS-N and the Government of Niue signed an agreement whereby the Government recognised that IUS-N managed the .nu ccTLD under IANA's authority and IUS-N committed to provide free Internet services to government departments as well as to Niue's private citizens. A newly elected government later disputed that agreement and attempted to assert a claim on the domain name, including a requirement for IUS-N to make direct payments of compensation to the Government. In 2005, a Government-appointed Commission of Inquiry into the dispute released its report, which found no merit in the government's claims; the government subsequently dismissed the claims in 2007. Starting in 2003, IUS-N began installing WiFi connections throughout the capital village of Alofi and in several nearby villages and schools, and has been expanding WiFi coverage into the outer villages since then, making Niue the first WiFi Nation. To ensure security for government departments, IUS-N provides the government with a secure DSL connection to IUS-N's satellite Internet link, at no cost. Agriculture is very important to the lifestyle of Niueans and the economy, and around 204 square kilometres of the land area are available for agriculture. Subsistence agriculture is very much part of Niue's culture, where nearly all the households have plantations of taro. Taro is a staple food, and the pink taro now dominant in the taro markets in New Zealand and Australia is an intellectual property of Niue. This is one of the naturally occurring taro varieties on Niue, and has a strong resistance to pests. The Niue taro is known in Samoa as "talo Niue" and in international markets as pink taro. Niue exports taro to New Zealand. Tapioca or cassava, yams and kumara also grow very well, as do different varieties of bananas. Coconut meat, passionfruit and limes dominated exports in the 1970s, but in 2008 vanilla, noni and taro were the main export crops. Most families grow their own food crops for subsistence and sell their surplus at the Niue Makete in Alofi, or export to their families in New Zealand. Coconut crab, or uga, is also part of the local food supply; it lives in the forest and coastal areas. In 2003, the government made a commitment to develop and expand vanilla production with the support of NZAID. Vanilla has grown wild on Niue for a long time. Despite the setback caused by the devastating Cyclone Heta in early 2004, work on vanilla production continues. The expansion plan started with the employment of the unemployed or underemployed labour force to help clear land, plant supporting trees and plant vanilla vines. The approach to accessing land includes planning for each household to clear and plant a small plot with vanilla vines. 
Planting material for supporting trees is plentiful enough to meet demand for the expansion of vanilla plantations, but there is a severe shortage of vanilla vines for planting stock. Existing vines could be cut for stock, but doing so would reduce or stop their bean production. At the moment, the focus is on harvesting and marketing. The last agricultural census was in 1989. Tourism is one of the three priority economic sectors (the other two are fisheries and agriculture) for economic development. In 2006, estimated visitor expenditure made tourism a major industry for Niue. Niue will continue to receive direct support from the government and overseas donor agencies. The only airport is Niue International Airport. Air New Zealand is the sole airline, flying twice a week from Auckland. In the early 1990s Niue International Airport was served by a local airline, Niue Airlines, but it closed in 1992. There is a tourism development strategy to increase the number of rooms available to tourists at a sustainable level. Niue is trying to attract foreign investors to invest in the tourism industry by offering import and company tax concessions as incentives. New Zealand businessman Earl Hagaman, founder of Scenic Hotel Group, was awarded a contract in 2014 to manage the Matavai Resort in Niue after he made a $101,000 political donation to the New Zealand National Party, which at that time led a minority government in New Zealand. The resort is subsidized by New Zealand, which wants to bolster tourism there. In 2015 New Zealand announced $7.5m in additional funding for expansion of the resort. The selection of the Matavai contractor was made by the Niue Tourism Property Trust, whose trustees are appointed by New Zealand Foreign Affairs minister Murray McCully. Prime Minister John Key said he did not handle campaign donations, and that Niue premier Toke Talagi had long pursued tourism as a growth strategy. McCully denied any link between the donation, the foreign aid and the contractor selection. Niue became the world's first dark sky country in March 2020. The entire island maintains standards of light development and keeps light pollution limited. Visitors are able to enjoy guided astro-tours led by trained Niuean community members. Viewing sites used for whale-watching and for accessing the sea, as well as the roads that cross the island, make ideal viewing locations. The sailing season begins in May. Alofi Bay has many mooring buoys, and yacht crews can lodge at Niue Backpackers. The anchorage in Niue is one of the least protected in the South Pacific, so much so that cruise ship tenders are often unable to risk landing passengers due to weather or sea conditions, as well as the associated risk of having them stranded ashore. Other challenges of the anchorage are a primarily coral bottom and many deep spots. Mooring buoys are attached to seine floats that support the mooring lines away from seabed obstructions. On 27 October 2016, Niue officially declared that all its national debt was paid off. The Government plans to spend money saved from servicing loans on increasing pensions and offering incentives to lure expatriates back home. However, Niue is not entirely financially independent: New Zealand pays $14 million in aid each year, and Niue still depends on it. Premier Toke Talagi said Niue had managed to pay off US$4 million of debt and had "no interest" in borrowing again, particularly from large powers such as China. 
The first computers were Apple machines brought in by the University of the South Pacific Extension Centre around the early 1980s. The Treasury Department computerised its general ledger in 1986 using NEC personal computers that were IBM PC XT compatible. The 1986 Census of Households and Population was the first to be processed using a personal computer, after David Marshall, FAO Adviser on Agricultural Statistics, advised UNFPA demographer Dr Lawrence Lewis and Niue Government Statistician Bill Vakaafi Motufoou to switch from manual tabulation cards. In 1987, Statistics Niue received a new NEC PC AT personal computer for processing the 1986 census data; personnel were sent to Japan and New Zealand for training on the new computer. The first Computer Policy was developed and adopted in 1988. In 2003, Niue became the first country in the world to provide state-funded wireless internet to all inhabitants. In August 2008 it was reported that all school students had what is known as the OLPC XO-1, a specialised laptop by the One Laptop per Child project designed for children in the developing world. Niue was also a location of tests for the OpenBTS project, which aims to deliver low-cost GSM base stations built with open source software. In July 2011, Telecom Niue launched pre-paid mobile services (voice/EDGE, 2.5G) as Rokcell Mobile, based on the commercial GSM product of the vendor Lemko. Three BTS sites were planned to cover the nation. International roaming is not currently available. The fibre optic cable ring around the island (FTTC) is now complete; Internet/ADSL services were rolled out towards the end of 2011. In January 2015 Telecom Niue completed the laying of the fibre optic cable around Niue, connecting all 14 villages and making landline phones and ADSL internet connections available to households. Niue is the birthplace of New Zealand artist and writer John Pule. Author of "The Shark That Ate the Sun", he also paints tapa-cloth-inspired designs on canvas. In 2005, he co-wrote "Hiapo: Past and Present in Niuean Barkcloth", a study of a traditional Niuean artform, with Australian writer and anthropologist Nicholas Thomas. Taoga Niue is a new Government Department responsible for the preservation of culture, tradition and heritage. Recognising its importance, the Government has added Taoga Niue as the sixth pillar of the Niue Integrated Strategic Plan (NISP). Niue has two broadcast outlets, Television Niue and Radio Sunshine, managed and operated by the Broadcasting Corporation of Niue, and one newspaper, the "Niue Star". Despite the country's small size, a number of sports are popular. Rugby union is the most popular sport, played by both men and women; Niue won the 2008 FORU Oceania Cup. Netball is played only by women. There is a nine-hole golf course at Fonuakula, and a lawn bowling green is under construction. Association football is popular, as evidenced by the Niue Soccer Tournament, though the Niue national football team has played only two matches. Rugby league is also a popular sport. Niue Rugby League has made strides in the international arena since its first ever test match, a 22–20 loss to Vanuatu in 2013. On 4 October 2014, the Niue rugby league team recorded their first ever international test match win, defeating the Philippines 36–22. 
In May 2015, Niue Rugby League recorded its second international test match win, defeating the South African rugby league side 48–4. Niue now sits 31st in the Rugby League World Rankings.
https://en.wikipedia.org/wiki?curid=21228
Nirvana (band) Nirvana was an American rock band formed in Aberdeen, Washington, in 1987. Founded by lead singer and guitarist Kurt Cobain and bassist Krist Novoselic, the band went through a succession of drummers before recruiting Dave Grohl in 1990. Characterized by a punk aesthetic, Nirvana fused pop melodies with noise; combined with their themes of abjection and social alienation, this made them hugely popular during their short tenure. Though they dissolved in 1994 after the death of Cobain, their music maintains a popular following and continues to influence modern rock and roll culture. In the late 1980s, Nirvana established itself as part of the Seattle grunge scene, releasing its first album, "Bleach", for the independent record label Sub Pop in 1989. They developed a sound that relied on dynamic contrasts, often between quiet verses and loud, heavy choruses. After signing to major label DGC Records in 1991, Nirvana found unexpected mainstream success with "Smells Like Teen Spirit", the first single from their landmark second album "Nevermind" (1991). A cultural phenomenon of the 1990s, the album went on to be certified Diamond by the RIAA. Nirvana's sudden success popularized alternative rock and was credited for ending the dominance of hair metal, while they were also often referenced as the figurehead band of Generation X. Following extensive tours and the 1992 compilation album "Incesticide" and EP "Hormoaning", Nirvana released their abrasive and less mainstream-sounding third studio album, "In Utero" (1993). The album topped both the American and British album charts, and was widely acclaimed. Nirvana disbanded following Cobain's death in April 1994. Various posthumous releases have been overseen by Novoselic, Grohl, and Cobain's widow Courtney Love. The posthumous live album "MTV Unplugged in New York" (1994) won Best Alternative Music Performance at the 1996 Grammy Awards. During their three years as a mainstream act, Nirvana received the American Music Award, Brit Award and Grammy Award, as well as seven MTV Video Music Awards and two NME Awards. By 2009, they had over 25 million RIAA-certified units and had sold over 75 million records worldwide, making them one of the best-selling bands of all time. The band achieved ten top-40 hits on the "Billboard" Alternative Songs chart, including five number ones, and four number-one albums on the "Billboard" 200. In 2000, VH1 ranked them sixth in its list of the "100 Greatest Hard Rock Artists". In 2004, "Rolling Stone" named them among the greatest artists of all time. Nirvana was inducted into the Rock and Roll Hall of Fame in their first year of eligibility in 2014. Singer and guitarist Kurt Cobain and bassist Krist Novoselic met while attending Aberdeen High School in Washington state. The pair became friends while frequenting the practice space of the Melvins. Cobain wanted to form a band with Novoselic and gave him a demo tape of his project Fecal Matter, but Novoselic did not respond for a long period. Three years after the two first met, Novoselic notified Cobain that he had finally listened to the Fecal Matter demo and suggested they start a group. Their first band, the Sellouts, was a Creedence Clearwater Revival tribute band. They recruited Bob McFadden on drums, but after a month the project fell apart. In early 1987, Cobain and Novoselic recruited drummer Aaron Burckhard. They practiced material from Cobain's Fecal Matter tape but started writing new material soon after forming. 
During its initial months, the band went through a series of names, including Fecal Matter, Skid Row and Ted Ed Fred. The group settled on Nirvana because, according to Cobain, "I wanted a name that was kind of beautiful or nice and pretty instead of a mean, raunchy punk name like the Angry Samoans". Novoselic and Cobain moved to Tacoma and Olympia, Washington, respectively. They temporarily lost contact with Burckhard, and instead practiced with Dale Crover of the Melvins. Nirvana recorded its first demos in January 1988. In early 1988, Crover moved to San Francisco but recommended Dave Foster as his replacement on drums. Foster's tenure with Nirvana lasted only a few months; during a stint in jail, he was replaced by Burckhard, who again departed after telling Cobain he was too hungover to practice one day. Cobain and Novoselic put an ad seeking a replacement drummer in "The Rocket", a Seattle music publication, but received no satisfactory responses. Meanwhile, a mutual friend introduced them to drummer Chad Channing, and the three musicians agreed to jam together. Channing continued to jam with Cobain and Novoselic; however, by Channing's own account, "They never actually said 'okay, you're in.'" Channing played his first show with Nirvana in May 1988. Nirvana released its first single, a cover of Shocking Blue's "Love Buzz", in November 1988 on the Seattle independent record label Sub Pop. They did their first ever interview with John Robb in "Sounds", which made their release its single of the week. The following month, the band began recording its debut album, "Bleach", with local producer Jack Endino. "Bleach" was influenced by the heavy dirge-rock of the Melvins and the 1980s punk rock of Mudhoney, as well as the 1970s heavy metal of Black Sabbath. The money for the recording sessions for "Bleach", listed as $606.17 on the album sleeve, was supplied by Jason Everman, who was subsequently brought into the band as the second guitarist. Though Everman did not play on the album, he received a credit on "Bleach" because, according to Novoselic, they "wanted to make him feel more at home in the band". Just prior to the album's release, Nirvana became the first band to sign an extended contract with Sub Pop. "Bleach" was released in June 1989 and became a favorite of college radio stations. After the album's release, Nirvana embarked on its first national tour, but ended up canceling the last few dates and returning to Washington state due to increasing differences with Everman. No one told Everman he was fired; Everman later said he had actually quit. Although Sub Pop did not promote "Bleach" as much as other releases, it was a steady seller, with initial sales of 40,000 copies. However, Cobain was upset by the label's lack of promotion and distribution for the album. In late 1989, the band recorded the "Blew" EP with producer Steve Fisk. In an interview around that time with John Robb in "Sounds", Cobain commented that the band's music was changing, saying "The early songs were really angry... But as time goes on the songs are getting poppier and poppier as I get happier and happier. The songs are now about conflicts in relationships, emotional things with other human beings". In April 1990, Nirvana began working on their next album with producer Butch Vig at Smart Studios in Madison, Wisconsin. Cobain and Novoselic became disenchanted with Channing's drumming, and Channing expressed frustration at not being involved in songwriting. 
As bootlegs of Nirvana demos with Vig began to circulate in the music industry and draw attention from major labels, Channing left the band. That July, Nirvana recorded the single "Sliver" with Mudhoney drummer Dan Peters. Dale Crover filled in on drums for Nirvana's seven-date American West Coast tour with Sonic Youth that August. In September 1990, Buzz Osborne of the Melvins introduced the band to drummer Dave Grohl, whose Washington, D.C. band Scream had broken up. Grohl auditioned for Novoselic and Cobain days after arriving in Seattle; Novoselic later said, "We knew in two minutes that he was the right drummer." Grohl told "Q": "I remember being in the same room with them and thinking, 'What? That's Nirvana? Are you kidding?' Because on their record cover they looked like psycho lumberjacks... I was like, 'What, that little dude and that big motherfucker? You're kidding me'." Disenchanted with Sub Pop and with the Smart Studios sessions generating interest, Nirvana decided to look for a deal with a major record label, since no indie label could buy the group out of its contract. Cobain and Novoselic consulted Soundgarden and Alice in Chains manager Susan Silver for advice. They met Silver in Los Angeles, and she introduced them to agent Don Muller and music business attorney Alan Mintz, who specialized in finding deals for new bands. Mintz began sending Nirvana's demo tape to major labels. Following repeated recommendations by Sonic Youth's Kim Gordon, Nirvana signed to DGC Records in 1990. When Nirvana was inducted into the Rock and Roll Hall of Fame in 2014, Novoselic thanked Silver during his speech for "introducing them to the music industry properly". After signing, the band began recording its first major-label album, "Nevermind". The group was offered a number of producers but held out for Vig. Rather than record at Vig's Madison studio as they had in 1990, the band shifted production to Sound City Studios in Van Nuys, Los Angeles, California. For two months, the band worked through a variety of songs. Some, such as "In Bloom" and "Breed", had been in Nirvana's repertoire for years, while others, including "On a Plain" and "Stay Away", lacked finished lyrics until midway through the recording process. After the recording sessions were completed, Vig and the band set out to mix the album. However, the sessions had run behind schedule and the resulting mixes were deemed unsatisfactory. Andy Wallace, who had mixed for Slayer, was brought in to create the final mix. After the album's release, members of Nirvana expressed dissatisfaction with the polished sound the mixer had given "Nevermind". Initially, DGC Records hoped to sell 250,000 copies of "Nevermind", the same number it had achieved with Sonic Youth's "Goo". However, the first single, "Smells Like Teen Spirit", quickly gained momentum, boosted by major airplay of the music video on MTV. As it toured Europe during late 1991, the band found that its shows were dangerously oversold, that television crews were becoming a constant presence onstage, and that "Smells Like Teen Spirit" was almost omnipresent on radio and music television. By Christmas 1991, "Nevermind" was selling 400,000 copies a week in the US. In January 1992, the album displaced Michael Jackson's "Dangerous" at number one on the "Billboard" album chart and topped the charts in numerous other countries. 
The month "Nevermind" reached number one, "Billboard" proclaimed, "Nirvana is that rare band that has everything: critical acclaim, industry respect, pop radio appeal, and a rock-solid college/alternative base." The album eventually sold over seven million copies in the United States and over 30 million worldwide. Nirvana's sudden success was credited for popularizing alternative rock and ending the dominance of hair metal. Citing exhaustion, Nirvana decided not to undertake another American tour in support of "Nevermind", instead opting to make only a handful of performances later that year. In March 1992, Cobain sought to reorganize the group's songwriting royalties (which to this point had been split equally) to better represent that he wrote the majority of the music. Grohl and Novoselic did not object, but when Cobain wanted the agreement to be retroactive to the release of "Nevermind", the disagreements between the two sides came close to breaking up the band. After a week of tension, Cobain received a retroactive share of 75 percent of the royalties. Bad feelings about the situation remained within the group afterward. Amid rumors that the band was disbanding due to Cobain's health, Nirvana headlined the closing night of England's 1992 Reading Festival. Cobain personally programmed the performance lineup. Nirvana's performance at Reading is often regarded by the press as one of the most memorable of the group's career. A few days later, Nirvana performed at the MTV Video Music Awards; despite the network's refusal to let the band play the new song "Rape Me", Cobain strummed and sang the first few bars of the song before breaking into "Lithium". The band received awards for the Best Alternative Video and Best New Artist categories. DGC had hoped to have a new Nirvana album ready for a late 1992 holiday season; instead, it released the compilation album "Incesticide" in December 1992. A joint venture between DGC and Sub Pop, "Incesticide" collected various rare Nirvana recordings and was intended to provide the material for a better price and at quality bootlegs. As "Nevermind" had been out for 15 months and had yielded a fourth single in "In Bloom" by that point, Geffen/DGC opted not to heavily promote "Incesticide", which was certified gold by the Recording Industry Association of America the following February. In February 1993, Nirvana released "Puss"/"Oh, the Guilt", a split single with The Jesus Lizard, on the independent label Touch & Go. Meanwhile, the group chose Steve Albini, who had a reputation as a principled and opinionated individual in the American indie music scene, to record its third album. While there was speculation that the band chose Albini to record the album due to his underground credentials, Cobain insisted that Albini's sound was simply the one he had always wanted Nirvana to have: a "natural" recording without layers of studio trickery. Nirvana traveled to Pachyderm Studio in Cannon Falls, Minnesota, in that February to record the album. The sessions with Albini were productive and quick, and the album was recorded and mixed in two weeks for $25,000. Several weeks after the completion of the recording sessions, stories ran in the "Chicago Tribune" and "Newsweek" that quoted sources claiming DGC considered the album "unreleasable". As a result, fans began to believe that the band's creative vision might be compromised by their label. 
While the stories about DGC shelving the album were untrue, the band actually was unhappy with certain aspects of Albini's mixes; they thought the bass levels were too low, and Cobain felt that "Heart-Shaped Box" and "All Apologies" did not sound "perfect". Longtime R.E.M. producer Scott Litt was called in to remix these two songs, with Cobain adding additional instrumentation and backing vocals.
"In Utero" debuted at number one on the "Billboard" 200 album chart in September 1993. "Time"'s Christopher John Farley wrote in his review of the album, "Despite the fears of some alternative-music fans, Nirvana hasn't gone mainstream, though this potent new album may once again force the mainstream to go Nirvana". "In Utero" went on to sell over 5 million copies in the United States. That October, Nirvana embarked on its first tour of the United States in two years, with support from Half Japanese and the Breeders. For the tour, the band added Pat Smear of the punk rock band Germs as a second guitarist. In November, Nirvana recorded a performance for the television program "MTV Unplugged". Augmented by Smear and cellist Lori Goldston, the band broke convention for the show by choosing not to play their most recognizable songs. Instead, they performed several covers and invited Cris and Curt Kirkwood of the Meat Puppets to join them for renditions of three Meat Puppets songs. In early 1994, Nirvana embarked on a European tour. Nirvana's final concert took place in Munich, Germany, on March 1. In Rome, on the morning of March 4, Cobain's wife, Courtney Love, found Cobain unconscious in their hotel room, and he was rushed to the hospital. Cobain had reacted badly to a combination of prescribed Rohypnol and alcohol. The rest of the tour was cancelled. In the ensuing weeks, Cobain's heroin addiction resurfaced. Following an intervention, Cobain was convinced to admit himself into drug rehabilitation. After less than a week, he left the rehabilitation facility and returned to Seattle. One week later, on April 8, 1994, Cobain was found dead of a self-inflicted shotgun wound at his home in the Denny-Blaine neighborhood of the city. In August 1994, DGC announced a double album, "Verse Chorus Verse", comprising live material from throughout Nirvana's career, including its "MTV Unplugged" performance. However, Novoselic and Grohl found assembling the material so soon after Cobain's death emotionally overwhelming, and the album was canceled. Instead, in November, DGC released the "MTV Unplugged" performance as "MTV Unplugged in New York"; it debuted at number one on the "Billboard" charts and earned Nirvana a Grammy Award for Best Alternative Music Album. It was followed by Nirvana's first full-length live video, "Live! Tonight! Sold Out!!". In 1996, the live album "From the Muddy Banks of the Wishkah" became the third consecutive Nirvana release to debut at the top of the "Billboard" album chart. Grohl founded a new band, Foo Fighters, and Novoselic turned his attention to political activism. In 1997, Novoselic, Grohl, and Love formed the limited liability company Nirvana LLC to oversee all Nirvana-related projects. A 45-track box set of Nirvana rarities was scheduled for release in October 2001. However, shortly before the release date, Love filed a suit to dissolve Nirvana LLC, and an injunction was issued preventing the release of any new Nirvana material until the case was resolved. 
Love contended that Cobain was the band, that Grohl and Novoselic were sidemen, and that she had originally signed the partnership agreement under bad advice. Grohl and Novoselic countersued, asking the court to remove Love from the partnership and to replace her with another representative of Cobain's estate. The day before the case was set to go to trial in October 2002, Love, Novoselic, and Grohl announced that they had reached a settlement. The next month, the best-of compilation "Nirvana" was released, featuring the previously unreleased track "You Know You're Right", the last song Nirvana recorded before Cobain's death. It debuted at number three on the "Billboard" album chart. The box set, "With the Lights Out", was finally released in November 2004. The release contained early Cobain demos, rough rehearsal recordings, and live tracks recorded throughout the band's history. An album of selected tracks from the box set, "Sliver: The Best of the Box", was released in late 2005. In April 2006, Love announced that she was selling 25 percent of her stake in the Nirvana song catalog in a deal estimated at $50 million. The share of Nirvana's publishing was purchased by Primary Wave Music, which was founded by Larry Mestel, a former CEO of Virgin Records. Love sought to assure Nirvana's fanbase that the music would not simply be licensed to the highest bidder: "We are going to remain very tasteful and true to the spirit of Nirvana while taking the music to places it has never been before". Further releases have included DVD editions of "Live! Tonight! Sold Out!!" in 2006 and the full version of "MTV Unplugged in New York" in 2007. In November 2009, Nirvana's performance at the 1992 Reading Festival was released on CD and DVD as "Live at Reading", alongside a deluxe 20th-anniversary edition of "Bleach". DGC released 20th-anniversary deluxe-edition packages of "Nevermind" in September 2011 and "In Utero" in September 2013. In 2012, Grohl, Novoselic, and Smear joined Paul McCartney at the 12-12-12 benefit concert for Hurricane Sandy relief. The performance featured the premiere of a new song written by the four, "Cut Me Some Slack". A studio recording was released on the soundtrack to "Sound City", a film by Grohl. On July 19, 2013, they played with McCartney again during the encore of his Safeco Field "Out There" concert in Seattle, the first time Nirvana members had played together in their hometown in over 15 years. In 2014, Cobain, Novoselic, and Grohl were inducted into the Rock and Roll Hall of Fame. At the induction ceremony, Novoselic, Grohl and Smear performed a four-song set with guest vocalists Joan Jett, Kim Gordon, St. Vincent, and Lorde. Novoselic, Grohl, and Smear then performed a full show at Brooklyn's St. Vitus Bar with Jett, Gordon, St. Vincent, J Mascis, and John McCauley as guest vocalists. At the ceremony, Grohl thanked Burckhard, Crover, Peters and Channing for their time in Nirvana. Everman also attended. At Clive Davis' annual pre-Grammy party in 2016, the surviving members of Nirvana reunited to perform the David Bowie song "The Man Who Sold the World", which Nirvana had covered in their "MTV Unplugged" performance. Beck accompanied them on acoustic guitar and lead vocals. In October 2018, Novoselic and Grohl reunited during the finale of the Cal Jam festival at Glen Helen Amphitheater in San Bernardino County, California, joined by guest vocalists John McCauley and Joan Jett. On June 25, 2019, "The New York Times Magazine" listed Nirvana among hundreds of artists whose material was destroyed in the 2008 Universal fire. 
In January 2020, Novoselic and Grohl reunited for a performance at a benefit for The Art of Elysium at the Hollywood Palladium, joined by Beck, St. Vincent, and Grohl's daughter Violet Grohl. Cobain described Nirvana's initial sound as "a Gang of Four and Scratch Acid ripoff". When Nirvana recorded "Bleach", Cobain felt he had to fit the expectations of the Sub Pop grunge sound to build a fanbase, and suppressed his arty and pop songwriting in favor of a more rock sound. Nirvana biographer Michael Azerrad argued, "Ironically, it was the restrictions of the Sub Pop sound that helped the band find its musical identity." Azerrad stated that by acknowledging that they had grown up listening to Black Sabbath and Aerosmith, the band was able to move on from its derivative early sound. Nirvana used dynamic shifts that went from quiet to loud. Cobain sought to mix heavy and pop musical sounds, saying, "I wanted to be totally Led Zeppelin in a way and then be totally extreme punk rock and then do real wimpy pop songs." When Cobain heard the Pixies' 1988 album "Surfer Rosa" after recording "Bleach", he felt it had the sound he wanted to achieve but had been too intimidated to try. The Pixies' subsequent popularity encouraged Cobain to follow his instincts as a songwriter. Like the Pixies, Nirvana moved between "spare bass-and-drum grooves and shrill bursts of screaming guitar and vocals". Near the end of his life, Cobain said the band had become bored of the "limited" formula, but expressed doubt that they were skilled enough to try other dynamics. Cobain's rhythm guitar style, which relied on power chords, low-note riffs, and a loose left-handed technique, supplied the key components of the band's songs. Cobain would often play a song's verse riff in a clean tone, then double it with distorted guitars when he repeated the part. In some songs the guitar would be absent from the verses entirely, allowing the drums and bass guitar to support the vocals, or it would play only sparse melodies like the two-note pattern used in "Smells Like Teen Spirit". Cobain rarely played standard guitar solos, opting instead to play variations of the song's melody as single-note lines. Cobain's solos were mostly blues-based and discordant, which music writer Jon Chappell described as "almost an iconoclastic parody of the traditional instrumental break", a quality typified by the note-for-note replication of the lead melody in "Smells Like Teen Spirit" and the atonal solo in "Breed". The band had no formal musical training; Cobain said: "I have no concept of knowing how to be a musician at all whatsoever... I couldn't even pass Guitar 101". Grohl's drumming "took Nirvana's sound to a new level of intensity". Azerrad stated that Grohl's "powerful drumming propelled the band to a whole new plane, visually as well as musically", noting, "Although Dave is a merciless basher, his parts are also distinctly musical—it wouldn't be difficult to figure out what song he was playing even without the rest of the music". From 1992, Cobain and Novoselic tuned their guitars to E flat for studio and live performances (until then, their live tunings were to concert pitch). Cobain noted, "We play so hard we can't tune our guitars fast enough". The band made a habit of destroying its equipment after shows. Novoselic said he and Cobain created the "shtick" in order to get off the stage sooner. 
Cobain stated it began as an expression of his frustration with previous drummer Channing making mistakes and dropping out entirely during performances. Everett True said in 1989, "Nirvana songs treat the banal and pedestrian with a unique slant". Cobain came up with the basic components of each song (usually writing them on an acoustic guitar), as well as the singing style and the lyrics. He emphasized that Novoselic and Grohl "have a big part in deciding on how long a song should be and how many parts it should have. So I don't like to be considered the sole songwriter". When asked which part of a song he would write first, Cobain responded, "I don't know. I really don't know. I guess I start with the verse and then go into the chorus". Cobain usually wrote lyrics for songs minutes before recording them. Cobain said, "When I write a song the lyrics are the least important subject. I can go through two or three different subjects in a song and the title can mean absolutely nothing at all". Cobain told "Spin" in 1993 that he "didn't give a flying fuck" what the lyrics on "Bleach" were about, figuring "Let's just scream some negative lyrics and as long as they're not sexist and don't get too embarrassing it'll be okay", while the lyrics to "Nevermind" were taken from two years of accumulated poetry, which he cut up, choosing the lines he preferred. In comparison, Cobain stated that the lyrics to "In Utero" were "more focused, they're almost built on themes". Cobain did not necessarily write in a linear fashion, instead relying on juxtapositions of contradictory images to convey emotions and ideas. Often in his lyrics, Cobain would present an idea and then reject it; the songwriter explained, "I'm such a nihilistic jerk half the time and other times I'm so vulnerable and sincere [... The songs are] like a mixture of both of them. That's how most people my age are". Characterized by their punk aesthetic, Nirvana often fused pop melodies with noise. Combined with their themes of abjection and alienation, this made the band hugely popular during their short tenure. Stephen Thomas Erlewine wrote that prior to Nirvana, "alternative music was consigned to specialty sections of record stores, and major labels considered it to be, at the very most, a tax write-off". Following the release of "Nevermind", "nothing was ever quite the same, for better and for worse". The success of "Nevermind" not only popularized grunge but also established "the cultural and commercial viability of alternative rock in general". While other alternative bands had had hits before, Nirvana "broke down the doors forever", according to Erlewine. Erlewine further stated that Nirvana's breakthrough "didn't eliminate the underground" but rather "just gave it more exposure", and that it "popularized so-called 'Generation X' and 'slacker' culture". Immediately following Cobain's death, numerous headlines referred to Nirvana's frontman as "the voice of a generation", although he had rejected such labeling during his lifetime. In 1992, Jon Pareles of "The New York Times" reported that Nirvana's breakthrough had made others in the alternative scene impatient to achieve similar success, noting, "Suddenly, all bets are off. No one has the inside track on which of dozens, perhaps hundreds, of ornery, obstreperous, unkempt bands might next appeal to the mall-walking millions". 
Record company executives offered large advances and record deals to bands, and previous strategies of building audiences for alternative rock groups were replaced by the opportunity to achieve mainstream popularity quickly.
Nirvana didn't invent alternative rock, but they were one of the bands foremost in bringing it to the masses. The Seattle trio had an unmistakable sound: a genius blend of Kurt Cobain's raspy voice and gnashing guitars, Dave Grohl's relentless drumming and Krist Novoselic's uniting bass-work that connected with fans in a hail of alternately melodic and hard-charging songs that would become signature classics. – "Billboard" magazine
Michael Azerrad argued in his Nirvana biography "Come as You Are: The Story of Nirvana" (1993) that "Nevermind" marked an epochal generational shift in music similar to the rock-and-roll explosion in the 1950s and the end of the baby boomer generation's dominance of the musical landscape. Azerrad wrote, ""Nevermind" came along at exactly the right time. This was music by, for, and about a whole new group of young people who had been overlooked, ignored, or condescended to." Regarding the success of "Nevermind", Fugazi frontman Guy Picciotto said: "It was like our record could have been a hobo pissing in the forest for the amount of impact it had ... It felt like we were playing ukuleles all of a sudden because of the disparity of the impact of what they did." Nirvana is one of the best-selling bands of all time, having sold over 75 million records worldwide. With over 25 million RIAA-certified units, the band is also the 80th-best-selling music artist in the United States. The band achieved ten top-40 hits on the "Billboard" Alternative Songs chart, including five No. 1s. Two of the band's studio albums and two of their live albums have reached the top spot on the "Billboard" 200. Nirvana has been awarded one Diamond, three Multi-Platinum, seven Platinum and two Gold certified albums in the United States by the RIAA, and four Multi-Platinum, four Platinum, two Gold and one Silver certified albums in the UK by the BPI. "Nevermind", the band's most successful album, has sold over 30 million copies worldwide, making it one of the best-selling albums ever. Their most successful song, "Smells Like Teen Spirit", is among the best-selling singles of all time, having sold 8 million copies. Since its breakup, Nirvana has continued to receive acclaim. In 2003, they were selected as one of the inductees of the "Mojo" Hall of Fame 100. The band also received a nomination in 2004 from the UK Music Hall of Fame for the title of "Greatest Artist of the 1990s". "Rolling Stone" placed Nirvana at number 27 on its list of the "100 Greatest Artists of All Time" in 2004, and at number 30 on its updated list in 2011. In 2003, the magazine's senior editor David Fricke picked Kurt Cobain as the 12th-best guitarist of all time; "Rolling Stone" later ranked Cobain the 45th-greatest singer in 2008 and the 73rd-greatest guitarist of all time in 2011. VH1 ranked Nirvana the 42nd-greatest artist of rock and roll in 1998, the 7th-greatest hard rock artist in 2000, and the 14th-greatest artist of all time in 2010. Nirvana's contributions to music have also received recognition. The Rock and Roll Hall of Fame has inducted two of Nirvana's recordings, "Smells Like Teen Spirit" and "All Apologies", into its list of "The Songs That Shaped Rock and Roll". The museum also ranked "Nevermind" number 10 on its "The Definitive 200 Albums of All Time" list in 2007. 
In 2005, the Library of Congress added "Nevermind" to the National Recording Registry, which collects "culturally, historically or aesthetically important" sound recordings from the 20th century. In 2011, four of Nirvana's songs appeared on "Rolling Stone"'s updated list of "The 500 Greatest Songs of All Time", with "Smells Like Teen Spirit" ranking the highest at number 9. Three of the band's albums were ranked on the magazine's 2012 list of "The 500 Greatest Albums of All Time", with "Nevermind" placing the highest at number 17. The same three albums were also placed on "Rolling Stone"'s 2011 list of "The 100 Best Albums of the Nineties", with "Nevermind" topping the list at number 1. "Time" included "Nevermind" on its list of "The All-TIME 100 Albums" in 2006, labeling it "the finest album of the 1990s". In 2011, the magazine also added "Smells Like Teen Spirit" to its list of "The All-TIME 100 Songs", and "Heart-Shaped Box" to its list of "The 30 All-TIME Best Music Videos". On December 17, 2013, Nirvana was announced as part of the 2014 class of inductees into the Rock and Roll Hall of Fame, in their first year of eligibility. The induction ceremony was held April 10, 2014, at the Barclays Center in Brooklyn, New York. However, the induction applied only to Cobain, Novoselic and Grohl; Channing was not included, and was informed of his omission by SMS.
https://en.wikipedia.org/wiki?curid=21231
Nirvana (British band) Nirvana are an English pop rock band, formed in London, England, in 1965. Though the band achieved only limited commercial success, they were acclaimed both by music industry professionals and by critics. In 1985, the band reformed. The members of the band took the popular American rock band of the same name to court over the use of the name, eventually reaching a settlement. Nirvana was created as the performing arm of the London-based songwriting partnership of Irish musician Patrick Campbell-Lyons and Greek composer Alex Spyropoulos. On their recordings, Campbell-Lyons and Spyropoulos supplied all the vocals. The instrumental work was primarily undertaken by top session musicians and orchestral musicians, with Campbell-Lyons providing a little guitar and Spyropoulos contributing some keyboards. Musically, Campbell-Lyons and Spyropoulos blended myriad styles, including rock, pop, folk, jazz, Latin rhythms and classical music, primarily augmented by baroque chamber-style arrangements, to create a distinctive whole. In October 1967, they released their first album, a concept album produced by Chris Blackwell titled "The Story of Simon Simopath". The album was one of the first narrative concept albums ever released: it predated story-driven concept albums such as The Pretty Things' "S.F. Sorrow" (December 1968), The Who's "Tommy" (April 1969) and The Kinks' "Arthur" (September 1969), and preceded the Moody Blues album "Days of Future Passed" (November 1967) by a month. Island Records launched Nirvana's first album "with a live show at the Saville Theatre, sharing a bill with fellow label acts Traffic, Spooky Tooth, and Jackie Edwards." Unable to perform their songs live as a duo, and with the release of their first album impending, Campbell-Lyons and Spyropoulos decided to create a live performing group, The Nirvana Ensemble, recruiting four musicians to enable them to undertake concerts and TV appearances. Though hired for the live ensemble rather than as band members, these four musicians were also included alongside the core duo in the photograph on the first album's cover, to help project the image of a group rather than a duo. However, they were not core founding members, and within a few months Nirvana had reverted to its original two-person lineup. The four musicians who augmented Campbell-Lyons and Spyropoulos at live appearances and on television during those few months were Ray Singer (guitar), Brian Henderson (bass), Sylvia A. Schuster (cello) and Michael Coe (French horn, viola). Sue & Sunny also participated in The Nirvana Ensemble, providing vocals. The band appeared on French television with Salvador Dalí, who splashed black paint on them during a performance of their second single, "Rainbow Chaser". Campbell-Lyons kept the jacket, but regrets that Dalí did not sign any of their paint-splashed clothes. Island Records allegedly sent the artist an invoice for the cleaning of Schuster's cello. Following the minor chart success of "Rainbow Chaser", "live appearances became increasingly rare" and the songwriting duo at the core of Nirvana "decided to disband the sextet" and to rely on session musicians for future recordings. Spyropoulos cited Schuster's departure due to pregnancy as the instigator of the band's return to its core membership; Campbell-Lyons also cited the high cost of retaining the additional members as a reason for their departure. 
Schuster later became principal cellist of the BBC Symphony Orchestra. In 1968, the duo recorded their second album, "All of Us", which featured a similarly broad range of musical styles as their first. Their third album, "Black Flower", was rejected by Blackwell, who disparagingly compared it to Francis Lai's "A Man and a Woman". Under the title "To Markos III" (supposedly named for a "rich uncle" of Spyropoulos who helped finance the album), it was released in the UK on the Pye label in May 1970; reportedly only 250 copies were pressed, and it was deleted shortly afterwards. One track, "Christopher Lucifer", was a jibe at Blackwell. In 1971, the duo amicably separated, with Campbell-Lyons the primary contributor to the next two Nirvana albums, "Local Anaesthetic" (1971) and "Songs of Love And Praise" (1972), the latter featuring the return of Sylvia Schuster. Campbell-Lyons subsequently worked as a solo artist, issuing the albums "Me and My Friend" (1973), "The Electric Plough" (1981) and "The Hero I Might Have Been" (1983), though these did not enjoy commercial success. The duo reunited in 1985, touring Europe and releasing a compilation album, "Black Flower" (Bam-Caruso, 1987), which contained some new material. ("Black Flower" had been the original planned title of their third album.) In the 1990s, two further albums were released: "Secret Theatre" (1994) compiled rare tracks and demos, while "Orange and Blue" (1996) contained previously unreleased material, including a flower-power cover of the song "Lithium", originally recorded by the Seattle grunge band of the same name. According to the band's official website, this was intended as part of a tongue-in-cheek album called "Nirvana Sings Nirvana" that was aborted when lead singer Kurt Cobain died. When the recording was presented on the "Orange and Blue" album, Campbell-Lyons's liner notes treated it seriously, with allusion to Heathcliff from "Wuthering Heights". Also according to the website, the band still wanted to open for Hole even after Cobain's death. The original group filed a lawsuit in California against the Seattle grunge band in 1992. The matter was settled out of court on undisclosed terms that apparently allowed both bands to continue using the name and issuing new recordings without any packaging disclaimers or caveats to distinguish one Nirvana from the other. Music writer Everett True has claimed that Cobain's record label paid $100,000 to the original Nirvana to permit Cobain's band continued use of the name. In 1999, they released a three-disc CD anthology titled "Chemistry", including twelve previously unreleased tracks and some new material. Their first three albums were reissued on CD by Universal Records in 2003 and received critical acclaim. In 2005, Universal (Japan) reissued "Local Anaesthetic" and "Songs of Love And Praise". In 2018, a new compilation, "Rainbow Chaser: The 60s Recordings (The Island Years)", was released on the Island label; the two-CD package collected the group's first two albums across 52 tracks, including 27 previously unreleased outtakes, demos and alternative versions, among them some rare gems that show how the group influenced the pop of the time. 
The group were in the school of baroque-flavoured, melodic pop-rock typified by the Beach Boys of "Pet Sounds" and "God Only Knows", the Zombies of "Odessey and Oracle" and "Time of the Season", the Procol Harum of "A Whiter Shade of Pale", the Moody Blues of "Days of Future Passed" and "Nights in White Satin", the Kinks of "Waterloo Sunset" and the Love of "Forever Changes". The majority of the tracks on Nirvana's albums fell into that broad genre of contemporary popular music, not easily categorized but perhaps best described as the baroque or chamber strand of progressive rock, soft rock, orchestral pop and chamber pop. The Nirvana song "Rainbow Chaser" is thought to be the first British recording to feature the audio effect known as phasing or flanging throughout an entire track, as distinct from its occasional use within a song, as by the Small Faces in "Itchycoo Park". By 1967, phasing was heavily identified with the musical style known as psychedelia, and as "Rainbow Chaser" was the only Nirvana single to achieve commercial success, peaking at number 34 in the UK Singles Chart in May 1968, the band were invariably tagged as "psychedelic". In fact, "Rainbow Chaser" was one of the few Nirvana recordings that had any connection with psychedelic music, although "Orange and Blue" was acknowledged, in the liner notes of the eponymous album, to have been written under the influence of LSD. A who's who of behind-the-scenes craftsmen who went on to become Britain's top producers, arrangers, engineers and mixers of the 1970s chose to work with Nirvana in the late 1960s, in essence cutting their studio teeth with the group. Two of these arranger/producers worked with Nirvana before working with The Beatles and The Rolling Stones. Nirvana's producers, arrangers, engineers and mixers included Mike Weighell of Nova Studios at the beginning of the 1970s. Others who worked on production with Nirvana include Muff Winwood (formerly of the Spencer Davis Group); arranger/producer Mike Hurst, who worked with Jimmy Page, Cat Stevens, Manfred Mann, the Spencer Davis Group and Colin Blunstone; and arranger Johnny Scott, who arranged for the Hollies and subsequently scored films such as "The Shooting Party". Top musicians who played on Nirvana sessions include Lesley Duncan, Big Jim Sullivan, Herbie Flowers, Billy Bremner (later of Rockpile/Dave Edmunds fame), Luther Grosvenor, Clem Cattini, the full lineup of the rock band Spooky Tooth, and Pete Kelly (also known as Patrick Joseph Kelly; keyboards), who co-wrote the track "Modus Operandi" on the "Local Anaesthetic" album.
https://en.wikipedia.org/wiki?curid=21232
Nirvana In Indian religions, nirvana is synonymous with "moksha" and "mukti". All Indian religions assert it to be a state of perfect quietude, freedom and highest happiness, as well as liberation from, or the ending of, "samsara", the repeating cycle of birth, life and death. However, Buddhist and non-Buddhist traditions describe these terms for liberation differently. In the Buddhist context, "nirvana" refers to the realization of non-self and emptiness, marking the end of rebirth by stilling the "fires" that keep the process of rebirth going. In Hindu philosophy, it is the union with, or the realization of the identity of, Atman and Brahman, depending on the Hindu tradition. In Jainism, nirvana is also the soteriological goal, representing the release of a soul from karmic bondage and samsara. The word "nirvāṇa", states Steven Collins, is from the verbal root "vā" "blow" in the form of the past participle "vāna" "blown", prefixed with the preverb "nis" meaning "out". Hence the original meaning of the word is "blown out, extinguished". Sandhi changes the sounds: the "v" of "vāna" causes "nis" to become "nir", and then the "r" of "nir" causes retroflexion of the following "n": "nis+vāna" > "nirvāṇa". The term "nirvana" in the soteriological sense of a "blown out, extinguished" state of liberation appears in neither the Vedas nor the Upanishads; according to Collins, "the Buddhists seem to have been the first to call it "nirvana"." However, ideas of spiritual liberation, under different terminology and with the concepts of soul and Brahman, appear in Vedic texts and Upanishads, such as in verse 4.4.6 of the Brihadaranyaka Upanishad. This may have been a deliberate use of words in early Buddhism, suggests Collins, since Atman and Brahman were described in Vedic texts and Upanishads with the imagery of fire, as something good, desirable and liberating. "Nirvāṇa" is a term found in the texts of all major Indian religions – Buddhism, Hinduism, Jainism and Sikhism. It refers to the profound peace of mind that is acquired with "moksha", liberation from samsara, or release from a state of suffering, after respective spiritual practice or sādhanā. The idea of "moksha" is connected to the Vedic culture, where it conveyed a notion of "amrtam", "immortality", and also a notion of a "timeless", "unborn", or "the still point of the turning world of time". It also conveyed a timeless structure, the whole underlying "the spokes of the invariable but incessant wheel of time". The hope for life after death started with notions of going to the worlds of the Fathers or Ancestors and/or the world of the Gods or Heaven. The earliest Vedic texts incorporate the concept of life, followed by an afterlife in heaven or hell based on cumulative virtues (merit) or vices (demerit). However, the ancient Vedic Rishis challenged this idea of the afterlife as simplistic, because people do not live equally moral or immoral lives. Among generally virtuous lives, some are more virtuous, while evil too has degrees, so either permanent heaven or permanent hell is disproportionate. The Vedic thinkers therefore introduced the idea of an afterlife in heaven or hell in proportion to one's merit, and when this runs out, one returns and is reborn. The idea of rebirth following the "running out of merit" appears in Buddhist texts as well. This idea appears in many ancient and medieval texts as "Saṃsāra", or the endless cycle of life, death, rebirth and redeath, such as in section 6:31 of the "Mahabharata" and verse 9.21 of the "Bhagavad Gita". 
Saṃsāra, the life after death, and what impacts rebirth came to be seen as dependent on karma. Liberation from Saṃsāra developed as an ultimate goal and soteriological value in Indian culture, called by different terms such as nirvana, moksha, mukti and kaivalya. This basic scheme underlies Hinduism, Jainism and Buddhism, where "the ultimate aim is the timeless state of "moksa", or, as the Buddhists first seem to have called it, nirvana." Although the term occurs in the literatures of a number of ancient Indian traditions, the concept is most commonly associated with Buddhism. It was later adopted by other Indian religions, but with different meanings and descriptions ("Moksha"), such as in the Hindu text "Bhagavad Gita" of the "Mahabharata". Nirvana ("nibbana") literally means "blowing out" or "quenching". It is the most used as well as the earliest term to describe the soteriological goal in Buddhism: release from the cycle of rebirth ("saṃsāra"). Nirvana is part of the Third Truth on the "cessation of dukkha" in the Four Noble Truths doctrine of Buddhism, and it is the goal of the Noble Eightfold Path. In the Buddhist scholastic tradition, the Buddha is believed to have realized two types of nirvana, one at enlightenment and another at his death. The first is called "sopadhishesa-nirvana" (nirvana with a remainder), the second "parinirvana" or "anupadhishesa-nirvana" (nirvana without remainder, or final nirvana). In the Buddhist tradition, nirvana is described as the extinguishing of the "fires" that cause rebirths and associated suffering. The Buddhist texts identify these "three fires" or "three poisons" as "raga" (greed, sensuality), "dvesha" (aversion, hate) and "avidyā" or "moha" (ignorance, delusion). The state of nirvana is also described in Buddhism as the cessation of all afflictions, the cessation of all actions, and the cessation of the rebirths and suffering that are a consequence of afflictions and actions. Liberation is described as identical to "anatta" ("anatman", non-self, lack of any self); in Buddhism, liberation is achieved when all things and beings are understood to have no Self. Nirvana is also described as identical to achieving "sunyata" (emptiness), where there is no essence or fundamental nature in anything, and everything is empty. In time, with the development of Buddhist doctrine, other interpretations were given, such as being an unconditioned state, a fire going out for lack of fuel, the abandoning of the weaving ("vana") together of life after life, and the elimination of desire. However, Buddhist texts have asserted since ancient times that nirvana is more than the "destruction of desire"; it is "the object of the knowledge" of the Buddhist path. The most ancient texts of Hinduism, such as the Vedas and early Upanishads, do not mention the soteriological term "nirvana". This term is found in texts such as the Bhagavad Gita and the Nirvana Upanishad, likely composed in the post-Buddha era. The concept of nirvana is described differently in Buddhist and Hindu literature. Hinduism has the concept of Atman – the soul, self – asserted to exist in every living being, while Buddhism asserts through its "anatman" doctrine that there is no Atman in any being. 
Nirvana in Buddhism is "stilling mind, cessation of desires, and action" unto emptiness, states Jeaneane Fowler, while nirvana in post-Buddhist Hindu texts is also "stilling mind but not inaction" and "not emptiness"; rather, it is the knowledge of the true Self (Atman) and the acceptance of its universality and unity with metaphysical Brahman. The ancient soteriological concept in Hinduism is moksha, described as the liberation from the cycle of birth and death through self-knowledge and the eternal connection of Atman (soul, self) and metaphysical Brahman. Moksha is derived from the root "muc", which means free, let go, release, liberate; Moksha means "liberation, freedom, emancipation of the soul". In the Vedas and early Upanishads, the word "mucyate" appears, which means to be set free or released, as a horse from its harness. The traditions within Hinduism state that there are multiple paths ("marga") to moksha: "jnana-marga", the path of knowledge; "bhakti-marga", the path of devotion; and "karma-marga", the path of action. The term Brahma-nirvana appears in verses 2.72 and 5.24–26 of the Bhagavad Gita. It is the state of release or liberation: the union with Brahman. According to Easwaran, it is an experience of blissful egolessness. According to Zaehner, Johnson and other scholars, "nirvana" in the Gita is a Buddhist term adopted by the Hindus. Zaehner states it was used in Hindu texts for the first time in the Bhagavad Gita, and that the idea therein, in verses 2.71–72, to "suppress one's desires and ego" is also Buddhist. According to Johnson, the term "nirvana" was borrowed from the Buddhists to confuse them, by linking the Buddhist nirvana state to the pre-Buddhist Vedic tradition of the metaphysical absolute called Brahman. According to Mahatma Gandhi, the Hindu and Buddhist understandings of "nirvana" are different because the nirvana of the Buddhists is shunyata, emptiness, but the nirvana of the Gita means peace, which is why it is described as brahma-nirvana (oneness with Brahman). The terms "moksa" and "nirvana" are often used interchangeably in the Jain texts. The Uttaradhyayana Sutra provides an account of Sudharman (also called Gautama, one of the disciples of Mahavira) explaining the meaning of nirvana to Kesi, a disciple of Parshva. The terms "nirvana" and "parinirvana" are also mentioned in the thirteenth- or fourteenth-century Manichaean works "The great song to Mani" and "The story of the Death of Mani", referring to the "realm of light". The concept of liberation as the "extinction of suffering", along with the idea of "samsara" as the "cycle of rebirth", is also part of Sikhism. Nirvana appears in Sikh texts as the term "Nirban". However, the more common term is "Mukti" or "Moksh", a salvation concept wherein loving devotion to God is emphasized for liberation from the endless cycle of rebirths.
https://en.wikipedia.org/wiki?curid=21233
Neva The Neva is a river in northwestern Russia, flowing from Lake Ladoga through the western part of Leningrad Oblast (the historical region of Ingria) to the Neva Bay of the Gulf of Finland. Despite its modest length, it is the fourth largest river in Europe in terms of average discharge (after the Volga, the Danube and the Rhine). The Neva is the only river flowing out of Lake Ladoga. It flows through the city of Saint Petersburg, the three smaller towns of Shlisselburg, Kirovsk and Otradnoye, and dozens of settlements. The river is navigable throughout and is part of the Volga–Baltic Waterway and the White Sea–Baltic Canal. It is the site of numerous major historical events, including the Battle of the Neva in 1240, which gave Alexander Nevsky his name, the founding of Saint Petersburg in 1703, and the Siege of Leningrad by the German army during World War II. The Neva played a vital role in trade between Byzantium and Scandinavia. The area of the Neva was originally inhabited by Finnic people, and the word "neva" is widespread in Finnic languages with similar meanings: in Finnish, "neva" means a poor fen; in Karelian, a watercourse; and in Estonian, "nõva" means a waterway. It has also been argued that the name derives from the Indo-European adjective "newā", meaning new; the river began to flow around 1350 BC. However, the place names of the area do not support any Indo-European influence before Scandinavian traders and Slavs started to enter the region in the 8th century CE. In the Paleozoic, 300–400 million years ago, the entire territory of the modern delta of the Neva was covered by a sea. The modern relief was formed as a result of glacier activity; the glacier's retreat formed the Littorina Sea, the water level of which was somewhat higher than the present level of the Baltic Sea. At that time, the Tosna flowed in the modern bed of the Neva, from east to west into the Littorina Sea. In the north of the Karelian Isthmus, the Littorina Sea was united with Lake Ladoga by a wide strait. The Mga then flowed eastwards into Lake Ladoga, near the modern source of the Neva, and was separated from the basin of the Tosna. Near the modern Lake Ladoga, the land rose faster and a closed reservoir was formed. Its water level began to rise, eventually flooding the valley of the Mga and breaking through into the valley of the Tosna. The Ivanovskye rapids of the modern Neva were created in the breakthrough area. Thus, around 2000 BC, the Neva was created, with the Tosna and the Mga as its tributaries. According to some newer data, this happened in 1410–1250 BC, making the Neva a rather young river. The valley of the Neva is formed by glacial and post-glacial sediments and has not changed much over the past 2,500 years. The delta of the Neva was formed at that time; it is actually a pseudodelta, as it was formed not by the accumulation of river material but by the river cutting into earlier sediments. The Neva flows out of Lake Ladoga near Shlisselburg, runs through the Neva Lowland and discharges into the Baltic Sea in the Gulf of Finland. The river banks are low and steep. There are three sharp turns: at the Ivanovskye rapids; at Nevsky Forest Park in the Ust-Slavyanka region (the so-called crooked knee); and near the Smolny Institute, below the mouth of the river Okhta. The river declines in elevation between source and mouth. At one point the river crosses a moraine ridge and forms the Ivanovskye rapids. 
There, at the beginning of the rapids, is the narrowest part of the river. The widest places are in the delta near the gates of the marine trading port, at the end of the Ivanovskye rapids near the confluence of the river Tosna, and near the island of Fabrichny near the source. The maximum depth is reached above the Liteyny Bridge, and the minimum is in the Ivanovskye rapids. In the Neva basin, rainfall greatly exceeds evaporation; the latter accounts for only 37.7 percent of the water consumption from the Neva, and the remaining 62.3 percent is water runoff. Since observations began in 1859, the largest annual flow was recorded in 1924 and the lowest in 1900. Because the flow from Lake Ladoga into the Neva is uniform over the whole year, there is almost no spring flooding or corresponding rise in water level. The Neva freezes throughout from early December to early April. Ice congestion may form in winter in the upper reaches of the river, which sometimes causes upstream floods. Less than 5 percent of the total ice volume of Lake Ladoga enters the Neva. The swimming season lasts only about 1.5 months in summer. The water is fresh, with medium turbidity; the average salinity is 61.3 mg/L and the calcium bicarbonate content is 7 mg/L. The Neva's own basin has an area of 5,000 km²; including the basins of Lakes Ladoga and Onega, the total catchment is 281,000 km². The basin contains 26,300 lakes and has a complex hydrological network of more than 48,300 rivers; however, only 26 of them flow directly into the Neva. The main tributaries are the Mga, Tosna, Izhora, Slavyanka and Murzinka on the left, and the Okhta and Chyornaya Rechka on the right side of the Neva. The hydrological network has been altered by the development of St. Petersburg throughout its history. When the city was founded in 1703, the area was low and swampy and required the construction of canals and ponds for drainage; the earth excavated during their construction was used to raise the city. At the end of the 19th century, the delta of the Neva consisted of 48 rivers and canals and 101 islands. Before the construction of the Obvodny Canal, the left tributary of that area was the Volkovka; its section at the confluence is now called the Monastyrka. The Ladoga Canal starts at the source of the Neva and connects it, along the southern coast of Lake Ladoga, with the Volkhov. Some canals of the delta were filled in over time, so that only 42 islands remained by 1972, all within the city limits of St. Petersburg. The largest islands are Vasilyevsky, Petrogradsky, Krestovsky and Dekabristov; others include Zayachy, Yelagin and Kamenny Islands. At the source of the Neva, near Shlisselburg, there are two small islands, Orekhovy and Fabrichny. The island of Glavryba lies further up the river, above the town of Otradnoye. There is almost no aquatic vegetation in the Neva. The river banks mostly consist of sand, podsol, gleysols, peat and boggy peat soils. Several centuries ago, the whole territory of the Neva Lowland was covered by mossy pine and spruce forests. They were gradually reduced by fires and by cutting for technical needs. Extensive damage was caused during World War II: within St. Petersburg the forests were destroyed completely, and in the upper reaches they were reduced by 40 to 50 percent. 
Forests were replanted after the war with spruce, pine, cedar, Siberian larch, oak, Norway maple, elm, ash, apple, mountain ash and other species. The shrubs include barberry, lilac, jasmine, hazel, honeysuckle, hawthorn, rose hip, viburnum, juniper, elder, shadbush and many others. Nowadays, the upper regions of the river are dominated by birch and pine-birch grass-shrub forests, and in the middle regions there are swampy pine forests. In St. Petersburg, along the Neva, there are many gardens and parks, including the Summer Garden, the Field of Mars, the Rumyantsev, Smolny and Alexander Gardens, the Garden of the Alexander Nevsky Lavra and many others. Because of the rapid flow, cold water and lack of quiet pools and aquatic vegetation, the diversity of fish species in the Neva is small. Permanent residents include such undemanding species as perch, ruffe and roach. Many fish species are transitory; of these, smelt, vendace and, to some extent, salmon have commercial value. Floods in St. Petersburg are usually caused by the overflow of the Neva delta and by surging water in the eastern part of Neva Bay. They are registered when the water rises above a set level relative to a gauge at the Mining Institute. More than 300 floods have occurred since the city was founded in 1703. Three of them were catastrophic: on 7 November 1824, on 23 September 1924 and on 10 September 1777; a much larger flood was described in 1691. Besides floods resulting from surging water, floods in 1903, 1921 and 1956 were caused by melting snow. The Federal Service for Hydrometeorology and Environmental Monitoring of Russia classifies the Neva as a "heavily polluted" river. The main pollutants include copper, zinc, manganese, nitrites and nitrogen. The dirtiest tributaries of the Neva are the Mga, Slavyanka, Okhta and Chyornaya. Hundreds of factories pour wastewater into the Neva within St. Petersburg, and petroleum is regularly transported along the river. The annual influx of pollutants is 80,000 tonnes, and the heaviest polluters are Power-and-Heating Plant 2, "Plastpolymer" and the "Obukhov State Plant". The biggest polluters in Leningrad Oblast are the cities of Shlisselburg, Kirovsk and Otradnoye, as well as the Kirov thermal power station. More than 40 oil spills are registered on the river every year. In 2008, the Federal Service of St. Petersburg announced that no beach of the Neva was fit for swimming. Cleaning of wastewater in St. Petersburg started in 1979; by 1997, about 74% of wastewater was purified. This proportion rose to 85% in 2005 and to 91.7% by 2008, and Feliks Karamzinov expected it to reach almost 100% by 2011 with the completion of the expansion of the main sewerage plant. Many sites of ancient people, up to nine thousand years old, have been found within the territory of the Neva basin. It is believed that around twelve thousand years BC, Finno-Ugric peoples (Votes and Izhorians) moved to this area from the Ural Mountains. In the 8th and 9th centuries AD, the area was inhabited by East Slavs, who were mainly engaged in slash-and-burn agriculture, hunting and fishing. From the 8th to 13th centuries, the Neva provided a waterway from Scandinavia to the Byzantine Empire. In the 9th century, the area belonged to Veliky Novgorod. The Neva is mentioned as early as the 13th-century Life of Alexander Nevsky. At that time, Veliky Novgorod was engaged in nearly constant wars with Sweden. 
A major battle occurred on 15 July 1240 at the confluence of the Izhora and Neva rivers. The Russian army, led by the 20-year-old Prince Alexander Yaroslavich, aimed to stop the planned Swedish invasion. The Swedish army was defeated; the prince showed personal courage in combat and received the honorary name "Nevsky". As a result of the Russian defeat in the Ingrian War of 1610–17 and the concomitant Treaty of Stolbovo, the area of the Neva became part of Swedish Ingria. Beginning in 1642, the capital of Ingria was Nyen, a city near the Nyenschantz fortress. Because of financial and religious oppression, much of the Orthodox population left the Neva region, emptying 60 percent of the villages by 1620; the abandoned areas became populated by people from the Karelian Isthmus and Savonia. As a result of the Great Northern War of 1700–21, the valley of the Neva became part of the Russian Empire. On 16 May 1703, the city of St. Petersburg was founded at the mouth of the Neva, and it became the capital of Russia in 1712. The Neva became the central part of the city; it was cleaned, intersected with canals and enclosed with embankments. In 1715, construction began on the first wooden embankment, between the Admiralty building and the Summer Garden. In the early 1760s, work started to cover it in granite and to build bridges across the Neva and its canals and tributaries, such as the Hermitage Bridge. From 1727 to 1916, the temporary Isaakievsky pontoon bridge was erected annually between the modern Saint Isaac's Square and Vasilievsky Island. A similar but much longer Trinity pontoon bridge spanned the river from the Summer Garden to Petrogradsky Island. The first permanent bridge across the Neva, the Blagoveshchensky Bridge, was opened in 1850, and the second, the Liteyny Bridge, came into operation in 1879. In 1858, the joint-stock company "St. Petersburg Water Supply" was established and built the first water supply network in the city; a two-stage water purification station was constructed in 1911. The development of the sewerage system began only in 1920, after the October Revolution, and an extensive sewerage network had been built by 1941. Every winter from 1895 to 1910, electric tramways were laid on the ice of the river, connecting Senate Square, Vasilievsky Island, the Palace Embankment and other parts of the city. The power was supplied through the rails and a top cable supported by wooden piles frozen into the ice. The service was highly successful and ran without major accidents, except for a few failures of the top electrical wires. The trams could carry 20 passengers per carriage; the carriages were converted from used horsecars. About 900,000 passengers were transported over a regular season between 20 January and 21 March, and the sparking of contacts at the top wires amused spectators at night. The first concrete bridge across the Neva, the Volodarsky Bridge, was built in 1936. During World War II, from 8 September 1941 to 27 January 1944, Leningrad was under a devastating German siege. On 30 August 1941, the German army captured Mga and reached the Neva. On 8 September, the Germans captured Shlisselburg and cut all land communications and waterways to St. Petersburg (then Leningrad). The siege was partly relieved in January 1943 and ended on 27 January 1944. A river station that could accept ten large ships at a time was built above the Volodarsky Bridge in 1970. Wastewater treatment plants were built in Krasnoselsk in 1978, on Belyi Island in 1979–83, and in Olgino in 1987–94. 
The South-West Wastewater Treatment Plant was constructed in 2003–05. The Neva has very few shoals, and its banks are steep, making the river well suited for navigation. The Utkino Backwaters were constructed in the late 19th century to moor unused ships. The Neva is part of the major Volga–Baltic Waterway and the White Sea–Baltic Canal; however, it has relatively low transport capacity because of its width, depth and bridges. The Neva is navigable by vessels with a capacity below 5,000 tonnes. Major transported goods include timber from Arkhangelsk and Vologda; apatite, granite and diabase from the Kola Peninsula; cast iron and steel from Cherepovets; coal from Donetsk and Kuznetsk; pyrite from the Urals; potassium chloride from Solikamsk; and oil from the Volga region. There are also many passenger routes to Moscow, Astrakhan, Rostov, Perm, Nizhny Novgorod, Valaam and other destinations. The navigation season on the Neva River runs from late April to November. To the west of Shlisselburg, an oil pipeline runs under the river. The pipeline is part of the Baltic Pipeline System, which carries oil from the Timan-Pechora region, West Siberia, the Urals and Kazakhstan to the port of Primorsk on the Gulf of Finland. The pipeline lies below the river bottom and delivers about 42 million tonnes of oil a year. Near the Ladozhsky Bridge there is an underwater tunnel hosting the Nord Stream gas pipeline. The Neva is the main source of water (96 percent) for St. Petersburg and its suburbs. On 26 June 2009, St. Petersburg started treating its drinking water with ultraviolet light, abandoning the use of chlorine for disinfection. The Neva also has a developed fishery, both commercial and recreational. Construction of the Novo-Admiralteisky Bridge, a movable drawbridge across the river, has been approved, but will not commence before 2011. Whereas most tourist attractions of the Neva are located within St. Petersburg, there are several historical places upstream, in the Leningrad Oblast. They include the fortress Oreshek, which was built in 1323 on Orekhovy Island at the source of the Neva River, south-west of Petrokrepost Bay, near the city of Shlisselburg. The waterfront of Shlisselburg has a monument to Peter I. In the city there are the Blagoveshchensky Cathedral (1764–95) and a still-functioning Orthodox church of St. Nicholas, built in 1739. On the river bank stands the Church of the Intercession. Erected in 2007, it is a wooden replica of a historical church that stood on the southern shore of Lake Onega; that church was constructed in 1708 and burned down in 1963. It is believed to be the forerunner of the famous Kizhi Pogost. The Old Ladoga Canal, built in the first half of the 18th century, is a water transport route along the shore of Lake Ladoga connecting the Volkhov River and the Neva. Some of its historical structures are preserved, such as a four-chamber granite sluice (1836) and a bridge (1832). On 21 August 1963, a Soviet twinjet Tu-124 airliner performed an emergency water landing on the Neva near the Finland Railway Bridge. The plane took off from Tallinn-Ülemiste Airport (TLL) at 08:55 on 21 August 1963 with 45 passengers and seven crew on board and was scheduled to land at Moscow-Vnukovo (VKO). After liftoff, the crew noticed that the nose gear did not retract, and ground control diverted the flight to Leningrad (LED) because of fog at Tallinn. While circling above St. 
Petersburg, under unclear circumstances (lack of fuel was one of the factors), both engines stalled. The crew performed an emergency landing on the Neva River, barely missing some of its bridges and an 1898-built steam tugboat. The tugboat rushed to the plane and towed it to the shore. No casualties were sustained at any stage. The plane's captain was initially dismissed, but was later reinstated and awarded the Order of the Red Star.
https://en.wikipedia.org/wiki?curid=21238
Nissan Motor Co., Ltd., usually shortened to Nissan, is a Japanese multinational automobile manufacturer headquartered in Nishi-ku, Yokohama. The company sells its cars under the Nissan, Infiniti, and Datsun brands, with in-house performance tuning products labelled Nismo. The company traces its name to the Nissan "zaibatsu", now called Nissan Group. Since 1999, Nissan has been part of the Renault–Nissan–Mitsubishi Alliance (Mitsubishi joining in 2016), a partnership between Nissan of Japan, Mitsubishi Motors of Japan and Renault of France. As of 2013, Renault holds a 43.4% voting stake in Nissan, while Nissan holds a 15% non-voting stake in Renault. From October 2016 onwards, Nissan has held a 34% controlling stake in Mitsubishi Motors. In 2013, Nissan was the sixth largest automaker in the world, after Toyota, General Motors, Volkswagen Group, Hyundai Motor Group, and Ford. Taken together, the Renault–Nissan Alliance would be the world's fourth largest automaker. Nissan is the leading Japanese brand in China, Russia and Mexico. In 2014, Nissan was the largest car manufacturer in North America. Nissan is the world's largest electric vehicle (EV) manufacturer, with global sales of more than 320,000 all-electric vehicles as of April 2018. The top-selling vehicle of the car-maker's fully electric lineup is the Nissan LEAF, an all-electric car and the world's top-selling highway-capable plug-in electric car in history. In January 2018, Nissan CEO Hiroto Saikawa announced that all Infiniti vehicles launched from 2021 will be hybrid or all-electric vehicles. Masujiro Hashimoto founded the Kaishinsha Motor Car Works, Japan's first automobile manufacturer, in Tokyo's Azabu-Hiroo district. In 1914, the company produced its first car, called DAT. The new car's model name was an acronym of the surnames of the company's investors: Den, Aoyama and Takeuchi. It was renamed Kaishinsha Motorcar Co., Ltd. in 1918, and again DAT Jidosha & Co., Ltd. (DAT Motorcar Co.) in 1925. DAT Motors built trucks in addition to the DAT and Datsun passenger cars. The vast majority of its output were trucks, due to an almost non-existent consumer market for passenger cars at the time, and to disaster recovery efforts following the 1923 Great Kantō earthquake. Beginning in 1918, the first DAT trucks were produced for the military market. At the same time, Jitsuyo Jidosha Co., Ltd. ("jitsuyo" means practical use or utility) produced small trucks using parts and materials imported from the United States. Commercial operations were placed on hold during Japan's participation in World War I, and the company contributed to the war effort. In 1926 the Tokyo-based DAT Motors merged with the Osaka-based Jitsuyo Jidosha Co., a.k.a. Jitsuyo Jidosha Seizo (established 1919 as a Kubota subsidiary), to become DAT Jidosha Seizo Co., Ltd., based in Osaka until 1932. From 1923 to 1925, the company produced light cars and trucks under the name of Lila. In 1931, DAT came out with a new smaller car, called the Datsun Type 11, the first "Datson", meaning "Son of DAT". Later, in 1933, after the Nissan Group "zaibatsu" took control of DAT Motors, the last syllable of Datson was changed to "sun", because "son" also means "loss" in Japanese, hence the name "Datsun". In 1933, the company name was Nipponized and the company was moved to Yokohama. In 1928, Yoshisuke Aikawa (nickname: Gisuke/Guisuke Ayukawa) founded the holding company Nihon Sangyo (日本産業, Japan Industries or Nihon Industries). The name 'Nissan' originated during the 1930s as an abbreviation used on the Tokyo Stock Exchange for Nihon Sangyo. 
This company was the Nissan "zaibatsu", which included Tobata Casting and Hitachi. At this time Nissan controlled foundries and auto parts businesses, but Aikawa did not enter automobile manufacturing until 1933. The zaibatsu eventually grew to include 74 firms and became the fourth-largest in Japan during World War II. In 1931, DAT Jidosha Seizo became affiliated with Tobata Casting and was merged into Tobata Casting in 1933. As Tobata Casting was a Nissan company, this was the beginning of Nissan's automobile manufacturing. In 1934, Aikawa separated the expanded automobile parts division of Tobata Casting and incorporated it as a new subsidiary, which he named Nissan Motor Co., Ltd. The shareholders of the new company, however, were not enthusiastic about the prospects of the automobile in Japan, so Aikawa bought out all the Tobata Casting shareholders (using capital from Nihon Industries) in June 1934. At this time, Nissan Motor effectively became owned by Nihon Sangyo and Hitachi. In 1935, construction of its Yokohama plant was completed, and 44 Datsuns were shipped to Asia, Central and South America. In 1935, the first car manufactured by an integrated assembly system rolled off the line at the Yokohama plant. Nissan built trucks, airplanes, and engines for the Imperial Japanese Army. In November 1937, Nissan's headquarters were moved to Hsinking, the capital of Manchukuo. In December the company changed its name to Manchuria Heavy Industries Developing Co. (MHID). In 1940, the first knockdown kits were shipped to Dowa Jidosha Kogyo (Dowa Automobile), one of MHID's companies, for assembly. In 1944, the head office was moved to Nihonbashi, Tokyo, and the company name was changed to Nissan Heavy Industries, Ltd., a name the company kept through 1949. DAT had inherited Kubota's chief designer, American engineer William R. Gorham. This, along with Aikawa's 1908 visit to Detroit, was to greatly affect Nissan's future. Although it had always been Aikawa's intention to use cutting-edge auto-making technology from America, it was Gorham who carried out the plan. Most of the machinery and processes originally came from the United States. When Nissan started to assemble larger vehicles under the "Nissan" brand in 1937, much of the design plans and plant facilities were supplied by the Graham-Paige Company. Nissan also had a Graham license, under which passenger cars, buses, and trucks were made. In David Halberstam's 1986 book "The Reckoning", Halberstam states: "In terms of technology, Gorham was the founder of the Nissan Motor Company" and that "young Nissan engineers who had never met him spoke of him as a god and could describe in detail his years at the company and his many inventions." From 1934, Datsun began to build Austin 7s under license. This operation became the greatest success of Austin's overseas licensing of its Seven and marked the beginning of Datsun's international success. In 1952, Nissan entered into a legal agreement with Austin for Nissan to assemble 2,000 Austins from imported partially assembled sets and sell them in Japan under the Austin trademark. The agreement called for Nissan to make all Austin parts locally within three years, a goal Nissan met. Nissan produced and marketed Austins for seven years. The agreement also gave Nissan the rights to use Austin patents, which Nissan used in developing its own engines for its Datsun line of cars. In 1953, British-built Austins were assembled and sold, but by 1955, the Austin A50 – completely built by Nissan and featuring a new 1489 cc engine – was on the market in Japan. 
Nissan produced 20,855 Austins from 1953 to 1959. Nissan leveraged the Austin patents to further develop its own modern engine designs beyond what Austin's A- and B-family designs offered. The apex of the Austin-derived engines was the new-design A series engine of 1966. In 1967, Nissan introduced its new, highly advanced four-cylinder overhead cam (OHC) Nissan L engine, which, while similar to Mercedes-Benz OHC designs, was a totally new engine designed by Nissan. This engine powered the new Datsun 510, which gained Nissan respect in the worldwide sedan market. Then, in 1969, Nissan introduced the Datsun 240Z sports car, which used a six-cylinder variation of the L series engine, developed under Nissan Machinery (Nissan Koki Co., Ltd.) in 1964, a former remnant of another auto manufacturer, Kurogane. The 240Z was an immediate sensation and lifted Nissan to world-class status in the automobile market. During the Korean War, Nissan was a major vehicle producer for the U.S. Army. After the Korean War ended, significant levels of anti-communist sentiment existed in Japan. The union that organized Nissan's workforce was strong and militant. Nissan was in financial difficulties, and when wage negotiations came, the company took a hard line. Workers were locked out, and several hundred were fired. The Japanese government and the U.S. occupation forces arrested several union leaders. The union ran out of strike funds and was defeated. A new labor union was formed, with Shioji Ichiro as one of its leaders. Ichiro had studied at Harvard University on a U.S. government scholarship. He advanced an idea to trade wage cuts against saving 2,000 jobs. Ichiro's idea was made part of a new union contract that prioritized productivity. Between 1955 and 1973, Nissan "expanded rapidly on the basis of technical advances supported – and often suggested – by the union". Ichiro became president of the Confederation of Japan Automobile Workers Unions and "the most influential figure in the right wing of the Japanese labor movement". In 1966, Nissan merged with the Prince Motor Company, bringing more upmarket cars, including the Skyline and Gloria, into its selection. The Prince name was eventually abandoned, and successive Skylines and Glorias bore the Nissan name. The "Prince" name was used at the Japanese dealership network "Nissan Prince Shop" until 1999, when "Nissan Red Stage" replaced it; Nissan Red Stage has itself been replaced as of 2007. The Skyline lives on as the G Series of Infiniti. To capitalize on the renewed investment during the 1964 Summer Olympics, Nissan established a gallery on the second and third floors of the San-ai building, located in Ginza, Tokyo. To attract visitors, Nissan started using female showroom attendants, holding a competition to choose five candidates as the first class of Nissan Miss Fairladys, modeled after the "Datsun Demonstrators" of the 1930s who introduced cars. The Fairlady name was used as a link to the popular Broadway play "My Fair Lady" of the era. Miss Fairladys became the marketers of the Datsun Fairlady 1500. In April 2008, 14 more Miss Fairlady candidates were added, for a total of 45 Nissan Miss Fairladys (22 in Ginza, 8 in Sapporo, 7 in Nagoya, 7 in Fukuoka). In April 2012, 7 more Miss Fairlady candidates were added, for a total of 48 Nissan Miss Fairladys (26 in Ginza, 8 in Sapporo, 7 in Nagoya, 7 in Fukuoka). 
In April 2013, 6 more Miss Fairlady candidates were added to the Ginza showroom, for a total of 27 in the 48th class of Ginza Nissan Miss Fairladys. In the 1950s, Nissan decided to expand into worldwide markets. Nissan management realized their Datsun small car line would fill an unmet need in markets such as Australia and the world's largest car market, the United States. They first showed the Datsun Bluebird at the 1958 Los Angeles Auto Show. The company formed a U.S. subsidiary, Nissan Motor Corporation U.S.A., in Gardena, California, in 1960, headed by Yutaka Katayama. Nissan continued to improve its sedans with the latest technological advancements and chic Italianate styling in sporty cars such as the Datsun Fairlady roadsters, the race-winning 411 series, the Datsun 510 and the Datsun 240Z. By 1970, Nissan had become one of the world's largest exporters of automobiles. In the wake of the 1973 oil crisis, consumers worldwide (especially in the lucrative U.S. market) began turning to high-quality small economy cars. To meet the growing demand for its new Nissan Sunny, the company built new factories in Mexico (Nissan Mexicana was established in the early 1960s and commenced manufacturing in 1966 at its Cuernavaca assembly facility, making it Nissan's first North American assembly plant), Australia, New Zealand, Taiwan, the United States (Nissan Motor Manufacturing Corporation USA was established in 1980) and South Africa. The "Chicken Tax" of 1964 placed a 25% tax on commercial vans imported to the United States. In response, Nissan, Toyota Motor Corp. and Honda Motor Co. began building plants in the U.S. in the early 1980s. Nissan's initial assembly plant, in Smyrna, Tennessee (which broke ground in 1980), at first built only trucks such as the 720 and Hardbody, but has since expanded to produce several car and SUV lines, including the Altima, Maxima, Rogue, Pathfinder, Infiniti QX60 and the LEAF all-electric car. The addition of mass-market automobiles was in response to the 1981 Voluntary Export Restraints imposed by the U.S. Government. An engine plant in Decherd, Tennessee followed, and most recently a second assembly plant was established in Canton, Mississippi. In 1970, Teocar, a Greek assembly plant, was created in cooperation with Theoharakis. It was situated in Volos, Greece, whose major port made its location ideal. The plant started production in 1980, assembling Datsun pick-up trucks and continuing with the Nissan Cherry and Sunny automobiles. By May 1995, 170,000 vehicles had been made, mainly for Greece. By the early 1980s, Nissan (Datsun) had long been the best-selling Japanese brand in Europe. In order to overcome export tariffs and delivery costs to its European customers, Nissan contemplated establishing a plant in Europe. Nissan tried to convert the Greek plant into one manufacturing cars for all European countries; however, due to issues with the Greek government, not only did that plan fail, but the plant itself was closed. A joint venture with Italy's then state-owned Alfa Romeo was also entered into in 1980, leading to Italian production of the Nissan Cherry and an Alfa-badged version, the Alfa Romeo Arna. After an extensive review, Nissan decided to go it alone instead. The City of Sunderland in the north-east of England was chosen for its skilled workforce and its location near major ports. The plant was completed in 1986 as the subsidiary Nissan Motor Manufacturing (UK) Ltd. 
By 2007, it was producing 400,000 vehicles per year, making it the most productive plant in Europe. In 2001, Nissan established a manufacturing plant in Brazil. In 2005, Nissan added operations in India through its subsidiary Nissan Motor India Pvt. Ltd. With its global alliance partner Renault, Nissan invested $990 million to set up a manufacturing facility in Chennai, catering to the Indian market as well as serving as a base for exports of small cars to Europe. Nissan entered the Middle East market in 1957, when it sold its first car in Saudi Arabia. Nissan sold nearly 520,000 new vehicles in China in 2009 through a joint venture with Dongfeng Motor. To meet increased production targets, Dongfeng-Nissan expanded its production base in Guangzhou, which would become Nissan's largest factory in the world in terms of production capacity. Nissan has also moved and expanded its Nissan Americas Inc. headquarters, moving from Los Angeles to Franklin, Tennessee, in the Nashville area. In June 2001, Carlos Ghosn was named chief executive officer of Nissan. In May 2005, Ghosn was named president of Nissan's partner company Renault, and he was appointed president and CEO of Renault on 6 May 2009. Nissan's management is a trans-cultural, diverse team. In February 2017, Ghosn announced he would step down as CEO of Nissan on 1 April 2017, while remaining chairman of the company. He was replaced as CEO by his then-deputy, Hiroto Saikawa. On 19 November 2018, Ghosn was fired as chairman following his arrest for the alleged under-reporting of his income to Japanese financial authorities. After 108 days in detention, Ghosn was released on bail, but after 29 days he was detained again on new charges (4 April 2019). He had been due to hold a news conference; instead, his lawyers released a video of Ghosn alleging that the 2018–2019 Nissan scandal was itself evidence of value destruction and corporate mismanagement at Nissan. In September 2019, Saikawa resigned as CEO following allegations of improper payments received by him, and Yasuhiro Yamauchi was appointed acting CEO. In October 2019, the company announced it had appointed Makoto Uchida as its next CEO, with the appointment to take effect by 1 January 2020 at the latest. On 1 December 2019, Uchida became CEO. In the U.S., Nissan has been increasing its reliance on sales to daily-rental companies like Enterprise Rent-A-Car and Hertz. In 2016, Nissan's rental sales jumped 37%, and in 2017 Nissan became the only major automaker to boost rental sales when the Detroit Three cut back on less profitable deliveries to daily-rental companies, which traditionally are the biggest customers of domestic automakers. In late July 2019, Nissan announced it would lay off 12,500 employees over the following 3 years, citing a 95% year-on-year fall in net income. Hiroto Saikawa, CEO at the time, confirmed that the majority of those cuts would be plant workers. Nissan's first final-assembly robots were installed in the Murayama plant (where the then-new March/Micra was assembled) in 1982. In 1984 the Zama plant began to be robotized; this automation process then continued throughout Nissan's factories. Nissan electric vehicles have been produced intermittently since 1946. In 2010 the Nissan Leaf plug-in battery electric vehicle was introduced; it was the world's most-sold plug-in electric car for nearly a decade. It was preceded by the Altra and the Hypermini. 
Until surpassed by Tesla, Nissan was the world's largest electric vehicle (EV) manufacturer, with global sales of more than 320,000 all-electric vehicles as of April 2018. Luxgen and Nissan partner to assemble vehicles in the Philippines through Nissan's affiliate Nissan Motor Philippines Inc. (NMPI). In Australia, between 1989 and 1992, Nissan Australia shared models with Ford Australia under a government-backed rationalisation scheme known as the Button Plan, with a version of the Nissan Pintara being sold as the Ford Corsair and a version of the Ford Falcon as the Nissan Ute. A variant of the Nissan Patrol was sold as the Ford Maverick during the 1988–94 model years. In North America, Nissan partnered with Ford from 1993 to 2002 to market the Ohio-built Mercury Villager and the Nissan Quest. The two minivans were virtually identical aside from cosmetic differences. In 2002, Nissan and Ford announced the discontinuation of the arrangement. In Europe, Nissan and Ford Europe partnered to produce the Nissan Terrano II and the badge-engineered Ford Maverick, a mid-size SUV produced at the Nissan Motor Ibérica S.A. (NMISA) plant in Barcelona, Spain. The Maverick/Terrano II was a popular vehicle sold throughout Europe and Australasia. It was also sold in Japan as a captive import, with the Nissan model marketed as the Nissan Mistral. Nissan licensed the Volkswagen Santana; production began in 1984 at Nissan's Zama plant in Kanagawa and ended in May 1990. From 1983 to 1987, Nissan cooperated with Alfa Romeo to build the Arna. The goal was for Alfa to compete in the family hatchback market segment, and for Nissan to establish a foothold in the European market. After Alfa Romeo's takeover by Fiat, both the car and the cooperation were discontinued. In Europe, GM and Nissan co-operated on a light commercial vehicle, the Nissan Primastar. The high-roof version is built at the NMISA plant in Barcelona, Spain, while the low-roof version is built at Vauxhall Motors/Opel's Luton plant in Bedfordshire, UK. In 2013, GM announced its intention to rebadge the Nissan NV200 commercial van as the 2015-model-year Chevrolet City Express, to be introduced by the end of 2014. Holden, GM's Australian subsidiary, sold versions of the Nissan Pulsar as the Holden Astra between 1984 and 1989. LDV Group sold a badge-engineered light commercial vehicle version of the Nissan Serena as the LDV Cub from 1996 to 2001; the Nissan equivalent was marketed as the Nissan Vanette Cargo. In 1999, facing severe financial difficulties, Nissan entered an alliance with Renault S.A. of France. Signed on 27 March 1999, the "Renault-Nissan Alliance" was the first of its kind involving a Japanese and a French car manufacturer, each with its own distinct corporate culture and brand identity. In the spring of 2000, Yanase, Japan's premier seller of imported automobiles, cancelled its licensing contract with Renault, and Nissan took over as the sole licensee. The Renault-Nissan Alliance has evolved over the years to Renault holding 43.4% of Nissan shares, while Nissan holds 15% of Renault shares. The alliance itself is incorporated as Renault-Nissan B.V., founded on 28 March 2002 under Dutch law and equally owned by Renault and Nissan. Under CEO Ghosn's "Nissan Revival Plan" (NRP), the company rebounded in what many leading economists consider to be one of the most spectacular corporate turnarounds in history, catapulting Nissan to record profits and a dramatic revitalization of both its Nissan and Infiniti model line-ups. 
Ghosn has been recognized in Japan for the company's turnaround in the midst of an ailing Japanese economy, and he and the Nissan turnaround were featured in Japanese manga and popular culture. His achievements in revitalizing Nissan were noted by the Japanese Government, which awarded him the Japan Medal with Blue Ribbon in 2004. On 7 April 2010, Daimler AG exchanged a 3.9% share of its holdings for 3.9% stakes in both Nissan and Renault. This triple alliance allows for the increased sharing of technology and development costs, encouraging global cooperation and mutual development. On 12 December 2012, the Renault–Nissan Alliance formed a joint venture with Russian Technologies (Alliance Rostec Auto BV) with the aim of becoming the long-term controlling shareholder of AvtoVAZ, Russia's largest car company and owner of the country's biggest-selling brand, Lada. The takeover was completed in June 2014: the two companies of the Renault-Nissan Alliance took a combined 67.1% stake in Alliance Rostec, which in turn acquired a 74.5% stake in AvtoVAZ, thereby giving Renault and Nissan indirect control over the Russian manufacturer. Ghosn was appointed chairman of the board of AvtoVAZ on 27 June 2013. Taken together, the Renault–Nissan Alliance sells one in ten cars worldwide, and would be the world's fourth largest automaker with 2013 sales of 8,266,098 units. In May 2020, Nissan announced that it would cut production capacity by 20% due to the Covid-19 pandemic; it would shut down factories in Indonesia and Spain, and would exit the South Korean car market. Nissan's vehicles are sold under several brands. Nissan: Nissan's volume models are sold worldwide under the Nissan brand. Datsun: Until 1983, Nissan automobiles in most export markets were sold under the Datsun brand. In 1984 the Datsun brand was phased out and the Nissan brand was phased in; all cars in 1984 had both the Datsun and Nissan branding on them, and in 1985 the Datsun name was completely dropped. In July 2013, Nissan announced the relaunch of Datsun as a brand targeted at emerging markets. Infiniti: Since 1989, Nissan has sold its luxury models under the Infiniti brand. In 2012, Infiniti moved its headquarters to Hong Kong, where it is incorporated as Infiniti Global Limited; its president is former BMW executive Roland Krueger. From 2014, Infiniti cars have been sold in Japan. Nismo: Nissan's in-house tuning shop is Nismo, short for "Nissan Motorsport International Limited"; Nismo is being re-positioned as Nissan's performance brand. For many years, Nissan used a red wordmark for the company, and car "badges" for the "Nissan" and "Infiniti" brands. At Nissan's 2013 earnings press conference in Yokohama, Nissan unveiled "a new steel-blue logo that spells out—literally—the distinction between Nissan the company and Nissan the brand." Using a blue-gray color scheme, the new corporate logo read NISSAN MOTOR COMPANY, with the "badge" logos for the Nissan, Infiniti and Datsun brands underneath. Later in 2013, the Nissan "Company" logo changed to the Nissan "Corporation" logo; the latter is the currently used logo of Nissan Motor Co., Ltd. Nissan has produced an extensive range of mainstream cars and trucks, initially for domestic consumption but exported around the world since the 1950s. 
It also produced several memorable sports cars, including the Datsun Fairlady 1500, 1600 and 2000 roadsters; the Z-car, an affordable sports car originally introduced in 1969; and the GT-R, a powerful all-wheel-drive sports coupe. In 1985, Nissan created a tuning division, "Nismo", for competition and performance development of such cars; one of Nismo's latest models is the 370Z Nismo. Nissan also sells a range of kei cars, mainly as a joint venture with other Japanese manufacturers like Suzuki or Mitsubishi. Until 2013, Nissan rebadged kei cars built by other manufacturers; beginning in 2013, Nissan and Mitsubishi shared the development of the Nissan DAYZ / Mitsubishi eK Wagon series. Nissan also shares model development of Japanese domestic cars with other manufacturers, particularly Mazda, Subaru, Suzuki and Isuzu. In China, Nissan produces cars in association with the Dongfeng Motor Group, including the 2006 Nissan Livina Geniss, the first of a new worldwide family of medium-sized cars. In 2010, Nissan created another tuning division, "IPL", this time for its premium/luxury brand Infiniti. In 2011, after Nissan released the Nissan NV series in the United States, Canada, and Mexico, Nissan created a commercial sub-brand called "Nissan Commercial Vehicles", which focuses on commercial vans, pickup trucks, and fleet vehicles for the US, Canadian, and Mexican markets. In 2013, Nissan launched the Qashqai SUV in South Africa, along with its new motorsport series, the Qashqai Car Games; the same year, the Datsun brand was relaunched by Nissan after a 27-year hiatus. Nissan launched its Nissan Intelligent Mobility vision in 2016 by revealing the IDS Concept at the 2016 Geneva Motor Show. Most Nissan vehicles, like the Dayz, Rogue and Leaf, are equipped with Nissan Intelligent Mobility technology. In 2018, Nissan launched the sixth-generation Altima at the 2018 New York Auto Show. As of 2007, in Japan, Nissan sells its products with internationally recognized "Nissan" signage, using a chrome circle with "Nissan" across the front. Previously, Nissan used two dealership networks, "Nissan Blue Stage" and "Nissan Red Stage", established in 1999 after the alliance with Renault. Nissan Red Stage was the result of combining older sales channels: the "Nissan Prince Store" chain, established in 1966 after Nissan's merger with Prince Motors, which sold the Nissan Skyline, and the "Nissan Satio Store" chain, which sold cars developed from the Nissan Sunny at its introduction in 1966. The word "satio" is Latin, which means "ample" or "sufficient". The "Nissan Cherry Store" chain was briefly known as "Nissan Cony Store" when it assumed operations of a small "kei" manufacturer called Aichi Machine Industry Co., Ltd. (愛知機械工業), which manufactured the "Cony" and "Guppy" brands of "kei" cars and trucks until 1970, when the network was renamed for the Nissan Cherry. Nissan Blue Stage was the result of combining older sales channels: the "Nissan Store" chain, established in 1955 and renamed "Nissan Bluebird Store" in 1966, which sold Nissan's original post-war products, the Datsun Bluebird, Datsun Sports, Datsun Truck, Datsun Cablight, Datsun Cabstar, Nissan Junior and Nissan Cedric; and the "Nissan Motor Store" chain, established in 1965, which offered luxury sedans like the Nissan Laurel and the Nissan President. In 1970, Nissan also set up a separate sales chain for used cars, including auctions, which it still maintains. In the early days of Nissan's dealership network, Japanese consumers were directed towards specific Nissan stores for cars of a specific size and price point. 
Over time, as sales progressed and the Japanese automotive industry became more prolific, vehicles that were dedicated to particular stores were badge-engineered, given different names, and shared within the existing networks, thereby selling the same platforms at different locations. The networks allowed Nissan to better compete with the network established earlier by Toyota at Japanese locations. Starting in 1960, another sales distribution channel was established that sold diesel products for commercial use, called Nissan Diesel, until the diesel division was sold to Volvo AB in 2007. To encourage retail sales, Nissan passenger vehicles fitted with diesel engines, like the Cedric, were available at Nissan Diesel locations. Model lists were published for each channel: Nissan Blue Stage (1999–2005); Nissan Store (later Nissan Bluebird Store) and Nissan Motor Store (1955–1999); Nissan Red Stage (1999–2005); and Nissan Prince Store, Nissan Satio Store and Nissan Cherry Store (1966–1999). Nissan has also classified several vehicles as "premium", and select dealerships offer the "Nissan Premium Factory" catalog. Nissan Cabstar (日産・キャブスター, "Nissan Kyabusutā") is the name used in Japan for two lines of pickup trucks and light commercial vehicles sold by Nissan and built by UD Nissan Diesel, a Volvo AB company, and by the Renault-Nissan Alliance for the European market. The name originated with the 1968 Datsun Cabstar, but this was gradually changed over to "Nissan" badging in the early 1980s. The lighter range (1–1.5 tons) replaced the earlier Cabstar and Homer, while the heavier Caball and Clipper were replaced by the 2–4 ton range Atlas (日産・アトラス, "Nissan Atorasu"). The Atlas nameplate was first introduced in December 1981. The Cabstar is also known as the Renault Maxity and the Samsung SV110, depending on the location. The range has been sold across the world. It shares its platform with the Nissan Caravan. The Nissan Titan was introduced in 2004 as a full-size pickup truck produced for the North American market; the truck shares the stretched Nissan F-Alpha platform with the Nissan Armada and Infiniti QX56 SUVs. It was listed by Edmunds.com as the best full-size truck. The second-generation Titan was revealed at the 2016 North American International Auto Show as a 2017-model-year vehicle. The first Cabstar (A320) appeared in March 1968 as a replacement for the earlier Datsun Cablight. It is a cab-over-engine truck and was available either as a truck, a light van (glazed van), or a "route van" (bus). It used the 1189 cc Nissan D12 engine with 56 PS (41 kW). After some modifications and the new 1.3-liter J13 engine, with 67 PS (49 kW), in August 1970 the code became A321. The Cabstar underwent another facelift, with an entirely new front clip, in May 1973. The 1483 cc J15 engine became standard fitment at this time (PA321), with 77 PS (57 kW) at 5200 rpm. The Cabstar was placed just beneath the slightly bigger Homer range in Nissan's commercial vehicle lineup. It received a full makeover in January 1976, although the van models were not replaced. The F20 Nissan Homer, introduced in January 1976, was also sold as the Nissan Datsun Cabstar in Japan. Both ranges were sold with either a 1.5-liter (J15) or a 2.0-liter (H20) petrol inline-four, or with the 2.2-liter SD22 diesel engine. The F20 received a desmogged engine range in September 1979 and, with it, a new chassis code, F21. 
Manufacturing of the heavier-range (H40-series) Atlas began in December 1981, while the lighter-series Atlas (F22) was introduced in February 1982; it succeeded both the Homer and Cabstar ranges, and the Cabstar nameplate has not been used in the Japanese market since. The Atlas F22 was sold in Europe as the Nissan Cabstar and proved a popular truck in the UK market due to its reliability and ability to carry weight. From 1990 the range widened and was sold as the Cabstar E. As of 2015, the Cabstar is manufactured at the NSIO (Nissan Spanish Industrial Operations) plant in Ávila, Spain, under the brand name NT400. Nissan introduced its first battery electric vehicle, the Nissan Altra, at the Los Angeles International Auto Show on 29 December 1997. Unveiled in 2009, the EV-11 prototype electric car was based on the Nissan Tiida (Versa in North America), with the conventional gasoline engine replaced by an all-electric drivetrain. In 2010, Nissan introduced the Nissan LEAF as the first mass-market all-electric vehicle launched globally. The Nissan Leaf became the world's best-selling highway-capable all-electric car. Global sales totaled 100,000 Leafs by mid-January 2014, representing a 45% market share of worldwide pure electric vehicles sold since 2010. Global Leaf sales passed the 200,000-unit milestone in December 2015, and the Leaf continued to rank as the all-time best-selling all-electric car. Nissan's second all-electric vehicle, the Nissan e-NV200, was announced in November 2013; series production at the Nissan plant in Barcelona, Spain, began on 7 May 2014. The e-NV200 commercial van is based on the Nissan Leaf. Nissan plans to launch two additional battery electric vehicles by March 2017. In June 2016, Nissan announced it would introduce its first range-extender car in Japan before March 2017. The series plug-in hybrid will use a new hybrid system, dubbed e-Power, which debuted with the Nissan Gripz concept crossover showcased at the September 2015 Frankfurt Auto Show. Nissan electric vehicles have been sold in 48 world markets, and Nissan's global electric vehicle sales passed 275,000 units in December 2016. The second-generation Leaf was subsequently launched by Nissan in Japan. In August 2013, Nissan announced its plans to launch several driverless cars by 2020. The company is building a dedicated autonomous driving proving ground in Japan, to be completed in 2014. Nissan installed its autonomous car technology in a Nissan Leaf all-electric car for demonstration purposes; the car was demonstrated at the Nissan 360 test-drive event held in California in August 2013. In September 2013, the Leaf fitted with the prototype Advanced Driver Assistance System was granted a license plate that allows it to drive on Japanese public roads. The testing car will be used by Nissan engineers to evaluate how its in-house autonomous driving software performs in the real world, and time spent on public roads will help refine the car's software for fully automated driving. The autonomous Leaf was demonstrated on public roads for the first time at a media event held in Japan in November 2013. The Leaf drove on the Sagami Expressway in Kanagawa Prefecture, near Tokyo. Nissan vice chairman Toshiyuki Shiga and the prefecture's governor, Yuji Kuroiwa, rode in the car during the test. Nissan has also had a number of ventures outside the automotive industry, most notably the Tu–Ka mobile phone service (est. 1994), which was sold to DDI and Japan Telecom (both now merged into KDDI) in 1999. 
Nissan offers a subscription-based telematics service in select vehicles to drivers in Japan, called CarWings. Nissan also owns Nissan Marine, a joint venture with Tohatsu Corp. that produces motors for smaller boats and other maritime equipment, and it built the M-V orbital rocket. Nismo is the motorsports division of Nissan, founded in 1984. Nismo cars have participated in the All Japan Sports Prototype Championship, Super GT, IMSA GT Championship, World Sportscar Championship, FIA World Endurance Championship, British Touring Car Championship, Supercars Championship and Blancpain GT Series; they were also featured in the World Series by Nissan from 1998 to 2004. Nissan sponsored the Los Angeles Open golf tournament from 1987 to 2007. Beginning in 2015, Nissan became the naming-rights sponsor of Nissan Stadium, the home of the Tennessee Titans and the Tennessee State University football teams in Nashville. Nissan also became an official sponsor of the Heisman Trophy and the UEFA Champions League. Nissan's central research facility is inside the Oppama Plant site in Yokosuka, which began operation in 1961 at the former site of an Imperial Japanese Navy airborne squadron base. In 1982, Nissan's technical centers in Suginami, Tokyo and Tsurumi, Yokohama were combined into one: the Nissan Technical Center (NTC) in Atsugi, Kanagawa, at the foot of Mount Ōyama in the Tanzawa Mountains. At its 30th anniversary in 2012, NTC employed 9,500 people in product development, design, production engineering, and purchasing. The Nissan Technical Center works closely with Nissan's overseas operations: Nissan Technical Center (NTC)/North America, NTC/Mexico, Nissan Design America, and the Nissan Silicon Valley Office. In 2007, the company opened the Nissan Advanced Technology Center (NATC) near the NTC site. It works in close contact with the central research facility, the Silicon Valley office, the technical office near the Nissan headquarters in central Yokohama, and the overseas offices in Detroit, Silicon Valley, and Moscow. Nissan's test courses are in Tochigi (two courses), Yokosuka and Hokkaido. In mid-2018, Nissan launched the first of many planned software and information technology development centers in Thiruvananthapuram, Kerala, India.
https://en.wikipedia.org/wiki?curid=21240
Intermediate value theorem In mathematical analysis, the intermediate value theorem states that if "f" is a continuous function whose domain contains the interval ["a", "b"], then it takes on any given value between "f"("a") and "f"("b") at some point within the interval. This has two important corollaries: if a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano's theorem); and the image of a continuous function over an interval is itself an interval. This captures an intuitive property of continuous functions over the real numbers: given "f" continuous on [1, 2] with the known values "f"(1) = 3 and "f"(2) = 5, the graph of "y" = "f"("x") must pass through the horizontal line "y" = 4 while "x" moves from 1 to 2. It represents the idea that the graph of a continuous function on a closed interval can be drawn without lifting a pencil from the paper. The intermediate value theorem states the following: Consider an interval [a, b] in the real numbers ℝ and a continuous function f : [a, b] → ℝ. Then (Version I) if u is a number between f(a) and f(b), that is, min(f(a), f(b)) < u < max(f(a), f(b)), there is a c in (a, b) such that f(c) = u; and (Version II) the image set f([a, b]) is an interval containing [min(f(a), f(b)), max(f(a), f(b))]. Remark: "Version II" states that the set of function values has no gap. For any two function values u₁ < u₂, even if they are outside the interval between f(a) and f(b), all points in the interval [u₁, u₂] are also function values. A subset of the real numbers with no internal gap is an interval. "Version I" is naturally contained in "Version II". The theorem depends on, and is equivalent to, the completeness of the real numbers. The intermediate value theorem does not apply to the rational numbers ℚ because gaps exist between rational numbers; irrational numbers fill those gaps. For example, the function f(x) = x² − 2 for x ∈ ℚ satisfies f(0) = −2 and f(2) = 2. However, there is no rational number x such that f(x) = 0, because √2 is an irrational number. The theorem may be proven as a consequence of the completeness property of the real numbers as follows: We shall prove the first case, f(a) < u < f(b); the second case is similar. Let S be the set of all x ∈ [a, b] such that f(x) < u. Then S is non-empty, since a is an element of S, and S is bounded above by b. Hence, by completeness, the supremum c = sup S exists. That is, c is the smallest number that is greater than or equal to every member of S. We claim that f(c) = u. Fix some ε > 0. Since f is continuous, there is a δ > 0 such that |f(x) − f(c)| < ε whenever |x − c| < δ. This means that f(x) − ε < f(c) < f(x) + ε for all x ∈ (c − δ, c + δ). By the properties of the supremum, there exists some a* ∈ (c − δ, c] that is contained in S, and so f(c) < f(a*) + ε ≤ u + ε. Picking a** ∈ (c, c + δ), we know that a** ∉ S because c is the supremum of S. This means that f(c) > f(a**) − ε ≥ u − ε. Both inequalities u − ε < f(c) < u + ε are valid for all ε > 0, from which we deduce f(c) = u as the only possible value, as stated. Remark: The intermediate value theorem can also be proved using the methods of non-standard analysis, which places "intuitive" arguments involving infinitesimals on a rigorous footing. For "u" = 0 above, the statement is also known as "Bolzano's theorem". (Since there is nothing special about "u" = 0, this is obviously equivalent to the intermediate value theorem itself.) This theorem was first proved by Bernard Bolzano in 1817. Augustin-Louis Cauchy provided a proof in 1821. Both were inspired by the goal of formalizing the analysis of functions and the work of Joseph-Louis Lagrange. The idea that continuous functions possess the intermediate value property has an earlier origin. Simon Stevin proved the intermediate value theorem for polynomials (using a cubic as an example) by providing an algorithm for constructing the decimal expansion of the solution. 
The algorithm iteratively subdivides the interval into 10 parts, producing an additional decimal digit at each step of the iteration. Before the formal definition of continuity was given, the intermediate value property was given as part of the definition of a continuous function. Proponents include Louis Arbogast, who assumed the functions to have no jumps, to satisfy the intermediate value property, and to have increments whose sizes corresponded to the sizes of the increments of the variable. Earlier authors held the result to be intuitively obvious and requiring no proof. The insight of Bolzano and Cauchy was to define a general notion of continuity (in terms of infinitesimals in Cauchy's case and using real inequalities in Bolzano's case), and to provide a proof based on such definitions. The intermediate value theorem is closely linked to the topological notion of connectedness and follows from the basic properties of connected sets in metric spaces and connected subsets of ℝ in particular: (*) if X and Y are metric spaces, f : X → Y is a continuous map, and E ⊆ X is a connected subset, then the image f(E) is connected; (**) a subset E ⊆ ℝ is connected if and only if it is an interval, that is, whenever x, y ∈ E and x < r < y, then r ∈ E. In fact, connectedness is a topological property, and (*) generalizes to topological spaces: "If X and Y are topological spaces, f : X → Y is a continuous map, and X is a connected space, then f(X) is connected." The preservation of connectedness under continuous maps can be thought of as a generalization of the intermediate value theorem, a property of real-valued functions of a real variable, to continuous functions in general spaces. Recall the first version of the intermediate value theorem, stated previously: Intermediate value theorem (Version I). Consider a closed interval [a, b] in the real numbers ℝ and a continuous function f : [a, b] → ℝ. Then, if u is a real number such that min(f(a), f(b)) < u < max(f(a), f(b)), there exists c ∈ (a, b) such that f(c) = u. The intermediate value theorem is an immediate consequence of these two properties of connectedness: Proof: By (**), [a, b] is a connected set. It follows from (*) that the image, f([a, b]), is also connected. For convenience, assume that f(a) < f(b). Then once more invoking (**), f(a) < u < f(b) implies that u ∈ f([a, b]), that is, f(c) = u for some c ∈ [a, b]. Since u ≠ f(a) and u ≠ f(b), c ∈ (a, b) must actually hold, and the desired conclusion follows. The same argument applies if f(b) < f(a), so we are done. ∎ The intermediate value theorem generalizes in a natural way: Suppose that X is a connected topological space and (Y, <) is a totally ordered set equipped with the order topology, and let f : X → Y be a continuous map. If a and b are two points in X and u is a point in Y lying between f(a) and f(b) with respect to <, then there exists c in X such that f(c) = u. The converse of the theorem is false: a function may have the intermediate value property without being continuous. As an example, take the function f defined by f(x) = sin(1/x) for x > 0 and f(0) = 0. This function is not continuous at x = 0 because the limit of f(x) as x tends to 0 does not exist; yet the function has the intermediate value property. Another, more complicated example is given by the Conway base 13 function. In fact, Darboux's theorem states that all functions that result from the differentiation of some other function on some interval have the intermediate value property (even though they need not be continuous). Historically, this intermediate value property has been suggested as a definition for continuity of real-valued functions; this definition was not adopted. A similar result is the Borsuk–Ulam theorem, which says that a continuous map from the n-sphere to Euclidean n-space will always map some pair of antipodal points to the same place. Proof for the 1-dimensional case: Take f to be any continuous function on a circle. Draw a line through the center of the circle, intersecting it at two opposite points A and B. Define d to be f(A) − f(B). If the line is rotated 180 degrees, the value −d will be obtained instead. 
Due to the intermediate value theorem there must be some intermediate rotation angle for which "d" = 0, and as a consequence "f"("A") = "f"("B") at this angle. In general, for any continuous function whose domain is some closed convex "n"-dimensional shape and any point inside the shape (not necessarily its center), there exist two antipodal points with respect to the given point whose functional value is the same. The theorem also underpins the explanation of why rotating a wobbly table will bring it to stability (subject to certain easily met constraints).
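The decimal-subdivision procedure attributed to Stevin above lends itself to a short illustration. The following Python sketch is an illustration only (the helper name decimal_root, the sample function, and the digit count are assumptions made for the example, not taken from the original text): it repeatedly splits a sign-changing interval into ten subintervals and keeps one where the sign change persists, which the intermediate value theorem guarantees contains a root.

def decimal_root(f, lo, hi, digits=6):
    """Approximate a root of a continuous f on [lo, hi], assuming
    f(lo) <= 0 <= f(hi), by splitting the interval into 10 parts per
    iteration and keeping a subinterval where f changes sign."""
    assert f(lo) <= 0 <= f(hi), "f must change sign on [lo, hi]"
    for _ in range(digits):
        step = (hi - lo) / 10.0
        for k in range(10):
            a, b = lo + k * step, lo + (k + 1) * step
            if f(a) <= 0 <= f(b):  # sign change: the IVT guarantees a root in [a, b]
                lo, hi = a, b
                break
    return lo

# Example: f(x) = x^2 - 2 satisfies f(1) = -1 and f(2) = 2, so the IVT gives
# a root in [1, 2]; six passes recover sqrt(2) to about six decimal places.
print(decimal_root(lambda x: x * x - 2, 1.0, 2.0))  # ~1.414213

Each pass shrinks the interval by a factor of ten, so every iteration fixes one more decimal digit of the root, mirroring Stevin's construction of the decimal expansion.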
https://en.wikipedia.org/wiki?curid=14884
Iran–Iraq War The Iran–Iraq War, known among Iranians as the "Holy Defence" (Persian: دفاع مقدس), began on 22 September 1980, when Iraq invaded Iran, and ended on 20 August 1988, when Iran accepted the UN-brokered ceasefire. Iraq wanted to replace Iran as the dominant Persian Gulf state, and was worried that the 1979 Iranian Revolution would lead Iraq's Shi'ite majority to rebel against the Ba'athist government. The war also followed a long history of border disputes, and Iraq planned to annex the oil-rich Khuzestan Province and the east bank of the Shatt al-Arab (Arvand Rud). Although Iraq hoped to take advantage of Iran's post-revolutionary chaos, it made limited progress and was quickly repelled; Iran regained virtually all lost territory by June 1982. For the next five years, Iran was on the offensive, until Iraq took back the initiative in 1988; Iraq's major offensives that year led to the final conclusion of the war. There were a number of proxy forces—most notably the People's Mujahedin of Iran siding with Iraq and the Iraqi Kurdish militias of the KDP and PUK siding with Iran. The United States, Britain, the Soviet Union, France, and most Arab countries provided political and logistic support for Iraq, while Iran was largely isolated. After eight years, war-exhaustion, economic devastation, decreased morale, military stalemate, international indifference to the Iraqi use of weapons of mass destruction against civilians, and increased U.S.–Iran military tension all led to a ceasefire brokered by the United Nations. The conflict has been compared to World War I in terms of the tactics used, including large-scale trench warfare with barbed wire stretched across fortified defensive lines, manned machine-gun posts, bayonet charges, Iranian human wave attacks, extensive use of chemical weapons by Iraq, and, later, deliberate attacks on civilian targets. A special feature of the war was the Iranian cult of the martyr, which had developed in the years before the revolution. The discourses on martyrdom formulated in the Iranian Shiite context led to the tactics of "human wave attacks" and thus had a lasting impact on the dynamics of the war. An estimated 500,000 Iraqi and Iranian soldiers died, in addition to a smaller number of civilians. The end of the war resulted in neither reparations nor border changes. The Iran–Iraq War was originally referred to as the "Persian Gulf War" until the Persian Gulf War of 1990 and 1991, after which it was known as the "First Persian Gulf War". The Iraq–Kuwait conflict, initially known as the "Second Persian Gulf War", eventually became known simply as the "Persian Gulf War", while the Iraq War from 2003 to 2011 has been called the "Second Persian Gulf War". In Iran, the war is known as the "Imposed War" and the "Holy Defense". State media in Iraq dubbed the war "Saddam's Qadisiyyah", in reference to the seventh-century Battle of al-Qādisiyyah, in which Arab warriors overcame the Sasanian Empire during the Muslim conquest of Iran. In April 1969, Iran abrogated the 1937 treaty over the Shatt al-Arab, and Iranian ships stopped paying tolls to Iraq when they used the waterway. The Shah argued that the 1937 treaty was unfair to Iran because almost all river borders around the world ran along the "thalweg", and because most of the ships that used the Shatt al-Arab were Iranian. 
Iraq threatened war over the Iranian move, but on 24 April 1969 an Iranian tanker escorted by Iranian warships (Joint Operation Arvand) sailed down the Shatt al-Arab, and Iraq—being the militarily weaker state—did nothing. The Iranian abrogation of the 1937 treaty marked the beginning of a period of acute Iraqi-Iranian tension that was to last until the Algiers Accords of 1975. The relationship between the governments of Iran and Iraq briefly improved in 1978, when Iranian agents in Iraq discovered plans for a pro-Soviet "coup d'état" against Iraq's government. When informed of this plot, Saddam ordered the execution of dozens of his army's officers and, in a sign of reconciliation, expelled Ruhollah Khomeini, an exiled leader of clerical opposition to the Shah, from Iraq. Nonetheless, Saddam considered the 1975 Algiers Agreement to be merely a truce rather than a definite settlement, and waited for an opportunity to contest it. Tensions between Iraq and Iran were fuelled by Iran's Islamic Revolution and its appearance of being a pan-Islamic force, in contrast to Iraq's Arab nationalism. Despite Iraq's goal of regaining the Shatt al-Arab, the Iraqi government initially seemed to welcome the Iranian Revolution, which overthrew Shah Mohammad Reza Pahlavi, who was seen as a common enemy. It is difficult to pinpoint when tensions began to build, but there were frequent cross-border skirmishes, largely at Iran's instigation. Ayatollah Ruhollah Khomeini called on Iraqis to overthrow the Ba'ath government, which was received with considerable anger in Baghdad. On 17 July 1979, despite Khomeini's call, Saddam gave a speech praising the Iranian Revolution and called for an Iraqi-Iranian friendship based on non-interference in each other's internal affairs. When Khomeini rejected Saddam's overture by calling for Islamic revolution in Iraq, Saddam was alarmed. Iran's new Islamic administration was regarded in Baghdad as an irrational, existential threat to the Ba'ath government, especially because the Ba'ath party, being secular in nature, discriminated against and posed a threat to the fundamentalist Shia movement in Iraq, whose clerics were Iran's allies within Iraq and whom Khomeini saw as oppressed. Saddam's primary interest in war may also have stemmed from his desire to right the supposed "wrong" of the Algiers Agreement, in addition to finally achieving his goal of annexing Khuzestan and becoming the regional superpower. Saddam's goal was to supplant Egypt as the "leader of the Arab world" and to achieve hegemony over the Persian Gulf. He saw Iran as greatly weakened by revolution, sanctions, and international isolation. Saddam had invested heavily in Iraq's military since his defeat against Iran in 1975, buying large amounts of weaponry from the Soviet Union, France and Britain. By 1980, Iraq possessed 200,000 soldiers, 2,000 tanks and 450 aircraft. Having watched the disintegration of the powerful Iranian army that had frustrated him in 1974–75, he saw an opportunity to attack, using the threat of Islamic Revolution as a pretext. On 8 March 1980, Iran announced it was withdrawing its ambassador from Iraq, downgraded its diplomatic ties to the chargé d'affaires level, and demanded that Iraq do the same. The following day, Iraq declared Iran's ambassador persona non grata and demanded his withdrawal from Iraq by 15 March. Iraq soon after expropriated the properties of 70,000 civilians believed to be of Iranian origin and expelled them from its territory. 
Many, if not most, of those expelled were in fact Arabic-speaking Iraqi Shias who had little to no family ties with Iran. This caused tensions between the two nations to increase further. Iraq began planning offensives, confident that they would succeed. Iran lacked both cohesive leadership and spare parts for its American- and British-made equipment. The Iraqis could mobilise up to 12 mechanised divisions, and morale was running high. In addition, the area around the Shatt al-Arab posed no obstacle for the Iraqis, as they possessed river-crossing equipment. Iraq correctly deduced that Iran's defences at the crossing points around the Karkheh and Karoun Rivers were undermanned and that the rivers could be easily crossed. Iraqi intelligence was also informed that the Iranian forces in Khuzestan (which had consisted of two divisions prior to the revolution) now consisted of only several ill-equipped and under-strength battalions; only a handful of company-sized tank units remained operational. The only qualms the Iraqis had were over the Islamic Republic of Iran Air Force (formerly the Imperial Iranian Air Force). Despite the purge of several key pilots and commanders, as well as the lack of spare parts, the air force had shown its power during local uprisings and rebellions, and it was also active after the failed U.S. attempt to rescue its hostages, Operation Eagle Claw. Based on these observations, Iraq's leaders decided to carry out a surprise airstrike against the Iranian air force's infrastructure prior to the main invasion. In Iran, severe officer purges (including numerous executions ordered by Sadegh Khalkhali, the new Revolutionary Court judge) and shortages of spare parts for Iran's U.S.- and British-made equipment had crippled Iran's once-mighty military. Between February and September 1979, Iran's government executed 85 senior generals and forced all major-generals and most brigadier-generals into early retirement. By September 1980, the government had purged 12,000 army officers. These purges resulted in a drastic decline in the Iranian military's operational capacities. The regular army (which, in 1978, was considered the world's fifth most powerful) had been badly weakened: the desertion rate had reached 60%, and the officer corps was devastated. The most highly skilled soldiers and aviators were exiled, imprisoned, or executed. Throughout the war, Iran never managed to fully recover from this flight of human capital. Continuous sanctions prevented Iran from acquiring many heavy weapons, such as tanks and aircraft. When the invasion occurred, many pilots and officers were released from prison, or had their executions commuted, to combat the Iraqis. In addition, many junior officers were promoted to generals, resulting in the army being more integrated into the regime by the war's end, as it is today. Iran still had at least 1,000 operational tanks and several hundred functional aircraft, and could cannibalize equipment to procure spare parts. Meanwhile, a new paramilitary organisation gained prominence in Iran: the Islamic Revolutionary Guard Corps (often shortened to "Revolutionary Guards", and known in Iran as the "Sepah-e-Pasdaran"). It was intended to protect the new regime and counterbalance the army, which was seen as less loyal. Despite having been trained as a paramilitary organisation, after the Iraqi invasion it was forced to act as a regular army. 
Initially, they refused to fight alongside the army, which resulted in many defeats, but by 1982, the two groups began carrying out combined operations.
Another paramilitary militia was founded in response to the invasion, the "Army of 20 Million", commonly known as the Basij. The Basij were poorly armed and had members as young as 12 and as old as 70. They often acted in conjunction with the Revolutionary Guard, launching so-called human wave attacks and other campaigns against the Iraqis. They were subordinate to the Revolutionary Guards, and they made up most of the manpower that was used in the Revolutionary Guard's attacks.
Stephen Pelletiere wrote in his 1992 book "The Iran–Iraq War: Chaos in a Vacuum": The most important dispute was over the Shatt al-Arab waterway. Iran repudiated the demarcation line established in the Anglo-Ottoman Convention of Constantinople of November 1913. Iran asked that the border run along the thalweg, the deepest point of the navigable channel. Iraq, encouraged by Britain, took Iran to the League of Nations in 1934, but the disagreement was not resolved. Finally, in 1937, Iran and Iraq signed their first boundary treaty. The treaty established the waterway border on the eastern bank of the river except for an anchorage zone near Abadan, which was allotted to Iran and where the border ran along the thalweg. Iran sent a delegation to Iraq soon after the Ba'ath coup in 1969 and, when Iraq refused to proceed with negotiations over a new treaty, Iran withdrew from the treaty of 1937.
The 1974–75 Shatt al-Arab clashes were a previous Iranian-Iraqi standoff in the region of the Shatt al-Arab waterway. Nearly 1,000 people were killed in the clashes. It was the most significant dispute over the Shatt al-Arab waterway in modern times, prior to the Iran–Iraq War. Five years later, on 17 September 1980, Iraq suddenly abrogated the Algiers Protocol following the Iranian revolution. Saddam Hussein claimed that the Islamic Republic of Iran refused to abide by the stipulations of the Algiers Protocol and that Iraq therefore considered the Protocol null and void. Five days later, the Iraqi army crossed the border.
Iraq launched a full-scale invasion of Iran on 22 September 1980. The Iraqi Air Force launched surprise air strikes on ten Iranian airfields with the objective of destroying the Iranian Air Force. The attack failed to damage the Iranian Air Force significantly: it damaged some of Iran's airbase infrastructure but destroyed few aircraft. The Iraqi Air Force was only able to strike in depth with a few MiG-23BN, Tu-22, and Su-20 aircraft, and Iran had built hardened aircraft shelters where most of its combat aircraft were stored. The next day, Iraq launched a ground invasion along a broad front in three simultaneous attacks. The invasion's purpose, according to Saddam, was to blunt the edge of Khomeini's movement and to thwart his attempts to export his Islamic revolution to Iraq and the Persian Gulf states. Saddam hoped that by annexing Khuzestan, he would deal such a blow to Iran's prestige that it would lead to the new government's downfall, or at least end Iran's calls for his overthrow.
Of Iraq's six divisions that invaded by ground, four were sent to Khuzestan, near the border's southern end, to cut off the Shatt al-Arab from the rest of Iran and to establish a territorial security zone. The other two divisions invaded across the northern and central parts of the border to prevent an Iranian counter-attack. Two of the four Iraqi divisions, one mechanised and one armoured, operated near the southern end and began a siege of the strategically important port cities of Abadan and Khorramshahr. The two armoured divisions secured the territory bounded by the cities of Khorramshahr, Ahvaz, Susangerd, and Musian. On the central front, the Iraqis occupied Mehran, advanced towards the foothills of the Zagros Mountains, and were able to block the traditional Tehran–Baghdad invasion route by securing territory forward of Qasr-e Shirin, Iran. On the northern front, the Iraqis attempted to establish a strong defensive position opposite Suleimaniya to protect the Iraqi Kirkuk oil complex. Iraqi hopes of an uprising by the ethnic Arabs of Khuzestan failed to materialise, as most of the ethnic Arabs remained loyal to Iran. The Iraqi troops advancing into Iran in 1980 were described by Patrick Brogan as "badly led and lacking in offensive spirit". The first known chemical weapons attack by Iraq on Iran probably took place during the fighting around Susangerd.
Though the Iraqi air invasion surprised the Iranians, the Iranian air force retaliated the day after with a large-scale attack against Iraqi air bases and infrastructure in Operation Kaman 99. Groups of F-4 Phantom and F-5 Tiger fighter jets attacked targets throughout Iraq, such as oil facilities, dams, petrochemical plants, and oil refineries, including Mosul Airbase, Baghdad, and the Kirkuk oil refinery. Iraq was taken by surprise at the strength of the retaliation, which caused the Iraqis heavy losses and economic disruption, but the Iranians took heavy losses as well, losing many aircraft and aircrews to Iraqi air defences. Iranian Army Aviation's AH-1 Cobra helicopter gunships began attacks on the advancing Iraqi divisions, along with F-4 Phantoms armed with Maverick missiles; they destroyed numerous armoured vehicles and impeded the Iraqi advance, though without halting it completely. Meanwhile, Iraqi air attacks on Iran were repelled by Iran's F-14 Tomcat interceptor fighter jets using Phoenix missiles, which downed a dozen of Iraq's Soviet-built fighters in the first two days of battle.
The Iranian regular military, police forces, volunteer Basij, and Revolutionary Guards all conducted their operations separately; thus, the Iraqi invading forces did not face coordinated resistance. However, on 24 September, the Iranian Navy attacked Basra, Iraq, destroying two oil terminals near the Iraqi port of Faw, which reduced Iraq's ability to export oil. The Iranian ground forces (primarily consisting of the Revolutionary Guard) retreated to the cities, where they set up defences against the invaders. On 30 September, Iran's air force launched Operation Scorch Sword, striking and badly damaging the nearly complete Osirak nuclear reactor near Baghdad. By 1 October, Baghdad had been subjected to eight air attacks. In response, Iraq launched aerial strikes against Iranian targets. The mountainous border between Iran and Iraq made a deep ground invasion almost impossible there, and air strikes were used instead. The invasion's first waves had been a series of air strikes targeted at Iranian airfields.
Iraq also attempted to bomb Tehran, Iran's capital and command centre, into submission.
On 22 September, a prolonged battle began in the city of Khorramshahr, eventually leaving 7,000 dead on each side. Reflecting the bloody nature of the struggle, Iranians came to call Khorramshahr "City of Blood". The battle began with Iraqi air raids against key points and mechanised divisions advancing on the city in a crescent-like formation. They were slowed by Iranian air attacks and Revolutionary Guard troops with recoilless rifles, rocket-propelled grenades, and Molotov cocktails. The Iranians flooded the marsh areas around the city, forcing the Iraqis to traverse narrow strips of land. Iraqi tanks launched attacks with no infantry support, and many tanks were lost to Iranian anti-tank teams. However, by 30 September, the Iraqis had managed to clear the Iranians from the outskirts of the city. The next day, the Iraqis launched infantry and armoured attacks into the city. After heavy house-to-house fighting, the Iraqis were repelled. On 14 October, the Iraqis launched a second offensive. The Iranians launched a controlled withdrawal from the city, street by street. By 24 October, most of the city was captured, and the Iranians evacuated across the Karun River. Some partisans remained, and fighting continued until 10 November.
The people of Iran, rather than turning against their still-weak Islamic Republic, rallied around their country. An estimated 200,000 fresh troops had arrived at the front by November, many of them ideologically committed volunteers. Though Khorramshahr was finally captured, the battle had delayed the Iraqis enough to allow the large-scale deployment of the Iranian military.
In November, Saddam ordered his forces to advance towards Dezful and Ahvaz, and lay siege to both cities. However, the Iraqi offensive had been badly mauled by Iranian militias and air power. Iran's air force had destroyed Iraq's army supply depots and fuel supplies, and was strangling the country through an aerial siege. Iran's supplies had not been exhausted, despite sanctions, and the military often cannibalised spare parts from other equipment and began searching for parts on the black market. On 28 November, Iran launched Operation Morvarid (Pearl), a combined air and sea attack which destroyed 80% of Iraq's navy and all of its radar sites in the southern portion of the country. When Iraq laid siege to Abadan and dug its troops in around the city, it was unable to blockade the port, which allowed Iran to resupply Abadan by sea. Iraq's strategic reserves had been depleted, and by now it lacked the power to go on any major offensives until nearly the end of the war. On 7 December, Saddam announced that Iraq was going on the defensive. By the end of 1980, Iraq had destroyed about 500 Western-built Iranian tanks and captured 100 others.
For the next eight months, both sides were on a defensive footing (with the exception of the Battle of Dezful), as the Iranians needed more time to reorganise their forces after the damage inflicted by the purge of 1979–80. During this period, fighting consisted mainly of artillery duels and raids. Iraq had mobilised 21 divisions for the invasion, while Iran countered with only 13 regular army divisions and one brigade. Of the regular divisions, only seven were deployed to the border. The war bogged down into World War I-style trench warfare with tanks and modern late-20th-century weapons.
Due to the power of anti-tank weapons such as the RPG-7, armoured manoeuvre by the Iraqis was very costly, and they consequently entrenched their tanks into static positions. Iraq also began firing Scud missiles into Dezful and Ahvaz, and used terror bombing to bring the war to the Iranian civilian population. Iran launched dozens of "human wave assaults".
By 5 January 1981, Iran had reorganised its forces enough to launch a large-scale offensive, Operation Nasr (Victory). The Iranians launched their major armoured offensive from Dezful in the direction of Susangerd, consisting of tank brigades from the 16th "Qazvin", 77th "Khorasan", and 92nd "Khuzestan" Armoured Divisions, and broke through Iraqi lines. However, the Iranian tanks had raced through Iraqi lines with their flanks unprotected and with no infantry support; as a result, they were cut off by Iraqi tanks. In the ensuing Battle of Dezful, the Iranian armoured divisions were nearly wiped out in one of the biggest tank battles of the war. When the Iranian tanks tried to manoeuvre, they became stuck in the mud of the marshes, and many tanks were abandoned. The Iraqis lost 45 T-55 and T-62 tanks, while the Iranians lost 100–200 Chieftain and M-60 tanks. Reporters counted roughly 150 destroyed or deserted Iranian tanks, as well as 40 Iraqi tanks. 141 Iranians were killed during the battle.
The battle had been ordered by Iranian president Abulhassan Banisadr, who was hoping that a victory might shore up his deteriorating political position; instead, the failure hastened his fall. Many of Iran's problems took place because of political infighting between President Banisadr, who supported the regular army, and the hardliners who supported the IRGC. Once he was impeached and the competition ended, the performance of the Iranian military improved.
Iran was further distracted by internal fighting between the regime and the Islamic Marxist "Mujaheddin e-Khalq" (MEK) on the streets of Iran's major cities in June 1981 and again in September. After the end of these battles, the MEK gradually leaned towards Saddam, completely taking his side by the mid-1980s. In 1986, MEK leader Massoud Rajavi moved from Paris to Iraq and set up a base on the Iranian border.
The Battle of Dezful became a critical battle in Iranian military thinking. Less emphasis was placed on the Army with its conventional tactics, and more emphasis was placed on the Revolutionary Guard with its unconventional tactics.
The Iraqi Air Force, badly damaged by the Iranians, was moved to the H-3 Airbase in western Iraq, near the Jordanian border and away from Iran. However, on 3 April 1981, the Iranian air force used eight F-4 Phantom fighter-bombers, four F-14 Tomcats, three Boeing 707 refuelling tankers, and one Boeing 747 command plane to launch a surprise attack on H-3, destroying 27–50 Iraqi fighter jets and bombers.
Despite the success of the H-3 attack (in addition to other air attacks), the Iranian Air Force was forced to cancel its successful 180-day air offensive. It had been seriously weakened by sanctions and pre-war purges, and was further damaged by a fresh purge after the impeachment crisis of President Banisadr. Unable to survive further attrition, the Iranian Air Force decided to limit its losses and abandoned its efforts to control Iranian airspace. It would henceforth fight on the defensive, trying to deter the Iraqis rather than engaging them.
While the Iraqi air force would remain weak throughout 1981–1982, within the next few years it would rearm and expand again, and begin to regain the strategic initiative.
The Iranians suffered from a shortage of heavy weapons but had a large number of devoted volunteer troops, so they began using human wave attacks against the Iraqis. Typically, an Iranian assault would commence with poorly trained Basij, who would launch the primary human wave assaults to swamp the weakest portions of the Iraqi lines en masse (on some occasions even bodily clearing minefields). This would be followed up by the more experienced Revolutionary Guard infantry, who would breach the weakened Iraqi lines, and then by the regular army's mechanised forces, which would manoeuvre through the breach and attempt to encircle and defeat the enemy. According to historian Stephen C. Pelletiere, the idea of Iranian "human wave attacks" was a misconception. Instead, the Iranian tactics consisted of using groups of 22-man infantry squads, which moved forward to attack specific objectives. As the squads surged forward to execute their missions, they gave the impression of a "human wave attack". Nevertheless, the idea of "human wave attacks" remained virtually synonymous with any large-scale infantry frontal assault Iran carried out. Large numbers of troops would be used, aimed at overwhelming the Iraqi lines (usually the weakest portion, typically manned by the Iraqi Popular Army), regardless of losses. According to the former Iraqi general Ra'ad al-Hamdani, the Iranian human wave charges consisted of armed "civilians" who carried most of their necessary equipment themselves into battle and often lacked command and control and logistics. Operations were often carried out at night, and deception operations, infiltrations, and manoeuvres became more common. The Iranians would also reinforce the infiltrating forces with new units to keep up their momentum. Once a weak point was found, the Iranians would concentrate all of their forces into that area in an attempt to break through with human wave attacks.
The human wave attacks, while extremely bloody (tens of thousands of troops died in the process), caused major Iraqi defeats when used in combination with infiltration and surprise. Because the Iraqis dug their tanks and infantry into static, entrenched positions, the Iranians would manage to break through the lines and encircle entire divisions. The mere fact that the Iranian forces used manoeuvre warfare with their light infantry against static Iraqi defences was often the decisive factor in battle. However, lack of coordination between the Iranian Army and the IRGC and shortages of heavy weaponry played a detrimental role, often leaving most of the infantry unsupported by artillery and armour.
After the Iraqi offensive stalled in March 1981, there was little change in the front other than Iran retaking the high ground above Susangerd in May. By late 1981, Iran returned to the offensive and launched a new operation, Operation Samen-ol-A'emeh (The Eighth Imam), ending the Iraqi siege of Abadan on 27–29 September 1981. The Iranians used a combined force of regular army artillery with small groups of armour, supported by Pasdaran (IRGC) and Basij infantry. On 15 October, after breaking the siege, a large Iranian convoy was ambushed by Iraqi tanks, and during the ensuing tank battle Iran lost 20 Chieftains and other armoured vehicles and withdrew from the previously gained territory.
On 29 November 1981, Iran began Operation Tariq al-Qods with three army brigades and seven Revolutionary Guard brigades. The Iraqis had failed to properly patrol their occupied areas, and the Iranians constructed a road through the unguarded sand dunes, launching their attack from the Iraqi rear. The town of Bostan was retaken from Iraqi divisions by 7 December. By this time the Iraqi Army was experiencing serious morale problems, compounded by the fact that Operation Tariq al-Qods marked the first use of Iranian "human wave" tactics, in which the Revolutionary Guard light infantry repeatedly charged Iraqi positions, oftentimes without the support of armour or air power. The fall of Bostan exacerbated the Iraqis' logistical problems, forcing them to use a roundabout route from Ahvaz to the south to resupply their troops. 6,000 Iranians and over 2,000 Iraqis were killed in the operation.
The Iraqis, realising that the Iranians were planning to attack, decided to preempt them with Operation al-Fawz al-'Azim (Supreme Success) on 19 March 1982. Using a large number of tanks, helicopters, and fighter jets, they attacked the Iranian buildup around the Roghabiyeh pass. Though Saddam and his generals assumed they had succeeded, in reality the Iranian forces remained fully intact. The Iranians had concentrated much of their forces by bringing them directly from the cities and towns throughout Iran via trains, buses, and private cars. The concentration of forces did not resemble a traditional military buildup, and although the Iraqis detected a population buildup near the front, they failed to realise that this was an attacking force. As a result, Saddam's army was unprepared for the Iranian offensives to come.
Iran's next major offensive, led by then-Colonel Ali Sayad Shirazi, was Operation Undeniable Victory. On 22 March 1982, Iran launched an attack which took the Iraqi forces by surprise: using Chinook helicopters, they landed behind Iraqi lines, silenced their artillery, and captured an Iraqi headquarters. The Iranian Basij then launched "human wave" attacks, consisting of 1,000 fighters per wave. Though they took heavy losses, they eventually broke through Iraqi lines. The Revolutionary Guard and regular army followed up by surrounding the Iraqi 9th and 10th Armoured and 1st Mechanised Divisions, which had camped close to the Iranian town of Shush. The Iraqis launched a counter-attack using their 12th Armoured Division to break the encirclement and rescue the surrounded divisions. Iraqi tanks came under attack by 95 Iranian F-4 Phantom and F-5 Tiger fighter jets, which destroyed much of the division. Operation Undeniable Victory was an Iranian victory; Iraqi forces were driven away from Shush, Dezful and Ahvaz. The Iranian armed forces destroyed 320–400 Iraqi tanks and armoured vehicles in a costly success: in just the first day of the battle, the Iranians lost 196 tanks. By this time, most of Khuzestan province had been recaptured.
In preparation for Operation Beit ol-Moqaddas, the Iranians had launched numerous air raids against Iraqi air bases, destroying 47 jets (including Iraq's brand-new Mirage F-1 fighter jets from France); this gave the Iranians air superiority over the battlefield and allowed them to monitor Iraqi troop movements. On 29 April, Iran launched the offensive. 70,000 Revolutionary Guard and Basij members struck on several axes – Bostan, Susangerd, the west bank of the Karun River, and Ahvaz.
The Basij launched human wave attacks, which were followed up by the regular army and Revolutionary Guard with support from tanks and helicopters. Under heavy Iranian pressure, the Iraqi forces retreated. By 12 May, Iran had driven out all Iraqi forces from the Susangerd area. The Iranians captured several thousand Iraqi troops and a large number of tanks. Nevertheless, the Iranians suffered many losses as well, especially among the Basij.
The Iraqis retreated to the Karun River, with only Khorramshahr and a few outlying areas remaining in their possession. Saddam ordered 70,000 troops to be placed around the city of Khorramshahr. The Iraqis created a hastily constructed defence line around the city and outlying areas. To discourage airborne commando landings, they also placed metal spikes and destroyed cars in areas likely to be used as troop landing zones. Saddam Hussein even visited Khorramshahr in a dramatic gesture, swearing that the city would never be relinquished. However, Khorramshahr's only resupply point was across the Shatt al-Arab, and the Iranian air force began bombing the supply bridges to the city, while their artillery zeroed in on the besieged garrison.
In the early morning hours of 23 May 1982, the Iranians began the drive towards Khorramshahr across the Karun River. This part of Operation Beit ol-Moqaddas was spearheaded by the 77th Khorasan division with its tanks, along with the Revolutionary Guard and Basij. The Iranians hit the Iraqis with destructive air strikes and massive artillery barrages, crossed the Karun River, captured bridgeheads, and launched human wave attacks towards the city. Saddam's defensive barricade collapsed; in less than 48 hours of fighting, the city fell and 19,000 Iraqis surrendered to the Iranians. A total of 10,000 Iraqis were killed or wounded in Khorramshahr, while the Iranians suffered 30,000 casualties. During the whole of Operation Beit ol-Moqaddas, 33,000 Iraqi soldiers were captured by the Iranians.
The fighting had battered the Iraqi military: its strength fell from 210,000 to 150,000 troops; over 20,000 Iraqi soldiers were killed and over 30,000 captured; two out of four active armoured divisions and at least three mechanised divisions fell to less than a brigade's strength; and the Iranians had captured over 450 tanks and armoured personnel carriers. The Iraqi Air Force was also left in poor shape: after losing up to 55 aircraft since early December 1981, it had only 100 intact fighter-bombers and interceptors. A defector who flew his MiG-21 to Syria in June 1982 revealed that the Iraqi Air Force had only three squadrons of fighter-bombers capable of mounting operations into Iran. The Iraqi Army Air Corps was in slightly better shape and could still operate more than 70 helicopters. Despite this, the Iraqis still held 3,000 tanks, while Iran held 1,000.
At this point, Saddam believed that his army was too demoralised and damaged to hold onto Khuzestan and major swathes of Iranian territory, and withdrew his remaining forces, redeploying them in defence along the border. However, his troops continued to occupy some key border areas of Iran, including the disputed territories that had prompted his invasion, notably the Shatt al-Arab waterway. In response to the failures against the Iranians at Khorramshahr, Saddam ordered the executions of Generals Juwad Shitnah and Salah al-Qadhi and Colonels Masa and al-Jalil. At least a dozen other high-ranking officers were also executed during this time.
This became an increasingly common punishment for those who failed him in battle.
In April 1982, the rival Ba'athist regime in Syria, one of the few nations that supported Iran, closed the Kirkuk–Baniyas pipeline that had allowed Iraqi oil to reach tankers on the Mediterranean, reducing the Iraqi budget by $5 billion per month. Journalist Patrick Brogan wrote, "It appeared for a while that Iraq would be strangled economically before it was defeated militarily." Syria's closure of the Kirkuk–Baniyas pipeline left Iraq with the pipeline to Turkey as the only means of exporting oil. That pipeline, however, had only a limited capacity, which was insufficient to pay for the war. Saudi Arabia, Kuwait, and the other Gulf states saved Iraq from bankruptcy by providing it with an average of $60 billion in subsidies per year. Though Iraq had previously been hostile towards the other Gulf states, "the threat of Persian fundamentalism was far more feared." They were especially inclined to fear Iranian victory after Ayatollah Khomeini declared monarchies to be illegitimate and an un-Islamic form of government. Khomeini's statement was widely received as a call to overthrow the Gulf monarchies. Journalists John Bulloch and Harvey Morris wrote: The virulent Iranian campaign, which at its peak seemed to be making the overthrow of the Saudi regime a war aim on a par with the defeat of Iraq, did have an effect on the Kingdom [of Saudi Arabia], but not the one the Iranians wanted: instead of becoming more conciliatory, the Saudis became tougher, more self-confident, and less prone to seek compromise. Saudi Arabia was said to provide Iraq with $1 billion per month starting in mid-1982.
Iraq began receiving support from the United States and West European countries as well. Saddam was given diplomatic, monetary, and military support by the United States, including massive loans, political influence, and intelligence on Iranian deployments gathered by American spy satellites. The Iraqis relied heavily on American satellite imagery and radar planes to detect Iranian troop movements, which enabled them to move troops to the site of an impending battle. With Iranian success on the battlefield, the United States increased its support of the Iraqi government, supplying intelligence, economic aid, and dual-use equipment and vehicles, as well as normalising its intergovernmental relations (which had been broken during the 1967 Six-Day War). President Ronald Reagan decided that the United States "could not afford to allow Iraq to lose the war to Iran" and that the United States "would do whatever was necessary to prevent Iraq from losing". Reagan formalised this policy by issuing a National Security Decision Directive to this effect in June 1982. In 1982, Reagan removed Iraq from the list of countries "supporting terrorism" and sold weapons such as howitzers to Iraq via Jordan. France sold Iraq millions of dollars' worth of weapons, including Gazelle helicopters, Mirage F-1 fighters, and Exocet missiles. Both the United States and West Germany sold Iraq dual-use pesticides and poisons that would be used to create chemical weapons, as well as other arms such as Roland missiles. At the same time, the Soviet Union, angered with Iran for purging and destroying the communist Tudeh Party, sent large shipments of weapons to Iraq. The Iraqi Air Force was replenished with Soviet, Chinese, and French fighter jets and attack/transport helicopters.
Iraq also replenished its stocks of small arms and anti-tank weapons, such as AK-47s and rocket-propelled grenades, from its supporters. The depleted tank forces were rebuilt with more Soviet and Chinese tanks, and the Iraqis were reinvigorated in the face of the coming Iranian onslaught. Iran was portrayed as the aggressor, and would be seen as such until the 1990–1991 Persian Gulf War, when Iraq itself would be condemned. Iran did not have the money to purchase arms on the same scale as Iraq, and counted on China, North Korea, Libya, Syria, and Japan to supply everything from weapons and munitions to logistical and engineering equipment.
On 20 June 1982, Saddam announced that he wanted to sue for peace, and proposed an immediate ceasefire and withdrawal from Iranian territory within two weeks. Khomeini responded by saying the war would not end until a new government was installed in Iraq and reparations paid. He proclaimed that Iran would invade Iraq and would not stop until the Ba'ath regime was replaced by an Islamic republic. Iran supported a government in exile for Iraq, the Supreme Council of the Islamic Revolution in Iraq, led by the exiled Iraqi cleric Mohammad Baqer al-Hakim, which was dedicated to overthrowing the Ba'ath party. It recruited POWs, dissidents, exiles, and Shias to join the Badr Brigade, the military wing of the organisation.
The decision to invade Iraq was taken after much debate within the Iranian government. One faction, comprising Prime Minister Mir-Hossein Mousavi, Foreign Minister Ali Akbar Velayati, President Ali Khamenei, Army Chief of Staff General Ali Sayad Shirazi, and Major General Qasem-Ali Zahirnejad, wanted to accept the ceasefire, as most of Iranian soil had been recaptured. Generals Shirazi and Zahirnejad in particular were opposed to the invasion of Iraq on logistical grounds, and stated they would consider resigning if "unqualified people continued to meddle with the conduct of the war". Of the opposing view was a hardline faction led by the clerics on the Supreme Defence Council, whose leader was the politically powerful speaker of the "Majlis", Akbar Hashemi Rafsanjani. Iran also hoped that its attacks would ignite a revolt against Saddam's rule by the Shia and Kurdish populations of Iraq, possibly resulting in his downfall. It succeeded with the Kurdish population, but not the Shia. Iran had captured large quantities of Iraqi equipment (enough to create several tank battalions; Iran once again had 1,000 tanks) and had also managed to clandestinely procure spare parts.
At a cabinet meeting in Baghdad, Minister of Health Riyadh Ibrahim Hussein suggested that Saddam could step down temporarily as a way of easing Iran towards a ceasefire, and then return to power afterwards. Saddam, annoyed, asked if anyone else in the Cabinet agreed with the Health Minister's idea. When no one raised their hand in support, he escorted Riyadh Hussein to the next room, closed the door, and shot him with his pistol. Saddam returned to the room and continued with his meeting.
For the most part, Iraq remained on the defensive for the next five years, unable and unwilling to launch any major offensives, while Iran launched more than 70 offensives. Iraq's strategy changed from holding territory in Iran to denying Iran any major gains in Iraq (as well as holding onto the disputed territories along the border). Saddam commenced a policy of total war, gearing most of his country towards defending against Iran.
By 1988, Iraq was spending 40–75% of its GDP on military equipment. Saddam had also more than doubled the size of the Iraqi army, from 200,000 soldiers (12 divisions and three independent brigades) to 500,000 (23 divisions and nine brigades). Iraq also began launching air raids against Iranian border cities, greatly increasing the practice by 1984.
By the end of 1982, Iraq had been resupplied with new Soviet and Chinese materiel, and the ground war entered a new phase. Iraq used newly acquired T-55, T-62 and T-72 tanks (as well as Chinese copies), BM-21 truck-mounted rocket launchers, and Mi-24 helicopter gunships to prepare a Soviet-type three-line defence, replete with obstacles such as barbed wire, minefields, fortified positions, and bunkers. The Combat Engineer Corps built bridges across water obstacles, laid minefields, erected earthen revetments, dug trenches, built machine-gun nests, and prepared new defence lines and fortifications.
Iraq began to focus on using defence in depth to defeat the Iranians, creating multiple static defence lines to bleed the attackers by their sheer depth. When faced with a large Iranian attack, in which human waves would overrun Iraq's forward entrenched infantry defences, the Iraqis would often retreat, but their static defences would bleed the Iranians and channel them into certain directions, drawing them into traps or pockets. Iraqi air and artillery attacks would then pin the Iranians down, while tank and mechanised infantry attacks using mobile warfare would push them back. Sometimes, the Iraqis would launch "probing attacks" into the Iranian lines to provoke them into launching their attacks sooner. While Iranian human wave attacks were successful against the dug-in Iraqi forces in Khuzestan, they had trouble breaking through Iraq's defence-in-depth lines. Iraq had a logistical advantage in its defence: the front was located near the main Iraqi bases and arms depots, allowing its army to be efficiently supplied. By contrast, the front in Iran was a considerable distance away from the main Iranian bases and arms depots, and as such, Iranian troops and supplies had to travel through mountain ranges before arriving at the front. In addition, Iran's military power was weakened once again by large purges in 1982, following another alleged coup attempt.
The Iranian generals wanted to launch an all-out attack on Baghdad and seize it before their weapon shortages worsened further. That option was rejected as unfeasible, and the decision was made instead to capture one area of Iraq after another in the hope that a series of blows, delivered foremost by the Revolutionary Guards Corps, would force a political solution to the war (including Iraq withdrawing completely from the disputed territories along the border).
The Iranians planned their attack in southern Iraq, near Basra. Called Operation Ramadan, it involved over 180,000 troops from both sides, and was one of the largest land battles since World War II. Iranian strategy dictated that they launch their primary attack on the weakest point of the Iraqi lines; however, the Iraqis were informed of Iran's battle plans and moved all of their forces to the area the Iranians planned to attack. The Iraqis were equipped with tear gas to use against the enemy, in what would be the first major use of chemical warfare during the conflict; it threw an entire attacking division into chaos. Over 100,000 Revolutionary Guards and Basij volunteer forces charged towards the Iraqi lines.
The Iraqi troops had entrenched themselves in formidable defences and had set up a network of bunkers and artillery positions. The Basij used human waves and were even sent to bodily clear the Iraqi minefields so the Revolutionary Guards could advance. Combatants came so close to one another that Iranians were able to board Iraqi tanks and throw grenades inside the hulls. By the eighth day, the Iranians had pushed inside Iraq and taken several causeways. Iran's Revolutionary Guards also used the T-55 tanks they had captured in earlier battles.
However, the attacks came to a halt and the Iranians turned to defensive measures. Seeing this, Iraq used its Mi-25 helicopters, along with Gazelle helicopters armed with Euromissile HOT anti-tank missiles, against columns of Iranian mechanised infantry and tanks. These "hunter-killer" teams of helicopters, which had been formed with the help of East German advisors, proved very costly for the Iranians. Aerial dogfights occurred between Iraqi MiGs and Iranian F-4 Phantoms.
On 16 July, Iran tried again further north and managed to push the Iraqis back. However, just short of Basra, the poorly equipped Iranian forces were surrounded on three sides by Iraqis with heavy weaponry. Some were captured, while many were killed. Only a last-minute attack by Iranian AH-1 Cobra helicopters stopped the Iraqis from routing the Iranians. Three more similar attacks occurred around the Khorramshahr–Baghdad road area towards the end of the month, but none were significantly successful. Iraq had concentrated three armoured divisions, the 3rd, 9th, and 10th, as a counter-attack force to attack any penetrations. They were successful in defeating the Iranian breakthroughs, but suffered heavy losses. The 9th Armoured Division in particular had to be disbanded, and was never reformed. The total casualty toll had grown to include 80,000 soldiers and civilians. 400 Iranian tanks and armoured vehicles were destroyed or abandoned, while Iraq lost no fewer than 370 tanks.
After Iran's failure in Operation Ramadan, it carried out only a few smaller attacks. Iran launched two limited offensives aimed at reclaiming the Sumar Hills and isolating the Iraqi pocket at Naft Shahr on the international border, both of which were part of the disputed territories still under Iraqi occupation. It then aimed to capture the Iraqi border town of Mandali. The Iranians planned to take the Iraqis by surprise using Basij militiamen, army helicopters, and some armoured forces, then stretch the Iraqi defences and possibly break through them to open a road to Baghdad for future exploitation. During Operation Muslim ibn Aqil (1–7 October), Iran recovered disputed territory straddling the international border and reached the outskirts of Mandali before being stopped by Iraqi helicopter and armoured attacks. During Operation Muharram (1–21 November), the Iranians captured part of the Bayat oilfield with the help of their fighter jets and helicopters, destroying 105 Iraqi tanks, 70 APCs, and 7 planes with few losses. They nearly breached the Iraqi lines but failed to capture Mandali after the Iraqis sent reinforcements, including brand-new T-72 tanks, whose frontal armour could not be pierced by Iranian TOW missiles. The Iranian advance was also impeded by heavy rains. 3,500 Iraqis and an unknown number of Iranians died, with only minor gains for Iran.
After the failure of the 1982 summer offensives, Iran believed that a major effort along the entire breadth of the front would yield victory.
During the course of 1983, the Iranians launched five major assaults along the front, though none achieved substantial success, even as the Iranians staged more massive "human wave" attacks. By this time, it was estimated that no more than 70 Iranian fighter aircraft were operational at any given time; Iran had its own helicopter repair facilities, left over from before the revolution, and thus often used helicopters for close air support. Iranian fighter pilots had superior training compared to their Iraqi counterparts (as most had received training from US officers before the 1979 revolution) and would continue to dominate in combat. However, aircraft shortages, the size of the territory and airspace to be defended, and American intelligence supplied to Iraq allowed the Iraqis to exploit gaps in Iranian airspace. Iraqi air campaigns met little opposition, striking over half of Iran, as the Iraqis were able to gain air superiority towards the end of the war.
In Operation Before the Dawn, launched 6 February 1983, the Iranians shifted focus from the southern to the central and northern sectors. Employing 200,000 "last reserve" Revolutionary Guard troops, Iran attacked along a stretch of front near al-Amarah, Iraq, southeast of Baghdad, in an attempt to reach the highways connecting northern and southern Iraq. The attack was stalled by hilly escarpments, forests, and river torrents blanketing the way to al-Amarah, but the Iraqis could not force the Iranians back. Iran directed artillery at Basra, al-Amarah, and Mandali. The Iranians suffered a large number of casualties clearing minefields and breaching Iraqi anti-tank mines, which Iraqi engineers were unable to replace. After this battle, Iran reduced its use of human wave attacks, though they still remained a key tactic as the war went on.
Further Iranian attacks were mounted in the Mandali–Baghdad north-central sector in April 1983, but were repelled by Iraqi mechanised and infantry divisions. Casualties were high, and by the end of 1983, an estimated 120,000 Iranians and 60,000 Iraqis had been killed. Iran, however, held the advantage in the war of attrition.
From early 1983 to 1984, Iran launched a series of four "Valfajr" (Dawn) operations (which eventually numbered ten). During Operation Dawn-1, in early February 1983, 50,000 Iranian forces attacked westward from Dezful and were confronted by 55,000 Iraqi forces. The Iranian objective was to cut the road from Basra to Baghdad in the central sector. The Iraqis carried out 150 air sorties against the Iranians, and even bombed Dezful, Ahvaz, and Khorramshahr in retribution. The Iraqi counterattack was broken up by Iran's 92nd Armoured Division.
During Operation Dawn-2, in April 1983, the Iranians directed insurgency operations by proxy, supporting the Kurds in the north. With Kurdish support, the Iranians attacked on 23 July 1983, capturing the Iraqi town of Haj Omran and holding it against an Iraqi poison gas counteroffensive. This operation incited Iraq to later conduct indiscriminate chemical attacks against the Kurds. The Iranians attempted to further exploit activities in the north on 30 July 1983, during Operation Dawn-3. Iran saw an opportunity to sweep away the Iraqi forces controlling the roads between the Iranian mountain border towns of Mehran, Dehloran and Elam. Iraq launched airstrikes and equipped attack helicopters with chemical warheads; while ineffective, these attacks demonstrated both the Iraqi general staff's and Saddam's increasing interest in using chemical weapons.
In the end, 17,000 had been killed on both sides, with no gain for either country.
The focus of Operation Dawn-4 in September 1983 was the northern sector in Iranian Kurdistan. Three Iranian regular divisions, the Revolutionary Guard, and Kurdistan Democratic Party (KDP) elements amassed in Marivan and Sardasht in a move to threaten the major Iraqi city of Suleimaniyah. Iran's strategy was to press Kurdish tribes to occupy the Banjuin Valley, which lay close to Suleimaniyah and the oilfields of Kirkuk. To stem the tide, Iraq deployed Mi-8 attack helicopters equipped with chemical weapons and executed 120 sorties against the Iranian force, halting its advance inside Iraqi territory. 5,000 Iranians and 2,500 Iraqis died. Iran regained territory in the north, took Iraqi land, and captured 1,800 Iraqi prisoners, while Iraq abandoned large quantities of valuable weapons and war materiel in the field. Iraq responded to these losses by firing a series of Scud-B missiles into the cities of Dezful, Masjid Soleiman, and Behbehan. Iran's use of artillery against Basra while the battles in the north raged created multiple fronts, which effectively confused and wore down Iraq.
Previously, the Iranians had outnumbered the Iraqis on the battlefield, but Iraq expanded its military draft (pursuing a policy of total war), and by 1984, the armies were equal in size. By 1986, Iraq had twice as many soldiers as Iran. By 1988, Iraq would have 1 million soldiers, giving it the fourth largest army in the world. Some of its equipment, such as tanks, outnumbered the Iranians' by at least five to one. Iranian commanders, however, remained more tactically skilled.
After the Dawn operations, Iran attempted to change tactics. In the face of increasing Iraqi defence in depth, as well as increased Iraqi armaments and manpower, Iran could no longer rely on simple human wave attacks. Iranian offensives became more complex and involved extensive manoeuvre warfare using primarily light infantry. Iran launched frequent, sometimes smaller, offensives to slowly gain ground and deplete the Iraqis through attrition. It wanted to drive Iraq into economic failure by making it waste money on weapons and war mobilisation, and to deplete Iraq's smaller population by bleeding it dry, in addition to creating an anti-government insurgency (Iran was successful in Kurdistan, but not southern Iraq). Iran also supported its attacks with heavy weaponry when possible and with better planning, although the brunt of the battles still fell to the infantry. The Army and Revolutionary Guards worked together better as their tactics improved, and human wave attacks became less frequent (although still used).
To negate the Iraqi advantages of defence in depth, static positions, and heavy firepower, Iran began to focus on fighting in areas where the Iraqis could not use their heavy weaponry, such as marshes, valleys, and mountains, and frequently used infiltration tactics. Iran began training troops in infiltration, patrolling, night-fighting, marsh warfare, and mountain warfare. It also began training thousands of Revolutionary Guard commandos in amphibious warfare, as southern Iraq is marshy and filled with wetlands. Iran used speedboats to cross the marshes and rivers in southern Iraq and landed troops on the opposing banks, where they would dig in and set up pontoon bridges across the rivers and wetlands to allow heavy troops and supplies to cross. Iran also learned to integrate foreign guerrilla units as part of its military operations.
On the northern front, Iran began working heavily with the Peshmerga, Kurdish guerrillas. Iranian military advisors organised the Kurds into raiding parties of 12 guerrillas, which would attack Iraqi command posts, troop formations, infrastructure (including roads and supply lines), and government buildings. The oil refineries of Kirkuk became a favourite target, and were often hit by homemade Peshmerga rockets.
By 1984, the Iranian ground forces were reorganised well enough for the Revolutionary Guard to start Operation Kheibar, which lasted from 24 February to 19 March. On 15 February 1984, the Iranians began launching attacks against the central section of the front, where the Second Iraqi Army Corps was deployed: 250,000 Iraqis faced 250,000 Iranians. The goal of this new major offensive was the capture of the Basra–Baghdad highway, cutting off Basra from Baghdad and setting the stage for an eventual attack upon the city. The Iraqi high command had assumed that the marshlands above Basra were natural barriers to attack, and had not reinforced them. The marshes negated the Iraqi advantage in armour and absorbed artillery rounds and bombs. Prior to the attack, Iranian commandos on helicopters had landed behind Iraqi lines and destroyed Iraqi artillery. Iran launched two preliminary attacks prior to the main offensive, Operations Dawn 5 and Dawn 6. These saw the Iranians attempt to capture Kut al-Imara, Iraq, and sever the highway connecting Baghdad to Basra, which would have impeded Iraqi coordination of supplies and defences. Iranian troops crossed the river on motorboats in a surprise attack, though they came up short of the highway.
Operation Kheibar began on 24 February with Iranian infantrymen crossing the Hawizeh Marshes using motorboats and transport helicopters in an amphibious assault. The Iranians attacked the vital oil-producing Majnoon Island by landing troops via helicopters and severing the communication lines between Amareh and Basra. They then continued the attack towards Qurna. By 27 February, they had captured the island, but suffered catastrophic helicopter losses to the Iraqi Air Force: on that day, a massive array of Iranian helicopters transporting Pasdaran troops was intercepted by Iraqi combat aircraft (MiGs, Mirages and Sukhois). In what was essentially an aerial slaughter, Iraqi jets shot down 49 of the 50 Iranian helicopters. At times, fighting took place in deep water; Iraq ran live electrical cables through the water, electrocuting numerous Iranian troops and then displaying their corpses on state television.
By 29 February, the Iranians had reached the outskirts of Qurna and were closing in on the Baghdad–Basra highway. They had broken out of the marshes and returned to open terrain, where they were confronted by conventional Iraqi weapons, including artillery, tanks, air power, and mustard gas. 1,200 Iranian soldiers were killed in the counter-attack. The Iranians retreated back to the marshes, though they still held onto them along with Majnoon Island.
The Battle of the Marshes saw an Iraqi defence that had been under continuous strain since 15 February; the Iraqis were relieved by their use of chemical weapons and defence in depth, in which they layered defensive lines: even if the Iranians broke through the first line, they were usually unable to break through the second due to exhaustion and heavy losses. They also relied largely on Mi-24 Hinds to "hunt" the Iranian troops in the marshes, and at least 20,000 Iranians were killed in the marsh battles.
Iran used the marshes as a springboard for future attacks and infiltrations. Four years into the war, the human cost to Iran had been 170,000 combat fatalities and 340,000 wounded. Iraqi combat fatalities were estimated at 80,000, with 150,000 wounded.
Unable to launch successful ground attacks against Iran, Iraq used its now expanded air force to carry out strategic bombing against Iranian shipping, economic targets, and cities in order to damage Iran's economy and morale. Iraq also wanted to provoke Iran into doing something that would cause the superpowers to be directly involved in the conflict on the Iraqi side.
The so-called "Tanker War" started when Iraq attacked the oil terminal and oil tankers at Kharg Island in early 1984. Iraq's aim in attacking Iranian shipping was to provoke the Iranians to retaliate with extreme measures, such as closing the Strait of Hormuz to all maritime traffic, thereby bringing American intervention; the United States had threatened several times to intervene if the Strait of Hormuz were closed. As a result, the Iranians limited their retaliatory attacks to Iraqi shipping, leaving the strait open to general passage.
Iraq declared that all ships going to or from Iranian ports in the northern zone of the Persian Gulf were subject to attack. It used Mirage F-1 and MiG-23 fighters, Super Etendard jets, Su-20/22s, and Super Frelon helicopters armed with Exocet anti-ship missiles, as well as Soviet-made air-to-surface missiles, to enforce its threats. Iraq repeatedly bombed Iran's main oil export facility on Kharg Island, causing increasingly heavy damage. As a first response to these attacks, Iran attacked a Kuwaiti tanker carrying Iraqi oil near Bahrain on 13 May 1984, as well as a Saudi tanker in Saudi waters on 16 May. Because Iraq had become landlocked during the course of the war, it had to rely on its Arab allies, primarily Kuwait, to transport its oil. Iran attacked tankers carrying Iraqi oil from Kuwait, later attacking tankers from any Persian Gulf state supporting Iraq. Attacks on ships of noncombatant nations in the Persian Gulf sharply increased thereafter, with both nations attacking oil tankers and merchant ships of neutral nations in an effort to deprive their opponent of trade. The Iranian attacks against Saudi shipping led to Saudi F-15s shooting down a pair of Iranian F-4 Phantom IIs on 5 June 1984. The air and small-boat attacks, however, did little damage to Persian Gulf state economies, and Iran moved its shipping port to Larak Island in the Strait of Hormuz.
The Iranian Navy imposed a naval blockade of Iraq, using its British-built frigates to stop and inspect any ships thought to be trading with Iraq. The frigates operated with virtual impunity, as Iraqi pilots had little training in hitting naval targets. Some Iranian warships attacked tankers with ship-to-ship missiles, while others used their radars to guide land-based anti-ship missiles to their targets. Iran began to rely on its new Revolutionary Guard navy, which used Boghammar speedboats fitted with rocket launchers and heavy machine guns. These speedboats would launch surprise attacks against tankers and cause substantial damage. Iran also used F-4 Phantom IIs and helicopters to launch Maverick missiles and unguided rockets at tankers.
A U.S. Navy frigate, the USS "Stark", was struck on 17 May 1987 by two Exocet anti-ship missiles fired from an Iraqi Mirage F-1. The missiles had been fired at about the time the plane was given a routine radio warning by "Stark".
The frigate did not detect the missiles with radar, and warning was given by the lookout only moments before they struck. Both missiles hit the ship, and one exploded in crew quarters, killing 37 sailors and wounding 21.
Lloyd's of London, a British insurance market, estimated that the Tanker War damaged 546 commercial vessels and killed about 430 civilian sailors. The largest portion of the attacks was directed by Iraq against vessels in Iranian waters, with the Iraqis launching three times as many attacks as the Iranians. But Iranian speedboat attacks on Kuwaiti shipping led Kuwait to formally petition foreign powers on 1 November 1986 to protect its shipping. The Soviet Union agreed to charter tankers starting in 1987, and the United States Navy offered to provide protection for foreign tankers reflagged and flying the U.S. flag starting 7 March 1987 in Operation Earnest Will. Neutral tankers shipping to Iran were not protected by Earnest Will, and foreign tanker traffic to Iran fell, since it risked Iraqi air attack. Iran accused the United States of helping Iraq. During the course of the war, Iran attacked two Soviet merchant ships. "Seawise Giant", the largest ship ever built, was struck by Iraqi Exocet missiles as it was carrying Iranian crude oil out of the Persian Gulf.
Meanwhile, Iraq's air force also began carrying out strategic bombing raids against Iranian cities. While Iraq had launched numerous attacks with aircraft and missiles against border cities from the beginning of the war, and sporadic raids on Iran's main cities, this was the first systematic strategic bombing that Iraq carried out during the war. It would become known as the "War of the Cities". With the help of the USSR and the West, Iraq's air force had been rebuilt and expanded, while Iran, due to sanctions and a lack of spare parts, had heavily curtailed its own air force operations. Iraq used Tu-22 Blinder and Tu-16 Badger strategic bombers to carry out long-range high-speed raids on Iranian cities, including Tehran. Fighter-bombers such as the MiG-25 Foxbat and Su-22 Fitter were used against smaller or shorter-range targets, as well as to escort the strategic bombers. The raids hit civilian and industrial targets, and each successful raid inflicted further economic damage.
In response, the Iranians deployed their F-4 Phantoms to combat the Iraqis, and eventually deployed F-14s as well. Most of the Iraqi air raids were intercepted by Iranian fighter jets and air defences, but some got through to their targets, becoming a major problem for Iran. By 1986, Iran had also expanded its air defence network heavily to relieve the pressure on its air force. Later in the war, Iraqi raids consisted primarily of indiscriminate missile attacks, while air attacks were reserved for fewer, more important targets. Starting in 1987, Saddam also ordered several chemical attacks on civilian targets in Iran, such as the town of Sardasht.
Iran launched several retaliatory air raids on Iraq, while primarily shelling border cities such as Basra. Iran also bought some Scud missiles from Libya and launched them against Baghdad; these too inflicted damage upon Iraq.
On 7 February 1984, during the first war of the cities, Saddam ordered his air force to attack eleven Iranian cities; bombardments ceased on 22 February 1984.
Though Saddam intended the attacks to demoralise Iran and force it to negotiate, they had little effect, and Iran quickly repaired the damage. Moreover, Iraq's air force took heavy losses, and Iran struck back, hitting Baghdad and other Iraqi cities. The attacks resulted in tens of thousands of civilian casualties on both sides, and became known as the first "war of the cities". It was estimated that 1,200 Iranian civilians were killed during the raids in February alone. There would be five such major exchanges throughout the course of the war, and multiple minor ones. While interior cities such as Tehran, Tabriz, Qom, Isfahan and Shiraz received numerous raids, the cities of western Iran suffered the most.
By 1984, Iran's losses were estimated at 300,000 soldiers, and Iraq's at 150,000. Foreign analysts agreed that both Iran and Iraq failed to use their modern equipment properly, and both sides failed to carry out the modern military assaults that could have won the war. Both sides also abandoned equipment on the battlefield because their technicians were unable to carry out repairs. Iran and Iraq showed little internal coordination on the battlefield, and in many cases units were left to fight on their own. As a result, by the end of 1984, the war was a stalemate.
One limited offensive Iran launched (Dawn 7) took place on 18–25 October 1984, when the Iranians recaptured the Iranian city of Mehran, which had been occupied by the Iraqis from the beginning of the war.
By 1985, Iraq's armed forces were receiving financial support from Saudi Arabia, Kuwait, and other Persian Gulf states, and were making substantial arms purchases from the Soviet Union, China, and France. For the first time since early 1980, Saddam launched new offensives. On 6 January 1986, the Iraqis launched an offensive attempting to retake Majnoon Island. However, they were quickly bogged down into a stalemate against 200,000 Iranian infantrymen, reinforced by amphibious divisions, though they did manage to gain a foothold in the southern part of the island. Iraq also carried out another "war of the cities" between 12 and 14 March, hitting up to 158 targets in over 30 towns and cities, including Tehran. Iran responded by launching, for the first time, 14 Scud missiles purchased from Libya. More Iraqi air attacks were carried out in August, resulting in hundreds of additional civilian casualties. Iraqi attacks against both Iranian and neutral oil tankers in Iranian waters continued, with Iraq carrying out 150 airstrikes using French-bought Super Etendard and Mirage F-1 jets, as well as Super Frelon helicopters armed with Exocet missiles.
The Iraqis attacked again on 28 January 1985; they were defeated, and the Iranians retaliated on 11 March 1985 with a major offensive directed against the Baghdad–Basra highway (one of the few major offensives conducted in 1985), codenamed Operation Badr (after the Battle of Badr, Muhammad's first military victory, over the Meccans). Ayatollah Khomeini urged Iranians on, declaring: It is our belief that Saddam wishes to return Islam to blasphemy and polytheism...if America becomes victorious...and grants victory to Saddam, Islam will receive such a blow that it will not be able to raise its head for a long time...The issue is one of Islam versus blasphemy, and not of Iran versus Iraq.
This operation was similar to Operation Kheibar, though it involved more planning. Iran used 100,000 troops, with 60,000 more in reserve.
They assessed the marshy terrain, plotted points where they could land tanks, and constructed pontoon bridges across the marshes. The Basij forces were also equipped with anti-tank weapons. The ferocity of the Iranian offensive broke through the Iraqi lines. The Revolutionary Guard, with the support of tanks and artillery, broke through north of Qurna on 14 March. That same night, 3,000 Iranian troops reached and crossed the Tigris River using pontoon bridges and captured part of the Baghdad–Basra Highway 6, which they had failed to achieve in Operations Dawn 5 and 6. Saddam responded by launching chemical attacks against the Iranian positions along the highway and by initiating the aforementioned second "war of the cities", with an air and missile campaign against twenty to thirty Iranian population centres, including Tehran. Under General Sultan Hashim Ahmad al-Tai and General Jamal Zanoun (both considered to be among Iraq's most skilled commanders), the Iraqis launched air attacks against the Iranian positions and pinned them down. They then launched a pincer attack using mechanized infantry and heavy artillery. Chemical weapons were used, and the Iraqis also flooded Iranian trenches with specially constructed pipes delivering water from the Tigris River. The Iranians retreated back to the Hoveyzeh marshes while being attacked by helicopters, and the highway was recaptured by the Iraqis. Operation Badr resulted in 10,000–12,000 Iraqi casualties and 15,000 Iranian ones. The failure of the human wave attacks in earlier years had prompted Iran to develop a better working relationship between the Army and the Revolutionary Guard and to mould the Revolutionary Guard units into a more conventional fighting force. To counter Iraq's use of chemical weapons, Iran began producing an antidote. Iran also created and fielded its own homemade drones, the Mohajer 1s, fitted with six RPG-7s to launch attacks. They were used primarily for observation, flying up to 700 sorties. For the rest of 1986, and until the spring of 1988, the Iranian Air Force's efficiency in air defence increased, with weapons being repaired or replaced and new tactical methods being adopted. For example, the Iranians would loosely integrate their SAM sites and interceptors to create "killing fields" in which dozens of Iraqi planes were lost (which was reported in the West as the Iranian Air Force using F-14s as "mini-AWACS"). The Iraqi Air Force reacted by increasing the sophistication of its equipment, incorporating modern electronic countermeasure pods, decoys such as chaff and flares, and anti-radiation missiles. Due to the heavy losses in the last war of the cities, Iraq reduced its use of aerial attacks on Iranian cities, instead launching Scud missiles, which the Iranians could not stop. Since the range of the Scud missile was too short to reach Tehran, the Iraqis converted their Scuds into al-Hussein missiles with the help of East German engineers, cutting the missiles into sections and joining them to extend their range. Iran responded to these attacks with its own Scud missiles. On top of the extensive foreign help to Iraq, Iranian attacks were severely hampered by shortages of weaponry, particularly heavy weapons, as large amounts had been lost during the war. Iran still managed to maintain 1,000 tanks (often by capturing Iraqi ones) and additional artillery, but many needed repairs to become operational.
However, by this time Iran managed to procure spare parts from various sources, helping it to restore some weapons. It secretly imported some weapons, such as RBS-70 anti-aircraft MANPADS. In an exception to the United States' support for Iraq, in exchange for Iran using its influence to help free Western hostages in Lebanon, the United States secretly sold Iran some limited supplies (in a postwar interview, Ayatollah Rafsanjani stated that during the period when Iran was succeeding, the United States briefly supported Iran, then shortly afterwards began helping Iraq again). Iran managed to obtain some advanced weapons, such as anti-tank TOW missiles, which worked better than rocket-propelled grenades. Iran later reverse-engineered and produced those weapons itself. All of this almost certainly increased Iran's effectiveness, although it did not reduce the human cost of its attacks. On the night of 10–11 February 1986, the Iranians launched Operation Dawn 8, in which 30,000 troops comprising five Army divisions and men from the Revolutionary Guard and Basij advanced in a two-pronged offensive to capture the al-Faw peninsula in southern Iraq, the only area touching the Persian Gulf. The capture of al-Faw and Umm Qasr was a major goal for Iran. Iran began with a feint attack against Basra, which was stopped by the Iraqis; meanwhile, an amphibious strike force landed at the foot of the peninsula. The resistance, consisting of several thousand poorly trained soldiers of the Iraqi Popular Army, fled or was defeated, and the Iranian forces set up pontoon bridges across the Shatt al-Arab, allowing 30,000 soldiers to cross in a short period of time. They drove north along the peninsula almost unopposed, capturing it after only 24 hours of fighting. Afterwards they dug in and set up defenses. The sudden capture of al-Faw shocked the Iraqis, since they had thought it impossible for the Iranians to cross the Shatt al-Arab. On 12 February 1986, the Iraqis began a counter-offensive to retake al-Faw, which failed after a week of heavy fighting. On 24 February 1986, Saddam sent one of his best commanders, General Maher Abd al-Rashid, and the Republican Guard to begin a new offensive to recapture al-Faw. A new round of heavy fighting took place, but their attempts again ended in failure, costing them many tanks and aircraft: their 15th mechanised division was almost completely wiped out. The capture of al-Faw and the failure of the Iraqi counter-offensives were blows to the Ba'ath regime's prestige, and led the Gulf countries to fear that Iran might win the war. Kuwait in particular felt menaced, with Iranian troops only a short distance away, and increased its support of Iraq accordingly. In March 1986, the Iranians tried to follow up their success by attempting to take Umm Qasr, which would have completely severed Iraq from the Gulf and placed Iranian troops on the border with Kuwait. However, the offensive failed due to Iranian shortages of armor. By this time, 17,000 Iraqis and 30,000 Iranians had become casualties. The First Battle of al-Faw ended in March, but heavy combat operations continued on the peninsula into 1988, with neither side being able to displace the other. The battle bogged down into a World War I-style stalemate in the marshes of the peninsula. Immediately after the Iranian capture of al-Faw, Saddam declared a new offensive against Iran, designed to drive deep into Iranian territory.
The Iranian border city of Mehran, at the foot of the Zagros Mountains, was selected as the first target. On 15–19 May, the Iraqi Army's Second Corps, supported by helicopter gunships, attacked and captured the city. Saddam then offered to exchange Mehran for al-Faw. The Iranians rejected the offer. Iraq then continued the attack, attempting to push deeper into Iran. However, Iraq's attack was quickly warded off by Iranian AH-1 Cobra helicopters with TOW missiles, which destroyed numerous Iraqi tanks and vehicles. The Iranians built up their forces on the heights surrounding Mehran. On 30 June, using mountain warfare tactics, they launched their attack, recapturing the city by 3 July. Saddam ordered the Republican Guard to retake the city on 4 July, but its attack was ineffective. Iraqi losses were heavy enough to allow the Iranians to also capture territory inside Iraq, and depleted the Iraqi military enough to prevent it from launching a major offensive for the next two years. Iraq's defeats at al-Faw and at Mehran were severe blows to the prestige of the Iraqi regime, and Western powers, including the US, became more determined to prevent an Iraqi loss. In the eyes of international observers, Iran was prevailing in the war by the end of 1986. On the northern front, the Iranians began launching attacks toward the city of Suleimaniya with the help of Kurdish fighters, taking the Iraqis by surprise. They came close to the city before being stopped by chemical and army attacks. Iran's army had also reached the Meimak Hills, not far from Baghdad. Iraq managed to contain Iran's offensives in the south, but was under serious pressure, as the Iranians were slowly overwhelming them. Iraq responded by launching another "war of the cities". In one attack, Tehran's main oil refinery was hit, and in another instance, Iraq damaged Iran's Assadabad satellite dish, disrupting Iranian overseas telephone and telex service for almost two weeks. Civilian areas were also hit, resulting in many casualties. Iraq continued to attack oil tankers from the air. Iran responded by launching Scud missiles and air attacks at Iraqi targets. Iraq continued to attack Kharg Island and the oil tankers and facilities as well. Iran created a tanker shuttle service of 20 tankers to move oil from Kharg to Larak Island, escorted by Iranian fighter jets. Once moved to Larak, the oil would be transferred to oceangoing tankers (usually neutral). Iran also rebuilt the oil terminals damaged by Iraqi air raids and moved shipping to Larak Island, while attacking foreign tankers that carried Iraqi oil (as Iran had blocked Iraq's access to the open sea with the capture of al-Faw). By now the Iranians almost always used the armed speedboats of the IRGC navy, and attacked many tankers. The tanker war escalated drastically, with attacks nearly doubling in 1986 (the majority carried out by Iraq). Iraq obtained permission from the Saudi government to use its airspace to attack Larak Island, although, due to the distance, attacks there were less frequent. The escalating tanker war in the Gulf became an ever-increasing concern to foreign powers, especially the United States. In April 1986, Ayatollah Khomeini issued a fatwa declaring that the war must be won by March 1987. The Iranians increased recruitment efforts, obtaining 650,000 volunteers. The animosity between the Army and the Revolutionary Guard arose again, with the Army wanting to use more refined, limited military attacks while the Revolutionary Guard wanted to carry out major offensives.
Iran, confident in its successes, began planning its largest offensives of the war, which it called its "final offensives". Faced with the recent defeats in al-Faw and Mehran, Iraq appeared to be losing the war. Iraq's generals, angered by Saddam's interference, threatened a full-scale mutiny against the Ba'ath Party unless they were allowed to conduct operations freely. In one of the few times during his career, Saddam gave in to the demands of his generals. Up to this point, Iraqi strategy had been to ride out Iranian attacks. However, the defeat at al-Faw led Saddam to declare the war to be "Al-Defa al-Mutaharakha" (The Dynamic Defense) and to announce that all civilians had to take part in the war effort. The universities were closed and all of the male students were drafted into the military. Civilians were instructed to clear marshlands to prevent Iranian amphibious infiltrations and to help build fixed defenses. The government tried to integrate the Shias into the war effort by recruiting many into the Ba'ath Party. In an attempt to counterbalance the religious fervor of the Iranians and gain support from the devout masses, the regime also began to promote religion and, on the surface, Islamization, despite the fact that Iraq was run by a secular government. Scenes of Saddam praying and making pilgrimages to shrines became common on state-run television. While Iraqi morale had been low throughout the war, the attack on al-Faw raised patriotic fervor, as the Iraqis feared invasion. Saddam also recruited volunteers from other Arab countries into the Republican Guard, and received much technical support from foreign nations as well. While Iraqi military power had been depleted in recent battles, through heavy foreign purchases and support Iraq was able to expand its military to much larger proportions by 1988. At the same time, Saddam ordered the genocidal al-Anfal Campaign in an attempt to crush the Kurdish resistance, which was now allied with Iran. The result was the deaths of several hundred thousand Iraqi Kurds and the destruction of villages, towns, and cities. Iraq began to try to perfect its maneuver tactics and to prioritize the professionalization of its military. Prior to 1986, the conscription-based Iraqi regular army and the volunteer-based Iraqi Popular Army had conducted the bulk of the operations in the war, to little effect. The Republican Guard, formerly an elite praetorian guard, was expanded as a volunteer army and filled with Iraq's best generals. Loyalty to the state was no longer a primary requisite for joining. However, due to Saddam's paranoia, the former duties of the Republican Guard were transferred to a new unit, the Special Republican Guard. Full-scale war games against hypothetical Iranian positions were carried out in the western Iraqi desert against mock targets, and they were repeated over the course of a full year until the forces involved had fully memorized their attacks. Iraq built up its military massively, eventually possessing the fourth-largest military in the world, in order to overwhelm the Iranians through sheer size. Meanwhile, Iran continued to attack while the Iraqis were planning their strike. In 1987, the Iranians renewed a series of major human wave offensives in both northern and southern Iraq. The Iraqis had elaborately fortified Basra with five defensive rings, exploiting natural waterways such as the Shatt al-Arab and artificial ones, such as "Fish Lake" and the Jasim River, along with earth barriers.
Fish Lake was a massive lake filled with mines, underwater barbed wire, electrodes and sensors. Behind each waterway and defensive line were radar-guided artillery, ground attack aircraft and helicopters, all capable of firing poison gas or conventional munitions. The Iranian strategy was to penetrate the Iraqi defences and encircle Basra, cutting off the city as well as the al-Faw peninsula from the rest of Iraq. Iran's plan involved three assaults: a diversionary attack near Basra, the main offensive, and another diversionary attack using Iranian tanks in the north to divert Iraqi heavy armor from Basra. For these battles, Iran had re-expanded its military by recruiting many new Basij and Pasdaran volunteers, bringing 150,000–200,000 total troops into the battles. On 25 December 1986, Iran launched Operation Karbala-4 ("Karbala" referring to Hussein ibn Ali's Battle of Karbala). According to Iraqi General Ra'ad al-Hamdani, this was a diversionary attack. The Iranians launched an amphibious assault against the Iraqi island of Umm al-Rassas in the Shatt al-Arab river, parallel to Khorramshahr. They then set up a pontoon bridge and continued the attack, eventually capturing the island in a costly success but failing to advance further; the Iranians suffered 60,000 casualties, the Iraqis 9,500. The Iraqi commanders exaggerated Iranian losses to Saddam, and it was assumed that the main Iranian attack on Basra had been fully defeated and that it would take the Iranians six months to recover. When the main Iranian attack, Operation Karbala-5, began, many Iraqi troops were on leave. The Siege of Basra, code-named Operation Karbala-5, was an offensive operation carried out by Iran in an effort to capture the Iraqi port city of Basra in early 1987. This battle, known for its extensive casualties and ferocious conditions, was the biggest battle of the war and proved to be the beginning of the end of the Iran–Iraq War. While Iranian forces crossed the border and captured the eastern section of Basra Governorate, the operation ended in a stalemate. At the same time as Operation Karbala-5, Iran also launched Operation Karbala-6 against the Iraqis at Qasr-e Shirin on the central front to prevent the Iraqis from rapidly transferring units down to defend against the Karbala-5 attack. The attack was carried out by Basij infantry and the Revolutionary Guard's 31st "Ashura" and the Army's 77th "Khorasan" armored divisions. The Basij attacked the Iraqi lines, forcing the Iraqi infantry to retreat. An Iraqi armored counter-attack surrounded the Basij in a pincer movement, but the Iranian tank divisions attacked, breaking the encirclement. The Iranian attack was finally stopped by mass Iraqi chemical weapons attacks. Operation Karbala-5 was a severe blow to Iran's military and morale. To foreign observers, however, it appeared that Iran was continuing to strengthen. By 1988, Iran had become self-sufficient in many areas, such as anti-tank TOW missiles, Scud ballistic missiles (Shahab-1), Silkworm anti-ship missiles, and Oghab tactical rockets, and was producing spare parts for its weaponry. Iran had also improved its air defenses with smuggled surface-to-air missiles, and was even producing UAVs and the Pilatus PC-7 propeller aircraft for observation. Iran also doubled its stocks of artillery, and was self-sufficient in the manufacture of ammunition and small arms.
While it was not obvious to foreign observers, the Iranian public had become increasingly war-weary and disillusioned with the fighting, and relatively few volunteers joined the fight in 1987–88. Because the Iranian war effort relied on popular mobilization, its military strength actually declined, and Iran was unable to launch any major offensives after Karbala-5. As a result, for the first time since 1982, the momentum of the fighting shifted towards the regular army. Since the regular army was conscription-based, this shift made the war even less popular. Many Iranians began to try to escape the conflict. As early as May 1985, anti-war demonstrations had taken place in 74 cities throughout Iran; these were crushed by the regime, with some protesters shot and killed. By 1987, draft-dodging had become a serious problem, and the Revolutionary Guards and police set up roadblocks throughout cities to capture those who tried to evade conscription. Others, particularly the more nationalistic and religious, the clergy, and the Revolutionary Guards, wished to continue the war. The leadership acknowledged that the war was a stalemate, and began to plan accordingly. No more "final offensives" were planned. The head of the Supreme Defense Council, Hashemi Rafsanjani, announced during a news conference the end of human wave attacks. Mohsen Rezaee, head of the IRGC, announced that Iran would focus exclusively on limited attacks and infiltrations, while arming and supporting opposition groups inside Iraq. On the Iranian home front, sanctions, declining oil prices, and Iraqi attacks on Iranian oil facilities and shipping took a heavy toll on the economy. While the attacks themselves were not as destructive as some analysts believed, the U.S.-led Operation Earnest Will (which protected Iraqi and allied oil tankers, but not Iranian ones) led many neutral countries to stop trading with Iran because of rising insurance costs and fear of air attack. Iranian oil and non-oil exports fell by 55%, inflation reached 50% by 1987, and unemployment skyrocketed. At the same time, Iraq was experiencing crushing debt and shortages of workers, encouraging its leadership to try to end the war quickly. By the end of 1987, Iraq possessed 5,550 tanks (outnumbering the Iranians six to one) and 900 fighter aircraft (outnumbering the Iranians ten to one). After Operation Karbala-5, Iraq had only 100 qualified fighter pilots remaining; it therefore began to recruit foreign pilots from countries such as Belgium, South Africa, Pakistan, East Germany and the Soviet Union. Iraq replenished its manpower by integrating volunteers from other Arab countries into its army. It also became self-sufficient in chemical weapons and some conventional ones, and received much equipment from abroad. Foreign support helped Iraq bypass its economic troubles and massive debt to continue the war and increase the size of its military. While the southern and central fronts were at a stalemate, Iran began to focus on carrying out offensives in northern Iraq with the help of the Peshmerga (Kurdish insurgents), using a combination of semi-guerrilla and infiltration tactics in the Kurdish mountains. During Operation Karbala-9 in early April, Iran captured territory near Suleimaniya, provoking a severe poison gas counter-attack. During Operation Karbala-10, Iran attacked near the same area, capturing more territory.
During Operation Nasr-4, the Iranians surrounded the city of Suleimaniya and, with the help of the Peshmerga, infiltrated over 140 km into Iraq, raiding and threatening to capture the oil-rich city of Kirkuk and other northern oilfields. Nasr-4 was considered to be Iran's most successful individual operation of the war, but Iranian forces were unable to consolidate their gains and continue their advance; while these offensives, coupled with the Kurdish uprising, sapped Iraqi strength, losses in the north would not mean a catastrophic failure for Iraq. On 20 July, the UN Security Council passed the U.S.-sponsored Resolution 598, which called for an end to the fighting and a return to pre-war boundaries. Iran noted that it was the first resolution to call for a return to the pre-war borders and to set up a commission to determine the aggressor and compensation. With the stalemate on land, the air and tanker war began to play an increasingly major role in the conflict. The Iranian air force had become very small, with only 20 F-4 Phantoms, 20 F-5 Tigers, and 15 F-14 Tomcats in operation, although Iran managed to restore some damaged planes to service. The Iranian Air Force, despite its once sophisticated equipment, lacked enough equipment and personnel to sustain the war of attrition that had developed, and was unable to mount an outright onslaught against Iraq. The Iraqi Air Force had originally lacked modern equipment and experienced pilots, but after pleas from Iraqi military leaders, Saddam decreased political influence on everyday operations and left the fighting to his commanders. The Soviets began delivering more advanced aircraft and weapons to Iraq, while the French improved training for flight crews and technical personnel and continually introduced new methods for countering Iranian weapons and tactics. Iranian ground-based air defences still shot down many Iraqi aircraft. The main Iraqi air effort had shifted to the destruction of Iranian war-fighting capability (primarily Persian Gulf oil fields, tankers, and Kharg Island), and starting in late 1986, the Iraqi Air Force began a comprehensive campaign against the Iranian economic infrastructure. By late 1987, the Iraqi Air Force could count on direct American support for conducting long-range operations against Iranian infrastructural targets and oil installations deep in the Persian Gulf, with U.S. Navy ships tracking and reporting the movements of Iranian shipping and defences. In the massive Iraqi air strike against Kharg Island, flown on 18 March 1988, the Iraqis destroyed two supertankers but lost five aircraft to Iranian F-14 Tomcats, including two Tupolev Tu-22Bs and one Mikoyan MiG-25RB. The U.S. Navy was now becoming more involved in the fighting in the Persian Gulf, launching Operations Earnest Will and Prime Chance against the Iranians. The attacks on oil tankers continued, with both Iran and Iraq carrying out frequent attacks during the first four months of the year. Iran was effectively waging a naval guerrilla war with its IRGC navy speedboats, while Iraq attacked with its aircraft. In 1987, Kuwait asked the United States to reflag its tankers. This was done in March, and the U.S. Navy began Operation Earnest Will to escort the tankers.
The result of Earnest Will was that, while oil tankers shipping Iraqi and Kuwaiti oil were protected, Iranian tankers and neutral tankers shipping to Iran were not, leading to losses for Iran and undermining its trade with foreign countries, which damaged Iran's economy further. Iran deployed Silkworm missiles to attack ships, but only a few were actually fired. Both the United States and Iran jockeyed for influence in the Gulf. To discourage the United States from escorting tankers, Iran secretly mined some areas of the Gulf. The United States began to escort the reflagged tankers, but one was damaged by a mine while under escort. While this was a public-relations victory for Iran, the United States increased its reflagging efforts. While Iran mined the Persian Gulf, its speedboat attacks were reduced, primarily targeting unflagged tankers shipping in the area. On 24 September, U.S. Navy SEALs captured the Iranian mine-laying ship "Iran Ajr", a diplomatic disaster for the already isolated Iranians. On 8 October, the U.S. Navy destroyed four Iranian speedboats and, in response to Iranian Silkworm missile attacks on Kuwaiti oil tankers, launched Operation Nimble Archer, destroying two Iranian oil rigs in the Persian Gulf. During November and December, the Iraqi air force launched a bid to destroy all Iranian airbases in Khuzestan and the remaining Iranian air force. Iran managed to shoot down 30 Iraqi fighters with fighter jets, anti-aircraft guns, and missiles, allowing the Iranian air force to survive to the end of the war. On 28 June, Iraqi fighter-bombers attacked the Iranian town of Sardasht near the border, using chemical mustard gas bombs. While many towns and cities had been bombed before, and troops attacked with gas, this was the first time that the Iraqis had attacked a civilian area with poison gas. One quarter of the town's then-population of 20,000 was burned and stricken, and 113 people were killed immediately, with many more dying or suffering lasting health effects over the following decades. Saddam ordered the attack in order to test the effects of the newly developed "dusty mustard" gas, which was designed to be even more crippling than traditional mustard gas. While little known outside of Iran (unlike the later Halabja chemical attack), the Sardasht bombing (and future similar attacks) had a tremendous effect on the Iranian people's psyche. By 1988, with massive equipment imports and reduced Iranian volunteers, Iraq was ready to launch major offensives against Iran. In February 1988, Saddam began the fifth and most deadly "war of the cities". Over the next two months, Iraq launched over 200 al-Hussein missiles at 37 Iranian cities. Saddam also threatened to use chemical weapons in his missiles, which caused 30% of Tehran's population to leave the city. Iran retaliated, launching at least 104 missiles against Iraq in 1988 and shelling Basra. This exchange was nicknamed the "Scud Duel" in the foreign media. In all, Iraq launched 520 Scuds and al-Husseins against Iran, and Iran fired 177 in return. The Iranian attacks were too few in number to deter Iraq from launching its own. Iraq also increased its airstrikes against Kharg Island and Iranian oil tankers. With the tankers carrying its oil protected by U.S. warships, Iraq could operate with virtual impunity. In addition, the West supplied Iraq's air force with laser-guided smart bombs, allowing it to attack economic targets while evading anti-aircraft defenses.
These attacks began to take a major toll on the Iranian economy and morale, and caused many casualties. In March 1988, the Iranians carried out Operation Dawn 10, Operation Beit ol-Moqaddas 2, and Operation Zafar 7 in Iraqi Kurdistan with the aim of capturing the Darbandikhan Dam and the power plant at Lake Dukan, which supplied Iraq with much of its electricity and water, as well as the city of Suleimaniya. Iran hoped that the capture of these areas would bring more favorable terms to the ceasefire agreement. This infiltration offensive was carried out in conjunction with the Peshmerga. Iranian airborne commandos landed behind the Iraqi lines, and Iranian helicopters hit Iraqi tanks with TOW missiles. The Iraqis were taken by surprise, and Iranian F-5E Tiger fighter jets even damaged the Kirkuk oil refinery. Iraq carried out executions of multiple officers for these failures in March–April 1988, including Colonel Jafar Sadeq. The Iranians used infiltration tactics in the Kurdish mountains, captured the town of Halabja and began to fan out across the province. Though the Iranians advanced to within sight of Dukan and captured territory and 4,000 Iraqi troops, the offensive failed due to the Iraqi use of chemical warfare. The Iraqis launched the deadliest chemical weapons attacks of the war. The Republican Guard launched 700 chemical shells, while the other artillery divisions launched 200–300 chemical shells each, unleashing a chemical cloud over the Iranians and killing or wounding 60% of them; the blow was felt particularly by the Iranian 84th infantry division and 55th paratrooper division. The Iraqi special forces then stopped the remains of the Iranian force. In retaliation for Kurdish collaboration with the Iranians, Iraq launched a massive poison gas attack against Kurdish civilians in Halabja, recently taken by the Iranians, killing thousands of civilians. Iran airlifted foreign journalists to the ruined city, and the images of the dead were shown throughout the world, but Western mistrust of Iran and collaboration with Iraq led Western governments to also blame Iran for the attack. On 17 April 1988, Iraq launched Operation Ramadan Mubarak (Blessed Ramadan), a surprise attack against the 15,000 Basij troops on the al-Faw peninsula. The attack was preceded by Iraqi diversionary attacks in northern Iraq, with a massive artillery and air barrage of the Iranian front lines. Key areas, such as supply lines, command posts, and ammunition depots, were hit by a storm of mustard gas and nerve gas, as well as by conventional explosives. Helicopters landed Iraqi commandos behind the Iranian lines on al-Faw while the main Iraqi force made a frontal assault. Within 48 hours, all of the Iranian forces had been killed or cleared from the al-Faw Peninsula. The day was celebrated in Iraq as Faw Liberation Day throughout Saddam's rule. The Iraqis had planned the offensive well. Prior to the attack, the Iraqi soldiers took poison gas antidotes to shield themselves from the effects of the saturation of gas. The heavy and well-executed use of chemical weapons was the decisive factor in the victory. Iraqi losses were relatively light, especially compared to Iran's casualties. The Iranians eventually managed to halt the Iraqi drive as it pushed towards Khuzestan. To the shock of the Iranians, rather than breaking off the offensive, the Iraqis kept up their drive, and a new force attacked the Iranian positions around Basra.
Following this, the Iraqis launched a sustained drive to clear the Iranians out of all of southern Iraq. One of the most successful Iraqi tactics was the "one-two punch" attack using chemical weapons. Using artillery, they would saturate the Iranian front line with rapidly dispersing cyanide and nerve gas, while longer-lasting mustard gas was launched via fighter-bombers and rockets against the Iranian rear, creating a "chemical wall" that blocked reinforcement. On the same day as Iraq's attack on the al-Faw peninsula, the United States Navy launched Operation Praying Mantis in retaliation against Iran for damaging a warship with a mine. Iran lost oil platforms, destroyers, and frigates in this battle, which ended only when President Reagan decided that the Iranian navy had been damaged enough. In spite of this, the Revolutionary Guard Navy continued its speedboat attacks against oil tankers. The defeats at al-Faw and in the Persian Gulf nudged the Iranian leadership towards quitting the war, especially when facing the prospect of fighting the Americans. Faced with such losses, Khomeini appointed the cleric Hashemi Rafsanjani as the Supreme Commander of the Armed Forces, though Rafsanjani had in actuality occupied that position for months. Rafsanjani ordered a last desperate counter-attack into Iraq, which was launched on 13 June 1988. The Iranians infiltrated the Iraqi trenches, pushed into Iraq, and managed to strike Saddam's presidential palace in Baghdad using fighter aircraft. After three days of fighting, the decimated Iranians were driven back to their original positions as the Iraqis launched 650 helicopter and 300 aircraft sorties. On 18 June, Iraq launched Operation Forty Stars ("chehel cheragh") in conjunction with the Mujahideen-e-Khalq (MEK) around Mehran. With 530 aircraft sorties and heavy use of nerve gas, they crushed the Iranian forces in the area, killing 3,500 and nearly destroying a Revolutionary Guard division. Mehran was captured once again and occupied by the MEK. Iraq also launched air raids on Iranian population centers and economic targets, setting 10 oil installations on fire. On 25 May 1988, Iraq launched the first of five Tawakalna ala Allah operations, consisting of one of the largest artillery barrages in history, coupled with chemical weapons. The marshes had been dried by drought, allowing the Iraqis to use tanks to bypass Iranian field fortifications and expel the Iranians from the border town of Shalamcheh after less than 10 hours of combat. On 25 June, Iraq launched the second Tawakalna ala Allah operation against the Iranians on Majnoon Island. Iraqi commandos used amphibious craft to block the Iranian rear, then used hundreds of tanks with massed conventional and chemical artillery barrages to recapture the island after 8 hours of combat. Saddam appeared live on Iraqi television to "lead" the charge against the Iranians. The majority of the Iranian defenders were killed during the quick assault. The final two Tawakalna ala Allah operations took place near al-Amarah and Khaneqan. By 12 July, the Iraqis had captured the city of Dehloran, inside Iran, along with 2,500 troops and much armour and materiel, which took four days to transport to Iraq. These losses included more than 570 of the 1,000 remaining Iranian tanks, over 430 armored vehicles, 45 self-propelled artillery pieces, 300 towed artillery pieces, and 320 anti-aircraft guns. These figures only included what Iraq could actually put to use; the total amount of captured materiel was higher.
Since March, the Iraqis claimed to have captured 1,298 tanks, 155 infantry fighting vehicles, 512 heavy artillery pieces, 6,196 mortars, 5,550 recoilless rifles and light guns, 8,050 man-portable rocket launchers, 60,694 rifles, 322 pistols, 454 trucks, and 1,600 light vehicles. The Iraqis withdrew from Dehloran soon after, claiming that they had "no desire to conquer Iranian territory". History professor Kaveh Farrokh considered this to be Iran's greatest military disaster during the war. Stephen Pelletiere, a journalist, author, and Middle East expert, noted that "Tawakal ala Allah ... resulted in the absolute destruction of Iran's military machine." During the 1988 battles, the Iranians put up little resistance, having been worn out by nearly eight years of war. They lost large amounts of equipment but managed to rescue most of their troops from being captured, leaving Iraq with relatively few prisoners. On 2 July, Iran belatedly set up a joint central command which unified the Revolutionary Guard, Army, and Kurdish rebels, and dispelled the rivalry between the Army and the Revolutionary Guard. However, this came too late, and, following the capture of 570 of their operable tanks and the destruction of hundreds more, Iran was believed to have fewer than 200 operable tanks remaining on the southern front, against thousands of Iraqi ones. The only area where the Iranians were not suffering major defeats was Kurdistan. Saddam sent a warning to Khomeini in mid-1988, threatening to launch a new and powerful full-scale invasion and to attack Iranian cities with weapons of mass destruction. Shortly afterwards, Iraqi aircraft bombed the Iranian town of Oshnavieh with poison gas, immediately killing and wounding over 2,000 civilians. The fear of an all-out chemical attack against Iran's largely unprotected civilian population weighed heavily on the Iranian leadership, which realized that the international community had no intention of restraining Iraq. The lives of the civilian population of Iran were becoming severely disrupted, with a third of the urban population evacuating major cities in fear of the seemingly imminent chemical war. Meanwhile, Iraqi conventional bombs and missiles continuously hit towns and cities, destroying vital civilian and military infrastructure and increasing the death toll. Iran replied with missile and air attacks, but not sufficiently to deter the Iraqis. Faced with the threat of a new and even more powerful invasion, Commander-in-Chief Rafsanjani ordered the Iranians to retreat from Haj Omran, in Kurdistan, on 14 July. The Iranians did not publicly describe this as a retreat, instead calling it a "temporary withdrawal". By July, Iran's army inside Iraq (except in Kurdistan) had largely disintegrated. Iraq put up a massive display of captured Iranian weapons in Baghdad, claiming to have captured 1,298 tanks, 5,550 recoilless rifles, and thousands of other weapons. However, Iraq had taken heavy losses as well, and the battles had been very costly. In July 1988, Iraqi aircraft dropped bombs on the Iranian Kurdish village of Zardan. Dozens of villages, such as Sardasht, and some larger towns, such as Marivan, Baneh and Saqqez, were once again attacked with poison gas, resulting in even heavier civilian casualties. On 3 July 1988, the USS "Vincennes" shot down Iran Air Flight 655, killing 290 passengers and crew.
The lack of international sympathy disturbed the Iranian leadership, which concluded that the United States was on the verge of waging a full-scale war against Iran, and that Iraq was on the verge of unleashing its entire chemical arsenal upon Iranian cities. At this point, elements of the Iranian leadership, led by Rafsanjani (who had initially pushed for the extension of the war), persuaded Khomeini to accept a ceasefire. They stated that in order to win the war, Iran's military budget would have to be increased eightfold and the war would last until 1993. On 20 July 1988, Iran accepted Resolution 598, showing its willingness to accept a ceasefire. A statement from Khomeini was read out in a radio address, in which he expressed deep displeasure and reluctance about accepting the ceasefire: Happy are those who have departed through martyrdom. Happy are those who have lost their lives in this convoy of light. Unhappy am I that I still survive and have drunk the poisoned chalice... The news of the end of the war was greeted with celebration in Baghdad, with people dancing in the streets; in Tehran, however, the end of the war was greeted with a somber mood. Operation Mersad ("ambush") was the last big military operation of the war. Both Iran and Iraq had accepted Resolution 598, but despite the ceasefire, after seeing Iraqi victories in the previous months, the Mujahideen-e-Khalq (MEK) decided to launch an attack of its own, wishing to advance all the way to Tehran. Saddam and the Iraqi high command decided on a two-pronged offensive across the border into central Iran and Iranian Kurdistan. Shortly after Iran accepted the ceasefire, the MEK army began its offensive, attacking into Ilam province under cover of Iraqi air power. In the north, Iraq also launched an attack into Iranian Kurdistan, which was blunted by the Iranians. On 26 July 1988, the MEK started their campaign in central Iran, Operation Forough Javidan (Eternal Light), with the support of the Iraqi army. The Iranians had withdrawn their remaining soldiers to Khuzestan in fear of a new Iraqi invasion attempt, allowing the Mujahideen to advance rapidly towards Kermanshah, seizing Qasr-e Shirin, Sarpol-e Zahab, Kerend-e Gharb, and Islamabad-e Gharb. The MEK expected the Iranian population to rise up and support their advance; the uprising never materialised, but they reached deep into Iran. In response, the Iranian military launched its counter-attack, Operation Mersad, under Lieutenant General Ali Sayyad Shirazi. Iranian paratroopers landed behind the MEK lines while the Iranian Air Force and helicopters launched an air attack, destroying much of the enemy's columns. The Iranians defeated the MEK in the city of Kerend-e Gharb on 29 July 1988. On 31 July, Iran drove the MEK out of Qasr-e Shirin and Sarpol-e Zahab, though the MEK claimed to have "voluntarily withdrawn" from the towns. Iran estimated that 4,500 MEK fighters were killed, while 400 Iranian soldiers died. The last notable combat actions of the war took place on 3 August 1988 in the Persian Gulf, when the Iranian navy fired on a freighter and Iraq launched chemical attacks on Iranian civilians, killing an unknown number of them and wounding 2,300. Iraq came under international pressure to curtail further offensives. Resolution 598 became effective on 8 August 1988, ending all combat operations between the two countries. By 20 August 1988, peace with Iran had been restored. UN peacekeepers belonging to the UNIIMOG mission took the field, remaining on the Iran–Iraq border until 1991.
The majority of Western analysts believe that the war had no winners, while some believe that Iraq emerged as the victor, based on Iraq's overwhelming successes between April and July 1988. While the war was now over, Iraq spent the rest of August and early September clearing the Kurdish resistance. Using 60,000 troops along with helicopter gunships, chemical weapons (poison gas), and mass executions, Iraq hit 15 villages, killing rebels and civilians, and forced tens of thousands of Kurds to relocate to settlements. Many Kurdish civilians fled to Iran. By 3 September 1988, the anti-Kurd campaign had ended, and all resistance had been crushed. Some 400 Iraqi soldiers and 50,000–100,000 Kurdish civilians and soldiers had been killed. At the war's conclusion, it took several weeks for the Armed Forces of the Islamic Republic of Iran to evacuate Iraqi territory and honor the pre-war international borders set by the 1975 Algiers Agreement. The last prisoners of war were exchanged in 2003. The Security Council did not identify Iraq as the aggressor of the war until 11 December 1991, some 11 years after Iraq invaded Iran and 16 months following Iraq's invasion of Kuwait. The Iran–Iraq War was the deadliest conventional war ever fought between regular armies of developing countries. Iraqi casualties are estimated at 105,000–200,000 killed, while about 400,000 were wounded and some 70,000 taken prisoner. Thousands of civilians on both sides died in air raids and ballistic missile attacks. Prisoners taken by both countries began to be released in 1990, though some were not released until more than 10 years after the end of the conflict. Cities on both sides had also been considerably damaged. While revolutionary Iran had been bloodied, Iraq was left with a large military and remained a regional power, albeit with severe debt, financial problems, and labor shortages. According to Iranian government sources, the war cost Iran an estimated 200,000–220,000 killed, or up to 262,000 according to conservative Western estimates. This includes 123,220 combatants, 60,711 missing in action, and 11,000–16,000 civilians. The combatants include 79,664 members of the Revolutionary Guard Corps and an additional 35,170 soldiers from the regular military. In addition, 42,875 Iranian prisoners of war are counted among the casualties; they were captured and kept in Iraqi detention centers for between 2.5 and more than 15 years after the war was over. According to the Janbazan Affairs Organization, 398,587 Iranians sustained injuries that required prolonged medical and health care following primary treatment, including 52,195 (13%) injured by exposure to chemical warfare agents. From 1980 to 2012, 218,867 Iranians died of war injuries, and the mean age of combatants was 23. This figure includes 33,430 civilians, mostly women and children. More than 144,000 Iranian children were orphaned as a consequence of these deaths. Other estimates put Iranian casualties as high as 600,000. Both Iraq and Iran manipulated loss figures to suit their purposes. At the same time, Western analysts accepted improbable estimates. By April 1988, casualties were estimated at between 150,000 and 340,000 Iraqi dead and 450,000 to 730,000 Iranian dead. Shortly after the end of the war, it was thought that Iran had suffered even more than a million dead. Considering the style of fighting on the ground and the fact that neither side penetrated deeply into the other's territory, USMC analysts believe that events do not substantiate the high casualties claimed.
The Iraqi government claimed that 800,000 Iranians had been killed in the conflict, four times the official Iranian figure. Iraqi losses were also revised downwards over time. With the ceasefire in place and UN peacekeepers monitoring the border, Iran and Iraq sent their representatives to Geneva, Switzerland, to negotiate a peace agreement on the terms of the ceasefire. However, peace talks stalled. Iraq, in violation of the UN ceasefire, refused to withdraw its troops from the disputed territory in the border area unless the Iranians accepted Iraq's full sovereignty over the Shatt al-Arab waterway. Foreign powers continued to support Iraq, which wanted to gain at the negotiating table what it had failed to achieve on the battlefield, and Iran was portrayed as the party not wanting peace. In response, Iran refused to release the 70,000 Iraqi prisoners of war it held (compared to 40,000 Iranian prisoners of war held by Iraq). Iran also continued to carry out a naval blockade of Iraq, although its effects were mitigated by Iraqi use of ports in friendly neighbouring Arab countries. Iran also began to improve relations with many of the states that had opposed it during the war. Because of these Iranian actions, by 1990 Saddam had become more conciliatory, and in a letter to Rafsanjani, by then Iran's president, he became more open to the idea of a peace agreement, although he still insisted on full sovereignty over the Shatt al-Arab. By 1990, Iran was undergoing military rearmament and reorganization, and purchased $10 billion worth of heavy weaponry from the USSR and China, including aircraft, tanks, and missiles. Rafsanjani reversed Iran's self-imposed ban on chemical weapons and ordered their manufacture and stockpiling (Iran destroyed them in 1993 after ratifying the Chemical Weapons Convention). As war with the Western powers loomed, Iraq became concerned about the possibility of Iran mending its relations with the West in order to attack Iraq. Iraq had lost its support from the West, and its position in Iran was increasingly untenable. Saddam realized that if Iran attempted to expel the Iraqis from the disputed territories in the border area, it was likely to succeed. Shortly after his invasion of Kuwait, Saddam wrote a letter to Rafsanjani stating that Iraq recognised Iranian rights over the eastern half of the Shatt al-Arab, a reversion to the "status quo ante bellum" that he had repudiated a decade earlier, and that he would accept Iran's demands and withdraw Iraq's military from the disputed territories. A peace agreement was signed finalizing the terms of the UN resolution, diplomatic relations were restored, and by late 1990 and early 1991 the Iraqi military had withdrawn. The UN peacekeepers withdrew from the border shortly afterward. Most of the prisoners of war were released in 1990, although some remained as late as 2003. Iranian politicians declared it to be the "greatest victory in the history of the Islamic Republic of Iran". Most historians and analysts consider the war to have been a stalemate. Certain analysts believe that Iraq won, on the basis of the successes of its 1988 offensives, which thwarted Iran's major territorial ambitions in Iraq and persuaded Iran to accept the ceasefire. Iranian analysts believe that Iran won the war because, although it did not succeed in overthrowing the Iraqi government, it thwarted Iraq's major territorial ambitions in Iran, and because, two years after the war had ended, Iraq permanently gave up its claim of ownership over the entire Shatt al-Arab as well.
On 9 December 1991, Javier Pérez de Cuéllar, then UN Secretary-General, reported that Iraq's initiation of the war was unjustified, as were its occupation of Iranian territory and its use of chemical weapons against civilians: That [Iraq's] explanations do not appear sufficient or acceptable to the international community is a fact...[the attack] cannot be justified under the charter of the United Nations, any recognized rules and principles of international law, or any principles of international morality, and entails the responsibility for conflict. Even if before the outbreak of the conflict there had been some encroachment by Iran on Iraqi territory, such encroachment did not justify Iraq's aggression against Iran—which was followed by Iraq's continuous occupation of Iranian territory during the conflict—in violation of the prohibition of the use of force, which is regarded as one of the rules of jus cogens...On one occasion I had to note with deep regret the experts' conclusion that "chemical weapons ha[d] been used against Iranian civilians in an area adjacent to an urban center lacking any protection against that kind of attack." He also stated that had the UN accepted this fact earlier, the war would almost certainly not have lasted as long as it did. Iran, encouraged by the announcement, sought reparations from Iraq, but never received any. Throughout the 1990s and early 2000s, Iran–Iraq relations remained balanced between a cold war and a cold peace. Despite renewed and somewhat thawed relations, both sides continued to engage in low-level conflict. Iraq continued to host and support the Mujahideen-e-Khalq, which carried out multiple attacks throughout Iran up until the 2003 invasion of Iraq (including the assassination of Iranian general Ali Sayyad Shirazi in 1998, cross-border raids, and mortar attacks). Iran carried out several airstrikes and missile attacks against Mujahideen targets inside Iraq (the largest taking place in 2001, when Iran fired 56 Scud missiles at Mujahideen targets). In addition, according to General Hamdani, Iran continued to carry out low-level infiltrations of Iraqi territory, using Iraqi dissidents and anti-government activists rather than Iranian troops, in order to incite revolts. After the fall of Saddam in 2003, Hamdani claimed that Iranian agents had infiltrated and created numerous militias in Iraq and built an intelligence system operating within the country. In 2005, the new government of Iraq apologised to Iran for starting the war. The Iraqi government also commemorated the war with various monuments, including the Hands of Victory and the al-Shaheed Monument, both in Baghdad. The war also helped to create a forerunner of the Gulf War coalition, when the Gulf Arab states banded together early in the war to form the Gulf Cooperation Council to help Iraq fight Iran. The economic loss at the time was believed to exceed $500 billion for each country ($1.2 trillion total). In addition, economic development stalled and oil exports were disrupted. Iraq had accrued more than $130 billion of international debt, excluding interest, and was also weighed down by slowed GDP growth. Iraq's debt to the Paris Club amounted to $21 billion, 85% of which had originated from the combined inputs of Japan, the USSR, France, Germany, the United States, Italy and the United Kingdom. The largest portion of Iraq's debt, amounting to $130 billion, was owed to its former Arab backers, with $67 billion loaned by Kuwait, Saudi Arabia, Qatar, the UAE, and Jordan.
After the war, Iraq accused Kuwait of slant drilling and stealing its oil, accusations that helped precipitate its invasion of Kuwait, which in turn worsened Iraq's financial situation: the United Nations Compensation Commission required Iraq to pay reparations of more than $200 billion to victims of the invasion, including Kuwait and the United States. To enforce payment, Iraq was put under a complete international embargo, which further strained the Iraqi economy and pushed its external debt to private and public sectors to more than $500 billion by the end of Saddam's rule. Combined with Iraq's negative economic growth after prolonged international sanctions, this produced a debt-to-GDP ratio of more than 1,000%, making Iraq the most indebted developing country in the world. The unsustainable economic situation compelled the new Iraqi government to request that a considerable portion of the debt incurred during the Iran–Iraq War be written off. Much of the oil industry in both countries was damaged in air raids. The war also had an impact on medical science: Iranian physicians treating wounded soldiers developed a surgical intervention for comatose patients with penetrating brain injuries, later establishing neurosurgery guidelines for treating civilians who had suffered blunt or penetrating skull injuries. Iranian physicians' experience from the war reportedly helped U.S. congresswoman Gabrielle Giffords recover after the 2011 Tucson shooting. In addition to helping trigger the Persian Gulf War, the Iran–Iraq War also contributed to Iraq's defeat in that conflict. Iraq's military was accustomed to fighting the slow-moving Iranian infantry formations with artillery and static defenses, using mostly unsophisticated tanks to gun down and shell the infantry and overwhelm the smaller Iranian tank force, and depending on weapons of mass destruction to help secure victories. As a result, it was rapidly overwhelmed by the high-tech, quick-maneuvering U.S. forces using modern doctrines such as AirLand Battle. At first, Saddam attempted to ensure that the Iraqi population suffered from the war as little as possible. There was rationing, but civilian projects begun before the war continued. At the same time, the already extensive personality cult around Saddam reached new heights while the regime tightened its control over the military. After the Iranian victories of the spring of 1982 and the Syrian closure of Iraq's main pipeline, Saddam did a volte-face on his policy towards the home front: a policy of austerity and total war was introduced, with the entire population being mobilised for the war effort. All Iraqis were ordered to donate blood, and around 100,000 Iraqi civilians were ordered to clear the reeds in the southern marshes. Mass demonstrations of loyalty towards Saddam became more common. Saddam also began implementing a policy of discrimination against Iraqis of Iranian origin. In the summer of 1982, Saddam began a campaign of terror. More than 300 Iraqi Army officers were executed for their failures on the battlefield. In 1983, a major crackdown was launched on the leadership of the Shia community. Ninety members of the al-Hakim family, an influential family of Shia clerics whose leading members were the émigrés Mohammad Baqir al-Hakim and Abdul Aziz al-Hakim, were arrested, and six were hanged. A similar crackdown on the Kurds saw 8,000 members of the Barzani clan, whose leader Massoud Barzani also headed the Kurdistan Democratic Party, executed.
From 1983 onwards, a campaign of increasingly brutal repression was started against the Iraqi Kurds, characterised by the Israeli-British historian Efraim Karsh as having "assumed genocidal proportions" by 1988. The al-Anfal Campaign was intended to "pacify" Iraqi Kurdistan permanently. To secure the loyalty of the Shia population, Saddam allowed more Shias into the Ba'ath Party and the government, and improved Shia living standards, which had been lower than those of the Iraqi Sunnis. Saddam had the state pay for restoring Imam Ali's tomb with white marble imported from Italy. At the same time, the Ba'athists also increased their repression of the Shia. The most infamous event was the massacre of 148 civilians of the Shia town of Dujail. Despite the costs of the war, the Iraqi regime made generous contributions to Shia "waqf" (religious endowments) as part of the price of buying Iraqi Shia support. The importance of winning Shia support was such that welfare services in Shia areas were expanded at a time when the Iraqi regime was pursuing austerity in all other non-military fields. During the first years of the war in the early 1980s, the Iraqi government tried to accommodate the Kurds in order to focus on the war against Iran. In 1983, the Patriotic Union of Kurdistan (PUK) agreed to cooperate with Baghdad, but the Kurdistan Democratic Party (KDP) remained opposed. That year, Saddam signed an autonomy agreement with the PUK's leader, Jalal Talabani, though he later reneged on it. By 1985, the PUK and KDP had joined forces, and Iraqi Kurdistan saw widespread guerrilla warfare up to the end of the war. Karsh argues that the Iranian government saw the outbreak of war as a chance to strengthen its position and consolidate the Islamic revolution, noting that government propaganda presented it domestically as a glorious "jihad" and a test of Iranian national character. The Iranian regime followed a policy of total war from the beginning, and attempted to mobilise the nation as a whole. It established a group known as the Reconstruction Campaign, whose members were exempted from conscription and were instead sent into the countryside to work on farms, replacing the men serving at the front. Iranian workers had a day's pay deducted from their pay cheques every month to help finance the war, and mass campaigns were launched to encourage the public to donate food, money, and blood. To further help finance the war, the Iranian government banned the import of all non-essential items and launched a major effort to rebuild the damaged oil plants. According to former Iraqi general Ra'ad al-Hamdani, the Iraqis believed that, in addition to the Arab revolts, the Revolutionary Guards would be drawn out of Tehran, leading to a counter-revolution in Iran that would cause Khomeini's government to collapse and thus ensure Iraqi victory. However, rather than turning against the revolutionary government as experts had predicted, Iran's people (including Iranian Arabs) rallied in support of the country and put up stiff resistance. In June 1981, street battles broke out between the Revolutionary Guard and the left-wing Mujahideen-e-Khalq (MEK), continuing for several days and killing hundreds on both sides. In September, more unrest broke out on the streets of Iran as the MEK attempted to seize power. Thousands of left-wing Iranians (many of whom were not associated with the MEK) were shot and hanged by the government.
The MEK began an assassination campaign that killed hundreds of regime officials by the fall of 1981. On 28 June 1981, they assassinated Mohammad Beheshti, the secretary-general of the Islamic Republican Party, and on 30 August they killed Iran's president, Mohammad-Ali Rajai. The government responded with mass executions of suspected MEK members, a practice that lasted until 1985. In addition to the open civil conflict with the MEK, the Iranian government was faced with Iraqi-supported rebellions in Iranian Kurdistan, which were gradually put down through a campaign of systematic repression. The year 1985 also saw student anti-war demonstrations, which were crushed by government forces.

The war furthered the decline of the Iranian economy that had begun with the revolution in 1978–79. Between 1979 and 1981, foreign exchange reserves fell from $14.6 billion to $1 billion. As a result of the war, living standards dropped dramatically, and Iran was described by British journalists John Bulloch and Harvey Morris as "a dour and joyless place" ruled by a harsh regime that "seemed to have nothing to offer but endless war". Though Iran was becoming bankrupt, Khomeini interpreted Islam's prohibition of usury to mean that Iran could not borrow against future oil revenues to meet war expenses. As a result, once its cash had run out, Iran funded the war from current oil-export income. Oil revenue dropped from $20 billion in 1982 to $5 billion in 1988. French historian Pierre Razoux argued that this sudden drop in economic and industrial potential, in conjunction with increasing Iraqi aggression, placed Iran in a challenging position with little leeway other than to accept Iraq's conditions for peace.

In January 1985, former prime minister and anti-war Islamic Liberation Movement co-founder Mehdi Bazargan criticised the war in a telegram to the United Nations, calling it un-Islamic and illegitimate and arguing that Khomeini should have accepted Saddam's truce offer in 1982 instead of attempting to overthrow the Ba'ath. In a public letter to Khomeini sent in May 1988, he added: "Since 1986, you have not stopped proclaiming victory, and now you are calling upon the population to resist until victory. Is that not an admission of failure on your part?" Khomeini was annoyed by Bazargan's telegram and issued a lengthy public rebuttal in which he defended the war as both Islamic and just. By 1987, Iranian morale had begun to crumble, reflected in the failure of government campaigns to recruit "martyrs" for the front. Karsh points to the decline in morale in 1987–88 as a major factor in Iran's decision to accept the ceasefire of 1988.

Not all saw the war in negative terms. The Islamic Revolution of Iran was strengthened and radicalised. The Iranian government-owned "Etelaat" newspaper wrote, "There is not a single school or town that is excluded from the happiness of 'holy defence' of the nation, from drinking the exquisite elixir of martyrdom, or from the sweet death of the martyr, who dies in order to live forever in paradise."

Iran's regular army had been purged after the 1979 Revolution, with most high-ranking officers having either deserted (fled the country) or been executed. At the beginning of the war, Iraq held a clear advantage in armour, while the two nations were roughly equal in artillery. The gap only widened as the war went on. 
Iran started with a stronger air force, but over time the balance of power reversed in Iraq's favour, as Iraq constantly expanded its military while Iran was under arms sanctions. The conflict has been compared to World War I in terms of the tactics used, including large-scale trench warfare with barbed wire stretched across trenches, manned machine gun posts, bayonet charges, human wave attacks across a no man's land, and extensive use of chemical weapons such as sulfur mustard by the Iraqi government against Iranian troops, civilians, and Kurds.

The world powers, the United States and the Soviet Union, together with many Western and Arab countries, provided military, intelligence, economic, and political support for Iraq. During the war, Iraq was regarded by the West and the Soviet Union as a counterbalance to post-revolutionary Iran. The Soviet Union, Iraq's main arms supplier during the war, did not wish for the end of its alliance with Iraq and was alarmed by Saddam's threats to find new arms suppliers in the West and China if the Kremlin did not provide him with the weapons he wanted. The Soviet Union hoped to use the threat of reducing arms supplies to Iraq as leverage for forming a Soviet-Iranian alliance.

During the early years of the war, the United States lacked meaningful relations with either Iran or Iraq, the former because of the Iranian Revolution and the Iran hostage crisis, the latter because of Iraq's alliance with the Soviet Union and hostility towards Israel. Following Iran's success in repelling the Iraqi invasion and Khomeini's refusal to end the war in 1982, the United States reached out to Iraq, beginning with the restoration of diplomatic relations in 1984. The United States wished both to keep Iran away from Soviet influence and to protect other Gulf states from any threat of Iranian expansion. As a result, it began to provide limited support to Iraq. In 1982, Henry Kissinger, former Secretary of State, outlined U.S. policy towards Iran: The focus of Iranian pressure at this moment is Iraq. There are few governments in the world less deserving of our support and less capable of using it. Had Iraq won the war, the fear in the Gulf and the threat to our interest would be scarcely less than it is today. Still, given the importance of the balance of power in the area, it is in our interests to promote a ceasefire in that conflict; though not at a cost that will preclude an eventual rapprochement with Iran, either if a more moderate regime replaces Khomeini's or if the present rulers wake up to the geopolitical reality that the historic threat to Iran's independence has always come from the country with which it shares a border: the Soviet Union. A rapprochement with Iran, of course, must await at a minimum Iran's abandonment of hegemonic aspirations in the Gulf. Richard Murphy, Assistant Secretary of State during the war, testified to Congress in 1984 that the Reagan administration believed a victory for either Iran or Iraq was "neither militarily feasible nor strategically desirable".

Support to Iraq was given via technological aid, intelligence, the sale of chemical and biological warfare technology and military equipment, and satellite intelligence. While there was direct combat between Iran and the United States, it is not universally agreed that the fighting between them was intended specifically to benefit Iraq rather than arising from separate disputes between the U.S. and Iran. 
American official ambiguity towards which side to support was summed up by Henry Kissinger when he remarked, "It's a pity they both can't lose." The Americans and the British also either blocked or watered down UN resolutions that condemned Iraq for using chemical weapons against the Iranians and against its own Kurdish citizens. More than 30 countries provided support to Iraq, Iran, or both; most of the aid went to Iraq. Iran had a complex clandestine procurement network to obtain munitions and critical materials. Iraq had an even larger clandestine purchasing network, involving 10–12 allied countries, to maintain ambiguity over its arms purchases and to circumvent "official restrictions". Arab mercenaries and volunteers from Egypt and Jordan formed the Yarmouk Brigade and participated in the war alongside Iraqis. According to the Stockholm International Peace Research Institute, the Soviet Union, France, and China together accounted for over 90% of the value of Iraq's arms imports between 1980 and 1988.

The United States pursued policies in favour of Iraq by reopening diplomatic channels, lifting restrictions on the export of dual-use technology, overseeing the transfer of third-party military hardware, and providing operational intelligence on the battlefield. France, which from the 1970s had been one of Iraq's closest allies, was a major supplier of military hardware. The French sold weapons worth $5 billion, which comprised well over a quarter of Iraq's total arms stockpile. Citing the French magazine "Le Nouvel Observateur" as its primary source, but also quoting French officials, the "New York Times" reported that France had been sending precursors of chemical weapons to Iraq since 1986. China, which had no direct stake in the victory of either side and whose interests in the war were entirely commercial, freely sold arms to both sides.

Iraq also made extensive use of front companies, middlemen, secret ownership of all or part of companies all over the world, forged end-user certificates, and other methods to hide what it was acquiring. Some transactions may have involved people, shipping, and manufacturing in as many as 10 countries. Support from Great Britain exemplified the methods by which Iraq would circumvent export controls: Iraq bought at least one British company with operations in the United Kingdom and the United States, and it had a complex relationship with France and the Soviet Union, its major suppliers of actual weapons.

The United Nations Security Council initially called for a cease-fire after a week of fighting, while Iraq was occupying Iranian territory, and renewed the call on later occasions. However, the UN did not come to Iran's aid to repel the Iraqi invasion, and the Iranians thus perceived the UN as subtly biased in favour of Iraq. Iraq's main financial backers were the oil-rich Persian Gulf states, most notably Saudi Arabia ($30.9 billion), Kuwait ($8.2 billion), and the United Arab Emirates ($8 billion). In all, Iraq received $35 billion in loans from the West and between $30 billion and $40 billion from the Persian Gulf states during the 1980s. The Iraqgate scandal revealed that a branch of Italy's largest bank, Banca Nazionale del Lavoro (BNL), in Atlanta, Georgia, relied partially on U.S. taxpayer-guaranteed loans to funnel $5 billion to Iraq from 1985 to 1989. 
In August 1989, when FBI agents raided the Atlanta branch of BNL, branch manager Christopher Drogoul was charged with making unauthorised, clandestine, and illegal loans to Iraq, some of which, according to his indictment, were used to purchase arms and weapons technology. According to the "Financial Times", Hewlett-Packard, Tektronix, and Matrix Churchill's branch in Ohio were among the companies shipping militarily useful technology to Iraq under the eye of the U.S. government.

While the United States directly fought Iran, citing freedom of navigation as a major "casus belli", it also indirectly supplied some weapons to Iran as part of a complex and illegal programme that became known as the Iran–Contra affair. These secret sales were partly to help secure the release of hostages held in Lebanon and partly to raise money for the Contra rebel group in Nicaragua. This arms-for-hostages arrangement turned into a major scandal.

North Korea was a major arms supplier to Iran, often acting as a third party in arms deals between Iran and the Communist bloc. The support included domestically manufactured arms and Eastern Bloc weapons, for which the major powers wanted deniability. The other major arms suppliers and supporters of Iran's Islamic Revolution were Libya, Syria, and China. According to the Stockholm International Peace Research Institute, China was the largest foreign arms supplier to Iran between 1980 and 1988. Syria and Libya, breaking Arab solidarity, supported Iran with arms, rhetoric, and diplomacy. Besides the United States and the Soviet Union, Yugoslavia also sold weapons to both countries for the entire duration of the conflict. Likewise, Portugal helped both countries; it was not unusual to see Iranian- and Iraqi-flagged ships anchored at Setúbal, waiting their turn to dock. From 1980 to 1987, Spain sold €458 million in weapons to Iran and €172 million in weapons to Iraq. Weapons sold to Iraq included 4x4 vehicles, BO-105 helicopters, explosives, and ammunition. A research party later discovered that an unexploded Iraqi chemical warhead found in Iran had been manufactured in Spain.

Although neither side acquired any weapons from Turkey, both sides enjoyed Turkish civilian trade during the conflict; the Turkish government remained neutral and refused to support the U.S.-imposed trade embargo on Iran. Turkey's export market jumped from $220 million in 1981 to $2 billion in 1985, making up 25% of Turkey's overall exports. Turkish construction projects in Iraq totaled $2.5 billion between 1974 and 1990. Trading with both countries helped Turkey to offset its ongoing economic crisis, though the benefits decreased as the war neared its end and disappeared entirely with Iraq's invasion of Kuwait and the sanctions Turkey subsequently imposed on Iraq.

American support for Ba'athist Iraq during the Iran–Iraq War, in which it fought against post-revolutionary Iran, included several billion dollars' worth of economic aid, the sale of dual-use technology, non-U.S.-origin weaponry, military intelligence, and special operations training. However, the U.S. did not directly supply arms to Iraq. U.S. government support for Iraq was not a secret and was frequently discussed in open sessions of the Senate and House of Representatives. American support for Iraq in its conflict with Iran was never enthusiastic, and assistance was extended largely to prevent an Iranian victory. 
A key element of U.S. political-military and energy-economic planning occurred in early 1983. The Iran–Iraq War had been going on for three years, and there were significant casualties on both sides, reaching hundreds of thousands. Within the Reagan National Security Council, concern was growing that the war could spread beyond the boundaries of the two belligerents. A National Security Planning Group meeting, chaired by Vice President George Bush, was called to review U.S. options. It was determined that there was a high likelihood the conflict would spread into Saudi Arabia and other Gulf states, but that the United States had little capability to defend the region. Furthermore, it was determined that a prolonged war in the region would induce much higher oil prices and threaten the fragile world recovery, which was just beginning to gain momentum. On 22 May 1984, President Reagan was briefed on the project's conclusions in the Oval Office by William Flynn Martin, who had served as head of the NSC staff that organized the study; the presentation has since been declassified. The conclusions were threefold: first, oil stocks needed to be increased among members of the International Energy Agency and, if necessary, released early in the event of an oil market disruption; second, the United States needed to beef up the security of friendly Arab states in the region; and third, an embargo should be placed on sales of military equipment to Iran and Iraq. The plan was approved by the president and later affirmed by the G7 leaders, headed by Margaret Thatcher, at the London Summit of 1984.

According to "Foreign Policy", the "Iraqis used mustard gas and sarin prior to four major offensives in early 1988 that relied on U.S. satellite imagery, maps, and other intelligence. ... According to recently declassified CIA documents and interviews with former intelligence officials like Francona, the U.S. had firm evidence of Iraqi chemical attacks beginning in 1983."

On 17 May 1987, an Iraqi Dassault Mirage F1 fighter jet launched two Exocet missiles at USS "Stark", a "Perry"-class frigate. The first struck the port side of the ship and failed to explode, though it left burning propellant in its wake; the second struck moments later in approximately the same place and penetrated through to crew quarters, where it exploded, killing 37 crew members and leaving 21 injured. Whether the Iraqi leadership authorised the attack is still unknown. Initial claims by the Iraqi government (that "Stark" was inside the Iran–Iraq War zone) were shown to be false, and the motives and orders of the pilot remain unanswered. Though American officials claimed that the pilot who attacked "Stark" had been executed, an ex-Iraqi Air Force commander later stated that the pilot had not been punished and was still alive at the time. The attack remains the only successful anti-ship missile strike on an American warship. Owing to the extensive political and military cooperation between the Iraqis and Americans by 1987, the attack had little effect on relations between the two countries.

U.S. attention was focused on isolating Iran as well as on maintaining freedom of navigation. The United States criticised Iran's mining of international waters and sponsored United Nations Security Council Resolution 598, which passed unanimously on 20 July; U.S. and Iranian forces subsequently skirmished during Operation Earnest Will. During Operation Nimble Archer in October 1987, the United States attacked Iranian oil platforms in retaliation for an Iranian attack on the U.S.-flagged Kuwaiti tanker "Sea Isle City". 
On 14 April 1988, the frigate USS "Samuel B. Roberts" was badly damaged by an Iranian mine, and 10 sailors were wounded. U.S. forces responded with Operation Praying Mantis on 18 April, the U.S. Navy's largest engagement of surface warships since World War II. Two Iranian oil platforms were destroyed, and five Iranian warships and gunboats were sunk; an American helicopter also crashed. The fighting later came before the International Court of Justice as the Oil Platforms case (Islamic Republic of Iran v. United States of America), which was eventually dismissed in 2003.

In the course of these escort operations, the cruiser USS "Vincennes" shot down Iran Air Flight 655 on 3 July 1988, killing all 290 passengers and crew on board. The American government claimed that "Vincennes" was in international waters at the time (which was later proven to be untrue), that the Airbus A300 had been mistaken for an Iranian F-14 Tomcat, and that "Vincennes" feared that she was under attack. The Iranians maintain that "Vincennes" was in their own waters and that the passenger jet was turning away and increasing altitude after take-off. U.S. Admiral William J. Crowe later admitted on "Nightline" that "Vincennes" was in Iranian territorial waters when it launched the missiles. At the time of the attack, Admiral Crowe claimed that the Iranian plane did not identify itself and did not respond to the warning signals sent. In 1996, the United States expressed its regret for the event and the civilian deaths it caused.

In a declassified 1991 report, the CIA estimated that Iran had suffered more than 50,000 casualties from Iraq's use of several chemical weapons, though current estimates exceed 100,000, as the long-term effects continue to cause casualties. The official CIA estimate did not include the civilian population contaminated in bordering towns or the children and relatives of veterans, many of whom have developed blood, lung, and skin complications, according to the Organization for Veterans of Iran. According to a 2002 article in the "Star-Ledger", 20,000 Iranian soldiers were killed on the spot by nerve gas. As of 2002, 5,000 of the 80,000 survivors continued to seek regular medical treatment, while 1,000 were hospital inpatients.

According to Iraqi documents, assistance in developing chemical weapons was obtained from firms in many countries, including the United States, West Germany, the Netherlands, the United Kingdom, and France. A report stated that Dutch, Australian, Italian, French, and both West and East German companies were involved in the export of raw materials to Iraqi chemical weapons factories. Declassified CIA documents show that the United States was providing reconnaissance intelligence to Iraq around 1987–88, which was then used to launch chemical weapon attacks on Iranian troops, and that the CIA knew full well that chemical weapons would be deployed; sarin and cyclosarin attacks followed.

On 21 March 1986, the United Nations Security Council issued a declaration stating that "members are profoundly concerned by the unanimous conclusion of the specialists that chemical weapons on many occasions have been used by Iraqi forces against Iranian troops, and the members of the Council strongly condemn this continued use of chemical weapons in clear violation of the Geneva Protocol of 1925, which prohibits the use in war of chemical weapons." The United States was the only member that voted against the issuance of this statement. 
A United Nations mission to the region in 1988 found evidence of the use of chemical weapons, which was condemned in Security Council Resolution 612. According to Walter Lang, a senior defense intelligence officer at the U.S. Defense Intelligence Agency, "the use of gas on the battlefield by the Iraqis was not a matter of deep strategic concern" to Reagan and his aides, because they "were desperate to make sure that Iraq did not lose". He claimed that the Defense Intelligence Agency "would have never accepted the use of chemical weapons against civilians, but the use against military objectives was seen as inevitable in the Iraqi struggle for survival". The Reagan administration did not stop aiding Iraq after receiving reports of the use of poison gas on Kurdish civilians.

The United States accused Iran of using chemical weapons as well, though the allegations have been disputed. Joost Hiltermann, the principal researcher for Human Rights Watch between 1992 and 1994, conducted a two-year study that included a field investigation in Iraq, obtaining Iraqi government documents in the process. According to Hiltermann, the literature on the Iran–Iraq War reflects allegations of chemical weapons use by Iran, but such allegations are "marred by a lack of specificity as to time and place, and the failure to provide any sort of evidence". Analysts Gary Sick and Lawrence Potter have called the allegations against Iran "mere assertions" and stated, "No persuasive evidence of the claim that Iran was the primary culprit [of using chemical weapons] was ever presented." Policy consultant and author Joseph Tragert stated, "Iran did not retaliate with chemical weapons, probably because it did not possess any at the time". At his trial in December 2006, Saddam said he would take responsibility "with honour" for any attacks on Iran using conventional or chemical weapons during the war, but he took issue with the charges that he had ordered attacks on Iraqis.

A medical analysis of the effects of Iraqi mustard gas, described in a U.S. military textbook, contrasted them with the effects of World War I gas. At the time of the conflict, the UN Security Council issued statements that "chemical weapons had been used in the war". UN statements never clarified that only Iraq was using chemical weapons, and according to retrospective authors "the international community remained silent as Iraq used weapons of mass destruction against Iranian[s] as well as Iraqi Kurds."

Iran's attack on the "Osirak" nuclear reactor in September 1980 was the first attack on a nuclear reactor and one of only six military attacks on nuclear facilities in history. It was also the first instance of a pre-emptive attack on a nuclear reactor intended to forestall the development of a nuclear weapon, though it did not achieve its objective, as France repaired the reactor after the attack. (It took a second pre-emptive strike by the Israeli Air Force in June 1981 to disable the reactor, killing a French engineer in the process and causing France to pull out of "Osirak". The decommissioning of "Osirak" has been cited as causing a substantial delay to Iraqi acquisition of nuclear weapons.)

The Iran–Iraq War was the first and only conflict in the history of warfare in which both forces used ballistic missiles against each other. It also saw the only confirmed air-to-air helicopter battles in history, with Iraqi Mi-25s flying against Iranian AH-1J SeaCobras (supplied by the United States before the Iranian Revolution) on several separate occasions. 
In November 1980, not long after Iraq's initial invasion of Iran, two Iranian SeaCobras engaged two Mi-25s with TOW wire-guided antitank missiles. One Mi-25 went down immediately; the other was badly damaged and crashed before reaching base. The Iranians repeated this accomplishment on 24 April 1981, destroying two Mi-25s without incurring losses to themselves. One Mi-25 was also downed by an IRIAF F-14A. The Iraqis hit back, claiming the destruction of a SeaCobra on 14 September 1983 (with the YaKB machine gun), then three SeaCobras on 5 February 1984 and three more on 25 February 1984 (two with Falanga missiles, one with S-5 rockets). After a lull in helicopter losses, each side lost a gunship on 13 February 1986. Later, an Mi-25 claimed a SeaCobra shot down with its YaKB gun on 16 February, and a SeaCobra claimed an Mi-25 shot down with rockets on 18 February. The last engagement between the two types was on 22 May 1986, when Mi-25s shot down a SeaCobra. The final claim tally was 10 SeaCobras and 6 Mi-25s destroyed. The relatively small numbers involved and the inevitable disputes over actual kill counts make it unclear whether either gunship had a real technical superiority over the other. Iraqi Mi-25s also claimed 43 kills against other Iranian helicopters, such as Agusta-Bell UH-1 Hueys.

Both sides, especially Iraq, also carried out air and missile attacks against population centres. In October 1986, Iraqi aircraft began to attack civilian passenger trains and aircraft on Iranian soil, including an Iran Air Boeing 737 unloading passengers at Shiraz International Airport. In retaliation for the Iranian Operation Karbala 5, Iraq attacked 65 cities in 226 sorties over 42 days, bombing civilian neighbourhoods. Eight Iranian cities came under attack from Iraqi missiles. The bombings killed 65 children in an elementary school in Borujerd. The Iranians responded with Scud missile attacks on Baghdad and struck a primary school there. These events became known as the "War of the Cities". Despite the war, Iran and Iraq maintained diplomatic relations and embassies in each other's countries until mid-1987.

Iran's government used human waves to attack enemy troops and, in some cases, even to clear minefields. Children volunteered as well. Some reports mistakenly have the Basijis marching into battle while marking their expected entry to heaven by wearing "plastic keys to paradise" around their necks, although other analysts regard this story as a hoax involving a misinterpretation of the carrying of a prayer book called "The Keys to Paradise" (Mafatih al-Janan) by Sheikh Abbas Qumi, given to all volunteers. According to journalist Robin Wright: During the Fateh offensive in February 1987, I toured the southwest front on the Iranian side and saw scores of boys, aged anywhere from nine to sixteen, who said with staggering and seemingly genuine enthusiasm that they had volunteered to become martyrs. Regular army troops, the paramilitary Revolutionary Guards and mullahs all lauded these youths, known as baseeji [Basij], for having played the most dangerous role in breaking through Iraqi lines. They had led the way, running over fields of mines to clear the ground for the Iranian ground assault. Wearing white headbands to signify the embracing of death, and shouting "Shaheed, shaheed" (Martyr, martyr) they literally blew their way into heaven. Their numbers were never disclosed. But a walk through the residential suburbs of Iranian cities provided a clue. 
Window after window, block after block, displayed black-bordered photographs of teenage or preteen youths.

The relationship between the two nations has warmed considerably since the downfall of Saddam Hussein, though mostly out of pragmatic interest. Iran and Iraq share many common interests, including a common enemy in the Islamic State. Iran has provided significant military assistance to Iraq, which has bought it considerable political influence with Iraq's newly elected Shia-led government. Iraq is also heavily dependent on the more stable and developed Iran for its energy needs, making a peaceful customer a high foreign-policy priority for Iran. The Iran–Iraq War is regarded as a major trigger for rising sectarianism in the region, as it was viewed by many as a clash between Sunni Muslims (Iraq and the other Arab states) and the Shia revolutionaries who had recently taken power in Iran. Lingering animosity remains, however, despite the pragmatic alliance: multiple government declarations from Iran have stated that the war will "affect every issue of internal and foreign policy" for decades to come. The sustained importance of the conflict is attributed mostly to its massive human and economic cost, along with its ties to the Iranian Revolution.

Another significant effect of the war on Iran's policy is the issue of remaining war reparations. The UN estimates that Iraq owes Iran about $149 billion, while Iran contends that, with both direct and indirect effects taken into account, the cost of the war reaches a trillion dollars. Iran has not pressed for these reparations in recent years and has even offered forms of financial aid, most likely because of its interest in keeping Iraq politically stable: imposing reparation costs would further burden the already impoverished nation.

The most important factor governing Iraq's current foreign policy is the national government's persistent fragility following the overthrow of Saddam Hussein. Iraq's need for any ally that can help bring stability and development has allowed Iran to exert significant influence over the new Iraqi state, despite lingering memories of the war. Iraq is far too weak a state to challenge Iran regionally, so accepting support while focusing on counter-insurgency and stabilisation is in its best interest. Iraq currently appears to be pulled in two opposing directions: towards a practical relationship with Iran, which can provide a reliable source of power as well as military support to the influential Shia militias and political factions, and towards the United States, which offers significant economic aid packages along with military support in the form of air and artillery strikes, in the hope of establishing a stable ally in the region. If Iraq lurches too far in either direction, the benefits offered by the other side will likely be reduced gradually or cut off completely. Another significant factor influencing relations is the shared cultural interests of the two countries' citizens, who wish to visit freely the many holy sites located in both countries.
https://en.wikipedia.org/wiki?curid=14889
Incremental reading Incremental reading is a software-assisted method for learning and retaining information from reading, which involves the creation of flashcards out of electronic articles. "Incremental reading" means "reading in portions". Instead of reading articles linearly one at a time, the method works by keeping a large reading list of electronic articles or books (often dozens or hundreds of them) and reading parts of several articles in each session. Articles in the reading list are prioritized by the user. In the course of reading, key points of articles are broken up into flashcards, which are then learned and reviewed over an extended period of time with the help of a spaced repetition algorithm. This use of flashcards at later stages of the process is based on the spacing effect (the phenomenon whereby learning is greater when studying is spread out over time) and the testing effect (the finding that long-term memory is increased when some of the learning period is devoted to retrieving the to-be-remembered information through testing). The method is aimed at people who want to retain a large amount of information for life, particularly when that information comes from various sources.

The method itself is often credited to the Polish software developer Piotr Wozniak. He implemented the first version of incremental reading in 1999 in SuperMemo 99, providing the essential tools of the method: a prioritized reading list, and the ability to extract portions of articles and to create cloze deletions. The term "incremental reading" itself appeared the next year with SuperMemo 2000. Later versions of SuperMemo enhanced the tools and techniques involved, adding features such as webpage import and handling of material overload. Limited incremental reading support for the text editor Emacs appeared in 2007. An Anki add-on for incremental reading was published in 2011; another add-on is available for Anki 2.0 and 2.1. Incremental reading was the first of a series of related concepts invented by Piotr Wozniak: incremental image learning, incremental video, incremental audio, incremental mail processing, incremental problem solving, and incremental writing. "Incremental learning" is the term Wozniak uses to refer to those concepts as a whole.

When reading an electronic article, the user extracts the most important parts (similar to underlining or highlighting a paper article) and gradually distills them into flashcards. Flashcards are information presented in a question-answer format (making active recall possible). Cloze deletions are often used in incremental reading, as they are easy to create out of text. Both extracts and flashcards are scheduled independently of the original article. With time and reviews, articles are supposed to be gradually converted into extracts, and extracts into flashcards. Hence, incremental reading is a method of breaking down information from electronic articles into sets of flashcards. Unlike extracts, flashcards are reviewed with active recall. This means that extracts such as "George Washington was the first U.S. President" must ultimately be converted into questions such as "Who was the first U.S. President?" (Answer: George Washington), or "Who was George Washington?" (Answer: the first U.S. President), etc., or cloze deletions such as "[BLANK] was the first U.S. President", "George Washington was [BLANK]", etc. 
This flashcard creation process is semi-automated – the reader chooses which material to learn and edits the precise wording of the questions, while the software assists in prioritizing articles and making the flashcards, and does the scheduling: it calculates the time for the reader to review each chunk, according to the rules of a spaced repetition algorithm. This means that all processed pieces of information are presented at increasing intervals. Individual articles are read in portions proportional to the attention span, which depends on the user, their mood, the article, etc. This allows for a substantial gain in attention, according to Piotr Wozniak. Without the use of spaced repetition, the reader would quickly get lost in the glut of information when studying dozens of subjects in parallel. However, spaced repetition makes it possible to retain traces of the processed material in memory.
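The core mechanics described above, a priority-ordered reading list plus expanding review intervals, can be sketched in a few lines of code. The following Python fragment is a minimal illustration under invented names (ReadingItem, next_interval); it is a simplified model of the idea, not SuperMemo's or Anki's actual algorithm.

    import heapq
    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass(order=True)
    class ReadingItem:
        priority: float                     # lower value = read sooner (user-assigned)
        title: str = field(compare=False, default="")
        due: date = field(compare=False, default_factory=date.today)
        interval_days: int = field(compare=False, default=1)

    def next_interval(interval_days: int, ease: float = 2.5) -> int:
        # Expand the review interval multiplicatively (simplified spacing rule).
        return max(1, round(interval_days * ease))

    # Build a prioritized reading list; the user reads a portion of the top
    # item, extracts key points as flashcards, and postpones the rest.
    queue = [ReadingItem(0.2, "Article A"), ReadingItem(0.7, "Article B")]
    heapq.heapify(queue)

    item = heapq.heappop(queue)                       # highest-priority article
    item.interval_days = next_interval(item.interval_days)
    item.due = date.today() + timedelta(days=item.interval_days)
    heapq.heappush(queue, item)                       # back into the list for later

In a full implementation the extracts and cloze deletions produced from each portion would be scheduled by the same mechanism, independently of the parent article, which is what allows dozens of articles to be interleaved without losing track of any of them.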
https://en.wikipedia.org/wiki?curid=14891
Intelligence quotient An intelligence quotient (IQ) is a total score derived from a set of standardized tests or subtests designed to assess human intelligence. The abbreviation "IQ" was coined by the psychologist William Stern from the German term "Intelligenzquotient", his term for a scoring method for intelligence tests that he advocated in a 1912 book while at the University of Breslau. Historically, IQ was a score obtained by dividing a person's mental age, as measured by an intelligence test, by the person's chronological age, both expressed in years and months; the resulting fraction (quotient) was multiplied by 100 to give the IQ score. For modern IQ tests, the median raw score of the norming sample is defined as IQ 100, and each standard deviation (SD) above or below the median is defined as 15 IQ points more or less. By this definition, approximately two-thirds of the population scores between IQ 85 and IQ 115. About 2.5 percent of the population scores above 130, and 2.5 percent below 70.

Scores from intelligence tests are estimates of intelligence. Unlike quantities such as distance and mass, a concrete measure of intelligence cannot be achieved, given the abstract nature of the concept of "intelligence". IQ scores have been shown to be associated with such factors as morbidity and mortality, parental social status, and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates and the mechanisms of inheritance. IQ scores are used for educational placement, assessment of intellectual disability, and evaluation of job applicants. In research contexts, they have been studied as predictors of job performance and income. They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate of about three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.

Historically, even before IQ tests were devised, there were attempts to classify people into intelligence categories by observing their behavior in daily life. Those other forms of behavioral observation are still important for validating classifications based primarily on IQ test scores. Both intelligence classification by observation of behavior outside the testing room and classification by IQ testing depend on the definition of "intelligence" used in a particular case and on the reliability and error of estimation in the classification procedure.

The English statistician Francis Galton made the first attempt at creating a standardized test for rating a person's intelligence. A pioneer of psychometrics and of the application of statistical methods to the study of human diversity and the inheritance of human traits, he believed that intelligence was largely a product of heredity (by which he did not mean genes, although he did develop several pre-Mendelian theories of particulate inheritance). He hypothesized that there should exist a correlation between intelligence and other observable traits such as reflexes, muscle grip, and head size. 
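Both scoring rules just described are simple enough to state in code. The sketch below (function names invented for illustration) computes the historical ratio IQ and the modern deviation IQ; it assumes the norming sample's central raw score and standard deviation are known, and it uses the median-centred definition given above.

    def ratio_iq(mental_age: float, chronological_age: float) -> float:
        # Historical definition: (mental age / chronological age) * 100.
        return 100.0 * mental_age / chronological_age

    def deviation_iq(raw_score: float, norm_median: float, norm_sd: float) -> float:
        # Modern definition: the norming-sample median is IQ 100 and each
        # standard deviation is worth 15 IQ points.
        z = (raw_score - norm_median) / norm_sd
        return 100.0 + 15.0 * z

    print(ratio_iq(10.0, 8.0))             # mental age 10 at age 8 -> IQ 125
    print(deviation_iq(65.0, 50.0, 10.0))  # 1.5 SD above the median -> IQ 122.5

The deviation formulation is what makes the two-thirds-between-85-and-115 statement true by construction: scores are mapped onto a distribution centred at 100 with SD 15.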
He set up the first mental testing centre in the world in 1882, and he published "Inquiries into Human Faculty and Its Development" in 1883, in which he set out his theories. After gathering data on a variety of physical variables, he was unable to show any such correlation, and he eventually abandoned this research.

The French psychologist Alfred Binet, together with Victor Henri and Théodore Simon, had more success in 1905, when they published the Binet-Simon test, which focused on verbal abilities. It was intended to identify mental retardation in school children, in specific contradistinction to claims made by psychiatrists that these children were "sick" (not "slow") and should therefore be removed from school and cared for in asylums. The score on the Binet-Simon scale would reveal the child's mental age. For example, a six-year-old child who passed all the tasks usually passed by six-year-olds, but nothing beyond, would have a mental age that matched his chronological age, 6.0 (Fancher, 1985). Binet thought that intelligence was multifaceted but came under the control of practical judgment. In Binet's view, there were limitations with the scale, and he stressed what he saw as the remarkable diversity of intelligence and the consequent need to study it using qualitative, as opposed to quantitative, measures (White, 2000). American psychologist Henry H. Goddard published a translation of the test in 1910. American psychologist Lewis Terman at Stanford University revised the Binet-Simon scale, which resulted in the Stanford-Binet Intelligence Scales (1916). It became the most popular test in the United States for decades.

The many different kinds of IQ tests include a wide variety of item content. Some test items are visual, while many are verbal. Test items vary from abstract-reasoning problems to questions of arithmetic, vocabulary, or general knowledge.

The British psychologist Charles Spearman made the first formal factor analysis of correlations between such tests in 1904. He observed that children's school grades across seemingly unrelated subjects were positively correlated, and reasoned that these correlations reflected the influence of an underlying general mental ability that entered into performance on all kinds of mental tests. He suggested that all mental performance could be conceptualized in terms of a single general ability factor and a large number of narrow task-specific ability factors. Spearman named the general factor "g" and labeled the specific factors or abilities for specific tasks "s". In any collection of test items that make up an IQ test, the score that best measures "g" is the composite score that has the highest correlations with all the item scores. Typically, the ""g"-loaded" composite score of an IQ test battery appears to involve a common strength in abstract reasoning across the test's item content. Spearman's argument proposing a general factor of human intelligence is still accepted, in principle, by many psychometricians. Today's factor models of intelligence typically represent cognitive abilities as a three-level hierarchy, with a large number of narrow factors at the bottom, a handful of broad, more general factors at the intermediate level, and at the apex a single factor, referred to as the "g" factor, which represents the variance common to all cognitive tasks.

During World War I, the U.S. Army needed a way to evaluate recruits and assign them to appropriate tasks. 
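Spearman's observation can be illustrated numerically. The sketch below is a rough analogue, not Spearman's original tetrad method: it extracts the first principal component from a hypothetical matrix of positive test correlations, and the resulting loadings play the role of each test's "g"-loading. The correlation values are invented.

    import numpy as np

    # Hypothetical correlation matrix for four positively correlated tests
    # (the "positive manifold" Spearman observed in school grades).
    R = np.array([[1.00, 0.60, 0.55, 0.50],
                  [0.60, 1.00, 0.58, 0.52],
                  [0.55, 0.58, 1.00, 0.48],
                  [0.50, 0.52, 0.48, 1.00]])

    eigenvalues, eigenvectors = np.linalg.eigh(R)   # ascending eigenvalue order
    first = eigenvectors[:, -1]                     # direction of the largest eigenvalue
    first = first * np.sign(first.sum())            # fix the arbitrary sign
    loadings = first * np.sqrt(eigenvalues[-1])     # principal-component loadings

    print(loadings)   # all high and positive: every test loads on the general factor

The composite score with the highest correlations with all item scores, as described above, corresponds to weighting the tests by loadings of this kind.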
This led to the development of several mental tests by Robert Yerkes, who worked with major hereditarians of American psychometrics, including Terman and Goddard, to write the test. The testing generated controversy and much public debate in the United States. Nonverbal or "performance" tests were developed for those who could not speak English or were suspected of malingering. Based on Goddard's translation of the Binet-Simon test, the tests had an impact in screening men for officer training: "...the tests did have a strong impact in some areas, particularly in screening men for officer training. At the start of the war, the army and national guard maintained nine thousand officers. By the end, two hundred thousand officers presided, and two-thirds of them had started their careers in training camps where the tests were applied. In some camps, no man scoring below C could be considered for officer training." In total, 1.75 million men were tested, making the results the first mass-produced written tests of intelligence, though they were considered dubious and unusable, for reasons including the high variability of test implementation across different camps and questions that tested familiarity with American culture rather than intelligence. After the war, positive publicity promoted by army psychologists helped to make psychology a respected field. Subsequently, there was an increase in jobs and funding in psychology in the United States. Group intelligence tests were developed and became widely used in schools and industry. The results of these tests, which at the time reaffirmed contemporary racism and nationalism, are considered controversial and dubious, having rested on certain contested assumptions: that intelligence was heritable, innate, and could be reduced to a single number; that the tests were administered systematically; and that the test questions actually tested for innate intelligence rather than reflecting environmental factors. The tests also allowed for the bolstering of jingoist narratives in the context of increased immigration, which may have influenced the passing of the Immigration Restriction Act of 1924.

L.L. Thurstone argued for a model of intelligence comprising seven unrelated factors (verbal comprehension, word fluency, number facility, spatial visualization, associative memory, perceptual speed, and inductive reasoning). While not widely used, Thurstone's model influenced later theories.

David Wechsler produced the first version of his test in 1939. It gradually became more popular and overtook the Stanford-Binet in the 1960s. It has been revised several times, as is common for IQ tests, to incorporate new research. One explanation for its popularity is that psychologists and educators wanted more information than the single score from the Binet, and Wechsler's ten or more subtests provided this. Another is that the Stanford-Binet test reflected mostly verbal abilities, while the Wechsler test also reflected nonverbal abilities. The Stanford-Binet has also been revised several times and is now similar to the Wechsler in several respects, but the Wechsler continues to be the most popular test in the United States.

Eugenics, the set of beliefs and practices which aims at improving the genetic quality of the human population, played a significant role in the history and culture of the United States during the Progressive Era, from the late 19th century until US involvement in World War II. The American eugenics movement was rooted in the biological determinist ideas of the British scientist Sir Francis Galton. 
In 1883, Galton first used the word eugenics to describe the biological improvement of human genes and the concept of being "well-born". He believed that differences in a person's ability were acquired primarily through genetics and that eugenics could be implemented through selective breeding in order for the human race to improve in its overall quality, thereby allowing humans to direct their own evolution.

Goddard was a eugenicist. In 1908, he published his own version, "The Binet and Simon Test of Intellectual Capacity", and actively promoted the test. He quickly extended the use of the scale to the public schools (1913), to immigration (Ellis Island, 1914), and to a court of law (1914). Unlike Galton, who promoted eugenics through selective breeding for positive traits, Goddard aligned with the US eugenics movement's aim of eliminating "undesirable" traits. Goddard used the term "feeble-minded" to refer to people who did not perform well on the test. He argued that "feeble-mindedness" was caused by heredity, and that feeble-minded people should thus be prevented from giving birth, whether by institutional isolation or by sterilization surgery. At first, sterilization targeted the disabled but was later extended to poor people. Goddard's intelligence test was endorsed by the eugenicists to push for laws for forced sterilization. Different states adopted sterilization laws at different paces. These laws, whose constitutionality was upheld by the Supreme Court in the 1927 ruling Buck v. Bell, forced the sterilization of over 64,000 people in the United States. California's sterilization program was so effective that the Nazis turned to its government for advice on how to prevent the birth of the "unfit". While the US eugenics movement lost much of its momentum in the 1940s in view of the horrors of Nazi Germany, advocates of eugenics (including the Nazi geneticist Otmar Freiherr von Verschuer) continued to work and promote their ideas in the United States. In later decades, some eugenic principles have made a resurgence as a voluntary means of selective reproduction, with some calling them "new eugenics". As it becomes possible to test for and correlate genes with IQ (and its proxies), ethicists and embryonic genetic testing companies are attempting to understand the ways in which the technology can be ethically deployed.

Raymond Cattell (1941) proposed two types of cognitive abilities in a revision of Spearman's concept of general intelligence. Fluid intelligence (Gf) was hypothesized as the ability to solve novel problems by using reasoning, and crystallized intelligence (Gc) as a knowledge-based ability that was heavily dependent on education and experience. In addition, fluid intelligence was hypothesized to decline with age, while crystallized intelligence was largely resistant to the effects of aging. The theory was almost forgotten but was revived by his student John L. Horn (1966), who later argued that Gf and Gc were only two among several factors and who eventually identified nine or ten broad abilities. The theory continued to be called Gf-Gc theory.

John B. Carroll (1993), after a comprehensive reanalysis of earlier data, proposed the three-stratum theory, a hierarchical model with three levels. The bottom stratum consists of narrow abilities that are highly specialized (e.g., induction, spelling ability). The second stratum consists of broad abilities, of which Carroll identified eight. 
Carroll accepted Spearman's concept of general intelligence, for the most part, as a representation of the uppermost, third stratum. In 1999, a merging of the Gf-Gc theory of Cattell and Horn with Carroll's three-stratum theory led to the Cattell–Horn–Carroll theory (CHC theory), which has greatly influenced many of today's broad IQ tests. In CHC theory, a hierarchy of factors is used; "g" is at the top, and under it are ten broad abilities that are in turn subdivided into seventy narrow abilities. Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement rather than IQ, and Gt may be difficult to measure without special equipment. "g" was earlier often subdivided into only Gf and Gc, which were thought to correspond to the nonverbal or performance subtests and the verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex. Modern comprehensive IQ tests do not stop at reporting a single IQ score. Although they still give an overall score, they now also give scores for many of these more restricted abilities, identifying particular strengths and weaknesses of an individual.

An alternative to standard IQ tests, meant to test the proximal development of children, originated in the writings of psychologist Lev Vygotsky (1896–1934) during the last two years of his life. According to Vygotsky, the maximum level of complexity and difficulty of problems that a child is capable of solving under some guidance indicates the child's level of potential development. The difference between this level of potential and the lower level of unassisted performance indicates the child's zone of proximal development. The combination of the two indexes, the level of actual development and the zone of proximal development, provides, according to Vygotsky, a significantly more informative indicator of psychological development than an assessment of the level of actual development alone. His ideas on the zone of development were later developed in a number of psychological and educational theories and practices, most notably under the banner of dynamic assessment, which seeks to measure developmental potential (for instance, in the work of Reuven Feuerstein and his associates, who have criticized standard IQ testing for its putative assumption or acceptance of "fixed and immutable" characteristics of intelligence or cognitive functioning). Dynamic assessment was further elaborated in the work of Ann Brown and John D. Bransford and in the theories of multiple intelligences authored by Howard Gardner and Robert Sternberg.

J.P. Guilford's Structure of Intellect (1967) model of intelligence used three dimensions which, when combined, yielded a total of 120 types of intelligence. It was popular in the 1970s and early 1980s but faded owing to both practical problems and theoretical criticisms. Alexander Luria's earlier work on neuropsychological processes led to the PASS theory (1997), which argued that looking only at one general factor was inadequate for researchers and clinicians who worked with learning disabilities, attention disorders, intellectual disability, and interventions for such disabilities. The PASS model covers four kinds of processes: planning, attention/arousal, simultaneous processing, and successive processing. 
The planning processes involve decision making, problem solving, and the performance of activities, and require goal setting and self-monitoring. The attention/arousal process involves selectively attending to a particular stimulus, ignoring distractions, and maintaining vigilance. Simultaneous processing involves the integration of stimuli into a group and requires the observation of relationships. Successive processing involves the integration of stimuli into serial order. The planning and attention/arousal components come from structures located in the frontal lobe, and the simultaneous and successive processes come from structures located in the posterior region of the cortex. The PASS theory has influenced some recent IQ tests and has been seen as a complement to the Cattell–Horn–Carroll theory described above.

There are a variety of individually administered IQ tests in use in the English-speaking world. The most commonly used individual IQ test series is the Wechsler Adult Intelligence Scale (WAIS) for adults and the Wechsler Intelligence Scale for Children (WISC) for school-age test-takers. Other commonly used individual IQ tests (some of which do not label their standard scores as "IQ" scores) include the current versions of the Stanford-Binet Intelligence Scales, the Woodcock-Johnson Tests of Cognitive Abilities, the Kaufman Assessment Battery for Children, the Cognitive Assessment System, and the Differential Ability Scales.

IQ scores are ordinally scaled. Although one standard deviation is 15 points and two standard deviations are 30 points, this does not imply that mental ability is linearly related to IQ, such that IQ 50 would mean half the cognitive ability of IQ 100. In particular, IQ points are not percentage points.

Psychometricians generally regard IQ tests as having high statistical reliability. Reliability represents the measurement consistency of a test: a reliable test produces similar scores upon repetition. On aggregate, IQ tests exhibit high reliability, although test-takers may obtain varying scores when taking the same test on different occasions, and varying scores when taking different IQ tests at the same age. Like all statistical quantities, any particular estimate of IQ has an associated standard error that measures uncertainty about the estimate. For modern tests, the standard error of measurement is about three points. For individuals with very low scores, the 95% confidence interval may be greater than 40 points, potentially complicating the accuracy of diagnoses of intellectual disability. By the same token, high IQ scores are significantly less reliable than those near the population median. Reports of IQ scores much higher than 160 are considered dubious. With regard to unrepresentative scores, low motivation or high anxiety can occasionally lower a person's score.

A test being reliable does not automatically make it valid. While IQ tests are generally considered to measure some forms of intelligence, they may fail to serve as an accurate measure of broader definitions of human intelligence, such as creativity and social intelligence. For this reason, psychologist Wayne Weiten argues that their construct validity must be carefully qualified and not overstated. According to Weiten, "IQ tests are valid measures of the kind of intelligence necessary to do well in academic work. But if the purpose is to assess intelligence in a broader sense, the validity of IQ tests is questionable." 
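The figures quoted above fit together through the classical test theory formula for the standard error of measurement, SEM = SD * sqrt(1 - reliability). The short sketch below applies it: with the IQ scale's SD of 15 and a reliability of about 0.96, the SEM comes out to roughly three points, and a 95% confidence interval spans about plus or minus six points. The reliability value here is illustrative, not a property of any particular test.

    import math

    def sem(sd: float, reliability: float) -> float:
        # Classical test theory: standard error of measurement.
        return sd * math.sqrt(1.0 - reliability)

    def ci95(observed_iq: float, sd: float = 15.0, reliability: float = 0.96):
        # Approximate 95% confidence interval around an observed IQ score.
        e = sem(sd, reliability)
        return observed_iq - 1.96 * e, observed_iq + 1.96 * e

    print(sem(15.0, 0.96))   # -> 3.0
    print(ci95(100.0))       # -> approximately (94.1, 105.9)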
Along these same lines, critics such as Keith Stanovich do not dispute the reliability of IQ test scores or their capacity to predict some kinds of achievement, but argue that basing a concept of intelligence on IQ test scores alone neglects other important aspects of mental ability. Robert Sternberg, another significant critic of IQ as the main measure of human cognitive abilities, argued that reducing the concept of intelligence to the measure of "g" does not fully account for the different skills and knowledge types that produce success in human society. A 2005 study found that "differential validity in prediction suggests that the WAIS-R test may contain cultural influences that reduce the validity of the WAIS-R as a measure of cognitive ability for Mexican American students," indicating a weaker positive correlation relative to sampled white students. Other recent studies have questioned the culture-fairness of IQ tests when used in South Africa. Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for autistic children; the alternative of using developmental or adaptive skills measures provides relatively poor measures of intelligence in autistic children and may have resulted in incorrect claims that a majority of autistic children are of low intelligence.

Some scientists have disputed the value of IQ as a measure of intelligence altogether. In "The Mismeasure of Man" (1981, expanded edition 1996), evolutionary biologist Stephen Jay Gould compared IQ testing with the now-discredited practice of determining intelligence via craniometry, arguing that both are based on the fallacy of reification, "our tendency to convert abstract concepts into entities". Gould's argument sparked a great deal of debate, and the book is listed as one of "Discover Magazine"'s "25 Greatest Science Books of All Time". Despite these objections, clinical psychologists generally regard IQ scores as having sufficient statistical validity for many clinical purposes.

Differential item functioning (DIF), sometimes referred to as measurement bias, is a phenomenon in which participants from different groups (e.g. gender, race, disability) with the same latent ability give different answers to specific questions on the same IQ test. DIF analysis examines such specific items while measuring participants' latent abilities on other, similar questions. A consistently different group response to a specific question among similar types of questions can indicate an effect of DIF. It does not count as differential item functioning if both groups have an equally valid chance of giving different responses to the same questions. Such bias can be a result of culture, educational level, and other factors that are independent of group traits. DIF is only considered if test-takers from different groups with the same underlying latent ability level have a different chance of giving specific responses. Such questions are usually removed in order to make the test equally fair for both groups. Common techniques for analyzing DIF are item response theory (IRT) based methods, the Mantel-Haenszel procedure, and logistic regression.

Since the early 20th century, raw scores on IQ tests have increased in most parts of the world. When a new version of an IQ test is normed, the standard scoring is set so that performance at the population median results in a score of IQ 100. 
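The logistic-regression approach to DIF mentioned above is essentially a nested-model comparison: an item is flagged when group membership (and its interaction with ability) improves the prediction of item responses beyond ability alone. The sketch below demonstrates this on simulated data; it is a schematic illustration of the idea, not a validated DIF procedure, and the simulated effect sizes are arbitrary.

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n = 2000
    ability = rng.normal(size=n)              # matched latent ability
    group = rng.integers(0, 2, size=n)        # 0/1 group membership

    # Simulate an item with uniform DIF: same ability slope, shifted difficulty.
    logit = 1.2 * ability - 0.5 * group
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

    base = sm.Logit(y, sm.add_constant(ability)).fit(disp=0)
    X = sm.add_constant(np.column_stack([ability, group, ability * group]))
    full = sm.Logit(y, X).fit(disp=0)

    lr_stat = 2.0 * (full.llf - base.llf)     # likelihood-ratio statistic, df = 2
    print(lr_stat, chi2.sf(lr_stat, df=2))    # a small p-value flags possible DIF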
The phenomenon of rising raw score performance means if test-takers are scored by a constant standard scoring rule, IQ test scores have been rising at an average rate of around three IQ points per decade. This phenomenon was named the Flynn effect in the book "The Bell Curve" after James R. Flynn, the author who did the most to bring this phenomenon to the attention of psychologists. Researchers have been exploring the issue of whether the Flynn effect is equally strong on performance of all kinds of IQ test items, whether the effect may have ended in some developed nations, whether there are social subgroup differences in the effect, and what possible causes of the effect might be. A 2011 textbook, "IQ and Human Intelligence", by N. J. Mackintosh, noted the Flynn effect demolishes the fears that IQ would be decreased. He also asks whether it represents a real increase in intelligence beyond IQ scores. A 2011 psychology textbook, lead authored by Harvard Psychologist Professor Daniel Schacter, noted that humans' inherited intelligence could be going down while acquired intelligence goes up. Research has revealed that the Flynn effect has slowed or reversed course in several Western countries beginning in the late 20th century. The phenomenon has been termed the "negative Flynn effect". A study of Norwegian military conscripts' test records found that IQ scores have been falling for generations born after the year 1975, and that the underlying nature of both initial increasing and subsequent falling trends appears to be environmental rather than genetic. IQ can change to some degree over the course of childhood. However, in one longitudinal study, the mean IQ scores of tests at ages 17 and 18 were correlated at r=0.86 with the mean scores of tests at ages five, six, and seven and at r=0.96 with the mean scores of tests at ages 11, 12, and 13. For decades, practitioners' handbooks and textbooks on IQ testing have reported IQ declines with age after the beginning of adulthood. However, later researchers pointed out this phenomenon is related to the Flynn effect and is in part a cohort effect rather than a true aging effect. A variety of studies of IQ and aging have been conducted since the norming of the first Wechsler Intelligence Scale drew attention to IQ differences in different age groups of adults. Current consensus is that fluid intelligence generally declines with age after early adulthood, while crystallized intelligence remains intact. Both cohort effects (the birth year of the test-takers) and practice effects (test-takers taking the same form of IQ test more than once) must be controlled to gain accurate data. It is unclear whether any lifestyle intervention can preserve fluid intelligence into older ages. The exact peak age of fluid intelligence or crystallized intelligence remains elusive. Cross-sectional studies usually show that especially fluid intelligence peaks at a relatively young age (often in the early adulthood) while longitudinal data mostly show that intelligence is stable until mid-adulthood or later. Subsequently, intelligence seems to decline slowly. Environmental and genetic factors play a role in determining IQ. Their relative importance has been the subject of much research and debate. The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late adolescents and adults. 
Heritability measures in infancy are as low as 0.2, around 0.4 in middle childhood, and as high as 0.9 in adulthood. One proposed explanation is that people with different genes tend to reinforce the effects of those genes, for example by seeking out different environments. Family members have aspects of environments in common (for example, characteristics of the home). This shared family environment accounts for 0.25–0.35 of the variation in IQ in childhood. By late adolescence, it is quite low (zero in some studies). The effect for several other psychological traits is similar. These studies have not looked at the effects of extreme environments, such as in abusive families. Although parents treat their children differently, such differential treatment explains only a small amount of nonshared environmental influence. One suggestion is that children react differently to the same environment because of different genes. More likely influences may be the impact of peers and other experiences outside the family. A very large proportion of the over 17,000 human genes are thought to have an effect on the development and functionality of the brain. While a number of individual genes have been reported to be associated with IQ, none have a strong effect. Deary and colleagues (2009) reported that no finding of a strong single gene effect on IQ has been replicated. Recent findings of gene associations with normally varying intellectual differences in adults and children continue to show weak effects for any one gene. David Rowe reported an interaction of genetic effects with socioeconomic status, such that the heritability was high in high-SES families, but much lower in low-SES families. In the US, this has been replicated in infants, children, adolescents, and adults. Outside the US, studies show no link between heritability and SES. Some effects may even reverse sign outside the US. Dickens and Flynn (2001) have argued that genes for high IQ initiate an environment-shaping feedback cycle, with genetic effects causing bright children to seek out more stimulating environments that then further increase their IQ. In Dickens' model, environment effects are modeled as decaying over time. In this model, the Flynn effect can be explained by an increase in environmental stimulation independent of it being sought out by individuals. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term IQ gains if they enduringly raised children's drive to seek out cognitively demanding experiences. In general, educational interventions, as those described below, have shown short-term effects on IQ, but long-term follow-up is often missing. For example, in the US, very large intervention programs such as the Head Start Program have not produced lasting gains in IQ scores. Even when students improve their scores on standardized tests, they do not always improve their cognitive abilities, such as memory, attention and speed. More intensive, but much smaller projects, such as the Abecedarian Project, have reported lasting effects, often on socioeconomic status variables, rather than IQ. Recent studies have shown that training in using one's working memory may increase IQ. A study on young adults published in April 2008 by a team from the Universities of Michigan and Bern supports the possibility of the transfer of fluid intelligence from specifically designed working memory training. 
Further research will be needed to determine nature, extent and duration of the proposed transfer. Among other questions, it remains to be seen whether the results extend to other kinds of fluid intelligence tests than the matrix test used in the study, and if so, whether, after training, fluid intelligence measures retain their correlation with educational and occupational achievement or if the value of fluid intelligence for predicting performance on other tasks changes. It is also unclear whether the training is durable of extended periods of time. Musical training in childhood correlates with higher than average IQ. However, a study of 10,500 twins found no effects on IQ, suggesting that the correlation was caused by genetic confounders. A meta-analysis concluded that "Music training does not reliably enhance children and young adolescents' cognitive or academic skills, and that previous positive findings were probably due to confounding variables." It is popularly thought that listening to classical music raises IQ. However, multiple attempted replications (e.g.) have shown that this is at best a short-term effect (lasting no longer than 10 to 15 minutes), and is not related to IQ-increase. Several neurophysiological factors have been correlated with intelligence in humans, including the ratio of brain weight to body weight and the size, shape, and activity level of different parts of the brain. Specific features that may affect IQ include the size and shape of the frontal lobes, the amount of blood and chemical activity in the frontal lobes, the total amount of gray matter in the brain, the overall thickness of the cortex, and the glucose metabolic rate. Health is important in understanding differences in IQ test scores and other measures of cognitive ability. Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood when the brain is growing and the blood–brain barrier is less effective. Such impairment may sometimes be permanent, sometimes be partially or wholly compensated for by later growth. Since about 2010, researchers such as Eppig, Hassel, and MacKenzie have found a very close and consistent link between IQ scores and infectious diseases, especially in the infant and preschool populations and the mothers of these children. They have postulated that fighting infectious diseases strains the child's metabolism and prevents full brain development. Hassel postulated that it is by far the most important factor in determining population IQ. However, they also found that subsequent factors such as good nutrition and regular quality schooling can offset early negative effects to some extent. Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Improvements in nutrition, and in public policy in general, have been implicated in worldwide IQ increases. Cognitive epidemiology is a field of research that examines the associations between intelligence test scores and health. Researchers in the field argue that intelligence measured at an early age is an important predictor of later health and mortality differences. 
The American Psychological Association's report "Intelligence: Knowns and Unknowns" states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers. The correlation between IQ scores and grades is about .50. This means that the explained variance is 25%. Achieving good grades depends on many factors other than IQ, such as "persistence, interest in school, and willingness to study" (p. 81). It has been found that the correlation of IQ scores with school performance depends on the IQ measurement used. For undergraduate students, the Verbal IQ as measured by WAIS-R has been found to correlate significantly (0.53) with the grade point average (GPA) of the last 60 hours (credits). In contrast, Performance IQ correlation with the same GPA was only 0.22 in the same study. Some measures of educational aptitude correlate highly with IQ tests for instance, Frey and Detterman (2004) reported a correlation of 0.82 between "g" (general intelligence factor) and SAT scores; another research found a correlation of 0.81 between "g" and GCSE scores, with the explained variance ranging "from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design". According to Schmidt and Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability." The validity of IQ as a predictor of job performance is above zero for all work studied to date, but varies with the type of job and across different studies, ranging from 0.2 to 0.6. The correlations were higher when the unreliability of measurement methods was controlled for. While IQ is more strongly correlated with reasoning and less so with motor function, IQ-test scores predict performance ratings in all occupations. That said, for highly qualified activities (research, management) low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally-skilled activities, athletic strength (manual strength, speed, stamina, and coordination) are more likely to influence performance. The prevailing view among academics is that it is largely through the quicker acquisition of job-relevant knowledge that higher IQ mediates job performance. This view has been challenged by Byington & Felps (2010), who argued that "the current applications of IQ-reflective tests allow individuals with high IQ scores to receive greater access to developmental resources, enabling them to acquire additional capabilities over time, and ultimately perform their jobs better." In establishing a causal direction to the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores. Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability, but not specific ability scores, predict academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math beyond the effect of general cognitive ability. The US military has minimum enlistment standards at about the IQ 85 level. There have been two experiments with lowering this to 80 but in both cases these men could not master soldiering well enough to justify their costs. 
It has been suggested that "in economic terms it appears that the IQ score measures something with decreasing marginal value" and it "is important to have enough of it, but having lots and lots does not buy you that much". However, large-scale longitudinal studies indicate an increase in IQ translates into an increase in performance at all levels of IQ: i.e. ability and job performance are monotonically linked at all IQ levels. The link from IQ to wealth is much less strong than that from IQ to job performance. Some studies indicate that IQ is unrelated to net worth. The American Psychological Association's 1995 report "" stated that IQ scores accounted for about a quarter of the social status variance and one-sixth of the income variance. Statistical controls for parental SES eliminate about a quarter of this predictive power. Psychometric intelligence appears as only one of a great many factors that influence social outcomes. Charles Murray (1998) showed a more substantial effect of IQ on income independent of family background. In a meta-analysis, Strenze (2006) reviewed much of the literature and estimated the correlation between IQ and income to be about 0.23. Some studies claim that IQ only accounts for (explains) a sixth of the variation in income because many studies are based on young adults, many of whom have not yet reached their peak earning capacity, or even their education. On pg 568 of "", Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4 (one sixth or 16% of the variance), the relationship increases with age, and peaks at middle age when people have reached their maximum career potential. In the book, "A Question of Intelligence", Daniel Seligman cites an IQ income correlation of 0.5 (25% of the variance). A 2002 study further examined the impact of non-IQ factors on income and concluded that an individual's location, inherited wealth, race, and schooling are more important as factors in determining income than IQ. The American Psychological Association's 1996 report "Intelligence: Knowns and Unknowns" stated that the correlation between IQ and crime was −0.2. It was −0.19 between IQ scores and number of juvenile offenses in a large Danish sample; with social class controlled, the correlation dropped to −0.17. A correlation of 0.20 means that the explained variance is 4%. The causal links between psychometric ability and social outcomes may be indirect. Children with poor scholastic performance may feel alienated. Consequently, they may be more likely to engage in delinquent behavior, compared to other children who do well. In his book "" (1998), Arthur Jensen cited data which showed that, regardless of race, people with IQs between 70 and 90 have higher crime rates than people with IQs below or above this range, with the peak range being between 80 and 90. The 2009 "Handbook of Crime Correlates" stated that reviews have found that around eight IQ points, or 0.5 SD, separate criminals from the general population, especially for persistent serious offenders. It has been suggested that this simply reflects that "only dumb ones get caught" but there is similarly a negative relation between IQ and self-reported offending. That children with conduct disorder have lower IQ than their peers "strongly argues" for the theory. 
A study of the relationship between US county-level IQ and US county-level crime rates found that higher average IQs were associated with lower levels of property crime, burglary, larceny rate, motor vehicle theft, violent crime, robbery, and aggravated assault. These results were "not confounded by a measure of concentrated disadvantage that captures the effects of race, poverty, and other social disadvantages of the county." Multiple studies conducted in Scotland have found that higher IQs in early life are associated with lower mortality and morbidity rates later in life. There is considerable variation within and overlap among these categories. People with high IQs are found at all levels of education and occupational categories. The biggest difference occurs for low IQs with only an occasional college graduate or professional scoring below 90. With operationalization and methodology derived from the general intelligence factor "g", a new scientific understanding of collective intelligence, defined as a group's general ability to perform a wide range of tasks, aims to explain intelligent behavior of groups. Goal is to detect and explain a general intelligence factor "c" for groups, parallel to the "g" factor for individuals. As "g" is highly interrelated with the concept of IQ, this measurement of collective intelligence can be interpreted as intelligence quotient for groups (Group-IQ) even though the score is not a quotient per se. Current evidence suggests that this Group-IQ is only moderately correlated with group members' IQs but with other correlates such as group members' Theory of Mind. Among the most controversial issues related to the study of intelligence is the observation that intelligence measures such as IQ scores vary between ethnic and racial groups. While there is little scholarly debate about the "existence" of some of these differences, current scientific consensus tells us that there is no evidence for a genetic component behind them. The existence of differences in IQ between the sexes remains controversial, and largely depends on which tests are performed. With the advent of the concept of "g" or general intelligence, many researchers have argued that there are no significant sex differences in general intelligence, though ability in particular types of intelligence does appear to vary. Thus, while some test batteries show slightly greater intelligence in males, others show greater intelligence in females. In particular, studies have shown female subjects performing better on tasks related to verbal ability, and males performing better on tasks related to rotation of objects in space, often categorized as spatial ability. These differences obtain, as Hunt (2010) observes, "even though men and women are essentially equal in general intelligence". Some research indicates that male advantages on some cognitive tests are minimized when controlling for socioeconomic factors. Other research has concluded that there is slightly larger variability in male scores in certain ares compared to female scores, which results in slightly more males than females in the top and bottom of the IQ distribution. The existence of differences between male and female performance on math-related tests is contested, and a meta-analysis focusing on gender differences in math performance found nearly identical performance for boys and girls. 
Currently, most IQ tests, including popular batteries such as the WAIS and the WISC-R, are constructed so that there are no overall score differences between females and males. While the concept of "race" is a social construct, discussions of a purported relationship between race and intelligence, as well as claims of genetic differences in intelligence along racial lines, have appeared in both popular science and academic research since the inception of IQ testing in the early 20th century. Despite the tremendous amount of research done on the topic, no scientific evidence has emerged that the average IQ scores of different population groups can be attributed to genetic differences between those groups. A 1996 task force investigation on intelligence sponsored by the American Psychological Association concluded that there were significant variations in IQ across races. However, a systematic analysis by William Dickens and James Flynn (2006) showed the gap between black and white Americans to have closed dramatically during the period between 1972 and 2002, suggesting that, in their words, the "constancy of the Black-White IQ gap is a myth." The problem of determining the causes underlying racial variation has been discussed at length as a classic question of "nature versus nurture", for instance by Alan S. Kaufman and Nathan Brody. Researchers such as statistician Bernie Devlin have argued that there are insufficient data to conclude that the black-white gap is due to genetic influences. Dickens and Flynn argued more positively that their results refute the possibility of a genetic origin, concluding that "the environment has been responsible" for observed differences. A review article published in 2012 by leading scholars on human intelligence reached a similar conclusion, after reviewing the prior research literature, that group differences in IQ are best understood as environmental in origin. More recently, geneticist and neuroscientist Kevin Mitchell has argued, on the basis of basic principles of population genetics, that "systematic genetic differences in intelligence between large, ancient populations" are "inherently and deeply implausible". The effects of stereotype threat have been proposed as an explanation for differences in IQ test performance between racial groups, as have issues related to cultural difference and access to education. In the United States, certain public policies and laws regarding military service, education, public benefits, capital punishment, and employment incorporate an individual's IQ into their decisions. However, in the case of Griggs v. Duke Power Co. in 1971, for the purpose of minimizing employment practices that disparately impacted racial minorities, the U.S. Supreme Court banned the use of IQ tests in employment, except when linked to job performance via a job analysis. Internationally, certain public policies, such as improving nutrition and prohibiting neurotoxins, have as one of their goals raising, or preventing a decline in, intelligence. A diagnosis of intellectual disability is in part based on the results of IQ testing. Borderline intellectual functioning is a categorization where a person has below average cognitive ability (an IQ of 71–85), but the deficit is not as severe as intellectual disability (70 or below). In the United Kingdom, the eleven plus exam which incorporated an intelligence test has been used from 1945 to decide, at eleven years of age, which type of school a child should go to. 
They have been much less used since the widespread introduction of comprehensive schools. IQ classification is the practice used by IQ test publishers for designating IQ score ranges into various categories with labels such as "superior" or "average." IQ classification was preceded historically by attempts to classify human beings by general ability based on other forms of behavioral observation. Those other forms of behavioral observation are still important for validating classifications based on IQ tests. There are social organizations, some international, which limit membership to people who have scores as high as or higher than the 98th percentile (2 standard deviations above the mean) on some IQ test or equivalent. Mensa International is perhaps the best known of these. The largest 99.9th percentile (3 standard deviations above the mean) society is the Triple Nine Society.
https://en.wikipedia.org/wiki?curid=14892
Indian Institute of Technology Kanpur Indian Institute of Technology Kanpur (also known as IIT Kanpur or IITK) is a public technical and research university located in Kanpur, Uttar Pradesh. It was declared to be an Institute of National Importance by the Government of India under the Institutes of Technology Act. Established in 1960 as one of the first Indian Institutes of Technology, the institute was created with the assistance of a consortium of nine US research universities as part of the Kanpur Indo-American Programme (KIAP). IIT Kanpur was established by an Act of Parliament in 1959. The institute was started in December 1959 in a room in the canteen building of the Harcourt Butler Technological Institute at Agricultural Gardens in Kanpur. In 1963, the institute moved to its present location, on the Grand Trunk Road near the locality of Kalyanpur in Kanpur district. The campus was designed by Achyut Kavinde in a modernist style. During the first ten years of its existence, a consortium of nine US universities (namely MIT, UCB, California Institute of Technology, Princeton University, Carnegie Institute of Technology, University of Michigan, Ohio State University, Case Institute of Technology and Purdue University) helped set up IIT Kanpur's research laboratories and academic programmes under the Kanpur Indo-American Programme (KIAP). The first director of the institute was P. K. Kelkar (after whom the Central Library was renamed in 2002). Under the guidance of economist John Kenneth Galbraith, IIT Kanpur was the first institute in India to offer Computer science education. The earliest computer courses were started at IIT Kanpur in August 1963 on an IBM 1620 system. The initiative for computer education came from the Electrical engineering department, then under the chairmanship of Prof. H.K. Kesavan, who was concurrently the chairman of Electrical Engineering and head of the Computer Centre. Prof. Harry Huskey of the University of California, Berkeley, who preceded Kesavan, helped with the computer activity at IIT-Kanpur. In 1971, the institute began an independent academic program in Computer Science and Engineering, leading to MTech and PhD degrees. In 1972 the KIAP program ended, in part because of tensions due to the U.S. support of Pakistan. Government funding was also reduced as a reaction to the sentiment that the IIT's were contributing to the brain drain. The institute's annual technical festival, Techkriti, was first started in 1995. IIT Kanpur is located on the Grand Trunk Road, west of Kanpur City and measures close to . This land was donated by the Government of Uttar Pradesh in 1960 and by March 1963 the institute had moved to its current location. The institute has around 6478 students with 3938 undergraduate students and 2540 postgraduate students and about 500 research associates. IIT Kanpur is to open an extension centre in Noida with the plan of making a small convention centre there for supporting outreach activities. Its foundation was laid on 4 December 2012 on 5 acres of land allocated by Uttar Pradesh state government in the sector-62 of Noida city, which is less than an hour's journey from New Delhi and the Indira Gandhi International Airport. The cost of construction is estimated to be about 25 crores. The new campus will have an auditorium, seminar halls for organising national and international conferences and an International Relations Office along with a 7-storey guest house. 
Several short-term management courses and refresher courses meant for distance learning will be available at the extension center.News from. IITK. Retrieved 9 October 2013. Being a major industrial town, Kanpur has a good connectivity by rail and by road but it lags behind in terms of air connectivity. IIT Kanpur was suffering significantly in comparison to IIT Delhi and IIT Bombay due to this reason as far as visiting companies and other dignitaries are concerned On 1 June 2013, a helicopter ferry service was started at IIT Kanpur run by Pawan Hans Helicopters Limited. In its initial run the service connects IIT Kanpur to Lucknow, but it is planned to later extend it to New Delhi. Currently there are two flights daily to and from Lucknow Airport with a duration of 25 minutes. Lucknow Airport operates both international and domestic flights to major cities. IIT Kanpur is the first academic institution in the country to provide such a service. The estimated charges are Rs. 6000 (US$100) per person. If anyone would like to avail the facility he/she have to contact Student Placement Office (SPO) at IIT Kanpur, since the helicopter service is subject to availability of chopper rights. The campus also has airstrips which allows flight workshops and joyrides for students. The institute has set up an office in New York with alumnus, Sanjiv Khosla designated as the overseas brand ambassador of the institute. It is located on 62, William Street, Manhattan. The office aims to hunt for qualified and capable faculty abroad, facilitate internship opportunities in North American universities and be conduit for research tie ups with various US universities. The New York Office also tries to amass funds through the alumni based there. A system that invites students and faculty of foreign institutes to IIT Kanpur is also being formulated. Undergraduate admissions until 2012 were being done through the national-level Indian Institute of Technology Joint Entrance Examination (IIT-JEE). Following the Ministry of Human Resource Development's decision to replace IIT-JEE with a common engineering entrance examination, IIT Kanpur's admissions are now based on JEE (Joint Entrance Examination) -Advanced level along with other IITs. Postgraduate admissions are made through the Graduate Aptitude Test in Engineering and Common Aptitude Test. The Students' Gymkhana is the students' government organization of IIT Kanpur, established in 1962. The Students' Gymkhana functions mainly through the Students' Senate, an elected student representative body composed of senators elected from each batch and the six elected executives: The number of senators in the Students' Senate is around 50–55. A senator is elected for every 150 students of IIT Kanpur. The meetings of the Students' Senate are chaired by the chairperson, Students' Senate, who is elected by the Senate. The Senate lays down the guidelines for the functions of the executives, their associated councils, the Gymkhana Festivals and other matters pertaining to the Student body at large. The Students' Senate has a say in the policy and decision making bodies of the institute. The president, Students' Gymkhana and the chairperson, Students' Senate are among the special invitees to the Institute Academic Senate. The president is usually invited to the meetings of the board of governors when matters affecting students are being discussed. 
Nominees of the Students' Senate are also members of the various standing Committees of the Institute Senate including the disciplinary committee, the Undergraduate and Postgraduate committee, etc. All academic departments have Departmental Undergraduate and Post Graduate Committees consisting of members of the faculty and student nominees. Internationally, IIT Kanpur was ranked 291 in "QS World University Rankings" for 2020. It was ranked 65 in QS Asia Rankings 2020 and 25 among BRICS nations in 2019. The "Times Higher Education World University Rankings" ranked it 601–800 globally in the 2020 ranking 125 in Asia and 77 among Emerging Economies University Rankings 2020. In India, among engineering colleges, fifth by "Outlook India" and fourth by "The Week" in 2019. It was ranked fourth among engineering colleges in India by the "National Institutional Ranking Framework" (NIRF) in 2020, and sixth overall. It ranked fourth among government engineering colleges by "India Today" in 2018. The Department of Industrial and Management Engineering was ranked 22 among management schools in India by NIRF in 2019. IIT Kanpur offers four-year BTech programs in Aerospace Engineering, Biological Sciences and Bio-engineering, Chemical Engineering, Civil Engineering, Computer Science and Engineering, Electrical Engineering, Materials Science and Engineering and Mechanical Engineering. The admission to these programs is procured through Joint Entrance Examination. IITK offers admission only to bachelor's degree now (discontinuing the integrated course programs), but it can be extended by 1 year to make it integrated, depending on the choice of student and based on his/her performance there at undergraduate level. IIT Kanpur also offers four-year B.S. Programs in Pure and Applied Sciences (Mathematics, Physics and Chemistry in particular), Earth Science and Economics. From 2011, IIT Kanpur has started offering a four-year BS program in sciences and has kept its BTech Program intact. Entry to the five-year MTech/MBA programs and Dual degree programme will be done based on the CPI of students instead of JEE rank. In order to reduce the number of student exams, IIT Kanpur has also abolished the earlier system of conducting two mid-term examinations. Instead, only two examinations (plus two quizzes in most courses depending on the instructor-in-charge, one before mid-semesters and the other after the mid-semesters and before the end-semesters examination), one between the semester and other towards the end of it would be held from the academic session starting July 2011 onward as per Academic Review Committee's recommendations. Postgraduate courses in Engineering offer Master of Technology (MTech), MS (R) and PhD degrees. The institute also offers two-tier MSc courses in areas of basic sciences in which students are admitted through Joint Admission Test for MSc (JAM) exam. The institute also offers M.Des. (2 years), M.B.A. (2 years) and MSc (2 years) degrees. Admissions to MTech is made once a year through Graduate Aptitude Test in Engineering. Admissions to M. Des are made once a year through both Graduate Aptitude Test in Engineering(GATE) and Common Entrance Exam for Design (CEED). Until 2011, admissions to the M.B.A. program were accomplished through the Joint Management Entrance Test (JMET), held yearly, and followed by a Group Discussion/Personal Interview process. In 2011, JMET was replaced by Common Admission Test (CAT). The academic departments at IIT Kanpur are: The campus is spread over an area of . 
Facilities include the National Wind Tunnel Facility. Other large research centres include the Advanced Centre for Material Science, a Bio-technology centre, the Advanced Centre for Electronic Systems, and the Samtel Centre for Display Technology, Centre for Mechatronics, Centre for Laser Technology, Prabhu Goel Research Centre for Computer and Internet Security, Facility for Ecological and Analytical Testing. The departments have their own libraries. The institute has its own airfield, for flight testing and gliding. PK Kelkar Library (formerly Central Library) is an academic library of the institute with a collection of more than 300,000 volumes, and subscriptions to more than 1,000 periodicals. The library was renamed to its present name in 2003 after Dr. P K Kelkar, the first director of the institute. It is housed in a three-story building, with a total floor area of 6973 square metres. The Abstracting and Indexing periodicals, Microform and CD-ROM databases, technical reports, Standards and thesis are in the library. Each year, about 4,500 books and journal volumes are added to the library. The New Core Labs (NCL) is 3-storey building with state of the art physics and chemistry laboratories for courses in the first year. The New Core Labs also has Linux and Windows computer labs for the use of first year courses and a Mathematics department laboratory housing machines with high computing power. IIT Kanpur has set up the Startup Innovation and Incubation Centre (SIIC) "(previously known as "SIDBI" Innovation and Incubation Centre)" in collaboration with the Small Industries development Bank of India (SIDBI) aiming to aid innovation, research, and entrepreneurial activities in technology-based areas. SIIC helps business Start-ups to develop their ideas into commercially viable products. A team of students, working under the guidance of faculty members of the institute and scientists of Indian Space Research Organisation (ISRO) have designed and built India's first nano satellite Jugnu, which was successfully launched in orbit on 12 Oct 2011 by ISRO's PSLV-C18. The Computer Centre is one of the advanced computing service centre among academic institution in India. IT hosts IIT Kanpur website and provides personal web space for students and faculties. It also provides a spam filtered email server and high speed fibre optic Internet to all the hostels and the academics. Users have multiple options to choose among various interfaces to access mail service. It has Linux and windows laboratories equipped with dozens of high-end software like MATLAB, Autocad, Ansys, Abaqus etc. for use of students. Apart from departmental computer labs, computer centre hosts more than 300 Linux terminals and more than 100 Windows terminals and is continuously available to the students for academic work and recreation. Computer centre has recently adopted an open source software policy for its infrastructure and computing. Various high-end compute and GPU servers are remotely available from data centre for user computation. Computer centre has multiple super computing clusters for research and teaching activity. In June 2014 IIT Kanpur launched their 2nd supercomputer which is India's 5th most powerful supercomputer as of now. The new supercomputer 'Cluster Platform SL230s Gen8' manufactured by Hewlett-Packard has 15,360 cores and a theoretical peak (Rpeak) 307.2 TFlop/s and is the world's 192th most powerful supercomputer as of June 2015. 
Research is controlled by the Office of the Dean of Research and Development. Under the aegis of the Office the students publish the quarterly NERD Magazine (Notes on Engineering Research and Development) which publishes scientific and technical content created by students. Articles may be original work done by students in the form of hobby projects, term projects, internships, or theses. Articles of general interest which are informative but do not reflect original work are also accepted. The institute is part of the European Research and Education Collaboration with Asia (EURECA) programme since 2008. Along with the magazine a student research organisation, PoWER (Promotion of Work Experience and Research) has been started. Under it several independent student groups are working on projects like the Lunar Rover for ISRO, alternate energy solutions under the Group for Environment and Energy Engineering, ICT solutions through a group Young Engineers, solution for diabetes, green community solutions through ideas like zero water and zero waste quality air approach. Through BRaIN (Biological Research and Innovation Network) students interested in solving biological problems get involved in research projects like genetically modifying fruit flies to study molecular systems and developing bio-sensors to detect alcohol levels. A budget of Rs 1.5 to 2 crore has been envisaged to support student projects that demonstrate technology. Assisting the Indian Ordnance Factories in not only upgrading existing products, but also developing new weapon platforms. The students of IIT Kanpur made a nano satellite called Jugnu, which was given by president Pratibha Patil to ISRO for launch. Jugnu is a remote sensing satellite which will be operated by the Indian Institute of Technology Kanpur. It is a nanosatellite which will be used to provide data for agriculture and disaster monitoring. It is a 3-kilogram (6.6 lb) spacecraft, which measures 34 centimetres (13 in) in length by 10 centimetres (3.9 in) in height and width. Its development programme cost around 25 million rupees. It has a design life of one year. Jugnu's primary instrument is the Micro Imaging System, a near infrared camera which will be used to observe vegetation. It also carries a GPS receiver to aid tracking, and is intended to demonstrate a microelectromechanical inertial measurement unit. IITK motorsports is the biggest and most comprehensive student initiative of the college, founded in January 2011. It is a group of students from varied disciplines who aim at designing and fabricating a Formula-style race car for international Formula SAE (Society of Automotive Engineers) events. Most of the components of the car, except the engine, tyres and wheel rims, are designed and manufactured by the team members themselves. The car is designed to provide maximum performance under the constraints of the event, while ensuring the driveability, reliability, driver safety and aesthetics of the car are not compromised. Researchers at IIT Kanpur have developed a series of solar powered UAVs named MARAAL-1 & MARAAL-2. Development of Maraal is notable as it is the first solar powered UAV developed in India. Maraal-2 is fully indigenous.
https://en.wikipedia.org/wiki?curid=14894
Insulin Insulin (, from Latin "insula", 'island') is a peptide hormone produced by beta cells of the pancreatic islets; it is considered to be the main anabolic hormone of the body. It regulates the metabolism of carbohydrates, fats and protein by promoting the absorption of glucose from the blood into liver, fat and skeletal muscle cells. In these tissues the absorbed glucose is converted into either glycogen via glycogenesis or fats (triglycerides) via lipogenesis, or, in the case of the liver, into both. Glucose production and secretion by the liver is strongly inhibited by high concentrations of insulin in the blood. Circulating insulin also affects the synthesis of proteins in a wide variety of tissues. It is therefore an anabolic hormone, promoting the conversion of small molecules in the blood into large molecules inside the cells. Low insulin levels in the blood have the opposite effect by promoting widespread catabolism, especially of reserve body fat. Beta cells are sensitive to blood sugar levels so that they secrete insulin into the blood in response to high level of glucose; and inhibit secretion of insulin when glucose levels are low. Insulin enhances glucose uptake and metabolism in the cells, thereby reducing blood sugar level. Their neighboring alpha cells, by taking their cues from the beta cells, secrete glucagon into the blood in the opposite manner: increased secretion when blood glucose is low, and decreased secretion when glucose concentrations are high. Glucagon increases blood glucose level by stimulating glycogenolysis and gluconeogenesis in the liver. The secretion of insulin and glucagon into the blood in response to the blood glucose concentration is the primary mechanism of glucose homeostasis. Decreased or loss of insulin activity results in diabetes mellitus, a condition of high blood sugar level (hyperglycaemia). There are two types of the disease. In type 1 diabetes mellitus, the beta cells are destroyed by an autoimmune reaction so that insulin can no longer be synthesized or be secreted into the blood. In type 2 diabetes mellitus, the destruction of beta cells is less pronounced than in type 1 diabetes, and is not due to an autoimmune process. Instead, there is an accumulation of amyloid in the pancreatic islets, which likely disrupts their anatomy and physiology. The pathogenesis of type 2 diabetes is not well understood but reduced population of islet beta-cells, reduced secretory function of islet beta-cells that survive, and peripheral tissue insulin resistance are known to be involved. Type 2 diabetes is characterized by increased glucagon secretion which is unaffected by, and unresponsive to the concentration of blood glucose. But insulin is still secreted into the blood in response to the blood glucose. As a result, insulin sugar accumulates in the blood. The human insulin protein is composed of 51 amino acids, and has a molecular mass of 5808 Da. It is a heterodimer of an A-chain and a B-chain, which are linked together by disulfide bonds. Insulin's structure varies slightly between species of animals. Insulin from animal sources differs somewhat in effectiveness (in carbohydrate metabolism effects) from human insulin because of these variations. Porcine insulin is especially close to the human version, and was widely used to treat type 1 diabetics before human insulin could be produced in large quantities by recombinant DNA technologies. Insulin was the first peptide hormone discovered. 
Frederick Banting and Charles Herbert Best, working in the laboratory of J.J.R. Macleod at the University of Toronto, were the first to isolate insulin from dog pancreas in 1921. Frederick Sanger sequenced the amino acid structure in 1951, which made insulin the first protein to be fully sequenced. The crystal structure of insulin in the solid state was determined by Dorothy Hodgkin in 1969. Insulin is also the first protein to be chemically synthesised and produced by DNA recombinant technology. It is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system. Insulin may have originated more than a billion years ago. The molecular origins of insulin go at least as far back as the simplest unicellular eukaryotes. Apart from animals, insulin-like proteins are also known to exist in the Fungi and Protista kingdoms. Insulin is produced by beta cells of the pancreatic islets in most vertebrates and by the Brockmann body in some teleost fish. Cone snails "Conus geographus" and "Conus tulipa", venomous sea snails that hunt small fish, use modified forms of insulin in their venom cocktails. The insulin toxin, closer in structure to fishes' than to snails' native insulin, slows down the prey fishes by lowering their blood glucose levels. The preproinsulin precursor of insulin is encoded by the "INS" gene, which is located on Chromosome 11p15.5. A variety of mutant alleles with changes in the coding region have been identified. A read-through gene, INS-IGF2, overlaps with this gene at the 5' region and with the IGF2 gene at the 3' region. In the pancreatic β cells, glucose is the primary physiological stimulus for the regulation of insulin synthesis. Insulin is mainly regulated through the transcription factors PDX1, NeuroD1, and MafA. During a low-glucose state, PDX1 (pancreatic and duodenal homeobox protein 1) is located in the nuclear periphery as a result of interaction with HDAC1 and 2, which results in downregulation of insulin secretion. An increase in blood glucose levels causes phosphorylation of PDX1, which leads it to undergo nuclear translocation and bind the A3 element within the insulin promoter. Upon translocation it interacts with coactivators HAT p300 and SETD7. PDX1 affects the histone modifications through acetylation and deacetylation as well as methylation. It is also said to suppress glucagon. NeuroD1, also known as β2, regulates insulin exocytosis in pancreatic β cells by directly inducing the expression of genes involved in exocytosis. It is localized in the cytosol, but in response to high glucose it becomes glycosylated by OGT and/or phosphorylated by ERK, which causes translocation to the nucleus. In the nucleus β2 heterodimerizes with E47, binds to the E1 element of the insulin promoter and recruits co-activator p300 which acetylates β2. It is able to interact with other transcription factors as well in activation of the insulin gene. MafA is degraded by proteasomes upon low blood glucose levels. Increased levels of glucose make an unknown protein glycosylated. This protein works as a transcription factor for MafA in an unknown manner and MafA is transported out of the cell. MafA is then translocated back into the nucleus where it binds the C1 element of the insulin promoter. These transcription factors work synergistically and in a complex arrangement. Increased blood glucose can after a while destroy the binding capacities of these proteins, and therefore reduce the amount of insulin secreted, causing diabetes. 
The decreased binding activities can be mediated by glucose induced oxidative stress and antioxidants are said to prevent the decreased insulin secretion in glucotoxic pancreatic β cells. Stress signalling molecules and reactive oxygen species inhibits the insulin gene by interfering with the cofactors binding the transcription factors and the transcription factors itself. Several regulatory sequences in the promoter region of the human insulin gene bind to transcription factors. In general, the A-boxes bind to Pdx1 factors, E-boxes bind to NeuroD, C-boxes bind to MafA, and cAMP response elements to CREB. There are also silencers that inhibit transcription. Contrary to an initial belief that hormones would be generally small chemical molecules, as the first peptide hormone known of its structure, insulin was found to be quite large. A single protein (monomer) of human insulin is composed of 51 amino acids, and has a molecular mass of 5808 Da. The molecular formula of human insulin is C257H383N65O77S6. It is a combination of two peptide chains (dimer) named an A-chain and a B-chain, which are linked together by two disulfide bonds. The A-chain is composed of 21 amino acids, while the B-chain consists of 30 residues. The linking (interchain) disulfide bonds are formed at cysteine residues between the positions A7-B7 and A20-B19. There is an additional (intrachain) disulfide bond within the A-chain between cysteine residues at positions A4 and A11. The A-chain exhibits two α-helical regions at A1-A8 and A12-A19 which are antiparallel; while the B chain has a central α -helix (covering residues B9-B19) flanked by the disulfide bond on either sides and two β-sheets (covering B7-B10 and B20-B23). The amino acid sequence of insulin is strongly conserved and varies only slightly between species. Bovine insulin differs from human in only three amino acid residues, and porcine insulin in one. Even insulin from some species of fish is similar enough to human to be clinically effective in humans. Insulin in some invertebrates is quite similar in sequence to human insulin, and has similar physiological effects. The strong homology seen in the insulin sequence of diverse species suggests that it has been conserved across much of animal evolutionary history. The C-peptide of proinsulin, however, differs much more among species; it is also a hormone, but a secondary one. Insulin is produced and stored in the body as a hexamer (a unit of six insulin molecules), while the active form is the monomer. The hexamer is about 36000 Da in size. The six molecules are linked together as three dimeric units to form symmetrical molecule. An important feature is the presence of zinc atoms (Zn2+) on the axis of symmetry, which are surrounded by three water molecules and three histamine residues at position B10. The hexamer is an inactive form with long-term stability, which serves as a way to keep the highly reactive insulin protected, yet readily available. The hexamer-monomer conversion is one of the central aspects of insulin formulations for injection. The hexamer is far more stable than the monomer, which is desirable for practical reasons; however, the monomer is a much faster-reacting drug because diffusion rate is inversely related to particle size. A fast-reacting drug means insulin injections do not have to precede mealtimes by hours, which in turn gives people with diabetes more flexibility in their daily schedules. Insulin can aggregate and form fibrillar interdigitated beta-sheets. 
This can cause injection amyloidosis, and prevents the storage of insulin for long periods. Insulin is produced in the pancreas and the Brockmann body (in some fish), and released when any of several stimuli are detected. These stimuli include the rise in plasma concentrations of amino acids and glucose resulting from the digestion of food. Carbohydrates can be polymers of simple sugars or the simple sugars themselves. If the carbohydrates include glucose, then that glucose will be absorbed into the bloodstream and blood glucose level will begin to rise. In target cells, insulin initiates a signal transduction, which has the effect of increasing glucose uptake and storage. Finally, insulin is degraded, terminating the response. In mammals, insulin is synthesized in the pancreas within the beta cells. One million to three million pancreatic islets form the endocrine part of the pancreas, which is primarily an exocrine gland. The endocrine portion accounts for only 2% of the total mass of the pancreas. Within the pancreatic islets, beta cells constitute 65–80% of all the cells. Insulin consists of two polypeptide chains, the A- and B- chains, linked together by disulfide bonds. It is however first synthesized as a single polypeptide called preproinsulin in beta cells. Preproinsulin contains a 24-residue signal peptide which directs the nascent polypeptide chain to the rough endoplasmic reticulum (RER). The signal peptide is cleaved as the polypeptide is translocated into lumen of the RER, forming proinsulin. In the RER the proinsulin folds into the correct conformation and 3 disulfide bonds are formed. About 5–10 min after its assembly in the endoplasmic reticulum, proinsulin is transported to the trans-Golgi network (TGN) where immature granules are formed. Transport to the TGN may take about 30 min. Proinsulin undergoes maturation into active insulin through the action of cellular endopeptidases known as prohormone convertases (PC1 and PC2), as well as the exoprotease carboxypeptidase E. The endopeptidases cleave at 2 positions, releasing a fragment called the C-peptide, and leaving 2 peptide chains, the B- and A- chains, linked by 2 disulfide bonds. The cleavage sites are each located after a pair of basic residues (lysine-64 and arginine-65, and arginine-31 and −32). After cleavage of the C-peptide, these 2 pairs of basic residues are removed by the carboxypeptidase. The C-peptide is the central portion of proinsulin, and the primary sequence of proinsulin goes in the order "B-C-A" (the B and A chains were identified on the basis of mass and the C-peptide was discovered later). The resulting mature insulin is packaged inside mature granules waiting for metabolic signals (such as leucine, arginine, glucose and mannose) and vagal nerve stimulation to be exocytosed from the cell into the circulation. The endogenous production of insulin is regulated in several steps along the synthesis pathway: Insulin and its related proteins have been shown to be produced inside the brain, and reduced levels of these proteins are linked to Alzheimer's disease. Insulin release is stimulated also by beta-2 receptor stimulation and inhibited by alpha-1 receptor stimulation.  In addition, cortisol, glucagon and growth hormone antagonize the actions of insulin during times of stress.  Insulin also inhibits fatty acid release by hormone sensitive lipase in adipose tissue. Beta cells in the islets of Langerhans release insulin in two phases. 
The first-phase release is rapidly triggered in response to increased blood glucose levels, and lasts about 10 minutes. The second phase is a sustained, slow release of newly formed vesicles triggered independently of sugar, peaking in 2 to 3 hours. Reduced first-phase insulin release may be the earliest detectable beta cell defect predicting onset of type 2 diabetes. First-phase release and insulin sensitivity are independent predictors of diabetes. The description of first phase release is as follows: This is the primary mechanism for release of insulin. Other substances known to stimulate insulin release include the amino acids arginine and leucine, parasympathetic release of acetylcholine (acting via the phospholipase C pathway), sulfonylurea, cholecystokinin (CCK, also via phospholipase C), and the gastrointestinally derived incretins, such as glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP). Release of insulin is strongly inhibited by norepinephrine (noradrenaline), which leads to increased blood glucose levels during stress. It appears that release of catecholamines by the sympathetic nervous system has conflicting influences on insulin release by beta cells, because insulin release is inhibited by α2-adrenergic receptors and stimulated by β2-adrenergic receptors. The net effect of norepinephrine from sympathetic nerves and epinephrine from adrenal glands on insulin release is inhibition due to dominance of the α-adrenergic receptors. When the glucose level comes down to the usual physiologic value, insulin release from the β-cells slows or stops. If the blood glucose level drops lower than this, especially to dangerously low levels, release of hyperglycemic hormones (most prominently glucagon from islet of Langerhans alpha cells) forces release of glucose into the blood from the liver glycogen stores, supplemented by gluconeogenesis if the glycogen stores become depleted. By increasing blood glucose, the hyperglycemic hormones prevent or correct life-threatening hypoglycemia. Evidence of impaired first-phase insulin release can be seen in the glucose tolerance test, demonstrated by a substantially elevated blood glucose level at 30 minutes after the ingestion of a glucose load (75 or 100 g of glucose), followed by a slow drop over the next 100 minutes, to remain above 120 mg/100 ml after two hours after the start of the test. In a normal person the blood glucose level is corrected (and may even be slightly over-corrected) by the end of the test. An insulin spike is a 'first response' to blood glucose increase, this response is individual and dose specific although it was always previously assumed to be food type specific only. Even during digestion, in general, one or two hours following a meal, insulin release from the pancreas is not continuous, but oscillates with a period of 3–6 minutes, changing from generating a blood insulin concentration more than about 800 p mol/l to less than 100 pmol/l (in rats). This is thought to avoid downregulation of insulin receptors in target cells, and to assist the liver in extracting insulin from the blood. This oscillation is important to consider when administering insulin-stimulating medication, since it is the oscillating blood concentration of insulin release, which should, ideally, be achieved, not a constant high concentration. This may be achieved by delivering insulin rhythmically to the portal vein, by light activated delivery, or by islet cell transplantation to the liver. 
The blood insulin level can be measured in international units, such as µIU/mL or in molar concentration, such as pmol/L, where 1 µIU/mL equals 6.945 pmol/L. A typical blood level between meals is 8–11 μIU/mL (57–79 pmol/L). The effects of insulin are initiated by its binding to a receptor present in the cell membrane. The receptor molecule contains an α- and β subunits. Two molecules are joined to form what is known as a homodimer. Insulin binds to the α-subunits of the homodimer, which faces the extracellular side of the cells. The β subunits have tyrosine kinase enzyme activity which is triggered by the insulin binding. This activity provokes the autophosphorylation of the β subunits and subsequently the phosphorylation of proteins inside the cell known as insulin receptor substrates (IRS). The phosphorylation of the IRS activates a signal transduction cascade that leads to the activation of other kinases as well as transcription factors that mediate the intracellular effects of insulin. The cascade that leads to the insertion of GLUT4 glucose transporters into the cell membranes of muscle and fat cells, and to the synthesis of glycogen in liver and muscle tissue, as well as the conversion of glucose into triglycerides in liver, adipose, and lactating mammary gland tissue, operates via the activation, by IRS-1, of phosphoinositol 3 kinase (PI3K). This enzyme converts a phospholipid in the cell membrane by the name of phosphatidylinositol 4,5-bisphosphate (PIP2), into phosphatidylinositol 3,4,5-triphosphate (PIP3), which, in turn, activates protein kinase B (PKB). Activated PKB facilitates the fusion of GLUT4 containing endosomes with the cell membrane, resulting in an increase in GLUT4 transporters in the plasma membrane. PKB also phosphorylates glycogen synthase kinase (GSK), thereby inactivating this enzyme. This means that its substrate, glycogen synthase (GS), cannot be phosphorylated, and remains dephosphorylated, and therefore active. The active enzyme, glycogen synthase (GS), catalyzes the rate limiting step in the synthesis of glycogen from glucose. Similar dephosphorylations affect the enzymes controlling the rate of glycolysis leading to the synthesis of fats via malonyl-CoA in the tissues that can generate triglycerides, and also the enzymes that control the rate of gluconeogenesis in the liver. The overall effect of these final enzyme dephosphorylations is that, in the tissues that can carry out these reactions, glycogen and fat synthesis from glucose are stimulated, and glucose production by the liver through glycogenolysis and gluconeogenesis are inhibited. The breakdown of triglycerides by adipose tissue into free fatty acids and glycerol is also inhibited. After the intracellular signal that resulted from the binding of insulin to its receptor has been produced, termination of signaling is then needed. As mentioned below in the section on degradation, endocytosis and degradation of the receptor bound to insulin is a main mechanism to end signaling. In addition, the signaling pathway is also terminated by dephosphorylation of the tyrosine residues in the various signaling pathways by tyrosine phosphatases. Serine/Threonine kinases are also known to reduce the activity of insulin. The structure of the insulin–insulin receptor complex has been determined using the techniques of X-ray crystallography. 
The actions of insulin on the global human metabolism level include: The actions of insulin (indirect and direct) on cells include: Insulin also influences other body functions, such as vascular compliance and cognition. Once insulin enters the human brain, it enhances learning and memory and benefits verbal memory in particular. Enhancing brain insulin signaling by means of intranasal insulin administration also enhances the acute thermoregulatory and glucoregulatory response to food intake, suggesting that central nervous insulin contributes to the co-ordination of a wide variety of homeostatic or regulatory processes in the human body. Insulin also has stimulatory effects on gonadotropin-releasing hormone from the hypothalamus, thus favoring fertility. Once an insulin molecule has docked onto the receptor and effected its action, it may be released back into the extracellular environment, or it may be degraded by the cell. The two primary sites for insulin clearance are the liver and the kidney. The liver clears most insulin during first-pass transit, whereas the kidney clears most of the insulin in systemic circulation. Degradation normally involves endocytosis of the insulin-receptor complex, followed by the action of insulin-degrading enzyme. An insulin molecule produced endogenously by the beta cells is estimated to be degraded within about one hour after its initial release into circulation (insulin half-life ~ 4–6 minutes). Insulin is a major regulator of endocannabinoid (EC) metabolism and insulin treatment has been shown to reduce intracellular ECs, the 2-arachidonylglycerol (2-AG) and anandamide (AEA), which correspond with insulin-sensitive expression changes in enzymes of EC metabolism. In insulin-resistant adipocytes, patterns of insulin-induced enzyme expression is disturbed in a manner consistent with elevated EC synthesis and reduced EC degradation. Findings suggest that insulin-resistant adipocytes fail to regulate EC metabolism and decrease intracellular EC levels in response to insulin stimulation, whereby obese insulin-resistant individuals exhibit increased concentrations of ECs. This dysregulation contributes to excessive visceral fat accumulation and reduced adiponectin release from abdominal adipose tissue, and further to the onset of several cardiometabolic risk factors that are associated with obesity and type 2 diabetes. Hypoglycemia, also known as "low blood sugar", is when blood sugar decreases to below normal levels. This may result in a variety of symptoms including clumsiness, trouble talking, confusion, loss of consciousness, seizures or death. A feeling of hunger, sweating, shakiness and weakness may also be present. Symptoms typically come on quickly. The most common cause of hypoglycemia is medications used to treat diabetes mellitus such as insulin and sulfonylureas. Risk is greater in diabetics who have eaten less than usual, exercised more than usual or have drunk alcohol. Other causes of hypoglycemia include kidney failure, certain tumors, such as insulinoma, liver disease, hypothyroidism, starvation, inborn error of metabolism, severe infections, reactive hypoglycemia and a number of drugs including alcohol. Low blood sugar may occur in otherwise healthy babies who have not eaten for a few hours. There are several conditions in which insulin disturbance is pathologic: Biosynthetic human insulin (insulin human rDNA, INN) for clinical use is manufactured by recombinant DNA technology. 
Biosynthetic human insulin has increased purity when compared with extractive animal insulin, enhanced purity reducing antibody formation. Researchers have succeeded in introducing the gene for human insulin into plants as another method of producing insulin ("biopharming") in safflower. This technique is anticipated to reduce production costs. Several analogs of human insulin are available. These insulin analogs are closely related to the human insulin structure, and were developed for specific aspects of glycemic control in terms of fast action (prandial insulins) and long action (basal insulins). The first biosynthetic insulin analog was developed for clinical use at mealtime (prandial insulin), Humalog (insulin lispro), it is more rapidly absorbed after subcutaneous injection than regular insulin, with an effect 15 minutes after injection. Other rapid-acting analogues are NovoRapid and Apidra, with similar profiles. All are rapidly absorbed due to amino acid sequences that will reduce formation of dimers and hexamers (monomeric insulins are more rapidly absorbed). Fast acting insulins do not require the injection-to-meal interval previously recommended for human insulin and animal insulins. The other type is long acting insulin; the first of these was Lantus (insulin glargine). These have a steady effect for an extended period from 18 to 24 hours. Likewise, another protracted insulin analogue (Levemir) is based on a fatty acid acylation approach. A myristic acid molecule is attached to this analogue, which associates the insulin molecule to the abundant serum albumin, which in turn extends the effect and reduces the risk of hypoglycemia. Both protracted analogues need to be taken only once daily, and are used for type 1 diabetics as the basal insulin. A combination of a rapid acting and a protracted insulin is also available, making it more likely for patients to achieve an insulin profile that mimics that of the body's own insulin release. Insulin is usually taken as subcutaneous injections by single-use syringes with needles, via an insulin pump, or by repeated-use insulin pens with disposable needles. Inhaled insulin is also available in the U.S. market now. Synthetic insulin can trigger adverse effects, so some people with diabetes rely on animal-source insulin. Unlike many medicines, insulin currently cannot be taken orally because, like nearly all other proteins introduced into the gastrointestinal tract, it is reduced to fragments, whereupon all activity is lost. There has been some research into ways to protect insulin from the digestive tract, so that it can be administered orally or sublingually. In 1869, while studying the structure of the pancreas under a microscope, Paul Langerhans, a medical student in Berlin, identified some previously unnoticed tissue clumps scattered throughout the bulk of the pancreas. The function of the "little heaps of cells", later known as the "islets of Langerhans", initially remained unknown, but Édouard Laguesse later suggested they might produce secretions that play a regulatory role in digestion. Paul Langerhans' son, Archibald, also helped to understand this regulatory role. In 1889, the physician Oskar Minkowski, in collaboration with Joseph von Mering, removed the pancreas from a healthy dog to test its assumed role in digestion. On testing the urine, they found sugar, establishing for the first time a relationship between the pancreas and diabetes. 
In 1901, another major step was taken by the American physician and scientist Eugene Lindsay Opie, when he isolated the role of the pancreas to the islets of Langerhans: "Diabetes mellitus when the result of a lesion of the pancreas is caused by destruction of the islands of Langerhans and occurs only when these bodies are in part or wholly destroyed". Over the next two decades researchers made several attempts to isolate the islets' secretions. In 1906 George Ludwig Zuelzer achieved partial success in treating dogs with pancreatic extract, but he was unable to continue his work. Between 1911 and 1912, E.L. Scott at the University of Chicago tried aqueous pancreatic extracts and noted "a slight diminution of glycosuria", but was unable to convince his director of his work's value; it was shut down. Israel Kleiner demonstrated similar effects at Rockefeller University in 1915, but World War I interrupted his work and he did not return to it. In 1916, Nicolae Paulescu developed an aqueous pancreatic extract which, when injected into a diabetic dog, had a normalizing effect on blood-sugar levels. He had to interrupt his experiments because of World War I, and in 1921 he wrote four papers about his work carried out in Bucharest and his tests on a diabetic dog. Later that year, he published "Research on the Role of the Pancreas in Food Assimilation". The name "insulin" was coined by Edward Albert Sharpey-Schafer in 1916 for a hypothetical molecule produced by pancreatic islets of Langerhans (Latin "insula" for islet or island) that controls glucose metabolism. Unbeknown to Sharpey-Schafer, Jean de Meyer had introduced very similar word "insuline" in 1909 for the same molecule. In October 1920, Canadian Frederick Banting concluded that the digestive secretions that Minkowski had originally studied were breaking down the islet secretion, thereby making it impossible to extract successfully. A surgeon by training, Banting knew certain arteries could be tied off that would lead most of the pancreas to atrophy, while leaving the islets of Langerhans intact. He reasoned that a relatively pure extract could be made from the islets once most of the rest of the pancreas was gone. He jotted a note to himself: "Ligate pancreatic ducts of the dog. Keep dogs alive till acini degenerate leaving islets. Try to isolate internal secretion of these and relieve glycosuria." In the spring of 1921, Banting traveled to Toronto to explain his idea to J.J.R. Macleod, Professor of Physiology at the University of Toronto. Macleod was initially skeptical, since Banting had no background in research and was not familiar with the latest literature, but he agreed to provide lab space for Banting to test out his ideas. Macleod also arranged for two undergraduates to be Banting's lab assistants that summer, but Banting required only one lab assistant. Charles Best and Clark Noble flipped a coin; Best won the coin toss and took the first shift. This proved unfortunate for Noble, as Banting kept Best for the entire summer and eventually shared half his Nobel Prize money and credit for the discovery with Best. On 30 July 1921, Banting and Best successfully isolated an extract ("isleton") from the islets of a duct-tied dog and injected it into a diabetic dog, finding that the extract reduced its blood sugar by 40% in 1 hour. 
Banting and Best presented their results to Macleod on his return to Toronto in the fall of 1921, but Macleod pointed out flaws with the experimental design, and suggested the experiments be repeated with more dogs and better equipment. He moved Banting and Best into a better laboratory and began paying Banting a salary from his research grants. Several weeks later, the second round of experiments was also a success, and Macleod helped publish their results privately in Toronto that November. Bottlenecked by the time-consuming task of duct-tying dogs and waiting several weeks to extract insulin, Banting hit upon the idea of extracting insulin from the fetal calf pancreas, which had not yet developed digestive glands. By December, they had also succeeded in extracting insulin from the adult cow pancreas. Macleod discontinued all other research in his laboratory to concentrate on the purification of insulin. He invited biochemist James Collip to help with this task, and the team felt ready for a clinical test within a month. On January 11, 1922, Leonard Thompson, a 14-year-old diabetic who lay dying at the Toronto General Hospital, was given the first injection of insulin. However, the extract was so impure that Thompson suffered a severe allergic reaction, and further injections were cancelled. Over the next 12 days, Collip worked day and night to improve the ox-pancreas extract. A second dose was injected on January 23, completely eliminating the glycosuria that was typical of diabetes without causing any obvious side-effects. The first American patient was Elizabeth Hughes, the daughter of U.S. Secretary of State Charles Evans Hughes. The first patient treated in the U.S. was future woodcut artist James D. Havens; Dr. John Ralston Williams imported insulin from Toronto to Rochester, New York, to treat Havens. Banting and Best never worked well with Collip, regarding him as something of an interloper, and Collip left the project soon after. Over the spring of 1922, Best managed to improve his techniques to the point where large quantities of insulin could be extracted on demand, but the preparation remained impure. The drug firm Eli Lilly and Company had offered assistance not long after the first publications in 1921, and they took Lilly up on the offer in April. In November, Lilly's head chemist, George B. Walden discovered isoelectric precipitation and was able to produce large quantities of highly refined insulin. Shortly thereafter, insulin was offered for sale to the general public. Toward the end of January 1922, tensions mounted between the four "co-discoverers" of insulin and Collip briefly threatened to separately patent his purification process. John G. FitzGerald, director of the non-commercial public health institution Connaught Laboratories, therefore stepped in as peacemaker. The resulting agreement of 25 January 1922 established two key conditions: 1) that the collaborators would sign a contract agreeing not to take out a patent with a commercial pharmaceutical firm during an initial working period with Connaught; and 2) that no changes in research policy would be allowed unless first discussed among FitzGerald and the four collaborators. It helped contain disagreement and tied the research to Connaught's public mandate. Initially, Macleod and Banting were particularly reluctant to patent their process for insulin on grounds of medical ethics. 
However, concerns remained that a private third-party would hijack and monopolize the research (as Eli Lilly and Company had hinted), and that safe distribution would be difficult to guarantee without capacity for quality control. To this end, Edward Calvin Kendall gave valuable advice. He had isolated thyroxin at the Mayo Clinic in 1914 and patented the process through an arrangement between himself, the brothers Mayo, and the University of Minnesota, transferring the patent to the public university. On April 12, Banting, Best, Collip, Macleod, and FitzGerald wrote jointly to the president of the University of Toronto to propose a similar arrangement with the aim of assigning a patent to the Board of Governors of the University. The letter emphasized that:The patent would not be used for any other purpose than to prevent the taking out of a patent by other persons. When the details of the method of preparation are published anyone would be free to prepare the extract, but no one could secure a profitable monopoly.The assignment to the University of Toronto Board of Governors was completed on 15 January 1923, for the token payment of $1.00. The arrangement was congratulated in "The World's Work" in 1923 as "a step forward in medical ethics". It has also received much media attention in the 2010s regarding the issue of healthcare and drug affordability. Following further concern regarding Eli Lilly's attempts to separately patent parts of the manufacturing process, Connaught's Assistant Director and Head of the Insulin Division Robert Defries established a patent pooling policy which would require producers to freely share any improvements to the manufacturing process without compromising affordability. Purified animal-sourced insulin was initially the only type of insulin available for experiments and diabetics. John Jacob Abel was the first to produce the crystallised form in 1926. Evidence of the protein nature was first given by Michael Somogyi, Edward A. Doisy, and Philip A. Shaffer in 1924. It was fully proven when Hans Jensen and Earl A. Evans Jr. isolated the amino acids phenylalanine and proline in 1935. The amino acid structure of insulin was first characterized in 1951 by Frederick Sanger, and the first synthetic insulin was produced simultaneously in the labs of Panayotis Katsoyannis at the University of Pittsburgh and Helmut Zahn at RWTH Aachen University in the mid-1960s. Synthetic crystalline bovine insulin was achieved by Chinese researchers in 1965. The complete 3-dimensional structure of insulin was determined by X-ray crystallography in Dorothy Hodgkin's laboratory in 1969. The first genetically engineered, synthetic "human" insulin was produced using "E. coli" in 1978 by Arthur Riggs and Keiichi Itakura at the Beckman Research Institute of the City of Hope in collaboration with Herbert Boyer at Genentech. Genentech, founded by Swanson, Boyer and Eli Lilly and Company, went on in 1982 to sell the first commercially available biosynthetic human insulin under the brand name Humulin. The vast majority of insulin currently used worldwide is now biosynthetic recombinant "human" insulin or its analogues. Recently, another approach has been used by a pioneering group of Canadian researchers, using an easily grown safflower plant, for the production of much cheaper insulin. Recombinant insulin is produced either in yeast (usually "Saccharomyces cerevisiae") or "E. coli". 
In yeast, insulin may be engineered as a single-chain protein with a KexII endoprotease (a yeast homolog of PCI/PCII) site that separates the insulin A chain from a C-terminally truncated insulin B chain. A chemically synthesized C-terminal tail is then grafted onto insulin by reverse proteolysis using the inexpensive protease trypsin; typically the lysine on the C-terminal tail is protected with a chemical protecting group to prevent proteolysis. The ease of modular synthesis and the relative safety of modifications in that region accounts for common insulin analogs with C-terminal modifications (e.g. lispro, aspart, glulisine). The Genentech synthesis and completely chemical synthesis such as that by Bruce Merrifield are not preferred because the efficiency of recombining the two insulin chains is low, primarily due to competition with the precipitation of insulin B chain. The Nobel Prize committee in 1923 credited the practical extraction of insulin to a team at the University of Toronto and awarded the Nobel Prize to two men: Frederick Banting and J.J.R. Macleod. They were awarded the Nobel Prize in Physiology or Medicine in 1923 for the discovery of insulin. Banting, incensed that Best was not mentioned, shared his prize with him, and Macleod immediately shared his with James Collip. The patent for insulin was sold to the University of Toronto for one dollar. Two other Nobel Prizes have been awarded for work on insulin. British molecular biologist Frederick Sanger, who determined the primary structure of insulin in 1955, was awarded the 1958 Nobel Prize in Chemistry. Rosalyn Sussman Yalow received the 1977 Nobel Prize in Medicine for the development of the radioimmunoassay for insulin. Several Nobel Prizes also have an indirect connection with insulin. George Minot, co-recipient of the 1934 Nobel Prize for the development of the first effective treatment for pernicious anemia, had diabetes mellitus. Dr. William Castle observed that the 1921 discovery of insulin, arriving in time to keep Minot alive, was therefore also responsible for the discovery of a cure for pernicious anemia. Dorothy Hodgkin was awarded a Nobel Prize in Chemistry in 1964 for the development of crystallography, the technique she used for deciphering the complete molecular structure of insulin in 1969. The work published by Banting, Best, Collip and Macleod represented the preparation of purified insulin extract suitable for use on human patients. Although Paulescu discovered the principles of the treatment, his saline extract could not be used on humans; he was not mentioned in the 1923 Nobel Prize. Professor Ian Murray was particularly active in working to correct "the historical wrong" against Nicolae Paulescu. Murray was a professor of physiology at the Anderson College of Medicine in Glasgow, Scotland, the head of the department of Metabolic Diseases at a leading Glasgow hospital, vice-president of the British Association of Diabetes, and a founding member of the International Diabetes Federation. Murray wrote: Insufficient recognition has been given to Paulescu, the distinguished Romanian scientist, who at the time when the Toronto team were commencing their research had already succeeded in extracting the antidiabetic hormone of the pancreas and proving its efficacy in reducing the hyperglycaemia in diabetic dogs. In a private communication, Professor Arne Tiselius, former head of the Nobel Institute, expressed his personal opinion that Paulescu was equally worthy of the award in 1923. 
* Dr Robert I Misbin, INSULIN History from an FDA Insider, June 1 2020 published as ebook on Amazon
https://en.wikipedia.org/wiki?curid=14895
Inductor An inductor, also called a coil, choke, or reactor, is a passive two-terminal electrical component that stores energy in a magnetic field when electric current flows through it. An inductor typically consists of an insulated wire wound into a coil around a core. When the current flowing through an inductor changes, the time-varying magnetic field induces an electromotive force ("e.m.f.") (voltage) in the conductor, described by Faraday's law of induction. According to Lenz's law, the induced voltage has a polarity (direction) which opposes the change in current that created it. As a result, inductors oppose any changes in current through them. An inductor is characterized by its inductance, which is the ratio of the voltage to the rate of change of current. In the International System of Units (SI), the unit of inductance is the henry (H) named for 19th century American scientist Joseph Henry. In the measurement of magnetic circuits, it is equivalent to weber/ampere. Inductors have values that typically range from 1µH (10−6H) to 20H. Many inductors have a magnetic core made of iron or ferrite inside the coil, which serves to increase the magnetic field and thus the inductance. Along with capacitors and resistors, inductors are one of the three passive linear circuit elements that make up electronic circuits. Inductors are widely used in alternating current (AC) electronic equipment, particularly in radio equipment. They are used to block AC while allowing DC to pass; inductors designed for this purpose are called chokes. They are also used in electronic filters to separate signals of different frequencies, and in combination with capacitors to make tuned circuits, used to tune radio and TV receivers. An electric current flowing through a conductor generates a magnetic field surrounding it. The magnetic flux linkage formula_1 generated by a given current formula_2 depends on the geometric shape of the circuit. Their ratio defines the inductance formula_3. Thus The inductance of a circuit depends on the geometry of the current path as well as the magnetic permeability of nearby materials. An inductor is a component consisting of a wire or other conductor shaped to increase the magnetic flux through the circuit, usually in the shape of a coil or helix. Winding the wire into a coil increases the number of times the magnetic flux lines link the circuit, increasing the field and thus the inductance. The more turns, the higher the inductance. The inductance also depends on the shape of the coil, separation of the turns, and many other factors. By adding a "magnetic core" made of a ferromagnetic material like iron inside the coil, the magnetizing field from the coil will induce magnetization in the material, increasing the magnetic flux. The high permeability of a ferromagnetic core can increase the inductance of a coil by a factor of several thousand over what it would be without it. Any change in the current through an inductor creates a changing flux, inducing a voltage across the inductor. By Faraday's law of induction, the voltage induced by any change in magnetic flux through the circuit is given by Reformulating the definition of formula_3 above, we obtain It follows, that for formula_3 independent of time. So inductance is also a measure of the amount of electromotive force (voltage) generated for a given rate of change of current. 
For example, an inductor with an inductance of 1 henry produces an EMF of 1 volt when the current through the inductor changes at the rate of 1 ampere per second. This is usually taken to be the constitutive relation (defining equation) of the inductor. The dual of the inductor is the capacitor, which stores energy in an electric field rather than a magnetic field. Its current–voltage relation is obtained by exchanging current and voltage in the inductor equations and replacing "L" with the capacitance "C". In a circuit, an inductor can behave differently at different time instant. However, it's usually easy to think about the short-time limit and long-time limit: The polarity (direction) of the induced voltage is given by Lenz's law, which states that the induced voltage will be such as to oppose the change in current. For example, if the current through an inductor is increasing, the induced voltage will be positive at the current's entrance point and negative at the exit point, tending to oppose the additional current. The energy from the external circuit necessary to overcome this potential "hill" is being stored in the magnetic field of the inductor. If the current is decreasing, the induced voltage will be negative at the current's entrance point and positive at the exit point, tending to maintain the current. In this case energy from the magnetic field is being returned to the circuit. One intuitive explanation as to why a potential difference is induced on a change of current in an inductor goes as follows: When there is a change in current through an inductor there is a change in the strength of the magnetic field. For example, if the current is increased, the magnetic field increases. This, however, does not come without a price. The magnetic field contains potential energy, and increasing the field strength requires more energy to be stored in the field. This energy comes from the electric current through the inductor. The increase in the magnetic potential energy of the field is provided by a corresponding drop in the electric potential energy of the charges flowing through the windings. This appears as a voltage drop across the windings as long as the current increases. Once the current is no longer increased and is held constant, the energy in the magnetic field is constant and no additional energy must be supplied, so the voltage drop across the windings disappears. Similarly, if the current through the inductor decreases, the magnetic field strength decreases, and the energy in the magnetic field decreases. This energy is returned to the circuit in the form of an increase in the electrical potential energy of the moving charges, causing a voltage rise across the windings. The work done per unit charge on the charges passing the inductor is formula_10. The negative sign indicates that the work is done "against" the emf, and is not done "by" the emf. The current formula_2 is the charge per unit time passing through the inductor. Therefore the rate of work formula_12 done by the charges against the emf, that is the rate of change of energy of the current, is given by From the constitutive equation for the inductor, formula_14 so In a ferromagnetic core inductor, when the magnetic field approaches the level at which the core saturates, the inductance will begin to change, it will be a function of the current formula_17. 
Neglecting losses, the energy formula_12 stored by an inductor with a current formula_19 passing through it is equal to the amount of work required to establish the current through the inductor. This is given by: In an air core inductor or a ferromagnetic core inductor below saturation, the inductance is constant, so the stored energy is For inductors with magnetic cores, the above equation is only valid for linear regions of the magnetic flux, at currents below the saturation level of the inductor, where the inductance is approximately constant. Where this is not the case, the integral form must be used with formula_22 variable. The constitutive equation describes the behavior of an "ideal inductor" with inductance formula_3, and without resistance, capacitance, or energy dissipation. In practice, inductors do not follow this theoretical model; "real inductors" have a measurable resistance due to the resistance of the wire and energy losses in the core, and parasitic capacitance due to electric potentials between turns of the wire. A real inductor's capacitive reactance rises with frequency, and at a certain frequency, the inductor will behave as a resonant circuit. Above this self-resonant frequency, the capacitive reactance is the dominant part of the inductor's impedance. At higher frequencies, resistive losses in the windings increase due to the skin effect and proximity effect. Inductors with ferromagnetic cores experience additional energy losses due to hysteresis and eddy currents in the core, which increase with frequency. At high currents, magnetic core inductors also show sudden departure from ideal behavior due to nonlinearity caused by magnetic saturation of the core. Inductors radiate electromagnetic energy into surrounding space and may absorb electromagnetic emissions from other circuits, resulting in potential electromagnetic interference. An early solid-state electrical switching and amplifying device called a saturable reactor exploits saturation of the core as a means of stopping the inductive transfer of current via the core. The winding resistance appears as a resistance in series with the inductor; it is referred to as DCR (DC resistance). This resistance dissipates some of the reactive energy. The quality factor (or "Q") of an inductor is the ratio of its inductive reactance to its resistance at a given frequency, and is a measure of its efficiency. The higher the Q factor of the inductor, the closer it approaches the behavior of an ideal inductor. High Q inductors are used with capacitors to make resonant circuits in radio transmitters and receivers. The higher the Q is, the narrower the bandwidth of the resonant circuit. The Q factor of an inductor is defined as, where "L" is the inductance, "R" is the DCR, and the product "ωL" is the inductive reactance: "Q" increases linearly with frequency if "L" and "R" are constant. Although they are constant at low frequencies, the parameters vary with frequency. For example, skin effect, proximity effect, and core losses increase "R" with frequency; winding capacitance and variations in permeability with frequency affect "L". At low frequencies and within limits, increasing the number of turns "N" improves "Q" because "L" varies as "N"2 while "R" varies linearly with "N". Similarly increasing the radius "r" of an inductor improves (or increases) "Q" because "L" varies as "r"2 while "R" varies linearly with "r". So high "Q" air core inductors often have large diameters and many turns. 
Both of those examples assume the diameter of the wire stays the same, so both examples use proportionally more wire. If the total mass of wire is held constant, then there would be no advantage to increasing the number of turns or the radius of the turns because the wire would have to be proportionally thinner. Using a high permeability ferromagnetic core can greatly increase the inductance for the same amount of copper, so the core can also increase the Q. Cores however also introduce losses that increase with frequency. The core material is chosen for best results for the frequency band. High Q inductors must avoid saturation; one way is by using a (physically larger) air core inductor. At VHF or higher frequencies an air core is likely to be used. A well designed air core inductor may have a Q of several hundred. Inductors are used extensively in analog circuits and signal processing. Applications range from the use of large inductors in power supplies, which in conjunction with filter capacitors remove ripple which is a multiple of the mains frequency (or the switching frequency for switched-mode power supplies) from the direct current output, to the small inductance of the ferrite bead or torus installed around a cable to prevent radio frequency interference from being transmitted down the wire. Inductors are used as the energy storage device in many switched-mode power supplies to produce DC current. The inductor supplies energy to the circuit to keep current flowing during the "off" switching periods and enables topographies where the output voltage is higher than the input voltage. A tuned circuit, consisting of an inductor connected to a capacitor, acts as a resonator for oscillating current. Tuned circuits are widely used in radio frequency equipment such as radio transmitters and receivers, as narrow bandpass filters to select a single frequency from a composite signal, and in electronic oscillators to generate sinusoidal signals. Two (or more) inductors in proximity that have coupled magnetic flux (mutual inductance) form a transformer, which is a fundamental component of every electric utility power grid. The efficiency of a transformer may decrease as the frequency increases due to eddy currents in the core material and skin effect on the windings. The size of the core can be decreased at higher frequencies. For this reason, aircraft use 400 hertz alternating current rather than the usual 50 or 60 hertz, allowing a great saving in weight from the use of smaller transformers. Transformers enable switched-mode power supplies that isolate the output from the input. Inductors are also employed in electrical transmission systems, where they are used to limit switching currents and fault currents. In this field, they are more commonly referred to as reactors. Inductors have parasitic effects which cause them to depart from ideal behavior. They create and suffer from electromagnetic interference (EMI). Their physical size prevents them from being integrated on semiconductor chips. So the use of inductors is declining in modern electronic devices, particularly compact portable devices. Real inductors are increasingly being replaced by active circuits such as the gyrator which can synthesize inductance using capacitors. An inductor usually consists of a coil of conducting material, typically insulated copper wire, wrapped around a core either of plastic (to create an air-core inductor) or of a ferromagnetic (or ferrimagnetic) material; the latter is called an "iron core" inductor. 
The high permeability of the ferromagnetic core increases the magnetic field and confines it closely to the inductor, thereby increasing the inductance. Low frequency inductors are constructed like transformers, with cores of electrical steel laminated to prevent eddy currents. 'Soft' ferrites are widely used for cores above audio frequencies, since they do not cause the large energy losses at high frequencies that ordinary iron alloys do. Inductors come in many shapes. Some inductors have an adjustable core, which enables changing of the inductance. Inductors used to block very high frequencies are sometimes made by stringing a ferrite bead on a wire. Small inductors can be etched directly onto a printed circuit board by laying out the trace in a spiral pattern. Some such planar inductors use a planar core. Small value inductors can also be built on integrated circuits using the same processes that are used to make interconnects. Aluminium interconnect is typically used, laid out in a spiral coil pattern. However, the small dimensions limit the inductance, and it is far more common to use a circuit called a "gyrator" that uses a capacitor and active components to behave similarly to an inductor. Regardless of the design, because of the low inductances and low power dissipation on-die inductors allow, they're currently only commercially used for high frequency RF circuits. Inductors used in power regulation systems, lighting, and other systems that require low-noise operating conditions, are often partially or fully shielded. In telecommunication circuits employing induction coils and repeating transformers shielding of inductors in close proximity reduces circuit cross-talk. The term "air core coil" describes an inductor that does not use a magnetic core made of a ferromagnetic material. The term refers to coils wound on plastic, ceramic, or other nonmagnetic forms, as well as those that have only air inside the windings. Air core coils have lower inductance than ferromagnetic core coils, but are often used at high frequencies because they are free from energy losses called core losses that occur in ferromagnetic cores, which increase with frequency. A side effect that can occur in air core coils in which the winding is not rigidly supported on a form is 'microphony': mechanical vibration of the windings can cause variations in the inductance. At high frequencies, particularly radio frequencies (RF), inductors have higher resistance and other losses. In addition to causing power loss, in resonant circuits this can reduce the Q factor of the circuit, broadening the bandwidth. In RF inductors, which are mostly air core types, specialized construction techniques are used to minimize these losses. The losses are due to these effects: To reduce parasitic capacitance and proximity effect, high Q RF coils are constructed to avoid having many turns lying close together, parallel to one another. The windings of RF coils are often limited to a single layer, and the turns are spaced apart. To reduce resistance due to skin effect, in high-power inductors such as those used in transmitters the windings are sometimes made of a metal strip or tubing which has a larger surface area, and the surface is silver-plated. Small inductors for low current and low power are made in molded cases resembling resistors. These may be either plain (phenolic) core or ferrite core. An ohmmeter readily distinguishes them from similar-sized resistors by showing the low resistance of the inductor. 
Ferromagnetic-core or iron-core inductors use a magnetic core made of a ferromagnetic or ferrimagnetic material such as iron or ferrite to increase the inductance. A magnetic core can increase the inductance of a coil by a factor of several thousand, by increasing the magnetic field due to its higher magnetic permeability. However the magnetic properties of the core material cause several side effects which alter the behavior of the inductor and require special construction: Low-frequency inductors are often made with laminated cores to prevent eddy currents, using construction similar to transformers. The core is made of stacks of thin steel sheets or laminations oriented parallel to the field, with an insulating coating on the surface. The insulation prevents eddy currents between the sheets, so any remaining currents must be within the cross sectional area of the individual laminations, reducing the area of the loop and thus reducing the energy losses greatly. The laminations are made of low-conductivity silicon steel to further reduce eddy current losses. For higher frequencies, inductors are made with cores of ferrite. Ferrite is a ceramic ferrimagnetic material that is nonconductive, so eddy currents cannot flow within it. The formulation of ferrite is xxFe2O4 where xx represents various metals. For inductor cores soft ferrites are used, which have low coercivity and thus low hysteresis losses. Another material is powdered iron cemented with a binder. In an inductor wound on a straight rod-shaped core, the magnetic field lines emerging from one end of the core must pass through the air to re-enter the core at the other end. This reduces the field, because much of the magnetic field path is in air rather than the higher permeability core material and is a source of electromagnetic interference. A higher magnetic field and inductance can be achieved by forming the core in a closed magnetic circuit. The magnetic field lines form closed loops within the core without leaving the core material. The shape often used is a toroidal or doughnut-shaped ferrite core. Because of their symmetry, toroidal cores allow a minimum of the magnetic flux to escape outside the core (called "leakage flux"), so they radiate less electromagnetic interference than other shapes. Toroidal core coils are manufactured of various materials, primarily ferrite, powdered iron and laminated cores. Probably the most common type of variable inductor today is one with a moveable ferrite magnetic core, which can be slid or screwed in or out of the coil. Moving the core farther into the coil increases the permeability, increasing the magnetic field and the inductance. Many inductors used in radio applications (usually less than 100 MHz) use adjustable cores in order to tune such inductors to their desired value, since manufacturing processes have certain tolerances (inaccuracy). Sometimes such cores for frequencies above 100 MHz are made from highly conductive non-magnetic material such as aluminum. They decrease the inductance because the magnetic field must bypass them. Air core inductors can use sliding contacts or multiple taps to increase or decrease the number of turns included in the circuit, to change the inductance. A type much used in the past but mostly obsolete today has a spring contact that can slide along the bare surface of the windings. The disadvantage of this type is that the contact usually short-circuits one or more turns. 
These turns act like a single-turn short-circuited transformer secondary winding; the large currents induced in them cause power losses. A type of continuously variable air core inductor is the "variometer". This consists of two coils with the same number of turns connected in series, one inside the other. The inner coil is mounted on a shaft so its axis can be turned with respect to the outer coil. When the two coils' axes are collinear, with the magnetic fields pointing in the same direction, the fields add and the inductance is maximum. When the inner coil is turned so its axis is at an angle with the outer, the mutual inductance between them is smaller so the total inductance is less. When the inner coil is turned 180° so the coils are collinear with their magnetic fields opposing, the two fields cancel each other and the inductance is very small. This type has the advantage that it is continuously variable over a wide range. It is used in antenna tuners and matching circuits to match low frequency transmitters to their antennas. Another method to control the inductance without any moving parts requires an additional DC current bias winding which controls the permeability of an easily saturable core material. "See" Magnetic amplifier. A choke is an inductor designed specifically for blocking high-frequency alternating current (AC) in an electrical circuit, while allowing DC or low-frequency signals to pass. Because the inductor resistricts or "chokes" the changes in current, this type of inductor is called a choke. It usually consists of a coil of insulated wire wound on a magnetic core, although some consist of a donut-shaped "bead" of ferrite material strung on a wire. Like other inductors, chokes resist changes in current passing through them increasingly with frequency. The difference between chokes and other inductors is that chokes do not require the high Q factor construction techniques that are used to reduce the resistance in inductors used in tuned circuits. The effect of an inductor in a circuit is to oppose changes in current through it by developing a voltage across it proportional to the rate of change of the current. An ideal inductor would offer no resistance to a constant direct current; however, only superconducting inductors have truly zero electrical resistance. The relationship between the time-varying voltage "v"("t") across an inductor with inductance "L" and the time-varying current "i"("t") passing through it is described by the differential equation: When there is a sinusoidal alternating current (AC) through an inductor, a sinusoidal voltage is induced. The amplitude of the voltage is proportional to the product of the amplitude ("I"P) of the current and the frequency ("f") of the current. In this situation, the phase of the current lags that of the voltage by π/2 (90°). For sinusoids, as the voltage across the inductor goes to its maximum value, the current goes to zero, and as the voltage across the inductor goes to zero, the current through it goes to its maximum value. If an inductor is connected to a direct current source with value "I" via a resistance "R" (at least the DCR of the inductor), and then the current source is short-circuited, the differential relationship above shows that the current through the inductor will discharge with an exponential decay: The ratio of the peak voltage to the peak current in an inductor energised from an AC source is called the reactance and is denoted "X"L. Thus, where "ω" is the angular frequency. 
Reactance is measured in ohms but referred to as "impedance" rather than resistance; energy is stored in the magnetic field as current rises and discharged as current falls. Inductive reactance is proportional to frequency. At low frequency the reactance falls; at DC, the inductor behaves as a short-circuit. As frequency increases the reactance increases and at a sufficiently high frequency the reactance approaches that of an open circuit. In filtering applications, with respect to a particular load impedance, an inductor has a corner frequency defined as: When using the Laplace transform in circuit analysis, the impedance of an ideal inductor with no initial current is represented in the "s" domain by: where If the inductor does have initial current, it can be represented by: Inductors in a parallel configuration each have the same potential difference (voltage). To find their total equivalent inductance ("L"eq): The current through inductors in series stays the same, but the voltage across each inductor can be different. The sum of the potential differences (voltage) is equal to the total voltage. To find their total inductance: These simple relationships hold true only when there is no mutual coupling of magnetic fields between individual inductors. Mutual inductance occurs when the magnetic field of an inductor induces a magnetic field in an adjacent inductor. Mutual induction is the basis of transformer construction. M=(L1×L2)^(1/2) where M is the maximum mutual inductance possible between 2 inductors and L1 and L2 are the two inductors. In general M<=(L1×L2)^(1/2) as only a fraction of self flux is linked with the other. This fraction is called "Coefficient of flux linkage" or "Coefficient of coupling". K=M÷((L1×L2)^0.5) The table below lists some common simplified formulas for calculating the approximate inductance of several inductor constructions.
https://en.wikipedia.org/wiki?curid=14896
Insulin pump An insulin pump is a medical device used for the administration of insulin in the treatment of diabetes mellitus, also known as continuous subcutaneous insulin therapy. The device configuration may vary depending on design. A traditional pump includes: Other configurations are possible. More recent models may include disposable or semi-disposable designs for the pumping mechanism and may eliminate tubing from the infusion set. An insulin pump is an alternative to multiple daily injections of insulin by insulin syringes or an insulin pen and allows for flexible insulin therapy when used in conjunction with blood glucose monitoring and carbohydrate counting. Insulin pumps are used to deliver insulin on a continuous basis to a person with type I diabetes. Insulin pumps, cartridges, and infusion sets may be far more expensive than syringes used for insulin injection with several insulin pumps costing more than $6,000; necessary supplies can cost over $300. Another disadvantage of insulin pump use is a higher risk of developing diabetic ketoacidosis if the pump malfunctions. This can happen if the pump battery is discharged, if the insulin is inactivated by heat exposure, if the insulin reservoir runs empty, the tubing becomes loose and insulin leaks rather than being injected, or if the cannula becomes bent or kinked in the body, preventing delivery. Therefore, pump users typically monitor their blood sugars more frequently to evaluate the effectiveness of insulin delivery. Use of insulin pumps is increasing because of: In 1963 Dr. Arnold Kadish designed the first insulin pump to be worn as a backpack. A more wearable version was later devised by Dean Kamen in 1976. Kamen formed a company called "AutoSyringe" to market the product, which he sold to Baxter Health Care in 1981. In 1984 an Infusaid implantable infusion device was used to treat a 22-year-old diabetic female successfully. The insulin pump was first endorsed in the United Kingdom in 2003, by the National Institute for Health and Care Excellence (NICE). New insulin pumps are becoming "smart" as new features are added to their design. These simplify the tasks involved in delivering an insulin bolus. MiniMed 670G is a type of insulin pump and sensor system created by Medtronic. It was approved by the US FDA in September 2016 and was the first approved hybrid closed loop system which senses a diabetic person's basal insulin requirement and automatically adjusts its delivery to the body. An insulin pump allows the replacement of slow-acting insulin for basal needs with a continuous infusion of rapid-acting insulin. The insulin pump delivers a single type of rapid-acting insulin in two ways: An insulin pump user can influence the profile of the rapid-acting insulin by shaping the bolus. Users can experiment with bolus shapes to determine what is best for any given food, which means that they can improve control of blood sugar by adapting the bolus shape to their needs. A standard bolus is an infusion of insulin pumped completely at the onset of the bolus. It's the most similar to an injection. By pumping with a "spike" shape, the expected action is the fastest possible bolus for that type of insulin. The standard bolus is most appropriate when eating high carb low protein low fat meals because it will return blood sugar to normal levels quickly. An extended bolus is a slow infusion of insulin spread out over time. 
By pumping with a "square wave" shape, the bolus avoids a high initial dose of insulin that may enter the blood and cause low blood sugar before digestion can facilitate sugar entering the blood. The extended bolus also extends the action of insulin well beyond that of the insulin alone. The extended bolus is appropriate when covering high fat high protein meals such as steak, which will be raising blood sugar for many hours past the onset of the bolus. The extended bolus is also useful for those with slow digestion (such as with gastroparesis or coeliac disease). A combination bolus/multiwave bolus is the combination of a standard bolus spike with an extended bolus square wave. This shape provides a large dose of insulin up front, and then also extends the tail of the insulin action. The combination bolus is appropriate for high carb high fat meals such as pizza, pasta with heavy cream sauce, and chocolate cake. A super bolus is a method of increasing the spike of the standard bolus. Since the action of the bolus insulin in the blood stream will extend for several hours, the basal insulin could be stopped or reduced during this time. This facilitates the "borrowing" of the basal insulin and including it into the bolus spike to deliver the same total insulin with faster action than can be achieved with spike and basal rate together. The super bolus is useful for certain foods (like sugary breakfast cereals) which cause a large post-prandial peak of blood sugar. It attacks the blood sugar peak with the fastest delivery of insulin that can be practically achieved by pumping. Since the pump user is responsible to manually start a bolus, this provides an opportunity for the user to pre-bolus to improve upon the insulin pump's capability to prevent post-prandial hyperglycemia. A pre-bolus is simply a bolus of insulin given before it is actually needed to cover carbohydrates eaten. There are two situations where a pre-bolus is helpful: Similarly, a low blood sugar level or a low glycemic food might be best treated with a bolus "after" a meal is begun. The blood sugar level, the type of food eaten, and a person's individual response to food and insulin affect the ideal time to bolus with the pump. The pattern for delivering basal insulin throughout the day can also be customized with a pattern to suit the pump user. Basal insulin requirements will vary between individuals and periods of the day. The basal rate for a particular time period is determined by fasting while periodically evaluating the blood sugar level. Neither food nor bolus insulin must be taken for 4 hours before or during the evaluation period. If the blood sugar level changes dramatically during evaluation, then the basal rate can be adjusted to increase or decrease insulin delivery to keep the blood sugar level approximately steady. For instance, to determine an individual's morning basal requirement, they must skip breakfast. On waking, they would test their blood glucose level periodically until lunch. Changes in blood glucose level are compensated with adjustments in the morning basal rate. The process is repeated over several days, varying the fasting period, until a 24-hour basal profile has been built up which keeps fasting blood sugar levels relatively steady. Once the basal rate is matched to the fasting basal insulin need, the pump user will then gain the flexibility to skip or postpone meals such as sleeping late on the weekends or working overtime on a weekday. 
Many factors can change insulin requirements over time and require an adjustment to the basal rate; a pump user should be educated by their diabetes care professional about basal rate determination before beginning pump therapy. Since the basal insulin is provided as a rapid-acting insulin, it can be immediately increased or decreased as needed with a temporary basal rate, which is helpful in situations where insulin needs change for a limited time. In August 2011, an IBM researcher, Jay Radcliffe, demonstrated a security flaw in insulin pumps: Radcliffe was able to hack the wireless interface used to control the pump remotely. Pump manufacturer Medtronic later said security research by McAfee had uncovered a flaw in its pumps that could be exploited. In 2017, Animas Corporation announced the decision to discontinue the manufacture and sale of both its Animas Vibe and OneTouch Ping insulin pumps. The company partnered with Medtronic to ensure that Animas customers were able to stay on pump therapy and receive the supplies and support they need.
https://en.wikipedia.org/wiki?curid=14899
Intensive insulin therapy Intensive insulin therapy or flexible insulin therapy is a therapeutic regimen for diabetes mellitus treatment. This newer approach contrasts with conventional insulin therapy: rather than minimizing the number of insulin injections per day (a technique which demands a rigid schedule for food and activities), the intensive approach favors flexible meal times with variable carbohydrate intake as well as flexible physical activity. The trade-off is an increase from 2 or 3 injections per day to 4 or more injections per day, which was considered "intensive" relative to the older approach. As of 2004, many endocrinologists in North America preferred the term "flexible insulin therapy" (FIT) to "intensive therapy", using it to refer to any method of replacing insulin that attempts to mimic the pattern of small continuous basal insulin secretion of a working pancreas combined with larger insulin secretions at mealtimes. The semantic distinction reflects changing treatment. Long-term studies like the UK Prospective Diabetes Study (UKPDS) and the Diabetes Control and Complications Trial (DCCT) showed that intensive insulin therapy achieved blood glucose levels closer to those of non-diabetic people, and that this was associated with reduced frequency and severity of blood vessel damage. Damage to large and small blood vessels (macro- and microvascular disease) is central to the development of complications of diabetes. This evidence convinced most physicians who specialize in diabetes care that an important goal of treatment is to make the biochemical profile of the diabetic patient (blood lipids, HbA1c, etc.) as close to the values of non-diabetic people as possible. This is especially true for young patients with many decades of life ahead. A working pancreas continually secretes small amounts of insulin into the blood to maintain normal glucose levels, which would otherwise rise from glucose release by the liver, especially during the early morning dawn phenomenon. This insulin is referred to as "basal insulin secretion", and constitutes almost half the insulin produced by the normal pancreas. Bolus insulin is produced during the digestion of meals: insulin levels rise immediately as eating begins, remaining higher than the basal rate for 1 to 4 hours. This meal-associated ("prandial") insulin production is roughly proportional to the amount of carbohydrate in the meal. Intensive or flexible therapy involves providing a continual supply of insulin to serve as the basal insulin, supplying meal insulin in doses proportional to the nutritional load of the meals, and supplying extra insulin when needed to correct high glucose levels. These three components of the insulin regimen are commonly referred to as basal insulin, bolus insulin, and high glucose correction insulin. One method of intensive insulinotherapy is based on multiple daily injections (sometimes referred to in medical literature as "MDI"). Meal insulin is supplied by injection of rapid-acting insulin before each meal in an amount proportional to the meal. Basal insulin is provided as a once- or twice-daily injection of a long-acting insulin. In an MDI regimen, long-acting insulins are preferred for basal use. An older insulin used for this purpose is ultralente, and beef ultralente in particular was considered for decades to be the gold standard of basal insulin.
Long-acting insulin analogs such as insulin glargine (brand name "Lantus", made by Sanofi-Aventis) and insulin detemir (brand name "Levemir", made by Novo Nordisk) are also used, with insulin glargine used more than insulin detemir. Rapid-acting insulin analogs such as lispro (brand name "Humalog", made by Eli Lilly and Company), aspart (brand name "Novolog"/"Novorapid", made by Novo Nordisk), and glulisine (brand name "Apidra", made by Sanofi-Aventis) are preferred by many clinicians over older regular insulin for meal coverage and correction of high glucose. Many people on MDI regimens carry insulin pens to inject their rapid-acting insulins instead of traditional syringes. Some people on an MDI regimen also use injection ports such as the I-port to minimize the number of daily skin punctures. The other method of intensive/flexible insulin therapy is an insulin pump, a small mechanical device about the size of a deck of cards. It contains a syringe-like reservoir with about three days' insulin supply. This is connected by thin, disposable, plastic tubing to a needle-like cannula inserted into the patient's skin and held in place by an adhesive patch. The infusion tubing and cannula must be removed and replaced every few days. An insulin pump can be programmed to infuse a steady amount of rapid-acting insulin under the skin. This steady infusion is termed the basal rate and is designed to supply the background insulin needs. Each time the patient eats, he or she must press a button on the pump to deliver a specified dose of insulin to cover that meal. Extra insulin is also given the same way to correct a high glucose reading. Although current pumps can include a glucose sensor, they cannot automatically respond to meals or to rising or falling glucose levels. Both MDI and pumping can achieve similarly excellent glycemic control. Some people prefer injections because they are less expensive than pumps and do not require the wearing of a continually attached device. However, the clinical literature suggests that patients whose basal insulin requirements do not vary much throughout the day, and who do not require dosage precision finer than 0.5 IU, are much less likely to realize a significant advantage from pump therapy. Another perceived advantage of pumps is freedom from syringes and injections; however, infusion sets still require occasional insertions to guide the cannula into the subcutaneous tissue. Intensive/flexible insulin therapy requires frequent blood glucose checking: to achieve the best balance of blood sugar with either intensive/flexible method, a patient must check his or her glucose level with a blood glucose meter several times a day. This allows optimization of the basal insulin and meal coverage, as well as correction of high glucose episodes. The two primary advantages of intensive/flexible therapy over more traditional two- or three-injection regimens are better glycemic control and greater flexibility of lifestyle. Major disadvantages of intensive/flexible therapy are that it requires greater amounts of education and effort to achieve the goals, and that it increases the daily cost of glucose monitoring four or more times a day. This cost can increase substantially when the therapy is implemented with an insulin pump and/or continuous glucose monitor. It is a common notion that more frequent hypoglycemia is a disadvantage of intensive/flexible regimens.
The frequency of hypoglycemia does increase with increasing effort to achieve normal blood glucoses with most insulin regimens, but hypoglycemia can be minimized with appropriate glucose targets and control strategies. The difficulties lie in remembering to test, estimating meal size, taking the meal bolus and eating within the prescribed time, and being aware of snacks and meals that are not the expected size. When implemented correctly, flexible regimens offer greater ability to achieve good glycemic control with easier accommodation of variations in eating and physical activity. Over the last two decades, the evidence that better glycemic control (i.e., keeping blood glucose and HbA1c levels as close to normal as possible) reduces the rates of many complications of diabetes has become overwhelming. As a result, diabetes specialists have expended increasing effort to help most people with diabetes achieve blood glucose levels as close to normal as achievable. It takes about the same amount of effort to achieve good glycemic control with a traditional two- or three-injection regimen as it does with flexible therapy: frequent glucose monitoring and attention to the timing and amounts of meals. Many diabetes specialists no longer think of flexible insulin therapy as "intensive" or "special" treatment for a select group of patients, but simply as standard care for most patients with type 1 diabetes. The insulin pump is one device used in intensive insulinotherapy. It is about the size of a beeper and can be programmed to send a steady stream of insulin as "basal insulin", as well as a larger amount before meals as "bolus" doses. It contains a reservoir or cartridge holding several days' worth of insulin, the tiny battery-operated pump, and the computer chip that regulates how much insulin is pumped. The infusion set is a thin plastic tube with a fine needle at the end; it carries the insulin from the pump to the infusion site beneath the skin. There are also newer "pods" which do not require tubing. The insulin pump replaces insulin injections, which makes it useful for people who regularly forget to inject themselves or who dislike injections. The device works by replacing the slow-acting insulin used for basal needs with an ongoing infusion of rapid-acting insulin. Basal insulin is the insulin that controls blood glucose levels between meals and overnight; it controls glucose in the fasting state. Boluses are the insulin released when food is eaten or to correct a high reading. Another device used in intensive insulinotherapy is the injection port, a small disposable device, similar to the infusion set used with an insulin pump, configured to accept a syringe. Standard insulin injections are administered through the injection port. When using an injection port, the syringe needle always stays above the surface of the skin, thus reducing the number of skin punctures associated with intensive insulinotherapy.
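The basal/bolus arithmetic described in this article (meal insulin proportional to carbohydrate, plus a correction for a high reading) is commonly expressed with an insulin-to-carbohydrate ratio and a correction (sensitivity) factor. The sketch below is a hypothetical illustration of that arithmetic in Python, not a clinical tool; the parameter names and the example numbers are assumptions.

```python
# Illustrative sketch of bolus dose arithmetic (not medical advice).
# ICR = insulin-to-carbohydrate ratio (grams covered per unit of insulin);
# ISF = insulin sensitivity factor (mg/dL drop per unit). Values are examples.

def meal_bolus(carbs_g: float, icr_g_per_unit: float) -> float:
    """Insulin proportional to the carbohydrate content of the meal."""
    return carbs_g / icr_g_per_unit

def correction_bolus(bg_mg_dl: float, target_mg_dl: float,
                     isf_mg_dl_per_unit: float) -> float:
    """Extra insulin to bring a high reading back toward target."""
    return max(0.0, (bg_mg_dl - target_mg_dl) / isf_mg_dl_per_unit)

# Example: 60 g of carbohydrate with ICR 15 g/U, glucose 190 mg/dL,
# target 100 mg/dL, ISF 45 mg/dL per unit:
total = meal_bolus(60, 15) + correction_bolus(190, 100, 45)
print(round(total, 1))  # 4.0 (meal) + 2.0 (correction) = 6.0
```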
https://en.wikipedia.org/wiki?curid=14904
Interwiki links Interwiki linking ("W-link") is a facility for creating links to the many wikis on the World Wide Web. Users avoid pasting in entire URLs (as they would for regular web pages) and instead use a shorthand similar to links within the same wiki (intrawiki links). Unlike domain names on the Internet, there is no globally defined list of interwiki prefixes, so owners of a wiki must define an interwiki map (InterMap) appropriate to their needs. Users generally have to create separate accounts for each wiki they intend to use (unless they intend to edit anonymously). Variations in text formatting and layout can also hinder a seamless transition from one wiki to the next. By making wiki links simpler to type for the members of a particular community, these features help bring the different wikis closer together. Furthering that goal, interwiki "bus tours" (similar to webrings) have been created to explain the purposes and highlights of different wikis; examples of such tours exist on Wikipedia. Interwiki link notation varies, depending largely on the syntax a wiki uses for markup. The two most common link patterns in wikis are CamelCase and free links (arbitrary phrases surrounded by some set delimiter, such as double square brackets). CURIE syntax (an emerging W3C standard) uses a single set of square brackets. Interwiki links on a CamelCase-based wiki frequently take the form "Code:PageName", where "Code" is the defined InterMap prefix for another wiki. Thus, a link "WikiPedia:InterWiki" could be rendered in HTML as a link to the "InterWiki" article on Wikipedia. Linking from a CamelCase wiki to a page that contains spaces in its title typically requires replacing the spaces with underscores (e.g. WikiPedia:Main_Page). Interwiki links on wikis based on free links, such as Wikipedia, typically follow the same principle, but use the delimiters that would be used for internal links. These links can then be parsed and escaped as they would be if they were internal, allowing easier typing of spaces but potentially causing problems with other special characters. For example, on Wikipedia, an interwiki reference written inside the usual double square brackets is displayed as an ordinary link but points to a page on the target wiki. The MediaWiki software has an additional feature which uses similar notation to create automatic interlanguage links: a link written with a language prefix and no leading colon automatically creates a reference labeled "Other languages: ..." at the top and bottom of, or in a sidebar next to, the article display. Various other wiki software systems have features for "semi-internal" links of this kind, such as support for namespaces or multiple sub-communities. Most InterMap implementations simply replace the interwiki prefix with a full URL prefix, so many non-wiki websites can also be referred to using the system. A reference to a definition on the Free On-line Dictionary of Computing, for instance, could take the form Foldoc:foo, which would tell the system to append the page name to the Foldoc URL prefix and display the link as Foldoc:foo. This makes it very easy to link to commonly referenced resources from within a wiki page, without the need to even know the form of the URL in question. The interwiki concept can equally be applied to links "from" non-wiki websites. Advogato, for instance, offers a syntax for creating shorthand links based on a MeatBall-derived InterMap. WordPress offers a similar "shortcode" shorthand notation for embedding images, videos, LaTeX formulas and equations, maps, etc. hosted on other websites.
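As a rough illustration of the notation rules above (a prefix, a colon, and a page name, with spaces replaced by underscores), the following Python sketch normalizes an interwiki reference. The function is hypothetical and not part of any wiki engine.

```python
# Hypothetical sketch: split an interwiki reference of the form
# "Prefix:Page Name" into its prefix and a normalized page name,
# replacing spaces with underscores as described above.

def parse_interwiki(link: str) -> tuple[str, str]:
    prefix, sep, page = link.partition(":")
    if not sep:
        raise ValueError("not an interwiki link: no prefix present")
    return prefix, page.strip().replace(" ", "_")

print(parse_interwiki("WikiPedia:Main Page"))  # ('WikiPedia', 'Main_Page')
```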
Internally, a wiki that uses interwiki links needs to have a mapping from wiki-code links to full URLs. For example, a link written as "SomeWiki:PageName" might be displayed simply as SomeWiki:PageName, but link to the full URL of that page on the target wiki. Since most wiki systems use URLs for individual pages in which the page's title appears at the end of an otherwise unchanging address, the simplest way of defining such mappings is by substituting the interwiki prefix for the unchanging part of the URL. So in the example above, the interwiki prefix has simply been replaced by the target wiki's URL prefix in creating the target of the HTML-rendered link. Rather than creating a new list from scratch for every wiki, it is often useful to obtain a copy of one from another site. Sites such as MeatballWiki and the UseModWiki site contain comprehensive lists which are often used for this purpose; the former is publicly editable in the same way as any other wiki page, and the latter is verified as usable but potentially out of date. MediaWiki's default list of interwiki links is derived from an old version of MeatballWiki's list.
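The prefix-for-URL substitution described here can be sketched in a few lines of Python. The InterMap entries below are illustrative assumptions, not an authoritative map.

```python
# Hypothetical sketch of an InterMap lookup: the interwiki prefix is
# simply replaced by a full URL prefix, and the page name is appended.

INTERMAP = {  # example entries only
    "WikiPedia": "https://en.wikipedia.org/wiki/",
    "Foldoc": "https://foldoc.org/",
}

def expand_interwiki(link: str) -> str:
    prefix, _sep, page = link.partition(":")
    try:
        return INTERMAP[prefix] + page.replace(" ", "_")
    except KeyError:
        raise ValueError(f"unknown interwiki prefix: {prefix}") from None

print(expand_interwiki("WikiPedia:Interwiki links"))
# https://en.wikipedia.org/wiki/Interwiki_links
```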
https://en.wikipedia.org/wiki?curid=14906
Inverse function In mathematics, an inverse function (or anti-function) is a function that "reverses" another function: if the function f applied to an input x gives a result of y, then applying its inverse function g to y gives the result x, and vice versa, i.e., g(y) = x if and only if f(x) = y. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. Thinking of this as a step-by-step procedure (namely, take a number x, multiply it by 5, then subtract 7 from the result), to reverse this and get x back from some output value, say y, we should undo each step in reverse order. In this case that means that we should add 7 to y and then divide the result by 5. In functional notation this inverse function would be given by g(y) = (y + 7)/5. With f(x) = 5x − 7 we have that f(4) = 13 and g(13) = 4. Not all functions have inverse functions. Those that do are called "invertible". For a function f: X → Y to have an inverse, it must have the property that for every y in Y there is one, and only one, x in X so that f(x) = y. This property ensures that a function g: Y → X exists with the necessary relationship with f. Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is "invertible" if there exists a function g with domain Y and image X, with the property: f(x) = y if and only if g(y) = x. If f is invertible, the function g is unique, which means that there is exactly one function g satisfying this property (no more, no less). That function g is then called "the" inverse of f, and is usually denoted as f⁻¹. Stated otherwise, a function, considered as a binary relation, has an inverse if and only if the converse relation is a function on the codomain Y, in which case the converse relation is the inverse function. Not all functions have an inverse. For a function to have an inverse, each element y in Y must correspond to no more than one x in X; a function f with this property is called one-to-one or an injection. If f⁻¹ is to be a function on Y, then each element y in Y must correspond to some x in X. Functions with this property are called surjections. This property is satisfied by definition if Y is the image of f, but may not hold in a more general context. To be invertible, a function must be both an injection and a surjection. Such functions are called bijections. The inverse of an injection f: X → Y that is not a bijection (that is, a function that is not a surjection) is only a partial function on Y, which means that for some y in Y, f⁻¹(y) is undefined. If a function f is invertible, then both it and its inverse function f⁻¹ are bijections. There is another convention used in the definition of functions. This can be referred to as the "set-theoretic" or "graph" definition using ordered pairs, in which a codomain is never referred to. Under this convention all functions are surjections, and so being a bijection simply means being an injection. Authors using this convention may use the phrasing that a function is invertible if and only if it is an injection. The two conventions need not cause confusion, as long as it is remembered that in this alternate convention the codomain of a function is always taken to be the range of the function. The function f(x) = x² is not injective, since each possible result y (except 0) corresponds to two different starting points in X (one positive and one negative), and so this function is not invertible. With this type of function it is impossible to deduce an input from its output. Such a function is called non-injective or, in some applications, information-losing.
If the domain of the function is restricted to the nonnegative reals, that is, the function is redefined to be f(x) = x² with domain x ≥ 0 (the same "rule" as before), then the function is bijective and so, invertible. The inverse function here is called the "(positive) square root function", f⁻¹(y) = √y. If f is an invertible function with domain X and range Y, then f⁻¹(f(x)) = x, for every x in X. Using the composition of functions, we can rewrite this statement as follows: f⁻¹ ∘ f = id_X, where id_X is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation f⁻¹. Repeatedly composing a function with itself is called iteration. If f is applied n times, starting with the value x, then this is written as fⁿ(x); so f²(x) = f(f(x)), etc. Since f⁻¹(f(x)) = x, composing f⁻¹ and fⁿ yields fⁿ⁻¹, "undoing" the effect of one application of f. While the notation f⁻¹(x) might be misunderstood, (f(x))⁻¹ certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f. In keeping with the general notation, some English authors use expressions like sin⁻¹(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of sin(x), which can be denoted as (sin(x))⁻¹. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin "arcus"). For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x). Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin "area"). For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x). Other inverse special functions are sometimes prefixed with "inv" if the ambiguity of the notation should be avoided. Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. If an inverse function exists for a given function f, then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by f. There is a symmetry between a function and its inverse. Specifically, if f is an invertible function with domain X and range Y, then its inverse f⁻¹ has domain Y and range X, and the inverse of f⁻¹ is the original function f. In symbols, for functions f: X → Y and f⁻¹: Y → X, (f⁻¹)⁻¹ = f. This statement is a consequence of the implication that for f to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by (f⁻¹)⁻¹ = f. The inverse of a composition of functions is given by (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹. Notice that the order of g and f have been reversed; to undo f followed by g, we must first undo g, and then undo f. For example, let f(x) = 3x and let g(x) = x + 5. Then the composition g ∘ f is the function that first multiplies by three and then adds five: (g ∘ f)(x) = 3x + 5. To reverse this process, we must first subtract five, and then divide by three: (g ∘ f)⁻¹(x) = (x − 5)/3. This is the composition (f⁻¹ ∘ g⁻¹)(x). If X is a set, then the identity function on X is its own inverse: id_X⁻¹ = id_X. More generally, a function f: X → X is equal to its own inverse if and only if the composition f ∘ f is equal to id_X. Such a function is called an involution. Single-variable calculus is primarily concerned with functions that map real numbers to real numbers. Such functions are often defined through formulas, such as f(x) = (2x + 8)³. A surjective function f from the real numbers to the real numbers possesses an inverse as long as it is one-to-one, i.e.
as long as the graph of y = f(x) has, for each possible y value, only one corresponding x value, and thus passes the horizontal line test. Several standard functions have well-known inverses: for example, f(x) = x + a inverts to f⁻¹(y) = y − a; f(x) = ax (with a ≠ 0) inverts to f⁻¹(y) = y/a; and f(x) = eˣ inverts to f⁻¹(y) = ln y (for y > 0). One approach to finding a formula for f⁻¹, if it exists, is to solve the equation y = f(x) for x. For example, if f is the function f(x) = (2x + 8)³, then we must solve the equation y = (2x + 8)³ for x: taking the cube root of both sides gives ∛y = 2x + 8, and so x = (∛y − 8)/2. Thus the inverse function f⁻¹ is given by the formula f⁻¹(y) = (∛y − 8)/2. Sometimes the inverse of a function cannot be expressed by a formula with a finite number of terms. For example, if f is the function f(x) = x + sin x, then f is a bijection, and therefore possesses an inverse function f⁻¹; the formula for this inverse has an infinite number of terms. If f is invertible, then the graph of the function y = f⁻¹(x) is the same as the graph of the equation x = f(y). This is identical to the equation y = f(x) that defines the graph of f, except that the roles of x and y have been reversed. Thus the graph of f⁻¹ can be obtained from the graph of f by switching the positions of the x and y axes. This is equivalent to reflecting the graph across the line y = x. A continuous function f is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function f(x) = x³ + x is invertible, since the derivative f′(x) = 3x² + 1 is always positive. If the function f is differentiable on an interval I and f′(x) ≠ 0 for each x in I, then the inverse f⁻¹ is differentiable on f(I). If y = f(x), the derivative of the inverse is given by the inverse function theorem, (f⁻¹)′(y) = 1/f′(x). Using Leibniz's notation the formula above can be written as dx/dy = 1/(dy/dx). This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. Specifically, a differentiable multivariable function f: Rⁿ → Rⁿ is invertible in a neighborhood of a point p as long as the Jacobian matrix of f at p is invertible. In this case, the Jacobian of f⁻¹ at f(p) is the matrix inverse of the Jacobian of f at p. Even if a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. For example, the function f(x) = x² is not one-to-one, since x² = (−x)². However, the function becomes one-to-one if we restrict to the domain x ≥ 0, in which case f⁻¹(y) = √y. (If we instead restrict to the domain x ≤ 0, then the inverse is the negative of the square root of y.) Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: f⁻¹(y) = ±√y. Sometimes this multivalued inverse is called the full inverse of f, and the portions (such as √y and −√y) are called "branches". The most important branch of a multivalued function (e.g. the positive square root) is called the "principal branch", and its value at y is called the "principal value" of f⁻¹(y). For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches. These considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since sin(x + 2π) = sin(x) for every real x (and more generally sin(x + 2πn) = sin(x) for every integer n). However, the sine is one-to-one on the interval [−π/2, π/2], and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −π/2 and π/2.
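The inverse function theorem formula can be checked numerically. The Python sketch below compares a finite-difference derivative of f⁻¹(y) = √y against 1/f′(f⁻¹(y)) for f(x) = x² on x > 0; it is a simple illustration under those assumptions, not a general-purpose tool.

```python
import math

# Numerical check of the inverse function theorem for f(x) = x^2 (x > 0),
# whose inverse is f_inv(y) = sqrt(y):  (f_inv)'(y) = 1 / f'(f_inv(y)).

def f(x):       return x * x
def f_prime(x): return 2 * x
def f_inv(y):   return math.sqrt(y)

y, h = 9.0, 1e-6
numeric = (f_inv(y + h) - f_inv(y - h)) / (2 * h)  # central difference
theorem = 1.0 / f_prime(f_inv(y))                  # 1 / (2 * 3) = 1/6
print(numeric, theorem)  # both approximately 0.1666...
```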
The principal branch of each inverse trigonometric function has a conventional range: for example, arcsin takes principal values in [−π/2, π/2], arccos in [0, π], and arctan in (−π/2, π/2). If f: X → Y, a left inverse for f (or "retraction" of f) is a function g: Y → X such that composing f with g from the left gives the identity function: g ∘ f = id_X. That is, the function g satisfies the rule: if f(x) = y, then g(y) = x. Thus, g must equal the inverse of f on the image of f, but may take any values for elements of Y not in the image. A function f with a left inverse is necessarily injective. In classical mathematics, every injective function f with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics. For instance, a left inverse of the inclusion {0, 1} → R of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set {0, 1}. A right inverse for f (or "section" of f) is a function h: Y → X such that f ∘ h = id_Y. That is, the function h satisfies the rule: if h(y) = x, then f(x) = y. Thus, h(y) may be any of the elements of X that map to y under f. A function f has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice). An inverse that is both a left and right inverse must be unique. However, if g is a left inverse for f, then g may or may not be a right inverse for f; and if h is a right inverse for f, then h is not necessarily a left inverse for f. For example, let f: R → [0, ∞) denote the squaring map, such that f(x) = x² for all x in R, and let g: [0, ∞) → R denote the square root map, such that g(x) = √x for all x ≥ 0. Then f(g(x)) = x for all x in [0, ∞); that is, g is a right inverse to f. However, g is not a left inverse to f, since, e.g., g(f(−1)) = 1 ≠ −1. If f: X → Y is any function (not necessarily invertible), the preimage (or inverse image) of an element y in Y is the set of all elements of X that map to y: f⁻¹({y}) = {x in X : f(x) = y}. The preimage of y can be thought of as the image of y under the (multivalued) full inverse of the function f. Similarly, if S is any subset of Y, the preimage of S is the set of all elements of X that map into S: f⁻¹(S) = {x in X : f(x) in S}. For example, take the function f: R → R given by f: x ↦ x². This function is not invertible, for reasons discussed above. Yet preimages may be defined for subsets of the codomain: for instance, f⁻¹({1, 4, 9, 16}) = {−4, −3, −2, −1, 1, 2, 3, 4}. The preimage of a single element y (a singleton set {y}) is sometimes called the "fiber" of y. When Y is the set of real numbers, it is common to refer to f⁻¹({y}) as a "level set".
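For a function on a finite domain, the preimage definition translates directly into code. The Python helper below is a hypothetical illustration of f⁻¹(S) = {x : f(x) in S} for f(x) = x² on a small domain; the names are chosen for this example only.

```python
# Hypothetical sketch: the preimage of a set S under f is every domain
# element whose image lands in S, even when f is not invertible.

def preimage(f, domain, s):
    return {x for x in domain if f(x) in s}

def square(x):
    return x * x

domain = range(-4, 5)  # integers -4 .. 4

print(sorted(preimage(square, domain, {1, 4, 9, 16})))
# [-4, -3, -2, -1, 1, 2, 3, 4]
print(preimage(square, domain, {2}))  # set(), since 2 has an empty preimage here
```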
https://en.wikipedia.org/wiki?curid=14907
Inertia Inertia is the resistance of any physical object to any change in its velocity. This includes changes to the object's speed or direction of motion. An aspect of this property is the tendency of objects to keep moving in a straight line at a constant speed when no forces act upon them. The word "inertia" comes from the Latin "iners", meaning idle or sluggish. Inertia is one of the primary manifestations of mass, which is a quantitative property of physical systems. Isaac Newton defined inertia in his first law in his "Philosophiæ Naturalis Principia Mathematica", which states that an object not subject to any net external force continues in its state of rest or of uniform motion in a straight line. In common usage, the term "inertia" may refer to an object's "amount of resistance to change in velocity", or, in simpler terms, its "resistance to a change in motion" (which is quantified by its mass), or sometimes to its momentum, depending on the context. The term "inertia" is more properly understood as shorthand for "the principle of inertia" as described by Newton in his first law of motion: an object not subject to any net external force moves at a constant velocity. Thus, an object will continue moving at its current velocity until some force causes its speed or direction to change. On the surface of the Earth, inertia is often masked by gravity and the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled the philosopher Aristotle into believing that objects would move only as long as force was applied to them. The principle of inertia is one of the fundamental principles in classical physics that are still used today to describe the motion of objects and how they are affected by the applied forces on them. Prior to the Renaissance, the most generally accepted theory of motion in Western philosophy was based on Aristotle, who around 335 to 322 BC said that, in the absence of an external motive power, all objects (on Earth) would come to rest and that moving objects only continue to move so long as there is a power inducing them to do so. Aristotle explained the continued motion of projectiles, which are separated from their projector, by the action of the surrounding medium, which continues to move the projectile in some way. Aristotle concluded that such violent motion in a void was impossible. Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius (following, presumably, Epicurus) stated that the "default state" of matter was motion, not stasis. In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world, where Philoponus had several supporters who further developed his ideas. In the 11th century, the Persian polymath Ibn Sina (Avicenna) claimed that a projectile in a vacuum would not stop unless acted upon.
In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named "impetus", dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's thought was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators, who performed various experiments that further undermined the classical, Aristotelian view. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs. Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone: Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion. According to historian of science Charles Coulston Gillispie, inertia "entered science as a physical consequence of Descartes' geometrization of space-matter, combined with the immutability of God." The principle of inertia, which originated with Aristotle for "motions in a void", states that an object tends to resist a change in motion. According to Newton, an object will stay at rest or stay in motion (i.e. maintain its velocity) unless acted on by a net external force, whether it results from gravity, friction, contact, or some other force. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the Earth is never at rest, but is actually in constant motion around the Sun. Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle: A body moving on a level surface will continue in the same direction at a constant speed unless disturbed. Galileo writes that "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in movement towards the west (for example), it will maintain itself in that movement." This notion which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but distinct from, Newton's notion of rectilinear inertia. For Galileo, a motion is "horizontal" if it does not carry the moving body towards or away from the centre of the earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." 
Galileo later (in 1632) concluded, based on this initial premise of inertia, that it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Albert Einstein to develop the theory of special relativity. The first physicist to completely break away from the Aristotelian model of motion was Isaac Beeckman in 1614. Concepts of inertia in Galileo's writings would later come to be refined, modified, and codified by Isaac Newton as the first of his Laws of Motion (first published in Newton's work "Philosophiae Naturalis Principia Mathematica" in 1687): Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. Since their initial publication, Newton's Laws of Motion (and by inclusion, this first law) have come to form the basis for the branch of physics known as classical mechanics. The term "inertia" was first introduced by Johannes Kepler in his "Epitome Astronomiae Copernicanae" (published in three parts from 1617 to 1621); however, the meaning of Kepler's term (which he derived from the Latin word for "idleness" or "laziness") was not quite the same as its modern interpretation. Kepler defined inertia only in terms of a resistance to movement, once again based on the presumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton, which unified rest and motion in one principle, that the term "inertia" could be applied to these concepts as it is today. Nevertheless, despite defining the concept so elegantly in his laws of motion, even Newton did not actually use the term "inertia" to refer to his First Law. In fact, Newton originally viewed the phenomenon he described in his First Law of Motion as being caused by "innate forces" inherent in matter, which resisted any acceleration. Given this perspective, and borrowing from Kepler, Newton attributed the term "inertia" to mean "the innate force possessed by an object which resists changes in motion"; thus, Newton defined "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternative mechanism has been readily accepted, and it is now generally accepted that there may not be one which we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon described by Newton's First Law of Motion, and the two concepts are now considered equivalent. Albert Einstein's theory of special relativity, as proposed in his 1905 paper "On the Electrodynamics of Moving Bodies", was built on the understanding of inertial reference frames developed by Galileo and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained unchanged from Newton's original meaning. However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to inertial reference frames.
To address this limitation, Einstein developed his general theory of relativity ("The Foundation of the General Theory of Relativity", 1916), which provided a theory including "noninertial" (accelerated) reference frames. A quantity related to inertia is "rotational inertia" (see moment of inertia), the property that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum remains unchanged unless an external torque is applied; this is also called conservation of angular momentum. Rotational inertia is often considered in relation to a rigid body. For example, a gyroscope uses the property that it resists any change in the axis of rotation.
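Conservation of angular momentum can be illustrated with a short calculation: with no external torque, L = Iω stays constant, so halving the moment of inertia doubles the angular speed. The Python sketch below is a hypothetical illustration; the numbers are made up.

```python
# Illustrative sketch: conservation of angular momentum, L = I * omega.
# With no external torque, reducing the moment of inertia I (e.g., a
# spinning skater pulling in their arms) increases the angular speed.

def new_angular_speed(i1: float, omega1: float, i2: float) -> float:
    """Solve I1 * omega1 = I2 * omega2 for omega2."""
    return i1 * omega1 / i2

# Example values (kg*m^2 and rad/s): halving I from 4.0 to 2.0
# doubles the angular speed from 2.0 to 4.0 rad/s.
print(new_angular_speed(i1=4.0, omega1=2.0, i2=2.0))  # 4.0
```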
https://en.wikipedia.org/wiki?curid=14909
Ibanez The Hoshino Gakki company began in 1908 as the musical instrument sales division of the "Hoshino Shoten", a bookstore chain. Hoshino Gakki decided in 1935 to make Spanish-style acoustic guitars, at first using the "Ibanez Salvador" brand name in honor of Spanish luthier Salvador Ibáñez, and later simply "Ibanez". The modern era of Ibanez guitars began in 1957. The late 1950s and 1960s Ibanez catalogues show guitars with some wild-looking designs, manufactured by Kiso Suzuki Violin, Guyatone, and their own Tama factory established in 1962. After the Tama factory stopped manufacturing guitars in 1966, Hoshino Gakki used the Teisco and FujiGen Gakki guitar factories to make Ibanez guitars, and after the Teisco String Instrument factory closed in 1969/1970, Hoshino Gakki used the FujiGen Gakki guitar factory to make Ibanez guitars. In the 1960s, Japanese guitar makers mainly copied American guitar designs, and Ibanez-branded copies of Gibson, Fender, and Rickenbacker models appeared. This resulted in the so-called lawsuit period. Hoshino Gakki then introduced Ibanez models that were definitely not copies of the Gibson or Fender designs, such as the Iceman and the Roadstar series, and the company has produced its own guitar designs ever since. The late 1980s and early 1990s were an important period for the Ibanez brand. Hoshino Gakki's relationship with guitarist Steve Vai resulted in the introduction of the Ibanez JEM and the Ibanez Universe models; after the earlier successes of the Roadstar and Iceman models in the late 1970s and early 1980s, Hoshino Gakki entered the superstrat market with the RG series, a lower-priced version of their JEM series. Hoshino Gakki also had semi-acoustic, nylon- and steel-stringed acoustic guitars manufactured under the Ibanez name. Most Ibanez guitars were made by the FujiGen guitar factory in Japan up until the mid- to late 1980s, and from then on Ibanez guitars have also been made in other Asian countries such as Korea, China, and Indonesia. During the early 1980s, the FujiGen guitar factory also produced most of the Roland guitar synthesizers, including the Stratocaster-style Roland G-505, the twin-humbucker Roland G-202 (endorsed by Adrian Belew, Eric Clapton, Dean Brown, Jeff Baxter, Yannis Spathas, Christoforos Krokidis, Steve Howe, Mike Rutherford, Andy Summers, Neal Schon and Steve Hackett) and the Ibanez X-ING IMG-2010. Cimar and Starfield were guitar and bass brands owned by Hoshino Gakki. In the 1970s, Hoshino Gakki and Kanda Shokai shared some guitar designs, and so some Ibanez and Greco guitars have the same features. The Greco versions were sold in Japan and the Ibanez versions were sold outside Japan. Since 1982, Ibanez guitars have also been sold in Japan. The Ibanez S series guitar has an extremely thin body made out of mahogany, and is available in 6-, 7- and 8-string models. They may come with either 22 or 24 frets, depending on the year of manufacture. The standard line currently has Wizard III necks that are slightly wider and thicker than the original Wizard. All S models have bodies that are thicker in the middle, where the pickups are, and taper off towards the outer edges. The guitars use ZR (Zero Resistance), Lo-TRS, and variants of the Edge bridge system, as well as fixed bridges. Ibanez currently makes 8 Prestige S-Series guitars. The Ibanez AR is a reissued series originating from the 1970s. The AR series features a set-in neck, double cutaway, with 22 frets on a 24.75" scale.
The AK is easily distinguishable by its sharper lower body horn (a Florentine cutaway) that other Artcore guitars do not have. In the 1970s, the Nisshin Onpa company, which owned the Maxon brand name, developed and began selling a series of effects pedals in Japan. Hoshino Gakki licensed these for sale under the Ibanez name outside Japan. The two companies eventually did less and less business together, until Nisshin Onpa ceased manufacturing the TS-9 reissue for Hoshino Gakki in 2002.
https://en.wikipedia.org/wiki?curid=14910
Incest Incest is human sexual activity between family members or close relatives. This typically includes sexual activity between people in consanguinity (blood relations), and sometimes those related by affinity (marriage or stepfamily), adoption, clan, or lineage. The incest taboo is one of the most widespread of all cultural taboos, both in present and in past societies. Most modern societies have laws regarding incest or social restrictions on closely consanguineous marriages. In societies where it is illegal, consensual adult incest is seen by some as a victimless crime. Some cultures extend the incest taboo to relatives with no consanguinity such as milk-siblings, step-siblings, and adoptive siblings, albeit sometimes with less intensity. Third-degree relatives (such as half-aunt, half-nephew, first cousin) on average have 12.5% common genetic heritage, and sexual relations between them are viewed differently in various cultures, from being discouraged to being socially acceptable. Children of incestuous relationships have been regarded as illegitimate, and are still so regarded in some societies today. In most cases, the parents did not have the option to marry to remove that status, as incestuous marriages were, and are, normally also prohibited. A common justification for prohibiting incest is avoiding inbreeding: a collection of genetic disorders suffered by the children of parents with a close genetic relationship. Such children are at greater risk for congenital disorders, death, and developmental and physical disability, and that risk is proportional to their parents' coefficient of relationship—a measure of how closely the parents are related genetically. But cultural anthropologists have noted that inbreeding avoidance cannot form the sole basis for the incest taboo because the boundaries of the incest prohibition vary widely between cultures, and not necessarily in ways that maximize the avoidance of inbreeding. In some societies, such as those of Ancient Egypt, brother–sister, father–daughter, mother–son, cousin–cousin, aunt–nephew, uncle–niece, and other combinations of relations within a royal family were married as a means of perpetuating the royal lineage. Some societies, such as the Balinese and some Inuit tribes, have different views about what constitutes illegal or immoral incest. However, sexual relations with a first-degree relative (meaning a parent or sibling) are almost universally forbidden. The English word "incest" is derived from the Latin "incestus", which has a general meaning of "impure, unchaste". It was introduced into Middle English, both in the generic Latin sense (preserved throughout the Middle English period) and in the narrow modern sense. The derived adjective "incestuous" appears in the 16th century. Before the Latin term came in, incest was known in Old English as "sib-leger" (from "sibb" 'kinship' + "leger" 'to lie') or "mǣġhǣmed" (from "mǣġ" 'kin, parent' + "hǣmed" 'sexual intercourse') but in time, both words fell out of use. Terms like "incester" and "incestual" have been used to describe those interested or involved in sexual relations with relatives among humans, while "inbreeder" has been used in relation to similar behavior among non-human animals or organisms. Other words that describe sexual attraction to relatives include consanguinophilia, consanguinamory, synegenesophilia, incestuality and incestophilia. 
In ancient China, first cousins with the same surnames (i.e., those born to the father's brothers) were not permitted to marry, while those with different surnames (i.e., maternal cousins and paternal cousins born to the father's sisters) could marry. Several of the Egyptian pharaohs married their siblings and had several children with them. For example, Tutankhamun married his half-sister Ankhesenamun, and was himself the child of an incestuous union between Akhenaten and an unidentified sister-wife. Several scholars, such as Frier et al., state that sibling marriages were widespread among all classes in Egypt during the Graeco-Roman period. Numerous papyri and the Roman census declarations attest to many husbands and wives being brother and sister, of the same father and mother. However, it has also been argued that the available evidence does not support the view that such relations were common. The most famous of these relationships were in the Ptolemaic royal family; Cleopatra VII was married to her younger brother, Ptolemy XIII, while her mother and father, Cleopatra V and Ptolemy XII, had also been brother and sister. Before the Ptolemies' rule, only circumstances of half-sibling incest could be observed within the royal family in Egypt. Arsinoe II and her younger brother, Ptolemy II Philadelphus, were the first to break tradition and participate in a full-sibling marriage. The fable of "Oedipus", with a theme of inadvertent incest between a mother and son, ends in disaster and shows ancient taboos against incest, as Oedipus blinds himself in disgust and shame after his incestuous actions. In the "sequel" to Oedipus, "Antigone", his four children are also punished for their parents' incestuousness. Incest appears in the commonly accepted version of the birth of Adonis, when his mother, Myrrha, has sex with her father Cinyras during a festival, disguised as a prostitute. In Ancient Greece, Spartan King Leonidas I, hero of the legendary Battle of Thermopylae, was married to his niece Gorgo, daughter of his half-brother Cleomenes I. Greek law allowed marriage between a brother and sister if they had different mothers. For example, some accounts say that Elpinice was for a time married to her half-brother Cimon. Incest was sometimes acknowledged as a positive omen for tyranny in Ancient Greece. Herodotus recounts a dream of Hippias, son of Peisistratus, in which he "slept with his own mother," and this dream gave him assurance that he would regain power over Athens. Suetonius attributes this omen to a dream of Julius Caesar, explaining the symbolism of dreaming of sexual intercourse with one's own mother. Incest is mentioned and condemned in Virgil's "Aeneid", Book VI: "hic thalamum invasit natae vetitosque hymenaeos" ("this one invaded a daughter's bedchamber and a forbidden union"). Roman civil law prohibited marriages within four degrees of consanguinity but had no degrees of affinity with regard to marriage. Roman civil laws prohibited any marriage between parents and children, whether in the ascending or descending line "ad infinitum". Adoption was considered the same as affinity, in that an adoptive father could not marry an unemancipated daughter or granddaughter even if the adoption had been dissolved. Incestuous unions were discouraged and considered "nefas" (against the laws of gods and man) in ancient Rome.
In AD 295 incest was explicitly forbidden by an imperial edict, which divided the concept of "incestus" into two categories of unequal gravity: the "incestus iuris gentium", which was applied to both Romans and non-Romans in the Empire, and the "incestus iuris civilis", which concerned only Roman citizens. Therefore, for example, an Egyptian could marry an aunt, but a Roman could not. Despite the act of incest being unacceptable within the Roman Empire, Roman Emperor Caligula is rumored to have had sexual relationships with all three of his sisters (Julia Livilla, Drusilla, and Agrippina the Younger). Emperor Claudius, after executing his previous wife, married his brother's daughter Agrippina the Younger, and changed the law to allow an otherwise illegal union. The law prohibiting marrying a sister's daughter remained. The taboo against incest in Ancient Rome is demonstrated by the fact that politicians would use charges of incest (often false charges) as insults and means of political disenfranchisement. However, scholars agree that during the first two centuries A.D., in Roman Egypt, full sibling marriage occurred with some frequency among commoners, as both Egyptians and Romans announced weddings between full siblings. This is the only evidence for brother-sister marriage among commoners in any society. In Norse mythology, there are themes of brother-sister marriage, a prominent example being between Njörðr and his unnamed sister (perhaps Nerthus), parents of Freyja and Freyr. Loki in turn also accuses Freyja and Freyr of having a sexual relationship. The earliest Biblical reference to incest involves Cain: it is written that he knew his wife and she conceived and bore Enoch. Because in this period there was no other woman except Eve, or possibly an unnamed sister, this meant Cain had an incestuous relationship with either his mother or his sister. According to the Book of Jubilees, Cain married his sister Awan. Later in the Hebrew Bible, the Patriarch Abraham married his half-sister Sarah. Other references include the passage in the Second Book of Samuel where Amnon, King David's son, raped his half-sister, Tamar. According to Michael D. Coogan, it would have been perfectly all right for Amnon to have married her, the Bible being inconsistent about prohibiting incest. In Genesis 19:30–38, living in an isolated area after the destruction of Sodom and Gomorrah, Lot's two daughters conspired to inebriate and seduce their father due to the lack of available partners to continue his line of descent. Because of intoxication, Lot "perceived not" when his firstborn, and the following night his younger daughter, lay with him (Genesis 19:32–35). Moses was also born of an incestuous marriage: the Book of Exodus details how his father Amram was the nephew of his mother Jochebed. An account noted that this incestuous relation did not suffer the fate of childlessness, which was the punishment for such couples in levitical law. It stated, however, that the incest exposed Moses "to the peril of wild beasts, of the weather, of the water, and more." Many European monarchs were related due to political marriages, sometimes resulting in distant cousins (and even first cousins) being married. This was especially true in the Habsburg, Hohenzollern, Savoy and Bourbon royal houses. However, relations between siblings, which may have been tolerated in other cultures, were considered abhorrent.
For example, the accusation that Anne Boleyn and her brother George Boleyn had committed incest was one of the reasons that both siblings were executed in May 1536. Incestuous marriages were also seen in the royal houses of ancient Japan and Korea, Inca Peru, Ancient Hawaii, and, at times, Central Africa, Mexico, and Thailand. Like the pharaohs of ancient Egypt, the Inca rulers married their sisters. Huayna Capac, for instance, was the son of Topa Inca Yupanqui and the Inca's sister and wife. The ruling Inca king was expected to marry his full sister. If he had no children by his eldest sister, he married the second and third until they had children. Preservation of the purity of the Sun's blood was one of the reasons for the brother-sister marriage of the Inca king. The Inca kings claimed divine descent from celestial bodies, and emulated the behavior of their celestial ancestor, the Sun, who married his sister, the Moon. Another reason the princes and kings married their sisters was so the heir might inherit the kingdom as much through his mother as through his father; the prince could thus invoke both principles of inheritance. Half-sibling marriages were found in ancient Japan, such as the marriage of Emperor Bidatsu and his half-sister Empress Suiko. The Japanese Prince Kinashi no Karu had sexual relations with his full sister Princess Karu no Ōiratsume, although the action was regarded as foolish. In order to prevent the influence of other families, a half-sister of the Korean Goryeo Dynasty monarch Gwangjong, Daemok, became his wife in the 10th century. Marriage with a family member not related by blood was also regarded as contravening morality and was therefore incest. One example of this is the 14th-century Chunghye of Goryeo, who raped one of his deceased father's concubines, who was thus regarded as his mother. In India, the largest proportion of women aged 13 to 49 who marry a close relative is found in Tamil Nadu, followed by Andhra Pradesh, Karnataka, and Maharashtra. While uncle-niece marriages are rare overall, they are more common in Andhra Pradesh and Tamil Nadu. In some Southeast Asian cultures, stories of incest being common among certain ethnicities are sometimes told as expressions of contempt for those ethnicities. Marriages between younger brothers and their older sisters were common among the early Udegei people. In the Hawaiian Islands, high "ali'i" chiefs were obligated to marry their older sisters in order to increase their "mana". These unions were thought to maintain the purity of the royal blood. Another reason for these familial unions was to maintain a limited size of the ruling "ali'i" group. As per the priestly regulations of Kanalu, put in place after multiple disasters, "chiefs must increase their numbers and this can be done if a brother marries his older sister." Incest between an adult and a person under the age of consent is considered a form of child sexual abuse that has been shown to be one of the most extreme forms of childhood abuse; it often results in serious and long-term psychological trauma, especially in the case of parental incest. Its prevalence is difficult to generalize, but research has estimated that 10–15% of the general population has had at least one such sexual contact, with less than 2% involving intercourse or attempted intercourse. Among women, research has yielded estimates as high as 20%. Father–daughter incest was for many years the most commonly reported and studied form of incest.
More recently, studies have suggested that sibling incest, particularly older brothers having sexual relations with younger siblings, is the most common form of incest, with some studies finding sibling incest occurring more frequently than other forms of incest. Some studies suggest that adolescent perpetrators of sibling abuse choose younger victims, abuse victims over a lengthier period, and use violence more frequently and severely than adult perpetrators; that sibling abuse has a higher rate of penetrative acts than father or stepfather incest; and that father and older-brother incest results in greater reported distress than stepfather incest. Sex between an adult family member and a child is usually considered a form of child sexual abuse, also known as child incestuous abuse, and for many years has been the most reported form of incest. Father–daughter and stepfather–stepdaughter sex is the most commonly reported form of adult–child incest, with most of the remainder involving a mother or stepmother. Many studies have found that stepfathers tend to be far more likely than biological fathers to engage in this form of incest. One study of adult women in San Francisco estimated that 17% of women were abused by stepfathers and 2% were abused by biological fathers. Father–son incest is reported less often, but it is not known how close its frequency is to that of heterosexual incest, because it is likely more under-reported. Prevalence of incest between parents and their children is difficult to estimate due to secrecy and privacy. In a 1999 news story, the "BBC" reported, "Close-knit family life in India masks an alarming amount of sexual abuse of children and teenage girls by family members, a new report suggests. Delhi organisation RAHI said 76% of respondents to its survey had been abused when they were children—40% of those by a family member." According to the National Center for Victims of Crime, a large proportion of rape committed in the United States is perpetrated by a family member. A study of victims of father–daughter incest in the 1970s showed that there were "common features" within families before the occurrence of incest: estrangement between the mother and the daughter, extreme paternal dominance, and reassignment of some of the mother's traditional major family responsibilities to the daughter. Oldest and only daughters were more likely to be the victims of incest. It was also stated that the incest experience was psychologically harmful to the woman in later life, frequently leading to feelings of low self-esteem, very unhealthy sexual activity, contempt for other women, and other emotional problems. Adults who as children were incestuously victimized by adults often suffer from low self-esteem, difficulties in interpersonal relationships, and sexual dysfunction, and are at an extremely high risk of many mental disorders, including depression, anxiety disorders, phobic avoidance reactions, somatoform disorder, substance abuse, borderline personality disorder, and complex post-traumatic stress disorder. The Goler clan in Nova Scotia is a specific instance in which child sexual abuse in the form of forced adult/child and sibling/sibling incest took place over at least three generations. A number of Goler children were victims of sexual abuse at the hands of fathers, mothers, uncles, aunts, sisters, brothers, cousins, and each other.
During interrogation by police, several of the adults openly admitted to engaging in many forms of sexual activity, up to and including full intercourse, multiple times with the children. Sixteen adults (both men and women) were charged with hundreds of counts of incest and sexual abuse of children as young as five. In July 2012, twelve children were removed from the 'Colt' family (a pseudonym) in New South Wales, Australia, after the discovery of four generations of incest. Child protection workers and psychologists said interviews with the children indicated "a virtual sexual free-for-all". In Japan, there is a popular misconception that mother-son incestuous contact is common, due to the manner in which it is depicted in the press and popular media. According to Hideo Tokuoka, "When Americans think of incest, they think of fathers and daughters; in Japan one thinks of mothers and sons" due to the extensive media coverage of mother-son incest there. Some western researchers assumed that mother-son incest is common in Japan, but research into victimization statistics from police and health-care systems discredits this; it shows that the vast majority of sexual abuse, including incest, in Japan is perpetrated by men against young girls. While incest between adults and children generally involves the adult as the perpetrator of abuse, there are rare instances of sons sexually assaulting their mothers. These sons are typically mid-adolescent to young adult, and, unlike parent-initiated incest, the incidents involve some kind of physical force. Although the mothers may be accused of being seductive with their sons and inviting the sexual contact, this is contrary to evidence. Such accusations can parallel other forms of rape, where, due to victim blaming, a woman is accused of somehow being at fault for the rape. In some cases, mother-son incest is best classified as acquaintance rape of the mother by the adolescent son. Childhood sibling–sibling incest is considered to be widespread but rarely reported. Sibling–sibling incest becomes child-on-child sexual abuse when it occurs without consent, without equality, or as a result of coercion. In this form, it is believed to be the most common form of intrafamilial abuse. The most commonly reported form of abusive sibling incest is abuse of a younger sibling by an older sibling. A 2006 study showed that a large portion of adults who experienced sibling incest abuse have "distorted" or "disturbed" beliefs (such as that the act was "normal"), both about their own experience and about the subject of sexual abuse in general. Abusive sibling incest is most prevalent in families where one or both parents are often absent or emotionally unavailable, with the abusive siblings using incest as a way to assert their power over a weaker sibling. Absence of the father in particular has been found to be a significant element of most cases of sexual abuse of female children by a brother. The damaging effects on both childhood development and adult symptoms resulting from brother–sister sexual abuse are similar to those of father–daughter incest, including substance abuse, depression, suicidality, and eating disorders. Sexual activity between adult close relatives is sometimes ascribed to genetic sexual attraction. This form of incest has not been widely reported, but evidence indicates that the behavior does take place, possibly more often than many people realize. Internet chatrooms and topical websites exist that provide support for incestuous couples.
Proponents of incest between consenting adults draw clear boundaries between the behavior of consenting adults and rape, child molestation, and abusive incest. However, even consensual relationships such as these are still legally classified as incest, and criminalized in many jurisdictions (although there are certain exceptions). James Roffee, a senior lecturer in criminology at Monash University who formerly worked on legal responses to familial sexual activity in England and Wales and in Scotland, discussed how the European Convention on Human Rights deems all familial sexual acts to be criminal, even if all parties give their full consent and are aware of all possible consequences. He also argues that the use of particular language tools in the legislation manipulates the reader into deeming all familial sexual activities immoral and criminal, even if all parties are consenting adults. One incest participant, quoted in an article in "The Guardian", gave an account of such a relationship. In "Slate", William Saletan drew a legal connection between gay sex and incest between consenting adults, noting that in 2003 U.S. Senator Rick Santorum had commented on a pending U.S. Supreme Court case involving sodomy laws (primarily as a matter of constitutional rights to privacy and equal protection under the law). Saletan argued that, legally and morally, there is essentially no difference between the two, and went on to support incest between consenting adults being covered by a legal right to privacy. UCLA law professor Eugene Volokh has made similar arguments. In a more recent article, Saletan said that incest is wrong because it introduces the possibility of irreparably damaging family units by introducing "a notoriously incendiary dynamic—sexual tension—into the mix". In the Netherlands, marrying one's nephew or niece is legal, but only with the explicit permission of the Dutch Government, due to the possible risk of genetic defects among the offspring. Nephew-niece marriages predominantly occur among foreign immigrants. In November 2008, the Scientific Institute of the Christian Democratic (CDA) party announced that it wanted a ban on marriages to nephews and nieces. Consensual sex between adults (persons of 18 years and older) is always lawful in the Netherlands and Belgium, even among closely related family members. Sexual acts between an adult family member and a minor are illegal, though they are not classified as incest but as abuse of the authority such an adult has over a minor, comparable to that of a teacher, coach, or priest. In Florida, consensual adult sexual intercourse with someone known to be one's aunt, uncle, niece, or nephew constitutes a felony of the third degree. Other states also commonly prohibit marriages between such kin. The legality of sex with a half-aunt or half-uncle varies state by state. In the United Kingdom, incest includes only sexual intercourse with a parent, grandparent, child, or sibling, but the more recently introduced offence of "sex with an adult relative" extends further, to half-siblings, uncles, aunts, nephews, and nieces. However, the term 'incest' remains widely used in popular culture to describe any form of sexual activity with a relative. In Canada, marriage between uncles and nieces and between aunts and nephews is legal. The most public case of consensual adult sibling incest in recent years is that of a brother-sister couple from Germany, Patrick Stübing and Susan Karolewski.
Because of his father's violent behavior, Patrick was taken in at the age of 3 by foster parents, who later adopted him. At the age of 23 he learned about his biological parents, contacted his mother, and met her and his then 16-year-old sister Susan for the first time. The now-adult Patrick moved in with his birth family shortly thereafter. After their mother died suddenly six months later, the siblings became intimately close, and had their first child together in 2001. By 2004, they had four children together: Eric, Sarah, Nancy, and Sofia. The public nature of their relationship, and the repeated prosecutions and even jail time they have served as a result, have caused some in Germany to question whether incest between consenting adults should be punished at all. An article about them in "Der Spiegel" states that the couple are happy together. According to court records, the first three children have mental and physical disabilities and have been placed in foster care. In April 2012, at the European Court of Human Rights, Patrick Stübing lost his case arguing that the conviction violated his right to a private and family life. On September 24, 2014, the German Ethics Council recommended that the government abolish laws criminalizing incest between siblings, arguing that such bans impinge upon citizens' fundamental rights. Some societies differentiate between full-sibling and half-sibling relations; in ancient societies, both full-sibling and half-sibling marriages occurred. Marriages and sexual relationships between first cousins are stigmatized as incest in some cultures, but tolerated in much of the world. Currently, 24 US states prohibit marriages between first cousins, and another seven permit them only under special circumstances. The United Kingdom permits both marriage and sexual relations between first cousins. In some non-Western societies, marriages between close biological relatives account for 20% to 60% of all marriages.
Industrial Revolution The Industrial Revolution, now also known as the First Industrial Revolution, was the transition to new manufacturing processes in Europe and the United States, in the period from about 1760 to sometime between 1820 and 1840. This transition included going from hand production methods to machines, new chemical manufacturing and iron production processes, the increasing use of steam power and water power, the development of machine tools and the rise of the mechanized factory system. The Industrial Revolution also led to an unprecedented rise in the rate of population growth. Textiles were the dominant industry of the Industrial Revolution in terms of employment, value of output and capital invested. The textile industry was also the first to use modern production methods. The Industrial Revolution began in Great Britain, and many of the technological innovations were of British origin. By the mid-18th century Britain was the world's leading commercial nation, controlling a global trading empire with colonies in North America and the Caribbean, and with major military and political hegemony on the Indian subcontinent, particularly with the proto-industrialised Mughal Bengal, through the activities of the East India Company. The development of trade and the rise of business were among the major causes of the Industrial Revolution. The Industrial Revolution marks a major turning point in history; almost every aspect of daily life was influenced in some way. In particular, average income and population began to exhibit unprecedented sustained growth. Some economists say that the major effect of the Industrial Revolution was that the standard of living for the general population in the western world began to increase consistently for the first time in history, although others have said that it did not begin to meaningfully improve until the late 19th and 20th centuries. GDP per capita was broadly stable before the Industrial Revolution and the emergence of the modern capitalist economy, while the Industrial Revolution began an era of per-capita economic growth in capitalist economies. Economic historians are in agreement that the onset of the Industrial Revolution is the most important event in the history of humanity since the domestication of animals and plants. Although the structural change from agriculture to industry is widely associated with the Industrial Revolution, in the United Kingdom it was already almost complete by 1760. The precise start and end of the Industrial Revolution are still debated among historians, as is the pace of economic and social changes. Eric Hobsbawm held that the Industrial Revolution began in Britain in the 1780s and was not fully felt until the 1830s or 1840s, while T. S. Ashton held that it occurred roughly between 1760 and 1830. Rapid industrialization first began in Britain, starting with mechanized spinning in the 1780s, with high rates of growth in steam power and iron production occurring after 1800. Mechanized textile production spread from Great Britain to continental Europe and the United States in the early 19th century, with important centres of textiles, iron and coal emerging in Belgium and the United States and, later, textiles in France. An economic recession occurred from the late 1830s to the early 1840s, when the adoption of the original innovations of the Industrial Revolution, such as mechanized spinning and weaving, slowed and their markets matured.
Innovations developed late in the period, such as the increasing adoption of locomotives, steamboats and steamships, and hot blast iron smelting, and new technologies such as the electrical telegraph, widely introduced in the 1840s and 1850s, were not powerful enough to drive high rates of growth. Rapid economic growth began to occur after 1870, springing from a new group of innovations in what has been called the Second Industrial Revolution. These new innovations included new steel-making processes, mass production, assembly lines, electrical grid systems, the large-scale manufacture of machine tools and the use of increasingly advanced machinery in steam-powered factories. The earliest recorded use of the term "Industrial Revolution" seems to have been in a letter from 6 July 1799 written by French envoy Louis-Guillaume Otto, announcing that France had entered the race to industrialise. In his 1976 book "Keywords: A Vocabulary of Culture and Society", Raymond Williams states in the entry for "Industry": "The idea of a new social order based on major industrial change was clear in Southey and Owen, between 1811 and 1818, and was implicit as early as Blake in the early 1790s and Wordsworth at the turn of the [19th] century." The term "Industrial Revolution" applied to technological change was becoming more common by the late 1830s, as in Jérôme-Adolphe Blanqui's description in 1837 of "la révolution industrielle". Friedrich Engels in "The Condition of the Working Class in England" in 1844 spoke of "an industrial revolution, a revolution which at the same time changed the whole of civil society". However, although Engels wrote in the 1840s, his book was not translated into English until the late 1800s, and his expression did not enter everyday language until then. Credit for popularising the term may be given to Arnold Toynbee, whose 1881 lectures gave a detailed account of it. Economic historians and authors such as Mendels, Pomeranz and Kriedte argue that proto-industrialization in parts of Europe, the Islamic world, Mughal India, and China created the social and economic conditions that led to the Industrial Revolution, thus giving rise to the Great Divergence. Some historians, such as John Clapham and Nicholas Crafts, have argued that the economic and social changes occurred gradually and that the term "revolution" is a misnomer. This is still a subject of debate among some historians. The commencement of the Industrial Revolution is closely linked to a small number of innovations, beginning in the second half of the 18th century. By the 1830s major gains had been made in the key technologies of textiles, iron and steam power. In 1750 Britain imported 2.5 million pounds of raw cotton, most of which was spun and woven by the cottage industry of Lancashire. The work was done by hand in workers' homes or occasionally in the shops of master weavers. In 1787 raw cotton consumption was 22 million pounds, most of which was cleaned, carded and spun on machines. The British textile industry used 52 million pounds of cotton in 1800, which increased to 588 million pounds in 1850. The share of value added by the cotton textile industry in Britain was 2.6% in 1760, 17% in 1801 and 22.4% in 1831. Value added by the British woollen industry was 14.1% in 1801. Cotton factories in Britain numbered approximately 900 in 1797. In 1760 approximately one-third of cotton cloth manufactured in Britain was exported, rising to two-thirds by 1800. In 1781 cotton spun amounted to 5.1 million pounds, which increased to 56 million pounds by 1800.
In 1800 less than 0.1% of world cotton cloth was produced on machinery invented in Britain. In 1788 there were 50,000 spindles in Britain, rising to 7 million over the next 30 years. Wages in Lancashire, a core region for cottage industry and later factory spinning and weaving, were about six times those in India in 1770, when overall productivity in Britain was about three times higher than in India. Parts of India, China, Central America, South America and the Middle East have a long history of hand manufacturing cotton textiles, which became a major industry sometime after 1000 AD. In tropical and subtropical regions where it was grown, most cotton was grown by small farmers alongside their food crops and was spun and woven in households, largely for domestic consumption. In the 15th century China began to require households to pay part of their taxes in cotton cloth. By the 17th century almost all Chinese wore cotton clothing. Almost everywhere cotton cloth could be used as a medium of exchange. In India a significant amount of cotton textiles were manufactured for distant markets, often produced by professional weavers. Some merchants also owned small weaving workshops. India produced a variety of cotton cloth, some of exceptionally fine quality. Cotton was a difficult raw material for Europe to obtain before it was grown on colonial plantations in the Americas. The early Spanish explorers found Native Americans growing unknown species of excellent quality cotton: sea island cotton ("Gossypium barbadense") and upland green-seeded cotton ("Gossypium hirsutum"). Sea island cotton grew in tropical areas and on the barrier islands of Georgia and South Carolina, but did poorly inland. Sea island cotton began being exported from Barbados in the 1650s. Upland green-seeded cotton grew well in the inland areas of the southern U.S. but was not economical because of the difficulty of removing the seed, a problem solved by the cotton gin. A strain of cotton seed brought from Mexico to Natchez, Mississippi in 1806 became the parent genetic material for over 90% of world cotton production today; it produced bolls that were three to four times faster to pick. The Age of Discovery was followed by a period of colonialism beginning around the 16th century. Following the discovery of a trade route to India around southern Africa by the Portuguese, the Dutch established the Verenigde Oostindische Compagnie (abbr. VOC) or Dutch East India Company, the world's first transnational corporation and the first multinational enterprise to issue shares of stock to the public. The British later founded the East India Company, along with smaller companies of different nationalities which established trading posts and employed agents to engage in trade throughout the Indian Ocean region and between the Indian Ocean region and North Atlantic Europe. One of the largest segments of this trade was in cotton textiles, which were purchased in India and sold in Southeast Asia, including the Indonesian archipelago, where spices were purchased for sale to Southeast Asia and Europe. By the mid-1760s cloth accounted for over three-quarters of the East India Company's exports. Indian textiles were in demand in the North Atlantic region of Europe, where previously only wool and linen were available; however, the amount of cotton goods consumed in Western Europe was minor until the early 19th century.
By 1600 Flemish refugees began weaving cotton cloth in English towns where cottage spinning and weaving of wool and linen was well established; however, they were left alone by the guilds, who did not consider cotton a threat. Earlier European attempts at cotton spinning and weaving were made in 12th-century Italy and 15th-century southern Germany, but these industries eventually ended when the supply of cotton was cut off. The Moors in Spain grew, spun and wove cotton beginning around the 10th century. British cloth could not compete with Indian cloth because India's labour cost was approximately one-fifth to one-sixth that of Britain's. In 1700 and 1721 the British government passed the Calico Acts to protect the domestic woollen and linen industries from the increasing amounts of cotton fabric imported from India. The demand for heavier fabric was met by a domestic industry based around Lancashire that produced fustian, a cloth with flax warp and cotton weft. Flax was used for the warp because wheel-spun cotton did not have sufficient strength, but the resulting blend was not as soft as 100% cotton and was more difficult to sew. On the eve of the Industrial Revolution, spinning and weaving were done in households, for domestic consumption and as a cottage industry under the putting-out system. Occasionally the work was done in the workshop of a master weaver. Under the putting-out system, home-based workers produced under contract to merchant sellers, who often supplied the raw materials. In the off season the women, typically farmers' wives, did the spinning and the men did the weaving. Using the spinning wheel, it took anywhere from four to eight spinners to supply one handloom weaver. The flying shuttle, patented in 1733 by John Kay, with a number of subsequent improvements including an important one in 1747, doubled the output of a weaver, worsening the imbalance between spinning and weaving. It became widely used around Lancashire after 1760 when John's son, Robert, invented the drop box, which facilitated changing thread colors. Lewis Paul patented the roller spinning frame and the flyer-and-bobbin system for drawing wool to a more even thickness. The technology was developed with the help of John Wyatt of Birmingham. Paul and Wyatt opened a mill in Birmingham which used their new rolling machine, powered by a donkey. In 1743 a factory opened in Northampton with 50 spindles on each of five of Paul and Wyatt's machines. This operated until about 1764. A similar mill was built by Daniel Bourn in Leominster, but this burnt down. Both Lewis Paul and Daniel Bourn patented carding machines in 1748. Paul's design, based on two sets of rollers that travelled at different speeds, was later used in the first cotton spinning mill. Lewis's invention was later developed and improved by Richard Arkwright in his water frame and Samuel Crompton in his spinning mule. In 1764, in the village of Stanhill, Lancashire, James Hargreaves invented the spinning jenny, which he patented in 1770. It was the first practical spinning frame with multiple spindles. The jenny worked in a similar manner to the spinning wheel, by first clamping down on the fibres, then drawing them out, followed by twisting. It was a simple, wooden-framed machine that cost only about £6 for a 40-spindle model in 1792, and was used mainly by home spinners. The jenny produced a lightly twisted yarn only suitable for weft, not warp.
The spinning frame or water frame was developed by Richard Arkwright, who, along with two partners, patented it in 1769. The design was partly based on a spinning machine built for Thomas Highs by clockmaker John Kay, who was hired by Arkwright. For each spindle the water frame used a series of four pairs of rollers, each operating at a successively higher rotating speed, to draw out the fibre, which was then twisted by the spindle. The roller spacing was slightly longer than the fibre length. Too close a spacing caused the fibres to break, while too distant a spacing caused uneven thread. The top rollers were leather-covered, and loading on the rollers was applied by a weight. The weights kept the twist from backing up before the rollers. The bottom rollers were wood and metal, with fluting along the length. The water frame was able to produce a hard, medium-count thread suitable for warp, finally allowing 100% cotton cloth to be made in Britain. A horse powered the first factory to use the spinning frame. Arkwright and his partners used water power at a factory in Cromford, Derbyshire in 1771, giving the invention its name. Samuel Crompton's spinning mule was introduced in 1779. The name "mule" implies a hybrid, because the machine was a combination of the spinning jenny and the water frame: the spindles were placed on a carriage that went through an operational sequence during which the rollers stopped while the carriage moved away from the drawing roller to finish drawing out the fibres as the spindles started rotating. Crompton's mule was able to produce finer thread than hand spinning and at a lower cost. Mule-spun thread was of suitable strength to be used as warp, and finally allowed Britain to produce highly competitive yarn in large quantities. Realising that the expiration of the Arkwright patent would greatly increase the supply of spun cotton and lead to a shortage of weavers, Edmund Cartwright developed a vertical power loom, which he patented in 1785. In 1786 he patented a two-man operated loom, which was more conventional. Cartwright built two factories; the first burned down and the second was sabotaged by his workers. Cartwright's loom design had several flaws, the most serious being thread breakage. Samuel Horrocks patented a fairly successful loom in 1813. Horrocks's loom was improved by Richard Roberts in 1822, and these looms were produced in large numbers by Roberts, Hill & Co. The demand for cotton presented an opportunity to planters in the Southern United States, who thought upland cotton would be a profitable crop if a better way could be found to remove the seed. Eli Whitney responded to the challenge by inventing the inexpensive cotton gin. Using a cotton gin, a man could remove the seed from as much upland cotton in one day as would previously have taken a woman, working at the rate of one pound of cotton per day, two months to process. These advances were capitalised on by entrepreneurs, of whom the best known is Richard Arkwright. He is credited with a list of inventions, but these were actually developed by such people as Thomas Highs and John Kay; Arkwright nurtured the inventors, patented the ideas, financed the initiatives, and protected the machines. He created the cotton mill, which brought the production processes together in a factory, and he developed the use of power, first horse power and then water power, which made cotton manufacture a mechanised industry.
Other inventors increased the efficiency of the individual steps of spinning (carding, twisting and spinning, and rolling) so that the supply of yarn increased greatly. Before long, steam power was applied to drive textile machinery. Manchester acquired the nickname Cottonopolis during the early 19th century owing to its sprawl of textile factories. Although mechanization dramatically decreased the cost of cotton cloth, by the mid-19th century machine-woven cloth still could not equal the quality of hand-woven Indian cloth, in part due to the fineness of thread made possible by the type of cotton used in India, which allowed high thread counts. However, the high productivity of British textile manufacturing allowed coarser grades of British cloth to undersell hand-spun and hand-woven fabric in low-wage India, eventually destroying the Indian industry. The earliest European attempts at mechanized spinning were with wool; however, wool spinning proved more difficult to mechanize than cotton. Productivity improvement in wool spinning during the Industrial Revolution was significant but far less than that of cotton. Arguably the first highly mechanised factory was John Lombe's water-powered silk mill at Derby, operational by 1721. Lombe learned silk thread manufacturing by taking a job in Italy and acting as an industrial spy; however, because the Italian silk industry guarded its secrets closely, the state of the industry at that time is unknown. Although Lombe's factory was technically successful, the supply of raw silk from Italy was cut off to eliminate competition. In order to promote manufacturing, the Crown paid for models of Lombe's machinery, which were exhibited in the Tower of London. Bar iron was the commodity form of iron, used as the raw material for making hardware goods such as nails, wire, hinges, horseshoes, wagon tires and chains, and for structural shapes. A small amount of bar iron was converted into steel. Cast iron was used for pots, stoves and other items where its brittleness was tolerable. Most cast iron was refined and converted to bar iron, with substantial losses. Bar iron was also made by the bloomery process, which was the predominant iron smelting process until the late 18th century. In the UK in 1720 there were 20,500 tons of cast iron produced with charcoal and 400 tons with coke. In 1750 charcoal iron production was 24,500 tons and coke iron was 2,500 tons. In 1788 the production of charcoal cast iron was 14,000 tons while coke iron production was 54,000 tons. In 1806 charcoal cast iron production was 7,800 tons and coke cast iron was 250,000 tons. In 1750 the UK imported 31,200 tons of bar iron and, either by refining cast iron or by direct production, made 18,800 tons of bar iron using charcoal and 100 tons using coke. In 1796 the UK was making 125,000 tons of bar iron with coke and 6,400 tons with charcoal; imports were 38,000 tons and exports were 24,600 tons. In 1806 the UK did not import bar iron but exported 31,500 tons. A major change in the iron industries during the era of the Industrial Revolution was the replacement of wood and other bio-fuels with coal. For a given amount of heat, coal required much less labour to mine than cutting wood and converting it to charcoal, and coal was much more abundant than wood, supplies of which were becoming scarce before the enormous increase in iron production that took place in the late 18th century. By 1750 coke had generally replaced charcoal in the smelting of copper and lead and was in widespread use in making glass.
In the smelting and refining of iron, coal and coke produced inferior iron to that made with charcoal because of the coal's sulfur content. Low-sulfur coals were known, but they still contained harmful amounts. Conversion of coal to coke only slightly reduces the sulfur content, and only a minority of coals are suitable for coking. Another factor limiting the iron industry before the Industrial Revolution was the scarcity of water power to drive the blast bellows. This limitation was overcome by the steam engine. Use of coal in iron smelting started somewhat before the Industrial Revolution, based on innovations by Sir Clement Clerke and others from 1678, using coal reverberatory furnaces known as cupolas. These were operated by the flames playing on the ore and charcoal or coke mixture, reducing the oxide to metal. This has the advantage that impurities in the coal (such as sulfur and ash) do not migrate into the metal. This technology was applied to lead from 1678 and to copper from 1687. It was also applied to iron foundry work in the 1690s, but in this case the reverberatory furnace was known as an air furnace. (The foundry cupola is a different, and later, innovation.) By 1709 Abraham Darby made progress using coke to fuel his blast furnaces at Coalbrookdale. However, the coke pig iron he made was not suitable for making wrought iron and was used mostly for the production of cast iron goods, such as pots and kettles. He had the advantage over his rivals in that his pots, cast by his patented process, were thinner and cheaper than theirs. Coke pig iron was hardly used to produce wrought iron until 1755–56, when Darby's son Abraham Darby II built furnaces at Horsehay and Ketley, where low-sulfur coal was available (and not far from Coalbrookdale). These new furnaces were equipped with water-powered bellows, the water being pumped by Newcomen steam engines. The Newcomen engines were not attached directly to the blowing cylinders because the engines alone could not produce a steady air blast. Abraham Darby III installed similar steam-pumped, water-powered blowing cylinders at the Dale Company when he took control in 1768. The Dale Company used several Newcomen engines to drain its mines and made parts for engines which it sold throughout the country. Steam engines made the use of higher-pressure and higher-volume blast practical; however, the leather used in bellows was expensive to replace. In 1757, ironmaster John Wilkinson patented a hydraulically powered blowing engine for blast furnaces.
International Court of Justice The International Court of Justice (ICJ), sometimes called the World Court, is one of the six principal organs of the United Nations (UN). It settles disputes between states and gives advisory opinions on international legal issues referred to it by the UN. Its opinions and rulings serve as sources of international law. The ICJ is the successor of the Permanent Court of International Justice (PCIJ), which was established by the League of Nations in 1920. After the Second World War, both the League and the PCIJ were succeeded by the United Nations and the ICJ, respectively. The Statute of the ICJ draws heavily from that of its predecessor, and the latter's decisions remain valid. All members of the UN are party to the ICJ Statute. The ICJ comprises a panel of 15 judges elected by the General Assembly and Security Council for nine-year terms. The court is seated in the Peace Palace in The Hague, Netherlands, making it the only principal UN organ not located in New York City. Its official working languages are English and French. The first permanent institution established for the purpose of settling international disputes was the Permanent Court of Arbitration (PCA), which was created by the Hague Peace Conference of 1899. Initiated by the Russian Czar Nicholas II, the conference involved all the world's major powers, as well as several smaller states, and resulted in the first multilateral treaties concerned with the conduct of warfare. Among these was the "Convention for the Pacific Settlement of International Disputes", which set forth the institutional and procedural framework for arbitral proceedings, which would take place in The Hague, Netherlands. Although the proceedings would be supported by a permanent bureau, whose functions would be equivalent to those of a secretariat or court registry, the arbitrators would be appointed by the disputing states from a larger pool provided by each member of the Convention. The PCA was established in 1900 and began proceedings in 1902. A second Hague Peace Conference in 1907, which involved most of the world's sovereign states, revised the Convention and enhanced the rules governing arbitral proceedings before the PCA. During this conference, the United States, Great Britain and Germany submitted a joint proposal for a permanent court whose judges would serve full-time. As the delegates could not agree as to how the judges would be selected, the matter was temporarily shelved pending an agreement to be adopted at a later convention. The Hague Peace Conferences, and the ideas that emerged from them, influenced the creation of the Central American Court of Justice, which was established in 1908 as one of the earliest regional judicial bodies. Various plans and proposals were made between 1911 and 1919 for the establishment of an international judicial tribunal, but these would not be realized until the formation of a new international system following the First World War. The unprecedented bloodshed of the First World War led to the creation of the League of Nations, established by the Paris Peace Conference of 1919 as the first worldwide intergovernmental organization aimed at maintaining peace and collective security.
Article 14 of the League's Covenant called for the establishment of a Permanent Court of International Justice (PCIJ), which would be responsible for adjudicating any international dispute submitted to it by the contesting parties, as well as for providing advisory opinions upon any dispute or question referred to it by the League of Nations. In December 1920, following several drafts and debates, the Assembly of the League unanimously adopted the Statute of the PCIJ, which was signed and ratified the following year by a majority of members. Among other things, the new Statute resolved the contentious issue of selecting judges by providing that the judges be elected by both the Council and the Assembly of the League concurrently but independently. The makeup of the PCIJ would reflect the "main forms of civilization and the principal legal systems of the world". The PCIJ would be permanently placed at the Peace Palace in The Hague, alongside the Permanent Court of Arbitration. The PCIJ represented a major innovation in international jurisprudence in several ways. Unlike the ICJ, the PCIJ was not part of the League, nor were members of the League automatically party to its Statute. The United States, which played a key role in both the second Hague Peace Conference and the Paris Peace Conference, was notably not a member of the League, although several of its nationals served as judges of the Court. From its first session in 1922 until 1940, the PCIJ dealt with 29 interstate disputes and issued 27 advisory opinions. The Court's widespread acceptance was reflected by the fact that several hundred international treaties and agreements conferred jurisdiction upon it over specified categories of disputes. In addition to helping resolve several serious international disputes, the PCIJ helped clarify several ambiguities in international law that contributed to its development. The United States played a major role in setting up the World Court but never joined. Presidents Wilson, Harding, Coolidge, Hoover and Roosevelt all supported membership, but it was impossible to get a two-thirds majority in the Senate for the treaty. Following a peak of activity in 1933, the PCIJ began to decline in its activities due to the growing international tension and isolationism that characterized the era. The Second World War effectively put an end to the Court, which held its last public session in December 1939 and issued its last orders in February 1940. In 1942 the United States and the United Kingdom jointly declared support for establishing or re-establishing an international court after the war, and in 1943 the U.K. chaired a panel of jurists from around the world, the "Inter-Allied Committee", to discuss the matter. Its 1944 report set out recommendations on the form a post-war international court should take. Several months later, a conference of the major Allied Powers—China, the USSR, the U.K., and the U.S.—issued a joint declaration recognizing the necessity "of establishing at the earliest practicable date a general international organization, based on the principle of the sovereign equality of all peace-loving States, and open to membership by all such States, large and small, for the maintenance of international peace and security". The following Allied conference at Dumbarton Oaks, in the United States, published a proposal in October 1944 that called for the establishment of an intergovernmental organization that would include an international court. A meeting was subsequently convened in Washington, D.C.
in April 1945, involving 44 jurists from around the world to draft a statute for the proposed court. The draft statute was substantially similar to that of the PCIJ, and it was questioned whether a new court should even be created. During the San Francisco Conference, which took place from 25 April to 26 June 1945 and involved 50 countries, it was decided that an entirely new court should be established as a principal organ of the new United Nations. The statute of this court would form an integral part of the United Nations Charter, which, to maintain continuity, expressly held that the Statute of the International Court of Justice (ICJ) was based upon that of the PCIJ. Consequently, the PCIJ convened for the last time in October 1945 and resolved to transfer its archives to its successor, which would take its place at the Peace Palace. The judges of the PCIJ all resigned on 31 January 1946, with the election of the first members of the ICJ taking place the following February at the First Session of the United Nations General Assembly and Security Council. In April 1946, the PCIJ was formally dissolved, and the ICJ, in its first meeting, elected as President José Gustavo Guerrero of El Salvador, who had served as the last President of the PCIJ. The Court also appointed members of its Registry, drawn largely from that of the PCIJ, and held an inaugural public sitting later that month. The first case was submitted in May 1947 by the United Kingdom against Albania concerning incidents in the Corfu Channel. Established in 1945 by the UN Charter, the court began work in 1946 as the successor to the Permanent Court of International Justice. The Statute of the International Court of Justice, similar to that of its predecessor, is the main constitutional document constituting and regulating the court. The court's workload covers a wide range of judicial activity. After the court ruled that the United States's covert war against Nicaragua was in violation of international law ("Nicaragua v. United States"), the United States withdrew from compulsory jurisdiction in 1986 to accept the court's jurisdiction only on a discretionary basis. Chapter XIV of the United Nations Charter authorizes the UN Security Council to enforce Court rulings. However, such enforcement is subject to the veto power of the five permanent members of the Council, which the United States used in the "Nicaragua" case. The ICJ is composed of fifteen judges elected to nine-year terms by the UN General Assembly and the UN Security Council from a list of people nominated by the national groups in the Permanent Court of Arbitration. The election process is set out in Articles 4–19 of the ICJ Statute. Elections are staggered, with five judges elected every three years to ensure continuity within the court. Should a judge die in office, the practice has generally been to elect a judge in a special election to complete the term. No two judges may be nationals of the same country. According to Article 9, the membership of the court is supposed to represent the "main forms of civilization and of the principal legal systems of the world". Essentially, that has meant common law, civil law and socialist law (now post-communist law). 
There is an informal understanding that the seats will be distributed by geographic region so that there are five seats for Western countries, three for African states (including one judge of francophone civil law, one of Anglophone common law and one Arab), two for Eastern European states, three for Asian states and two for Latin American and Caribbean states. For most of the court's history, the five permanent members of the United Nations Security Council (France, the USSR, China, the United Kingdom, and the United States) have had a judge serving, thereby occupying three of the Western seats, one of the Asian seats and one of the Eastern European seats. Exceptions have included China, which had no judge on the court from 1967 to 1985 because it did not put forward a candidate, and the United Kingdom, whose candidate Sir Christopher Greenwood was withdrawn from the 2017 election for a second nine-year term on the bench, leaving no judge from the United Kingdom on the court. Greenwood had been supported by the UN Security Council but failed to get a majority in the UN General Assembly; Indian judge Dalveer Bhandari took the seat instead. Article 6 of the Statute provides that all judges should be "elected regardless of their nationality among persons of high moral character" who are either qualified for the highest judicial office in their home states or known as lawyers with sufficient competence in international law. Judicial independence is dealt with specifically in Articles 16–18. Judges of the ICJ are not able to hold any other post or act as counsel. In practice, members of the court have developed their own interpretation of these rules, which allows them to be involved in outside arbitration and to hold professional posts as long as there is no conflict of interest. A judge can be dismissed only by a unanimous vote of the other members of the court. Despite these provisions, the independence of ICJ judges has been questioned. For example, during the "Nicaragua" case, the United States issued a communiqué suggesting that it could not present sensitive material to the court because of the presence of judges from the Soviet bloc. Judges may deliver joint judgments or give their own separate opinions. Decisions and advisory opinions are by majority, and, in the event of an equal division, the President's vote becomes decisive, as occurred in the "Legality of the Use by a State of Nuclear Weapons in Armed Conflict" (opinion requested by the WHO), [1996] ICJ Reports 66. Judges may also deliver separate dissenting opinions. Article 31 of the statute sets out a procedure whereby "ad hoc" judges sit on contentious cases before the court. The system allows any party to a contentious case (if it otherwise does not have one of that party's nationals sitting on the court) to select one additional person to sit as a judge on that case only. It is thus possible that as many as seventeen judges may sit on one case. The system may seem strange when compared with domestic court processes, but its purpose is to encourage states to submit cases. For example, if a state knows that it will have a judicial officer who can participate in deliberation and offer other judges local knowledge and an understanding of the state's perspective, it may be more willing to submit to the jurisdiction of the court. Although this system does not sit well with the judicial nature of the body, it is usually of little practical consequence.
"Ad hoc" judges usually (but not always) vote in favour of the state that appointed them and thus cancel each other out. Generally, the court sits as full bench, but in the last fifteen years, it has on occasion sat as a chamber. Articles 26–29 of the statute allow the court to form smaller chambers, usually 3 or 5 judges, to hear cases. Two types of chambers are contemplated by Article 26: firstly, chambers for special categories of cases, and second, the formation of "ad hoc" chambers to hear particular disputes. In 1993, a special chamber was established, under Article 26(1) of the ICJ statute, to deal specifically with environmental matters (although it has never been used). "Ad hoc" chambers are more frequently convened. For example, chambers were used to hear the "Gulf of Maine Case" (Canada/US). In that case, the parties made clear they would withdraw the case unless the court appointed judges to the chamber acceptable to the parties. Judgments of chambers may have either less authority than full Court judgments or diminish the proper interpretation of universal international law informed by a variety of cultural and legal perspectives. On the other hand, the use of chambers might encourage greater recourse to the court and thus enhance international dispute resolution. , the composition of the court is as follows: As stated in Article 93 of the UN Charter, all UN members are automatically parties to the court's statute. Non-UN members may also become parties to the court's statute under the Article 93(2) procedure. For example, before becoming a UN member state, Switzerland used this procedure in 1948 to become a party, and Nauru became a party in 1988. Once a state is a party to the court's statute, it is entitled to participate in cases before the court. However, being a party to the statute does not automatically give the court jurisdiction over disputes involving those parties. The issue of jurisdiction is considered in the three types of ICJ cases: contentious issues, incidental jurisdiction, and advisory opinions. In contentious cases (adversarial proceedings seeking to settle a dispute), the ICJ produces a binding ruling between states that agree to submit to the ruling of the court. Only states may be parties in contentious cases. Individuals, corporations, component parts of a federal state, NGOs, UN organs and self-determination groups are excluded from direct participation in cases although the court may receive information from public international organizations. That does not preclude non-state interests from being the subject of proceedings if a state brings the case against another. For example, a state may, in cases of "diplomatic protection", bring a case on behalf of one of its nationals or corporations. Jurisdiction is often a crucial question for the court in contentious cases. (See Procedure below.) The key principle is that the ICJ has jurisdiction only on the basis of consent. Article 36 outlines four bases on which the court's jurisdiction may be founded: Until rendering a final judgment, the court has competence to order interim measures for the protection of the rights of a party to a dispute. One or both parties to a dispute may apply the ICJ for issuing interim measures. In the "Frontier Dispute" Case, both parties to the dispute, Burkina Faso and Mali, submitted an application to the court to indicate interim measures. Incidental jurisdiction of the court derives from the Article 41 of the Statute of it. 
Like the final judgment, the court's orders for interim measures are binding on the state parties to the dispute. The ICJ has competence to indicate interim measures only if "prima facie" jurisdiction is satisfied. An advisory opinion is a function of the court open only to specified United Nations bodies and agencies. The UN Charter grants the General Assembly and the Security Council the power to request the court to issue an advisory opinion on any legal question. Organs of the UN other than the General Assembly and the Security Council may not request an advisory opinion of the ICJ unless the General Assembly authorizes them, and they may request advisory opinions only on matters falling within the scope of their activities. On receiving a request, the court decides which states and organizations might provide useful information and gives them an opportunity to present written or oral statements. Advisory opinions were intended as a means by which UN agencies could seek the court's help in deciding complex legal issues that might fall under their respective mandates. In principle, the court's advisory opinions are only consultative in character, but they are influential and widely respected. Certain instruments or regulations can provide in advance that an advisory opinion shall be specifically binding on particular agencies or states, but inherently they are non-binding under the Statute of the Court. This non-binding character does not mean that advisory opinions are without legal effect, because the legal reasoning embodied in them reflects the court's authoritative views on important issues of international law. In arriving at them, the court follows essentially the same rules and procedures that govern its binding judgments delivered in contentious cases submitted to it by sovereign states. An advisory opinion derives its status and authority from the fact that it is the official pronouncement of the principal judicial organ of the United Nations. Advisory opinions have often been controversial because the questions asked are controversial or because the case was pursued as an indirect way of bringing what is really a contentious case before the court. Examples of advisory opinions can be found in the section advisory opinions in the List of International Court of Justice cases article. One such well-known advisory opinion is the "Nuclear Weapons Case". Article 94 establishes the duty of all UN members to comply with decisions of the court involving them. If parties do not comply, the issue may be taken before the Security Council for enforcement action. There are obvious problems with such a method of enforcement. If the judgment is against one of the five permanent members of the Security Council or its allies, any resolution on enforcement can be vetoed. That occurred, for example, after the "Nicaragua" case, when Nicaragua brought the issue of the United States' noncompliance with the court's decision before the Security Council. Furthermore, if the Security Council refuses to enforce a judgment against any other state, there is no method of forcing the state to comply. In addition, the most effective form of action for the Security Council, coercive action under Chapter VII of the United Nations Charter, can be justified only if international peace and security are at stake, and the Security Council has so far never taken such action.
The relationship between the ICJ and the Security Council, and the separation of their powers, was considered by the court in 1992 in the "Pan Am" case. The court had to consider an application from Libya for the order of provisional measures of protection to safeguard its rights, which, it alleged, were being infringed by the threat of economic sanctions by the United Kingdom and United States. The problem was that these sanctions had been authorized by the Security Council, which resulted in a potential conflict between the Chapter VII functions of the Security Council and the judicial function of the court. The court decided, by eleven votes to five, that it could not order the requested provisional measures because the rights claimed by Libya, even if legitimate under the Montreal Convention, could not be "prima facie" regarded as appropriate since the action was ordered by the Security Council. In accordance with Article 103 of the UN Charter, obligations under the Charter took precedence over other treaty obligations. Nevertheless, the court declared the application admissible in 1998. A decision on the merits was never given, since the parties (United Kingdom, United States, and Libya) settled the case out of court in 2003. There was a marked reluctance on the part of a majority of the court to become involved in a dispute in such a way as to bring it potentially into conflict with the Council. The court stated in the "Nicaragua" case that there is no necessary inconsistency between action by the Security Council and adjudication by the ICJ. However, when there is room for conflict, the balance appears to be in favour of the Security Council. Should either party fail "to perform the obligations incumbent upon it under a judgment rendered by the Court", the Security Council may be called upon to "make recommendations or decide upon measures" if the Security Council deems such actions necessary. In practice, the court's powers have been limited by the unwillingness of the losing party to abide by the court's ruling and by the Security Council's unwillingness to impose consequences. However, in theory, "so far as the parties to the case are concerned, a judgment of the Court is binding, final and without appeal", and "by signing the Charter, a State Member of the United Nations undertakes to comply with any decision of the International Court of Justice in a case to which it is a party." For example, the United States had accepted the court's compulsory jurisdiction upon the court's creation in 1946, but after "Nicaragua v. United States" it withdrew its acceptance, following the 1984 judgment that called on the US to "cease and to refrain" from the "unlawful use of force" against the government of Nicaragua. The court ruled (with only the American judge dissenting) that the United States was "in breach of its obligation under the Treaty of Friendship with Nicaragua not to use force against Nicaragua" and ordered the United States to pay war reparations. When deciding cases, the court applies international law as summarized in Article 38 of the ICJ Statute, which provides that in arriving at its decisions the court shall apply international conventions, international custom and the "general principles of law recognized by civilized nations."
It may also refer to academic writing ("the teachings of the most highly qualified publicists of the various nations") and previous judicial decisions to help interpret the law, although the court is not formally bound by its previous decisions under the doctrine of "stare decisis". Article 59 makes clear that the common law notion of precedent does not apply to the decisions of the ICJ: the court's decision binds only the parties to that particular controversy. Under Article 38(1)(d), however, the court may consider its own previous decisions. If the parties agree, they may also grant the court the liberty to decide "ex aequo et bono" ("in justice and fairness"), granting the ICJ the freedom to make an equitable decision based on what is fair under the circumstances. That provision has not been used in the court's history. So far, the International Court of Justice has dealt with about 130 cases. The ICJ is vested with the power to make its own rules. Court procedure is set out in the "Rules of Court of the International Court of Justice 1978" (as amended on 29 September 2005). Cases before the ICJ follow a standard pattern. The case is lodged by the applicant, which files a written memorial setting out the basis of the court's jurisdiction and the merits of its claim. The respondent may accept the court's jurisdiction and file its own memorial on the merits of the case. A respondent that does not wish to submit to the jurisdiction of the court may raise preliminary objections. Any such objections must be ruled upon before the court can address the merits of the applicant's claim. Often a separate public hearing is held on the preliminary objections, and the court renders a separate judgment on them. Respondents normally file preliminary objections to the jurisdiction of the court and/or the admissibility of the case. Inadmissibility refers to a range of arguments about factors the court should take into account in deciding jurisdiction, such as the fact that the issue is not justiciable or that it is not a "legal dispute". In addition, objections may be made because all necessary parties are not before the court. If the case necessarily requires the court to rule on the rights and obligations of a state that has not consented to the court's jurisdiction, the court does not proceed to issue a judgment on the merits. If the court decides it has jurisdiction and the case is admissible, the respondent then is required to file a Memorial addressing the merits of the applicant's claim. Once all written arguments are filed, the court holds a public hearing on the merits. Once a case has been filed, any party (usually the applicant) may seek an order from the court to protect the "status quo" pending the hearing of the case. Such orders are known as Provisional (or Interim) Measures and are analogous to interlocutory injunctions in United States law. Article 41 of the statute allows the court to make such orders. The court must be satisfied that it has "prima facie" jurisdiction to hear the merits of the case before it grants provisional measures. In cases in which a third state's interests are affected, that state may be permitted to intervene in the case and participate as a full party. Under Article 62, a state "with an interest of a legal nature" may apply; however, it is within the court's discretion whether or not to allow the intervention. Intervention applications are rare; the first successful application occurred only in 1991.
Once deliberation has taken place, the court issues a majority opinion. Individual judges may issue concurring opinions (if they agree with the outcome reached in the judgment of the court but differ in their reasoning) or dissenting opinions (if they disagree with the majority). No appeal is possible, but any party may ask for the court to clarify if there is a dispute as to the meaning or scope of the court's judgment. The International Court has been criticized with respect to its rulings, its procedures, and its authority. As with criticisms of the United Nations, many of these criticisms refer more to the general authority assigned to the body by member states through its charter than to specific problems with the composition of judges or their rulings. Major criticisms include the following:
https://en.wikipedia.org/wiki?curid=14918
International Standard Book Number The International Standard Book Number (ISBN) is a numeric commercial book identifier which is intended to be unique. Publishers purchase ISBNs from an affiliate of the International ISBN Agency. An ISBN is assigned to each separate edition and variation (except reprintings) of a publication. For example, an e-book, a paperback and a hardcover edition of the same book will each have a different ISBN. The ISBN is ten digits long if assigned before 2007, and thirteen digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-specific and varies between countries, often depending on how large the publishing industry is within a country. The initial ISBN identification format was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization (ISO) and was published in 1970 as international standard ISO 2108 (the 9-digit SBN code can be converted to a 10-digit ISBN by prefixing it with a zero digit '0'). Privately published books sometimes appear without an ISBN. The International ISBN Agency sometimes assigns such books ISBNs on its own initiative. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines and newspapers. The International Standard Music Number (ISMN) covers musical scores. The Standard Book Number (SBN) is a commercial system using nine-digit code numbers to identify books. It was created by Gordon Foster, Emeritus Professor of Statistics at Trinity College, Dublin, for the booksellers and stationers WHSmith and others in 1965. The ISBN identification format was conceived in 1967 in the United Kingdom by David Whitaker (regarded as the "Father of the ISBN") and in 1968 in the United States by Emery Koltay (who later became director of the U.S. ISBN agency R. R. Bowker). The 10-digit ISBN format was developed by the International Organization for Standardization (ISO) and was published in 1970 as international standard ISO 2108. The United Kingdom continued to use the nine-digit SBN code until 1974. ISO has appointed the International ISBN Agency as the registration authority for ISBN worldwide and the ISBN Standard is developed under the control of ISO Technical Committee 46/Subcommittee 9 TC 46/SC 9. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit "0". For example, the second edition of "Mr. J. G. Reeder Returns", published by Hodder in 1965, has , where "340" indicates the publisher, "01381" is the serial number assigned by the publisher, and "8" is the check digit. By prefixing a zero, this can be converted to ; the check digit does not need to be re-calculated. Some publishers, such as Ballantine Books, would sometimes use 12-digit SBNs where the last three digits indicated the price of the book; for example, "Woodstock Handmade Houses" had a 12-digit Standard Book Number of 345-24223-8-595 (valid SBN: 345-24223-8, : 0-345-24223-8), and it cost . Since 1 January 2007, ISBNs have contained thirteen digits, a format that is compatible with "Bookland" European Article Numbers, which have 13 digits. A separate ISBN is assigned to each edition and variation (except reprintings) of a publication. For example, an ebook, audiobook, paperback, and hardcover edition of the same book will each have a different ISBN assigned to it. 
The ISBN is thirteen digits long if assigned on or after 1 January 2007, and ten digits long if assigned before 2007. An International Standard Book Number consists of four parts (if it is a 10-digit ISBN) or five parts (for a 13-digit ISBN). Section 5 of the International ISBN Agency's official user manual describes the structure of the 13-digit ISBN, as follows: A 13-digit ISBN can be separated into its parts ("prefix element", "registration group", "registrant", "publication" and "check digit"), and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts ("registration group", "registrant", "publication" and "check digit") of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN is most often used alongside other special identifiers to describe references in Wikipedia, and can help to find the same sources with different descriptions in various language versions (for example different spellings of the title or authors depending on the language). ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory regardless of the publication language. The ranges of ISBNs assigned to any particular country are based on the publishing profile of the country concerned, and so the ranges will vary depending on the number of books and the number, type, and size of publishers that are active. Some ISBN registration agencies are based in national libraries or within ministries of culture and thus may receive direct funding from government to support their services. In other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. A full directory of ISBN agencies is available on the International ISBN Agency website. Partial listing: The ISBN registration group identifier is a 1- to 5-digit number that is valid within a single prefix element (i.e. one of 978 or 979), and can be separated between hyphens, such as . Registration group identifiers have primarily been allocated within the 978 prefix element. The single-digit group identifiers within the 978-prefix element are: 0 or 1 for English-speaking countries; 2 for French-speaking countries; 3 for German-speaking countries; 4 for Japan; 5 for Russian-speaking countries; and 7 for People's Republic of China. An example 5-digit group identifier is 99936, for Bhutan. The allocated group IDs are: 0–5, 600–625, 65, 7, 80–94, 950–989, 9917–9989, and 99901–99983. Books published in rare languages typically have longer group identifiers. Within the 979 prefix element, the registration group identifier 0 is reserved for compatibility with International Standard Music Numbers (ISMNs), but such material is not actually assigned an ISBN. The registration group identifiers within prefix element 979 that have been assigned are 8 for the United States of America, 10 for France, 11 for the Republic of Korea, and 12 for Italy. The original 9-digit standard book number (SBN) had no registration group identifier, but prefixing a zero (0) to a 9-digit SBN creates a valid 10-digit ISBN. The national ISBN agency assigns the registrant element (cf. ) and an accompanying series of ISBNs within that registrant element to the publisher; the publisher then allocates one of the ISBNs to each of its books. 
In most countries, a book publisher is not legally required to assign an ISBN, although most large bookstores only handle publications that have ISBNs assigned to them. A listing of more than 900,000 assigned publisher codes is published, and can be ordered in book form. The web site of the ISBN agency does not offer any free method of looking up publisher codes. Partial lists have been compiled (from library catalogs) for the English-language groups: identifier 0 and identifier 1. Publishers receive blocks of ISBNs, with larger blocks allotted to publishers expecting to need them; a small publisher may receive ISBNs of one or more digits for the registration group identifier, several digits for the registrant, and a single digit for the publication element. Once that block of ISBNs is used, the publisher may receive another block of ISBNs, with a different registrant element. Consequently, a publisher may have different allotted registrant elements. There also may be more than one registration group identifier used in a country. This might occur once all the registrant elements from a particular registration group have been allocated to publishers. By using variable block lengths, registration agencies are able to customise the allocations of ISBNs that they make to publishers. For example, a large publisher may be given a block of ISBNs where fewer digits are allocated for the registrant element and many digits are allocated for the publication element; likewise, countries publishing many titles have few allocated digits for the registration group identifier and many for the registrant and publication elements. Here are some sample ISBN-10 codes, illustrating block length variations. English-language registration group elements are 0 and 1 (2 of more than 220 registration group elements). These two registration group elements are divided into registrant elements in a systematic pattern, which allows their length to be determined, as follows: A check digit is a form of redundancy check used for error detection, the decimal equivalent of a binary check bit. It consists of a single digit computed from the other digits in the number. The method for the 10-digit ISBN is an extension of that for SBNs, so the two systems are compatible; an SBN prefixed with a zero (the 10-digit ISBN) will give the same check digit as the SBN without the zero. The check digit is base eleven, and can be an integer between 0 and 9, or an 'X'. The system for 13-digit ISBNs is not compatible with SBNs and will, in general, give a different check digit from the corresponding 10-digit ISBN, so does not provide the same protection against transposition. This is because the 13-digit code was required to be compatible with the EAN format, and hence could not contain an 'X'. According to the 2001 edition of the International ISBN Agency's official user manual, the ISBN-10 check digit (which is the last digit of the 10-digit ISBN) must range from 0 to 10 (the symbol 'X' is used for 10), and must be such that the sum of the ten digits, each multiplied by its (integer) weight, descending from 10 to 1, is a multiple of 11. That is, if is the th digit, then must be chosen such that: For example, for an ISBN-10 of 0-306-40615-2: Formally, using modular arithmetic, this is rendered: It is also true for ISBN-10s that the sum of all ten digits, each multiplied by its weight in "ascending" order from 1 to 10, is a multiple of 11. For this example: Formally, this is rendered: The two most common errors in handling an ISBN (e.g. 
when typing it or writing it down) are a single altered digit or the transposition of adjacent digits. It can be proven mathematically that all pairs of valid ISBN-10s differ in at least two digits. It can also be proven that there are no pairs of valid ISBN-10s with eight identical digits and two transposed digits. (These proofs are true because the ISBN is less than eleven digits long and because 11 is a prime number.) The ISBN check digit method therefore ensures that it will always be possible to detect these two most common types of error, i.e., if either of these types of error has occurred, the result will never be a valid ISBN – the sum of the digits multiplied by their weights will never be a multiple of 11. However, if the error were to occur in the publishing house and remain undetected, the book would be issued with an invalid ISBN. In contrast, it is possible for other types of error, such as two altered non-transposed digits, or three altered digits, to result in a valid ISBN (although it is still unlikely). Each of the first nine digits of the 10-digit ISBN—excluding the check digit itself—is multiplied by its (integer) weight, descending from 10 to 2, and the sum of these nine products found. The value of the check digit is simply the one number between 0 and 10 which, when added to this sum, means the total is a multiple of 11. For example, the check digit for an ISBN-10 of 0-306-40615-"?" is calculated as follows: Adding 2 to 130 gives a multiple of 11 (because 132 = 12×11) – this is the only number between 0 and 10 which does so. Therefore, the check digit has to be 2, and the complete sequence is ISBN 0-306-40615-2. If the value of formula_1 required to satisfy this condition is 10, then an 'X' should be used. Alternatively, modular arithmetic is convenient for calculating the check digit using modulus 11. The remainder of this sum when it is divided by 11 (i.e. its value modulo 11), is computed. This remainder plus the check digit must equal either 0 or 11. Therefore, the check digit is (11 minus the remainder of the sum of the products modulo 11) modulo 11. Taking the remainder modulo 11 a second time accounts for the possibility that the first remainder is 0. Without the second modulo operation, the calculation could result in a check digit value of = 11, which is invalid. (Strictly speaking, the "first" "modulo 11" is not needed, but it may be considered to simplify the calculation.) For example, the check digit for the ISBN-10 of 0-306-40615-"?" is calculated as follows: Thus the check digit is 2. It is possible to avoid the multiplications in a software implementation by using two accumulators. Repeatedly adding codice_1 into codice_2 computes the necessary multiples: // Returns ISBN error syndrome, zero for a valid ISBN, non-zero for an invalid one. // digits[i] must be between 0 and 10. int CheckISBN(int const digits[10]) The modular reduction can be done once at the end, as shown above (in which case codice_2 could hold a value as large as 496, for the invalid ISBN 99999-999-9-X), or codice_2 and codice_1 could be reduced by a conditional subtract after each addition. Appendix 1 of the International ISBN Agency's official user manual describes how the 13-digit ISBN check digit is calculated. The ISBN-13 check digit, which is the last digit of the ISBN, must range from 0 to 9 and must be such that the sum of all the thirteen digits, each multiplied by its (integer) weight, alternating between 1 and 3, is a multiple of 10. 
Formally, using modular arithmetic, this is rendered: The calculation of an ISBN-13 check digit begins with the first twelve digits of the 13-digit ISBN (thus excluding the check digit itself). Each digit, from left to right, is alternately multiplied by 1 or 3, then those products are summed modulo 10 to give a value ranging from 0 to 9. Subtracted from 10, that leaves a result from 1 to 10. A zero (0) replaces a ten (10), so, in all cases, a single check digit results. For example, the ISBN-13 check digit of 978-0-306-40615-"?" is calculated as follows: Thus, the check digit is 7, and the complete sequence is ISBN 978-0-306-40615-7. In general, the ISBN-13 check digit is calculated as follows. Let Then This check system – similar to the UPC check digit formula – does not catch all errors of adjacent digit transposition. Specifically, if the difference between two adjacent digits is 5, the check digit will not catch their transposition. For instance, the above example allows this situation with the 6 followed by a 1. The correct order contributes 3×6+1×1 = 19 to the sum; while, if the digits are transposed (1 followed by a 6), the contribution of those two digits will be 3×1+1×6 = 9. However, 19 and 9 are congruent modulo 10, and so produce the same, final result: both ISBNs will have a check digit of 7. The ISBN-10 formula uses the prime modulus 11 which avoids this blind spot, but requires more than the digits 0–9 to express the check digit. Additionally, if the sum of the 2nd, 4th, 6th, 8th, 10th, and 12th digits is tripled then added to the remaining digits (1st, 3rd, 5th, 7th, 9th, 11th, and 13th), the total will always be divisible by 10 (i.e., end in 0). An ISBN-10 is converted to ISBN-13 by prepending "978" to the ISBN-10 and recalculating the final checksum digit using the ISBN-13 algorithm. The reverse process can also be performed, but not for numbers commencing with a prefix other than 978, which have no 10-digit equivalent. Publishers and libraries have varied policies about the use of the ISBN check digit. Publishers sometimes fail to check the correspondence of a book title and its ISBN before publishing it; that failure causes book identification problems for libraries, booksellers, and readers. For example, is shared by two books – "Ninja gaiden®: a novel based on the best-selling game by Tecmo" (1990) and "Wacky laws" (1997), both published by Scholastic. Most libraries and booksellers display the book record for an invalid ISBN issued by the publisher. The Library of Congress catalogue contains books published with invalid ISBNs, which it usually tags with the phrase "Cancelled ISBN". However, book-ordering systems such as Amazon.com will not search for a book if an invalid ISBN is entered to its search engine. OCLC often indexes by invalid ISBNs, if the book is indexed in that way by a member library. Only the term "ISBN" should be used; the terms "eISBN" and "e-ISBN" have historically been sources of confusion and should be avoided. If a book exists in one or more digital (e-book) formats, each of those formats must have its own ISBN. In other words, each of the three separate EPUB, Amazon Kindle, and PDF formats of a particular book will have its own specific ISBN. They should not share the ISBN of the paper version, and there is no generic "eISBN" which encompasses all the e-book formats for a title. 
Currently the barcodes on a book's back cover (or inside a mass-market paperback book's front cover) are EAN-13; they may have a separate barcode encoding five digits called an EAN-5 for the currency and the recommended retail price. For 10-digit ISBNs, the number "978", the Bookland "country code", is prefixed to the ISBN in the barcode data, and the check digit is recalculated according to the EAN-13 formula (modulo 10, 1x and 3x weighting on alternating digits). Partly because of an expected shortage in certain ISBN categories, the International Organization for Standardization (ISO) decided to migrate to a 13-digit ISBN (ISBN-13). The process began on 1 January 2005 and was planned to conclude on 1 January 2007. , all the 13-digit ISBNs began with 978. As the 978 ISBN supply is exhausted, the 979 prefix was introduced. Part of the 979 prefix is reserved for use with the Musicland code for musical scores with an ISMN. The 10-digit ISMN codes differed visually as they began with an "M" letter; the bar code represents the "M" as a zero (0), and for checksum purposes it counted as a 3. All ISMNs are now thirteen digits commencing ; to will be used by ISBN. Publisher identification code numbers are unlikely to be the same in the 978 and 979 ISBNs, likewise, there is no guarantee that language area code numbers will be the same. Moreover, the 10-digit ISBN check digit generally is not the same as the 13-digit ISBN check digit. Because the GTIN-13 is part of the Global Trade Item Number (GTIN) system (that includes the GTIN-14, the GTIN-12, and the GTIN-8), the 13-digit ISBN falls within the 14-digit data field range. Barcode format compatibility is maintained, because (aside from the group breaks) the ISBN-13 barcode format is identical to the EAN barcode format of existing 10-digit ISBNs. So, migration to an EAN-based system allows booksellers the use of a single numbering system for both books and non-book products that is compatible with existing ISBN based data, with only minimal changes to information technology systems. Hence, many booksellers (e.g., Barnes & Noble) migrated to EAN barcodes as early as March 2005. Although many American and Canadian booksellers were able to read EAN-13 barcodes before 2005, most general retailers could not read them. The upgrading of the UPC barcode system to full EAN-13, in 2005, eased migration to the ISBN-13 in North America.
https://en.wikipedia.org/wiki?curid=14919
IP address An Internet Protocol address (IP address) is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. An IP address serves two main functions: host or network interface identification and location addressing. Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. However, because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP (IPv6), using 128 bits for the IP address, was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s. IP addresses are written and displayed in human-readable notations, such as in IPv4, and in IPv6. The size of the routing prefix of the address is designated in CIDR notation by suffixing the address with the number of significant bits, e.g., , which is equivalent to the historically used subnet mask . The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA), and by five regional Internet registries (RIRs) responsible in their designated territories for assignment to local Internet registries, such as Internet service providers, and other end users. IPv4 addresses were distributed by IANA to the RIRs in blocks of approximately 16.8 million addresses each, but have been exhausted at the IANA level since 2011. Only one of the RIRs still has a supply for local assignments in Africa. Some IPv4 addresses are reserved for private networks and are not globally unique. Network administrators assign an IP address to each device connected to a network. Such assignments may be on a "static" (fixed or permanent) or "dynamic" basis, depending on network practices and software features. An IP address serves two principal functions. It identifies the host, or more specifically its network interface, and it provides the location of the host in the network, and thus the capability of establishing a path to that host. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there." The header of each IP packet contains the IP address of the sending host, and that of the destination host. Two versions of the Internet Protocol are in common use in the Internet today. The original version of the Internet Protocol that was first deployed in 1983 in the ARPANET, the predecessor of the Internet, is Internet Protocol version 4 (IPv4). The rapid exhaustion of IPv4 address space available for assignment to Internet service providers and end user organizations by the early 1990s, prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the addressing capability in the Internet. The result was a redesign of the Internet Protocol which became eventually known as Internet Protocol Version 6 (IPv6) in 1995. IPv6 technology was in various testing stages until the mid-2000s, when commercial production deployment commenced. Today, these two versions of the Internet Protocol are in simultaneous use. Among other technical changes, each version defines the format of addresses differently. Because of the historical prevalence of IPv4, the generic term "IP address" typically still refers to the addresses defined by IPv4. The gap in version sequence between IPv4 and IPv6 resulted from the assignment of version 5 to the experimental Internet Stream Protocol in 1979, which however was never referred to as IPv5. Other versions v1 to v9 were defined, but only v4 and v6 ever gained widespread use. 
v1 and v2 were names for TCP protocols in 1974 and 1977, as there was to separate IP specification at the time. v3 was defined in 1978, and v3.1 is the first version where TCP is separated from IP. v6 is a synthesis of several suggested versions, v6 "Simple Internet Protocol", v7 "TP/IX: The Next Internet", v8 "PIP — The P Internet Protocol", and v9 "TUBA — Tcp & Udp with Big Addresses". IP networks may be divided into subnetworks in both IPv4 and IPv6. For this purpose, an IP address is recognized as consisting of two parts: the "network prefix" in the high-order bits and the remaining bits called the "rest field", "host identifier", or "interface identifier" (IPv6), used for host numbering within a network. The subnet mask or CIDR notation determines how the IP address is divided into network and host parts. The term "subnet mask" is only used within IPv4. Both IP versions however use the CIDR concept and notation. In this, the IP address is followed by a slash and the number (in decimal) of bits used for the network part, also called the "routing prefix". For example, an IPv4 address and its subnet mask may be and , respectively. The CIDR notation for the same IP address and subnet is , because the first 24 bits of the IP address indicate the network and subnet. An IPv4 address has a size of 32 bits, which limits the address space to (232) addresses. Of this number, some addresses are reserved for special purposes such as private networks (~18 million addresses) and multicast addressing (~270 million addresses). IPv4 addresses are usually represented in dot-decimal notation, consisting of four decimal numbers, each ranging from 0 to 255, separated by dots, e.g., . Each part represents a group of 8 bits (an octet) of the address. In some cases of technical writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary representations. In the early stages of development of the Internet Protocol, the network number was always the highest order octet (most significant eight bits). Because this method allowed for only 256 networks, it soon proved inadequate as additional networks developed that were independent of the existing networks already designated by a network number. In 1981, the addressing specification was revised with the introduction of classful network architecture. Classful network design allowed for a larger number of individual network assignments and fine-grained subnetwork design. The first three bits of the most significant octet of an IP address were defined as the "class" of the address. Three classes ("A", "B", and "C") were defined for universal unicast addressing. Depending on the class derived, the network identification was based on octet boundary segments of the entire address. Each class used successively additional octets in the network identifier, thus reducing the possible number of hosts in the higher order classes ("B" and "C"). The following table gives an overview of this now obsolete system. Classful network design served its purpose in the startup stage of the Internet, but it lacked scalability in the face of the rapid expansion of networking in the 1990s. The class system of the address space was replaced with Classless Inter-Domain Routing (CIDR) in 1993. CIDR is based on variable-length subnet masking (VLSM) to allow allocation and routing based on arbitrary-length prefixes. 
Today, remnants of classful network concepts function only in a limited scope as the default configuration parameters of some network software and hardware components (e.g. netmask), and in the technical jargon used in network administrators' discussions. Early network design, when global end-to-end connectivity was envisioned for communications with all Internet hosts, intended that IP addresses be globally unique. However, it was found that this was not always necessary as private networks developed and public address space needed to be conserved. Computers not connected to the Internet, such as factory machines that communicate only with each other via TCP/IP, need not have globally unique IP addresses. Today, such private networks are widely used and typically connect to the Internet with network address translation (NAT), when needed. Three non-overlapping ranges of IPv4 addresses for private networks are reserved. These addresses are not routed on the Internet and thus their use need not be coordinated with an IP address registry. Any user may use any of the reserved blocks. Typically, a network administrator will divide a block into subnets; for example, many home routers automatically use a default address range of through (). In IPv6, the address size was increased from 32 bits in IPv4 to 128 bits, thus providing up to 2128 (approximately ) addresses. This is deemed sufficient for the foreseeable future. The intent of the new design was not to provide just a sufficient quantity of addresses, but also redesign routing in the Internet by allowing more efficient aggregation of subnetwork routing prefixes. This resulted in slower growth of routing tables in routers. The smallest possible individual allocation is a subnet for 264 hosts, which is the square of the size of the entire IPv4 Internet. At these levels, actual address utilization ratios will be small on any IPv6 network segment. The new design also provides the opportunity to separate the addressing infrastructure of a network segment, i.e. the local administration of the segment's available space, from the addressing prefix used to route traffic to and from external networks. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global connectivity or the routing policy change, without requiring internal redesign or manual renumbering. The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate, to be aggregated for efficient routing. With a large address space, there is no need to have complex address conservation methods as used in CIDR. All modern desktop and enterprise server operating systems include native support for the IPv6 protocol, but it is not yet widely deployed in other devices, such as residential networking routers, voice over IP (VoIP) and multimedia equipment, and some networking hardware. Just as IPv4 reserves addresses for private networks, blocks of addresses are set aside in IPv6. In IPv6, these are referred to as unique local addresses (ULAs). The routing prefix is reserved for this block, which is divided into two blocks with different implied policies. The addresses include a 40-bit pseudorandom number that minimizes the risk of address collisions if sites merge or packets are misrouted. Early practices used a different block for this purpose (), dubbed site-local addresses. 
However, the definition of what constituted a "site" remained unclear and the poorly defined addressing policy created ambiguities for routing. This address type was abandoned and must not be used in new systems. Addresses starting with , called link-local addresses, are assigned to interfaces for communication on the attached link. The addresses are automatically generated by the operating system for each network interface. This provides instant and automatic communication between all IPv6 host on a link. This feature is used in the lower layers of IPv6 network administration, such as for the Neighbor Discovery Protocol. Private and link-local address prefixes may not be routed on the public Internet. IP addresses are assigned to a host either dynamically as they join the network, or persistently by configuration of the host hardware or software. Persistent configuration is also known as using a static IP address. In contrast, when a computer's IP address is assigned each time it restarts, this is known as using a dynamic IP address. Dynamic IP addresses are assigned by network using Dynamic Host Configuration Protocol (DHCP). DHCP is the most frequently used technology for assigning addresses. It avoids the administrative burden of assigning specific static addresses to each device on a network. It also allows devices to share the limited address space on a network if only some of them are online at a particular time. Typically, dynamic IP configuration is enabled by default in modern desktop operating systems. The address assigned with DHCP is associated with a "lease" and usually has an expiration period. If the lease is not renewed by the host before expiry, the address may be assigned to another device. Some DHCP implementations attempt to reassign the same IP address to a host, based on its MAC address, each time it joins the network. A network administrator may configure DHCP by allocating specific IP addresses based on MAC address. DHCP is not the only technology used to assign IP addresses dynamically. Bootstrap Protocol is a similar protocol and predecessor to DHCP. Dialup and some broadband networks use dynamic address features of the Point-to-Point Protocol. Computers and equipment used for the network infrastructure, such as routers and mail servers, are typically configured with static addressing. In the absence or failure of static or dynamic address configurations, an operating system may assign a link-local address to a host using stateless address autoconfiguration. A "sticky dynamic IP address" is an informal term used by cable and DSL Internet access subscribers to describe a dynamically assigned IP address which seldom changes. The addresses are usually assigned with DHCP. Since the modems are usually powered on for extended periods of time, the address leases are usually set to long periods and simply renewed. If a modem is turned off and powered up again before the next expiration of the address lease, it often receives the same IP address. Address block is defined for the special use in link-local addressing for IPv4 networks. In IPv6, every interface, whether using static or dynamic address assignments, also receives a link-local address automatically in the block . These addresses are only valid on the link, such as a local network segment or point-to-point connection, to which a host is connected. These addresses are not routable and, like private addresses, cannot be the source or destination of packets traversing the Internet. 
When the link-local IPv4 address block was reserved, no standards existed for mechanisms of address autoconfiguration. Filling the void, Microsoft developed a protocol called Automatic Private IP Addressing (APIPA), whose first public implementation appeared in Windows 98. APIPA has been deployed on millions of machines and became a de facto standard in the industry. In May 2005, the IETF defined a formal standard for it. An IP address conflict occurs when two devices on the same local physical or wireless network claim to have the same IP address. A second assignment of an address generally stops the IP functionality of one or both of the devices. Many modern operating systems notify the administrator of IP address conflicts. When IP addresses are assigned by multiple people and systems with differing methods, any of them may be at fault. If one of the devices involved in the conflict is the default gateway access beyond the LAN for all devices on the LAN may be impaired. IP addresses are classified into several classes of operational characteristics: unicast, multicast, anycast and broadcast addressing. The most common concept of an IP address is in unicast addressing, available in both IPv4 and IPv6. It normally refers to a single sender or a single receiver, and can be used for both sending and receiving. Usually, a unicast address is associated with a single device or host, but a device or host may have more than one unicast address. Sending the same data to multiple unicast addresses requires the sender to send all the data many times over, once for each recipient. Broadcasting is an addressing technique available in IPv4 to address data to all possible destinations on a network in one transmission operation as an "all-hosts broadcast". All receivers capture the network packet. The address is used for network broadcast. In addition, a more limited directed broadcast uses the all-ones host address with the network prefix. For example, the destination address used for directed broadcast to devices on the network is . IPv6 does not implement broadcast addressing, and replaces it with multicast to the specially defined all-nodes multicast address. A multicast address is associated with a group of interested receivers. In IPv4, addresses through (the former Class D addresses) are designated as multicast addresses. IPv6 uses the address block with the prefix for multicast. In either case, the sender sends a single datagram from its unicast address to the multicast group address and the intermediary routers take care of making copies and sending them to all interested receivers (those that have joined the corresponding multicast group). Like broadcast and multicast, anycast is a one-to-many routing topology. However, the data stream is not transmitted to all receivers, just the one which the router decides is closest in the network. Anycast addressing is an built-in feature of IPv6. In IPv4, anycast addressing is implemented with Border Gateway Protocol using the shortest-path metric to choose destinations. Anycast methods are useful for global load balancing and are commonly used in distributed DNS systems. A host may use geolocation software to deduce the location of its communicating peer. A public IP address, in common parlance, is a globally routable unicast IP address, meaning that the address is not an address reserved for use in private networks, such as those reserved by , or the various IPv6 address formats of local scope or site-local scope, for example for link-local addressing. 
Public IP addresses may be used for communication between hosts on the global Internet. For security and privacy considerations, network administrators often desire to restrict public Internet traffic within their private networks. The source and destination IP addresses contained in the headers of each IP packet are a convenient means to discriminate traffic by IP address blocking or by selectively tailoring responses to external requests to internal servers. This is achieved with firewall software running on the network's gateway router. A database of IP addresses of restricted and permissible traffic may be maintained in blacklists and whitelists respectively. Multiple client devices can appear to share an IP address, either because they are part of a shared web hosting service environment or because an IPv4 network address translator (NAT) or proxy server acts as an intermediary agent on behalf of the client, in which case the real originating IP address is masked from the server receiving a request. A common practice is to have a NAT mask many devices in a private network. Only the public interface(s) of the NAT needs to have an Internet-routable address. The NAT device maps different IP addresses on the private network to different TCP or UDP port numbers on the public network. In residential networks, NAT functions are usually implemented in a residential gateway. In this scenario, the computers connected to the router have private IP addresses and the router has a public address on its external interface to communicate on the Internet. The internal computers appear to share one public IP address. Computer operating systems provide various diagnostic tools to examine network interfaces and address configuration. Microsoft Windows provides the command-line interface tools ipconfig and netsh and users of Unix-like systems may use ifconfig, netstat, route, lanstat, fstat, and iproute2 utilities to accomplish the task.
https://en.wikipedia.org/wiki?curid=14921
If and only if In logic and related fields such as mathematics and philosophy, if and only if (shortened as iff) is a biconditional logical connective between statements, where either both statements are true or both are false. The connective is biconditional (a statement of material equivalence), and can be likened to the standard material conditional ("only if", equal to "if ... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false), though it is controversial whether the connective thus defined is properly rendered by the English "if and only if" — with its pre-existing meaning. For example, "P if and only if Q" means that the only case in which "P" is true is if "Q" is also true, whereas in the case of "P if Q" there could be other scenarios where "P" is true when "Q" is false. In writing, phrases commonly used as alternatives to P "if and only if" Q include: "Q is necessary and sufficient for P", "P is equivalent (or materially equivalent) to Q" (compare material implication), "P precisely if Q", "P precisely (or exactly) when Q", "P exactly in case Q", and "P just in case Q". Some authors regard "iff" as unsuitable in formal writing; others consider it a "borderline case" and tolerate its use. In logical formulae, logical symbols are used instead of these phrases; see the discussion of notation. The truth table of "P" formula_1 "Q" is as follows: It is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate. The corresponding logical symbols are "↔", "formula_1", and "≡", and sometimes "iff". These are usually treated as equivalent. However, some texts of mathematical logic (particularly those on first-order logic, rather than propositional logic) make a distinction between these, in which the first, ↔, is used as a symbol in logic formulas, while ⇔ is used in reasoning about those logic formulas (e.g., in metalogic). In Łukasiewicz's Polish notation, it is the prefix symbol 'E'. Another term for this logical connective is exclusive nor. In TeX "if and only if" is shown as a long double arrow: formula_3 via command \iff. In most logical systems, one proves a statement of the form "P iff Q" by proving either "if P, then Q" and "if Q, then P", or "if P, then Q" and "if not-P, then not-Q". Proving these pair of statements sometimes leads to a more natural proof, since there are not obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts—that is, because "iff" is truth-functional, "P iff Q" follows if P and Q have been shown to be both true, or both false. Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book "General Topology". Its invention is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor." It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". However, in the preface of "General Topology", Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff'". 
The authors of one discrete mathematics textbook suggest: "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that "iff" could be pronounced as . Technically, definitions are always "if and only if" statements; some texts — such as Kelley's "General Topology" — follow the strict demands of logic, and use "if and only if" or "iff" in definitions of new terms. However, this logically correct usage of "if and only if" is relatively uncommon, as the majority of textbooks, research papers and articles (including English Wikipedia articles) follow the special convention to interpret "if" as "if and only if" whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover"). Sufficiency is the converse of necessity. That is to say, given "P"→"Q" (i.e. if "P" then "Q"), "P" would be a sufficient condition for "Q", and "Q" would be a necessary condition for "P". Also, given "P"→"Q", it is true that "¬Q"→"¬P" (where ¬ is the negation operator, i.e. "not"). This means that the relationship between "P" and "Q", established by "P"→"Q", can be expressed in the following, all equivalent, ways: As an example, take (1), above, which states "P"→"Q", where "P" is "the fruit in question is an apple" and "Q" is "Madison will eat the fruit in question". The following are four equivalent ways of expressing this very relationship: We see that (2), above, can be restated in the form of "if...then" as "If Madison will eat the fruit in question, then it is an apple"; taking this in conjunction with (1), we find that (3) can be stated as "If the fruit in question is an apple, then Madison will eat it; "and" if Madison will eat the fruit, then it is an apple". Euler diagrams show logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and "P→Q" all mean that P is a subset, either proper or improper, of Q. "P if Q", "if Q then P", and Q→P all mean that Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q are identical to each other. Iff is used outside the field of logic as well. Wherever logic is applied, especially in mathematical discussions, it has the same meaning as above: it is an abbreviation for "if and only if", indicating that one statement is both necessary and sufficient for the other. This is an example of mathematical jargon (although, as noted above, "if" is more often used than "iff" in statements of definition). The elements of "X" are "all and only" the elements of "Y" is used to mean: "for any "z" in the domain of discourse, "z" is in "X" if and only if "z" is in "Y"."
https://en.wikipedia.org/wiki?curid=14922
Isaac Ambrose Isaac Ambrose (1604 – 20 January 1664) was an English Puritan divine. He graduated with a BA. from Brasenose College, Oxford, on 1624. He obtained the curacy of St Edmund’s Church, Castleton, Derbyshire, in 1627. He was one of king's four preachers in Lancashire in 1631. He was twice imprisoned by commissioners of array. He worked for establishment of Presbyterianism; successively at Leeds, Preston, and Garstang, whence he was ejected for nonconformity in 1662. He also published religious works. Ambrose was born in 1604. He was the son of Richard Ambrose, vicar of Ormskirk, and was probably descended from the Ambroses of Lowick in Furness, a well-known Roman Catholic family. He entered Brasenose College, Oxford, in 1621, in his seventeenth year. Having graduated B.A. in 1624 and been ordained, Ambroses received in 1627 the little cure of Castleton in Derbyshire. By the influence of William Russell, earl of Bedford, he was appointed one of the king's itinerant preachers in Lancashire, and after living for a time in Garstang, he was selected by the Lady Margaret Hoghton as vicar of Preston. He associated himself with Presbyterianism, and was on the celebrated committee for the ejection of "scandalous and ignorant ministers and schoolmasters" during the Commonwealth. So long as Ambrose continued at Preston he was favoured with the warm friendship of the Hoghton family, their ancestral woods and the tower near Blackburn affording him sequestered places for those devout meditations and "experiences" that give such a charm to his diary, portions of which are quoted in his "Prima Media" and "Ultima" (1650, 1659). The immense auditory of his sermon ("Redeeming the Time") at the funeral of Lady Hoghton was long a living tradition all over the county. On account of the feeling engendered by the civil war Ambrose left his great church of Preston in 1654, and became minister of Garstang, whence, however, in 1662 he was ejected along two thousand ministers who refused to conform (see Great Ejection). His after years were passed among old friends and in quiet meditation at Preston. He died of apoplexy about 20 January 1664. As a religious writer Ambrose has a vividness and freshness of imagination possessed by scarcely any of the Puritan Nonconformists. Many who have no love for Puritan doctrine, nor sympathy with Puritan experience, have appreciated the pathos and beauty of his writings, and his "Looking unto Jesus" long held its own in popular appreciation with the writings of John Bunyan. Dr Edmund Calamy the Elder (1600–1666) wrote about him: In the opinion of John Eglington Bailey (his biographer in the DNB), his character has been misrepresented by Wood. He was of a peaceful disposition; and though he put his name to the fierce "Harmonious Consent", he was not naturally a partisan. He evaded the political controversies of the time. His gentleness of character and earnest presentation of the gospel attached him to his people. He was much given to secluding himself, retiring every May into the woods of Hoghton Tower and remaining there a month. Bailey continues that Dr. Halley justly characterises him as the most meditative puritan of Lancashire. This quality pervades his writings, which abound, besides, in deep feeling and earnest piety. Mr. 
Hunter has called attention to his recommendation of diaries as a means of advancing personal piety, and has remarked, in reference to the fragments from Ambrose's diary quoted in the "Media", that "with such passages before us we cannot but lament that the carelessness of later times should have suffered such a curious and valuable document to perish; for perished it is to be feared it has".
https://en.wikipedia.org/wiki?curid=14928
International Convention for the Regulation of Whaling The International Convention for the Regulation of Whaling is an international environmental agreement signed in 1946 in order to "provide for the proper conservation of whale stocks and thus make possible the orderly development of the whaling industry". It governs the commercial, scientific, and aboriginal subsistence whaling practices of eighty-nine member nations. It was signed by 15 nations in Washington, D.C. on 2 December 1946 and took effect on 10 November 1948. Its protocol (which represented the first substantial revision of the convention and extended the definition of a "whale-catcher" to include helicopters as well as ships) was signed in Washington on 19 November 1956. The convention is a successor to the International Agreement for the Regulation of Whaling, signed in London on 8 June 1937, and the protocols for that agreement signed in London on 24 June 1938, and 26 November 1945. The objectives of the agreement are the protection of all whale species from overhunting, the establishment of a system of international regulation for the whale fisheries to ensure proper conservation and development of whale stocks, and safeguarding for future generations the great natural resources represented by whale stocks. The primary instrument for the realization of these aims is the International Whaling Commission which was established pursuant to this convention. The commission has made many revisions to the schedule that makes up the bulk of the convention. The Commission process has also reserved for governments the right to carry out scientific research which involves killing of whales. There have been consistent disagreement over the scope of the convention. According to the IWC: The 1946 Convention does not define a 'whale', although a list of names in a number of languages was annexed to the Final Act of the Convention. Some Governments take the view that the IWC has the legal competence to regulate catches of only these named great whales (the baleen whales and the sperm whale). Others believe that all cetaceans, including the smaller dolphins and porpoises, also fall within IWC jurisdiction. As of January 2014, membership consists of 89 states of the world. The initial signatory states were Argentina, Australia, Brazil, Canada, Chile, Denmark, France, the Netherlands, New Zealand, Norway, Peru, South Africa, the Soviet Union, the United Kingdom and the United States. Although Norway is a party to the convention, it maintains an objection to paragraph 10(e) of the convention (the section referring to the 1986 moratorium). Therefore, that provision is not binding upon Norway and the 1986 IWC global moratorium does not apply to it. As of January 2014, eight states that were formerly parties to the convention have withdrawn by denouncing it. These states are Canada (which withdrew on 30 June 1982), Egypt, Greece, Jamaica, Mauritius, Philippines, the Seychelles and Venezuela. Belize, Brazil, Dominica, Ecuador, Iceland, Japan, New Zealand, and Panama have all withdrawn from the convention for a period of time after ratification but subsequently have ratified it a second time. The Netherlands, Norway, and Sweden have all withdrawn from the convention twice, only to have accepted it a third time. Japan withdrew from the convention in January 2019.
https://en.wikipedia.org/wiki?curid=14933
International Organization for Standardization The International Organization for Standardization (ISO; ) is an international standard-setting body composed of representatives from various national standards organizations. In contrast to many international organizations, which utilize the British English form of spelling, the ISO uses English with Oxford spelling as one of its official languages along with French and Russian. Founded on 23 February 1947, the organization promotes worldwide proprietary, industrial, and commercial standards. It is headquartered in Geneva, Switzerland, and works in 164 countries. It was one of the first organizations granted general consultative status with the United Nations Economic and Social Council. The International Organization for Standardization is an independent, non-governmental organization, the members of which are the standards organizations of the 164 member countries. It is the world's largest developer of voluntary international standards and it facilitates world trade by providing common standards among nations. More than twenty thousand standards have been set, covering everything from manufactured products and technology to food safety, agriculture, and healthcare. Use of the standards aids in the creation of products and services that are safe, reliable, and of good quality. The standards help businesses increase productivity while minimizing errors and waste. By enabling products from different markets to be directly compared, they facilitate companies in entering new markets and assist in the development of global trade on a fair basis. The standards also serve to safeguard consumers and the end-users of products and services, ensuring that certified products conform to the minimum standards set internationally. The organization today known as ISO, began in the 1920s as the International Federation of the National Standardizing Associations (ISA). It was suspended in 1942 during World War II, but after the war ISA was approached by the recently-formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the new International Organization for Standardization. The new organization officially began operations in February 1947. The three official languages of the ISO are English (with Oxford spelling), French, and Russian. The name of the organization in French is ', and in Russian, ('). "ISO" is not an acronym. The organization adopted "ISO" as its abbreviated name in reference to the Greek word "" (, meaning "equal"), as its name in the three official languages would have had different acronyms. During the founding meetings of the new organization, the Greek word explanation was not invoked, so this meaning may have been made public later, making it a backronym. ISO gives this explanation of the name: "Because 'International Organization for Standardization' would have different acronyms in different languages (IOS in English, OIN in French), our founders decided to give it the short form "ISO". "ISO" is derived from the Greek "", meaning equal. Whatever the country, whatever the language, the short form of our name is always "ISO"." Both the name "ISO" and the ISO logo are registered trademarks and their use is restricted. ISO is a voluntary organization whose members are recognized authorities on standards, each one representing one country. 
Members meet annually at a General Assembly to discuss the strategic objectives of ISO. The organization is coordinated by a central secretariat based in Geneva. A council with a rotating membership of 20 member bodies provides guidance and governance, including setting the annual budget of the central secretariat. The technical management board is responsible for more than 250 technical committees, which develop the ISO standards. ISO has formed two joint committees with the International Electrotechnical Commission (IEC) to develop standards and terminology in the areas of electrical and electronic related technologies. ISO/IEC Joint Technical Committee 1 (JTC 1) was created in 1987 to "[d]evelop, maintain, promote and facilitate IT standards", where IT refers to information technology. ISO/IEC Joint Technical Committee 2 (JTC 2) was created in 2009 for the purpose of "[s]tandardization in the field of energy efficiency and renewable energy sources". ISO has 164 national members. ISO has three membership categories; participating members are called "P" members, as opposed to observing members, who are called "O" members. ISO is funded by a combination of membership subscriptions and revenue from the sale of standards and other publications. International standards are the main products of ISO. It also publishes technical reports, technical specifications, publicly available specifications, technical corrigenda, and guides.
https://en.wikipedia.org/wiki?curid=14934
Individualist anarchism Individualist anarchism is the branch of anarchism that emphasizes the individual and their will over external determinants such as groups, society, traditions and ideological systems. Although usually contrasted with social anarchism, both individualist and social anarchism have influenced each other. Mutualism, an economic theory particularly influential within individualist anarchism, pursues a liberty that has been called the synthesis of communism and property; it has been considered sometimes part of individualist anarchism and at other times part of social anarchism. Many anarcho-communists regard themselves as radical individualists, seeing anarcho-communism as the best social system for the realization of individual freedom. As a term, "individualist anarchism" is not a single philosophy, but refers to a group of individualist philosophies that are sometimes in conflict. Among the early influences on individualist anarchism were William Godwin (philosophical anarchism), Josiah Warren (sovereignty of the individual), Max Stirner (egoism), Lysander Spooner (natural law), Pierre-Joseph Proudhon (mutualism), Henry David Thoreau (transcendentalism), Herbert Spencer (law of equal liberty) and Anselme Bellegarrigue (civil disobedience). From there, individualist anarchism expanded through Europe and the United States, where prominent 19th-century individualist anarchist Benjamin Tucker held that "if the individual has the right to govern himself, all external government is tyranny". Individualist anarchism has been described as the anarchist school most influenced by and tied to liberalism (especially classical liberalism) as well as the liberal wing—contra the collectivist or communist wing—of anarchism and libertarian socialism. Nonetheless, the very idea of an individualist–socialist divide is contested, as individualist anarchism is largely socialistic and can be considered a form of individualist socialism, with non-Lockean individualism encompassing socialism. The term individualist anarchism is often used as a classificatory term, but in very different ways. Some sources such as "An Anarchist FAQ" use the classification social anarchism/individualist anarchism. Some see individualist anarchism as distinctly non-socialist and use the classification socialist anarchism/individualist anarchism accordingly. Other classifications include mutualist/communal anarchism. Michael Freeden identifies four broad types of individualist anarchism. He says the first is the type associated with William Godwin that advocates self-government with a "progressive rationalism that included benevolence to others". The second type is the amoral self-serving rationality of egoism as most associated with Max Stirner. The third type is "found in Herbert Spencer's early predictions, and in that of some of his disciples such as Wordsworth Donisthorpe, foreseeing the redundancy of the state in the course of social evolution". The fourth type retains a moderated form of egoism and accounts for social cooperation through the advocacy of market relationships. Individualist anarchism of different kinds nevertheless shares a few common features. The egoist form of individualist anarchism, derived from the philosophy of Max Stirner, supports the individual doing exactly what he pleases—taking no notice of God, state, or moral rules.
To Stirner, rights were "spooks" in the mind, and he held that society does not exist but "the individuals are its reality"—he supported property by force of might rather than moral right. Stirner advocated self-assertion and foresaw "associations of egoists" drawn together by respect for each other's ruthlessness. Individualist anarchists such as Benjamin Tucker argued that the question was "not of Socialist Anarchism against Individualist Anarchism, but of Communist Socialism against Individualist Socialism". Tucker further noted that "the fact that State Socialism has overshadowed other forms of Socialism gives it no right to a monopoly of the Socialistic idea". For historian Eunice Minette Schuster, American individualist anarchism "stresses the isolation of the individual – his right to his own tools, his mind, his body, and to the products of his labor. To the artist who embraces this philosophy it is "aesthetic" anarchism, to the reformer, ethical anarchism, to the independent mechanic, economic anarchism. The former is concerned with philosophy, the latter with practical demonstration. The economic anarchist is concerned with constructing a society on the basis of anarchism. Economically he sees no harm whatever in the private possession of what the individual produces by his own labor, but only so much and no more. The aesthetic and ethical type found expression in the transcendentalism, humanitarianism, and romanticism of the first part of the nineteenth century, the economic type in the pioneer life of the West during the same period, but more favorably after the Civil War". It is for this reason that it has been suggested that in order to understand individualist anarchism one must take into account "the social context of their ideas, namely the transformation of America from a pre-capitalist to a capitalist society [...] the non-capitalist nature of the early U.S. can be seen from the early dominance of self-employment (artisan and peasant production). At the beginning of the 19th century, around 80% of the working (non-slave) male population were self-employed. The great majority of Americans during this time were farmers working their own land, primarily for their own needs" and "[i]ndividualist anarchism is clearly a form of artisanal socialism [...] while communist anarchism and anarcho-syndicalism are forms of industrial (or proletarian) socialism". Contemporary individualist anarchist Kevin Carson characterizes American individualist anarchism by saying that "[u]nlike the rest of the socialist movement, the individualist anarchists believed that the natural wage of labor in a free market was its product, and that economic exploitation could only take place when capitalists and landlords harnessed the power of the state in their interests. Thus, individualist anarchism was an alternative both to the increasing statism of the mainstream socialist movement, and to a classical liberal movement that was moving toward a mere apologetic for the power of big business". In European individualist anarchism, a different social context helped the rise of European individualist illegalism and as such "[t]he illegalists were proletarians who had nothing to sell but their labour power, and nothing to discard but their dignity; if they disdained waged-work, it was because of its compulsive nature.
If they turned to illegality it was due to the fact that honest toil only benefited the employers and often entailed a complete loss of dignity, while any complaints resulted in the sack; to avoid starvation through lack of work it was necessary to beg or steal, and to avoid conscription into the army many of them had to go on the run". A European tendency of individualist anarchism advocated violent acts of individual reclamation, propaganda by the deed and criticism of organization. Such individualist anarchist tendencies include French illegalism and Italian anti-organizational insurrectionarism. Bookchin reports that at the end of the 19th century and the beginning of the 20th "it was in times of severe social repression and deadening social quiescence that individualist anarchists came to the foreground of libertarian activity – and then primarily as terrorists. In France, Spain, and the United States, individualistic anarchists committed acts of terrorism that gave anarchism its reputation as a violently sinister conspiracy". Another important tendency within individualist anarchist currents emphasizes individual subjective exploration and defiance of social conventions. Individualist anarchist philosophy attracted a following "amongst artists, intellectuals and the well-read, urban middle classes in general". As such, Murray Bookchin describes many individualist anarchists as people who "expressed their opposition in uniquely personal forms, especially in fiery tracts, outrageous behavior and aberrant lifestyles in the cultural ghettos of fin de siècle New York, Paris and London. As a credo, individualist anarchism remained largely a bohemian lifestyle, most conspicuous in its demands for sexual freedom ('free love') and enamored of innovations in art, behavior, and clothing". In this way, free love currents and other radical lifestyles such as naturism had popularity among individualist anarchists. For Catalan historian Xavier Diez, individualist anarchism, "under its iconoclastic, anti-intellectual, anti-theist streak, which goes against all sacralized ideas or values, entailed a philosophy of life which could be considered a reaction against the sacred gods of capitalist society. Against the idea of nation, it opposed its internationalism. Against the exaltation of authority embodied in the military institution, it opposed its antimilitarism. Against the concept of industrial civilization, it opposed its naturist vision". With regard to economic questions, there are diverse positions. There are adherents to mutualism (Proudhon, Émile Armand and the early Tucker), egoistic disrespect for "ghosts" such as private property and markets (Stirner, John Henry Mackay, Lev Chernyi and the later Tucker) and adherents to anarcho-communism (Albert Libertad, illegalism and Renzo Novatore). Anarchist historian George Woodcock finds a tendency in individualist anarchism of a "distrust (of) all co-operation beyond the barest minimum for an ascetic life". On the issue of violence, opinions have ranged from a pro-violence point of view, mainly exemplified by illegalism and insurrectionary anarchism, to one that can be called anarcho-pacifist. The Spanish individualist anarchist Miguel Gimenez Igualada, for instance, went from illegalist practice in his youth towards a pacifist position later in his life.
William Godwin can be considered an individualist anarchist and philosophical anarchist who was influenced by the ideas of the Age of Enlightenment, and developed what many consider the first expression of modern anarchist thought. According to Peter Kropotkin, Godwin was "the first to formulate the political and economical conceptions of anarchism, even though he did not give that name to the ideas developed in his work". Godwin himself attributed the first anarchist writing to Edmund Burke's "A Vindication of Natural Society". Godwin advocated extreme individualism, proposing that all cooperation in labor be eliminated. Godwin was a utilitarian who believed that all individuals are not of equal value, with some of us "of more worth and importance" than others depending on our utility in bringing about social good. Therefore, he did not believe in equal rights; rather, the life most conducive to the general good is the one that should be favored. Godwin opposed government because it infringes on the individual's right to "private judgement" to determine which actions most maximize utility, but he also made a critique of all authority over the individual's judgement. This aspect of Godwin's philosophy, minus the utilitarianism, was developed into a more extreme form later by Stirner. Godwin took individualism to the radical extent of opposing individuals performing together in orchestras, writing in "Political Justice" that "everything understood by the term co-operation is in some sense an evil". The only apparent exception to this opposition to cooperation is the spontaneous association that may arise when a society is threatened by violent force. One reason he opposed cooperation is that he believed it to interfere with an individual's ability to be benevolent for the greater good. Godwin opposed the idea of government, but wrote that a minimal state was a present "necessary evil" that would become increasingly irrelevant and powerless through the gradual spread of knowledge. He expressly opposed democracy, fearing oppression of the individual by the majority, though he believed it to be preferable to dictatorship. Godwin supported individual ownership of property, defining it as "the empire to which every man is entitled over the produce of his own industry". However, he also advocated that individuals give to each other their surplus property on the occasion that others have a need for it, without involving trade (e.g. gift economy). Thus while people have the right to private property, they should give it away as enlightened altruists. This was to be based on utilitarian principles and he said: "Every man has a right to that, the exclusive possession of which being awarded to him, a greater sum of benefit or pleasure will result than could have arisen from its being otherwise appropriated". However, benevolence was not to be enforced, being a matter of free individual "private judgement". He did not advocate a community of goods or assert collective ownership as is embraced in communism, but his belief that individuals ought to share with those in need was influential on the later development of anarcho-communism.
Godwin's political views were diverse and do not perfectly agree with any of the ideologies that claim his influence: writers of the "Socialist Standard", organ of the Socialist Party of Great Britain, consider Godwin both an individualist and a communist; Murray Rothbard did not regard Godwin as being in the individualist camp at all, referring to him as the "founder of communist anarchism"; and historian Albert Weisbord considers him an individualist anarchist without reservation. Some writers see a conflict between Godwin's advocacy of "private judgement" and utilitarianism, as he says that ethics requires that individuals give their surplus property to each other, resulting in an egalitarian society, but at the same time he insists that all things be left to individual choice. As noted by Kropotkin, many of Godwin's views changed over time. William Godwin's thought influenced "the socialism of Robert Owen and Charles Fourier. After success of his British venture, Owen himself established a cooperative community within the United States at New Harmony, Indiana during 1825. One member of this commune was Josiah Warren (1798–1874), considered to be the first individualist anarchist. After New Harmony failed Warren shifted his ideological loyalties from socialism to anarchism (which was no great leap, given that Owen's socialism had been predicated on Godwin's anarchism)". Pierre-Joseph Proudhon (1809–1865) was the first philosopher to label himself an "anarchist". Some consider Proudhon to be an individualist anarchist while others regard him to be a social anarchist. Some commentators do not identify Proudhon as an individualist anarchist due to his preference for association in large industries, rather than individual control. Nevertheless, he was influential among some of the American individualists—in the 1840s and 1850s, Charles A. Dana and William B. Greene introduced Proudhon's works to the United States. Greene adapted Proudhon's mutualism to American conditions and introduced it to Benjamin Tucker. Proudhon opposed government privilege that protects capitalist, banking and land interests and the accumulation or acquisition of property (and any form of coercion that led to it), which he believed hampers competition and keeps wealth in the hands of the few. Proudhon favoured a right of individuals to retain the product of their labour as their own property, but believed that any property beyond that which an individual produced and could possess was illegitimate. Thus he saw private property as both essential to liberty and a road to tyranny, the former when it resulted from labour and was required for labour and the latter when it resulted in exploitation (profit, interest, rent and tax). He generally called the former "possession" and the latter "property". For large-scale industry, he supported workers associations to replace wage labour and opposed the ownership of land. Proudhon maintained that those who labour should retain the entirety of what they produce and that monopolies on credit and land are the forces that prohibit such. He advocated an economic system that included private property as possession and exchange market, but without profit, which he called mutualism.
It is Proudhon's philosophy that was explicitly rejected by Joseph Dejacque in the inception of anarcho-communism, with the latter asserting directly to Proudhon in a letter that "it is not the product of his or her labour that the worker has a right to, but to the satisfaction of his or her needs, whatever may be their nature". An individualist rather than anarcho-communist, Proudhon said that "communism ... is the very denial of society in its foundation" and famously declared that "property is theft!" in reference to his rejection of ownership rights to land being granted to a person who is not using that land. After Dejacque and others split from Proudhon due to the latter's support of individual property and an exchange economy, the relationship between the individualists (who continued in relative alignment with the philosophy of Proudhon) and the anarcho-communists was characterised by various degrees of antagonism and harmony. For example, individualists like Tucker on the one hand translated and reprinted the works of collectivists like Mikhail Bakunin, while on the other hand they rejected the economic aspects of collectivism and communism as incompatible with anarchist ideals. Mutualism is an anarchist school of thought which can be traced to the writings of Pierre-Joseph Proudhon, who envisioned a society where each person might possess a means of production, either individually or collectively, with trade representing equivalent amounts of labor in the free market. Integral to the scheme was the establishment of a mutual-credit bank which would lend to producers at a minimal interest rate only high enough to cover the costs of administration. Mutualism is based on a labor theory of value which holds that when labour or its product is sold, in exchange it ought to receive goods or services embodying "the amount of labor necessary to produce an article of exactly similar and equal utility". Some mutualists believe that if the state did not intervene, as a result of increased competition in the marketplace, individuals would receive no more income than that in proportion to the amount of labor they exert. Mutualists oppose the idea of individuals receiving an income through loans, investments and rent as they believe these individuals are not labouring. Some of them argue that if state intervention ceased, these types of incomes would disappear due to increased competition in capital. Though Proudhon opposed this type of income, he expressed: "I never meant to [...] forbid or suppress, by sovereign decree, ground rent and interest on capital. I believe that all these forms of human activity should remain free and optional for all". Insofar as they ensure the workers' right to the full product of their labor, mutualists support markets and private property in the product of labor. However, they argue for conditional titles to land, whose private ownership is legitimate only so long as it remains in use or occupation (which Proudhon called "possession"). Proudhon's mutualism supports labor-owned cooperative firms and associations, for "we need not hesitate, for we have no choice [...] it is necessary to form an ASSOCIATION among workers ... because without that, they would remain related as subordinates and superiors, and there would ensue two ...
castes of masters and wage-workers, which is repugnant to a free and democratic society" and so "it becomes necessary for the workers to form themselves into democratic societies, with equal conditions for all members, on pain of a relapse into feudalism". As for capital goods (man-made, non-land "means of production"), mutualist opinion differs on whether these should be commonly managed public assets or private property. Following Proudhon, mutualists originally considered themselves to be libertarian socialists. However, "some mutualists have abandoned the labor theory of value, and prefer to avoid the term "socialist." But they still retain some cultural attitudes, for the most part, that set them off from the libertarian right". Mutualists have distinguished themselves from state socialism and do not advocate social control over the means of production. Benjamin Tucker said of Proudhon that "though opposed to socializing the ownership of capital, Proudhon aimed nevertheless to socialize its effects by making its use beneficial to all instead of a means of impoverishing the many to enrich the few ... by subjecting capital to the natural law of competition, thus bringing the price of its own use down to cost". Johann Kaspar Schmidt (October 25, 1806 – June 26, 1856), better known as Max Stirner (the "nom de plume" he adopted from a schoolyard nickname he had acquired as a child because of his high brow, in German 'Stirn'), was a German philosopher who ranks as one of the literary fathers of nihilism, existentialism, post-modernism and anarchism, especially of individualist anarchism. Stirner's main work is "The Ego and Its Own", also known as "The Ego and His Own" ("Der Einzige und sein Eigentum" in German, which translates literally as "The Only One [individual] and his Property"). This work was first published in 1844 in Leipzig and has since appeared in numerous editions and translations. Max Stirner's philosophy, sometimes called egoism, is a form of individualist anarchism. Stirner was a Hegelian philosopher whose "name appears with familiar regularity in historically oriented surveys of anarchist thought as one of the earliest and best-known exponents of individualist anarchism". In 1844, his "The Ego and Its Own" ("Der Einzige und sein Eigentum", which may literally be translated as "The Unique Individual and His Property") was published, which is considered to be "a founding text in the tradition of individualist anarchism". Stirner does not recommend that the individual try to eliminate the state, but simply that they disregard the state when it conflicts with their autonomous choices and go along with it when doing so is conducive to their interests. He says that the egoist rejects the pursuit of devotion to "a great idea, a good cause, a doctrine, a system, a lofty calling," saying that the egoist has no political calling, but rather "lives themselves out" without regard to "how well or ill humanity may fare thereby". Stirner held that the only limitation on the rights of the individual is that individual's power to obtain what he desires. He proposes that most commonly accepted social institutions—including the notion of State, property as a right, natural rights in general, and the very notion of society—were mere spooks in the mind. Stirner wants to "abolish not only the state but also society as an institution responsible for its members".
Stirner advocated self-assertion and foresaw "unions of egoists", non-systematic associations which Stirner proposed as a form of organization in place of the state. A union is understood as a relation between egoists which is continually renewed by all parties' support through an act of will. Even murder is permissible "if it is right for me", though it is claimed by egoist anarchists that egoism will foster genuine and spontaneous union between individuals. For Stirner, property simply comes about through might: "Whoever knows how to take, to defend, the thing, to him belongs property". He further says: "What I have in my power, that is my own. So long as I assert myself as holder, I am the proprietor of the thing" and that "I do not step shyly back from your property, but look upon it always as my property, in which I respect nothing. Pray do the like with what you call my property!". His concept of "egoistic property" implies not only a lack of moral restraint on how one obtains and uses "things", but includes other people as well. His embrace of egoism is in stark contrast to Godwin's altruism. Stirner was opposed to communism, seeing it as a form of authority over the individual. This position on property is much different from the native American, natural law form of individualist anarchism, which defends the inviolability of the private property that has been earned through labor and trade. However, Benjamin Tucker rejected the natural rights philosophy and adopted Stirner's egoism in 1886, with several others joining with him. This split the American individualists into fierce debate, "with the natural rights proponents accusing the egoists of destroying libertarianism itself." Other egoists include James L. Walker, Sidney Parker, Dora Marsden and John Beverley Robinson. In Russia, individualist anarchism inspired by Stirner combined with an appreciation for Friedrich Nietzsche attracted a small following of bohemian artists and intellectuals such as Lev Chernyi, as well as a few lone wolves who found self-expression in crime and violence. They rejected organizing, believing that only unorganized individuals were safe from coercion and domination, and that this kept them true to the ideals of anarchism. This type of individualist anarchism inspired anarcha-feminist Emma Goldman. Though Stirner's philosophy is individualist, it has influenced some libertarian communists and anarcho-communists. "For Ourselves: Council for Generalized Self-Management" discusses Stirner and speaks of a "communist egoism", which is said to be a "synthesis of individualism and collectivism", and says that "greed in its fullest sense is the only possible basis of communist society". Forms of libertarian communism such as Situationism are influenced by Stirner. Anarcho-communist Emma Goldman was influenced by both Stirner and Peter Kropotkin and blended their philosophies together in her own, as shown in books of hers such as "Anarchism And Other Essays". Josiah Warren is widely regarded as the first American anarchist, and the four-page weekly paper he edited during 1833, "The Peaceful Revolutionist", was the first anarchist periodical published, an enterprise for which he built his own printing press, cast his own type and made his own printing plates. Warren was a follower of Robert Owen and joined Owen's community at New Harmony, Indiana. Warren coined the phrase "Cost the limit of price", with "cost" here referring not to monetary price paid but to the labor one exerted to produce an item.
Therefore, "[h]e proposed a system to pay people with certificates indicating how many hours of work they did. They could exchange the notes at local time stores for goods that took the same amount of time to produce". He put his theories to the test by establishing an experimental "labor for labor store" called the Cincinnati Time Store where trade was facilitated by notes backed by a promise to perform labor. The store proved successful and operated for three years after which it was closed so that Warren could pursue establishing colonies based on mutualism. These included Utopia and Modern Times. Warren said that Stephen Pearl Andrews' "The Science of Society" (published in 1852) was the most lucid and complete exposition of Warren's own theories. Catalan historian Xavier Diez report that the intentional communal experiments pioneered by Warren were influential in European individualist anarchists of the late 19th and early 20th centuries such as Émile Armand and the intentional communities started by them. Henry David Thoreau (1817–1862) was an important early influence in individualist anarchist thought in the United States and Europe. Thoreau was an American author, poet, naturalist, tax resister, development critic, surveyor, historian, philosopher and leading transcendentalist. He is best known for his book "Walden", a reflection upon simple living in natural surroundings; and his essay, "Civil Disobedience", an argument for individual resistance to civil government in moral opposition to an unjust state. His thought is an early influence on green anarchism, but with an emphasis on the individual experience of the natural world influencing later naturist currents, simple living as a rejection of a materialist lifestyle and self-sufficiency were Thoreau's goals and the whole project was inspired by transcendentalist philosophy. Many have seen in Thoreau one of the precursors of ecologism and anarcho-primitivism represented today in John Zerzan. For George Woodcock, this attitude can be also motivated by certain idea of resistance to progress and of rejection of the growing materialism which is the nature of American society in the mid 19th century. The essay Civil Disobedience ("Resistance to Civil Government") was first published in 1849. It argues that people should not permit governments to overrule or atrophy their consciences and that people have a duty to avoid allowing such acquiescence to enable the government to make them the agents of injustice. Thoreau was motivated in part by his disgust with slavery and the Mexican–American War. The essay later influenced Mohandas Gandhi, Martin Luther King, Jr., Martin Buber and Leo Tolstoy through its advocacy of nonviolent resistance. It is also the main precedent for anarcho-pacifism. The American version of individualist anarchism has a strong emphasis on the non-aggression principle and individual sovereignty. Some individualist anarchists such as Thoreau do not speak of economics, but simply the right of "disunion" from the state and foresee the gradual elimination of the state through social evolution. An important current within individualist anarchism is free love. Free love advocates sometimes traced their roots back to Josiah Warren and to experimental communities, and viewed sexual freedom as a clear, direct expression of an individual's self-ownership. Free love particularly stressed women's rights since most sexual laws, such as those governing marriage and use of birth control, discriminated against women. 
The most important American free love journal was "Lucifer the Lightbearer" (1883–1907), edited by Moses Harman and Lois Waisbrooker, but there also existed Ezra Heywood and Angela Heywood's "The Word" (1872–1890, 1892–1893). M. E. Lazarus was also an important American individualist anarchist who promoted free love. John William Lloyd, a collaborator of Benjamin Tucker's periodical "Liberty", published in 1931 a sex manual that he called "The Karezza Method or Magnetation: The Art of Connubial Love". In Europe, the main propagandist of free love within individualist anarchism was Émile Armand. He proposed the concept of "la camaraderie amoureuse" to speak of free love as the possibility of voluntary sexual encounter between consenting adults. He was also a consistent proponent of polyamory. In France, there was also feminist activity inside individualist anarchism, as promoted by individualist feminists Marie Küge, Anna Mahé, Rirette Maitrejean and Sophia Zaïkovska. The Brazilian individualist anarchist Maria Lacerda de Moura lectured on topics such as education, women's rights, free love and antimilitarism. Her writings and essays garnered her attention not only in Brazil, but also in Argentina and Uruguay. She also wrote for the Spanish individualist anarchist magazine "Al Margen" alongside Miguel Gimenez Igualada. In Germany, the Stirnerists Adolf Brand and John Henry Mackay were pioneering campaigners for the acceptance of male bisexuality and homosexuality. Freethought as a philosophical position and as activism was important in both North American and European individualist anarchism. In the United States, freethought was basically an anti-Christian, anti-clerical movement whose purpose was to make the individual politically and spiritually free to decide for himself on religious matters. A number of contributors to "Liberty" were prominent figures in both freethought and anarchism. The individualist anarchist George MacDonald was a co-editor of "Freethought" and, for a time, "The Truth Seeker". E.C. Walker was co-editor of "Lucifer, the Light-Bearer". Many of the anarchists were ardent freethinkers; reprints from freethought papers such as "Lucifer, the Light-Bearer", "Freethought" and "The Truth Seeker" appeared in "Liberty". The church was viewed as a common ally of the state and as a repressive force in and of itself. In Europe, a similar development occurred in French and Spanish individualist anarchist circles: "Anticlericalism, just as in the rest of the libertarian movement, is another of the frequent elements which will gain relevance related to the measure in which the (French) Republic begins to have conflicts with the church [...] Anti-clerical discourse, frequently called for by the French individualist André Lorulot, will have its impacts in "Estudios" (a Spanish individualist anarchist publication). There will be an attack on institutionalized religion for the responsibility that it had in the past on negative developments, for its irrationality which makes it a counterpoint of philosophical and scientific progress. There will be a criticism of proselytism and ideological manipulation which happens on both believers and agnostics". These tendencies continued in French individualist anarchism in the work and activism of Charles-Auguste Bontemps and others.
In the Spanish individualist anarchist magazines "Ética" and "Iniciales", "there is a strong interest in publishing scientific news, usually linked to a certain atheist and anti-theist obsession, a philosophy which will also work for pointing out the incompatibility between science and religion, faith and reason. In this way there will be a lot of talk on Darwin's theories or on the negation of the existence of the soul". Another important current, especially within French and Spanish individualist anarchist groups, was naturism. Naturism promoted an ecological worldview, small ecovillages and, most prominently, nudism as a way to avoid the artificiality of the industrial mass society of modernity. Naturist individualist anarchists saw the individual in his biological, physical and psychological aspects and avoided and tried to eliminate social determinations. An early influence in this vein was Henry David Thoreau and his famous book "Walden". Important promoters of this were Henri Zisly and Émile Gravelle, who collaborated in "La Nouvelle Humanité", followed by "Le Naturien", "Le Sauvage", "L'Ordre Naturel" and "La Vie Naturelle". This relationship between anarchism and naturism was quite important at the end of the 1920s in Spain, when "[t]he linking role played by the 'Sol y Vida' group was very important. The goal of this group was to take trips and enjoy the open air. The Naturist athenaeum, 'Ecléctico', in Barcelona, was the base from which the activities of the group were launched. First "Etica" and then "Iniciales", which began in 1929, were the publications of the group, which lasted until the Spanish Civil War. We must be aware that the naturist ideas expressed in them matched the desires that the libertarian youth had of breaking up with the conventions of the bourgeoisie of the time. That is what a young worker explained in a letter to 'Iniciales'. He writes it under the odd pseudonym of 'silvestre del campo' (wild man in the country): "I find great pleasure in being naked in the woods, bathed in light and air, two natural elements we cannot do without. By shunning the humble garment of an exploited person (garments which, in my opinion, are the result of all the laws devised to make our lives bitter), we feel there are no others left but just the natural laws. Clothes mean slavery for some and tyranny for others. Only the naked man who rebels against all norms stands for anarchism, devoid of the prejudices of outfit imposed by our money-oriented society". The relation between anarchism and naturism "gives way to the Naturist Federation, in July 1928, and to the IV Spanish Naturist Congress, in September 1929, both supported by the Libertarian Movement. However, in the short term, the Naturist and Libertarian movements grew apart in their conceptions of everyday life. The Naturist movement felt closer to the Libertarian individualism of some French theoreticians such as Henri Ner (real name of Han Ryner) than to the revolutionary goals proposed by some Anarchist organisations such as the FAI (Federación Anarquista Ibérica)". The thought of German philosopher Friedrich Nietzsche has been influential in individualist anarchism, specifically in thinkers such as France's Émile Armand, the Italian Renzo Novatore and the Colombian Biofilo Panclasta. Robert C. Holub, author of "Nietzsche: Socialist, Anarchist, Feminist", posits that "translations of Nietzsche's writings in the United States very likely appeared first in "Liberty", the anarchist journal edited by Benjamin Tucker".
For American anarchist historian Eunice Minette Schuster, "[i]t is apparent [...] that Proudhonian Anarchism was to be found in the United States at least as early as 1848 and that it was not conscious of its affinity to the Individualist Anarchism of Josiah Warren and Stephen Pearl Andrews [...] William B. Greene presented this Proudhonian Mutualism in its purest and most systematic form". William Batchelder Greene (1819–1878) is best known for the works "Mutual Banking" (1850), which proposed an interest-free banking system; and "Transcendentalism", a critique of the New England philosophical school. He saw mutualism as the synthesis of "liberty and order". His "associationism ... is checked by individualism ... "Mind your own business," "Judge not that ye be not judged." Over matters which are purely personal, as for example, moral conduct, the individual is sovereign, as well as over that which he himself produces. For this reason he demands "mutuality" in marriage – the equal right of a woman to her own personal freedom and property". Within some individualist anarchist circles, "mutualism" came to mean non-communist anarchism. Contemporary American anarchist Hakim Bey reports that "Steven Pearl Andrews ... was not a fourierist (see Charles Fourier), but he lived through the brief craze for phalansteries in America & adopted a lot of fourierist principles & practices ... a maker of worlds out of words. He syncretized Abolitionism, Free Love, spiritual universalism, (Josiah) Warren, & (Charles) Fourier into a grand utopian scheme he called the Universal Pantarchy ... He was instrumental in founding several "intentional communities," including the "Brownstone Utopia" on 14th St. in New York, & "Modern Times" in Brentwood, Long Island. The latter became as famous as the best-known fourierist communes (Brook Farm in Massachusetts & the North American Phalanx in New Jersey) – in fact, Modern Times became downright notorious (for "Free Love") & finally foundered under a wave of scandalous publicity. Andrews (& Victoria Woodhull) were members of the infamous Section 12 of the 1st International, expelled by Marx for its anarchist, feminist, & spiritualist tendencies". Another form of individualist anarchism was found in the United States as advocated by the so-called Boston anarchists. By default, American individualists had no difficulty accepting the concepts that "one man employ another" or that "he direct him" in his labor, but rather demanded that "all natural opportunities requisite to the production of wealth be accessible to all on equal terms and that monopolies arising from special privileges created by law be abolished". They believed state monopoly capitalism (defined as a state-sponsored monopoly) prevented labor from being fully rewarded. Voltairine de Cleyre summed up the philosophy by saying that the anarchist individualists "are firm in the idea that the system of employer and employed, buying and selling, banking, and all the other essential institutions of Commercialism, centred upon private property, are in themselves good, and are rendered vicious merely by the interference of the State". Even among the 19th-century American individualists, there was not a monolithic doctrine, as they disagreed amongst each other on various issues including intellectual property rights and possession versus property in land.
A major schism occurred later in the 19th century when Tucker and some others abandoned their traditional support of natural rights—as espoused by Lysander Spooner—and converted to an "egoism" modeled upon Max Stirner's philosophy. Lysander Spooner, besides his individualist anarchist activism, was also an important anti-slavery activist and became a member of the First International. Some Boston anarchists, including Benjamin Tucker, identified themselves as socialists, which in the 19th century was often used in the sense of a commitment to improving conditions of the working class (i.e. "the labor problem"). The Boston anarchists such as Tucker and his followers continue to be considered socialists due to their opposition to usury; as the modern economist Jim Stanford points out, there are many different kinds of competitive markets, such as market socialism, and capitalism is only one type of market economy. By around the start of the 20th century, the heyday of individualist anarchism had passed. George Woodcock reports that the American individualist anarchists Lysander Spooner and William B. Greene had been members of the socialist First International. Two individualist anarchists who wrote in Benjamin Tucker's "Liberty" were also important labor organizers of the time. Joseph Labadie (April 18, 1850 – October 7, 1933) was an American labor organizer, individualist anarchist, social activist, printer, publisher, essayist and poet. In 1883, Labadie embraced a non-violent version of individualist anarchism. Without the oppression of the state, Labadie believed, humans would choose to harmonize with "the great natural laws [...] without robbing [their] fellows through interest, profit, rent and taxes". However, he supported community cooperation, backing community control of water utilities, streets and railroads. Although he did not support the militant anarchism of the Haymarket anarchists, he fought for the clemency of the accused because he did not believe they were the perpetrators. In 1888, Labadie organized the Michigan Federation of Labor, became its first president and forged an alliance with Samuel Gompers. A colleague of Labadie's at "Liberty", Dyer Lum was another important individualist anarchist labor activist and poet of the era. A leading anarcho-syndicalist and a prominent left-wing intellectual of the 1880s, he is remembered as the lover and mentor of early anarcha-feminist Voltairine de Cleyre. Lum was a prolific writer who wrote a number of key anarchist texts and contributed to publications including "Mother Earth", "Twentieth Century", "The Alarm" (the journal of the International Working People's Association) and "The Open Court", among others. Lum's political philosophy was a fusion of individualist anarchist economics—"a radicalized form of "laissez-faire" economics" inspired by the Boston anarchists—with radical labor organization similar to that of the Chicago anarchists of the time. Herbert Spencer and Pierre-Joseph Proudhon influenced Lum strongly in his individualist tendency. He developed a "mutualist" theory of unions and as such was active within the Knights of Labor and later promoted anti-political strategies in the American Federation of Labor. Frustration with abolitionism, spiritualism and labor reform caused Lum to embrace anarchism and radicalize workers. Convinced of the necessity of violence to enact social change, he had volunteered to fight in the American Civil War, hoping thereby to bring about the end of slavery.
Kevin Carson has praised Lum's fusion of individualist "laissez-faire" economics with radical labor activism as "creative" and described him as "more significant than any in the Boston group". Some of the American individualist anarchists later in this era, such as Benjamin Tucker, abandoned natural rights positions and converted to Max Stirner's egoist anarchism. Rejecting the idea of moral rights, Tucker said that there were only two rights, "the right of might" and "the right of contract". He also said after converting to egoist individualism that "[i]n times past [...] it was my habit to talk glibly of the right of man to land. It was a bad habit, and I long ago sloughed it off [...] Man's only right to land is his might over it". In adopting Stirnerite egoism in 1886, Tucker rejected natural rights, which had long been considered the foundation of libertarianism. This rejection galvanized the movement into fierce debates, with the natural rights proponents accusing the egoists of destroying libertarianism itself. So bitter was the conflict that a number of natural rights proponents withdrew from the pages of "Liberty" in protest even though they had hitherto been among its frequent contributors. Thereafter, "Liberty" championed egoism, although its general content did not change significantly. Several periodicals were undoubtedly influenced by "Liberty"'s presentation of egoism. They included "I", published by Clarence Lee Swartz and edited by William Walstein Gordak and J. William Lloyd (all associates of "Liberty"); and "The Ego" and "The Egoist", both of which were edited by Edward H. Fulton. Among the egoist papers that Tucker followed were the German "Der Eigene", edited by Adolf Brand, and "The Eagle and the Serpent", issued from London. The latter, the most prominent English-language egoist journal, was published from 1898 to 1900 with the subtitle "A Journal of Egoistic Philosophy and Sociology". American anarchists who adhered to egoism include Benjamin Tucker, John Beverley Robinson, Steven T. Byington, Hutchins Hapgood, James L. Walker, Victor Yarros and Edward H. Fulton. Robinson wrote an essay called "Egoism" in which he states that "[m]odern egoism, as propounded by Stirner and Nietzsche, and expounded by Ibsen, Shaw and others, is all these; but it is more. It is the realization by the individual that they are an individual; that, as far as they are concerned, they are the only individual". Walker published the work "The Philosophy of Egoism", in which he argued that egoism "implies a rethinking of the self-other relationship, nothing less than "a complete revolution in the relations of mankind" that avoids both the "archist" principle that legitimates domination and the "moralist" notion that elevates self-renunciation to a virtue. Walker describes himself as an "egoistic anarchist" who believed in both contract and cooperation as practical principles to guide everyday interactions". For Walker, "what really defines egoism is not mere self-interest, pleasure, or greed; it is the sovereignty of the individual, the full expression of the subjectivity of the individual ego". Italian anti-organizationalist individualist anarchism was brought to the United States by Italian-born individualists such as Giuseppe Ciancabilla and others, who advocated violent propaganda by the deed there.
Anarchist historian George Woodcock reports the incident in which the important Italian social anarchist Errico Malatesta became involved "in a dispute with the individualist anarchists of Paterson, who insisted that anarchism implied no organization at all, and that every man must act solely on his impulses. At last, in one noisy debate, the individual impulse of a certain Ciancabilla directed him to shoot Malatesta, who was badly wounded but obstinately refused to name his assailant". Enrico Arrigoni (pseudonym Frank Brand) was an Italian American individualist anarchist lathe operator, house painter, bricklayer, dramatist and political activist influenced by the work of Max Stirner. He took the pseudonym Brand from a fictional character in one of Henrik Ibsen's plays. In the 1910s, he became involved in anarchist and anti-war activism around Milan. From the 1910s until the 1920s, he participated in anarchist activities and popular uprisings in various countries including Switzerland, Germany, Hungary, Argentina and Cuba. He lived from the 1920s onwards in New York City, where he edited the eclectic individualist anarchist journal "Eresia" in 1928. He also wrote for other American anarchist publications such as "L'Adunata dei refrattari", "Cultura Obrera", "Controcorrente" and "Intesa Libertaria". During the Spanish Civil War, he went to fight with the anarchists, but was imprisoned and was helped on his release by Emma Goldman. Afterwards, Arrigoni became a longtime member of the Libertarian Book Club in New York City. His written works include "The Totalitarian Nightmare" (1975), "The Lunacy of the Superman" (1977), "Adventures in the Country of the Monoliths" (1981) and "Freedom: My Dream" (1986). Murray Bookchin has identified post-left anarchy as a form of individualist anarchism in "Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm", where he identifies "a shift among Euro-American anarchists away from social anarchism and toward individualist or lifestyle anarchism. Indeed, lifestyle anarchism today is finding its principal expression in spray-can graffiti, post-modernist nihilism, antirationalism, neo-primitivism, anti-technologism, neo-Situationist 'cultural terrorism', mysticism, and a 'practice' of staging Foucauldian 'personal insurrections'". Post-left anarchist Bob Black, in his long critique of Bookchin's philosophy called "Anarchy After Leftism", said about post-left anarchy that "[i]t is, unlike Bookchinism, "individualistic" in the sense that if the freedom and happiness of the individual – i.e., each and every really existing person, every Tom, Dick and Murray – is not the measure of the good society, what is?". A strong relationship does exist between post-left anarchism and the work of individualist anarchist Max Stirner. Jason McQuinn says that "when I (and other anti-ideological anarchists) criticize ideology, it is always from a specifically critical, anarchist perspective rooted in both the skeptical, individualist-anarchist philosophy of Max Stirner". Bob Black and Feral Faun/Wolfi Landstreicher also strongly adhere to Stirnerist egoist anarchism. Bob Black has humorously suggested the idea of "marxist stirnerism". Hakim Bey has said: "From Stirner's "Union of Self-Owning Ones" we proceed to Nietzsche's circle of "Free Spirits" and thence to Charles Fourier's "Passional Series", doubling and redoubling ourselves even as the Other multiplies itself in the eros of the group". Bey also wrote: "The Mackay Society, of which Mark & I are active members, is devoted to the anarchism of Max Stirner, Benj.
Tucker & John Henry Mackay [...] The Mackay Society, incidentally, represents a little-known current of individualist thought which never cut its ties with revolutionary labor. Dyer Lum, Ezra & Angela Haywood represent this school of thought; Jo Labadie, who wrote for Tucker's "Liberty", made himself a link between the American "plumb-line" anarchists, the "philosophical" individualists, & the syndicalist or communist branch of the movement; his influence reached the Mackay Society through his son, Laurance. Like the Italian Stirnerites (who influenced us through our late friend Enrico Arrigoni) we support all anti-authoritarian currents, despite their apparent contradictions". As far as posterior individualist anarchists, Jason McQuinn for some time used the pseudonym Lev Chernyi in honor of the Russian individualist anarchist of the same name while Feral Faun has quoted Italian individualist anarchist Renzo Novatore and has translated both Novatore and the young Italian individualist anarchist Bruno Filippi Egoism has had a strong influence on insurrectionary anarchism, as can be seen in the work of Wolfi Landstreicher. Feral Faun wrote in 1995: In the game of insurgence – a lived guerilla war game – it is strategically necessary to use identities and roles. Unfortunately, the context of social relationships gives these roles and identities the power to define the individual who attempts to use them. So I, Feral Faun, became [...] an anarchist [...] a writer [...] a Stirner-influenced, post-situationist, anti-civilization theorist [...] if not in my own eyes, at least in the eyes of most people who've read my writings. European individualist anarchism proceeded from the roots laid by William Godwin, Pierre-Joseph Proudhon and Max Stirner. Proudhon was an early pioneer of anarchism as well as of the important individualist anarchist current of mutualism. Stirner became a central figure of individualist anarchism through the publication of his seminal work "The Ego and Its Own" which is considered to be "a founding text in the tradition of individualist anarchism". Another early figure was Anselme Bellegarrigue. Individualist anarchism expanded and diversified through Europe, incorporating influences from North American individualist anarchism. European individualist anarchists include Albert Libertad, Bellegarrigue, Oscar Wilde, Émile Armand, Lev Chernyi, John Henry Mackay, Han Ryner, Adolf Brand, Miguel Gimenez Igualada, Renzo Novatore and currently Michel Onfray. Important currents within it include free love, anarcho-naturism and illegalism. From the legacy of Proudhon and Stirner there emerged a strong tradition of French individualist anarchism. An early important individualist anarchist was Anselme Bellegarrigue. He participated in the French Revolution of 1848, was author and editor of "Anarchie, Journal de l'Ordre and Au fait ! Au fait ! Interprétation de l'idée démocratique" and wrote the important early Anarchist Manifesto in 1850. Catalan historian of individualist anarchism Xavier Diez reports that during his travels in the United States "he at least contacted (Henry David) Thoreau and, probably (Josiah) Warren". "Autonomie Individuelle" was an individualist anarchist publication that ran from 1887 to 1888. It was edited by Jean-Baptiste Louiche, Charles Schæffer and Georges Deherme. 
Later, this tradition continued with such intellectuals as Albert Libertad, André Lorulot, Émile Armand, Victor Serge, Zo d'Axa and Rirette Maitrejean, who in 1905 developed theory in the main individualist anarchist journal in France, "L'Anarchie". Outside this journal, Han Ryner wrote "Petit Manuel individualiste" (1903). In 1891, Zo d'Axa created the journal "L'En-Dehors". Anarcho-naturism was promoted by Henri Zisly, Emile Gravelle and Georges Butaud. Butaud was an individualist "partisan of the "milieux libres", publisher of "Flambeau" ("an enemy of authority") in 1901 in Vienna", and most of his energies were devoted to creating anarchist colonies ("communautés expérimentales"), several of which he participated in. In this sense, "the theoretical positions and the vital experiences of [F]rench individualism are deeply iconoclastic and scandalous, even within libertarian circles. The call of nudist naturism, the strong defence of birth control methods, the idea of "unions of egoists" with the sole justification of sexual practices, which they will try to put in practice, not without difficulties, will establish a way of thought and action, and will result in sympathy within some, and a strong rejection within others". French individualist anarchists grouped behind Émile Armand published "L'Unique" after World War II. "L'Unique" ran from 1945 to 1956, with a total of 110 issues. Gérard de Lacaze-Duthiers was a French writer, art critic, pacifist and anarchist. Lacaze-Duthiers, an art critic for the Symbolist review journal "La Plume", was influenced by Oscar Wilde, Friedrich Nietzsche and Max Stirner. His 1906 work "L'Ideal Humain de l'Art" helped found the "artistocracy" movement—a movement advocating life in the service of art. His ideal was an anti-elitist aestheticism: "All men should be artists". Together with André Colomer and Manuel Devaldes, in 1913 he founded "L'Action d'Art", an anarchist literary journal. After World War II, he contributed to the journal "L'Unique". Within the synthesist anarchist organization, the Fédération Anarchiste, there existed an individualist anarchist tendency alongside anarcho-communist and anarcho-syndicalist currents. Individualist anarchists participating inside the Fédération Anarchiste included Charles-Auguste Bontemps, Georges Vincey and André Arru. The new base principles of the francophone Anarchist Federation were written by the individualist anarchist Charles-Auguste Bontemps and the anarcho-communist Maurice Joyeux, which established an organization with a plurality of tendencies and autonomy of federated groups organized around synthesist principles. Charles-Auguste Bontemps was a prolific author, mainly in the anarchist, freethinking, pacifist and naturist press of the time. His view on anarchism was based around his concept of "Social Individualism", on which he wrote extensively. He defended an anarchist perspective which consisted of "a collectivism of things and an individualism of persons". In 2002, Libertad organized a new version of "L'EnDehors", collaborating with "Green Anarchy" and including several contributors, such as Lawrence Jarach, Patrick Mignard, Thierry Lodé, Ron Sakolsky and Thomas Slut. Numerous articles about capitalism, human rights, free love and social struggles were published. "L'EnDehors" continues now as a website, EnDehors.org.
The prolific contemporary French philosopher Michel Onfray has been writing from an individualist anarchist perspective influenced by Nietzsche, French post-structuralist thinkers such as Michel Foucault and Gilles Deleuze, and Greek classical schools of philosophy such as the Cynics and Cyrenaics. Among the books which best expose Onfray's individualist anarchist perspective are "La sculpture de soi : la morale esthétique" ("The Sculpture of Oneself: Aesthetic Morality"), "La philosophie féroce : exercices anarchistes", "La puissance d'exister" and "Physiologie de Georges Palante, portrait d'un nietzchéen de gauche", which focuses on the French individualist philosopher Georges Palante. Illegalism is an anarchist philosophy that developed primarily in France, Italy, Belgium and Switzerland during the early 1900s as an outgrowth of Stirner's individualist anarchism. Illegalists usually did not seek a moral basis for their actions, recognizing only the reality of "might" rather than "right"; for the most part, illegal acts were done simply to satisfy personal desires, not for some greater ideal, although some committed crimes as a form of propaganda of the deed. The illegalists embraced direct action and propaganda of the deed. Influenced by theorist Max Stirner's egoism as well as by Pierre-Joseph Proudhon (his view that "Property is theft!"), Clément Duval and Marius Jacob proposed the theory of "la reprise individuelle" (individual reclamation), which justified robbery of the rich and personal direct action against exploiters and the system. Illegalism first rose to prominence among a generation of Europeans inspired by the unrest of the 1890s, during which Ravachol, Émile Henry, Auguste Vaillant and Sante Geronimo Caserio committed daring crimes in the name of anarchism in what is known as propaganda of the deed. France's Bonnot Gang was the most famous group to embrace illegalism. In Germany, the Scottish-German John Henry Mackay became the most important propagandist for individualist anarchist ideas. He fused Stirnerist egoism with the positions of Benjamin Tucker and actually translated Tucker into German. Two semi-fictional writings of his own, "Die Anarchisten" and "Der Freiheitsucher", contributed to individualist theory through an updating of egoist themes within a consideration of the anarchist movement. English translations of these works arrived in the United Kingdom and in individualist American circles led by Tucker. Mackay is also known as an important early European activist for gay rights. Using the pseudonym Sagitta, Mackay wrote a series of works for pederastic emancipation, titled "Die Buecher der namenlosen Liebe" ("Books of the Nameless Love"). This series was conceived in 1905 and completed in 1913 and included "Fenny Skaller", a story of a pederast. Under the same pseudonym, he also published fiction, such as "Holland" (1924) and a pederastic novel of the Berlin boy-bars, "Der Puppenjunge" ("The Hustler") (1926). Adolf Brand (1874–1945) was a German writer, Stirnerist anarchist and pioneering campaigner for the acceptance of male bisexuality and homosexuality. In 1896, Brand published a German homosexual periodical, "Der Eigene". This was the first ongoing homosexual publication in the world. The name was taken from the writings of egoist philosopher Max Stirner (who had greatly influenced the young Brand) and refers to Stirner's concept of "self-ownership" of the individual.
"Der Eigene" concentrated on cultural and scholarly material and may have had an average of around 1,500 subscribers per issue during its lifetime, although the exact numbers are uncertain. Contributors included Erich Mühsam, Kurt Hiller, John Henry Mackay (under the pseudonym Sagitta) and artists Wilhelm von Gloeden, Fidus and Sascha Schneider. Brand contributed many poems and articles himself. Benjamin Tucker followed this journal from the United States. "Der Einzige" was a German individualist anarchist magazine. It appeared in 1919 as a weekly, then sporadically until 1925 and was edited by cousins Anselm Ruest (pseudonym for Ernst Samuel) and Mynona (pseudonym for Salomo Friedlaender). Its title was adopted from the book "Der Einzige und sein Eigentum" ("The Ego and Its Own") by Max Stirner. Another influence was the thought of German philosopher Friedrich Nietzsche. The publication was connected to the local expressionist artistic current and the transition from it towards Dada. In Italy, individualist anarchism had a strong tendency towards illegalism and violent propaganda by the deed similar to French individualist anarchism, but perhaps more extreme and which emphazised criticism of organization be it anarchist or of other type. In this respect, we can consider notorious magnicides carried out or attempted by individualists Giovanni Passannante, Sante Caserio, Michele Angiolillo, Luigi Luccheni and Gaetano Bresci who murdered King Umberto I. Caserio lived in France and coexisted within French illegalism and later assassinated French President Sadi Carnot. The theoretical seeds of current insurrectionary anarchism were already laid out at the end of 19th century Italy in a combination of individualist anarchism criticism of permanent groups and organization with a socialist class struggle worldview. During the rise of fascism, this thought also motivated Gino Lucetti, Michele Schirru and Angelo Sbardellotto in attempting the assassination of Benito Mussolini. During the early 20th century, the intellectual work of individualist anarchist Renzo Novatore came to importance and he was influenced by Max Stirner, Friedrich Nietzsche, Georges Palante, Oscar Wilde, Henrik Ibsen, Arthur Schopenhauer and Charles Baudelaire. He collaborated in numerous anarchist journals and participated in futurism avant-garde currents. In his thought, he adhered to Stirnerist disrespect for private property, only recognizing property of one's own spirit.
https://en.wikipedia.org/wiki?curid=14936
Italo Calvino Italo Calvino (15 October 1923 – 19 September 1985) was an Italian journalist and writer of short stories and novels. His best known works include the "Our Ancestors" trilogy (1952–1959), the "Cosmicomics" collection of short stories (1965), and the novels "Invisible Cities" (1972) and "If on a winter's night a traveler" (1979). Admired in Britain, Australia and the United States, he was the most translated contemporary Italian writer at the time of his death. Italo Calvino is buried in the cemetery of Castiglione della Pescaia, in Tuscany. Italo Calvino was born in Santiago de las Vegas, a suburb of Havana, Cuba, in 1923. His father, Mario, was a tropical agronomist and botanist who also taught agriculture and floriculture. Born 47 years earlier in Sanremo, Italy, Mario Calvino had emigrated to Mexico in 1909, where he took up an important position with the Ministry of Agriculture. In an autobiographical essay, Italo Calvino explained that his father "had been in his youth an anarchist, a follower of Kropotkin and then a Socialist Reformist". In 1917, Mario left for Cuba to conduct scientific experiments, after living through the Mexican Revolution. Calvino's mother, Giuliana Luigia Evelina "Eva" Mameli, was a botanist and university professor. A native of Sassari in Sardinia and 11 years younger than her husband, she married while still a junior lecturer at Pavia University. Born into a secular family, Eva was a pacifist educated in the "religion of civic duty and science". Eva gave Calvino his unusual first name to remind him of his Italian heritage, although, since he ended up growing up in Italy after all, Calvino thought his name sounded "belligerently nationalist". Calvino described his parents as being "very different in personality from one another", suggesting perhaps deeper tensions behind a comfortable, albeit strict, middle-class upbringing devoid of conflict. As an adolescent, he found it hard to relate to poverty and the working class, and was "ill at ease" with his parents' openness to the laborers who filed into his father's study on Saturdays to receive their weekly paycheck. In 1925, less than two years after Calvino's birth, the family returned to Italy and settled permanently in Sanremo on the Ligurian coast. Calvino's brother Floriano, who became a distinguished geologist, was born in 1927. The family divided their time between the Villa Meridiana, an experimental floriculture station which also served as their home, and Mario's ancestral land at San Giovanni Battista. On this small working farm set in the hills behind Sanremo, Mario pioneered the cultivation of then-exotic fruits such as avocado and grapefruit, eventually obtaining an entry in the "Dizionario biografico degli italiani" for his achievements. The vast forests and luxuriant fauna omnipresent in Calvino's early fiction such as "The Baron in the Trees" derive from this "legacy". In an interview, Calvino stated that "San Remo continues to pop out in my books, in the most diverse pieces of writing." He and Floriano would climb the tree-rich estate and perch for hours on the branches reading their favorite adventure stories. Less salubrious aspects of this "paternal legacy" are described in "The Road to San Giovanni", Calvino's memoir of his father in which he exposes their inability to communicate: "Talking to each other was difficult.
Both verbose by nature, possessed of an ocean of words, in each other's presence we became mute, would walk in silence side by side along the road to San Giovanni." A fan of Rudyard Kipling's "The Jungle Book" as a child, Calvino felt that his early interest in stories made him the "black sheep" of a family that held literature in less esteem than the sciences. Fascinated by American movies and cartoons, he was equally attracted to drawing, poetry, and theatre. On a darker note, Calvino recalled that his earliest memory was of a Marxist professor who had been brutally assaulted by Benito Mussolini's Blackshirts: "I remember clearly that we were at dinner when the old professor came in with his face beaten up and bleeding, his bowtie all torn up over it, asking for help." Other legacies included his parents' beliefs in Freemasonry and Republicanism, with elements of Anarchism and Marxism. Austere freethinkers with an intense hatred of the ruling National Fascist Party, Eva and Mario also refused to give their sons any education in the Catholic Faith or any other religion. Italo attended the English nursery school St George's College, followed by a Protestant private elementary school run by Waldensians. His secondary schooling, with a classical lyceum curriculum, was completed at the state-run Liceo Gian Domenico Cassini where, at his parents' request, he was exempted from religion classes but frequently asked to justify his anti-conformism to teachers, janitors, and fellow pupils. In his mature years, Calvino described the experience as having made him "tolerant of others' opinions, particularly in the field of religion, remembering how irksome it was to hear myself mocked because I did not follow the majority's beliefs". In 1938, Eugenio Scalfari, who went on to found the weekly magazine "L'Espresso" and the major Italian newspaper "La Repubblica", came from Civitavecchia to join Calvino's class; though a year younger, he shared the same desk. The two teenagers formed a lasting friendship, Calvino attributing his political awakening to their university discussions. Seated together "on a huge flat stone in the middle of a stream near our land", he and Scalfari founded the MUL (University Liberal Movement). Eva managed to delay her son's enrolment in the Party's armed scouts, the "Balilla Moschettieri", and then arranged that he be excused, as a non-Catholic, from performing devotional acts in Church. But later on, as a compulsory member, he could not avoid the assemblies and parades of the "Avanguardisti", and was forced to participate in the Italian invasion of the French Riviera in June 1940. In 1941, Calvino enrolled at the University of Turin, choosing the Agriculture Faculty where his father had previously taught courses in agronomy. Concealing his literary ambitions to please his family, he passed four exams in his first year while reading anti-Fascist works by Elio Vittorini, Eugenio Montale, Cesare Pavese, Johan Huizinga, and Pisacane, and works on physics by Max Planck, Werner Heisenberg, and Albert Einstein. Disdainful of Turin students, Calvino saw himself as enclosed in a "provincial shell" that offered the illusion of immunity from the Fascist nightmare: "We were ‘hard guys’ from the provinces, hunters, snooker-players, show-offs, proud of our lack of intellectual sophistication, contemptuous of any patriotic or military rhetoric, coarse in our speech, regulars in the brothels, dismissive of any romantic sentiment and desperately devoid of women."
Calvino transferred to the University of Florence in 1943 and reluctantly passed three more exams in agriculture. By the end of the year, the Germans had succeeded in occupying Liguria and setting up Benito Mussolini's puppet Republic of Salò in northern Italy. Now twenty years old, Calvino refused military service and went into hiding. Reading intensely in a wide array of subjects, he also reasoned politically that, of all the partisan groupings, the communists were the best organized with "the most convincing political line". In spring 1944, Eva encouraged her sons to enter the Italian Resistance in the name of "natural justice and family virtues". Using the battle name "Santiago", Calvino joined the Garibaldi Brigades, a clandestine Communist group, and for twenty months endured the fighting in the Maritime Alps until 1945 and the Liberation. As a result of his refusal to be a conscript, his parents were held hostage by the Nazis for an extended period at the Villa Meridiana. Calvino wrote of his mother's ordeal that "she was an example of tenacity and courage… behaving with dignity and firmness before the SS and the Fascist militia, and in her long detention as a hostage, not least when the blackshirts three times pretended to shoot my father in front of her eyes. The historical events which mothers take part in acquire the greatness and invincibility of natural phenomena". Calvino settled in Turin in 1945, after a long hesitation over living there or in Milan. He often humorously belittled this choice, describing Turin as a "city that is serious but sad". Returning to university, he abandoned Agriculture for the Arts Faculty. A year later, he was initiated into the literary world by Elio Vittorini, who published his short story "Andato al comando" (1945; "Gone to Headquarters") in "Il Politecnico", a Turin-based weekly magazine associated with the university. The horror of the war had not only provided the raw material for his literary ambitions but deepened his commitment to the Communist cause. Viewing civilian life as a continuation of the partisan struggle, he confirmed his membership of the Italian Communist Party. On reading Vladimir Lenin's "State and Revolution", he plunged into post-war political life, associating himself chiefly with the workers' movement in Turin. In 1947, he graduated with a Master's thesis on Joseph Conrad, wrote short stories in his spare time, and landed a job in the publicity department at the Einaudi publishing house run by Giulio Einaudi. Although brief, his stint put him in regular contact with Cesare Pavese, Natalia Ginzburg, Norberto Bobbio, and many other left-wing intellectuals and writers. He then left Einaudi to work as a journalist for the official Communist daily, "L'Unità", and the newborn Communist political magazine, "Rinascita". During this period, Pavese and poet Alfonso Gatto were Calvino's closest friends and mentors. His first novel, "Il sentiero dei nidi di ragno" ("The Path to the Nest of Spiders"), written with valuable editorial advice from Pavese, won the Premio Riccione on publication in 1947. With sales topping 5000 copies, a surprise success in postwar Italy, the novel inaugurated Calvino's neorealist period. In a prescient essay, Pavese praised the young writer as a "squirrel of the pen" who "climbed into the trees, more for fun than fear, to observe partisan life as a fable of the forest". In 1948, he interviewed one of his literary idols, Ernest Hemingway, travelling with Natalia Ginzburg to his home in Stresa.
"Ultimo viene il corvo" ("The Crow Comes Last"), a collection of stories based on his wartime experiences, was published to acclaim in 1949. Despite the triumph, Calvino grew increasingly worried by his inability to compose a worthy second novel. He returned to Einaudi in 1950, responsible this time for the literary volumes. He eventually became a consulting editor, a position that allowed him to hone his writing talent, discover new writers, and develop into "a reader of texts". In late 1951, presumably to advance in the Communist Party, he spent two months in the Soviet Union as correspondent for "l'Unità". While in Moscow, he learned of his father's death on 25 October. The articles and correspondence he produced from this visit were published in 1952, winning the Saint-Vincent Prize for journalism. Over a seven-year period, Calvino wrote three realist novels, "The White Schooner" (1947–1949), "Youth in Turin" (1950–1951), and "The Queen's Necklace" (1952–54), but all were deemed defective. Calvino’s first efforts as a fictionist were marked with his experience in the Italian resistance during the Second World War, however his acclamation as a writer of fantastic stories came in the 1950s. During the eighteen months it took to complete "I giovani del Po" ("Youth in Turin"), he made an important self-discovery: "I began doing what came most naturally to me – that is, following the memory of the things I had loved best since boyhood. Instead of making myself write the book I "ought" to write, the novel that was expected of me, I conjured up the book I myself would have liked to read, the sort by an unknown writer, from another age and another country, discovered in an attic." The result was "Il visconte dimezzato" (1952; "The Cloven Viscount") composed in 30 days between July and September 1951. The protagonist, a seventeenth century viscount sundered in two by a cannonball, incarnated Calvino's growing political doubts and the divisive turbulence of the Cold War. Skillfully interweaving elements of the fable and the fantasy genres, the allegorical novel launched him as a modern "fabulist". In 1954, Giulio Einaudi commissioned his "Fiabe Italiane" (1956; "Italian Folktales") on the basis of the question, "Is there an Italian equivalent of the Brothers Grimm?" For two years, Calvino collated tales found in 19th century collections across Italy then translated 200 of the finest from various dialects into Italian. Key works he read at this time were Vladimir Propp's "Morphology of the Folktale" and "Historical Roots of Russian Fairy Tales", stimulating his own ideas on the origin, shape and function of the story. In 1952 Calvino wrote with Giorgio Bassani for "Botteghe Oscure", a magazine named after the popular name of the party's head-offices in Rome. He also worked for "Il Contemporaneo", a Marxist weekly. From 1955 to 1958 Calvino had an affair with Italian actress Elsa De Giorgi, a married, older woman. Excerpts of the hundreds of love letters Calvino wrote to her were published in the "Corriere della Sera" in 2004, causing some controversy. In 1957, disillusioned by the 1956 Soviet invasion of Hungary, Calvino left the Italian Communist Party. In his letter of resignation published in "L'Unità" on 7 August, he explained the reason of his dissent (the violent suppression of the Hungarian uprising and the revelation of Joseph Stalin's crimes) while confirming his "confidence in the democratic perspectives" of world Communism. 
He withdrew from taking an active role in politics and never joined another party. Ostracized by the PCI party leader Palmiro Togliatti and his supporters on publication of "Becalmed in the Antilles" ("La gran bonaccia delle Antille"), a satirical allegory of the party's immobilism, Calvino began writing "The Baron in the Trees". Completed in three months and published in 1957, the fantasy is based on the "problem of the intellectual's political commitment at a time of shattered illusions". He found new outlets for his periodic writings in the journals "Città aperta" and "Tempo presente", the magazine "Passato e presente", and the weekly "Italia Domani". With Vittorini in 1959, he became co-editor of "Il Menabò", a cultural journal devoted to literature in the modern industrial age, a position he held until 1966. Despite severe restrictions in the US against foreigners holding communist views, Calvino was allowed to visit the United States, where he stayed six months from 1959 to 1960 (four of which he spent in New York), at the invitation of the Ford Foundation. Calvino was particularly impressed by the "New World": "Naturally I visited the South and also California, but I always felt a New Yorker. My city is New York." The letters he wrote to Einaudi describing this visit to the United States were first published as "American Diary 1959–1960" in "Hermit in Paris" in 2003. In 1962 Calvino met Argentinian translator Esther Judith Singer ("Chichita") and married her in 1964 in Havana, during a trip in which he visited his birthplace and was introduced to Ernesto "Che" Guevara. On 15 October 1967, a few days after Guevara's death, Calvino wrote a tribute to him that was published in Cuba in 1968, and in Italy thirty years later. He and his wife settled in Rome in the via Monte Brianzo, where their daughter, Giovanna, was born in 1965. Once again working for Einaudi, Calvino began publishing some of his "Cosmicomics" in "Il Caffè", a literary magazine. Vittorini's death in 1966 greatly affected Calvino. He went through what he called an "intellectual depression", which the writer himself described as an important passage in his life: "...I ceased to be young. Perhaps it's a metabolic process, something that comes with age, I'd been young for a long time, perhaps too long, suddenly I felt that I had to begin my old age, yes, old age, perhaps with the hope of prolonging it by beginning it early." In the fermenting atmosphere that evolved into 1968's cultural revolution (the French May), he moved with his family to Paris in 1967, setting up home in a villa in the Square de Châtillon. Nicknamed "L'ironique amusé", he was invited by Raymond Queneau in 1968 to join the Oulipo ("Ouvroir de littérature potentielle") group of experimental writers, where he met Roland Barthes, Georges Perec, and Claude Lévi-Strauss, all of whom influenced his later production. That same year, he turned down the Viareggio Prize for "Ti con zero" ("Time and the Hunter") on the grounds that it was an award given by "institutions emptied of meaning". He accepted, however, both the Asti Prize and the Feltrinelli Prize for his writing in 1970 and 1972, respectively. In two autobiographical essays published in 1962 and 1970, Calvino described himself as "atheist" and his outlook as "non-religious". Calvino developed closer contacts with the academic world, with notable experiences at the Sorbonne (with Barthes) and the University of Urbino.
His interests included classical studies: Honoré de Balzac, Ludovico Ariosto, Dante, Ignacio de Loyola, Cervantes, Shakespeare, Cyrano de Bergerac, and Giacomo Leopardi. Between 1972 and 1973 Calvino published two short stories, "The Name, the Nose" and the Oulipo-inspired "The Burning of the Abominable House", in the Italian edition of "Playboy". He became a regular contributor to the Italian newspaper "Corriere della Sera", spending his summer vacations in a house constructed in Roccamare near Castiglione della Pescaia, Tuscany. In 1975 Calvino was made an Honorary Member of the American Academy. Awarded the Austrian State Prize for European Literature in 1976, he visited Mexico, Japan, and the United States, where he gave a series of lectures in several American cities. After his mother died in 1978 at the age of 92, Calvino sold Villa Meridiana, the family home in San Remo. Two years later, he moved to Rome in Piazza Campo Marzio near the Pantheon and began editing the work of Tommaso Landolfi for Rizzoli. Awarded the French Légion d'honneur in 1981, he also agreed to serve as jury president of the 29th Venice Film Festival. During the summer of 1985, Calvino prepared a series of texts on literature for the Charles Eliot Norton Lectures to be delivered at Harvard University in the fall. On 6 September, he was admitted to the ancient hospital of Santa Maria della Scala in Siena, where he died during the night between 18 and 19 September of a cerebral hemorrhage. His lecture notes were published posthumously in Italian in 1988 and in English as "Six Memos for the Next Millennium" in 1993. A selected bibliography of Calvino's writings lists the works that have been translated into and published in English, along with a few major untranslated works. More exhaustive bibliographies can be found in Martin McLaughlin's "Italo Calvino" and Beno Weiss's "Understanding Italo Calvino". The "Scuola Italiana Italo Calvino", an Italian curriculum school in Moscow, Russia, is named after him. A crater on the planet Mercury, Calvino, and a main belt asteroid, "22370 Italocalvino", are also named after him.
https://en.wikipedia.org/wiki?curid=14937
Intercontinental ballistic missile An intercontinental ballistic missile (ICBM) is a guided ballistic missile with a minimum range of about 5,500 kilometres (3,400 mi), primarily designed for nuclear weapons delivery (delivering one or more thermonuclear warheads). Conventional, chemical, and biological weapons can also be delivered with varying effectiveness, but have never been deployed on ICBMs. Most modern designs support multiple independently targetable reentry vehicles (MIRVs), allowing a single missile to carry several warheads, each of which can strike a different target. Russia, the United States, China, France, India, the United Kingdom, and North Korea are the only countries that have operational ICBMs. Early ICBMs had limited precision, which made them suitable for use only against the largest targets, such as cities. They were seen as a "safe" basing option, one that would keep the deterrent force close to home where it would be difficult to attack. Attacks against military targets (especially hardened ones) still demanded the use of a more precise, manned bomber. Second- and third-generation designs (such as the LGM-118 Peacekeeper) dramatically improved accuracy, to the point where even the smallest point targets could be successfully attacked. ICBMs are differentiated by having greater range and speed than other ballistic missiles: intermediate-range ballistic missiles (IRBMs), medium-range ballistic missiles (MRBMs), short-range ballistic missiles (SRBMs) and tactical ballistic missiles (TBMs). Short and medium-range ballistic missiles are known collectively as theatre ballistic missiles. The first practical design for an ICBM grew out of Nazi Germany's V-2 rocket program. The liquid-fueled V-2, designed by Wernher von Braun and his team, was widely used at the end of World War II to bomb British and Belgian cities. Under "Projekt Amerika", von Braun's team developed the A9/A10 ICBM, intended for use in bombing New York and other American cities. Initially intended to be guided by radio, it was changed to be a piloted craft after the failure of Operation Elster. The second stage of the A9/A10 rocket was tested a few times in January and February 1945. After the war, the U.S. executed Operation Paperclip, which brought von Braun and hundreds of other leading German scientists to the United States to develop IRBMs, ICBMs, and launchers for the U.S. Army. The development of such technology had been predicted by U.S. Army General Hap Arnold as early as 1943. After World War II, the U.S. and USSR started rocket research programs based on the V-2 and other German wartime designs. Each branch of the U.S. military started its own programs, leading to considerable duplication of effort. In the USSR, rocket research was centrally organized, although several teams worked on different designs. Early Soviet development was focused on missiles able to attack European targets. This changed in 1953, when Sergei Korolyov was directed to start development of a true ICBM able to deliver newly developed hydrogen bombs. Given steady funding throughout, the R-7 developed with some speed. The first launch took place on 15 May 1957 and ended in an unintended crash downrange from the launch site. The first successful test followed on 21 August 1957; the R-7 flew its full test range and became the world's first ICBM. The first strategic-missile unit became operational on 9 February 1959 at Plesetsk in north-west Russia. It was the same R-7 launch vehicle that placed the first artificial satellite in space, Sputnik, on 4 October 1957.
The first human spaceflight in history was accomplished on a derivative of the R-7, Vostok, on 12 April 1961, by Soviet cosmonaut Yuri Gagarin. A heavily modernized version of the R-7 is still used as the launch vehicle for the Soviet/Russian Soyuz spacecraft, marking more than 60 years of operational history for Sergei Korolyov's original rocket design. The US initiated ICBM research in 1946 with the RTV-A-2 Hiroc project. This was a three-stage effort, with ICBM development not starting until the third stage. However, funding was cut after only three partially successful launches in 1948 of the second-stage design, which was used to test variations on the V-2 design. With overwhelming air superiority and truly intercontinental bombers, the newly forming US Air Force did not take the problem of ICBM development seriously. Things changed in 1953 with the Soviet testing of their first thermonuclear weapon, but it was not until 1954 that the Atlas missile program was given the highest national priority. The Atlas A first flew on 11 June 1957; the flight lasted only about 24 seconds before the rocket blew up. The first successful flight of an Atlas missile to full range occurred on 28 November 1958. The first armed version of the Atlas, the Atlas D, was declared operational in January 1959 at Vandenberg, although it had not yet flown. The first test flight was carried out on 9 July 1959, and the missile was accepted for service on 1 September. The R-7 and Atlas each required a large launch facility, making them vulnerable to attack, and could not be kept in a ready state. Failure rates were very high throughout the early years of ICBM technology. Human spaceflight programs (Vostok, Mercury, Voskhod, Gemini, etc.) served as a highly visible means of demonstrating confidence in reliability, with successes translating directly to national defense implications. The US was well behind the Soviet Union in the Space Race, so US President John F. Kennedy raised the stakes with the Apollo program, which used Saturn rocket technology that had been funded by President Dwight D. Eisenhower. These early ICBMs also formed the basis of many space launch systems. Examples include the R-7, Atlas, Redstone, Titan, and Proton, which was derived from the earlier ICBMs but never deployed as an ICBM. The Eisenhower administration supported the development of solid-fueled missiles such as the LGM-30 Minuteman, Polaris and Skybolt. Modern ICBMs tend to be smaller than their ancestors, due to increased accuracy and smaller and lighter warheads, and use solid fuels, making them less useful as orbital launch vehicles. The Western view of the deployment of these systems was governed by the strategic theory of mutual assured destruction. In the 1950s and 1960s, both the US and USSR began developing anti-ballistic missile systems; these systems were restricted by the 1972 ABM Treaty. The first successful ABM test was conducted by the USSR in 1961, which later deployed a fully operational system defending Moscow in the 1970s (see Moscow ABM system). The 1972 SALT treaty froze the number of ICBM launchers of both the US and the USSR at existing levels, and allowed new submarine-based SLBM launchers only if an equal number of land-based ICBM launchers were dismantled. Subsequent talks, called SALT II, were held from 1972 to 1979 and actually reduced the number of nuclear warheads held by the US and USSR.
SALT II was never ratified by the United States Senate, but its terms were nevertheless honored by both sides until 1986, when the Reagan administration "withdrew" after accusing the USSR of violating the pact. In the 1980s, President Ronald Reagan launched the Strategic Defense Initiative as well as the MX and Midgetman ICBM programs. China developed a minimal independent nuclear deterrent, entering its own cold war with the Soviet Union after an ideological split beginning in the early 1960s. After first testing a domestically built nuclear weapon in 1964, it went on to develop various warheads and missiles. Beginning in the early 1970s, the liquid-fuelled DF-5 ICBM was developed, and it was used as a satellite launch vehicle in 1975. The DF-5, with a range long enough to strike the western United States and the USSR, was silo-deployed, with the first pair in service by 1981 and possibly twenty missiles in service by the late 1990s. China also deployed the JL-1 medium-range ballistic missile aboard the ultimately unsuccessful Type 92 submarine. In 1991, the United States and the Soviet Union agreed in the START I treaty to reduce their deployed ICBMs and attributed warheads. All five of the nations with permanent seats on the United Nations Security Council have operational long-range ballistic missile systems; Russia, the United States, and China also have land-based ICBMs (the US missiles are silo-based, while China and Russia have both silo-based and road-mobile (DF-31, RT-2PM2 Topol-M) missiles). Israel is believed to have deployed a road-mobile nuclear ICBM, the Jericho III, which entered service in 2008; an upgraded version is in development. India successfully test-fired the Agni V, with a strike range of more than 5,000 kilometres, on 19 April 2012, claiming entry into the ICBM club. The missile's actual range is speculated by foreign researchers to be considerably greater, with India having downplayed its capabilities to avoid causing concern to other countries. By 2012 there was speculation by some intelligence agencies that North Korea was developing an ICBM. North Korea successfully put a satellite into space on 12 December 2012 using the Unha-3 rocket; the United States claimed that the launch was in fact a way to test an ICBM. (See Timeline of first orbital launches by country.) In early July 2017, North Korea claimed for the first time to have successfully tested an ICBM capable of carrying a large thermonuclear warhead. In July 2014, China announced the development of its newest generation of ICBM, the Dongfeng-41 (DF-41), which has a range of 12,000 kilometres (7,500 miles), capable of reaching the United States, and which analysts believe is capable of being outfitted with MIRV technology. Most countries in the early stages of developing ICBMs have used liquid propellants, the known exceptions being the Indian Agni-V, the planned but cancelled South African RSA-4 ICBM, and the now-in-service Israeli Jericho III. The RS-28 Sarmat (Russian: РС-28 Сармат; NATO reporting name: SATAN 2) is a Russian liquid-fueled, MIRV-equipped, super-heavy thermonuclear-armed intercontinental ballistic missile in development by the Makeyev Rocket Design Bureau from 2009, intended to replace the previous R-36 missile. Its large payload would allow for up to 10 heavy warheads, or 15 lighter ones, or up to 24 Yu-74 hypersonic glide vehicles, or a combination of warheads and massive amounts of countermeasures designed to defeat anti-missile systems; it was heralded by the Russian military as a response to the US Prompt Global Strike.
The following flight phases can be distinguished: the boost phase, the midcourse phase and the reentry (terminal) phase. ICBMs usually use the trajectory which optimizes range for a given amount of payload (the "minimum-energy trajectory"); an alternative is a depressed trajectory, which allows less payload, shorter flight time, and has a much lower apogee. Modern ICBMs typically carry multiple independently targetable reentry vehicles ("MIRVs"), each of which carries a separate nuclear warhead, allowing a single missile to hit multiple targets. MIRV was an outgrowth of the rapidly shrinking size and weight of modern warheads and the Strategic Arms Limitation Treaties (SALT I and SALT II), which imposed limitations on the number of launch vehicles. It has also proved to be an "easy answer" to proposed deployments of anti-ballistic missile (ABM) systems: it is far less expensive to add more warheads to an existing missile system than to build an ABM system capable of shooting down the additional warheads; hence, most ABM system proposals have been judged to be impractical. The first operational ABM systems were deployed in the United States during the 1970s. The Safeguard ABM facility, located in North Dakota, was operational from 1975 to 1976. The USSR deployed its ABM-1 Galosh system around Moscow in the 1970s, which remains in service. Israel deployed a national ABM system based on the Arrow missile in 1998, but it is mainly designed to intercept shorter-ranged theater ballistic missiles, not ICBMs. The Alaska-based United States national missile defense system attained initial operational capability in 2004. ICBMs can be deployed from multiple platforms: from missile silos, from submarines, from heavy trucks, and from mobile launchers on rails. The last three kinds are mobile and therefore hard to find. During storage, one of the most important features of the missile is its serviceability. One of the key features of the first computer-controlled ICBM, the Minuteman missile, was that it could quickly and easily use its computer to test itself. After launch, a booster pushes the missile and then falls away. Most modern boosters are solid-fueled rocket motors, which can be stored easily for long periods of time. Early missiles used liquid-fueled rocket motors. Many liquid-fueled ICBMs could not be kept fueled all the time, as the cryogenic liquid oxygen boiled off and caused ice formation, and therefore fueling the rocket was necessary before launch. This procedure was a source of significant operational delay, and might allow the missiles to be destroyed by enemy counterparts before they could be used. To resolve this problem, the United Kingdom invented the missile silo, which protected the missile from a first strike and also hid fuelling operations underground. Once the booster falls away, the remaining "bus" releases several warheads, each of which continues on its own unpowered ballistic trajectory, much like an artillery shell or cannonball. The warhead is encased in a cone-shaped reentry vehicle and is difficult to detect in this phase of flight, as there is no rocket exhaust or other emissions to mark its position to defenders. The high speeds of the warheads make them difficult to intercept and allow for little warning, striking targets many thousands of kilometers away from the launch site (and, given the possible locations of missile submarines, from almost anywhere in the world) within approximately 30 minutes. Many authorities say that missiles also release aluminized balloons, electronic noise-makers, and other items intended to confuse interception devices and radars.
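As a rough illustration of the minimum-energy trajectory and the roughly 30-minute flight times discussed above, the following sketch numerically integrates a drag-free ballistic arc over a spherical, non-rotating Earth. It is a toy physics model, not a description of any actual missile; the burnout speed and launch angle are illustrative assumptions.

```python
# A minimal sketch (toy model, not any military design): integrate a drag-free
# ballistic trajectory over a spherical, non-rotating Earth to illustrate why
# such trajectories can reach targets ~10,000 km away in roughly half an hour.
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def ballistic_flight(speed, angle_deg, dt=1.0):
    """Integrate free flight from burnout (assumed at the surface, for
    simplicity) until impact; return (downrange km, flight time in minutes)."""
    angle = math.radians(angle_deg)
    # State: position and velocity in a 2-D plane through Earth's centre;
    # start at (0, R) with velocity measured from the local horizontal.
    x, y = 0.0, R_EARTH
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    t = 0.0
    while True:
        r = math.hypot(x, y)
        if r < R_EARTH and t > 0:
            break  # back below the surface: impact
        # Inverse-square gravity pointing at Earth's centre.
        ax, ay = -MU * x / r**3, -MU * y / r**3
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
    # Ground distance = Earth radius times the central angle travelled.
    central_angle = math.atan2(x, y)
    return R_EARTH * central_angle / 1000.0, t / 60.0

# Assumed ~7.2 km/s burnout at a 25-degree flight-path angle.
rng, minutes = ballistic_flight(7200.0, 25.0)
print(f"downrange ~{rng:,.0f} km, flight time ~{minutes:.0f} min")
```

With these assumed burnout conditions, the model gives a downrange distance on the order of 10,000 km and a flight time a little over half an hour, consistent with the figures quoted above.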
As the nuclear warhead reenters the Earth's atmosphere, its high speed causes compression of the air, leading to a dramatic rise in temperature which would destroy it if it were not shielded in some way. As a result, warhead components are contained within an aluminium honeycomb substructure, sheathed in a pyrolytic carbon-epoxy synthetic resin composite material heat shield. Warheads are also often radiation-hardened (to protect against nuclear-tipped ABMs or the nearby detonation of friendly warheads); one neutron-resistant material developed for this purpose in the UK is three-dimensional quartz phenolic. Circular error probable is crucial, because halving the circular error probable decreases the needed warhead energy by a factor of four (a toy illustration of this scaling appears below). Accuracy is limited by the accuracy of the navigation system and the available geodetic information. Strategic missile systems are thought to use custom integrated circuits designed to calculate navigational differential equations thousands to millions of times per second in order to reduce navigational errors caused by calculation alone. These circuits are usually a network of binary addition circuits that continually recalculate the missile's position. The inputs to the navigation circuit are set by a general purpose computer according to a navigational input schedule loaded into the missile before launch. One particular weapon developed by the Soviet Union, the Fractional Orbital Bombardment System, had a partial orbital trajectory, and unlike most ICBMs its target could not be deduced from its orbital flight path. It was decommissioned in compliance with arms control agreements, which address the maximum range of ICBMs and prohibit orbital or fractional-orbital weapons. However, according to reports, Russia is working on the new Sarmat ICBM, which leverages fractional orbital bombardment concepts to use a southern polar approach instead of flying over the northern polar regions. This approach, it is theorized, avoids the US missile defense batteries in California and Alaska. A newer development in ICBM technology is the ability to carry hypersonic glide vehicles as payloads, as with the RS-28 Sarmat. Specific types of ICBMs (current, past and under development) are surveyed below. Russia, the United States, China, North Korea and India are the only countries currently known to possess land-based ICBMs; Israel has also tested ICBMs but is not open about actual deployment. The United States currently operates 405 ICBMs at three USAF bases. The only model deployed is the LGM-30G Minuteman-III. All previous USAF Minuteman II missiles were destroyed in accordance with START II, and their launch silos have been sealed or sold to the public. The powerful MIRV-capable Peacekeeper missiles were phased out in 2005. The Russian Strategic Rocket Forces have 286 ICBMs able to deliver 958 nuclear warheads: 46 silo-based R-36M2 (SS-18), 30 silo-based UR-100N (SS-19), 36 mobile RT-2PM "Topol" (SS-25), 60 silo-based RT-2UTTH "Topol M" (SS-27), 18 mobile RT-2UTTH "Topol M" (SS-27), 84 mobile RS-24 "Yars" (SS-29), and 12 silo-based RS-24 "Yars" (SS-29). China has developed several long-range ICBMs, like the DF-31. The Dongfeng 5 or DF-5 is a three-stage liquid-fuel ICBM with an estimated range of 13,000 kilometers. The DF-5 had its first flight in 1971 and was in operational service 10 years later. One of the downsides of the missile was that it took between 30 and 60 minutes to fuel. The Dong Feng 31 (a.k.a.
CSS-10) is a long-range, three-stage, solid-propellant intercontinental ballistic missile, and is a land-based variant of the submarine-launched JL-2. The DF-41 or CSS-X-10 can carry up to 10 nuclear warheads, which are MIRVs, and has a range of approximately 12,000 kilometres. The DF-41 is deployed in underground facilities in the Xinjiang, Qinghai, Gansu and Inner Mongolia areas; this secretive underground tunnel system for carrying ICBMs has been called the "Underground Great Wall Project". Israel is believed to have deployed a road-mobile nuclear ICBM, the Jericho III, which entered service in 2008. It is possible for the missile to be equipped with a single nuclear warhead or up to three MIRV warheads. It is believed to be based on the Shavit space launch vehicle, and estimates of its range vary. In November 2011 Israel tested an ICBM believed to be an upgraded version of the Jericho III. India has a series of ballistic missiles called Agni. On 19 April 2012, India successfully test-fired its first Agni-V, a three-stage solid-fueled missile, with a strike range of more than 5,000 kilometres. The missile was test-fired for the second time on 15 September 2013. On 31 January 2015, India conducted a third successful test flight of the Agni-V from the Abdul Kalam Island facility. The test used a canisterised version of the missile, mounted on a Tatra truck. An anti-ballistic missile is a missile which can be deployed to counter an incoming nuclear or non-nuclear ICBM. ICBMs can be intercepted in three regions of their trajectory: boost phase, mid-course phase or terminal phase. China, the US, Russia, France, India and Israel have developed anti-ballistic missile systems, of which the Russian A-135 anti-ballistic missile system and the US Ground-Based Midcourse Defense system have the capability to intercept ICBMs carrying nuclear, chemical, biological, or conventional warheads.
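The accuracy discussion above states that halving the circular error probable (CEP) decreases the needed warhead energy by a factor of four, i.e. the required energy grows with the square of the miss distance. A minimal sketch of that stated scaling follows; the baseline CEP value is purely illustrative.

```python
# Toy illustration of the scaling stated above: if the warhead energy needed
# to destroy a given target grows with the square of the miss distance, then
# the required yield relative to a baseline scales as (CEP / CEP_baseline)**2.
def relative_required_yield(cep, cep_baseline):
    """Required energy relative to the baseline, under the quadratic scaling."""
    return (cep / cep_baseline) ** 2

baseline_cep_m = 1000.0  # illustrative baseline CEP in metres (assumption)
for cep_m in (1000.0, 500.0, 250.0):
    factor = relative_required_yield(cep_m, baseline_cep_m)
    print(f"CEP {cep_m:>6.0f} m -> required energy x{factor:.2f} of baseline")
# Halving the CEP (1000 m -> 500 m) cuts the needed energy by a factor of 4.
```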
https://en.wikipedia.org/wiki?curid=14939
Ice Ice is water frozen into a solid state. Depending on the presence of impurities such as particles of soil or bubbles of air, it can appear transparent or a more or less opaque bluish-white color. In the Solar System, ice is abundant and occurs naturally from as close to the Sun as Mercury to as far away as the Oort cloud objects. Beyond the Solar System, it occurs as interstellar ice. It is abundant on Earth's surface, particularly in the polar regions and above the snow line, and, as a common form of precipitation and deposition, plays a key role in Earth's water cycle and climate. It falls as snowflakes and hail or occurs as frost, icicles or ice spikes, and aggregates from snow as glaciers and ice sheets. Ice molecules can exhibit eighteen or more different phases (packing geometries) that depend on temperature and pressure. When water is cooled rapidly (quenching), up to three different types of amorphous ice can form depending on the history of its pressure and temperature. When cooled slowly, correlated proton tunneling occurs at very low temperatures, giving rise to macroscopic quantum phenomena. Virtually all the ice on Earth's surface and in its atmosphere is of a hexagonal crystalline structure denoted as ice Ih (spoken as "ice one h"), with minute traces of cubic ice denoted as ice Ic. The most common phase transition to ice Ih occurs when liquid water is cooled below 0 °C (32 °F; 273.15 K) at standard atmospheric pressure. It may also be deposited directly by water vapor, as happens in the formation of frost. The transition from ice to water is melting, and from ice directly to water vapor is sublimation. Ice is used in a variety of ways, including cooling, winter sports and ice sculpture. As a naturally occurring crystalline inorganic solid with an ordered structure, ice is considered to be a mineral. It possesses a regular crystalline structure based on the molecule of water, which consists of a single oxygen atom covalently bonded to two hydrogen atoms, or H–O–H. However, many of the physical properties of water and ice are controlled by the formation of hydrogen bonds between adjacent oxygen and hydrogen atoms; while it is a weak bond, it is nonetheless critical in controlling the structure of both water and ice. An unusual property of water is that its solid form—ice frozen at atmospheric pressure—is approximately 8.3% less dense than its liquid form; this is equivalent to a volumetric expansion of 9%. The density of ice is 0.9167–0.9168 g/cm3 at 0 °C and standard atmospheric pressure (101,325 Pa), whereas water has a density of 0.9998–0.999863 g/cm3 at the same temperature and pressure. Liquid water is densest, essentially 1.00 g/cm3, at 4 °C and becomes less dense as the water molecules begin to form the hexagonal crystals of ice as the freezing point is reached. This is due to hydrogen bonding dominating the intermolecular forces, which results in a less compact packing of molecules in the solid. The density of ice increases slightly with decreasing temperature and has a value of 0.9340 g/cm3 at −180 °C (93 K). When water freezes, it increases in volume (about 9% for fresh water). The effect of expansion during freezing can be dramatic, and ice expansion is a basic cause of freeze-thaw weathering of rock in nature and damage to building foundations and roadways from frost heaving. It is also a common cause of the flooding of houses when water pipes burst due to the pressure of expanding water when it freezes.
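The density figures quoted above directly imply both the ~8.3% density deficit and the ~9% volumetric expansion on freezing; a quick check, using representative values from the density ranges given in this article:

```python
# Quick check of the figures above: the density difference between liquid
# water and ice Ih at 0 degrees C implies both the ~8.3% density deficit and
# the ~9% volumetric expansion on freezing quoted in the text.
rho_ice = 0.9167     # g/cm^3 at 0 C, standard atmospheric pressure (from text)
rho_water = 0.99984  # g/cm^3 at 0 C, mid value of the range given in the text

density_deficit = (rho_water - rho_ice) / rho_water  # how much lighter ice is
volume_expansion = rho_water / rho_ice - 1.0         # volume gain on freezing

print(f"ice is {density_deficit:.1%} less dense than water")   # ~8.3%
print(f"volume grows by {volume_expansion:.1%} on freezing")   # ~9%
```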
The result of this process is that ice (in its most common form) floats on liquid water, which is an important feature in Earth's biosphere. It has been argued that without this property, natural bodies of water would freeze, in some cases permanently, from the bottom up, resulting in a loss of bottom-dependent animal and plant life in fresh and sea water. Sufficiently thin ice sheets allow light to pass through while protecting the underside from short-term weather extremes such as wind chill. This creates a sheltered environment for bacterial and algal colonies. When sea water freezes, the ice is riddled with brine-filled channels which sustain sympagic organisms such as bacteria, algae, copepods and annelids, which in turn provide food for animals such as krill and specialised fish like the bald notothen, fed upon in turn by larger animals such as emperor penguins and minke whales. When ice melts, it absorbs as much energy as it would take to heat an equivalent mass of water by 80 °C. During the melting process, the temperature remains constant at 0 °C. While melting, any energy added breaks the hydrogen bonds between ice (water) molecules. Energy becomes available to increase the thermal energy (temperature) only after enough hydrogen bonds are broken that the ice can be considered liquid water. The amount of energy consumed in breaking hydrogen bonds in the transition from ice to water is known as the "heat of fusion". As with water, ice absorbs light at the red end of the spectrum preferentially, as the result of an overtone of an oxygen–hydrogen (O–H) bond stretch. Compared with water, this absorption is shifted toward slightly lower energies. Thus, ice appears blue, with a slightly greener tint than liquid water. Since absorption is cumulative, the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the ice. Other colors can appear in the presence of light-absorbing impurities, where the impurity is dictating the color rather than the ice itself. For instance, icebergs containing impurities (e.g., sediments, algae, air bubbles) can appear brown, grey or green. Ice may be any one of the 18 known solid crystalline phases of water, or exist in an amorphous solid state at various densities. Most liquids under increased pressure freeze at "higher" temperatures because the pressure helps to hold the molecules together. However, the strong hydrogen bonds in water make it different: at pressures higher than atmospheric, water freezes at a temperature below 0 °C, as shown in water's phase diagram. The melting of ice under high pressures is thought to contribute to the movement of glaciers. Ice, water, and water vapour can coexist at the triple point, which is exactly 273.16 K (0.01 °C) at a pressure of 611.657 Pa. The kelvin was in fact defined as 1/273.16 of the difference between this triple point and absolute zero, though this definition changed in May 2019. Unlike most other solids, ice is difficult to superheat. In an experiment, ice at −3 °C was superheated to about 17 °C for about 250 picoseconds. Subjected to higher pressures and varying temperatures, ice can form in 18 separate known crystalline phases. With care, at least 15 of these phases (one of the known exceptions being ice X) can be recovered at ambient pressure and low temperature in metastable form. The types are differentiated by their crystalline structure, proton ordering, and density.
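The statement above that melting ice absorbs as much energy as heating the same mass of water by 80 °C can be checked from standard textbook constants (assumed here; they are not given in the article): a latent heat of fusion of roughly 334 J/g and a specific heat of liquid water of roughly 4.18 J/(g·K).

```python
# Checking the heat-of-fusion equivalence stated above, using standard
# textbook values (assumptions, not taken from this article): melting 1 g of
# ice at 0 C absorbs about as much energy as warming 1 g of water by ~80 C.
LATENT_HEAT_FUSION = 334.0   # J/g, approximate latent heat of fusion of ice
SPECIFIC_HEAT_WATER = 4.18   # J/(g*K), approximate specific heat of water

equivalent_warming = LATENT_HEAT_FUSION / SPECIFIC_HEAT_WATER
print(f"melting 1 g of ice ~ heating 1 g of water by {equivalent_warming:.0f} C")
# ~80 C, matching the figure quoted in the text.
```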
There are also two metastable phases of ice under pressure, both fully hydrogen-disordered; these are IV and XII. Ice XII was discovered in 1996. In 2006, XIII and XIV were discovered. Ices XI, XIII, and XIV are hydrogen-ordered forms of ices Ih, V, and XII respectively. In 2009, ice XV was found at extremely high pressures and −143 °C. At even higher pressures, ice is predicted to become a metal; this has been variously estimated to occur at 1.55 TPa or 5.62 TPa. As well as crystalline forms, solid water can exist in amorphous states as amorphous ice (ASW) of varying densities. Water in the interstellar medium is dominated by amorphous ice, making it likely the most common form of water in the universe. Low-density ASW (LDA), also known as hyperquenched glassy water, may be responsible for noctilucent clouds on Earth and is usually formed by deposition of water vapor in cold or vacuum conditions. High-density ASW (HDA) is formed by compression of ordinary ice Ih or LDA at GPa pressures. Very-high-density ASW (VHDA) is HDA slightly warmed to 160 K under 1–2 GPa pressures. In outer space, hexagonal crystalline ice (the predominant form found on Earth) is extremely rare. Amorphous ice is more common; however, hexagonal crystalline ice can be formed by volcanic action. Ice from a theorized superionic water may possess two crystalline structures: at sufficiently high pressures, such "superionic ice" would take on a body-centered cubic structure, while at still higher pressures the structure may shift to a more stable face-centered cubic lattice. The low coefficient of friction ("slipperiness") of ice has been attributed to the pressure of an object coming into contact with the ice, melting a thin layer of the ice and allowing the object to glide across the surface. For example, the blade of an ice skate, upon exerting pressure on the ice, would melt a thin layer, providing lubrication between the ice and the blade. This explanation, called "pressure melting", originated in the 19th century. However, it did not account for skating on much colder ice, which skaters routinely do. A second theory describing the coefficient of friction of ice suggested that ice molecules at the interface cannot properly bond with the molecules of the mass of ice beneath (and thus are free to move like molecules of liquid water). These molecules remain in a semi-liquid state, providing lubrication regardless of pressure against the ice exerted by any object. However, the significance of this hypothesis is disputed by experiments showing a high coefficient of friction for ice using atomic force microscopy. A third theory is "friction heating", which suggests that friction of the material is the cause of the ice layer melting. However, this theory does not sufficiently explain why ice is slippery when standing still even at below-zero temperatures. A comprehensive theory of ice friction takes into account all the above-mentioned friction mechanisms. This model allows quantitative estimation of the friction coefficient of ice against various materials as a function of temperature and sliding speed. In typical conditions related to winter sports and tires of a vehicle on ice, melting of a thin ice layer due to frictional heating is the primary reason for the slipperiness (a rough numerical illustration follows below). The mechanism controlling the frictional properties of ice is still an active area of scientific study. The term that collectively describes all of the parts of the Earth's surface where water is in frozen form is the "cryosphere."
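As a rough numerical illustration of the frictional-heating mechanism described above, the heat dissipated at a sliding contact is simply the friction force times the sliding speed. All input numbers below (friction coefficient, skater mass, speed) are illustrative assumptions, not measured values from this article.

```python
# Rough illustration of frictional heating at an ice interface: the heat
# generated per second is the friction force times the sliding speed. All
# numbers here are illustrative assumptions, not measurements.
mu = 0.005        # assumed friction coefficient of a skate blade on ice
mass_kg = 70.0    # assumed skater mass
g = 9.81          # gravitational acceleration, m/s^2
speed = 8.0       # assumed sliding speed, m/s

friction_force = mu * mass_kg * g          # friction force, N
heating_power = friction_force * speed     # power dissipated at contact, W

# Latent heat of fusion of ice (~334 J/g, standard textbook value): how much
# ice could this dissipated power melt per second, in the ideal case?
melt_rate_g_per_s = heating_power / 334.0

print(f"friction force ~{friction_force:.2f} N, heating ~{heating_power:.1f} W")
print(f"enough to melt ~{melt_rate_g_per_s * 1000:.0f} mg of ice per second")
```

Even a few tens of watts dissipated in a contact patch of a few square centimetres corresponds to a large heat flux, which is why frictional melting of a thin layer is considered the primary lubrication mechanism in winter-sport conditions.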
Ice is an important component of the global climate, particularly in regard to the water cycle. Glaciers and snowpacks are an important storage mechanism for fresh water; over time, they may sublimate or melt. Snowmelt is an important source of seasonal fresh water. The World Meteorological Organization defines several kinds of ice depending on origin, size, shape, influence and so on. Clathrate hydrates are forms of ice that contain gas molecules trapped within its crystal lattice. Ice that is found at sea may be in the form of drift ice floating in the water, fast ice fixed to a shoreline, or anchor ice if attached to the sea bottom. Ice which calves (breaks off) from an ice shelf or glacier may become an iceberg. Sea ice can be forced together by currents and winds to form pressure ridges several metres tall. Navigation through areas of sea ice occurs in openings called "polynyas" or "leads" or requires the use of a special ship called an "icebreaker". Ice on land ranges from the largest type, called an "ice sheet", to smaller ice caps and ice fields to glaciers and ice streams to the snow line and snow fields. Aufeis is layered ice that forms in Arctic and subarctic stream valleys. Ice, frozen in the stream bed, blocks normal groundwater discharge and causes the local water table to rise, resulting in water discharge on top of the frozen layer. This water then freezes, causing the water table to rise further and repeat the cycle. The result is a stratified ice deposit, often several meters thick. Freezing rain is a type of winter storm, called an ice storm, where rain falls and then freezes, producing a glaze of ice. Ice can also form icicles, similar to stalactites in appearance, or stalagmite-like forms as water drips and re-freezes. The term "ice dam" has three meanings (others discussed below). On structures, an ice dam is the buildup of ice on a sloped roof which stops melt water from draining properly and can cause damage from water leaks in buildings. Ice which forms on moving water tends to be less uniform and stable than ice which forms on calm water. Ice jams (sometimes called "ice dams"), when broken chunks of ice pile up, are the greatest ice hazard on rivers. Ice jams can cause flooding, damage structures in or near the river, and damage vessels on the river. Ice jams can cause some hydropower industrial facilities to completely shut down. An ice dam is also a blockage from the movement of a glacier which may produce a proglacial lake. Heavy ice flows in rivers can also damage vessels and require the use of an icebreaker to keep navigation possible. Ice discs are circular formations of ice surrounded by water in a river. Pancake ice is a formation of ice generally created in areas with less calm conditions. Ice forms on calm water from the shores, a thin layer spreading across the surface, and then downward. Ice on lakes is generally of four types: primary, secondary, superimposed and agglomerate. Primary ice forms first. Secondary ice forms below the primary ice in a direction parallel to the direction of the heat flow. Superimposed ice forms on top of the ice surface from rain or from water which seeps up through cracks in the ice, and often settles when loaded with snow. Shelf ice occurs when floating pieces of ice are driven by the wind, piling up on the windward shore. Candle ice is a form of rotten ice that develops in columns perpendicular to the surface of a lake. Rime is a type of ice formed on cold objects when drops of water crystallize on them.
This can be observed in foggy weather, when the temperature drops during the night. Soft rime contains a high proportion of trapped air, making it appear white rather than transparent, and giving it a density about one quarter that of pure ice. Hard rime is comparatively dense. Ice pellets are a form of precipitation consisting of small, translucent balls of ice. This form of precipitation is also referred to as "sleet" by the United States National Weather Service. (In British English "sleet" refers to a mixture of rain and snow.) Ice pellets are usually smaller than hailstones. They often bounce when they hit the ground, and generally do not freeze into a solid mass unless mixed with freezing rain. The METAR code for ice pellets is "PL". Ice pellets form when a layer of above-freezing air is located at some height above the ground, with sub-freezing air both above and below it. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too shallow, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface; a toy sketch of this decision logic is given below. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season, but can occasionally be found behind a passing cold front. Like other precipitation, hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted up again. Hail has a diameter of 5 millimetres or more. Within METAR code, GR is used to indicate larger hail and GS to indicate smaller hail. Stones just larger than golf balls are among the most frequently reported hail sizes. Hailstones can grow to considerable size and weight. In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo "wet growth", where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud. Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing. Hail-producing clouds are often identifiable by their green coloration. The growth rate is maximized at temperatures somewhat below freezing, and becomes vanishingly small at much colder temperatures as supercooled water droplets become rare. For this reason, hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is at a relatively low altitude. Entrainment of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling, which lowers the freezing level of thunderstorm clouds, giving hail a larger volume to grow in. Accordingly, hail is actually less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes, because the atmosphere over the tropics tends to be warmer over a much greater depth. Hail in the tropics occurs mainly at higher elevations.
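The melting-and-refreezing logic that separates ice pellets from freezing rain, described earlier in this section, can be summarized as a toy decision function. This is only an illustration of the reasoning; the depth threshold is a made-up placeholder, not an operational forecasting value.

    def precipitation_type(warm_layer_aloft, refreeze_layer_depth_m):
        """Toy classifier for the melting/refreezing logic described above.
        The 500 m threshold is an illustrative placeholder only."""
        if not warm_layer_aloft:
            return "snow"            # flakes never melt on the way down
        if refreeze_layer_depth_m > 500:
            return "ice pellets"     # melted drops refreeze before landing
        return "freezing rain"       # drops stay liquid and freeze on contact

    print(precipitation_type(True, 1200))   # ice pellets
    print(precipitation_type(True, 200))    # freezing rain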
Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. These droplets are able to remain liquid at temperatures well below the freezing point, because to freeze, a few molecules in the droplet need to come together by chance in an arrangement similar to that in an ice lattice; the droplet then freezes around this "nucleus". Experiments show that this "homogeneous" nucleation of cloud droplets only occurs at temperatures lower than about −35 °C. In warmer clouds an aerosol particle or "ice nucleus" must be present in (or in contact with) the droplet to act as a nucleus. Understanding of which particles make efficient ice nuclei remains poor; they are known to be very rare compared with the cloud condensation nuclei on which liquid droplets form. Clays, desert dust and biological particles may be effective, although to what extent is unclear. Artificial nuclei are used in cloud seeding. The droplet then grows by condensation of water vapor onto the ice surfaces. So-called "diamond dust", also known as ice needles or ice crystals, forms at very low temperatures when air with slightly higher moisture from aloft mixes with colder, surface-based air. The METAR identifier for diamond dust within international hourly weather reports is "IC". Ablation of ice refers to both its melting and its dissolution. In fresh water, melting describes a phase transition from solid to liquid. Melting ice means breaking the hydrogen bonds between the water molecules. The ordering of the molecules in the solid breaks down to a less ordered state and the solid melts to become a liquid. This is achieved by increasing the internal energy of the ice beyond the melting point. When ice melts it absorbs as much energy as would be required to heat an equivalent amount of water by 80 °C. While melting, the temperature of the ice surface remains constant at 0 °C. The velocity of the melting process depends on the efficiency of the energy exchange process. An ice surface in fresh water melts solely by free convection, with a velocity that depends linearly on the water temperature "T"∞ when "T"∞ is less than 3.98 °C, and superlinearly when "T"∞ is equal to or greater than 3.98 °C, the rate being proportional to ("T"∞ − 3.98 °C)"α", with an exponent "α" that takes one constant value for "T"∞ much greater than 8 °C and another for intermediate temperatures. In salty ambient conditions, dissolution rather than melting often causes the ablation of ice. For example, the temperature of the Arctic Ocean is generally below the melting point of ablating sea ice. The phase transition from solid to liquid is achieved by mixing salt and water molecules, similar to the dissolution of sugar in water, even though the water temperature is far below the melting point of the sugar. Dissolution is thus rate-limited by salt transport, whereas melting can occur at the much higher rates that are characteristic of heat transport.
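The 80 °C equivalence cited above follows directly from standard handbook values for the latent heat of fusion of ice and the specific heat capacity of liquid water:

    \Delta T = \frac{L_f}{c_p} \approx \frac{334\ \mathrm{kJ/kg}}{4.18\ \mathrm{kJ/(kg\,K)}} \approx 80\ \mathrm{K}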
Humans have used ice for cooling and food preservation for centuries, relying at first on harvesting natural ice in various forms and later transitioning to the mechanical production of the material. Ice also presents a challenge to transportation in various forms and a setting for winter sports. Ice has long been valued as a means of cooling. In 400 BC Iran, Persian engineers had already mastered the technique of storing ice in the middle of summer in the desert. The ice was brought in during the winters from nearby mountains in bulk amounts, and stored in specially designed, naturally cooled "refrigerators" called yakhchal (meaning "ice storage"). A yakhchal was a large underground space (up to 5000 m3) with thick walls (at least two meters at the base) made of a special mortar called "sarooj", composed of sand, clay, egg whites, lime, goat hair, and ash in specific proportions, which was known to be resistant to heat transfer. This mixture was thought to be completely impenetrable to water. The space often had access to a qanat, and often contained a system of windcatchers that could easily bring temperatures inside the space down to frigid levels on summer days. The ice was used to chill treats for royalty. There were thriving industries in 16th–17th century England whereby low-lying areas along the Thames Estuary were flooded during the winter, and the ice harvested in carts and stored inter-seasonally in insulated wooden houses as a provision for icehouses, often located in large country houses, and widely used to keep fish fresh when caught in distant waters. This was allegedly copied by an Englishman who had seen the same activity in China. Ice was imported into England from Norway on a considerable scale as early as 1823. In the United States, the first cargo of ice was sent from New York City to Charleston, South Carolina, in 1799, and by the first half of the 19th century, ice harvesting had become big business. Frederic Tudor, who became known as the "Ice King", worked on developing better insulation products for the long-distance shipment of ice, especially to the tropics; this became known as the ice trade. Trieste sent ice to Egypt, Corfu, and Zante; Switzerland sent it to France; and Germany was sometimes supplied from Bavarian lakes. The Hungarian Parliament building used ice harvested in the winter from Lake Balaton for air conditioning. Ice houses were used to store ice formed in the winter, to make ice available all year long, and early refrigerators were known as iceboxes because they held a block of ice. In many cities it was not unusual to have a regular ice delivery service during the summer. The advent of artificial refrigeration technology has since made the delivery of ice obsolete. Ice is still harvested for ice and snow sculpture events. For example, a swing saw is used to cut ice for the Harbin International Ice and Snow Sculpture Festival each year from the frozen surface of the Songhua River. Ice is now produced on an industrial scale, for uses including food storage and processing, chemical manufacturing, concrete mixing and curing, and consumer or packaged ice. Most commercial icemakers produce three basic types of fragmentary ice: flake, tubular and plate, using a variety of techniques. Large batch ice makers can produce up to 75 tons of ice per day. In 2002, there were 426 commercial ice-making companies in the United States, with a combined value of shipments of $595,487,000. Home refrigerators can also make ice with a built-in icemaker, which will typically make ice cubes or crushed ice. Stand-alone icemaker units that make ice cubes are often called ice machines. Ice can present challenges to safe transportation on land, sea and in the air. Ice forming on roads is a dangerous winter hazard. Black ice is very difficult to see because it lacks the expected frosty surface.
Whenever freezing rain or snow occurs at a temperature near the melting point, it is common for ice to build up on the windows of vehicles. Driving safely requires the removal of this ice build-up. Ice scrapers are tools designed to break the ice free and clear the windows, though removing the ice can be a long and laborious process. Far enough below the freezing point, a thin layer of ice crystals can form on the inside surface of windows. This usually happens when a vehicle has been left alone after being driven for a while, but can happen while driving if the outside temperature is low enough. Moisture from the driver's breath is the source of water for the crystals. It is troublesome to remove this form of ice, so people often open their windows slightly when the vehicle is parked in order to let the moisture dissipate, and it is now common for cars to have rear-window defrosters to address the problem. A similar problem can occur in homes, which is one reason why many colder regions require double-pane windows for insulation. When the outdoor temperature stays below freezing for extended periods, very thick layers of ice can form on lakes and other bodies of water, although places with flowing water require much colder temperatures. The ice can become thick enough to drive onto with automobiles and trucks; doing this safely requires a thickness of at least 30 cm (one foot). For ships, ice presents two distinct hazards. Spray and freezing rain can produce an ice build-up on the superstructure of a vessel sufficient to make it unstable and to require it to be hacked off or melted with steam hoses. And icebergs – large masses of ice floating in water (typically created when glaciers reach the sea) – can be dangerous if struck by a ship when underway. Icebergs have been responsible for the sinking of many ships, the most famous being the "Titanic". For harbors near the poles, being ice-free, ideally all year long, is an important advantage. Examples are Murmansk (Russia), Petsamo (Russia, formerly Finland) and Vardø (Norway). Harbors which are not ice-free are opened up using icebreakers. For aircraft, ice can cause a number of dangers. As an aircraft climbs, it passes through air layers of different temperature and humidity, some of which may be conducive to ice formation. If ice forms on the wings or control surfaces, this may adversely affect the flying qualities of the aircraft. During the first non-stop flight across the Atlantic, the British aviators Captain John Alcock and Lieutenant Arthur Whitten Brown encountered such icing conditions – Brown left the cockpit and climbed onto the wing several times to remove ice which was covering the engine air intakes of the Vickers Vimy aircraft they were flying. One vulnerability affected by icing that is associated with reciprocating internal combustion engines is the carburetor. As air is sucked through the carburetor into the engine, the local air pressure is lowered, which causes adiabatic cooling. Thus, in humid near-freezing conditions, the carburetor will be colder and will tend to ice up. This will block the supply of air to the engine and cause it to fail. For this reason, aircraft reciprocating engines with carburetors are provided with carburetor air intake heaters. The increasing use of fuel injection, which does not require carburetors, has made "carb icing" less of an issue for reciprocating engines.
Jet engines do not experience carb icing, but recent evidence indicates that they can be slowed, stopped, or damaged by internal icing in certain types of atmospheric conditions much more easily than previously believed. In most cases, the engines can be quickly restarted and flights are not endangered, but research continues to determine the exact conditions which produce this type of icing, and to find the best methods to prevent or reverse it in flight. Ice also plays a central role in winter recreation and in many sports such as ice skating, tour skating, ice hockey, bandy, ice fishing, ice climbing, curling, broomball and sled racing on bobsled, luge and skeleton. Many of the different sports played on ice get international attention every four years during the Winter Olympic Games. A sort of sailboat on blades gives rise to ice yachting. Another sport is ice racing, where drivers must speed on lake ice while also controlling the skid of their vehicle (similar in some ways to dirt track racing). The sport has even been modified for ice rinks. The solid phases of several other volatile substances are also referred to as "ices"; generally a volatile is classed as an ice if its melting point lies above or around 100 K. The best-known example is dry ice, the solid form of carbon dioxide. A "magnetic analogue" of ice is also realized in some insulating magnetic materials in which the magnetic moments mimic the positions of protons in water ice and obey energetic constraints similar to the Bernal–Fowler ice rules, arising from the geometrical frustration of the proton configuration in water ice. These materials are called spin ice.
https://en.wikipedia.org/wiki?curid=14946
Ionic bonding Ionic bonding is a type of chemical bonding that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds. It is one of the main types of bonding, along with covalent bonding and metallic bonding. Ions are atoms (or groups of atoms) with a net electric charge. Atoms that gain electrons form negatively charged ions (called anions). Atoms that lose electrons form positively charged ions (called cations). This transfer of electrons is known as electrovalence, in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complex nature, e.g. molecular ions such as ammonium or sulfate. In simpler words, an ionic bond results from the transfer of electrons from a metal to a non-metal in order to obtain a full valence shell for both atoms. It is important to recognize that "clean" ionic bonding — in which one atom or molecule completely transfers an electron to another — cannot exist: all ionic compounds have some degree of covalent bonding, or electron sharing. Thus, the term "ionic bonding" is used when the ionic character is greater than the covalent character – that is, for a bond in which a large electronegativity difference exists between the two atoms, making the bonding more polar (ionic) than covalent bonding, where electrons are shared more equally. Bonds with partially ionic and partially covalent character are called polar covalent bonds. Ionic compounds conduct electricity when molten or in solution, but typically not when solid. Ionic compounds generally have a high melting point, depending on the charges of the ions they consist of: the higher the charges, the stronger the cohesive forces and the higher the melting point. They also tend to be soluble in water; the stronger the cohesive forces, the lower the solubility. Atoms that have an almost full or almost empty valence shell tend to be very reactive. Atoms that are strongly electronegative (as is the case with halogens) often have only one or two empty orbitals in their valence shell, and frequently bond with other molecules or gain electrons to form anions. Atoms that are weakly electronegative (such as alkali metals) have relatively few valence electrons, which can easily be shared with atoms that are strongly electronegative. As a result, weakly electronegative atoms tend to give up their electrons, forming cations. Ionic bonding can result from a redox reaction when atoms of an element (usually a metal), whose ionization energy is low, give some of their electrons to achieve a stable electron configuration. In doing so, cations are formed. An atom of another element (usually a nonmetal) with greater electron affinity accepts the electron(s) to attain a stable electron configuration, and after accepting the electron(s) the atom becomes an anion. Typically, the stable electron configuration is one of the noble gases for elements in the s-block and the p-block, and particular stable electron configurations for d-block and f-block elements. The electrostatic attraction between the anions and cations leads to the formation of a solid with a crystallographic lattice in which the ions are stacked in an alternating fashion. In such a lattice, it is usually not possible to distinguish discrete molecular units, so the compounds formed are not molecular in nature. However, the ions themselves can be complex and form molecular ions like the acetate anion or the ammonium cation.
For example, common table salt is sodium chloride. When sodium (Na) and chlorine (Cl) are combined, the sodium atoms each lose an electron, forming cations (Na+), and the chlorine atoms each gain an electron to form anions (Cl−). These ions are then attracted to each other in a 1:1 ratio to form sodium chloride (NaCl). To maintain charge neutrality, strict ratios between anions and cations are observed, so that ionic compounds, in general, obey the rules of stoichiometry despite not being molecular compounds. For compounds that are transitional to the alloys and possess mixed ionic and metallic bonding, this may no longer be the case; many sulfides, for example, form non-stoichiometric compounds. Many ionic compounds are referred to as salts, as they can also be formed by the neutralization reaction of an Arrhenius base like NaOH with an Arrhenius acid like HCl. The salt NaCl is then said to consist of the acid rest Cl− and the base rest Na+. The removal of electrons to form the cation is endothermic, raising the system's overall energy. There may also be energy changes associated with breaking of existing bonds or the addition of more than one electron to form anions. However, the action of the anion's accepting the cation's valence electrons and the subsequent attraction of the ions to each other releases (lattice) energy and, thus, lowers the overall energy of the system. Ionic bonding will occur only if the overall energy change for the reaction is favorable. In general, the reaction is exothermic, but, e.g., the formation of mercuric oxide (HgO) is endothermic. The charge of the resulting ions is a major factor in the strength of ionic bonding; e.g. a salt C+A− is held together by electrostatic forces roughly four times weaker than C2+A2−, according to Coulomb's law, where C and A represent a generic cation and anion respectively. The sizes of the ions and the particular packing of the lattice are ignored in this rather simplistic argument. Ionic compounds in the solid state form lattice structures. The two principal factors in determining the form of the lattice are the relative charges of the ions and their relative sizes. Some structures are adopted by a number of compounds; for example, the rock salt structure of sodium chloride is also adopted by many alkali halides and by binary oxides such as magnesium oxide. Pauling's rules provide guidelines for predicting and rationalizing the crystal structures of ionic crystals. For a solid crystalline ionic compound, the enthalpy change in forming the solid from gaseous ions is termed the lattice energy. The experimental value for the lattice energy can be determined using the Born–Haber cycle. It can also be calculated (predicted) using the Born–Landé equation as the sum of the electrostatic potential energy, calculated by summing interactions between cations and anions, and a short-range repulsive potential energy term. The electrostatic potential can be expressed in terms of the interionic separation and a constant (the Madelung constant) that takes account of the geometry of the crystal. The Born–Landé equation gives a reasonable fit to the lattice energy of, e.g., sodium chloride, where the calculated (predicted) value is −756 kJ/mol, which compares to −787 kJ/mol using the Born–Haber cycle.
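As a sketch of the Born–Landé calculation: the Madelung constant, interionic separation and Born exponent below are commonly tabulated values for NaCl, assumed here rather than taken from this text. With them, the equation reproduces the quoted −756 kJ/mol, and (holding the geometry fixed) it also illustrates the roughly fourfold charge effect noted above.

    import math

    N_A  = 6.02214076e23     # Avogadro constant, 1/mol
    e    = 1.602176634e-19   # elementary charge, C
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    M    = 1.7476            # Madelung constant, rock-salt structure
    r0   = 281e-12           # Na+ to Cl- separation, m (tabulated value)
    n    = 8                 # Born exponent commonly used for NaCl

    def born_lande(z_plus, z_minus):
        """Lattice energy in kJ/mol from the Born-Lande equation."""
        coulomb = N_A * M * abs(z_plus * z_minus) * e**2 / (4 * math.pi * eps0 * r0)
        return -coulomb * (1 - 1 / n) / 1000.0

    print(born_lande(1, -1))                      # ~ -756 kJ/mol for NaCl
    print(born_lande(2, -2) / born_lande(1, -1))  # = 4.0: the charge effect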
In aqueous solution the binding strength can be described by the Bjerrum or Fuoss equation as a function of the ion charges, largely independent of the nature of the ions, such as their polarizability or size. The strength of salt bridges is most often evaluated by measurements of equilibria between molecules containing cationic and anionic sites, most often in solution. Equilibrium constants in water indicate additive free energy contributions for each salt bridge. Another method for the identification of salt bridges, including in complicated molecules, is crystallography, and sometimes also NMR spectroscopy. The attractive forces defining the strength of ionic bonding can be modelled by Coulomb's law. Ionic bond strengths are typically (cited ranges vary) between 170 and 1500 kJ/mol. Ions in crystal lattices of purely ionic compounds are spherical; however, if the positive ion is small and/or highly charged, it will distort the electron cloud of the negative ion, an effect summarised in Fajans' rules. This polarization of the negative ion leads to a build-up of extra charge density between the two nuclei, that is, to partial covalency. Larger negative ions are more easily polarized, but the effect is usually important only when positive ions with charges of 3+ (e.g., Al3+) are involved. However, 2+ ions (Be2+) or even 1+ ions (Li+) show some polarizing power because their sizes are so small (e.g., LiI is ionic but has some covalent bonding present). Note that this is not the ionic polarization effect, which refers to displacement of ions in the lattice due to the application of an electric field. In ionic bonding, the atoms are bound by attraction of oppositely charged ions, whereas, in covalent bonding, atoms are bound by sharing electrons to attain stable electron configurations. In covalent bonding, the molecular geometry around each atom is determined by valence shell electron pair repulsion (VSEPR) rules, whereas, in ionic materials, the geometry follows maximum packing rules. One could say that covalent bonding is more "directional" in the sense that the energy penalty for not adhering to the optimum bond angles is large, whereas ionic bonding has no such penalty. Because there are no shared electron pairs to repel each other, the ions are simply packed as efficiently as possible. This often leads to much higher coordination numbers. In NaCl, each ion has 6 bonds and all bond angles are 90°. In CsCl the coordination number is 8. By comparison, carbon typically has a maximum of four bonds. Purely ionic bonding cannot exist, as the proximity of the entities involved in the bonding allows some degree of sharing of electron density between them. Therefore, all ionic bonding has some covalent character. Thus, bonding is considered ionic where the ionic character is greater than the covalent character. The larger the difference in electronegativity between the two types of atoms involved in the bonding, the more ionic (polar) it is. Bonds with partially ionic and partially covalent character are called polar covalent bonds. For example, Na–Cl and Mg–O interactions have a few percent covalency, while Si–O bonds are usually ~50% ionic and ~50% covalent. Pauling estimated that an electronegativity difference of 1.7 (on the Pauling scale) corresponds to 50% ionic character, so that a difference greater than 1.7 corresponds to a bond which is predominantly ionic.
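Pauling's crossover can be made concrete with his empirical relation between electronegativity difference and fractional ionic character, 1 − exp(−Δχ²/4), from "The Nature of the Chemical Bond". The electronegativity values below are standard tabulated Pauling values; note that this formula is only a rough scale and can disagree with other estimates, such as the few-percent covalency quoted above for Na–Cl.

    import math

    def ionic_character(delta_chi):
        """Pauling's empirical fraction of ionic character."""
        return 1 - math.exp(-delta_chi**2 / 4)

    print(ionic_character(1.7))           # ~ 0.51: the ~50% crossover
    print(ionic_character(3.16 - 0.93))   # Na-Cl: ~ 0.71 ionic
    print(ionic_character(3.44 - 1.90))   # Si-O:  ~ 0.45, roughly half ionic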
Ionic character in covalent bonds can be directly measured for atoms having quadrupolar nuclei (2H, 14N, 81,79Br, 35,37Cl or 127I). These nuclei are generally the objects of nuclear quadrupole resonance (NQR) and nuclear magnetic resonance (NMR) studies. Interactions between the nuclear quadrupole moments "Q" and the electric field gradients (EFG) are characterized via the nuclear quadrupole coupling constant, QCC = e²q_zz"Q"/h, where the "eq"_zz term corresponds to the principal component of the EFG tensor and "e" is the elementary charge. In turn, the electric field gradient opens the way to the description of bonding modes in molecules when the QCC values are accurately determined by NMR or NQR methods. In general, when ionic bonding occurs in the solid (or liquid) state, it is not possible to talk about a single "ionic bond" between two individual atoms, because the cohesive forces that keep the lattice together are of a more collective nature. This is quite different from the case of covalent bonding, where we can often speak of a distinct bond localized between two particular atoms. However, even if ionic bonding is combined with some covalency, the result is not necessarily discrete bonds of a localized character. In such cases, the resulting bonding often requires description in terms of a band structure consisting of giant molecular orbitals spanning the entire crystal. Thus, the bonding in the solid often retains its collective rather than localized nature. When the difference in electronegativity is decreased, the bonding may then lead to a semiconductor, a semimetal or eventually a metallic conductor with metallic bonding.
https://en.wikipedia.org/wiki?curid=14951
Immune system The immune system is a host defense system comprising many biological structures and processes within an organism that protects against disease. To function properly, an immune system must detect a wide variety of agents, known as pathogens, from viruses to parasitic worms, and distinguish them from the organism's own healthy tissue. In many species, there are two major subsystems of the immune system: the innate immune system and the adaptive immune system. Both subsystems use humoral immunity and cell-mediated immunity to perform their functions. In humans, the blood–brain barrier, blood–cerebrospinal fluid barrier, and similar fluid–brain barriers separate the peripheral immune system from the neuroimmune system, which protects the brain. Pathogens can rapidly evolve and adapt, and thereby avoid detection and neutralization by the immune system; however, multiple defense mechanisms have also evolved to recognize and neutralize pathogens. Even simple unicellular organisms such as bacteria possess a rudimentary immune system in the form of enzymes that protect against bacteriophage infections. Other basic immune mechanisms evolved in ancient eukaryotes and remain in their modern descendants, such as plants and invertebrates. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms, including the ability to adapt over time to recognize specific pathogens more efficiently. Adaptive (or acquired) immunity creates immunological memory after an initial response to a specific pathogen, leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination. Disorders of the immune system can result in autoimmune diseases, inflammatory diseases and cancer. Immunodeficiency occurs when the immune system is less active than normal, resulting in recurring and life-threatening infections. In humans, immunodeficiency can either be the result of a genetic disease such as severe combined immunodeficiency, acquired conditions such as HIV/AIDS, or the use of immunosuppressive medication. In contrast, autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign organisms. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Immunology covers the study of all aspects of the immune system. The immune system protects its host from infection with layered defenses of increasing specificity. In simple terms, physical barriers prevent pathogens such as bacteria and viruses from entering the organism. If a pathogen breaches these barriers, the innate immune system provides an immediate, but non-specific response. Innate immune systems are found in all plants and animals. If pathogens successfully evade the innate response, vertebrates possess a second layer of protection, the adaptive immune system, which is activated by the innate response. Here, the immune system adapts its response during an infection to improve its recognition of the pathogen. This improved response is then retained after the pathogen has been eliminated, in the form of an immunological memory, and allows the adaptive immune system to mount faster and stronger attacks each time this pathogen is encountered. 
Both innate and adaptive immunity depend on the ability of the immune system to distinguish between self and non-self molecules. In immunology, "self" molecules are those components of an organism's body that can be distinguished from foreign substances by the immune system. Conversely, "non-self" molecules are those recognized as foreign. One class of non-self molecules are called antigens (short for antibody generators) and are defined as substances that bind to specific immune receptors and elicit an immune response. Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly through the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother. Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies. This is passive immunity because the fetus does not actually make any memory cells or antibodies—it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another via antibody-rich serum. Microorganisms or toxins that successfully enter an organism encounter the cells and mechanisms of the innate immune system. The innate response is usually triggered when microbes are identified by pattern recognition receptors, which recognize components that are conserved among broad groups of microorganisms, or when damaged, injured or stressed cells send out alarm signals, many of which (but not all) are recognized by the same receptors as those that recognize pathogens. Innate immune defenses are non-specific, meaning these systems respond to pathogens in a generic way. This system does not confer long-lasting immunity against a pathogen. The innate immune system is the dominant system of host defense in most organisms. Cells in the innate immune system use pattern recognition receptors (PRRs) to recognize molecular structures that are produced by microbial pathogens. PRRs are germline-encoded host sensors, which detect molecules typical of pathogens. They are proteins expressed mainly by cells of the innate immune system, such as dendritic cells, macrophages, monocytes, neutrophils and epithelial cells, and they identify two classes of molecules: pathogen-associated molecular patterns (PAMPs), which are associated with microbial pathogens, and damage-associated molecular patterns (DAMPs), which are associated with components of the host's cells that are released during cell damage or death. Recognition of extracellular or endosomal PAMPs is mediated by transmembrane proteins known as toll-like receptors (TLRs). TLRs share a typical structural motif, the leucine-rich repeats (LRRs), which give them their specific appearance and are also responsible for TLR functionality. Toll-like receptors were first discovered in "Drosophila"; they trigger the synthesis and secretion of cytokines and the activation of other host defense programs that are necessary for both innate and adaptive immune responses. To date, ten functional members of the TLR family have been described in humans.
Cells in the innate immune system have pattern recognition receptors that detect infection or cell damage in the cytosol. Three major classes of these cytosolic receptors are NOD-like receptors, RIG (retinoic acid-inducible gene)-like receptors, and cytosolic DNA sensors. Inflammasomes are multiprotein complexes (consisting of an NLR, the adaptor protein ASC, and the effector molecule pro-caspase-1) that form in response to cytosolic PAMPs and DAMPs, and whose function is to generate active forms of the inflammatory cytokines IL-1β and IL-18. Several barriers protect organisms from infection, including mechanical, chemical, and biological barriers. The waxy cuticle of most leaves, the exoskeleton of insects, the shells and membranes of externally deposited eggs, and skin are examples of mechanical barriers that are the first line of defense against infection. However, as organisms cannot be completely sealed from their environments, other systems act to protect body openings such as the lungs, intestines, and the genitourinary tract. In the lungs, coughing and sneezing mechanically eject pathogens and other irritants from the respiratory tract. The flushing action of tears and urine also mechanically expels pathogens, while mucus secreted by the respiratory and gastrointestinal tracts serves to trap and entangle microorganisms. Chemical barriers also protect against infection. The skin and respiratory tract secrete antimicrobial peptides such as the β-defensins. Enzymes such as lysozyme and phospholipase A2 in saliva, tears, and breast milk are also antibacterials. Vaginal secretions serve as a chemical barrier following menarche, when they become slightly acidic, while semen contains defensins and zinc to kill pathogens. In the stomach, gastric acid serves as a powerful chemical defense against ingested pathogens. Within the genitourinary and gastrointestinal tracts, commensal flora serve as biological barriers by competing with pathogenic bacteria for food and space and, in some cases, by changing the conditions in their environment, such as pH or available iron. As a result of the symbiotic relationship between commensals and the immune system, the probability that pathogens will reach sufficient numbers to cause illness is reduced. However, since most antibiotics non-specifically target bacteria and do not affect fungi, oral antibiotics can lead to an "overgrowth" of fungi and cause conditions such as vaginal candidiasis (a yeast infection). There is good evidence that re-introduction of probiotic flora, such as pure cultures of the lactobacilli normally found in unpasteurized yogurt, helps restore a healthy balance of microbial populations in intestinal infections in children, and there are encouraging preliminary data in studies on bacterial gastroenteritis, inflammatory bowel diseases, urinary tract infection and post-surgical infections. Leukocytes (white blood cells) act like independent, single-celled organisms and are the second arm of the innate immune system. The innate leukocytes include the phagocytes (macrophages, neutrophils, and dendritic cells), innate lymphoid cells, mast cells, eosinophils, basophils, and natural killer cells. These cells identify and eliminate pathogens, either by attacking larger pathogens through contact or by engulfing and then killing microorganisms. Innate cells are also important mediators in lymphoid organ development and the activation of the adaptive immune system.
Phagocytosis is an important feature of cellular innate immunity, performed by cells called phagocytes that engulf, or eat, pathogens or particles. Phagocytes generally patrol the body searching for pathogens, but can be called to specific locations by cytokines. Once a pathogen has been engulfed by a phagocyte, it becomes trapped in an intracellular vesicle called a phagosome, which subsequently fuses with another vesicle called a lysosome to form a phagolysosome. The pathogen is killed by the activity of digestive enzymes or following a respiratory burst that releases free radicals into the phagolysosome. Phagocytosis evolved as a means of acquiring nutrients, but this role was extended in phagocytes to include engulfment of pathogens as a defense mechanism. Phagocytosis probably represents the oldest form of host defense, as phagocytes have been identified in both vertebrate and invertebrate animals. Neutrophils and macrophages are phagocytes that travel throughout the body in pursuit of invading pathogens. Neutrophils are normally found in the bloodstream and are the most abundant type of phagocyte, normally representing 50% to 60% of the total circulating leukocytes, and consisting of neutrophil-killer and neutrophil-cager subpopulations. During the acute phase of inflammation, particularly as a result of bacterial infection, neutrophils migrate toward the site of inflammation in a process called chemotaxis, and are usually the first cells to arrive at the scene of infection. Macrophages are versatile cells that reside within tissues and produce a wide array of chemicals including enzymes, complement proteins, and cytokines; they can also act as scavengers that rid the body of worn-out cells and other debris, and as antigen-presenting cells that activate the adaptive immune system. Dendritic cells (DCs) are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, as both have many spine-like projections, but dendritic cells are in no way connected to the nervous system. Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system. Granulocytes are leukocytes that have granules in their cytoplasm. In this category are neutrophils, mast cells, basophils, and eosinophils. Mast cells reside in connective tissues and mucous membranes, and regulate the inflammatory response. They are most often associated with allergy and anaphylaxis. Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma. Innate lymphoid cells (ILCs) are a group of innate immune cells that are derived from the common lymphoid progenitor (CLP) and belong to the lymphoid lineage. These cells are defined by the absence of an antigen-specific B or T cell receptor, owing to the lack of the recombination activating gene (RAG). ILCs do not express myeloid or dendritic cell markers. Natural killer (NK) cells, one member of the ILC family, are lymphocytes and a component of the innate immune system which do not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self."
This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex)—a situation that can arise in viral infections of host cells. They were named "natural killer" because of the initial notion that they do not require activation in order to kill cells that are "missing self". For many years it was unclear how NK cells recognize tumor cells and infected cells. It is now known that the MHC makeup on the surface of those cells is altered and the NK cells become activated through recognition of "missing self". Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin-like receptors (KIRs), which essentially put the brakes on NK cells. Inflammation is one of the first responses of the immune system to infection. The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens. The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response. Many species have complement systems, including non-mammals like plants, fish, and some invertebrates. In humans, this response is activated by complement binding to antibodies that have attached to these microbes, or by the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response. The speed of the response is a result of the signal amplification that occurs after sequential proteolytic activation of complement molecules, which are themselves proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback. The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane.
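The speed gained from sequential proteolytic activation can be illustrated with a toy calculation (not a physiological model): if each activated protease activates on the order of k downstream complement molecules, then n tiers of the cascade multiply one initial binding event roughly k to the power n.

    def cascade_amplification(k, tiers):
        """Toy estimate: each activated protease activates k molecules in
        the next tier; k and tiers are illustrative numbers only."""
        return k ** tiers

    print(cascade_amplification(10, 4))  # one binding event -> ~10,000 active molecules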
The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen. The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it. The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow. B cells are involved in the humoral immune response, whereas T cells are involved in the cell-mediated immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype are the γδ T cells, which recognize intact antigens that are not bound to MHC receptors. The double-positive T cells are exposed to a wide variety of self-antigens in the thymus, for whose development and activity iodine is necessary. In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface that recognizes whole pathogens without any need for antigen processing. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represents all the antibodies that the body can manufacture. Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule. There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition there are regulatory T cells, which have a role in modulating the immune response. Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen. Killer T cells are activated when their T-cell receptor (TCR) binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of another cytotoxic molecule, granulysin, then induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below). Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen. These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks.
Helper T cells express T cell receptors (TCR) that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (e.g., Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen in order to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells. Gamma delta T cells (γδ T cells) possess an alternative T-cell receptor (TCR) as opposed to CD4+ and CD8+ (αβ) T cells and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted natural killer T cells, γδ T cells straddle the border between innate and adaptive immunity. On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells. A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen. This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell. As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells. When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. 
This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory. The immune system is involved in many aspects of physiological regulation in the body. The immune system interacts intimately with other systems, such as the endocrine and nervous systems. The immune system also plays a crucial role in embryogenesis (development of the embryo), as well as in tissue repair and regeneration. Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D. When a T-cell encounters a foreign pathogen, it extends a vitamin D receptor. This is essentially a signaling device that allows the T-cell to bind to the active form of vitamin D, the steroid hormone calcitriol. T-cells have a symbiotic relationship with vitamin D: not only does the T-cell extend a vitamin D receptor, in essence asking to bind to calcitriol, but the T-cell also expresses the gene CYP27B1, which is responsible for converting the pre-hormone version of vitamin D, calcidiol, into the steroid hormone version, calcitriol. Only after binding to calcitriol can T-cells perform their intended function. Other immune system cells that are known to express CYP27B1, and thus to activate vitamin D calcidiol, are dendritic cells, keratinocytes and macrophages. It is conjectured that a progressive decline in hormone levels with age is partially responsible for weakened immune responses in aging individuals. Conversely, some hormones are regulated by the immune system, notably thyroid hormone activity. The age-related decline in immune function is also related to decreasing vitamin D levels in the elderly. As people age, two things happen that negatively affect their vitamin D levels. First, they stay indoors more due to decreased activity levels. This means that they get less sun and therefore produce less cholecalciferol via UVB radiation. Second, as a person ages the skin becomes less adept at producing vitamin D. The immune system is affected by sleep and rest, and sleep deprivation is detrimental to immune function. Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (NREM) sleep. Thus the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep. In individuals suffering from sleep deprivation, active immunizations may have a diminished effect and may result in lower antibody production, and a lower immune response, than would be noted in a well-rested individual.
Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and circadian rhythms, can be affected by the disturbance of natural light and dark cycles through instances of sleep deprivation, shift work, etc. As a result, these disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma. In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both innate and adaptive immunity. First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine induces increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. It is during this time that undifferentiated, or less differentiated, cells, such as naïve and central memory T cells, peak (i.e. during a time of a slowly evolving adaptive immune response). In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T-cells, a shift of the Th1/Th2 cytokine balance towards one that supports Th1, an increase in overall Th cell proliferation, and naïve T cell migration to lymph nodes. This milieu is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses. In contrast, during wake periods differentiated effector cells, such as cytotoxic natural killer cells and CTLs (cytotoxic T lymphocytes), peak in order to elicit an effective response against any intruding pathogens. Also, during wakeful active times, anti-inflammatory molecules, such as cortisol and catecholamines, peak. There are two theories as to why the pro-inflammatory state is reserved for sleep time. First, inflammation would cause serious cognitive and physical impairments if it were to occur during wake times. Second, inflammation may occur during sleep times due to the presence of melatonin. Inflammation causes a great deal of oxidative stress, and the presence of melatonin during sleep times could actively counteract free radical production during this time. Overnutrition is associated with diseases such as diabetes and obesity, which are known to affect immune function. More moderate malnutrition, as well as certain specific trace mineral and nutrient deficiencies, can also compromise the immune response. Foods rich in certain fatty acids may foster a healthy immune system. Likewise, fetal undernourishment can cause a lifelong impairment of the immune system. The immune system, particularly the innate component, plays a decisive role in tissue repair after an insult. Key actors include macrophages and neutrophils, but other cellular actors, including γδ T cells, innate lymphoid cells (ILCs), and regulatory T cells (Tregs), are also important. The plasticity of immune cells and the balance between pro-inflammatory and anti-inflammatory signals are crucial aspects of efficient tissue repair. Immune components and pathways are involved in regeneration as well, for example in amphibians.
According to one hypothesis, organisms that can regenerate could be less immunocompetent than organisms that cannot regenerate.

The immune system is a remarkably effective structure that incorporates specificity, inducibility and adaptation. Failures of host defense do occur, however, and fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities.

Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence. In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function, while malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can be inherited or acquired. Chronic granulomatous disease, in which phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency.

Overactive immune responses form the other end of immune dysfunction, particularly the autoimmune disorders. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity.

Hypersensitivity is an immune response that damages the body's own tissues. Hypersensitivity reactions are divided into four classes (Type I – IV) based on the mechanisms involved and the time course of the reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen. Type II hypersensitivity occurs when antibodies bind to antigens on the patient's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or "delayed type" hypersensitivity) usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also involve contact dermatitis (e.g., from poison ivy). These reactions are mediated by T cells, monocytes, and macrophages.

Inflammation is one of the first responses of the immune system to infection, but it can also appear without known cause. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells.
Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens.

The immune response can be manipulated to suppress unwanted responses resulting from autoimmunity, allergy, and transplant rejection, and to stimulate protective responses against pathogens that largely elude the immune system (see immunization) or cancer. Immunosuppressive drugs are used to control autoimmune disorders or inflammation when excessive tissue damage occurs, and to prevent rejection after an organ transplant. Anti-inflammatory drugs are often used to control the effects of inflammation. Glucocorticoids are the most powerful of these drugs; however, they can have many undesirable side effects, such as central obesity, hyperglycemia and osteoporosis, so their use must be tightly controlled. Lower doses of anti-inflammatory drugs are often used in conjunction with cytotoxic or immunosuppressive drugs such as methotrexate or azathioprine. Cytotoxic drugs inhibit the immune response by killing dividing cells such as activated T cells. However, the killing is indiscriminate, so other constantly dividing cells and the organs that contain them are affected, which causes toxic side effects. Immunosuppressive drugs such as cyclosporin prevent T cells from responding to signals correctly by inhibiting signal transduction pathways. Cancer immunotherapy covers the medical methods used to stimulate the immune system to attack cancer tumours.

Long-term "active" memory is acquired following infection by activation of B and T cells. Active immunity can also be generated artificially, through vaccination. The principle behind vaccination (also called immunization) is to introduce an antigen from a pathogen in order to stimulate the immune system and develop specific immunity against that particular pathogen without causing disease associated with that organism. This deliberate induction of an immune response is successful because it exploits the natural specificity of the immune system, as well as its inducibility. With infectious disease remaining one of the leading causes of death in the human population, vaccination represents the most effective manipulation of the immune system mankind has developed. Most viral vaccines are based on live attenuated viruses, while many bacterial vaccines are based on acellular components of micro-organisms, including harmless toxin components. Since many antigens derived from acellular vaccines do not strongly induce the adaptive response, most bacterial vaccines are provided with additional adjuvants that activate the antigen-presenting cells of the innate immune system and maximize immunogenicity.

Another important role of the immune system is to identify and eliminate tumors. This is called immune surveillance. The "transformed cells" of tumors express antigens that are not found on normal cells.
To the immune system, these antigens appear foreign, and their presence causes immune cells to attack the transformed tumor cells. The antigens expressed by tumors have several sources; some are derived from oncogenic viruses like human papillomavirus, which causes cancer of the cervix, vulva, vagina, penis, anus, mouth, and throat, while others are the organism's own proteins that occur at low levels in normal cells but reach high levels in tumor cells. One example is an enzyme called tyrosinase that, when expressed at high levels, transforms certain skin cells (e.g. melanocytes) into tumors called melanomas. A third possible source of tumor antigens is proteins normally important for regulating cell growth and survival, which commonly mutate into cancer-inducing molecules called oncogenes.

The main response of the immune system to tumors is to destroy the abnormal cells using killer T cells, sometimes with the assistance of helper T cells. Tumor antigens are presented on MHC class I molecules in a similar way to viral antigens. This allows killer T cells to recognize the tumor cell as abnormal. NK cells also kill tumorous cells in a similar way, especially if the tumor cells have fewer MHC class I molecules on their surface than normal; this is a common phenomenon with tumors. Sometimes antibodies are generated against tumor cells, allowing for their destruction by the complement system.

Some tumors, however, evade the immune system and go on to become cancers. Tumor cells often have a reduced number of MHC class I molecules on their surface, thus avoiding detection by killer T cells. Some tumor cells also release products that inhibit the immune response; for example, by secreting the cytokine TGF-β, which suppresses the activity of macrophages and lymphocytes. In addition, immunological tolerance may develop against tumor antigens, so the immune system no longer attacks the tumor cells. Paradoxically, macrophages can promote tumor growth when tumor cells send out cytokines that attract macrophages, which then generate cytokines and growth factors such as tumor-necrosis factor alpha that nurture tumor development or promote stem-cell-like plasticity. In addition, a combination of hypoxia in the tumor and a cytokine produced by macrophages induces tumor cells to decrease production of a protein that blocks metastasis, thereby assisting the spread of cancer cells. Anti-tumor M1 macrophages are recruited in the early phases of tumor development but are progressively differentiated to M2 macrophages with pro-tumor effects, an immunosuppressor switch. Hypoxia reduces the cytokine production needed for the anti-tumor response, and the macrophages progressively acquire pro-tumor M2 functions driven by signals from the tumor microenvironment, including IL-4 and IL-10.

Larger drugs (>500 Da) can provoke a neutralizing immune response, meaning that the immune system produces neutralizing antibodies that counteract the action of the drugs, particularly if the drugs are administered repeatedly or in larger doses. This limits the effectiveness of drugs based on larger peptides and proteins (which are typically larger than 6000 Da). In some cases, the drug itself is not immunogenic, but may be co-administered with an immunogenic compound, as is sometimes the case for Taxol.
Computational methods have been developed to predict the immunogenicity of peptides and proteins, which are particularly useful in designing therapeutic antibodies, assessing likely virulence of mutations in viral coat particles, and validating proposed peptide-based drug treatments. Early techniques relied mainly on the observation that hydrophilic amino acids are overrepresented in epitope regions relative to hydrophobic amino acids (a toy illustration in code appears at the end of this article); more recent developments rely on machine learning techniques using databases of existing known epitopes, usually on well-studied virus proteins, as a training set. A publicly accessible database has been established for the cataloguing of epitopes from pathogens known to be recognizable by B cells. The emerging field of bioinformatics-based studies of immunogenicity is referred to as "immunoinformatics". Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response.

It is likely that a multicomponent, adaptive immune system arose with the first vertebrates, as invertebrates do not generate lymphocytes or an antibody-based humoral response. Many species, however, utilize mechanisms that appear to be precursors of these aspects of vertebrate immunity. Immune systems appear even in the structurally simplest forms of life, with bacteria using a unique defense mechanism, called the restriction modification system, to protect themselves from viral pathogens called bacteriophages. Prokaryotes also possess acquired immunity, through a system that uses CRISPR sequences to retain fragments of the genomes of phage that they have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Prokaryotes also possess other defense mechanisms. Offensive elements of the immune systems are also present in unicellular eukaryotes, but studies of their roles in defense are few.

Pattern recognition receptors are proteins used by nearly all organisms to identify molecules associated with pathogens. Antimicrobial peptides called defensins are an evolutionarily conserved component of the innate immune response found in all animals and plants, and represent the main form of invertebrate systemic immunity. The complement system and phagocytic cells are also used by most forms of invertebrate life. Ribonucleases and the RNA interference pathway are conserved across all eukaryotes, and are thought to play a role in the immune response to viruses.

Unlike animals, plants lack phagocytic cells, but many plant immune responses involve systemic chemical signals that are sent through the plant. Individual plant cells respond to molecules associated with pathogens known as pathogen-associated molecular patterns (PAMPs). When a part of a plant becomes infected, the plant produces a localized hypersensitive response, whereby cells at the site of infection undergo rapid apoptosis to prevent the spread of the disease to other parts of the plant. Systemic acquired resistance (SAR) is a type of defensive response used by plants that renders the entire plant resistant to a particular infectious agent. RNA silencing mechanisms are particularly important in this systemic response as they can block virus replication.

Evolution of the adaptive immune system occurred in an ancestor of the jawed vertebrates. Many of the classical molecules of the adaptive immune system (e.g., immunoglobulins and T-cell receptors) exist only in jawed vertebrates.
However, a distinct lymphocyte-derived molecule has been discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals possess a large array of molecules called variable lymphocyte receptors (VLRs) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity.

The success of any pathogen depends on its ability to elude host immune responses. Therefore, pathogens have evolved several methods that allow them to successfully infect a host while evading detection or destruction by the immune system. Bacteria often overcome physical barriers by secreting enzymes that digest the barrier, for example, by using a type II secretion system. Alternatively, using a type III secretion system, they may insert a hollow tube into the host cell, providing a direct route for proteins to move from the pathogen to the host. These proteins are often used to shut down host defenses.

An evasion strategy used by several pathogens to avoid the innate immune system is to hide within the cells of their host (also called intracellular pathogenesis). Here, a pathogen spends most of its life-cycle inside host cells, where it is shielded from direct contact with immune cells, antibodies and complement. Some examples of intracellular pathogens include viruses, the food poisoning bacterium "Salmonella" and the eukaryotic parasites that cause malaria ("Plasmodium falciparum") and leishmaniasis ("Leishmania spp."). Other bacteria, such as "Mycobacterium tuberculosis", live inside a protective capsule that prevents lysis by complement. Many pathogens secrete compounds that diminish or misdirect the host's immune response. Some bacteria form biofilms to protect themselves from the cells and proteins of the immune system. Such biofilms are present in many successful infections, e.g., the chronic "Pseudomonas aeruginosa" and "Burkholderia cenocepacia" infections characteristic of cystic fibrosis. Other bacteria generate surface proteins that bind to antibodies, rendering them ineffective; examples include "Streptococcus" (protein G), "Staphylococcus aureus" (protein A), and "Peptostreptococcus magnus" (protein L).

The mechanisms used to evade the adaptive immune system are more complicated. The simplest approach is to rapidly change non-essential epitopes (amino acids and/or sugars) on the surface of the pathogen, while keeping essential epitopes concealed. This is called antigenic variation. An example is HIV, which mutates rapidly, so the proteins on its viral envelope that are essential for entry into its host target cell are constantly changing. These frequent changes in antigens may explain the failures of vaccines directed at this virus. The parasite "Trypanosoma brucei" uses a similar strategy, constantly switching one type of surface protein for another, allowing it to stay one step ahead of the antibody response. Masking antigens with host molecules is another common strategy for avoiding detection by the immune system. In HIV, the envelope that covers the virion is formed from the outermost membrane of the host cell; such "self-cloaked" viruses make it difficult for the immune system to identify them as "non-self" structures.

Immunology is a science that examines the structure and function of the immune system. It originates from medicine and early studies on the causes of immunity to disease.
The earliest known reference to immunity dates to the plague of Athens in 430 BC. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. In the 10th century, the Persian physician al-Razi (also known as Rhazes) wrote the first recorded theory of acquired immunity, noting that a smallpox bout protected its survivors from future infections. Although he explained the immunity in terms of "excess moisture" being expelled from the blood, thereby preventing the disease from occurring a second time, this theory explained many observations about smallpox known at the time. In the 18th century, Pierre-Louis Moreau de Maupertuis experimented with scorpion venom and observed that certain dogs and mice were immune to this venom.

These and other observations of acquired immunity were later exploited by Louis Pasteur in his development of vaccination and his proposed germ theory of disease. Pasteur's theory was in direct opposition to contemporary theories of disease, such as the miasma theory. It was not until Robert Koch's 1891 proofs, for which he was awarded a Nobel Prize in 1905, that microorganisms were confirmed as the cause of infectious disease. Viruses were confirmed as human pathogens in 1901, with the discovery of the yellow fever virus by Walter Reed.

Immunology made a great advance towards the end of the 19th century, through rapid developments in the study of humoral immunity and cellular immunity. Particularly important was the work of Paul Ehrlich, who proposed the side-chain theory to explain the specificity of the antigen-antibody reaction; his contributions to the understanding of humoral immunity were recognized by the award of a Nobel Prize in 1908, which was jointly awarded to the founder of cellular immunology, Elie Metchnikoff.

Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells (more precisely, phagocytes) that were responsible for immune responses. In contrast, the humoral theory of immunity, held, among others, by Robert Koch and Emil von Behring, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells.

In the mid-1950s, Frank Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential.
More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions.
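Returning to the computational prediction of immunogenicity discussed earlier in this article: the early hydrophilicity-based epitope methods can be illustrated with a sliding-window average over a per-residue hydrophilicity scale, flagging the most hydrophilic windows as candidate epitope regions. The sketch below is only a toy; the scale values, window length, and threshold are illustrative assumptions in the spirit of Hopp–Woods-style scales, not a validated parameter set.

```python
# Sliding-window hydrophilicity scoring, in the spirit of early epitope-
# prediction methods: hydrophilic stretches are treated as candidate
# B-cell epitope regions. The scale below is an illustrative assumption,
# not a published, validated parameter set.
HYDROPHILICITY = {
    'R': 3.0, 'K': 3.0, 'D': 3.0, 'E': 3.0,   # charged: strongly hydrophilic
    'S': 0.3, 'N': 0.2, 'Q': 0.2, 'G': 0.0, 'P': 0.0,
    'T': -0.4, 'A': -0.5, 'H': -0.5, 'C': -1.0, 'M': -1.3,
    'V': -1.5, 'I': -1.8, 'L': -1.8, 'Y': -2.3, 'F': -2.5, 'W': -3.4,
}

def window_scores(sequence: str, window: int = 6) -> list[tuple[int, float]]:
    """Return (start_index, mean hydrophilicity) for each window."""
    scores = []
    for i in range(len(sequence) - window + 1):
        chunk = sequence[i:i + window]
        scores.append((i, sum(HYDROPHILICITY[aa] for aa in chunk) / window))
    return scores

def candidate_epitopes(sequence: str, window: int = 6, threshold: float = 1.0):
    """Windows whose average hydrophilicity exceeds a chosen threshold."""
    return [(i, s) for i, s in window_scores(sequence, window) if s > threshold]

if __name__ == "__main__":
    peptide = "MKRDEESTALLIVFWKDDRE"  # made-up sequence, for demonstration only
    for start, score in candidate_epitopes(peptide):
        print(f"window at {start}: {peptide[start:start+6]} score={score:.2f}")
```

Modern machine-learning approaches replace the fixed scale with features learned from databases of known epitopes, but the windowed-scoring structure above is the ancestral form of the idea.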
https://en.wikipedia.org/wiki?curid=14958
Immunology Immunology is a branch of biology that covers the study of immune systems in all organisms. Immunology charts, measures, and contextualizes the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection); and the physical, chemical, and physiological characteristics of the components of the immune system "in vitro", "in situ", and "in vivo". Immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, rheumatology, virology, bacteriology, parasitology, psychiatry, and dermatology.

The term was coined by the Russian biologist Ilya Ilyich Mechnikov, who advanced studies on immunology and received the Nobel Prize for his work in 1908. He pinned small thorns into starfish larvae and noticed unusual cells surrounding the thorns; this was the active response of the body trying to maintain its integrity. It was Mechnikov who first observed the phenomenon of phagocytosis, in which the body defends itself against a foreign body.

Before immunity was so designated (from the etymological root "immunis", which is Latin for "exempt"), early physicians characterized organs that would later be proven to be essential components of the immune system. The important lymphoid organs of the immune system are the thymus, bone marrow, and chief lymphatic tissues such as the spleen, tonsils, lymph vessels, lymph nodes, adenoids, and liver. When health conditions worsen to emergency status, portions of immune system organs, including the thymus, spleen, bone marrow, lymph nodes, and other lymphatic tissues, can be surgically excised for examination while patients are still alive. Many components of the immune system are cellular in nature and not associated with any specific organ, but rather are embedded or circulating in various tissues located throughout the body.

Classical immunology ties in with the fields of epidemiology and medicine. It studies the relationship between the body systems, pathogens, and immunity. The earliest written mention of immunity can be traced back to the plague of Athens in 430 BCE. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. Many other ancient societies have references to this phenomenon, but it was not until the 19th and 20th centuries that the concept developed into scientific theory.

The study of the molecular and cellular components that comprise the immune system, including their function and interaction, is the central science of immunology. The immune system has been divided into a more primitive innate immune system and, in vertebrates, an acquired or adaptive immune system. The latter is further divided into humoral (or antibody) and cell-mediated components. The immune system has the capability of self and non-self recognition. An antigen is a substance that triggers the immune response. The cells involved in recognizing the antigen are lymphocytes. Once they recognize an antigen, B lymphocytes secrete antibodies, proteins that neutralize disease-causing microorganisms. Antibodies do not directly kill pathogens; instead, they identify antigens as targets for destruction by other immune cells such as phagocytes or NK cells. The humoral (antibody) response is defined as the interaction between antibodies and antigens.
Antibodies are specific proteins released from a certain class of immune cells known as B lymphocytes, while antigens are defined as anything that elicits the generation of antibodies ("anti"body "gen"erators). Immunology rests on an understanding of the properties of these two biological entities and the cellular response to both.

It is now becoming clear that immune responses contribute to the development of many common disorders not traditionally viewed as immunologic, including metabolic, cardiovascular, and neurodegenerative conditions such as Alzheimer's disease, as well as cancer. In addition, the immune system is directly implicated in infectious diseases (tuberculosis, malaria, hepatitis, pneumonia, dysentery, and helminth infestations). Hence, research in the field of immunology is of prime importance for advancements in the fields of modern medicine, biomedical research, and biotechnology. Immunological research continues to become more specialized, pursuing non-classical models of immunity and functions of cells, organs and systems not previously associated with the immune system (Yemeserach 2010).

Clinical immunology is the study of diseases caused by disorders of the immune system (failure, aberrant action, and malignant growth of the cellular elements of the system). It also involves diseases of other systems, where immune reactions play a part in the pathology and clinical features. The diseases caused by disorders of the immune system fall into two broad categories: immunodeficiency, in which parts of the immune system fail to provide an adequate response, and autoimmunity, in which the immune system attacks its host's own body. Other immune system disorders include various hypersensitivities (such as in asthma and other allergies) that respond inappropriately to otherwise harmless compounds. The most well-known disease that affects the immune system itself is AIDS, an immunodeficiency characterized by the suppression of CD4+ ("helper") T cells, dendritic cells and macrophages by the Human Immunodeficiency Virus (HIV). Clinical immunologists also study ways to prevent the immune system's attempts to destroy allografts (transplant rejection).

The body's capability to react to antigens depends on a person's age, antigen type, maternal factors and the area where the antigen is presented. Neonates are said to be in a state of physiological immunodeficiency, because both their innate and adaptive immunological responses are greatly suppressed. Once born, a child's immune system responds favorably to protein antigens while not as well to glycoproteins and polysaccharides. In fact, many of the infections acquired by neonates are caused by low-virulence organisms like "Staphylococcus" and "Pseudomonas". In neonates, opsonic activity and the ability to activate the complement cascade are very limited. For example, the mean level of C3 in a newborn is approximately 65% of that found in the adult. Phagocytic activity is also greatly impaired in newborns. This is due to lower opsonic activity, as well as diminished up-regulation of integrin and selectin receptors, which limit the ability of neutrophils to interact with adhesion molecules in the endothelium. Their monocytes are slow and have reduced ATP production, which also limits the newborn's phagocytic activity. Although the number of total lymphocytes is significantly higher than in adults, cellular and humoral immunity are also impaired. Antigen-presenting cells in newborns have a reduced capability to activate T cells.
Also, T cells of a newborn proliferate poorly and produce very small amounts of cytokines like IL-2, IL-4, IL-5, IL-12, and IFN-γ, which limits their capacity to activate the humoral response as well as the phagocytic activity of macrophages. B cells develop early during gestation but are not fully active.

Maternal factors also play a role in the body's immune response. At birth, most of the immunoglobulin present is maternal IgG. Because IgM, IgD, IgE and IgA do not cross the placenta, they are almost undetectable at birth. Some IgA is provided by breast milk. These passively acquired antibodies can protect the newborn for up to 18 months, but their response is usually short-lived and of low affinity. These antibodies can also produce a negative response. If a child is exposed to the antibody for a particular antigen before being exposed to the antigen itself, then the child will produce a dampened response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly, the response of T-cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses in adults do not readily elicit these same responses in neonates. Between six and nine months after birth, a child's immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in the response to polysaccharides until the child is at least one year old. This can be the reason for the distinct time frames found in vaccination schedules.

During adolescence, the human body undergoes various physical, physiological and immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-estradiol (an estrogen) and, in males, testosterone. Estradiol usually begins to act around the age of 10 and testosterone some months later. There is evidence that these steroids act not only directly on the primary and secondary sexual characteristics but also have an effect on the development and regulation of the immune system, including an increased risk of developing pubescent and post-pubescent autoimmunity. There is also some evidence that cell surface receptors on B cells and macrophages may detect sex hormones in the system. The female sex hormone 17-β-estradiol has been shown to regulate the level of immunological response, while some male androgens such as testosterone seem to suppress the stress response to infection. Other androgens, however, such as DHEA, increase immune response. As in females, the male sex hormones seem to have more control of the immune system during puberty and post-puberty than during the rest of a male's adult life. Physical changes during puberty such as thymic involution also affect immunological response.

Ecoimmunology, or ecological immunology, explores the relationship between the immune system of an organism and its social, biotic and abiotic environment. More recent ecoimmunological research has focused on host defences against pathogens traditionally considered "non-immunological", such as pathogen avoidance, self-medication, symbiont-mediated defenses, and fecundity trade-offs. Behavioural immunity, a phrase coined by Mark Schaller, specifically refers to psychological drivers of pathogen avoidance, such as disgust aroused by stimuli encountered around pathogen-infected individuals, such as the smell of vomit. More broadly, "behavioural" ecological immunity has been demonstrated in multiple species.
For example, the monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites. These toxins reduce parasite growth in the offspring of the infected monarch. However, when uninfected monarch butterflies are forced to feed only on these toxic plants, they suffer a fitness cost in the form of reduced lifespan relative to other uninfected monarch butterflies. This indicates that laying eggs on toxic plants is a costly behaviour in monarchs which has probably evolved to reduce the severity of parasite infection. Symbiont-mediated defenses are also heritable across host generations, despite a non-genetic direct basis for the transmission. Aphids, for example, rely on several different symbionts for defense from key parasites, and can vertically transmit their symbionts from parent to offspring. Therefore, a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring, allowing coevolution with parasites attacking the host in a way similar to traditional immunity.

The use of immune system components or antigens to treat a disease or disorder is known as immunotherapy. Immunotherapy is most commonly used to treat allergies, autoimmune disorders such as Crohn's disease and rheumatoid arthritis, and certain cancers. Immunotherapy is also often used in the immunosuppressed (such as HIV patients) and people suffering from other immune deficiencies. This includes regulating factors such as IL-2, IL-10, GM-CSF and IFN-α.

The specificity of the bond between antibody and antigen has made the antibody an excellent tool for the detection of substances by a variety of diagnostic techniques. Antibodies specific for a desired antigen can be conjugated with an isotopic (radio) or fluorescent label or with a color-forming enzyme in order to detect it. However, the similarity between some antigens can lead to false positives and other errors in such tests, when antibodies cross-react with antigens that are not exact matches. The study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer.

One branch of immunology is concerned with the physiological reactions characteristic of the immune state. Another, reproductive immunology, is devoted to the study of immunological aspects of the reproductive process, including fetus acceptance. The term has also been used by fertility clinics to address fertility problems, recurrent miscarriages, premature deliveries and dangerous complications such as pre-eclampsia.

Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells (more precisely, phagocytes) that were responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch and Emil von Behring, among others, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells. In the mid-1950s, Frank Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity.
On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential. More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions.
https://en.wikipedia.org/wiki?curid=14959
Ice beer Ice beer is a beer style in which the beer has undergone some degree of fractional freezing during production. These brands generally have higher alcohol content than typical beer, and a low price relative to their alcohol content.

The process of "icing" beer involves lowering the temperature of a batch of beer until ice crystals form. Since ethanol has a much lower freezing point (−114 °C; −173.2 °F) than water (0 °C; 32 °F), when the ice is removed the alcohol concentration of the beer increases (a simple mass-balance estimate of this effect is sketched at the end of this article). The process is known as fractional freezing or freeze distillation. Ice beer was developed by brewing a strong, dark lager, then freezing the beer and removing some of the ice. This concentrates the aroma and taste of the beer, and also raises the alcoholic strength of the finished beer, producing a beer with 12 to 15 per cent alcohol. In North America, water would be added to lower the alcohol level.

Eisbock was introduced to Canada in 1989 by the microbrewery Niagara Falls Brewing Company. The brewers started with a strong dark lager (15.3 degrees Plato/1.061 original gravity, 6% alcohol by volume), then used the traditional method of freezing and removing ice to concentrate aroma and flavours while increasing the alcoholic strength to 8% ABV. Niagara Falls Eisbock was released annually as a seasonal winter beer; each year the label would feature a different historic view of nearby Niagara Falls in winter. This continued each year until the company was sold in 1994.

Despite this precedent, the large Canadian brewer Molson (now part of Molson Coors) claimed to have made the first ice beer in North America when it introduced "Canadian Ice" in April 1993. However, Molson's main competitor in Canada, Labatt (now part of Anheuser-Busch InBev), claimed to have patented the ice beer process earlier. When Labatt introduced an ice beer in August 1993, capturing a 10% market share in Canada, it instigated the so-called "Ice Beer Wars" of the 1990s. Labatt patented a specific method for making ice beer in 1997, 1998 and 2000: "A process for chill-treating, which is exemplified by a process for preparing a fermented malt beverage wherein brewing materials are mashed with water and the resulting mash is heated and wort separated therefrom. The wort is boiled, cooled and fermented, and the beer is subjected to a finishing stage, which includes aging, to produce the final beverage. The improvement comprises subjecting the beer to a cold stage comprising rapidly cooling the beer to a temperature of about its freezing point in such a manner that ice crystals are formed therein in only minimal amounts. The resulting cooled beer is then mixed for a short period of time with a beer slurry containing ice crystals, without any appreciable collateral increase in the amount of ice crystals in the resulting mixture. Finally, the so-treated beer is extracted from the mixture." The company provides the following explanation for the layman: "During this unique process, the temperature is reduced until fine ice crystals form in the beer. Then using an exclusive process, the crystals are removed. The result is a full-flavoured balanced beer."

Miller acquired the U.S. marketing and distribution rights to Molson's products, and first introduced the Molson product in the United States in August 1993 as "Molson Ice". Miller also introduced the "Icehouse" brand under the "Plank Road Brewery" brand name shortly thereafter, and it is still sold nationwide.
Anheuser-Busch introduced "Bud Ice" (5.5% ABV) in 1994, and it remains one of the country's top-selling ice beers; it has a somewhat lower alcohol content than most other ice beer brands. In 1995, Anheuser-Busch introduced two other major brands: "Busch Ice" and "Natural Ice" (both 5.9% ABV). "Natural Ice" is the No. 1 selling ice beer brand in the United States; its low price makes it very popular on college campuses across the country. Keystone, a value-oriented Coors brand, also produces a 5.9% ABV brew labeled "Keystone Ice".

Common ice beer brands in Canada in 2017, with approximately 5.5 to 6 per cent alcohol content, include Carling Ice, Molson Keystone Ice, Molson's Black Ice, Busch Ice, Old Milwaukee Ice, Brick's Laker Ice and Labatt Ice. There is also Labatt Maximum Ice, with 7.1 per cent alcohol. Ice beers are typically known for their high alcohol-to-dollar ratio. In some areas, ice beer products are considered to be bought largely by "street drunks" and are prohibited from sale; for example, most of the products explicitly listed as prohibited in the beer and malt liquor category in the Seattle area are ice beers.
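The arithmetic behind the freeze-concentration step described above is a simple mass balance: if the removed ice is assumed to be essentially pure water, the original ethanol ends up in a smaller remaining volume. A minimal sketch under that idealizing assumption (real fractional freezing traps some ethanol in the ice, so these figures are upper bounds):

```python
def abv_after_icing(initial_abv: float, ice_fraction: float) -> float:
    """Estimate ABV after removing a fraction of the batch volume as ice.

    Idealized mass balance: the ice is assumed to be pure water, so all of
    the ethanol stays behind in the remaining liquid. Real fractional
    freezing traps some ethanol in the ice, making this an upper bound.
    """
    ethanol = initial_abv / 100.0     # ethanol volume per unit of batch volume
    remaining = 1.0 - ice_fraction    # liquid volume left after removing ice
    if remaining <= ethanol:
        raise ValueError("cannot remove that much water as ice")
    return 100.0 * ethanol / remaining

# Example: a 6% ABV lager with a quarter of the batch removed as ice.
print(abv_after_icing(6.0, 0.25))  # -> 8.0
```

Under ideal separation, removing a quarter of the batch as ice takes a 6% lager to 8% ABV, consistent with the Niagara Falls Eisbock figures quoted above.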
https://en.wikipedia.org/wiki?curid=14961
Identity element In mathematics, an identity element, or neutral element, is a special type of element of a set with respect to a binary operation on that set, which leaves any element of the set unchanged when combined with it. This concept is used in algebraic structures such as groups and rings. The term "identity element" is often shortened to "identity" (as in the case of additive identity and multiplicative identity) when there is no possibility of confusion, but the identity implicitly depends on the binary operation it is associated with.

Let (S, ∗) be a set S equipped with a binary operation ∗. Then an element e of S is called a left identity if e ∗ s = s for all s in S, and a right identity if s ∗ e = s for all s in S. If e is both a left identity and a right identity, then it is called a two-sided identity, or simply an identity.

An identity with respect to addition is called an additive identity (often denoted as 0) and an identity with respect to multiplication is called a multiplicative identity (often denoted as 1). These need not be ordinary addition and multiplication, as the underlying operation could be rather arbitrary. The distinction is used most often for sets that support both binary operations, such as rings, integral domains, and fields. The multiplicative identity is often called unity in the latter context (a ring with unity). This should not be confused with a unit in ring theory, which is any element having a multiplicative inverse. By its own definition, unity itself is necessarily a unit.

It is possible for S to have several left identities; for example, in a semigroup with the operation s ∗ t = t, every element is a left identity. In a similar manner, there can be several right identities. But if there is both a right identity and a left identity, then they must be equal, resulting in a single two-sided identity. To see this, note that if l is a left identity and r is a right identity, then l = l ∗ r = r. In particular, there can never be more than one two-sided identity: if there were two, say e and f, then e ∗ f would have to be equal to both e and f.

It is also quite possible for (S, ∗) to have no identity element, as in the case of the even integers under the multiplication operation (a sketch checking these examples follows below). Another common example is the cross product of vectors, where the absence of an identity element is related to the fact that the direction of any nonzero cross product is always orthogonal to any element multiplied; that is, it is not possible to obtain a non-zero vector in the same direction as the original. Yet another example of a structure without an identity element is the additive semigroup of positive natural numbers.
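For a finite set with an explicitly given operation, left and right identities can be found by brute force, which makes the statements above easy to test. A minimal sketch (the operation tables below are toy examples chosen for illustration):

```python
def left_identities(elements, op):
    """Elements e with op(e, s) == s for every s."""
    return [e for e in elements if all(op(e, s) == s for s in elements)]

def right_identities(elements, op):
    """Elements e with op(s, e) == s for every s."""
    return [e for e in elements if all(op(s, e) == s for s in elements)]

# Integers mod 5 under multiplication: 1 is the unique two-sided identity.
zmod5 = range(5)
mul5 = lambda a, b: (a * b) % 5
print(left_identities(zmod5, mul5), right_identities(zmod5, mul5))  # [1] [1]

# A sample of even integers under ordinary multiplication: no identity.
evens = range(-6, 7, 2)
mul = lambda a, b: a * b
print(left_identities(evens, mul), right_identities(evens, mul))    # [] []

# A semigroup with s * t = t: every element is a left identity,
# and (with more than one element) there is no right identity.
s = range(3)
proj = lambda a, b: b
print(left_identities(s, proj), right_identities(s, proj))  # [0, 1, 2] []
```

The last example realizes the semigroup mentioned above in which several (indeed all) elements are left identities, while the first two illustrate the unique two-sided identity and the no-identity cases.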
https://en.wikipedia.org/wiki?curid=14962
Instrumental An instrumental is a recording without any vocals, although it might include some inarticulate vocals, such as shouted backup vocals in a big band setting. Through semantic widening, a broader sense of the word song may refer to instrumentals. The music is primarily or exclusively produced using musical instruments. An instrumental can exist in music notation, after it is written by a composer; in the mind of the composer (especially in cases where the composer themselves will perform the piece, as in the case of a blues solo guitarist or a folk music fiddle player); or as a piece that is performed live by a single instrumentalist or a musical ensemble, which could range in size from a duo or trio to a large big band, concert band or orchestra.

In a song that is otherwise sung, a section that is not sung but is played by instruments can be called an instrumental interlude, or, if it occurs at the beginning of the song, before the singer starts to sing, an instrumental introduction. If the instrumental section highlights the skill, musicality, and often the virtuosity of a particular performer (or group of performers), the section may be called a "solo" (e.g., the guitar solo that is a key section of heavy metal music and hard rock songs). If the instruments are percussion instruments, the interlude can be called a percussion interlude or "percussion break". These interludes are a form of break in the song.

In commercial popular music, instrumental tracks are sometimes renderings or remixes of a corresponding release that features vocals, but they may also be compositions originally conceived without vocals. One example of a genre in which both vocal/instrumental and solely instrumental songs are produced is blues. A blues band often uses mostly songs that have lyrics that are sung, but during the band's show, they may also perform instrumental songs which only include electric guitar, harmonica, upright bass/electric bass and drum kit.

The opposite of instrumental music, that is, music for voices alone, without any accompanying instruments, is a cappella, an Italian phrase that means "in the chapel". In early music, instruments such as trumpet and drums were considered outdoor instruments, and music for inside a chapel typically used quieter instruments, voices, or just voices alone. A cappella music exists in both Classical music choir pieces (for choir without any accompanist piano or pipe organ) and in popular music styles such as doo wop groups and barbershop quartets.

For genres in which a non-vocal song or interlude is conceived using computers and software, rather than with acoustic or electronic musical instruments, the term instrumental is still used. Some recordings which include brief or non-musical use of the human voice are typically considered instrumentals. Songs including actual musical (rhythmic, melodic, and lyrical) vocals might still be categorized as instrumentals if the vocals appear only as a short part of an extended piece (e.g., "Unchained Melody" (Les Baxter), "Batman Theme", "TSOP (The Sound of Philadelphia)", "Pick Up the Pieces", "The Hustle", "Fly, Robin, Fly", "Get Up and Boogie", "Do It Any Way You Wanna", and "Gonna Fly Now"), though this definition is loose and subjective. Falling just outside of that definition is "Theme From Shaft" by Isaac Hayes.
"Better Off Alone", which began as an instrumental by DJ Jurgen, had vocals by Judith Pronk, who would become a seminal part of Alice Deejay, added in later releases of the track.
https://en.wikipedia.org/wiki?curid=14967
Regular icosahedron In geometry, a regular icosahedron is a convex polyhedron with 20 faces, 30 edges and 12 vertices. It is one of the five Platonic solids, and the one with the most faces. It has five equilateral triangular faces meeting at each vertex. It is represented by its Schläfli symbol {3,5}, or sometimes by its vertex figure as 3.3.3.3.3 or 3⁵. It is the dual of the dodecahedron, which is represented by {5,3}, having three pentagonal faces around each vertex. A regular icosahedron is a strictly convex deltahedron, as well as a gyroelongated pentagonal bipyramid and a biaugmented pentagonal antiprism in any of six orientations. The name comes from Ancient Greek roots meaning "twenty" and "seat". The plural can be either "icosahedrons" or "icosahedra".

If the edge length of a regular icosahedron is "a", the radius of a circumscribed sphere (one that touches the icosahedron at all vertices) is
r_u = (a/4)√(10 + 2√5) ≈ 0.951a,
and the radius of an inscribed sphere (tangent to each of the icosahedron's faces) is
r_i = (ϕ²a)/(2√3) ≈ 0.756a,
while the midradius, which touches the middle of each edge, is
r_m = (ϕa)/2 ≈ 0.809a,
where "ϕ" = (1 + √5)/2 is the golden ratio.

The surface area "A" and the volume "V" of a regular icosahedron of edge length "a" are:
A = 5√3 a² ≈ 8.660a²
V = (5/12)(3 + √5) a³ ≈ 2.182a³
The latter is "F" = 20 times the volume of a general tetrahedron with apex at the center of the inscribed sphere, where the volume of the tetrahedron is one third times the base area (√3/4)a² times its height r_i. The volume filling factor of the circumscribed sphere is:
f = V / V_sphere ≈ 0.6055.
A sphere inscribed in an icosahedron encloses approximately 82.9% of its volume, compared to only 75.47% for a dodecahedron. The midsphere of an icosahedron will have a volume 1.01664 times the volume of the icosahedron, which is by far the closest similarity in volume of any Platonic solid with its midsphere. This arguably makes the icosahedron the "roundest" of the Platonic solids.

The vertices of an icosahedron centered at the origin with an edge-length of 2 and a circumradius of √(ϕ² + 1) ≈ 1.902 are described by circular permutations of (0, ±1, ±ϕ), where "ϕ" = (1 + √5)/2 is the golden ratio. Taking all permutations (not just cyclic ones) results in the compound of two icosahedra. Note that these vertices form five sets of three concentric, mutually orthogonal golden rectangles, whose edges form Borromean rings. If the original icosahedron has edge length 1, its dual dodecahedron has edge length 1/ϕ = ϕ − 1.

The 12 edges of a regular octahedron can be subdivided in the golden ratio so that the resulting vertices define a regular icosahedron. This is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly subdividing each edge into the golden mean along the direction of its vector. The five octahedra defining any given icosahedron form a regular polyhedral compound, while the two icosahedra that can be defined in this way from any given octahedron form a uniform polyhedron compound.

The locations of the vertices of a regular icosahedron can be described using spherical coordinates, for instance as latitude and longitude. If two vertices are taken to be at the north and south poles (latitude ±90°), then the other ten vertices are at latitude ±arctan(1/2) ≈ ±26.57°. These ten vertices are at evenly spaced longitudes (36° apart), alternating between north and south latitudes. This scheme takes advantage of the fact that the regular icosahedron is a pentagonal gyroelongated bipyramid, with D5d dihedral symmetry; that is, it is formed of two congruent pentagonal pyramids joined by a pentagonal antiprism.
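These measurements are straightforward to check numerically. A minimal sketch that evaluates the radius, area, and volume formulas above and derives the circumscribed-sphere filling factor, the inscribed-sphere ratio, and the midsphere volume ratio from them:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio

a = 1.0  # edge length
r_u = a / 4 * math.sqrt(10 + 2 * math.sqrt(5))   # circumradius, ~0.951
r_i = phi**2 * a / (2 * math.sqrt(3))            # inradius, ~0.756
r_m = phi * a / 2                                # midradius, ~0.809
A = 5 * math.sqrt(3) * a**2                      # surface area, ~8.660
V = 5 * (3 + math.sqrt(5)) / 12 * a**3           # volume, ~2.182

sphere = lambda r: 4 / 3 * math.pi * r**3        # volume of a sphere of radius r

print(f"filling factor of circumscribed sphere: {V / sphere(r_u):.4f}")  # ~0.6055
print(f"inscribed sphere / icosahedron volume:  {sphere(r_i) / V:.4f}")  # ~0.8290
print(f"midsphere / icosahedron volume:         {sphere(r_m) / V:.5f}")  # ~1.01664
```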
The icosahedron has three special orthogonal projections, centered on a face, an edge and a vertex. The icosahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.

The following construction of the icosahedron avoids tedious computations in the number field ℚ[√5] necessary in more elementary approaches. The existence of the icosahedron amounts to the existence of six equiangular lines in ℝ³. Indeed, intersecting such a system of equiangular lines with a Euclidean sphere centered at their common intersection yields the twelve vertices of a regular icosahedron, as can easily be checked. Conversely, supposing the existence of a regular icosahedron, the lines defined by its six pairs of opposite vertices form an equiangular system. In order to construct such an equiangular system, we start with a symmetric 6 × 6 matrix A with zeros on the diagonal and all off-diagonal entries equal to ±1, with the signs chosen so that a straightforward computation yields A² = 5I (where I is the 6 × 6 identity matrix). This implies that A has eigenvalues −√5 and √5, both with multiplicity 3, since A is symmetric and of trace zero. The matrix A + √5·I thus induces a Euclidean structure on the quotient space ℝ⁶/ker(A + √5·I), which is isomorphic to ℝ³, since the kernel of A + √5·I has dimension 3. The image under the projection of the six coordinate axes v₁, …, v₆ in ℝ⁶ thus forms a system of six equiangular lines in ℝ³ intersecting pairwise at a common acute angle of arccos(1/√5). Orthogonal projection of ±v₁, …, ±v₆ onto the √5-eigenspace of A thus yields the twelve vertices of the icosahedron (both the matrix property and the common angle are checked numerically in the sketch below).

A second straightforward construction of the icosahedron uses representation theory of the alternating group "A"5 acting by direct isometries on the icosahedron.

The rotational symmetry group of the regular icosahedron is isomorphic to the alternating group on five letters. This non-abelian simple group is the only non-trivial normal subgroup of the symmetric group on five letters. Since the Galois group of the general quintic equation is isomorphic to the symmetric group on five letters, and this normal subgroup is simple and non-abelian, the general quintic equation does not have a solution in radicals. The proof of the Abel–Ruffini theorem uses this simple fact, and Felix Klein wrote a book that made use of the theory of icosahedral symmetries to derive an analytical solution to the general quintic equation. See icosahedral symmetry: related geometries for further history, and related symmetries on seven and eleven letters. The full symmetry group of the icosahedron (including reflections) is known as the full icosahedral group, and is isomorphic to the product of the rotational symmetry group and the group "C"2 of size two, which is generated by the reflection through the center of the icosahedron.

The icosahedron has a large number of stellations. According to specific rules defined in the book "The Fifty-Nine Icosahedra", 59 stellations were identified for the regular icosahedron. The first form is the icosahedron itself. One is a regular Kepler–Poinsot polyhedron. Three are regular compound polyhedra. The small stellated dodecahedron, great dodecahedron, and great icosahedron are three facetings of the regular icosahedron. They share the same vertex arrangement. They all have 30 edges.
The regular icosahedron and great dodecahedron share the same edge arrangement but differ in faces (triangles vs. pentagons), as do the small stellated dodecahedron and great icosahedron (pentagrams vs. triangles).

There are distortions of the icosahedron that, while no longer regular, are nevertheless vertex-uniform. These are invariant under the same rotations as the tetrahedron, and are somewhat analogous to the snub cube and snub dodecahedron, including some forms which are chiral and some with Th-symmetry, i.e. with different planes of symmetry from the tetrahedron.

The icosahedron is unique among the Platonic solids in possessing a dihedral angle not less than 120°. Its dihedral angle is approximately 138.19° (a numerical check appears below). Thus, just as hexagons have angles not less than 120° and cannot be used as the faces of a convex regular polyhedron, because such a construction would not meet the requirement that at least three faces meet at a vertex and leave a positive defect for folding in three dimensions, icosahedra cannot be used as the cells of a convex regular polychoron because, similarly, at least three cells must meet at an edge and leave a positive defect for folding in four dimensions (in general, for a convex polytope in "n" dimensions, at least three facets must meet at a peak and leave a positive defect for folding in "n"-space). However, when combined with suitable cells having smaller dihedral angles, icosahedra can be used as cells in semi-regular polychora (for example the snub 24-cell), just as hexagons can be used as faces in semi-regular polyhedra (for example the truncated icosahedron). Finally, non-convex polytopes do not carry the same strict requirements as convex polytopes, and icosahedra are indeed the cells of the icosahedral 120-cell, one of the ten non-convex regular polychora.

An icosahedron can also be called a gyroelongated pentagonal bipyramid. It can be decomposed into a gyroelongated pentagonal pyramid and a pentagonal pyramid, or into a pentagonal antiprism and two equal pentagonal pyramids.

The icosahedron can be projected to 3D from the 6D 6-demicube using the same basis vectors that form the hull of the rhombic triacontahedron from the 6-cube. This projection includes 20 inner vertices that are not connected by the 30 outer hull edges; the inner vertices form a dodecahedron.

There are 3 uniform colorings of the icosahedron. These colorings can be represented as 11213, 11212, 11111, naming the 5 triangular faces around each vertex by their color.

The icosahedron can be considered a snub tetrahedron, as snubification of a regular tetrahedron gives a regular icosahedron having chiral tetrahedral symmetry. It can also be constructed as an alternated truncated octahedron, having pyritohedral symmetry. The pyritohedral symmetry version is sometimes called a pseudoicosahedron, and is dual to the pyritohedron.

Many viruses, e.g. herpes virus, have icosahedral shells. Viral structures are built of repeated identical protein subunits known as capsomeres, and the icosahedron is the easiest shape to assemble using these subunits. A "regular" polyhedron is used because it can be built from a single basic unit protein used over and over again; this saves space in the viral genome. Various bacterial organelles with an icosahedral shape have also been found. The icosahedral shells encapsulating enzymes and labile intermediates are built of different types of proteins with BMC domains.
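Two quantitative claims above lend themselves to quick numerical checks: the equiangular-lines construction (a matrix with A² = 5I, and six lines through opposite vertices meeting pairwise at arccos(1/√5)) and the dihedral-angle folding argument. The sign pattern below is an illustrative assumption on my part; any symmetric 6 × 6 matrix with zero diagonal and ±1 entries satisfying A² = 5I serves the construction equally well.

```python
import itertools, math

phi = (1 + math.sqrt(5)) / 2

# --- Equiangular-lines construction --------------------------------------
# One valid sign pattern for the matrix A described above (an assumption;
# any symmetric 6x6 matrix with zero diagonal, +/-1 entries and A^2 = 5I
# works for the construction).
A = [[0,  1,  1,  1,  1,  1],
     [1,  0,  1, -1, -1,  1],
     [1,  1,  0,  1, -1, -1],
     [1, -1,  1,  0,  1, -1],
     [1, -1, -1,  1,  0,  1],
     [1,  1, -1, -1,  1,  0]]
A2 = [[sum(A[i][k] * A[k][j] for k in range(6)) for j in range(6)] for i in range(6)]
assert A2 == [[5 * (i == j) for j in range(6)] for i in range(6)]  # A^2 = 5I

# The six lines through opposite vertices (coordinates (0, +/-1, +/-phi)
# and cyclic permutations) meet pairwise at arccos(1/sqrt(5)).
def cyclic(p):
    x, y, z = p
    return [(x, y, z), (z, x, y), (y, z, x)]

verts = [v for s1 in (1, -1) for s2 in (1, -1) for v in cyclic((0.0, s1, s2 * phi))]
lines = verts[:6]  # the last six vertices are the antipodes of the first six

def cos_between(u, v):
    return abs(sum(x * y for x, y in zip(u, v))) / (math.hypot(*u) * math.hypot(*v))

angles = {round(cos_between(u, v), 9) for u, v in itertools.combinations(lines, 2)}
print(angles, round(1 / math.sqrt(5), 9))  # one common value: 0.447213595

# --- Dihedral-angle folding argument --------------------------------------
# The icosahedron's dihedral angle is arccos(-sqrt(5)/3) ~ 138.19 degrees;
# three cells around an edge would exceed 360 degrees, leaving no defect.
dihedral = math.degrees(math.acos(-math.sqrt(5) / 3))
print(f"dihedral angle: {dihedral:.2f} deg; three cells: {3 * dihedral:.2f} deg")
```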
In 1904, Ernst Haeckel described a number of species of Radiolaria, including "Circogonia icosahedra", whose skeleton is shaped like a regular icosahedron. A copy of Haeckel's illustration for this radiolarian appears in the article on regular polyhedra. The closo-carboranes are chemical compounds with a shape very close to an icosahedron. Icosahedral twinning also occurs in crystals, especially nanoparticles. Many borides and allotropes of boron contain the B12 icosahedron as a basic structural unit. Icosahedral dice with twenty sides have been used since ancient times. In several roleplaying games, such as "Dungeons & Dragons", the twenty-sided die (d20 for short) is commonly used in determining success or failure of an action. This die is in the form of a regular icosahedron. It may be numbered from "0" to "9" twice (in which form it usually serves as a ten-sided die, or d10), but most modern versions are labeled from "1" to "20". An icosahedron is the three-dimensional game board for Icosagame, formerly known as the Ico Crystal Game. An icosahedron is used in the board game "Scattergories" to choose a letter of the alphabet. Six letters are omitted (Q, U, V, X, Y, and Z). In the Nintendo 64 game "Kirby 64: The Crystal Shards", the boss Miracle Matter is a regular icosahedron. Inside a Magic 8-Ball, various answers to yes–no questions are inscribed on a regular icosahedron. R. Buckminster Fuller and Japanese cartographer Shoji Sadao designed a world map in the form of an unfolded icosahedron, called the Fuller projection, whose maximum distortion is only 2%. The American electronic music duo ODESZA use a regular icosahedron as their logo. The skeleton of the icosahedron (the vertices and edges) forms a graph. It is one of 5 Platonic graphs, each a skeleton of its Platonic solid. The high degree of symmetry of the polyhedron is replicated in the properties of this graph, which is distance-transitive and symmetric. The automorphism group has order 120. The vertices can be colored with 4 colors, the edges with 5 colors, and the diameter is 3. The icosahedral graph is Hamiltonian: there is a cycle containing all the vertices. It is also a planar graph. There are 4 related Johnson solids with pentagonal faces that include a subset of the 12 vertices. The similar dissected regular icosahedron has 2 adjacent vertices diminished, leaving two trapezoidal faces, and a bifastigium has 2 opposite sets of vertices removed and 4 trapezoidal faces. The pentagonal antiprism is formed by removing two opposite vertices. The icosahedron can be transformed by a truncation sequence into its dual, the dodecahedron. As a snub tetrahedron, and as an alternation of a truncated octahedron, it also exists in the tetrahedral and octahedral symmetry families. This polyhedron is topologically related as a part of a sequence of regular polyhedra with Schläfli symbols {3,"n"}, continuing into the hyperbolic plane. The regular icosahedron, seen as a "snub tetrahedron", is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3."n") and corresponding Coxeter–Dynkin diagrams. These figures and their duals have ("n"32) rotational symmetry, being in the Euclidean plane for "n" = 6, and in the hyperbolic plane for any higher "n". The series can be considered to begin with "n" = 2, with one set of faces degenerated into digons. The icosahedron can tessellate hyperbolic space in the order-3 icosahedral honeycomb, with 3 icosahedra around each edge, 12 icosahedra around each vertex, with Schläfli symbol {3,5,3}.
It is one of four regular tessellations in hyperbolic 3-space.
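The graph-theoretic properties listed above (12 vertices, 30 edges, 5-regularity, diameter 3, planarity) can be verified directly. A quick sketch, assuming the third-party networkx library and its built-in icosahedral graph generator:

```python
import networkx as nx

# The skeleton of the icosahedron as a graph (12 vertices, 30 edges).
G = nx.icosahedral_graph()

assert G.number_of_nodes() == 12 and G.number_of_edges() == 30
assert nx.diameter(G) == 3                 # diameter 3, as stated above
assert all(d == 5 for _, d in G.degree())  # 5-regular: five edges at each vertex
assert nx.check_planarity(G)[0]            # planar graph
```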
https://en.wikipedia.org/wiki?curid=14968
Industrial archaeology of Dartmoor The industrial archaeology of Dartmoor covers a number of the industries which have, over the ages, taken place on Dartmoor, and the remaining evidence surrounding them. Currently only three industries are economically significant, yet all three will inevitably leave their own traces on the moor: china clay mining, farming and tourism. A good general guide to the commercial activities on Dartmoor at the end of the 19th century is William Crossing's "The Dartmoor Worker". In former times, lead, silver, tin and copper were mined extensively on Dartmoor. The most obvious evidence of mining to the casual visitor to Dartmoor is the remains of the old engine-house at Wheal Betsy, which stands alongside the A386 road between Tavistock and Okehampton. The word "Wheal" has a particular meaning in Devon and Cornwall, denoting either a tin or a copper mine; in the case of Wheal Betsy, however, it was principally lead and silver that were mined. Tin mining was once widely practised by many miners across the moor, but by the early 1900s only a few tinners remained, and mining had almost completely ceased twenty years later. Some of the more significant mines were Eylesbarrow, Knock Mine, Vitifer Mine and Hexworthy Mine. The last active mine in the Dartmoor area was Great Rock Mine, which shut down in 1969. Dartmoor granite has been used in many Devon and Cornish buildings. The prison at Princetown was built from granite taken from Walkhampton Common. When the horse tramroad from Plymouth to Princetown was completed in 1823, large quantities of granite could be transported more easily. There were three major granite quarries on the moor: Haytor, Foggintor and Merrivale. The granite quarries around Haytor were the source of the stone used in several famous structures, including the New London Bridge, completed in 1831. This granite was transported from the moor via the Haytor Granite Tramway, stretches of which are still visible. The extensive quarries at Foggintor provided granite for the construction of London's Nelson's Column in the early 1840s, and New Scotland Yard was faced with granite from the quarry at Merrivale. Merrivale Quarry continued excavating and working its own granite until the 1970s, producing gravestones and agricultural rollers. Work at Merrivale continued until the 1990s; for the last 20 years, imported stone such as gabbro from Norway and Italian marble was dressed and polished there. The unusual pink granite at Great Trowlesworthy Tor was also quarried, and there were many other small granite quarries dotted around the moor. Various metamorphic rocks were also quarried in the metamorphic aureole around the edge of the moor, most notably at Meldon. In 1844 a factory for making gunpowder was built on the open moor, not far from Postbridge. Gunpowder was needed for the tin mines and granite quarries then in operation on the moor. The buildings were widely spaced from one another for safety, and the mechanical power for grinding ("incorporating") the powder was derived from waterwheels driven by a leat. Now known as "Powdermills" or "Powder Mills", there are extensive remains of this factory still visible. Two chimneys still stand, and the walls of the two sturdily-built incorporating mills with central waterwheels survive well: they were built with substantial walls but flimsy roofs so that in the event of an explosion, the force of the blast would be directed safely upwards. The ruins of a number of ancillary buildings also survive.
A proving mortar—a type of small cannon used to gauge the strength of the gunpowder—used by the factory still lies by the side of the road to the nearby pottery. Peat-cutting for fuel occurred at some locations on Dartmoor until at least the 1970s, usually for personal use. The right of Dartmoor commoners to cut peat for fuel is known as "turbary". These rights were conferred a long time ago, pre-dating most written records. The area once known as the "Turbary of Alberysheved" between the River Teign and the headwaters of the River Bovey is mentioned in the Perambulation of the Forest of Dartmoor of 1240 (by 1609 the name of the area had changed to Turf Hill). An attempt was made to commercialise the cutting of peat in 1901 at Rattle Brook Head; however, this quickly failed. From at least the 13th century until early in the 20th, rabbits were kept on a commercial scale, both for their flesh and their fur. Documentary evidence for this exists in place names such as Trowlesworthy Warren (mentioned in a document dated 1272) and Warren House Inn. The physical evidence, in the form of pillow mounds, is also plentiful; for example, there are 50 pillow mounds at Legis Tor Warren. The sophistication of the warreners is shown by the existence of vermin traps that were placed near the warrens to capture weasels and stoats attempting to get at the rabbits. The significance of the term "warren" nowadays is not what it once was. In the Middle Ages it was a privileged place, and the creatures of the warren were protected by the king 'for his princely delight and pleasure'. The subject of warrening on Dartmoor was addressed in Eden Phillpotts' story "The River". Farming has been practised on Dartmoor since time immemorial. The dry-stone walls which separate fields and mark boundaries give an idea of the extent to which the landscape has been shaped by farming. There is little or no arable farming within the moor, the land mostly being given over to livestock farming on account of the thin and rocky soil. Some Dartmoor farms are remote in the extreme.
https://en.wikipedia.org/wiki?curid=14971
Idempotence Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra (in particular, in the theory of projectors and closure operators) and functional programming (in which it is connected to the property of referential transparency). The term was introduced by Benjamin Peirce in the context of elements of algebras that remain invariant when raised to a positive integer power, and literally means "(the quality of having) the same power", from "idem" + "potence" (same + power). An element "x" of a magma ("M", •) is said to be "idempotent" if "x" • "x" = "x". If all elements are idempotent with respect to •, then • is called idempotent. The formula ∀"x", "x" • "x" = "x" is called the idempotency law for •. In the monoid of the functions from a set "E" to itself under function composition ∘, the idempotent elements are the functions "f" such that "f" ∘ "f" = "f", in other words such that for all "x" in "E", "f"("f"("x")) = "f"("x") (the image of each element in "E" is a fixed point of "f"). Examples include the identity function, constant functions, and the absolute value function. If the set "E" has "n" elements, we can partition it into "k" chosen fixed points and "n" − "k" non-fixed points under "f", and then "k"^("n"−"k") is the number of different idempotent functions with exactly those fixed points. Hence, taking into account all possible partitions, the sum of ("n" choose "k") · "k"^("n"−"k") over "k" from 0 to "n" is the total number of possible idempotent functions on the set. The integer sequence of the number of idempotent functions as given by the sum above for "n" = 0, 1, 2, 3, 4, 5, 6, 7, 8, … starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, … . Neither the property of being idempotent nor that of not being idempotent is preserved under function composition. As an example for the former, "f"("x") = "x" mod 3 and "g"("x") = max("x", 5) are both idempotent, but "f" ∘ "g" is not, although "g" ∘ "f" happens to be. As an example for the latter, the negation function ¬ on the Boolean domain is not idempotent, but ¬ ∘ ¬ is. Similarly, unary negation of real numbers is not idempotent, but its composition with itself is. In computer science, the term "idempotence" may have a different meaning depending on the context in which it is applied; for operations with side effects, it typically means that the system state after one application is the same as after several. This is a very useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep track of whether the operation was already performed or not. A function looking up a customer's name and address in a database is typically idempotent, since this will not cause the database to change. Similarly, changing a customer's address to XYZ is typically idempotent, because the final address will be the same no matter how many times XYZ is submitted. However, placing an order for a cart for the customer is typically not idempotent, since running the call several times will lead to several orders being placed. Canceling an order is idempotent, because the order remains canceled no matter how many requests are made. A composition of idempotent methods or subroutines, however, is not necessarily idempotent if a later method in the sequence changes a value that an earlier method depends on – "idempotence is not closed under composition". For example, suppose the initial value of a variable is 3 and there is a sequence that reads the variable, then changes it to 5, and then reads it again.
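The composition examples and the counting formula above translate directly into a short script. A minimal sketch in Python (the functions f and g are the ones from the text; idempotent_count is an illustrative helper, not a standard library function):

```python
from math import comb

# f and g are each idempotent, but their compositions differ.
def f(x):
    return x % 3            # f(f(x)) == f(x) for all integers x

def g(x):
    return max(x, 5)        # g(g(x)) == g(x)

fg = lambda x: f(g(x))      # f o g: not idempotent
gf = lambda x: g(f(x))      # g o f: idempotent (always 5 on integers)

for x in range(-10, 11):
    assert f(f(x)) == f(x) and g(g(x)) == g(x)
    assert gf(gf(x)) == gf(x)
assert fg(fg(7)) != fg(7)   # witness: fg(7) == 1, but fg(1) == 2

# Counting idempotent functions on an n-element set:
# choose k fixed points, then map the other n - k elements onto them.
def idempotent_count(n):
    return sum(comb(n, k) * k ** (n - k) for k in range(n + 1))

print([idempotent_count(n) for n in range(9)])
# [1, 1, 3, 10, 41, 196, 1057, 6322, 41393]
```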
Each step in the sequence is idempotent: both steps reading the variable have no side effects, and changing a variable to 5 will always have the same effect no matter how many times it is executed. Nonetheless, executing the entire sequence once produces the output (3, 5), but executing it a second time produces the output (5, 5), so the sequence is not idempotent. In the Hypertext Transfer Protocol (HTTP), idempotence and safety are the major attributes that separate HTTP verbs. Of the major HTTP verbs, GET, PUT, and DELETE should be implemented in an idempotent manner according to the standard, but POST need not be. GET retrieves a resource; PUT stores content at a resource; and DELETE eliminates a resource. As in the example above, reading data usually has no side effects, so it is idempotent (in fact "nullipotent"). Storing and deleting a given set of content are each usually idempotent as long as the request specifies a location or identifier that uniquely identifies that resource and only that resource again in the future. The PUT and DELETE operations with unique identifiers reduce to the simple case of assignment to an immutable variable of either a value or the null-value, respectively, and are idempotent for the same reason; the end result is always the same as the result of the initial execution, even if the response differs. Violation of the unique identification requirement in storage or deletion typically causes violation of idempotence. Consider, for example, storing or deleting a given set of content without specifying a unique identifier: POST requests, which do not need to be idempotent, often do not contain unique identifiers, so the creation of the identifier is delegated to the receiving system, which then creates a corresponding new record. Similarly, PUT and DELETE requests with nonspecific criteria may result in different outcomes depending on the state of the system, for example a request to delete the most recent record. In each case, subsequent executions will further modify the state of the system, so they are not idempotent. In event stream processing, idempotence refers to the ability of a system to produce the same outcome even if the same file, event or message is received more than once. In a load-store architecture, instructions that might cause a page fault are idempotent, so if a page fault occurs, the OS can load the page from disk and then simply re-execute the faulted instruction. In a processor where such instructions are not idempotent, dealing with page faults is much more complex. When reformatting output, pretty-printing is expected to be idempotent. In other words, if the output is already "pretty", there should be nothing to do for the pretty-printer. In service-oriented architecture (SOA), a multiple-step orchestration process composed entirely of idempotent steps can be replayed without side-effects if any part of that process fails. Applied examples that many people could encounter in their day-to-day lives include elevator call buttons and crosswalk buttons. The initial activation of the button moves the system into a requesting state, until the request is satisfied. Subsequent activations of the button between the initial activation and the request being satisfied have no effect, unless the system is designed to adjust the time for satisfying the request based on the number of activations.
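The PUT/POST contrast above can be made concrete with a toy in-memory store. This is a hedged sketch, not a real HTTP server; the store and handler names are hypothetical illustrations of the unique-identifier requirement:

```python
# A plain dictionary stands in for a server's data store.
store = {}
next_id = 0

def put(resource_id, content):
    # PUT with a unique identifier: repeating it leaves the same end state.
    store[resource_id] = content

def delete(resource_id):
    # DELETE with a unique identifier: the resource stays gone on repeats.
    store.pop(resource_id, None)

def post(content):
    # POST delegates identifier creation to the server: each call adds a record.
    global next_id
    next_id += 1
    store[next_id] = content
    return next_id

put("order/42", "pending"); put("order/42", "pending")
assert store == {"order/42": "pending"}   # one PUT or two: same state

post("new order"); post("new order")
assert len(store) == 3                    # two POSTs: two new records
```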
https://en.wikipedia.org/wiki?curid=14972
Ithaca, New York Ithaca is a city in the Finger Lakes region of New York. It is the seat of Tompkins County, as well as the largest community in the Ithaca–Tompkins County metropolitan area. This area contains the municipalities of the Town of Ithaca, the village of Cayuga Heights, and other towns and villages in Tompkins County. The city of Ithaca is located on the southern shore of Cayuga Lake, in Central New York, south-west of Syracuse, southeast of Toronto, and northwest of New York City. It is named after the Greek island of Ithaca. Ithaca is home to Cornell University, an Ivy League school of over 20,000 students, most of whom study at its local campus. In addition, Ithaca College is a private, nonsectarian, liberal arts college of over 7,000 students, located just south of the city in the Town of Ithaca, adding to the area's "college town" atmosphere. Nearby is Tompkins Cortland Community College (TC3). These three colleges bring tens of thousands of students, who increase Ithaca's seasonal population during the school year. The city's voters are notably more liberal than those in the remainder of Tompkins County or in upstate New York, generally voting for Democratic Party candidates. As of 2010, the city's population was 30,014. A 2019 census estimate stated the population was 30,837. Namgyal Monastery in Ithaca is the North American seat of Tenzin Gyatso, the 14th Dalai Lama. Native Americans lived in this area for thousands of years. At the time of European contact, the area was controlled by the Cayuga tribe, one of the powerful Five Nations of the "Haudenosaunee" or Iroquois League. Jesuit missionaries from New France (Quebec) are said to have had a mission to convert the Cayuga as early as 1657. Saponi and Tutelo peoples, Siouan-speaking tribes, later occupied lands at the south end of Cayuga Lake. Dependent tributaries of the Cayuga, they had been permitted to settle on the tribe's hunting lands at the south end of Cayuga Lake, as well as in Pony (originally Sapony) Hollow of what is known as present-day Newfield, New York. Remnants of these tribes had been forced from Virginia and North Carolina by tribal conflicts and European colonial settlement. Similarly, the Tuscarora people, an Iroquoian-speaking tribe from the Carolinas, migrated after defeat in the Yamasee War; they settled with the Oneida people and became the sixth nation of the Haudenosaunee, with chiefs stating the migration was complete in 1722. During the Revolutionary War, four of the then six Iroquois nations helped the British attempt to crush the revolution, although bands made decisions on fighting in a highly decentralized way. Conflict with the rebel colonists was fierce throughout the Mohawk Valley and western New York. In retaliation for conflicts to the east and resentment at the savage way in which the Iroquois made war, the 1779 Sullivan Expedition was conducted against the Iroquois in the west of the state, destroying more than 40 villages and stored winter crops and forcing their retreat from the area. It destroyed the Tutelo village of Coregonal, located near what is now the junction of state routes 13 and 13A just south of the Ithaca city limits. Most Iroquois were forced from the state after the Revolutionary War, but some remnants remained. The state sold off the former Iroquois lands to stimulate development and settlement by Americans; lands were also granted as payment to veterans of the war.
Within the current boundaries of the City of Ithaca, Native Americans maintained only a temporary hunting camp at the base of Cascadilla Gorge. In 1788, eleven men from Kingston, New York, came to the area with two Delaware (Lenape) guides, to explore what they considered wilderness. The following year Jacob Yaple, Isaac Dumond, and Peter Hinepaw returned with their families and constructed log cabins. That same year Abraham Bloodgood of Albany obtained a patent from the state for 1,400 acres, which included all of the present downtown west of Tioga Street. In 1790, the federal government and state began an official program to grant land in the area, known as the Central New York Military Tract, as payment to American soldiers of the Revolutionary War for their service, as the government was cash poor. Most local land titles trace back to these Revolutionary War grants. As part of this process, the Central New York Military Tract, which included northern Tompkins County, was surveyed by Simeon De Witt, Bloodgood's son-in-law. De Witt was also the nephew of Governor George Clinton. The Commissioners of Lands of New York State (chairman Gov. George Clinton) met in 1790. The Military Tract township in which proto-Ithaca was located was named the Town of Ulysses. A few years later De Witt moved to Ithaca, then called variously "The Flats," "The City," or "Sodom"; he renamed it for the Greek island home of Ulysses, in the spirit of the multitude of settlement names in the region derived from classical literature, such as Aurelius and Ovid, and especially that of Ulysses, New York, the town that contained Ithaca at the time. Around 1791 De Witt surveyed what is now the downtown area into lots and sold them at modest prices. That same year John Yaple built a grist mill on Cascadilla Creek. The first frame house was erected in 1800 by Abram Markle. In 1804 the village had a postmaster and in 1805 a tavern. Ithaca became a transshipping point for salt from curing beds near Salina, New York, to buyers south and east. This prompted construction in 1810 of the Owego Turnpike. When the War of 1812 cut off access to Nova Scotia gypsum, used for fertilizer, Ithaca became the center of trade in Cayuga gypsum. The Cayuga Steamboat Company was organized in 1819 and in 1820 launched the first steamboat on Cayuga Lake, the "Enterprise." In 1821, the village was incorporated at the same time the Town of Ithaca was organized and separated from the parent Town of Ulysses. In 1834, the Ithaca and Owego Railroad's first horse-drawn train began service, connecting traffic on the east–west Erie Canal (completed in 1825) with the Susquehanna River to the south to expand the trade network. With the depression of 1837, the railroad was re-organized as the Cayuga & Susquehanna. It was re-engineered with switchbacks in the late 1840s; in the late 20th century a short section of this route in the city and town of Ithaca was used for the South Hill Recreation Way. However, easier railroad routes were constructed, such as that of the Syracuse, Binghamton & New York (1854). In the decade following the Civil War, railroads were built from Ithaca to surrounding points (Geneva; Cayuga; Cortland; and Elmira, New York; and Athens, Pennsylvania), mainly with financing from Ezra Cornell. The geography of the city, on a steep hill by the lake, has prevented it from being directly connected to a major transportation artery.
When the Lehigh Valley Railroad built its main line from Pennsylvania to Buffalo, New York in 1890, it bypassed Ithaca (running via eastern Schuyler County on easier grades), as the Delaware, Lackawanna and Western Railroad had done in the 1850s. In the late 19th century, more industry developed in Ithaca. In 1883 William Henry Baker and his partners started the Ithaca Gun Company, making shotguns. The original factory was located in the Fall Creek neighborhood of the city, on a slope later known as Gun Hill, where the nearby waterfall supplied the main source of energy for the plant. The company became an icon in the hunting and shooting world, its shotguns famous for their fine decorative work. Wooden gunstocks with knots or other imperfections were donated to the high school woodworking shop to be made into lamps. John Philip Sousa and trick-shooter Annie Oakley favored Ithaca guns. In 1937 the company began producing the Ithaca 37, based on a 1915 patent by noted firearms designer John Browning. Its 12-gauge shotguns were the standard used for decades by the New York City Police Department and Los Angeles Police Department. In 1885, Ithaca Children's Home was established on West Seneca Street. The orphanage had two programs at the time: a residential home for both orphaned and destitute children, and a day nursery. The village established its first trolley in 1887. Ithaca developed as a small manufacturing and retail center and was incorporated as a city in 1888. The largest industrial company in the area was Morse Chain, elements of which were absorbed into Emerson Power Transmission on South Hill and Borg Warner Automotive in Lansing, New York. Ithaca claims to be the birthplace of the ice cream sundae, created in 1892 when fountain shop owner Chester Platt "served his local priest vanilla ice cream covered in cherry syrup with a dark candied cherry on top. The priest suggested the dessert be named after the day, Sunday—although the spelling was later changed out of fear some would find it offensive." The local Unitarian church, where the priest, Rev. John Scott, preached, has an annual "Sundae Sunday" every September in commemoration. Ithaca's claim has long been disputed by Two Rivers, Wisconsin. Also in 1892, the Ithaca Kitty became one of the first mass-produced stuffed animal toys in the United States. In 1903 a typhoid epidemic, resulting from poor sanitation infrastructure, devastated the city. One out of 10 citizens fell ill or died. In 1900 Cornell anatomy professor G.S. Moler made an early movie using frame-by-frame technology. For "The Skeleton Dance," he took single-frame photos of a human skeleton in varying positions, giving the illusion of a dancing skeleton. During the early 20th century, Ithaca was an important center in the silent film industry. These films often featured the local natural scenery. Many of these films were the work of Leopold Wharton and his brother Theodore; their studio was on the site of what is now Stewart Park. The Star Theatre on East Seneca Street was built in 1911 and became the most popular vaudeville venue in the region. Wharton movies were also filmed and shown there. After the film industry centralized in Hollywood, production in Ithaca effectively ceased. Few of the silent films made in Ithaca have been preserved. After World War II, the Langmuir Research Labs of General Electric developed as a major employer; the defense industry continued to expand. GE's headquarters were in Schenectady, New York, to the east in the Mohawk Valley. 
For decades, the Ithaca Gun Company tested their shotguns behind the plant on Lake Street; the shot fell into the Fall Creek gorge at the base of Ithaca Falls. Lead accumulated in the soil in and around the factory and gorge. A major lead clean-up effort sponsored by the United States Superfund took place from 2002 to 2004, managed through the Environmental Protection Agency. The old Ithaca Gun building has been dismantled. It was scheduled to be replaced by development of an apartment complex on the cleaned land. The former Morse Chain company factory on South Hill, now owned by Emerson Power Transmission, was the site of extensive groundwater and soil contamination from its industrial operations. Emerson Power Transmission has been working with the state and South Hill residents to determine the extent and danger of the contamination and aid in cleanup. In 2004, Gayraud Townsend, a 20-year-old senior in Cornell's School of Industrial and Labor Relations, was sworn in as alderman of the city council, the first black male to be elected to the council and the youngest African American to be elected to office in the United States. He served his full term and has mentored other student politicians. In 2011 Cornell Class of 2009 graduate Svante Myrick was elected as the youngest mayor of the city of Ithaca. The valley in which Cayuga Lake is located is long and narrow with a north–south orientation. Ithaca is located at the southern end (the "head") of the lake, but the valley continues to the southwest behind the city. Originally a river valley, it was deepened and widened by the action of Pleistocene ice sheets over the last several hundred thousand years. The lake, which drains to the north, formed behind a dam of glacial moraine. The rock is predominantly Devonian and, north of Ithaca, is relatively fossil rich. Glacial erratics can be found in the area. The world-renowned fossils found in this area can be examined at the Museum of the Earth. Ithaca was founded on flat land just south of the lake—land that formed in fairly recent geological times when silt filled the southern end of the lake. The city ultimately spread to the adjacent hillsides, which rise several hundred feet above the central flats: East Hill, West Hill, and South Hill. The valley's sides are fairly steep, and a number of the streams that flow into the valley from east or west have cut deep canyons, usually with several waterfalls. The natural vegetation of the Ithaca area, seen in areas that are unbuilt and have not been farmed, is northern temperate broadleaf forest, dominated by deciduous trees. According to the Köppen climate classification method, Ithaca experiences a warm-summer humid continental climate, also known as a hemiboreal climate ("Dfb"). Summers are warm but brief, and it is cool-to-cold the rest of the year, with long, snowy winters and substantial average annual snowfall. In addition, frost may occur any time of year except mid-summer. Winter is typically characterized by freezing temperatures, cloudy skies, and light-to-moderate snows, with some heavier falls; the largest snowfall in one day came on February 14, 1914. But the season is also variable; there can be short mild periods with some rain, but also outbreaks of frigid air with very low night temperatures. Summers usually bring sunshine, along with moderate heat and humidity, but also frequent afternoon thunderstorms. Nights are pleasant and sometimes cool.
Occasionally, there can be heatwaves, but they tend to be brief. The average date of the first freeze is October 5, and the average date of the last freeze is May 15, giving Ithaca a growing season of 141 days. The average dates of the first and last snowfalls are November 12 and April 7, respectively. The record low temperature was set as recently as February 2, 1961, and the record high on July 9, 1936. The valley flatland has slightly cooler weather in winter, and occasionally Ithaca residents experience simultaneous snow on the hills and rain in the valley. The phenomenon of mixed precipitation (rain, wind, and snow), common in the late fall and early spring, is known tongue-in-cheek as "ithacation" to many of the local residents. Due to the microclimates created by the impact of the lakes, the region surrounding Ithaca (Finger Lakes American Viticultural Area) experiences a short but adequate growing season for winemaking, similar to the Rhine Valley wine district of Germany. As such, the region is home to many wineries. Ithaca is the larger principal city of the Ithaca-Cortland CSA, a Combined Statistical Area that includes the Ithaca metropolitan area (Tompkins County) and the Cortland micropolitan area (Cortland County), which had a combined population of 145,100 at the 2000 census. As of the census of 2000, there were 29,287 people, 10,287 households, and 2,962 families residing in the city. The population density was 5,360.9 people per square mile (2,071.0/km2). There were 10,736 housing units at an average density of 1,965.2 per square mile (759.2/km2). The racial makeup of the city was 73.97% White, 13.65% Asian, 6.71% Black or African American, 0.39% Native American, 0.05% Pacific Islander, 1.86% from other races, and 3.36% from two or more races. Hispanic or Latino of any race were 5.31% of the population. There were 10,287 households, out of which 14.2% had children under the age of 18 living with them, 19.0% were married couples living together, 7.8% had a female householder with no husband present, and 71.2% were non-families. 43.3% of all households were made up of individuals, and 7.4% had someone living alone who was 65 years of age or older. The average household size was 2.13 and the average family size was 2.81. In the city, the population was spread out, with 9.2% under the age of 18, 53.8% from 18 to 24, 20.1% from 25 to 44, 10.6% from 45 to 64, and 6.3% who were 65 years of age or older. The median age was 22 years. For every 100 females, there were 102.6 males. For every 100 females age 18 and over, there were 102.2 males. The median income for a household in the city was $21,441, and the median income for a family was $42,304. Males had a median income of $29,562 versus $27,828 for females. The per capita income for the city was $13,408. About 13.2% of individuals and 4.2% of families were below the poverty line. The term "Greater Ithaca" encompasses both the City and Town of Ithaca, as well as several smaller settled places (municipalities and census-designated places) within or adjacent to the Town. There are two governmental entities in the area: the Town of Ithaca and the City of Ithaca. The Town of Ithaca is one of the nine towns that make up Tompkins County. The City of Ithaca is surrounded by, but legally independent of, the Town. The City of Ithaca has a mayor–council government. The charter of the City of Ithaca provides for a full-time mayor and city judge, each independent and elected at-large.
Since 1995, the mayor has been elected to a four-year term, and since 1989, the city judge has been elected to a six-year term. Since 1983, the city has been divided into five wards. Each elects two representatives to the city council, known as the Common Council, for staggered four-year terms. In March 2015, the Common Council unanimously adopted a resolution recognizing freedom from domestic violence as a fundamental human right. Since students won the right to vote where they attend colleges, some have become more active in local politics. In 2004, Gayraud Townsend, a 20-year-old senior in Cornell's School of Industrial and Labor Relations, was sworn in as alderman of the city council, representing the 4th Ward. He is the first black male to be elected to the council and was then the youngest African American to be elected to office in the United States. He served his full term and has mentored other young student politicians. In 2011, Cornell graduate Svante Myrick was elected Mayor of the City of Ithaca, becoming the youngest mayor in the city's history. In December 2005, the City and Town governments began discussing opportunities for increased government consolidation, including the possibility of joining the two into a single entity. This topic had been previously discussed in 1963 and 1969. Cayuga Heights, a village adjacent to the city on its northeast, voted against annexation into the city of Ithaca in 1954. Politically, the majority of city's voters (many of them students) have supported liberalism and the Democratic Party. A November 2004 study by ePodunk lists it as New York's most liberal city. This contrasts with the more conservative leanings of the generally rural Upstate New York region; the city's voters are also more liberal than those in the rest of Tompkins County. In 2008, Barack Obama, running against New York State's US Senator Hillary Clinton, won Tompkins County in the Democratic Presidential Primary, the only county that he won in New York State. Obama won Tompkins County (including Ithaca) by a wide margin of 41% over his opponent John McCain in the November 2008 election. Ithaca is a major educational center in Central New York. The two major post-secondary educational institutions located in Ithaca were each founded in the late nineteenth century. In 1865, Ezra Cornell founded Cornell University, which overlooks the town from East Hill. It was opened as a coeducational institution. Women first enrolled in 1870. Ezra Cornell also established a public library for the city. Ithaca College was founded as the Ithaca Conservatory of Music in 1892. Ithaca College was originally located in the downtown area, but relocated to South Hill in the 1960s. In 2018 there were 23,600 students enrolled at Cornell and 6,700 at Ithaca College. Tompkins Cortland Community College is located in the neighboring Town of Dryden, and has an extension center in downtown Ithaca. Empire State College offers non-traditional college courses to adults in downtown Ithaca. The Ithaca City School District, based in Ithaca, encompasses the city and its surrounding area and enrolls about 5,500 K-12 students in eight elementary schools (roughly one for every neighborhood, i.e., Cayuga Heights, Fall Creek, etc.), two middle schools (Boynton and Dewitt), Ithaca High School, and the Lehman Alternative Community School, a combined middle and high school. 
Several private elementary and secondary schools are located in the Ithaca area, including the Roman Catholic Immaculate Conception School, the Cascadilla School, the New Roots Charter School, the Elizabeth Ann Clune Montessori School, the Namaste Montessori School (in the Trumansburg area), and the Ithaca Waldorf School. Ithaca has two networks for supporting its home-schooling families: Loving Education At Home (LEAH) and the Northern Light Learning Center (NLLC). TST BOCES is located in Tompkins County. The Tompkins County Public Library, located at 101 East Green Street, serves as the public library for Tompkins County and is the Central Library for the Finger Lakes Library System. The library serves over 37,000 registered borrowers and contains 250,000 items in its circulating collection. The economy of Ithaca is based on education, with agriculture, technology and tourism in supporting roles. As of 2006, Ithaca continued to have one of the few expanding economies in New York State outside New York City. It draws commuters for work from the neighboring rural counties of Cortland, Tioga, and Schuyler, as well as from the more urbanized Chemung County. Ithaca has tried to maintain its traditional downtown shopping area with its pedestrian orientation; this includes the Ithaca Commons pedestrian mall and Center Ithaca, a small mixed-use complex built at the end of the urban renewal era. Another commercial center, Collegetown, is located next to the Cornell campus. It features a number of restaurants, shops, and bars, and an increasing number of high-rise apartments. It is primarily frequented by Cornell University students. Ithaca has many of the businesses characteristic of small American university towns: bookstores, art house cinemas, craft stores, and vegetarian-friendly restaurants. The collective Moosewood Restaurant, founded in 1973, published a number of vegetarian cookbooks. "Bon Appetit" magazine ranked it among the thirteen most influential restaurants of the 20th century. Ithaca has many local restaurants and chains, both in the city and the town, offering a range of ethnic foods, and has been regarded as having more restaurants per capita than New York City. It has become a destination and residence for retirees. The Ithaca Farmers Market, a cooperative with 150 vendors who live within 30 miles of Ithaca, first opened for business on Saturdays in 1973. It is located at Steamboat Landing, where steamboats from Cayuga Lake used to dock. The South Hills Business Campus originally opened in 1957 as the regional headquarters of the National Cash Register Company. Running three full factory shifts, NCR was a major employer. Although it was sold in 1991 to American Telephone and Telegraph and later acquired by Cognitive TPG, TPG remains a major tenant of the South Hill Business Campus, which is now owned by a group of private investors. Ithaca, home to Cornell University College of Agriculture and Life Sciences, has a deep connection to Central New York's farming and dairy industries. About 60 small farms are located in the greater Ithaca/Trumansburg area, including a number of research farms managed by the Cornell University Agricultural Experiment Station. Cornell's Dairy Research Facility is a center of research and support for New York's large and growing milk and yogurt industries. The "Ithaca Journal" was founded in 1815 and is a morning daily newspaper which has been owned by Gannett since 1912.
The "Ithaca Times" is a free alternative weekly newspaper that's published every Wednesday. The "Cornell Daily Sun" is also published in Ithaca, operating since 1880. Other media outlets include the online independent news outlet, The Ithaca Voice, and the online magazine 14850.com. Ithaca is home to several radio stations: Public radio: Other FM stations include: Saga's "98.7 The Vine", a low-powered translator station; WFIZ "Z95.5", airing a top-40 (CHR) format; contemporary Christian music station WCII 88.9; and classic rock "The Wall" WLLW 99.3 and 96.3, based in Seneca Falls with a transmitter in Ithaca. Founded in 1983, the Sciencenter, is a non-profit hands-on science museum, accredited by the American Alliance of Museums (AAM) and is a member of the Association of Science-Technology Centers (ASTC) and Association of Children's Museums (ACM). The Museum of the Earth is a natural history museum created in 2003 by the Paleontological Research Institution (PRI). The PRI was founded in Ithaca in 1932 and is the publisher of the oldest journal of paleontology in the western hemisphere. Exhibits cover the 4.5 billion year history of the earth in an accessible manner, including interactive displays. As of 2004, the PRI is now formally affiliated with Cornell. The Cayuga Nature Center occupies the site of the 1914 Cayuga Preventorium, a facility for children with tuberculosis; treatment of what was then considered an incurable disease was based on rest and good nutrition. In 1981, the Cayuga Nature Center was incorporated as an independent, private, non-profit educational organization, offering environmental education to local school districts. In 2011, the PRI merged with the Cayuga Nature Center, making it a sister organization to the Museum of the Earth. The Cornell Lab of Ornithology is located in the Imogene Powers Johnson Center for Birds and Biodiversity. The Lab's Visitors' Center and observation areas are open to the public. Displays include a surround sound theater, object-theater presentation, sound studio, and informational kiosks featuring bird sounds and information. The Herbert F. Johnson Museum of Art houses one of the finest collections of art in upstate New York. Special exhibitions are mounted each year, plus selections from a global permanent collection, which is displayed on six public floors. The collection includes art from throughout Asia, Africa, Europe, the Americas, graphic arts, medallic art, and Tiffany glass, ranging from the ancient to the contemporary. The Center for the Arts at Ithaca, Inc., operates the "Hangar Theatre". Opened in 1975 in a renovated municipal airport hangar, the Hangar hosts a summer season and brings a range of theatre to regional audiences including students, producing a school tour and Artists-in-the-Schools programs. Ithaca is also the home to Kitchen Theatre Company, a non-profit professional company with a theatre on West State Street; and Civic Ensemble, a creative collaborative ensemble staging emerging playwrights' work and community-based original productions. Ithaca is noted for its annual community celebration, The Ithaca Festival. The Constance Saltonstall Foundation for the Arts provides grants and summer fellowships at the Saltonstall Arts Colony for New York State artists and writers. Ithaca also hosts one of the largest used-book sales in the United States. 
The city and town also sponsor The Apple Festival in the fall, the Chili Fest in February, the Finger Lakes International Dragon Boat Festival in July, Porchfest in late September, and the Ithaca Brew Fest in Stewart Park in September. Ithaca has also pioneered the Ithaca Health Fund, a popular cooperative health insurance plan. Ithaca is home to one of the United States' first local currency systems, Ithaca Hours, developed by Paul Glover. Ithaca is the home of the Cayuga Chamber Orchestra. The Cornell Concert Series has been hosting musicians and ensembles of international stature since 1903. For its initial 84 years, the series featured Western classical artists exclusively. In 1987, however, the series broke with tradition to present Ravi Shankar and has since grown to encompass a broader spectrum of the world's great musics. Now it balances a mix of Western classical music, traditions from around the world, jazz, and new musics in these genres. In a single season, the Cornell Concert Series presents performers ranging from the Leipzig Thomanerchor and Danish Quartet to Simon Shaheen, Vida Guitar Quartet, and Eighth Blackbird. The School of Music at Ithaca College was founded in 1892 by William Egbert as a music conservatory on Buffalo Street. Among the degree programs offered are those in Performance, Theory, Music Education, and Composition. Since 1941, the School of Music has been accredited by the National Association of Schools of Music. Ithaca's Suzuki school, Ithaca Talent Education, provides musical training for children of all ages and also teacher training for undergraduate and graduate-level students. The Community School of Music and Art uses an extensive scholarship system to offer classes and lessons to any student, regardless of age, background, economic status, or artistic ability. A number of musicians call Ithaca home, most notably Samite of Uganda, The Burns Sisters, The Horse Flies, Johnny Dowd, Mary Lorson, cellist Hank Roberts, reggae band John Brown's Body, Kurt Riley, X Ambassadors, and Alex Kresovich. Old-time music is a staple, and folk music is featured weekly on WVBR-FM's "Bound for Glory", North America's longest-running live folk concert broadcast. The Finger Lakes GrassRoots Festival of Music and Dance, hosted by local band Donna the Buffalo, is held annually during the third week in July in the nearby village of Trumansburg, with more than 60 local, national and international acts. Ithaca is the center of a thriving live music scene, featuring over 200 groups playing most genres of American popular music, the predominant genres being folk, rock, blues, jazz, and country. There are over 80 live music venues within a 40-mile radius of the city, including cafes, pubs, clubs, and concert halls. In 2009, the Ithaca metropolitan statistical area (MSA) ranked as the highest in the United States for percentage of commuters who walked to work (15.1 percent). In 2013, the Ithaca MSA ranked as the second lowest in the United States for percentage of commuters who traveled by private vehicle (68.7 percent). During the same year, 17.5 percent of commuters in the Ithaca MSA walked to work. Ithaca is in the rural Finger Lakes region northwest of New York City; the nearest larger cities, Binghamton and Syracuse, are an hour's drive away by car; Rochester and Scranton are two hours; Buffalo and Albany are three. New York City, Philadelphia, Toronto, and Ottawa are about four hours away.
Ithaca lies over a half hour's drive from any interstate highway, and all car trips to Ithaca involve some driving on two-lane state rural highways. The city is at the convergence of many regional two-lane state highways: Routes 13, 13A, 34, 79, 89, 96, 96B, and 366. These are usually not congested except in Ithaca proper. However, Route 79 between the I-81 access at Whitney Point and Ithaca receives a significant amount of Ithaca-bound congestion right before Ithaca's colleges reopen after breaks. In July 2008, a non-profit called Ithaca Carshare began a carsharing service in Ithaca. Ithaca Carshare has a fleet of vehicles shared by over 1500 members as of July 2015 and has become a popular service among both city residents and the college communities. Vehicles are located throughout downtown Ithaca and at the two major institutions. Ithaca Carshare was the first locally run carsharing organization in New York State; others have since launched in Buffalo, Albany, and Syracuse. Rideshare services to promote carpooling and vanpooling are operated by ZIMRIDE and VRIDE. A community mobility education program, Way2Go, is operated by Cornell Cooperative Extension of Tompkins County. Way2Go's website provides consumer information and videos. Way2Go works collaboratively to help people save money, stress less, go green and improve mobility options. The 2-1-1 Tompkins/Cortland Help line connects people with services, including transportation, in the community, by telephone and web on a 24/7 basis. The information and referral service is operated by the Human Services Coalition of Tompkins County, Inc. Together, 2-1-1 Information and Referral and Way2Go are a one-call, one-click resource designed to provide mobility services information for Ithaca and throughout Tompkins County. As a growing urban area, Ithaca is facing steady increases in levels of vehicular traffic on the city grid and on the state highways. Outlying areas have limited bus service, and many people consider a car essential. However, many consider Ithaca a walkable and bikeable community. One positive trend for the health of downtown Ithaca is the new wave of increasing urban density in and around the Ithaca Commons. Because the downtown area is the region's central business district, dense mixed-use development that includes housing may increase the proportion of people who can walk to work and recreation, and mitigate the likely increased pressure on already busy roads as Ithaca grows. The downtown area is also the area best served by frequent public transportation. Still, traffic congestion around the Commons is likely to progressively increase. There is frequent intercity bus service by Greyhound Lines, New York Trailways, OurBus, and Shortline (Coach USA), particularly to Binghamton and New York City, with limited service to Rochester, Buffalo and Syracuse, and (via connections in Binghamton) to Utica and Albany. OurBus also provides limited holiday services to Allentown, Pennsylvania, Philadelphia, and Washington, DC. Cornell University runs a premium campus-to-campus bus between its Ithaca campus and its medical school in Manhattan, which is open to the public. Starting September 2019, intercity buses serving Ithaca operated from the downtown bus stop at 131 East Green Street, as the former Greyhound bus station on West State Street closed due to staff retirement and building maintenance issues. Ithaca is the center of an extensive bus public transportation network. Tompkins Consolidated Area Transit, Inc.
(TCAT, Inc.) is a not-for-profit corporation that provides public transportation for Tompkins County, New York. TCAT was reorganized as a non-profit corporation in 2004 and is primarily supported locally by Cornell University, the City of Ithaca and Tompkins County. TCAT's ridership increased from 2.7 million in 2004 to 4.4 million in 2013. TCAT operates 34 routes, many running seven days a week. It has frequent service to downtown, Cornell University, Ithaca College, and the Shops at Ithaca Mall in the Town of Lansing, but less frequent service to many residential and rural areas, including Trumansburg and Newfield. Chemung County Transit (C-TRAN) runs weekday commuter service from Chemung County to Ithaca. Cortland Transit runs commuter service to Cornell University. Tioga County Public Transit operates three routes to Ithaca and Cornell, but will cease operating on November 30, 2014. GADABOUT Transportation Services, Inc. provides demand-response paratransit service for seniors over 60 and people with disabilities. Ithaca Dispatch provides local and regional taxi service. In addition, Ithaca Airline Limousine and IthaCar Service connect to the local airports. Ithaca is served by Ithaca Tompkins International Airport, located about three miles to the northeast of the city center. In late 2019 the airport completed a major $34.8 million renovation which included a larger terminal with additional passenger gates and jet bridges, expanded passenger amenities, and a customs facility that enables it to receive international charter and private flights. American Eagle offers daily flights to its hub at Philadelphia, and weekly service to its Charlotte hub, both operated by Piedmont Airlines using Embraer ERJ-145 airliners. Delta Connection provides service to its hub at Detroit Metro airport, operated by SkyWest Airlines using Bombardier CRJ-200 airliners. United Express offers two daily flights to Washington Dulles International Airport, operated by CommutAir using the Embraer ERJ-145. Into the mid-twentieth century, it was possible to reach Ithaca by passenger rail. Two trains per day served Ithaca along either the Delaware, Lackawanna and Western Railroad or the Lehigh Valley Railroad. The trip took "about seven hours" from New York City, "about eight hours" from Philadelphia, and "about three hours" from Buffalo. There has been no passenger rail service since 1961. From the 1870s through 1961 there were trains to Buffalo via Geneva, New York; to New York City via Wilkes-Barre, Pennsylvania (Lehigh Valley Railroad) and (until the 1940s) Scranton, Pennsylvania (DL&W); to Auburn, New York via Cayuga Junction; and to the US northeast via Cortland, New York. The Lehigh Valley's top New York City-Ithaca-Buffalo passenger train, the "Black Diamond," was optimistically publicized as 'The Handsomest Train in the World', perhaps to compensate for its roundabout route to Buffalo. It was named after the railroad's largest commodity, anthracite coal. Into the early 1940s, the Lackawanna Railroad operated two shuttle trains a day to Owego, where passengers could transfer to trains to Buffalo and Chicago to the west and eastbound to Hoboken across from New York City. Until 1958, the Lackawanna maintained service through nearby Cortland. Until 1958, two Lehigh Valley trains a day made west-bound stops in Ithaca, and three trains a day made station stops east-bound for New York City. The last passenger train making stops in Ithaca was the Lehigh's "Maple Leaf," discontinued on February 1, 1961.
Within Ithaca, electric railways ran along Stewart Avenue and Eddy Street. In fact, Ithaca was the fourth community in New York state with a street railway; streetcars ran from 1887 to summer 1935. In 2019 the Ithaca Central Railroad, a Watco subsidiary, took over the operation of the Norfolk Southern Ithaca Secondary line from Sayre, Pennsylvania, to the Lake Ridge site of AES Cayuga, a coal power plant (known as Milliken Station during NYSEG ownership). Unit trains of coal, delivered at Sayre, Pennsylvania, from the Norfolk Southern, are now rare. The main traffic is salt from the Cargill salt mine on the east shore of Cayuga Lake near Myers Point. In addition to its liberal politics, Ithaca is commonly listed among the most culturally liberal of American small cities. The "Utne Reader" named Ithaca "America's most enlightened town" in 1997. According to ePodunk's Gay Index, Ithaca has a score of 231, versus a national average score of 100. Like many small college towns, Ithaca has also received accolades for having a high overall quality of life. In 2004, "Cities Ranked and Rated" named Ithaca the best "emerging city" to live in the United States. In 2006, the Internet realty website "Relocate America" named Ithaca the fourth best city in the country to relocate to. In July 2006, Ithaca was listed as one of the "12 Hippest Hometowns for Vegetarians" by "VegNews Magazine" and chosen by "Mother Earth News" as one of the "12 Great Places You've Never Heard Of." In 2012, the city was listed among the 10 best places to retire in the U.S. by U.S. News. Ithaca was also ranked 13th among America's Best College Towns by "Travel + Leisure" in 2013 and ranked as the #1 Best College Town in America in the American Institute for Economic Research's 2013–2014 College Destination Index. In its earliest years, during the frontier days, what is now Ithaca was briefly known by the names "The Flats" and "Sodom," the latter the name of the Biblical city of sin, due to its reputation as a town of "notorious immorality", a place of horse racing, gambling, profanity, Sabbath breaking, and readily available liquor. These names did not last long; Simeon De Witt renamed the town Ithaca in the early 19th century, though nearby Robert H. Treman State Park still contains Lucifer Falls. Today, Ithaca is primarily known for its growing wineries and microbreweries, live music, colleges, and small dairy farms.
https://en.wikipedia.org/wiki?curid=14973
Bodyline Bodyline, also known as fast leg theory bowling, was a cricketing tactic devised by the English cricket team for their 1932–33 Ashes tour of Australia, specifically to combat the extraordinary batting skill of Australia's Don Bradman. A bodyline delivery was one where the cricket ball was bowled at the body of the batsman, in the hope that when he defended himself with his bat, a resulting deflection could be caught by one of several fielders standing close by. Critics considered the tactic intimidating and physically threatening, to the point of being unfair in a game that was supposed to uphold gentlemanly traditions. England's use of a tactic perceived by some as overly aggressive or even unfair ultimately threatened diplomatic relations between the two countries before the situation was calmed. Although no serious injuries arose from any short-pitched deliveries while a leg theory field was actually set, the tactic still led to considerable ill feeling between the two teams, particularly when Australian batsmen suffered actual injuries in separate incidents, which inflamed the watching crowds. Short-pitched bowling continues to be permitted in cricket, even when aimed at the batsman. However, over time, several of the Laws of Cricket were changed to render the bodyline tactic less effective. Bodyline is a tactic devised for and primarily used in the Ashes series between England and Australia in 1932–33. The tactic involved bowling at leg stump or just outside it, pitching the ball short so that it reared at the body of a batsman standing in an orthodox batting position. A ring of fielders ranged on the leg side would catch any defensive deflection from the bat. The batsman's options were to evade the ball through ducking or moving aside, allow the ball to strike his body or play the ball with his bat. The last course carried additional risks. Defensive shots brought few runs and could carry far enough to be caught by the fielders on the leg side; pull and hook shots could be caught on the edge of the field where two men were usually placed for such a shot. Bodyline bowling is intimidatory, and was largely designed as an attempt to curb the prolific scoring of Donald Bradman, although other prolific Australian batsmen such as Bill Woodfull, Bill Ponsford and Alan Kippax were also targeted. Several different terms were used to describe this style of bowling before the name "bodyline" was used. Among the first to use it was the writer and former Australian Test cricketer Jack Worrall; in the match between the English team and an Australian XI, when bodyline was first used in full, he referred to "half-pitched slingers on the body line" and first used it in print after the first Test. Other writers used a similar phrase around this time, but the first use of "bodyline" in print seems to have been by the journalist Hugh Buggy in the Melbourne "Herald", in his report on the first day's play of the first Test. In the 19th century, most cricketers considered it unsportsmanlike to bowl the ball at the leg stump or for batsmen to hit on the leg side. But by the early years of the 20th century, some bowlers, usually slow or medium-paced, used leg theory as a tactic; the ball was aimed outside the line of leg stump and the fielders placed on that side of the field, the object being to test the batsman's patience and force a rash stroke. 
Two English left-arm bowlers, George Hirst in 1903–04 and Frank Foster in 1911–12, bowled leg theory to packed leg side fields in Test matches in Australia; Warwick Armstrong used it regularly for Australia. In the years immediately before the First World War, several bowlers used leg theory in county cricket. When cricket resumed after the war, few bowlers maintained the tactic, which was unpopular with spectators owing to its negativity. Fred Root, the Worcestershire bowler, used it regularly and with considerable success in county cricket. Root later defended the use of leg theory—and bodyline—observing that when bowlers bowled outside off stump, the batsmen were able to let the ball pass them without playing a shot. Some fast bowlers experimented with leg theory before 1932, sometimes accompanying the tactic with short-pitched bowling. In 1925, the Australian Jack Scott first bowled a form of what would later be called bodyline in a state match for New South Wales; his captain Herbie Collins disliked it and would not let him use it again. Other Australian captains were less particular, including Vic Richardson, who asked the South Australian bowler Lance Gun to use it in 1925, and later let Scott use it when he moved to South Australia. Scott repeated the tactics against the MCC in 1928–29. In 1927, in a Test trial match, "Nobby" Clark bowled short to a leg-trap (a cluster of fielders placed close on the leg side). He was representing England in a side captained by Douglas Jardine. In 1928–29, Harry Alexander bowled fast leg theory at an England team, and Harold Larwood briefly used a similar tactic on that same tour in two Test matches. Freddie Calthorpe, the England captain, criticised Learie Constantine's use of short-pitched bowling to a leg side field in a Test match in 1930; one such ball struck Andy Sandham, and Constantine reverted to more conventional tactics only after a complaint from the England team. The Australian cricket team toured England in 1930. Australia won the five-Test series 2–1, and Donald Bradman scored 974 runs at a batting average of 139.14, an aggregate record that still stands. By the time of the next Ashes series of 1932–33, Bradman's average hovered around 100, approximately twice that of all other world-class batsmen. The English cricket authorities felt that new tactics would be required to prevent Bradman from being even more successful on Australian pitches; some critics believed that Bradman could be dismissed by leg-spin, as Walter Robins and Ian Peebles had supposedly caused him problems, and two leg-spinners were included in the English touring party of 1932–33. Gradually, the idea developed that Bradman was vulnerable to pace bowling. In the final Test of the 1930 Ashes series, while he was batting, the pitch became briefly difficult following rain. Bradman appeared uncomfortable facing deliveries which bounced higher than usual at a faster pace, stepping back out of the line of the ball. Former England player and Surrey captain Percy Fender was one who noticed, and the incident was much discussed by cricketers. Given that Bradman scored 232, it was not initially thought that a way to curb his prodigious scoring had been found. When Douglas Jardine later saw film footage of the Oval incident and noticed Bradman's discomfort, according to his daughter he shouted, "I've got it! He's yellow!"
The theory of Bradman's vulnerability developed further when Fender received correspondence from Australia in 1932 describing how Australian batsmen were increasingly moving across the stumps towards the off side to play the ball on the on side. Fender showed these letters to Jardine when it became clear that he was to captain the English team in Australia during the 1932–33 tour, and he also discussed Bradman's discomfort at the Oval. It was also known in England that Bradman had been dismissed for a four-ball duck by the fast bowler Eddie Gilbert, and had looked very uncomfortable. Bradman had also appeared uncomfortable against the pace of Sandy Bell during South Africa's tour of Australia earlier in 1932, when the desperate bowler decided to bowl short to him in his innings of 299 not out at the Adelaide Oval; according to Jack Fingleton, fellow South African Herbie Taylor may have mentioned this to English cricketers in 1932. Fender felt Bradman might be vulnerable to fast, short-pitched deliveries on the line of leg stump. Jardine felt that Bradman was afraid to stand his ground against intimidatory bowling, citing instances in 1930 when he shuffled about, contrary to orthodox batting technique. Jardine's first experience against Australia came when he scored an unbeaten 96 to secure a draw against the 1921 Australian touring side for Oxford University. The tourists were criticised in the press for not allowing Jardine to reach his hundred, but had in fact tried to help him with some easy bowling. There has been speculation that this incident helped develop Jardine's antipathy towards Australians, although Jardine's biographer Christopher Douglas denies this. Jardine's attitude towards Australia hardened after he toured the country in 1928–29. When he scored three consecutive hundreds in the early games, he was frequently jeered by the crowd for slow play; the Australian spectators took an increasing dislike to him, mainly for his superior attitude and bearing, his awkward fielding, and particularly his choice of headwear—a Harlequin cap that was given to successful Oxford cricketers. Although Jardine may simply have worn the cap out of superstition, it conveyed a negative impression to the spectators, and his general demeanour drew one comment of "Where's the butler to carry the bat for you?" By this stage Jardine had developed an intense dislike for Australian crowds. While making his third century at the start of the tour, amid abuse from the spectators, he observed to Hunter Hendry that "All Australians are uneducated, and an unruly mob". After the innings, when teammate Patsy Hendren remarked that the Australian crowds did not like Jardine, he replied "It's fucking mutual". During the tour, Jardine fielded next to the crowd on the boundary, where he was roundly abused and mocked for his awkward fielding, particularly when chasing the ball. On one occasion, he spat towards the crowd as he changed fielding position. Jardine was appointed captain of England for the 1931 season, replacing Percy Chapman, who had led the team in 1930. He defeated New Zealand in his first series, but opinion was divided as to how effective he had been. The following season, he led England again and was appointed to lead the team to tour Australia for the 1932–33 Ashes series. A meeting was arranged between Jardine, the Nottinghamshire captain Arthur Carr, and Carr's two fast bowlers Harold Larwood and Bill Voce at London's Piccadilly Hotel to discuss a plan to combat Bradman.
Jardine asked Larwood and Voce if they could bowl on leg stump and make the ball rise into the body of the batsman. The bowlers agreed they could, and that it might prove effective. Jardine also visited Frank Foster to discuss his field-placing in Australia in 1911–12. Larwood and Voce practised the plan over the remainder of the 1932 season with varying but increasing success, causing several injuries to batsmen. Ken Farnes experimented with short-pitched, leg-theory bowling but was not selected for the tour. Bill Bowes also used short-pitched bowling, notably against Jack Hobbs. The England team which toured Australia in 1932–33 contained four fast bowlers and a few medium pacers; such a heavy concentration on pace was unusual at the time, and drew comment from the Australian press and players, including Bradman. On the journey, Jardine instructed his team on how to approach the tour and discussed tactics with several players, including Larwood; at this stage, he seems to have settled on leg theory, if not full bodyline, as his main tactic. Some players later reported that he told them to hate the Australians in order to defeat them, while instructing them to refer to Bradman as "the little bastard." Upon arrival, Jardine quickly alienated the press and crowds through his manner and approach. In the early matches, although there were instances of the English bowlers pitching the ball short and causing problems with their pace, full bodyline tactics were not used; there was little unusual about the English bowling except the number of fast bowlers. Larwood and Voce were given a light workload in the early matches by Jardine. The English tactics changed in a game against an Australian XI at Melbourne in mid-November, when full bodyline tactics were deployed for the first time. Jardine had left himself out of the English side, which was led instead by Bob Wyatt, who later wrote that the team experimented with a diluted form of bodyline bowling. He reported to Jardine that Bradman, who was playing for the opposition, seemed uncomfortable against the bowling tactics of Larwood, Voce and Bowes. The crowd, press and Australian players were shocked by what they experienced and believed that the bowlers were targeting the batsmen's heads. Bradman adopted unorthodox tactics—ducking, weaving and moving around the crease—which did not meet with universal approval from Australians, and he scored just 36 and 13 in the match. The tactic continued to be used by Voce in the next game, against New South Wales (Larwood and Bowes did not play), in which Jack Fingleton made a century and received several blows in the process. Bradman again failed twice, and had now scored just 103 runs in six innings against the tourists; many Australian fans were worried by his form. Meanwhile, Jardine wrote to tell Fender that his information about the Australian batting technique was correct and that it meant he was having to move more and more fielders onto the leg side: "if this goes on I shall have to move the whole bloody lot to the leg side." The Australian press were shocked and criticised the hostility of Larwood in particular. Some former Australian players joined the criticism, saying the tactics were ethically wrong. But at this stage, not everyone was opposed, and the Australian Board of Control believed the English team had bowled fairly. On the other hand, Jardine came increasingly into disagreement with the tour manager Warner over bodyline as the tour progressed.
Warner hated bodyline but would not speak out against it. He was accused of hypocrisy for not taking a stand on either side, particularly after expressing sentiments at the start of the tour that cricket "has become a synonym for all that is true and honest. To say 'that is not cricket' implies something underhand, something not in keeping with the best ideals ... all who love it as players, as officials or spectators must be careful lest anything they do should do it harm." Bradman missed the first Test at Sydney, worn out by constant cricket and the ongoing argument with the Board of Control. Jardine later wrote that the real reason was that the batsman had suffered a nervous breakdown. The English bowlers used bodyline intermittently in the first match, to the crowd's vocal displeasure, and the Australians lost the game by ten wickets. Larwood was particularly successful, returning match figures of ten wickets for 124 runs. One of the English bowlers, Gubby Allen, refused to bowl with fielders on the leg side, clashing with Jardine over these tactics. The only Australian batsman to make an impact was Stan McCabe, who hooked and pulled everything aimed at his upper body to score 187 not out in four hours from 233 deliveries. Behind the scenes, administrators began to express concerns to each other. Yet the English tactics still did not earn universal disapproval; the former Australian captain Monty Noble praised the English bowling. Meanwhile, Woodfull was being encouraged to retaliate against the short-pitched English attack, not least by members of his own side such as Vic Richardson, or to include pace bowlers such as Eddie Gilbert or Laurie Nash to match the aggression of the opposition. But Woodfull refused to consider doing so. He had to wait until minutes before the game to be confirmed as captain by the selectors. For the second Test, Bradman returned to the team after his newspaper employers released him from his contract. England continued to use bodyline, and Bradman was dismissed by his first ball in the first innings. In the second innings, against the full bodyline attack, he scored an unbeaten century which helped Australia to win the match and level the series at one match each. Critics began to believe bodyline was not quite the threat that had been perceived, and Bradman's reputation, which had suffered slightly with his earlier failures, was restored. However, the pitch was slightly slower than others in the series, and Larwood was suffering from problems with his boots which reduced his effectiveness. The controversy reached its peak during the third Test at Adelaide. On the second day, a Saturday, before a crowd of 50,962 spectators, Australia bowled out England, who had batted through the first day. In the third over of the Australian innings, Larwood bowled to Woodfull. The fifth ball narrowly missed Woodfull's head, and the final ball, delivered short on the line of middle stump, struck Woodfull over the heart. The batsman dropped his bat and staggered away holding his chest, bent over in pain. The England players surrounded Woodfull to offer sympathy, but the crowd began to protest noisily. Jardine called to Larwood: "Well bowled, Harold!" Although the comment was aimed at unnerving Bradman, who was also batting at the time, Woodfull was appalled. Play resumed after a brief delay, once it was certain the Australian captain was fit to carry on, and, since Larwood's over had ended, Woodfull did not have to face the bowling of Allen in the next over.
However, when Larwood was ready to bowl at Woodfull again, play was halted once more while the fielders were moved into bodyline positions, causing the crowd to protest and call abuse at the England team. Jardine subsequently claimed that Larwood had requested the field change, while Larwood said that Jardine had done so. Many commentators condemned the alteration of the field as unsporting, and the angry spectators became extremely volatile. Jardine, although writing that Woodfull could have retired hurt if he was unfit, later expressed his regret at making the field change at that moment. The fury of the crowd was such that a riot might have occurred had another incident taken place, and several writers suggested that the anger of the spectators was the culmination of feelings built up over the two months in which bodyline had developed. During the over, another rising Larwood delivery knocked the bat out of Woodfull's hands. He batted for 89 minutes and was hit a few more times before Allen bowled him for 22. Later in the day, Pelham Warner, one of the England managers, visited the Australian dressing room. He expressed sympathy to Woodfull but was surprised by the Australian's response. According to Warner, Woodfull replied, "I don't want to see you, Mr Warner. There are two teams out there. One is trying to play cricket and the other is not." Fingleton wrote that Woodfull had added, "This game is too good to be spoilt. It is time some people got out of it." Woodfull was usually dignified and quietly spoken, making his reaction surprising to Warner and others present. Warner was so shaken that he was found in tears later that day in his hotel room. There was no play on the following day, Sunday being a rest day, but on Monday morning the exchange between Warner and Woodfull was reported in several Australian newspapers. The players and officials were horrified that a sensitive private exchange had been reported to the press. Leaks to the press were practically unknown in 1933; David Frith notes that discretion and respect were highly prized and that such a leak was "regarded as a moral offence of the first order." Woodfull made it clear that he severely disapproved of the leak, and later wrote that he "always expected cricketers to do the right thing by their team-mates." As Fingleton was the only full-time journalist in the Australian team, suspicion immediately fell on him, although as soon as the story was published, he told Woodfull he was not responsible. Warner offered Larwood a reward of one pound if he could dismiss Fingleton in the second innings; Larwood obliged by bowling him for a duck. Fingleton later claimed that the "Sydney Sun" reporter Claude Corbett had received the information from Bradman; for the rest of their lives, Fingleton and Bradman made claim and counter-claim that the other man was responsible for the leak. The following day, as Australia faced a large deficit on the first innings, Bert Oldfield played a long innings in support of Bill Ponsford, who scored 85. In the course of the innings, the English bowlers used bodyline against him, and he faced a number of short-pitched deliveries but took several fours from Larwood to move to 41. Having just conceded a four, Larwood bowled fractionally shorter and slightly slower. Oldfield attempted to hook but lost sight of the ball and edged it onto his temple; the ball fractured his skull.
Oldfield staggered away and fell to his knees, and play stopped as Woodfull came onto the pitch and the angry crowd jeered and shouted, once more reaching the point where a riot seemed likely. Several English players considered arming themselves with stumps should the crowd come onto the field. The ball which injured Oldfield was bowled to a conventional, non-bodyline field; Larwood immediately apologised, but Oldfield said that it was his own fault before he was helped back to the dressing room, and play continued. Jardine later secretly sent a telegram of sympathy to Oldfield's wife and arranged for presents to be given to his young daughters. At the end of the fourth day's play of the third Test match, the Australian Board of Control sent a cable to the Marylebone Cricket Club (MCC), cricket's ruling body and the club that selected the England team, in London, protesting against bodyline bowling and describing it as unsportsmanlike. Not all Australians, including the press and players, believed that the cable should have been sent, particularly immediately following a heavy defeat. The suggestion of unsportsmanlike behaviour was deeply resented by the MCC, and was one of the worst accusations that could have been levelled at the team at the time. Additionally, members of the MCC believed that the Australians had over-reacted to the English bowling. The MCC took some time to draft a reply rejecting the accusation. At this point, the remainder of the series was under threat. Jardine was shaken by the events and by the hostile reactions to his team. Stories appeared in the press, possibly leaked by the disenchanted Nawab of Pataudi, about fights and arguments between the England players. Jardine offered to stop using bodyline if the team did not support him, but after a private meeting (not attended by Jardine or either of the team managers) the players released a statement fully supporting the captain and his tactics. Even so, Jardine would not have played in the fourth Test without the withdrawal of the unsportsmanlike accusation. The Australian Board met to draft a reply cable, which was sent on 30 January, indicating that they wished the series to continue and offering to postpone consideration of the fairness of bodyline bowling until after the series. The MCC's reply, on 2 February, suggested that continuing the series would be impossible unless the accusation of unsporting behaviour was withdrawn. The situation escalated into a diplomatic incident. Figures high up in both the British and Australian governments saw bodyline as potentially fracturing an international relationship that needed to remain strong. The Governor of South Australia, Alexander Hore-Ruthven, who was in England at the time, expressed his concern to the British Secretary of State for Dominion Affairs, James Henry Thomas, that the dispute could significantly damage trade between the nations. The standoff was settled when the Australian prime minister, Joseph Lyons, met with members of the Australian Board and outlined to them the severe economic hardships that could be caused in Australia if the British public boycotted Australian trade. Following considerable discussion and debate in the English and Australian press, the Australian Board sent a cable to the MCC which, while maintaining its opposition to bodyline bowling, stated "We do not regard the sportsmanship of your team as being in question". Even so, correspondence between the Australian Board and the MCC continued for almost a year. Voce missed the fourth Test of the series, being replaced by a leg spinner, Tommy Mitchell.
Larwood continued to use bodyline, but he was the only bowler in the team using the tactic; even so, he used it less frequently than usual and seemed less effective in the high temperatures and humidity. England won the game by eight wickets, thanks in part to an innings of 83 by Eddie Paynter, who had been admitted to hospital with tonsillitis but left in order to bat when England were struggling in their innings. Voce returned for the final Test, but neither he nor Allen were fully fit, and despite the use of bodyline tactics, Australia scored 435 at a rapid pace, aided by several dropped catches. Australia included a fast bowler for this final game, Harry Alexander, who bowled some short deliveries but was not allowed by his captain, Woodfull, to use many fielders on the leg side. England built a lead of 19, but their tactics in Australia's second innings were disrupted when Larwood left the field with an injured foot. Hedley Verity, a spinner, claimed five wickets to bowl Australia out, and England won by eight wickets, taking the series by four Tests to one. Bodyline continued to be bowled occasionally in the 1933 English season—most notably by Nottinghamshire, who had Carr, Voce and Larwood in their team—giving the English crowds their first chance to see the tactic for themselves. Ken Farnes, the Cambridge University fast bowler, also bowled it in the University Match, hitting a few Oxford batsmen. Jardine himself had to face bodyline bowling in a Test match. The West Indian cricket team toured England in 1933, and, in the second Test at Old Trafford, Jackie Grant, their captain, decided to try bodyline. He had two fast bowlers, Manny Martindale and Learie Constantine. Facing bodyline for the first time, England suffered early, falling to 134 for 4, with Wally Hammond being hit on the chin, though he recovered to continue his innings. Then Jardine himself faced Martindale and Constantine. Jardine never flinched. With Les Ames finding himself in difficulties, Jardine said, "You get yourself down this end, Les. I'll take care of this bloody nonsense." He moved right back to the bouncers, standing on tiptoe, and met them with a dead bat, sometimes playing the ball one-handed for more control. While the Old Trafford pitch was not as suited to bodyline as the hard Australian wickets, Martindale took 5 for 73, but Constantine took only 1 for 55. Jardine himself made 127, his only Test century. In the West Indian second innings, Clark bowled bodyline back to the West Indians, taking 2 for 64. In the end the match was drawn, but it played a large part in turning English opinion against bodyline. For the first time, "The Times" used the word bodyline without inverted commas or the qualification "so-called". "Wisden" also said that "most of those watching it for the first time must have come to the conclusion that, while strictly within the law, it was not nice." In 1934, Bill Woodfull led Australia back to England on a tour that had been under a cloud after the tempestuous cricket diplomacy of the previous bodyline series. Jardine had retired from international cricket in early 1934 after captaining a fraught tour of India, and under England's new captain, Bob Wyatt, agreements were put in place so that bodyline would not be used. However, there were occasions when the Australians felt that their hosts had overstepped the mark with tactics resembling bodyline.
In a match between the Australians and Nottinghamshire, Voce, one of the bodyline practitioners of 1932–33, employed the strategy with the wicket-keeper standing to the leg side and took 8/66. In the second innings, Voce repeated the tactic late in the day, in fading light, against Woodfull and Bill Brown. Of his 12 balls, 11 were no lower than head height. Woodfull told the Nottinghamshire administrators that, if Voce's leg-side bowling was repeated, his men would leave the field and return to London. He further said that Australia would not return to the country in the future. The following day, Voce was absent, ostensibly due to a leg injury. Already angered by the absence of Larwood, the Nottinghamshire faithful heckled the Australians all day. Australia had previously and privately complained that some pacemen had strayed beyond the agreement in the Tests. As a direct consequence of the 1932–33 tour, the MCC introduced a new rule to the laws of cricket for the 1935 English cricket season. Originally, the MCC had hoped that captains would ensure that the game was played in the correct spirit, and passed a resolution that bodyline bowling would breach this spirit. When this proved insufficient, the MCC passed a law declaring "direct attack" bowling unfair and making it the responsibility of the umpires to identify and stop it. In 1957, the laws were altered to prevent more than two fielders standing behind square on the leg side; the intention was to prevent negative bowling tactics whereby off spinners and slow inswing bowlers aimed at the leg stump of batsmen with fielders concentrated on the leg side. However, an indirect effect was to make bodyline fields impossible to implement. Later law changes, under the heading of "Intimidatory Short Pitched Bowling", also restricted the number of "bouncers" which might be bowled in an over. Nevertheless, the tactic of intimidating the batsman is still used to an extent that would have been shocking in 1933, although it is less dangerous now because today's players wear helmets and generally far more protective gear. The West Indies teams of the 1980s, who regularly fielded a bowling attack comprising some of the best fast bowlers in cricket history, were perhaps the most feared exponents. The English players and management consistently referred to their tactic as "fast leg theory", considering it a variant of the established and unobjectionable leg theory tactic; the inflammatory term "bodyline" was coined and perpetuated by the Australian press. The terminology reflected differences in understanding, as neither the English public nor the MCC—the governing body of English cricket—could understand why the Australians were complaining about what they perceived as a commonly used tactic. Some concluded that the Australian cricket authorities and public were sore losers. Of the four fast bowlers in the tour party, Gubby Allen was a voice of dissent in the English camp, refusing to bowl short on the leg side and writing several letters home critical of Jardine, although he did not express his views publicly in Australia. A number of other players, while maintaining a united front in public, also deplored bodyline in private. The amateurs Bob Wyatt (the vice-captain), Freddie Brown and the Nawab of Pataudi opposed it, as did Wally Hammond and Les Ames among the professionals.
During the season, Woodfull's physical courage and stoic, dignified leadership won him many admirers. He flatly refused to employ retaliatory tactics and did not publicly complain even though he and his men were repeatedly hit. Jardine, however, insisted that his tactic was not designed to cause injury and that he was leading his team in a sportsmanlike and gentlemanly manner, arguing that it was up to the Australian batsmen to play their way out of trouble. It was subsequently revealed that several of the players had private reservations, but they did not express them publicly at the time. Following the 1932–33 series, several authors, including many of the players involved, released books expressing various points of view about bodyline. Many argued that it was a scourge on cricket and must be stamped out, while some did not see what the fuss was about. The series has been described as the most controversial period in Australian cricket history, and was voted the most important Australian moment by a panel of Australian cricket identities. The MCC asked Harold Larwood to sign an apology to them for his bowling in Australia, making his selection for England again conditional upon it. Larwood was furious at the notion, pointing out that he had been following the orders of his upper-class captain, and that any blame should lie there. He refused, never played for England again, and was vilified in his own country. Douglas Jardine always defended his tactics; in "In Quest of the Ashes", the book he wrote about the tour, he described allegations that the England bowlers directed their attack with the intention of causing physical harm as stupid and patently untruthful. The immediate effect of the law change which banned bodyline in 1935 was to make commentators and spectators sensitive to the use of short-pitched bowling; bouncers became exceedingly rare, and bowlers who delivered them were practically ostracised. This attitude ended after the Second World War, and among the first teams to make extensive use of short-pitched bowling was the Australian team captained by Bradman between 1946 and 1948. Other teams soon followed. Outside the sport, there were significant consequences for Anglo-Australian relations, which remained strained until the outbreak of World War II made cooperation paramount. Business between the two countries was adversely affected as citizens of each country avoided goods manufactured in the other. Australian commerce also suffered in British colonies in Asia: the "North China Daily News" published a pro-bodyline editorial, denouncing Australians as sore losers. An Australian journalist reported that several business deals in Hong Kong and Shanghai were lost by Australians because of local reactions. English immigrants in Australia found themselves shunned and persecuted by locals, and Australian visitors to England were treated similarly. In 1934–35, a statue of Prince Albert in Sydney was vandalised, an ear being knocked off and the word "BODYLINE" painted on it. Both before and after World War II, numerous satirical cartoons and comedy skits were written, mostly in Australia, based on events of the bodyline tour. Generally, they poked fun at the English. In 1984, Australia's Network Ten produced a television mini-series titled "Bodyline", dramatising the events of the 1932–33 English tour of Australia.
It starred Gary Sweet as Don Bradman, Hugo Weaving as Douglas Jardine, Jim Holt as Harold Larwood, Rhys McConnochie as Pelham Warner, and Frank Thring as Jardine's mentor Lord Harris. The series took some liberties with historical accuracy for the sake of drama, including a depiction of angry Australian fans burning a British flag at the Sydney Cricket Ground, an event that was never documented. Larwood, having emigrated to Australia in 1950, received several threatening and obscene phone calls after the series aired. The series was widely and strongly attacked by the surviving players for its inaccuracy and sensationalism. To this day, the bodyline tour remains one of the most significant events in the history of cricket, and it retains a strong hold on the consciousness of many cricket followers. In a poll of cricket journalists, commentators, and players in 2004, the bodyline tour was ranked the most important event in cricket history.
https://en.wikipedia.org/wiki?curid=18053
Laws of infernal dynamics The laws of infernal dynamics are an adage about the cursedness of the universe. Attributed to science fiction author David Gerrold, the laws are as follows: (1) an object in motion will always be headed in the wrong direction; (2) an object at rest will always be in the wrong place; and (3) the energy required to change either of these states will always be more than you wish to expend, but never so much as to make the task totally impossible. The laws are a parody of the first and second of Newton's laws of motion in the spirit of Murphy's law. Newton's first law of motion has here been split into two parts, forming the first two laws. Newton's third law of motion is left unparodied, though a separate adage states that "for every action, there is an equal and opposite criticism."
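For reference, the Newtonian originals being parodied can be stated compactly (standard physics, not part of Gerrold's adage): the first law says that a body's velocity is constant unless a net force acts on it, $\sum \mathbf{F} = 0 \Rightarrow d\mathbf{v}/dt = 0$; the second law relates force to the rate of change of momentum, $\mathbf{F} = d\mathbf{p}/dt$, which for constant mass reduces to $\mathbf{F} = m\mathbf{a}$; and the third law, the one left unparodied, states that $\mathbf{F}_{AB} = -\mathbf{F}_{BA}$.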
https://en.wikipedia.org/wiki?curid=18056
Louise Erdrich Louise Erdrich (born Karen Louise Erdrich, June 7, 1954) is an American author, a writer of novels, poetry, and children's books featuring Native American characters and settings. She is an enrolled member of the Turtle Mountain Band of Chippewa Indians, a federally recognized tribe of the Anishinaabe (also known as Ojibwe and Chippewa). Erdrich is widely acclaimed as one of the most significant writers of the second wave of the Native American Renaissance. In 2009, her novel "The Plague of Doves" was a finalist for the Pulitzer Prize for Fiction and received an Anisfield-Wolf Book Award. In November 2012, she received the National Book Award for Fiction for her novel "The Round House". She was awarded the Library of Congress Prize for American Fiction at the National Book Festival in September 2015, and she received a 2013 Alex Award. She was married to the author Michael Dorris, and the two collaborated on a number of works; the couple separated in 1995. She is also the owner of Birchbark Books, a small independent bookstore in Minneapolis that focuses on Native American literature and the Native community in the Twin Cities. Erdrich was born on June 7, 1954, in Little Falls, Minnesota. She was the oldest of seven children born to Ralph Erdrich, a German-American, and Rita (née Gourneau), a Chippewa woman of half Ojibwe and half French descent. Both parents taught at a boarding school in Wahpeton, North Dakota, set up by the Bureau of Indian Affairs. Erdrich's maternal grandfather, Patrick Gourneau, served as tribal chairman of the federally recognized Turtle Mountain Band of Chippewa Indians for many years. Though not raised on a reservation, she often visited relatives there. She was raised "with all the accepted truths" of Catholicism. When Erdrich was a child, her father paid her a nickel for every story she wrote. Her sister Heidi became a poet and also lives in Minnesota; she publishes under the name Heid E. Erdrich. Another sister, Lise Erdrich, has written children's books and collections of fiction and essays. Erdrich attended Dartmouth College from 1972 to 1976. She was part of the first class of women admitted to the college and earned an A.B. in English. During her first year, Erdrich met Michael Dorris, an anthropologist, writer, and then-director of the new Native American Studies program. While attending Dorris's class, she began to look into her own ancestry, which inspired her to draw from it for her literary work, such as poems, short stories, and novels. During that time, she worked as a lifeguard, a waitress, a researcher for films, and an editor for the Boston Indian Council newspaper "The Circle." In 1978, Erdrich enrolled in a Master of Arts program at Johns Hopkins University in Baltimore, Maryland. She earned the Master of Arts in the Writing Seminars in 1979. Erdrich later published some of the poems and stories she wrote while in the M.A. program. She returned to Dartmouth as a writer-in-residence. In 1982, Erdrich's story "The World's Greatest Fisherman" won $5,000 in the Nelson Algren fiction competition. She expanded the story into the novel "Love Medicine" (1984), which won the National Book Critics Circle Award for Fiction. It is the only debut novel ever to receive that honor. Erdrich later turned "Love Medicine" into a tetralogy that includes "The Beet Queen" (1986), "Tracks" (1988), and "The Bingo Palace" (1994). She has written 28 books in all, including fiction, non-fiction, poetry and children's books.
Her latest novel, "The Night Watchman", was published in 2020 and was inspired by her maternal grandfather's life. After graduating from Dartmouth, Erdrich remained in contact with Michael Dorris. He attended one of her poetry readings, was impressed with her work, and developed an interest in working with her. Although Erdrich and Dorris were on opposite sides of the world, with Erdrich in Boston and Dorris in New Zealand for field research, the two began to collaborate on short stories. The pair's literary partnership led them to a romantic relationship. They married in 1981 and raised six children: three whom Dorris had adopted as a single parent and three biological children (Persia, Pallas, Madeline, Reynold Abel, Sava and Aza Marion). Reynold Abel suffered from fetal alcohol syndrome and in 1991, at age 23, he was killed when he was hit by a car. In 1995 their son Jeffrey Sava accused Dorris of committing child abuse; in 1997, after Dorris's death, their adopted daughter Madeline claimed that Dorris had sexually abused her and that Erdrich had neglected to stop the abuse. Dorris and Erdrich separated in 1995, and Dorris died by suicide in 1997. In his will, he named only his biological children with Erdrich. In 2001, at age 47, Erdrich gave birth to a daughter, Azure, fathered by a Native American man whom Erdrich declines to identify publicly. She discusses her pregnancy with Azure, and Azure's father, in her 2003 non-fiction book, "Books and Islands in Ojibwe Country". She uses the name "Tobasonakwut" to refer to him. He is described as a traditional healer and teacher, eighteen years Erdrich's senior and a married man. In a number of publications, Tobasonakwut Kinew, who died in 2012, is referred to as Erdrich's partner and the father of Azure. When asked in an interview if writing is a lonely life for her, Erdrich replied, "Strangely, I think it is. I am surrounded by an abundance of family and friends and yet I am alone with the writing. And that is perfect." Erdrich lives in Minneapolis. In 1975, Erdrich won the Academy of American Poets Prize. In 1979 she wrote "The World's Greatest Fisherman", a short story about June Kashpaw, a divorced Ojibwe woman whose death by hypothermia brought her relatives home to a fictional North Dakota reservation for her funeral. She wrote this while "barricaded in the kitchen." At her husband's urging, she submitted it to the Nelson Algren Short Fiction prize in 1982, which it won, and eventually it became the first chapter of her debut novel, "Love Medicine", published by Holt, Rinehart, and Winston in 1984. "When I found out about the prize I was living on a farm in New Hampshire near the college I’d attended," Erdrich told an interviewer. "I was nearly broke and driving a car with bald tires. My mother knitted my sweaters, and all else I bought at thrift stores ... The recognition dazzled me. Later, I became friends with Studs Terkel and Kay Boyle, the judges, toward whom I carry a lifelong gratitude. This prize made an immense difference in my life." "Love Medicine" won the 1984 National Book Critics Circle Award. It has also been featured on the National Advanced Placement Test for Literature. In the early years of their marriage, Erdrich and Michael Dorris often collaborated on their work, saying they plotted the books together, "talk about them before any writing is done, and then we share almost every day, whatever it is we've written", but "the person whose name is on the books is the one who's done most of the primary writing."
They got started with "domestic, romantic stuff" published under the shared pen name of "Milou North" (Michael + Louise + where they lived). Around the publication of "Love Medicine", Erdrich produced her first collection of poems, "Jacklight" (1984), which highlights the struggles between Native and non-Native cultures and celebrates family and ties of kinship in autobiographical meditations, monologues, and love poetry. She incorporates elements of Ojibwe myths and legends. Erdrich continues to write poems, which have been included in her collections. Erdrich is best known as a novelist, and has published a dozen award-winning and best-selling novels. She followed "Love Medicine" with "The Beet Queen" (1986), which continued her technique of using multiple narrators and expanded the fictional reservation universe of "Love Medicine" to include the nearby town of Argus, North Dakota. The action of the novel takes place mostly before World War II. Leslie Marmon Silko accused Erdrich's "The Beet Queen" of being more concerned with postmodern technique than with the political struggles of Native peoples. "Tracks" (1988) goes back to the early 20th century and the formation of the reservation. It introduces the trickster figure of Nanapush, who owes a clear debt to the Ojibwe figure Nanabozho. "Tracks" shows early clashes between traditional ways and the Roman Catholic Church. "The Bingo Palace" (1994), set in the 1980s, describes the effects of a casino and a factory on the reservation community. "Tales of Burning Love" (1997) finishes the story of Sister Leopolda, a recurring character from all the previous books, and introduces a new set of European-American people into the reservation universe. "The Antelope Wife" (1998), Erdrich's first novel after her divorce from Dorris, was the first of her novels to be set outside the continuity of the previous books. She subsequently returned to the reservation and nearby towns, and has published five novels since 1998 dealing with events in that fictional area. Among these are "The Last Report on the Miracles at Little No Horse" (2001) and "The Master Butchers Singing Club" (2003); both novels have geographic and character connections with "The Beet Queen". In 2009, Erdrich was a Pulitzer Prize finalist for "The Plague of Doves", which focuses on the historical lynching of four Native people wrongly accused of murdering a Caucasian family, and the effect of this injustice on the current generations; "The Last Report on the Miracles at Little No Horse" had earlier been a National Book Award finalist. She also writes for younger audiences: she has written a children's picture book, "Grandmother's Pigeon", and her children's book "The Birchbark House" was a National Book Award finalist. She continued the series with "The Game of Silence", winner of the Scott O'Dell Award for Historical Fiction, and "The Porcupine Year". In addition to fiction and poetry, Erdrich has published nonfiction. "The Blue Jay's Dance" (1995) is about her pregnancy and the birth of her first child. "Books and Islands in Ojibwe Country" traces her travels among the lakes of northern Minnesota and Ontario following the birth of her last daughter. Erdrich and her two sisters have hosted writers' workshops on the Turtle Mountain Indian Reservation in North Dakota. Her heritage from both parents is influential in her life and prominent in her work.
Although many of Erdrich's works explore her Native American heritage, her novel "The Master Butchers Singing Club" (2003) featured the European, specifically German, side of her ancestry. The novel includes the story of a World War I veteran of the German Army and is set in a small North Dakota town. The novel was a finalist for the National Book Award. Erdrich's complexly interwoven series of novels has drawn comparisons with William Faulkner's Yoknapatawpha novels. Like Faulkner, Erdrich created multiple narratives in the same fictional area across successive novels and combined the tapestry of local history with current themes and modern consciousness. Her bookstore hosts literary readings and other events. Erdrich's new works are read there, and events celebrate the works and careers of other writers as well, particularly local Native writers. Erdrich and her staff consider Birchbark Books to be a "teaching bookstore". In addition to books, the store sells Native art, traditional medicines, and Native American jewelry. Wiigwaas Press, a small nonprofit publisher founded by Erdrich and her sister, is affiliated with the store.
https://en.wikipedia.org/wiki?curid=18057
Latin literature Latin literature includes the essays, histories, poems, plays, and other writings in the Latin language. The beginning of Latin literature dates to 240 BC, when the first stage play was performed in Rome. Latin literature would flourish for the next six centuries. The classical era of Latin literature can be roughly divided into the following periods: Early Latin literature, the Golden Age, the Imperial Period and Late Antiquity. Latin was the language of the ancient Romans, but it was also the "lingua franca" of Western Europe throughout the Middle Ages, so Latin literature includes not only Roman authors like Cicero, Virgil, Ovid and Horace, but also European writers after the fall of the Empire, from religious writers like Aquinas (1225–1274) to secular writers like Francis Bacon (1561–1626), Baruch Spinoza (1632–1677), and Isaac Newton (1642–1727). Formal Latin literature began in 240 BC, when a Roman audience saw a Latin version of a Greek play. The adaptor was Livius Andronicus, a Greek who had been brought to Rome as a prisoner of war in 272 BC. Andronicus also translated Homer's Greek epic the "Odyssey" into an old type of Latin verse called "Saturnian". The first Latin poet to write on a Roman theme was Gnaeus Naevius during the 3rd century BC. He composed an epic poem about the first Punic War, in which he had fought. Naevius's dramas were mainly reworkings of Greek originals, but he also created tragedies based on Roman myths and history. Other epic poets followed Naevius. Quintus Ennius wrote a historical epic, the "Annals" (soon after 200 BC), describing Roman history from the founding of Rome to his own time. He adopted the Greek dactylic hexameter, which became the standard verse form for Roman epics. He also became famous for his tragic dramas. In this field, his most distinguished successors were Marcus Pacuvius and Lucius Accius. These three writers rarely used episodes from Roman history. Instead, they wrote Latin versions of tragic themes that the Greeks had already handled. But even when they copied the Greeks, their translations were not straightforward replicas. Only fragments of their plays have survived. Considerably more is known about early Latin comedy, as 26 Early Latin comedies are extant – 20 by Plautus and six by Terence. These men modeled their comedies on Greek plays known as New Comedy, but they treated the plots and wording of the originals freely. Plautus scattered songs through his plays and increased the humor with puns and wisecracks, plus comic actions by the actors. Terence's plays were more polite in tone, dealing with domestic situations. His works provided the chief inspiration for French and English comedies of the 17th century AD, and even for modern American comedy. The prose of the period is best known through "On Agriculture" (160 BC) by Cato the Elder. Cato also wrote the first Latin history of Rome and of other Italian cities. He was the first Roman statesman to put his political speeches in writing as a means of influencing public opinion. Early Latin literature ended with Gaius Lucilius, who created a new kind of poetry in his 30 books of "Satires" (2nd century BC). He wrote in an easy, conversational tone about books, food, friends, and current events.
Traditionally, the height of Latin literature has been assigned to the period from 81 BC to AD 17, although recent scholarship has questioned the assumptions that privileged the works of this period over both earlier and later works. This period is usually said to have begun with the first known speech of Cicero and ended with the death of Ovid. Cicero has traditionally been considered the master of Latin prose. The writing he produced from about 80 BC until his death in 43 BC exceeds that of any other Latin author whose work survives in quantity and in variety of genre and subject matter, and it possesses unsurpassed stylistic excellence. Cicero's many works can be divided into four groups: (1) letters, (2) rhetorical treatises, (3) philosophical works, and (4) orations. His letters provide detailed information about an important period in Roman history and offer a vivid picture of public and private life among the Roman governing class. Cicero's works on oratory are our most valuable Latin sources for ancient theories on education and rhetoric. His philosophical works were the basis of moral philosophy during the Middle Ages. His speeches inspired many European political leaders and the founders of the United States. Julius Caesar and Sallust were outstanding historical writers of Cicero's time. Caesar wrote commentaries on the Gallic and civil wars in a straightforward style to justify his actions as a general. Sallust adopted an abrupt, pointed style in his historical works. He wrote brilliant descriptions of people and their motives. The birth of lyric poetry in Latin occurred during the same period. The short love lyrics of Catullus are noted for their emotional intensity. Catullus also wrote poems that attacked his enemies. In his longer poems, he suggested images in rich, delicate language. Contemporary with Catullus, Lucretius expounded the Epicurean philosophy in a long poem, "De rerum natura". One of the most learned writers of the period was Marcus Terentius Varro. Called "the most learned of the Romans" by Quintilian, he wrote about a remarkable variety of subjects, from religion to poetry. But only his writings on agriculture and the Latin language are extant in their complete form. The emperor Augustus took a personal interest in the literary works produced during his years of power from 27 BC to AD 14. This period is sometimes called the Augustan Age of Latin literature. Virgil published his pastoral "Eclogues", the "Georgics", and the "Aeneid", an epic poem describing the events that led to the creation of Rome. Virgil told how the Trojan hero Aeneas became the ancestor of the Roman people, and in doing so provided divine justification for Roman rule over the world. Although Virgil died before he could put the finishing touches on his poem, it was soon recognized as the greatest work of Latin literature. Virgil's friend Horace wrote "Epodes", "Odes", "Satires", and "Epistles". The perfection of the "Odes" in content, form, and style has charmed readers for hundreds of years. The "Satires" and "Epistles" discuss ethical and literary problems in an urbane, witty manner. Horace's "Art of Poetry", probably published as a separate work, greatly influenced later poetic theories. It stated the basic rules of classical writing as the Romans understood and used them. After Virgil died, Horace was Rome's leading poet. The Latin elegy reached its highest development in the works of Tibullus, Propertius, and Ovid. Most of this poetry is concerned with love.
Ovid also wrote the "Fasti", which describes Roman festivals and their legendary origins. Ovid's greatest work, the "Metamorphoses", weaves various myths into a fast-paced, fascinating story. Ovid was a witty writer who excelled in creating lively and passionate characters. The "Metamorphoses" was the best-known source of Greek and Roman mythology throughout the Middle Ages and the Renaissance, and it inspired many poets, painters, and composers. In prose, Livy produced a history of the Roman people in 142 books. Only 35 survive, but they are a major source of information on Rome. From the death of Augustus in AD 14 until about 200, Roman authors emphasized style and tried new and startling ways of expression. During the reign of Nero from 54 to 68, the Stoic philosopher Seneca wrote a number of dialogues and letters on such moral themes as mercy and generosity. In his "Natural Questions", Seneca analyzed earthquakes, floods, and storms. Seneca's tragedies greatly influenced the growth of tragic drama in Europe. His nephew Lucan wrote the "Pharsalia" (about 60), an epic poem describing the civil war between Caesar and Pompey. The "Satyricon" (about 60) by Petronius was the first picaresque Latin novel. Only fragments of the complete work survive. It describes the adventures of various low-class characters in absurd, extravagant, and dangerous situations, often in the world of petty crime. Epic poems included the "Argonautica" of Gaius Valerius Flaccus, the "Thebaid" of Statius, and the "Punica" of Silius Italicus. At the hands of Martial, the epigram achieved the stinging quality still associated with it. Juvenal brilliantly satirized vice. The historian Tacitus painted an unforgettably dark picture of the early empire in his "Histories" and "Annals", both written in the early 2nd century. His contemporary Suetonius wrote biographies of the 12 Roman rulers from Julius Caesar through Domitian. The letters of Pliny the Younger described Roman life of the period. Quintilian composed the most complete work on ancient education that we possess. Important works from the 2nd century include the "Attic Nights" of Aulus Gellius, a collection of anecdotes and reports of literary discussions among his friends, and the letters of the orator Marcus Cornelius Fronto to Marcus Aurelius. The most famous work of the period was the "Metamorphoses", also called "The Golden Ass", by Apuleius. This novel concerns a young man who is accidentally changed into a donkey. The story is filled with colorful tales of love and witchcraft. Pagan Latin literature showed a final burst of vitality from the late 3rd century to the 5th century. Ammianus Marcellinus in history, Quintus Aurelius Symmachus in oratory, and Ausonius and Rutilius Claudius Namatianus in poetry all wrote with great talent. The "Mosella" by Ausonius demonstrated a modernism of feeling that signals the end of classical literature as such. At the same time, other men laid the foundations of Christian Latin literature during the 4th and 5th centuries. They included the church fathers Augustine of Hippo, Jerome, and Ambrose, and the first great Christian poet, Prudentius. During the Renaissance there was a return to the Latin of classical times, for this reason called Neo-Latin. This purified language continued to be used as the "lingua franca" among the learned throughout Europe, with the great works of Descartes, Francis Bacon, and Baruch Spinoza all being composed in Latin.
Among the last important books written primarily in Latin prose were the works of Isaac Newton (d. 1727), Swedenborg (d. 1772), Linnaeus (d. 1778), Euler (d. 1783), and Gauss (d. 1855), and Latin remains a necessary skill for modern readers of great early modern works of linguistics, literature, and philosophy. Several of the leading English poets wrote in Latin as well as English. Milton's 1645 Poems are one example, but there were also Thomas Campion, George Herbert and Milton's colleague Andrew Marvell. Some indeed wrote chiefly in Latin and were valued for the elegance and Classicism of their style. Examples were Anthony Alsop and Vincent Bourne, who were noted for the ingenious way they adapted their verse to describing details of life in the 18th century while never departing from the purity of Latin diction. One of the last to be noted for the quality of his Latin verse well into the 19th century was Walter Savage Landor. Much Latin writing reflects the Romans' interest in rhetoric, the art of speaking and persuading. Public speaking had great importance for educated Romans because most of them wanted successful political careers. When Rome was a republic, effective speaking often determined who would be elected or what bills would pass. After Rome became an empire, the ability to impress and persuade people by the spoken word lost much of its importance. But training in rhetoric continued to flourish and to affect styles of writing. A large part of rhetoric consists of the ability to present a familiar idea in a striking new manner that attracts attention. Latin authors became masters of this art of variety. Latin is a highly "inflected" language, with many grammatical forms for various words. As a result, it can be used with a pithiness and brevity unknown in English: the single word "amabantur", for example, packs person, number, tense, and voice into one form meaning "they were being loved". It also lends itself to elaboration, because its tight syntax holds even the longest and most complex sentence together as a logical unit. Latin can be used with striking conciseness, as in the works of Sallust and Tacitus, or with wide, sweeping phrases, as in the works of Livy and the speeches of Cicero. Latin lacks the rich poetic vocabulary that marks Greek poetry. Some earlier Latin poets tried to make up for this deficiency by creating new compound words, as the Greeks had done. But Roman writers seldom invented words; except in epic poetry, they tended to use a familiar vocabulary, giving it poetic value by imaginative combinations of words and by rich sound effects. Rome's leading poets had great technical skill in the choice and arrangement of language. They also had an intimate knowledge of the Greek poets, whose themes appear in almost all Roman literature. Latin moves with impressive dignity in the writings of Ovid, Cicero, or Virgil. It reflects the seriousness and sense of responsibility that characterized the ruling class of Rome during the great years of the republic. But the Romans could also relax and allow what Horace called the "Italian vinegar" in their systems to pour forth in wit and satire.
https://en.wikipedia.org/wiki?curid=18058
Leonard Bloomfield Leonard Bloomfield (April 1, 1887 – April 18, 1949) was an American linguist who led the development of structural linguistics in the United States during the 1930s and the 1940s. His influential textbook "Language", published in 1933, presented a comprehensive description of American structural linguistics. He made significant contributions to Indo-European historical linguistics, the description of Austronesian languages, and the description of languages of the Algonquian family. Bloomfield's approach to linguistics was characterized by its emphasis on the scientific basis of linguistics, adherence to behaviorism, especially in his later work, and emphasis on formal procedures for the analysis of linguistic data. The influence of Bloomfieldian structural linguistics declined in the late 1950s and 1960s as the theory of generative grammar developed by Noam Chomsky came to predominate.

Bloomfield was born in Chicago, Illinois, on April 1, 1887, to Jewish parents, Sigmund Bloomfield and Carola Buber Bloomfield. His father immigrated to the United States as a child in 1868; the original family name "Blumenfeld" was changed to Bloomfield after their arrival. In 1896 his family moved to Elkhart Lake, Wisconsin, where he attended elementary school, but returned to Chicago for secondary school. His uncle Maurice Bloomfield was a prominent linguist at Johns Hopkins University, and his aunt Fannie Bloomfield Zeisler was a well-known concert pianist.

Bloomfield attended Harvard College from 1903 to 1906, graduating with the A.B. degree. He subsequently began graduate work at the University of Wisconsin, taking courses in German and Germanic philology, in addition to courses in other Indo-European languages. A meeting with Indo-Europeanist Eduard Prokosch, a faculty member at the University of Wisconsin, convinced Bloomfield to pursue a career in linguistics. In 1908 Bloomfield moved to the University of Chicago, where he took courses in German and Indo-European philology with Francis A. Wood and Carl Darling Buck. His doctoral dissertation in Germanic historical linguistics, "A semasiologic differentiation in Germanic secondary ablaut", was supervised by Wood, and he graduated in 1909. He undertook further studies at the University of Leipzig and the University of Göttingen in 1913 and 1914 with leading Indo-Europeanists August Leskien and Karl Brugmann, as well as Hermann Oldenberg, a specialist in Vedic Sanskrit. Bloomfield also studied at Göttingen with Sanskrit specialist Jacob Wackernagel, and considered both Wackernagel and the Sanskrit grammatical tradition of rigorous grammatical analysis associated with Pāṇini as important influences on both his historical and descriptive work. Further training in Europe was a condition for promotion at the University of Illinois from Instructor to the rank of Assistant Professor.

Bloomfield was Instructor in German at the University of Cincinnati, 1909–1910; Instructor in German at the University of Illinois at Urbana–Champaign, 1910–1913; Assistant Professor of Comparative Philology and German, also at the University of Illinois, 1913–1921; Professor of German and Linguistics at the Ohio State University, 1921–1927; Professor of Germanic Philology at the University of Chicago, 1927–1940; and Sterling Professor of Linguistics at Yale University, 1940–1949.
During the summer of 1925 Bloomfield worked as Assistant Ethnologist with the Geological Survey of Canada in the Canadian Department of Mines, undertaking linguistic field work on Plains Cree; this position was arranged by Edward Sapir, who was then Chief of the Division of Anthropology, Victoria Museum, Geological Survey of Canada, Canadian Department of Mines.

Bloomfield was one of the founding members of the Linguistic Society of America. In 1924, along with George M. Bolling (Ohio State University) and Edgar Sturtevant (Yale University), he formed a committee to organize the creation of the Society, and drafted the call for the Society's foundation. He contributed the lead article to the inaugural issue of the Society's journal "Language", and was President of the Society in 1935. He taught at the Society's summer Linguistic Institutes from 1938 to 1941, with the 1938–1940 Institutes being held in Ann Arbor, Michigan, and the 1941 Institute in Chapel Hill, North Carolina.

Bloomfield's earliest work was in historical Germanic studies, beginning with his dissertation and continuing with a number of papers on Indo-European and Germanic phonology and morphology. His post-doctoral studies in Germany further strengthened his expertise in the Neogrammarian tradition, which still dominated Indo-European historical studies. Throughout his career, and particularly in his early work, Bloomfield emphasized the Neogrammarian principle of regular sound change as a foundational concept in historical linguistics. Bloomfield's work in Indo-European beyond his dissertation was limited to an article on palatal consonants in Sanskrit and one article on the Sanskrit grammatical tradition associated with Pāṇini, in addition to a number of book reviews. Bloomfield made extensive use of Indo-European materials to explain historical and comparative principles in both of his textbooks, "An introduction to language" (1914) and his seminal "Language" (1933). In his textbooks he selected Indo-European examples that supported the key Neogrammarian hypothesis of the regularity of sound change, and emphasized a sequence of steps essential to success in comparative work: (a) appropriate data in the form of texts which must be studied intensively and analysed; (b) application of the comparative method; (c) reconstruction of proto-forms. He further emphasized the importance of dialect studies where appropriate, and noted the significance of sociological factors such as prestige, and the impact of meaning. In addition to regular linguistic change, Bloomfield also allowed for borrowing and analogy. It is argued that Bloomfield's Indo-European work had two broad implications: "He stated clearly the theoretical bases for Indo-European linguistics" and "he established the study of Indo-European languages firmly within general linguistics."

As part of his training with leading Indo-Europeanists in Germany in 1913 and 1914, Bloomfield studied the Sanskrit grammatical tradition originating with Pāṇini, who lived in northwestern India during the fifth or fourth century BC. Pāṇini's grammar is characterized by its extreme thoroughness and explicitness in accounting for Sanskrit linguistic forms, and by its complex context-sensitive, rule-based generative structure. Bloomfield noted that "Pāṇini gives the formation of every inflected, compounded, or derived word, with an exact statement of the sound-variations (including accent) and of the meaning".
In a letter to Algonquianist Truman Michelson, Bloomfield noted, "My models are Pāṇini and the kind of work done in Indo-European by my teacher, Professor Wackernagel of Basle." Pāṇini's systematic approach to analysis includes components for: (a) forming grammatical rules, (b) an inventory of sounds, (c) a list of verbal roots organized into sublists, and (d) a list of classes of morphs. Bloomfield's approach to key linguistic ideas in his textbook "Language" reflects the influence of Pāṇini in his treatment of basic concepts such as "linguistic form", "free form", and others. Similarly, Pāṇini is the source for Bloomfield's use of the terms "exocentric" and "endocentric" used to describe compound words. Concepts from Pāṇini are found in "Eastern Ojibwa", published posthumously in 1958, in particular his use of the concept of a morphological zero, a morpheme that has no overt realization. Pāṇini's influence is also present in Bloomfield's approach to determining parts of speech (Bloomfield uses the term "form-classes") in both "Eastern Ojibwa" and in the later "Menomini language", published posthumously in 1962.

While at the University of Illinois Bloomfield undertook research on Tagalog, an Austronesian language spoken in the Philippines. He carried out linguistic field work with Alfredo Viola Santiago, who was an engineering student at the university from 1914 to 1917. The results were published as "Tagalog texts with grammatical analysis", which includes a series of texts dictated by Santiago in addition to an extensive grammatical description and analysis of every word in the texts. Bloomfield's work on Tagalog, from the beginning of field research to publication, took no more than two years. His study of Tagalog has been described as "the best treatment of any Austronesian language ... The result is a description of Tagalog which has never been surpassed for completeness, accuracy, and wealth of exemplification." Bloomfield's only other publication on an Austronesian language was an article on the syntax of Ilocano, based upon research undertaken with a native speaker of Ilocano who was a student at Yale University. This article has been described as a "tour de force, for it covers in less than seven pages the entire taxonomic syntax of Ilocano".

Bloomfield's work on Algonquian languages had both descriptive and comparative components. He published extensively on four Algonquian languages: Fox, Cree, Menominee, and Ojibwe, publishing grammars, lexicons, and text collections. Bloomfield used the materials collected in his descriptive work to undertake comparative studies leading to the reconstruction of Proto-Algonquian, with an early study reconstructing the sound system of Proto-Algonquian, and a subsequent more extensive paper refining his phonological analysis and adding extensive historical information on general features of Algonquian grammar. Bloomfield undertook field research on Cree, Menominee, and Ojibwe, and analysed the material in previously published Fox text collections. His first Algonquian research, beginning around 1919, involved study of text collections in the Fox language that had been published by William Jones and Truman Michelson. Working through the texts in these collections, Bloomfield excerpted grammatical information to create a grammatical sketch of Fox. A lexicon of Fox based on his excerpted material was published posthumously.
Bloomfield undertook field research on Menominee in the summers of 1920 and 1921, with further brief field research in September 1939 and intermittent visits from Menominee speakers in Chicago in the late 1930s, in addition to correspondence with speakers during the same period. Material collected by Morris Swadesh in 1937 and 1938, often in response to specific queries from Bloomfield, supplemented his information. Significant publications include a collection of texts, a grammar and a lexicon (both published posthumously), in addition to a theoretically significant article on Menomini phonological alternations. Bloomfield undertook field research in 1925 among Plains Cree speakers in Saskatchewan at the Sweet Grass reserve, and also at the Star Blanket reserve, resulting in two volumes of texts and a posthumous lexicon. He also undertook brief field work on Swampy Cree at The Pas, Manitoba. Bloomfield's work on Swampy Cree provided data to support the predictive power of the hypothesis of exceptionless phonological change. Bloomfield's initial research on Ojibwe was through study of texts collected by William Jones, in addition to nineteenth century grammars and dictionaries. During the 1938 Linguistic Society of America Linguistic Institute held at the University of Michigan in Ann Arbor, Michigan, he taught a field methods class with Andrew Medler, a speaker of the Ottawa dialect who was born in Saginaw, Michigan, but spent most of his life on Walpole Island, Ontario. The resulting grammatical description, transcribed sentences, texts, and lexicon were published posthumously in a single volume. In 1941 Bloomfield worked with Ottawa dialect speaker Angeline Williams at the 1941 Linguistic Institute held at the University of North Carolina in Chapel Hill, North Carolina, resulting in a posthumously published volume of texts.
https://en.wikipedia.org/wiki?curid=18060
Leather Leather is a durable and flexible material created by tanning animal rawhide and skins. The most common raw material is cattle hide. It can be produced at manufacturing scales ranging from artisan to modern industrial scale. Leather is used to make a variety of articles, including footwear, automobile seats, clothing, bags, book bindings, fashion accessories, and furniture. It is produced in a wide variety of types and styles and decorated by a wide range of techniques. The earliest record of leather artifacts dates back to 2200 BC. Leather usage has come under criticism in the 20th and 21st centuries from animal-rights groups (e.g. PETA). These groups claim that buying or wearing leather is unethical because producing leather requires animals to be killed.

The leather manufacturing process is divided into three fundamental subprocesses: preparatory stages, tanning, and crusting. A further subprocess, finishing, can be added into the leather process sequence, but not all leathers receive finishing. The preparatory stages are when the hide is prepared for tanning. Preparatory stages may include soaking, hair removal, liming, deliming, bating, bleaching, and pickling.

Tanning is a process that stabilizes the proteins, particularly collagen, of the raw hide to increase the thermal, chemical and microbiological stability of the hides and skins, making them suitable for a wide variety of end applications. The principal difference between raw and tanned hides is that raw hides dry out to form a hard, inflexible material that, when rewetted, will putrefy, while tanned material dries to a flexible form that does not become putrid when rewetted. Many tanning methods and materials exist. The typical process sees tanners load the hides into a drum and immerse them in a tank that contains the tanning "liquor". The hides soak while the drum slowly rotates about its axis, and the tanning liquor slowly penetrates through the full thickness of the hide. Once the process achieves even penetration, workers slowly raise the liquor's pH in a process called basification, which fixes the tanning material to the leather. The more tanning material fixed, the higher the leather's hydrothermal stability and shrinkage temperature resistance.

Crusting is a process that thins and lubricates leather. It often includes a coloring operation. Chemicals added during crusting must be fixed in place. Crusting culminates with a drying and softening operation, and may include splitting, shaving, dyeing, whitening or other methods. For some leathers, tanners apply a surface coating, called "finishing". Finishing operations can include oiling, brushing, buffing, coating, polishing, embossing, glazing, or tumbling, among others.

Leather can be oiled to improve its water resistance. This currying process after tanning supplements the natural oils remaining in the leather itself, which can be washed out through repeated exposure to water. Frequent oiling of leather, with mink oil, neatsfoot oil, or a similar material, keeps it supple and improves its lifespan dramatically.

Tanning processes largely differ in which chemicals are used in the tanning liquor; common types include vegetable tanning, chrome tanning, and aldehyde tanning. In general, leather is produced in grades ranging from full-grain and top-grain through corrected-grain and split leather. Today, most leather is made of cattle hides, which constitute about 65% of all leather produced. Other animals that are used include sheep (about 13%), goats (about 11%), and pigs (about 10%).
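The raw-material shares just quoted lend themselves to a simple back-of-the-envelope calculation. The short Python sketch below is illustrative only: the species percentages come from the paragraph above, but the total tonnage is an invented example figure, not a number from this article.

    # Illustrative split of a hypothetical hide supply across the
    # approximate species shares quoted above (cattle 65%, sheep 13%,
    # goats 11%, pigs 10%). The 1,000,000 t total is a made-up example.
    SHARES = {"cattle": 0.65, "sheep": 0.13, "goats": 0.11, "pigs": 0.10}

    def breakdown(total_tonnes: float) -> dict:
        """Return tonnes of hide contributed by each species."""
        return {species: total_tonnes * share for species, share in SHARES.items()}

    for species, tonnes in breakdown(1_000_000).items():
        print(f"{species:>7}: {tonnes:,.0f} t")

Note that the four shares sum to 99%, which is consistent with the statement below that all other animals together account for only a fraction of a percent of production.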
Obtaining accurate figures from around the world is difficult, especially for areas where the skin may be eaten. The other animals mentioned below together constitute only a fraction of a percent of total leather production.

Horse hides are used to make particularly durable leathers. Shell cordovan is a horse leather made not from the outer skin but from an under layer found only in equine species, called the shell. It is prized for its mirror-like finish and anti-creasing properties. Lamb and deerskin are used for soft leather in more expensive apparel. Deerskin is widely used in work gloves and indoor shoes. Reptilian skins, such as alligator, crocodile, and snake, are noted for their distinct patterns that reflect the scales of their species. This has led to hunting and farming of these species in part for their skins. Kangaroo leather is used to make items that must be strong and flexible. It is the material most commonly used in bullwhips. Some motorcyclists favor kangaroo leather for motorcycle leathers because of its light weight and abrasion resistance. Kangaroo leather is also used for falconry jesses, soccer footwear, and boxing speed bags. Although originally raised for their feathers in the 19th century, ostriches are now more popular for both meat and leather. Ostrich leather has a characteristic "goose bump" look because of the large follicles where the feathers grew. Different processes produce different finishes for many applications, including upholstery, footwear, automotive products, accessories, and clothing.

In Thailand, stingray leather is used in wallets and belts. Stingray leather is tough and durable. The leather is often dyed black and covered with tiny round bumps in the natural pattern of the back ridge of the animal. These bumps are then usually dyed white to highlight the decoration. Stingray rawhide is also used as grips on Chinese swords, Scottish basket-hilted swords, and Japanese katanas. Stingray leather is also used for high-abrasion areas in motorcycle racing leathers (especially in gloves, where its high abrasion resistance helps prevent wear-through in the event of an accident). For a given thickness, fish leather is typically much stronger than other leathers due to its criss-crossed fibers.

Leather production has environmental impacts, most notably from the carbon footprint of cattle rearing, the chemicals used in tanning, and the waste water released during processing. One estimate of the carbon footprint of leather goods is 0.51 kg of CO2 equivalent per £1 of output at 2010 retail prices, or 0.71 kg CO2eq per £1 of output at 2010 industry prices. One ton of hide or skin generally produces 20 to 80 m3 of waste water, including chromium levels of 100–400 mg/l, sulfide levels of 200–800 mg/l, high levels of fat and other solid wastes, and notable pathogen contamination. Producers often add pesticides to protect hides during transport. With solid wastes representing up to 70% of the wet weight of the original hides, the tanning process represents a considerable strain on water treatment installations. Leather biodegrades slowly, taking 25 to 40 years to decompose; by comparison, vinyl and petrochemical-derived materials take 500 or more years to decompose.

Tanning is especially polluting in countries where environmental regulations are lax, such as in India, the world's third-largest producer and exporter of leather. To give an example of an efficient pollution prevention system, chromium loads per produced tonne are generally abated from 8 kg to 1.5 kg. VOC emissions are typically reduced from 30 kg/t to 2 kg/t in a properly managed facility.
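To put the effluent figures just quoted on a common scale, here is a minimal Python sketch, offered as an illustration only: the per-tonne volume and concentration ranges come from the paragraph above, and the unit conversions are standard. It estimates the mass of chromium carried in the waste water from processing one tonne of hide.

    # Effluent arithmetic using the ranges quoted above:
    # 20-80 m^3 of waste water per tonne of hide, with chromium at
    # 100-400 mg/l. Since 1 m^3 = 1,000 l, 1 mg/l equals 1 g/m^3.

    def contaminant_kg(m3_waste_water: float, mg_per_litre: float) -> float:
        """Mass (kg) of one contaminant in the given waste-water volume."""
        litres = m3_waste_water * 1_000
        return litres * mg_per_litre / 1e6  # mg -> kg

    low = contaminant_kg(20, 100)    # 2.0 kg of chromium per tonne of hide
    high = contaminant_kg(80, 400)   # 32.0 kg of chromium per tonne of hide
    print(f"chromium per tonne of hide: {low:.1f} to {high:.1f} kg")

The untreated chromium load of 8 kg per tonne quoted above falls inside this 2 to 32 kg range, so the article's figures are mutually consistent in order of magnitude.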
A review of the total pollution load decrease achievable, according to the United Nations Industrial Development Organization, provides precise data on the abatement achievable through industrially proven low-waste advanced methods, while noting, "even though the chrome pollution load can be decreased by 94% on introducing advanced technologies, the minimum residual load 0.15 kg/t raw hide can still cause difficulties when using landfills and composting sludge from wastewater treatment on account of the regulations currently in force in some countries."

In Kanpur, the self-proclaimed "Leather City of World", with 10,000 tanneries as of 2011 and a city of three million on the banks of the Ganges, pollution levels were so high that, despite an industry crisis, the pollution control board decided to shut down 49 high-polluting tanneries out of 404 in July 2009. In 2003, for instance, the main tanneries' effluent disposal unit was dumping 22 tonnes of chromium-laden solid waste per day in the open. In the Hazaribagh neighborhood of Dhaka in Bangladesh, chemicals from tanneries end up in Dhaka's main river. Besides the environmental damage, the health of both local factory workers and the end consumer is also negatively affected. After approximately 15 years of ignoring high court rulings, the government shut down more than 100 tanneries in the neighborhood over the weekend of 8 April 2017.

Because treating effluent costs more than discharging it untreated, illegal dumping persists as a way to save on costs. For instance, in Croatia in 2001, proper pollution abatement cost US$70–100 per ton of raw hides processed, against $43/t for irresponsible behavior. In November 2009, one of Uganda's main leather making companies was caught directly dumping waste water into a wetland adjacent to Lake Victoria.

Enzymes like proteases, lipases, and amylases have an important role in the soaking, dehairing, degreasing, and bating operations of leather manufacturing. Proteases are the most commonly used enzymes in leather production. The enzyme must not damage or dissolve collagen or keratin, but should hydrolyze casein, elastin, albumin, globulin-like proteins, and nonstructural proteins that are not essential for leather making. This process is called bating. Lipases are used in the degreasing operation to hydrolyze fat particles embedded in the skin. Amylases are used to soften skin, to bring out the grain, and to impart strength and flexibility to the skin, though these enzymes are now rarely used.

The natural fibers of leather break down with the passage of time. Acidic leathers are particularly vulnerable to red rot, which causes powdering of the surface and a change in consistency. Damage from red rot is aggravated by high temperatures and relative humidities. Although it is chemically irreversible, treatments can add handling strength and prevent disintegration of red rotted leather. Exposure to long periods of low relative humidity (below 40%) can cause leather to become desiccated, irreversibly changing its fibrous structure. Chemical damage can also occur from exposure to environmental factors, including ultraviolet light, ozone, acid from sulfurous and nitrous pollutants in the air, or through a chemical action following any treatment with tallow or oil compounds. Both oxidation and chemical damage occur faster at higher temperatures. Various treatments are available, such as conditioners. Saddle soap is used for cleaning, conditioning, and softening leather.
Leather shoes are widely conditioned with shoe polish.

Due to its excellent resistance to abrasion and wind, leather found a use in rugged occupations. The enduring image of a cowboy in leather chaps gave way to the leather-jacketed and leather-helmeted aviator. When motorcycles were invented, some riders took to wearing heavy leather jackets to protect from road rash and wind blast; some also wear chaps or full leather pants to protect the lower body. Leather's flexibility allows it to be formed and shaped into balls and protective gear. Consequently, many sports use equipment made from leather, such as baseball gloves and the ball used in American football.

Leather fetishism is the name popularly used to describe a fetishistic attraction to people wearing leather, or in certain cases, to the garments themselves. Many rock groups (particularly heavy metal and punk groups in the 1980s) are well known for wearing leather clothing. Extreme metal bands (especially black metal bands) and goth rock groups wear extensive black leather clothing. Leather has become less common in the punk community over the last three decades, as there is opposition to the use of leather from punks who support animal rights. Many cars and trucks come with optional or standard leather or "leather faced" seating.

In countries with significant populations of individuals observing religions which place restrictions on material choices, leather vendors typically clarify the kinds of leather used in their products. For example, leather shoes may bear a label identifying the animal from which the leather came. This labeling helps Muslims avoid accidentally purchasing pigskin and Hindus avoid cattle skin; it facilitates religious observance and respect. Many vegetarian Hindus do not use any kind of leather. Such taboos increase the demand for religiously neutral leathers such as ostrich and deer. Judaism forbids the comfort of wearing leather shoes on Yom Kippur, Tisha B'Av, and during mourning; see also tefillin and Torah scrolls, which are traditionally made from leather parchment. Jainism prohibits the use of leather, since it is obtained by killing animals.

Many forms of artificial leather have been developed, usually involving polyurethane or vinyl coatings applied to a cloth backing. Many names and brands for such artificial leathers exist, including "pleather", a portmanteau of "plastic leather", and the brand name Naugahyde. Other alternatives include cultured leather, which is lab-grown using cell culture methods, as well as mushroom-based materials and gelatin-based textiles made by upcycling meat industry waste.
https://en.wikipedia.org/wiki?curid=18062
Long Parliament The Long Parliament was an English Parliament which lasted from 1640 until 1660. It followed the fiasco of the Short Parliament, which had convened for only three weeks during the spring of 1640 after an 11-year parliamentary absence. In September 1640, King Charles I issued writs summoning a parliament to convene on 3 November 1640. He intended it to pass financial bills, a step made necessary by the costs of the Bishops' Wars in Scotland. The Long Parliament received its name from the fact that, by Act of Parliament, it stipulated it could be dissolved only with the agreement of the members, and those members did not agree to its dissolution until 16 March 1660, after the English Civil War and near the close of the Interregnum.

The parliament sat from 1640 until 1648, when it was purged by the New Model Army. After this point, the remaining members of the House of Commons became known as the Rump Parliament. In the chaos following the death of Oliver Cromwell in 1658, General George Monck allowed the members barred in 1648 to retake their seats, so that they could pass the necessary legislation to allow the Restoration and dissolve the Long Parliament. This cleared the way for a new parliament to be elected, which was known as the Convention Parliament. Some key members of the Long Parliament, such as Sir Henry Vane the Younger and General Edmond Ludlow, were barred from the final acts of the Long Parliament. They claimed the parliament had not been legally dissolved, its final votes being a procedural irregularity (the words used at the time were "device" and "conspiracy") engineered by General George Monck to ensure the restoration of King Charles II of England. On the Restoration, the general was rewarded with a dukedom.

The Long Parliament later became a key moment in Whig histories of the seventeenth century. American Whig historian Charles Wentworth Upham believed the Long Parliament comprised "a set of the greatest geniuses for government that the world ever saw embarked together in one common cause", whose actions produced an effect which, at the time, made their country the wonder and admiration of the world, and is still felt and exhibited far beyond the borders of that country, in the progress of reform and the advancement of popular liberty. He believed its republican principles made it a precursor to the American Revolutionary War.

Charles found himself unable to fund the Bishops' Wars without taxes; in April 1640, Parliament was recalled for the first time in eleven years, but when it refused to vote taxes without concessions, he dissolved it after only three weeks. The humiliating terms imposed by the Scots Covenanters after a second defeat forced him to hold fresh elections in November, which produced a large majority for the opposition, led by John Pym. Parliament was almost immediately presented with a series of "Root and Branch petitions". These demanded the expulsion of bishops from the Church of England, reflecting widespread concern at the growth of "Catholic practices" within the church. Charles' willingness to make war on the Protestant Scots, but not to assist his exiled nephew Charles Louis, led to fears he was about to sign an alliance with Spain, a view shared by the experienced Venetian and French ambassadors. This meant ending arbitrary rule was important not just for England, but for the Protestant cause in general. Since direct attacks on the monarch were considered unacceptable, the usual route was to prosecute his 'evil counsellors.'
Doing so showed that even if the king was above the law, his subordinates were not, and he could not protect them; the intention was to make others think twice about their actions. Their main target was the Earl of Strafford, former Lord Deputy of Ireland; aware of this, he urged Charles to use military force to seize the Tower of London, and arrest any MP or peer guilty of 'treasonable correspondence with the Scots.' While Charles hesitated, Pym struck first; on 11 November Strafford was impeached, arrested, and sent to the Tower. Other targets, including John Finch, fled abroad; Laud was impeached in December 1640, and joined Strafford in the Tower.

At his trial in March, Strafford was indicted on 28 counts of 'arbitrary and tyrannical government'. Even if these charges were proved, it was not clear they constituted a crime against the king, the legal definition of treason. If he went free, his opponents would replace him in the Tower, and so Pym immediately moved a Bill of Attainder, asserting Strafford's guilt and ordering his execution. Although Charles announced he would not sign the attainder, on 21 April, 204 MPs voted in favour, 59 against, while 250 abstained. On 1 May, rumours of a military plot to release Strafford from the Tower led to widespread demonstrations in London, and on the 7th, the Lords voted for execution by 51 to 9. Claiming to fear for his family's safety, Charles signed the death warrant on 10 May, and Strafford was beheaded two days later.

This seemed to provide a basis for a programme of constitutional reforms, and Parliament voted Charles an immediate grant of £400,000. The Triennial Act required Parliament to meet at least every three years, and if the King failed to issue proper summons, the members could assemble on their own. Levying taxes without the consent of Parliament was declared unlawful, including Ship money and forced loans, while the Star Chamber and High Commission courts were abolished. These reforms were supported by many who later became Royalists, including Edward Hyde, Viscount Falkland, and Sir John Strangways. Where they differed from Pym and his supporters was in their refusal to accept that Charles would not keep his commitments, despite evidence to the contrary. He had reneged on commitments made in the 1628 Petition of Right, and had agreed terms with the Scots in 1639 while preparing another attack. Both he and Henrietta Maria openly told foreign ambassadors any concessions were temporary, and would be retrieved by force if needed.

In this period, 'true religion' and 'good government' were seen as one and the same. Although the vast majority believed a 'well-ordered' monarchy was a divinely mandated requirement, they disagreed on what 'well-ordered' meant, and who held ultimate authority in clerical affairs. Royalists generally supported a Church of England governed by bishops, appointed by, and answerable to, the king; most Parliamentarians were Puritans, who believed the king was answerable to the leaders of the church, appointed by their congregations. However, 'Puritan' meant anyone who wanted to reform, or 'purify', the Church of England, and covered many different opinions. Some simply objected to Laud's reforms; Presbyterians like Pym wanted to reform the Church of England along the same lines as the Church of Scotland. Independents believed any state church was wrong, while many were also political radicals like the Levellers.
Presbyterians in England and Scotland gradually came to see them as more dangerous than the Royalists; an alliance between these three groups led to the Second English Civil War in 1648. While it is not clear there was a majority for removing bishops from the Church, their presence in the Lords became increasingly resented due to their role in blocking many of these reforms.

Tensions came to a head in October 1641 with the outbreak of the Irish Rebellion; both Charles and Parliament supported raising troops to suppress it, but neither trusted the other with their control. On 22 November, the Commons passed the Grand Remonstrance by 159 votes to 148, and presented it to Charles on 1 December. The first half listed over 150 perceived 'misdeeds', the second proposed solutions, including church reform and Parliamentary control over the appointment of royal ministers. In the Militia Ordinance, Parliament asserted control over the appointment of army and navy commanders; Charles rejected the Grand Remonstrance and refused to assent to the Militia Ordinance. It was at this point that moderates like Hyde decided Pym and his supporters had gone too far, and switched sides.

Increasing unrest in London culminated between 23 and 29 December in widespread riots in Westminster, while the hostility of the crowd meant the bishops stopped attending the Lords. On 30 December, Charles induced John Williams, Archbishop of York, and eleven other bishops to sign a complaint disputing the legality of any laws passed by the Lords during their exclusion. This was viewed by the Commons as inviting the king to dissolve Parliament; all twelve were arrested. On 3 January, Charles ordered his Attorney-General to bring charges of treason against Edward Montagu, 2nd Earl of Manchester, and the Five Members of the Commons: Pym, John Hampden, Denzil Holles, Arthur Haselrig, and William Strode. This confirmed fears he intended to use force to shut down Parliament, but the members were forewarned and evaded arrest. Soon after, Charles left London, accompanied by many Royalist MPs and members of the Lords, a major tactical mistake. By doing so, he abandoned the largest arsenal in England, the commercial power of the City of London, and guaranteed his opponents majorities in both houses.

In February, Parliament passed the Clergy Act, excluding bishops from the Lords; Charles approved it, since he had already decided to retrieve all such concessions by assembling an army. In March 1642, Parliament decreed that its own Parliamentary Ordinances were valid laws, even without royal assent. The Militia Ordinance gave it control of the local militia, or Trained Bands; those in London were the most strategically critical, because they could protect Parliament from armed intervention by any soldiers Charles had near the capital. Charles declared Parliament in rebellion and began raising an army of his own by reviving the Commissions of Array. At the end of 1642, he set up his court at Oxford, where the Royalist MPs formed the Oxford Parliament.

In 1645 Parliament reaffirmed its determination to fight the war to a finish. It passed the Self-denying Ordinance, by which all members of either House of Parliament resigned their military commands, and formed the New Model Army under the command of Fairfax and Cromwell. The New Model Army soon destroyed Charles' armies, and by early 1646, he was on the verge of defeat.
Charles left Oxford in disguise on 27 April; on 6 May, Parliament received a letter from David Leslie, commander of the Scottish forces besieging Newark, announcing he had the king in custody. Charles ordered the Royalist governor, Lord Belasyse, to surrender Newark, and the Scots withdrew to Newcastle, taking the king with them. This marked the end of the First English Civil War.

Many Parliamentarians had assumed military defeat would force Charles to compromise, which proved a fundamental misunderstanding of his character. When Prince Rupert suggested in August 1645 the war was lost, Charles responded that he was correct, if seen from a military viewpoint, but 'God will not suffer rebels and traitors to prosper'. This deeply held conviction meant he refused any substantial concessions. Aware of divisions among his opponents, he used his position as king of both Scotland and England to deepen them, assuming he was essential to any government; while true in 1646, by 1648 significant elements believed it was pointless to negotiate with someone who could not be trusted to keep any agreement.

Unlike England, where Presbyterians were a minority, the 1639 and 1640 Bishops' Wars resulted in a Covenanter, or Presbyterian, government and a Presbyterian kirk, or Church of Scotland. The Scots wanted to preserve these achievements; the 1643 Solemn League and Covenant was driven by their concern at the implications for this settlement if Charles defeated Parliament. By 1646, they viewed Charles as a lesser threat than the Independents, who opposed their demand for a unified, Presbyterian church of England and Scotland; Cromwell claimed he would fight, rather than agree to it. In July, the Scots and English commissioners presented Charles with the Newcastle Propositions, which he rejected; his refusal to negotiate created a dilemma for the Covenanters. Even if Charles agreed to a Presbyterian union, there was no guarantee it would be approved by Parliament. Keeping him was too dangerous; as subsequent events proved, whether Royalist or Covenanter, many Scots supported his retention. In February 1647, they agreed a financial settlement, handed Charles over to Parliament, and retreated into Scotland.

In England, Parliament was struggling with the economic cost of the war, a poor 1646 harvest, and a recurrence of the plague. The Presbyterian faction had the support of the London Trained Bands, the Army of the Western Association, leaders like Rowland Laugharne in Wales, and elements of the Royal Navy. By March 1647, the New Model was owed more than £3 million in unpaid wages; Parliament ordered it to Ireland, stating only those who agreed to go would be paid. When the army's representatives demanded full payment for all in advance, it was ordered disbanded. The New Model refused to disband; in early June, Charles was removed from his Parliamentary guards and taken to Thriplow, where he was presented with the Army Council's terms. Though these were more lenient than the Newcastle Propositions, Charles rejected them; on 26 July, pro-Presbyterian rioters burst into Parliament, demanding he be invited to London. In early August, Fairfax and the New Model took control of the city; this re-established command authority over the rank and file, a process completed at Corkbush in November. In late November, the king escaped from his guards and made his way to Carisbrooke Castle.
In April 1648, the Engagers became a majority in the Scottish Parliament; in return for restoring him to the English throne, Charles agreed to impose Presbyterianism in England for three years, and suppress the Independents. His refusal to take the Covenant himself split the Scots; the Kirk Party did not trust Charles, objected to an alliance with English and Scots Royalists, and denounced the Engagement as 'sinful.' After two years of constant negotiation, and refusal to compromise, Charles finally had the pieces in place for a rising by Royalists, supported by some English Presbyterians, and Scots Covenanters. However, lack of co-ordination meant the Second English Civil War was quickly suppressed.

Divisions emerged between various factions, culminating in Pride's Purge on 6 December 1648, when, under the orders of Oliver Cromwell's son-in-law Henry Ireton, Colonel Pride physically barred and arrested 41 of the members of Parliament. Many of the excluded members were Presbyterians. Henry Vane the Younger removed himself from Parliament in protest at this unlawful action by Ireton. He was not party to the execution of Charles I, although Cromwell was. In the wake of the ejections, the remnant, the "Rump Parliament", arranged for the trial and execution of Charles I on 30 January 1649. It was also responsible for setting up the Commonwealth of England in 1649.

Henry Vane the Younger was persuaded to rejoin Parliament on 17 February 1649, and a Council of State was installed, into whose hands the executive government of the nation was committed. Sir Henry Vane was appointed a member of the Council. Cromwell took great pains to induce Vane to accept the appointment, and after many consultations he so far prevailed in satisfying Vane of the purity of his principles in reference to the Commonwealth as to overcome his reluctance again to enter the public service. Sir Henry Vane was for some time President of the Council, and, as Treasurer and Commissioner for the Navy, he had almost the exclusive direction of that branch of public service.

Cromwell "well knew that while the Long Parliament, that noble company, who had fought the great battle of liberty from the beginning, remained in session, and such men as Vane were enabled to mingle in its deliberations, it would be utterly useless for him to think of executing his purposes" (to set up a Protectorate or dictatorship). Henry Vane was working on a Reform Bill. Cromwell knew "that if the Reform Bill should be suffered to pass, and a House of Commons be convened, freely elected on popular principles, and constituting a full and fair and equal representation, it would be impossible ever after to overthrow the liberties of the people, or break down the government of the country". According to General Edmond Ludlow (an unapologetic supporter of the Good Old Cause who lived in exile after the Restoration), this reform bill "provided for an equal representation of the people, disfranchised several boroughs which had ceased to have a population in proportion to representation, fixed the number of the House at four hundred". It would have "secured to England and to the rest of the world the blessings of republican institutions, two centuries earlier than can now be expected".

"Harrison, who was in Cromwell's confidence on this occasion, rose to debate the motion, merely in order to gain time.
Word was carried to Cromwell, that the House were on the point of putting the final motion; and Colonel Ingoldby hastened to Whitehall to tell him, that, if he intended to do anything decisive, he had no time to lose". Once the troops were in place Cromwell entered the assembly. He was dressed in a suit of plain black, with grey worsted stockings. He took his seat, and appeared to be listening to the debate. As the Speaker was about to rise to put the question, Cromwell whispered to Harrison, "Now is the time; I must do it". As he rose, his countenance became flushed and blackened by the terrific passions which the crisis awakened. With the most reckless violence of manner and language, he abused the character of the House; and, after the first burst of his denunciations had passed, suddenly changing his tone, he exclaimed, "You think, perhaps, this is not parliamentary language; I know it; nor are you to expect such from me". He then advanced out into the middle of the hall, and walked to and fro, like a man beside himself. In a few moments he stamped upon the floor, the doors flew open, and a file of musketeers entered. As they advanced, Cromwell exclaimed, looking over the House, "You are no Parliament; I say you are no Parliament; begone, and give place to honester men".

"While this extraordinary scene was transacting, the members, hardly believing their own ears and eyes, sat in mute amazement, horror, and pity of the maniac traitor who was storming and raving before them. At length Vane rose to remonstrate, and call him to his senses; but Cromwell, instead of listening to him, drowned his voice, repeating with great vehemence, and as though with the desperate excitement of the moment, "Sir Harry Vane! Sir Harry Vane! Good Lord deliver me from Sir Harry Vane!" He then seized the records, snatched the bill from the hands of the clerk, drove the members out at the point of the bayonet, locked the doors, put the key in his pocket, and returned to Whitehall.

Oliver Cromwell forcibly disbanded the Rump in 1653 when it seemed to be planning to perpetuate itself rather than call new elections as had been agreed. It was followed by Barebone's Parliament and then the First, Second and Third Protectorate Parliaments. After Richard Cromwell, who had succeeded his father Oliver as Lord Protector in 1658, was effectively deposed by an officers' coup in April 1659, the officers re-summoned the Rump Parliament to sit. It convened on 7 May 1659, but after five months in power it again clashed with the army (led by John Lambert) and was again forcibly dissolved on 13 October 1659.

Once again, Sir Henry Vane was the leading catalyst for the republican cause in opposition to force by the military. The persons connected with the administration as it existed at the death of Oliver were, of course, interested in keeping things as they were. Also, it was necessary for someone to assume the reins of government until the public will could be ascertained and brought into exercise. Henry Vane was elected to Parliament at Kingston upon Hull, but the certificate was given to another. Vane proceeded to Bristol, entered the canvass, and received the majority; again the certificate was given to another. Finally Vane proceeded to Whitchurch in Hampshire, was elected a third time, and was this time seated in Parliament. Vane managed the debates on behalf of the House of Commons. One of Vane's speeches effectively ended Richard Cromwell's career; it swept everything before it.
The Rump Parliament which Oliver Cromwell had dispersed in 1653 was once more summoned to assemble, by a declaration from the Council of Officers dated 6 May 1659. Edmond Ludlow made several attempts to reconcile the army and parliament in this period but was ultimately unsuccessful. Parliament ordered the regiments of Colonel Morley and Colonel Moss to march to Westminster for their security, and sent for the rest of the troops that were about London to draw down to them also with all speed. In October 1659, Colonel Lambert and various subordinate members of the army, acting in the military interest, resisted Colonel Morley and others who were defending the Rump Parliament. Colonel Lambert, Major Grimes, and Colonel Sydenham eventually gained their points, and placed guards both by land and water to hinder the members of Parliament from approaching the House. Colonel Lambert subsequently acquitted himself to Henry Vane the Younger, Edmond Ludlow and the "Committee of Safety", an instrument of the Wallingford House party acting under their misdirection. Nevertheless, Parliament was closed once again by military force until such time as the army and the leaders of Parliament could effect a resolution. Rule then passed to an unelected "Committee of Safety", including Lambert and Vane, pending a resolution or compromise with the Army. During these disorders, the Council of State still assembled at the usual place.

The Council of Officers at first attempted to come to some agreement with the leaders of Parliament. On 15 October 1659, the Council of Officers appointed ten persons to "consider of fit ways and means to carry on the affairs and government of the Commonwealth". On 26 October 1659 the Council of Officers appointed a new Committee of Safety of twenty-three members. On 1 November 1659, the Committee of Safety nominated a committee "to consider of and prepare a form of government to be settled over the three nations in the way of a free state and Commonwealth, and afterwards to present it to the Committee of Safety for their further considerations". The designs of General Fleetwood of the army and the Wallingford House party were now suspected of tending towards a possible alliance with Charles II. Edmond Ludlow warned both the Army and key members of Parliament that unless a compromise could be made it would "render all the blood and treasure that had been spent in asserting our liberties of no use to us, but also force us under such a yoke of servitude, that neither we nor our posterity should be able to bear".

Starting on 17 December 1659, Henry Vane, representing the Parliament, Major Saloway and Colonel Salmon, with powers from the officers of the army to treat with the fleet, and Vice-Admiral Lawson met to negotiate a compromise. The navy was very averse to any proposal of terms to be made with the Parliament before Parliament's readmission, insisting upon the absolute submission of the army to the authority of Parliament. A plan was then put in place declaring a resolution to join with the Generals at Portsmouth, Colonel Monck, and Vice-Admiral Lawson, but it was still unknown to the republican party that Colonel Monck was in league with King Charles II. Colonel Monck, though a hero of the restoration of King Charles II, was treacherously disloyal to the Long Parliament, to his oath to the present Parliament, and to the Good Old Cause.
In early January 1660, in conversation with several key officers of the army, Ludlow made a statement whose substance is borne out by the many executions of key Parliament members and Generals after the restoration of King Charles II. The restoration of King Charles II could therefore not be an act of the Long Parliament acting freely under its own authority, but only under the influence of the sword wielded by Colonel Monck, who traded away his loyalty to the present Long Parliament, and to a reformed Long Parliament, in favour of the restoration of King Charles II.

General George Monck, who had been Cromwell's viceroy in Scotland, feared that the military stood to lose power and secretly shifted his loyalty to the Crown. As he began to march south, Lambert, who had ridden out to face him, lost support in London. However, the Navy declared for Parliament, and on 26 December 1659 the Rump was restored to power. On 9 January 1660, Monck arrived in London and his plans were communicated. Whereupon Henry Vane the Younger was discharged from being a member of the Long Parliament, and Major Saloway was reproved for his role and committed to the Tower during the pleasure of the house. Lieutenant-General Fleetwood, Col. Sydenham, Lord Commissioner Whitlock, Cornelius Holland, and Mr. Strickland were required to clear themselves touching their deportment in that affair. High treason was also declared against Miles Corbet, Col. John Jones, Col. Thomlinson, and Edmond Ludlow on 19 January 1660. 1,500 other officers were removed from their command, and "scarce one of ten of the old officers of the army were continued". Any known Anabaptists in the army were specifically discharged. So tame had Parliament become that, though it was most visible that Monck's letters and Arthur Haslerig's instructions were designed for the dissolution of the Long Parliament, they were obeyed by the remainder of the members, and all these designs were to be put into execution. Though named by Parliament for treason, Miles Corbet and Edmond Ludlow were for a while permitted to continue to sit with Parliament, and for a time the charges against these men were dropped.

After his initial show of deference to the Rump, Monck quickly found them unwilling to continue in cooperation with his plan for an election of a new parliament (the Rump Parliament believed Monck was accountable to them and had its own plan for free elections); so on 21 February 1660 he forcibly reinstated the members 'secluded' by Pride's Purge in 1648, so that they could prepare legislation for the Convention Parliament. Some of the Rump Parliament were opposed and refused to sit with the Secluded Members. On 27 February 1660, "the new Council of State being informed of some designs against the usurped power, issued out warrants for apprehending divers officers of the army; and having some jealousy of others that were members of Parliament, they procured an order of their House to authorize them to seize any member who had not sat since the coming in of the Secluded Members, if there should be occasion."

When the house was ready to pass the act for dissolution, Crew, who had been as forward as any man in beginning and carrying on the war against the last King, moved that before they dissolved themselves, they would bear their witness against the horrid murder, as he called it, of the King. Having called for elections for a new Parliament to meet on 25 April, the Long Parliament was dissolved on 16 March 1660.
Finally, on 22 April 1660, "Major-General Lambert's party was dispersed" and General Lambert was taken prisoner by Colonel Ingoldsby. "Hitherto Monk had continued to make solemn protestations of his affection and fidelity to the Commonwealth interest, against a King and House of Lords; but the new militia being settled, and a Convention, calling themselves a Parliament and fit for his purpose, being met at Westminster, he sent to such lords as had sat with the Parliament till 1648, to return to the place where they used to sit, which they did, upon assurance from him, that no others should be permitted to sit with them; which promise he also broke, and let in not only such as had deserted to Oxford, but the late created lords. And Charles Stuart, eldest son of the late King, being informed of these transactions, left the Spanish territories where he then resided, and by the advice of Monk went to Breda, a town belonging to the States of Holland: from whence he sent his letters and a declaration to the two Houses by Sir John Greenvil; whereupon the nominal House of Commons, though called by a Commonwealth writ in the name of the Keepers of the Liberties of England, passed a vote [on about 25 April 1660], 'That the government of the nation should be by a King, Lords and Commons, and that Charles Stuart should be proclaimed King of England'".

"The Lord Mayor, Sheriffs and Aldermen of the City, treated their King with a collation under a tent, placed in St. George's Fields; and five or six hundred citizens cloathed in coats of black velvet, and (not improperly) wearing chains about their necks, by an order of the Common Council, attended on the triumph of that day; ... and those who had been so often defeated in the field, and had contributed nothing either of bravery or policy to this change, in ordering the souldiery to ride with swords drawn through the city of London to White Hall, the Duke of York and Monk leading the way; and intimating (as was supposed) a resolution to maintain that by force which had been obtained by fraud".

Initially seven, and later twenty, persons were excepted from the general pardon, with their lives and estates at stake. These included: Chief Justice Coke, who had been Solicitor to the High Court of Justice, Major-General Harrison, Col. John Jones (also a member of the High Court of Justice), Mr. Thomas Scot, Sir Henry Vane, Sir Arthur Haslerig, Sir Henry Mildmay, Mr. Robert Wallop, the Lord Mounson, Sir James Harrington, Mr. James Challoner, Mr. John Phelps, Mr. John Carew, Mr. Hugh Peters, Mr. Gregory Clement, Colonel Adrian Scroop, Col. Francis Hacker, and Col. Daniel Axtel. Among those who appeared the most basely subservient to these 'exorbitancies' of the Court, 'Mr. William Prynn was singularly remarkable', and he attempted to add to these all who had previously 'abjured the family of the Stuarts', though this motion failed.

"John Finch, who had been accused of high treason twenty years before by a full Parliament, and who by flying from their justice had saved his life, was appointed to judge some of those who should have been his judges; and Sir
Orlando Bridgman, who upon his submission to Cromwell had been permitted to practice the law in a private manner, and under that colour had served both as spy and agent for his master, was entrusted with the principal management of this tragic scene; and in his charge to the Grand Jury, had the assurance to tell them 'That no authority, no single person, or community of men; not the people collectively or representatively, had any coercive power over the King of England'".

In framing the Act of Indemnity and Oblivion, the House of Commons was unwilling to except Sir Henry Vane, Sir Arthur Haslerig, and Major-General Lambert from its benefits, as they had had no immediate hand in the death of the King, and there was as much reason to except most of the other members of Parliament as them. In Henry Vane's case the House of Lords was desirous of having him specifically excepted, so as to leave him at the mercy of the government and thus restrain him from the exercise of his great talents in promoting his favourite republican principles at any time during the remainder of his life. At a conference between the two Houses, it was concluded that the Commons should consent to except him from the act of indemnity, the Lords agreeing, on their part, to concur with the other House in petitioning the King, in case of the condemnation of Vane, not to carry the sentence into execution. General Edmond Ludlow, still loyal to the Rump Parliament, was also excepted.

According to contemporary royalist legal theory, the Long Parliament was regarded as having been automatically dissolved from the moment of Charles I's execution on 30 January 1649. This view was confirmed by a court ruling during the treason trial of Henry Vane the Younger, a ruling that Vane himself had concurred with in opposition to Oliver Cromwell years earlier. The trial given to Vane, as to his own person and his defence of the part he had played for the Long Parliament, was a foregone conclusion. It was not a fair trial, as both his defence and his deportment during it bear out. He was not given legal counsel (other than the judges that sat at his trial) and was left to conduct his own defence after years in prison. Sir Henry Vane maintained his position at his trial, but King Charles II did not keep the promise made to the House and executed the sentence of death on Sir Henry Vane the Younger. The solicitor openly declared in his speech afterwards "that he (Henry Vane) must be made a public sacrifice". One of his judges stated: "We knew not how to answer him, but we know what to do with him". Edmond Ludlow, one of the members of Parliament excepted by the act of indemnity, fled to Switzerland after the restoration of King Charles II, where he wrote his memoirs of these events.

The Long Parliament began with the execution of the Earl of Strafford, and effectively ended with the execution of Henry Vane the Younger. The republican theory is that the goal and aim of the Long Parliament was to institute a constitutional, balanced, and equally representative form of government along similar lines as were later accomplished in America by the American Revolution. It is clear from the writings of Ludlow and Vane, and of historians of the early American period such as Upham, that this is what they were striving for and why they were excepted from the acts of indemnity.
The republican theory also suggests that the Long Parliament would have succeeded in these necessary reforms but for the forceful intervention of Oliver Cromwell (and others) in removing the loyalist party, the unlawful execution of King Charles I, the later dissolution of the Rump Parliament, and finally the forceful dissolution of the reconvened Rump Parliament by Monck when fewer than a fourth of the required members were present. It is believed that in many ways this struggle was but a precursor to the American Revolution.
https://en.wikipedia.org/wiki?curid=18065
Lubricant A lubricant is a substance, usually organic, introduced to reduce friction between surfaces in mutual contact, which ultimately reduces the heat generated when the surfaces move. It may also have the function of transmitting forces, transporting foreign particles, or heating or cooling the surfaces. The property of reducing friction is known as lubricity. In addition to industrial applications, lubricants are used for many other purposes. Other uses include cooking (oils and fats used in frying pans and in baking to prevent food sticking), bio-applications on humans (e.g. lubricants for artificial joints), ultrasound examination, and medical examination. A lubricant is mainly used to reduce friction and to contribute to the better and more efficient functioning of a mechanism. Lubricants have been in some use for thousands of years. Calcium soaps have been identified on the axles of chariots dated to 1400 BC. Building stones were slid on oil-impregnated lumber in the time of the pyramids. In the Roman era, lubricants were based on olive oil and rapeseed oil, as well as animal fats. The growth of lubrication accelerated in the Industrial Revolution with the accompanying use of metal-based machinery. Relying initially on natural oils, the needs of such machinery shifted toward petroleum-based materials early in the 1900s. A breakthrough came with the development of vacuum distillation of petroleum, as described by the Vacuum Oil Company. This technology allowed the purification of very nonvolatile substances, which are common in many lubricants. A good lubricant generally possesses the following characteristics: Typically lubricants contain 90% base oil (most often petroleum fractions, called mineral oils) and less than 10% additives. Vegetable oils or synthetic liquids such as hydrogenated polyolefins, esters, silicones, fluorocarbons and many others are sometimes used as base oils. Additives deliver reduced friction and wear, increased viscosity, improved viscosity index, and resistance to corrosion, oxidation, aging and contamination. Non-liquid lubricants include powders (dry graphite, PTFE, molybdenum disulphide, tungsten disulphide, etc.), PTFE tape used in plumbing, air cushions and others. Dry lubricants such as graphite, molybdenum disulphide and tungsten disulphide also offer lubrication at temperatures (up to 350 °C) beyond those at which liquid and oil-based lubricants can operate. Limited interest has been shown in the low-friction properties of compacted oxide glaze layers formed at several hundred degrees Celsius in metallic sliding systems; however, practical use is still many years away due to their physically unstable nature. A large number of additives are used to impart performance characteristics to lubricants. Modern automotive lubricants contain as many as ten additives, comprising up to 20% of the lubricant; the main families of additives are: In 1999, an estimated 37,300,000 tons of lubricants were consumed worldwide. Automotive applications (including electric vehicles) dominate, but other industrial, marine, and metalworking applications are also big consumers of lubricants. Although air and other gas-based lubricants are known (e.g., in fluid bearings), liquid lubricants dominate the market, followed by solid lubricants. Lubricants are generally composed of a majority of base oil plus a variety of additives to impart desirable characteristics. Although lubricants are generally based on one type of base oil, mixtures of base oils are also used to meet performance requirements.
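The base-oil/additive split quoted above is easy to make concrete. The following is a minimal sketch of the mass breakdown of a hypothetical blend; the batch size, the specific additive families, and their percentages are illustrative assumptions for the example, not figures from any real product:

```python
# Minimal sketch: mass breakdown of a hypothetical lubricant blend.
# All percentages and the batch size are assumed, illustrative values
# loosely following the "~90% base oil, <10% additives" rule of thumb.

batch_kg = 10_000.0  # assumed batch size

composition = {
    "base oil (mineral)":   0.90,
    "viscosity modifier":   0.04,
    "detergent/dispersant": 0.03,
    "anti-wear additive":   0.02,
    "corrosion inhibitor":  0.01,
}

# The mass fractions of a blend must sum to 1.
assert abs(sum(composition.values()) - 1.0) < 1e-9

for component, fraction in composition.items():
    print(f"{component:>22}: {fraction * batch_kg:8.1f} kg")
```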
The term "mineral oil" is used to refer to lubricating base oils derived from crude oil. The American Petroleum Institute (API) designates several types of lubricant base oil: The lubricant industry commonly extends this group terminology to include: Can also be classified into three categories depending on the prevailing compositions: Petroleum-derived lubricant can also be produced using synthetic hydrocarbons (derived ultimately from petroleum), "synthetic oils". These include: PTFE: polytetrafluoroethylene (PTFE) is typically used as a coating layer on, for example, cooking utensils to provide a non-stick surface. Its usable temperature range up to 350 °C and chemical inertness make it a useful additive in special greases. Under extreme pressures, PTFE powder or solids is of little value as it is soft and flows away from the area of contact. Ceramic or metal or alloy lubricants must be used then. Inorganic solids: Graphite, hexagonal boron nitride, molybdenum disulfide and tungsten disulfide are examples of solid lubricants. Some retain their lubricity to very high temperatures. The use of some such materials is sometimes restricted by their poor resistance to oxidation (e.g., molybdenum disulfide degrades above 350 °C in air, but 1100 °C in reducing environments. Metal/alloy: Metal alloys, composites and pure metals can be used as grease additives or the sole constituents of sliding surfaces and bearings. Cadmium and gold are used for plating surfaces which gives them good corrosion resistance and sliding properties, Lead, tin, zinc alloys and various bronze alloys are used as sliding bearings, or their powder can be used to lubricate sliding surfaces alone. Aqueous lubrication is of interest in a number of technological applications. Strongly hydrated brush polymers such as PEG can serve as lubricants at liquid solid interfaces. By continuous rapid exchange of bound water with other free water molecules, these polymer films keep the surfaces separated while maintaining a high fluidity at the brush–brush interface at high compressions, thus leading to a very low coefficient of friction. Biolubricants are derived from vegetable oils and other renewable sources. They usually are triglyceride esters (fats obtained from plants and animals). For lubricant base oil use, the vegetable derived materials are preferred. Common ones include high oleic canola oil, castor oil, palm oil, sunflower seed oil and rapeseed oil from vegetable, and tall oil from tree sources. Many vegetable oils are often hydrolyzed to yield the acids which are subsequently combined selectively to form specialist synthetic esters. Other naturally derived lubricants include lanolin (wool grease, a natural water repellent). Whale oil was a historically important lubricant, with some uses up to the latter part of the 20th century as a friction modifier additive for automatic transmission fluid. In 2008, the biolubricant market was around 1% of UK lubricant sales in a total lubricant market of 840,000 tonnes/year. , researchers at Australia]s CSIRO have been studying safflower oil as an engine lubricant, finding superior performance and lower emissions than petroleum-based lubricants in applications such as engine-driven lawn mowers, chainsaws and other agricultural equipment. Grain-growers trialling the product have welcomed the innovation, with one describing it as needing very little refining, biodegradable, a bioenergy and biofuel. 
The scientists have re-engineered the plant using gene silencing, creating a variety whose seed oil contains up to 93% oleic acid, the highest level currently available from any plant. Researchers at Montana State University's Advanced Fuel Centre in the US, studying the oil's performance in a large diesel engine and comparing it with conventional oil, have described the results as a "game-changer". One of the largest applications for lubricants, in the form of motor oil, is protecting the internal combustion engines in motor vehicles and powered equipment. Anti-tack or anti-stick coatings are designed to reduce the adhesive condition (stickiness) of a given material. The rubber, hose, and wire and cable industries are the largest consumers of anti-tack products, but virtually every industry uses some form of anti-sticking agent. Anti-sticking agents differ from "lubricants" in that they are designed to reduce the inherently adhesive qualities of a given compound, while lubricants are designed to reduce friction between any two surfaces. Lubricants are typically used to separate moving parts in a system. This separation has the benefit of reducing friction, wear and surface fatigue, together with reduced heat generation, operating noise and vibrations. Lubricants achieve this in several ways. The most common is by forming a physical barrier, i.e., a thin layer of lubricant that separates the moving parts. This is analogous to hydroplaning, the loss of friction observed when a car tire is separated from the road surface by moving through standing water. This is termed hydrodynamic lubrication. In cases of high surface pressures or temperatures, the fluid film is much thinner and some of the forces are transmitted between the surfaces through the lubricant. Typically the lubricant-to-surface friction is much less than the surface-to-surface friction in a system without any lubrication, so use of a lubricant reduces the overall system friction. Reduced friction has the benefit of reducing heat generation and the formation of wear particles, as well as improving efficiency. Lubricants may contain polar additives known as friction modifiers that chemically bind to metal surfaces to reduce surface friction even when there is insufficient bulk lubricant present for hydrodynamic lubrication, e.g. protecting the valve train in a car engine at startup. The base oil itself might also be polar in nature and as a result inherently able to bind to metal surfaces, as with polyolester oils. Both gas and liquid lubricants can transfer heat. However, liquid lubricants are much more effective on account of their high specific heat capacity. Typically the liquid lubricant is constantly circulated to and from a cooler part of the system, although lubricants may be used to warm as well as to cool when a regulated temperature is required. This circulating flow also determines the amount of heat that is carried away in any given unit of time. High-flow systems can carry away a lot of heat and have the additional benefit of reducing the thermal stress on the lubricant, so lower-cost liquid lubricants may be used. The primary drawback is that high flows typically require larger sumps and bigger cooling units. A secondary drawback is that a high-flow system that relies on the flow rate to protect the lubricant from thermal stress is susceptible to catastrophic failure during sudden system shutdowns. An automotive oil-cooled turbocharger is a typical example.
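Before continuing with that example, the heat-removal relation just described can be made concrete: the heat a circulating lubricant carries away per unit time is roughly the mass flow rate times the specific heat capacity times the temperature rise across the hot zone. A minimal sketch, with all numbers assumed for illustration (a specific heat of about 2,000 J/(kg·K) is a typical order of magnitude for mineral oil):

```python
# Minimal sketch: heat carried away by a circulating oil loop,
# Q = m_dot * c_p * dT. All values are assumed, order-of-magnitude only.

c_p = 2000.0               # J/(kg*K), rough specific heat of mineral oil
m_dot = 0.5                # kg/s, assumed mass flow rate through the loop
t_in, t_out = 90.0, 120.0  # deg C, assumed inlet and outlet temperatures

q_watts = m_dot * c_p * (t_out - t_in)
print(f"Heat removed: {q_watts / 1000:.1f} kW")  # 0.5 * 2000 * 30 = 30 kW
```

The linear dependence on flow rate is why high-flow systems both remove more heat and reduce thermal stress on the oil, at the cost of larger sumps and cooling units.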
Turbochargers get red hot during operation, and the oil that is cooling them only survives because its residence time in the system is very short (i.e. high flow rate). If the system is shut down suddenly (pulling into a service area after a high-speed drive and stopping the engine), the oil that is in the turbocharger immediately oxidizes and will clog the oilways with deposits. Over time these deposits can completely block the oilways, reducing the cooling, with the result that the turbocharger experiences total failure, typically with seized bearings. Non-flowing lubricants such as greases and pastes are not effective at heat transfer, although they do contribute by reducing the generation of heat in the first place. Lubricant circulation systems have the benefit of carrying away internally generated debris and external contaminants that get introduced into the system to a filter where they can be removed. Lubricants for machines that regularly generate debris or contaminants, such as automotive engines, typically contain detergent and dispersant additives to assist in debris and contaminant transport to the filter and removal. Over time the filter will get clogged and require cleaning or replacement; hence the recommendation to change a car's oil filter at the same time as changing the oil. In closed systems such as gearboxes, the filter may be supplemented by a magnet to attract any iron fines that are created. It is apparent that in a circulatory system the oil will only be as clean as the filter can make it; thus it is unfortunate that there are no industry standards by which consumers can readily assess the filtering ability of various automotive filters. Poor automotive filters significantly reduce the life of the machine (engine) as well as making the system inefficient. Lubricants known as hydraulic fluids are used as the working fluid in hydrostatic power transmission. Hydraulic fluids comprise a large portion of all lubricants produced in the world. The automatic transmission's torque converter is another important application for power transmission with lubricants. Lubricants prevent wear by keeping the moving parts apart. Lubricants may also contain anti-wear or extreme-pressure additives to boost their performance against wear and fatigue. Many lubricants are formulated with additives that form chemical bonds with surfaces, or that exclude moisture, to prevent corrosion and rust; such additives reduce corrosion between two metallic surfaces and prevent the contact between them that leads to immersed corrosion. Lubricants will occupy the clearance between moving parts through capillary force, thus sealing the clearance. This effect can be used to seal pistons and shafts. A further phenomenon that has undergone investigation in relation to high-temperature wear prevention and lubrication is the formation of a compacted oxide layer glaze. Such glazes are generated by sintering a compacted oxide layer. Such glazes are crystalline, in contrast to the amorphous glazes seen in pottery. The required high temperatures arise from metallic surfaces sliding against each other (or a metallic surface against a ceramic surface). Due to the elimination of metallic contact and adhesion by the generation of oxide, friction and wear are reduced. Effectively, such a surface is self-lubricating. As the "glaze" is already an oxide, it can survive to very high temperatures in air or oxidising environments.
However, it has the disadvantage that the base metal (or ceramic) must first undergo some wear to generate sufficient oxide debris. It is estimated that 40% of all lubricants are released into the environment. Common disposal methods include recycling, burning, landfill and discharge into water, though typically disposal in landfill and discharge into water are strictly regulated in most countries, as even a small amount of lubricant can contaminate a large amount of water. Most regulations permit a threshold level of lubricant that may be present in waste streams, and companies spend hundreds of millions of dollars annually in treating their waste waters to reach acceptable levels. Burning the lubricant as fuel, typically to generate electricity, is also governed by regulations, mainly on account of the relatively high level of additives present. Burning generates both airborne pollutants and ash rich in toxic materials, mainly heavy metal compounds. Thus lubricant burning takes place in specialized facilities that incorporate special scrubbers to remove airborne pollutants and have access to landfill sites with permits to handle the toxic ash. Unfortunately, most lubricant that ends up directly in the environment is there because the general public discharges it onto the ground, into drains and directly into landfills as trash. Other direct contamination sources include runoff from roadways, accidental spillages, natural or man-made disasters and pipeline leakages. Improvement in filtration technologies and processes has now made recycling a viable option (with the rising price of base stock and crude oil). Typically various filtration systems remove particulates, additives and oxidation products and recover the base oil. The oil may be refined during the process. This base oil is then treated much the same as virgin base oil; however, there is considerable reluctance to use recycled oils, as they are generally considered inferior. Basestock fractionally vacuum-distilled from used lubricants has properties superior to all natural oils, but cost-effectiveness depends on many factors. Used lubricant may also be used as refinery feedstock to become part of crude oil. Again, there is considerable reluctance to this use, as the additives, soot and wear metals will seriously poison/deactivate the critical catalysts in the process. Cost prohibits carrying out both filtration (removal of soot and additives) and re-refining (distilling, isomerisation, hydrocracking, etc.); however, the primary hindrance to recycling remains the collection of fluids, as refineries need a continuous supply in amounts measured in cisterns or rail tanks. Occasionally, unused lubricant requires disposal. The best course of action in such situations is to return it to the manufacturer, where it can be processed as part of fresh batches. Environment: Lubricants, both fresh and used, can cause considerable damage to the environment, mainly due to their high potential to cause serious water pollution. Further, the additives typically contained in lubricants can be toxic to flora and fauna. In used fluids, the oxidation products can be toxic as well. Lubricant persistence in the environment largely depends upon the base fluid; however, if very toxic additives are used, they may negatively affect its persistence. Lanolin lubricants are non-toxic, making them an environmental alternative that is safe for both users and the environment.
https://en.wikipedia.org/wiki?curid=18069
Lise Meitner Lise Meitner (7 November 1878 – 27 October 1968) was an Austrian-Swedish physicist who worked on radioactivity and nuclear physics. Meitner and Otto Robert Frisch discovered nuclear fission of uranium when it absorbed an extra neutron; the results were published in early 1939. They understood that the nuclear fission process, which splits the atomic nucleus of uranium into two smaller nuclei, must be accompanied by an enormous release of energy. Their research into nuclear fission helped to pioneer nuclear reactors to generate electricity, as well as the development of nuclear weapons during World War II. Meitner spent most of her scientific career in Berlin, Germany, where she was a physics professor and a department head at the Kaiser Wilhelm Institute; she was the first woman to become a full professor of physics in Germany. She lost these positions in the 1930s because of the anti-Jewish Nuremberg Laws of Nazi Germany, and in 1938 she fled to Sweden, where she lived for many years, ultimately becoming a Swedish citizen. Meitner received many awards and honours late in her life, but she and Otto Frisch did not share in the 1944 Nobel Prize in Chemistry for nuclear fission, which was awarded exclusively to her long-time collaborator Otto Hahn. Several scientists and journalists have called her exclusion "unjust". However, Meitner has received many posthumous honours, including the naming of chemical element 109 as meitnerium in 1997. According to the Nobel Prize archive, she was nominated 19 times for the Nobel Prize in Chemistry between 1924 and 1947, and 29 times for the Nobel Prize in Physics between 1937 and 1965. Despite not having been awarded the Nobel Prize, Meitner was invited to attend the Lindau Nobel Laureate Meeting in 1962. She was born Elise Meitner on 7 November 1878 into a Jewish upper-middle-class family at the family home at 27 Kaiser Josefstrasse in the Leopoldstadt district of Vienna, the third of eight children of Hedwig and Philipp Meitner. The birth register of Vienna's Jewish community lists her as being born on 17 November 1878, but all other documents list her date of birth as 7 November, which is what she used. She shortened her name from Elise to Lise. Her father was one of the first Jewish lawyers admitted to practice in Austria. She had two older siblings, Gisela and Auguste (Gusti), and five younger: Moriz (Fritz), Carola (Lola), Frida and Walter; all eight, including the five girls, ultimately pursued an advanced education. As an adult, she converted to Christianity, following Lutheranism, and was baptized in 1908; her sisters Gisela and Lola converted to Catholicism that same year. Meitner's earliest research began at age 8, when she kept a notebook of her records underneath her pillow. She was particularly drawn to maths and science, and first studied the colours of an oil slick, thin films, and reflected light. Women were not allowed to attend public institutions of higher education in Vienna until 1897, and she had completed her final year of school in 1892; her education included bookkeeping, arithmetic, history, geography, science, French and gymnastics. The only career then available to women was teaching, so she trained as a French teacher. Her sister Gisela passed the "Matura" and entered medical school in 1900. In 1899, Meitner began taking private lessons with two other young women, cramming the missing eight years of secondary education into just two. Physics was taught by Arthur Szarvasy.
In July 1901 the girls sat an external examination at the Akademisches Gymnasium. Only four of the fourteen girls passed, including Meitner and Henriette Boltzmann, the daughter of physicist Ludwig Boltzmann. Meitner entered the University of Vienna in October 1901. Her professors gave her inspiration to do her best. She was particularly inspired by Boltzmann, and was said to often speak with contagious enthusiasm of his lectures. On 1 February 1906, she became the second woman to obtain a doctoral degree in physics, writing her dissertation on "heat conduction in an inhomogeneous body" under the supervision of Franz Exner. Paul Ehrenfest asked her to investigate an article on optics by Lord Rayleigh that detailed an experiment producing results that Rayleigh had been unable to explain. Not only was she able to explain what was going on; she went further and made predictions based on her explanation, and then verified them experimentally, demonstrating her ability to carry out independent and unsupervised research. While engaged in this research, Meitner was introduced by Stefan Meyer to radioactivity, then a very new field of study. This started her on studying alpha particles. In experiments with collimators and metal foil, she found that the scattering of a beam of alpha particles increased with the atomic mass of the metal atoms, a result that later helped lead Ernest Rutherford to the nuclear atom; she submitted her report to the "Physikalische Zeitschrift" on 29 June 1907. After she received her doctorate, Meitner rejected an offer to work in a gas lamp factory. Encouraged by her father and backed by his financial support, she went to the Friedrich-Wilhelms-Universität in Berlin, where the famous physicist Max Planck allowed her to attend his lectures, an unusual gesture by Planck, who until then had rejected any woman wanting to attend them. After one year of attending Planck's lectures, Meitner became his assistant. When Meitner met Otto Hahn, he was working at the Chemical Institute, run by Emil Fischer, which did not allow female scientists to enter; they therefore set up their equipment for radiation measurement in a former carpenter's workshop. During the first years she worked together with the chemist Otto Hahn, and with him discovered several new isotopes. In 1909 she presented two papers on beta radiation. Together with Hahn, she also discovered and developed a physical separation method known as radioactive recoil, in which a daughter nucleus is forcefully ejected from its matrix as it recoils at the moment of decay. While Hahn was more concerned with discovering new elements, Meitner was more concerned with understanding the radiations of these elements. Their first joint results were mostly inaccurate assumptions. In 1912, the Hahn–Meitner research group moved to the newly founded Kaiser-Wilhelm-Institut (KWI) in Berlin-Dahlem, in the south-west of Berlin. She worked without salary as a "guest" in Hahn's department of radiochemistry. It was not until 1913, at 35 years old and following an offer to go to Prague as associate professor, that she got a permanent position at the KWI. When World War I began, Hahn was called into service, and Meitner volunteered as a nurse handling X-ray equipment. Sometimes Meitner and Hahn would have leave at the same time, and they would then work on their research together.
She returned to Berlin and her research in 1916, but not without inner struggle. In a way, she felt ashamed of wanting to continue her research efforts when she thought about the pain and suffering of the victims of war and their medical and emotional needs. In 1917, she and Hahn discovered the first long-lived isotope of the element protactinium, for which she was awarded the Leibniz Medal by the Berlin Academy of Sciences. That year, Meitner was given her own physics section at the Kaiser Wilhelm Institute for Chemistry. In 1922, she discovered the cause of the emission from atoms of electrons with 'signature' energies, a phenomenon now known as the Auger effect. The effect is named for Pierre Victor Auger, a French scientist who independently discovered the effect in 1923. In 1922, Meitner also gave an inaugural lecture on cosmic physics. In 1926, Meitner became the first woman in Germany to assume a post of full professor in physics, at the University of Berlin. In the same year, Meitner was recognized as an extraordinary professor; while she had not given any courses, she had been participating in the weekly physics colloquium. In 1935, as head of the physics department of the Kaiser Wilhelm Institute for Chemistry in Berlin-Dahlem (today the Hahn-Meitner Building of the Free University), she and Otto Hahn, the director of the KWI, undertook the so-called "transuranium research" program. This program eventually led to the unexpected discovery of nuclear fission of heavy nuclei in December 1938, half a year after she had left Berlin. She was praised by Albert Einstein as the "German Marie Curie". While at the Kaiser Wilhelm Institute, Meitner corresponded with James Chadwick at the Cavendish Laboratory at Cambridge. As Chadwick and others were attempting to prove the existence of the neutron, Meitner sent polonium to Chadwick for his experiments. Chadwick eventually required and received more polonium for his experiments from a hospital in Baltimore, but he would remain grateful to Meitner. Later, he said he was "quite convinced that [Meitner] would have discovered the neutron if it had been firmly in her mind, if she had had the advantage of, say, living in the Cavendish for years, as I had done". In 1930, Meitner taught a seminar on nuclear physics and chemistry with Leó Szilárd. After the discovery of the neutron in the early 1930s, the scientific community speculated that it might be possible to create elements heavier than uranium (atomic number 92) in the laboratory. A scientific race began between the teams of Ernest Rutherford in Britain, Irène Joliot-Curie in France, Enrico Fermi in Italy, and Meitner and Hahn in Berlin. At the time, all concerned believed that this was abstract research for the probable honour of a Nobel prize. None suspected that this research would culminate in nuclear weapons. When Adolf Hitler came to power in 1933, Meitner was still acting as head of the physics department of the Kaiser Wilhelm Institute for Chemistry. Although she was protected by her Austrian citizenship, all other Jewish scientists, including Szilárd, Fritz Haber, her nephew Otto Frisch, and many other eminent figures, were dismissed or forced to resign from their posts. Most of them emigrated from Germany. Her response was to say nothing and bury herself in her work. After the Anschluss in March 1938, her situation became difficult. On 13 July 1938, Meitner, with the support of Otto Hahn and the help of the Dutch physicists Dirk Coster and Adriaan Fokker, departed for the Netherlands.
She was forced to travel under cover to the Dutch border, where Coster persuaded German immigration officers that she had permission to travel to the Netherlands. She reached safety, though without her possessions. Meitner later said that she left Germany forever with 10 marks in her purse. Before she left, Otto Hahn had given her a diamond ring he had inherited from his mother: this was to be used to bribe the frontier guards if required. It was not required, and Meitner's nephew's wife later wore it. An appointment at the University of Groningen did not come through, and with the help of Eva von Bahr and Carl Wilhelm Oseen she went instead to Stockholm, where she took up a post at Manne Siegbahn's Nobel Institute for Physics, despite the difficulty caused by Siegbahn's prejudice against women in science. Although they were separated, Meitner and Hahn continued their research by letter. Here she established a working relationship with Niels Bohr, who travelled regularly between Copenhagen and Stockholm. She continued to correspond with Hahn and other German scientists. On the occasion of a lecture by Hahn at Niels Bohr's institute, Hahn, Bohr, Meitner and Frisch met in Copenhagen on 10 November 1938. Later they continued to exchange a series of letters. In December, Hahn and his assistant Fritz Strassmann performed the difficult experiments which isolated the evidence for nuclear fission at their laboratory in Berlin-Dahlem. The surviving correspondence shows that Hahn recognized that 'fission' was the only explanation for the presence of barium (at first he named the process a 'bursting' of the uranium), but, baffled by this remarkable conclusion, he wrote to Meitner. The possibility that uranium nuclei might break up under neutron bombardment had been suggested years before, among others by Ida Noddack in 1934. However, by employing the existing "liquid-drop" model of the nucleus, Meitner and Frisch, whom Hahn had informed exclusively and in advance, were the first to articulate a theory of how the nucleus of an atom could be split into smaller parts: uranium nuclei had split to form barium and krypton, accompanied by the ejection of several neutrons and a large amount of energy (the latter two products accounting for the loss in mass). She and Frisch had also discovered the reason that no stable elements beyond uranium (in atomic number) existed naturally: the electrical repulsion of so many protons overcame the strong nuclear force. They also first realized that Einstein's famous equation, E = mc², explained the source of the tremendous releases of energy in nuclear fission, by the conversion of rest mass into kinetic energy, popularly described as the conversion of mass into energy. Ironically, Meitner had been motivated to begin these calculations in order to show that Irène Joliot-Curie's interpretation of some experiments violated the liquid-drop model. A letter from Bohr had sparked the above inspiration in December 1938: he commented on the fact that the amount of energy released when he bombarded uranium atoms was far larger than had been predicted by calculations based on a non-fissile core. Meitner and Frisch later confirmed that chemistry had been solely responsible for the discovery, although Hahn, as a chemist, was reluctant to explain the fission process in correct physical terms. In a later appreciation, Lise Meitner wrote: The discovery of nuclear fission by Otto Hahn and Fritz Strassmann opened up a new era in human history.
It seems to me that what makes the science behind this discovery so remarkable is that it was achieved by purely chemical means. And in an interview with West German television (ARD, 8 March 1959), Meitner said: Otto Hahn and Fritz Strassmann were able to do this by exceptionally good chemistry, fantastically good chemistry, which was way ahead of what anyone else was capable of at that time. The Americans learned to do it later. But at that time, Hahn and Strassmann were really the only ones who could do it. And that was because they were such good chemists. Somehow they really succeeded in using chemistry to demonstrate and prove a physical process. Fritz Strassmann responded in the same interview with this clarification: Professor Meitner stated that the success could be attributed to chemistry. I have to make a slight correction. Chemistry merely isolated the individual substances, it did not precisely identify them. It took Professor Hahn's method to do this. This is where his achievement lies. Hahn and Strassmann had sent the manuscript of their first paper to "Naturwissenschaften" in December 1938, reporting that they had detected and identified the element barium after bombarding uranium with neutrons; simultaneously, Hahn had communicated their results exclusively to Meitner in several letters, and did not inform the physicists in his own institute. In their second publication on the evidence of barium ("Naturwissenschaften", 10 February 1939), Hahn and Strassmann used for the first time the name "Uranspaltung" (uranium fission) and predicted the existence and liberation of additional neutrons during the fission process (which was later proved to be a chain reaction by Frédéric Joliot and his team). Meitner and Frisch were the first to correctly interpret Hahn's and Strassmann's results as nuclear fission, a term coined by Frisch, and published their paper in "Nature". Frisch confirmed this experimentally on 13 January 1939. These three reports, the first Hahn-Strassmann publication of 6 January 1939, the second Hahn-Strassmann publication of 10 February 1939, and the Frisch-Meitner publication of 11 February 1939, had electrifying effects on the scientific community. Because there was a possibility that fission could be used as a weapon, and since the knowledge was in German hands, Szilárd, Edward Teller, and Eugene Wigner jumped into action, persuading Albert Einstein, a celebrity, to write President Franklin D. Roosevelt a letter of caution. In 1940, Frisch and Rudolf Peierls produced the Frisch–Peierls memorandum, which first set out how an atomic explosion could be generated, and this ultimately led to the establishment in 1942 of the Manhattan Project. Meitner refused an offer to work on the project at Los Alamos, declaring "I will have nothing to do with a bomb!" Meitner said that Hiroshima had come as a surprise to her, and that she was "sorry that the bomb had to be invented". In Sweden, Meitner was first active at Siegbahn's Nobel Institute for Physics, and at the Swedish National Defence Research Institute (FOA) and the Royal Institute of Technology in Stockholm, where she had a laboratory and participated in research on R1, Sweden's first nuclear reactor. In 1947, a personal position was created for her at the University College of Stockholm, with the salary of a professor and funding from the Council for Atomic Research.
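The scale of the energy release described above can be reproduced from the mass-energy relation Meitner and Frisch applied. The fission fragments and neutrons are lighter than the original uranium nucleus by roughly 0.2 atomic mass units (the commonly quoted approximate figure, taken here as an assumption), and E = mc² converts that mass defect into about 200 MeV per fission. A minimal sketch of the arithmetic:

```python
# Minimal sketch: order-of-magnitude energy release per uranium fission,
# via E = m * c^2. The ~0.2 u mass defect is the commonly quoted rough value.

U_TO_KG = 1.66053907e-27     # kilograms per atomic mass unit
C = 2.99792458e8             # speed of light, m/s
J_PER_MEV = 1.602176634e-13  # joules per MeV

mass_defect_u = 0.2          # assumed mass defect per fission, in u

energy_joules = mass_defect_u * U_TO_KG * C**2
print(f"~{energy_joules / J_PER_MEV:.0f} MeV per fission")  # ~186 MeV
```

Per atom, this is millions of times the energy of a typical chemical bond, which is why the 1939 papers spoke of an enormous release of energy.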
Despite the many honors that Meitner received in her lifetime, she did not share in the Nobel Prize awarded to Otto Hahn for the discovery of nuclear fission. On 15 November 1945, the Royal Swedish Academy of Sciences announced that Hahn had been awarded the 1944 Nobel Prize in Chemistry for "his discovery of the fission of heavy atomic nuclei". Meitner was the one who told Hahn and Strassmann to test their radium in more detail, and it was she who told Hahn that it was possible for the nucleus of uranium to disintegrate. Without these contributions from Meitner, Hahn would not have found that the uranium nucleus can split in half. At the time Meitner herself wrote in a letter, "Surely Hahn fully deserved the Nobel Prize for chemistry. There is really no doubt about it. But I believe that Otto Robert Frisch and I contributed something not insignificant to the clarification of the process of uranium fission—how it originates and that it produces so much energy and that was something very remote to Hahn." In a similar vein, Carl Friedrich von Weizsäcker, Lise Meitner's former assistant, later added that Hahn "certainly did deserve this Nobel Prize. He would have deserved it even if he had not made this discovery. But everyone recognized that the splitting of the atomic nucleus merited a Nobel Prize." Hahn's receipt of a Nobel Prize was long expected; both he and Meitner had been nominated for both the chemistry and the physics prizes several times even before the discovery of nuclear fission. In 1945, the committee in Sweden that selected the Nobel Prize in Chemistry decided to award that prize solely to Hahn. In the 1990s, the long-sealed records of the Nobel Committee's proceedings became public, and the comprehensive biography of Meitner published in 1996 by Ruth Lewin Sime took advantage of this unsealing to reconsider Meitner's exclusion. In a 1997 article in the American Physical Society journal "Physics Today", Sime and her colleagues Elisabeth Crawford and Mark Walker wrote: "It appears that Lise Meitner did not share the 1944 prize because the structure of the Nobel committees was ill-suited to assess interdisciplinary work; because the members of the chemistry committee were unable or unwilling to judge her contribution fairly; and because during the war the Swedish scientists relied on their own limited expertise. Meitner's exclusion from the chemistry award may well be summarized as a mixture of disciplinary bias, political obtuseness, ignorance, and haste." Max Perutz, the 1962 Nobel prizewinner in chemistry, reached a similar conclusion: "Having been locked up in the Nobel Committee's files these fifty years, the documents leading to this unjust award now reveal that the protracted deliberations by the Nobel jury were hampered by lack of appreciation both of the joint work that had preceded the discovery and of Meitner's written and verbal contributions after her flight from Berlin." After the war, Meitner acknowledged her own moral failing in staying in Germany from 1933 to 1938. She said: "It was not only stupid but very wrong that I did not leave at once." When Hitler came to power in 1933, Meitner was acting director of the Institute for Chemistry. Although she was protected by her Austrian citizenship, other Jewish scientists were forced to leave.
She later regretted her inaction during this period, but she was also bitterly critical of Hahn, Max von Laue and other German scientists who, she thought, had collaborated with the Nazis and done nothing to protest against the crimes of Hitler's regime. Referring to the leading German nuclear physicist Werner Heisenberg, she said: "Heisenberg and many millions with him should be forced to see these camps and the martyred people." In a June 1945 draft letter addressed to Hahn, but never received by him, she wrote: You all worked for Nazi Germany. And you tried to offer only a passive resistance. Certainly, to buy off your conscience you helped here and there a persecuted person, but millions of innocent human beings were allowed to be murdered without any kind of protest being uttered ... [it is said that] first you betrayed your friends, then your children in that you let them stake their lives on a criminal war – and finally that you betrayed Germany itself, because when the war was already quite hopeless, you did not once arm yourselves against the senseless destruction of Germany. After the war, in the 1950s and 1960s, Meitner again enjoyed visiting Germany and staying with Hahn and his family for several days on different occasions, particularly on 8 March 1959, when she celebrated Hahn's 80th birthday in Göttingen and addressed recollections in his honour. Hahn also wrote, in his memoirs published shortly after his death in 1968, that he and Meitner had remained lifelong close friends. Even though their friendship was full of trials, arguably felt more keenly by Meitner, she "never voiced anything but deep affection for Hahn". In 1947, Meitner retired from the Siegbahn Institute and started research in a new laboratory created specifically for her by the Swedish Atomic Energy Commission at the Royal Institute of Technology. She became a Swedish citizen in 1949. She retired in 1960 and moved to the UK, where most of her relatives were, although she continued working part-time and giving lectures. A strenuous trip to the United States in 1964 led to Meitner having a heart attack, from which she spent several months recovering. Her physical and mental condition weakened by atherosclerosis, she was unable to travel to the US to receive the Enrico Fermi prize; President Johnson sent Glenn Seaborg, the discoverer of plutonium, to present it to her, and the presentation was made at the home of Max Perutz in Cambridge. After breaking her hip in a fall and suffering several small strokes in 1967, Meitner made a partial recovery, but eventually was weakened to the point where she moved into a Cambridge nursing home. She died in her sleep on 27 October 1968 at the age of 89. Meitner was not informed of the deaths of Otto Hahn (d. July 1968) or his wife Edith, as her family believed it would be too much for someone so frail. As was her wish, she was buried in the village of Bramley in Hampshire, at St. James parish church, close to her younger brother Walter, who had died in 1964. Her nephew Frisch composed the inscription on her headstone. It reads: Lise Meitner: a physicist who never lost her humanity. On a visit to the US in 1946, Meitner was honoured as "Woman of the Year" by the National Press Club and had dinner with President Harry Truman and others at the Women's National Press Club. She lectured at Princeton, Harvard and other US universities, and was awarded a number of honorary doctorates.
She received, jointly with Hahn, the Max Planck Medal of the German Physical Society in 1949, and in 1955 she was awarded the first Otto Hahn Prize of the German Chemical Society. In 1957, the German President Theodor Heuss awarded her the highest German order for scientists, the peace class of the Pour le Mérite. She was nominated by Otto Hahn for both honours. Meitner's name was also submitted, likewise by Hahn, to the Nobel Prize committee more than ten times, but she was never selected. Meitner was elected a foreign member of the Royal Swedish Academy of Sciences in 1945, and had her status changed to that of a Swedish member in 1951. Four years later she was elected a Foreign Member of the Royal Society (ForMemRS). She was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1960. Meitner received 21 scientific honours and awards for her work (including 5 honorary doctorates and membership of 12 academies). In 1947, she received the Award of the City of Vienna for science. She was the first female member of the scientific class of the Austrian Academy of Sciences. In 1960, Meitner was awarded the Wilhelm Exner Medal, and in 1967, the Austrian Decoration for Science and Art. In 1966, Hahn, Fritz Strassmann and Meitner were jointly awarded the Enrico Fermi Award by President Lyndon B. Johnson and the United States Atomic Energy Commission (USAEC) in Washington, D.C. Lise Meitner's diploma bears the words: "For pioneering research in the naturally occurring radioactivities and extensive experimental studies leading to the discovery of fission." Otto Hahn's diploma is different but essentially similar: "For pioneering research in the naturally occurring radioactivities and extensive experimental studies culminating in the discovery of fission." Since Meitner's death in 1968, she has received many naming honours. In 1997, element 109 was named meitnerium in her honour. She is the first, and so far the only, non-mythological woman thus honoured. (Curium was named after both Marie and Pierre Curie.) Additional naming honours are the Hahn–Meitner-Institut in Berlin, craters on the Moon and on Venus, and the main-belt asteroid 6999 Meitner. In 2000, the European Physical Society established the biennial "Lise Meitner Prize" for excellent research in nuclear science. In 2006, the "Gothenburg Lise Meitner Award" was established by the University of Gothenburg and Chalmers University of Technology in Sweden; it is awarded annually to a scientist who has made a breakthrough in physics. In 2008, the chemical, biological, radiological, and nuclear (NBC) defence school of the Austrian Armed Forces established the Lise Meitner Award. In October 2010, a building at the Free University of Berlin was named the Hahn-Meitner Building; this was a renaming of a building previously known as the Otto Hahn Building. In July 2014, a statue of Lise Meitner was unveiled in the garden of the Humboldt University of Berlin, next to similar statues of Hermann von Helmholtz and Max Planck. A short residential street in Bramley, her resting place, is named Meitner Close. Schools and streets were named after her in many cities in Austria and Germany. Since 2008, the Austrian Physical Society, together with the German Physical Society, has organized the Lise-Meitner Lectures, a series of annual public talks given by distinguished female physicists, and since 2015 the AlbaNova University Centre in Stockholm has held an annual Lise Meitner Distinguished Lecture.
In 2016, the Institute of Physics in the UK established the Meitner Medal for public engagement within physics. In 2017, the Advanced Research Projects Agency-Energy in the United States named a major nuclear energy research program in her honor.
https://en.wikipedia.org/wiki?curid=18070
Llama The llama ("Lama glama") is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the Pre-Columbian era. Llamas are very social animals and live with others as a herd. Their wool is very soft and lanolin-free. Llamas can learn simple tasks after a few repetitions. When using a pack, they can carry about 25 to 30% of their body weight for 8 to 13 km (5–8 miles). The name "llama" (in the past also spelled "lama" or "glama") was adopted by European settlers from native Peruvians. The ancestors of llamas are thought to have originated from the central plains of North America about 40 million years ago, and subsequently migrated to South America about three million years ago during the Great American Interchange. By the end of the last ice age (10,000–12,000 years ago), camelids were extinct in North America. As of 2007, there were over seven million llamas and alpacas in South America and, owing to importation from South America in the late 20th century, over 158,000 llamas and 100,000 alpacas in the United States and Canada. In Aymara mythology, llamas are important beings. The Heavenly Llama is said to drink water from the ocean and urinate it as rain. According to Aymara eschatology, llamas will return to the water springs and lagoons where they come from at the end of time. Lamoids, or llamas (as they are more generally known as a group), consist of the vicuña ("Vicugna vicugna", prev. "Lama vicugna"), guanaco ("Lama guanicoe"), Suri alpaca and Huacaya alpaca ("Vicugna pacos", prev. "Lama guanicoe pacos"), and the domestic llama ("Lama glama"). Guanacos and vicuñas live in the wild, while llamas and alpacas exist only as domesticated animals. Although early writers compared llamas to sheep, their similarity to the camel was soon recognized. They were included in the genus "Camelus" along with the alpaca in the "Systema Naturae" (1758) of Carl Linnaeus. They were, however, separated by Georges Cuvier in 1800 under the name of "lama", along with the guanaco. DNA analysis has confirmed that the guanaco is the wild ancestor of the llama, while the vicuña is the wild ancestor of the alpaca; the latter two were placed in the genus "Vicugna". The genera "Lama" and "Vicugna" are, with the two species of true camels, the sole existing representatives of a very distinct section of the Artiodactyla, or even-toed ungulates, called Tylopoda, or "bump-footed", from the peculiar bumps on the soles of their feet. The Tylopoda consist of a single family, the Camelidae, and share the order Artiodactyla with the Suina (pigs), the Tragulina (chevrotains), the Pecora (ruminants), and the Whippomorpha (hippos and cetaceans, which belong to Artiodactyla from a cladistic, if not traditional, standpoint). The Tylopoda have more or less affinity to each of the sister taxa, standing in some respects in a middle position between them, sharing some characteristics from each, but in others showing special modifications not found in any of the other taxa. The 19th-century discoveries of a vast and previously unexpected extinct Paleogene fauna of North America, as interpreted by paleontologists Joseph Leidy, Edward Drinker Cope, and Othniel Charles Marsh, aided understanding of the early history of this family. Llamas were not always confined to South America; abundant llama-like remains were found in Pleistocene deposits in the Rocky Mountains and in Central America. Some of the fossil llamas were much larger than current forms.
Some species remained in North America during the last ice ages. North American llamas are categorized as a single extinct genus, "Hemiauchenia". Llama-like animals would have been a common sight 25,000 years ago, in modern-day California, Texas, New Mexico, Utah, Missouri, and Florida. The camelid lineage has a good fossil record. Camel-like animals have been traced from the thoroughly differentiated, modern species back through early Miocene forms. Their characteristics became more general, and they lost those that distinguished them as camelids; hence, they were classified as ancestral artiodactyls. No fossils of these earlier forms have been found in the Old World, indicating that North America was the original home of camelids, and that Old World camels crossed over via the Bering Land Bridge. The formation of the Isthmus of Panama three million years ago allowed camelids to spread to South America as part of the Great American Interchange, where they evolved further. Meanwhile, North American camelids died out at the end of the Pleistocene. A full-grown llama can reach a height of at the top of the head, and can weigh between . At birth, a baby llama (called a "cria") can weigh between . Llamas typically live for 15 to 25 years, with some individuals surviving 30 years or more. The following characteristics apply especially to llamas. Dentition of adults: incisors 1/3, canines 1/1, premolars 2/2, molars 3/3; total 32. In the upper jaw, a compressed, sharp, pointed laniariform incisor near the hinder edge of the premaxilla is followed, in the male at least, by a moderate-sized, pointed, curved true canine in the anterior part of the maxilla. The isolated canine-like premolar that follows in the camels is not present. The teeth of the molar series, which are in contact with each other, consist of two very small premolars (the first almost rudimentary) and three broad molars, constructed generally like those of "Camelus". In the lower jaw, the three incisors are long, spatulate, and procumbent; the outer ones are the smallest. Next to these is a curved, suberect canine, followed after an interval by an isolated minute and often deciduous simple conical premolar; then a contiguous series of one premolar and three molars, which differ from those of "Camelus" in having a small accessory column at the anterior outer edge. The skull generally resembles that of "Camelus", the larger brain-cavity and orbits and less-developed cranial ridges being due to its smaller size. The nasal bones are shorter and broader, and are joined by the premaxilla. Vertebrae: The ears are rather long and slightly curved inward, characteristically known as "banana" shaped. There is no dorsal hump. The feet are narrow, the toes being more separated than in the camels, each having a distinct plantar pad. The tail is short, and the fibre is long, woolly and soft. In essential structural characteristics, as well as in general appearance and habits, all the animals of this genus very closely resemble each other, so whether they should be considered as belonging to one, two, or more species is a matter of controversy among naturalists. The question is complicated by the circumstance that the great majority of individuals that have come under observation have been either in a completely or partially domesticated state. Many are also descended from ancestors that have previously been domesticated, a state that tends to produce a certain amount of variation from the original type.
The four forms commonly distinguished by the inhabitants of South America are recognized as distinct species, though with difficulties in defining their distinctive characteristics. These are: The llama and alpaca are only known in the domestic state, and are variable in size and of many colors, being often white, brown, or piebald. Some are grey or black. The guanaco and vicuña are wild, the former being endangered, and are of a nearly uniform light-brown color, passing into white below. They certainly differ from each other, the vicuña being smaller, more slender in its proportions, and having a shorter head than the guanaco. The vicuña lives in herds on the bleak and elevated parts of the mountain range bordering the region of perpetual snow, amidst rocks and precipices, occurring in various suitable localities throughout Peru, in the southern part of Ecuador, and as far south as the middle of Bolivia. Its manners very much resemble those of the chamois of the European Alps; it is as vigilant, wild, and timid. The fiber is extremely delicate and soft, and highly valued for the purposes of weaving, but the quantity that each animal produces is minimal. Alpacas are descended from wild vicuña ancestors, while domesticated llamas are descended from wild guanaco ancestors, though a considerable amount of hybridization between the two species has occurred. Differential characteristics between llamas and alpacas include the llama's larger size, longer head, and curved ears. Alpaca fiber is generally more expensive, but not always more valuable. Alpacas tend to have a more consistent color throughout the body. The most apparent visual difference between llamas and camels is that camels have a hump or humps and llamas do not. Llamas are not ruminants, pseudo-ruminants, or modified ruminants. They have a complex stomach with several compartments that allows them to consume lower-quality, high-cellulose foods. The stomach compartments allow for fermentation of tough foodstuffs, followed by regurgitation and re-chewing. Ruminants (cows, sheep, goats) have four compartments, whereas llamas have only three stomach compartments: the rumen, omasum, and abomasum. In addition, the llama (and other camelids) have an extremely long and complex large intestine (colon). The large intestine's role in digestion is to reabsorb water, vitamins and electrolytes from food waste that is passing through it. The length of the llama's colon allows it to survive on much less water than other animals. This is a major advantage in the arid climates where they live. Llamas have an unusual reproductive cycle for a large animal. Female llamas are induced ovulators; through the act of mating, the female releases an egg and is often fertilized on the first attempt. Female llamas do not go into estrus ("heat"). Like humans, llama males and females mature sexually at different rates. Females reach puberty at about 12 months old; males do not become sexually mature until around three years of age. Llamas mate in a kush (lying down) position, which is fairly unusual in a large animal. They mate for an extended time (20–45 minutes), also unusual in a large animal. The gestation period of a llama is 11.5 months (350 days). Dams (female llamas) do not lick off their babies, as they have an attached tongue that does not reach outside of the mouth more than . Rather, they will nuzzle and hum to their newborns. A cria (from Spanish for "baby") is the name for a baby llama, alpaca, vicuña, or guanaco.
Crias are typically born with all the females of the herd gathering around, in an attempt to protect against the male llamas and potential predators. Llamas give birth standing. Birth is usually quick and problem-free, over in less than 30 minutes. Most births take place between 8 am and noon, during the warmer daylight hours. This may increase cria survival by reducing fatalities due to hypothermia during cold Andean nights. This birthing pattern is speculated to be a continuation of the birthing patterns observed in the wild. Crias are up and standing, walking and attempting to suckle within the first hour after birth. Crias are partially fed with llama milk, which is lower in fat and salt and higher in phosphorus and calcium than cow or goat milk. A female llama will only produce about of milk at a time when she gives milk, so the cria must suckle frequently to receive the nutrients it requires. In harem mating, the male is left with the females most of the year. For field mating, a female is turned out into a field with a male llama and left there for some period of time. This is the easiest method in terms of labor, but the least useful in terms of predicting a likely birth date. An ultrasound test can be performed, and together with the exposure dates, a better idea of when the cria is expected can be determined. Hand mating is the most efficient method, but requires the most work on the part of the human involved. A male and female llama are put into the same pen and mating is monitored. They are then separated and re-mated every other day until one or the other refuses the mating. Usually, one can get in two matings using this method, though some stud males routinely refuse to mate a female more than once. The separation presumably helps to keep the sperm count high for each mating and also helps to keep the condition of the female llama's reproductive tract more sound. If the mating is not successful within two to three weeks, the female is mated again. Options for feeding llamas are quite wide; a wide variety of commercial and farm-based feeds are available. The major determining factors include feed cost, availability, nutrient balance and the energy density required. Young, actively growing llamas require a greater concentration of nutrients than mature animals because of their smaller digestive-tract capacities. Llamas that are well-socialized and trained to halter and lead after weaning are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as young animals will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling. Llamas have started showing up in nursing homes and hospitals as certified therapy animals. Rojo the Llama, located in the Pacific Northwest, was certified in 2008. The Mayo Clinic says animal-assisted therapy can reduce pain, depression, anxiety, and fatigue. This type of therapy is growing in popularity, and there are several organizations throughout the United States that participate. When correctly reared, llamas spitting at a human is a rare thing. Llamas are very social herd animals, however, and do sometimes spit at each other as a way of disciplining lower-ranked llamas in the herd. A llama's social rank in a herd is never static; they can always move up or down the social ladder by picking small fights.
This is usually done between males to see which will become dominant. Their fights are visually dramatic, with spitting, ramming each other with their chests, neck wrestling, and kicking, mainly to knock the other off balance. The females are usually only seen spitting as a means of controlling other herd members. One may determine how agitated a llama is by the materials in the spit: the more irritated the llama is, the further back into its three stomach compartments it will try to draw material for its spit. While the social structure might always be changing, they live as a family and take care of each other. If one llama notices a strange noise or feels threatened, an alarm call, a loud, shrill sound which rhythmically rises and falls, is sent out and all the others become alert. They will often hum to each other as a form of communication. The sound of a llama making groaning noises or going "mwa" (/mwaʰ/) is often a sign of fear or anger. Unhappy or agitated llamas will lay their ears back, while ears perked upwards are a sign of happiness or curiosity. An "orgle" is the mating sound of a llama or alpaca, made by the sexually aroused male. The sound is reminiscent of gargling, but with a more forceful, buzzing edge. Males begin the sound when they become aroused and continue throughout the act of procreation, from 15 minutes to more than an hour. Using llamas as livestock guards in North America began in the early 1980s, and some sheep producers have used llamas successfully since then. Some even use them to guard their smaller cousin, the alpaca. They are used most commonly in the western regions of the United States, where larger predators, such as coyotes and feral dogs, are prevalent. Typically, a single gelding (castrated male) is used. Research suggests the use of multiple guard llamas is not as effective as using one. Multiple males tend to bond with one another, rather than with the livestock, and may ignore the flock. A gelded male of two years of age bonds closely with its new charges and is instinctively very effective in preventing predation. Some llamas appear to bond more quickly to sheep or goats if they are introduced just prior to lambing. Many sheep and goat producers indicate that a special bond quickly develops between lambs and their guard llama and that the llama is particularly protective of the lambs. Using llamas as guards has reduced losses to predators for many producers. The value of the livestock saved each year more than offsets the purchase cost and annual maintenance of a llama. Although not every llama is suited to the job, most provide a viable, nonlethal alternative for reducing predation, requiring no training and little care. Doctors and researchers have determined that llamas possess antibodies that are well suited to treating certain diseases. Scientists have been studying the way llamas might contribute to the fight against coronaviruses, including MERS and SARS-CoV-2 (which causes COVID-19). Scholar Alex Chepstow-Lusty has argued that the switch from a hunter-gatherer lifestyle to widespread agriculture was only possible because of the use of llama dung as fertilizer. The Moche people frequently placed llamas and llama parts in the burials of important people, as offerings or provisions for the afterlife. The Moche culture of pre-Columbian Peru depicted llamas quite realistically in their ceramics.
In the Inca Empire, llamas were the only beasts of burden, and many of the people dominated by the Inca had long traditions of llama herding. For the Inca nobility, the llama was of symbolic significance, and llama figures were often buried with the dead. In South America, llamas are still used as beasts of burden, as well as for the production of fiber and meat. The Inca deity Urcuchillay was depicted in the form of a multicolored llama. Carl Troll has argued that the large numbers of llamas found in the southern Peruvian highlands were an important factor in the rise of the Inca Empire. It is worth noting that the maximum extent of the Inca Empire roughly coincided with the greatest distribution of alpacas and llamas in Pre-Hispanic America. The link between the Andean biomes of puna and páramo, llama pastoralism, and the Inca state remains a matter of research. One of the main uses for llamas at the time of the Spanish conquest was to bring down ore from the mines in the mountains. Gregory de Bolivar estimated that in his day, as many as 300,000 were employed in the transport of produce from the Potosí mines alone, but since the introduction of horses, mules, and donkeys, the importance of the llama as a beast of burden has greatly diminished. According to Juan Ignacio Molina, the Dutch captain Joris van Spilbergen observed the use of hueques (possibly a llama type) by native Mapuches of Mocha Island as plow animals in 1614. In Chile, hueque populations declined towards extinction in the 16th and 17th centuries, as they were replaced by European livestock. The causes of their extinction are not clear, but it is known that the introduction of sheep caused some competition between the two domestic species. Anecdotal evidence from the mid-17th century shows that both species coexisted, but suggests that there were many more sheep than hueques. The decline of hueques reached a point in the late 18th century when only the Mapuche from Mariquina and Huequén, next to Angol, raised the animal. Llamas were first imported into the US in the late 1800s as zoo exhibits. Restrictions on the importation of livestock from South America due to foot-and-mouth disease, combined with a lack of commercial interest, kept the number of llamas low until the late 20th century. In the 1970s, interest in llamas as livestock began to grow, and the number of llamas increased as farmers bred and produced an increasing number of animals. Both the price and number of llamas in the US climbed rapidly in the 1980s and 1990s. With little market for llama fiber or meat in the US, and the value of guard llamas limited, the primary value in llamas was in breeding more animals, a classic sign of a speculative bubble in agriculture. By 2002, there were almost 145,000 llamas in the US according to the US Department of Agriculture, and animals sold for as much as $220,000. However, the lack of any end market for the animals resulted in a crash in both llama prices and the number of llamas; the Great Recession further dried up investment capital, and the number of llamas in the US began to decline as fewer animals were bred and older animals died of old age. By 2017, the number of llamas in the US had dropped below 40,000. A similar speculative bubble was experienced with the closely related alpaca, which burst shortly after the llama bubble. Llamas have a fine undercoat, which can be used for handicrafts and garments. The coarser outer guard hair is used for rugs, wall-hangings, and lead ropes.
The fiber comes in many different colors ranging from white or grey to reddish-brown, brown, dark brown and black.
https://en.wikipedia.org/wiki?curid=18071
Lexicon A lexicon, word-hoard, wordbook, or word-stock is the vocabulary of a person, language, or branch of knowledge (such as nautical or medical). In linguistics, a lexicon is a language's inventory of lexemes. The word "lexicon" derives from the Greek λεξικόν ("lexikon"), neuter of λεξικός ("lexikos"), meaning "of or for words." Linguistic theories generally regard human languages as consisting of two parts: a lexicon, essentially a catalogue of a language's words (its wordstock); and a grammar, a system of rules which allow for the combination of those words into meaningful sentences. The lexicon is also thought to include bound morphemes, which cannot stand alone as words (such as most affixes). In some analyses, compound words and certain classes of idiomatic expressions and other collocations are also considered to be part of the lexicon. Dictionaries represent attempts at listing, in alphabetical order, the lexicon of a given language; usually, however, bound morphemes are not included. Items in the lexicon are called lexemes, lexical items, or word forms. Lexemes are not atomic elements but contain both phonological and morphological components. When describing the lexicon, a reductionist approach is used, trying to remain general while using a minimal description. To describe the size of a lexicon, lexemes are grouped into lemmas. A lemma is a group of lexemes generated by inflectional morphology. Lemmas are represented in dictionaries by headwords, which list the citation forms and any irregular forms, since these must be learned to use the words correctly. Lexemes derived from a word by derivational morphology are considered new lemmas. The lexicon is also organized according to open and closed categories. Closed categories, such as determiners or pronouns, are rarely given new lexemes; their function is primarily syntactic. Open categories, such as nouns and verbs, have highly active generation mechanisms and their lexemes are more semantic in nature. A central role of the lexicon is the documenting of established "lexical norms and conventions". Lexicalization is the process by which new words, having gained widespread usage, enter the lexicon. Since lexicalization may modify lexemes phonologically and morphologically, it is possible that a single etymological source may be inserted into a single lexicon in two or more forms. Such pairs, each called a doublet, are often semantically close. Two examples are "aptitude" versus "attitude" and "employ" versus "imply". The mechanisms, which are not mutually exclusive, are described below. Neologisms are new lexeme candidates which, if they gain wide usage over time, become part of a language's lexicon. Neologisms are often introduced by children who produce erroneous forms by mistake. Other common sources are slang and advertising. There are two types of borrowings (neologisms based on external sources) that retain the sound of the source language material: The following are examples of external lexical expansion using the source language lexical item as the basic material for the neologization, listed in decreasing order of phonetic resemblance to the original lexical item (in the source language): The following are examples of simultaneous external and internal lexical expansion using target language lexical items as the basic material for the neologization but still resembling the sound of the lexical item in the source language: Another mechanism involves generative devices that combine morphemes according to a language's rules.
For example, the suffix "-able" is usually added only to transitive verbs, as in "readable" but not "cryable". A compound word is a lexeme composed of several established lexemes, whose semantics is not the sum of that of its constituents. Compounds can be interpreted through analogy, common sense and, most commonly, context. Compound words can have simple or complex morphological structures. Usually only the head requires inflection for agreement. Compounding may result in lexemes of unwieldy proportion. This is compensated for by mechanisms that reduce the length of words. A similar phenomenon has recently been shown to feature in social media, where hashtags compound to form longer hashtags that are at times more popular than the individual constituent hashtags forming the compound. Compounding is the most common word-formation strategy cross-linguistically. Comparative historical linguistics studies the evolution of languages and takes a diachronic view of the lexicon. The evolution of lexicons in different languages occurs through parallel mechanisms. Over time, historical forces work to shape the lexicon, making it simpler to acquire and often creating an illusion of great regularity in language. The term "lexicon" is generally used in the context of a single language. Therefore, multilingual speakers are generally thought to have multiple lexicons. Speakers of language variants (Brazilian Portuguese and European Portuguese, for example) may be considered to possess a single lexicon. Thus, a "cash dispenser" (British English) as well as an "automatic teller machine" or ATM (American English) would be understood by both American and British speakers, despite each group using different dialects. When linguists study a lexicon, they consider such things as what constitutes a word; the word/concept relationship; lexical access and lexical access failure; how a word's phonology, syntax, and meaning intersect; the morphology-word relationship; vocabulary structure within a given language; language use (pragmatics); language acquisition; the history and evolution of words (etymology); and the relationships between words, often studied within philosophy of language. Various models of how lexicons are organized and how words are retrieved have been proposed in psycholinguistics, neurolinguistics, and computational linguistics.
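To make the lexeme/lemma distinction described above concrete, the following minimal sketch (in Python, with invented toy data) groups inflected word forms under a single lemma, while a derivationally related word such as "runner" is listed as a lemma of its own. This is an illustrative model only, not a claim about how any particular dictionary or lexical database is implemented:

    # A toy lexicon: each lemma (headword) groups the lexemes generated
    # by inflectional morphology; derived words get their own lemma.
    lexicon = {
        "run":    {"run", "runs", "ran", "running"},  # one lemma, four word forms
        "runner": {"runner", "runners"},              # derivational morphology: a new lemma
        "read":   {"read", "reads", "reading"},
    }

    def lemma_of(word_form):
        # Return the headword under which a word form is listed, if any.
        for lemma, forms in lexicon.items():
            if word_form in forms:
                return lemma
        return None  # lexical access failure: the form is not in this lexicon

    print(lemma_of("ran"))      # "run" -- an irregular inflected form
    print(lemma_of("runners"))  # "runner"
    print(len(lexicon))         # the size of a lexicon is counted in lemmas

Note that the irregular form "ran" must be stored explicitly, which mirrors the point above that irregular forms have to be listed under the headword and learned individually.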
https://en.wikipedia.org/wiki?curid=18077
Leonardo da Vinci Leonardo di ser Piero da Vinci (14/15 April 1452 – 2 May 1519), known as Leonardo da Vinci, was an Italian polymath of the Renaissance whose areas of interest included science and invention, drawing, painting, sculpture, architecture, music, mathematics, engineering, literature, anatomy, geology, astronomy, botany, paleontology, and cartography. He has been variously called the father of palaeontology, ichnology, and architecture, and is widely considered one of the greatest painters of all time (despite perhaps only 15 of his paintings having survived). Born out of wedlock to a notary, Piero da Vinci, and a peasant woman, Caterina, in Vinci, in the region of Florence, Italy, Leonardo was educated in the studio of the renowned Italian painter Andrea del Verrocchio. Much of his earlier working life was spent in the service of Ludovico il Moro in Milan, and he later worked in Rome, Bologna and Venice. He spent his last three years in France, where he died in 1519. Leonardo is renowned primarily as a painter. The "Mona Lisa" is the most famous of his works and the most popular portrait ever made. "The Last Supper" is the most reproduced religious painting of all time and his "Vitruvian Man" drawing is regarded as a cultural icon as well. "Salvator Mundi" was sold for a world-record $450.3 million at a Christie's auction in New York on 15 November 2017, the highest price ever paid for a work of art. Leonardo's paintings and preparatory drawings—together with his notebooks, which contain sketches, scientific diagrams, and his thoughts on the nature of painting—compose a contribution to later generations of artists rivalled only by that of his contemporary Michelangelo. Although he had no formal academic training, many historians and scholars regard Leonardo as the prime exemplar of the "Universal Genius" or "Renaissance Man", an individual of "unquenchable curiosity" and "feverishly inventive imagination." He is widely considered one of the most diversely talented individuals ever to have lived. According to art historian Helen Gardner, the scope and depth of his interests were without precedent in recorded history, and "his mind and personality seem to us superhuman, while the man himself mysterious and remote." Scholars interpret his view of the world as being based in logic, though the empirical methods he used were unorthodox for his time. Leonardo is revered for his technological ingenuity. He conceptualized flying machines, a type of armoured fighting vehicle, concentrated solar power, an adding machine, and the double hull. Relatively few of his designs were constructed or even feasible during his lifetime, as the modern scientific approaches to metallurgy and engineering were only in their infancy during the Renaissance. Some of his smaller inventions, however, entered the world of manufacturing unheralded, such as an automated bobbin winder and a machine for testing the tensile strength of wire. He is also sometimes credited with the inventions of the parachute, helicopter, and tank. He made substantial discoveries in anatomy, civil engineering, geology, optics, and hydrodynamics, but he did not publish his findings and they had little to no direct influence on subsequent science.
Leonardo was born on 14/15 April 1452 in the Tuscan hill town of Vinci, in the lower valley of the Arno river in the territory of the Medici-ruled Republic of Florence. He was the out-of-wedlock son of Messer Piero Fruosino di Antonio da Vinci, a wealthy Florentine legal notary, and a peasant named Caterina, identified as Caterina Buti del Vacca and more recently as Caterina di Meo Lippi by historian Martin Kemp. There have been many theories regarding Leonardo's mother's identity, including that she was a slave of foreign origin or an impoverished local youth. Leonardo had no surname in the modern sense—da Vinci simply meaning "of Vinci"; his full birth name was Lionardo di ser Piero da Vinci, meaning "Leonardo, (son) of ser Piero (from) Vinci." Leonardo spent his first years in the hamlet of Anchiano in the home of his mother, and from at least 1457 lived in the household of his father, grandparents and uncle in the small town of Vinci. His father had married a 16-year-old girl named Albiera Amadori, who loved Leonardo but died young in 1465 without children. In 1468, when Leonardo was 16, his father married again, to 20-year-old Francesca Lanfredini, who also died without children. Piero's legitimate heirs were born from his third wife, Margherita di Guglielmo, who gave birth to six children, and his fourth and final wife, Lucrezia Cortigiani, who bore him another six heirs. In all, Leonardo had 12 half-siblings, who were much younger than he was (the last was born when Leonardo was 40 years old) and with whom he had very little contact. Leonardo received an informal education in Latin, geometry and mathematics. In later life, Leonardo recorded only a few distinct childhood incidents. One was of a kite coming to his cradle and opening his mouth with its tail; he regarded this as an omen of his writing on the subject. The second occurred while he was exploring in the mountains: he discovered a cave and was both terrified that some great monster might lurk there and driven by curiosity to find out what was inside. He also seems to have remembered some of his childhood observations of water, writing and crossing out the name of his hometown in one of his notebooks on the formation of rivers. Leonardo's early life has been the subject of historical conjecture. Vasari, the 16th-century biographer of Renaissance painters, tells a story of Leonardo as a very young man: A local peasant made himself a round shield and requested that Ser Piero have it painted for him. Leonardo, inspired by the story of Medusa, responded with a painting of a monster spitting fire that was so terrifying that his father bought a different shield to give to the peasant and sold Leonardo's to a Florentine art dealer for 100 ducats; the dealer in turn sold it to the Duke of Milan. In the mid-1460s, Leonardo's family moved to Florence, and around the age of 14, he became a "garzone" (studio boy) in the workshop of Verrocchio, who was the leading Florentine painter and sculptor of his time. Leonardo became an apprentice by the age of 17 and remained in training for seven years. Other famous painters apprenticed in the workshop or associated with it include Ghirlandaio, Perugino, Botticelli, and Lorenzo di Credi.
Leonardo was exposed to both theoretical training and a wide range of technical skills, including drafting, chemistry, metallurgy, metal working, plaster casting, leather working, mechanics, and woodwork, as well as the artistic skills of drawing, painting, sculpting, and modelling. Much of the painting in Verrocchio's workshop was done by his employees. According to Vasari, Leonardo collaborated with Verrocchio on his "The Baptism of Christ", painting the young angel holding Jesus' robe in a manner that was so far superior to his master's that Verrocchio put down his brush and never painted again, although this is believed to be an apocryphal story. Close examination reveals areas of the work that have been painted or touched-up over the tempera, using the new technique of oil paint, including the landscape, the rocks seen through the brown mountain stream, and much of the figure of Jesus, bearing witness to the hand of Leonardo. Leonardo may have been the model for two works by Verrocchio: the bronze statue of "David" in the Bargello, and the Archangel Raphael in "Tobias and the Angel". By 1472, at the age of 20, Leonardo qualified as a master in the Guild of Saint Luke, the guild of artists and doctors of medicine, but even after his father set him up in his own workshop, his attachment to Verrocchio was such that he continued to collaborate and live with him. Leonardo's earliest known dated work is a drawing of the Arno valley, which has been cited as the first "pure" landscape in the Occident. According to Vasari, the young Leonardo was the first to suggest making the Arno river a navigable channel between Florence and Pisa. In January 1478, Leonardo received an independent commission to paint an altarpiece for the Chapel of St. Bernard in the Palazzo Vecchio, an indication of his independence from Verrocchio's studio. One anonymous writer claims that in 1480, Leonardo was living with the Medici and often worked in the garden of the Piazza San Marco, Florence, where a Neoplatonic academy of artists, poets and philosophers organized by the Medici met. In March 1481, he received a commission from the monks of San Donato a Scopeto for "The Adoration of the Magi". Neither of these initial commissions was completed, both being abandoned when Leonardo went to offer his services to Duke of Milan Ludovico Sforza. In 1482, he crafted a silver stringed instrument from a horse's skull and ram horns to bring to Sforza, to whom he wrote a letter describing the diverse things that he could achieve in the fields of engineering and weapon design, and mentioning that he could paint. Leonardo worked in Milan from 1482 until 1499. He was commissioned to paint the "Virgin of the Rocks" for the Confraternity of the Immaculate Conception and "The Last Supper" for the monastery of Santa Maria delle Grazie. In the spring of 1485, Leonardo travelled to Hungary on behalf of Sforza to meet King Matthias Corvinus, and was commissioned by him to paint a Madonna. Leonardo was employed on many other projects for Sforza, including the preparation of floats and pageants for special occasions, a wooden model for a competition to design the cupola for Milan Cathedral (from which he withdrew), and a model for a huge equestrian monument to Ludovico's predecessor Francesco Sforza. This would have surpassed in size the only two large equestrian statues of the Renaissance, Donatello's "Gattamelata" in Padua and Verrocchio's "Bartolomeo Colleoni" in Venice, and became known as the "Gran Cavallo".
Leonardo completed a model for the horse and made detailed plans for its casting, but in November 1494, Ludovico gave the bronze to his brother-in-law to be used for a cannon to defend the city from Charles VIII. With Ludovico Sforza overthrown at the dawn of the Second Italian War, Leonardo, with his assistant Salaì and his friend the mathematician Luca Pacioli, fled Milan for Venice. There, he was employed as a military architect and engineer, devising methods to defend the city from naval attack. On his return to Florence in 1500, he and his household were guests of the Servite monks at the monastery of Santissima Annunziata and were provided with a workshop where, according to Vasari, Leonardo created the cartoon of "The Virgin and Child with St Anne and St John the Baptist", a work that won such admiration that "men and women, young and old" flocked to see it "as if they were attending a great festival." In Cesena in 1502, Leonardo entered the service of Cesare Borgia, the son of Pope Alexander VI, acting as a military architect and engineer and travelling throughout Italy with his patron. Leonardo created a map of Cesare Borgia's stronghold, a town plan of Imola, in order to win his patronage. Maps were extremely rare at the time, and the plan would have seemed like a new concept. Upon seeing it, Cesare hired Leonardo as his chief military engineer and architect. Later in the year, Leonardo produced another map for his patron, one of the Chiana Valley, Tuscany, so as to give his patron a better overview of the land and a greater strategic position. He created this map in conjunction with his other project of constructing a dam from the sea to Florence, in order to allow a supply of water to sustain the canal during all seasons. Leonardo had left Borgia's service and returned to Florence by early 1503, where he rejoined the Guild of Saint Luke on 18 October of that year. By this same month, Leonardo had begun working on a portrait of Lisa del Giocondo, the model for the "Mona Lisa", which he would continue working on until his twilight years. In January 1504, he was part of a committee formed to recommend where Michelangelo's statue of "David" should be placed. He then spent two years in Florence designing and painting a mural of "The Battle of Anghiari" for the Signoria, with Michelangelo designing its companion piece, "The Battle of Cascina". In 1506, Leonardo was summoned to Milan by Charles II d'Amboise, the acting French governor of the city. The Council of Florence wished Leonardo to return promptly to finish "The Battle of Anghiari", but he was given leave at the behest of Louis XII, who considered commissioning the artist to make some portraits. Leonardo may have commenced a project for an equestrian figure of d'Amboise; a wax model survives and, if genuine, is the only extant example of Leonardo's sculpture. Leonardo was otherwise free to pursue his scientific interests. Many of Leonardo's most prominent pupils either knew or worked with him in Milan, including Bernardino Luini, Giovanni Antonio Boltraffio, and Marco d'Oggiono. In 1507, Leonardo was in Florence sorting out a dispute with his brothers over the estate of his father, who had died in 1504. By 1508, Leonardo was back in Milan, living in his own house in Porta Orientale in the parish of Santa Babila. In 1512, Leonardo was working on plans for an equestrian monument for Gian Giacomo Trivulzio, but this was prevented by an invasion of a confederation of Swiss, Spanish and Venetian forces, which drove the French from Milan.
Leonardo stayed in the city, spending several months in 1513 at the Medici's Vaprio d'Adda villa. In March of that year, Lorenzo de' Medici's son Giovanni assumed the papacy (as Leo X); Leonardo went to Rome that September, where he was received by the pope's brother Giuliano. From September 1513 to 1516, Leonardo spent much of his time living in the Belvedere Courtyard in the Apostolic Palace, where Michelangelo and Raphael were both active. Leonardo was given an allowance of 33 ducats a month, and according to Vasari, decorated a lizard with scales dipped in quicksilver. The pope gave him a painting commission of unknown subject matter, but cancelled it when the artist set about developing a new kind of varnish. Leonardo became ill, in what may have been the first of multiple strokes leading to his death. He practiced botany in the Gardens of Vatican City, and was commissioned to make plans for the pope's proposed draining of the Pontine Marshes. He also dissected cadavers, making notes for a treatise on vocal cords; these he gave to an official in hopes of regaining the pope's favor, but was unsuccessful. In October 1515, King Francis I of France recaptured Milan. Leonardo was present at the 19 December meeting of Francis I and Leo X, which took place in Bologna. In 1516, Leonardo entered Francis' service, being given the use of the manor house Clos Lucé, near the king's residence at the royal Château d'Amboise. Being frequently visited by Francis, he drew plans for an immense castle town the king intended to erect at Romorantin, and made a mechanical lion, which during a pageant walked toward the king and—upon being struck by a wand—opened its chest to reveal a cluster of lilies. Leonardo was accompanied during this time by his friend and apprentice, Francesco Melzi, and supported by a pension totalling 10,000 scudi. At some point, Melzi drew a portrait of Leonardo; the only others known from his lifetime were a sketch by an unknown assistant on the back of one of Leonardo's studies (c. 1517) and a drawing by Giovanni Ambrogio Figino depicting an elderly Leonardo with his right arm assuaged by cloth. The latter, in addition to the record of an October 1517 visit by Louis d'Aragon, confirms an account of Leonardo's right hand being paralytic at the age of 65, which may indicate why he left works such as the "Mona Lisa" unfinished. He continued to work in some capacity until eventually becoming ill and bedridden for several months. Leonardo died at Clos Lucé on 2 May 1519 at the age of 67, possibly of a stroke. Francis I had become a close friend. Vasari describes Leonardo as lamenting on his deathbed, full of repentance, that "he had offended against God and men by failing to practice his art as he should have done." Vasari states that in his last days, Leonardo sent for a priest to make his confession and to receive the Holy Sacrament. Vasari also records that the king held Leonardo's head in his arms as he died, although this story may be legend rather than fact. In accordance with his will, sixty beggars carrying tapers followed Leonardo's casket. Melzi was the principal heir and executor, receiving, as well as money, Leonardo's paintings, tools, library and personal effects. Leonardo also remembered his other long-time pupil and companion, Salaì, and his servant Baptista de Vilanis, who each received half of Leonardo's vineyards. His brothers received land, and his serving woman received a fur-lined cloak.
On 12 August 1519, Leonardo's remains were interred in the Collegiate Church of Saint Florentin at the Château d'Amboise. Some 20 years after Leonardo's death, Francis was reported by the goldsmith and sculptor Benvenuto Cellini as saying: "There had never been another man born in the world who knew as much as Leonardo, not so much about painting, sculpture and architecture, as that he was a very great philosopher." Florence at the time of Leonardo's youth was the centre of Christian Humanist thought and culture. Leonardo commenced his apprenticeship with Verrocchio in 1466, the year that Verrocchio's master, the great sculptor Donatello, died. The painter Uccello, whose early experiments with perspective were to influence the development of landscape painting, was a very old man. The painters Piero della Francesca and Filippo Lippi, sculptor Luca della Robbia, and architect and writer Leon Battista Alberti were in their sixties. The successful artists of the next generation were Leonardo's teacher Verrocchio, Antonio del Pollaiuolo, and the portrait sculptor Mino da Fiesole. Leonardo's youth was spent in a Florence that was ornamented by the works of these artists and by Donatello's contemporaries, Masaccio, whose figurative frescoes were imbued with realism and emotion; and Ghiberti, whose "Gates of Paradise", gleaming with gold leaf, displayed the art of combining complex figure compositions with detailed architectural backgrounds. Piero della Francesca had made a detailed study of perspective, and was the first painter to make a scientific study of light. These studies and Alberti's treatise "De pictura" were to have a profound effect on younger artists and in particular on Leonardo's own observations and artworks. A prevalent tradition in Florence was the small altarpiece of the Virgin and Child. Many of these were created in tempera or glazed terracotta by the workshops of Filippo Lippi, Verrocchio and the prolific della Robbia family. Leonardo's early Madonnas such as "The Madonna with a carnation" and the "Benois Madonna" followed this tradition while showing idiosyncratic departures, particularly in the latter in which the Virgin is set at an oblique angle to the picture space with the Christ Child at the opposite angle. This compositional theme was to emerge in Leonardo's later paintings such as "The Virgin and Child with St. Anne". Leonardo was a contemporary of Botticelli, Domenico Ghirlandaio and Perugino, who were all slightly older than he was. He would have met them at the workshop of Verrocchio, with whom they had associations, and at the Academy of the Medici. Botticelli was a particular favourite of the Medici family, and thus his success as a painter was assured. Ghirlandaio and Perugino were both prolific and ran large workshops. They competently delivered commissions to well-satisfied patrons who appreciated Ghirlandaio's ability to portray the wealthy citizens of Florence within large religious frescoes, and Perugino's ability to deliver a multitude of saints and angels of unfailing sweetness and innocence. These three were among those commissioned to paint the walls of the Sistine Chapel, the work commencing with Perugino's employment in 1479. Leonardo was not part of this prestigious commission. His first significant commission, The "Adoration of the Magi" for the Monks of Scopeto, was never completed. 
In 1476, during the time of Leonardo's association with Verrocchio's workshop, the Portinari Altarpiece by Hugo van der Goes arrived in Florence, bringing from Northern Europe new painterly techniques that were to profoundly affect Leonardo, Ghirlandaio, Perugino and others. In 1479, the Sicilian painter Antonello da Messina, who worked exclusively in oils, travelled north on his way to Venice, where the leading painter Giovanni Bellini adopted the technique of oil painting, quickly making it the preferred method in Venice. Leonardo was also later to visit Venice. Like the two contemporary architects Donato Bramante (who designed the Belvedere Courtyard) and Antonio da Sangallo the Elder, Leonardo experimented with designs for centrally planned churches, a number of which appear in his journals, as both plans and views, although none was ever realised. Leonardo's political contemporaries were Lorenzo de' Medici (il Magnifico), who was three years older, and his younger brother Giuliano, who was slain in the Pazzi conspiracy in 1478. Leonardo was sent as an ambassador by the Medici court to Ludovico il Moro, who ruled Milan between 1479 and 1499. With Alberti, Leonardo visited the home of the Medici and through them came to know the older Humanist philosophers of whom Marsiglio Ficino, proponent of Neo Platonism; Cristoforo Landino, writer of commentaries on Classical writings, and John Argyropoulos, teacher of Greek and translator of Aristotle were the foremost. Also associated with the Academy of the Medici was Leonardo's contemporary, the brilliant young poet and philosopher Pico della Mirandola. Leonardo later wrote in the margin of a journal, "The Medici made me and the Medici destroyed me." While it was through the action of Lorenzo that Leonardo received his employment at the court of Milan, it is not known exactly what Leonardo meant by this cryptic comment. Although usually named together as the three giants of the High Renaissance, Leonardo, Michelangelo and Raphael were not of the same generation. Leonardo was 23 when Michelangelo was born and 31 when Raphael was born. Raphael died at the age of 37 in 1520, the year after Leonardo died, but Michelangelo went on creating for another 45 years. Within Leonardo's lifetime, his extraordinary powers of invention, his "outstanding physical beauty," "infinite grace," "great strength and generosity," "regal spirit and tremendous breadth of mind," as described by Vasari, as well as all other aspects of his life, attracted the curiosity of others. One such aspect was his vegetarianism and his habit, according to Vasari, of purchasing caged birds and releasing them. Leonardo had many friends who are now renowned either in their fields or for their historical significance. They included the mathematician Luca Pacioli, with whom he collaborated on the book "Divina proportione" in the 1490s. Leonardo appears to have had no close relationships with women except for his friendship with Cecilia Gallerani and the two Este sisters, Beatrice and Isabella. While on a journey that took him through Mantua, he drew a portrait of Isabella that appears to have been used to create a painted portrait, now lost. Beyond friendship, Leonardo kept his private life secret. His sexuality has been the subject of satire, analysis, and speculation. This trend began in the mid-16th century and was revived in the 19th and 20th centuries, most notably by Sigmund Freud. Leonardo's most intimate relationships were perhaps with his pupils Salaì and Melzi. 
Melzi, writing to inform Leonardo's brothers of his death, described Leonardo's feelings for his pupils as both loving and passionate. It has been claimed since the 16th century that these relationships were of a sexual or erotic nature. Court records of 1476, when he was aged twenty-four, show that Leonardo and three other young men were charged with sodomy in an incident involving a well-known male prostitute. The charges were dismissed for lack of evidence, and there is speculation that since one of the accused, Lionardo de Tornabuoni, was related to Lorenzo de' Medici, the family exerted its influence to secure the dismissal. Since that date much has been written about his presumed homosexuality and its role in his art, particularly in the androgyny and eroticism manifested in "Saint John the Baptist" and "Bacchus" and more explicitly in a number of erotic drawings. Gian Giacomo Caprotti da Oreno, nicknamed Salaì or Il Salaino ("The Little Unclean One," i.e., the devil), entered Leonardo's household in 1490. After only a year, Leonardo made a list of his misdemeanours, calling him "a thief, a liar, stubborn, and a glutton," after he had made off with money and valuables on at least five occasions and spent a fortune on clothes. Nevertheless, Leonardo treated him with great indulgence, and he remained in Leonardo's household for the next thirty years. Salaì executed a number of paintings under the name of Andrea Salaì, but although Vasari claims that Leonardo "taught him a great deal about painting," his work is generally considered to be of less artistic merit than others among Leonardo's pupils, such as Marco d'Oggiono and Boltraffio. In 1515, he painted a nude version of the "Mona Lisa", known as "Monna Vanna". Salaì owned the "Mona Lisa" at the time of his death in 1524, and in his will it was assessed at 505 lire, an exceptionally high valuation for a small panel portrait. In 1506, Leonardo took on another pupil, Count Francesco Melzi, the son of a Lombard aristocrat, who is considered to have been his favourite student. He travelled to France with Leonardo and remained with him until Leonardo's death. Melzi inherited the artistic and scientific works, manuscripts, and collections of Leonardo and administered the estate. Despite the recent awareness and admiration of Leonardo as a scientist and inventor, for the better part of four hundred years his fame rested on his achievements as a painter. A handful of works that are either authenticated or attributed to him have been regarded as among the great masterpieces. These paintings are famous for a variety of qualities that have been much imitated by students and discussed at great length by connoisseurs and critics. By the 1490s Leonardo had already been described as a "Divine" painter. Among the qualities that make Leonardo's work unique are his innovative techniques for laying on the paint; his detailed knowledge of anatomy, light, botany and geology; his interest in physiognomy and the way humans register emotion in expression and gesture; his innovative use of the human form in figurative composition; and his use of subtle gradation of tone. All these qualities come together in his most famous painted works, the "Mona Lisa", the "Last Supper", and the "Virgin of the Rocks". Leonardo first gained attention for his work on the "Baptism of Christ", painted in conjunction with Verrocchio. Two other paintings appear to date from his time at Verrocchio's workshop, both of which are Annunciations. One is small, long and high. 
It is a "predella" to go at the base of a larger composition, a painting by Lorenzo di Credi from which it has become separated. The other is a much larger work, long. In both Annunciations, Leonardo used a formal arrangement, like two well-known pictures by Fra Angelico of the same subject, of the Virgin Mary sitting or kneeling to the right of the picture, approached from the left by an angel in profile, with a rich flowing garment, raised wings and bearing a lily. Although previously attributed to Ghirlandaio, the larger work is now generally attributed to Leonardo. In the smaller painting, Mary averts her eyes and folds her hands in a gesture that symbolised submission to God's will. Mary is not submissive, however, in the larger piece. The girl, interrupted in her reading by this unexpected messenger, puts a finger in her bible to mark the place and raises her hand in a formal gesture of greeting or surprise. This calm young woman appears to accept her role as the Mother of God, not with resignation but with confidence. In this painting, the young Leonardo presents the humanist face of the Virgin Mary, recognising humanity's role in God's incarnation. In the 1480s, Leonardo received two very important commissions and commenced another work that was of ground-breaking importance in terms of composition. Two of the three were never finished, and the third took so long that it was subject to lengthy negotiations over completion and payment. One of these paintings was "Saint Jerome in the Wilderness", which Bortolon associates with a difficult period of Leonardo's life, as evidenced in his diary: "I thought I was learning to live; I was only learning to die." Although the painting is barely begun, the composition can be seen and is very unusual. Jerome, as a penitent, occupies the middle of the picture, set on a slight diagonal and viewed somewhat from above. His kneeling form takes on a trapezoid shape, with one arm stretched to the outer edge of the painting and his gaze looking in the opposite direction. J. Wasserman points out the link between this painting and Leonardo's anatomical studies. Across the foreground sprawls his symbol, a great lion whose body and tail make a double spiral across the base of the picture space. The other remarkable feature is the sketchy landscape of craggy rocks against which the figure is silhouetted. The daring display of figure composition, the landscape elements and personal drama also appear in the great unfinished masterpiece, the "Adoration of the Magi", a commission from the Monks of San Donato a Scopeto. It is a complex composition, of about Leonardo did numerous drawings and preparatory studies, including a detailed one in linear perspective of the ruined classical architecture that forms part of the background. In 1482 Leonardo went to Milan at the behest of Lorenzo de' Medici in order to win favour with Ludovico il Moro, and the painting was abandoned. The third important work of this period is the "Virgin of the Rocks", commissioned in Milan for the Confraternity of the Immaculate Conception. The painting, to be done with the assistance of the de Predis brothers, was to fill a large complex altarpiece. Leonardo chose to paint an apocryphal moment of the infancy of Christ when the infant John the Baptist, in protection of an angel, met the Holy Family on the road to Egypt. The painting demonstrates an eerie beauty as the graceful figures kneel in adoration around the infant Christ in a wild landscape of tumbling rock and whirling water. 
While the painting is quite large, about , it is not nearly as complex as the painting ordered by the monks of St Donato, having only four figures rather than about fifty and a rocky landscape rather than architectural details. The painting was eventually finished; in fact, two versions of the painting were finished: one remained at the chapel of the Confraternity, while Leonardo took the other to France. The Brothers did not get their painting, however, nor the de Predis their payment, until the next century. Leonardo's most famous painting of the 1490s is "The Last Supper", commissioned for the refectory of the Convent of Santa Maria delle Grazie in Milan. It represents the last meal shared by Jesus with his disciples before his capture and death, and shows the moment when Jesus has just said "one of you will betray me", and the consternation that this statement caused. The novelist Matteo Bandello observed Leonardo at work and wrote that some days he would paint from dawn till dusk without stopping to eat and then not paint for three or four days at a time. This was beyond the comprehension of the prior of the convent, who hounded him until Leonardo asked Ludovico to intervene. Vasari describes how Leonardo, troubled over his ability to adequately depict the faces of Christ and the traitor Judas, told the Duke that he might be obliged to use the prior as his model. When finished, the painting was acclaimed as a masterpiece of design and characterization, but it deteriorated rapidly, so that within a hundred years it was described by one viewer as "completely ruined." Leonardo, instead of using the reliable technique of fresco, had used tempera over a ground that was mainly gesso, resulting in a surface subject to mould and to flaking. Despite this, the painting remains one of the most reproduced works of art; countless copies have been made in various mediums. Leonardo's most remarkable portrait of this period is the "Lady with an Ermine", presumed to be Cecilia Gallerani (c. 1483–1490), lover of Ludovico Sforza. The painting is characterised by the pose of the figure with the head turned at a very different angle to the torso, unusual at a date when many portraits were still rigidly in profile. The ermine plainly carries symbolic meaning, relating either to the sitter, or to Ludovico, who belonged to the prestigious Order of the Ermine. It is recorded that in 1492, Leonardo, with assistants, painted the Sala delle Asse in the Sforza Castle in Milan, with a trompe-l'œil depicting trees, with an intricate labyrinth of leaves and knots, on the ceiling. In 1505, Leonardo was commissioned to paint "The Battle of Anghiari" in the Salone dei Cinquecento (Hall of the Five Hundred) in the Palazzo Vecchio, Florence. Leonardo devised a dynamic composition depicting four men riding raging war horses engaged in a battle for possession of a standard, at the Battle of Anghiari in 1440. Michelangelo was assigned the opposite wall to depict the Battle of Cascina. Leonardo's painting deteriorated rapidly and is now known from a copy by Rubens. Among the works created by Leonardo in the 16th century is the small portrait known as the "Mona Lisa" or "La Gioconda", the laughing one. In the present era, it is arguably the most famous painting in the world. Its fame rests, in particular, on the elusive smile on the woman's face, its mysterious quality perhaps due to the subtly shadowed corners of the mouth and eyes such that the exact nature of the smile cannot be determined.
The shadowy quality for which the work is renowned came to be called "sfumato," or Leonardo's smoke. Vasari, who is generally thought to have known the painting only by repute, said that "the smile was so pleasing that it seemed divine rather than human; and those who saw it were amazed to find that it was as alive as the original." Other characteristics of the painting are the unadorned dress, in which the eyes and hands have no competition from other details; the dramatic landscape background, in which the world seems to be in a state of flux; the subdued colouring; and the extremely smooth nature of the painterly technique, employing oils laid on much like tempera, and blended on the surface so that the brushstrokes are indistinguishable. Vasari expressed the opinion that the manner of painting would make even "the most confident master...despair and lose heart." The perfect state of preservation and the fact that there is no sign of repair or overpainting are rare in a panel painting of this date. In the painting "Virgin and Child with St. Anne", the composition again picks up the theme of figures in a landscape, which Wasserman describes as "breathtakingly beautiful", and harkens back to the St Jerome picture with the figure set at an oblique angle. What makes this painting unusual is that there are two obliquely set figures superimposed. Mary is seated on the knee of her mother, St Anne. She leans forward to restrain the Christ Child as he plays roughly with a lamb, the sign of his own impending sacrifice. This painting, which was copied many times, influenced Michelangelo, Raphael, and Andrea del Sarto, and through them Pontormo and Correggio. The trends in composition were adopted in particular by the Venetian painters Tintoretto and Veronese. Leonardo was not a prolific painter, but he was a most prolific draftsman, keeping journals full of small sketches and detailed drawings recording all manner of things that took his attention. As well as the journals, there exist many studies for paintings, some of which can be identified as preparatory to particular works such as "The Adoration of the Magi", "The Virgin of the Rocks" and "The Last Supper". His earliest dated drawing is a "Landscape of the Arno Valley", 1473, which shows the river, the mountains, Montelupo Castle and the farmlands beyond it in great detail. According to art historian Ludwig Heydenreich, this is "The first true landscape in art." Massimo Polidoro says that it was the first landscape "not to be the background of some religious scene or a portrait. It is the first [documented] time where a landscape was drawn just for the sake of it." Among his famous drawings are the "Vitruvian Man", a study of the proportions of the human body; the "Head of an Angel", for "The Virgin of the Rocks" in the Louvre; a botanical study of "Star of Bethlehem"; and a large drawing (160×100 cm) in black chalk on coloured paper of "The Virgin and Child with St. Anne and St. John the Baptist" in the National Gallery, London. This drawing employs the subtle "sfumato" technique of shading, in the manner of the "Mona Lisa". It is thought that Leonardo never made a painting from it, the closest similarity being to "The Virgin and Child with St. Anne" in the Louvre. Other drawings of interest include numerous studies generally referred to as "caricatures" because, although exaggerated, they appear to be based upon observation of live models.
Vasari relates that if Leonardo saw a person with an interesting face he would follow them around all day observing them. There are numerous studies of beautiful young men, often associated with Salaì, with the rare and much admired facial feature, the so-called "Grecian profile." These faces are often contrasted with that of a warrior. Salaì is often depicted in fancy-dress costume. Leonardo is known to have designed sets for pageants with which these may be associated. Other, often meticulous, drawings show studies of drapery. A marked development in Leonardo's ability to draw drapery occurred in his early works. Another often-reproduced drawing is a macabre sketch that was done by Leonardo in Florence in 1479 showing the body of Bernardo Baroncelli, hanged in connection with the murder of Giuliano, brother of Lorenzo de' Medici, in the Pazzi conspiracy. In his notes, Leonardo recorded the colours of the robes that Baroncelli was wearing when he died. Renaissance humanism recognised no mutually exclusive polarities between the sciences and the arts, and Leonardo's studies in science and engineering are sometimes considered as impressive and innovative as his artistic work. These studies were recorded in 13,000 pages of notes and drawings, which fuse art and natural philosophy (the forerunner of modern science). They were made and maintained daily throughout Leonardo's life and travels, as he made continual observations of the world around him. Leonardo's notes and drawings display an enormous range of interests and preoccupations, some as mundane as lists of groceries and people who owed him money and some as intriguing as designs for wings and shoes for walking on water. There are compositions for paintings, studies of details and drapery, studies of faces and emotions, of animals, babies, dissections, plant studies, rock formations, whirlpools, war machines, flying machines and architecture. These notebooks, originally loose papers of different types and sizes, were largely entrusted to Leonardo's pupil and heir Francesco Melzi after the master's death. These were to be published, a task of overwhelming difficulty because of their scope and Leonardo's idiosyncratic writing. Some of Leonardo's drawings were copied by an anonymous Milanese artist for a planned treatise on art c. 1570. After Melzi's death in 1570, the collection passed to his son, the lawyer Orazio, who initially took little interest in the journals. In 1587, a Melzi household tutor named Lelio Gavardi took 13 of the manuscripts to Pisa; there, the architect Giovanni Magenta reproached Gavardi for having taken the manuscripts illicitly and returned them to Orazio. Having many more such works in his possession, Orazio gifted the volumes to Magenta. News spread of these lost works of Leonardo's, and Orazio retrieved seven of the 13 manuscripts, which he then gave to Pompeo Leoni for publication in two volumes; one of these was the Codex Atlanticus. The other six works had been distributed to a few others. After Orazio's death, his heirs sold the rest of Leonardo's possessions, and thus began their dispersal. Some works have found their way into major collections such as the Royal Library at Windsor Castle, the Louvre, the Biblioteca Nacional de España, the Victoria and Albert Museum, the Biblioteca Ambrosiana in Milan, which holds the 12-volume Codex Atlanticus, and the British Library in London, which has put a selection from the Codex Arundel (BL Arundel MS 263) online.
Works have also been at Holkham Hall, the Metropolitan Museum of Art, and in the private hands of John Nicholas Brown I and Robert Lehman. The Codex Leicester is the only privately owned major scientific work of Leonardo; it is owned by Bill Gates and displayed once a year in different cities around the world. Most of Leonardo's writings are in mirror-image cursive. Since Leonardo wrote with his left hand, it was probably easier for him to write from right to left. Leonardo used a variety of shorthand and symbols, and states in his notes that he intended to prepare them for publication. In many cases a single topic is covered in detail in both words and pictures on a single sheet, together conveying information that would not be lost if the pages were published out of order. Why they were not published during Leonardo's lifetime is unknown. Leonardo's approach to science was observational: he tried to understand a phenomenon by describing and depicting it in utmost detail and did not emphasise experiments or theoretical explanation. Since he lacked formal education in Latin and mathematics, contemporary scholars mostly ignored Leonardo the scientist, although he did teach himself Latin. In the 1490s he studied mathematics under Luca Pacioli and prepared a series of drawings of regular solids in a skeletal form to be engraved as plates for Pacioli's book "Divina proportione", published in 1509. While living in Milan, he studied light from the summit of Monte Rosa. Scientific writings in his notebook on fossils have been considered as influential on early palaeontology, and he has been called the father of ichnology. The content of his journals suggests that he was planning a series of treatises on a variety of subjects. A coherent treatise on anatomy is said to have been observed during a visit by Cardinal Louis d'Aragon's secretary in 1517. Aspects of his work on the studies of anatomy, light and the landscape were assembled for publication by Melzi and eventually published as "A Treatise on Painting" in France and Italy in 1651 and Germany in 1724, with engravings based upon drawings by the Classical painter Nicolas Poussin. According to Arasse, the treatise, which in France went into 62 editions in fifty years, caused Leonardo to be seen as "the precursor of French academic thought on art." While Leonardo's experimentation followed scientific methods, a recent and exhaustive analysis of Leonardo as a scientist by Fritjof Capra argues that Leonardo was a fundamentally different kind of scientist from Galileo, Newton and other scientists who followed him in that, as a "Renaissance Man", his theorising and hypothesising integrated the arts and particularly painting. Leonardo began his study of the anatomy of the human body under the apprenticeship of Andrea del Verrocchio, who demanded that his students develop a deep knowledge of the subject. As an artist, he quickly became master of "topographic anatomy", drawing many studies of muscles, tendons and other visible anatomical features. As a successful artist, Leonardo was given permission to dissect human corpses at the Hospital of Santa Maria Nuova in Florence and later at hospitals in Milan and Rome. From 1510 to 1511 he collaborated in his studies with the doctor Marcantonio della Torre. Leonardo made over 240 detailed drawings and wrote about 13,000 words towards a treatise on anatomy. Only a small amount of the material on anatomy was published in Leonardo's "Treatise on painting".
During the time that Melzi was ordering the material into chapters for publication, they were examined by a number of anatomists and artists, including Vasari, Cellini and Albrecht Dürer, who made a number of drawings from them. Leonardo's anatomical drawings include many studies of the human skeleton and its parts, and of muscles and sinews. He studied the mechanical functions of the skeleton and the muscular forces that are applied to it in a manner that prefigured the modern science of biomechanics. He drew the heart and vascular system, the sex organs and other internal organs, making one of the first scientific drawings of a fetus "in utero". The drawings and notation are far ahead of their time, and if published would undoubtedly have made a major contribution to medical science. Leonardo also closely observed and recorded the effects of age and of human emotion on the physiology, studying in particular the effects of rage. He drew many figures who had significant facial deformities or signs of illness. Leonardo also studied and drew the anatomy of many animals, dissecting cows, birds, monkeys, bears, and frogs, and comparing in his drawings their anatomical structure with that of humans. He also made a number of studies of horses. Leonardo's dissections and documentation of muscles, nerves, and vessels helped to describe the physiology and mechanics of movement. He attempted to identify the source of 'emotions' and their expression. He found it difficult to incorporate the prevailing system and theories of bodily humours, but eventually he abandoned these physiological explanations of bodily functions. He made the observations that humours were not located in cerebral spaces or ventricles. He documented that the humours were not contained in the heart or the liver, and that it was the heart that defined the circulatory system. He was the first to define atherosclerosis and liver cirrhosis. He created models of the cerebral ventricles with the use of melted wax and constructed a glass aorta to observe the circulation of blood through the aortic valve by using water and grass seed to watch flow patterns. Vesalius published his work on anatomy and physiology in "De humani corporis fabrica" in 1543. During his lifetime, Leonardo was also valued as an engineer. With the same rational and analytical approach that moved him to represent the human body and to investigate anatomy, Leonardo studied and designed a bewildering number of machines and devices. He drew their “anatomy” with unparalleled mastery, producing the first form of the modern technical drawing, including a perfected "exploded view" technique, to represent internal components. Those studies and projects collected in his codices fill more than 5,000 pages. In a letter of 1482 to the lord of Milan Ludovico il Moro, he wrote that he could create all sorts of machines both for the protection of a city and for siege. When he fled from Milan to Venice in 1499, he found employment as an engineer and devised a system of moveable barricades to protect the city from attack. In 1502, he created a scheme for diverting the flow of the Arno river, a project on which Niccolò Machiavelli also worked. He continued to contemplate the canalization of Lombardy's plains while in Louis XII's company and of the Loire and its tributaries in the company of Francis I. Leonardo's journals include a vast number of inventions, both practical and impractical. 
They include musical instruments, a mechanical knight, hydraulic pumps, reversible crank mechanisms, finned mortar shells, and a steam cannon. Leonardo was fascinated by the phenomenon of flight for much of his life, producing many studies, including Codex on the Flight of Birds (c. 1505), as well as plans for several flying machines, such as a flapping ornithopter and a machine with a helical rotor. A 2003 documentary by British television station Channel Four, titled "Leonardo's Dream Machines", various designs by Leonardo, such as a parachute and a giant crossbow, were interpreted and constructed. Some of those designs proved successful, whilst others fared less well when tested. Research performed by Marc van den Broek revealed older prototypes for more than 100 inventions that are ascribed to Leonardo. Similarities between Leonardo´s illustrations and drawings from the Middle Ages and from Ancient Greece and Rome, the Chinese and Persian Empires, and Egypt suggest that a large portion of Leonardo's inventions had been conceived before his lifetime. Leonardo´s innovation was to combine different functions from existing drafts and set them into scenes that illustrated their utility. By reconstituting technical inventions he created something new. Leonardo's fame within his own lifetime was such that the King of France carried him away like a trophy, and was claimed to have supported him in his old age and held him in his arms as he died. Interest in Leonardo and his work has never diminished. Crowds still queue to see his best-known artworks, T-shirts still bear his most famous drawing, and writers continue to hail him as a genius while speculating about his private life, as well as about what one so intelligent actually believed in. The continued admiration that Leonardo commanded from painters, critics and historians is reflected in many other written tributes. Baldassare Castiglione, author of "Il Cortegiano" ("The Courtier"), wrote in 1528: "...Another of the greatest painters in this world looks down on this art in which he is unequalled..." while the biographer known as "Anonimo Gaddiano" wrote, : "His genius was so rare and universal that it can be said that nature worked a miracle on his behalf..." Giorgio Vasari, in the enlarged edition of "Lives of the Artists" (1568) introduced his chapter on Leonardo with the following words: The 19th century brought a particular admiration for Leonardo's genius, causing Henry Fuseli to write in 1801: "Such was the dawn of modern art, when Leonardo da Vinci broke forth with a splendour that distanced former excellence: made up of all the elements that constitute the essence of genius..." This is echoed by A.E. Rio who wrote in 1861: "He towered above all other artists through the strength and the nobility of his talents." By the 19th century, the scope of Leonardo's notebooks was known, as well as his paintings. Hippolyte Taine wrote in 1866: "There may not be in the world an example of another genius so universal, so incapable of fulfilment, so full of yearning for the infinite, so naturally refined, so far ahead of his own century and the following centuries." Art historian Bernard Berenson wrote in 1896: "Leonardo is the one artist of whom it may be said with perfect literalness: Nothing that he touched but turned into a thing of eternal beauty. 
Whether it be the cross section of a skull, the structure of a weed, or a study of muscles, he, with his feeling for line and for light and shade, forever transmuted it into life-communicating values." The interest in Leonardo's genius has continued unabated; experts study and translate his writings, analyse his paintings using scientific techniques, argue over attributions and search for works which have been recorded but never found. Liana Bortolon, writing in 1967, said: "Because of the multiplicity of interests that spurred him to pursue every field of knowledge...Leonardo can be considered, quite rightly, to have been the universal genius par excellence, and with all the disquieting overtones inherent in that term. Man is as uncomfortable today, faced with a genius, as he was in the 16th century. Five centuries have passed, yet we still view Leonardo with awe." Twenty-first-century author Walter Isaacson based much of his biography of Leonardo on thousands of notebook entries, studying the personal notes, sketches, budget notations, and musings of the man whom he considers the greatest of innovators. Isaacson was surprised to discover a "fun, joyous" side of Leonardo in addition to his limitless curiosity and creative genius. On the 500th anniversary of Leonardo's death, the Louvre in Paris arranged for the largest ever single exhibit of his work, called "Leonardo", between November 2019 and February 2020. The exhibit includes over 100 paintings, drawings and notebooks. Eleven of the paintings that Leonardo completed in his lifetime were included. Five of these are owned by the Louvre, but the "Mona Lisa" was not included because it is in such great demand among general visitors to the Louvre; it remains on display in its gallery. "Vitruvian Man", however, is on display following a legal battle with its owner, the Gallerie dell'Accademia in Venice. "Salvator Mundi" was also not included because its Saudi owner did not agree to lease the work. Much of the Collegiate Church of Saint Florentin at the Château d'Amboise, where Leonardo was buried, was damaged during the French Revolution, leading to the church's demolition in 1802. Some of the graves were destroyed in the process, scattering the bones interred there and thereby leaving the whereabouts of Leonardo's remains subject to dispute. While excavating the site in 1863, fine-arts inspector general Arsène Houssaye found a partially complete skeleton with a bronze ring on one finger, some white hair, and stone fragments bearing the inscriptions "EO," "AR," "DUS," and "VINC"—interpreted as forming "Leonardus Vinci". A silver shield found near the bones depicts a beardless Francis I, corresponding to the king's appearance during Leonardo's lifetime, and the skull's inclusion of eight teeth corresponds to someone of approximately the appropriate age. The unusually large skull led Houssaye to believe that he had located Leonardo's remains, but he thought the skeleton seemed too short. Other art historians say that the tall skeleton may well be Leonardo's. The remains, except for the ring and a lock of hair which Houssaye kept, were brought to Paris in a lead box, where the skull was allegedly presented to Napoleon III, before being returned to the Château d'Amboise and in the Chapel of Saint Hubert in 1874. A new memorial tombstone was added by sculptor Francesco La Monaca in the 1930s. Reflecting doubts about the attribution, a plaque above the tomb states that the remains are only presumed to be those of Leonardo. 
It has since been theorized that the folding of the skeleton's right arm over the head may correspond to the paralysis of Leonardo's right hand. In 2016, it was announced that DNA tests would be conducted to determine whether the attribution is correct. The DNA of the remains will be compared to that of samples collected from Leonardo's work and his half-brother Domenico's descendants; it may also be sequenced. The lock of hair and ring, now in a private US collection, were displayed in Vinci beginning on 2 May 2019, the 500th anniversary of the artist's death. "Salvator Mundi", a painting by Leonardo depicting Jesus holding an orb, sold for a world record US$450.3 million at a Christie's auction in New York, 15 November 2017. The highest known sale price for any artwork was previously US$300 million, for Willem de Kooning's "Interchange", which was sold privately in September 2015. The highest price previously paid for a work of art at auction was for Pablo Picasso's "Les Femmes d'Alger", which sold for US$179.4 million in May 2015 at Christie's New York.
https://en.wikipedia.org/wiki?curid=18079
Lacrosse Lacrosse is a team sport played with a lacrosse stick and a lacrosse ball. It is the oldest organized sport in North America, with its origins in a tribal game played by the indigenous peoples of the Eastern Woodlands and by various other indigenous peoples of North America. The game was extensively modified reducing the violence by European colonizers to create its current collegiate and professional form. The modern sport is governed by World Lacrosse and is the only international sport organization to recognize First Nations bands and Native American tribes as sovereign nations. The organization hosts the World Lacrosse Championship for men, the Women's Lacrosse World Cup, the World Indoor Lacrosse Championship for box lacrosse, and the Under-19 World Lacrosse Championships for both men and women. Each is held every four years. Lacrosse at the Summer Olympics has been contested at two editions of the Summer Olympic Games, 1904 and 1908. It was also held as a demonstration event at the 1928, 1932, and 1948 Summer Olympics. Players use the head of the lacrosse stick to carry, pass, catch, and shoot the ball into the goal. The sport has four versions that have different sticks, fields, rules and equipment: field lacrosse, women's lacrosse, box lacrosse and intercrosse. The men's games, field lacrosse (outdoor) and box lacrosse (indoor), are contact sports and all players wear protective gear: helmet, gloves, shoulder pads, and elbow pads. The women's game is played outdoors and does not allow body contact but does allow stick to stick contact. The only protective gear required for women players is eyegear, while goalies wear helmets and protective pads. Intercrosse is a mixed-gender non-contact sport played indoors that uses an all-plastic stick and a softer ball. Lacrosse is based on games played by various Native American communities as early as 1100 AD. By the 17th century, a version of lacrosse was well-established and was documented by Jesuit missionary priests in the territory of present-day Canada. In the traditional aboriginal Canadian version, each team consisted of about 100 to 1,000 men on a field several miles (several kilometers) long. These games lasted from sunup to sundown for two to three days straight and were played as part of ceremonial ritual, a kind of symbolic warfare, or to give thanks to the Creator or Master. Lacrosse played a significant role in the community and religious life of tribes across the continent for many years. Early lacrosse was characterized by deep spiritual involvement, befitting the spirit of combat in which it was undertaken. Those who took part did so in the role of warriors, with the goal of bringing glory and honor to themselves and their tribes. The game was said to be played "for the Creator" or was referred to as "The Creator's Game." The French Jesuit missionary saw Huron tribesmen play the game during 1637 in present-day Ontario. He called it , "the stick" in French. The name seems to be originated from the French term for field hockey, . James Smith described in some detail a game being played in 1757 by Mohawk people "wherein now they used a wooden ball, about in diameter, and the instrument they moved it with was a strong staff about long, with a hoop net on the end of it, large enough to contain the ball." Anglophones from Montreal noticed the game being played by Mohawk people and started playing themselves in the 1830s. In 1856, William George Beers, a Canadian dentist, founded the Montreal Lacrosse Club. 
In 1860, Beers codified the game, shortening the length of each game and reducing the number of players to 12 per team. The first game played under Beers's rules was at Upper Canada College in 1867; they lost to the Toronto Cricket Club by a score of 3–1. The new sport proved to be very popular and spread across the English-speaking world; by 1900 there were dozens of men's clubs in Canada, the United States, England, Australia, and New Zealand. The women's game was introduced by Louisa Lumsden in Scotland in 1890. The first women's club in the United States was started by Rosabelle Sinclair at Bryn Mawr School in 1926. In the United States, lacrosse during the late 1800s and first half of the 1900s was primarily a regional sport centered around the Mid-Atlantic states, especially New York and Maryland. However, in the last half of the 20th century, the sport spread outside this region, and can be currently found in most of the United States. According to a survey conducted by US Lacrosse in 2016, there are over 825,000 lacrosse participants nationwide and lacrosse is the fastest-growing team sport among NFHS member schools. Field lacrosse is the men's outdoor version of the sport. There are ten players on each team: three attackmen, three midfielders, three defensemen, and one goalie. Each player carries a lacrosse stick. A short stick measures between long and is used by attackmen and midfielders. A maximum of four players on the field per team may carry a long stick which is between long and is used by the three defensemen and sometimes one defensive midfielder. The goalie uses a stick with a head as wide as that can be between long. The field of play is . The goals are and are apart. Each goal sits inside a circular "crease", measuring in diameter. The goalie has special privileges within the crease to avoid opponents' stick checks. Offensive players or their sticks may not enter into the crease at any time. The mid-field line separates the field into an offensive and defensive zone for each team. Each team must keep four players in its defensive zone and three players in its offensive zone at all times. It does not matter which positional players satisfy the requirement, although usually the three attackmen stay in the offensive zone, the three defensemen and the goalie stay in the defensive zone, and the three middies play in both zones. A team that violates this rule is offsides and either loses possession of the ball if they have it or incurs a technical foul if they do not. The regulation playing time of a game is 60 minutes, divided into four periods of 15 minutes each. Play is started at the beginning of each quarter and after each goal with a face-off. During a face-off, two players lay their sticks on the ground parallel to the mid-line, the two heads of their sticks on opposite sides of the ball. At the whistle, the face-off-men scrap for the ball, often by "clamping" it under their stick and flicking it out to their teammates. When one of the teams has possession of the ball, they bring it into their offensive zone and try to score a goal. Due to the offsides rule, settled play involves six offensive players versus six defensive players and a goalie. If the ball goes out of bounds, possession is awarded against the team that touched it last. The exception is when the ball is shot towards the goal. Missed shots that go out of bounds are awarded to the team that has the player who is the closest to the ball when and where the ball goes out. 
During play, teams may substitute players in and out if they leave and enter the field through the substitution area, sometimes referred to as "on the fly". After penalties and goals, players may freely substitute and do not have to go through the substitution area. Penalties are either technical or personal fouls. Personal fouls such as cross-checking, illegal body check or slashing, are about player safety. Cross-checking is when a player checks another player with their hands further than shoulder-width apart on the shaft. A slash is when a player swings his/her stick at another player uncontrollably, usually to the discretion of the referee. Even if he/she does not hit the player in possession, a flag can still be thrown. These fouls draw 1-minute or longer penalties; the offending player must leave the field and stay in the substitution area for the length of the penalty. Penalties are either releasable or non-releasable; releasable means that if a goal is scored by either team during the time that the penalty is served, the player serving the penalty can re-enter the play and both teams will once again have an equal number of players. Non-releasable means that the player must serve the entire time of the penalty, regardless of any goals scored. His team plays with nine players for the duration. Because of the offsides rule, this means the opponent plays with six attackers versus five defenders plus the goalie. Technical fouls, such as offsides, pushing, and holding, result in either a turnover or a 30-second penalty, depending on which team has the ball. The team that has taken the penalty is said to be playing man down, while the other team is man up. Teams will use various lacrosse strategies to attack and defend while a player is being penalized. Box lacrosse is played by teams of five runners plus a goalie on a hockey rink where the ice has been removed or covered by artificial turf, or in an indoor soccer field. The enclosed playing area is called a box, in contrast to the open playing field of the traditional game. This version of the game was introduced in Canada in the 1930s to promote business for hockey arenas outside of the ice hockey season. Within several years it had nearly supplanted field lacrosse in Canada. The goals in box lacrosse are smaller than field lacrosse, traditionally wide and tall. Also, the goaltender wears much more protective padding, including a massive chest protector and armguard combination known as "uppers", large shin guards known as leg pads (both of which must follow strict measurement guidelines), and ice hockey-style goalie masks. The style of the game is quick, accelerated by the close confines of the floor and a shot clock. The shot clock requires the attacking team to take a shot on goal within 30 seconds of gaining possession of the ball. Box lacrosse is also a much more physical game. Since cross checking is legal in box lacrosse, players wear rib pads and the shoulder and elbow pads are bigger and stronger than what field lacrosse players wear. Box lacrosse players wear a hockey helmet with a box lacrosse cage. There is no offsides in box lacrosse, the players substitute freely from their bench areas as in hockey. However, most players specialize in offense or defense, so usually all five runners substitute for teammates as their team transitions between offense and defense. For penalties, the offending player is sent to the penalty box and his team has to play without him, or man-down, for the length of the penalty. 
Most fouls are minor penalties and last for two minutes, major penalties for serious offenses last five minutes. What separates box lacrosse (and ice hockey) from other sports is that at the top levels of professional and junior lacrosse, participating in a fight does not automatically cause an ejection, but a five-minute major penalty is given. Box lacrosse is played at the highest level in the National Lacrosse League and by the Senior A divisions of the Canadian Lacrosse Association. The National Lacrosse League (NLL) employs some minor rule changes from the Canadian Lacrosse Association (CLA) rules. Notably, the goals are wide instead of and the games are played during the winter. The NLL games consist of four fifteen-minute quarters compared with three periods of twenty minutes each in CLA games. NLL players may only use sticks with hollow shafts, while CLA permits solid wooden sticks. The rules of women's lacrosse differ significantly from men's lacrosse, most notably by equipment and the degree of allowable physical contact. Women's lacrosse rules also differ significantly between the US and all other countries, who play by the Federation of International Lacrosse (FIL) rules. Women's lacrosse does not allow physical contact, the only protective equipment worn is a mouth guard and eye-guard. In the early part of the 21st century, there have been discussions of requiring headgear to prevent concussions. In 2008, Florida was the first state to mandate headgear in women's lacrosse. Stick checking is permitted in the women's game, but only in certain levels of play and within strict rules. Women's lacrosse also does not allow players to have a pocket, or loose net, on the lacrosse stick. Women start the game with a "draw" instead of a face-off. The two players stand up and the ball is placed between their stick heads while their sticks are horizontal at waist-height. At the whistle, the players lift their sticks into the air, trying to control where the ball goes. The first modern women's lacrosse game was held at St Leonards School in Scotland in 1890. It was introduced by the school's headmistress Louisa Lumsden after a visit to Quebec, where she saw it played. The first women's lacrosse team in the United States was established at Bryn Mawr School in Baltimore, Maryland in 1926. Both the number of players and the lines on the field differ from men's lacrosse. There are 12 players in women's lacrosse and players must abide by certain boundaries that do not exist in men's play. The three specific boundaries are the "fan" in front of the goal ( internationally), the ( internationally) half circle that surrounds the 8-meter fan, and the draw circle in the center of the field, which is used for draws to start quarters and after goals. The goal circle is also positioned slightly closer to the end line in women's lacrosse compared to men's. In women's lacrosse on either the offensive or defensive end, the players besides the goaltender are not able to step inside the goal circle; this becomes a "goal-circle violation". However, at the women's collegiate level, a new rule has been established that allows defenders to pass through the goal circle. The 8-meter fan that is in front of the goal circle has a few restrictions in it. Defenders cannot stand inside the 8-meter fan longer than 3 seconds without being a stick-length away from the offensive player they are guarding. This is very similar to the three-second rule in basketball. 
A three seconds violation results in a player from the other team taking a free shot against the goalie. If you are an attacker trying to shoot the ball into the goal, you are not supposed to take a shot while a defender is in "shooting space." To make sure that you, the defender, are being safe, you want to lead with your lacrosse stick and once you are a sticks-length away, you can be in front of her. Intercrosse, or soft stick lacrosse, is a non-contact form of lacrosse with a standardized set of rules using modified lacrosse equipment. An intercrosse stick is different from a normal lacrosse stick, the head is made completely of plastic instead of leather or nylon pockets in traditional lacrosse sticks. The ball is larger, softer and hollow, unlike a lacrosse ball, which is solid rubber. Intercrosse is a competitive adult sport is popular in Quebec, Canada, as well as in many European countries, particularly in the Czech Republic. Generally, teams consist of five players per side, and the field size is wide and long. Goals for adults are the same size as box lacrosse, in height and width. The international governing body, the Fédération Internationale d'Inter-Crosse, hosts a World Championship bi-annually. Soft stick lacrosse is a popular way to introduce youth to the sport. It can be played outdoors or indoors and has a developed curriculum for physical education classes. Lacrosse has historically been played for the most part in Canada and the United States, with small but dedicated lacrosse communities in the United Kingdom and Australia. Recently, however, lacrosse has begun to flourish at the international level, with teams being established around the world, particularly in Europe and East Asia. In August 2008, the men's international governing body, the International Lacrosse Federation, merged with the women's, the International Federation of Women's Lacrosse Associations, to form the Federation of International Lacrosse (FIL). The FIL changed its name to World Lacrosse in May 2019. There are currently 62 member nations of World Lacrosse. World Lacrosse sponsors five world championship tournaments: the World Lacrosse Championship for men's field, the Women's Lacrosse World Cup for women's, the World Indoor Lacrosse Championship for box lacrosse, and the Under-19 World Lacrosse Championships for men and women. Each is held every four years. The World Lacrosse Championship (WLC) began in 1968 as a four-team invitational tournament sponsored by the International Lacrosse Federation. Until 1990, only the United States, Canada, England, and Australia had entered. With the expansion of the game internationally, the 2014 World Lacrosse Championship was contested by 38 countries. The WLC has been dominated by the United States. Team USA has won 9 of the 12 titles, with Canada winning the other three. The Women's Lacrosse World Cup (WLWC) began in 1982. The United States has won 8 of the 10 titles, with Australia winning the other two. Canada and England have always finished in the top five. The 2017 tournament was held in England and featured 25 countries. The first World Indoor Lacrosse Championship (WILC) was held in 2003 and contested by six nations at four sites in Ontario. Canada won the championship by beating the Iroquois Nationals 21–4 in the final. The 2007 championship hosted by the Onondaga Nation included 13 teams. Canada has dominated the competition, winning all four gold medals and never losing a game. 
The Iroquois Nationals are the men's national team representing the Six Nations of the Iroquois Confederacy in international field lacrosse competition. The team was admitted to the FIL in 1987. It is the only First Nations team sanctioned for international competition in any sport. The Nationals placed fourth in the 1998, 2002 and 2006 World Lacrosse Championships and third in 2014. The indoor team won the silver medal in all four World Indoor Lacrosse Championships. In 2008, the Iroquois women's team was admitted to the FIL as the Haudenosaunee Nationals. They placed 7th at the 2013 Women's Lacrosse World Cup. Field lacrosse was a medal sport in the 1904 and the 1908 Summer Olympics. In 1904, three teams competed in the games held in St. Louis. Two Canadian teams, the Winnipeg Shamrocks and a team of Mohawk people from the Iroquois Confederacy, plus the local St. Louis Amateur Athletic Association team representing the United States participated. The Winnipeg Shamrocks captured the gold medal. The 1908 games held in London, England, featured only two teams, representing Canada and Great Britain. The Canadians again won the gold medal in a single championship match by a score of 14–10. In the 1928, 1932, and the 1948 Summer Olympics, lacrosse was a demonstration sport. The 1928 Olympics in Amsterdam featured three teams: the United States, Canada, and Great Britain. The 1932 games in Los Angeles featured a three-game exhibition between a Canadian all-star team and the United States. The United States was represented by Johns Hopkins in both the 1928 and 1932 Olympics. The 1948 games featured an exhibition by an "All-England" team organized by the English Lacrosse Union and the collegiate lacrosse team from Rensselaer Polytechnic Institute representing the United States. This exhibition match ended in a 5–5 tie. Efforts were made to include lacrosse as an exhibition sport at the 1996 Summer Olympics in Atlanta, Georgia and the 2000 Summer Olympics in Sydney, Australia, but they were not successful. An obstacle for lacrosse to return to the Olympics is insufficient international participation. To be considered for the Olympics, a sport must be played on four continents and by at least 75 countries. Lacrosse is played on all six continents, but as of August 2019 when Ghana joined, there are only 63 countries playing the sport. The European Lacrosse Federation (ELF) was established in 1995 and held the first European Lacrosse Championships that year. Originally an annual event, it is now held every four years, in between FIL's men's and women's championships. In 2004, 12 men's and 6 women's teams played in the tournament, making it the largest international lacrosse event of the year. The last men's tournament was in 2016, when 24 countries participated. England won its ninth gold medal out of the ten tournaments played. 2015 was the last women's tournament, when 17 teams participated in the Czech Republic. England won its sixth gold medal, with Wales earning silver and Scotland bronze. These three countries from Great Britain have dominated the women's championships, earning all but three medals since the tournament began in 1996. There are currently 29 members of the ELF, they make up the majority of nations in the FIL. The Asia Pacific Lacrosse Union was founded in 2004 by Australia, Hong Kong, South Korea and Japan. It currently has 12 members and holds the Asia Pacific Championship for both men's and women's teams every two years. 
Lacrosse was played in the World Games for the first time at the 2017 World Games held in Poland. Only women's teams took part in the competition. The United States won the gold medal defeating Canada in the finals. Australia won the bronze medal match. The Haudenosaunee Nationals women's lacrosse team could not participate. Collegiate lacrosse in the United States is played at the NCAA, NAIA and club levels. There are currently 71 NCAA Division I men's lacrosse teams, 93 Division II teams, and 236 Division III teams. Thirty-two schools participate at the NAIA level. 184 men's club teams compete in the Men's Collegiate Lacrosse Association, including most universities and colleges outside the northeastern United States. The National College Lacrosse League and Great Lakes Lacrosse League are two other lower-division club leagues. In Canada, 14 teams from Ontario and Quebec play field lacrosse in the fall in the Canadian University Field Lacrosse Association. The first U. S. intercollegiate men's lacrosse game was played on November 22, 1877 between New York University and Manhattan College. An organizing body for the sport, the U. S. National Lacrosse Association, was founded in 1879 and the first intercollegiate lacrosse tournament was held in 1881, with Harvard beating Princeton 3–0 in the championship game. Annual post-season championships were awarded by a variety of early lacrosse associations through the 1930s. From 1936 to 1972, the United States Intercollegiate Lacrosse Association awarded the Wingate Memorial Trophy to the best college lacrosse team each year. The NCAA began sponsoring a men's lacrosse championship in 1971, when Cornell took the first title over Maryland, 12–6. Syracuse has 10 Division I titles, Johns Hopkins 9, and Princeton 6. The NCAA national championship weekend tournament draws over 80,000 fans. There are currently 112 Division I women's lacrosse teams, 109 Division II teams, and 282 Division III teams. There are 36 NAIA women's lacrosse teams. The NCAA started sponsoring a women's lacrosse championship in 1982. Maryland has traditionally dominated women's intercollegiate play, producing many head coaches and U.S. national team players. The Terrapins won seven consecutive NCAA championships from 1995 through 2001. Princeton's women's teams have made it to the final game seven times since 1993 and have won three NCAA titles, in 1993, 2002, and 2003. In recent years, Northwestern has become a force, winning the national championship from 2005 through 2009. Maryland ended Northwestern's streak by defeating the Wildcats in the 2010 final, however, Northwestern won the next two titles in 2011 and 2012. Maryland again claimed the national championship in 2014, 2015, and 2017. The Women's Collegiate Lacrosse Associates (WCLA) is a collection of over 260 college club teams that are organized by US Lacrosse. Teams are organized into two divisions and various leagues. Major League Lacrosse (MLL) is a semi-professional field lacrosse league started in 2001 with six teams in the Northeastern United States. The league currently has nine teams in the Eastern United States and Denver playing a 14-game season from April to August. MLL rules are based on NCAA men's rules with several exceptions, such as a 16-yard 2-point line and a 60-second shot clock. MLL venues range from small stadiums with under 10,000 capacity to an NFL stadium in Denver that seats 76,000. 
Overall league average attendance is around 4,000 per game, but Denver has averaged around 10,000 per game since its founding in 2006. The rookie salary is $7,000 per season and most players make between $10,000 and $20,000 per season. Therefore, players have other jobs, often non-lacrosse related, and travel to games on the weekends. The Chesapeake Bayhawks, who have played in the Annapolis–Baltimore–Washington, DC area since 2001, are the most successful franchise with five championships. The National Lacrosse League (NLL) is a men's semi-professional box lacrosse league in North America. The NLL currently has nine teams, five in the United States and four in Canada. The 18-game regular season runs from December to April; games are always on the weekends. The champion is awarded the National Lacrosse League Cup in early June. Games are played in ice rinks with artificial turf covering the ice. Venues range from NHL arenas seating 19,000 to smaller arenas with under 10,000 capacity. In 2017, average attendance ranged from 3,200 per game in Vancouver to over 15,000 in Buffalo. Overall, the league averaged 9,500 people per game. With an average salary around $20,000 per season, players have regular jobs, mostly non-lacrosse related, and live in different cities, flying into town for games. Canadians and Native Americans make up over 90% of the players. The NLL started in 1987 as the Eagle Pro Box Lacrosse League. Teams in Philadelphia, New Jersey, Baltimore and Washington, DC, played a 6-game season. The league operated as the Major Indoor Lacrosse League from 1989 to 1997, when there were six teams playing a 10-game schedule. The current NLL name began in the 1998 season, which included the first Canadian team. The most successful franchises have been the Toronto Rock and the now-defunct Philadelphia Wings, each has won six championships. In October 2018, former MLL player Paul Rabil branched away from the MLL and created the Premier Lacrosse League. The PLL focuses on being a traveling lacrosse league that will bring the best players in the world to different cities in the United States. The United Women's Lacrosse League (UWLX), a four-team women's lacrosse league, was launched in 2016. The teams are the Baltimore Ride, Boston Storm, Long Island Sound and Philadelphia Force. Long Island won the first two championships. The Women's Professional Lacrosse League is a professional women's lacrosse league with 5 teams that started in 2018. The lacrosse stick has two parts, the head and the shaft. There are three parts to the head: the scoop, sidewall, and pocket. The scoop is the top of the stick that affects picking up ground ball as well as passing and shooting. The sidewall is the side of the head that affects the depth of the head and the stiffness. The pocket is the leather or nylon mesh attached to the sidewall and scoop. A wider pocket allows an easier time catching balls, but will also cause less ball control. A narrower pocket makes catching harder, but allows more ball retention and accuracy. Shafts are usually made of hollow metal. They are octagonal, instead of round, in order to provide a better grip. Most are made of aluminum, titanium, scandium, or alloys, but some shafts are made from other materials, including wood, plastic, carbon fiber, or fiberglass. Stick length, both shaft and head together, is governed by NCAA regulations, which require that men's sticks be from long for offensive players, long for defensemen, and long for goalies. 
Women's sticks must be an overall length of . The head must be seven to nine inches wide and the top of the ball must remain above the side walls when dropped in the pocket. The goalkeeper's stick must be long. The head of the goalie's stick can up to wide and the pocket may be mesh. The ball is made of solid rubber. It is typically white for men's lacrosse, or yellow for women's Lacrosse; but is also produced in a wide variety of colors, such as yellow, orange or lime green according to the Men's Lacrosse Rules and Interpretations. In the college level the Lacrosse ball is orange. Men's field lacrosse protective equipment contains a pair of gloves, elbow pads, shoulder pads, helmet, mouthguard, and cleats. Pads differ in size and protection from player to player based on position, ability, comfort and preference. For example, many attack players wear larger and more protective elbow pads to protect themselves from checks thrown at them while defenders typically wear smaller and less protective pads due to their smaller possibility of being checked and goalies usually wear no elbow pads due to the very limited opportunities of being checked. A goalkeeper must also wear a large protective chest pad to cover their stomach and chest and a plastic neck guard that connects to the chin of their helmet to protect them from shots hitting their windpipe. In addition, male goalkeepers are required to wear a protective cup. Men's box players wear more protective gear than field players due to the increased physical contact and more permissive checking rules. Cross-checking in the back is allowed by the rules. Runners wear larger and heavier elbow pads and stronger shoulder pads that extend down the back of the player. Most players wear rib pads as well. Box goalies wear equipment very similar to ice hockey goalies, the leg blockers are somewhat smaller, although the shoulder pads are bigger than ice hockey pads. Women's field players are not required to wear protective equipment besides eyegear and a mouthguard. Eyegear is a metal cage covering the eyes attached with a strap around the back of the head. In recent years, there has been discussion about allowing or requiring padded headgear to protect against concussions. Women goalies wear a helmet, gloves, and chest protector.
https://en.wikipedia.org/wiki?curid=18080
Liverpool Liverpool is a city and metropolitan borough in Merseyside, England. The population in 2019 was approximately . Liverpool is the ninth-largest English district by population, and the largest in Merseyside and the surrounding region. It lies within the UK's sixth-most populous urban area, and its metropolitan area is the fifth-largest in the UK with a population of 2.24 million. Liverpool is on the eastern side of the Mersey Estuary and historically lay within the ancient hundred of West Derby in North West England's county of Lancashire. It became a borough in 1207 and a city in 1880. In 1889, it became a county borough independent of Lancashire. Its growth as a major port was paralleled by the expansion of the city throughout the Industrial Revolution. Along with general cargo, freight, and raw materials such as coal and cotton, merchants were involved in the slave trade. In the 19th century, Liverpool was a major port of departure for English and Irish emigrants to North America. It was home to both the Cunard and White Star Line, and was the port of registry of the ocean liners , , , and . Liverpool is ranked at No. 6 on the list of the most visited UK cities. It is noted for its culture, architecture, and transport links. The city is closely associated with the arts, especially music; the popularity of the Beatles, widely regarded as the most influential music group in history, contributed to the city's status as a tourist destination. Since then, Liverpool has continued to produce many notable musical acts and record from Liverpool have produced 56 No. 1 hit singles, more than any other city in the world. Liverpool also has a long-standing reputation as the origin of various , , , , , , and . The city has the second-highest number of art galleries, national museums, listed buildings, and listed parks in the UK; only the capital, London, has more. The Liverpool Maritime Mercantile City includes the Pier Head, Albert Dock, and William Brown Street. In sports, the city is best known for being the home of Premier League football clubs Liverpool and Everton, with matches between the two being known as the Merseyside derby. The annual Grand National horse race takes place at Aintree Racecourse. Several areas of the city centre were granted World Heritage Site status by UNESCO in 2004, and the city's collection of parks and open spaces has been described as the "most important in the country" by England's Register of Historic Parks and Gardens of Special Historic Interest. Its status as a port city attracted a diverse population from a wide range of cultures, primarily Ireland, Norway, and Wales. It is also home to the oldest black community in the UK and the oldest Chinese community in Europe. Natives of Liverpool (and occasionally longtime residents) are formally referred to as "Liverpudlians" but are more often called "Scousers", a reference to the form of stew made popular by sailors in the city, which also became the most common name for the local accent and dialect. The city celebrated its 800th anniversary in 2007 and was named the 2008 European Capital of Culture, which it shared with the Norwegian city of Stavanger. The name comes from the Old English "lifer", meaning thick or muddy water, and "pōl", meaning a pool or creek, and is first recorded around 1190 as "Liuerpul". According to the "Cambridge Dictionary of English Place-Names", "The original reference was to a pool or tidal creek now filled up into which two streams drained". 
The place appearing as "Leyrpole", in a legal record of 1418, may also refer to Liverpool. Other origins of the name have been suggested, including "elverpool", a reference to the large number of eels in the Mersey while another such suggestion is derivation from Welsh "llyvr pwl", apparently meaning "expanse or confluence at the pool". The adjective "Liverpudlian" is first recorded in 1833. King John's letters patent of 1207 announced the foundation of the borough of Liverpool. By the middle of the 16th century, the population was still around 500. The original street plan of Liverpool is said to have been designed by King John near the same time it was granted a royal charter, making it a borough. The original seven streets were laid out in an H shape: Bank Street (now Water Street), Castle Street, Chapel Street, Dale Street, Juggler Street (now High Street), Moor Street (now Tithebarn Street) and Whiteacre Street (now Old Hall Street). In the 17th century there was slow progress in trade and population growth. Battles for control of the town were waged during the English Civil War, including an eighteen-day siege in 1644. In 1699, the same year as its first recorded slave ship, "Liverpool Merchant", set sail for Africa, Liverpool was made a parish by Act of Parliament, although arguably the legislation of 1695 that reformed the Liverpool council was of more significance to its subsequent development. Since Roman times, the nearby city of Chester on the River Dee had been the region's principal port on the Irish Sea. However, as the Dee began to silt up, maritime trade from Chester became increasingly difficult and shifted towards Liverpool on the neighbouring River Mersey. As trade from the West Indies, including sugar, surpassed that of Ireland and Europe, and as the River Dee continued to silt up, Liverpool began to grow with increasing rapidity. The first commercial wet dock was built in Liverpool in 1715. Substantial profits from the slave trade and tobacco helped the town to prosper and rapidly grow, although several prominent local men, including William Rathbone, William Roscoe and Edward Rushton, were at the forefront of the local abolitionist movement. By the start of the 19th century, a large volume of trade was passing through Liverpool, and the construction of major buildings reflected this wealth. In 1830, Liverpool and Manchester became the first cities to have an intercity rail link, through the Liverpool and Manchester Railway. The population continued to rise rapidly, especially during the 1840s when Irish migrants began arriving by the hundreds of thousands as a result of the Great Famine. In her poem "Liverpool" (1832), which celebrates the city's worldwide commerce, Letitia Elizabeth Landon refers specifically to the Macgregor Laird expedition to the Niger River, at that time in progress. Great Britain was a major market for cotton imported from the Deep South of the United States, which fed the textile industry in the country. Given the crucial place of both cotton and slavery in the city's economy, during the American Civil War Liverpool was, in the words of historian Sven Beckert, "the most pro-Confederate place in the world outside the Confederacy itself." For periods during the 19th century, the wealth of Liverpool exceeded that of London, and Liverpool's Custom House was the single largest contributor to the British Exchequer. Liverpool was the only British city ever to have its own Whitehall office. 
In the early 19th century, Liverpool played a major role in the Antarctic sealing industry, in recognition of which Liverpool Beach in the South Shetland Islands is named after the city. As early as 1851 the city was described as "the New York of Europe". During the late 19th and early 20th centuries, Liverpool was attracting immigrants from across Europe. This resulted in construction of a diverse array of religious buildings in the city for the new ethnic and religious groups, many of which are still in use today. The Deutsche Kirche Liverpool, Greek Orthodox Church of St Nicholas, Gustav Adolf Church and Princes Road Synagogue were all established in the 1800s to serve Liverpool's growing German, Greek, Nordic and Jewish communities, respectively. One of Liverpool's oldest surviving churches, St. Peter's Roman Catholic Church, served the Polish community in its final years as a place of worship. The postwar period after the Great War was marked by social unrest, as society grappled with the massive war losses of young men, as well as trying to integrate veterans into the economy. Union organising and strikes took place in numerous locations, including police strikes in London among the Metropolitan Police. Numerous colonial soldiers and sailors from Africa and India, who had served with the UK, settled in Liverpool and other port cities. In June 1919 they were subject to attack by whites in racial riots; residents in the port included Swedish immigrants, and both groups had to compete with native people from Liverpool for jobs and housing. In this period, race riots also took place in Cardiff, Newport and Barry, and there had been incidents in Glasgow, South Shields, London, Hull and Salford. Similarly, racial riots of whites against blacks took place across the United States in numerous industrial cities, so that a black leader termed the period of time Red Summer. In that first postwar year, there were also riots in Caribbean and South African cities. The Housing Act 1919 resulted in mass council housing being built across Liverpool during the 1920s and 1930s. Thousands of families were relocated from the inner-city to new suburban housing estates, based on the belief that this would improve their standard of living, though this is largely subjective. Numerous private homes were also built during this era. During the Great Depression of the early 1930s, unemployment peaked at around 30% in the city. Liverpool was the site of Britain's first provincial airport, operating from 1930. During the Second World War, the critical strategic importance of Liverpool was recognised by both Hitler and Churchill. The city was heavily bombed by the Germans, suffering a blitz second only to London's. The pivotal Battle of the Atlantic was planned, fought and won from Liverpool. The "Luftwaffe" made 80 air raids on Merseyside, killing 2,500 people and causing damage to almost half the homes in the metropolitan area. Significant rebuilding followed the war, including massive housing estates and the Seaforth Dock, the largest dock project in Britain. Much of the immediate reconstruction of the city centre has been deeply unpopular. It was as flawed as much subsequent town planning renewal in the 1950s and 1960s. The historic portions of the city that had survived German bombing suffered extensive destruction during urban renewal. Since 1952 Liverpool has been twinned with Cologne, Germany, a city which also suffered severe aerial bombing during the war. 
A significant West Indian black community has existed in the city since the first two decades of the 20th century. Like most British cities and industrialised towns, Liverpool became home to a significant number of Commonwealth immigrants, beginning after World War I with colonial soldiers and sailors who had served in the area. More immigrants arrived after World War II, mostly settling in older inner-city areas such as Toxteth, where housing was less expensive. The construction of suburban public housing expanded after the Second World War. Some of the older inner city areas were redeveloped for new homes. In the 1960s Liverpool was the centre of the "Merseybeat" sound, which became synonymous with the Beatles and fellow Liverpudlian rock bands. Influenced by American rhythm and blues and rock music, they also in turn strongly affected American music for years and were internationally popular. The Beatles became internationally known in the early 1960s and performed for years together; they were the most commercially successful and musically influential band in popular history. Their co-founder singer and composer John Lennon was killed in New York City in 1980, after the Beatles stopped performing together. Liverpool airport was renamed for him in 2002, the first British airport to be named in honour of an individual. Previously part of Lancashire, and a county borough from 1889, Liverpool in 1974 became a metropolitan borough within the newly created metropolitan county of Merseyside. From the mid-1970s onwards, Liverpool's docks and traditional manufacturing industries declined due to restructuring of shipping and heavy industry, causing massive losses of jobs. The advent of containerisation meant that the city's docks became largely obsolete, and dock workers were thrown out of jobs. By the early 1980s unemployment rates in Liverpool were among the highest in the UK, standing at 17% by January 1982. This was about half the level of unemployment that had affected the city during the Great Depression 50 years previously. In the later 20th century, Liverpool's economy began to recover. Since the mid-1990s the city has enjoyed growth rates higher than the national average. To celebrate the Golden Jubilee of Elizabeth II in 2002, the conservation charity Plantlife organised a competition to choose county flowers; the sea-holly was Liverpool's final choice. Capitalising on the popularity of 1960s rock groups, such as the Beatles, as well as the city's world-class art galleries, museums and landmarks, tourism has also become a significant factor in Liverpool's economy. In 2004, property developer Grosvenor started the Paradise Project, a £920 m development based on Paradise Street. This produced the most significant changes to Liverpool's city centre since the post-war reconstruction. Renamed 'Liverpool ONE,' the centre opened in May 2008. In 2007, the city celebrated the 800th anniversary of the founding of the borough of Liverpool, for which a number of events were planned. Liverpool was designated as a joint European Capital of Culture for 2008. The main celebrations, in September 2008, included erection of La Princesse, a large mechanical spider 20 metres high and weighing 37 tonnes, and represents the "eight legs" of Liverpool: honour, history, music, the Mersey, the ports, governance, sunshine and culture. La Princesse roamed the streets of the city during the festivities, and concluded by entering the Queensway Tunnel. 
Spearheaded by the multi-billion-pound Liverpool ONE development, regeneration has continued through to the start of the early 2010s. Some of the most significant redevelopment projects include new buildings in the Commercial District, the King's Dock, Mann Island, the Lime Street Gateway, the Baltic Triangle, the RopeWalks, and the Edge Lane Gateway. All projects could be eclipsed by the Liverpool Waters scheme, which if built will cost in the region of £5.5billion and be one of the largest megaprojects in the UK's history. Liverpool Waters is a mixed-use development planned to contain one of Europe's largest skyscraper clusters. The project received outline planning permission in 2012, despite fierce opposition from such groups as UNESCO, which claimed that it would adversely affect Liverpool's World Heritage status. In June 2014, Prime Minister David Cameron launched the International Festival for Business in Liverpool, the world's largest business event in 2014, and the largest in the UK since the Festival of Britain in 1951. Liverpool has been a centre of invention and innovation. Railways, transatlantic steamships, municipal trams, and electric trains were all pioneered in Liverpool as modes of mass transit. In 1829 and 1836, the first railway tunnels in the world were constructed under Liverpool (Wapping Tunnel). From 1950 to 1951, the world's first scheduled passenger helicopter service ran between Liverpool and Cardiff. The first School for the Blind, Mechanics' Institute, High School for Girls, council house, and Juvenile Court were all founded in Liverpool. Charities such as the RSPCA, NSPCC, Age Concern, Relate, and Citizen's Advice Bureau all evolved from work in the city. The first lifeboat station, public bath and wash-house, sanitary act, medical officer for health (William Henry Duncan), district nurse, slum clearance, purpose-built ambulance, X-ray medical diagnosis, school of tropical medicine (Liverpool School of Tropical Medicine), motorised municipal fire-engine, free school meal, cancer research centre, and zoonosis research centre all originated in Liverpool. The first British Nobel Prize was awarded in 1902 to Ronald Ross, professor at the School of Tropical Medicine, the first school of its kind in the world. Orthopaedic surgery was pioneered in Liverpool by Hugh Owen Thomas, and modern medical anaesthetics by Thomas Cecil Gray. The world's first integrated sewer system was constructed in Liverpool by James Newlands, appointed in 1847 as the UK's first borough engineer. Liverpool also founded the UK's first Underwriters' Association and the first Institute of Accountants. The Western world's first financial derivatives (cotton futures) were traded on the Liverpool Cotton Exchange in the late 1700s. In the arts, Liverpool was home to the first lending library (The Lyceum), athenaeum society (Liverpool Athenaeum), arts centre (Bluecoat Chambers), and public art conservation centre (National Conservation Centre). It is also home to the UK's oldest surviving classical orchestra (Royal Liverpool Philharmonic Orchestra) and repertory theatre (Liverpool Playhouse). In 1864, Peter Ellis built the world's first iron-framed, curtain-walled office building, Oriel Chambers, which was a prototype of the skyscraper. The UK's first purpose-built department store was Compton House, completed in 1867 for the retailer J.R. Jeffrey. It was the largest store in the world at the time. Between 1862 and 1867, Liverpool held an annual Grand Olympic Festival. 
Devised by John Hulley and Charles Melly, these games were the first to be wholly amateur in nature and international in outlook. The programme of the first modern Olympiad in Athens in 1896 was almost identical to that of the Liverpool Olympics. In 1865, Hulley co-founded the National Olympian Association in Liverpool, a forerunner of the British Olympic Association. Its articles of foundation provided the framework for the International Olympic Charter. Sir Alfred Lewis Jones, a shipowner, introduced bananas to the UK via Liverpool's docks in 1884. The Mersey Railway, opened in 1886, incorporated the world's first tunnel under a tidal estuary and the world's first deep-level underground stations (Liverpool James Street railway station). In 1889, borough engineer John Alexander Brodie invented the football goal net. He was also a pioneer in the use of pre-fabricated housing and oversaw the construction of the UK's first ring road (A5058) and intercity highway (East Lancashire Road), as well as the Queensway Tunnel linking Liverpool and Birkenhead. Described as "the eighth wonder of the world" at the time of its construction, the tunnel was the longest underwater tunnel in the world for 24 years. In 1897, the Lumière brothers filmed Liverpool, including what is believed to be the world's first tracking shot, taken from the Liverpool Overhead Railway, the world's first elevated electrified railway. The Overhead Railway was the first railway in the world to use electric multiple units, employ automatic signalling, and install an escalator. Liverpool inventor Frank Hornby was a visionary in toy development and manufacture, producing three of the most popular lines of toys in the 20th century: Meccano, Hornby Model Railways, and Dinky Toys. The British Interplanetary Society, founded in Liverpool in 1933 by Philip Ellaby Cleator, is the world's oldest existing organisation devoted to the promotion of spaceflight. Its journal, the "Journal of the British Interplanetary Society", is the longest-running astronautical publication in the world. In 1999, Liverpool was the first city outside of London to be awarded blue plaques by English Heritage in recognition of the "significant contribution made by its sons and daughters in all walks of life". Liverpool is governed as a unitary authority: when Merseyside County Council was disbanded, civic functions were returned to district (borough) level. However, several services, such as the police and the fire and rescue service, continue to be run at a county-wide level. The city also elects four members of Parliament (MPs) to the Westminster Parliament. The City of Liverpool is governed by the directly elected Mayor of Liverpool and Liverpool City Council, and is one of six metropolitan boroughs that combine to make up the Liverpool City Region. The mayor is elected by the citizens of Liverpool every four years and is responsible for the day-to-day running of the council. The council's 90 elected councillors, who represent local communities throughout the city, are responsible for scrutinising the mayor's decisions and for setting the budget and policy framework of the city. The mayor's responsibility is to be a powerful voice for the city both nationally and internationally, to lead, build investor confidence, and direct resources to economic priorities. The mayor also engages in direct dialogue with government ministers and the prime minister through his seat at the Cabinet of Mayors.
Discussions include pressing decision-makers in the government on local issues as well as building relationships with the other directly elected mayors in England and Wales. The mayor is Joe Anderson. The City of Liverpool effectively has two mayors. As well as the directly elected mayor, there is the ceremonial lord mayor (or civic mayor), who is elected by the full city council at its annual general meeting in May and serves a one-year term. The lord mayor acts as the "first citizen" of Liverpool and is responsible for promoting the city, supporting local charities and community groups, and representing the city at civic events. The Lord Mayor is Councillor Christine Banks. For local elections the city is split into 30 local council wards. During the most recent local elections, held in May 2011, the Labour Party consolidated its control of Liverpool City Council, following on from regaining power for the first time in 12 years during the previous elections in May 2010. The Labour Party gained 11 seats during the election, taking its total to 62 seats, compared with the 22 held by the Liberal Democrats. Of the remaining seats, the Liberal Party won three and the Green Party claimed two. The Conservative Party, one of the three major political parties in the UK, had no representation on Liverpool City Council. In February 2008, Liverpool City Council was reported to be the worst-performing council in the country, receiving just a one-star rating (classified as inadequate). The main cause of the poor rating was attributed to the council's poor handling of taxpayer money, including the accumulation of a £20m shortfall on Capital of Culture funding. While Liverpool was a municipal stronghold of Toryism through most of the 19th and early 20th centuries, support for the Conservative Party has recently been among the lowest in any part of Britain, particularly since the monetarist economic policies of prime minister Margaret Thatcher, after her 1979 general election victory, contributed to high unemployment in the city which did not begin to fall for many years. Liverpool is one of the Labour Party's key strongholds; however, the city has seen hard times under Labour governments as well, particularly in the Winter of Discontent (late 1978 and early 1979), when Liverpool suffered public sector strikes along with the rest of the United Kingdom, as well as the particularly humiliating misfortune of having grave-diggers go on strike, leaving the dead unburied. The City of Liverpool is one of the six constituent local government districts of the Liverpool City Region. Since 1 April 2014, some of the city's responsibilities have been pooled with neighbouring authorities within the metropolitan area and subsumed into the Liverpool City Region Combined Authority. The combined authority has effectively become the top-tier administrative body for the local governance of the city region, and the Mayor of Liverpool, along with the five other leaders from neighbouring local government districts, takes strategic decisions over economic development, transport, employment and skills, tourism, culture, housing and physical infrastructure. As of July 2015, negotiations are taking place between the UK national government and the combined authority over a possible devolution deal to confer greater powers on the region. Discussions include whether to introduce an elected 'Metro Mayor' to oversee the entire metropolitan area.
Liverpool has four parliamentary constituencies entirely within the city, through which MPs are elected to represent the city in Westminster: Liverpool Riverside, Liverpool Walton, Liverpool Wavertree and Liverpool West Derby. At the last general election, all four were won by Labour, represented by Kim Johnson, Dan Carden, Paula Barker and Ian Byrne respectively. Due to boundary changes prior to the 2010 election, the Liverpool Garston constituency was merged with most of Knowsley South to form the Garston and Halewood cross-boundary seat. At the most recent election, in 2019, this seat was won by Maria Eagle of the Labour Party. Liverpool has been described as having "the most splendid setting of any English city." At (53.4, −2.98), northwest of London and located on Liverpool Bay of the Irish Sea, the city of Liverpool is built across a ridge of sandstone hills rising to a height of around 230 feet (70 m) above sea level at Everton Hill, which represents the southern boundary of the West Lancashire Coastal Plain. The Mersey Estuary separates Liverpool from the Wirral Peninsula. The boundaries of Liverpool are adjacent to Bootle, Crosby and Maghull in south Sefton to the north, and Kirkby, Huyton, Prescot and Halewood in Knowsley to the east. Liverpool experiences a temperate maritime climate (Köppen: "Cfb"), like much of the British Isles, with relatively mild summers, cool winters and rainfall spread fairly evenly throughout the year. Rainfall and temperature records have been kept at Bidston since 1867, while records for atmospheric pressure go back as far as 1845. Bidston closed down in 2002, but the Met Office also has a weather station at Crosby. Since records began in 1867, temperatures have ranged from a record low on 21 December 2010 to a record high on 2 August 1990, although Liverpool Airport recorded a higher temperature on 19 July 2006. The lowest amount of sunshine on record was 16.5 hours in December 1927, whereas the most was 314.5 hours in July 2013. Tornado activity or funnel cloud formation is very rare in and around the Liverpool area, and tornadoes that do form are usually weak. Recent tornadoes or funnel clouds have been seen in 1998, 2014 and 2018. During the period 1981–2010, Crosby recorded an average of 32.8 days of air frost per year, which is low for the United Kingdom. Snow is fairly common during the winter, although heavy snow is rare. Snow generally falls between November and March but can occasionally fall earlier and later. In recent times, the earliest snowfall was on 1 October 2008 while the latest occurred on 15 May 2012; historically, however, the earliest snowfall occurred on 4 September 1974 and the latest on 2 June 1975. Rainfall, although light, is quite a common occurrence in Liverpool, with the wettest month on record being August 1956; the only other month to record a comparable total was September 1976. Droughts can occasionally become a problem, especially, but not exclusively, in the summer; this happened most recently in 2018. The longest run of days without any rainfall was 41 days, between 16 July and 25 August 1995. The driest year on record was 2010 and the wettest was 1872. Since Bidston closed down in 2002, new records have been set at Crosby. In 2010 Liverpool City Council and the Primary Care Trust commissioned The Mersey Forest to complete a Green Infrastructure Strategy for the city.
Liverpool is a core urban element of a green belt region that extends into the wider surrounding counties, which is in place to reduce urban sprawl, prevent the towns in the conurbation from further convergence, protect the identity of outlying communities, encourage brownfield reuse, and preserve nearby countryside. This is achieved by restricting inappropriate development within the designated areas and imposing stricter conditions on permitted building. Because the city is already highly built up, it contains only limited portions of protected green belt within greenfield areas throughout the borough: at Fazakerley; at Croxteth Hall and country park and Craven Wood; at Woodfields Park and nearby golf courses in Netherley; in small greenfield tracts east of the Speke area by the St Ambrose primary school; and in the small hamlet of Oglet and the surrounding area south of Liverpool Airport. The green belt was first drawn up in 1983 under Merseyside County Council. At the 2011 UK Census the recorded population of Liverpool was 466,415, a 6.1% increase on the figure of 439,473 recorded in the 2001 census. The population of the central Liverpool local authority peaked in the 1930s, with 846,101 recorded in the 1931 census, before suburbanisation and the establishment of new towns in the region. As with many British cities, including London and Manchester, the area covered by the Liverpool council had experienced negative population growth since the 1931 census. Much of the population loss was a result of large-scale resettlement programmes to nearby areas introduced in the aftermath of the Second World War, with satellite towns such as Kirkby, Skelmersdale and Runcorn seeing a corresponding rise in their populations (Kirkby being the fastest growing town in Britain during the 1960s). Liverpool's population is younger than that of England as a whole, with 42.5 per cent of its population under the age of 30, compared to an English average of 37.7 per cent. Some 66 per cent of the population was of working age. Liverpool is the largest local authority in Merseyside by population, GDP and area. Liverpool is typically grouped with the wider Merseyside area for the purpose of defining its metropolitan footprint, and there are several methodologies. Liverpool is defined as a standalone NUTS3 area by the ONS for statistical purposes, and makes up part of the NUTS2 area "Merseyside" along with East Merseyside (Knowsley, St Helens and Halton), Sefton and the Wirral. The population of this area was 1,513,306 based on 2014 estimates. The "Liverpool Urban Area" is a term used by the Office for National Statistics (ONS) to denote the urban area around the city to the east of the River Mersey. The contiguous built-up area extends beyond the area administered by Liverpool City Council into adjoining local authority areas, particularly parts of Sefton and Knowsley. As defined by ONS, the area extends as far east as Haydock and St. Helens. Unlike the metropolitan area, the Urban Area does not include the Wirral or its contiguous areas. The population of this area as of 2011 was 864,211. The "Liverpool City Region" is an economic partnership between local authorities in Merseyside under the umbrella of the Liverpool City Region Combined Authority as defined by the Mersey Partnership. The area covers Merseyside and the Borough of Halton and has an estimated population of between 1,500,000 and 2,000,000.
In 2006 ESPON (now the European Observation Network for Territorial Development and Cohesion) released a study defining a "Liverpool/Birkenhead Metropolitan area" as a functional urban area consisting of contiguous urban areas, a labour pool, and commuter "Travel To Work Areas". The analysis grouped the Merseyside metropolitan county with the borough of Halton, Wigan in Greater Manchester, and the city of Chester, as well as a number of towns in Lancashire and Cheshire, including Ormskirk and Warrington, estimating the polynuclear metropolitan area to have a population of 2,241,000 people. Liverpool and Manchester are sometimes considered as one large polynuclear metropolitan area, or megalopolis. According to data from the 2011 census, 84.8 per cent of Liverpool's population was White British, 1.4 per cent White Irish, 2.6 per cent White Other, 4.1 per cent Asian or Asian British (including 1.1 per cent British Indian and 1.7 per cent British Chinese), 2.6 per cent Black or Black British (including 1.8 per cent Black African) and 2.5 per cent mixed-race; 1.8 per cent of respondents were from other ethnic groups. Liverpool is home to Britain's oldest Black community, dating to at least the 1730s. Some Black Liverpudlians can trace their ancestors in the city back ten generations. Early Black settlers in the city included seamen, the children of traders sent to be educated, and freed slaves, since slaves entering the country after 1722 were deemed free men. Since the 20th century, Liverpool has also been noted for its large African-Caribbean, Ghanaian, and Somali communities, formed of more recent African-descended immigrants and their subsequent generations. The city is also home to the oldest Chinese community in Europe; the first residents of the city's Chinatown arrived as seamen in the 19th century. The traditional Chinese gateway erected in Liverpool's Chinatown is the largest gateway outside China. Liverpool also has a long-standing Filipino community. Lita Roza, a singer from Liverpool who was the first woman to achieve a UK number one hit, had Filipino ancestry. The city is also known for its large Irish population and its historically large Welsh population. In 1813, 10 per cent of Liverpool's population was Welsh, leading to the city becoming known as "the capital of North Wales". Following the start of the Great Irish Famine in the mid-19th century, up to two million Irish people travelled to Liverpool within one decade, with many subsequently departing for the United States. By 1851, more than 20 per cent of the population of Liverpool was Irish. At the 2001 Census, 1.17 per cent of the population were Welsh-born and 0.75 per cent were born in the Republic of Ireland, while 0.54 per cent were born in Northern Ireland, but many more Liverpudlians are of Welsh or Irish ancestry. Other contemporary ethnicities include Indian, Latin American, Malaysian, and Yemeni communities, which number several thousand each. The thousands of migrants and sailors passing through Liverpool resulted in a religious diversity that is still apparent today. This is reflected in the equally diverse collection of religious buildings, including two Christian cathedrals. Liverpool is known as England's 'most Catholic city', with a Catholic population much larger than in other parts of England.
The parish church of Liverpool is the Anglican Our Lady and St Nicholas, colloquially known as "the sailors' church", which has existed near the waterfront since 1257. It regularly plays host to Catholic masses. Other notable churches include the Greek Orthodox Church of St Nicholas (built in the Neo-Byzantine style) and the Gustav Adolf Church (the Swedish Seamen's Church, reminiscent of Nordic styles). Liverpool's wealth as a port city enabled the construction of two enormous cathedrals in the 20th century. The Anglican Cathedral, which was designed by Sir Giles Gilbert Scott and plays host to the annual Liverpool Shakespeare Festival, has one of the longest naves, largest organs and heaviest and highest peals of bells in the world. The Roman Catholic Metropolitan Cathedral, on Mount Pleasant next to Liverpool Science Park, was initially planned to be even larger. Of Sir Edwin Lutyens's original design, only the crypt was completed; the cathedral was eventually built to a simpler design by Sir Frederick Gibberd. While this is on a smaller scale than Lutyens' original design, it still incorporates the largest panel of stained glass in the world. The road running between the two cathedrals is called Hope Street, a coincidence which pleases believers. The cathedral is colloquially referred to as "Paddy's Wigwam" due to its shape. Liverpool contains several synagogues, of which the Grade I listed Moorish Revival Princes Road Synagogue is architecturally the most notable. Princes Road is widely considered to be the most magnificent of Britain's Moorish Revival synagogues and one of the finest buildings in Liverpool. Liverpool has a thriving Jewish community, with a further two Orthodox synagogues, one in the Allerton district of the city and a second in the Childwall district, where a significant Jewish community resides. A third Orthodox synagogue, a listed 1930s structure in the Greenbank Park area of L17, has recently closed. There is also a Lubavitch Chabad House and a Reform synagogue. Liverpool has had a Jewish community since the mid-18th century. The Jewish population of Liverpool is around 5,000. The Liverpool Talmudical College existed from 1914 until 1990, when its classes moved to the Childwall Synagogue. Liverpool also has a Hindu community, with a mandir on Edge Lane, Edge Hill; the Shri Radha Krishna Temple of the Hindu Cultural Organisation in Liverpool is located there. Liverpool also has the Guru Nanak Sikh Gurdwara in Wavertree and a Bahá'í Centre in the same area. The city had the earliest mosque in England, and possibly the UK, founded in 1887 by William Abdullah Quilliam, a lawyer who had converted to Islam and set up the Liverpool Muslim Institute in a terraced house on West Derby Road. The building was used as a house of worship until 1908, when it was sold to the City Council and converted into offices. Plans have been accepted to re-convert the building where the mosque once stood into a museum. There are three mosques in Liverpool: the largest and main one, the Al-Rahma mosque, in the Toxteth area of the city; a mosque recently opened in the Mossley Hill district; and a third, also recently opened in Toxteth, on Granby Street. Natives of the city of Liverpool are referred to as Liverpudlians, and colloquially as "Scousers", a reference to "scouse", a form of stew. The word "Scouse" has also become synonymous with the Liverpool accent and dialect.
Many people "self-identify" as Liverpudlians or Scousers without actually having been born or having lived within the city boundaries of Liverpool. The economy of Liverpool is one of the largest within the United Kingdom, sitting at the centre of one of the two core economies within the North West of England. In 2006, the city's GVA was £7,626 million, providing a per capita figure of £17,489, which was above the North West average. Liverpool's economy has seen strong growth since the mid-1990s, with its GVA increasing 71.8% between 1995 and 2006 and employment increasing 12% between 1998 and 2006. GDP per capita was estimated to stand at $32,121 in 2014, and total GDP at $65.8 billion. In common with much of the rest of the UK today, Liverpool's economy is dominated by service sector industries, both public and private. In 2007, over 60% of all employment in the city was in the public administration, education, health, banking, finance and insurance sectors. Over recent years there has also been significant growth in the knowledge economy of Liverpool, with the establishment of the Liverpool Knowledge Quarter in sectors such as media and life sciences. Liverpool's rich architectural base has also helped the city become the most filmed UK city outside London, doubling for Chicago, London, Moscow, New York, Paris and Rome. Tourism and leisure form another important component of Liverpool's economy. Liverpool is the sixth most visited UK city and one of the 100 most visited cities in the world by international tourists. In 2008, during the city's European Capital of Culture celebrations, overnight visitors brought £188m into the local economy, while tourism as a whole is worth approximately £1.3bn a year to Liverpool. The city's new cruise liner terminal, which is situated close to the Pier Head, also makes Liverpool one of the few places in the world where cruise ships are able to berth right in the centre of the city. Other recent developments in Liverpool, such as the Echo Arena and Liverpool One, have made Liverpool an important leisure centre, with the latter helping to lift Liverpool into the top five retail destinations in the UK. Historically, the economy of Liverpool was centred on the city's port and manufacturing base, although a smaller proportion of total employment is today derived from the port. Nonetheless the city remains one of the most important ports in the United Kingdom, handling over 32.2m tonnes of cargo in 2008. A new multimillion-pound expansion to the Port of Liverpool, Liverpool2, is scheduled to be operational from the end of 2015 and is projected to greatly increase the volume of cargo which Liverpool is able to handle. Liverpool is also home to the UK headquarters of many shipping lines, including the Japanese firm NYK and the Danish firm Maersk Line, whilst the shipping firm Atlantic Container Line has recently invested significant amounts in expanding its Liverpool operations, with a new headquarters currently under construction. Future plans to redevelop the city's northern dock system, in a project known as Liverpool Waters, could see £5.5bn invested in the city over the next 50 years, creating 17,000 new jobs. Car manufacturing also takes place in the city at the Jaguar Land Rover Halewood plant, where the Range Rover Evoque model is assembled. Liverpool's history means that there is a considerable variety of architectural styles found within the city, ranging from 16th century Tudor buildings to modern-day contemporary architecture.
The majority of buildings in the city date from the late-18th century onwards, the period during which the city grew into one of the foremost powers in the British Empire. There are over 2,500 listed buildings in Liverpool, of which 27 are Grade I listed and 85 are Grade II* listed. The city also has a greater number of public sculptures than any other location in the United Kingdom aside from Westminster, and more Georgian houses than the city of Bath. This richness of architecture has subsequently seen Liverpool described by English Heritage as England's finest Victorian city. The value of Liverpool's architecture and design was recognised in 2004, when several areas throughout the city were declared a UNESCO World Heritage Site. Known as the Liverpool Maritime Mercantile City, the sites were added in recognition of the city's role in the development of international trade and docking technology. As a major British port, the docks in Liverpool have historically been central to the city's development. Several major docking firsts have occurred in the city, including the construction of the world's first enclosed wet dock (the Old Dock) in 1715 and the first ever hydraulic lifting cranes. The best-known dock in Liverpool is the Albert Dock, which was constructed in 1846 and today comprises the largest single collection of Grade I listed buildings anywhere in Britain. Built under the guidance of Jesse Hartley, it was considered to be one of the most advanced docks anywhere in the world upon completion and is often credited with helping the city to become one of the most important ports in the world. The Albert Dock houses restaurants, bars, shops and two hotels, as well as the Merseyside Maritime Museum, International Slavery Museum, Tate Liverpool and The Beatles Story. North of the city centre is Stanley Dock, home to the Stanley Dock Tobacco Warehouse, which was, at the time of its construction in 1901, the world's largest building in terms of area and today stands as the world's largest brickwork building. One of the most famous locations in Liverpool is the Pier Head, renowned for the trio of buildings – the Royal Liver Building, the Cunard Building and the Port of Liverpool Building – which sit upon it. Collectively referred to as the "Three Graces", these buildings stand as a testament to the great wealth in the city during the late 19th and early 20th centuries. Built in a variety of architectural styles, they are recognised as being the symbol of Maritime Liverpool, and are regarded by many as contributing to one of the most impressive waterfronts in the world. In recent years, several areas along Liverpool's waterfront have undergone significant redevelopment. Amongst the notable recent developments are the Museum of Liverpool, the construction of the Liverpool Arena and BT Convention Centre on Kings Dock, Alexandra Tower and 1 Princes Dock on Prince's Dock, and Liverpool Marina around Coburg and Brunswick Docks. The Wheel of Liverpool opened on 25 March 2010. However, plans to redevelop parts of the Liverpool docks have been marred by controversy. In December 2016, a newly formed company called North Point Global Ltd. was given the rights to develop part of the docks under the "New Chinatown" banner. Though heavily advertised in Liverpool, Hong Kong and Chinese cities with glossy advertisements and videos, the "New Chinatown" development failed to materialise.
In January 2018, the "Liverpool Echo" and "Asia Times" revealed that the site remained without any construction, that North Point Global and its subcontractor "Bilt" had both declared bankruptcy, and that the small investors (mostly middle-class couples) who had already paid money for the apartments had lost most of their savings. Five similar development projects, mostly targeting individual Chinese and Hong Kong-based citizens, were suspended due to financial misappropriation. Liverpool's historic position as one of the most important trading ports in the world has meant that over time many grand buildings have been constructed in the city as headquarters for shipping firms, insurance companies, banks and other large firms. The great wealth this brought then allowed for the development of grand civic buildings, which were designed to allow the local administrators to 'run the city with pride'. The commercial district is centred on the Castle Street, Dale Street and Old Hall Street areas of the city, with many of the area's roads still following their medieval layout. Having developed over a period of three centuries, the area is regarded as one of the most important architectural locations in the city, as recognised by its inclusion in Liverpool's World Heritage site. The oldest building in the area is the Grade I listed Liverpool Town Hall, which is located at the top of Castle Street and dates from 1754. Often regarded as the city's finest piece of Georgian architecture, the building is known as one of the most extravagantly decorated civic buildings anywhere in Britain. Also on Castle Street is the Grade I listed Bank of England Building, constructed between 1845 and 1848 as one of only three provincial branches of the national bank. Amongst the other buildings in the area are the Tower Buildings, Albion House (the former White Star Line headquarters), the Municipal Buildings and Oriel Chambers, which is considered to be one of the earliest Modernist-style buildings ever built. The area around William Brown Street is referred to as the city's 'Cultural Quarter', owing to the presence of numerous civic buildings, including the William Brown Library, Walker Art Gallery, Picton Reading Rooms and World Museum Liverpool. The area is dominated by neo-classical architecture, of which the most prominent, St George's Hall, is widely regarded as the best example of a neo-classical building anywhere in Europe. A Grade I listed building, it was constructed between 1840 and 1855 to serve a variety of civic functions in the city and its doors are inscribed with "S.P.Q.L." (Latin "senatus populusque Liverpudliensis"), meaning "the senate and people of Liverpool". William Brown Street is also home to numerous public monuments and sculptures, including Wellington's Column and the Steble Fountain. Many others are located around the area, particularly in St John's Gardens, which was specifically developed for this purpose. The William Brown Street area has been likened to a modern recreation of the Roman Forum. While the majority of Liverpool's architecture dates from the mid-18th century onwards, there are several buildings that pre-date this time. One of the oldest surviving buildings is Speke Hall, a Tudor manor house located in the south of the city, which was completed in 1598. The building is one of the few remaining timber-framed Tudor houses left in the north of England and is particularly noted for its Victorian interior, which was added in the mid-19th century.
In addition to Speke Hall, many of the city's other oldest surviving buildings are also former manor houses, including Croxteth Hall and Woolton Hall, which were completed in 1702 and 1704 respectively. The oldest building within the city centre is the Grade I listed Bluecoat Chambers, which was built between 1717 and 1718. Constructed in British Queen Anne style, the building was influenced in part by the work of Christopher Wren and was originally the home of the Bluecoat School (which later moved to a larger site in Wavertree in the south of the city). Since 1908 it has acted as a centre for arts in Liverpool. Liverpool is noted for having two cathedrals, each of which towers over the landscape around it. The Anglican Cathedral, which was constructed between 1904 and 1978, is the largest cathedral in Britain and the fifth largest in the world. Designed and built in Gothic style, it is regarded as one of the greatest buildings to have been constructed during the 20th century and was described by former British Poet Laureate John Betjeman as 'one of the great buildings of the world'. The Roman Catholic Metropolitan Cathedral was constructed between 1962 and 1967 and is known as one of the first cathedrals to break with the traditional longitudinal design. In recent years, many parts of Liverpool's city centre have undergone significant redevelopment and regeneration after years of decline. The largest of these developments has been Liverpool One, which has seen almost £1 billion invested in redevelopment, providing new retail, commercial, residential and leisure space. Around the north of the city centre several new skyscrapers have also been constructed, including the RIBA award-winning Unity Buildings and West Tower, which at 140 m is Liverpool's tallest building. Many redevelopment schemes are also in progress, including Central Village / Circus, the Lime Street gateway, and the highly ambitious Liverpool Waters. There are many other notable buildings in Liverpool, including the art deco former terminal building of Speke Airport, the University of Liverpool's Victoria Building (which provided the inspiration for the term "Red Brick University"), and the Adelphi Hotel, which was in the past considered to be one of the finest hotels anywhere in the world. The English Heritage National Register of Historic Parks describes Merseyside's parks as collectively the "most important in the country". The city of Liverpool has ten listed parks and cemeteries, including two Grade I and five Grade II*, more than any other English city apart from London. Transport in Liverpool is primarily centred on the city's road and rail networks, both of which are extensive and provide links across the United Kingdom. Liverpool has an extensive local public transport network, which is managed by Merseytravel and includes buses, trains and ferries. Additionally, the city also has an international airport and a major port, both of which provide links to locations outside the country. As a major city, Liverpool has direct road links with many other areas within England. To the east, the M62 motorway connects Liverpool with Hull, and along the route provides links to several large cities, including Manchester, Leeds and Bradford. The M62 also provides a connection to both the M6 and M1 motorways, providing indirect links to more distant areas including Birmingham, London, Nottingham, Preston and Sheffield.
To the west of the city, the Kingsway and Queensway Tunnels connect Liverpool with the Wirral Peninsula, including Birkenhead and Wallasey. The A41 road and M53 motorway, which both begin in Birkenhead, link to Cheshire and Shropshire and, via the A55, to North Wales. To the south, Liverpool is connected to Widnes and Warrington via the A562, and across the River Mersey to Runcorn via the Silver Jubilee and Mersey Gateway bridges. Liverpool is served by two separate rail networks. The local rail network is managed and run by Merseyrail and provides links throughout Merseyside and beyond (see Local travel below), while the national network, which is managed by Network Rail, provides Liverpool with connections to major towns and cities across England. The city's primary mainline station is Lime Street station, which is the terminus for several lines into the city, with numerous destinations, including London (in 2 hours 8 minutes with Pendolino trains), Birmingham, Newcastle upon Tyne, Manchester, Preston, Leeds, Scarborough, Sheffield, Nottingham and Norwich. In the south of the city, Liverpool South Parkway provides a connection to the city's airport. The Port of Liverpool is one of Britain's largest ports, providing passenger ferry services across the Irish Sea to Belfast, Dublin and the Isle of Man. Services are provided by several companies, including the Isle of Man Steam Packet Company, P&O Ferries and Stena Line. In 2007, a new cruise terminal was opened in Liverpool, located alongside the Pier Head in the city centre. November 2016 saw the official opening of Liverpool2, an extension to the port that allows post-Panamax vessels to dock in Liverpool. The Leeds and Liverpool Canal has run into Liverpool city centre via the Liverpool Canal Link at Pier Head since 2009. Liverpool Cruise Terminal in the city centre provides long-distance passenger cruises, with Fred. Olsen Cruise Lines' MS Black Watch and Cruise & Maritime Voyages' MS Magellan using the terminal to depart for Iceland, France, Spain and Norway. Liverpool John Lennon Airport, which is located in the south of the city, provides Liverpool with direct air connections across the United Kingdom and Europe. In 2008, the airport handled over 5.3 million passengers and today offers services to 68 destinations, including Berlin, Rome, Milan, Paris, Barcelona and Zürich. The airport is primarily served by low-cost airlines, notably Ryanair and easyJet, although it does provide additional charter services in the summer. Liverpool's local rail network is one of the busiest and most extensive in the country. The network consists of three lines: the Northern Line, which runs to Southport, Ormskirk, Kirkby and Hunts Cross; the Wirral Line, which runs through the Mersey Railway Tunnel and has branches to New Brighton, West Kirby, Chester and Ellesmere Port; and the City Line, which begins at Lime Street, providing links to St Helens, Wigan, Preston, Warrington and Manchester. The network is predominantly electric; electrification of the City Line was completed in 2015. The two lines operated by Merseyrail form the busiest British urban commuter network outside London, with an average of 110,000 passenger journeys per weekday. Services are operated by the Merseyrail franchise and managed by Merseytravel. Local services on the City Line are operated by Northern rather than Merseyrail, although the line itself remains part of the Merseyrail network.
Within the city centre the majority of the network is underground, with four city-centre stations and a network of tunnels. Local bus services within and around Liverpool are managed by Merseytravel and are run by several different companies, including Arriva and Stagecoach. The two principal termini for local buses are Queen Square Bus Station (located near Lime Street railway station) for services north and east of the city, and Liverpool One Bus Station, formerly known as Paradise Street Bus Interchange (located near the Albert Dock), for services to the south and east. Cross-river services to the Wirral use roadside terminus points in Castle Street and Sir Thomas Street. A night bus service also operates on Saturdays, providing services from the city centre across Liverpool and Merseyside. City Sights and City Explorer, by Maghull Coaches, offer tour bus services, and National Express coaches also serve the city. The cross-river ferry service in Liverpool, known as the Mersey Ferry, is managed and operated by Merseytravel, with services operating between the Pier Head in Liverpool and both Woodside in Birkenhead and Seacombe in Wallasey. Services operate at intervals ranging from 20 minutes at peak times to every hour during the middle of the day and during weekends. Despite remaining an important transport link between the city and the Wirral Peninsula, the Mersey Ferry has become an increasingly popular tourist attraction within the city, with daytime River Explorer Cruises providing passengers with an historical overview of the River Mersey and surrounding areas. In May 2014, the CityBike hire scheme was launched in the city. The scheme provides access to over 1,000 bikes stationed at over 140 docking stations across the city. National Cycle Route 56, National Cycle Route 62 and National Cycle Route 810 run through Liverpool. As with other large cities, Liverpool is an important cultural centre within the United Kingdom, incorporating music, performing arts, museums and art galleries, literature and nightlife, amongst others. In 2008, the cultural heritage of the city was celebrated with the city holding the title of European Capital of Culture, during which time a wide range of cultural celebrations took place in the city, including Go Superlambananas! and La Princesse. Liverpool has also held Europe's largest music and poetry event, the Welsh National Eisteddfod, three times (in 1884, 1900 and 1929), despite being in England. Liverpool is internationally known for music and is recognised by "Guinness World Records" as the "World Capital City of Pop". Musicians from the city have produced 56 No. 1 singles, more than any other city in the world. Both the most successful male band and girl group in global music history have contained Liverpudlian members. Liverpool is most famous as the birthplace of the Beatles and during the 1960s was at the forefront of the Beat Music movement, which would eventually lead to the British Invasion. Many notable musicians of the time originated in the city, including Billy J Kramer, Cilla Black, Gerry and the Pacemakers and The Searchers. The influence of musicians from Liverpool, coupled with other cultural exploits of the time, such as the Liverpool poets, prompted American poet Allen Ginsberg to proclaim that the city was "the centre of consciousness of the human universe".
Other musicians from Liverpool include Billy Fury, A Flock of Seagulls, Echo and the Bunnymen, Frankie Goes to Hollywood, Frankie Vaughan, Anathema, Ladytron, The Zutons, Cast, Atomic Kitten and Rebecca Ferguson. The La's 1990 hit single "There She Goes" was described by "Rolling Stone" as a "founding piece of Britpop's foundation". The city is also home to the oldest surviving professional symphony orchestra in the UK, the Royal Liverpool Philharmonic Orchestra, which is based in the Philharmonic Hall. The chief conductor of the orchestra is Vasily Petrenko. Sir Edward Elgar dedicated his Pomp and Circumstance March No. 1 to the Liverpool Orchestral Society, and the piece had its first performance in the city in 1901. Among Liverpool's curiosities, the Austrian émigré Fritz Spiegl is notable: he not only became a world expert on the etymology of Scouse, but composed the music to Z-Cars and the Radio 4 UK Theme. The Mathew Street Festival is an annual street festival that is one of the most important musical events in Liverpool's calendar. It is Europe's largest free music event and takes place every August. Other well-established festivals in the city include Africa Oyé and Brazilica, which are the UK's largest free African and Brazilian music festivals respectively. The dance music festival Creamfields was established by the Liverpool-based Cream clubbing brand, which started life as a weekly event at Nation nightclub. There are numerous music venues located across the city; however, the Echo Arena is by far the largest. Opened in 2008, the 11,000-seat arena hosted the MTV Europe Music Awards the same year and has since played host to world-renowned acts such as Andrea Bocelli, Beyoncé, Elton John, Kanye West, Kasabian, The Killers, Lady Gaga, Oasis, Pink, Rihanna and UB40. Liverpool has more galleries and national museums than any other city in the United Kingdom apart from London. National Museums Liverpool is the only English national collection based wholly outside London. The Tate Liverpool gallery houses the modern art collection of the Tate in the North of England and was, until the opening of Tate Modern, the largest exhibition space dedicated to modern art in the United Kingdom. The FACT centre hosts touring multimedia exhibitions, while the Walker Art Gallery houses one of the most impressive permanent collections of Pre-Raphaelite art in the world. Sudley House contains another major collection of pre-20th-century art. Liverpool University's Victoria Building was re-opened as a public art gallery and museum to display the University's artwork and historical collections, which include the largest display of art by Audubon outside the US. A number of artists have also come from the city, including painter George Stubbs, who was born in Liverpool in 1724. The Liverpool Biennial festival of arts runs from mid-September to late November and comprises three main sections: the International, The Independents and New Contemporaries, although fringe events are timed to coincide. It was during the 2004 festival that Yoko Ono's work "My mother is beautiful" caused widespread public protest when photographs of a naked woman's pubic area were exhibited on the main shopping street. Felicia Hemans (née Browne), a granddaughter of the Venetian consul in the city, was born in Dale Street, Liverpool, in 1793, although she later moved to Wales. Her father's business soon brought the family to Denbighshire in North Wales, where she spent her youth.
They made their home near Abergele and St. Asaph (Flintshire), and it is clear that she came to regard herself as Welsh by adoption, later referring to Wales as "Land of my childhood, my home and my dead". Her first poems, dedicated to the Prince of Wales, were published in Liverpool in 1808, when she was only fourteen, arousing the interest of Percy Bysshe Shelley, who briefly corresponded with her. A number of notable authors have visited Liverpool, including Daniel Defoe, Washington Irving, Thomas De Quincey, Herman Melville, Nathaniel Hawthorne, Charles Dickens, Gerard Manley Hopkins and Hugh Walpole. Daniel Defoe, after visiting the city, described it as "one of the wonders of Britain" in his "Tour through England and Wales". Herman Melville's novel "Redburn" deals with the first seagoing voyage of the 19-year-old Wellingborough Redburn between New York and Liverpool in 1839. Largely autobiographical, the middle sections of the book are set in Liverpool and describe the young merchantman's wanderings and his reflections. Hawthorne was stationed in Liverpool as United States consul between 1853 and 1856. Charles Dickens visited the city on numerous occasions to give public readings. Hopkins served as priest at St Francis Xavier Church, Langdale St., Liverpool, between 1879 and 1881. Although he is not known to have ever visited Liverpool, the psychiatrist Carl Jung famously had a vivid dream of the city, which he analysed in one of his works. Of all the poets who are connected with Liverpool, perhaps the greatest is Constantine P. Cavafy, a twentieth-century Greek cultural icon, although he was born in Alexandria. He came from a wealthy family; his father had business interests in Egypt, London and Liverpool. After his father died in 1870, Cavafy's mother brought him in 1872, at the age of nine, to Liverpool, where he spent part of his childhood being educated. He lived first in Balmoral Road, then, when the family firm crashed, in poorer circumstances in Huskisson Street. In 1876, his family faced financial problems due to the Long Depression of 1873, so by 1877 they had to move back to Alexandria. "Her Benny", a novel telling the tragic story of Liverpool street urchins in the 1870s, written by Methodist preacher Silas K. Hocking, was a best-seller and the first book to sell a million copies in the author's lifetime. The prolific writer of adventure novels Harold Edward Bindloss (1866–1945) was born in Liverpool. The writer, docker and political activist George Garrett was born in Seacombe, on the Wirral Peninsula, in 1896 and was brought up in Liverpool's South End, around Park Road, the son of a fierce Liverpool–Irish Catholic mother and a staunch 'Orange' stevedore father. In the 1920s and 1930s his organisation within the Seamen's Vigilance Committees, unemployed demonstrations, and hunger marches from Liverpool became part of a wider cultural force. He spoke at reconciliation meetings in sectarian Liverpool, and helped found the Unity Theatre in the 1930s as part of the Popular Front against the rise of fascism, particularly its echoes in the Spanish Civil War. Garrett died in 1966. The novelist and playwright James Hanley (1897–1985) was born in Kirkdale, Liverpool, in 1897 (not Dublin, nor in 1901, as he generally implied) to a working-class family. Hanley grew up close to the docks and much of his early writing is about seamen.
"The Furys" (1935) is first in a sequence of five loosely autobiographical novels about working-class life in Liverpool. James Hanley's brother, novelist Gerald Hanley (1916–92) was also born in Liverpool (not County Cork, Ireland, as he claimed). While he published a number of novels he also wrote radio plays for the BBC as well as some film scripts, most notably "The Blue Max" (1966). He was also one of several script writers for a life of Gandhi (1964). Novelist Beryl Bainbridge (1932–2010) was born in Liverpool and raised in nearby Formby. She was primarily known for her works of psychological fiction, often set among the English working classes. Bainbridge won the Whitbread Awards prize for best novel in 1977 and 1996 and was nominated five times for the Booker Prize. "The Times" newspaper named Bainbridge among their list of "The 50 greatest British writers since 1945". J. G. Farrell was born in Liverpool in 1935 but left at the outbreak of war in 1939. A novelist of Irish descent, Farrell gained prominence for his historical fiction, most notably his "Empire Trilogy" ("Troubles", "The Siege of Krishnapur" and "The Singapore Grip"), dealing with the political and human consequences of British colonial rule. However, his career ended when he drowned in Ireland in 1979 at the age of 44. Helen Forrester was the pen name of June Bhatia (née Huband) (1919–2011), who was known for her books about her early childhood in Liverpool during the Great Depression, including "Twopence to Cross the Mersey" (1974), as well as several works of fiction. During the late 1960s the city became well known for the Liverpool poets, who include Roger McGough and the late Adrian Henri. An anthology of poems, "The Mersey Sound", written by Henri, McGough and Brian Patten, has sold well since it was first being published in 1967. Liverpool has produced several noted writers of horror fiction, often set on Merseyside – Ramsey Campbell, Clive Barker and Peter Atkins among them. A collection of Liverpudlian horror fiction, "Spook City" was edited by a Liverpool expatriate, Angus Mackenzie, and introduced by Doug Bradley, also from Liverpool. Bradley is famed for portraying Barker's creation Pinhead in the "Hellraiser" series of films. Liverpool also has a long history of performing arts, reflected in several annual theatre festivals such as the Liverpool Shakespeare Festival, which takes place inside Liverpool Cathedral and in the adjacent historic St James' Gardens every summer; the Everyword Festival of new theatre writing, the only one of its kind in the country; Physical Fest, an international festival of physical theatre; the annual festivals organised by Liverpool John Moores University's drama department and the Liverpool Institute for Performing Arts; and other festivals by the large number of theatres in the city, such as the Empire, Epstein, Everyman, Playhouse, Royal Court, and Unity theatres. Notable actors and actresses from Liverpool include Arthur Askey, Tom Baker, Kim Cattrall, Jodie Comer, Stephen Graham, Rex Harrison, Jason Isaacs, Tina Malone, the McGann brothers (Joe, Mark, Paul, and Stephen), David Morrissey, Elizabeth Morton, Peter Serafinowicz, Elisabeth Sladen, Alison Steadman, and Rita Tushingham. Actors and actresses from elsewhere in the world have strong ties to the city, such as Canadian actor Mike Myers (whose parents were both from Liverpool) and American actress Halle Berry (whose mother was from Liverpool). 
Liverpool has a thriving and varied nightlife, with the majority of the city's late-night bars, pubs, nightclubs, live music venues and comedy clubs being located in a number of distinct districts. A 2011 TripAdvisor poll voted Liverpool as having the best nightlife of any UK city, ahead of Manchester, Leeds and even London. Concert Square, St. Peter's Square and the adjoining Seel, Duke and Hardman Streets are home to some of Liverpool's largest and most famed nightclubs, including Alma de Cuba, Blue Angel, Bumper, Chibuku, Heebie Jeebies, Korova, The Krazyhouse (now Electrik Warehouse), The Magnet, Nation (home of the Cream brand, and of Medication, the UK's largest and longest-running weekly student event) and Popworld, as well as countless other smaller establishments and chain bars. Another popular nightlife destination in the city centre is Mathew Street and the Gay Quarter. Located close to the city's commercial district, this area is famed for The Cavern Club alongside numerous gay bars, including Garlands and G-Bar. The Albert Dock and Lark Lane in Aigburth also contain an abundance of bars and late-night venues. In Liverpool, primary and secondary education is available in various forms supported by the state, including secular, Church of England, Jewish, and Roman Catholic schools. Islamic education is available at primary level, but there is no secondary provision. One of Liverpool's important early schools was The Liverpool Blue Coat School, founded in 1708 as a charitable school. The Liverpool Blue Coat School is the top-performing school in the city, with 100 per cent of pupils achieving five or more A*–C grades at GCSE, giving it the 30th best GCSE results in the country, and an average point score per student of 1087.4 at A/AS level. Other notable schools include Liverpool College, founded in 1840, and Merchant Taylors' School, founded in 1620. Another of Liverpool's notable senior schools is St. Edward's College, situated in the West Derby area of the city. Historic grammar schools, such as the Liverpool Institute High School and Liverpool Collegiate School—both closed in the 1980s—are still remembered as centres of academic excellence. Bellerive Catholic College is the city's top-performing non-selective school, based upon GCSE results in 2007. Liverpool has three universities: the University of Liverpool, Liverpool John Moores University and Liverpool Hope University. Edge Hill University, founded as a teacher-training college in the Edge Hill district of Liverpool, is now located in Ormskirk in South-West Lancashire. Liverpool is also home to the Liverpool Institute for Performing Arts (LIPA). The University of Liverpool was established in 1881 as University College Liverpool. In 1884, it became part of the federal Victoria University. Following a Royal Charter and Act of Parliament in 1903, it became an independent university, the University of Liverpool, with the right to confer its own degrees. It was the first university to offer degrees in biochemistry, architecture, civic design, veterinary science, oceanography and social science. Liverpool Hope University, which was formed through the merger of three colleges, the earliest of which was founded in 1844, gained university status in 2005. It is the only ecumenical university in Europe. It is situated on both sides of Taggart Avenue in Childwall and has a second campus in the city centre (the Cornerstone).
The Liverpool School of Tropical Medicine, founded to address some of the problems created by trade, continues today as a post-graduate school affiliated with the University of Liverpool and houses an anti-venom repository. Liverpool John Moores University was previously a polytechnic and gained university status in 1992. It is named in honour of Sir John Moores, one of the founders of the Littlewoods football pools and retail group, who was a major benefactor. The institution was previously owned and run by Liverpool City Council. It traces its lineage to the Liverpool Mechanics Institute, opened in 1823, making it by this measure England's third-oldest university. The city has one further education college, Liverpool Community College, in the city centre. Liverpool City Council operates Burton Manor, a residential adult education college in nearby Burton, on the Wirral Peninsula. There are two Jewish schools in Liverpool, both belonging to the King David Foundation: King David School, Liverpool, the high school, and the King David Primary School. There is also a King David Kindergarten, within the community centre at Harold House. These schools are all run by the King David Foundation, located in Harold House in Childwall, next door to the Childwall Synagogue. The City of Liverpool is the most successful footballing city in England. Football is the most popular sport in the city, which is home to Everton F.C. and Liverpool F.C. Between them, the clubs have won 28 English First Division titles, 12 FA Cup titles, 10 League Cup titles, 6 European Cup titles, 1 European Cup Winners' Cup title, 3 UEFA Cup titles, and 24 FA Charity Shields. The clubs both compete in the Premier League, of which they are founding members, and contest the Merseyside derby, dubbed the 'friendly derby' despite there having been more sending-offs in this fixture than in any other. However, unlike many other derbies, it is not rare for families in the city to contain supporters of both clubs. Liverpool F.C. is the English and British club with the most European Cup titles, with six, the latest in 2019. Everton F.C. were founded in 1878 and play at Goodison Park, and Liverpool F.C. were founded in 1892 and play at Anfield. Many high-profile players have played for the clubs, including Dixie Dean, Alan Ball, Gary Lineker, Neville Southall and Wayne Rooney for Everton F.C., and Kenny Dalglish, Alan Hansen, Kevin Keegan, Ian Rush and Steven Gerrard for Liverpool F.C. Notable managers of the clubs include Harry Catterick and Howard Kendall of Everton, and Bill Shankly and Bob Paisley of Liverpool. Famous professional footballers from Liverpool include Peter Reid, Gary Ablett, Wayne Rooney, Steven Gerrard, Jamie Carragher and Tony Hibbert. The City of Liverpool is the only one in England to have staged top-division football every season since the formation of the Football League in 1888, and both of the city's clubs play in high-capacity stadiums. Boxing is massively popular in Liverpool. The city has a proud heritage and history in the sport and is home to around 22 amateur boxing clubs, which are responsible for producing many successful boxers, such as Ike Bradley, Alan Rudkin, John Conteh, Andy Holligan, Paul Smith, Shea Neary, Tony Bellew and David Price.
The city also boasts a consistently strong amateur contingent, highlighted by Liverpool being the most represented city on the GB Boxing team, as well as at the 2012 London Olympics. The most notable Liverpool amateur fighters include George Turpin, Tony Willis, Robin Reid and David Price, all of whom have medalled at the Olympic Games. Boxing events are usually hosted at the Echo Arena and Liverpool Olympia within the city, although the former home of Liverpool boxing was the renowned Liverpool Stadium. Aintree is home to the world's most famous steeplechase, the John Smith's Grand National, which takes place annually in early April. The race meeting attracts horse owners and jockeys from around the world to compete on the demanding 30-fence course. There have been many memorable moments in the Grand National's history, for instance the 100/1 outsider Foinavon in 1967, the dominant Red Rum and Ginger McCain of the 1970s, and Mon Mome (100/1), who won the 2009 race. In 2010, the National became the first horse race to be televised in high definition in the UK. The Royal Liverpool Golf Club, situated in the nearby town of Hoylake on the Wirral Peninsula, has hosted The Open Championship on a number of occasions, most recently in 2014. It also hosted the Walker Cup in 1983. Liverpool once contained four greyhound tracks: Seaforth Greyhound Stadium (1933–1965), Breck Park Stadium (1927–1948), Stanley Greyhound Stadium (1927–1961) and White City Stadium (1932–1973). Breck Park also hosted boxing bouts, and both Stanley and Seaforth hosted motorcycle speedway. Wavertree Sports Park is home to the Liverpool Harriers athletics club, which has produced such athletes as Curtis Robb, Allyn Condon (the only British athlete to compete at both the Summer and Winter Olympics) and Katarina Johnson-Thompson. Johnson-Thompson represented Great Britain in the women's heptathlon at the 2012 London Olympics and went on to win gold at the 2019 World Championships, giving Liverpool its first gold medal and breaking the British record in the process. In August 2012, Liverpool gymnast Beth Tweddle won an Olympic bronze medal in the uneven bars at her third Olympic Games, becoming the most decorated British gymnast in history. Park Road Gymnastics Centre provides training to a high level. Liverpool has produced several swimmers who have represented their nation at major championships such as the Olympic Games. The most notable is Steve Parry, who claimed a bronze medal in the 200 m butterfly at the 2004 Athens Olympics. Others include Herbert Nickel Haresnape, Margaret Kelly, Shellagh Ratcliffe and Austin Rawlinson. There is a purpose-built aquatics centre at Wavertree Sports Park, which opened in 2008. The City of Liverpool Swimming Club has been National Speedo League champions in 8 of the last 11 years. The city is the hub of the Liverpool and District Cricket Competition, an ECB Premier League. Sefton Park and Liverpool are the league's founder members based in the city, with the Wavertree, Alder and Old Xaverians clubs having joined the league more recently. Liverpool plays host to Lancashire County Cricket Club as an outground most seasons, including six of eight home County Championship games during Lancashire's 2011 title-winning campaign while Old Trafford was being refurbished.
Since 2014, Liverpool Cricket Club has played host to the annual Tradition-ICAP Liverpool International tennis tournament, which has attracted tennis stars such as Novak Djokovic, David Ferrer, Mardy Fish, Laura Robson and Caroline Wozniacki. Previously the tournament had been held at Calderstones Park, situated in Allerton in the south of the city. The Liverpool Tennis Development Programme at Wavertree Tennis Centre is one of the largest in the UK. Professional basketball came to the city in 2007 with the entry of Everton Tigers, now known as Mersey Tigers, into the elite British Basketball League. The club was originally associated with Everton F.C. and was part of the "Toxteth Tigers" youth development programme, which reached over 1,500 young people every year. The Tigers began play in Britain's top league in the 2007–08 season, playing at the Greenbank Sports Academy before moving into the newly completed Echo Arena during that season. After the 2009–10 season, Everton F.C. withdrew funding from the Tigers, who then changed their name to Mersey Tigers. Their closest professional rivals are the Cheshire Jets, based in Chester. Liverpool is one of three cities which still host the traditional sport of British baseball, and it hosts the annual England–Wales international match every two years, alternating with Cardiff and Newport. Liverpool Trojans are the oldest existing baseball club in the UK. The 2014 Tour of Britain cycle race began in Liverpool on 7 September, utilising a city centre circuit for its opening stage. The Tour comprised nine stages and finished in London on 14 September. A 2016 study of UK fitness centres found that, of the top 20 UK urban areas, Liverpool had the highest number of leisure and sports centres per capita, with 4.3 centres per 100,000 of the city population. Liverpool is home to the Premier League football clubs Everton and Liverpool F.C. Liverpool have played at Anfield since 1892, when the club was formed to occupy the stadium following Everton's departure after a dispute with their landlord. Liverpool are still playing there 125 years later, although the ground has been completely rebuilt since the 1970s. The Spion Kop (rebuilt as an all-seater stand in 1994–95) was the most famous part of the ground, gaining cult status across the world due to the songs and celebrations of the many fans who packed onto its terraces. Anfield is classified as a 4-star UEFA Elite Stadium, with capacity for 54,000 spectators in comfort, and is a distinctive landmark in an area filled with smaller and older buildings. The club also has a multimillion-pound youth training facility called The Academy. After leaving Anfield in 1892, Everton moved to Goodison Park on the opposite side of Stanley Park. Goodison Park was the first major football stadium built in England: Molineux (Wolves' ground) had been opened three years earlier but was still relatively undeveloped, and St. James's Park, Newcastle, opened in 1892, was little more than a field. Only Scotland had more advanced grounds; Rangers opened Ibrox in 1887, while Celtic Park was officially inaugurated at the same time as Goodison Park. Everton performed a miraculous transformation at Mere Green, spending up to £3,000 on laying out the ground and erecting stands on three sides. For £552, Mr. Barton prepared the land at 4½d a square yard. Kelly Brothers of Walton built two uncovered stands, each for 4,000 people, and a covered stand seating 3,000, at a total cost of £1,460.
Outside, hoardings cost a further £150, gates and sheds cost £132 10s, and 12 turnstiles added another £7 15s to the bill. The ground was immediately renamed Goodison Park and proudly opened on 24 August 1892 by Lord Kinnaird and Frederick Wall of the FA. Instead of a match, however, the 12,000-strong crowd saw a short athletics meeting followed by a selection of music and a fireworks display. Everton's first game there was on 2 September 1892, when they beat Bolton 4–2. The ground now has a capacity of just under 40,000, all seated; the last expansion took place in 1994, when a new goal-end stand made the stadium all-seater. The Main Stand dates back to the 1970s, while the other two stands are refurbished pre-Second World War structures. Everton are currently looking to relocate. The club first raised the subject in 1996; in 2003 it was forced to scrap plans for a 55,000-seat stadium at King's Dock for financial reasons, and the subsequent Destination Kirkby proposal would have moved the club to Kirkby, just beyond Liverpool's council boundary. The latest plan, currently being drawn up, is to move to nearby Bramley-Moore Dock on Liverpool's waterfront. Made in Liverpool is a local television station serving the Liverpool City Region and surrounding areas. The station is owned and operated by Made Television Ltd and forms part of a group of eight local TV stations. It broadcasts from studios and offices in Liverpool. The ITV region which covers Liverpool is ITV Granada. In 2006, Granada opened a new newsroom in the Royal Liver Building; its regional news broadcasts had been produced at the Albert Dock News Centre during the 1980s and 1990s. The BBC also opened a new newsroom, on Hanover Street, in 2006. ITV's daily magazine programme "This Morning" was broadcast from studios at Albert Dock until 1996, when production moved to London. Granada's short-lived shopping channel "Shop!" was also produced in Liverpool until it was cancelled in 2002. Liverpool is the home of the TV production company Lime Pictures, formerly Mersey Television, which produced the now-defunct soap operas "Brookside" and "Grange Hill". It also produces the soap opera "Hollyoaks", which was formerly filmed in Chester and began on Channel 4 in 1995. All three series were or are largely filmed in the Childwall area of Liverpool. The city has one daily newspaper, the "Echo", published by the Trinity Mirror group; "The Liverpool Daily Post" was also published until 2013. The UK's first online-only weekly newspaper, "Southport Reporter" ("Southport and Mersey Reporter"), is among the many other news outlets that cover the city. Radio stations include BBC Radio Merseyside, Capital Liverpool, Radio City, Greatest Hits Liverpool and Radio City Talk. The last three are located in Radio City Tower, which, along with the two cathedrals, dominates the city's skyline. The independent media organisation Indymedia also covers Liverpool, while "Nerve" magazine publishes articles and reviews of cultural events. Liverpool has also featured in films; see the List of films set in Liverpool for some of them. In films, the city has "doubled" for London, Paris, New York, Chicago, Moscow, Dublin, Venice and Berlin. Liverpool is twinned with a number of cities and has friendship links (without formal constitution) with several others. The first overseas consulate of the United States was opened in Liverpool in 1790, and it remained operational for almost two centuries.
Today, a large number of consulates are located in the city, serving Chile, Denmark, Estonia, Finland, France, Germany, Hungary, Iceland, Italy, the Netherlands, Norway, Romania, Sweden and Thailand; the Tunisian and Ivorian consulates are located in the neighbouring Metropolitan Borough of Sefton. A number of people and military units have received the Freedom of the City of Liverpool.
https://en.wikipedia.org/wiki?curid=18081
Long jump The long jump is a track and field event in which athletes combine speed, strength and agility in an attempt to leap as far as possible from a take-off point. Together with the triple jump, it is one of the two events that measure jumping for distance; as a group these are referred to as the "horizontal jumps". The event has a history in the Ancient Olympic Games and has been a modern Olympic event for men since the first Games in 1896 and for women since 1948. At the elite level, competitors run down a runway (usually coated with the same rubberized surface as running tracks: crumb or vulcanized rubber, known generally as an all-weather track) and jump as far as they can from a wooden board, 20 cm (8 inches) wide, built flush with the runway, into a pit filled with finely ground gravel or sand. If the competitor starts the leap with any part of the foot past the foul line, the jump is declared a foul and no distance is recorded. A layer of plasticine is placed immediately after the board to detect this occurrence, and an official (similar to a referee) also watches the jump and makes the determination. The competitor can initiate the jump from any point behind the foul line; however, the distance is always measured perpendicular from the foul line to the nearest break in the sand caused by any part of the body or uniform. It is therefore in the competitor's interest to take off as close to the foul line as possible. Competitors are allowed to place two marks along the side of the runway to help them jump accurately. At lesser meets and facilities, there may be no plasticine, the runway might be a different surface, or jumpers may initiate their jump from a painted or taped mark on the runway. At a smaller meet, the number of attempts might also be limited to three or four. Each competitor has a set number of attempts: normally three trial jumps, with three additional jumps awarded to the best eight or nine competitors (depending on the number of lanes on the track at that facility, so that the event is comparable to track events). All legal marks are recorded, but only the longest legal jump counts towards the results. The competitor with the longest legal jump (from either the trial or final rounds) at the end of competition is declared the winner. In the event of an exact tie, the next-best jumps of the tied competitors are compared to determine place. In a large, multi-day elite competition (such as the Olympics or World Championships), a set number of competitors advance to the final round, determined in advance by the meet management. A set of three trial-round jumps is held to select those finalists. It is standard practice to allow, at a minimum, one more competitor than the number of scoring positions to return to the final round, though 12 plus ties and automatic qualifying distances are also potential factors. (For specific rules and regulations in United States Track & Field, see Rule 185.)
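The ranking and tie-break rules described above amount to a simple "countback" procedure: athletes are ordered by their best legal mark, and exact ties are broken by comparing successive next-best marks. The following minimal Python sketch illustrates this under the assumption that each athlete's legal distances are available as a plain list (fouls simply omitted); the function and data are hypothetical, not part of any official scoring system.

def rank_competitors(results):
    # results: dict mapping athlete name -> list of legal jump distances
    # in metres; foul attempts are simply not recorded.
    def countback_key(name):
        # Each athlete's marks sorted longest-first; Python compares these
        # lists element by element, which is exactly the countback rule.
        return sorted(results[name], reverse=True)
    return sorted(results, key=countback_key, reverse=True)

jumps = {
    "Athlete A": [7.92, 7.75, 8.01],
    "Athlete B": [8.01, 7.95, 7.60],  # ties A at 8.01; 7.95 beats 7.92
    "Athlete C": [7.95, 7.90, 7.88],
}
print(rank_competitors(jumps))  # ['Athlete B', 'Athlete A', 'Athlete C']

Lists of different lengths also compare correctly here: an athlete with no further legal mark loses the countback to one who has one, which matches the rule.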
https://en.wikipedia.org/wiki?curid=18084
Lonsdaleite Lonsdaleite (named in honour of Kathleen Lonsdale), also called hexagonal diamond in reference to its crystal structure, is an allotrope of carbon with a hexagonal lattice. In nature, it forms when meteorites containing graphite strike the Earth: the great heat and stress of the impact transforms the graphite into diamond while retaining graphite's hexagonal crystal lattice. Lonsdaleite was first identified in 1967 from the Canyon Diablo meteorite, where it occurs as microscopic crystals associated with diamond. Hexagonal diamond has also been synthesized in the laboratory (1966 or earlier; published in 1967) by compressing and heating graphite either in a static press or using explosives. It has also been produced by chemical vapor deposition, and by the thermal decomposition of a polymer, poly(hydridocarbyne), at atmospheric pressure under an argon atmosphere. It is translucent and brownish-yellow, with an index of refraction of 2.40 to 2.41 and a specific gravity of 3.2 to 3.3. Its hardness is theoretically superior to that of cubic diamond (by up to 58%), according to computational simulations, but natural specimens have exhibited somewhat lower hardness across a large range of values (from 7 to 8 on the Mohs scale). This is speculated to be because the samples were riddled with lattice defects and impurities. Lonsdaleite's status as a discrete material has been questioned, since specimens under crystallographic inspection showed not a bulk hexagonal lattice but cubic diamond dominated by structural defects that include hexagonal sequences. A quantitative analysis of the X-ray diffraction data of lonsdaleite has shown that about equal amounts of hexagonal and cubic stacking sequences are present; consequently, it has been suggested that "stacking disordered diamond" is the most accurate structural description of lonsdaleite. On the other hand, recent shock experiments with in situ X-ray diffraction show strong evidence for the creation of relatively pure lonsdaleite in dynamic high-pressure environments such as meteorite impacts. In the traditional picture, lonsdaleite has a hexagonal unit cell, related to the diamond unit cell in the same way that the hexagonal and cubic close-packed crystal systems are related. The diamond structure can be considered to be made up of interlocking rings of six carbon atoms in the chair conformation; in lonsdaleite, some rings are in the boat conformation instead. At nanoscale dimensions, cubic diamond is represented by diamondoids, while hexagonal diamond is represented by wurtzoids. In diamond, all the carbon-to-carbon bonds, both within a layer of rings and between them, are in the staggered conformation, making all four cubic-diagonal directions equivalent; in lonsdaleite, the bonds between layers are in the eclipsed conformation, which defines the axis of hexagonal symmetry. Lonsdaleite is simulated to be 58% harder than diamond on the <100> face and to resist indentation pressures of 152 GPa, whereas diamond would break at 97 GPa; even this is exceeded by type IIa diamond's <111> tip hardness of 162 GPa. Lonsdaleite occurs as microscopic crystals associated with diamond in several meteorites: Canyon Diablo, Kenna, and Allan Hills 77283. It also occurs naturally in non-bolide diamond placer deposits in the Sakha Republic.
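Identifications of lonsdaleite in diffraction data, such as those discussed below, rest on matching measured d-spacings against those predicted for a hexagonal carbon lattice. As a rough illustration, the following Python sketch evaluates the standard hexagonal-lattice relation 1/d^2 = (4/3)(h^2 + hk + k^2)/a^2 + l^2/c^2; the lattice parameters used (a of about 2.51 Å, c of about 4.12 Å) are literature values assumed here for illustration, not figures given in this article.

import math

def hexagonal_d_spacing(h, k, l, a, c):
    # Standard d-spacing formula for a hexagonal lattice:
    # 1/d^2 = (4/3)(h^2 + h*k + k^2)/a^2 + l^2/c^2
    inv_d_sq = (4.0 / 3.0) * (h * h + h * k + k * k) / (a * a) + (l * l) / (c * c)
    return 1.0 / math.sqrt(inv_d_sq)

A_LATTICE, C_LATTICE = 2.51, 4.12  # angstroms; assumed literature values
for hkl in [(1, 0, 0), (0, 0, 2), (1, 0, 1)]:
    print(hkl, round(hexagonal_d_spacing(*hkl, A_LATTICE, C_LATTICE), 3), "angstroms")

Under these assumed parameters, the sketch yields spacings near 2.17, 2.06 and 1.92 Å, the kind of values that would be compared against cubic diamond's reflections when testing for hexagonal stacking.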
Material with d-spacings consistent with lonsdaleite has been found in sediments of highly uncertain date at Lake Cuitzeo, in the state of Guanajuato, Mexico, by proponents of the controversial Younger Dryas impact hypothesis. Its claimed presence in local peat deposits has also been cited as evidence for the Tunguska event having been caused by a meteorite rather than by a cometary fragment.
https://en.wikipedia.org/wiki?curid=18087
Labrador duck The Labrador duck ("Camptorhynchus labradorius") was a North American bird; it has the distinction of being the first endemic North American bird species to become extinct after the Columbian Exchange, with the last known sighting occurring in 1878 in Elmira, New York. It was already a rare duck before European settlers arrived, and as a result of its rarity, information on the Labrador duck is not abundant, although some details, such as its habitat, characteristics, dietary habits and the reasons behind its extinction, are known. There are 55 specimens of the Labrador duck preserved in museum collections worldwide. The Labrador duck is considered a sea duck. A basic difference in the shape of the process of metacarpal I divides the sea ducks into two groups. The position of the nutrient foramen of the tarsometatarsus also separates the two groups: in the first group, the foramen is lateral to the long axis of the lateral groove of the hypotarsus; in the second, the foramen is on or medial to the axis of that groove. The Labrador duck was also known as the pied duck and skunk duck, the former being a vernacular name that it shared with the surf scoter and the common goldeneye (and even the American oystercatcher), a fact that has led to difficulties in interpreting old records of these species. Both names refer to the male's striking black-and-white piebald colouration. Yet another common name was sand shoal duck, referring to its habit of feeding in shallow water. The closest evolutionary relatives of the Labrador duck are apparently the scoters ("Melanitta"). However, a mitogenomic study of the placement of the Labrador duck found the species to be closely related to the Steller's eider. The female plumage was grey; although weakly patterned, the pattern was scoter-like. The male's plumage was black and white in an eider-like pattern, but the wings were entirely white except for the primaries. The trachea of the male was scoter-like: an expansion of the tracheal tube occurred at the anterior end, and two enlargements (as opposed to the single enlargement seen in scoters) were near the middle of the tube. The bulla was bony and round, puffing out from the left side. This asymmetrical and osseous bulla was unlike that of scoters; it was similar to the bullae of eiders and the harlequin duck. The Labrador duck has been considered the most enigmatic of all North American birds. It had an oblong head with small, beady eyes, and its bill was almost as long as its head. The body was short and depressed, with short, strong feet set far back on the body. The feathers were small and the tail was short and rounded. The Labrador duck belongs to a monotypic genus. The Labrador duck migrated annually, wintering off the coasts of New Jersey and New England in the eastern United States, where it favoured southern sandy coasts, sheltered bays, harbors and inlets, and breeding in Labrador and northern Quebec in the summer. John James Audubon's son reported seeing a nest belonging to the species in Labrador, and some believe that it may have laid its eggs on islands in the Gulf of Saint Lawrence. The breeding biology of the Labrador duck is otherwise largely unknown. The Labrador duck fed on small molluscs, and some fishermen reported catching it on fishing lines baited with mussels. The structure of the bill was highly modified from that of most ducks, having a wide, flattened tip with numerous lamellae inside.
In this way, it is considered an ecological counterpart of the North Pacific/North Asian Steller's eider. The beak was also particularly soft and may have been used to probe through sediment for food. Another, completely unrelated, duck with similar (but even more specialized) bill morphology is the Australian pink-eared duck, which feeds largely on plankton but also on mollusks; in outward appearance, however, the condition in the Labrador duck probably most resembled that of the blue duck. Its peculiar bill suggests it ate shellfish and crustaceans from silt and shallow water, and it may also have survived by eating snails. The Labrador duck is thought to have always been rare, but between 1850 and 1870 populations waned further. Its extinction (sometime after 1878) is still not fully explained. Although hunted for food, this duck was considered to taste bad, rotted quickly, and fetched a low price; consequently, it was not much sought by hunters. However, its eggs may have been overharvested, and it may have been subject to depredations by the feather trade in its breeding area as well. Another possible factor in the bird's extinction was the decline in mussels and other shellfish on which it is believed to have fed in its winter quarters, due to the growth of population and industry on the Eastern Seaboard. Although all sea ducks readily feed on shallow-water molluscs, no Western Atlantic bird species seems to have been as dependent on such food as the Labrador duck. Yet another theory holds that the huge increase of human influence on coastal ecosystems in North America drove the birds from their niches to seek other habitat; as these ducks were the only birds whose range was limited to the American coast of the North Atlantic, changing niches would have been a difficult task. The Labrador duck became extinct in the late 19th century, disappearing soon after the first wave of European settlement.
https://en.wikipedia.org/wiki?curid=18088